NIPS
Title: Generalizing GANs: A Turing Perspective

Abstract
Recently, a new class of machine learning algorithms has emerged, in which models and discriminators are generated in a competitive setting. The most prominent example is Generative Adversarial Networks (GANs). In this paper we examine how these algorithms relate to the Turing test, and derive what—from a Turing perspective—can be considered their defining features. Based on these features, we outline directions for generalizing GANs—resulting in the family of algorithms referred to as Turing Learning. One such direction is to allow the discriminators to interact with the processes from which the data samples are obtained, making them “interrogators”, as in the Turing test. We validate this idea using two case studies. In the first case study, a computer infers the behavior of an agent while controlling its environment. In the second case study, a robot infers its own sensor configuration while controlling its movements. The results confirm that allowing discriminators to interrogate improves the accuracy of models.

1 Introduction
Generative Adversarial Networks (GANs) [1] are a framework for inferring generative models from training data. They place two neural networks—a model and a discriminator—in a competitive setting. The discriminator’s objective is to correctly label samples from either the model or the training data. The model’s objective is to deceive the discriminator, in other words, to produce samples that the discriminator categorizes as training data. The networks are trained using a gradient-based optimization algorithm. Since their inception in 2014, GANs have been applied in a range of contexts [2, 3], but most prominently for the generation of photo-realistic images [1, 4].

In this paper we analyze the striking similarities between GANs and the Turing test [5]. The Turing test probes a machine’s ability to display behavior that, to an interrogator, is indistinguishable from that of a human. Developing machines that pass the Turing test could be considered a canonical problem in computer science [6]. More generally, the problem is that of imitating (and hence inferring) the structure and/or behavior of any system, such as an organism, a device, a computer program, or a process.

The idea of inferring models in a competitive setting (model versus discriminator) was first proposed in [7]. That paper considered the problem of inferring the behavior of an agent in a simple environment. The behavior was deterministic, simplifying the identification task. In a subsequent work [8], the method, named Turing Learning, was used to infer the behavioral rules of a swarm of memoryless robots. The robots’ movements were tracked using an external camera system, providing the training data. Additional robots executed the rules defined by the models.
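To make the competitive setup described above concrete, here is a minimal sketch of a GAN training loop in PyTorch. The toy data, network sizes, and optimizer settings are illustrative assumptions, not details from [1]:

```python
import torch
import torch.nn as nn

# Illustrative toy setup: 2-D "training data" from a fixed Gaussian.
def sample_training_data(n):
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))          # the "model" (generator)
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # labels genuine vs. counterfeit

opt_m = torch.optim.Adam(model.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    genuine = sample_training_data(64)
    counterfeit = model(torch.randn(64, 8))

    # Discriminator objective: label genuine data 1, model data 0.
    d_loss = bce(discriminator(genuine), torch.ones(64, 1)) + \
             bce(discriminator(counterfeit.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Model objective: make the discriminator label its samples as genuine.
    m_loss = bce(discriminator(counterfeit), torch.ones(64, 1))
    opt_m.zero_grad(); m_loss.backward(); opt_m.step()
```

Note that in this standard setup the discriminator only ever receives finished samples; it has no way to influence how they are produced—the asymmetry with the Turing test that this paper addresses.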
The contributions of this paper are:
• to examine the defining features of GANs (and variants)—assuming a Turing perspective;
• to outline directions for generalizing GANs, in particular, to encourage alternative implementations and novel applications, for example, ones involving physical systems;
• to show, using two case studies, that more accurate models can be obtained if the discriminators are allowed to interact with the processes from which data samples are obtained (as the interrogators do in the Turing test).¹

2 A Turing Perspective
In 1950, Turing proposed an imitation game [5] consisting of three players A, B and C. Figure 1 shows a schematic of this game. Player C, also referred to as the interrogator, is unable to see the other players. However, the interrogator can pose questions to and receive answers from them. Answers from the same player are consistently labelled (without revealing the player’s identity, A or B). At the end of the game, the interrogator has to guess which label belongs to which player. There are two variants of the game, and we focus on the one where player A is a machine, while player B is human (the interrogator is always human). This variant, depicted in Figure 1, is commonly referred to as the Turing test [9, 10]. To pass the test, the machine has to produce answers that the interrogator believes to originate from a human. If a machine passed this test, it would be considered intelligent.

For GANs (and variants), player C, the interrogator, is no longer human, but rather a computer program that learns to discriminate between information originating from players A and B. Player A is a computer program that learns to trick the interrogator. Player B could be any system one wishes to imitate, including humans.

2.1 Defining Features of GANs
Assuming a Turing perspective, we consider the following as the defining features of GANs (and variants):
• a training agent, T, providing genuine data samples (the training data);
• a model agent, M, providing counterfeit data samples;
• a discriminator agent, D, labelling data samples as either genuine or counterfeit;
• a process by which D observes or interacts with M and T;
• D and M are being optimized:
  – D is rewarded for labelling data samples of T as genuine;
  – D is rewarded for labelling data samples of M as counterfeit;
  – M is rewarded for misleading D (into labelling its data samples as genuine).

¹Different from [7], we consider substantially more complex case studies, where the discriminators are required to genuinely interact with the systems, as a pre-determined sequence of interventions would be unlikely to reveal all the observable behavioral features.

It should be noted that in the Turing test there is a bi-directional exchange of information between player C and either player A or B. In GANs, however, during any particular “game”, data flows only in one direction: the discriminator agent receives data samples, but is unable to influence the agent at the origin during the sampling process. In the case studies presented in this paper, this limitation is overcome, and it is shown that this can lead to improved model accuracy. This, of course, does not imply that active discriminators are beneficial for every problem domain.

2.2 Implementation Options of (Generalized) GANs
GANs and their generalizations—that is, algorithms that possess the aforementioned defining features—are instances of Turing Learning [8]. A minimal skeleton of such a generalized setup is sketched below.
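The following is a representation-agnostic sketch of one Turing Learning evaluation round built from the defining features above. The `trial` interface and the integer scoring are our own illustration of the reward structure, not pseudocode from [8]:

```python
def turing_learning_round(models, discriminators, training_agent, trial):
    """One evaluation round. `trial(d, agent)` lets discriminator d observe
    or interrogate an agent and return its label ("genuine"/"counterfeit");
    how the trial works (static samples vs. interaction) is left open."""
    d_score = {i: 0 for i in range(len(discriminators))}
    m_score = {j: 0 for j in range(len(models))}
    for i, d in enumerate(discriminators):
        # Interrogate the training agent T: reward D for the label "genuine".
        if trial(d, training_agent) == "genuine":
            d_score[i] += 1
        for j, m in enumerate(models):
            # Interrogate model agent M.
            if trial(d, m) == "counterfeit":
                d_score[i] += 1   # D correctly flags counterfeit data
            else:
                m_score[j] += 1   # M misled D into the label "genuine"
    return d_score, m_score
```

Any optimizer compatible with the solution representations (gradient-based or population-based) can then consume these scores, which is exactly the freedom the implementation options below spell out.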
The Turing Learning formulation removes restrictions of the original GAN formulation that are, from a Turing perspective, unnecessary—for example, the need for models and discriminators to be represented as neural networks, or the need to optimize these networks using gradient descent. As a result, the Turing Learning formulation is very general and applicable to a wide range of problems (e.g., using models with discrete, continuous or mixed representations).

In the following, we present the aspects of implementations that are not considered defining features, but rather implementation options. They allow Turing Learning to be tailored, for example, by using the most suitable model representation and optimization algorithm for the given problem domain. Moreover, users can choose implementation options they are familiar with, making the overall framework² more accessible.

• Training data. The training data could take any form. It could be artificial (e.g., audio, visual, or textual data in a computer), or physical (e.g., a geological sample, engine, painting or human being).

• Model representation. The model could take any form. In GANs [1], it takes the form of a neural network that generates data when provided with a random input. Other representations include vectors, graphs, and computer programs. In any case, the representation should be expressive enough to allow a model to produce data with the same distribution as the training data. The associated process could involve physical objects (e.g., robots [8]). If the training data originates from physical objects, but the model data originates from simulation, special attention is needed to avoid the so-called reality gap [11]. Any difference caused not by the model but rather by the process used to collect the data (e.g., tracking equipment) may be detected by the discriminators, which could render model inference impossible.

• Discriminator representation. The discriminator could take any form. Its representation should be expressive enough to distinguish between genuine and counterfeit data samples. These samples could be artificial or physical. For example, a discriminator could be networked to an experimental platform, observing and manipulating some physical objects or organisms.

• Optimization algorithms. The optimization algorithms could take any form as long as they are compatible with the solution representations. They could use a single candidate solution or a population of candidate solutions [8, 12]. In the context of GANs, gradient-based optimization algorithms are widely applied [13]. These algorithms, however, require the objective function to be differentiable and (ideally) unimodal. A wide range of metaheuristic algorithms [14] could be explored for domains with more complex objective functions. For example, if the model were represented as a computer program, genetic programming algorithms could be used.

²For an algorithmic description of Turing Learning, see [8].

• Coupling mechanism between the model and discriminator optimizers. The optimization processes for the model and discriminator solutions depend on each other. Hence they may require careful synchronization [1]. Moreover, if multiple models and/or multiple discriminators are used, choices have to be made about which pairs of solutions to evaluate. Elaborate evaluation schemes may take into account the performance of the opponents in other evaluations (e.g., using niching techniques).
Synchronization challenges include those reported for coevolutionary systems.³ In particular, due to the so-called Red Queen Effect, the absolute quality of solutions in a population may increase while the quality of solutions relative to the other population decreases, or vice versa [18]. Cycling [20] refers to the phenomenon that solutions that have been lost may be rediscovered in later generations; a method for overcoming this problem is to retain promising solutions in an archive—the “hall of fame” [21]. Disengagement can occur when one population (e.g., the discriminators) outperforms the other, making it hard to reveal differences among the solutions. Methods for addressing disengagement include “resource sharing” [22] and “reducing virulence” [20].

• Termination criterion. Identifying a suitable criterion for terminating the optimization process can be challenging, as performance is defined in relative rather than absolute terms. For example, a model that is found to produce genuine data by each of a population of discriminators may still not be useful (the discriminators may simply have performed poorly). In principle, however, any criterion can be applied (e.g., convergence, a fixed time limit, etc.).

³Coevolutionary algorithms have been studied in a range of contexts [15, 16, 17], including system identification [18, 19], though these works differ from GANs and Turing Learning in that no discriminators evolve; rather, pre-defined metrics gauge how similar the model and training data are. For some system identification problems, the use of such pre-defined metrics can result in poor model accuracy, as shown in [8].

3 Case Study 1: Inferring Stochastic Behavioral Processes Through Interaction

3.1 Problem Formulation
This case study is inspired by ethology—the study of animal behavior. Animals are sophisticated agents whose actions depend on both their internal state and the stimuli present in their environment. Additionally, their behavior can have a stochastic component. In the following, we show how Turing Learning can infer the behavior of a simple agent that captures the aforementioned properties.

The agent’s behavior is governed by the probabilistic finite-state machine (PFSM)⁴ shown in Figure 2. It has n states, and it is assumed that each state leads to some observable behavioral feature, v ∈ R, hereafter referred to as the agent’s velocity. The agent responds to a stimulus that can take two levels, low (L) or high (H). The agent starts in state 1. If the stimulus is L, it remains in state 1 with certainty. If the stimulus is H, it transitions to state 2 with probability p1, and remains in state 1 otherwise. In other words, on average, it transitions to state 2 after 1/p1 steps. In state k = 2, 3, . . . , n − 1, the behavior is as follows. If the stimulus is identical to the one that brought the agent into state k from state k − 1, the state reverts to k − 1 with probability p2, and remains at k otherwise. If the stimulus is different from the one that brought the agent into state k from state k − 1, the state progresses to k + 1 with probability p1, and remains at k otherwise. In state n, the only difference is that if the stimulus is different from the one that brought about state n, the agent remains in state n with certainty (as there is no next state to progress to). A simulation sketch of this PFSM is given below.

⁴PFSMs generalize the concept of Markov chains [23, 24].
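The following is a minimal simulation of the PFSM just described. The state-update rules follow the text above; the function names are ours, the entering stimuli alternate by construction (H enters state 2, L enters state 3, H enters state 4, ...), and the velocity of state 1—not among the inferred parameters in Eq. (1) below—is assumed to be 0:

```python
import random

def entry_stimulus(state):
    """Stimulus that brings the agent into `state` from below; by the rules
    above these alternate: H for state 2, L for state 3, H for state 4, ..."""
    if state == 1:
        return None
    return "H" if state % 2 == 0 else "L"

def pfsm_step(state, stimulus, p1, p2, n):
    """One update of the n-state PFSM."""
    if state == 1:
        # L: stay with certainty; H: advance with probability p1.
        return 2 if stimulus == "H" and random.random() < p1 else 1
    if stimulus == entry_stimulus(state):
        # Same stimulus that brought the agent here: revert w.p. p2.
        return state - 1 if random.random() < p2 else state
    # Different stimulus: progress w.p. p1 (except in the top state n).
    if state < n and random.random() < p1:
        return state + 1
    return state

def velocity(state, v):
    """Observable feature; v = (v2, ..., vn). State 1 assumed to yield 0."""
    return 0.0 if state == 1 else v[state - 2]
```

With p1 close to 0 and p2 = 1, one can verify experimentally that a passively toggled stimulus almost never keeps the agent in a high state long enough to observe its velocity—which is the point made next.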
By choosing p1 close to 0 and p2 = 1, we force the need for interaction if the higher states are to be observed for a meaningful amount of time. This is because, once a transition to a higher state happens, the interrogator must immediately toggle the stimulus to prevent the agent from regressing to the lower state.

3.2 Turing Learning Implementation
We implement Turing Learning for this problem as follows:

• Training data. To obtain the training data, the discriminator interacts with the PFSM shown in Figure 2. The number of states is set to four (n = 4). The parameters used to generate the (genuine) data samples are given by
$$q = (p_1^*, p_2^*, v_2^*, v_3^*, v_4^*) = (0.1, 1.0, 0.2, 0.4, 0.6). \quad (1)$$

• Model representation. It is assumed that the structure of the PFSM is known, while the parameters, q, are to be inferred. All parameters can vary in R. To interpret p1 and p2 as probabilities, they are mapped to the closest point in [0, 1] if outside this interval. The model data is generated analogously to the training data.

• Discriminator representation. The discriminator is implemented as an Elman neural network [25] with 1 input neuron, 5 hidden neurons, and 2 output neurons. At each time step t, the observable feature (the agent’s velocity v) is fed into the input neuron.⁵ After updating the neural network, the output of one output neuron is used to determine the stimulus at time step t + 1, L or H. At the end of a trial (100 time steps), the output of the other output neuron is used to determine whether the discriminator believes the agent under investigation to be the training agent (T) or a model agent (M). A trial of this kind is sketched after this list.

• Optimization algorithms. We use a standard (µ + λ) evolution strategy with self-adapting mutation strengths [26] for both the model and the discriminator populations, with µ = λ = 50 in both cases. The populations are initialized at random. The parameter values of the optimization algorithm are set as described in [26].

• Coupling mechanism between the model and discriminator optimizers. The coupling comes from the evaluation process, which in turn affects the population selection. Each of the 100 candidate discriminators is evaluated once with each of the 100 models, as well as an additional 100 times with the training agent. It receives a point every time it correctly labels the data as either genuine or counterfeit. At the same time, each model receives a point each time a discriminator mistakenly judges its data as genuine.

• Termination criterion. The optimization process is stopped after 1000 generations.

⁵To emulate a noisy tracking process, the actual velocity value is multiplied by a number drawn uniformly at random from the range (0.95, 1.05).
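Pairing the PFSM simulation above with a small Elman-style recurrent discriminator gives the following sketch of one interactive trial. The weight shapes, omitted biases, initial stimulus, and output thresholds are illustrative assumptions, not details from the paper:

```python
import numpy as np

def elman_trial(weights, agent_step, observe, steps=100, hidden=5):
    """One interactive trial: the discriminator reads the agent's (noisy)
    velocity and sets the next stimulus; after `steps` updates it votes.
    `weights` = (W_in, W_rec, W_out): shapes (hidden,), (hidden, hidden),
    (2, hidden) for 1 input, `hidden` units, 2 outputs (control, judgment)."""
    W_in, W_rec, W_out = weights
    h = np.zeros(hidden)
    stimulus = "L"                             # assumed initial stimulus
    for _ in range(steps):
        v = observe(agent_step(stimulus))      # noisy velocity reading
        h = np.tanh(W_in * v + W_rec @ h)      # Elman context update
        out = np.tanh(W_out @ h)
        stimulus = "H" if out[0] > 0 else "L"  # control output -> stimulus
    return "genuine" if out[1] > 0 else "counterfeit"
```

In the passive setups described below, `out[0]` would simply be ignored and the stimulus driven by the random toggling process instead.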
3.3 Results
To validate the advantages of the interactive approach, we use three setups for the Turing Learning algorithm. In the default setup, hereafter the “Interactive” setup, the discriminator controls the environmental stimulus while observing the agent. In the other two setups, the discriminator observes the agent passively; that is, its output is not used to update the stimulus. Instead, the stimulus is chosen uniformly at random at the beginning of the trial, and is toggled with probability 0.1 at each time step (the stimulus is hence expected to change on average every 10 time steps). In setup “Passive 1”, the discriminator has the same input as in the “Interactive” setup (the observable feature, v). In setup “Passive 2”, the discriminator has one additional input, the current stimulus (S). All other aspects of the passive setups are identical to the “Interactive” setup.

For each setup, we performed 20 runs of the Turing Learning algorithm. Figure 3(a) shows the distribution of the inferred models that achieved the highest evaluation value in the 1000th generation. The “Interactive” setup is the only one that inferred all parameters with good accuracy. Figure 3(b) shows a typical example of how a discriminator interacts with the agent. The discriminator initially sets the environmental stimulus to alternating values (i.e., toggling between H and L). Once the agent advances from state 1 to state 2, the discriminator instantly changes the stimulus to L and holds it constant. Once the agent advances to higher states, the stimulus is switched again, and so forth. This strategy allows the discriminator to observe the agent’s velocity in each state.

4 Case Study 2: A Robot Inferring Its Own Sensor Configuration

4.1 Problem Formulation
The reality gap is a well-known problem in robotics: behaviors that work well in simulation often do not translate effectively into real-world implementations [11]. This is because simulations are generally unable to capture the full range of features of the real world, and therefore make simplifying assumptions. Yet simulations can be important, even on-board a physical robot, as they facilitate planning and optimization. This case study investigates how a robot can use Turing Learning to improve the accuracy of a simulation model of itself, through a process of self-discovery similar to [27]. In a practical scenario, the inference could take place on-board a physical platform. For convenience, we use an existing simulation platform [28], which has been extensively verified and shown to be able to cross the reality gap [29].

The robot, an e-puck [30], is represented as a cylinder of diameter 7.4 cm, height 4.7 cm and mass 152 g. It has two symmetrically aligned wheels. Their ground contact velocities (vleft and vright) can be set within [−12.8, 12.8] cm/s. During motion, random noise is applied to each wheel velocity by multiplying it with a number drawn uniformly at random from the range (0.95, 1.05). The robot has eight infrared proximity sensors distributed around its cylindrical body, see Figure 4(a). The sensors provide noisy reading values (s1, s2, . . . , s8). We assume that the robot does not know where the sensors are located (neither their orientations, nor their displacements from the center). Situations like this are common in robotics, where uncertainties are introduced when sensors are mounted manually or when the sensor configuration changes during operation (e.g., in a collision with an object, or when the robot itself reconfigures the sensors). The sensor configuration can be described as
$$q = (\theta_1, \theta_2, \ldots, \theta_8, d_1, d_2, \ldots, d_8), \quad (2)$$
where di ∈ (0, R] defines the distance of sensor i from the robot’s center (R is the robot’s radius), and θi ∈ [−π, π] defines the bearing of sensor i relative to the robot’s front.

The robot operates in a bounded square environment with sides of 50 cm, shown in Figure 4(b). The environment also contains nine movable, cylindrical obstacles, arranged in a grid. The distance between the obstacles is just wide enough for an e-puck to pass through.

4.2 Turing Learning Implementation
We implement Turing Learning for this problem as follows:

• Training data.
The training data comes from the eight proximity sensors of a “real” e-puck robot, that is, one using the sensor configuration q as defined by the robot (see Figure 4(a)). The discriminator controls the movements of the robot within the environment shown in Figure 4(b), while observing the readings of its sensors.

• Model representation. The sensor configuration, q, is to be inferred. In other words, a total of 16 parameters have to be estimated.

• Discriminator representation. As in Case Study 1, the discriminator is implemented as an Elman neural network with 5 hidden neurons. The network has 8 inputs that receive the values of the robot’s proximity sensors (s1, s2, . . . , s8). In addition to the classification output, the discriminator has two control outputs, which are used to set the robot’s wheel velocities (vleft and vright). In each trial, the robot starts from a random position and random orientation within the environment.⁶ The evaluation lasts 10 seconds. As the robot’s sensors and actuators are updated 10 times per second, this results in 100 time steps.

• The remaining aspects are implemented exactly as in Case Study 1.

⁶As the robot knows neither its position relative to the obstacles, nor its sensor configuration, the scenario can be considered a chicken-and-egg problem.

4.3 Results
To validate the advantages of the interactive approach, we again use three setups. In the “Interactive” setup, the discriminator controls the movements of the robot while observing its sensor readings. In the other two setups, the discriminator observes the robot’s sensor readings passively; that is, its output is not used to update the movements of the robot. Rather, the pair of wheel velocities is chosen uniformly at random at the beginning of the trial and re-drawn with probability 0.1 at each time step (the movement pattern is hence expected to change on average every 10 time steps). In setup “Passive 1”, the discriminator has the same inputs as in the “Interactive” setup (the reading values of the robot’s sensors, s1, s2, . . . , s8). In setup “Passive 2”, the discriminator has two additional inputs, indicating the velocities of the left and right wheels (vleft and vright). All other aspects of the passive setups are identical to the “Interactive” setup.

For each setup, we performed 20 runs of the Turing Learning algorithm. Figure 5 shows the distribution of the inferred models that achieved the highest evaluation value in the 1000th generation. The “Interactive” setup is the only one that inferred the orientations of the proximity sensors with good accuracy. The displacement parameters were inferred in all setups, though none of the setups provided accurate estimates.

Figure 6 shows a typical example of how a discriminator controls the robot. At the beginning, the robot rotates clockwise, registering an obstacle with sensors s7, s6, . . . , s2 (in that order). The robot then moves forward and registers the obstacle with sensors s1 and/or s8 while pushing it. This confirms that s1 and s8 are indeed forward-facing. Once the robot no longer has any obstacle in front of it, it repeats the process.

To validate whether the sensor-to-motor coupling was of any significance for the discrimination task, we recorded the movements of a robot controlled by the best discriminator of each of the 20 runs. The robot used either the genuine sensor configuration (50 trials) or the best model configuration of the corresponding run (50 trials).
In these 2000 “closed-loop” experiments, the discriminator made correct judgments in 69.45% of the cases. We then repeated the 2000 trials, now ignoring the discriminator’s control outputs and instead replaying the movements recorded earlier. In these 2000 “open-loop” experiments, the discriminator made correct judgments in 58.60% of the cases—a significant drop, though still better than guessing (50%).

5 Conclusion
In this paper we analyzed how Generative Adversarial Networks (GANs) relate to the Turing test. We identified the defining features of GANs when assuming a Turing perspective. Other features, including the choice of model representation, discriminator representation, and optimization algorithm, were viewed as implementation options of a generalized version of GANs, also referred to as Turing Learning. It was noted that the discriminator in GANs does not directly influence the sampling process, but rather is provided with a (static) data sample from either the generative model or the training data set. This is in stark contrast to the Turing test, where the discriminator (the interrogator) plays an active role; it poses questions to the players to reveal the information most relevant to the discrimination task. Such interactions are by no means always useful. For the purpose of generating photo-realistic images, for example, they may not be needed.⁷ For the two case studies presented here, however, interactions were shown to improve the accuracy of models.

The first case study showed how one can infer the behavior of an agent while controlling a stimulus present in its environment. It could serve as a template for studies of animal/human behavior, especially where some behavioral traits are revealed only through meaningful interactions. The inference task was not simple, as the agent’s actions depended on a hidden stochastic process. The latter was influenced by the stimulus, which was set to either low or high by the discriminator (100 times). It was not known in advance which of the 2^100 possible stimulus sequences would be informative. The discriminator thus needed to construct a suitable sequence dynamically, taking the observation data into account.

The second case study focused on a different class of problems: active self-discovery. It showed that a robot can infer its own sensor configuration through controlled movements. This case study could serve as a template for modelling physical devices. The inference task was not simple, as the robot started from a random position in the environment, and its motors and sensors were affected by noise. The discriminator thus needed to dynamically construct a control sequence that let the robot approach an obstacle and perform movements for testing its sensor configuration. Future work could attempt to build models of more complex behaviors, including those of humans.

Acknowledgments
The authors thank Nathan Lepora for stimulating discussions.

⁷Though if the discriminator could request additional images from the same model or training agent, problems like mode collapse might be prevented.
1. What is the main contribution of the paper, and how does it differ from existing GANs? 2. What are the strengths and weaknesses of the proposed approach, particularly regarding its generalizability and practicality? 3. Do you have any concerns about the implementation and validation of the proposed method, especially in comparison to real-world applications?
Review
Review
This paper proposes a generalization of GANs, which the authors refer to as Turing Learning. The idea is to let the discriminator interact with the generator, and thus act as an active interrogator that can influence the sampling behavior of the generator. To show that this strategy is superior to existing GANs, where the discriminator functions as a passive responder, the authors perform two case studies of the proposed (high-level) model (interactive), comparing against models with passive discriminators. The results show that the interactive model largely outperforms the passive models, with much less variance across multiple runs.

Pros
1) The paper is written well, with clear motivation. The idea of having the discriminator work as an active interrogator to influence the generator is novel and makes perfect sense.
2) The results show that the proposed addition of discriminator-generator interaction actually helps the model work more accurately.

Cons
3) The proposed learning strategy is too general, while the implemented models are too specific to each problem. I was hoping to see a specific GAN learning framework that implements the idea, but no concrete model is proposed, and the interaction between the discriminator and the generator is implemented differently from model to model in the two case studies. This severely limits the application of the model to new problems.
4) The two case studies consider toy problems, and it is not clear how this model would work in real-world problems. This gives me the impression that the work is still preliminary.

Summary
In sum, the paper presents a novel and interesting idea that can generalize and potentially improve existing generative adversarial learning. However, the idea is not implemented as a concrete model that generalizes to broader learning cases, and it is only validated with two proof-of-concept case studies on toy problems. This limits the applicability of the model to new problems. Thus, despite its novelty and potential impact, I consider this a preliminary work at its current status, and do not strongly support its acceptance.
NIPS
Title: Artistic Style Transfer with Internal-external Learning and Contrastive Learning

Abstract
Although existing artistic style transfer methods have achieved significant improvement with deep neural networks, they still suffer from artifacts such as disharmonious colors and repetitive patterns. Motivated by this, we propose an internal-external style transfer method with two contrastive losses. Specifically, we utilize the internal statistics of a single style image to determine the colors and texture patterns of the stylized image, and in the meantime, we leverage the external information of a large-scale style dataset to learn the human-aware style information, which makes the color distributions and texture patterns in the stylized image more reasonable and harmonious. In addition, we argue that existing style transfer methods only consider the content-to-stylization and style-to-stylization relations, neglecting the stylization-to-stylization relations. To address this issue, we introduce two contrastive losses, which pull multiple stylization embeddings closer to each other when they share the same content or style, but push them far apart otherwise. We conduct extensive experiments, showing that our proposed method can not only produce visually more harmonious and satisfying artistic images, but also promote the stability and consistency of rendered video clips.

1 Introduction
Artistic style transfer is a long-standing research topic that seeks to render a photograph with a given artwork style. Ever since Gatys et al. [10] for the first time proposed a neural method, which leverages a pre-trained Deep Convolutional Neural Network (DCNN) to separate and recombine contents and styles of arbitrary images, an unprecedented boom [20, 26, 15, 30, 36, 51, 48] in style transfer has been witnessed. Despite the recent progress, there still exists a large gap between real artworks and synthesized stylizations. As shown in Figure 1, the stylized images usually contain some disharmonious colors and repetitive patterns, which makes them easily distinguishable from real artworks. We argue that this is because existing style transfer methods often confine themselves to the internal style statistics of a single artistic image. In some other tasks (for example, image-to-image translation [17, 60, 16, 25, 8, 18]), the style is usually learned from a collection of images, which inspires us to leverage the external information reserved in a large-scale style dataset to improve the stylization results in style transfer.

[Figure 1: Stylization examples. The first and second columns show the style and content images, respectively. The other seven columns show the stylized images produced by our method, Gatys et al. [10], AdaIN [15], WCT [30], Avatar-Net [41], LST [28], and SANet [36].]

Why is the external information so important for style transfer? Our analysis is as follows: although different images in the style dataset vary greatly in fine details, they share a key commonality—they are all human-created artworks, whose brushstrokes, color distributions, texture patterns, tones, etc., are more consistent with human perception. Namely, they contain some human-aware style information that is lacking in synthesized stylizations. A natural idea is to utilize such human-aware style information to improve stylization results.
To this end, we employ an internal-external learning scheme during training, which takes both internal learning and external learning into consideration. To be more specific, on the one hand, we follow previous methods [10, 20, 46, 54, 58], utilizing the internal statistics of a single artwork to determine the colors and texture patterns of the stylized image. On the other hand, we employ Generative Adversarial Nets (GANs) [11, 39, 2, 56, 3] to externally learn the human-aware style information from the large-scale style dataset, which is then used to make the color distributions and texture patterns in the stylized image more reasonable and harmonious, significantly bridging the gap between human-created artworks and AI-created artworks.

In addition, there is another problem with existing style transfer methods: they usually employ a content loss and a style loss to enforce the content-to-stylization and style-to-stylization relations, respectively, while neglecting the stylization-to-stylization relations, which are also important for style transfer. What are stylization-to-stylization relations? Intuitively, stylized images rendered with the same style image should have closer relations in style than those rendered with different style images. Similarly, stylized images based on the same content image should have closer relations in content than those based on different content images. Inspired by this, in this paper we introduce two contrastive losses: a content contrastive loss and a style contrastive loss, which pull multiple stylization embeddings closer to each other when they share the same content or style, but push them far apart otherwise. To the best of our knowledge, this is the first work that successfully leverages the power of contrastive learning [6, 12, 21, 38] in the style transfer scenario. Our extensive experiments show that the proposed method can not only produce visually more harmonious and plausible artistic images, but also promote the stability and consistency of rendered video clips.

To summarize, the main contributions of this work are threefold:
• We propose a novel internal-external style transfer method which takes both internal learning and external learning into consideration, significantly bridging the gap between human-created and AI-created artworks.
• We for the first time introduce contrastive learning to style transfer, yielding more satisfying stylization results with the learned stylization-to-stylization relations.
• We demonstrate the effectiveness and superiority of our approach through extensive comparisons with several state-of-the-art artistic style transfer methods.

2 Related Work
Artistic style transfer. Artistic style transfer is an image editing task that aims at transferring artistic styles onto everyday photographs to create new artworks. Earlier methods usually resort to traditional techniques such as stroke rendering [13], image analogy [14, 42, 9, 31], and image filtering [52] to perform artistic style transfer. These methods typically rely on low-level statistics and often fail to capture semantic information. Recently, Gatys et al. [10] discovered that the Gram matrix upon deep features extracted from a pre-trained DCNN can notably represent the characteristics of visual styles, which opened up the neural style transfer era. Since then, a suite of neural methods have been proposed, boosting the development of style transfer from different concerns. Specifically, [20, 27, 46] utilize feed-forward networks to improve efficiency.
[26, 54, 36, 58, 35] refine various elements in the stylized images (including content preservation, textures, brushstrokes, etc.) to enhance visual quality. [7, 15, 30, 41, 28] propose universal style transfer methods to achieve generalization. [29, 47, 51] inject random noise into the generative network to encourage diversity. Despite the rapid progress, these style transfer methods still suffer from spurious artifacts such as disharmonious colors and repetitive patterns. Notice that there is another line of work [40, 24, 23, 45, 4, 5] that aims to learn an artist’s style from all his/her artworks. In comparison, instead of learning an artist’s style, we focus on better learning an artwork’s style (just like the style transfer methods mentioned in the previous paragraph) with the assistance of the human-aware style information reserved in the external style dataset. Therefore, our method is orthogonal to these works.

Image-to-image translation. Image-to-image translation (I2I) [17, 60, 16, 25, 8, 18] aims at learning the mapping between different visual domains, which is closely related to style transfer. [60, 16] have distinguished these two tasks: (i) I2I can only translate between content-similar visual domains (such as horses↔zebras and summer↔winter), while style transfer does not have such a limitation—its content image and style image can be totally different (e.g., the former is a photo of a person and the latter is van Gogh’s The Starry Night). (ii) I2I aims to learn the mapping between two image collections, while style transfer aims to learn the mapping between two specific images. However, we argue that we can borrow some insights from I2I and leverage the external information of large-scale style image collections to improve the stylization quality in style transfer.

Internal-external learning. Internal-external learning has shown effectiveness in various image generation tasks, such as super-resolution, image inpainting, and so on. In detail, Soh et al. [44] presented a fast, flexible, and lightweight self-supervised super-resolution method by exploiting both external and internal samples. Park et al. [37] developed an internal-external super-resolution method that facilitates super-resolution networks to further enhance the quality of the restored images. Wang et al. [49] proposed a general external-internal learning inpainting scheme, which learns semantic knowledge externally by training on large datasets while fully utilizing the internal statistics of the single test image. However, in the field of style transfer, existing methods only use a single artistic image to learn style, resulting in unsatisfying stylization results. Motivated by this, in this work we propose an internal-external style transfer method that takes both internal learning and external learning into consideration, significantly bridging the gap between human-created and AI-created artworks.

Contrastive learning. Generally, there are three key ingredients in a contrastive learning process: a query, positive examples, and negative examples. The target of contrastive learning is to associate a “query” with its “positive” example and disassociate the “query” from other examples that are referred to as “negatives”. Recently, contrastive learning has demonstrated its effectiveness in the field of conditional image synthesis. To be more specific, ContraGAN [21] introduced a conditional contrastive loss (2C loss) to learn both data-to-class and data-to-data relations. Park et al.
[38] maximized the mutual information between input and output with contrastive learning to encourage content preservation in unpaired image translation problems. Liu et al. [34] introduced a latent-augmented contrastive loss to encourage images generated from adjacent latent codes to be similar and those generated from distinct latent codes to be dissimilar, achieving diverse image synthesis. Yu et al. [55] proposed a dual contrastive loss in adversarial training that generalizes representation to more effectively distinguish between real and fake, further improving image generation quality. Wu et al. [53] improved image dehazing results by introducing contrastive learning, which ensures that the restored image is pulled closer to the clear image and pushed far away from the hazy image in representation space. Note that none of the above contrastive learning methods can be adopted for style transfer directly. In this work, we make the first attempt to adapt contrastive learning to artistic style transfer, and propose two novel contrastive losses: a content contrastive loss and a style contrastive loss that learn the stylization-to-stylization relations ignored by existing style transfer methods.

3 Proposed Method
Existing style transfer methods usually produce unsatisfying stylization results with disharmonious colors and repetitive patterns, which makes them quite easy to distinguish from real artworks. As an attempt to bridge the large gap between human-created and AI-created artworks, we propose a novel internal-external style transfer method with two contrastive losses. The overview of our method is shown in Figure 2. It is worth noting that our framework is built on the SANet [36] backbone (one of the state-of-the-art style transfer methods), which consists of an encoder E, a transformation module T, and a decoder D. In detail, E is a pre-trained VGG-19 network [43] used to extract image features, T is a style-attentional network that flexibly matches the semantically nearest style features onto the content features, and D is a generative network used to transform encoded semantic feature maps into stylized images. We extend SANet [36] with our proposed changes, and our full model is described below.

3.1 Internal-external Learning
Let C and S be the sets of photographs and artworks, respectively. We aim to learn both the internal style characteristics from a single artwork Is ∈ S and the external human-aware style information from the dataset S, and then transfer them to an arbitrary content image Ic ∈ C to create new artistic images Isc.

Internal style learning. Following previous style transfer methods [15, 36, 1], we use a pre-trained VGG-19 network φ to capture the internal style characteristics from a single artistic image, and the style loss can be generally computed as
$$\mathcal{L}_s := \sum_{i=1}^{L} \big\| \mu(\phi_i(I_{sc})) - \mu(\phi_i(I_s)) \big\|_2 + \big\| \sigma(\phi_i(I_{sc})) - \sigma(\phi_i(I_s)) \big\|_2 \quad (1)$$
where φi denotes the ith layer (Relu1_1, Relu2_1, Relu3_1, Relu4_1, and Relu5_1 are used in our model) of the VGG-19 network, and µ and σ represent the mean and standard deviation of the feature maps extracted by φi, respectively. A sketch of this loss is given below.
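The following is a PyTorch sketch of Eq. (1), assuming a hypothetical `vgg_features` helper that returns the five listed ReLU activations for an image batch; the helper and tensor shapes are our assumptions, not code from the paper:

```python
import torch

def mean_std(feat, eps=1e-5):
    """Channel-wise mean and standard deviation of a (B, C, H, W) feature map."""
    b, c = feat.shape[:2]
    flat = feat.reshape(b, c, -1)
    return flat.mean(dim=2), (flat.var(dim=2) + eps).sqrt()

def style_loss(stylized, style, vgg_features):
    """Eq. (1): match mean/std statistics across the chosen VGG-19 layers.
    `vgg_features(img)` is assumed to return the Relu1_1..Relu5_1 activations."""
    loss = 0.0
    for f_sc, f_s in zip(vgg_features(stylized), vgg_features(style)):
        mu_sc, sigma_sc = mean_std(f_sc)
        mu_s, sigma_s = mean_std(f_s)
        loss = loss + torch.norm(mu_sc - mu_s, p=2, dim=1).mean() \
                    + torch.norm(sigma_sc - sigma_s, p=2, dim=1).mean()
    return loss
```

This is the same mean/std matching popularized by AdaIN [15]; it anchors the stylization to one artwork, while the external (GAN) term described next anchors it to the whole style dataset.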
External style learning. We employ a GAN [11, 39, 2, 56, 3] to learn the human-aware style information from the style dataset S. A GAN is a popular generative model consisting of two networks (i.e., a generator G and a discriminator D) that compete against each other. Specifically, we input the stylized images produced by the generator and the artworks sampled from S to the discriminator as fake data and real data, respectively. During training, the generator tries to fool the discriminator by generating realistic artistic images, while the discriminator tries to distinguish generated fake artworks from real ones. Joint training of these two networks leads to a generator that is able to produce remarkably realistic fake images with the learned human-aware style information. The adversarial training process can be formulated as (note that our generator G contains an encoder E, a transformation module T, and a decoder D, as shown in Figure 2 (a); in the second term, the outer D is the discriminator and the inner D the decoder):
$$\mathcal{L}_{adv} := \mathbb{E}_{I_s \sim S}\big[\log(D(I_s))\big] + \mathbb{E}_{I_c \sim C,\, I_s \sim S}\big[\log\big(1 - D(D(T(E(I_c), E(I_s))))\big)\big] \quad (2)$$

Content structure preservation. To preserve the content structure of Ic in the stylized image Isc, we adopt the widely used perceptual loss:
$$\mathcal{L}_c := \| \phi_{conv4\_2}(I_{sc}) - \phi_{conv4\_2}(I_c) \|_2 \quad (3)$$

Identity loss. Similar to [36, 32, 59], we utilize the identity loss to encourage the generator G to be an approximate identity mapping when the content image and the style image are the same. In this manner, more content structures and style characteristics are preserved in the stylization result. The identity loss is depicted in Figure 2 (b) and defined as
$$\mathcal{L}_{identity} := \lambda_{identity1}\big(\| I_{cc} - I_c \|_2 + \| I_{ss} - I_s \|_2\big) + \lambda_{identity2} \sum_{i=1}^{L} \big(\| \phi_i(I_{cc}) - \phi_i(I_c) \|_2 + \| \phi_i(I_{ss}) - \phi_i(I_s) \|_2\big) \quad (4)$$
where Icc is the output image generated when both the content image and the style image are Ic (Iss is analogous), and λidentity1 and λidentity2 are the weights associated with the different loss terms. For φi, we choose the Relu1_1, Relu2_1, Relu3_1, Relu4_1, and Relu5_1 layers in our experiments.

3.2 Contrastive Learning
Intuitively, stylized images rendered with the same style image should have closer relations in style than those rendered with different style images. Similarly, stylized images based on the same content image should have closer relations in content than those based on different content images. We refer to such relations as stylization-to-stylization relations. Generally, existing style transfer methods only consider the content-to-stylization and style-to-stylization relations by applying the content loss and style loss (like Lc and Ls introduced above), while neglecting the stylization-to-stylization relations. To tackle this problem, we for the first time introduce contrastive learning to style transfer. The core idea of contrastive learning is to associate data points with their “positive” examples while disassociating them from the other points that are regarded as “negatives”. Specifically, we propose two contrastive losses: a style contrastive loss and a content contrastive loss to learn the stylization-to-stylization relations. Note that for clearer expression, hereafter we use si to represent the ith style image, ci to represent the ith content image, and sici to represent the stylized image generated with si and ci.

To perform contrastive learning in every training batch, we arrange a batch of style and content images in the following manner. Assume the batch size is b, an even number. Then we take a batch of style images {s1, s2, ..., sb/2, s1, s2, ..., sb/2−1, sb/2} and a batch of content images {c1, c2, ..., cb/2, c2, c3, ..., cb/2, c1}. Hence, the corresponding stylized images are {s1c1, s2c2, ..., sb/2cb/2, s1c2, s2c3, ..., sb/2−1cb/2, sb/2c1}. In this way, we ensure that for every stylized image sicj, we can find a stylized image sicx (x ≠ j) that shares the same style with it, and a stylized image sycj (y ≠ i) that shares the same content with it in the same batch. Figure 2 (c) depicts this process by taking b = 8 as an example. A sketch of this batch arrangement is given below.
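The arrangement can be expressed in a few lines; the list-based indexing below is our own rendering of the scheme just described:

```python
def arrange_batch(styles, contents):
    """Duplicate the style half-batch and roll the content half-batch by one,
    so every stylization has in-batch positives for both style and content."""
    style_batch = styles + styles                            # s1..s_b/2, s1..s_b/2
    content_batch = contents + contents[1:] + contents[:1]   # c1..c_b/2, c2..c_b/2, c1
    return style_batch, content_batch

# Example with b = 8 (four distinct styles and contents):
s_batch, c_batch = arrange_batch(["s1", "s2", "s3", "s4"],
                                 ["c1", "c2", "c3", "c4"])
# Stylized pairs: s1c1 s2c2 s3c3 s4c4 | s1c2 s2c3 s3c4 s4c1
print(list(zip(s_batch, c_batch)))
```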
Style contrastive loss. To associate stylized images that share the same style, for a stylized image sicj, we select sicx (x ≠ j) as its positive example (sicx shares the same style with sicj), and smcn (m ≠ i and n ≠ j) as its negative examples. Notice that smcn represents a series of stylized images, not just one image. We can then formulate our style contrastive loss as follows:
$$\mathcal{L}_{s\text{-}contra} := -\log\left( \frac{\exp\big(l_s(s_i c_j)^{\top} l_s(s_i c_x)/\tau\big)}{\exp\big(l_s(s_i c_j)^{\top} l_s(s_i c_x)/\tau\big) + \sum \exp\big(l_s(s_i c_j)^{\top} l_s(s_m c_n)/\tau\big)} \right) \quad (5)$$
where ls = hs(φrelu3_1(·)), in which hs is a style projection network; ls is used to obtain the style embeddings from stylized images. τ is a temperature hyper-parameter controlling the push and pull forces.

Content contrastive loss. Similar to the style contrastive loss, to associate stylized images that share the same content, for a stylized image sicj, we select sycj (y ≠ i) as its positive example (sycj shares the same content with sicj), and smcn (m ≠ i and n ≠ j) as its negative examples. We express the content contrastive loss as
$$\mathcal{L}_{c\text{-}contra} := -\log\left( \frac{\exp\big(l_c(s_i c_j)^{\top} l_c(s_y c_j)/\tau\big)}{\exp\big(l_c(s_i c_j)^{\top} l_c(s_y c_j)/\tau\big) + \sum \exp\big(l_c(s_i c_j)^{\top} l_c(s_m c_n)/\tau\big)} \right) \quad (6)$$
where lc = hc(φrelu4_1(·)), in which hc is a content projection network; lc is used to obtain the content embeddings from stylized images. A sketch of both losses is given below.
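The following is a sketch of Eq. (5) for a single query embedding; Eq. (6) is identical with content embeddings substituted for style embeddings. The ℓ2 normalization of the embeddings is our assumption, as the paper does not state it:

```python
import torch
import torch.nn.functional as F

def info_nce(query, positive, negatives, tau=0.2):
    """Eqs. (5)/(6) for one query: negative log of the positive's softmax
    similarity. `query`/`positive` are (D,) embeddings; `negatives` is (N, D)."""
    q = F.normalize(query, dim=0)            # normalization is our assumption
    pos = torch.exp(q @ F.normalize(positive, dim=0) / tau)
    neg = torch.exp(F.normalize(negatives, dim=1) @ q / tau).sum()
    return -torch.log(pos / (pos + neg))
```

With the batch arrangement above, the positive for sicj under the style loss is sicx, and the negatives are all smcn with m ≠ i and n ≠ j; the final objective, given next, combines these terms with the losses of Section 3.1.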
3.3 Final Objective
Summing up all the aforementioned losses, we obtain the final objective of our model,
$$\mathcal{L}_{final} := \lambda_1 \mathcal{L}_s + \lambda_2 \mathcal{L}_{adv} + \lambda_3 \mathcal{L}_c + \lambda_4 \mathcal{L}_{identity} + \lambda_5 \mathcal{L}_{s\text{-}contra} + \lambda_6 \mathcal{L}_{c\text{-}contra} \quad (7)$$
where λ1, λ2, λ3, λ4, λ5, and λ6 are hyper-parameters for striking a proper balance among the losses.

4 Experimental Results
In this section, we first introduce the experimental settings. Then we present qualitative and quantitative comparisons between the proposed method and several baseline models. Finally, we discuss the effect of each component in our model by conducting ablation studies.

4.1 Experimental Settings
Implementation details. We build on the recent SANet [36] backbone and extend it with our proposed changes to further push the boundaries of automatic artwork generation. We refer to the original paper [36] for the detailed network architecture of the encoder E, transformation module T, and decoder D. For the discriminator D, we employ the multi-scale discriminator proposed by Wang et al. [50]. The style projection network hs is a two-layer MLP (multilayer perceptron) with 256 units at the first layer and 128 units at the second layer. Similarly, the content projection network hc is a two-layer MLP with 128 units at each layer. The hyper-parameter τ in Equations (5) and (6) is set to 0.2. The loss weights in Equations (4) and (7) are set to λidentity1 = 50, λidentity2 = 1, λ1 = 1, λ2 = 5, λ3 = 1, λ4 = 1, λ5 = 0.3, and λ6 = 0.3. We train our network using the Adam optimizer with a learning rate of 0.0001 and a batch size of 16 for 160,000 iterations. Our code is available at: https://github.com/HalbertCH/IEContraAST.

Datasets. Like [15, 58, 36, 19], we take MS-COCO [33] and WikiArt [22] as the content dataset and style dataset, respectively. During the training stage, we first resize the smallest dimension of the training images to 512 while preserving the aspect ratio, and then randomly crop 256 × 256 patches from these images as input. Note that in the inference stage, our method is applicable to content images and style images of any size.

Baselines. We choose several state-of-the-art style transfer methods as our baselines, including Gatys et al. [10], AdaIN [15], WCT [30], Avatar-Net [41], LST [28], and SANet [36]. All these methods are run using their public code and default configurations.

4.2 Qualitative Comparisons
In Figure 3, we show the qualitative comparisons between our method and the six baselines introduced above. We observe that Gatys et al. [10] is prone to falling into a bad local minimum (e.g., 1st, 2nd, and 3rd columns). AdaIN [15] sometimes produces messy stylized images with unseen colors and unwanted halation around the edges (e.g., 1st, 3rd, and 6th columns). WCT [30] often introduces distorted patterns, yielding less-structured and blunt stylized images (e.g., 2nd, 4th, and 5th columns). Avatar-Net [41] struggles to produce sharp details and fine brushstrokes (e.g., 1st, 4th, and 5th columns). LST [28] usually produces less stylized images with very limited texture patterns (e.g., 2nd, 4th, and 6th columns). SANet [36] tends to apply similar repeated texture patterns across different styles (e.g., 1st, 3rd, and 6th columns). Despite the recent progress, the gap between synthesized artistic images and real artworks is still very large. To further narrow this gap, we introduce internal-external learning and contrastive learning to artistic style transfer, leading to visually more harmonious and plausible artistic images, as shown in the 2nd row of Figure 3.

We also compare our method with the six baselines on video style transfer, which is conducted between a content video and a style image in a frame-wise manner. The stylization results are shown in Figure 4. To visualize the stability and consistency of the synthesized video clips, we also show heat maps of the differences between frames in the last column of Figure 4. As we can see, our approach outperforms existing style transfer methods in terms of stability and consistency by a significant margin. This can be attributed to two points: (i) external learning smooths the stylization results by eliminating distorted texture patterns; (ii) the proposed contrastive losses take the stylization-to-stylization relations into consideration, pulling adjacent stylized frames closer to each other since they share the same style and similar content.

4.3 Quantitative Comparisons
As the qualitative assessment presented above could be subjective, in this section we resort to several evaluation metrics to better assess the performance of the proposed method in a quantitative manner. The user study [54, 36, 24, 23, 48] is the most widely adopted evaluation metric in style transfer; it investigates user preference over different stylization results for a more objective comparison.

Preference score. We use 10 content images and 15 style images to synthesize 150 stylized images for each method. Then 20 content-style pairs are randomly selected for each participant, and we show
them the stylized images generated by our method and the competing methods side-by-side in a random order. Next, we ask each participant to choose his/her favorite stylization result for each content-style pair. We finally collect 1000 votes from 50 participants and present the percentage of votes for each method in the second row of Table 1. The results indicate that the stylized images generated by our method are preferred by human participants over those generated by the competing methods.

Deception score. To measure the gap between AI-created artistic images and human-created artworks, we conduct another user study: we show each participant 80 artistic images, consisting of 10 human-created artworks collected from WikiArt [22] and 70 stylized images generated by our method and the 6 baselines (each method providing 10 stylized images). Then, for every image, we ask the participants to guess whether it is a real artwork or not. The deception score is calculated as the fraction of times the stylized images generated by a method are identified as “real”. For comparison, we also report the fraction of times the human-created artworks are identified as “real”. The results are shown in the third row of Table 1, where we can see that the deception rate of our method is closest to that of human-created artworks, further demonstrating the effectiveness of our method.

To quantitatively evaluate the stability and consistency of the proposed method on video style transfer, we adopt LPIPS (Learned Perceptual Image Patch Similarity) [57] as the evaluation metric.

LPIPS. LPIPS is a widely used metric in the field of multimodal image-to-image translation (MI2I) [61, 16, 25, 8] to measure diversity. In this paper, we employ LPIPS to measure the stability and consistency of rendered clips by computing the average perceptual distance between adjacent frames (a sketch of this measure is given below). Note that, contrary to MI2I methods, which expect a higher LPIPS value for better diversity, we expect a lower LPIPS value, indicating better stability and consistency. We synthesize 18 stylized video clips for each method and report the average LPIPS distances in Table 2, where we observe that our approach obtains the best score among all methods, consistent with the qualitative comparisons in Figure 4.

Table 2: The average LPIPS distances for different methods. The lower the better.
                 Inputs  Gatys et al.  AdaIN  WCT    Avatar-Net  LST    SANet  Ours
LPIPS Distance   0.231   0.488         0.369  0.460  0.341       0.326  0.372  0.317
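A sketch of the adjacent-frame stability measure, assuming the publicly available `lpips` package; frame loading and the set of clips are omitted:

```python
import lpips  # pip install lpips

loss_fn = lpips.LPIPS(net='alex')  # the package's default AlexNet backbone

def clip_stability(frames):
    """Average LPIPS distance between adjacent frames of one stylized clip.
    `frames`: list of (1, 3, H, W) tensors scaled to [-1, 1]."""
    dists = [loss_fn(a, b).item() for a, b in zip(frames[:-1], frames[1:])]
    return sum(dists) / len(dists)  # lower = more stable and consistent
```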
4.4 Ablation Studies

In this section, we conduct several ablation studies to highlight the effect of the different components of our model, namely external learning (EL) and contrastive learning (CL). As for internal learning, since its effect has been fully validated by existing style transfer methods, we do not ablate it in this experiment.

We first explore the effect of EL and CL on image style transfer. Figure 5 (a) shows the image stylization results of our method with and without EL/CL. It can be observed that, without EL, the stylized images become messier, with abrupt colors and obvious distortions. The reason could be that the model without EL focuses only on increasing the style similarity between the stylized image and the style image, without considering whether the color distributions and texture patterns in the stylized image are natural and harmonious. In comparison, the model with EL can learn the human-aware style information from the large-scale style dataset, leading to more realistic and harmonious stylized images that cannot be distinguished from real artworks by the discriminator. In addition, we find that with the proposed contrastive losses our method can better match the target style to the content image. This is because the contrastive losses help the network learn better style and content representations by taking the stylization-to-stylization relations into consideration, further refining the stylization results. The user preference results reported in the last column of Figure 5 (a) also show that our full model performs best.

[Figure 5: Ablation studies of external learning (EL) and contrastive learning (CL) on (a) image style transfer and (b) video style transfer. Accompanying scores: preference 0.388 / 0.281 / 0.331 for (a); LPIPS 0.317 / 0.325 / 0.321 for (b).]

Similar ablation studies are conducted on video style transfer. As shown in Figure 5 (b), a degradation in stability can be observed after we remove external learning or contrastive learning from our method (notice the color of the hair and skin), which is in line with the reported LPIPS distances. The results indicate that both external learning and contrastive learning improve the stability of video style transfer. As analyzed in Section 4.2, external learning gains stability by eliminating distorted texture patterns, and contrastive learning gains stability by pulling adjacent stylized frames closer to each other.
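In implementation terms, these ablations amount to zeroing the corresponding weights in the final objective of Eq. (7). The sketch below uses the loss weights listed in Section 4.1 (lambda_1 = lambda_3 = lambda_4 = 1, lambda_2 = 5, lambda_5 = lambda_6 = 0.3); the dictionary of loss terms and the zero-weight ablation mechanics are our assumptions, not details confirmed by the paper.

```python
# Sketch of Eq. (7) with ablation switches: dropping external learning
# (EL) zeroes the adversarial weight; dropping contrastive learning
# (CL) zeroes both contrastive weights. The entries of L are assumed
# to be scalar loss tensors computed earlier in the training step.
def final_objective(L, use_EL=True, use_CL=True):
    """L maps names to losses: 'style', 'adv', 'content', 'identity',
    's_contra', 'c_contra'."""
    w_adv = 5.0 if use_EL else 0.0      # lambda_2
    w_contra = 0.3 if use_CL else 0.0   # lambda_5 = lambda_6
    return (L['style'] + w_adv * L['adv'] + L['content'] + L['identity']
            + w_contra * (L['s_contra'] + L['c_contra']))
```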
5 Limitations

One limitation of this work is that the proposed internal-external learning scheme and the two contrastive losses cannot be applied to learning-free style transfer methods such as WCT [30], Avatar-Net [41], and LST [28], since our method requires a training process. Our method can therefore only be incorporated into learning-based methods such as Johnson et al. [20], AdaIN [15], and SANet [36] (in this work, we mainly take SANet as our backbone to show the effectiveness and superiority of our method). Another limitation is that, in the inference stage, style images that differ too much from the training styles may not benefit from the external learning scheme, since they fall outside the learned style distributions.

6 Conclusion

In this paper, we propose an internal-external style transfer method with two novel contrastive losses. The internal-external learning scheme simultaneously learns the internal statistics of a single artistic image and the human-aware style information of the large-scale style dataset. The contrastive losses are dedicated to learning the stylization-to-stylization relations by pulling the multiple stylization embeddings closer to each other when they share the same content or style, and pushing them far apart otherwise. Extensive experiments show that our method can not only produce visually more harmonious and satisfying artistic images, but also significantly promote the stability and consistency of rendered video clips. The proposed method is simple and effective, and may shed new light on artistic style transfer. In the future, we would like to extend our method to other vision tasks, for example, texture synthesis.

Acknowledgments

This work was supported in part by the projects No. 2020YFC1523202, 19ZDA197, LY21F020005, 2021009, and 2019C03137, the MOE Frontier Science Center for Brain Science & Brain-Machine Integration (Zhejiang University), the National Natural Science Foundation of China (Research on Key Technologies of art image restoration based on decoupling learning), and the Key Scientific Research Base for Digital Conservation of Cave Temples (Zhejiang University), State Administration for Cultural Heritage. We would also like to thank the reviewers and AC for their constructive and insightful comments on the early submission.

Funding Transparency Statement

The projects mentioned in the Acknowledgments provided funding and support for this work. There are no additional revenues related to this work.
Review 1

Guiding questions:
1. What is the focus and contribution of the paper on style transfer?
2. What are the strengths of the proposed approach, particularly in terms of combining different losses?
3. What are the weaknesses of the paper, especially regarding the use of terminology and the need for more discussions and examples?
4. Do you have any concerns about the effectiveness of the external loss and its potential impact on the results?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper

In this paper, the authors introduce an internal-external learning framework for style transfer. In the proposed method, the classical VGG-based style loss, which encourages the stylized result to capture the low-level statistics of the style image, is combined with a GAN loss designed to capture style priors from a style database. In addition, contrastive losses are designed to encourage the results to better capture the stylization-to-stylization relations. Experiments in the paper demonstrate that the proposed method can generate visually more appealing results than previous style transfer methods.

Review

Strengths:
• The paper is well written and easy to follow.
• The idea of incorporating a GAN loss to capture style priors from a style database is interesting. The provided results seem to indicate that such an external loss can help produce results that look more natural than those of previous works.
• The evaluation of the stylization results and the comparison with existing works are solid, with both visual results on a range of different content and a thorough user study.

Weaknesses:
• I found the use of the term "internal learning" in this context a bit misleading. Internal learning has been used in the literature to refer to optimizing model parameters on a single image at test time. In this paper, the internal learning part seems to refer to the conventional style transfer loss (VGG-based style and content losses), which is applied at training time over the whole dataset rather than for test-time optimization. It is fine to overload the term if it serves the purpose, but clarification is needed to avoid confusion.
• While I find it interesting to explore applying a GAN loss to the style transfer problem, and I appreciate the improvement it brings to the results, more discussion and examples should be provided to analyze its effects. In particular, one challenge in incorporating a GAN loss in this setting arises when the provided style image has a unique style that does not appear in the style database. How would (and how should) the model behave in that case? I would appreciate more discussion on this, along with more examples analyzing the benefit of the GAN loss.
• It is also not clear how much improvement is gained by incorporating the contrastive losses. Only three examples are provided (in Fig. 5), and the improvements from using the contrastive losses in those examples do not seem obvious to me. More explanation here would better justify the contributions of these additional losses.
Review 2

Guiding questions:
1. What is the main contribution of the paper, and how does it extend the SANet architecture?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its ability to produce plausible stylizations?
3. How does the reviewer assess the novelty and significance of the paper's contributions?
4. What are some potential practical issues with implementing the proposed approach, and how might they be addressed?
5. What additional information or clarification does the reviewer request regarding the paper's implementation and results?
Summary Of The Paper

The paper proposes a neural style transfer algorithm for transferring the artistic style of one image to another. It is basically SANet [32] with some incremental novelties. SANet is an encoder-decoder transformer architecture that accepts two images (content and desired style) and outputs a stylized version of the content by optimizing a Gatys-like loss function. Two novelties are claimed over SANet:
1. The use of 'human external information', by which the authors mean a corpus of real artworks in the desired style, to train a GAN discriminator added to the SANet. It is argued that adding knowledge of multiple real artworks of the desired style in this way helps produce more plausible stylizations.
2. The use of contrastive learning to train the architecture. The idea here is to ensure pair-wise consistency between style descriptors of the generated artwork and the aforementioned corpus of artworks sharing that desired style.

Review

The main contribution of this paper is extending SANet to draw upon external knowledge: in addition to taking in a single content/style image pair, a supporting corpus of images with similar content and similar style is provided. The extensions are fairly straightforward; the exact same SANet losses are used (ensuring Gatys-like content and style consistency using pre-trained VGG), here referred to as 'internal content/style' losses, together with an identity loss, again as defined in SANet.

The first technical tweak is adding a loss for the discriminator, which conducts the usual real/fake check on the output of the stylization against an unpaired set of style images. A 'deception score' user study in Sec. 4, a kind of artistic Turing test, shows that the output artwork is considered more realistic, and this is also shown qualitatively in the figures of the paper. The idea is quite simple but seems to improve the output aesthetics. The practical downside is the need for a large corpus of images in the desired output style.

The second tweak is adding a contrastive loss and a contrastive training methodology to the network. The formation of the batches is explained, but it is not explained how the training curriculum works in tandem with the GAN discriminator. Is there some alternating training pattern similar to DCGAN? How is this contrastive training integrated? A key technical detail omitted from the implementation description is the batch size: it is mentioned that the Figure 2 diagram "uses a batch size b=8 for illustration", but what was used for the experiments reported? Presumably, for contrastive learning to make any difference, a batch size in the hundreds or even thousands is needed. Again, is that practical given the number of style images required, and how would all of this fit into typical GPU VRAM? With this architecture it seems it would be challenging. The actual contrastive loss introduced has limited novelty: it is just the SimCLR temperature-softmax approach with a header network, exactly as in the original SimCLR paper.

Overall I am not convinced by this paper. The novelty is not significant, and the benefit of the external corpus of style images is not well demonstrated relative to the practical disadvantage of requiring it. The time cost of training such a network to perform a stylization, as well as the undescribed practical issues around large batch sizes, makes me doubt this is a valuable approach.
NIPS
Title Artistic Style Transfer with Internal-external Learning and Contrastive Learning Abstract Although existing artistic style transfer methods have achieved significant improvement with deep neural networks, they still suffer from artifacts such as disharmonious colors and repetitive patterns. Motivated by this, we propose an internal-external style transfer method with two contrastive losses. Specifically, we utilize internal statistics of a single style image to determine the colors and texture patterns of the stylized image, and in the meantime, we leverage the external information of the large-scale style dataset to learn the human-aware style information, which makes the color distributions and texture patterns in the stylized image more reasonable and harmonious. In addition, we argue that existing style transfer methods only consider the content-to-stylization and style-to-stylization relations, neglecting the stylization-to-stylization relations. To address this issue, we introduce two contrastive losses, which pull the multiple stylization embeddings closer to each other when they share the same content or style, but push far away otherwise. We conduct extensive experiments, showing that our proposed method can not only produce visually more harmonious and satisfying artistic images, but also promote the stability and consistency of rendered video clips. 1 Introduction Artistic style transfer is a long-standing research topic that seeks to render a photograph with a given artwork style. Ever since Gatys et al. [10] for the first time proposed a neural method, which leverages a pre-trained Deep Convolutional Neural Network (DCNN) to separate and recombine contents and styles of arbitrary images, an unprecedented booming [20, 26, 15, 30, 36, 51, 48] in style transfer has been witnessed. Despite the recent progress, there still exists a large gap between real artworks and synthesized stylizations. As shown in Figure 1, the stylized images usually contain some disharmonious colors and repetitive patterns, which makes them easily distinguishable from real artworks. We argue that this is because existing style transfer methods often confine themselves to the internal style statistics of a single artistic image. In some other tasks (for example, image-to-image translation [17, 60, 16, 25, 8, 18]), the style is usually learned from a collection of images, which inspires us to leverage the external information reserved in the large-scale style dataset to improve the stylization results in style transfer. Why is the external information so important for style transfer? Our analyses are as follows: Although different images in the style dataset vary greatly in fine details, they share a key commonality: they are all human-created artworks, whose brushstrokes, color distributions, texture patterns, tones, ∗Corresponding author 35th Conference on Neural Information Processing Systems (NeurIPS 2021). AdaIN SANetAvatar-Net LSTGatys et al. WCTOursStyle Content Figure 1: Stylization examples. The first and second columns show the style and content images, respectively. The other seven columns show the stylized images produced by our method, Gatys et al. [10], AdaIN [15], WCT [30], Avatar-Net [41], LST [28], and SANet [36]. etc., are more consistent with human perception. Namely, they contain some human-aware style information that is lacked in synthesized stylizations. A natural idea is to utilize such human-aware style information to improve stylization results. 
To this end, we employ an internal-external learning scheme during training, which takes both internal learning and external learning into consideration. To be more specific, on the one hand, we follow previous methods [10, 20, 46, 54, 58], utilizing internal statistics of a single artwork to determine the colors and texture patterns of the stylized image. On the other hand, we employ Generative Adversarial Nets (GANs) [11, 39, 2, 56, 3] to externally learn the human-aware style information from the large-scale style dataset, which is then used to make the color distributions and texture patterns in the stylized image more reasonable and harmonious, significantly bridging the gap between human-created artworks and AI-created artworks. In addition, there is another problem with existing style transfer methods: they usually employ a content loss and a style loss to enforce the content-to-stylization relations and style-to-stylization relations, respectively, while neglect the stylization-to-stylization relations, which are also important for style transfer. What are stylization-to-stylization relations? Intuitively, stylized images rendered with the same style image should have closer relations in style than those rendered with different style images. Similarly, stylized images based on the same content image should have closer relations in content than those based on different content images. Inspired by this, in this paper we introduce two contrastive losses: content contrastive loss and style contrastive loss that can pull the multiple stylization embeddings closer to each other when they share the same content or style, but push far away otherwise. To the best of our knowledge, this is the first work that successfully leverages the power of contrastive learning [6, 12, 21, 38] in the style transfer scenario. Our extensive experiments show that the proposed method can not only produce visually more harmonious and plausible artistic images, but also promote the stability and consistency of rendered video clips. To summarize, the main contributions of this work are threefold: • We propose a novel internal-external style transfer method which takes both internal learning and external learning into consideration, significantly bridging the gap between humancreated and AI-created artworks. • We for the first time introduce contrastive learning to style transfer, yielding more satisfying stylization results with the learned stylization-to-stylization relations. • We demonstrate the effectiveness and superiority of our approach by extensive comparisons with several state-of-the-art artistic style transfer methods. 2 Related Work Artistic style transfer. Artistic style transfer is an image editing task that aims at transferring artistic styles onto everyday photographs to create new artworks. Earlier methods usually resort to traditional techniques such as stroke rendering [13], image analogy [14, 42, 9, 31], and image filtering [52] to perform artistic style transfer. These methods typically rely on low-level statistics and often fail to capture semantic information. Recently, Gatys et al. [10] discovered that the Gram matrix upon deep features extracted from a pre-trained DCNN can notably represent the characteristics of visual styles, which opens up the neural style transfer era. Since then, a suite of neural methods have been proposed, boosting the development of style transfer from different concerns. Specifically, [20, 27, 46] utilize feed-forward networks to improve efficiency. 
[26, 54, 36, 58, 35] refine various elements in the stylized images (including content preservation, textures, brushstrokes, etc.) to enhance visual quality. [7, 15, 30, 41, 28] propose universal style transfer methods to achieve generalization. [29, 47, 51] inject random noise to the generative network to encourage diversity. Despite the rapid progress, these style transfer methods still suffer from spurious artifacts such as disharmonious colors and repetitive patterns. Notice that there is another line of work [40, 24, 23, 45, 4, 5] that aims to learn an artist’s style from all his/her artworks. In comparison, instead of learning an artist’s style, we focus on better leaning an artwork’s style (just like the style transfer methods mentioned in the previous paragraph) with the assist of the human-aware style information reserved in the external style dataset. Therefore, our method is orthogonal to these works. Image-to-image translation. Image-to-image translation (I2I) [17, 60, 16, 25, 8, 18] aims at learning the mapping between different visual domains, which is closely related to style transfer. [60, 16] have distinguished these two tasks: (i) I2I can only translate between content-similar visual domains (such as horses↔zebras and summer↔winter), while style transfer does not have such limitation, whose content image and style image can be totally different (e.g., the former is a photo of a person and the latter is van Gogh’s The Starry Night). (ii) I2I aims to learn the mapping between two image collections, while style transfer aims to learn the mapping between two specific images. However, we argue that we can borrow some insights from I2I, and leverage the external information of the large-scale style image collections to improve the stylization quality in style transfer. Internal-external learning. Internal-external learning has shown effectiveness in various image generation tasks, such as super-resolution, image inpainting, and so on. In detail, Soh et al. [44] presented a fast, flexible, and lightweight self-supervised super-resolution method by exploiting both external and internal samples. Park et al. [37] developed an internal-external super-resolution method that facilitates super-resolution networks to further enhance the quality of the restored images. Wang et al. [49] proposed a general external-internal learning inpainting scheme, which learns semantic knowledge externally by training on large datasets while fully utilizes internal statistics of the single test image. However, in the field of style transfer, existing methods only use a single artistic image to learn style, resulting in unsatisfying stylization results. Motivated by this, in this work we propose an internal-external style transfer method that takes both internal learning and external learning into consideration, significantly bridging the gap between human-created and AI-created artworks. Contrastive learning. Generally, there are three key ingredients in a contrastive learning process: query, positive examples, and negative examples. The target of contrastive learning is to associate a “query” with its “positive” example while disassociate the “query” with other examples that are referred to as “negatives”. Recently, contrastive learning has demonstrated its effectiveness in the field of conditional image synthesis. To be more specific, ContraGAN [21] introduced a conditional contrastive loss (2C loss) to learn both data-to-class and data-to-data relations. Park et al. 
[38] maximized the mutual information between input and output with contrastive learning to encourage content preservation in unpaired image translation problems. Liu et al. [34] introduced a latentaugmented contrastive loss to encourage images generated from adjacent latent codes to be similar and those generated from distinct latent codes to be dissimilar, achieving diverse image synthesis. Yu et al. [55] proposed a dual contrastive loss in adversarial training that generalizes representation to more effectively distinguish between real and fake, and further incentivizes the image generation quality. Wu et al. [53] improved the image dehazing result by introducing contrastive learning, which ensures that the restored image is pulled closer to the clear image and pushed far away from the hazy image in representation space. Note that all the above contrastive learning methods cannot be adopted for style transfer. In this work, we make the first attempt to adapt contrastive learning to artistic style transfer, and propose two novel contrastive losses: content contrastive loss and style contrastive loss to learn the stylization-tostylization relations that are ignored by existing style transfer methods. 3 Proposed Method Existing style transfer methods usually produce unsatisfying stylization results with disharmonious colors and repetitive patterns, which makes them pretty easy to be distinguished from real artworks. As an attempt to bridge the large gap between human-created and AI-created artworks, we propose a novel internal-external style transfer method with two contrastive losses. The overview of our method is shown in Figure 2. It is worth noting that our framework is built on the SANet [36] (one of the state-of-the-art style transfer methods) backbone, which consists of an encoder E, a transformation module T , and a decoder D. In detail, E is a pre-trained VGG-19 network [43] used to extract image features, T is a style-attentional network that can flexibly match the semantic nearest style features onto the content features, and D is a generative network used to transform encoded semantic feature maps into stylized images. We extend SANet [36] with our proposed changes, and our full model is described below. 3.1 Internal-external Learning Let C and S be the sets of photographs and artworks, respectively. We aim to learn both the internal style characteristics from a single artwork Is ∈ S and the external human-aware style information from the dataset S, and then transfer them to an arbitrary content image Ic ∈ C to create new artistic images Isc. Internal style learning. Following previous style transfer methods [15, 36, 1], we use a pre-trained VGG-19 network φ to capture the internal style characteristics from a single artistic image, and the style loss can be generally computed as: Ls := L∑ i=1 ‖ µ(φi(Isc))− µ(φi(Is)) ‖2 + ‖ σ(φi(Isc))− σ(φi(Is)) ‖2 (1) where φi denotes the ith layer (Relu1_1, Relu2_1, Relu3_1, Relu4_1, and Relu5_1 layers are used in our model) of the VGG-19 network. µ and σ represent the mean and standard deviation of feature maps extracted by φi, respectively. External style learning. Here, we employ GAN [11, 39, 2, 56, 3] to learn the human-aware style information from the style dataset S. GAN is a popular generative model consisting of two networks (i.e., a generator G and a discriminator D) that compete against each other. 
Specifically, we input the stylized images produced by the generator and the artworks sampled from S to the discriminator as fake data and real data, respectively. In the training process, the generator will try to fool the discriminator by generating a realistic artistic image, while the discriminator will try to distinguish generated fake artworks from real ones. Joint training of these two networks leads to a generator that is able to produce remarkable realistic fake images with the learned human-aware style information. The adversarial training process can be formulated as (note that our generator G contains an encoder E, a transformation module T , and a decoder D, as shown in Figure 2 (a)): Ladv := E Is∼S [log(D(Is))] + E Ic∼C,Is∼S [log(1−D(D(T (E(Ic), E(Is)))))] (2) Content structure preservation. To preserve the content structure of Ic in the stylized image Isc, we adopt the widely-used perceptual loss: Lc :=‖ φconv4_2(Isc)− φconv4_2(Ic) ‖2 (3) Identity loss. Similar to [36, 32, 59], we utilize the identity loss to encourage the generator G to be an approximate identity mapping when the content image and style image are the same. In this manner, more content structures and style characteristics can be preserved in the stylization result. The identity loss is depicted in Figure 2 (b) and defined as: Lidentity := λidentity1(‖ Icc − Ic ‖2 + ‖ Iss − Is ‖2)+ λidentity2 L∑ i=1 (‖ φi(Icc)− φi(Ic) ‖2 + ‖ φi(Iss)− φi(Is) ‖2) (4) where Icc is the output image generated when both the content image and style image are Ic. Iss is analogous. λidentity1 and λidentity2 are the weights associated with different loss terms. For φi, we choose Relu1_1, Relu2_1, Relu3_1, Relu4_1, and Relu5_1 layers in our experiments. 3.2 Contrastive Learning Intuitively, stylized images rendered with the same style image should have closer relations in style than those rendered with different style images. Similarly, stylized images based on the same content image should have closer relations in content than those based on different content images. We refer to such relations as stylization-to-stylization relations. Generally, existing style transfer methods only consider the content-to-stylization and style-to-stylization relations by applying the content loss and style loss (like Lc and Ls introduced above), while neglect the stylization-to-stylization relations. To tackle this problem, we for the first time introduce contrastive learning to style transfer. The core idea of contrastive learning is to associate data points with their “positive” examples while disassociate them from the other points that are regarded as “negatives”. Specifically, we propose two contrastive losses: a style contrastive loss and a content contrastive loss to learn the stylization-to-stylization relations. Note that for clearer expression, hereafter, we use si to represent the ith style image, ci to represent the ith content image, and sici to represent the stylized image generated with si and ci. To perform contrastive learning in every training batch, we arrange a batch of style and content images in the following manner: Assume the batch size = b, which is an even number. Then we get a batch of style images {s1, s2, ..., sb/2, s1, s2, ..., sb/2−1, sb/2}, and a batch of content images {c1, c2, ..., cb/2, c2, c3, ..., cb/2, c1}. Hence, the corresponding stylized images are {s1c1, s2c2, ..., sb/2cb/2, s1c2, s2c3, ..., sb/2−1cb/2, sb/2c1}. 
3.2 Contrastive Learning

Intuitively, stylized images rendered with the same style image should have closer relations in style than those rendered with different style images. Similarly, stylized images based on the same content image should have closer relations in content than those based on different content images. We refer to such relations as stylization-to-stylization relations. Generally, existing style transfer methods only consider the content-to-stylization and style-to-stylization relations by applying a content loss and a style loss (like L_c and L_s introduced above), while neglecting the stylization-to-stylization relations. To tackle this problem, we introduce contrastive learning to style transfer for the first time. The core idea of contrastive learning is to associate data points with their "positive" examples and to dissociate them from the other points, which are regarded as "negatives". Specifically, we propose two contrastive losses, a style contrastive loss and a content contrastive loss, to learn the stylization-to-stylization relations. For clearer exposition, hereafter we use s_i to denote the i-th style image, c_i the i-th content image, and s_i c_i the stylized image generated from s_i and c_i.

To perform contrastive learning in every training batch, we arrange the style and content images in the following manner. Assume the batch size is b, an even number. Then we use a batch of style images {s_1, s_2, ..., s_{b/2}, s_1, s_2, ..., s_{b/2-1}, s_{b/2}} and a batch of content images {c_1, c_2, ..., c_{b/2}, c_2, c_3, ..., c_{b/2}, c_1}, so the corresponding stylized images are {s_1 c_1, s_2 c_2, ..., s_{b/2} c_{b/2}, s_1 c_2, s_2 c_3, ..., s_{b/2-1} c_{b/2}, s_{b/2} c_1}. In this way, we ensure that for every stylized image s_i c_j, the same batch contains a stylized image s_i c_x (x ≠ j) that shares its style and a stylized image s_y c_j (y ≠ i) that shares its content. Figure 2 (c) depicts this process for b = 8.

Style contrastive loss. To associate stylized images that share the same style, for a stylized image s_i c_j we select s_i c_x (x ≠ j) as its positive example (s_i c_x shares the same style as s_i c_j), and s_m c_n (m ≠ i and n ≠ j) as its negative examples. Notice that s_m c_n represents a series of stylized images, not just one image. We can then formulate our style contrastive loss as:

$$\mathcal{L}_{s\text{-}contra} := -\log \frac{\exp\!\big(l_s(s_i c_j)^{\top} l_s(s_i c_x)/\tau\big)}{\exp\!\big(l_s(s_i c_j)^{\top} l_s(s_i c_x)/\tau\big) + \sum \exp\!\big(l_s(s_i c_j)^{\top} l_s(s_m c_n)/\tau\big)} \qquad (5)$$

where l_s = h_s(φ_relu3_1(·)), in which h_s is a style projection network; l_s is used to obtain the style embeddings of stylized images, and τ is a temperature hyper-parameter controlling the push and pull forces.

Content contrastive loss. Similar to the style contrastive loss, to associate stylized images that share the same content, for a stylized image s_i c_j we select s_y c_j (y ≠ i) as its positive example (s_y c_j shares the same content as s_i c_j), and s_m c_n (m ≠ i and n ≠ j) as its negative examples. We express the content contrastive loss as:

$$\mathcal{L}_{c\text{-}contra} := -\log \frac{\exp\!\big(l_c(s_i c_j)^{\top} l_c(s_y c_j)/\tau\big)}{\exp\!\big(l_c(s_i c_j)^{\top} l_c(s_y c_j)/\tau\big) + \sum \exp\!\big(l_c(s_i c_j)^{\top} l_c(s_m c_n)/\tau\big)} \qquad (6)$$

where l_c = h_c(φ_relu4_1(·)), in which h_c is a content projection network; l_c is used to obtain the content embeddings of stylized images.

3.3 Final Objective

Summarizing all of the aforementioned losses, the final objective of our model is

$$\mathcal{L}_{final} := \lambda_1 \mathcal{L}_s + \lambda_2 \mathcal{L}_{adv} + \lambda_3 \mathcal{L}_c + \lambda_4 \mathcal{L}_{identity} + \lambda_5 \mathcal{L}_{s\text{-}contra} + \lambda_6 \mathcal{L}_{c\text{-}contra} \qquad (7)$$

where λ_1, λ_2, λ_3, λ_4, λ_5, and λ_6 are hyper-parameters that strike a proper balance among the losses.
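To illustrate Section 3.2, the sketch below reproduces the batch arrangement and an InfoNCE-style reading of the style contrastive loss of Eq. (5). It assumes the embeddings `emb` have already been produced by the composition h_s ∘ φ_relu3_1 (whether to L2-normalize them is a common choice that Eq. (5) does not prescribe), and it assumes b/2 ≥ 2 so that every sample has a positive.

```python
import torch

def arrange_batch_ids(half):
    # Section 3.2 arrangement for batch size b = 2 * half:
    # styles  {s1..s_half, s1..s_half}, contents {c1..c_half, c2..c_half, c1}
    style_ids = list(range(half)) * 2
    content_ids = list(range(half)) + [(k + 1) % half for k in range(half)]
    return style_ids, content_ids  # sample k is stylized from (s[i], c[j])

def style_contrastive_loss(emb, style_ids, content_ids, tau=0.2):
    # emb: (b, d) style embeddings l_s(s_i c_j) of the stylized images
    b = emb.shape[0]
    sim = emb @ emb.t() / tau
    style_ids = torch.as_tensor(style_ids)
    content_ids = torch.as_tensor(content_ids)
    loss = emb.new_zeros(())
    for k in range(b):
        # positive s_i c_x: same style, different content (exactly one per batch)
        pos_mask = (style_ids == style_ids[k]) & (content_ids != content_ids[k])
        pos = sim[k][pos_mask][0]
        # negatives s_m c_n: different style AND different content (Eq. (5))
        neg_mask = (style_ids != style_ids[k]) & (content_ids != content_ids[k])
        loss = loss - torch.log(pos.exp() / (pos.exp() + sim[k][neg_mask].exp().sum()))
    return loss / b
```

The content contrastive loss of Eq. (6) is symmetric: swap the roles of the style and content ids and use the h_c ∘ φ_relu4_1 embeddings instead.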
4 Experimental Results

In this section, we first introduce the experimental settings. Then we present qualitative and quantitative comparisons between the proposed method and several baseline models. Finally, we analyze the effect of each component of our model through ablation studies.

4.1 Experimental Settings

Implementation details. We build on the recent SANet [36] backbone and extend it with our proposed changes to further push the boundaries of automatic artwork generation. We refer to the original paper [36] for the detailed network architectures of the encoder E, the transformation module T, and the decoder. For the discriminator, we employ the multi-scale discriminator proposed by Wang et al. [50]. The style projection network h_s is a two-layer MLP (multilayer perceptron) with 256 units in the first layer and 128 units in the second layer; similarly, the content projection network h_c is a two-layer MLP with 128 units in each layer. The hyper-parameter τ in Equations (5) and (6) is set to 0.2. The loss weights in Equations (4) and (7) are set to λ_identity1 = 50, λ_identity2 = 1, λ_1 = 1, λ_2 = 5, λ_3 = 1, λ_4 = 1, λ_5 = 0.3, and λ_6 = 0.3. We train our network using the Adam optimizer with a learning rate of 0.0001 and a batch size of 16 for 160,000 iterations. Our code is available at: https://github.com/HalbertCH/IEContraAST.

Datasets. Like [15, 58, 36, 19], we take MS-COCO [33] and WikiArt [22] as the content dataset and the style dataset, respectively. During training, we first resize the smallest dimension of each training image to 512 while preserving the aspect ratio, and then randomly crop 256 × 256 patches from these images as input. Note that at inference time, our method is applicable to content and style images of any size.

Baselines. We choose several state-of-the-art style transfer methods as our baselines, including Gatys et al. [10], AdaIN [15], WCT [30], Avatar-Net [41], LST [28], and SANet [36]. All of these methods are run using their public code and default configurations.

4.2 Qualitative Comparisons

In Figure 3, we show qualitative comparisons between our method and the six baselines introduced above. We observe that Gatys et al. [10] is prone to falling into bad local minima (e.g., 1st, 2nd, and 3rd columns). AdaIN [15] sometimes produces messy stylized images with unseen colors and unwanted halation around edges (e.g., 1st, 3rd, and 6th columns). WCT [30] often introduces distorted patterns, yielding less-structured and blunt stylized images (e.g., 2nd, 4th, and 5th columns). Avatar-Net [41] struggles to produce sharp details and fine brushstrokes (e.g., 1st, 4th, and 5th columns). LST [28] usually produces less stylized images with very limited texture patterns (e.g., 2nd, 4th, and 6th columns). SANet [36] tends to apply similar repeated texture patterns across different styles (e.g., 1st, 3rd, and 6th columns). Despite recent progress, the gap between synthesized artistic images and real artworks thus remains large. To further narrow this gap, we introduce internal-external learning and contrastive learning to artistic style transfer, leading to visually more harmonious and plausible artistic images, as shown in the 2nd row of Figure 3.

We also compare our method with the six baselines on video style transfer, which is conducted between a content video and a style image in a frame-wise manner. The stylization results are shown in Figure 4. To visualize the stability and consistency of the synthesized video clips, the last column of Figure 4 shows heat maps of the differences between frames. As we can see, our approach outperforms existing style transfer methods in terms of stability and consistency by a significant margin. This can be attributed to two points: (i) external learning smooths the stylization results by eliminating distorted texture patterns; (ii) the proposed contrastive losses take the stylization-to-stylization relations into consideration, pulling adjacent stylized frames closer to each other since they share the same style and similar content.

4.3 Quantitative Comparisons

As the qualitative assessment presented above can be subjective, in this section we resort to several evaluation metrics to assess the performance of the proposed method quantitatively. A user study [54, 36, 24, 23, 48] is the most widely adopted evaluation protocol in style transfer, as it investigates user preference over different stylization results for a more objective comparison.

Table 2: The average LPIPS distances for different methods. The lower the better.
                 Inputs  Gatys et al.  AdaIN  WCT    Avatar-Net  LST    SANet  Ours
LPIPS Distance   0.231   0.488         0.369  0.460  0.341       0.326  0.372  0.317

Figure 5: Ablation studies of external learning (abbr. EL) and contrastive learning (abbr. CL) on (a) image style transfer and (b) video style transfer (reported scores: LPIPS 0.317, 0.325, 0.321; preference 0.388, 0.281, 0.331). Please zoom in for details.

Preference score. We use 10 content images and 15 style images to synthesize 150 stylized images for each method. Then 20 content-style pairs are randomly selected for each participant, and the stylized images generated by our method and the competing methods are shown side-by-side in a random order.
Next, we ask each participant to choose his/her favorite stylization result for each content-style pair. We collect 1,000 votes from 50 participants in total and present the percentage of votes for each method in the second row of Table 1. The results indicate that the stylized images generated by our method are preferred by human participants over those generated by the competing methods.

Deception score. To measure the gap between AI-created artistic images and human-created artworks, we conduct another user study: we show each participant 80 artistic images, consisting of 10 human-created artworks collected from WikiArt [22] and 70 stylized images generated by our method and the 6 baselines (each method provides 10 stylized images). For every image, we ask the participants to guess whether it is a real artwork. The deception score of a method is the fraction of times its stylized images are identified as "real". For comparison, we also report the fraction of times the human-created artworks are identified as "real". The results are shown in the third row of Table 1: the deception rate of our method is closest to that of human-created artworks, further demonstrating the effectiveness of our method.

To quantitatively evaluate the stability and consistency of the proposed method on video style transfer, we adopt LPIPS (Learned Perceptual Image Patch Similarity) [57] as the evaluation metric.

LPIPS. LPIPS is a widely used metric in the field of multimodal image-to-image translation (MI2I) [61, 16, 25, 8] to measure diversity. Here, we employ LPIPS to measure the stability and consistency of rendered clips by computing the average perceptual distance between adjacent frames. Note that, contrary to MI2I methods that seek a higher LPIPS value for better diversity, we seek a lower LPIPS value for better stability and consistency. We synthesize 18 stylized video clips for each method and report the average LPIPS distances in Table 2, where our approach obtains the best score among all methods, consistent with the qualitative comparisons in Figure 4.
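As a concrete illustration of the LPIPS-based stability metric above, the following sketch uses the public `lpips` package; the paper does not specify its exact implementation, so treat this as an assumption. Frames are expected as (3, H, W) tensors scaled to [-1, 1].

```python
import torch
import lpips  # pip install lpips

def video_stability(frames):
    # Average LPIPS distance between adjacent stylized frames; lower = more stable.
    loss_fn = lpips.LPIPS(net='alex')
    with torch.no_grad():
        dists = [loss_fn(a.unsqueeze(0), b.unsqueeze(0)).item()
                 for a, b in zip(frames[:-1], frames[1:])]
    return sum(dists) / len(dists)
```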
4.4 Ablation Studies

In this section, we conduct several ablation studies to highlight the effect of the different components of our model. We first explore the effect of external learning (abbr. EL) and contrastive learning (abbr. CL) on image style transfer. As for internal learning, since its effect has been fully validated in existing style transfer methods, we do not ablate it here. Figure 5 (a) shows the image stylization results of our method with and without EL/CL. Without EL, the stylized images become messier, with abrupt colors and obvious distortions. The reason could be that the model without EL only focuses on increasing the style similarity between the stylized image and the style image, without considering whether the color distributions and texture patterns in the stylized image are natural and harmonious. In comparison, the model with EL can learn the human-aware style information from the large-scale style dataset, leading to more realistic and harmonious stylized images that cannot be distinguished from real artworks by the discriminator.

In addition, we find that our method matches the target style to the content image better with the proposed contrastive losses. This is because the contrastive losses help the network learn better style and content representations by taking the stylization-to-stylization relations into consideration, further refining the stylization results. The user preference results reported in the last column of Figure 5 (a) also show that our full model performs best.

Similar ablation studies are conducted on video style transfer. As shown in Figure 5 (b), stability degrades after we remove external learning or contrastive learning from our method (notice the color of the hair and skin), which is in line with the reported LPIPS distances. The results indicate that both external learning and contrastive learning improve the stability of video style transfer. As analyzed in Section 4.2, external learning obtains stability gains by eliminating distorted texture patterns, and contrastive learning obtains stability gains by pulling adjacent stylized frames closer to each other.

5 Limitations

One limitation of this work is that the proposed internal-external learning scheme and the two contrastive losses cannot be applied to learning-free style transfer methods such as WCT [30], Avatar-Net [41], and LST [28], because our method requires a training process. It can therefore only be incorporated into learning-based methods such as Johnson et al. [20], AdaIN [15], and SANet [36] (in this work, we mainly take SANet as our backbone to show the effectiveness and superiority of our method). Another limitation is that, at inference time, style images that are too different from the training styles may not benefit from the external learning scheme, since they lie outside the learned style distributions.

6 Conclusion

In this paper, we propose an internal-external style transfer method with two novel contrastive losses. The internal-external learning scheme simultaneously learns the internal statistics of a single artistic image and the human-aware style information of a large-scale style dataset. The contrastive losses learn the stylization-to-stylization relations by pulling the stylization embeddings closer to each other when they share the same content or style, and pushing them apart otherwise. Extensive experiments show that our method not only produces visually more harmonious and satisfying artistic images, but also significantly improves the stability and consistency of rendered video clips. The proposed method is simple and effective, and may shed new light on artistic style transfer. In the future, we would like to extend our method to other vision tasks, for example, texture synthesis.

Acknowledgments

This work was supported in part by the projects No. 2020YFC1523202, 19ZDA197, LY21F020005, 2021009, 2019C03137, the MOE Frontier Science Center for Brain Science & Brain-Machine Integration (Zhejiang University), the National Natural Science Foundation of China (Research on Key Technologies of Art Image Restoration Based on Decoupling Learning), and the Key Scientific Research Base for Digital Conservation of Cave Temples (Zhejiang University), State Administration for Cultural Heritage. We would also like to thank the reviewers and AC for their constructive and insightful comments on the early submission.
Funding Transparency Statement The projects mentioned in our Acknowledgments provided funding and support to this work. There are no additional revenues related to this work.
1. What are the strengths and weaknesses of the proposed approach in addressing style transfer challenges?
2. How does the reviewer assess the clarity and quality of the paper's content, particularly regarding its motivation, method description, experiment details, and result presentation?
3. What are the novel aspects introduced by the paper in utilizing contrastive and adversarial losses for style transfer tasks?
4. Do you have any suggestions for improving the paper, such as including non-cherry-picked results, discussing training and inference complexity, and enhancing reproducibility by providing code and more detailed experimental information?
Summary Of The Paper Review
Summary Of The Paper
The paper proposes two new additional losses for style transfer: a contrastive loss that brings the styles of images stylized with the same style image closer to each other, while pushing images stylized with different style images further apart (and the same for content), and an adversarial loss to learn the general aesthetics of stylized images over a larger dataset. The authors demonstrate that a combination of these losses plus the regular style-content loss can generate high-quality and surprisingly consistent stylized images, shown both qualitatively and quantitatively as well as through user studies.

Review
===== Writing
The paper is very well written but can still be improved. The motivation behind the new additions is clearly explained, although it could use more visual examples to demonstrate the shortcomings of the current approaches. Figure 1 can do this with a longer description of why the previous methods fail and how the proposed method shines. In fact, given the subjective nature of style transfer, such descriptions are quite important. The method is clearly described and is visualized in Figure 2 (text could be slightly bigger for better readability in print). The reasoning behind the design decisions is clearly described and justified. The details of the experiments are noted; however, the training details could be expanded (in supplementary materials). The code is not included, and including such details is crucial to attain reproducibility.
===== Methodology
The proposed method is novel, clever, and impactful. To the best of my knowledge, and as claimed by the authors, this is the first usage of a contrastive loss in style transfer, and it is applied in a way that is intuitive and makes sense. The authors tested the proposed method against multiple datasets and the majority of the SOTA models. For qualitative comparison, the authors conducted multiple user studies to capture the deception and preference scores, which is pivotal for style transfer papers, which lack good, reliable metrics. However, I strongly suggest that the authors include a "non-cherry-picked" section of their results to demonstrate the generalization capability of their model. The current set of results looks impressive but cherry-picked. This can be done as part of the supplementary materials or, even better, a website :) Another missing section from the paper is the training and inference complexity. I suspect that adding the proposed losses substantially increases the training time while the generation runtime stays the same. Training time is important mainly because the proposed model cannot be applied to learning-free style transfer methods.
NIPS
Title Artistic Style Transfer with Internal-external Learning and Contrastive Learning
Abstract Although existing artistic style transfer methods have achieved significant improvement with deep neural networks, they still suffer from artifacts such as disharmonious colors and repetitive patterns. Motivated by this, we propose an internal-external style transfer method with two contrastive losses. Specifically, we utilize the internal statistics of a single style image to determine the colors and texture patterns of the stylized image, and at the same time, we leverage the external information of a large-scale style dataset to learn the human-aware style information, which makes the color distributions and texture patterns in the stylized image more reasonable and harmonious. In addition, we argue that existing style transfer methods only consider the content-to-stylization and style-to-stylization relations, neglecting the stylization-to-stylization relations. To address this issue, we introduce two contrastive losses, which pull the stylization embeddings closer to each other when they share the same content or style, and push them apart otherwise. We conduct extensive experiments, showing that our proposed method not only produces visually more harmonious and satisfying artistic images, but also promotes the stability and consistency of rendered video clips.

1 Introduction

Artistic style transfer is a long-standing research topic that seeks to render a photograph with a given artwork's style. Ever since Gatys et al. [10] first proposed a neural method, which leverages a pre-trained Deep Convolutional Neural Network (DCNN) to separate and recombine the contents and styles of arbitrary images, an unprecedented boom in style transfer [20, 26, 15, 30, 36, 51, 48] has been witnessed. Despite the recent progress, there still exists a large gap between real artworks and synthesized stylizations. As shown in Figure 1, the stylized images usually contain disharmonious colors and repetitive patterns, which makes them easily distinguishable from real artworks. We argue that this is because existing style transfer methods often confine themselves to the internal style statistics of a single artistic image. In some other tasks (for example, image-to-image translation [17, 60, 16, 25, 8, 18]), the style is usually learned from a collection of images, which inspires us to leverage the external information reserved in a large-scale style dataset to improve stylization results in style transfer.

Why is external information so important for style transfer? Our analysis is as follows: although different images in the style dataset vary greatly in fine details, they share a key commonality: they are all human-created artworks, whose brushstrokes, color distributions, texture patterns, tones, etc., are more consistent with human perception. Namely, they contain human-aware style information that is lacking in synthesized stylizations. A natural idea is to utilize such human-aware style information to improve stylization results.

Figure 1: Stylization examples. The first and second columns show the style and content images, respectively. The other seven columns show the stylized images produced by our method, Gatys et al. [10], AdaIN [15], WCT [30], Avatar-Net [41], LST [28], and SANet [36].
To this end, we employ an internal-external learning scheme during training, which takes both internal learning and external learning into consideration. More specifically, on the one hand, we follow previous methods [10, 20, 46, 54, 58] and utilize the internal statistics of a single artwork to determine the colors and texture patterns of the stylized image. On the other hand, we employ Generative Adversarial Nets (GANs) [11, 39, 2, 56, 3] to externally learn the human-aware style information from a large-scale style dataset, which is then used to make the color distributions and texture patterns in the stylized image more reasonable and harmonious, significantly bridging the gap between human-created and AI-created artworks.

In addition, there is another problem with existing style transfer methods: they usually employ a content loss and a style loss to enforce the content-to-stylization and style-to-stylization relations, respectively, while neglecting the stylization-to-stylization relations, which are also important for style transfer. What are stylization-to-stylization relations? Intuitively, stylized images rendered with the same style image should have closer relations in style than those rendered with different style images. Similarly, stylized images based on the same content image should have closer relations in content than those based on different content images. Inspired by this, we introduce two contrastive losses, a content contrastive loss and a style contrastive loss, which pull the stylization embeddings closer to each other when they share the same content or style, and push them apart otherwise. To the best of our knowledge, this is the first work that successfully leverages the power of contrastive learning [6, 12, 21, 38] in the style transfer scenario. Our extensive experiments show that the proposed method not only produces visually more harmonious and plausible artistic images, but also promotes the stability and consistency of rendered video clips.

To summarize, the main contributions of this work are threefold:
• We propose a novel internal-external style transfer method that takes both internal learning and external learning into consideration, significantly bridging the gap between human-created and AI-created artworks.
• We introduce contrastive learning to style transfer for the first time, yielding more satisfying stylization results with the learned stylization-to-stylization relations.
• We demonstrate the effectiveness and superiority of our approach through extensive comparisons with several state-of-the-art artistic style transfer methods.

2 Related Work

Artistic style transfer. Artistic style transfer is an image editing task that aims at transferring artistic styles onto everyday photographs to create new artworks. Earlier methods usually resort to traditional techniques such as stroke rendering [13], image analogy [14, 42, 9, 31], and image filtering [52] to perform artistic style transfer. These methods typically rely on low-level statistics and often fail to capture semantic information. Recently, Gatys et al. [10] discovered that the Gram matrix of deep features extracted from a pre-trained DCNN can notably represent the characteristics of visual styles, which opened up the neural style transfer era. Since then, a suite of neural methods has been proposed, boosting the development of style transfer with different concerns. Specifically, [20, 27, 46] utilize feed-forward networks to improve efficiency.
[26, 54, 36, 58, 35] refine various elements of the stylized images (including content preservation, textures, brushstrokes, etc.) to enhance visual quality. [7, 15, 30, 41, 28] propose universal style transfer methods to achieve generalization. [29, 47, 51] inject random noise into the generative network to encourage diversity. Despite the rapid progress, these style transfer methods still suffer from spurious artifacts such as disharmonious colors and repetitive patterns. Note that there is another line of work [40, 24, 23, 45, 4, 5] that aims to learn an artist's style from all of his/her artworks. In comparison, instead of learning an artist's style, we focus on better learning an artwork's style (just like the style transfer methods mentioned in the previous paragraph) with the assistance of the human-aware style information reserved in an external style dataset. Therefore, our method is orthogonal to these works.

Image-to-image translation. Image-to-image translation (I2I) [17, 60, 16, 25, 8, 18] aims at learning the mapping between different visual domains, which is closely related to style transfer. [60, 16] distinguish these two tasks: (i) I2I can only translate between content-similar visual domains (such as horses↔zebras and summer↔winter), while style transfer has no such limitation: the content image and the style image can be totally different (e.g., the former is a photo of a person and the latter is van Gogh's The Starry Night). (ii) I2I aims to learn the mapping between two image collections, while style transfer aims to learn the mapping between two specific images. However, we argue that we can borrow insights from I2I and leverage the external information of large-scale style image collections to improve stylization quality in style transfer.

Internal-external learning. Internal-external learning has shown effectiveness in various image generation tasks, such as super-resolution and image inpainting. In detail, Soh et al. [44] presented a fast, flexible, and lightweight self-supervised super-resolution method that exploits both external and internal samples. Park et al. [37] developed an internal-external super-resolution method that enables super-resolution networks to further enhance the quality of restored images. Wang et al. [49] proposed a general external-internal learning inpainting scheme, which learns semantic knowledge externally by training on large datasets while fully utilizing the internal statistics of the single test image. However, in the field of style transfer, existing methods only use a single artistic image to learn style, resulting in unsatisfying stylization results. Motivated by this, we propose an internal-external style transfer method that takes both internal learning and external learning into consideration, significantly bridging the gap between human-created and AI-created artworks.

Contrastive learning. Generally, there are three key ingredients in a contrastive learning process: a query, positive examples, and negative examples. The target of contrastive learning is to associate a "query" with its "positive" example while dissociating the "query" from the other examples, which are referred to as "negatives". Recently, contrastive learning has demonstrated its effectiveness in the field of conditional image synthesis. To be more specific, ContraGAN [21] introduced a conditional contrastive loss (2C loss) to learn both data-to-class and data-to-data relations, and Park et al. [38] maximized the mutual information between input and output with contrastive learning to encourage content preservation in unpaired image translation problems.
1. What is the novel approach proposed by the paper for feed-forward image stylization?
2. How effective are the contrastive losses in improving the visual quality of the results compared to baselines?
3. Are there any concerns regarding the reproducibility of the paper's content, particularly in terms of training and feature spaces?
4. How does the reviewer assess the motivation, explanation, and English language quality of the algorithm?
5. Does the paper provide sufficient comparisons with state-of-the-art methods in video style transfer?
Summary Of The Paper Review
Summary Of The Paper
The paper describes a method for feed-forward image stylization that combines several losses. The novel losses are a contrastive loss that optimizes the style embedding of the training image to match the embedding of a target style and content and to be distinct from other styles and contents. Other losses used during training include feature mean/variance losses and a GAN loss.

Review
Overall, the visual quality of the results (Figure 3) looks substantially better than the baselines. The detailed structure and scene contours are preserved better than the baselines, and the regions are filled with sensible textures from the exemplars. I haven't attempted to survey the literature to judge if these are fair representations of the baselines. It's unclear to me how these improvements result from the contrastive losses. Better representation of image structure doesn't seem to be encoded in the contrastive losses.
The writing of the paper is very difficult to follow, and I'm not sure that it's reproducible. Section 3 never lists the inputs or outputs for training. It took a long time to figure out that this section describes training and not test time. How are the parameters of the h_s and h_c networks determined? In what feature space are the first two terms in Eq 4; in pixel space? Eqs 3 and 4 use ||_||_2 notation; does this mean there are square roots in these terms? The motivation and explanation of the algorithm could also use a lot of improvement. The English is not very good.
L26: "existing style transfer methods often confine themselves to the internal style statistics of a single image". This is obviously false, e.g., see pix2pix, CycleGAN, CUT.
L31: "... more consistent with human perception" This is a bold claim; either remove it or provide citations to justify whatever assertion is being made here.
L50: "... this is the first work successfully leveraging the power of contrastive learning in style transfer": This is false; see CUT [34].
L109: "Existing style transfer methods usually produce unsatisfying stylization results with disharmonious colors..." So the claim here is that all previous methods are terrible?
The video style transfer results are so weak that I think they should be removed if the paper is accepted. The frame rate of the results is so low that one cannot judge the temporal coherence of the provided videos. These are more like sequences of independent still images. The paper fails to compare to the state-of-the-art in video style transfer, or to cite it:
Ondřej Jamriška, Šárka Sochorová, Ondřej Texler, Michal Lukáč, Jakub Fišer, Jingwan Lu, Eli Shechtman, and Daniel Sýkora. Stylizing Video by Example. ACM Transactions on Graphics 38(4):107, 2019 (SIGGRAPH 2019).
Ondřej Texler, David Futschik, Michal Kučera, Ondřej Jamriška, Šárka Sochorová, Menglei Chai, Sergey Tulyakov, and Daniel Sýkora. Interactive Video Stylization Using Few-Shot Patch-Based Training. ACM Transactions on Graphics 39(4):73, 2020 (SIGGRAPH 2020).
NIPS
Title Intrinsic Dimension, Persistent Homology and Generalization in Neural Networks
Abstract Disobeying the classical wisdom of statistical learning theory, modern deep neural networks generalize well even though they typically contain millions of parameters. Recently, it has been shown that the trajectories of iterative optimization algorithms can possess fractal structures, and their generalization error can be formally linked to the complexity of such fractals. This complexity is measured by the fractal's intrinsic dimension, a quantity usually much smaller than the number of parameters in the network. Even though this perspective provides an explanation for why overparametrized networks would not overfit, computing the intrinsic dimension (e.g., for monitoring generalization during training) is a notoriously difficult task, where existing methods typically fail even in moderate ambient dimensions. In this study, we consider this problem from the lens of topological data analysis (TDA) and develop a generic computational tool that is built on rigorous mathematical foundations. By making a novel connection between learning theory and TDA, we first illustrate that the generalization error can be equivalently bounded in terms of a notion called the 'persistent homology dimension' (PHD), where, compared with prior work, our approach does not require any additional geometrical or statistical assumptions on the training dynamics. Then, by utilizing recently established theoretical results and TDA tools, we develop an efficient algorithm to estimate PHD at the scale of modern deep neural networks and further provide visualization tools to help understand generalization in deep learning. Our experiments show that the proposed approach can efficiently compute a network's intrinsic dimension in a variety of settings, which is predictive of the generalization error.

1 Introduction

In recent years, deep neural networks (DNNs) have become the de facto machine learning tool and have revolutionized a variety of fields such as natural language processing [DCLT18], image perception [KSH12, RBH+21], geometry processing [QSMG17, ZBL+20], and 3D vision [DBI18, GLW+21]. Despite their widespread use, little is known about their theoretical properties. Even now, the top-performing DNNs are designed by trial and error, a pesky, burdensome process for the average practitioner [EMH+19]. Furthermore, even if a top-performing architecture is found, it is difficult to provide performance guarantees on a large class of real-world datasets. This lack of theoretical understanding has motivated a plethora of work focusing on explaining what, how, and why a neural network learns. To answer many of these questions, one naturally examines the generalization error, a measure quantifying the differing performance on train and test data, since this provides significant insights into whether the network is learning or simply memorizing [ZBH+21]. However, generalization in neural networks is particularly confusing as it refutes the classical proposals of statistical learning theory, such as uniform bounds based on the Rademacher complexity [BM02] and the Vapnik–Chervonenkis (VC) dimension [Vap68]. Instead, recent analyses have started focusing on the dynamics of deep neural networks. [NBMS17, BO18, GJ16] provide analyses of the final trained network, but these miss out on critical training patterns.
To remedy this, a recent study [SSDE20] connected generalization to the heavy-tailed behavior of network trajectories, a phenomenon that had already been observed in practice [SSG19, ŞGN+19, SZTG20, GSZ21, CWZ+21, HM20, MM19]. [SSDE20] further showed that the generalization error can be linked to the fractal dimension of a parametric hypothesis class (which can then be taken as the optimization trajectories). Hence, the fractal dimension acts as a 'capacity metric' for generalization.

While [SSDE20] brought a new perspective to generalization, several shortcomings prevent its application in everyday training. In particular, their construction requires several conditions which may be infeasible in practice: (i) topological regularity conditions on the hypothesis class for fast computation, (ii) a Feller process assumption on the training algorithm trajectory, and (iii) that the Feller process exhibits a specific diffusive behavior near a minimum. Furthermore, the capacity metrics in [SSDE20] are not optimization-friendly and therefore cannot be incorporated into training.

In this work, we address these shortcomings by exploiting the recently developed connections between fractal dimension and topological data analysis (TDA). First, by relating the box dimension [Sch09] and the recently proposed persistent homology (PH) dimension [Sch20], we relax the assumptions in [SSDE20] to develop a topological intrinsic dimension (ID) estimator. Then, using this estimator, we develop a general tool for computing and visualizing generalization properties in deep learning. Finally, by leveraging recently developed differentiable TDA tools [CHU17, CHN19], we employ our ID estimator to regularize training towards solutions that generalize better, even without access to the test dataset.

Our experiments demonstrate that this new measure of intrinsic dimension correlates highly with generalization error, regardless of the choice of optimizer. Furthermore, as a proof of concept, we illustrate that our topological regularizer is able to improve the test accuracy and lower the generalization error. In particular, this improvement is most pronounced when the learning rate/batch size would normally result in poorer test accuracy. Overall, our contributions are summarized as follows:
• We make a novel connection between statistical learning theory and TDA in order to develop a generic computational framework for the generalization error. We remove the topological regularity condition and the decomposable Feller assumption on training trajectories, which were required in [SSDE20]. This leads to a more generic capacity metric.
• Using insights from the above methodology, we leverage the differentiable properties of persistent homology to regularize neural network training. Our findings also provide the first steps towards theoretically justifying recent topological regularization methods [BGND+19, CNBW19].
• We provide extensive experiments to illustrate the theory, strength, and flexibility of our framework.
We believe that the novel connections and the developed framework will open new theoretical and computational directions in the theory of deep learning. To foster further developments at the intersection of persistent homology and statistical learning theory, we release our source code at: https://github.com/tolgabirdal/PHDimGeneralization.
2 Related Work

Intrinsic dimension in deep networks. Even though a large number of parameters are required to train deep networks [FC18], modern interpretations of deep networks avoid correlating model over-fitting or generalization with parameter counting. Instead, contemporary studies measure model complexity through the degrees of freedom of the parameter space [JFH15, GJ16], compressibility (pruning) [BO18] or intrinsic dimension [ALMZ19, LFLY18, MWH+18]. Tightly related to the ID, Janson et al. [JFH15] investigated the degrees of freedom [Ghr10] in deep networks and the expected difference between test error and training error. Finally, LDMNet [ZQH+18] explicitly penalizes the ID, regularizing the network training.

Generalization bounds. Several studies have provided theoretical justification to the observations that trained neural networks live in a lower-dimensional space, and that this is related to the generalization performance. In particular, compression-based generalization bounds [AGNZ18, SAM+20, SAN20, HJTW21, BSE+21] have shown that the generalization error of a neural network can be much lower if it can be accurately represented in a lower-dimensional space. Approaching the problem from a geometric viewpoint, [SSDE20] showed that the generalization error can be formally linked to the fractal dimension of a parametric hypothesis class. This dimension indeed plays the role of the intrinsic dimension, which can be much smaller than the ambient dimension. When the hypothesis class is chosen as the trajectories of the training algorithm, [SSDE20] further showed that the error can be linked to the heavy-tail behavior of the trajectories.

Deep networks & topology. Previous works have linked neural network training and topological invariants, although all analyze the final trained network [FGFAEV21]. For example, in [RTB+19], the authors construct Neural Persistence, a measure on neural network layer weights. They furthermore show that Neural Persistence reflects many of the properties of convergence and can classify weights based on whether they overfit, underfit, or exactly fit the data. In a parallel line of work, [DZF19] analyze neural network training by calculating topological properties of the underlying graph structure. This is expanded upon in [CMEM20], where the authors compute correlations between neural network weights and show that the homology is linked with the generalization error. However, these previous constructions have been carried out mostly in an ad hoc manner. As a result, many of the results are mostly empirical, and work must still be done to show that these methods hold theoretically. Our proposed method, by contrast, is theoretically well-motivated and uses tools from statistical persistent homology theory to formally link the generalization error with the topology of the network training trajectory. We also note that prior work has incorporated topological loss functions to help regularize training. In particular, [BGND+19] constructed a topological normalization term for GANs to help maintain the geometry of the generated 3D point clouds.

3 Preliminaries & Technical Background

We imagine a point cloud W = {w_i ∈ R^d} as a geometric realization of a d-dimensional topological space 𝒲, with W ⊂ 𝒲 ⊂ R^d. B_δ(x) ⊂ R^d denotes the closed ball centered around x ∈ R^d with radius δ.

Persistent Homology. From a topological perspective, 𝒲 can be viewed as a cell complex composed of the disjoint union of k-dimensional balls or cells σ ∈ 𝒲 glued together.
For k = 0, 1, 2, …, we form a chain complex

$$C(\mathcal{W}) = \cdots \longrightarrow C_{k+1}(\mathcal{W}) \xrightarrow{\partial_{k+1}} C_k(\mathcal{W}) \xrightarrow{\partial_k} \cdots$$

by sequencing chain groups C_k(𝒲), whose elements are equivalence classes of cycles, via boundary maps ∂_k : C_k(𝒲) → C_{k−1}(𝒲) with ∂_{k−1} ∘ ∂_k ≡ 0. In this paper, we work with finite simplicial complexes, restricting the cells to be simplices. The k-th homology group or k-dimensional homology is then defined as the equivalence classes of k-dimensional cycles that differ only by a boundary, or in other words, the quotient group H_k(𝒲) = Z_k(𝒲)/Y_k(𝒲), where Z_k(𝒲) = ker ∂_k and Y_k(𝒲) = im ∂_{k+1}. The generators or bases of H_0(𝒲), H_1(𝒲) and H_2(𝒲) describe the shape of the topological space 𝒲 by its connected components, holes and cavities, respectively. Their ranks are related to the Betti numbers, i.e., β_k = rank(H_k).

Definition 1 (Čech and Vietoris–Rips Complexes). For W a finite set of points in a metric space, the Čech cell complex Čech_r(W) is constructed using the intersections of r-balls around W, B_r(W):

$$\check{\mathrm{C}}\mathrm{ech}_r(W) = \{ Q \subset W : \textstyle\bigcap_{x \in Q} B_r(x) \neq \emptyset \}.$$

The construction of such a complex is intricate. Instead, the Vietoris–Rips complex VR_r(W) closely approximates Čech_r(W) using only the pairwise distances, i.e., the intersections of pairs of r-balls [RB21]:

$$W_r = \mathrm{VR}_r(W) = \{ Q \subset W : \forall x, x' \in Q,\; B_r(x) \cap B_r(x') \neq \emptyset \}.$$

Definition 2 (Persistent Homology). PH indicates a multi-scale version of homology applied over a filtration {W_t}_t := VR(W), with W_s ⊂ W_t ⊂ W for all s ≤ t, keeping track of holes created (born) or filled (died) as t increases. Each persistence module PH_k(VR(W)) = {γ_i}_i keeps track of a single k-persistence cycle γ_i from birth to death. We denote the entire lifetime of a cycle γ as I(γ) and its length as |I(γ)| = death(γ) − birth(γ). We will also use persistence diagrams, 2D plots of all persistence lifetimes (death vs. birth). Note that for PH_0, the Čech and VR complexes are equivalent.

Lifetime intervals are instrumental in TDA as they allow for the extraction of topological features or summaries. Note that each birth–death pair can be mapped to the cells that respectively created and destroyed the homology class, defining a unique map for a persistence diagram, which lends itself to differentiability [BGND+19, CHN19, CHU17]. We conclude this brief section by referring the interested reader to the well-established literature on persistent homology [Car14, EH10] for a thorough understanding.
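To make these constructions concrete, the following minimal sketch (ours, not part of any released implementation; it assumes the ripser Python package, which is also used later in this paper) computes a PH_0 persistence diagram and its lifetimes for a toy point cloud:

```python
# A minimal sketch (ours): PH_0 of a toy point cloud via the ripser package.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
W = rng.standard_normal((200, 8))        # 200 points in R^8

dgm0 = ripser(W, maxdim=0)['dgms'][0]    # (birth, death) pairs for PH_0

# Every component is born at t = 0; one bar never dies (death = inf),
# corresponding to the single component that survives all scales.
finite = dgm0[np.isfinite(dgm0[:, 1])]
lifetimes = finite[:, 1] - finite[:, 0]  # |I(gamma)| for each finite bar
print(len(lifetimes), float(lifetimes.max()))
```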
Intrinsic Dimension. The intrinsic dimension of a space can be measured using various notions. In this study, we will consider two notions of dimension, namely the upper-box dimension (also called the Minkowski dimension) and the persistent homology dimension. The box dimension is based on covering numbers and can be linked to generalization via [SSDE20], whereas the PH dimension is based on the notions defined earlier in this section. We start with the box dimension.

Definition 3 (Upper-Box Dimension). For a bounded metric space 𝒲, let N_δ(𝒲) denote the maximal number of disjoint closed δ-balls with centers in 𝒲. The upper-box dimension is defined as:

$$\dim_{\mathrm{Box}} \mathcal{W} = \limsup_{\delta \to 0} \frac{\log N_\delta(\mathcal{W})}{\log(1/\delta)}. \tag{1}$$

We proceed with the PH dimension. First, let us define an intermediate construct, which will play a key role in our computational tools.

Definition 4 (α-Weighted Lifetime Sum). For a finite set W ⊂ 𝒲 ⊂ R^d, the weighted i-th homology lifetime sum is defined as follows:

$$E^i_\alpha(W) = \sum_{\gamma \in \mathrm{PH}_i(\mathrm{VR}(W))} |I(\gamma)|^\alpha, \tag{2}$$

where PH_i(VR(W)) is the i-dimensional persistent homology of the Vietoris–Rips complex on a finite point set W contained in 𝒲, and |I(γ)| is the persistence lifetime as explained above.

Now, we are ready to define the PH dimension, which is the key notion in this paper.

Definition 5 (Persistent Homology Dimension). The PH_i-dimension of a bounded metric space 𝒲 is defined as follows:

$$\dim^i_{\mathrm{PH}} \mathcal{W} := \inf \{ \alpha : \exists C > 0 \text{ such that } E^i_\alpha(W) < C \text{ for all finite } W \subset \mathcal{W} \}. \tag{3}$$

In words, dim^i_PH 𝒲 is the smallest exponent α for which E^i_α is uniformly bounded over all finite subsets of 𝒲.
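As an illustration of Definition 4, the sketch below (ours, under the same ripser assumption as above) computes E^0_α for a toy sample. As a cross-check, it exploits a known equivalence (not a claim of this paper): for Rips filtrations, the finite PH_0 death times coincide with the edge lengths of a minimum spanning tree, so E^0_1 equals the total MST length up to the library's filtration convention.

```python
# A minimal sketch (ours): the alpha-weighted lifetime sum E^0_alpha of Def. 4.
import numpy as np
from ripser import ripser
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def lifetime_sum_0(points: np.ndarray, alpha: float = 1.0) -> float:
    """E^0_alpha(W) = sum of |I(gamma)|^alpha over finite 0-dim bars."""
    dgm0 = ripser(points, maxdim=0)['dgms'][0]
    finite = dgm0[np.isfinite(dgm0[:, 1])]
    return float(np.sum((finite[:, 1] - finite[:, 0]) ** alpha))

rng = np.random.default_rng(0)
W = rng.random((300, 2))                 # toy sample from the unit square
print(lifetime_sum_0(W, alpha=1.0))

# Cross-check via the MST characterization of PH_0 (births are 0, deaths are
# MST edge lengths), so E^0_1 should match the total MST length.
mst = minimum_spanning_tree(squareform(pdist(W)))
print(mst.sum())
```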
4 Generalization Error via Persistent Homology Dimension

In this section, we will illustrate that the generalization error can be linked to the PH_0 dimension. Our approach is based on the following fundamental result.

Theorem 1 ([KLS06, Sch19]). Let 𝒲 ⊂ R^d be a bounded set. Then, it holds that:

$$\dim_{\mathrm{PH}} \mathcal{W} := \dim^0_{\mathrm{PH}} \mathcal{W} = \dim_{\mathrm{Box}} \mathcal{W}.$$

In the light of this theorem, we combine the recent result showing that the generalization error can be linked to the box dimension [SSDE20] with Theorem 1, which shows that, for bounded subsets of R^d, the box dimension and the PH dimension of order 0 agree.

Following the notation of [SSDE20], we consider a standard supervised learning setting, where the data space is denoted by Z = X × Y, and X and Y respectively denote the features and the labels. We assume that the data is generated via an unknown data distribution D and that we have access to a training set of n points, i.e., S = {z_1, …, z_n}, with the samples {z_i}_{i=1}^n drawn independently and identically (i.i.d.) from D. We further consider a parametric hypothesis class 𝒲 ⊂ R^d that potentially depends on S. We choose 𝒲 to be the optimization trajectory given by a training algorithm A, which returns the entire (random) trajectory of the network weights in the time frame [0, T], such that [A(S)]_t = w_t is the network weight vector returned by A at 'time' t, where t is a continuous iteration index. Then, in the set 𝒲, we collect all the network weights that appear in the optimization trajectory:

$$\mathcal{W} := \{ w \in \mathbb{R}^d : \exists t \in [0, T],\; w = [A(S)]_t \},$$

where we set T = 1 without loss of generality. To measure the quality of a parameter vector w ∈ 𝒲, we use a loss function ℓ : R^d × Z → R_+, such that ℓ(w, z) denotes the loss corresponding to a single data point z. We then denote the population and empirical risks respectively by R(w) := E_z[ℓ(w, z)] and R̂(w, S) := (1/n) Σ_{i=1}^n ℓ(w, z_i). The generalization error is hence defined as |R̂(w, S) − R(w)|.

We now recall [SSDE20, Assumption H4], which is a form of algorithmic stability [BE02]. Let us first introduce the required notation. For any δ > 0, consider the fixed grid on R^d,

$$G = \left\{ \left( \frac{(2j_1 + 1)\delta}{2\sqrt{d}}, \ldots, \frac{(2j_d + 1)\delta}{2\sqrt{d}} \right) : j_i \in \mathbb{Z},\; i = 1, \ldots, d \right\},$$

and define the set N_δ := {x ∈ G : B_δ(x) ∩ 𝒲 ≠ ∅}, that is, the collection of the centers of the grid balls that intersect 𝒲.

H1. Let Z^∞ := (Z × Z × ⋯) denote the countable product endowed with the product topology and let B be the Borel σ-algebra generated by Z^∞. Let F, G be the sub-σ-algebras of B generated by the collections of random variables {R̂(w, S) : w ∈ R^d, n ≥ 1} and {1{w ∈ N_δ} : δ ∈ Q_{>0}, w ∈ G, n ≥ 1}, respectively. There exists a constant M ≥ 1 such that for any A ∈ F, B ∈ G, we have P[A ∩ B] ≤ M P[A] P[B].

The next result forms our main observation, which will lead to our methodological developments.

Proposition 1. Let 𝒲 ⊂ R^d be a (random) compact set. Assume that H1 holds and that ℓ is bounded by B and L-Lipschitz continuous in w. Then, for n sufficiently large, we have

$$\sup_{w \in \mathcal{W}} |\hat{R}(w, S) - R(w)| \leq 2B \sqrt{\frac{[\dim_{\mathrm{PH}} \mathcal{W} + 1] \log^2(nL^2)}{n}} + \frac{\log(7M/\gamma)}{n} \tag{4}$$

with probability at least 1 − γ over S ∼ D^{⊗n}.

Proof. By using the same proof technique as [SSDE20, Theorem 2], we can show that (4) holds with dim_Box 𝒲 in place of dim_PH 𝒲. Since 𝒲 is bounded, we have dim_Box 𝒲 = dim_PH 𝒲 by Theorem 1. The result follows.

This result shows that the generalization error of the trajectories of a training algorithm is deeply linked to their topological properties as measured by the PH dimension. Thanks to this novel connection, we now have access to the rich TDA toolbox, to be used for different purposes.

4.1 Analyzing Deep Network Dynamics via Persistent Homology

By exploiting TDA tools, our goal in this section is to develop an algorithm to compute dim_PH 𝒲 for two main purposes. The first goal is to predict the generalization performance by using dim_PH; by this approach, we can use dim_PH for hyperparameter tuning without having access to test data. The second goal is to incorporate dim_PH as a regularizer in the optimization problem in order to improve generalization. Note that similar topological regularization strategies have already been proposed [BGND+19, CNBW19] without a formal link to generalization. In this sense, our observations form the first step towards theoretically linking generalization and TDA.

Algorithm 1: Computation of dim_PH.
  input : the set of iterates W = {w_i}_{i=1}^K, smallest sample size n_min, a skip step Δ, and α
  output: dim_PH W
  n ← n_min, E ← []
  while n ≤ K do
      W_n ← sample(W, n)                                  // random sampling
      𝒲_n ← VR(W_n)                                       // Vietoris–Rips filtration
      append E_α(W_n) := Σ_{γ ∈ PH_0(𝒲_n)} |I(γ)|^α to E  // lifetime sums from PH
      n ← n + Δ
  m, b ← fitline(log(n_min : Δ : K), log(E))              // power-law fit on E^0_α(W)
  dim_PH W ← α / (1 − m)

In [SSDE20], to develop a computational approach, the authors first linked the intrinsic dimension to certain statistical properties of the underlying training algorithm, which can then be estimated. To do so, they required an additional topological regularity condition, which necessitates the existence of an 'Ahlfors regular' measure defined on 𝒲, i.e., a finite Borel measure μ such that there exist s, r_0 > 0 for which 0 < a r^s ≤ μ(B_r(x)) ≤ b r^s < ∞ holds for all x ∈ 𝒲, 0 < r ≤ r_0. This assumption was used to link the box dimension to another notion called the Hausdorff dimension, which can then be linked to statistical properties of the training trajectories under further assumptions (see Section 1). An interesting asset of our approach is that we do not require this condition: thanks to the following result, we are able to develop an algorithm to directly estimate dim_PH 𝒲 while staying agnostic to the finer topological properties of 𝒲.

Proposition 2. Let 𝒲 ⊂ R^d be a bounded set with dim_PH 𝒲 =: d⋆. Then, for all ε > 0 and α ∈ (0, d⋆ + ε), there exists a constant D_{α,ε} such that the following inequality holds for all n ∈ N_+ and all collections W_n = {w_1, …, w_n} with w_i ∈ 𝒲, i = 1, …, n:

$$E^0_\alpha(W_n) \leq D_{\alpha,\varepsilon}\, n^{\frac{d^\star + \varepsilon - \alpha}{d^\star + \varepsilon}}. \tag{5}$$

Proof. Since 𝒲 is bounded, we have dim_Box 𝒲 = d⋆ by Theorem 1. Fix ε > 0. Then, by Definition 3, there exist δ_0 = δ_0(ε) > 0 and a finite constant C_ε > 0 such that for all δ ≤ δ_0 the following inequality holds:

$$N_\delta(\mathcal{W}) \leq C_\varepsilon\, \delta^{-(d^\star + \varepsilon)}. \tag{6}$$

Then, the result directly follows from [Sch20, Proposition 21].

This result suggests a simple strategy to estimate an upper bound on the intrinsic dimension from persistent homology. In particular, rewriting (5) in logarithmic form gives us

$$\left( 1 - \frac{\alpha}{d^\star + \varepsilon} \right) \log n + \log D_{\alpha,\varepsilon} \geq \log E^0_\alpha. \tag{7}$$

If log E^0_α and log n are sampled from the data and give an empirical slope m, then we see that d⋆ + ε ≤ α/(1 − m). In many cases, we observe that d⋆ ≈ α/(1 − m) (as further explained in Sec. 5.2), so we take α/(1 − m) as our PH dimension estimate. We provide the full algorithm for computing this from our sampled data in Alg. 1. Note that our algorithm is similar to that proposed in [AAF+20], although our method works for sets rather than probability measures. In our implementation, we compute the homology with the celebrated Ripser package [Bau21] unless otherwise specified.
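A minimal sketch of Alg. 1 (our illustration, assuming the ripser package and NumPy's least-squares line fit; this is not the released implementation) could read:

```python
# A minimal sketch of Alg. 1 (ours): estimate dim_PH from a set of iterates via
# the log-log slope of the alpha-weighted PH_0 lifetime sums.
import numpy as np
from ripser import ripser

def ph_dim(iterates: np.ndarray, n_min: int = 50, step: int = 25,
           alpha: float = 1.0, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    K = len(iterates)
    sizes, sums = [], []
    for n in range(n_min, K + 1, step):
        idx = rng.choice(K, size=n, replace=False)   # random subsample of size n
        dgm0 = ripser(iterates[idx], maxdim=0)['dgms'][0]
        finite = dgm0[np.isfinite(dgm0[:, 1])]
        sums.append(np.sum((finite[:, 1] - finite[:, 0]) ** alpha))
        sizes.append(n)
    # Fit log E^0_alpha ~ m * log n + b, then invert the slope as in (7).
    m, b = np.polyfit(np.log(sizes), np.log(sums), deg=1)
    return alpha / (1.0 - m)

# Toy usage: a random-walk "trajectory" of 500 points in R^16.
traj = np.cumsum(np.random.default_rng(1).standard_normal((500, 16)), axis=0)
print(ph_dim(traj))
```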
On computational complexity. Computing the Vietoris–Rips complex is an active area of research, as the worst-case time complexity is meaningless in practice due to natural sparsity [Zom10]. Therefore, to calculate the time complexity of our estimator, we focus on analyzing the PH computation from the output simplices: calculating PH takes O(p^ω) time, where ω < 2.4 is the exponent of matrix multiplication and p is the number of simplices produced in the filtration [BP19]. Since we compute with 0-th order homology, this implies that the computational complexity is O(n^ω), where n is the number of points. In particular, this means that estimating the PH dimension takes O(k n^ω) time, where k is the number of samples taken, assuming that samples are evenly spaced in [0, n].

4.2 Regularizing Deep Networks via Persistent Homology

Motivated by our results in Proposition 2, we theorize that controlling dim_PH 𝒲 would help in reducing the generalization error. Towards this end, we develop a regularizer for our training procedure which seeks to minimize dim_PH 𝒲 at training time. If we let L be our vanilla loss function, then we instead optimize the topological loss function L_λ := L + λ dim_PH 𝒲, where λ ≥ 0 controls the scale of the regularization and 𝒲 now denotes a sliding window of iterates (e.g., the latest 50 iterates during training). This way, we aim to regularize the loss by considering the dimension of the ongoing training trajectory. In Alg. 1, we let w_i be the stored weights from previous iterations for i ∈ {1, …, K − 1} and let w_K be the current weight iterate. Since the persistence diagram computation and the linear regression are differentiable, our estimate of dim_PH is also differentiable and, if w_K is sampled as in Alg. 1, is connected in the computation graph with w_K. We incorporate our regularizer into the network training using PyTorch [PGM+19] and the associated persistent homology package torchph [CHU17, CHN19].
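For concreteness, the following self-contained sketch (ours; it approximates the torchph-based implementation by exploiting the MST characterization of PH_0 rather than reproducing the torchph API, and the window size, λ and sampling parameters are illustrative) indicates how such a differentiable penalty can be wired up:

```python
# A self-contained sketch of the topological regularizer (ours, not the
# paper's torchph-based code; hyperparameters are illustrative).
import torch

def ph0_lifetime_sum(points: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Differentiable E^0_alpha: finite PH_0 lifetimes of a Rips filtration
    equal the MST edge lengths, so we select MST edges with Prim's algorithm on
    detached distances and sum the corresponding differentiable distances."""
    D = torch.cdist(points, points)            # pairwise distances (diff'able)
    n = points.shape[0]
    in_tree = torch.zeros(n, dtype=torch.bool)
    in_tree[0] = True
    best = D[0].clone()                        # cheapest attachment per vertex
    total = points.new_zeros(())
    for _ in range(n - 1):
        cand = best.detach().clone()
        cand[in_tree] = float('inf')
        j = int(torch.argmin(cand))            # discrete choice: next vertex
        total = total + best[j] ** alpha       # gradient flows through D
        in_tree[j] = True
        best = torch.minimum(best, D[j])
    return total

def ph_dim_loss(window: torch.Tensor, n_min: int = 20, step: int = 10,
                alpha: float = 1.0) -> torch.Tensor:
    """Differentiable Alg. 1 on a (K, d) window of flattened weight iterates."""
    K = window.shape[0]
    xs, ys = [], []
    for n in range(n_min, K + 1, step):
        idx = torch.randperm(K)[:n]
        ys.append(torch.log(ph0_lifetime_sum(window[idx], alpha)))
        xs.append(torch.log(torch.tensor(float(n))))
    x, y = torch.stack(xs), torch.stack(ys)
    # Closed-form least-squares slope keeps the graph differentiable.
    m = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
    return alpha / (1.0 - m)

# Inside a training step, with `window` holding the latest weight iterates:
#   loss = criterion(model(inputs), targets) + lam * ph_dim_loss(window)
```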
5 Experimental Evaluations

This section presents our experimental results in two parts: (i) analyzing and quantifying generalization in practical deep networks on real data, and (ii) ablation studies on a random diffusion process. In all the experiments we assume that the intrinsic dimension is strictly larger than 1; hence we set α = 1 unless specified otherwise. Further details are reported in the supplementary document.

5.1 Analyzing and Visualizing Deep Networks

Measuring generalization. We first verify our main claim by showing that our persistent homology dimension, derived from topological analysis of the training trajectories, correctly measures generalization. To demonstrate this, we apply our analysis to a wide variety of networks, training procedures, and hyperparameters. In particular, we train AlexNet [KSH12], 5-layer (fcn-5) and 7-layer (fcn-7) fully connected networks, and a 9-layer convolutional network (cnn-9) on the MNIST, CIFAR10 and CIFAR100 datasets for multiple batch sizes and learning rates until convergence. For AlexNet, we consider 1000 iterates prior to convergence and, for the others, we only consider 200. Then, we estimate dim_PH on the last iterates by using Alg. 1: for varying n, we randomly pick n of the last iterates, compute E^0_α, and then use the relation given in (5). We obtain the ground-truth (GT) generalization error as the gap between training and test accuracies. Fig. 2 plots the PH dimension with respect to test accuracy and signals a strong correlation between our PH dimension and the actual performance gap: the lower the PH dimension, the higher the test accuracy. Note that this result aligns well with that of [SSDE20]. The figure also shows that the intrinsic dimensions across different datasets can be similar, even if the parameter counts of the models vary greatly. This supports the recent hypothesis that what matters for generalization is the effective capacity and not the parameter count. In fact, the dimension should be as small as possible without collapsing important representation features onto the same dimension. The findings in Fig. 2 are further augmented by the results in Fig. 3, where a similar pattern is observed for AlexNet on CIFAR100.

Can dim_PH capture intrinsic properties of trajectories? After revealing that our ID estimate is a gauge for generalization, we set out to investigate whether it really hinges on the intrinsic properties of the data. We train several instances of fcn-7 for different learning rates and batch sizes and compute the PH dimension of each network using the training trajectories. We visualize the following in the rows of Fig. 4, sorted by dim_PH: (i) the 200 × 200 distance matrix of the sequence of iterates w_1, …, w_K (which is the basis for the PH computations), (ii) the corresponding log E^0_{α=1} estimates as we sweep over increasing n, and (iii) the persistence diagram for each distance matrix. It is clear that there is a strong correlation between dim_PH and the structure of the distance matrix. As the dimension increases, the distance matrices become non-uniformly pixelated. The slope estimated from the total edge lengths in the second row is a quantity proportional to our dimension; note that the slope decreases as our estimate increases (hence generalization tends to decrease). We further observe clusters emerging in the persistence diagrams. The latter has also been reported for better-generalizing networks, though using a different notion of a topological space [BGND+19].

Is dim_PH a real indicator of generalization? To quantitatively assess the quality of our complexity measure, we gather two statistics. (i) We report the average p-value over different batch sizes for AlexNet trained with SGD on the CIFAR100 dataset; the value of p = 0.0157 < 0.05 confirms statistical significance. (ii) Next, we follow the recent literature [JFY+20] and consult the Kendall correlation coefficient (KCC). Similar to the p-value experiment above, we compute the KCC for AlexNet+SGD for different batch sizes (64, 100, 128) and attain (0.933, 0.357, 0.733), respectively. Note that a positive correlation signals that the test gap closes as dim_PH decreases.
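For reference, such a KCC can be computed with scipy; the numbers below are made up purely for illustration and are not the measurements reported above:

```python
# Illustrative only (made-up numbers): Kendall correlation between
# PH-dimension estimates and observed generalization gaps.
from scipy.stats import kendalltau

ph_dims   = [2.1, 2.4, 2.6, 3.0, 3.3]       # hypothetical dim_PH per run
test_gaps = [0.04, 0.06, 0.05, 0.09, 0.12]  # hypothetical train-test gaps

tau, p_value = kendalltau(ph_dims, test_gaps)
print(f"KCC = {tau:.3f}, p = {p_value:.3f}")  # positive tau: gap grows with dim
```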
Both of these experiments agree with our theoretical insights that connect generalization to a topological characteristic of a neural network: the intrinsic dimension of the training trajectories.

Effect of different training algorithms. We also verify that our method is algorithm-agnostic and does not require assumptions on the training algorithm. In particular, we show that our above analyses extend to both the RMSprop [TH12] and Adam [KB15] optimizers. Our results are visualized in Fig. 3. We plot the dimension with respect to the generalization error for varying optimizers and batch sizes; our results verify that the generalization error (which is inversely related to the test accuracy) is positively correlated with the PH dimension. This corroborates our previous results in Fig. 2 and, in particular, shows that our dimension estimator of the test gap is indeed algorithm-agnostic.

Encouraging generalization via dim_PH regularization. We furthermore verify that our topological regularizer is able to help control the test gap in accordance with our theory. We train a LeNet-5 network [LBBH98] on CIFAR10 [Kri09] and compare vanilla training with training under our topological regularizer with λ set to 1. We train for 200 epochs with a batch size of 128 and report the train and test accuracies in Fig. 5 over a variety of learning rates. We tested over 10 trials and found that, with p < 0.05 for all cases except lr = 0.01, the results differ significantly. Our topological optimizer produces the largest improvements when the network is otherwise not able to converge well. These results show that our regularizer behaves as expected: it is able to recover poor training dynamics. We note that this experiment uses a simple architecture and, as such, presents a proof of concept; we do not aim for state-of-the-art results. Furthermore, we directly compared our approach with the generalization estimator of [CMEM20], which most closely resembles our construction. We found that their method does not scale and is often numerically unreliable. For example, their methodology grows quadratically with respect to the number of network weights and linearly with the dataset size, while our method does not scale much beyond memory usage with vectorized computation. Moreover, for many of our test networks, their metric space construction (which is based on the correlations between activations and used for the Vietoris–Rips complex) would be numerically brittle and result in degenerate persistent homology. These issues prevent [CMEM20] from being applicable in this scenario.

5.2 Ablation Studies

To assess the quality of our dimension estimator, we now perform ablation studies on synthetic data whose ground-truth ID is known. To this end, we use the synthetic experimental setting presented in [SSDE20] (see the supplementary document for details), and we simulate a d = 128 dimensional stable Lévy process with a varying number of points 100 ≤ n ≤ 1500 and tail indices 1 ≤ β ≤ 2. Note that the tail index equals the intrinsic dimension in this case, which is an order of magnitude lower than the ambient dimension in this experiment.
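As a rough sketch of this setup (ours; it assumes scipy's levy_stable and a simple increment construction, whereas the exact protocol follows [SSDE20] and our supplementary document), such a trajectory can be simulated from i.i.d. stable increments:

```python
# A rough sketch of the ablation data (assumptions: scipy's levy_stable and a
# simple increment construction; the exact setup is in the supplement).
import numpy as np
from scipy.stats import levy_stable

def stable_levy_trajectory(n: int, d: int, beta: float, seed: int = 0):
    """n points of a d-dimensional symmetric beta-stable Levy motion on [0, 1].

    beta is the stability (tail) index, passed to scipy as `alpha`; increments
    over steps of size 1/n are scaled by (1/n)**(1/beta)."""
    rng = np.random.default_rng(seed)
    incr = levy_stable.rvs(alpha=beta, beta=0.0, size=(n, d), random_state=rng)
    return np.cumsum((1.0 / n) ** (1.0 / beta) * incr, axis=0)

W = stable_levy_trajectory(n=1000, d=128, beta=1.5)
# The GT intrinsic dimension of such a trajectory is the tail index (1.5 here),
# which an estimator like ph_dim(W) from the earlier sketch should recover.
```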
Can dim_PH match the ground-truth ID? We first try to predict the GT intrinsic dimension by running Alg. 1 on this data. We also estimate the TwoNN dimension [FdRL17] to quantify how well state-of-the-art ID estimators correlate with the GT in such a heavy-tailed regime. Our results are plotted in Fig. 6. Note that as n increases, our estimator becomes smoother and approximates the GT well, up to a slight over-estimation, a repeatedly observed phenomenon [CCCR15]. TwoNN does not guarantee recovering the box dimension: while it has been found useful in estimating the ID of data [ALMZ19], we find it less desirable in a heavy-tailed regime, as reflected in the plots. Our supplementary material provides further results on other, non-dynamics-like synthetic datasets, such as points on a sphere, where TwoNN can perform better. We also include a robust line-fitting variant of our approach, PH0-RANSAC, where a random sample consensus is applied iteratively. Though, as our data is not outlier-corrupted, we do not observe a large improvement.

Effect of α on dimension estimation. While our theory requires α to be smaller than the intrinsic dimension of the trajectories, in all of our experiments we fix α = 1.0. It is natural to ask whether this choice hampers our estimates. To see the effect, we vary α in the range [0.5, 2.5] and plot our estimates in Fig. 7. We observe (blue curve) that our dimension estimate follows a U-shaped trend with increasing α. We indicate the GT ID by a dashed red line and our estimate by a dashed green line; ideally, these two horizontal lines should overlap. It is noticeable that, given an oracle for the GT ID, it might be possible to optimize for an α⋆. Yet, such information is not available for deep networks. Nevertheless, α = 1 seems to yield reasonable performance, and we leave the estimation of a better α for future work. We provide additional results in our supplementary material.

6 Conclusion

In this paper, we developed novel connections between the dim_PH of the training trajectory and the generalization error. Using these insights, we proposed a method for estimating dim_PH from data; unlike previous work [SSDE20], our approach does not presuppose any conditions on the trajectory and offers a simple algorithm. By leveraging the differentiability of the PH computation, we showed that we can use dim_PH as a regularizer during training, which improved the performance in different setups.

Societal Impact and Limitations. We believe that our study will not pose any negative societal or ethical consequences due to its theoretical nature. The main limitation of our study is that it solely considers the terms E^0_α, whereas PH offers a much richer structure. Hence, as our next step, we will explore finer ways to incorporate PH into the study of generalization performance. We will further extend our results in terms of dimensions of measures by using the techniques presented in [CDE+21].

Acknowledgements
Umut Şimşekli's research is supported by the French government under the management of Agence Nationale de la Recherche as part of the "Investissements d'avenir" program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute).
1. What is the focus of the paper regarding generalization error and persistent homology?
2. What are the strengths of the proposed method in terms of computational complexity and runtime performance?
3. What are the weaknesses of the paper regarding the visualization of distance matrices and persistence diagrams?
4. How does the reviewer assess the clarity, quality, significance, and originality of the paper's content?
5. Are there any suggestions or recommendations for improving the paper?
Summary Of The Paper

The authors establish a relationship between the generalization error of trajectories obtained from a training algorithm and the persistent homology (PH) dimension. The theoretical contribution of this work involves combining two results: (1) the box dimension of a bounded set can be computed using the PH0 dimension [KLS06, Sch19], and (2) previous work linking the box dimension and generalization error [SSDE20]. Based on these theoretical findings, the authors propose a simple algorithm to compute the PH dimension directly, by performing a line fit on 0-dimensional topological features derived from weights in previous training iterations. This algorithm is based on mild assumptions, in contrast to previous work on computing the fractal dimension of training trajectories. Experiments applying this algorithm to a variety of networks (including AlexNet, CNNs and FCNs trained on MNIST, CIFAR10 and CIFAR100) and training algorithms (SGD, RMSprop and Adam) show that the PH dimension is inversely correlated with test accuracy. Next, the authors took advantage of the differentiability of persistent homology to incorporate the PH dimension computed from previous iterates as a regularizer to control the generalization error. The topological regularizer improved performance on the test dataset, especially at high learning rates where the unregularized network has low test accuracy. Finally, the authors performed ablation studies using synthetic data generated from β-stable Lévy processes that exhibit the heavy tails observed in network trajectories. The PH dimension outperformed other intrinsic dimension estimators in these tests, with varying numbers of points, ambient dimensions and line-fitting procedures.

Review

Originality: This work combines previous theoretical results to propose a practical methodology for estimating intrinsic dimension using topological data analysis (TDA). Novel contributions include verification of the proposed PH dimension calculation algorithm on synthetic data and its practical application to multiple networks, demonstrating its usefulness in quantifying generalization error. The proposed regularization, based on the PH dimension, is shown to be effective at controlling generalization error during training. All related work, datasets, and models are cited.

Quality: The exposition is technically sound, and the theoretical findings are supported by experiments. The authors provide context for their work within the broader scope of intrinsic dimension estimation, manifold reconstruction, bounds on generalization error, and the application of TDA to analyze deep networks. The authors clearly state that the α parameter should be smaller than the intrinsic dimension in theory, but the experiments were performed with fixed α = 1. I suggest the following improvements:

• Definition 4: "where PH_i(VR(W)) is the i-dimensional persistent homology of the Čech complex on a finite point set …" should refer to the Vietoris–Rips complex, not the Čech complex.
• Please consider including the computational complexity and/or runtime performance of estimating the PH dimension, and the extra training time required when using the PH dimension as a regularizer.
• Figure 7 (main text) is hard to read and poorly labelled.
• In SM Fig. 1, please indicate in the caption that PCA is outside the bounds of the axis limits.
• In SM Fig. 7, consider using the same axis limits for all figures to allow for easy comparison.
• The observed changes in the persistence diagrams corresponding to changes in the intrinsic dimension are not explained. Why do points move closer to or away from the diagonal?
• Similar to comment (3) above, the visualization of the distance matrices (Fig. 4 in the main text, SM Figs. 5 and 6) showing non-uniform pixelation merits further explanation. How are the rows and columns of these matrices organized? Can the non-uniform pixelation be quantified, and what does it represent?

Clarity: The main paper and the supplementary material are very well organized and clearly written. I discovered only a few typos and grammatical mistakes that can easily be corrected. The reference to Fig. 5 in the supplement shows up as a '?'.

Significance: The analysis of the generalization error associated with various training algorithms is important for uncovering how deep networks learn and operate. This work is a significant contribution in this direction. The authors describe a practical algorithm, based on solid theoretical foundations, for estimating the intrinsic dimension of training trajectories and link it to the generalization error. As demonstrated in this manuscript, the PH dimension can be used to evaluate different hyperparameters and control the generalization loss during training. Higher-dimensional topological features computed from training trajectories may yield more insight in the future.

I am satisfied by the author responses and will maintain the high score.
NIPS
Title Intrinsic Dimension, Persistent Homology and Generalization in Neural Networks Abstract Disobeying the classical wisdom of statistical learning theory, modern deep neural networks generalize well even though they typically contain millions of parameters. Recently, it has been shown that the trajectories of iterative optimization algorithms can possess fractal structures, and their generalization error can be formally linked to the complexity of such fractals. This complexity is measured by the fractal’s intrinsic dimension, a quantity usually much smaller than the number of parameters in the network. Even though this perspective provides an explanation for why overparametrized networks would not overfit, computing the intrinsic dimension (e.g., for monitoring generalization during training) is a notoriously difficult task, where existing methods typically fail even in moderate ambient dimensions. In this study, we consider this problem from the lens of topological data analysis (TDA) and develop a generic computational tool that is built on rigorous mathematical foundations. By making a novel connection between learning theory and TDA, we first illustrate that the generalization error can be equivalently bounded in terms of a notion called the ’persistent homology dimension’ (PHD), where, compared with prior work, our approach does not require any additional geometrical or statistical assumptions on the training dynamics. Then, by utilizing recently established theoretical results and TDA tools, we develop an efficient algorithm to estimate PHD in the scale of modern deep neural networks and further provide visualization tools to help understand generalization in deep learning. Our experiments show that the proposed approach can efficiently compute a network’s intrinsic dimension in a variety of settings, which is predictive of the generalization error. 1 Introduction In recent years, deep neural networks (DNNs) have become the de facto machine learning tool and have revolutionized a variety of fields such as natural language processing [DCLT18], image perception [KSH12, RBH+21], geometry processing [QSMG17, ZBL+20] and 3D vision [DBI18, GLW+21]. Despite their widespread use, little is known about their theoretical properties. Even now the top-performing DNNs are designed by trial-and-error, a pesky, burdensome process for the average practitioner [EMH+19]. Furthermore, even if a top-performing architecture is found, it is difficult to provide performance guarantees on a large class of real-world datasets. This lack of theoretical understanding has motivated a plethora of work focusing on explaining what, how, and why a neural network learns. To answer many of these questions, one naturally examines the generalization error, a measure quantifying the differing performance on train and 35th Conference on Neural Information Processing Systems (NeurIPS 2021) test data since this provides significant insights into whether the network is learning or simply memorizing [ZBH+21]. However, generalization in neural networks is particularly confusing as it refutes the classical proposals of statistical learning theory such as uniform bounds based on the Rademacher complexity [BM02] and the Vapnik–Chervonenkis (VC) dimension [Vap68]. Instead, recent analyses have started focusing on the dynamics of deep neural networks. [NBMS17, BO18, GJ16] provide analyses on the final trained network, but these miss out on critical training patterns. 
To remedy this, a recent study [SSDE20] connected generalization and the heavy tailed behavior of network trajectories–a phenomenon which had already been observed in practice [SSG19, ŞGN+19, SZTG20, GSZ21, CWZ+21, HM20, MM19]. [SSDE20] further showed that the generalization error can be linked to the fractal dimension of a parametric hypothesis class (which can then be taken as the optimization trajectories). Hence, the fractal dimension acts as a ‘capacity metric’ for generalization. While [SSDE20] brought a new perspective to generalization, several shortcomings prevent application in everyday training. In particular, their construction requires several conditions which may be infeasible in practice: (i) topological regularity conditions on the hypothesis class for fast computation, (ii) a Feller process assumption on the training algorithm trajectory, and that (iii) the Feller process exhibits a specific diffusive behavior near a minimum. Furthermore, the capacity metrics in [SSDE20] are not optimization friendly and therefore can’t be incorporated into training. In this work, we address these shortcomings by exploiting the recently developed connections between fractal dimension and topological data analysis (TDA). First, by relating the box dimension [Sch09] and the recently proposed persistent homology (PH) dimension [Sch20], we relax the assumptions in [SSDE20] to develop a topological intrinsic dimension (ID) estimator. Then, using this estimator we develop a general tool for computing and visualizing generalization properties in deep learning. Finally, by leveraging recently developed differentiable TDA tools [CHU17, CHN19], we employ our ID estimator to regularize training towards solutions that generalize better, even without having access to the test dataset. Our experiments demonstrate that this new measure of intrinsic dimension correlates highly with generalization error, regardless of the choice of optimizer. Furthermore, as a proof of concept, we illustrate that our topological regularizer is able to improve the test accuracy and lower the generalization error. In particular, this improvement is most pronounced when the learning rate/batch size normally results in a poorer test accuracy. Overall, our contributions are summarized as follows: • We make a novel connection between statistical learning theory and TDA in order to develop a generic computational framework for the generalization error. We remove the topological regularity condition and the decomposable Feller assumption on training trajectories, which were required in [SSDE20]. This leads to a more generic capacity metric. • Using insights from our above methodology, we leverage the differentiable properties of persistent homology to regularize neural network training. Our findings also provide the first steps towards theoretically justifying recent topological regularization methods [BGND+19, CNBW19]. • We provide extensive experiments to illustrate the theory, strength, and flexibility of our framework. We believe that the novel connections and the developed framework will open new theoretical and computational directions in the theory of deep learning. To foster further developments at the the intersection of persistent homology and statistical learning theory, we release our source code under: https://github.com/tolgabirdal/PHDimGeneralization. 
2 Related Work Intrinsic dimension in deep networks Even though a large number of parameters are required to train deep networks [FC18], modern interpretations of deep networks avoid correlating model over-fitting or generalization to parameter counting. Instead, contemporary studies measure model complexity through the degrees of freedom of the parameter space [JFH15, GJ16], compressibility (pruning) [BO18] or intrinsic dimension [ALMZ19, LFLY18, MWH+18]. Tightly related to the ID, Janson et al. [JFH15] investigated the degrees of freedom [Ghr10] in deep networks and expected difference between test error and training error. Finally, LDMNet [ZQH+18] explicitly penalizes the ID regularizing the network training. Generalization bounds Several studies have provided theoretical justification to the observations that trained neural networks live in a lower-dimensional space, and this is related to the generalization performance. In particular, compression-based generalization bounds [AGNZ18, SAM+20, SAN20, HJTW21, BSE+21] have shown that the generalization error of a neural network can be much lower if it can be accurately represented in lower dimensional space. Approaching the problem from a geometric viewpoint, [SSDE20] showed that the generalization error can be formally linked to the fractal dimension of a parametric hypothesis class. This dimension indeed the plays role of the intrinsic dimension, which can be much smaller than the ambient dimension. When the hypothesis class is chosen as the trajectories of the training algorithm, [SSDE20] further showed that the error can be linked to the heavy-tail behavior of the trajectories. Deep networks & topology Previous works have linked neural network training and topological invariants, although all analyze the final trained network [FGFAEV21]. For example, in [RTB+19], the authors construct Neural Persistence, a measure on neural network layer weights. They furthermore show that Neural Persistence reflects many of the properties of convergence and can classify weights based on whether they overfit, underfit, or exactly fit the data. In a parallel line of work, [DZF19] analyze neural network training by calculating topological properties of the underlying graph structure. This is expanded upon in [CMEM20], where the authors compute correlations between neural network weights and show that the homology is linked with the generalization error. However, these previous constructions have been done mostly in an adhoc manner. As a result, many of the results are mostly empirical and work must still be done to show that these methods hold theoretically. Our proposed method, by contrast, is theoretically well-motivated and uses tools from statistical persistent homology theory to formally links the generalization error with the network training trajectory topology. We also would like to note that prior work has incorporated topological loss functions to help normalize training. In particular, [BGND+19] constructed a topological normalization term for GANs to help maintain the geometry of the generated 3d point clouds. 3 Preliminaries & Technical Background We imagine a point cloud W = {wi ∈ Rd} as a geometric realization of a d-dimensional topological space W ⊂ W ⊂ Rd. Bδ(x) ⊂ Rd denotes the closed ball centered around x ∈ Rd with radius δ. Persistent Homology From a topological perspective, W can be viewed a cell complex composed of the disjoint union of k-dimensional balls or cells σ ∈ W glued together. For k = 0, 1, 2, . . . 
, we form a chain complex C(W) = . . . Ck+1(W) ∂k+1−−−→ Ck(W) ∂k−→ . . . by sequencing chain groupsCk(W), whose elements are equivalence classes of cycles, via boundary maps ∂k : Ck(W) 7→ Ck−1(W) with ∂k−1◦∂k ≡ 0. In this paper, we work with finite simplicial complexes restricting the cells to be simplices. The kth homology group or k-dimensional homology is then defined as the equivalence classes of k-dimensional cycles who differ only by a boundary, or in other words, the quotient group Hk(W) = Zk(W)/Yk(W) where Zk(W) = ker ∂k and Yk(W) = im ∂k+1. The generators or basis of H0(W), H1(W) and H2(W) describe the shape of the topological spaceW by its connected components, holes and cavities, respectively. Their ranks are related to the Betti numbers i.e.βk = rank(Hk). Definition 1 (Čech and Vietoris-Rips Complexes). For W a set of fine points in a metric space, the Čech cell complex Čechr(W ) is constructed using the intersection of r-balls around W , Br(W ): Čechr(W ) = { Q ⊂ W : ∩x∈QBr(x) 6= 0 } . The construction of such complex is intricate. Instead, the Vietoris-Rips complex VRr(W ) closely approximates Čechr(W ) using only the pairwise distances or the intersection of two r-balls [RB21]: Wr = VRr(W ) = { Q ⊂ W : ∀x, x′ ∈ Q, Br(x) ∩Br(x′) 6= 0 } . Definition 2 (Persistent Homology). PH indicates a multi-scale version of homology applied over a filtration {Wt}t := VR(W ) : ∀(s ≤ t)Ws ⊂ Wt ⊂ W , keeping track of holes created (born) or filled (died) as t increases. Each persistence module PHk(VR(W )) = {γi}i keeps track of a single k-persistence cycle γi from birth to death. We denote the entire lifetime of cycle γ as I(γ) and its length as |I(γ)| = death(γ)− birth(γ). We will also use persistence diagrams, 2D plots of all persistence lifetimes (death vs. birth). Note that for PH0, the Čech and VR complexes are equivalent. Lifetime intervals are instrumental in TDA as they allow for extraction of topological features or summaries. Note that, each birth-death pair can be mapped to the cells that respectively created and destroyed the homology class, defining a unique map for a persistence diagram, which lends itself to differentibility [BGND+19, CHN19, CHU17]. We conclude this brief section by referring the interested reader to the well established literature of persistent homology [Car14, EH10] for a thorough understanding. Intrinsic Dimension The intrinsic dimension of a space can be measured by using various notions. In this study, we will consider two notions of dimension, namely the upper-box dimension (also called the Minkowski dimension) and the persistent homology dimension. The box dimension is based on covering numbers and can be linked to generalization via [SSDE20], whereas the PH dimension is based on the notions defined earlier in this section. We start by the box dimension. Definition 3 (Upper-Box Dimension). For a bounded metric space W , let Nδ(W) denote the maximal number of disjoint closed δ-balls with centers inW . The upper box dimension is defined as: dimBoxW = lim sup δ→0 ( log(Nδ(W))/log(1/δ) ) . (1) We proceed with the PH dimension. First let us define an intermediate construct, which will play a key role in our computational tools. Definition 4 (α-Weighted Lifetime Sum). 
For a finite set W ⊂ W ⊂ Rd, the weighted ith homology lifetime sum is defined as follows: Eiα(W ) = ∑ γ∈PHi(VR(W )) |I(γ)|α, (2) where PHi(VR(W )) is the i-dimensional persistent homology of the Čech complex on a finite point set W contained inW and |I(γ)| is the persistence lifetime as explained above. Now, we are ready to define the PH dimension, which is the key notion in this paper. Definition 5 (Persistent Homology Dimension). The PHi-dimension of a bounded metric spaceW is defined as follows: dimiPHW := inf { α : Eiα(W ) < C; ∃C > 0,∀ finite W ⊂ W } . (3) In words, dimiPHW is the smallest exponent α for which Eiα is uniformly bounded for all finite subsets ofW . 4 Generalization Error via Persistent Homology Dimension In this section, we will illustrate that the generalization error can be linked to the PH0 dimension. Our approach is based on the following fundamental result. Theorem 1 ([KLS06, Sch19]). LetW ⊂ Rd be a bounded set. Then, it holds that: dimPHW := dim0PHW = dimBoxW. In the light of this theorem, we combine the recent result showing that the generalization error can be linked to the box dimension [SSDE20], and Theorem 1, which shows that, for bounded subsets of Rd, the box dimension and the PH dimension of order 0 agree. By following the notation of [SSDE20], we consider a standard supervised learning setting, where the data space is denoted by Z = X × Y , and X and Y respectively denote the features and the labels. We assume that the data is generated via an unknown data distribution D and we have access to a training set of n points, i.e., S = {z1, . . . , zn}, with the samples {zi}ni=1 are independent and identically (i.i.d.) drawn from D. We further consider a parametric hypothesis classW ⊂ Rd, that potentially depends on S. We choose W to be optimization trajectories given by a training algorithm A, which returns the entire (random) trajectory of the network weights in the time frame [0, T ], such that [A(S)]t = wt being the network weights returned byA at ‘time’ t, and t is a continuous iteration index. Then, in the setW , we collect all the network weights that appear in the optimization trajectory: W := {w ∈ Rd : ∃t ∈ [0, T ], w = [A(S)]t} where we will set T = 1, without loss of generality. To measure the quality of a parameter vector w ∈ W , we use a loss function ` : Rd ×Z 7→ R+, such that `(w, z) denotes the loss corresponding to a single data point z. We then denote the population and empirical risks respectively by R(w) := Ez[`(w, z)] and R̂(w, S) := 1n ∑n i=1 `(w, zi). The generalization error is hence defined as |R̂(w, S)−R(w)|. We now recall [SSDE20, Asssumption H4], which is a form of algorithmic stability [BE02]. Let us first introduce the required notation. For any δ > 0, consider the fixed grid on Rd, G = {( (2j1 + 1)δ 2 √ d , . . . , (2jd + 1)δ 2 √ d ) : ji ∈ Z, i = 1, . . . , d } , and define the set Nδ := {x ∈ G : Bδ(x) ∩W 6= ∅}, that is the collection of the centers of each ball that intersectW . H1. Let Z∞ := (Z × Z × · · · ) denote the countable product endowed with the product topology and let B be the Borel σ-algebra generated by Z∞. Let F,G be the sub-σ-algebras of B generated by the collections of random variables given by {R̂(w, S) : w ∈ Rd, n ≥ 1} and { 1 {w ∈ Nδ} : δ ∈ Q>0, w ∈ G,n ≥ 1 } respectively. There exists a constant M ≥ 1 such that for any A ∈ F, B ∈ G we have P [A ∩B] ≤MP [A]P[B]. The next result forms our main observation, which will lead to our methodological developments. Proposition 1. LetW ⊂ Rd be a (random) compact set. 
Assume that H1 holds, ` is bounded by B and L-Lipschitz continuous in w. Then, for n sufficiently large, we have sup w∈W |R̂(w, S)−R(w)| ≤ 2B √ [dimPHW + 1] log2(nL2) n + log(7M/γ) n , (4) with probability at least 1− γ over S ∼ D⊗n. Proof. By using the same proof technique as [SSDE20, Theorem 2], we can show that (4) holds with dimBoxW in place of dimPHW . Since W is bounded, we have dimBoxW = dimPHW by Theorem 1. The result follows. This result shows that the generalization error of the trajectories of a training algorithm is deeply linked to its topological properties as measured by the PH dimension. Thanks to novel connection, we have now access to the rich TDA toolbox, to be used for different purposes. 4.1 Analyzing Deep Network Dynamics via Persistent Homology By exploiting TDA tools, our goal in this section is to develop an algorithm to compute dimPHW for two main purposes. The first goal is to predict the generalization performance by using dimPH. By this approach, we can use dimPH for hyperparameter tuning without having access to test data. The second goal is to incorporate dimPH as a regularizer to the optimization problem in order to improve generalization. Note that similar topological regularization strategies have already been proposed Algorithm 1: Computation of dimPH. 1 input :The set of iterates W = {wi}Ki=1, smallest sample size nmin, and a skip step ∆, α 2 output :dimPHW 3 n← nmin, E ← [] 4 while n ≤ K do 5 Wn ← sample(W,n) // random sampling 6 Wn ← VR(Wn) // Vietoris-Rips filtration 7 E[i]← Eα(Wn) , ∑ γ∈PH0(Wn) |I(γ)| α // compute lifetime sums from PH 8 n← n+ ∆ 9 m, b← fitline (log(nmin : ∆ : K), log(E)) // power law on Ei1(W ) 10 dimPHW ← α1−m [BGND+19, CNBW19] without a formal link to generalization. In this sense, our observations form the first step towards theoretically linking generalization and TDA. In [SSDE20], to develop a computational approach, the authors first linked the intrinsic dimension to certain statistical properties of the underlying training algorithm, which can be then estimated. To do so, they required an additional topological regularity condition, which necessitates the existence of an ‘Ahlfors regular’ measure defined onW , i.e., a finite Borel measure µ such that there exists s, r0 > 0 where 0 < ars ≤ µ(Br(x)) ≤ brs < ∞, holds for all x ∈ W, 0 < r ≤ r0. This assumption was used to link the box dimension to another notion called Hausdorff dimension, which can be then linked to statistical properties of the training trajectories under further assumptions (see Section 1). An interesting asset of our approach is that, we do not require this condition and thanks to the following result, we are able to develop an algorithm to directly estimate dimPHW , while staying agnostic to the finer topological properties ofW . Proposition 2. Let W ⊂ Rd be a bounded set with dimPHW =: d?. Then, for all ε > 0 and α ∈ (0, d? + ε), there exists a constant Dα,ε, such that the following inequality holds for all n ∈ N+ and all collections Wn = {w1, . . . , wn} with wi ∈ W , i = 1, . . . , n: E0α(Wn) ≤ Dα,εn d?+ε−α d?+ε . (5) Proof. Since W is bounded, we have dimBoxW = d? by Theorem 1. Fix ε > 0. Then, by Definition 3, there exists δ0 = δ0(ε) > 0 and a finite constant Cε > 0 such that for all δ ≤ δ0 the following inequality holds: Nδ(W) ≤ Cεδ−(d ?+ε). (6) Then, the result directly follows from [Sch20, Proposition 21]. This result suggests a simple strategy to estimate an upper bound of the intrinsic dimension from persistent homology. 
In particular, we note that rewriting (5) for logarithmic values give us that( 1− α d∗ + ) log n+ logDα, ≥ logE0α. (7) If logE0α and log n are sampled from the data and give an empirical slope m, then we see that d∗ + ≤ m1−α . In many cases, we see that d ∗ ≈ α1−m (as further explained in Sec. 5.2), so we take α 1−m as our PH dimension estimation. We provide the full algorithm for computing this from our sampled data in Alg. 1. Note that our algorithm is similar to that proposed in [AAF+20], although our method works for sets rather than probability measures. In our implementation we compute the homology by the celebrated Ripser package [Bau21] unless otherwise specified. On computational complexity. Computing the Vietoris Rips complex is an active area of research, as the worst-case time complexity is meaningless due to natural sparsity [Zom10]. Therefore, to calculate the time complexity of our estimator, we focus on analyzing the PH computation from the output simplices: calculating PH takes O(pw) time, where w < 2.4 is the constant of matrix multiplication and p is the number of simplices produced in the filtration [BP19]. Since we compute with 0th order homology, this would imply that the computational complexity is O(nw), where n is the number of points. In particular, this means that estimating the PH dimension would take O(knw) time, where k is the number of samples taken assuming that samples are evenly spaced in [0, n]. 4.2 Regularizing Deep Networks via Persistent Homology Motivated by our results in proposition 2, we theorize that controlling dimPHW would help in reducing the generalization error. Towards this end, we develop a regularizer for our training procedure which seeks to minimize dimPHW during train time. If we let L be our vanilla loss function, then we will instead optimize over our topological loss function Lλ := L+ λ dimPHW , where λ ≥ 0 controls the scale of the regularization andW now denotes a sliding window of iterates (e.g., the latest 50 iterates during training). This way, we aim to regularize the loss by considering the dimension of the ongoing training trajectory. In Alg. 1, we let wi be the stored weights from previous iterations for i ∈ {1, . . . ,K − 1} and let wK be the current weight iteration. Since the persistence diagram computation and linear regression are differentiable, this means that our estimate for dimPH is also differentiable, and, if wk is sampled as in Alg. 1, is connected in the computation graph with wK . We incorporate our regularizer into the network training using PyTorch [PGM+19] and the associated persistent homology package torchph [CHU17, CHN19]. 5 Experimental Evaluations This section presents our experimental results in two parts: (i) analyzing and quantifying generalization in practical deep networks on real data, (ii) ablation studies on a random diffusion process. In all the experiments we will assume that the intrinsic dimension is strictly larger than 1, hence we will set α = 1, unless specified otherwise. Further details are reported in the supplementary document. 5.1 Analyzing and Visualizing Deep Networks Measuring generalization. We first verify our main claim by showing that our persistent homology dimension derived from topological analysis of the training trajectories correctly measures of generalization. To demonstrate this, we apply our analysis to a wide variety of networks, training procedures, and hyperparameters. 
In particular, we train AlexNet [KSH12], a 5-layer (fcn-5) and 7-layer (fcn-7) fully connected networks, and a 9-layer convolutional netowork (cnn-9) on MNIST, CIFAR10 and CIFAR100 datasets for multiple batch sizes and learning rates until convergence. For AlexNet, we consider 1000 iterates prior to convergence and, for the others, we only consider 200. Then, we estimate dimPH on the last iterates by using Alg. 1. For varying n, we randomly pick n of last iterates and compute E0α, and then we use the relation given in (5). We obtain the ground truth (GT) generalization error as the gap between training and test accuracies. Fig. 2 plots the PH-dimension with respect to test accuracy and signals a strong correlation of our PH-dimension and actual performance gap. The lower the PH-dimension, the higher the test accuracy. Note that this results aligns well with that of [SSDE20]. The figure also shows that the intrinsic dimensions across different datasets can be similar, even if the parameters of the models can vary greatly. This supports the recent hypothesis that what matters for the generalization is the effective capacity and not the parameter count. In fact, the dimension should be as minimal as possible without collapsing important representation features onto the same dimension. The findings in Fig. 2 are further augmented with results in Fig. 3, where a similar pattern is observed on AlexNet and CIFAR100. Can dimPH capture intrinsic properties of trajectories? After revealing that our ID estimation is a gauge for generalization, we set out to investigate whether it really hinges on the intrinsic properties of the data. We train several instances of 7-fcn for different learning rates and batch sizes. We compute the PH-dimension of each network using training trajectories. We visualize the following in the rows of Fig. 4 sorted by dimPH: (i) 200× 200 distance matrix of the sequence of iterates w1, . . . , wK (which is the basis for PH computations), (ii) corresponding logE0α=1 estimates as we sweep over n in an increasing fashion, (iii) persistence diagrams per each distance matrix. It is clear that there is a strong correlation between dimPH and the structure of the distance matrix. As dimension increases, matrix of distances become non-uniformly pixelated. The slope estimated from the total edge lengths the second row is a quantity proportional to our dimension. Note that the slope decreases as our estimte increases (hence generalization tends to decrease). We further observe clusters emerging in the persistence diagram. The latter has also been reported for better generalizing networks, though using a different notion of a topological space [BGND+19]. Is dimPH a real indicator of generalization? To quantitatively assess the quality of our complexity measure, we gather two statistics: (i) we report the average p-value over different batch sizes for AlexNet trained with SGD on the Cifar100 dataset. The value of p = 0.0157 < 0.05 confirms the statistical significance. Next, we follow the recent literature [JFY+20] and consult the Kendall correlation coefficient (KCC). Similar to the p-value experiment above, we compute KCC for AlexNet+SGD for different batch sizes (64, 100, 128) and attain (0.933, 0.357, 0.733) respectively. Note that, a positive correlation signals that the test gap closes as dimPH decreases. 
Both of these experiments agree with our theoretical insights that connect generalization to a topological characteristic of a neural network: the intrinsic dimension of its training trajectories.

Effect of different training algorithms. We also verify that our method is algorithm-agnostic and does not require assumptions on the training algorithm. In particular, we show that our above analyses extend to both the RMSProp [TH12] and Adam [KB15] optimizers. Our results are visualized in Fig. 3. We plot the dimension with respect to the generalization error for varying optimizers and batch sizes; our results verify that the generalization error (which is inversely related to the test accuracy) is positively correlated with the PH dimension. This corroborates our previous results in Fig. 2 and in particular shows that our dimension-based estimator of the test gap is indeed algorithm-agnostic.

Encouraging generalization via dim_PH regularization. We furthermore verify that our topological regularizer is able to help control the test gap in accordance with our theory. We train a LeNet-5 network [LBBH98] on CIFAR10 [Kri09] and compare clean training with training under our topological regularizer with λ set to 1. We train for 200 epochs with a batch size of 128 and report the train and test accuracies in Fig. 5 over a variety of learning rates. We ran 10 trials and found that the results differ significantly, with p < 0.05 for all cases except lr = 0.01. Our topological regularizer produces the largest improvements when the network is not able to converge well. These results show that our regularizer behaves as expected: it is able to recover poor training dynamics. We note that this experiment uses a simple architecture and, as such, presents a proof of concept; we do not aim for state-of-the-art results. Furthermore, we directly compared our approach with the generalization estimator of [CMEM20], which most closely resembles our construction. We found that their method does not scale and is often numerically unreliable. For example, their methodology grows quadratically with respect to the number of network weights and linearly with the dataset size, whereas the cost of our method, with vectorized computation, is dominated by memory usage. Furthermore, for many of our test networks, their metric space construction (which is based on the correlations between activations and is used for the Vietoris-Rips complex) would be numerically brittle and result in degenerate persistent homology. These issues prevent [CMEM20] from being applicable in this scenario.

5.2 Ablation Studies

To assess the quality of our dimension estimator, we now perform ablation studies on synthetic data whose ground-truth ID is known. To this end, we use the synthetic experimental setting presented in [SSDE20] (see the supplementary document for details), and we simulate a d = 128 dimensional stable Lévy process with a varying number of points 100 ≤ n ≤ 1500 and tail indices 1 ≤ β ≤ 2 (a simulation sketch is given below). Note that the tail index equals the intrinsic dimension in this case, which is an order of magnitude lower than the ambient dimension in this experiment.

Can dim_PH match the ground truth ID? We first try to predict the GT intrinsic dimension by running Alg. 1 on this data. We also estimate the TwoNN dimension [FdRL17] to quantify how state-of-the-art ID estimators correlate with the GT in such a heavy-tailed regime. Our results are plotted in Fig. 6.
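As a reference for this setup, the synthetic trajectory can be generated as sketched below, assuming scipy's levy_stable sampler; the step size is an illustrative choice (note that scipy calls the stability/tail index alpha, which corresponds to β here).

```python
# Minimal sketch of the ablation data: a d-dimensional beta-stable
# Levy motion whose ground-truth intrinsic dimension equals beta.
import numpy as np
from scipy.stats import levy_stable

def stable_levy_trajectory(n=1500, d=128, tail_index=1.5, dt=1e-3):
    # i.i.d. symmetric stable increments, scaled by dt^(1/tail_index)
    incr = levy_stable.rvs(alpha=tail_index, beta=0.0, size=(n, d))
    return np.cumsum(incr * dt ** (1.0 / tail_index), axis=0)

W = stable_levy_trajectory(tail_index=1.2)  # GT ID ~ 1.2 in a 128-dim ambient space
```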
Note that as n increases our estimator becomes smoother and approximates the GT well, up to a slight over-estimation, a repeatedly observed phenomenon [CCCR15]. TwoNN does not guarantee recovering the box-dimension; while it has been found useful in estimating the ID of data [ALMZ19], we find it less reliable in a heavy-tailed regime, as reflected in the plots. Our supplementary material provides further results on other synthetic datasets that do not resemble training dynamics, such as points on a sphere, where TwoNN can perform better. We also include a robust line-fitting variant of our approach, PH0-RANSAC, where random sample consensus is applied iteratively. However, as our data is not outlier-corrupted, we do not observe a large improvement.

Effect of α on dimension estimation. While our theory requires α to be smaller than the intrinsic dimension of the trajectories, in all of our experiments we fix α = 1.0. It is natural to ask whether this choice hampers our estimates. To see the effect, we vary α in the range [0.5, 2.5] and plot our estimates in Fig. 7. We observe (blue curve) that our dimension estimate follows a U-shaped trend with increasing α. We indicate the GT ID by a dashed red line and our estimate by a dashed green line; ideally, these two horizontal lines should overlap. It is noticeable that, given an oracle for the GT ID, it might be possible to optimize for an α⋆. Yet, such information is not available for deep networks. Nevertheless, α = 1 seems to yield reasonable performance and we leave the estimation of a better α for future work. We provide additional results in our supplementary material.

6 Conclusion

In this paper, we developed novel connections between the dim_PH of the training trajectory and the generalization error. Using these insights, we proposed a method for estimating dim_PH from data; unlike previous work [SSDE20], our approach does not presuppose any conditions on the trajectory and offers a simple algorithm. By leveraging the differentiability of the PH computation, we showed that we can use dim_PH as a regularizer during training, which improved the performance in different setups.

Societal Impact and Limitations. We believe that our study will not pose any negative societal or ethical consequences due to its theoretical nature. The main limitation of our study is that it solely considers the terms E^0_α, whereas PH offers a much richer structure. Hence, as our next step, we will explore finer ways to incorporate PH into generalization analysis. We will further extend our results in terms of dimensions of measures by using the techniques presented in [CDE+21].

Acknowledgements

Umut Şimşekli's research is supported by the French government under the management of the Agence Nationale de la Recherche as part of the "Investissements d'avenir" program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute).
1. What is the main contribution of the paper regarding generalization in deep learning? 2. What are the strengths and weaknesses of the proposed persistent homology dimension (PHD) measure? 3. How does the reviewer assess the empirical evaluation of PHD's ability to predict generalization? 4. What are the limitations of the proposed loss regularizer based on PHD? 5. Are there any concerns regarding the experimental methodology and its justification?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a measure of generalization based on persistent homology, a standard technique of topological data analysis. The proposed measure, the persistent homology dimension (PHD), is computed on optimization trajectories during training. A theoretical connection between PHD and generalization is made using existing results in the literature: first the equivalence of PHD and the box-dimension [KLS06, Sch19], and second that the generalization error may be bounded in terms of the box dimension (under the assumption that the optimization dynamics follow a Feller process). From these, the main result of the paper, Proposition 1, follows under assumption (H1). The authors then proceed to remove the Feller requirement in Proposition 2 using a recent result in the mathematics literature [Sch20]. Based on these results, an algorithm for computing PHD is proposed. The authors proceed to perform an empirical evaluation of how well PHD predicts generalization for select deep network architectures (alexnet, cnn-9, fcn-5, fcn-7), datasets (mnist, cifar-10, cifar-100), and training hyperparameters (learning rate, batch size). The authors visualize these results in Figures 2 and 3, where in Figure 3 the authors claim that PHD "directly" correlates with generalization error across experimental settings. Linear trends are evident in the plots; however, no measures of goodness of fit are given. In addition, a novel loss regularizer is proposed based on their PHD measure. The effect of this regularizer on generalization is studied, but only small differences are shown. Ablation studies are additionally carried out. Review This work addresses an important problem, predicting generalization, and approaches it from an interesting perspective, that of TDA. I have some doubts about the empirical evaluation of PHD's predictive ability for generalization. No quantitative measures are given. There indeed appear to be trends in Figures 2 and 3, but their strength is unclear. Even so, as some have pointed out in the literature, correlative measures may be problematic and not indicative of a causal relationship [0,1]. For instance in [0], the authors write: "Correlation with Generalization: Evaluating measures based on correlation with generalization is very useful but it can also provide a misleading picture. To check the correlation, we should vary architectures and optimization algorithms to produce a set of models. If the set is generated in an artificial way and is not representative of the typical setting, the conclusions might be deceiving and might not generalize to typical cases... Another pitfall is drawing conclusion from changing one or two hyper-parameters (e.g changing the width or batch-size and checking if a measure would correlate with generalization). In these cases, the hyper-parameter could be the true cause of both change in the measure and change in the generalization, but the measure itself has no causal relationship with generalization. Therefore, one needs to be very careful with experimental design to avoid unwanted correlations." Can the authors justify their choice of empirical methodology over Kendall's rank correlation and conditional mutual information as suggested by [1]? Using the measure for regularization is a natural and interesting idea. However, its empirical evaluation appears weak in Figure 5, i.e. < +2%, except for the high learning rate case. I would like to see error bars on this experiment to gauge its significance.
Minor notes: Typo on line 256, "netowork".
[0] Fantastic Generalization Measures and Where to Find Them - Jiang et al. (2019) https://arxiv.org/abs/1912.02178
[1] NeurIPS 2020 Competition: Predicting Generalization in Deep Learning - Jiang et al. (2020) https://arxiv.org/abs/2012.07976v1
NIPS
Title Intrinsic Dimension, Persistent Homology and Generalization in Neural Networks Abstract Disobeying the classical wisdom of statistical learning theory, modern deep neural networks generalize well even though they typically contain millions of parameters. Recently, it has been shown that the trajectories of iterative optimization algorithms can possess fractal structures, and their generalization error can be formally linked to the complexity of such fractals. This complexity is measured by the fractal's intrinsic dimension, a quantity usually much smaller than the number of parameters in the network. Even though this perspective provides an explanation for why overparametrized networks would not overfit, computing the intrinsic dimension (e.g., for monitoring generalization during training) is a notoriously difficult task, where existing methods typically fail even in moderate ambient dimensions. In this study, we consider this problem from the lens of topological data analysis (TDA) and develop a generic computational tool that is built on rigorous mathematical foundations. By making a novel connection between learning theory and TDA, we first illustrate that the generalization error can be equivalently bounded in terms of a notion called the 'persistent homology dimension' (PHD), where, compared with prior work, our approach does not require any additional geometrical or statistical assumptions on the training dynamics. Then, by utilizing recently established theoretical results and TDA tools, we develop an efficient algorithm to estimate PHD at the scale of modern deep neural networks and further provide visualization tools to help understand generalization in deep learning. Our experiments show that the proposed approach can efficiently compute a network's intrinsic dimension in a variety of settings, which is predictive of the generalization error. 1 Introduction In recent years, deep neural networks (DNNs) have become the de facto machine learning tool and have revolutionized a variety of fields such as natural language processing [DCLT18], image perception [KSH12, RBH+21], geometry processing [QSMG17, ZBL+20] and 3D vision [DBI18, GLW+21]. Despite their widespread use, little is known about their theoretical properties. Even now, the top-performing DNNs are designed by trial and error, a pesky, burdensome process for the average practitioner [EMH+19]. Furthermore, even if a top-performing architecture is found, it is difficult to provide performance guarantees on a large class of real-world datasets. This lack of theoretical understanding has motivated a plethora of work focusing on explaining what, how, and why a neural network learns. To answer many of these questions, one naturally examines the generalization error, a measure quantifying the differing performance on train and test data, since this provides significant insights into whether the network is learning or simply memorizing [ZBH+21]. However, generalization in neural networks is particularly confusing as it refutes the classical proposals of statistical learning theory such as uniform bounds based on the Rademacher complexity [BM02] and the Vapnik–Chervonenkis (VC) dimension [Vap68]. Instead, recent analyses have started focusing on the dynamics of deep neural networks. [NBMS17, BO18, GJ16] provide analyses of the final trained network, but these miss out on critical training patterns.
To remedy this, a recent study [SSDE20] connected generalization to the heavy-tailed behavior of network trajectories, a phenomenon which had already been observed in practice [SSG19, ŞGN+19, SZTG20, GSZ21, CWZ+21, HM20, MM19]. [SSDE20] further showed that the generalization error can be linked to the fractal dimension of a parametric hypothesis class (which can then be taken as the optimization trajectories). Hence, the fractal dimension acts as a 'capacity metric' for generalization. While [SSDE20] brought a new perspective to generalization, several shortcomings prevent its application in everyday training. In particular, their construction requires several conditions which may be infeasible in practice: (i) topological regularity conditions on the hypothesis class for fast computation, (ii) a Feller process assumption on the training algorithm trajectory, and (iii) that the Feller process exhibits a specific diffusive behavior near a minimum. Furthermore, the capacity metrics in [SSDE20] are not optimization friendly and therefore cannot be incorporated into training. In this work, we address these shortcomings by exploiting the recently developed connections between fractal dimension and topological data analysis (TDA). First, by relating the box dimension [Sch09] and the recently proposed persistent homology (PH) dimension [Sch20], we relax the assumptions in [SSDE20] to develop a topological intrinsic dimension (ID) estimator. Then, using this estimator, we develop a general tool for computing and visualizing generalization properties in deep learning. Finally, by leveraging recently developed differentiable TDA tools [CHU17, CHN19], we employ our ID estimator to regularize training towards solutions that generalize better, even without having access to the test dataset. Our experiments demonstrate that this new measure of intrinsic dimension correlates highly with generalization error, regardless of the choice of optimizer. Furthermore, as a proof of concept, we illustrate that our topological regularizer is able to improve the test accuracy and lower the generalization error. In particular, this improvement is most pronounced when the learning rate/batch size would normally result in poorer test accuracy. Overall, our contributions are summarized as follows: • We make a novel connection between statistical learning theory and TDA in order to develop a generic computational framework for the generalization error. We remove the topological regularity condition and the decomposable Feller assumption on training trajectories, which were required in [SSDE20]. This leads to a more generic capacity metric. • Using insights from the above methodology, we leverage the differentiable properties of persistent homology to regularize neural network training. Our findings also provide the first steps towards theoretically justifying recent topological regularization methods [BGND+19, CNBW19]. • We provide extensive experiments to illustrate the theory, strength, and flexibility of our framework. We believe that the novel connections and the developed framework will open new theoretical and computational directions in the theory of deep learning. To foster further developments at the intersection of persistent homology and statistical learning theory, we release our source code at: https://github.com/tolgabirdal/PHDimGeneralization.
2 Related Work

Intrinsic dimension in deep networks. Even though a large number of parameters are required to train deep networks [FC18], modern interpretations of deep networks avoid correlating model over-fitting or generalization with parameter counting. Instead, contemporary studies measure model complexity through the degrees of freedom of the parameter space [JFH15, GJ16], compressibility (pruning) [BO18] or intrinsic dimension [ALMZ19, LFLY18, MWH+18]. Tightly related to the ID, Janson et al. [JFH15] investigated the degrees of freedom [Ghr10] in deep networks and the expected difference between test error and training error. Finally, LDMNet [ZQH+18] explicitly penalizes the ID to regularize network training.

Generalization bounds. Several studies have provided theoretical justification for the observations that trained neural networks live in a lower-dimensional space, and that this is related to generalization performance. In particular, compression-based generalization bounds [AGNZ18, SAM+20, SAN20, HJTW21, BSE+21] have shown that the generalization error of a neural network can be much lower if it can be accurately represented in a lower-dimensional space. Approaching the problem from a geometric viewpoint, [SSDE20] showed that the generalization error can be formally linked to the fractal dimension of a parametric hypothesis class. This dimension indeed plays the role of the intrinsic dimension, which can be much smaller than the ambient dimension. When the hypothesis class is chosen as the trajectories of the training algorithm, [SSDE20] further showed that the error can be linked to the heavy-tail behavior of the trajectories.

Deep networks & topology. Previous works have linked neural network training and topological invariants, although all analyze the final trained network [FGFAEV21]. For example, in [RTB+19], the authors construct Neural Persistence, a measure on neural network layer weights. They furthermore show that Neural Persistence reflects many of the properties of convergence and can classify weights based on whether they overfit, underfit, or exactly fit the data. In a parallel line of work, [DZF19] analyze neural network training by calculating topological properties of the underlying graph structure. This is expanded upon in [CMEM20], where the authors compute correlations between neural network weights and show that the homology is linked with the generalization error. However, these previous constructions have been carried out mostly in an ad hoc manner; as a result, many of the results are largely empirical, and work must still be done to show that these methods hold theoretically. Our proposed method, by contrast, is theoretically well-motivated and uses tools from statistical persistent homology theory to formally link the generalization error with the topology of the network training trajectory. We also note that prior work has incorporated topological loss functions to help regularize training. In particular, [BGND+19] constructed a topological regularization term for GANs to help maintain the geometry of the generated 3D point clouds.

3 Preliminaries & Technical Background

We imagine a point cloud W = {w_i ∈ R^d} as a geometric realization of a d-dimensional topological space 𝒲, with W ⊂ 𝒲 ⊂ R^d. B_δ(x) ⊂ R^d denotes the closed ball centered at x ∈ R^d with radius δ.

Persistent Homology. From a topological perspective, 𝒲 can be viewed as a cell complex composed of the disjoint union of k-dimensional balls or cells σ ∈ 𝒲 glued together. For k = 0, 1, 2, . . .
, we form a chain complex C(𝒲) = · · · → C_{k+1}(𝒲) → C_k(𝒲) → · · · by sequencing the chain groups C_k(𝒲), whose elements are k-chains (formal sums of k-cells), via boundary maps ∂_k : C_k(𝒲) → C_{k−1}(𝒲) with ∂_{k−1} ∘ ∂_k ≡ 0. In this paper, we work with finite simplicial complexes, restricting the cells to be simplices. The kth homology group or k-dimensional homology is then defined as the set of equivalence classes of k-dimensional cycles that differ only by a boundary, in other words, the quotient group H_k(𝒲) = Z_k(𝒲)/Y_k(𝒲), where Z_k(𝒲) = ker ∂_k and Y_k(𝒲) = im ∂_{k+1}. The generators or bases of H_0(𝒲), H_1(𝒲) and H_2(𝒲) describe the shape of the topological space 𝒲 by its connected components, holes and cavities, respectively. Their ranks are related to the Betti numbers, i.e., β_k = rank(H_k).

Definition 1 (Čech and Vietoris-Rips Complexes). For W a finite set of points in a metric space, the Čech cell complex Čech_r(W) is constructed using the intersections of r-balls around W, B_r(W):

Čech_r(W) = { Q ⊂ W : ∩_{x∈Q} B_r(x) ≠ ∅ }.

The construction of such a complex is intricate. Instead, the Vietoris-Rips complex VR_r(W) closely approximates Čech_r(W) using only the pairwise distances, or the intersections of two r-balls [RB21]:

W_r = VR_r(W) = { Q ⊂ W : ∀x, x′ ∈ Q, B_r(x) ∩ B_r(x′) ≠ ∅ }.

Definition 2 (Persistent Homology). PH indicates a multi-scale version of homology applied over a filtration {W_t}_t := VR(W), with W_s ⊂ W_t ⊂ W for all s ≤ t, keeping track of holes created (born) or filled (died) as t increases. Each persistence module PH_k(VR(W)) = {γ_i}_i keeps track of a single k-persistence cycle γ_i from birth to death. We denote the entire lifetime of a cycle γ as I(γ) and its length as |I(γ)| = death(γ) − birth(γ). We will also use persistence diagrams, 2D plots of all persistence lifetimes (death vs. birth). Note that for PH_0, the Čech and VR complexes are equivalent. Lifetime intervals are instrumental in TDA as they allow for the extraction of topological features or summaries. Note that each birth-death pair can be mapped to the cells that respectively created and destroyed the homology class, defining a unique map for a persistence diagram, which lends itself to differentiability [BGND+19, CHN19, CHU17]. We conclude this brief section by referring the interested reader to the well-established literature on persistent homology [Car14, EH10] for a thorough understanding.

Intrinsic Dimension. The intrinsic dimension of a space can be measured using various notions. In this study, we consider two notions of dimension, namely the upper-box dimension (also called the Minkowski dimension) and the persistent homology dimension. The box dimension is based on covering numbers and can be linked to generalization via [SSDE20], whereas the PH dimension is based on the notions defined earlier in this section. We start with the box dimension.

Definition 3 (Upper-Box Dimension). For a bounded metric space 𝒲, let N_δ(𝒲) denote the maximal number of disjoint closed δ-balls with centers in 𝒲. The upper box dimension is defined as:

dim_Box 𝒲 = lim sup_{δ→0} ( log(N_δ(𝒲)) / log(1/δ) ). (1)

We proceed with the PH dimension. First, let us define an intermediate construct, which will play a key role in our computational tools.

Definition 4 (α-Weighted Lifetime Sum).
For a finite set W ⊂ 𝒲 ⊂ R^d, the weighted ith homology lifetime sum is defined as follows:

E^i_α(W) = Σ_{γ ∈ PH_i(VR(W))} |I(γ)|^α, (2)

where PH_i(VR(W)) is the i-dimensional persistent homology of the complex built on the finite point set W contained in 𝒲, and |I(γ)| is the persistence lifetime as explained above. Now we are ready to define the PH dimension, which is the key notion in this paper.

Definition 5 (Persistent Homology Dimension). The PH_i-dimension of a bounded metric space 𝒲 is defined as follows:

dim^i_PH 𝒲 := inf { α : ∃C > 0 such that E^i_α(W) < C for all finite W ⊂ 𝒲 }. (3)

In words, dim^i_PH 𝒲 is the smallest exponent α for which E^i_α is uniformly bounded over all finite subsets of 𝒲.

4 Generalization Error via Persistent Homology Dimension

In this section, we illustrate that the generalization error can be linked to the PH_0 dimension. Our approach is based on the following fundamental result.

Theorem 1 ([KLS06, Sch19]). Let 𝒲 ⊂ R^d be a bounded set. Then it holds that:

dim_PH 𝒲 := dim^0_PH 𝒲 = dim_Box 𝒲.

In the light of this theorem, we combine the recent result showing that the generalization error can be linked to the box dimension [SSDE20] with Theorem 1, which shows that, for bounded subsets of R^d, the box dimension and the PH dimension of order 0 agree. Following the notation of [SSDE20], we consider a standard supervised learning setting, where the data space is denoted by Z = X × Y, with X and Y respectively denoting the features and the labels. We assume that the data is generated via an unknown data distribution D and that we have access to a training set of n points, i.e., S = {z_1, . . . , z_n}, where the samples {z_i}_{i=1}^n are drawn independently and identically distributed (i.i.d.) from D. We further consider a parametric hypothesis class 𝒲 ⊂ R^d that potentially depends on S. We choose 𝒲 to be the optimization trajectory given by a training algorithm A, which returns the entire (random) trajectory of the network weights in the time frame [0, T], such that [A(S)]_t = w_t is the vector of network weights returned by A at 'time' t, where t is a continuous iteration index. Then, in the set 𝒲, we collect all the network weights that appear in the optimization trajectory: 𝒲 := {w ∈ R^d : ∃t ∈ [0, T], w = [A(S)]_t}, where we set T = 1 without loss of generality. To measure the quality of a parameter vector w ∈ 𝒲, we use a loss function ℓ : R^d × Z → R_+, such that ℓ(w, z) denotes the loss corresponding to a single data point z. We then denote the population and empirical risks respectively by R(w) := E_z[ℓ(w, z)] and R̂(w, S) := (1/n) Σ_{i=1}^n ℓ(w, z_i). The generalization error is hence defined as |R̂(w, S) − R(w)|.

We now recall [SSDE20, Assumption H4], which is a form of algorithmic stability [BE02]. Let us first introduce the required notation. For any δ > 0, consider the fixed grid on R^d,

G = { ( (2j_1 + 1)δ / (2√d), . . . , (2j_d + 1)δ / (2√d) ) : j_i ∈ Z, i = 1, . . . , d },

and define the set N_δ := {x ∈ G : B_δ(x) ∩ 𝒲 ≠ ∅}, that is, the collection of the centers of the grid balls that intersect 𝒲.

H1. Let Z^∞ := (Z × Z × · · · ) denote the countable product endowed with the product topology and let B be the Borel σ-algebra generated by Z^∞. Let F, G be the sub-σ-algebras of B generated by the collections of random variables {R̂(w, S) : w ∈ R^d, n ≥ 1} and {1{w ∈ N_δ} : δ ∈ Q_{>0}, w ∈ G, n ≥ 1}, respectively. There exists a constant M ≥ 1 such that for any A ∈ F and B ∈ G we have P[A ∩ B] ≤ M P[A] P[B].

The next result forms our main observation, which will lead to our methodological developments.

Proposition 1. Let 𝒲 ⊂ R^d be a (random) compact set.
Assume that H1 holds and that ℓ is bounded by B and L-Lipschitz continuous in w. Then, for n sufficiently large, we have

sup_{w∈𝒲} |R̂(w, S) − R(w)| ≤ 2B √( ([dim_PH 𝒲 + 1] log²(nL²) + log(7M/γ)) / n ), (4)

with probability at least 1 − γ over S ∼ D^⊗n.

Proof. By using the same proof technique as [SSDE20, Theorem 2], we can show that (4) holds with dim_Box 𝒲 in place of dim_PH 𝒲. Since 𝒲 is bounded, we have dim_Box 𝒲 = dim_PH 𝒲 by Theorem 1. The result follows.

This result shows that the generalization error of the trajectories of a training algorithm is deeply linked to their topological properties, as measured by the PH dimension. Thanks to this novel connection, we now have access to the rich TDA toolbox, which can be used for different purposes.

4.1 Analyzing Deep Network Dynamics via Persistent Homology

By exploiting TDA tools, our goal in this section is to develop an algorithm to compute dim_PH 𝒲, for two main purposes. The first is to predict generalization performance by using dim_PH; with this approach, we can use dim_PH for hyperparameter tuning without having access to test data. The second is to incorporate dim_PH as a regularizer in the optimization problem in order to improve generalization. Note that similar topological regularization strategies have already been proposed [BGND+19, CNBW19], without a formal link to generalization. In this sense, our observations form the first step towards theoretically linking generalization and TDA.

Algorithm 1: Computation of dim_PH.
  input: the set of iterates W = {w_i}_{i=1}^K, smallest sample size n_min, a skip step ∆, and α
  output: dim_PH W
  n ← n_min, E ← []
  while n ≤ K do
      W_n ← sample(W, n)                                     // random sampling
      W_n ← VR(W_n)                                          // Vietoris-Rips filtration
      E[i] ← E_α(W_n) := Σ_{γ∈PH_0(W_n)} |I(γ)|^α             // lifetime sums from PH
      n ← n + ∆
  m, b ← fitline(log(n_min : ∆ : K), log(E))                 // power-law fit on E^0_α(W)
  dim_PH W ← α/(1 − m)

In [SSDE20], to develop a computational approach, the authors first linked the intrinsic dimension to certain statistical properties of the underlying training algorithm, which can then be estimated. To do so, they required an additional topological regularity condition, which necessitates the existence of an 'Ahlfors regular' measure defined on 𝒲, i.e., a finite Borel measure µ for which there exist s, r_0 > 0 and constants 0 < a ≤ b < ∞ such that a r^s ≤ µ(B_r(x)) ≤ b r^s holds for all x ∈ 𝒲 and 0 < r ≤ r_0. This assumption was used to link the box dimension to another notion called the Hausdorff dimension, which can then be linked to statistical properties of the training trajectories under further assumptions (see Section 1). An interesting asset of our approach is that we do not require this condition: thanks to the following result, we are able to develop an algorithm to directly estimate dim_PH 𝒲, while staying agnostic to the finer topological properties of 𝒲.

Proposition 2. Let 𝒲 ⊂ R^d be a bounded set with dim_PH 𝒲 =: d⋆. Then, for all ε > 0 and α ∈ (0, d⋆ + ε), there exists a constant D_{α,ε} such that the following inequality holds for all n ∈ N_+ and all collections W_n = {w_1, . . . , w_n} with w_i ∈ 𝒲, i = 1, . . . , n:

E^0_α(W_n) ≤ D_{α,ε} n^{(d⋆+ε−α)/(d⋆+ε)}. (5)

Proof. Since 𝒲 is bounded, we have dim_Box 𝒲 = d⋆ by Theorem 1. Fix ε > 0. Then, by Definition 3, there exist δ_0 = δ_0(ε) > 0 and a finite constant C_ε > 0 such that for all δ ≤ δ_0 the following inequality holds:

N_δ(𝒲) ≤ C_ε δ^{−(d⋆+ε)}. (6)

The result then directly follows from [Sch20, Proposition 21].

This result suggests a simple strategy to estimate an upper bound of the intrinsic dimension from persistent homology.
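For completeness, the slope relation that this strategy (and Alg. 1) exploits follows from (5) in one step; this restates the derivation given in Sec. 4.1 above.

```latex
% Taking logarithms in (5):
\log E_\alpha^0(W_n) \;\le\; \log D_{\alpha,\varepsilon}
   + \Big(1 - \tfrac{\alpha}{d^\star + \varepsilon}\Big)\log n .
% A line fitted to the pairs (\log n, \log E_\alpha^0) with slope m thus
% relates m to the dimension via m \approx 1 - \alpha/(d^\star+\varepsilon),
% motivating the estimate \mathrm{dim}_{PH} \approx \alpha/(1-m) of Alg. 1.
```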
1. What is the focus of the paper regarding neural networks and their loss? 2. What are the strengths of the paper's theoretical analysis? 3. Are there any concerns or limitations regarding the proposed normalization term? 4. How does the reviewer assess the significance and effectiveness of the proposed method compared to prior works? 5. What are the potential future research directions related to the paper's contributions?
Summary Of The Paper Review
Summary Of The Paper The relationship between the generalization error of neural networks and the intrinsic dimension obtained via persistent homology is mathematically proved. The authors also propose a regularization term based on the presented theory. In addition, they experimentally verify the theoretical results and confirm the effectiveness of the proposed algorithm. Review Many regularization methods exist for neural network losses, but most are empirical and few have been mathematically justified. Mathematical analysis is necessary for the future development of the technology. This paper logically shows that the generalization of neural networks can be controlled via an intrinsic dimension based on persistent homology. Most of the presented theory builds on previous results, but it is well developed towards the generalization of neural networks. Although the proposed regularization term is simple, it is significant precisely because its mathematical implications are made explicit. Many regularization terms have been proposed in the past; this paper only shows the effect on one problem, so, in practical terms, a comparison with other regularizers may be necessary. In addition, since persistent homology is generally computationally expensive, a future evaluation in terms of computational complexity will be necessary.
NIPS
Title Intrinsic Dimension, Persistent Homology and Generalization in Neural Networks Abstract Disobeying the classical wisdom of statistical learning theory, modern deep neural networks generalize well even though they typically contain millions of parameters. Recently, it has been shown that the trajectories of iterative optimization algorithms can possess fractal structures, and their generalization error can be formally linked to the complexity of such fractals. This complexity is measured by the fractal’s intrinsic dimension, a quantity usually much smaller than the number of parameters in the network. Even though this perspective provides an explanation for why overparametrized networks would not overfit, computing the intrinsic dimension (e.g., for monitoring generalization during training) is a notoriously difficult task, where existing methods typically fail even in moderate ambient dimensions. In this study, we consider this problem from the lens of topological data analysis (TDA) and develop a generic computational tool that is built on rigorous mathematical foundations. By making a novel connection between learning theory and TDA, we first illustrate that the generalization error can be equivalently bounded in terms of a notion called the ’persistent homology dimension’ (PHD), where, compared with prior work, our approach does not require any additional geometrical or statistical assumptions on the training dynamics. Then, by utilizing recently established theoretical results and TDA tools, we develop an efficient algorithm to estimate PHD in the scale of modern deep neural networks and further provide visualization tools to help understand generalization in deep learning. Our experiments show that the proposed approach can efficiently compute a network’s intrinsic dimension in a variety of settings, which is predictive of the generalization error. 1 Introduction In recent years, deep neural networks (DNNs) have become the de facto machine learning tool and have revolutionized a variety of fields such as natural language processing [DCLT18], image perception [KSH12, RBH+21], geometry processing [QSMG17, ZBL+20] and 3D vision [DBI18, GLW+21]. Despite their widespread use, little is known about their theoretical properties. Even now the top-performing DNNs are designed by trial-and-error, a pesky, burdensome process for the average practitioner [EMH+19]. Furthermore, even if a top-performing architecture is found, it is difficult to provide performance guarantees on a large class of real-world datasets. This lack of theoretical understanding has motivated a plethora of work focusing on explaining what, how, and why a neural network learns. To answer many of these questions, one naturally examines the generalization error, a measure quantifying the differing performance on train and 35th Conference on Neural Information Processing Systems (NeurIPS 2021) test data since this provides significant insights into whether the network is learning or simply memorizing [ZBH+21]. However, generalization in neural networks is particularly confusing as it refutes the classical proposals of statistical learning theory such as uniform bounds based on the Rademacher complexity [BM02] and the Vapnik–Chervonenkis (VC) dimension [Vap68]. Instead, recent analyses have started focusing on the dynamics of deep neural networks. [NBMS17, BO18, GJ16] provide analyses on the final trained network, but these miss out on critical training patterns. 
To remedy this, a recent study [SSDE20] connected generalization and the heavy tailed behavior of network trajectories–a phenomenon which had already been observed in practice [SSG19, ŞGN+19, SZTG20, GSZ21, CWZ+21, HM20, MM19]. [SSDE20] further showed that the generalization error can be linked to the fractal dimension of a parametric hypothesis class (which can then be taken as the optimization trajectories). Hence, the fractal dimension acts as a ‘capacity metric’ for generalization. While [SSDE20] brought a new perspective to generalization, several shortcomings prevent application in everyday training. In particular, their construction requires several conditions which may be infeasible in practice: (i) topological regularity conditions on the hypothesis class for fast computation, (ii) a Feller process assumption on the training algorithm trajectory, and that (iii) the Feller process exhibits a specific diffusive behavior near a minimum. Furthermore, the capacity metrics in [SSDE20] are not optimization friendly and therefore can’t be incorporated into training. In this work, we address these shortcomings by exploiting the recently developed connections between fractal dimension and topological data analysis (TDA). First, by relating the box dimension [Sch09] and the recently proposed persistent homology (PH) dimension [Sch20], we relax the assumptions in [SSDE20] to develop a topological intrinsic dimension (ID) estimator. Then, using this estimator we develop a general tool for computing and visualizing generalization properties in deep learning. Finally, by leveraging recently developed differentiable TDA tools [CHU17, CHN19], we employ our ID estimator to regularize training towards solutions that generalize better, even without having access to the test dataset. Our experiments demonstrate that this new measure of intrinsic dimension correlates highly with generalization error, regardless of the choice of optimizer. Furthermore, as a proof of concept, we illustrate that our topological regularizer is able to improve the test accuracy and lower the generalization error. In particular, this improvement is most pronounced when the learning rate/batch size normally results in a poorer test accuracy. Overall, our contributions are summarized as follows: • We make a novel connection between statistical learning theory and TDA in order to develop a generic computational framework for the generalization error. We remove the topological regularity condition and the decomposable Feller assumption on training trajectories, which were required in [SSDE20]. This leads to a more generic capacity metric. • Using insights from our above methodology, we leverage the differentiable properties of persistent homology to regularize neural network training. Our findings also provide the first steps towards theoretically justifying recent topological regularization methods [BGND+19, CNBW19]. • We provide extensive experiments to illustrate the theory, strength, and flexibility of our framework. We believe that the novel connections and the developed framework will open new theoretical and computational directions in the theory of deep learning. To foster further developments at the the intersection of persistent homology and statistical learning theory, we release our source code under: https://github.com/tolgabirdal/PHDimGeneralization. 
2 Related Work Intrinsic dimension in deep networks Even though a large number of parameters are required to train deep networks [FC18], modern interpretations of deep networks avoid correlating model over-fitting or generalization to parameter counting. Instead, contemporary studies measure model complexity through the degrees of freedom of the parameter space [JFH15, GJ16], compressibility (pruning) [BO18] or intrinsic dimension [ALMZ19, LFLY18, MWH+18]. Tightly related to the ID, Janson et al. [JFH15] investigated the degrees of freedom [Ghr10] in deep networks and expected difference between test error and training error. Finally, LDMNet [ZQH+18] explicitly penalizes the ID regularizing the network training. Generalization bounds Several studies have provided theoretical justification to the observations that trained neural networks live in a lower-dimensional space, and this is related to the generalization performance. In particular, compression-based generalization bounds [AGNZ18, SAM+20, SAN20, HJTW21, BSE+21] have shown that the generalization error of a neural network can be much lower if it can be accurately represented in lower dimensional space. Approaching the problem from a geometric viewpoint, [SSDE20] showed that the generalization error can be formally linked to the fractal dimension of a parametric hypothesis class. This dimension indeed the plays role of the intrinsic dimension, which can be much smaller than the ambient dimension. When the hypothesis class is chosen as the trajectories of the training algorithm, [SSDE20] further showed that the error can be linked to the heavy-tail behavior of the trajectories. Deep networks & topology Previous works have linked neural network training and topological invariants, although all analyze the final trained network [FGFAEV21]. For example, in [RTB+19], the authors construct Neural Persistence, a measure on neural network layer weights. They furthermore show that Neural Persistence reflects many of the properties of convergence and can classify weights based on whether they overfit, underfit, or exactly fit the data. In a parallel line of work, [DZF19] analyze neural network training by calculating topological properties of the underlying graph structure. This is expanded upon in [CMEM20], where the authors compute correlations between neural network weights and show that the homology is linked with the generalization error. However, these previous constructions have been done mostly in an adhoc manner. As a result, many of the results are mostly empirical and work must still be done to show that these methods hold theoretically. Our proposed method, by contrast, is theoretically well-motivated and uses tools from statistical persistent homology theory to formally links the generalization error with the network training trajectory topology. We also would like to note that prior work has incorporated topological loss functions to help normalize training. In particular, [BGND+19] constructed a topological normalization term for GANs to help maintain the geometry of the generated 3d point clouds. 3 Preliminaries & Technical Background We imagine a point cloud W = {wi ∈ Rd} as a geometric realization of a d-dimensional topological space W ⊂ W ⊂ Rd. Bδ(x) ⊂ Rd denotes the closed ball centered around x ∈ Rd with radius δ. Persistent Homology From a topological perspective, W can be viewed a cell complex composed of the disjoint union of k-dimensional balls or cells σ ∈ W glued together. For k = 0, 1, 2, . . . 
, we form a chain complex C(W) = . . . Ck+1(W) ∂k+1−−−→ Ck(W) ∂k−→ . . . by sequencing chain groupsCk(W), whose elements are equivalence classes of cycles, via boundary maps ∂k : Ck(W) 7→ Ck−1(W) with ∂k−1◦∂k ≡ 0. In this paper, we work with finite simplicial complexes restricting the cells to be simplices. The kth homology group or k-dimensional homology is then defined as the equivalence classes of k-dimensional cycles who differ only by a boundary, or in other words, the quotient group Hk(W) = Zk(W)/Yk(W) where Zk(W) = ker ∂k and Yk(W) = im ∂k+1. The generators or basis of H0(W), H1(W) and H2(W) describe the shape of the topological spaceW by its connected components, holes and cavities, respectively. Their ranks are related to the Betti numbers i.e.βk = rank(Hk). Definition 1 (Čech and Vietoris-Rips Complexes). For W a set of fine points in a metric space, the Čech cell complex Čechr(W ) is constructed using the intersection of r-balls around W , Br(W ): Čechr(W ) = { Q ⊂ W : ∩x∈QBr(x) 6= 0 } . The construction of such complex is intricate. Instead, the Vietoris-Rips complex VRr(W ) closely approximates Čechr(W ) using only the pairwise distances or the intersection of two r-balls [RB21]: Wr = VRr(W ) = { Q ⊂ W : ∀x, x′ ∈ Q, Br(x) ∩Br(x′) 6= 0 } . Definition 2 (Persistent Homology). PH indicates a multi-scale version of homology applied over a filtration {Wt}t := VR(W ) : ∀(s ≤ t)Ws ⊂ Wt ⊂ W , keeping track of holes created (born) or filled (died) as t increases. Each persistence module PHk(VR(W )) = {γi}i keeps track of a single k-persistence cycle γi from birth to death. We denote the entire lifetime of cycle γ as I(γ) and its length as |I(γ)| = death(γ)− birth(γ). We will also use persistence diagrams, 2D plots of all persistence lifetimes (death vs. birth). Note that for PH0, the Čech and VR complexes are equivalent. Lifetime intervals are instrumental in TDA as they allow for extraction of topological features or summaries. Note that, each birth-death pair can be mapped to the cells that respectively created and destroyed the homology class, defining a unique map for a persistence diagram, which lends itself to differentibility [BGND+19, CHN19, CHU17]. We conclude this brief section by referring the interested reader to the well established literature of persistent homology [Car14, EH10] for a thorough understanding. Intrinsic Dimension The intrinsic dimension of a space can be measured by using various notions. In this study, we will consider two notions of dimension, namely the upper-box dimension (also called the Minkowski dimension) and the persistent homology dimension. The box dimension is based on covering numbers and can be linked to generalization via [SSDE20], whereas the PH dimension is based on the notions defined earlier in this section. We start by the box dimension. Definition 3 (Upper-Box Dimension). For a bounded metric space W , let Nδ(W) denote the maximal number of disjoint closed δ-balls with centers inW . The upper box dimension is defined as: dimBoxW = lim sup δ→0 ( log(Nδ(W))/log(1/δ) ) . (1) We proceed with the PH dimension. First let us define an intermediate construct, which will play a key role in our computational tools. Definition 4 (α-Weighted Lifetime Sum). 
For a finite set $W \subset \mathcal{W} \subset \mathbb{R}^d$, the weighted $i$th homology lifetime sum is defined as follows:
$$E_\alpha^i(W) = \sum_{\gamma \in PH_i(\mathrm{VR}(W))} |I(\gamma)|^\alpha, \quad (2)$$
where $PH_i(\mathrm{VR}(W))$ is the $i$-dimensional persistent homology of the Vietoris-Rips complex on a finite point set $W$ contained in $\mathcal{W}$, and $|I(\gamma)|$ is the persistence lifetime as explained above.

Now, we are ready to define the PH dimension, which is the key notion in this paper.

Definition 5 (Persistent Homology Dimension). The $PH_i$-dimension of a bounded metric space $\mathcal{W}$ is defined as follows:
$$\dim_{PH}^i \mathcal{W} := \inf\{\alpha : \exists C > 0 \text{ such that } E_\alpha^i(W) < C \text{ for all finite } W \subset \mathcal{W}\}. \quad (3)$$
In words, $\dim_{PH}^i \mathcal{W}$ is the smallest exponent $\alpha$ for which $E_\alpha^i$ is uniformly bounded over all finite subsets of $\mathcal{W}$.

4 Generalization Error via Persistent Homology Dimension

In this section, we illustrate that the generalization error can be linked to the $PH_0$ dimension. Our approach is based on the following fundamental result.

Theorem 1 ([KLS06, Sch19]). Let $\mathcal{W} \subset \mathbb{R}^d$ be a bounded set. Then it holds that:
$$\dim_{PH} \mathcal{W} := \dim_{PH}^0 \mathcal{W} = \overline{\dim}_{\mathrm{Box}} \mathcal{W}.$$

In the light of this theorem, we combine the recent result showing that the generalization error can be linked to the box dimension [SSDE20] with Theorem 1, which shows that, for bounded subsets of $\mathbb{R}^d$, the box dimension and the PH dimension of order 0 agree. Following the notation of [SSDE20], we consider a standard supervised learning setting, where the data space is denoted by $\mathcal{Z} = \mathcal{X} \times \mathcal{Y}$, with $\mathcal{X}$ and $\mathcal{Y}$ respectively denoting the features and the labels. We assume that the data is generated via an unknown data distribution $\mathcal{D}$ and that we have access to a training set of $n$ points, $S = \{z_1, \dots, z_n\}$, with the samples $\{z_i\}_{i=1}^n$ drawn independently and identically distributed (i.i.d.) from $\mathcal{D}$. We further consider a parametric hypothesis class $\mathcal{W} \subset \mathbb{R}^d$ that potentially depends on $S$. We choose $\mathcal{W}$ to be the optimization trajectories given by a training algorithm $A$, which returns the entire (random) trajectory of the network weights in the time frame $[0, T]$, such that $[A(S)]_t = w_t$ is the network weights returned by $A$ at 'time' $t$, where $t$ is a continuous iteration index. Then, in the set $\mathcal{W}$, we collect all the network weights that appear in the optimization trajectory: $\mathcal{W} := \{w \in \mathbb{R}^d : \exists t \in [0, T],\; w = [A(S)]_t\}$, where we set $T = 1$ without loss of generality.

To measure the quality of a parameter vector $w \in \mathcal{W}$, we use a loss function $\ell : \mathbb{R}^d \times \mathcal{Z} \mapsto \mathbb{R}_+$, such that $\ell(w, z)$ denotes the loss corresponding to a single data point $z$. We then denote the population and empirical risks respectively by $R(w) := \mathbb{E}_z[\ell(w, z)]$ and $\hat{R}(w, S) := \frac{1}{n}\sum_{i=1}^n \ell(w, z_i)$. The generalization error is hence defined as $|\hat{R}(w, S) - R(w)|$.

We now recall [SSDE20, Assumption H4], which is a form of algorithmic stability [BE02]. Let us first introduce the required notation. For any $\delta > 0$, consider the fixed grid on $\mathbb{R}^d$,
$$G = \left\{\left(\frac{(2j_1 + 1)\delta}{2\sqrt{d}}, \dots, \frac{(2j_d + 1)\delta}{2\sqrt{d}}\right) : j_i \in \mathbb{Z},\; i = 1, \dots, d\right\},$$
and define the set $N_\delta := \{x \in G : B_\delta(x) \cap \mathcal{W} \neq \emptyset\}$, that is, the collection of the centers of the grid balls that intersect $\mathcal{W}$.

H1. Let $\mathcal{Z}^\infty := (\mathcal{Z} \times \mathcal{Z} \times \cdots)$ denote the countable product endowed with the product topology and let $\mathcal{B}$ be the Borel $\sigma$-algebra generated by $\mathcal{Z}^\infty$. Let $\mathcal{F}, \mathcal{G}$ be the sub-$\sigma$-algebras of $\mathcal{B}$ generated by the collections of random variables given by $\{\hat{R}(w, S) : w \in \mathbb{R}^d, n \geq 1\}$ and $\{\mathbb{1}\{w \in N_\delta\} : \delta \in \mathbb{Q}_{>0}, w \in G, n \geq 1\}$, respectively. There exists a constant $M \geq 1$ such that for any $A \in \mathcal{F}$, $B \in \mathcal{G}$ we have $\mathbb{P}[A \cap B] \leq M\,\mathbb{P}[A]\,\mathbb{P}[B]$.

The next result forms our main observation, which will lead to our methodological developments.

Proposition 1. Let $\mathcal{W} \subset \mathbb{R}^d$ be a (random) compact set.
Assume that H1 holds and that $\ell$ is bounded by $B$ and $L$-Lipschitz continuous in $w$. Then, for $n$ sufficiently large, we have
$$\sup_{w \in \mathcal{W}} |\hat{R}(w, S) - R(w)| \leq 2B\sqrt{\frac{[\dim_{PH}\mathcal{W} + 1]\log^2(nL^2)}{n} + \frac{\log(7M/\gamma)}{n}}, \quad (4)$$
with probability at least $1 - \gamma$ over $S \sim \mathcal{D}^{\otimes n}$.

Proof. By using the same proof technique as [SSDE20, Theorem 2], we can show that (4) holds with $\dim_{\mathrm{Box}}\mathcal{W}$ in place of $\dim_{PH}\mathcal{W}$. Since $\mathcal{W}$ is bounded, we have $\dim_{\mathrm{Box}}\mathcal{W} = \dim_{PH}\mathcal{W}$ by Theorem 1. The result follows.

This result shows that the generalization error of the trajectories of a training algorithm is deeply linked to their topological properties as measured by the PH dimension. Thanks to this novel connection, we now have access to the rich TDA toolbox, which can be used for different purposes.

4.1 Analyzing Deep Network Dynamics via Persistent Homology

By exploiting TDA tools, our goal in this section is to develop an algorithm to compute $\dim_{PH}\mathcal{W}$ for two main purposes. The first goal is to predict the generalization performance by using $\dim_{PH}$. With this approach, we can use $\dim_{PH}$ for hyperparameter tuning without having access to test data. The second goal is to incorporate $\dim_{PH}$ as a regularizer in the optimization problem in order to improve generalization. Note that similar topological regularization strategies have already been proposed [BGND+19, CNBW19], without a formal link to generalization. In this sense, our observations form the first step towards theoretically linking generalization and TDA.

Algorithm 1: Computation of dim_PH.
 1: input: the set of iterates W = {w_i}_{i=1}^K, smallest sample size n_min, a skip step Δ, and α
 2: output: dim_PH W
 3: n ← n_min, E ← []
 4: while n ≤ K do
 5:   W_n ← sample(W, n)                                   // random sampling
 6:   W_n ← VR(W_n)                                        // Vietoris-Rips filtration
 7:   E[i] ← E_α(W_n) := Σ_{γ ∈ PH_0(W_n)} |I(γ)|^α        // compute lifetime sums from PH
 8:   n ← n + Δ
 9: m, b ← fitline(log(n_min : Δ : K), log(E))             // power-law fit on E_α^0(W)
10: dim_PH W ← α / (1 − m)

In [SSDE20], to develop a computational approach, the authors first linked the intrinsic dimension to certain statistical properties of the underlying training algorithm, which can then be estimated. To do so, they required an additional topological regularity condition, which necessitates the existence of an 'Ahlfors regular' measure defined on $\mathcal{W}$, i.e., a finite Borel measure $\mu$ such that there exist $s, r_0 > 0$ for which $0 < a r^s \leq \mu(B_r(x)) \leq b r^s < \infty$ holds for all $x \in \mathcal{W}$ and $0 < r \leq r_0$. This assumption was used to link the box dimension to another notion called the Hausdorff dimension, which can in turn be linked to statistical properties of the training trajectories under further assumptions (see Section 1). An interesting asset of our approach is that we do not require this condition: thanks to the following result, we are able to develop an algorithm that directly estimates $\dim_{PH}\mathcal{W}$ while staying agnostic to the finer topological properties of $\mathcal{W}$.

Proposition 2. Let $\mathcal{W} \subset \mathbb{R}^d$ be a bounded set with $\dim_{PH}\mathcal{W} =: d^\star$. Then, for all $\varepsilon > 0$ and $\alpha \in (0, d^\star + \varepsilon)$, there exists a constant $D_{\alpha,\varepsilon}$ such that the following inequality holds for all $n \in \mathbb{N}_+$ and all collections $W_n = \{w_1, \dots, w_n\}$ with $w_i \in \mathcal{W}$, $i = 1, \dots, n$:
$$E_\alpha^0(W_n) \leq D_{\alpha,\varepsilon}\, n^{\frac{d^\star + \varepsilon - \alpha}{d^\star + \varepsilon}}. \quad (5)$$

Proof. Since $\mathcal{W}$ is bounded, we have $\dim_{\mathrm{Box}}\mathcal{W} = d^\star$ by Theorem 1. Fix $\varepsilon > 0$. Then, by Definition 3, there exist $\delta_0 = \delta_0(\varepsilon) > 0$ and a finite constant $C_\varepsilon > 0$ such that for all $\delta \leq \delta_0$ the following inequality holds:
$$N_\delta(\mathcal{W}) \leq C_\varepsilon\, \delta^{-(d^\star + \varepsilon)}. \quad (6)$$
Then, the result directly follows from [Sch20, Proposition 21].

This result suggests a simple strategy to estimate an upper bound of the intrinsic dimension from persistent homology.
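As a building block for this strategy, the following is a minimal sketch of the lifetime-sum computation on line 7 of Alg. 1. It assumes the `ripser` package [Bau21] mentioned below; the input array is a placeholder for a sampled set of iterates.

```python
import numpy as np
from ripser import ripser  # PH computation via the Ripser package [Bau21]

def lifetime_sum_0(points: np.ndarray, alpha: float = 1.0) -> float:
    """E^0_alpha(W): alpha-weighted sum of PH0 lifetimes of a point cloud,
    computed from the Vietoris-Rips filtration (cf. Definition 4)."""
    # 0-dimensional persistence diagram: array of (birth, death) pairs
    dgm0 = ripser(points, maxdim=0)['dgms'][0]
    # One connected component never dies; drop its infinite bar.
    finite = dgm0[np.isfinite(dgm0[:, 1])]
    lifetimes = finite[:, 1] - finite[:, 0]  # death minus birth
    return float(np.sum(lifetimes ** alpha))
```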
In particular, rewriting (5) in logarithmic form gives
$$\left(1 - \frac{\alpha}{d^\star + \varepsilon}\right)\log n + \log D_{\alpha,\varepsilon} \geq \log E_\alpha^0. \quad (7)$$
If $\log E_\alpha^0$ and $\log n$ are sampled from the data and yield an empirical slope $m$, then we see that $\alpha/(1 - m) \leq d^\star + \varepsilon$. In many cases, we observe that $d^\star \approx \alpha/(1 - m)$ (as further explained in Sec. 5.2), so we take $\alpha/(1 - m)$ as our PH-dimension estimate. We provide the full algorithm for computing this from sampled data in Alg. 1. Note that our algorithm is similar to that proposed in [AAF+20], although our method works for sets rather than probability measures. In our implementation we compute the homology with the celebrated Ripser package [Bau21] unless otherwise specified.

On computational complexity. Computing the Vietoris-Rips complex is an active area of research, as the worst-case time complexity is meaningless due to natural sparsity [Zom10]. Therefore, to calculate the time complexity of our estimator, we focus on analyzing the PH computation from the output simplices: calculating PH takes $O(p^\omega)$ time, where $\omega < 2.4$ is the exponent of matrix multiplication and $p$ is the number of simplices produced in the filtration [BP19]. Since we compute 0th-order homology, this implies that the computational complexity is $O(n^\omega)$, where $n$ is the number of points. In particular, this means that estimating the PH dimension takes $O(k n^\omega)$ time, where $k$ is the number of samples taken, assuming that the samples are evenly spaced in $[0, n]$.

4.2 Regularizing Deep Networks via Persistent Homology

Motivated by our results in Proposition 2, we theorize that controlling $\dim_{PH}\mathcal{W}$ would help in reducing the generalization error. Towards this end, we develop a regularizer for our training procedure that seeks to minimize $\dim_{PH}\mathcal{W}$ during training. If we let $L$ be our vanilla loss function, then we instead optimize the topological loss function $L_\lambda := L + \lambda \dim_{PH}\mathcal{W}$, where $\lambda \geq 0$ controls the scale of the regularization and $\mathcal{W}$ now denotes a sliding window of iterates (e.g., the latest 50 iterates during training). In this way, we aim to regularize the loss by considering the dimension of the ongoing training trajectory. In Alg. 1, we let $w_i$ be the stored weights from previous iterations for $i \in \{1, \dots, K-1\}$ and let $w_K$ be the current weight iterate. Since the persistence diagram computation and the linear regression are differentiable, our estimate of $\dim_{PH}$ is also differentiable and, if $w_k$ is sampled as in Alg. 1, is connected in the computation graph with $w_K$. We incorporate our regularizer into network training using PyTorch [PGM+19] and the associated persistent homology package torchph [CHU17, CHN19].

5 Experimental Evaluations

This section presents our experimental results in two parts: (i) analyzing and quantifying generalization in practical deep networks on real data, and (ii) ablation studies on a random diffusion process. In all experiments we assume that the intrinsic dimension is strictly larger than 1; hence we set $\alpha = 1$ unless specified otherwise. Further details are reported in the supplementary document.

5.1 Analyzing and Visualizing Deep Networks

Measuring generalization. We first verify our main claim by showing that the persistent homology dimension derived from the topological analysis of training trajectories correctly measures generalization. To demonstrate this, we apply our analysis to a wide variety of networks, training procedures, and hyperparameters.
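For concreteness, the remaining steps of Alg. 1, i.e., the subsampling loop and the log-log power-law fit of Eq. (7), can be sketched as follows, building on the `lifetime_sum_0` helper above; the particular sampling grid is an illustrative choice.

```python
import numpy as np

def ph_dimension(W: np.ndarray, n_min: int = 40, step: int = 20,
                 alpha: float = 1.0, seed: int = 0) -> float:
    """Estimate dim_PH of the iterate set W (shape K x d) as alpha / (1 - m),
    where m is the empirical slope of log E^0_alpha vs. log n (cf. Eq. (7))."""
    rng = np.random.default_rng(seed)
    ns, Es = [], []
    for n in range(n_min, len(W) + 1, step):
        idx = rng.choice(len(W), size=n, replace=False)  # random subsample
        ns.append(n)
        Es.append(lifetime_sum_0(W[idx], alpha))
    m, _ = np.polyfit(np.log(ns), np.log(Es), deg=1)  # least-squares line fit
    return alpha / (1.0 - m)
```

A robust variant would replace the least-squares fit with a random sample consensus fit, as in the PH0-RANSAC baseline of Sec. 5.2.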
In particular, we train AlexNet [KSH12], 5-layer (fcn-5) and 7-layer (fcn-7) fully connected networks, and a 9-layer convolutional network (cnn-9) on the MNIST, CIFAR10 and CIFAR100 datasets for multiple batch sizes and learning rates until convergence. For AlexNet, we consider 1000 iterates prior to convergence and, for the others, we consider only 200. Then, we estimate $\dim_{PH}$ on the last iterates by using Alg. 1: for varying $n$, we randomly pick $n$ of the last iterates, compute $E_\alpha^0$, and then use the relation given in (5). We obtain the ground-truth (GT) generalization error as the gap between training and test accuracies. Fig. 2 plots the PH-dimension with respect to test accuracy and signals a strong correlation between our PH-dimension and the actual performance gap: the lower the PH-dimension, the higher the test accuracy. Note that this result aligns well with that of [SSDE20]. The figure also shows that the intrinsic dimensions across different datasets can be similar, even if the parameters of the models vary greatly. This supports the recent hypothesis that what matters for generalization is the effective capacity and not the parameter count. In fact, the dimension should be as small as possible without collapsing important representation features onto the same dimension. The findings in Fig. 2 are further augmented by the results in Fig. 3, where a similar pattern is observed for AlexNet on CIFAR100.

Can dimPH capture intrinsic properties of trajectories? After revealing that our ID estimate is a gauge for generalization, we set out to investigate whether it really hinges on the intrinsic properties of the data. We train several instances of fcn-7 for different learning rates and batch sizes and compute the PH-dimension of each network using the training trajectories. We visualize the following in the rows of Fig. 4, sorted by $\dim_{PH}$: (i) the $200 \times 200$ distance matrix of the sequence of iterates $w_1, \dots, w_K$ (which is the basis for the PH computations), (ii) the corresponding $\log E_{\alpha=1}^0$ estimates as we sweep over $n$ in increasing fashion, and (iii) the persistence diagram for each distance matrix. It is clear that there is a strong correlation between $\dim_{PH}$ and the structure of the distance matrix. As the dimension increases, the matrix of distances becomes non-uniformly pixelated. The slope estimated from the total edge lengths in the second row is a quantity proportional to our dimension. Note that the slope decreases as our estimate increases (hence generalization tends to decrease). We further observe clusters emerging in the persistence diagrams. The latter has also been reported for better-generalizing networks, though using a different notion of a topological space [BGND+19].

Is dimPH a real indicator of generalization? To quantitatively assess the quality of our complexity measure, we gather two statistics: (i) we report the average p-value over different batch sizes for AlexNet trained with SGD on the CIFAR100 dataset; the value of $p = 0.0157 < 0.05$ confirms statistical significance. (ii) Next, we follow the recent literature [JFY+20] and consult the Kendall correlation coefficient (KCC). Similar to the p-value experiment above, we compute the KCC for AlexNet+SGD for different batch sizes (64, 100, 128) and attain (0.933, 0.357, 0.733), respectively. Note that a positive correlation signals that the test gap closes as $\dim_{PH}$ decreases.
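The KCC statistic used above can be computed directly with SciPy; in the following sketch the two arrays are illustrative placeholders rather than values from our experiments.

```python
from scipy.stats import kendalltau

# Hypothetical per-run measurements for one architecture/batch-size setting:
ph_dims = [2.1, 1.8, 2.4, 1.9, 2.6]         # estimated dim_PH per run
test_gaps = [0.31, 0.25, 0.38, 0.27, 0.41]  # train minus test accuracy

tau, p_value = kendalltau(ph_dims, test_gaps)
# A positive tau signals that the test gap closes as dim_PH decreases.
print(f"Kendall tau = {tau:.3f}, p-value = {p_value:.4f}")
```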
Both of these experiments agree with our theoretical insights connecting generalization to a topological characteristic of a neural network: the intrinsic dimension of training trajectories.

Effect of different training algorithms. We also verify that our method is algorithm-agnostic and does not require assumptions on the training algorithm. In particular, we show that the above analyses extend to both the RMSProp [TH12] and Adam [KB15] optimizers. Our results are visualized in Fig. 3, where we plot the dimension against the generalization error for varying optimizers and batch sizes. The results verify that the generalization error (which is inversely related to the test accuracy) is positively correlated with the PH dimension. This corroborates our previous results in Fig. 2 and, in particular, shows that our dimension-based estimator of the test gap is indeed algorithm-agnostic.

Encouraging generalization via regularizing dimPH. We furthermore verify that our topological regularizer is able to help control the test gap in accordance with our theory. We train a LeNet-5 network [LBBH98] on CIFAR10 [Kri09] and compare clean training against training with our topological regularizer with $\lambda$ set to 1. We train for 200 epochs with a batch size of 128 and report the train and test accuracies in Fig. 5 over a variety of learning rates. We ran 10 trials and found that the results differ significantly ($p < 0.05$) in all cases except lr = 0.01. Our topological optimizer produces the largest improvements when the network is otherwise not able to converge well. These results show that our regularizer behaves as expected: it is able to recover poor training dynamics. We note that this experiment uses a simple architecture and, as such, presents a proof of concept; we do not aim for state-of-the-art results. Furthermore, we directly compared our approach with the generalization estimator of [CMEM20], which most closely resembles our construction. We found that their method does not scale and is often numerically unreliable. For example, their cost grows quadratically with the number of network weights and linearly with the dataset size, while the cost of our method, using vectorized computation, is dominated by memory usage. Furthermore, for many of our test networks, their metric-space construction (which is based on the correlation between activations and used for the Vietoris-Rips complex) would be numerically brittle and result in degenerate persistent homology. These issues prevent [CMEM20] from being applicable in this scenario.

5.2 Ablation Studies

To assess the quality of our dimension estimator, we now perform ablation studies on synthetic data whose ground-truth ID is known. To this end, we use the synthetic experimental setting presented in [SSDE20] (see the supplementary document for details): we simulate a $d = 128$ dimensional stable Levy process with a varying number of points $100 \leq n \leq 1500$ and tail indices $1 \leq \beta \leq 2$. Note that the tail index equals the intrinsic dimension in this case, which is an order of magnitude lower than the ambient dimension in this experiment.

Can dimPH match the ground-truth ID? We first try to predict the GT intrinsic dimension by running Alg. 1 on this data. We also estimate the TwoNN dimension [FdRL17] to quantify how state-of-the-art ID estimators correlate with the GT in such a heavy-tailed regime. Our results are plotted in Fig. 6.
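The synthetic trajectories used here can be generated along the following lines (a sketch; note that SciPy's stability parameter `alpha` plays the role of the tail index β in our notation, and the skewness argument is set to 0 for a symmetric process):

```python
import numpy as np
from scipy.stats import levy_stable

def stable_levy_trajectory(n: int, d: int = 128, tail_index: float = 1.5,
                           seed: int = 0) -> np.ndarray:
    """Simulate n points of a d-dimensional symmetric stable Levy motion;
    as discussed above, its trajectory has intrinsic dimension tail_index."""
    rng = np.random.default_rng(seed)
    increments = levy_stable.rvs(alpha=tail_index, beta=0.0,
                                 size=(n, d), random_state=rng)
    return np.cumsum(increments, axis=0)  # sum the increments along time

W = stable_levy_trajectory(n=1000)  # ground-truth ID is 1.5 here
```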
Note that as $n$ increases, our estimator becomes smoother and approximates the GT well, up to a slight over-estimation, a repeatedly observed phenomenon [CCCR15]. TwoNN is not guaranteed to recover the box dimension; while it has been found useful in estimating the ID of data [ALMZ19], we find it less suitable in a heavy-tailed regime, as reflected in the plots. Our supplementary material provides further results on other, non-dynamics-like synthetic datasets, such as points on a sphere, where TwoNN can perform better. We also include a robust line-fitting variant of our approach, PH0-RANSAC, in which random sample consensus is applied iteratively. However, as our data is not corrupted by outliers, we do not observe a large improvement.

Effect of α on dimension estimation. While our theory requires $\alpha$ to be smaller than the intrinsic dimension of the trajectories, in all of our experiments we fix $\alpha = 1.0$. It is natural to ask whether this choice hampers our estimates. To see the effect, we vary $\alpha$ in the range $[0.5, 2.5]$ and plot our estimates in Fig. 7. We observe (blue curve) that our dimension estimate follows a U-shaped trend with increasing $\alpha$. We indicate the GT ID by a dashed red line and our estimate by a dashed green line; ideally, these two horizontal lines should overlap. It is noticeable that, given an oracle for the GT ID, it might be possible to optimize for an $\alpha^\star$. Yet, such information is not available for deep networks. Nevertheless, $\alpha = 1$ seems to yield reasonable performance, and we leave the estimation of a better $\alpha$ for future work. We provide additional results in our supplementary material.

6 Conclusion

In this paper, we developed novel connections between the $\dim_{PH}$ of the training trajectory and the generalization error. Using these insights, we proposed a method for estimating $\dim_{PH}$ from data; unlike previous work [SSDE20], our approach does not presuppose any conditions on the trajectory and offers a simple algorithm. By leveraging the differentiability of the PH computation, we showed that we can use $\dim_{PH}$ as a regularizer during training, which improved performance in different setups.

Societal Impact and Limitations. We believe that our study will not pose any negative societal or ethical consequences due to its theoretical nature. The main limitation of our study is that it solely considers the terms $E_\alpha^0$, whereas PH offers a much richer structure. Hence, as our next step, we will explore finer ways to incorporate PH into the analysis of generalization performance. We will further extend our results in terms of dimensions of measures by using the techniques presented in [CDE+21].

Acknowledgements

Umut Şimşekli's research is supported by the French government under the management of Agence Nationale de la Recherche as part of the "Investissements d'avenir" program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute).
1. What is the main contribution of the paper regarding the use of topological data analysis in neural network theory? 2. What are the concerns regarding the presentation quality and numerical tests in the review? 3. How does the reviewer assess the novelty and significance of the proposed approach compared to prior works? 4. What are the limitations and potential flaws in the methodology and results presented in the paper? 5. How could the authors improve the clarity and readability of their figures and better communicate their message?
Summary Of The Paper Review
Summary Of The Paper The paper proposes to use an estimator of the intrinsic dimension (ID) based on topological data analysis for two goals, both important in neural network theory: (1) as a proxy of the test accuracy, which allows estimating it without explicitly performing any validation, and (2) as a regularizer, by adding an ID-dependent term to the loss. A part of the paper is devoted to deriving (or recalling from the literature) rigorous properties of this ID estimator. Review The quality of the presentation is in general poor, and the results of the numerical tests seem to me already present in the literature, and in one case possibly flawed. Fig 2: the x axes are not labeled. Fig 3: the dataset on which the tests are performed is not specified. If the x axis (not labeled) is the difference between test and train accuracy, then this ranges between 30 and 40. These differences are enormous, for any dataset mentioned in the paper, pointing to possible flaws in the model. Fig 4 is unreadable, even when magnified. Its message is, at least to me, obscure. In the middle panel we see points approximately lying on a line, which floats up and down across the panels. What do we learn from this? In the bottom row, the two sets of points are labeled H_0 and infinity. What does this mean? Fig 7 is also unreadable: the axis labels are missing, and the different panels are also not labeled. The idea of regularizing learning by controlling the ID is not new. It was introduced in ref [MA] https://arxiv.org/abs/1806.02612 (2018). This is a key reference that should be cited and discussed. The results illustrated in the manuscript on this important point are not really convincing: Fig 5 shows that with the learning rate that yields the best test accuracy, the effect of ID regularization is practically zero (65% with and without regularization). Moreover, an accuracy of 65% on CIFAR10 is way below the state of the art for this dataset, which is 95% for convolutional NNs and almost 99% for architectures exploiting transformers (see for example https://paperswithcode.com/sota/image-classification-on-cifar-10). This also points to possible flaws. The intrinsic dimensions reported in Figs 2 and 3 are of order 2, while those reported in ref ALMZ19 and [MA] range between 10 and 100 for the ImageNet dataset. The reason for this qualitative discrepancy should be discussed.
NIPS
Title Factored Bandits

Abstract We introduce the factored bandits model, which is a framework for learning with limited (bandit) feedback, where actions can be decomposed into a Cartesian product of atomic actions. Factored bandits incorporate rank-1 bandits as a special case, but significantly relax the assumptions on the form of the reward function. We provide an anytime algorithm for stochastic factored bandits and upper and lower regret bounds for the problem that match up to constants. Furthermore, we show how a slight modification enables the proposed algorithm to be applied to utility-based dueling bandits. We obtain an improvement in the additive terms of the regret bound compared to state-of-the-art algorithms (the additive terms are dominating up to time horizons that are exponential in the number of arms).

1 Introduction

We introduce factored bandits, a bandit learning model where actions can be decomposed into a Cartesian product of atomic actions. As an example, consider an advertising task, where the actions can be decomposed into (1) selection of an advertisement from a pool of advertisements and (2) selection of a location on a web page out of a set of locations where it can be presented. The probability of a click is then a function of the quality of the two actions: the attractiveness of the advertisement and the visibility of the location it was placed at. In order to maximize the reward, the learner has to maximize the quality of actions along each dimension of the problem. Factored bandits generalize the above example to an arbitrary number of atomic actions and arbitrary reward functions satisfying some mild assumptions.

32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.

In a nutshell, at every round of a factored bandit game the player selects $L$ atomic actions, $a_1, \dots, a_L$, each from a corresponding finite set $\mathcal{A}_\ell$ of size $|\mathcal{A}_\ell|$ of possible actions. The player then observes a reward, which is an arbitrary function of $a_1, \dots, a_L$ satisfying some mild assumptions. For example, it can be a sum of the qualities of the atomic actions, a product of the qualities, or something else that does not necessarily need to have an analytical expression. The learner does not have to know the form of the reward function. Our way of dealing with the combinatorial complexity of the problem is through the introduction of a uniform identifiability assumption, by which the best action along each dimension is uniquely identifiable. A bit more precisely, when looking at a given dimension we call the collection of actions along all other dimensions a reference set. The uniform identifiability assumption states that, in expectation, the best action along a dimension outperforms any other action along the same dimension by a certain margin when both are played with the same reference set, irrespective of the composition of the reference set. This assumption is satisfied, for example, by the reward structure in linear and generalized linear bandits, but it is much weaker than the linearity assumption.

In Figure 1, we sketch the relations between factored bandits and other bandit models. We distinguish between bandits with explicit reward models, such as linear and generalized linear bandits, and bandits with weakly constrained reward models, including factored bandits and some relaxations of combinatorial bandits. A special case of factored bandits is rank-1 bandits [7].
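Before returning to rank-1 bandits, the following is a minimal sketch of a factored bandit environment with a product-form reward; the class name and the particular reward function are illustrative choices, since the model itself allows any reward satisfying uniform identifiability.

```python
import numpy as np

class FactoredBandit:
    """Toy factored bandit: L atomic sets; the mean reward is the product of
    the atomic qualities (one of many admissible reward forms), plus noise."""
    def __init__(self, qualities, noise_std=0.1, seed=0):
        self.qualities = [np.asarray(q, dtype=float) for q in qualities]
        self.noise_std = noise_std
        self.rng = np.random.default_rng(seed)

    def play(self, atomic_actions):
        """atomic_actions: one index per factor; returns a noisy reward."""
        mean = np.prod([q[a] for q, a in
                        zip(self.qualities, atomic_actions)])
        return mean + self.noise_std * self.rng.normal()

# Two factors, e.g. advertisements x page locations:
env = FactoredBandit(qualities=[[0.9, 0.5, 0.3], [0.8, 0.6]])
reward = env.play((0, 1))  # play advertisement 0 at location 1
```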
In rank-1 bandits the player selects two actions and the reward is the product of their qualities. Factored bandits generalize this to an arbitrary number of actions and significantly relax the assumption on the form of the reward function.

The relation to other bandit models is a bit more involved. There is an overlap between factored bandits and (generalized) linear bandits [1; 6], but neither is a special case of the other. When actions are represented by unit vectors, then for (generalized) linear reward functions the models coincide. However, (generalized) linear bandits allow a continuum of actions, whereas factored bandits relax the (generalized) linearity assumption on the reward structure to uniform identifiability. There is a partial overlap between factored bandits and combinatorial bandits [3]. The action set in combinatorial bandits is a subset of $\{0, 1\}^d$. If the action set is unrestricted, i.e. $\mathcal{A} = \{0, 1\}^d$, then combinatorial bandits can be seen as factored bandits with just two actions along each of the $d$ dimensions. However, typically in combinatorial bandits the action set is a strict subset of $\{0, 1\}^d$, and one of the parameters of interest is the permitted number of non-zero elements. This setting is not covered by factored bandits. While in the classical combinatorial bandits setting the reward structure is linear, there exist relaxations of the model, e.g. Chen et al. [4].

Dueling bandits are not directly related to factored bandits and, therefore, we depict them with faded dashed blocks in Figure 1. While the action set in dueling bandits can be decomposed into a product of the basic action set with itself (one for the first and one for the second action in the duel), the observations in dueling bandits are the identities of the winners rather than rewards. Nevertheless, we show that the proposed algorithm for factored bandits can be applied to utility-based dueling bandits.

The main contributions of the paper can be summarized as follows:
1. We introduce factored bandits and the uniform identifiability assumption.
2. Factored bandits with uniformly identifiable actions are a generalization of rank-1 bandits.
3. We provide an anytime algorithm for playing factored bandits under the uniform identifiability assumption in stochastic environments and analyze its regret. We also provide a lower bound matching up to constants.
4. Unlike the majority of bandit models, our approach does not require explicit specification or knowledge of the form of the reward function (as long as the uniform identifiability assumption is satisfied). For example, it can be a weighted sum of the qualities of atomic actions (as in linear bandits), a product thereof, or any other function not necessarily known to the algorithm.
5. We show that the algorithm can also be applied to utility-based dueling bandits, where the additive factor in the regret bound is reduced by a multiplicative factor of $K$ compared to the state of the art (where $K$ is the number of actions). It should be emphasized that in state-of-the-art regret bounds for utility-based dueling bandits the additive factor is dominating for time horizons below $\Omega(\exp(K))$, whereas in the new result it is only dominant for time horizons up to $O(K)$.
6. Our work provides a unified treatment of two distinct bandit models: rank-1 bandits and utility-based dueling bandits.

The paper is organized in the following way. In Section 2 we introduce the factored bandit model and the uniform identifiability assumption.
In Section 3 we provide algorithms for factored bandits and dueling bandits. In Section 4 we analyze the regret of our algorithm and provide matching upper and lower regret bounds. In Section 5 we compare our work empirically and theoretically with prior work. We finish with a discussion in Section 6.

2 Problem Setting

2.1 Factored bandits

We define the game in the following way. We assume that the set of actions $\mathcal{A}$ can be represented as a Cartesian product of atomic actions, $\mathcal{A} = \bigotimes_{\ell=1}^{L} \mathcal{A}_\ell$. We call the elements of $\mathcal{A}_\ell$ atomic arms. For rounds $t = 1, 2, \dots$ the player chooses an action $A_t \in \mathcal{A}$ and observes a reward $r_t$ drawn according to an unknown probability distribution $p_{A_t}$ (i.e., the game is "stochastic"). We assume that the mean rewards $\mu(a) = \mathbb{E}[r_t \,|\, A_t = a]$ are bounded in $[-1, 1]$ and that the noise $\eta_t = r_t - \mu(A_t)$ is conditionally 1-sub-Gaussian. Formally, this means that
$$\forall \lambda \in \mathbb{R}: \quad \mathbb{E}\left[e^{\lambda \eta_t} \,|\, \mathcal{F}_{t-1}\right] \leq \exp\left(\frac{\lambda^2}{2}\right),$$
where $\mathcal{F}_t := \{A_1, r_1, A_2, r_2, \dots, A_t, r_t\}$ is the filtration defined by the history of the game up to and including round $t$. We denote $a^* = (a_1^*, a_2^*, \dots, a_L^*) = \operatorname{argmax}_{a \in \mathcal{A}} \mu(a)$.

Definition 1 (uniform identifiability). An atomic set $\mathcal{A}_k$ has a uniformly identifiable best arm $a_k^*$ if and only if
$$\forall a \in \mathcal{A}_k \setminus \{a_k^*\}: \quad \Delta_k(a) := \min_{b \in \bigotimes_{\ell \neq k} \mathcal{A}_\ell} \mu(a_k^*, b) - \mu(a, b) > 0. \quad (1)$$

We assume that all atomic sets have uniformly identifiable best arms. The goal is to minimize the pseudo-regret, which is defined as
$$\operatorname{Reg}_T = \mathbb{E}\left[\sum_{t=1}^{T} \mu(a^*) - \mu(A_t)\right].$$

Due to the generality of the uniform identifiability assumption, we cannot upper bound the instantaneous regret $\mu(a^*) - \mu(A_t)$ in terms of the gaps $\Delta_\ell(a_\ell)$. However, a sequential application of (1) provides a lower bound
$$\mu(a^*) - \mu(a) = \mu(a^*) - \mu(a_1, a_2^*, \dots, a_L^*) + \mu(a_1, a_2^*, \dots, a_L^*) - \mu(a) \geq \Delta_1(a_1) + \mu(a_1, a_2^*, \dots, a_L^*) - \mu(a) \geq \dots \geq \sum_{\ell=1}^{L} \Delta_\ell(a_\ell). \quad (2)$$

For the upper bound, let $\kappa$ be a problem-dependent constant such that $\mu(a^*) - \mu(a) \leq \kappa \sum_{\ell=1}^{L} \Delta_\ell(a_\ell)$ holds for all $a$. Since the mean rewards are in $[-1, 1]$, the condition is always satisfied by $\kappa = \min_{a,\ell} 2\Delta_\ell^{-1}(a_\ell)$, and by equation (2) $\kappa$ is always larger than 1. The constant $\kappa$ appears in the regret bounds. In the extreme case when $\kappa = \min_{a,\ell} 2\Delta_\ell^{-1}(a_\ell)$ the regret guarantees are fairly weak. However, in many specific cases mentioned in the previous section, $\kappa$ is typically small or even 1. We emphasize that the algorithms proposed in the paper do not require knowledge of $\kappa$. Thus, the dependence of the regret bounds on $\kappa$ is not a limitation, and the algorithms automatically adapt to more favorable environments.

2.2 Dueling bandits

The set of actions in dueling bandits is factored into $\mathcal{A} \times \mathcal{A}$. However, strictly speaking, the problem is not a factored bandit problem, because the observations in dueling bandits are not the rewards.¹ When playing two arms, $a$ and $b$, we observe the identity of the winning arm, but the regret is typically defined via the average relative quality of $a$ and $b$ with respect to a "best" arm in $\mathcal{A}$. The literature distinguishes between different dueling bandit settings. We focus on utility-based dueling bandits [14] and show that they satisfy the uniform identifiability assumption. In utility-based dueling bandits, it is assumed that each arm has a utility $u(a)$ and that the winning probabilities are defined by $\mathbb{P}[a \text{ wins against } b] = \nu(u(a) - u(b))$ for a monotonically increasing link function $\nu$. Let $w(a, b)$ be 1 if $a$ wins against $b$ and 0 if $b$ wins against $a$. Let $a^* := \operatorname{argmax}_{a \in \mathcal{A}} u(a)$ denote the best arm.
Then for any arm $b \in \mathcal{A}$ and any $a \in \mathcal{A} \setminus \{a^*\}$, it holds that
$$\mathbb{E}[w(a^*, b)] - \mathbb{E}[w(a, b)] = \nu(u(a^*) - u(b)) - \nu(u(a) - u(b)) > 0,$$
which satisfies the uniform identifiability assumption. For the rest of the paper we consider the linear link function $\nu(x) = \frac{1+x}{2}$. The regret is then defined by
$$\operatorname{Reg}_T = \mathbb{E}\left[\sum_{t=1}^{T} \frac{u(a^*) - u(A_t)}{2} + \frac{u(a^*) - u(B_t)}{2}\right]. \quad (3)$$

3 Algorithms

Although in theory an asymptotically optimal algorithm for any structured bandit problem was presented in [5], for factored bandits this algorithm not only requires solving an intractable semi-infinite linear program at every round, but it also suffers from additive constants that are exponential in the number of atomic actions $L$. An alternative naive approach could be an adaptation of sparring [16], where each factor runs an independent $K$-armed bandit algorithm and does not observe the atomic arm choices of the other factors. The downside of sparring algorithms, both theoretically and practically, is that each algorithm operates under limited information and the rewards become non-i.i.d. from the perspective of each individual factor.

Our Temporary Elimination Algorithm (TEA, Algorithm 1) avoids these downsides. It runs independent instances of the Temporary Elimination Module (TEM, Algorithm 3) in parallel, one for each factor of the problem. Each TEM operates on a single atomic set. The TEA is responsible for the synchronization of the TEM instances. Two main ingredients ensure information efficiency. First, we use relative comparisons between arms instead of comparing absolute mean rewards. This cancels out the effect of non-stationary means. The second idea is to use local randomization in order to obtain unbiased estimates of the relative performance without having to actually play each atomic arm with the same reference, which would have led to prohibitive time complexity.

Algorithm 1: Factored Bandit TEA
 1: ∀ℓ: TEM_ℓ ← new TEM(A_ℓ)
 2: t ← 1
 3: for s = 1, 2, ... do
 4:   M_s ← max_ℓ |TEM_ℓ.getActiveSet(f(t)^{-1})|
 5:   T_s ← (t, t + 1, ..., t + M_s − 1)
 6:   for ℓ ∈ {1, ..., L} in parallel do
 7:     TEM_ℓ.scheduleNext(T_s)
 8:   for t ∈ T_s do
 9:     r_t ← play((TEM_ℓ.A_t)_{ℓ=1,...,L})
10:   for ℓ ∈ {1, ..., L} in parallel do
11:     TEM_ℓ.feedback((r_{t'})_{t' ∈ T_s})
12:   t ← t + |T_s|

Algorithm 2: Dueling Bandit TEA
 1: TEM ← new TEM(A)
 2: t ← 1
 3: for s = 1, 2, ... do
 4:   A_s ← TEM.getActiveSet(f(t)^{-1})
 5:   T_s ← (t, t + 1, ..., t + |A_s| − 1)
 6:   TEM.scheduleNext(T_s)
 7:   for b ∈ A_s do
 8:     r_t ← play(TEM.A_t, b)
 9:     t ← t + 1
10:   TEM.feedback((r_{t'})_{t' ∈ T_s})

¹In principle, it is possible to formulate a more general problem that would incorporate both factored bandits and dueling bandits. But such a definition becomes too general and hard to work with. For the sake of clarity we have avoided this path.

The TEM instances run in parallel in externally synchronized phases. Each module selects active arms in getActiveSet(δ), such that the optimal arm is included with high probability. The length of a phase is chosen such that each module can play each potentially optimal arm at least once in every phase. All modules schedule all arms for the phase in scheduleNext. This is done by choosing arms in a round-robin fashion (with random choices if not all arms can be played equally often) and ordering them randomly. All scheduled plays are executed, and the modules update their statistics through the call of the feedback routine. The modules use slowly increasing lower confidence bounds on the gaps in order to temporarily eliminate arms that are suboptimal with high probability.
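The core of each module is the lower-confidence-bound test performed in getActiveSet (spelled out in Algorithm 3 in the next subsection). The following is a minimal sketch of that test, given pairwise statistics D[j, i] (summed reward differences of arm j over arm i) and N[j, i] (comparison counts); variable names are illustrative.

```python
import numpy as np

def f(t: float) -> float:
    """Confidence schedule used throughout: f(t) = (t + 1) log^2(t + 1)."""
    return (t + 1) * np.log(t + 1) ** 2

def get_active_set(D: np.ndarray, N: np.ndarray, delta: float) -> list:
    """Arms whose lower-confidence-bound gap is <= 0 are kept active.
    D[j, i]: summed reward differences of arm j over arm i;
    N[j, i]: number of paired plays of arms j and i."""
    K = D.shape[0]
    active = []
    for i in range(K):
        gap_lcb = -np.inf
        for j in range(K):
            if j == i or N[j, i] == 0:
                continue
            lcb = D[j, i] / N[j, i] - np.sqrt(
                12.0 * np.log(2 * K * f(N[j, i]) / delta) / N[j, i])
            gap_lcb = max(gap_lcb, lcb)
        if gap_lcb <= 0:
            active.append(i)
    return active if active else list(range(K))  # never eliminate all arms
```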
In all algorithms, we use $f(t) := (t + 1)\log^2(t + 1)$.

Dueling bandits For dueling bandits we only use a single instance of TEM. In each phase the algorithm generates two random permutations of the active set and plays the corresponding actions from the two lists against each other. (The first permutation is generated in Line 6 and the second in Line 7 of Algorithm 2.)

3.1 TEM

The TEM tracks empirical differences between the rewards of all arms $a_i$ and $a_j$ in $D_{ij}$. Based on these differences, it computes lower confidence bounds for all gaps. The set $K^*$ contains those arms for which all LCB gaps are zero. Additionally, the algorithm keeps track of arms that were never removed from $B$. During a phase, each arm from $K^*$ is played at least once, but only arms in $B$ can be played more than once. This is necessary to keep the additive constants at $M\log(K)$ instead of $MK$.

Algorithm 3: Temporary Elimination Module (TEM) Implementation
  global: N_{i,j}, D_{i,j}, K*, B
 1: Function initialize(K)
 2:   ∀a_i, a_j ∈ K: N_{i,j}, D_{i,j} ← 0, 0
 3:   B ← K
 4:
 5: Function getActiveSet(δ)
 6:   if ∃ N_{i,j} = 0 then
 7:     K* ← K
 8:   else
 9:     for a_i ∈ K do
10:       Δ̂_LCB(a_i) ← max_{a_j ≠ a_i} D_{j,i} / N_{j,i} − sqrt(12 log(2K f(N_{j,i}) δ^{-1}) / N_{j,i})
11:     K* ← {a_i ∈ K | Δ̂_LCB(a_i) ≤ 0}
12:     if |K*| = 0 then
13:       K* ← K
14:     B ← B ∩ K*
15:     if |B| = 0 then
16:       B ← K*
17:   return K*
18:
19: Function scheduleNext(T)
20:   for a ∈ K* do
21:     t̃ ← random unassigned index in T
22:     A_{t̃} ← a
23:   while not all A_{t_s}, ..., A_{t_s + |T| − 1} assigned do
24:     for a ∈ B do
25:       t̃ ← random unassigned index in T
26:       A_{t̃} ← a
27:
28: Function feedback({R_t}_{t = t_s, ..., t_s + M_s − 1})
29:   ∀a_i: N_s^i, R_s^i ← 0, 0
30:   for t = t_s, ..., t_s + M_s − 1 do
31:     R_s^{A_t} ← R_s^{A_t} + R_t
32:     N_s^{A_t} ← N_s^{A_t} + 1
33:   for a_i, a_j ∈ K* do
34:     D_{i,j} ← D_{i,j} + min{N_s^i, N_s^j} (R_s^i / N_s^i − R_s^j / N_s^j)
35:     N_{i,j} ← N_{i,j} + min{N_s^i, N_s^j}

4 Analysis

We start this section with the main theorem, which bounds the number of times the TEM pulls sub-optimal arms. Then we prove upper bounds on the regret of our main algorithms. Finally, we prove a lower bound for factored bandits that shows that our regret bound is tight up to constants.

4.1 Upper bound for the number of sub-optimal pulls by TEM

Theorem 1. For any TEM submodule $TEM_\ell$ with an arm set of size $K = |\mathcal{A}_\ell|$, running in the TEA algorithm with $M := \max_\ell |\mathcal{A}_\ell|$, and any suboptimal atomic arm $a \neq a^*$, let $N_t(a)$ denote the number of times the TEM has played arm $a$ up to time $t$. Then there exist constants $C(a) \leq M$ for $a \neq a^*$ such that
$$\mathbb{E}[N_t(a)] \leq \frac{120}{\Delta(a)^2}\left(\log(2Kt\log^2(t)) + 4\log\left(\frac{48\log(2Kt\log^2(t))}{\Delta(a)^2}\right)\right) + C(a),$$
where $\sum_{a \neq a^*} C(a) \leq M\log(K) + \frac{5}{2}K$ in the case of factored bandits, and $C(a) \leq \frac{5}{2}$ for dueling bandits.

Proof sketch. [The complete proof is provided in the Appendix.]

Step 1 We show that the confidence intervals are constructed in such a way that the probability of all confidence intervals holding at all epochs from $s'$ onwards is at least $1 - \max_{s \geq s'} f(t_s)^{-1}$. This requires a novel concentration inequality (Lemma 3) for a sum of conditionally $\sigma_s$-sub-Gaussian random variables, where $\sigma_s$ can depend on the history. This technique might be useful for other problems as well.

Step 2 We split the number of pulls into pulls that happen in rounds where the confidence intervals hold and those where they fail: $N_t(a) = N_t^{\mathrm{conf}}(a) + N_t^{\overline{\mathrm{conf}}}(a)$. We can bound the expectation of $N_t^{\overline{\mathrm{conf}}}(a)$ based on the failure probabilities given by $\mathbb{P}[\text{conf. failure at round } s] \leq \frac{1}{f(t_s)}$.

Step 3 We define $s'$ as the last round in which the confidence intervals held and $a$ was not eliminated.
We can split $N_t^{\mathrm{conf}}(a) = N_{t_{s'}}^{\mathrm{conf}}(a) + C(a)$ and use the confidence intervals to upper bound $N_{t_{s'}}^{\mathrm{conf}}(a)$. The upper bound on $\sum_a C(a)$ requires special handling of arms that were eliminated once, and carefully separating the cases where the confidence intervals never fail and those where they might fail.

4.2 Regret upper bound for Factored Bandit TEA

A regret bound for the Factored Bandit TEA algorithm, Algorithm 1, is provided in the following theorem.

Theorem 2. The pseudo-regret of Algorithm 1 at any time $T$ is bounded by
$$\operatorname{Reg}_T \leq \kappa\left(\sum_{\ell=1}^{L}\sum_{a_\ell \neq a_\ell^*} \frac{120}{\Delta_\ell(a_\ell)}\left(\log(2|\mathcal{A}_\ell| t \log^2(t)) + 4\log\left(\frac{48\log(2|\mathcal{A}_\ell| t \log^2(t))}{\Delta_\ell(a_\ell)^2}\right)\right)\right) + \max_\ell |\mathcal{A}_\ell| \sum_\ell \log(|\mathcal{A}_\ell|) + \sum_\ell \frac{5}{2}|\mathcal{A}_\ell|.$$

Proof. The design of TEA allows the application of Theorem 1 to each instance of TEM. Using $\mu(a^*) - \mu(a) \leq \kappa\sum_{\ell=1}^{L}\Delta_\ell(a_\ell)$, we have that
$$\operatorname{Reg}_T = \mathbb{E}\left[\sum_{t=1}^{T}\mu(a^*) - \mu(a_t)\right] \leq \kappa\sum_{\ell=1}^{L}\sum_{a_\ell \neq a_\ell^*}\mathbb{E}[N_T(a_\ell)]\,\Delta_\ell(a_\ell).$$
Applying Theorem 1 to the expected number of pulls and bounding the sums $\sum_a C(a)\Delta(a) \leq \sum_a C(a)$ completes the proof.

4.3 Dueling bandits

A regret bound for the Dueling Bandit TEA algorithm (DBTEA), Algorithm 2, is provided in the following theorem.

Theorem 3. The pseudo-regret of Algorithm 2 for any utility-based dueling bandit problem at any time $T$ (defined in equation (3)) satisfies $\operatorname{Reg}_T \leq O\left(\sum_{a \neq a^*}\frac{\log(T)}{\Delta(a)}\right) + O(K)$.

Proof. At every round, each arm in the active set is played once in position $A$ and once in position $B$ in play(A, B). Denote by $N_t^A(a)$ the number of plays of an arm $a$ in the first position, by $N_t^B(a)$ the number of plays in the second position, and by $N_t(a)$ the total number of plays of the arm. We have
$$\operatorname{Reg}_T = \sum_{a \neq a^*}\mathbb{E}[N_t(a)]\Delta(a) = \sum_{a \neq a^*}\mathbb{E}[N_t^A(a) + N_t^B(a)]\Delta(a) = \sum_{a \neq a^*} 2\,\mathbb{E}[N_t^A(a)]\Delta(a).$$
The proof is completed by applying Theorem 1 to bound $\mathbb{E}[N_t^A(a)]$.

4.4 Lower bound

We show that without additional assumptions the regret bound cannot be improved. The lower bound is based on the following construction. The mean reward of every arm is given by $\mu(a) = \mu(a^*) - \sum_\ell \Delta_\ell(a_\ell)$. The noise is Gaussian with variance 1. In this problem, the regret can be decomposed into a sum over atomic arms of the regret induced by pulling these arms: $\operatorname{Reg}_T = \sum_\ell \sum_{a_\ell \in \mathcal{A}_\ell} \mathbb{E}[N_T(a_\ell)]\Delta_\ell(a_\ell)$. Assume that we only want to minimize the regret induced by a single atomic set $\mathcal{A}_\ell$, and further assume that $\Delta_k(a)$ for all $k \neq \ell$ are given. Then the problem reduces to a regular $K$-armed bandit problem. The asymptotic lower bound for $K$-armed bandits under 1-Gaussian noise goes back to [10]: for any consistent strategy $\theta$, the asymptotic regret is lower bounded by $\liminf_{T \to \infty} \frac{\operatorname{Reg}_T^\theta}{\log(T)} \geq \sum_{a \neq a^*}\frac{2}{\Delta(a)}$. Due to the regret decomposition, we can apply this bound to every atomic set separately. Therefore, the asymptotic regret in the factored bandit problem is
$$\liminf_{T \to \infty} \frac{\operatorname{Reg}_T^\theta}{\log(T)} \geq \sum_{\ell=1}^{L}\sum_{a_\ell \neq a_\ell^*}\frac{2}{\Delta_\ell(a_\ell)}.$$
This shows that our general upper bound is asymptotically tight up to leading constants and $\kappa$.

κ-gap We note that there is a problem-dependent gap of $\kappa$ between our upper and lower bounds. We currently believe that this gap stems from the difference between the information and computational complexity of the problem. Our algorithm operates on each factor of the problem independently of the other factors and is based on the "optimism in the face of uncertainty" principle. It is possible to construct examples in which the optimal strategy requires playing surely sub-optimal arms for the sake of information gain. For example, constructions of this kind were used by Lattimore and Szepesvári [11] to show the suboptimality of optimism-based algorithms.
Therefore, we believe that removing $\kappa$ from the upper bound is possible, but requires a fundamentally different algorithm design. What is not clear is whether it is possible to remove $\kappa$ without a significant sacrifice in computational complexity.

5 Comparison to Prior Work

5.1 Stochastic rank-1 bandits

Stochastic rank-1 bandits, introduced by Katariya et al. [7], are a special case of factored bandits. The authors published a refined algorithm for Bernoulli rank-1 bandits using KL confidence sets in Katariya et al. [8]. We compare our theoretical results with the first paper because it matches our problem assumptions. In our experiments, we provide a comparison to both the original algorithm and the KL version. In the stochastic rank-1 problem there are only 2 atomic sets, of sizes $K_1$ and $K_2$. The matrix of expected rewards for each pair of arms is of rank 1, which means that for each $u \in \mathcal{A}_1$ and $v \in \mathcal{A}_2$, there exist $\bar{u}, \bar{v} \in [0, 1]$ such that $\mathbb{E}[r(u, v)] = \bar{u}\cdot\bar{v}$. The Stochastic rank-1 Elimination algorithm introduced by Katariya et al. is a typical elimination-style algorithm. It requires knowledge of the time horizon and uses phases that increase exponentially in length. In each phase, all arms are played uniformly. At the end of a phase, all arms that are sub-optimal with high probability are eliminated.

Theoretical comparison It is hard to make a fair comparison of the theoretical bounds because TEA operates under much weaker assumptions. Both algorithms have a regret bound of $O\left(\left(\sum_{u \in \mathcal{A}_1 \setminus u^*}\frac{1}{\Delta_1(u)} + \sum_{v \in \mathcal{A}_2 \setminus v^*}\frac{1}{\Delta_2(v)}\right)\log(t)\right)$. The problem-independent multiplicative factors hidden under the $O$ are smaller for TEA, even without considering that rank-1 Elimination requires a doubling trick for anytime applications. However, the problem-dependent factors are in favor of rank-1 Elimination, where the gaps correspond to the mean difference under uniform sampling, $(\bar{u}^* - \bar{u})\sum_{v \in \mathcal{A}_2}\bar{v}/K_2$. In factored bandits, the gaps are defined as $(\bar{u}^* - \bar{u})\min_{v \in \mathcal{A}_2}\bar{v}$, which is naturally smaller. The difference stems from the different problem assumptions. The stronger assumptions of rank-1 bandits make elimination easier as the number of eliminated suboptimal arms increases. The TEA analysis holds in cases where it becomes harder to identify suboptimal arms after the removal of bad arms. This may happen when highly suboptimal atomic actions in one factor provide more discriminative information about atomic actions in other factors than close-to-optimal atomic actions in the same factor (this follows the spirit of the illustration of the suboptimality of optimistic algorithms in [11]). We leave it to future work to improve the upper bound of TEA under stronger model assumptions. In terms of memory and computational complexity, TEA is inferior to regular elimination-style algorithms, because we need to keep track of the relative performances of the arms. This means both the computational and memory complexities are $O(\sum_\ell |\mathcal{A}_\ell|^2)$ per round in the worst case, as opposed to rank-1 Elimination, which only requires $O(|\mathcal{A}_1| + |\mathcal{A}_2|)$.

Empirical comparison The number of arms is set to 16 in both sets. We always fix $\bar{u}^* - \bar{u} = \bar{v}^* - \bar{v} = 0.2$ and vary the absolute value of $\bar{u}^*\bar{v}^*$. As expected, rank1ElimKL has an advantage when the Bernoulli random variables are strongly biased towards one side. When the bias is close to $\frac{1}{2}$, we clearly see the better constants of TEA. In the evaluation we clearly outperform rank-1 Elimination over different parameter settings and even beat the KL-optimized version if the means are not too close to zero or one.
This supports that our algorithm does not only provide a more practical anytime version of elimination, but also improves on the constant factors in the regret. We believe that our algorithm design can be used to improve other elimination-style algorithms as well.

5.2 Dueling Bandits: Related Work

To the best of our knowledge, the proposed Dueling Bandit TEA is the first algorithm that satisfies the following three criteria simultaneously for utility-based dueling bandits:
• It requires no prior knowledge of the time horizon (nor does it use the doubling trick or restarts).
• Its pseudo-regret is bounded by $O\left(\sum_{a \neq a^*}\frac{\log(t)}{\Delta(a)}\right)$.
• There are no additive constants that dominate the regret for time horizons $T > O(K)$.

We want to stress the importance of the last point. For all state-of-the-art algorithms known to us, when the number of actions $K$ is moderately large, the additive term is dominating for any realistic time horizon $T$. In particular, Ailon et al. [2] introduce three algorithms for the utility-based dueling bandit problem. The regret of Doubler scales with $O(\log^2(t))$. The regret of MultiSBM has an additive term of order $\sum_{a \neq a^*}\frac{K}{\Delta(a)}$ that is dominating for $T < \Omega(\exp(K))$. The last algorithm, Sparring, has no theoretical analysis. Algorithms based on the weaker Condorcet winner assumption apply to the utility-based setting, but they all suffer from equally large or even larger additive terms. The RUCB algorithm introduced by Zoghi et al. [17] has an additive term in the bound that is defined as $2D\Delta_{\max}\log(2D)$, for $\Delta_{\max} = \max_{a \neq a^*}\Delta(a)$ and $D > \frac{1}{2}\sum_{a_i \neq a^*}\sum_{a_j \neq a_i}\frac{4\alpha}{\min\{\Delta(a_i)^2, \Delta(a_j)^2\}}$. By unwrapping these definitions, we see that the RUCB regret bound has an additive term of order $2D\Delta_{\max} \geq \sum_{a \neq a^*}\frac{K}{\Delta(a)}$. This is again the dominating term for time horizons $T \leq \Omega(\exp(K))$. The same applies to the RMED algorithm introduced by Komiyama et al. [9], which has an additive term of $O(K^2)$. (The dependencies on the gaps are hidden behind the O-notation.) The D-TS algorithm by Wu and Liu [13], based on Thompson Sampling, shows one of the best empirical performances, but its regret bound includes an additive constant of order $O(K^3)$. Other algorithms known to us, Interleaved Filter [16], Beat the Mean [15], and SAVAGE [12], all require knowledge of the time horizon $T$ in advance.

Empirical comparison We have used the framework provided by Komiyama et al. [9] and use the same utility for all sub-optimal arms. In Figure 3, the winning probability of the optimal arm over suboptimal arms is always set to 0.7, and we run the experiment for different numbers of arms $K$. TEA outperforms all algorithms besides the RMED variants, as long as the number of arms is sufficiently big. To show that there also exists a regime where the improved constants gain an advantage over RMED, we conducted a second experiment in Figure 4 (in the Appendix), where we set the winning probability to 0.95² and significantly increased the number of arms. The evaluation shows that the additive terms are indeed non-negligible and that Dueling Bandit TEA outperforms all baseline algorithms when the number of arms is sufficiently large.

6 Discussion

We have presented the factored bandits model and the uniform identifiability assumption, which requires no knowledge of the reward model. We presented an algorithm for playing stochastic factored bandits with uniformly identifiable actions and provided matching upper and lower bounds for the problem up to constant factors.
Our algorithm and proofs might serve as a template for turning other elimination-style algorithms into improved anytime algorithms. Factored bandits with uniformly identifiable actions generalize rank-1 bandits. We have also provided a unified framework for the analysis of factored bandits and utility-based dueling bandits. Furthermore, we improve the additive constants in the regret bound compared to state-of-the-art algorithms for utility-based dueling bandits.

There are multiple potential directions for future research. One example mentioned in the text is the possibility of improving the regret bound when additional restrictions on the form of the reward function are introduced, or of improving the lower bound when algorithms are restricted in computational or memory complexity. Another example is the adversarial version of the problem.

²Smaller gaps show the same behavior but require more arms and more time steps.
1. What is the focus and contribution of the paper on bandit models? 2. What are the strengths of the proposed approach, particularly in its ability to generalize and provide better bounds? 3. What are the weaknesses of the paper regarding experimental comparisons and computational complexity? 4. How does the reviewer assess the clarity and quality of the paper's content?
Review
Review This paper proposes a bandit model where the actions can be decomposed into the Cartesian product of atomic actions. It generalizes rank-1 bandits, relates the framework to combinatorial and contextual bandits, and results in better bounds in the dueling bandit setting. Simple synthetic experiments show this framework indeed leads to better regret in practice. The theoretical bounds are reasonable, although some intuition explaining the bounds would be helpful. 1. Please consider moving the experiments to the main paper. 2. Please compare the performance of your algorithm (with possibly higher rank, since you can incorporate it) to rank-1 bandits in the real-world experiments proposed in that paper. This would make the paper much stronger. 3. Please formally characterize and compare the computational complexity of your algorithm to the elimination-based algorithms. 4. Lines 127-130: Please explain this better. *** After Rebuttal *** I have gone through the other reviews and the author response, and my opinion remains unchanged.
NIPS
Title Factored Bandits Abstract We introduce the factored bandits model, which is a framework for learning with limited (bandit) feedback, where actions can be decomposed into a Cartesian product of atomic actions. Factored bandits incorporate rank-1 bandits as a special case, but significantly relax the assumptions on the form of the reward function. We provide an anytime algorithm for stochastic factored bandits and up to constants matching upper and lower regret bounds for the problem. Furthermore, we show how a slight modification enables the proposed algorithm to be applied to utilitybased dueling bandits. We obtain an improvement in the additive terms of the regret bound compared to state-of-the-art algorithms (the additive terms are dominating up to time horizons that are exponential in the number of arms). 1 Introduction We introduce factored bandits, which is a bandit learning model, where actions can be decomposed into a Cartesian product of atomic actions. As an example, consider an advertising task, where the actions can be decomposed into (1) selection of an advertisement from a pool of advertisements and (2) selection of a location on a web page out of a set of locations, where it can be presented. The probability of a click is then a function of the quality of the two actions, the attractiveness of the advertisement and the visibility of the location it was placed at. In order to maximize the reward the learner has to maximize the quality of actions along each dimension of the problem. Factored bandits generalize the above example to an arbitrary number of atomic actions and arbitrary reward functions satisfying some mild assumptions. 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. In a nutshell, at every round of a factored bandit game the player selects L atomic actions, a1, . . . , aL, each from a corresponding finite set A� of size |A�| of possible actions. The player then observes a reward, which is an arbitrary function of a1, . . . , aL satisfying some mild assumptions. For example, it can be a sum of the quality of atomic actions, a product of the qualities, or something else that does not necessarily need to have an analytical expression. The learner does not have to know the form of the reward function. Our way of dealing with combinatorial complexity of the problem is through introduction of unique identifiability assumption, by which the best action along each dimension is uniquely identifiable. A bit more precisely, when looking at a given dimension we call the collection of actions along all other dimensions a reference set. The unique identifiability assumption states that in expectation the best action along a dimension outperforms any other action along the same dimension by a certain margin when both are played with the same reference set, irrespective of the composition of the reference set. This assumption is satisfied, for example, by the reward structure in linear and generalized linear bandits, but it is much weaker than the linearity assumption. In Figure 1, we sketch the relations between factored bandits and other bandit models. We distinguish between bandits with explicit reward models, such as linear and generalized linear bandits, and bandits with weakly constrained reward models, including factored bandits and some relaxations of combinatorial bandits. A special case of factored bandits are rank-1 bandits [7]. 
Our way of dealing with the combinatorial complexity of the problem is through the introduction of a uniform identifiability assumption, by which the best action along each dimension is uniquely identifiable. A bit more precisely, when looking at a given dimension we call the collection of actions along all other dimensions a reference set. The uniform identifiability assumption states that, in expectation, the best action along a dimension outperforms any other action along the same dimension by a certain margin when both are played with the same reference set, irrespective of the composition of the reference set. This assumption is satisfied, for example, by the reward structure in linear and generalized linear bandits, but it is much weaker than the linearity assumption. In Figure 1, we sketch the relations between factored bandits and other bandit models. We distinguish between bandits with explicit reward models, such as linear and generalized linear bandits, and bandits with weakly constrained reward models, including factored bandits and some relaxations of combinatorial bandits. A special case of factored bandits are rank-1 bandits [7]. In rank-1 bandits the player selects two actions and the reward is the product of their qualities. Factored bandits generalize this to an arbitrary number of actions and significantly relax the assumption on the form of the reward function. The relation to other bandit models is a bit more involved. There is an overlap between factored bandits and (generalized) linear bandits [1; 6], but neither is a special case of the other. When actions are represented by unit vectors, then for (generalized) linear reward functions the models coincide. However, (generalized) linear bandits allow a continuum of actions, whereas factored bandits relax the (generalized) linearity assumption on the reward structure to uniform identifiability. There is a partial overlap between factored bandits and combinatorial bandits [3]. The action set in combinatorial bandits is a subset of $\{0,1\}^d$. If the action set is unrestricted, i.e. $\mathcal{A} = \{0,1\}^d$, then combinatorial bandits can be seen as factored bandits with just two actions along each of the $d$ dimensions. However, typically in combinatorial bandits the action set is a strict subset of $\{0,1\}^d$ and one of the parameters of interest is the permitted number of non-zero elements. This setting is not covered by factored bandits. While in the classical combinatorial bandits setting the reward structure is linear, there exist relaxations of the model, e.g. Chen et al. [4]. Dueling bandits are not directly related to factored bandits and, therefore, we depict them with faded dashed blocks in Figure 1. While the action set in dueling bandits can be decomposed into a product of the basic action set with itself (one for the first and one for the second action in the duel), the observations in dueling bandits are the identities of the winners rather than rewards. Nevertheless, we show that the proposed algorithm for factored bandits can be applied to utility-based dueling bandits. The main contributions of the paper can be summarized as follows: 1. We introduce factored bandits and the uniform identifiability assumption. 2. Factored bandits with uniformly identifiable actions are a generalization of rank-1 bandits. 3. We provide an anytime algorithm for playing factored bandits under the uniform identifiability assumption in stochastic environments and analyze its regret. We also provide a lower bound matching up to constants. 4. Unlike the majority of bandit models, our approach does not require explicit specification or knowledge of the form of the reward function (as long as the uniform identifiability assumption is satisfied). For example, it can be a weighted sum of the qualities of atomic actions (as in linear bandits), a product thereof, or any other function not necessarily known to the algorithm. 5. We show that the algorithm can also be applied to utility-based dueling bandits, where the additive factor in the regret bound is reduced by a multiplicative factor of $K$ compared to the state of the art (where $K$ is the number of actions). It should be emphasized that in state-of-the-art regret bounds for utility-based dueling bandits the additive factor is dominating for time horizons below $\Omega(\exp(K))$, whereas in the new result it is only dominant for time horizons up to $O(K)$. 6. Our work provides a unified treatment of two distinct bandit models: rank-1 bandits and utility-based dueling bandits. The paper is organized in the following way. In Section 2 we introduce the factored bandit model and the uniform identifiability assumption.
In Section 3 we provide algorithms for factored bandits and dueling bandits. In Section 4 we analyze the regret of our algorithm and provide matching upper and lower regret bounds. In Section 5 we compare our work empirically and theoretically with prior work. We finish with a discussion in Section 6.

2 Problem Setting

2.1 Factored bandits We define the game in the following way. We assume that the set of actions $\mathcal{A}$ can be represented as a Cartesian product of atomic actions, $\mathcal{A} = \bigotimes_{\ell=1}^{L} \mathcal{A}_\ell$. We call the elements of $\mathcal{A}_\ell$ atomic arms. For rounds $t = 1, 2, \dots$ the player chooses an action $A_t \in \mathcal{A}$ and observes a reward $r_t$ drawn according to an unknown probability distribution $p_{A_t}$ (i.e., the game is "stochastic"). We assume that the mean rewards $\mu(a) = \mathbb{E}[r_t \mid A_t = a]$ are bounded in $[-1, 1]$ and that the noise $\eta_t = r_t - \mu(A_t)$ is conditionally 1-sub-Gaussian. Formally, this means that
$$\forall \lambda \in \mathbb{R}: \qquad \mathbb{E}\big[e^{\lambda \eta_t} \,\big|\, \mathcal{F}_{t-1}\big] \le \exp\!\left(\frac{\lambda^2}{2}\right),$$
where $\mathcal{F}_t := \{A_1, r_1, A_2, r_2, \dots, A_t, r_t\}$ is the filtration defined by the history of the game up to and including round $t$. We denote $a^* = (a_1^*, a_2^*, \dots, a_L^*) = \operatorname{argmax}_{a \in \mathcal{A}} \mu(a)$.

Definition 1 (uniform identifiability). An atomic set $\mathcal{A}_k$ has a uniformly identifiable best arm $a_k^*$ if and only if
$$\forall a \in \mathcal{A}_k \setminus \{a_k^*\}: \qquad \Delta_k(a) := \min_{b \in \bigotimes_{\ell \neq k} \mathcal{A}_\ell} \mu(a_k^*, b) - \mu(a, b) > 0. \tag{1}$$

We assume that all atomic sets have uniformly identifiable best arms. The goal is to minimize the pseudo-regret, which is defined as
$$\mathrm{Reg}_T = \mathbb{E}\!\left[\sum_{t=1}^{T} \mu(a^*) - \mu(A_t)\right].$$
Due to the generality of the uniform identifiability assumption we cannot upper bound the instantaneous regret $\mu(a^*) - \mu(A_t)$ in terms of the gaps $\Delta_\ell(a_\ell)$. However, a sequential application of (1) provides a lower bound
$$\mu(a^*) - \mu(a) = \mu(a^*) - \mu(a_1, a_2^*, \dots, a_L^*) + \mu(a_1, a_2^*, \dots, a_L^*) - \mu(a) \ge \Delta_1(a_1) + \mu(a_1, a_2^*, \dots, a_L^*) - \mu(a) \ge \dots \ge \sum_{\ell=1}^{L} \Delta_\ell(a_\ell). \tag{2}$$
For the upper bound, let $\kappa$ be a problem-dependent constant such that $\mu(a^*) - \mu(a) \le \kappa \sum_{\ell=1}^{L} \Delta_\ell(a_\ell)$ holds for all $a$. Since the mean rewards are in $[-1, 1]$, the condition is always satisfied by $\kappa = \min_{a,\ell} 2\Delta_\ell^{-1}(a_\ell)$, and by equation (2) $\kappa$ is always larger than 1. The constant $\kappa$ appears in the regret bounds. In the extreme case when $\kappa = \min_{a,\ell} 2\Delta_\ell^{-1}(a_\ell)$ the regret guarantees are fairly weak. However, in many specific cases mentioned in the previous section, $\kappa$ is typically small or even 1. We emphasize that the algorithms proposed in the paper do not require knowledge of $\kappa$. Thus, the dependence of the regret bounds on $\kappa$ is not a limitation and the algorithms automatically adapt to more favorable environments.
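As a worked illustration of Definition 1 and the decomposition in (2) (ours, not taken from the paper), consider an additive reward function: the per-dimension gaps are then independent of the reference set, and $\kappa = 1$.

```latex
% Illustration (ours): additive rewards satisfy uniform identifiability.
% Let \mu(a) = \sum_{\ell=1}^{L} \theta_\ell(a_\ell). For any dimension k
% and any reference set b \in \bigotimes_{\ell \neq k} \mathcal{A}_\ell,
\[
\mu(a_k^*, b) - \mu(a_k, b) = \theta_k(a_k^*) - \theta_k(a_k) =: \Delta_k(a_k) > 0,
\]
% independently of b, so Definition 1 holds. Moreover,
\[
\mu(a^*) - \mu(a)
  = \sum_{\ell=1}^{L} \bigl( \theta_\ell(a_\ell^*) - \theta_\ell(a_\ell) \bigr)
  = \sum_{\ell=1}^{L} \Delta_\ell(a_\ell),
\]
% so the lower bound (2) holds with equality and \kappa = 1 in this case.
```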
2.2 Dueling bandits The set of actions in dueling bandits is factored into $\mathcal{A} \times \mathcal{A}$. However, strictly speaking, the problem is not a factored bandit problem, because the observations in dueling bandits are not the rewards. (Footnote 1: In principle, it is possible to formulate a more general problem that would incorporate both factored bandits and dueling bandits. But such a definition becomes too general and hard to work with. For the sake of clarity we have avoided this path.) When playing two arms, $a$ and $b$, we observe the identity of the winning arm, but the regret is typically defined via the average relative quality of $a$ and $b$ with respect to a "best" arm in $\mathcal{A}$. The literature distinguishes between different dueling bandit settings. We focus on utility-based dueling bandits [14] and show that they satisfy the uniform identifiability assumption. In utility-based dueling bandits, it is assumed that each arm has a utility $u(a)$ and that the winning probabilities are defined by $\Pr[a \text{ wins against } b] = \nu(u(a) - u(b))$ for a monotonically increasing link function $\nu$. Let $w(a, b)$ be 1 if $a$ wins against $b$ and 0 if $b$ wins against $a$. Let $a^* := \operatorname{argmax}_{a \in \mathcal{A}} u(a)$ denote the best arm. Then for any arm $b \in \mathcal{A}$ and any $a \in \mathcal{A} \setminus \{a^*\}$, it holds that $\mathbb{E}[w(a^*, b)] - \mathbb{E}[w(a, b)] = \nu(u(a^*) - u(b)) - \nu(u(a) - u(b)) > 0$, which satisfies the uniform identifiability assumption. For the rest of the paper we consider the linear link function $\nu(x) = \frac{1+x}{2}$. The regret is then defined by
$$\mathrm{Reg}_T = \mathbb{E}\!\left[\sum_{t=1}^{T} \frac{u(a^*) - u(A_t)}{2} + \frac{u(a^*) - u(B_t)}{2}\right]. \tag{3}$$

3 Algorithms Although in theory an asymptotically optimal algorithm for any structured bandit problem was presented in [5], for factored bandits this algorithm not only requires solving an intractable semi-infinite linear program at every round, but also suffers from additive constants which are exponential in the number of atomic actions $L$. An alternative naive approach could be an adaptation of sparring [16], where each factor runs an independent $K$-armed bandit algorithm and does not observe the atomic arm choices of the other factors. The downside of sparring algorithms, both theoretically and practically, is that each algorithm operates under limited information and the rewards become non-i.i.d. from the perspective of each individual factor. Our Temporary Elimination Algorithm (TEA, Algorithm 1) avoids these downsides. It runs independent instances of the Temporary Elimination Module (TEM, Algorithm 3) in parallel, one per factor of the problem. Each TEM operates on a single atomic set. The TEA is responsible for the synchronization of the TEM instances. Two main ingredients ensure information efficiency. First, we use relative comparisons between arms instead of comparing absolute mean rewards. This cancels out the effect of non-stationary means. The second idea is to use local randomization in order to obtain unbiased estimates of the relative performance without having to actually play each atomic arm with the same reference, which would have led to prohibitive time complexity.

Algorithm 1: Factored Bandit TEA
1: for all $\ell$: TEM$_\ell$ ← new TEM($\mathcal{A}_\ell$)
2: $t \leftarrow 1$
3: for $s = 1, 2, \dots$ do
4:   $M_s \leftarrow \max_\ell |\mathrm{TEM}_\ell.\mathrm{getActiveSet}(f(t)^{-1})|$
5:   $T_s \leftarrow (t, t+1, \dots, t + M_s - 1)$
6:   for $\ell \in \{1, \dots, L\}$ in parallel do
7:     TEM$_\ell$.scheduleNext($T_s$)
8:   for $t \in T_s$ do
9:     $r_t \leftarrow \mathrm{play}\big((\mathrm{TEM}_\ell.A_t)_{\ell=1,\dots,L}\big)$
10:  for $\ell \in \{1, \dots, L\}$ in parallel do
11:    TEM$_\ell$.feedback($(r_{t'})_{t' \in T_s}$)
12:  $t \leftarrow t + |T_s|$

Algorithm 2: Dueling Bandit TEA
1: TEM ← new TEM($\mathcal{A}$)
2: $t \leftarrow 1$
3: for $s = 1, 2, \dots$ do
4:   $\mathcal{A}_s \leftarrow \mathrm{TEM}.\mathrm{getActiveSet}(f(t)^{-1})$
5:   $T_s \leftarrow (t, t+1, \dots, t + |\mathcal{A}_s| - 1)$
6:   TEM.scheduleNext($T_s$)
7:   for $b \in \mathcal{A}_s$ do
8:     $r_t \leftarrow \mathrm{play}(\mathrm{TEM}.A_t, b)$
9:     $t \leftarrow t + 1$
10:  TEM.feedback($(r_{t'})_{t' \in T_s}$)

The TEM instances run in parallel in externally synchronized phases. Each module selects active arms in getActiveSet($\delta$) such that the optimal arm is included with high probability. The length of a phase is chosen such that each module can play each potentially optimal arm at least once in every phase. All modules schedule all arms for the phase in scheduleNext. This is done by choosing arms in a round-robin fashion (with random choices if not all arms can be played equally often) and ordering them randomly. All scheduled plays are executed, and the modules update their statistics through a call of the feedback routine. The modules use slowly increasing lower confidence bounds for the gaps in order to temporarily eliminate arms that are suboptimal with high probability.
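The phase structure of Algorithm 1 can be summarized in a short Python sketch (ours, not the authors' implementation). `TEM` is a hypothetical class exposing the getActiveSet/scheduleNext/feedback interface of Algorithm 3, `play` is the environment call, and `f` is the confidence schedule defined just below.

```python
# Sketch (ours) of the TEA phase loop of Algorithm 1; the TEM class and
# play function are assumed interfaces, not real library calls.
import math

def f(t):
    return (t + 1) * math.log(t + 1) ** 2  # confidence schedule (see below)

def tea(tems, play, num_phases):
    t = 1
    for _ in range(num_phases):
        # Phase length = largest active set, so each module can play every
        # potentially optimal atomic arm at least once per phase.
        m_s = max(len(tem.get_active_set(1.0 / f(t))) for tem in tems)
        slots = list(range(t, t + m_s))
        for tem in tems:
            tem.schedule_next(slots)          # round robin + random order
        rewards = {tau: play([tem.arm_at(tau) for tem in tems])
                   for tau in slots}
        for tem in tems:
            tem.feedback(rewards)             # update pairwise statistics
        t += m_s
```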
In all algorithms, we use $f(t) := (t+1)\log^2(t+1)$.

Dueling bandits For dueling bandits we only use a single instance of TEM. In each phase the algorithm generates two random permutations of the active set and plays the corresponding actions from the two lists against each other. (The first permutation is generated in Line 6 and the second in Line 7 of Algorithm 2.)

3.1 TEM The TEM tracks empirical differences between rewards of all arms $a_i$ and $a_j$ in $D_{i,j}$. Based on these differences, it computes lower confidence bounds for all gaps. The set $\mathcal{K}^*$ contains those arms where all LCB gaps are zero. Additionally, the algorithm keeps track of arms that were never removed from $\mathcal{B}$. During a phase, each arm from $\mathcal{K}^*$ is played at least once, but only arms in $\mathcal{B}$ can be played more than once. This is necessary to keep the additive constants at $M \log K$ instead of $MK$.

Algorithm 3: Temporary Elimination Module (TEM) Implementation
global: $N_{i,j}$, $D_{i,j}$, $\mathcal{K}^*$, $\mathcal{B}$
1: Function initialize($\mathcal{K}$)
2:   for all $a_i, a_j \in \mathcal{K}$: $N_{i,j}, D_{i,j} \leftarrow 0, 0$
3:   $\mathcal{B} \leftarrow \mathcal{K}$
4:
5: Function getActiveSet($\delta$)
6:   if $\exists\, N_{i,j} = 0$ then
7:     $\mathcal{K}^* \leftarrow \mathcal{K}$
8:   else
9:     for $a_i \in \mathcal{K}$ do
10:      $\hat{\Delta}^{\mathrm{LCB}}(a_i) \leftarrow \max_{a_j \neq a_i} \frac{D_{j,i}}{N_{j,i}} - \sqrt{\frac{12 \log(2 K f(N_{j,i}) \delta^{-1})}{N_{j,i}}}$
11:     $\mathcal{K}^* \leftarrow \{a_i \in \mathcal{K} \mid \hat{\Delta}^{\mathrm{LCB}}(a_i) \le 0\}$
12:     if $|\mathcal{K}^*| = 0$ then
13:       $\mathcal{K}^* \leftarrow \mathcal{K}$
14:   $\mathcal{B} \leftarrow \mathcal{B} \cap \mathcal{K}^*$
15:   if $|\mathcal{B}| = 0$ then
16:     $\mathcal{B} \leftarrow \mathcal{K}^*$
17:   return $\mathcal{K}^*$
18:
19: Function scheduleNext($T$)
20:   for $a \in \mathcal{K}^*$ do
21:     $\tilde{t} \leftarrow$ random unassigned index in $T$
22:     $A_{\tilde{t}} \leftarrow a$
23:   while not all $A_{t_s}, \dots, A_{t_s + |T| - 1}$ assigned do
24:     for $a \in \mathcal{B}$ do
25:       $\tilde{t} \leftarrow$ random unassigned index in $T$
26:       $A_{\tilde{t}} \leftarrow a$
27:
28: Function feedback($\{R_t\}_{t_s, \dots, t_s + M_s - 1}$)
29:   for all $a_i$: $N_i^s, R_i^s \leftarrow 0, 0$
30:   for $t = t_s, \dots, t_s + M_s - 1$ do
31:     $R_{A_t}^s \leftarrow R_{A_t}^s + R_t$
32:     $N_{A_t}^s \leftarrow N_{A_t}^s + 1$
33:   for $a_i, a_j \in \mathcal{K}^*$ do
34:     $D_{i,j} \leftarrow D_{i,j} + \min\{N_i^s, N_j^s\}\left(\frac{R_i^s}{N_i^s} - \frac{R_j^s}{N_j^s}\right)$
35:     $N_{i,j} \leftarrow N_{i,j} + \min\{N_i^s, N_j^s\}$
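A minimal Python sketch of the elimination test in getActiveSet (our reconstruction of lines 5-17 of Algorithm 3, not the authors' code; the bookkeeping of the set B is omitted):

```python
# Sketch (ours) of TEM.getActiveSet: an arm stays active while the lower
# confidence bound on its gap is still non-positive. D[j][i] accumulates
# reward differences of arm j over arm i; N[j][i] counts the comparisons.
import math

def get_active_set(arms, D, N, delta, f):
    if any(N[j][i] == 0 for i in arms for j in arms if j != i):
        return list(arms)               # some pair never compared yet
    active = []
    for i in arms:
        lcb = max(
            D[j][i] / N[j][i]
            - math.sqrt(12 * math.log(2 * len(arms) * f(N[j][i]) / delta)
                        / N[j][i])
            for j in arms if j != i
        )
        if lcb <= 0:                    # arm i cannot yet be ruled out
            active.append(i)
    return active if active else list(arms)
```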
4 Analysis We start this section with the main theorem, which bounds the number of times the TEM pulls sub-optimal arms. Then we prove upper bounds on the regret for our main algorithms. Finally, we prove a lower bound for factored bandits that shows that our regret bound is tight up to constants.

4.1 Upper bound for the number of sub-optimal pulls by TEM
Theorem 1. For any TEM submodule TEM$_\ell$ with an arm set of size $K = |\mathcal{A}_\ell|$, running in the TEA algorithm with $M := \max_\ell |\mathcal{A}_\ell|$, and any suboptimal atomic arm $a \neq a^*$, let $N_t(a)$ denote the number of times TEM has played the arm $a$ up to time $t$. Then there exist constants $C(a) \le M$ for $a \neq a^*$, such that
$$\mathbb{E}[N_t(a)] \le \frac{120}{\Delta(a)^2}\left(\log(2Kt\log^2 t) + 4\log\!\left(\frac{48\log(2Kt\log^2 t)}{\Delta(a)^2}\right)\right) + C(a),$$
where $\sum_{a \neq a^*} C(a) \le M\log K + \frac{5}{2}K$ in the case of factored bandits and $C(a) \le \frac{5}{2}$ for dueling bandits.

Proof sketch. [The complete proof is provided in the Appendix.] Step 1: We show that the confidence intervals are constructed in such a way that the probability of all confidence intervals holding at all epochs from $s'$ on is at least $1 - \max_{s \ge s'} f(t_s)^{-1}$. This requires a novel concentration inequality (Lemma 3) for a sum of conditionally $\sigma_s$-sub-Gaussian random variables, where $\sigma_s$ can be dependent on the history. This technique might be useful for other problems as well. Step 2: We split the number of pulls into pulls that happen in rounds where the confidence intervals hold and those where they fail: $N_t(a) = N_t^{\mathrm{conf}}(a) + N_t^{\overline{\mathrm{conf}}}(a)$. We can bound the expectation of $N_t^{\overline{\mathrm{conf}}}(a)$ based on the failure probabilities, given by $\Pr[\text{confidence failure at round } s] \le \frac{1}{f(t_s)}$. Step 3: We define $s'$ as the last round in which the confidence intervals held and $a$ was not eliminated. We can split $N_t^{\mathrm{conf}}(a) = N_{t_{s'}}^{\mathrm{conf}}(a) + C(a)$ and use the confidence intervals to upper bound $N_{t_{s'}}^{\mathrm{conf}}(a)$. The upper bound on $\sum_a C(a)$ requires special handling of arms that were eliminated once, and carefully separating the cases where the confidence intervals never fail from those where they might fail.

4.2 Regret upper bound for Factored Bandit TEA A regret bound for the Factored Bandit TEA algorithm, Algorithm 1, is provided in the following theorem.
Theorem 2. The pseudo-regret of Algorithm 1 at any time $T$ is bounded by
$$\mathrm{Reg}_T \le \kappa \left(\sum_{\ell=1}^{L} \sum_{a_\ell \neq a_\ell^*} \frac{120}{\Delta_\ell(a_\ell)}\left(\log(2|\mathcal{A}_\ell| T \log^2 T) + 4\log\!\left(\frac{48\log(2|\mathcal{A}_\ell| T \log^2 T)}{\Delta_\ell(a_\ell)}\right)\right)\right) + \max_\ell |\mathcal{A}_\ell| \sum_\ell \log |\mathcal{A}_\ell| + \sum_\ell \frac{5}{2}|\mathcal{A}_\ell|.$$
Proof. The design of TEA allows application of Theorem 1 to each instance of TEM. Using $\mu(a^*) - \mu(a) \le \kappa \sum_{\ell=1}^{L} \Delta_\ell(a_\ell)$, we have that
$$\mathrm{Reg}_T = \mathbb{E}\!\left[\sum_{t=1}^{T} \mu(a^*) - \mu(A_t)\right] \le \kappa \sum_{\ell=1}^{L} \sum_{a_\ell \neq a_\ell^*} \mathbb{E}[N_T(a_\ell)]\, \Delta_\ell(a_\ell).$$
Applying Theorem 1 to the expected number of pulls and bounding the sums $\sum_a C(a)\Delta(a) \le \sum_a C(a)$ completes the proof.

4.3 Dueling bandits A regret bound for the Dueling Bandit TEA algorithm (DBTEA), Algorithm 2, is provided in the following theorem.
Theorem 3. The pseudo-regret (defined in equation (3)) of Algorithm 2 for any utility-based dueling bandit problem at any time $T$ satisfies $\mathrm{Reg}_T \le O\!\left(\sum_{a \neq a^*} \frac{\log T}{\Delta(a)}\right) + O(K)$.
Proof. At every round, each arm in the active set is played once in position $A$ and once in position $B$ in play($A$, $B$). Denote by $N_T^A(a)$ the number of plays of an arm $a$ in the first position, by $N_T^B(a)$ the number of plays in the second position, and by $N_T(a)$ the total number of plays of the arm. We have
$$\mathrm{Reg}_T = \sum_{a \neq a^*} \mathbb{E}[N_T(a)]\Delta(a) = \sum_{a \neq a^*} \mathbb{E}[N_T^A(a) + N_T^B(a)]\Delta(a) = \sum_{a \neq a^*} 2\,\mathbb{E}[N_T^A(a)]\Delta(a).$$
The proof is completed by applying Theorem 1 to bound $\mathbb{E}[N_T^A(a)]$.

4.4 Lower bound We show that without additional assumptions the regret bound cannot be improved. The lower bound is based on the following construction. The mean reward of every arm is given by $\mu(a) = \mu(a^*) - \sum_\ell \Delta_\ell(a_\ell)$. The noise is Gaussian with variance 1. In this problem, the regret can be decomposed into a sum over atomic arms of the regret induced by pulling these arms, $\mathrm{Reg}_T = \sum_\ell \sum_{a_\ell \in \mathcal{A}_\ell} \mathbb{E}[N_T(a_\ell)]\Delta_\ell(a_\ell)$. Assume that we only want to minimize the regret induced by a single atomic set $\mathcal{A}_\ell$. Further, assume that $\Delta_k(a)$ for all $k \neq \ell$ are given. Then the problem is reduced to a regular $K$-armed bandit problem. The asymptotic lower bound for the $K$-armed bandit under 1-Gaussian noise goes back to [10]: for any consistent strategy $\theta$, the asymptotic regret is lower bounded by $\liminf_{T \to \infty} \frac{\mathrm{Reg}_T^\theta}{\log T} \ge \sum_{a \neq a^*} \frac{2}{\Delta(a)}$. Due to the regret decomposition, we can apply this bound to every atomic set separately. Therefore, the asymptotic regret in the factored bandit problem is
$$\liminf_{T \to \infty} \frac{\mathrm{Reg}_T^\theta}{\log T} \ge \sum_{\ell=1}^{L} \sum_{a_\ell \neq a_\ell^*} \frac{2}{\Delta_\ell(a_\ell)}.$$
This shows that our general upper bound is asymptotically tight up to leading constants and $\kappa$.

κ-gap We note that there is a problem-dependent gap of $\kappa$ between our upper and lower bounds. Currently we believe that this gap stems from the difference between the information and computational complexity of the problem. Our algorithm operates on each factor of the problem independently of the other factors and is based on the "optimism in the face of uncertainty" principle. It is possible to construct examples in which the optimal strategy requires playing surely sub-optimal arms for the sake of information gain. Constructions of this kind were used, for example, by Lattimore and Szepesvári [11] to show the suboptimality of optimism-based algorithms.
Therefore, we believe that removing $\kappa$ from the upper bound is possible, but requires a fundamentally different algorithm design. What is not clear is whether it is possible to remove $\kappa$ without a significant sacrifice in computational complexity.

5 Comparison to Prior Work

5.1 Stochastic rank-1 bandits Stochastic rank-1 bandits, introduced by Katariya et al. [7], are a special case of factored bandits. The authors published a refined algorithm for Bernoulli rank-1 bandits using KL confidence sets in Katariya et al. [8]. We compare our theoretical results with the first paper because it matches our problem assumptions. In our experiments, we provide a comparison to both the original algorithm and the KL version. In the stochastic rank-1 problem there are only 2 atomic sets, of sizes $K_1$ and $K_2$. The matrix of expected rewards for each pair of arms is of rank 1. This means that for each $u \in \mathcal{A}_1$ and $v \in \mathcal{A}_2$, there exist $\bar{u}, \bar{v} \in [0, 1]$ such that $\mathbb{E}[r(u, v)] = \bar{u} \cdot \bar{v}$. The Stochastic rank-1 Elimination algorithm introduced by Katariya et al. is a typical elimination-style algorithm. It requires knowledge of the time horizon and uses phases that increase exponentially in length. In each phase, all arms are played uniformly. At the end of a phase, all arms that are sub-optimal with high probability are eliminated.

Theoretical comparison It is hard to make a fair comparison of the theoretical bounds because TEA operates under much weaker assumptions. Both algorithms have a regret bound of $O\!\left(\left(\sum_{u \in \mathcal{A}_1 \setminus \{u^*\}} \frac{1}{\Delta_1(u)} + \sum_{v \in \mathcal{A}_2 \setminus \{v^*\}} \frac{1}{\Delta_2(v)}\right)\log t\right)$. The problem-independent multiplicative factors hidden under the $O$ are smaller for TEA, even without considering that rank-1 Elimination requires a doubling trick for anytime applications. However, the problem-dependent factors are in favor of rank-1 Elimination, where the gaps correspond to the mean difference under uniform sampling, $(\bar{u}^* - \bar{u})\sum_{v \in \mathcal{A}_2} \bar{v}/K_2$. In factored bandits, the gaps are defined as $(\bar{u}^* - \bar{u})\min_{v \in \mathcal{A}_2} \bar{v}$, which is naturally smaller. The difference stems from different problem assumptions. The stronger assumptions of rank-1 bandits make elimination easier, as the number of eliminated suboptimal arms increases. The TEA analysis holds in cases where it becomes harder to identify suboptimal arms after the removal of bad arms. This may happen when highly suboptimal atomic actions in one factor provide more discriminative information about atomic actions in other factors than close-to-optimal atomic actions in the same factor (this follows the spirit of the illustration of the suboptimality of optimistic algorithms in [11]). We leave it to future work to improve the upper bound of TEA under stronger model assumptions. In terms of memory and computational complexity, TEA is inferior to regular elimination-style algorithms because we need to keep track of the relative performances of the arms. That means both the computational and memory complexities are $O(\sum_\ell |\mathcal{A}_\ell|^2)$ per round in the worst case, as opposed to rank-1 Elimination, which only requires $O(|\mathcal{A}_1| + |\mathcal{A}_2|)$.

Empirical comparison The number of arms is set to 16 in both sets. We always fix $\bar{u}^* - \bar{u} = \bar{v}^* - \bar{v} = 0.2$ and vary the absolute value of $\bar{u}^*\bar{v}^*$. As expected, rank1ElimKL has an advantage when the Bernoulli random variables are strongly biased towards one side. When the bias is close to $\frac{1}{2}$, we clearly see the better constants of TEA. In the evaluation we clearly outperform rank-1 Elimination over different parameter settings, and we even beat the KL-optimized version if the means are not too close to zero or one. This supports that our algorithm not only provides a more practical anytime version of elimination, but also improves on the constant factors in the regret. We believe that our algorithm design can be used to improve other elimination-style algorithms as well.
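For concreteness, here is a sketch of the synthetic rank-1 instance described above (our reconstruction, not the authors' experiment code; $\bar{u}^* = \bar{v}^* = 0.7$ is one illustrative choice, since the experiments vary the absolute value of $\bar{u}^*\bar{v}^*$):

```python
# Sketch (ours) of the rank-1 test instance: 16 atomic arms per factor,
# best-vs-rest gap 0.2 in each factor, Bernoulli rewards with mean u_i*v_j.
import numpy as np

def make_rank1_instance(k=16, gap=0.2, u_star=0.7, v_star=0.7, seed=0):
    rng = np.random.default_rng(seed)
    u = np.full(k, u_star - gap); u[0] = u_star   # factor-1 qualities
    v = np.full(k, v_star - gap); v[0] = v_star   # factor-2 qualities
    means = np.outer(u, v)                        # E[r(i, j)] = u_i * v_j

    def play(i, j):
        return int(rng.binomial(1, means[i, j]))  # Bernoulli reward
    return means, play

means, play = make_rank1_instance()
print(means[0, 0], means[1, 1])  # 0.49 (best pair) vs. 0.25 (both suboptimal)
```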
5.2 Dueling Bandits: Related Work To the best of our knowledge, the proposed Dueling Bandit TEA is the first algorithm that simultaneously satisfies the following three criteria for utility-based dueling bandits: • It requires no prior knowledge of the time horizon (nor uses the doubling trick or restarts). • Its pseudo-regret is bounded by $O\!\left(\sum_{a \neq a^*} \frac{\log t}{\Delta(a)}\right)$. • There are no additive constants that dominate the regret for time horizons $T > O(K)$. We want to stress the importance of the last point. For all state-of-the-art algorithms known to us, when the number of actions $K$ is moderately large, the additive term is dominating for any realistic time horizon $T$. In particular, Ailon et al. [2] introduce three algorithms for the utility-based dueling bandit problem. The regret of Doubler scales with $O(\log^2 t)$. The regret of MultiSBM has an additive term of order $\sum_{a \neq a^*} \frac{K}{\Delta(a)}$ that is dominating for $T < \Omega(\exp(K))$. The last algorithm, Sparring, has no theoretical analysis. Algorithms based on the weaker Condorcet winner assumption apply to the utility-based setting, but they all suffer from equally large or even larger additive terms. The RUCB algorithm introduced by Zoghi et al. [17] has an additive term in the bound that is defined as $2D\Delta_{\max}\log(2D)$, for $\Delta_{\max} = \max_{a \neq a^*} \Delta(a)$ and $D > \frac{1}{2}\sum_{a_i \neq a^*}\sum_{a_j \neq a_i} \frac{4\alpha}{\min\{\Delta(a_i)^2, \Delta(a_j)^2\}}$. By unwrapping these definitions, we see that the RUCB regret bound has an additive term of order $2D\Delta_{\max} \ge \sum_{a \neq a^*} \frac{K}{\Delta(a)}$. This is again the dominating term for time horizons $T \le \Omega(\exp(K))$. The same applies to the RMED algorithm introduced by Komiyama et al. [9], which has an additive term of $O(K^2)$. (The dependencies on the gaps are hidden behind the $O$-notation.) The D-TS algorithm by Wu and Liu [13], based on Thompson sampling, shows one of the best empirical performances, but its regret bound includes an additive constant of order $O(K^3)$. Other algorithms known to us, Interleaved Filter [16], Beat the Mean [15], and SAVAGE [12], all require knowledge of the time horizon $T$ in advance.

Empirical comparison We have used the framework provided by Komiyama et al. [9]. We use the same utility for all sub-optimal arms. In Figure 3, the winning probability of the optimal arm over suboptimal arms is always set to 0.7, and we run the experiment for different numbers of arms $K$. TEA outperforms all algorithms besides the RMED variants, as long as the number of arms is sufficiently big. To show that there also exists a regime where the improved constants gain an advantage over RMED, we conducted a second experiment in Figure 4 (in the Appendix), where we set the winning probability to 0.95 (smaller gaps show the same behavior but require more arms and more time steps) and significantly increase the number of arms. The evaluation shows that the additive terms are indeed non-negligible and that Dueling Bandit TEA outperforms all baseline algorithms when the number of arms is sufficiently large.
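The per-phase dueling schedule described in Section 3 can be sketched as follows (ours, not the authors' code); `duel(a, b)` is a hypothetical environment call returning 1 if a wins the duel.

```python
# Sketch (ours) of one Dueling Bandit TEA phase: two independent random
# permutations of the active set are dueled against each other, so every
# active arm is played once in each duel position.
import random

def dueling_phase(active_set, duel):
    first = random.sample(active_set, len(active_set))
    second = random.sample(active_set, len(active_set))
    return [(a, b, duel(a, b)) for a, b in zip(first, second)]
```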
6 Discussion We have presented the factored bandits model and the uniform identifiability assumption, which requires no knowledge of the reward model. We presented an algorithm for playing stochastic factored bandits with uniformly identifiable actions and provided upper and lower bounds for the problem matching up to constant factors. Our algorithm and proofs might serve as a template for turning other elimination-style algorithms into improved anytime algorithms. Factored bandits with uniformly identifiable actions generalize rank-1 bandits. We have also provided a unified framework for the analysis of factored bandits and utility-based dueling bandits. Furthermore, we improve the additive constants in the regret bound compared to state-of-the-art algorithms for utility-based dueling bandits. There are multiple potential directions for future research. One example mentioned in the text is the possibility of improving the regret bound when additional restrictions on the form of the reward function are introduced, or of improving the lower bound when algorithms are restricted in computational or memory complexity. Another example is the adversarial version of the problem.
1. What are the contributions and limitations of the proposed approach in the context of factored bandits? 2. How does the proposed method differ from existing methods, particularly in terms of its assumptions and sample complexity? 3. Can the proposed method be adapted to make stronger assumptions and achieve better regret bounds, similar to those of rank-1 bandits? 4. Are there any typos or errors in the algorithms presented in the paper?
Review
Review This paper considers factored bandits, where the actions can be decomposed into a Cartesian product of atomic arms. The paper is clear and well-written. The authors criticize the uniform playing of arms in rank-1 bandits. However, in TEA the phase lengths are chosen to be equal to the size of the largest active set, and the arms are chosen uniformly at random from the active set in each dimension, so in expectation the arms along each dimension are played uniformly. Hence I wouldn't expect the sample complexity of TEA to differ from that of Rank1Elim for the rank-1 problem. In fact, the authors comment that the sample complexity of TEA is higher than that of Rank1Elim, and argue that this is because TEA makes fewer assumptions. I would be interested in knowing whether TEA can be modified to make the same strong assumptions as rank-1 bandits, and then compared with Rank1Elim. Other minor comments: 1) T in line 5 of Algorithm 1 should be t; similarly for Algorithm 2. *** After rebuttal *** My verdict is unchanged, and I understand that the main focus is finding the minimal assumptions under which learning is possible in the factored setup. However, I would like to see an analysis of TEA for the rank-1 case that achieves similar or better regret bounds than the rank-1 paper.
NIPS
Title Factored Bandits
1. What is the main contribution of the paper regarding combinatorial bandit problems? 2. What are the strengths and weaknesses of the proposed meta algorithm in different settings? 3. How does the paper's approach differ from existing solutions, particularly Rank-1 bandits? 4. Are there any inconsistencies or unclear descriptions in the writing that need clarification? 5. How does the algorithm's complexity change when K (number of atomic arms per factor) becomes large? 6. Can the authors provide more information on how their approach improves upon the state of the art in specific settings? 7. Do the empirical results have any implications for the performance of the proposed policy compared to other algorithms? 8. Is the regret bound provided by the authors optimal, and how does it compare to lower bounds in related work?
Review
Review The paper proposes to take a step back from several existing combinatorial bandit problems involving factored sets of actions (e.g. rank-1 bandits). A meta algorithm based on sequential elimination is given and analyzed in two major settings: general factored problems and the dueling bandits setting. TL;DR: The whole algorithmic structure is extremely hard to parse… The improvement over rank-1 bandits is not clear at all, and I am not enough of an expert in dueling bandits to fully judge the impact of TEM on the "constant" term in the analysis. Major comments: 1/ One recurrent inconsistency in the writing is that you define atomic arms at the beginning of Section 2 and barely use this vocabulary thereafter! Arms and atomic arms are often both called arms, especially in Definition 1, which actually defines gaps for atomic arms on each dimension. 2/ Algorithm description: For me, this is the main problem of this paper. The algorithm is extremely hard to parse for many reasons: many small inconsistencies (some are listed below) that do not help reading, and a quite unclear description (ll. 130-137). Typically, l. 134: do you use comparisons between arms or atomic arms? Algorithm 3 (function feedback) seems to do comparisons between atomic arms. - Algorithm 1 (and 2): - M_s is not defined. - This encapsulation of every step of your algorithm is actually more confusing than helpful: basically you're pulling all remaining atomic arms of each factor on each round. This is quite similar to Rank1Elim as far as I understand, and rather natural, but it took me quite some time to make sure it was not more complicated than that. - Algorithm 3: If I understand correctly, you're keeping track of all couples of atomic arms in each module (so for each factor). That's a quadratic memory complexity, typically of the order of L x K^2. Isn't that an issue when K (the number of atomic arms per factor) becomes large? This is actually the interesting case of this type of structure, I believe. - In function feedback, the indexing of N^i_s is inconsistent (sometimes N^s_i). - Using N_{i,j} and N^s_i for two different counters is quite confusing. 3/ Link with rank-1 bandits: TEA does not behave like Rank1Elim, but it would have been nice to explain better where the improvement comes from. Indeed, the rank-1 bandit problem is not closed yet: there is still a gap between the lower bound of [Katariya et al.] and their algorithms (Rank1Elim(KL)). If you think you are improving on this gap, it should be stated. If you are not, we would like you to tell us in which setting you are improving on the state of the art. I am really surprised by the empirical results. I see no reason why rank1ElimKL would perform similarly to your policy, and this is not discussed. 4/ Regret bound: It seems like your bound scales as 1/\Delta^2, which looks optimal but actually is not. Typically, take the stochastic rank-1 bandit problem (which is a subclass of your setting). The deltas you define are actually the smallest gaps, while [Katariya et al.] consider the largest gaps (u_1 v_1 - u_i v_1 = \Delta^U_i). They show a lower bound in 1/\Delta for such a largest-gap definition. Their regret bound does not match their lower bound, but it is still a bit tighter than yours, as it scales as 1/(mean{u_1,…,u_K}) instead of 1/(min{u_1,…,u_K}). Minor comments: - l. 22: "Player The" - It's just my personal opinion, but I think this paper tries to cover too wide of a scope.
I am not an expert in dueling bandits, but I know the combinatorial/factored bandits literature in general quite well. I think both contributions you make would be clearer if split into two separate papers. That would give you some space to better explain the behavior of your algorithm and where the improvement comes from.

EDIT: Thank you for the detailed answers to my comments and remarks in the rebuttal. I raised my score a little because they were convincing :)
NIPS
Title Policy Optimization in Adversarial MDPs: Improved Exploration via Dilated Bonuses

Abstract
Policy optimization is a widely-used method in reinforcement learning. Due to its local-search nature, however, theoretical guarantees on global optimality often rely on extra assumptions on the Markov Decision Processes (MDPs) that bypass the challenge of global exploration. To eliminate the need for such assumptions, in this work we develop a general solution that adds dilated bonuses to the policy update to facilitate global exploration. To showcase the power and generality of this technique, we apply it to several episodic MDP settings with adversarial losses and bandit feedback, improving and generalizing the state of the art. Specifically, in the tabular case, we obtain Õ(√T) regret, where T is the number of episodes, improving the Õ(T^{2/3}) regret bound of [27]. When the number of states is infinite, under the assumption that the state-action values are linear in some low-dimensional features, we obtain Õ(T^{2/3}) regret with the help of a simulator, matching the result of [24] while importantly removing the need of an exploratory policy that their algorithm requires. To our knowledge, this is the first algorithm with sublinear regret for linear function approximation with adversarial losses, bandit feedback, and no exploratory assumptions. Finally, we also discuss how to further improve the regret, or remove the need of a simulator, using dilated bonuses when an exploratory policy is available.¹

1 Introduction
Policy optimization methods are among the most widely-used methods in reinforcement learning. Their empirical success has been demonstrated in various domains, such as computer games [26] and robotics [21]. However, due to their local-search nature, global optimality guarantees of policy optimization often rely on unrealistic assumptions to ensure global exploration (see e.g., [1, 3, 24, 30]), making it theoretically less appealing than other methods. Motivated by this issue, a line of recent works [7, 27, 2, 35] equips policy optimization with global exploration by adding exploration bonuses to the update, and proves favorable guarantees even without extra exploratory assumptions. Moreover, they all demonstrate some robustness aspect of policy optimization (such as being able to handle adversarial losses or a certain degree of model misspecification). Despite these important advances, however, many limitations still exist, including worse regret rates compared to the best value-based or model-based approaches [27, 2, 35], or requiring full-information feedback on the entire loss function (as opposed to the more realistic bandit feedback) [7]. ∗Equal contribution.
¹In an improved version of this paper, we show that under the linear MDP assumption, an exploratory policy is not even needed. See https://arxiv.org/abs/2107.08346.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
To address these issues, in this work we propose a new type of exploration bonus, the dilated bonus, which satisfies a certain dilated Bellman equation and provably leads to improved exploration compared to existing works (Section 3). We apply this general idea to advance the state of the art of policy optimization for learning finite-horizon episodic MDPs with adversarial losses and bandit feedback. More specifically, our main results are:
• First, in the tabular setting, addressing the main open question left by [27], we improve their Õ(T^{2/3}) regret to the optimal Õ(√T) regret. This shows that policy optimization, which performs local optimization, is as capable as other occupancy-measure-based global optimization algorithms [15, 20] in terms of global exploration. Moreover, our algorithm is computationally more efficient than those global methods, since they require solving a convex optimization problem in each episode. (Section 4)
• Second, to further deal with large-scale problems, we consider a linear function approximation setting where the state-action values are linear in some known low-dimensional features and a simulator is available, the same setting considered by [24]. We obtain the same Õ(T^{2/3}) regret while importantly removing the need of an exploratory policy that their algorithm requires. Unlike the tabular setting (where we improve existing regret rates of policy optimization), note that researchers have not been able to show any sublinear regret for policy optimization without exploratory assumptions for this problem, which shows the critical role of our proposed dilated bonuses. In fact, there are simply no existing algorithms with sublinear regret at all for this setting, policy-optimization-type or not. This shows the advantage of policy optimization over other approaches when combined with our dilated bonuses. (Section 5)
• Finally, while the main focus of our work is to show how dilated bonuses provide global exploration, we also discuss their role in improving the regret rate to Õ(√T) in the linear setting above, or in removing the need of a simulator for the special case of linear MDPs (with Õ(T^{6/7}) regret), when an exploratory policy is available. (Section 6)
Related work. In the tabular setting, except for [27], most algorithms apply the occupancy-measure-based framework to handle adversarial losses (e.g., [25, 15, 9, 8]), which, as mentioned, is computationally expensive. For stochastic losses, there are many more approaches, such as model-based ones [13, 10, 5, 12, 34] and value-based ones [14, 11]. Theoretical studies of linear function approximation have gained increasing interest recently [32, 33, 16]. Most of them study stochastic/stationary losses, with the exceptions of [24, 7]. Our algorithm for the linear MDP setting bears some similarity to those of [2, 35], which consider stationary losses. However, our algorithm and analysis are arguably simpler than theirs. Specifically, they divide the state space into a known part and an unknown part, with different exploration principles and bonus designs for the two parts. In contrast, we enjoy a unified bonus design for all states.
Besides, in each episode, their algorithms first execute an exploratory policy (from a policy cover) and then switch to the policy suggested by the policy optimization algorithm, which inevitably leads to linear regret when facing adversarial losses.

2 Problem Setting
We consider an MDP specified by a state space X (possibly infinite), a finite action space A, and a transition function P, with P(·|x, a) specifying the distribution of the next state after taking action a in state x. In particular, we focus on the finite-horizon episodic setting in which X admits a layer structure and can be partitioned into X_0, X_1, ..., X_H for some fixed parameter H, where X_0 contains only the initial state x_0, X_H contains only the terminal state x_H, and for any x ∈ X_h, h = 0, ..., H−1, P(·|x, a) is supported on X_{h+1} for all a ∈ A (that is, transitions are only possible from X_h to X_{h+1}). An episode refers to a trajectory that starts from x_0 and ends at x_H, following some series of actions and the transition dynamics. The MDP may be assigned a loss function ℓ : X × A → [0, 1], so that ℓ(x, a) specifies the loss suffered when selecting action a in state x.
A policy π for the MDP is a mapping X → ∆(A), where ∆(A) denotes the set of distributions over A and π(a|x) is the probability of choosing action a in state x. Given a loss function ℓ and a policy π, the expected total loss of π is given by
V^π(x_0; ℓ) = E[ ∑_{h=0}^{H−1} ℓ(x_h, a_h) | a_h ∼ π(·|x_h), x_{h+1} ∼ P(·|x_h, a_h) ].
It can also be defined via the Bellman equation involving the state value function V^π(x; ℓ) and the state-action value function Q^π(x, a; ℓ) (a.k.a. the Q-function), defined as follows:
V^π(x_H; ℓ) = 0, Q^π(x, a; ℓ) = ℓ(x, a) + E_{x′∼P(·|x,a)}[V^π(x′; ℓ)], and V^π(x; ℓ) = E_{a∼π(·|x)}[Q^π(x, a; ℓ)].
We study online learning in such a finite-horizon MDP with unknown transition, bandit feedback, and adversarial losses. The learning proceeds through T episodes. Ahead of time, an adversary arbitrarily decides T loss functions ℓ_1, ..., ℓ_T, without revealing them to the learner. Then, in each episode t, the learner decides a policy π_t based on all information received prior to this episode, executes π_t starting from the initial state x_0, and generates and observes a trajectory {(x_{t,h}, a_{t,h}, ℓ_t(x_{t,h}, a_{t,h}))}_{h=0}^{H−1}. Importantly, the learner does not observe any other information about ℓ_t (a.k.a. bandit feedback).² The goal of the learner is to minimize the regret, defined as
Reg = ∑_{t=1}^T V^{π_t}_t(x_0) − min_π ∑_{t=1}^T V^π_t(x_0),
where we use V^π_t(x) as a shorthand for V^π(x; ℓ_t) (and similarly Q^π_t(x, a) as a shorthand for Q^π(x, a; ℓ_t)). Without further structure, the best existing regret bound is Õ(H|X|√(|A|T)) [15], with an extra √|X| factor compared to the best existing lower bound [14].
Occupancy measures. For a policy π and a state x, we define q^π(x) to be the probability (or probability measure, when |X| is infinite) of visiting state x within an episode when following π. When it is necessary to highlight the dependence on the transition, we write it as q^{P,π}(x). Further define q^π(x, a) = q^π(x)π(a|x) and q_t(x, a) = q^{π_t}(x, a). Finally, we use q^⋆ as a shorthand for q^{π^⋆}, where π^⋆ ∈ argmin_π ∑_{t=1}^T V^π_t(x_0) is one of the optimal policies. Note that, by definition, we have V^π(x_0; ℓ) = ∑_{x,a} q^π(x, a) ℓ(x, a). In fact, we will overload the notation and let V^π(x_0; b) = ∑_{x,a} q^π(x, a) b(x, a) for any function b : X × A → R (even though it might not correspond to a real loss function).
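To make the layered-MDP notation concrete, here is a minimal sketch (not from the paper; the sizes, transition tensors, losses, and policy are made-up toy structures) that computes V^π(x_0; ℓ) by backward induction through the Bellman equation and the occupancy measures q^π by forward induction, and then checks the identity V^π(x_0; ℓ) = ∑_{x,a} q^π(x, a) ℓ(x, a) stated above.

```python
import numpy as np

rng = np.random.default_rng(0)
H, S, A = 4, 3, 2            # horizon, states per layer, actions (toy sizes)

# P[h][x, a, x'] : transition from layer h to layer h+1; loss[h][x, a] in [0, 1]
P = [rng.dirichlet(np.ones(S), size=(S, A)) for _ in range(H)]
loss = [rng.random((S, A)) for _ in range(H)]
pi = [rng.dirichlet(np.ones(A), size=S) for _ in range(H)]   # pi[h][x, a]

# Backward induction: Q(x,a) = l(x,a) + E[V(x')], V(x) = E_{a~pi}[Q(x,a)]
V = np.zeros(S)                                   # V at the terminal layer H is 0
for h in reversed(range(H)):
    Q = loss[h] + P[h] @ V                        # Q[x, a] = l(x,a) + sum_x' P V
    V = (pi[h] * Q).sum(axis=1)
v_backward = V[0]                                 # x0 is state 0 of layer 0 here

# Forward induction: q[h][x] = probability of visiting x in layer h under pi
q = np.zeros(S); q[0] = 1.0                       # the episode starts at x0
v_occupancy = 0.0
for h in range(H):
    qxa = q[:, None] * pi[h]                      # q(x, a) = q(x) * pi(a|x)
    v_occupancy += (qxa * loss[h]).sum()          # accumulate sum_{x,a} q(x,a) l(x,a)
    q = np.einsum('xa,xay->y', qxa, P[h])         # push occupancy to the next layer

assert np.isclose(v_backward, v_occupancy)        # V(x0; l) = sum_{x,a} q(x,a) l(x,a)
```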
Other notations. We denote by E_t[·] and Var_t[·] the expectation and variance conditioned on everything prior to episode t. For a matrix Σ and a vector z (of appropriate dimension), ‖z‖_Σ denotes the quadratic norm √(z^⊤ Σ z). The notation Õ(·) hides all logarithmic factors.

3 Dilated Exploration Bonuses
In this section, we start with a general discussion on designing exploration bonuses (not specific to policy optimization), and then introduce our new dilated bonuses for policy optimization. For simplicity, the exposition in this section assumes a finite state space, but the idea generalizes to an infinite state space.
When analyzing the regret of an algorithm, we very often run into the following form:
Reg = ∑_{t=1}^T V^{π_t}_t(x_0) − ∑_{t=1}^T V^{π^⋆}_t(x_0) ≤ o(T) + ∑_{t=1}^T ∑_{x,a} q^⋆(x, a) b_t(x, a) = o(T) + ∑_{t=1}^T V^{π^⋆}(x_0; b_t),   (1)
for some function b_t(x, a), usually related to some estimation error or variance, that can be prohibitively large. For example, in policy optimization, the algorithm performs local search in each state, essentially using a multi-armed bandit algorithm and treating Q^{π_t}_t(x, a) as the loss of action a in state x. Since Q^{π_t}_t(x, a) is unknown, however, the algorithm has to use some estimator of Q^{π_t}_t(x, a) instead, whose bias and variance both contribute to the b_t function. Usually, b_t(x, a) is large for a rarely-visited state-action pair (x, a) and is inversely related to q_t(x, a), which is exactly why most analyses rely on the assumption that some distribution mismatch coefficient related to q^⋆(x, a)/q_t(x, a) is bounded (see e.g., [3, 31]).
²Full-information feedback, on the other hand, refers to the easier setting where the entire loss function ℓ_t is revealed to the learner at the end of episode t.
On the other hand, an important observation is that while V^{π^⋆}(x_0; b_t) can be prohibitively large, its counterpart with respect to the learner's own policy, V^{π_t}(x_0; b_t), is usually nicely bounded. For example, if b_t(x, a) is inversely related to q_t(x, a) as mentioned, then V^{π_t}(x_0; b_t) = ∑_{x,a} q_t(x, a) b_t(x, a) is small no matter how small q_t(x, a) may be for some (x, a). This observation, together with the linearity property V^π(x_0; ℓ_t − b_t) = V^π(x_0; ℓ_t) − V^π(x_0; b_t), suggests that we treat ℓ_t − b_t as the loss function of the problem, or in other words, add a (negative) bonus to each state-action pair, which intuitively encourages exploration due to underestimation. Indeed, assume for a moment that Eq. (1) still roughly holds even when we treat ℓ_t − b_t as the loss function:
∑_{t=1}^T V^{π_t}(x_0; ℓ_t − b_t) − ∑_{t=1}^T V^{π^⋆}(x_0; ℓ_t − b_t) ≲ o(T) + ∑_{t=1}^T V^{π^⋆}(x_0; b_t).   (2)
Then, by linearity and rearranging, we have
Reg = ∑_{t=1}^T V^{π_t}_t(x_0) − ∑_{t=1}^T V^{π^⋆}_t(x_0) ≲ o(T) + ∑_{t=1}^T V^{π_t}(x_0; b_t).   (3)
Due to the switch from π^⋆ to π_t in the last term compared to Eq. (1), this is usually enough to prove a desirable regret bound without extra assumptions.
The caveat of this discussion is the assumption of Eq. (2). Indeed, after adding the bonuses, which themselves contribute additional bias and variance, one should expect the b_t on the right-hand side of Eq. (2) to become something larger, breaking the cancellation effect needed to achieve Eq. (3). The definition of b_t essentially becomes circular in this sense.
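As a quick numerical illustration of the asymmetry between the two value terms (a made-up toy example, not from the paper): take b_t(x, a) = β/q_t(x, a). Then V^{π_t}(x_0; b_t) = ∑_{x,a} q_t(x, a) · β/q_t(x, a) = β · (number of state-action pairs), bounded no matter how small any q_t(x, a) is, while V^{π^⋆}(x_0; b_t) explodes as soon as π^⋆ concentrates on a pair that π_t rarely visits.

```python
import numpy as np

n, beta = 6, 0.1                         # toy: n state-action pairs, bonus scale beta
q_t = np.full(n, 1.0)                    # learner's occupancy over the n pairs
q_t[0] = 1e-6                            # one pair is almost never visited
q_t /= q_t.sum()
q_star = np.zeros(n); q_star[0] = 1.0    # the optimal policy lives on that pair

b_t = beta / q_t                         # bonus inversely related to q_t

print(np.dot(q_t, b_t))                  # = beta * n = 0.6: always bounded
print(np.dot(q_star, b_t))               # ~ beta / 1e-6 = 1e5: prohibitively large
```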
Dilated bonuses for policy optimization. To address this issue, we take a closer look at the policy optimization algorithm specifically. As mentioned, policy optimization decomposes the problem into individual multi-armed bandit problems, one in each state, and then performs local optimization. This is based on the well-known performance difference lemma [17]:
Reg = ∑_x q^⋆(x) ∑_{t=1}^T ∑_a (π_t(a|x) − π^⋆(a|x)) Q^{π_t}_t(x, a),
showing that in each state x, the learner faces a bandit problem with Q^{π_t}_t(x, a) as the loss of action a. Correspondingly, incorporating the bonuses b_t into policy optimization means subtracting the bonus Q^{π_t}(x, a; b_t) from Q^{π_t}_t(x, a) for each action a in each state x. Recall that Q^{π_t}(x, a; b_t) satisfies the Bellman equation Q^{π_t}(x, a; b_t) = b_t(x, a) + E_{x′∼P(·|x,a)} E_{a′∼π_t(·|x′)}[Q^{π_t}(x′, a′; b_t)]. To resolve the issue mentioned earlier, we propose to replace this bonus function Q^{π_t}(x, a; b_t) with its dilated version B_t(x, a), satisfying the following dilated Bellman equation:
B_t(x, a) = b_t(x, a) + (1 + 1/H) E_{x′∼P(·|x,a)} E_{a′∼π_t(·|x′)}[B_t(x′, a′)]   (4)
(with B_t(x_H, a) = 0 for all a). The only difference compared to the standard Bellman equation is the extra (1 + 1/H) factor, which slightly increases the weight of deeper layers and thus intuitively induces more exploration for those layers.
Due to the extra bonus compared to Q^{π_t}(x, a; b_t), the regret bound also increases accordingly. In all our applications, this extra amount of regret turns out to be of the form (1/H) ∑_{t=1}^T ∑_{x,a} q^⋆(x) π_t(a|x) B_t(x, a), leading to
∑_x q^⋆(x) ∑_{t=1}^T ∑_a (π_t(a|x) − π^⋆(a|x)) (Q^{π_t}_t(x, a) − B_t(x, a)) ≤ o(T) + ∑_{t=1}^T V^{π^⋆}(x_0; b_t) + (1/H) ∑_{t=1}^T ∑_{x,a} q^⋆(x) π_t(a|x) B_t(x, a).   (5)
With some direct calculation, one can show that this is enough to obtain a regret bound that is only a constant factor larger than the desired bound in Eq. (3)! This is summarized in the following lemma.
Lemma 3.1. If Eq. (5) holds with B_t defined in Eq. (4), then Reg ≤ o(T) + 3 ∑_{t=1}^T V^{π_t}(x_0; b_t).
The high-level idea of the proof is to show that the bonuses added to a layer h are enough to cancel the large bias/variance terms (including those coming from the bonus itself) of layer h+1. Cancellation therefore happens in a layer-by-layer manner, except at layer 0, where the total amount of bonus can be shown to be at most (1 + 1/H)^H ∑_{t=1}^T V^{π_t}(x_0; b_t) ≤ 3 ∑_{t=1}^T V^{π_t}(x_0; b_t). Recalling again that V^{π_t}(x_0; b_t) is usually nicely bounded, we thus arrive at a favorable regret guarantee without extra assumptions.
Of course, since the transition is unknown, we cannot compute B_t exactly. However, Lemma 3.1 is robust enough to handle either a good approximate version of B_t (see Lemma B.1) or a version where Eq. (4) and Eq. (5) only hold in expectation (see Lemma B.2), which is enough for us to handle the unknown transition. In the next three sections, we apply this general idea to different settings, showing what b_t and B_t are concretely in each case; a minimal sketch of the recursion in Eq. (4) follows below.
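The sketch below (toy sizes and random P, π, b, not from the paper) computes B_t by backward induction over the layers according to Eq. (4), and checks the bookkeeping behind Lemma 3.1: with b ≡ 1, the layer-0 bonus equals a geometric sum that never exceeds (1 + 1/H)^H · H ≤ 3H, i.e., the dilation only costs a constant factor.

```python
import numpy as np

rng = np.random.default_rng(2)
H, S, A = 5, 4, 3
P = [rng.dirichlet(np.ones(S), size=(S, A)) for _ in range(H)]   # P[h][x, a, x']
pi = [rng.dirichlet(np.ones(A), size=S) for _ in range(H)]       # pi[h][x, a]

def dilated_bonus(b):
    """Backward induction for Eq. (4): B(x,a) = b(x,a) + (1 + 1/H) E[B(x',a')]."""
    B = np.zeros((S, A))                           # B at the terminal layer is 0
    for h in reversed(range(H)):
        EB = (pi[h] * B).sum(axis=1)               # E_{a'~pi(.|x')}[B(x', a')] per x'
        B = b[h] + (1.0 + 1.0 / H) * (P[h] @ EB)   # P[h] @ EB = E_{x'~P(.|x,a)}[...]
    return B                                       # bonuses at layer 0

B0 = dilated_bonus([rng.random((S, A)) for _ in range(H)])

# With b = 1 everywhere, the layer-0 bonus is sum_{k<H} (1 + 1/H)^k <= 3H
B1 = dilated_bonus([np.ones((S, A)) for _ in range(H)])
assert np.allclose(B1, sum((1 + 1 / H) ** k for k in range(H)))
assert B1.max() <= 3 * H
print(B0.max(), B1.max())
```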
4 The Tabular Case
In this section, we study the tabular case, where the number of states is finite. We propose a policy optimization algorithm with Õ(√T) regret, improving the Õ(T^{2/3}) regret of [27]. See Algorithm 1 for the complete pseudocode.
Algorithm design. First, to handle the unknown transition, we follow the common practice (dating back to [13]) of maintaining a confidence set of the transition, updated whenever the visitation count of some state-action pair doubles. We call the period between two model updates an epoch, and use P_k to denote the confidence set for epoch k, formally defined in Eq. (10).
In episode t, the policy π_t is defined via the standard multiplicative weight algorithm (also connected to Natural Policy Gradient [18, 3, 30]), but importantly with the dilated bonuses incorporated, such that π_t(a|x) ∝ exp(−η ∑_{τ=1}^{t−1} (Q̂_τ(x, a) − B_τ(x, a))). Here, η is a step-size parameter, Q̂_τ(x, a) is an importance-weighted estimator of Q^{π_τ}_τ(x, a) defined in Eq. (7), and B_τ(x, a) is the dilated bonus defined in Eq. (9). More specifically, for a state x in layer h, Q̂_t(x, a) is defined as L_{t,h} 1_t(x, a) / (q̄_t(x, a) + γ), where 1_t(x, a) indicates whether (x, a) is visited during episode t; L_{t,h} is the total loss suffered by the learner from layer h to the end of the episode; q̄_t(x, a) = max_{P̂∈P_k} q^{P̂,π_t}(x, a) is the largest plausible value of q_t(x, a) within the confidence set, which can be computed efficiently using the COMP-UOB procedure of [15] (see also Appendix C.1); and, finally, γ is a parameter used to control the maximum magnitude of Q̂_t(x, a), inspired by the work of [23].
To get a sense of this estimator, consider the special case where γ = 0 and the transition is known, so that we can set P_k = {P} and thus q̄_t = q_t. Then, since the expectation of L_{t,h} conditioned on (x, a) being visited is Q^{π_t}_t(x, a), and the expectation of 1_t(x, a) is q_t(x, a), we see that Q̂_t(x, a) is an unbiased estimator of Q^{π_t}_t(x, a). The extra complication is simply due to the transition being unknown, forcing us to use q̄_t and γ > 0 to make sure that Q̂_t(x, a) is an optimistic underestimator, an idea similar to [15].
Next, we explain the design of the dilated bonus B_t. Following the discussion in Section 3, we first figure out what the corresponding b_t function in Eq. (1) is, by analyzing the regret bound without any bonuses. The concrete form of b_t turns out to be Eq. (8), whose value at (x, a) is independent of a and is thus written as b_t(x) for simplicity. Note that Eq. (8) depends on the occupancy measure lower bound q̲_t(x, a) = min_{P̂∈P_k} q^{P̂,π_t}(x, a), the counterpart of q̄_t(x, a), which can also be computed efficiently using a procedure similar to COMP-UOB (see Appendix C.1). Once again, to get a sense of this, consider the special case with a known transition, so that we can set P_k = {P} and thus q̄_t = q̲_t = q_t. Then one sees that b_t(x) is simply upper bounded by E_{a∼π_t(·|x)}[3γH/q_t(x, a)] = 3γH|A|/q_t(x), which is inversely related to the probability of visiting state x, matching the intuition provided in Section 3 (that b_t(x) is large if x is rarely visited). The extra complication of Eq. (8) is again just due to the unknown transition.
With b_t(x) ready, the final form of the dilated bonus B_t is defined following the dilated Bellman equation of Eq. (4), except that, since P is unknown, we once again apply optimism and take the largest possible value within the confidence set (see Eq. (9)). This can again be computed efficiently; see Appendix C.1. This concludes the algorithm design.
³We use y +← z as shorthand for the increment operation y ← y + z.
Algorithm 1: Policy Optimization with Dilated Bonuses (Tabular Case)
Parameters: δ ∈ (0, 1), η = min{1/(24H³), 1/√(|X||A|HT)}, γ = 2ηH.
Initialization: set the epoch index k = 1 and the confidence set P_1 to the set of all transition functions. For all (x, a, x′), initialize counters N_0(x, a) = N_1(x, a) = 0 and N_0(x, a, x′) = N_1(x, a, x′) = 0.
for t = 1, 2, ..., T do
Step 1: Compute and execute policy. Execute π_t for one episode, where
π_t(a|x) ∝ exp(−η ∑_{τ=1}^{t−1} (Q̂_τ(x, a) − B_τ(x, a))),   (6)
and obtain the trajectory {(x_{t,h}, a_{t,h}, ℓ_t(x_{t,h}, a_{t,h}))}_{h=0}^{H−1}.
Step 2: Construct Q-function estimators. For all h ∈ {0, ..., H−1} and (x, a) ∈ X_h × A,
Q̂_t(x, a) = (L_{t,h} / (q̄_t(x, a) + γ)) 1_t(x, a),   (7)
with L_{t,h} = ∑_{i=h}^{H−1} ℓ_t(x_{t,i}, a_{t,i}), q̄_t(x, a) = max_{P̂∈P_k} q^{P̂,π_t}(x, a), and 1_t(x, a) = 1{x_{t,h} = x, a_{t,h} = a}.
Step 3: Construct bonus functions. For all (x, a) ∈ X × A,
b_t(x) = E_{a∼π_t(·|x)}[ (3γH + H(q̄_t(x, a) − q̲_t(x, a))) / (q̄_t(x, a) + γ) ],   (8)
B_t(x, a) = b_t(x) + (1 + 1/H) max_{P̂∈P_k} E_{x′∼P̂(·|x,a)} E_{a′∼π_t(·|x′)}[B_t(x′, a′)],   (9)
where q̲_t(x, a) = min_{P̂∈P_k} q^{P̂,π_t}(x, a) and B_t(x_H, a) = 0 for all a.
Step 4: Update model estimation. For all h < H: N_k(x_{t,h}, a_{t,h}) +← 1 and N_k(x_{t,h}, a_{t,h}, x_{t,h+1}) +← 1.³
if ∃h such that N_k(x_{t,h}, a_{t,h}) ≥ max{1, 2N_{k−1}(x_{t,h}, a_{t,h})} then increment the epoch index, k +← 1, copy the counters, compute the empirical transition P̄_k(x′|x, a) = N_k(x, a, x′)/max{1, N_k(x, a)}, and set the confidence set
P_k = { P̂ : |P̂(x′|x, a) − P̄_k(x′|x, a)| ≤ conf_k(x′|x, a) ∀(x, a, x′) ∈ X_h × A × X_{h+1}, h = 0, 1, ..., H−1 },   (10)
where conf_k(x′|x, a) = 4√( P̄_k(x′|x, a) ln(T|X||A|/δ) / max{1, N_k(x, a)} ) + 28 ln(T|X||A|/δ) / (3 max{1, N_k(x, a)}).

Regret analysis. The regret guarantee of Algorithm 1 is presented below.
Theorem 4.1. Algorithm 1 ensures that, with probability 1 − O(δ), Reg = Õ(H²|X|√(|A|T) + H⁴).
Again, this improves the Õ(T^{2/3}) regret of [27]. It almost matches the best existing upper bound for this problem, Õ(H|X|√(|A|T)) [15]. While it is unclear to us whether this small gap can be closed using policy optimization, we point out that our algorithm is arguably more efficient than that of [15], which performs global convex optimization over the set of all plausible occupancy measures in each episode.
The complete proof of this theorem is deferred to Appendix C. Here, we only sketch an outline of the proof of Eq. (5), which, according to the discussion in Section 3, is the most important part of the analysis. Specifically, we decompose the left-hand side of Eq. (5), ∑_x q^⋆(x) ∑_t ⟨π_t(·|x) − π^⋆(·|x), Q_t(x, ·) − B_t(x, ·)⟩, as BIAS-1 + BIAS-2 + REG-TERM, where
• BIAS-1 = ∑_x q^⋆(x) ∑_t ⟨π_t(·|x), Q_t(x, ·) − Q̂_t(x, ·)⟩ measures the amount of underestimation of Q̂_t related to π_t, which can be bounded by ∑_t ∑_{x,a} q^⋆(x) π_t(a|x) (2γH + H(q̄_t(x, a) − q̲_t(x, a))) / (q̄_t(x, a) + γ) + Õ(H/η) with high probability (Lemma C.1);
• BIAS-2 = ∑_x q^⋆(x) ∑_t ⟨π^⋆(·|x), Q̂_t(x, ·) − Q_t(x, ·)⟩ measures the amount of overestimation of Q̂_t related to π^⋆, which can be bounded by Õ(H/η) since Q̂_t is an underestimator (Lemma C.2);
• REG-TERM = ∑_x q^⋆(x) ∑_t ⟨π_t(·|x) − π^⋆(·|x), Q̂_t(x, ·) − B_t(x, ·)⟩ is directly controlled by the multiplicative weight update, and is bounded by ∑_t ∑_{x,a} q^⋆(x) π_t(a|x) (γH/(q̄_t(x, a) + γ) + B_t(x, a)/H) + Õ(H/η) with high probability (Lemma C.3).
Combining these with the definition of b_t proves the key Eq. (5) (with the o(T) term being Õ(H/η)). A minimal code sketch of the per-episode loop of Algorithm 1 follows.
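The sketch below simulates one run of Algorithm 1 under a strong simplification — a known transition, so that q̄_t = q̲_t = q_t and the bonus of Eq. (8) reduces to b_t(x) = E_{a∼π_t}[3γH/(q_t(x, a) + γ)] — and omits all confidence-set machinery (Eq. (10)). Sizes, losses, and parameter values are toy stand-ins, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(4)
H, S, A, T = 3, 3, 2, 50
eta, gamma = 0.01, 0.02
P = [rng.dirichlet(np.ones(S), size=(S, A)) for _ in range(H)]   # known transition
loss = [rng.random((S, A)) for _ in range(H)]                    # fixed losses for the demo
score = [np.zeros((S, A)) for _ in range(H)]                     # running sum of (Qhat - B)

for t in range(T):
    pi = [np.exp(-eta * s) for s in score]
    pi = [p / p.sum(axis=1, keepdims=True) for p in pi]          # Eq. (6)

    # occupancy measures q_t under pi (exact, since P is known)
    q, qx = [None] * H, np.zeros(S)
    qx[0] = 1.0
    for h in range(H):
        q[h] = qx[:, None] * pi[h]
        qx = np.einsum('xa,xay->y', q[h], P[h])

    # roll out one trajectory and record losses-to-go L_{t,h}
    traj, x = [], 0
    for h in range(H):
        a = rng.choice(A, p=pi[h][x])
        traj.append((x, a, loss[h][x, a]))
        x = rng.choice(S, p=P[h][x, a])
    L = np.cumsum([l for (_, _, l) in traj][::-1])[::-1]         # L[h] = sum_{i>=h} losses

    # Eq. (7): importance-weighted estimator, nonzero only at visited pairs
    Qhat = [np.zeros((S, A)) for _ in range(H)]
    for h, (xh, ah, _) in enumerate(traj):
        Qhat[h][xh, ah] = L[h] / (q[h][xh, ah] + gamma)

    # known-P bonus b_t(x), then the dilated recursion of Eq. (9)/(4)
    Bs, B_next = [None] * H, np.zeros((S, A))
    for h in reversed(range(H)):
        b = (pi[h] * (3 * gamma * H / (q[h] + gamma))).sum(axis=1)
        EB = (pi[h] * B_next).sum(axis=1)
        Bs[h] = b[:, None] + (1 + 1 / H) * (P[h] @ EB)
        B_next = Bs[h]

    for h in range(H):
        score[h] += Qhat[h] - Bs[h]                              # feeds Eq. (6) next episode
```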
5 The Linear-Q Case
In this section, we move on to the more challenging setting where the number of states may be infinite, and function approximation is used to generalize the learner's experience to unseen states. We consider the most basic linear function approximation scheme, where for any π, the Q-function Q^π_t(x, a) is linear in some known feature vector φ(x, a), as formally stated below.
Assumption 1 (Linear-Q). Let φ(x, a) ∈ R^d be a known feature vector of the state-action pair (x, a). We assume that for any episode t, policy π, and layer h, there exists an unknown weight vector θ^π_{t,h} ∈ R^d such that for all (x, a) ∈ X_h × A, Q^π_t(x, a) = φ(x, a)^⊤ θ^π_{t,h}. Without loss of generality, we assume ‖φ(x, a)‖ ≤ 1 for all (x, a) and ‖θ^π_{t,h}‖ ≤ √(dH) for all t, h, π.
For a justification of the last condition on the norms, see [30, Lemma 8]. This linear-Q assumption has been made in several recent works with stationary losses [1, 30], and also in [24] with the same adversarial losses.⁴ It is weaker than the linear MDP assumption (see Section 6), as it does not pose explicit structural requirements on the loss and transition functions. Due to this generality, however, our algorithm also requires access to a simulator to obtain samples drawn from the transition, as formally stated below.
Assumption 2 (Simulator). The learner has access to a simulator which takes a state-action pair (x, a) ∈ X × A as input and generates a random outcome of the next state, x′ ∼ P(·|x, a).
Note that this assumption is also made by [24] and by earlier works with stationary losses (see e.g., [4, 28]).⁵ In this setting, we propose a new policy optimization algorithm with Õ(T^{2/3}) regret. See Algorithm 2 for the pseudocode.
Algorithm design. The algorithm still follows the multiplicative weight update, Eq. (11), in each state x ∈ X_h, but now with φ(x, a)^⊤ θ̂_{t,h} as an estimator of Q^{π_t}_t(x, a) = φ(x, a)^⊤ θ^{π_t}_{t,h}, and with BONUS(t, x, a) as the dilated bonus B_t(x, a). Specifically, the construction of the weight estimator θ̂_{t,h} follows the idea of [24] (which itself is based on the linear bandit literature) and is defined in Eq. (12) as Σ̂⁺_{t,h} φ(x_{t,h}, a_{t,h}) L_{t,h}. Here, Σ̂⁺_{t,h} is an ε-accurate estimator of (γI + Σ_{t,h})⁻¹, where γ is a small parameter and Σ_{t,h} = E_t[φ(x_{t,h}, a_{t,h}) φ(x_{t,h}, a_{t,h})^⊤] is the covariance matrix for layer h under policy π_t; L_{t,h} = ∑_{i=h}^{H−1} ℓ_t(x_{t,i}, a_{t,i}) is again the loss suffered by the learner from layer h onward, whose conditional expectation is Q^{π_t}_t(x_{t,h}, a_{t,h}) = φ(x_{t,h}, a_{t,h})^⊤ θ^{π_t}_{t,h}. Therefore, as γ and ε approach 0, one sees that θ̂_{t,h} is indeed an unbiased estimator of θ^{π_t}_{t,h}. We adopt the GEOMETRICRESAMPLING procedure (see Algorithm 7) of [24] to compute Σ̂⁺_{t,h}, which involves calling the simulator multiple times.
⁴The assumption in [24] is stated slightly differently (e.g., their feature vectors are independent of the action). However, it is straightforward to verify that the two versions are equivalent.
⁵The simulator required by [24] is in fact slightly weaker than ours and those of earlier works — it only needs to be able to generate a trajectory starting from x_0 for any policy.
Algorithm 2: Policy Optimization with Dilated Bonuses (Linear-Q Case)
Parameters: γ, β, η, ε, M = ⌈24 ln(dHT)/(ε²γ²)⌉, N = ⌈(2/γ) ln(1/γ)⌉.
for t = 1, 2, ..., T do
Step 1: Interact with the environment. Execute π_t, which is defined such that for each x ∈ X_h,
π_t(a|x) ∝ exp(−η ∑_{τ=1}^{t−1} (φ(x, a)^⊤ θ̂_{τ,h} − BONUS(τ, x, a))),   (11)
and obtain the trajectory {(x_{t,h}, a_{t,h}, ℓ_t(x_{t,h}, a_{t,h}))}_{h=0}^{H−1}.
Step 2: Construct covariance matrix inverse estimators. {Σ̂⁺_{t,h}}_{h=0}^{H−1} = GEOMETRICRESAMPLING(t, M, N, γ) (see Algorithm 7).
Step 3: Construct Q-function weight estimators. For h = 0, ..., H−1, compute
θ̂_{t,h} = Σ̂⁺_{t,h} φ(x_{t,h}, a_{t,h}) L_{t,h}, where L_{t,h} = ∑_{i=h}^{H−1} ℓ_t(x_{t,i}, a_{t,i}).   (12)
Algorithm 3: BONUS(t, x, a)
if BONUS(t, x, a) has been called before, then return the value of BONUS(t, x, a) calculated last time.
Let h be such that x ∈ X_h. If h = H, then return 0.
Compute π_t(·|x), defined in Eq. (11) (which involves recursive calls to BONUS for smaller t).
Get a sample of the next state: x′ ← SIMULATOR(x, a).
Compute π_t(·|x′) (again, defined in Eq. (11)), and sample an action a′ ∼ π_t(·|x′).
return β‖φ(x, a)‖²_{Σ̂⁺_{t,h}} + E_{j∼π_t(·|x)}[β‖φ(x, j)‖²_{Σ̂⁺_{t,h}}] + (1 + 1/H) BONUS(t, x′, a′).
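Algorithm 3's recursion is easy to mis-read, so here is a minimal sketch of its control flow only: the memoization, the single simulator sample replacing the expectation over (x′, a′), and the (1 + 1/H) dilation. Everything here — the toy simulator, feature map, and the fixed uniform policy — is a made-up stand-in; in the actual algorithm, π_t itself depends recursively on BONUS via Eq. (11).

```python
import numpy as np

rng = np.random.default_rng(5)
H, d, A, beta = 3, 4, 2, 0.05
Sigma_plus = [np.eye(d) for _ in range(H)]       # stand-in for the (gamma*I + Sigma)^{-1} estimates

def phi(x, a):                                   # toy feature map with ||phi|| <= 1
    h, s = x
    v = np.cos(np.arange(1, d + 1) * (s + 2 * a + 1) + h)
    return v / np.linalg.norm(v)

def simulator(x, a):                             # toy next-state sampler; states are (layer, index)
    h, _ = x
    return (h + 1, int(rng.integers(3)))

def policy(t, x):                                # stand-in; really defined through Eq. (11)
    return np.full(A, 1.0 / A)

_memo = {}
def bonus(t, x, a):
    key = (t, x, a)
    if key in _memo:                             # "if BONUS(t, x, a) has been called before"
        return _memo[key]
    h = x[0]
    if h == H:                                   # terminal layer: B_t(x_H, .) = 0
        return 0.0
    pi_x = policy(t, x)
    x2 = simulator(x, a)                         # one sample replaces E_{x'~P(.|x,a)}
    a2 = int(rng.choice(A, p=policy(t, x2)))     # one sample replaces E_{a'~pi_t(.|x')}
    sq = lambda j: phi(x, j) @ Sigma_plus[h] @ phi(x, j)   # ||phi(x, j)||^2 in the Sigma+ norm
    val = beta * sq(a) + sum(pi_x[j] * beta * sq(j) for j in range(A)) \
          + (1 + 1 / H) * bonus(t, x2, a2)
    _memo[key] = val                             # the sampled value is fixed on later calls
    return val

print(bonus(0, (0, 0), 1))
```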
Next, we explain the design of the dilated bonus. Again following the general principle discussed in Section 3, we identify b_t(x, a) in this case as β‖φ(x, a)‖²_{Σ̂⁺_{t,h}} + E_{j∼π_t(·|x)}[β‖φ(x, j)‖²_{Σ̂⁺_{t,h}}] for some parameter β > 0. Further following the dilated Bellman equation, Eq. (4), we define BONUS(t, x, a) recursively as in the last line of Algorithm 3, where, for efficient implementation, we replace the expectation E_{(x′,a′)}[BONUS(t, x′, a′)] with a single sample.
However, even more care is needed to actually implement the algorithm. First, since the state space is potentially infinite, one cannot calculate and store the value of BONUS(t, x, a) for all (x, a); the values can only be calculated on the fly when needed. Moreover, unlike the estimators of Q^{π_t}_t(x, a), which can be succinctly represented and stored via the weight estimator θ̂_{t,h}, this is not possible for BONUS(t, x, a), due to the lack of any structure. Even worse, the definition of BONUS(t, x, a) itself depends on π_t(·|x) and on π_t(·|x′) for the after-state x′, which, according to Eq. (11), further depends on BONUS(τ, x, a) for τ < t, resulting in a complicated recursive structure. This is also why we present it as a procedure in Algorithm 3 (instead of as a function B_t(x, a)). In total, this leads to (TAH)^{O(H)} calls to the simulator. Whether this can be improved is left as a future direction.
Regret guarantee. By showing that Eq. (5) holds in expectation for our algorithm, we obtain the following regret guarantee (see Appendix D for the proof).
Theorem 5.1. Under Assumption 1 and Assumption 2, with appropriate choices of the parameters γ, β, η, ε, Algorithm 2 ensures E[Reg] = Õ(H²(dT)^{2/3}) (the dependence on |A| is only logarithmic).
This matches the Õ(T^{2/3}) regret of [24, Theorem 1] without the need of their assumption, which essentially says that the learner is given an exploratory policy to start with.⁶ To our knowledge, this is the first no-regret algorithm for linear function approximation (with adversarial losses and bandit feedback) when no exploratory assumptions are made.
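The paper's GEOMETRICRESAMPLING subroutine (Algorithm 7, in its appendix) is not reproduced here; the sketch below only illustrates the underlying idea it adopts from [24]: estimating (γI + Σ)⁻¹ by a truncated Neumann series whose terms are products of independent rank-one samples φφ^⊤ obtained from the simulator. The feature distribution, constants, and truncation lengths are toy choices of ours.

```python
import numpy as np

rng = np.random.default_rng(6)
d, gamma, N, M = 4, 0.1, 300, 1000     # dim, regularizer, series length, averaging repeats

def sample_phi():
    """One draw of phi(x_h, a_h) under the current policy; a toy distribution here."""
    v = rng.normal(size=d)
    return v / max(1.0, np.linalg.norm(v))          # enforce ||phi|| <= 1

# A = gamma*I + Sigma has eigenvalues in [gamma, gamma + 1], so with
# alpha = 1/(gamma + 1), the series alpha * sum_k (I - alpha*A)^k converges to A^{-1}.
alpha = 1.0 / (gamma + 1.0)
est = np.zeros((d, d))
for _ in range(M):                                  # average M independent series estimates
    acc, prod = np.eye(d), np.eye(d)
    for _ in range(N):                              # truncated Neumann series of length N
        p = sample_phi()
        prod = prod @ (np.eye(d) - alpha * (gamma * np.eye(d) + np.outer(p, p)))
        acc = acc + prod                            # E[prod after k steps] = (I - alpha*A)^k
    est += alpha * acc
est /= M

# compare against the exact inverse (Sigma itself estimated by plain Monte Carlo)
Sigma = sum(np.outer(p, p) for p in (sample_phi() for _ in range(100000))) / 100000
print(np.max(np.abs(est - np.linalg.inv(gamma * np.eye(d) + Sigma))))   # shrinks as M grows
```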
6 Improvements with an Exploratory Policy
The previous sections have demonstrated the role of dilated bonuses in providing global exploration. In this section, we further discuss what dilated bonuses can achieve when an exploratory policy π_0 is given in linear function approximation settings. Formally, let Σ_h = E[φ(x_h, a_h) φ(x_h, a_h)^⊤] denote the covariance matrix of the features at layer h when following π_0 (that is, the expectation is taken over a trajectory {(x_h, a_h)}_{h=0}^{H−1} with a_h ∼ π_0(·|x_h)); then we assume the following.
Assumption 3 (An exploratory policy). An exploratory policy π_0 is given to the learner ahead of time, and it guarantees that, for any h, the eigenvalues of Σ_h are at least λ_min > 0.
The same assumption is made by [24] (where they simply let π_0 be the uniform exploration policy). As mentioned, under this assumption they achieve Õ(T^{2/3}) regret. By slightly modifying our Algorithm 2 (specifically, executing π_0 with a small probability in each episode and setting the parameters differently), we achieve the following improved result.
Theorem 6.1. Under Assumptions 1, 2, and 3, Algorithm 8 ensures E[Reg] = Õ(√(H⁴T/λ_min) + √(H⁵dT)).
Removing the simulator. One drawback of our algorithm is that it requires a number of simulator calls that is exponential in H. To address this issue — and, in fact, to completely remove the need of a simulator — we further consider a special case where the transition function also has a low-rank structure, known as the linear MDP setting.
Assumption 4 (Linear MDP). The MDP satisfies Assumption 1 and, for any h and x′ ∈ X_{h+1}, there exists a weight vector ν^{x′}_h ∈ R^d such that P(x′|x, a) = φ(x, a)^⊤ ν^{x′}_h for all (x, a) ∈ X_h × A.
There is a surge of works studying this setting, with [7] being the closest to ours. They achieve Õ(√T) regret but require full-information feedback of the loss functions, and there are no existing results for the bandit-feedback setting without a simulator. We propose the first algorithm with sublinear regret for this problem, shown in Algorithm 10 of Appendix F due to space limits. The structure of Algorithm 10 is very similar to that of Algorithm 2, with the same definition of b_t(x, a). However, due to the low-rank transition structure, we are now able to efficiently construct estimators of B_t(x, a) even for unseen state-action pairs using function approximation, bypassing the need for a simulator. Specifically, observe that, according to Eq. (4), for each x ∈ X_h, under Assumption 4, B_t(x, a) can be written as b_t(x, a) + φ(x, a)^⊤ Λ^{π_t}_{t,h}, where Λ^{π_t}_{t,h} = (1 + 1/H) ∫_{x′∈X_{h+1}} E_{a′∼π_t(·|x′)}[B_t(x′, a′)] ν^{x′}_h dx′ is a vector independent of (x, a). Thus, by the same idea used to estimate θ^{π_t}_{t,h}, we can estimate Λ^{π_t}_{t,h} as well, thereby succinctly representing B_t(x, a) for all (x, a).
Recall that estimating θ^{π_t}_{t,h} (and thus also Λ^{π_t}_{t,h}) requires constructing the covariance matrix inverse estimate Σ̂⁺_{t,h}. Due to the lack of a simulator, another important change in the algorithm is to construct Σ̂⁺_{t,h} from online samples. To do so, we divide the entire horizon into epochs of equal length and only update the policy optimization algorithm at the beginning of each epoch. Within an epoch, we keep executing the same policy and collect several trajectories, which are then used to construct Σ̂⁺_{t,h}. With these changes, we successfully remove the need of a simulator and prove the guarantee below.
Theorem 6.2. Under Assumption 3 and Assumption 4, Algorithm 10 ensures E[Reg] = Õ(T^{6/7}) (see Appendix F for the dependence on other parameters).
One potential direction for further improving our algorithm is to reuse data across different epochs, an idea adopted by several recent works [35, 19] for different problems. We also conjecture that Assumption 3 can be removed, but we have met some technical difficulties in proving so. We leave these for future investigation.
⁶Under an even stronger assumption that every policy is exploratory, they also improve the regret to Õ(√T); see [24, Theorem 2].
Acknowledgments and Disclosure of Funding
We thank Gergely Neu and Julia Olkhovskaya for discussions on the technical details of their GEOMETRICRESAMPLING procedure. This work is supported by NSF Award IIS-1943607 and a Google Faculty Research Award.
1. What is the focus of the paper regarding policy optimization algorithms?
2. What are the strengths of the proposed bonus design mechanism?
3. How does the reviewer assess the novelty and improvement of the proposed algorithm compared to prior works?
4. Are there any minor suggestions or comments regarding related works?
Summary Of The Paper
This paper introduces a new bonus design mechanism and utilizes it to derive new policy optimization algorithms with improved rates for adversarial MDPs.

Review
This is an interesting paper, for the following reasons. The paper is well written: the mechanism of the bonus design and its technical motivation are clearly explained in Section 3. Implementing the dilated bonus on tabular MDPs gives a new policy optimization algorithm with √T-regret, which improves over Shani et al., 2020. All other known algorithms with the same rate (e.g., Jin et al., 2020a) require solving a large-scale convex optimization problem over the occupancy measure in each episode, while the algorithm in this paper doesn't, thanks to its clever "local" bonus design. I checked the proofs for Sections 3 & 4 in detail, and I believe they are correct. The authors further apply their bonus technique to adversarial linear MDPs and obtain improved results over prior art. Overall, this paper makes solid technical contributions by designing more efficient algorithms with improved rates.

Minor suggestion on related works: the idea of adding γ to the denominator of Q̂ was actually first proposed in [1], which not only encourages exploration but also makes it possible to obtain high-probability bounds. If I remember correctly, [1] also proves a lemma that is highly similar to Lemma A.2 in this paper.
[1] Explore no more: Improved high-probability regret bounds for non-stochastic bandits, Gergely Neu, NIPS 2015.
NIPS
Title Policy Optimization in Adversarial MDPs: Improved Exploration via Dilated Bonuses Abstract Policy optimization is a widely-used method in reinforcement learning. Due to its local-search nature, however, theoretical guarantees on global optimality often rely on extra assumptions on the Markov Decision Processes (MDPs) that bypass the challenge of global exploration. To eliminate the need of such assumptions, in this work, we develop a general solution that adds dilated bonuses to the policy update to facilitate global exploration. To showcase the power and generality of this technique, we apply it to several episodic MDP settings with adversarial losses and bandit feedback, improving and generalizing the state-of-the-art. Specifically, in the tabular case, we obtain Õ( √ T ) regret where T is the number of episodes, improving the Õ(T /3) regret bound by [27]. When the number of states is infinite, under the assumption that the state-action values are linear in some low-dimensional features, we obtain Õ(T /3) regret with the help of a simulator, matching the result of [24] while importantly removing the need of an exploratory policy that their algorithm requires. To our knowledge, this is the first algorithm with sublinear regret for linear function approximation with adversarial losses, bandit feedback, and no exploratory assumptions. Finally, we also discuss how to further improve the regret or remove the need of a simulator using dilated bonuses, when an exploratory policy is available.1 N/A √ T ) regret where T is the number of episodes, improving the Õ(T 2/3) regret bound by [27]. When the number of states is infinite, under the assumption that the state-action values are linear in some low-dimensional features, we obtain Õ(T 2/3) regret with the help of a simulator, matching the result of [24] while importantly removing the need of an exploratory policy that their algorithm requires. To our knowledge, this is the first algorithm with sublinear regret for linear function approximation with adversarial losses, bandit feedback, and no exploratory assumptions. Finally, we also discuss how to further improve the regret or remove the need of a simulator using dilated bonuses, when an exploratory policy is available.1 1 Introduction Policy optimization methods are among the most widely-used methods in reinforcement learning. Its empirical success has been demonstrated in various domains such as computer games [26] and robotics [21]. However, due to its local-search nature, global optimality guarantees of policy optimization often rely on unrealistic assumptions to ensure global exploration (see e.g., [1, 3, 24, 30]), making it theoretically less appealing compared to other methods. Motivated by this issue, a line of recent works [7, 27, 2, 35] equip policy optimization with global exploration by adding exploration bonuses to the update, and prove favorable guarantees even without making extra exploratory assumptions. Moreover, they all demonstrate some robustness aspect of policy optimization (such as being able to handle adversarial losses or a certain degree of model misspecification). Despite these important progresses, however, many limitations still exist, including worse regret rates comparing to the best value-based or model-based approaches [27, 2, 35], or requiring full-information feedback on the entire loss function (as opposed to the more realistic bandit feedback) [7]. ∗Equal contribution. 
1In an improved version of this paper, we show that under the linear MDP assumption, an exploratory policy is not even needed. See https://arxiv.org/abs/2107.08346. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). To address these issues, in this work, we propose a new type of exploration bonuses called dilated bonuses, which satisfies a certain dilated Bellman equation and provably leads to improved exploration compared to existing works (Section 3). We apply this general idea to advance the state-of-the-art of policy optimization for learning finite-horizon episodic MDPs with adversarial losses and bandit feedback. More specifically, our main results are: • First, in the tabular setting, addressing the main open question left in [27], we improve their Õ(T 2/3) regret to the optimal Õ( √ T ) regret. This shows that policy optimization, which performs local optimization, is as capable as other occupancy-measure-based global optimization algorithms [15, 20] in terms of global exploration. Moreover, our algorithm is computationally more efficient than those global methods since they require solving some convex optimization in each episode. (Section 4) • Second, to further deal with large-scale problems, we consider a linear function approximation setting where the state-action values are linear in some known low-dimensional features and also a simulator is available, the same setting considered by [24]. We obtain the same Õ(T 2/3) regret while importantly removing the need of an exploratory policy that their algorithm requires. Unlike the tabular setting (where we improve existing regret rates of policy optimization), note that researchers have not been able to show any sublinear regret for policy optimization without exploratory assumptions for this problem, which shows the critical role of our proposed dilated bonuses. In fact, there are simply no existing algorithms with sublinear regret at all for this setting, be it policy-optimization-type or not. This shows the advantage of policy optimization over other approaches, when combined with our dilated bonuses. (Section 5) • Finally, while the main focus of our work is to show how dilated bonuses are able to provide global exploration, we also discuss their roles in improving the regret rate to Õ( √ T ) in the linear setting above or removing the need of a simulator for the special case of linear MDPs (with Õ(T 6/7) regret), when an exploratory policy is available. (Section 6) Related work. In the tabular setting, except for [27], most algorithms apply the occupancymeasure-based framework to handle adversarial losses (e.g., [25, 15, 9, 8]), which as mentioned is computationally expensive. For stochastic losses, there are many more different approaches such as model-based ones [13, 10, 5, 12, 34] and value-based ones [14, 11]. Theoretical studies for linear function approximation have gained increasing interest recently [32, 33, 16]. Most of them study stochastic/stationary losses, with the exception of [24, 7]. Our algorithm for the linear MDP setting bears some similarity to those of [2, 35] which consider stationary losses. However, our algorithm and analysis are arguably simpler than theirs. Specifically, they divide the state space into a known part and an unknown part, with different exploration principle and bonus design for different parts. In contrast, we enjoy a unified bonus design for all states. 
Besides, in each episode, their algorithms first execute an exploratory policy (from a policy cover), and then switch to the policy suggested by the policy optimization algorithm, which inevitably leads to linear regret when facing adversarial losses. 2 Problem Setting We consider an MDP specified by a state space X (possibly infinite), a finite action space A, and a transition function P with P (·|x, a) specifying the distribution of the next state after taking action a in state x. In particular, we focus on the finite-horizon episodic setting in which X admits a layer structure and can be partitioned into X0, X1, . . . , XH for some fixed parameter H , where X0 contains only the initial state x0, XH contains only the terminal state xH , and for any x ∈ Xh, h = 0, . . . ,H − 1, P (·|x, a) is supported on Xh+1 for all a ∈ A (that is, transition is only possible from Xh to Xh+1). An episode refers to a trajectory that starts from x0 and ends at xH following some series of actions and the transition dynamic. The MDP may be assigned with a loss function ` : X ×A→ [0, 1] so that `(x, a) specifies the loss suffered when selecting action a in state x. A policy π for the MDP is a mapping X → ∆(A), where ∆(A) denotes the set of distributions over A and π(a|x) is the probability of choosing action a in state x. Given a loss function ` and a policy π, the expected total loss of π is given by V π(x0; `) = E [∑H−1 h=0 `(xh, ah) ∣∣ ah ∼ πt(·|xh), xh+1 ∼ P (·|xh, ah) ] . It can also be defined via the Bellman equation involving the state value function V π(x; `) and the state-action value function Qπ(x, a; `) (a.k.a. Q-function) defined as below: V (xH ; `) = 0, Qπ(x, a; `) = `(x, a) + Ex′∼P (·|x,a) [V π(x′; `)] , and V π(x; `) = Ea∼π(·|x) [Qπ(x, a; `)] . We study online learning in such a finite-horizon MDP with unknown transition, bandit feedback, and adversarial losses. The learning proceeds through T episodes. Ahead of time, an adversary arbitrarily decides T loss functions `1, . . . , `T , without revealing them to the learner. Then in each episode t, the learner decides a policy πt based on all information received prior to this episode, executes πt starting from the initial state x0, generates and observes a trajectory {(xt,h, at,h, `t(xt,h, at,h))}H−1h=0 . Importantly, the learner does not observe any other information about `t (a.k.a. bandit feedback).2 The goal of the learner is to minimize the regret, defined as Reg = T∑ t=1 V πtt (x0)−min π T∑ t=1 V πt (x0), where we use V πt (x) as a shorthand for V π(x; `t) (and similarly Qπt (x, a) as a shorthand for Qπ(x, a; `t)). Without further structures, the best existing regret bound is Õ(H|X| √ |A|T ) [15], with an extra √ X factor compared to the best existing lower bound [14]. Occupancy measures. For a policy π and a state x, we define qπ(x) to be the probability (or probability measure when |X| is infinite) of visiting state x within an episode when following π. When it is necessary to highlight the dependence on the transition, we write it as qP,π(x). Further define qπ(x, a) = qπ(x)π(a|x) and qt(x, a) = qπt(x, a). Finally, we use q? as a shorthand for qπ ? where π? ∈ argminπ ∑T t=1 V π t (x0) is one of the optimal policies. Note that by definition, we have V π(x0; `) = ∑ x,a q π(x, a)`(x, a). In fact, we will overload the notation and let V π(x0; b) = ∑ x,a q π(x, a)b(x, a) for any function b : X ×A→ R (even though it might not correspond to a real loss function). Other notations. 
We denote by Et[·] and Vart[·] the expectation and variance conditioned on everything prior to episode t. For a matrix Σ and a vector z (of appropriate dimension), ‖z‖Σ denotes the quadratic norm √ z>Σz. The notation Õ(·) hides all logarithmic factors. 3 Dilated Exploration Bonuses In this section, we start with a general discussion on designing exploration bonuses (not specific to policy optimization), and then introduce our new dilated bonuses for policy optimization. For simplicity, the exposition in this section assumes a finite state space, but the idea generalizes to an infinite state space. When analyzing the regret of an algorithm, very often we run into the following form: Reg = T∑ t=1 V πtt (x0)− T∑ t=1 V π ? t (x0) ≤ o(T ) + T∑ t=1 ∑ x,a q?(x, a)bt(x, a) = o(T ) + T∑ t=1 V π ? (x0; bt), (1) for some function bt(x, a) usually related to some estimation error or variance that can be prohibitively large. For example, in policy optimization, the algorithm performs local search in each state essentially using a multi-armed bandit algorithm and treating Qπt(x, a) as the loss of action a in state x. Since Qπt(x, a) is unknown, however, the algorithm has to use some estimator of Qπt(x, a) instead, whose bias and variance both contribute to the bt function. Usually, bt(x, a) is large for a rarely-visited state-action pair (x, a) and is inversely related to qt(x, a), which is exactly why most analysis relies 2Full-information feedback, on the other hand, refers to the easier setting where the entire loss function `t is revealed to the learner at the end of episode t. on the assumption that some distribution mismatch coefficient related to q?(x,a)/qt(x,a) is bounded (see e.g., [3, 31]). On the other hand, an important observation is that while V π ? (x0; bt) can be prohibitively large, its counterpart with respect to the learner’s policy V πt(x0; bt) is usually nicely bounded. For example, if bt(x, a) is inversely related to qt(x, a) as mentioned, then V πt(x0; bt) = ∑ x,a qt(x, a)bt(x, a) is small no matter how small qt(x, a) could be for some (x, a). This observation, together with the linearity property V π(x0; `t − bt) = V π(x0; `t)− V π(x0; bt), suggests that we treat `t − bt as the loss function of the problem, or in other words, add a (negative) bonus to each state-action pair, which intuitively encourages exploration due to underestimation. Indeed, assuming for a moment that Eq. (1) still roughly holds even if we treat `t − bt as the loss function: T∑ t=1 V πt(x0; `t − bt)− T∑ t=1 V π ? (x0; `t − bt) . o(T ) + T∑ t=1 V π ? (x0; bt). (2) Then by linearity and rearranging, we have Reg = T∑ t=1 V πtt (x0)− T∑ t=1 V π ? t (x0) . o(T ) + T∑ t=1 V πt(x0; bt). (3) Due to the switch from π? to πt in the last term compared to Eq. (1), this is usually enough to prove a desirable regret bound without making extra assumptions. The caveat of this discussion is the assumption of Eq. (2). Indeed, after adding the bonuses, which itself contributes some more bias and variance, one should expect that bt on the right-hand side of Eq. (2) becomes something larger, breaking the desired cancellation effect to achieve Eq. (3). Indeed, the definition of bt essentially becomes circular in this sense. Dilated Bonuses for Policy Optimization To address this issue, we take a closer look at the policy optimization algorithm specifically. As mentioned, policy optimization decomposes the problem into individual multi-armed bandit problems in each state and then performs local optimization. 
This is based on the well-known performance difference lemma [17]: Reg = ∑ x q?(x) T∑ t=1 ∑ a ( πt(a|x)− π?(a|x) ) Qπtt (x, a), showing that in each state x, the learner is facing a bandit problem with Qπtt (x, a) being the loss for action a. Correspondingly, incorporating the bonuses bt for policy optimization means subtracting the bonus Qπt(x, a; bt) from Qπtt (x, a) for each action a in each state x. Recall that Q πt(x, a; bt) satisfies the Bellman equation Qπt(x, a; bt) = bt(x, a) + Ex′∼P (·|x,a)Ea′∼πt(·|x′) [Bt(x′, a′)]. To resolve the issue mentioned earlier, we propose to replace this bonus function Qπt(x, a; bt) with its dilated version Bt(s, a) satisfying the following dilated Bellman equation: Bt(x, a) = bt(x, a) + ( 1 + 1 H ) Ex′∼P (·|x,a)Ea′∼πt(·|x′) [Bt(x ′, a′)] (4) (with Bt(xH , a) = 0 for all a). The only difference compared to the standard Bellman equation is the extra (1 + 1H ) factor, which slightly increases the weight for deeper layers and thus intuitively induces more exploration for those layers. Due to the extra bonus compared to Qπt(x, a; bt), the regret bound also increases accordingly. In all our applications, this extra amount of regret turns out to be of the form 1H ∑T t=1 ∑ x,a q ?(x)πt(a|x)Bt(x, a), leading to ∑ x q?(x) T∑ t=1 ∑ a ( πt(a|x)− π?(a|x) )( Qπtt (x, a)−Bt(x, a) ) ≤ o(T ) + T∑ t=1 V π ? (x0; bt) + 1 H T∑ t=1 ∑ x,a q?(x)πt(a|x)Bt(x, a). (5) With some direct calculation, one can show that this is enough to show a regret bound that is only a constant factor larger than the desired bound in Eq. (3)! This is summarized in the following lemma. Lemma 3.1. If Eq. (5) holds with Bt defined in Eq. (4), then Reg ≤ o(T ) + 3 ∑T t=1 V πt(x0; bt). The high-level idea of the proof is to show that the bonuses added to a layer h is enough to cancel the large bias/variance term (including those coming from the bonus itself) from layer h+ 1. Therefore, cancellation happens in a layer-by-layer manner except for layer 0, where the total amount of bonus can be shown to be at most (1 + 1H ) H ∑T t=1 V πt(x0; bt) ≤ 3 ∑T t=1 V πt(x0; bt). Recalling again that V πt(x0; bt) is usually nicely bounded, we thus arrive at a favorable regret guarantee without making extra assumptions. Of course, since the transition is unknown, we cannot compute Bt exactly. However, Lemma 3.1 is robust enough to handle either a good approximate version of Bt (see Lemma B.1) or a version where Eq. (4) and Eq. (5) only hold in expectation (see Lemma B.2), which is enough for us to handle unknown transition. In the next three sections, we apply this general idea to different settings, showing what bt and Bt are concretely in each case. 4 The Tabular Case In this section, we study the tabular case where the number of states is finite. We propose a policy optimization algorithm with Õ( √ T ) regret, improving the Õ(T 2/3) regret of [27]. See Algorithm 1 for the complete pseudocode. Algorithm design. First, to handle unknown transition, we follow the common practice (dating back to [13]) to maintain a confidence set of the transition, which is updated whenever the visitation count of a certain state-action pair is doubled. We call the period between two model updates an epoch, and use Pk to denote the confidence set for epoch k, formally defined in Eq. (10). 
4 The Tabular Case

In this section, we study the tabular case, where the number of states is finite. We propose a policy optimization algorithm with Õ(√T) regret, improving the Õ(T^{2/3}) regret of [27]. See Algorithm 1 for the complete pseudocode.

Algorithm design. First, to handle the unknown transition, we follow the common practice (dating back to [13]) of maintaining a confidence set of the transition, which is updated whenever the visitation count of some state-action pair is doubled. We call the period between two model updates an epoch and use P_k to denote the confidence set for epoch k, formally defined in Eq. (10).

In episode t, the policy π_t is defined via the standard multiplicative weight algorithm (also connected to Natural Policy Gradient [18, 3, 30]), but importantly with the dilated bonuses incorporated, such that π_t(a|x) ∝ exp(−η Σ_{τ=1}^{t−1} (Q̂_τ(x, a) − B_τ(x, a))). Here, η is a step-size parameter, Q̂_τ(x, a) is an importance-weighted estimator of Q_τ^{π_τ}(x, a) defined in Eq. (7), and B_τ(x, a) is the dilated bonus defined in Eq. (9).

More specifically, for a state x in layer h, Q̂_t(x, a) is defined as L_{t,h} 1_t(x, a) / (q̄_t(x, a) + γ), where 1_t(x, a) is the indicator of whether (x, a) is visited during episode t; L_{t,h} is the total loss suffered by the learner from layer h to the end of the episode; q̄_t(x, a) = max_{P̂∈P_k} q^{P̂,π_t}(x, a) is the largest plausible value of q_t(x, a) within the confidence set, which can be computed efficiently using the COMP-UOB procedure of [15] (see also Appendix C.1); and finally, γ is a parameter used to control the maximum magnitude of Q̂_t(x, a), inspired by the work of [23]. To get a sense of this estimator, consider the special case where γ = 0 and the transition is known, so that we can set P_k = {P} and thus q̄_t = q_t. Then, since the expectation of L_{t,h} conditioned on (x, a) being visited is Q_t^{π_t}(x, a), and the expectation of 1_t(x, a) is q_t(x, a), we see that Q̂_t(x, a) is an unbiased estimator of Q_t^{π_t}(x, a). The extra complication is simply due to the transition being unknown, forcing us to use q̄_t and γ > 0 to make sure that Q̂_t(x, a) is an optimistic underestimator, an idea similar to [15].

Next, we explain the design of the dilated bonus B_t. Following the discussion in Section 3, we first figure out what the corresponding b_t function in Eq. (1) is, by analyzing the regret bound without any bonuses. The concrete form of b_t turns out to be Eq. (8), whose value at (x, a) is independent of a and is thus written as b_t(x) for simplicity. Note that Eq. (8) depends on the occupancy measure lower bound q̲_t(x, a) = min_{P̂∈P_k} q^{P̂,π_t}(x, a), the counterpart of q̄_t(x, a), which can also be computed efficiently using a procedure similar to COMP-UOB (see Appendix C.1). Once again, to get a sense of this, consider the special case of a known transition, so that we can set P_k = {P} and thus q̄_t = q̲_t = q_t. Then, one sees that b_t(x) is simply upper bounded by E_{a∼π_t(·|x)}[3γH/q_t(x, a)] = 3γH|A|/q_t(x), which is inversely related to the probability of visiting state x, matching the intuition we provided in Section 3 (that b_t(x) is large if x is rarely visited). The extra complication of Eq. (8) is again just due to the unknown transition.

With b_t(x) ready, the final form of the dilated bonus B_t is defined following the dilated Bellman equation of Eq. (4), except that, since P is unknown, we once again apply optimism and take the largest possible value within the confidence set (see Eq. (9)); this can again be computed efficiently (see Appendix C.1). This concludes the complete algorithm design.

Algorithm 1 Policy Optimization with Dilated Bonuses (Tabular Case)
Parameters: δ ∈ (0, 1), η = min{1/(24H³), 1/√(|X||A|HT)}, γ = 2ηH.
Initialization: set epoch index k = 1 and confidence set P_1 as the set of all transition functions. For all (x, a, x′), initialize counters N_0(x, a) = N_1(x, a) = 0 and N_0(x, a, x′) = N_1(x, a, x′) = 0.
for t = 1, 2, ..., T do
  Step 1: Compute and execute policy. Execute π_t for one episode, where
    π_t(a|x) ∝ exp(−η Σ_{τ=1}^{t−1} (Q̂_τ(x, a) − B_τ(x, a))),   (6)
  and obtain the trajectory {(x_{t,h}, a_{t,h}, ℓ_t(x_{t,h}, a_{t,h}))}_{h=0}^{H−1}.
  Step 2: Construct Q-function estimators. For all h ∈ {0, ..., H−1} and (x, a) ∈ X_h × A,
    Q̂_t(x, a) = (L_{t,h} / (q̄_t(x, a) + γ)) · 1_t(x, a),   (7)
  with L_{t,h} = Σ_{i=h}^{H−1} ℓ_t(x_{t,i}, a_{t,i}), q̄_t(x, a) = max_{P̂∈P_k} q^{P̂,π_t}(x, a), and 1_t(x, a) = 1{x_{t,h} = x, a_{t,h} = a}.
  Step 3: Construct bonus functions. For all (x, a) ∈ X × A,
    b_t(x) = E_{a∼π_t(·|x)}[(3γH + H(q̄_t(x, a) − q̲_t(x, a))) / (q̄_t(x, a) + γ)],   (8)
    B_t(x, a) = b_t(x) + (1 + 1/H) max_{P̂∈P_k} E_{x′∼P̂(·|x,a)} E_{a′∼π_t(·|x′)}[B_t(x′, a′)],   (9)
  where q̲_t(x, a) = min_{P̂∈P_k} q^{P̂,π_t}(x, a) and B_t(x_H, a) = 0 for all a.
  Step 4: Update model estimation. For all h < H: N_k(x_{t,h}, a_{t,h}) +← 1 and N_k(x_{t,h}, a_{t,h}, x_{t,h+1}) +← 1.³
  if ∃h, N_k(x_{t,h}, a_{t,h}) ≥ max{1, 2N_{k−1}(x_{t,h}, a_{t,h})} then
    Increment the epoch index k +← 1 and copy the counters: N_k ← N_{k−1} (for both the pair and the triple counts).
    Compute the empirical transition P̄_k(x′|x, a) = N_k(x, a, x′) / max{1, N_k(x, a)} and the confidence set
      P_k = {P̂ : |P̂(x′|x, a) − P̄_k(x′|x, a)| ≤ conf_k(x′|x, a) ∀(x, a, x′) ∈ X_h × A × X_{h+1}, h = 0, 1, ..., H−1},   (10)
    where conf_k(x′|x, a) = 4√(P̄_k(x′|x, a) ln(T|X||A|/δ) / max{1, N_k(x, a)}) + 28 ln(T|X||A|/δ) / (3 max{1, N_k(x, a)}).

³We use y +← z as a shorthand for the increment operation y ← y + z.

Regret analysis. The regret guarantee of Algorithm 1 is presented below.

Theorem 4.1. Algorithm 1 ensures that, with probability at least 1 − O(δ), Reg = Õ(H²|X|√(|A|T) + H⁴).

Again, this improves the Õ(T^{2/3}) regret of [27]. It almost matches the best existing upper bound for this problem, which is Õ(H|X|√(|A|T)) [15]. While it is unclear to us whether this small gap can be closed using policy optimization, we point out that our algorithm is arguably more efficient than that of [15], which performs global convex optimization over the set of all plausible occupancy measures in each episode.

The complete proof of this theorem is deferred to Appendix C. Here, we only sketch an outline of the proof of Eq. (5), which, according to the discussion in Section 3, is the most important part of the analysis. Specifically, we decompose the left-hand side of Eq. (5), Σ_x q*(x) Σ_t ⟨π_t(·|x) − π*(·|x), Q_t(x, ·) − B_t(x, ·)⟩, as BIAS-1 + BIAS-2 + REG-TERM, where

• BIAS-1 = Σ_x q*(x) Σ_t ⟨π_t(·|x), Q_t(x, ·) − Q̂_t(x, ·)⟩ measures the amount of underestimation of Q̂_t related to π_t, which can be bounded by Σ_t Σ_{x,a} q*(x) π_t(a|x) (2γH + H(q̄_t(x, a) − q̲_t(x, a))) / (q̄_t(x, a) + γ) + Õ(H/η) with high probability (Lemma C.1);

• BIAS-2 = Σ_x q*(x) Σ_t ⟨π*(·|x), Q̂_t(x, ·) − Q_t(x, ·)⟩ measures the amount of overestimation of Q̂_t related to π*, which can be bounded by Õ(H/η) since Q̂_t is an underestimator (Lemma C.2);

• REG-TERM = Σ_x q*(x) Σ_t ⟨π_t(·|x) − π*(·|x), Q̂_t(x, ·) − B_t(x, ·)⟩ is directly controlled by the multiplicative weight update and is bounded by Σ_t Σ_{x,a} q*(x) π_t(a|x) (γH/(q̄_t(x, a) + γ) + B_t(x, a)/H) + Õ(H/η) with high probability (Lemma C.3).

Combining all three terms with the definition of b_t proves the key Eq. (5) (with the o(T) term being Õ(H/η)).
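As a concrete illustration of Steps 1 and 2 of Algorithm 1, here is a minimal sketch of the exponential-weights policy of Eq. (6) and the optimistic importance-weighted estimator of Eq. (7), in the simplified known-transition case where q̄_t = q_t (so the COMP-UOB machinery is not needed and the upper bound is simply passed in). Function names and data layouts are ours.

```python
import numpy as np

def policy_from_weights(cum_q, cum_b, eta):
    """Eq. (6): the multiplicative-weight policy. For each state x,
    pi_t(a|x) is proportional to exp(-eta * sum_{tau<t}(Qhat_tau(x,a) - B_tau(x,a)));
    cum_q[x] and cum_b[x] hold those running sums as arrays over actions."""
    pi = {}
    for x in cum_q:
        logits = -eta * (cum_q[x] - cum_b[x])
        logits -= logits.max()                 # for numerical stability
        w = np.exp(logits)
        pi[x] = w / w.sum()
    return pi

def q_hat(x, a, h, traj, losses, q_bar, gamma):
    """Eq. (7): the optimistic importance-weighted estimator
    Qhat_t(x,a) = L_{t,h} * 1_t(x,a) / (q_bar_t(x,a) + gamma),
    where traj[h] is the (state, action) pair visited at layer h, losses[h] is
    the loss incurred there, and q_bar[x, a] is the confidence-set upper bound
    on q_t(x, a) (computed by COMP-UOB in the paper, assumed given here)."""
    visited = 1.0 if traj[h] == (x, a) else 0.0
    loss_to_go = sum(losses[h:])               # L_{t,h}
    return loss_to_go * visited / (q_bar[x, a] + gamma)
```

Note how γ > 0 caps the estimator at H/γ even when q_bar is tiny, which is the biased-but-optimistic trade-off discussed above.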
5 The Linear-Q Case

In this section, we move on to the more challenging setting where the number of states might be infinite, and function approximation is used to generalize the learner's experience to unseen states. We consider the most basic linear function approximation scheme, where for any policy π the Q-function Q_t^π(x, a) is linear in some known feature vector φ(x, a), as formally stated below.

Assumption 1 (Linear-Q). Let φ(x, a) ∈ R^d be a known feature vector of the state-action pair (x, a). We assume that for any episode t, policy π, and layer h, there exists an unknown weight vector θ^π_{t,h} ∈ R^d such that for all (x, a) ∈ X_h × A, Q_t^π(x, a) = φ(x, a)ᵀθ^π_{t,h}. Without loss of generality, we assume ‖φ(x, a)‖ ≤ 1 for all (x, a) and ‖θ^π_{t,h}‖ ≤ √d·H for all t, h, π.

For a justification of the last condition on the norms, see [30, Lemma 8]. This linear-Q assumption has been made in several recent works with stationary losses [1, 30], and also in [24] with the same adversarial losses.⁴ It is weaker than the linear MDP assumption (see Section 6), as it does not impose explicit structural requirements on the loss and transition functions. Due to this generality, however, our algorithm also requires access to a simulator to obtain samples drawn from the transition, as formally stated below.

Assumption 2 (Simulator). The learner has access to a simulator which takes a state-action pair (x, a) ∈ X × A as input and generates a random outcome of the next state x′ ∼ P(·|x, a).

Note that this assumption is also made by [24] and earlier works with stationary losses (see e.g., [4, 28]).⁵ In this setting, we propose a new policy optimization algorithm with Õ(T^{2/3}) regret. See Algorithm 2 for the pseudocode.

⁴The assumption in [24] is stated slightly differently (e.g., their feature vectors are independent of the action). However, it is straightforward to verify that the two versions are equivalent.

⁵The simulator required by [24] is in fact slightly weaker than ours and those of earlier works — it only needs to be able to generate a trajectory starting from x_0 for any policy.

Algorithm design. The algorithm still follows the multiplicative weight update, Eq. (11), in each state x ∈ X_h (for some h), but now with φ(x, a)ᵀθ̂_{t,h} as an estimator of Q_t^{π_t}(x, a) = φ(x, a)ᵀθ^{π_t}_{t,h}, and BONUS(t, x, a) as the dilated bonus B_t(x, a). Specifically, the construction of the weight estimator θ̂_{t,h} follows the idea of [24] (which itself is based on the linear bandit literature) and is defined in Eq. (12) as Σ̂⁺_{t,h} φ(x_{t,h}, a_{t,h}) L_{t,h}. Here, Σ̂⁺_{t,h} is an ε-accurate estimator of (γI + Σ_{t,h})⁻¹, where γ is a small parameter and Σ_{t,h} = E_t[φ(x_{t,h}, a_{t,h}) φ(x_{t,h}, a_{t,h})ᵀ] is the covariance matrix for layer h under policy π_t; L_{t,h} = Σ_{i=h}^{H−1} ℓ_t(x_{t,i}, a_{t,i}) is again the loss suffered by the learner from layer h onward, whose conditional expectation is Q_t^{π_t}(x_{t,h}, a_{t,h}) = φ(x_{t,h}, a_{t,h})ᵀθ^{π_t}_{t,h}. Therefore, as γ and ε approach 0, one sees that θ̂_{t,h} is indeed an unbiased estimator of θ^{π_t}_{t,h}. We adopt the GEOMETRICRESAMPLING procedure (see Algorithm 7) of [24] to compute Σ̂⁺_{t,h}, which involves calling the simulator multiple times.

Algorithm 2 Policy Optimization with Dilated Bonuses (Linear-Q Case)
Parameters: γ, β, η, ε, M = ⌈24 ln(dHT)/(ε²γ²)⌉, N = ⌈(2/γ) ln(1/γ)⌉.
for t = 1, 2, ..., T do
  Step 1: Interact with the environment. Execute π_t, defined such that for each x ∈ X_h,
    π_t(a|x) ∝ exp(−η Σ_{τ=1}^{t−1} (φ(x, a)ᵀθ̂_{τ,h} − BONUS(τ, x, a))),   (11)
  and obtain the trajectory {(x_{t,h}, a_{t,h}, ℓ_t(x_{t,h}, a_{t,h}))}_{h=0}^{H−1}.
  Step 2: Construct covariance matrix inverse estimators.
    {Σ̂⁺_{t,h}}_{h=0}^{H−1} = GEOMETRICRESAMPLING(t, M, N, γ).   (see Algorithm 7)
  Step 3: Construct Q-function weight estimators. For h = 0, ..., H−1, compute
    θ̂_{t,h} = Σ̂⁺_{t,h} φ(x_{t,h}, a_{t,h}) L_{t,h}, where L_{t,h} = Σ_{i=h}^{H−1} ℓ_t(x_{t,i}, a_{t,i}).   (12)

Algorithm 3 BONUS(t, x, a)
  if BONUS(t, x, a) has been called before then return the value calculated last time.
  Let h be such that x ∈ X_h.
  if h = H then return 0.
  Compute π_t(·|x), defined in Eq. (11) (which involves recursive calls to BONUS for smaller t).
  Get a sample of the next state x′ ← SIMULATOR(x, a).
  Compute π_t(·|x′) (again defined in Eq. (11)) and sample an action a′ ∼ π_t(·|x′).
  return β‖φ(x, a)‖²_{Σ̂⁺_{t,h}} + E_{j∼π_t(·|x)}[β‖φ(x, j)‖²_{Σ̂⁺_{t,h}}] + (1 + 1/H) · BONUS(t, x′, a′).
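Since Algorithm 3 is stated as a recursive procedure, a brief sketch may help clarify the memoization and the single-sample recursion. Here `policy`, `simulator`, `feat`, and `layer_of` are assumed callables standing in for Eq. (11), Assumption 2, φ, and the layer structure, respectively; all of these names are ours, and this is an illustration rather than the paper's implementation.

```python
import numpy as np

# Memoization table shared across calls, mirroring the first line of Algorithm 3.
_bonus_cache = {}

def bonus(t, x, a, H, beta, feat, policy, simulator, Sigma_plus, layer_of):
    """Single-sample recursion for the dilated bonus (last line of Algorithm 3):
    b_t(x,a) = beta*||phi(x,a)||^2_{Sigma_plus} + E_{j~pi_t(.|x)}[beta*||phi(x,j)||^2],
    plus (1 + 1/H) * BONUS(t, x', a') for one simulated next step.
    `policy(t, x)` must return pi_t(.|x), itself computed from Eq. (11) and hence
    recursively calling `bonus` for earlier rounds tau < t, as in the paper."""
    if (t, x, a) in _bonus_cache:
        return _bonus_cache[(t, x, a)]
    h = layer_of(x)
    if h == H:                                   # terminal layer: zero bonus
        return 0.0
    S = Sigma_plus[t, h]
    pi_x = policy(t, x)                          # distribution over actions at x
    quad = lambda v: float(v @ S @ v)            # ||v||^2 in the Sigma_plus norm
    b = beta * quad(feat(x, a))
    b += beta * sum(pi_x[j] * quad(feat(x, j)) for j in range(len(pi_x)))
    x_next = simulator(x, a)                     # one sample of x' ~ P(.|x, a)
    pi_next = policy(t, x_next)
    a_next = np.random.choice(len(pi_next), p=pi_next)
    val = b + (1.0 + 1.0 / H) * bonus(t, x_next, a_next, H, beta, feat,
                                      policy, simulator, Sigma_plus, layer_of)
    _bonus_cache[(t, x, a)] = val
    return val
```

The cache is what makes repeated queries for the same (t, x, a) consistent; the recursion through `policy` into earlier rounds is the source of the (TAH)^{O(H)} simulator-call count discussed next.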
Next, we explain the design of the dilated bonus. Again following the general principle discussed in Section 3, we identify b_t(x, a) in this case as

  b_t(x, a) = β‖φ(x, a)‖²_{Σ̂⁺_{t,h}} + E_{j∼π_t(·|x)}[β‖φ(x, j)‖²_{Σ̂⁺_{t,h}}]

for some parameter β > 0. Further following the dilated Bellman equation, Eq. (4), we define BONUS(t, x, a) recursively as in the last line of Algorithm 3, where we replace the expectation E_{(x′,a′)}[BONUS(t, x′, a′)] with a single sample for efficient implementation.

However, even more care is needed to actually implement the algorithm. First, since the state space is potentially infinite, one cannot calculate and store the value of BONUS(t, x, a) for all (x, a); these values can only be calculated on the fly when needed. Moreover, unlike the estimators of Q_t^{π_t}(x, a), which can be succinctly represented and stored via the weight estimator θ̂_{t,h}, this is not possible for BONUS(t, x, a) due to the lack of any structure. Even worse, the definition of BONUS(t, x, a) itself depends on π_t(·|x) and also on π_t(·|x′) for the afterstate x′, which, according to Eq. (11), further depends on BONUS(τ, x, a) for τ < t, resulting in a complicated recursive structure. This is also why we present it as a procedure in Algorithm 3 (instead of as a function B_t(x, a)). In total, this leads to (TAH)^{O(H)} calls to the simulator. Whether this can be improved is left as a future direction.

Regret guarantee. By showing that Eq. (5) holds in expectation for our algorithm, we obtain the following regret guarantee (see Appendix D for the proof).

Theorem 5.1. Under Assumption 1 and Assumption 2, with appropriate choices of the parameters γ, β, η, ε, Algorithm 2 ensures E[Reg] = Õ(H²(dT)^{2/3}) (the dependence on |A| is only logarithmic).

This matches the Õ(T^{2/3}) regret of [24, Theorem 1], without the need for their assumption, which essentially says that the learner is given an exploratory policy to start with.⁶ To our knowledge, this is the first no-regret algorithm for linear function approximation (with adversarial losses and bandit feedback) when no exploratory assumptions are made.
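For concreteness, one episode of Algorithm 2 might look as follows in outline, with the Σ̂⁺_{t,h} matrices assumed to be provided (the paper computes them with GEOMETRICRESAMPLING; that procedure is not reproduced here). The environment interface and all helper names are our assumptions, not the paper's API.

```python
import numpy as np

def linear_q_episode(t, H, eta, actions, feat, theta_hat, bonus, env):
    """One episode of Algorithm 2 (a sketch). theta_hat[(tau, h)] holds past
    weight estimators; bonus(tau, x, a) plays the role of BONUS(tau, x, a)
    from Algorithm 3; env.reset()/env.step(a) generate the trajectory."""
    traj, x = [], env.reset()
    for h in range(H):
        # Eq. (11): exponential weights over cumulative estimated Q minus bonuses
        scores = np.array([sum(feat(x, a) @ theta_hat[(tau, h)] - bonus(tau, x, a)
                               for tau in range(t)) for a in actions])
        logits = -eta * scores
        logits -= logits.max()                  # numerical stability
        p = np.exp(logits); p /= p.sum()
        a = actions[np.random.choice(len(actions), p=p)]
        x_next, loss = env.step(a)
        traj.append((x, a, loss))
        x = x_next
    return traj

def theta_estimators(traj, Sigma_plus, feat, H):
    """Eq. (12): theta_hat_{t,h} = Sigma_plus_{t,h} @ phi(x_{t,h}, a_{t,h}) * L_{t,h},
    with L_{t,h} the loss-to-go from layer h; Sigma_plus[h] is assumed to come
    from a GEOMETRICRESAMPLING-like procedure."""
    losses = [l for (_, _, l) in traj]
    return {h: Sigma_plus[h] @ feat(traj[h][0], traj[h][1]) * sum(losses[h:])
            for h in range(H)}
```

Note that at t = 1 the scores are all zero, so the first episode plays uniformly at random, and exploration afterwards is driven entirely by the bonuses.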
6 Improvements with an Exploratory Policy

Previous sections have demonstrated the role of dilated bonuses in providing global exploration. In this section, we further discuss what dilated bonuses can achieve when an exploratory policy π_0 is given in linear function approximation settings. Formally, let Σ_h = E[φ(x_h, a_h) φ(x_h, a_h)ᵀ] denote the covariance matrix of the features in layer h when following π_0 (that is, the expectation is taken over a trajectory {(x_h, a_h)}_{h=0}^{H−1} with a_h ∼ π_0(·|x_h)). We then assume the following.

Assumption 3 (An exploratory policy). An exploratory policy π_0 is given to the learner ahead of time and guarantees that, for any h, the eigenvalues of Σ_h are at least λ_min > 0.

The same assumption is made by [24] (where they simply let π_0 be the uniform exploration policy). As mentioned, under this assumption they achieve Õ(T^{2/3}) regret. By slightly modifying our Algorithm 2 (specifically, executing π_0 with a small probability in each episode and setting the parameters differently), we achieve the following improved result.

Theorem 6.1. Under Assumptions 1, 2, and 3, Algorithm 8 ensures E[Reg] = Õ(√(H⁴T/λ_min) + √(H⁵dT)).

Removing the simulator. One drawback of our algorithm is that it requires a number of simulator calls that is exponential in H. To address this issue, and in fact to completely remove the need for a simulator, we further consider a special case where the transition function also has a low-rank structure, known as the linear MDP setting.

Assumption 4 (Linear MDP). The MDP satisfies Assumption 1, and for any h and x′ ∈ X_{h+1}, there exists a weight vector ν_h^{x′} ∈ R^d such that P(x′|x, a) = φ(x, a)ᵀν_h^{x′} for all (x, a) ∈ X_h × A.

There is a surge of works studying this setting, with [7] being the closest to ours. They achieve Õ(√T) regret but require full-information feedback of the loss functions, and there are no existing results for the bandit feedback setting without a simulator. We propose the first algorithm with sublinear regret for this problem, shown in Algorithm 10 of Appendix F due to space limitations. The structure of Algorithm 10 is very similar to that of Algorithm 2, with the same definition of b_t(x, a). However, due to the low-rank transition structure, we are now able to efficiently construct estimators of B_t(x, a) even for unseen state-action pairs using function approximation, bypassing the requirement of a simulator. Specifically, observe that according to Eq. (4), for each x ∈ X_h, under Assumption 4, B_t(x, a) can be written as b_t(x, a) + φ(x, a)ᵀΛ^{π_t}_{t,h}, where

  Λ^{π_t}_{t,h} = (1 + 1/H) ∫_{x′∈X_{h+1}} E_{a′∼π_t(·|x′)}[B_t(x′, a′)] ν_h^{x′} dx′

is a vector independent of (x, a). Thus, by the same idea used for estimating θ^{π_t}_{t,h}, we can estimate Λ^{π_t}_{t,h} as well, thus succinctly representing B_t(x, a) for all (x, a).

Recall that estimating θ^{π_t}_{t,h} (and thus also Λ^{π_t}_{t,h}) requires constructing the covariance matrix inverse estimate Σ̂⁺_{t,h}. Due to the lack of a simulator, another important change in the algorithm is to construct Σ̂⁺_{t,h} from online samples. To do so, we divide the entire horizon into epochs of equal length and only update the policy optimization algorithm at the beginning of each epoch. Within an epoch, we keep executing the same policy and collect several trajectories, which are then used to construct Σ̂⁺_{t,h}. With these changes, we successfully remove the need for a simulator and prove the guarantee below.

Theorem 6.2. Under Assumption 3 and Assumption 4, Algorithm 10 ensures E[Reg] = Õ(T^{6/7}) (see Appendix F for the dependence on other parameters).

One potential direction for further improving our algorithm is to reuse data across different epochs, an idea adopted by several recent works [35, 19] for different problems. We also conjecture that Assumption 3 can be removed, but we have encountered technical difficulties in proving this. We leave these questions for future investigation.

⁶Under the even stronger assumption that every policy is exploratory, they further improve the regret to Õ(√T); see [24, Theorem 2].

Acknowledgments and Disclosure of Funding

We thank Gergely Neu and Julia Olkhovskaya for discussions on the technical details of their GEOMETRICRESAMPLING procedure. This work is supported by NSF Award IIS-1943607 and a Google Faculty Research Award.
1. What is the focus of the paper regarding reinforcement learning in adversarial MDPs? 2. What are the strengths of the proposed algorithms, particularly in the novel bonus design and regret decomposition? 3. Do you have any concerns or questions about the results, especially when compared to prior works? 4. How does the reviewer assess the significance and originality of the contributions presented in the paper? 5. Are there any suggestions for improving the paper or future research directions related to this topic?
Summary Of The Paper Review
Summary Of The Paper

The paper studies reinforcement learning in adversarial MDPs with policy optimization algorithms. Using a novel bonus design, the new algorithms achieve Õ(S√(AT)) regret in the tabular case, and Õ(T^{2/3}) regret in the linear function approximation case assuming access to a simulator, significantly improving known results.

Review

The key algorithmic idea of this paper is the following observation: supposing the regret with respect to ℓ_t − b_t can be bounded by the loss of the optimal policy on b_t (which can be large), one can cancel this term via linearity and bound the target regret by the loss of π_t on b_t (which can be controlled using standard techniques). This, however, requires b_t to be designed in a "circular" fashion, which the authors resolve by using dilated bonuses. The new regret decomposition and bonuses are novel and interesting.

In the tabular case, the main result is an Õ(H²S√(AT)) upper bound, which improves the existing upper bound for policy optimization methods. Although it is one H factor away from the current best result for adversarial MDPs with bandit feedback [Jin et al. 2020], it enjoys better computational efficiency.

In the linear function approximation case, the first result is an Õ(T^{2/3}) regret bound assuming access to a simulator, which is the first known sublinear result of its kind. Although access to a simulator is usually a strong assumption, it can be justified since it is required by existing work on adversarial MDPs or even stochastic MDPs [Lattimore et al. 2020]. Other results include an Õ(√T) rate assuming an exploratory policy, and an Õ(T^{6/7}) rate assuming an exploratory policy and a linear MDP (but removing the need for a simulator). The exploratory policy is also a strong assumption, but is likewise assumed in existing work to achieve sublinear regret in adversarial linear MDPs. Overall, the results of this paper seem to be solid contributions to the existing literature.

Other comments:

The number of queries to the simulator is larger than A^H but is not reflected in the regret bound, which is different from the stochastic settings, where queries to the simulator are counted as sample complexity. In stochastic MDPs, with A^H samples of uniform exploration, one can already evaluate any policy via Monte Carlo. The reason why this doesn't help in adversarial MDPs seems to be that the simulator does not return reward information. This makes one feel that the role of simulators in adversarial MDPs with linear function approximation (with bandit feedback) is a bit odd: the algorithm cannot extract reward information (as per Assumption 2) and cannot learn the dynamics, as the state space can be infinite. If the state space is finite and one allows an infinite budget of queries, one may treat the dynamics as known and invoke existing results (e.g. [Neu et al. 2010]) whose regret bounds are independent of the number of states. Thus there seems to be a trade-off between simulator query complexity and online regret, and it would be interesting to know where A^H is positioned in this trade-off.

In the algorithm box of Algorithm 2, the reference to Algorithm 6 should be Algorithm 7.

Neu et al. The online loop-free stochastic shortest-path problem. 2010.
Lattimore et al. Learning with Good Feature Representations in Bandits and in RL with a Generative Model. 2020.
NIPS
Title
Policy Optimization in Adversarial MDPs: Improved Exploration via Dilated Bonuses

Abstract
Policy optimization is a widely-used method in reinforcement learning. Due to its local-search nature, however, theoretical guarantees on global optimality often rely on extra assumptions on the Markov Decision Processes (MDPs) that bypass the challenge of global exploration. To eliminate the need for such assumptions, in this work we develop a general solution that adds dilated bonuses to the policy update to facilitate global exploration. To showcase the power and generality of this technique, we apply it to several episodic MDP settings with adversarial losses and bandit feedback, improving and generalizing the state-of-the-art. Specifically, in the tabular case, we obtain Õ(√T) regret, where T is the number of episodes, improving the Õ(T^{2/3}) regret bound of [27]. When the number of states is infinite, under the assumption that the state-action values are linear in some low-dimensional features, we obtain Õ(T^{2/3}) regret with the help of a simulator, matching the result of [24] while importantly removing the need for an exploratory policy that their algorithm requires. To our knowledge, this is the first algorithm with sublinear regret for linear function approximation with adversarial losses, bandit feedback, and no exploratory assumptions. Finally, we also discuss how to further improve the regret or remove the need for a simulator using dilated bonuses, when an exploratory policy is available.¹

1 Introduction
Policy optimization methods are among the most widely-used methods in reinforcement learning. Their empirical success has been demonstrated in various domains such as computer games [26] and robotics [21]. However, due to its local-search nature, global optimality guarantees of policy optimization often rely on unrealistic assumptions to ensure global exploration (see e.g., [1, 3, 24, 30]), making it theoretically less appealing compared to other methods. Motivated by this issue, a line of recent works [7, 27, 2, 35] equips policy optimization with global exploration by adding exploration bonuses to the update, and proves favorable guarantees even without making extra exploratory assumptions. Moreover, they all demonstrate some robustness aspect of policy optimization (such as being able to handle adversarial losses or a certain degree of model misspecification). Despite this important progress, however, many limitations still exist, including worse regret rates compared to the best value-based or model-based approaches [27, 2, 35], or requiring full-information feedback on the entire loss function (as opposed to the more realistic bandit feedback) [7].

∗Equal contribution.
¹In an improved version of this paper, we show that under the linear MDP assumption, an exploratory policy is not even needed. See https://arxiv.org/abs/2107.08346.

To address these issues, in this work we propose a new type of exploration bonus called the dilated bonus, which satisfies a certain dilated Bellman equation and provably leads to improved exploration compared to existing works (Section 3). We apply this general idea to advance the state-of-the-art of policy optimization for learning finite-horizon episodic MDPs with adversarial losses and bandit feedback. More specifically, our main results are:

• First, in the tabular setting, addressing the main open question left in [27], we improve their Õ(T^{2/3}) regret to the optimal Õ(√T) regret. This shows that policy optimization, which performs local optimization, is as capable as other occupancy-measure-based global optimization algorithms [15, 20] in terms of global exploration. Moreover, our algorithm is computationally more efficient than those global methods, since they require solving some convex optimization problem in each episode. (Section 4)

• Second, to further deal with large-scale problems, we consider a linear function approximation setting where the state-action values are linear in some known low-dimensional features and a simulator is available, the same setting considered by [24]. We obtain the same Õ(T^{2/3}) regret while importantly removing the need for an exploratory policy that their algorithm requires. Unlike the tabular setting (where we improve existing regret rates of policy optimization), note that researchers have not been able to show any sublinear regret for policy optimization without exploratory assumptions for this problem, which shows the critical role of our proposed dilated bonuses. In fact, there are simply no existing algorithms with sublinear regret at all for this setting, be they of the policy-optimization type or not. This shows the advantage of policy optimization over other approaches, when combined with our dilated bonuses. (Section 5)

• Finally, while the main focus of our work is to show how dilated bonuses are able to provide global exploration, we also discuss their role in improving the regret rate to Õ(√T) in the linear setting above, or in removing the need for a simulator in the special case of linear MDPs (with Õ(T^{6/7}) regret), when an exploratory policy is available. (Section 6)

Related work. In the tabular setting, except for [27], most algorithms apply the occupancy-measure-based framework to handle adversarial losses (e.g., [25, 15, 9, 8]), which, as mentioned, is computationally expensive. For stochastic losses, there are many more approaches, such as model-based ones [13, 10, 5, 12, 34] and value-based ones [14, 11]. Theoretical studies of linear function approximation have gained increasing interest recently [32, 33, 16]. Most of them study stochastic/stationary losses, with the exceptions of [24, 7]. Our algorithm for the linear MDP setting bears some similarity to those of [2, 35], which consider stationary losses. However, our algorithm and analysis are arguably simpler than theirs. Specifically, they divide the state space into a known part and an unknown part, with different exploration principles and bonus designs for the two parts. In contrast, we enjoy a unified bonus design for all states.
Besides, in each episode, their algorithms first execute an exploratory policy (from a policy cover) and then switch to the policy suggested by the policy optimization algorithm, which inevitably leads to linear regret when facing adversarial losses.

2 Problem Setting

We consider an MDP specified by a state space X (possibly infinite), a finite action space A, and a transition function P, with P(·|x, a) specifying the distribution of the next state after taking action a in state x. In particular, we focus on the finite-horizon episodic setting, in which X admits a layer structure and can be partitioned into X_0, X_1, ..., X_H for some fixed parameter H, where X_0 contains only the initial state x_0, X_H contains only the terminal state x_H, and for any x ∈ X_h, h = 0, ..., H−1, P(·|x, a) is supported on X_{h+1} for all a ∈ A (that is, transitions are only possible from X_h to X_{h+1}). An episode refers to a trajectory that starts from x_0 and ends at x_H, following some sequence of actions and the transition dynamics. The MDP may be equipped with a loss function ℓ : X × A → [0, 1], so that ℓ(x, a) specifies the loss suffered when selecting action a in state x.

A policy π for the MDP is a mapping X → Δ(A), where Δ(A) denotes the set of distributions over A and π(a|x) is the probability of choosing action a in state x. Given a loss function ℓ and a policy π, the expected total loss of π is given by

  V^π(x_0; ℓ) = E[Σ_{h=0}^{H−1} ℓ(x_h, a_h) | a_h ∼ π(·|x_h), x_{h+1} ∼ P(·|x_h, a_h)].

It can also be defined via the Bellman equation, involving the state value function V^π(x; ℓ) and the state-action value function Q^π(x, a; ℓ) (a.k.a. the Q-function), defined as follows:

  V^π(x_H; ℓ) = 0, Q^π(x, a; ℓ) = ℓ(x, a) + E_{x′∼P(·|x,a)}[V^π(x′; ℓ)], and V^π(x; ℓ) = E_{a∼π(·|x)}[Q^π(x, a; ℓ)].

We study online learning in such a finite-horizon MDP with unknown transition, bandit feedback, and adversarial losses. The learning proceeds through T episodes. Ahead of time, an adversary arbitrarily decides T loss functions ℓ_1, ..., ℓ_T, without revealing them to the learner. Then, in each episode t, the learner decides a policy π_t based on all information received prior to this episode, executes π_t starting from the initial state x_0, and generates and observes a trajectory {(x_{t,h}, a_{t,h}, ℓ_t(x_{t,h}, a_{t,h}))}_{h=0}^{H−1}. Importantly, the learner does not observe any other information about ℓ_t (a.k.a. bandit feedback).²

²Full-information feedback, on the other hand, refers to the easier setting where the entire loss function ℓ_t is revealed to the learner at the end of episode t.

The goal of the learner is to minimize the regret, defined as

  Reg = Σ_{t=1}^T V_t^{π_t}(x_0) − min_π Σ_{t=1}^T V_t^π(x_0),

where we use V_t^π(x) as a shorthand for V^π(x; ℓ_t) (and similarly Q_t^π(x, a) as a shorthand for Q^π(x, a; ℓ_t)). Without further structure, the best existing regret bound is Õ(H|X|√(|A|T)) [15], with an extra √|X| factor compared to the best existing lower bound [14].

Occupancy measures. For a policy π and a state x, we define q^π(x) to be the probability (or probability measure, when |X| is infinite) of visiting state x within an episode when following π. When it is necessary to highlight the dependence on the transition, we write it as q^{P,π}(x). Further define q^π(x, a) = q^π(x)π(a|x) and q_t(x, a) = q^{π_t}(x, a). Finally, we use q* as a shorthand for q^{π*}, where π* ∈ argmin_π Σ_{t=1}^T V_t^π(x_0) is one of the optimal policies. Note that, by definition, we have V^π(x_0; ℓ) = Σ_{x,a} q^π(x, a) ℓ(x, a). In fact, we will overload the notation and let V^π(x_0; b) = Σ_{x,a} q^π(x, a) b(x, a) for any function b : X × A → R (even though it might not correspond to a real loss function).
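In a layered MDP, q^{P,π} can be computed by a single forward pass over the layers. The following is a minimal sketch assuming a known transition P; the dictionary-based encoding is ours.

```python
def occupancy(P, pi, layers, H):
    """Forward recursion for the occupancy measure q^{P,pi} in a layered MDP:
    q(x_0) = 1 and, for every x' in layer h+1,
        q(x') = sum_{x in X_h} sum_a q(x) * pi(a|x) * P(x'|x, a).
    P[x, a] maps next states to probabilities, pi[x] is an array over actions,
    and layers = [X_0, ..., X_H]. Returns both q(x) and q(x, a) = q(x) pi(a|x)."""
    q = {x: 0.0 for layer in layers for x in layer}
    q[layers[0][0]] = 1.0                      # X_0 contains only the initial state x_0
    for h in range(H):
        for x in layers[h]:
            for a, pa in enumerate(pi[x]):
                for x2, p in P[x, a].items():
                    q[x2] += q[x] * pa * p
    q_xa = {(x, a): q[x] * pi[x][a]
            for h in range(H) for x in layers[h] for a in range(len(pi[x]))}
    return q, q_xa

# With a loss table loss[x, a], the expected total loss is then
# V^pi(x_0; loss) = sum of q_xa[x, a] * loss[x, a] over all (x, a).
```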
Other notations. We denote by E_t[·] and Var_t[·] the expectation and variance conditioned on everything prior to episode t. For a matrix Σ and a vector z (of appropriate dimension), ‖z‖_Σ denotes the quadratic norm √(zᵀΣz). The notation Õ(·) hides all logarithmic factors.

3 Dilated Exploration Bonuses

In this section, we start with a general discussion of designing exploration bonuses (not specific to policy optimization) and then introduce our new dilated bonuses for policy optimization. For simplicity, the exposition in this section assumes a finite state space, but the idea generalizes to an infinite state space.

When analyzing the regret of an algorithm, we very often run into the following form:

  Reg = Σ_{t=1}^T V_t^{π_t}(x_0) − Σ_{t=1}^T V_t^{π*}(x_0) ≤ o(T) + Σ_{t=1}^T Σ_{x,a} q*(x, a) b_t(x, a) = o(T) + Σ_{t=1}^T V^{π*}(x_0; b_t),   (1)

for some function b_t(x, a), usually related to some estimation error or variance, that can be prohibitively large. For example, in policy optimization the algorithm performs local search in each state, essentially running a multi-armed bandit algorithm that treats Q^{π_t}(x, a) as the loss of action a in state x. Since Q^{π_t}(x, a) is unknown, however, the algorithm has to use some estimator of Q^{π_t}(x, a) instead, whose bias and variance both contribute to the b_t function. Usually, b_t(x, a) is large for a rarely-visited state-action pair (x, a) and is inversely related to q_t(x, a), which is exactly why most analyses rely on the assumption that some distribution mismatch coefficient related to q*(x, a)/q_t(x, a) is bounded (see e.g., [3, 31]).

On the other hand, an important observation is that while V^{π*}(x_0; b_t) can be prohibitively large, its counterpart with respect to the learner's policy, V^{π_t}(x_0; b_t), is usually nicely bounded. For example, if b_t(x, a) is inversely related to q_t(x, a) as mentioned, then V^{π_t}(x_0; b_t) = Σ_{x,a} q_t(x, a) b_t(x, a) is small no matter how small q_t(x, a) may be for some (x, a). This observation, together with the linearity property V^π(x_0; ℓ_t − b_t) = V^π(x_0; ℓ_t) − V^π(x_0; b_t), suggests treating ℓ_t − b_t as the loss function of the problem, or in other words, adding a (negative) bonus to each state-action pair, which intuitively encourages exploration through underestimation. Indeed, assume for a moment that Eq. (1) still roughly holds even when we treat ℓ_t − b_t as the loss function:

  Σ_{t=1}^T V^{π_t}(x_0; ℓ_t − b_t) − Σ_{t=1}^T V^{π*}(x_0; ℓ_t − b_t) ≲ o(T) + Σ_{t=1}^T V^{π*}(x_0; b_t).   (2)

Then, by linearity and rearranging (the rearrangement from Eq. (2) to Eq. (3) is spelled out below), we have

  Reg = Σ_{t=1}^T V_t^{π_t}(x_0) − Σ_{t=1}^T V_t^{π*}(x_0) ≲ o(T) + Σ_{t=1}^T V^{π_t}(x_0; b_t).   (3)

Due to the switch from π* to π_t in the last term compared to Eq. (1), this is usually enough to prove a desirable regret bound without making extra assumptions.

The caveat in this argument is the assumption that Eq. (2) holds. Indeed, after adding the bonuses, which themselves contribute additional bias and variance, one should expect the b_t on the right-hand side of Eq. (2) to become something larger, breaking the desired cancellation effect needed for Eq. (3); the definition of b_t essentially becomes circular in this sense.

Dilated Bonuses for Policy Optimization. To address this issue, we take a closer look at the policy optimization algorithm specifically. As mentioned, policy optimization decomposes the problem into individual multi-armed bandit problems in each state and then performs local optimization.
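For completeness, the rearrangement from Eq. (2) to Eq. (3) referenced above uses only the linearity of V^π(x_0; ·) in its second argument; spelled out:

```latex
% Rearranging Eq. (2) into Eq. (3) via linearity of V^{\pi}(x_0;\cdot):
\begin{align*}
\mathrm{Reg}
&= \sum_{t=1}^{T} V^{\pi_t}(x_0;\ell_t) - \sum_{t=1}^{T} V^{\pi^\star}(x_0;\ell_t) \\
&= \underbrace{\sum_{t=1}^{T} V^{\pi_t}(x_0;\ell_t-b_t)
   - \sum_{t=1}^{T} V^{\pi^\star}(x_0;\ell_t-b_t)}_{\lesssim\; o(T)
   \,+\, \sum_t V^{\pi^\star}(x_0;b_t)\ \text{by Eq.\,(2)}}
 \;+\; \sum_{t=1}^{T} V^{\pi_t}(x_0;b_t)
 \;-\; \sum_{t=1}^{T} V^{\pi^\star}(x_0;b_t) \\
&\lesssim\; o(T) + \sum_{t=1}^{T} V^{\pi_t}(x_0;b_t),
\end{align*}
% where the two sums \sum_t V^{\pi^\star}(x_0;b_t) cancel, giving Eq. (3).
```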
1. What are the contributions of the paper regarding episodic tabular settings and linear cases? 2. How does the paper's approach differ from previous works in terms of computational cheapness and access to MDP simulators? 3. What is the significance of dilated bonuses in the paper's analysis, and how do they contribute to the main results? 4. Can you explain the importance of removing exploratory policies and replacing them with covariance matrix regularization? 5. Are there any technical reasons why implicit exploration might work better than explicit exploration? 6. How does the paper's approach compare to other algorithms such as EXP3-IX for adversarial multi-armed bandits? 7. Could the dilated bonuses also be applied to other approaches like REPS? 8. Can the result of Theorem 6.2 be further improved under certain assumptions about the comparator policy?
Summary Of The Paper Review
Summary Of The Paper

This paper makes several contributions:

In the episodic tabular setting, with adversarial rewards, unknown MDP dynamics, and bandit feedback, it provides the first algorithm that achieves an O(√T) regret bound using the regret decomposition approach, which is computationally cheaper than the approach presented in Jin et al. Typically, results obtained using this approach scale with the inverse of the visitation probability; however, here the results do not have this drawback.

For the linear case, the paper considers a setting with an infinite set of states where the algorithm has access to a simulator of the MDP dynamics. It provides an algorithm that obtains T^{2/3} regret without assuming that the smallest eigenvalue of the covariance matrix Σ_h is at least λ_min, and T^{1/2} regret under this assumption. The latter improves the previous result in the considered setting, which was T^{2/3} regret assuming that the smallest eigenvalue of Σ_h is bounded away from zero. The only drawback is that the algorithm is more computationally expensive, since it requires O(T^H) calls to the simulator.

The third result presented in the paper is in the same setting as above but without access to a simulator of the MDP, which is a very challenging setting. The obtained regret bound is T^{6/7}, and it makes use of the assumption on the smallest eigenvalue of Σ_h.

All three results above make use of dilated bonuses, which are the main contribution of this paper. The analysis uses the dilated bonuses to decrease the contribution to the regret induced by the term arising from the mismatch between q*(x) and q(x). To the best of my knowledge, this idea has not been considered before, and I think it is beneficial for future studies.

Review

The authors state that it is important to remove the algorithm's use of an exploratory policy. Is there an argument for why this is important and why it is better replaced by regularization of the covariance matrix? Also, is there a technical reason why implicit exploration works better?

When introducing the loss estimate for the tabular setting, it is worth mentioning the EXP3-IX algorithm for adversarial multi-armed bandits.

Would the dilated bonuses work the same way for the REPS approach?

Can the result of Theorem 6.2 be improved by assuming that the comparator policy plays actions uniformly with some probability?
NIPS
Title Policy Optimization in Adversarial MDPs: Improved Exploration via Dilated Bonuses Abstract Policy optimization is a widely-used method in reinforcement learning. Due to its local-search nature, however, theoretical guarantees on global optimality often rely on extra assumptions on the Markov Decision Processes (MDPs) that bypass the challenge of global exploration. To eliminate the need of such assumptions, in this work, we develop a general solution that adds dilated bonuses to the policy update to facilitate global exploration. To showcase the power and generality of this technique, we apply it to several episodic MDP settings with adversarial losses and bandit feedback, improving and generalizing the state-of-the-art. Specifically, in the tabular case, we obtain Õ( √ T ) regret where T is the number of episodes, improving the Õ(T /3) regret bound by [27]. When the number of states is infinite, under the assumption that the state-action values are linear in some low-dimensional features, we obtain Õ(T /3) regret with the help of a simulator, matching the result of [24] while importantly removing the need of an exploratory policy that their algorithm requires. To our knowledge, this is the first algorithm with sublinear regret for linear function approximation with adversarial losses, bandit feedback, and no exploratory assumptions. Finally, we also discuss how to further improve the regret or remove the need of a simulator using dilated bonuses, when an exploratory policy is available.1 N/A √ T ) regret where T is the number of episodes, improving the Õ(T 2/3) regret bound by [27]. When the number of states is infinite, under the assumption that the state-action values are linear in some low-dimensional features, we obtain Õ(T 2/3) regret with the help of a simulator, matching the result of [24] while importantly removing the need of an exploratory policy that their algorithm requires. To our knowledge, this is the first algorithm with sublinear regret for linear function approximation with adversarial losses, bandit feedback, and no exploratory assumptions. Finally, we also discuss how to further improve the regret or remove the need of a simulator using dilated bonuses, when an exploratory policy is available.1 1 Introduction Policy optimization methods are among the most widely-used methods in reinforcement learning. Its empirical success has been demonstrated in various domains such as computer games [26] and robotics [21]. However, due to its local-search nature, global optimality guarantees of policy optimization often rely on unrealistic assumptions to ensure global exploration (see e.g., [1, 3, 24, 30]), making it theoretically less appealing compared to other methods. Motivated by this issue, a line of recent works [7, 27, 2, 35] equip policy optimization with global exploration by adding exploration bonuses to the update, and prove favorable guarantees even without making extra exploratory assumptions. Moreover, they all demonstrate some robustness aspect of policy optimization (such as being able to handle adversarial losses or a certain degree of model misspecification). Despite these important progresses, however, many limitations still exist, including worse regret rates comparing to the best value-based or model-based approaches [27, 2, 35], or requiring full-information feedback on the entire loss function (as opposed to the more realistic bandit feedback) [7]. ∗Equal contribution. 
1In an improved version of this paper, we show that under the linear MDP assumption, an exploratory policy is not even needed. See https://arxiv.org/abs/2107.08346. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). To address these issues, in this work, we propose a new type of exploration bonus, called the dilated bonus, which satisfies a certain dilated Bellman equation and provably leads to improved exploration compared to existing works (Section 3). We apply this general idea to advance the state-of-the-art of policy optimization for learning finite-horizon episodic MDPs with adversarial losses and bandit feedback. More specifically, our main results are:
• First, in the tabular setting, addressing the main open question left in [27], we improve their $\tilde{O}(T^{2/3})$ regret to the optimal $\tilde{O}(\sqrt{T})$ regret. This shows that policy optimization, which performs local optimization, is as capable as other occupancy-measure-based global optimization algorithms [15, 20] in terms of global exploration. Moreover, our algorithm is computationally more efficient than those global methods, since they require solving a convex optimization problem in each episode. (Section 4)
• Second, to further deal with large-scale problems, we consider a linear function approximation setting where the state-action values are linear in some known low-dimensional features and a simulator is available, the same setting considered by [24]. We obtain the same $\tilde{O}(T^{2/3})$ regret while importantly removing the need of an exploratory policy that their algorithm requires. Unlike the tabular setting (where we improve existing regret rates of policy optimization), note that researchers have not been able to show any sublinear regret for policy optimization without exploratory assumptions for this problem, which shows the critical role of our proposed dilated bonuses. In fact, there are simply no existing algorithms with sublinear regret at all for this setting, be it of policy-optimization type or not. This shows the advantage of policy optimization over other approaches, when combined with our dilated bonuses. (Section 5)
• Finally, while the main focus of our work is to show how dilated bonuses are able to provide global exploration, we also discuss their roles in improving the regret rate to $\tilde{O}(\sqrt{T})$ in the linear setting above, or removing the need of a simulator for the special case of linear MDPs (with $\tilde{O}(T^{6/7})$ regret), when an exploratory policy is available. (Section 6)
Related work. In the tabular setting, except for [27], most algorithms apply the occupancy-measure-based framework to handle adversarial losses (e.g., [25, 15, 9, 8]), which as mentioned is computationally expensive. For stochastic losses, there are many more different approaches, such as model-based ones [13, 10, 5, 12, 34] and value-based ones [14, 11]. Theoretical studies of linear function approximation have gained increasing interest recently [32, 33, 16]. Most of them study stochastic/stationary losses, with the exception of [24, 7]. Our algorithm for the linear MDP setting bears some similarity to those of [2, 35], which consider stationary losses. However, our algorithm and analysis are arguably simpler than theirs. Specifically, they divide the state space into a known part and an unknown part, with different exploration principles and bonus designs for the two parts. In contrast, we enjoy a unified bonus design for all states.
Besides, in each episode, their algorithms first execute an exploratory policy (from a policy cover), and then switch to the policy suggested by the policy optimization algorithm, which inevitably leads to linear regret when facing adversarial losses.
2 Problem Setting
We consider an MDP specified by a state space X (possibly infinite), a finite action space A, and a transition function P with P(·|x, a) specifying the distribution of the next state after taking action a in state x. In particular, we focus on the finite-horizon episodic setting in which X admits a layer structure and can be partitioned into X0, X1, . . . , XH for some fixed parameter H, where X0 contains only the initial state x0, XH contains only the terminal state xH, and for any x ∈ Xh, h = 0, . . . , H − 1, P(·|x, a) is supported on Xh+1 for all a ∈ A (that is, transitions are only possible from Xh to Xh+1). An episode refers to a trajectory that starts from x0 and ends at xH following a sequence of actions and the transition dynamics. The MDP may be assigned a loss function ℓ : X × A → [0, 1] so that ℓ(x, a) specifies the loss suffered when selecting action a in state x. A policy π for the MDP is a mapping X → ∆(A), where ∆(A) denotes the set of distributions over A and π(a|x) is the probability of choosing action a in state x.
Given a loss function ℓ and a policy π, the expected total loss of π is given by
$V^\pi(x_0; \ell) = \mathbb{E}\left[\sum_{h=0}^{H-1} \ell(x_h, a_h) \,\middle|\, a_h \sim \pi(\cdot|x_h),\ x_{h+1} \sim P(\cdot|x_h, a_h)\right]$.
It can also be defined via the Bellman equation involving the state value function $V^\pi(x; \ell)$ and the state-action value function $Q^\pi(x, a; \ell)$ (a.k.a. Q-function), defined as below: $V^\pi(x_H; \ell) = 0$, $Q^\pi(x, a; \ell) = \ell(x, a) + \mathbb{E}_{x' \sim P(\cdot|x,a)}\left[V^\pi(x'; \ell)\right]$, and $V^\pi(x; \ell) = \mathbb{E}_{a \sim \pi(\cdot|x)}\left[Q^\pi(x, a; \ell)\right]$.
We study online learning in such a finite-horizon MDP with unknown transition, bandit feedback, and adversarial losses. The learning proceeds through T episodes. Ahead of time, an adversary arbitrarily decides T loss functions ℓ1, . . . , ℓT, without revealing them to the learner. Then in each episode t, the learner decides a policy πt based on all information received prior to this episode, executes πt starting from the initial state x0, and generates and observes a trajectory $\{(x_{t,h}, a_{t,h}, \ell_t(x_{t,h}, a_{t,h}))\}_{h=0}^{H-1}$. Importantly, the learner does not observe any other information about ℓt (a.k.a. bandit feedback).2 The goal of the learner is to minimize the regret, defined as
$\text{Reg} = \sum_{t=1}^{T} V_t^{\pi_t}(x_0) - \min_{\pi} \sum_{t=1}^{T} V_t^{\pi}(x_0)$,
where we use $V_t^{\pi}(x)$ as a shorthand for $V^\pi(x; \ell_t)$ (and similarly $Q_t^{\pi}(x, a)$ as a shorthand for $Q^\pi(x, a; \ell_t)$). Without further structure, the best existing regret bound is $\tilde{O}(H|X|\sqrt{|A|T})$ [15], with an extra $\sqrt{|X|}$ factor compared to the best existing lower bound [14].
Occupancy measures. For a policy π and a state x, we define $q^\pi(x)$ to be the probability (or probability measure when |X| is infinite) of visiting state x within an episode when following π. When it is necessary to highlight the dependence on the transition, we write it as $q^{P,\pi}(x)$. Further define $q^\pi(x, a) = q^\pi(x)\pi(a|x)$ and $q_t(x, a) = q^{\pi_t}(x, a)$. Finally, we use $q^\star$ as a shorthand for $q^{\pi^\star}$, where $\pi^\star \in \operatorname{argmin}_\pi \sum_{t=1}^{T} V_t^{\pi}(x_0)$ is one of the optimal policies. Note that by definition, we have $V^\pi(x_0; \ell) = \sum_{x,a} q^\pi(x, a)\ell(x, a)$. In fact, we will overload the notation and let $V^\pi(x_0; b) = \sum_{x,a} q^\pi(x, a)b(x, a)$ for any function b : X × A → R (even though it might not correspond to a real loss function).
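In the tabular case, these occupancy measures can be computed exactly by a forward pass over the layers. The following minimal Python sketch (our own illustration, not code from the paper; the integer state ids and the dictionary-of-arrays representation of P are assumptions) computes $q^\pi$ and the identity $V^\pi(x_0; \ell) = \sum_{x,a} q^\pi(x,a)\ell(x,a)$:

```python
import numpy as np

def occupancy_measures(P, pi, layers):
    """Forward recursion for q^pi(x) in a layered tabular MDP.

    layers: list of lists of state ids; layers[0] = [x0], layers[-1] = [xH].
    P[(x, a)]: numpy probability vector over all states (supported on the next layer).
    pi[x]: numpy action distribution at state x.
    """
    num_states = sum(len(layer) for layer in layers)
    q = np.zeros(num_states)
    q[layers[0][0]] = 1.0  # the initial state is always visited
    for h in range(len(layers) - 1):
        for x in layers[h]:
            for a in range(len(pi[x])):
                # probability mass flowing from (x, a) into the next layer
                q[layers[h + 1]] += q[x] * pi[x][a] * P[(x, a)][layers[h + 1]]
    return q

def expected_total_loss(P, pi, layers, loss):
    """V^pi(x0; loss) = sum over (x, a) of q^pi(x, a) * loss[x, a]."""
    q = occupancy_measures(P, pi, layers)
    return sum(q[x] * pi[x][a] * loss[x, a]
               for layer in layers[:-1] for x in layer
               for a in range(len(pi[x])))
```

In the bandit setting the learner cannot run this recursion exactly, since P is unknown; this is precisely why the optimistic upper and lower occupancy estimates over the confidence set appear later in Section 4.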
Other notations. We denote by $\mathbb{E}_t[\cdot]$ and $\text{Var}_t[\cdot]$ the expectation and variance conditioned on everything prior to episode t. For a matrix Σ and a vector z (of appropriate dimension), $\|z\|_{\Sigma}$ denotes the quadratic norm $\sqrt{z^\top \Sigma z}$. The notation $\tilde{O}(\cdot)$ hides all logarithmic factors.
3 Dilated Exploration Bonuses
In this section, we start with a general discussion on designing exploration bonuses (not specific to policy optimization), and then introduce our new dilated bonuses for policy optimization. For simplicity, the exposition in this section assumes a finite state space, but the idea generalizes to an infinite state space.
When analyzing the regret of an algorithm, very often we run into the following form:
$\text{Reg} = \sum_{t=1}^{T} V_t^{\pi_t}(x_0) - \sum_{t=1}^{T} V_t^{\pi^\star}(x_0) \le o(T) + \sum_{t=1}^{T}\sum_{x,a} q^\star(x,a)\, b_t(x,a) = o(T) + \sum_{t=1}^{T} V^{\pi^\star}(x_0; b_t)$, (1)
for some function $b_t(x, a)$ usually related to some estimation error or variance that can be prohibitively large. For example, in policy optimization, the algorithm performs local search in each state, essentially using a multi-armed bandit algorithm and treating $Q^{\pi_t}(x, a)$ as the loss of action a in state x. Since $Q^{\pi_t}(x, a)$ is unknown, however, the algorithm has to use some estimator of $Q^{\pi_t}(x, a)$ instead, whose bias and variance both contribute to the $b_t$ function. Usually, $b_t(x, a)$ is large for a rarely-visited state-action pair (x, a) and is inversely related to $q_t(x, a)$, which is exactly why most analyses rely on the assumption that some distribution mismatch coefficient related to $q^\star(x,a)/q_t(x,a)$ is bounded (see e.g., [3, 31]). (2Full-information feedback, on the other hand, refers to the easier setting where the entire loss function $\ell_t$ is revealed to the learner at the end of episode t.)
On the other hand, an important observation is that while $V^{\pi^\star}(x_0; b_t)$ can be prohibitively large, its counterpart with respect to the learner's policy, $V^{\pi_t}(x_0; b_t)$, is usually nicely bounded. For example, if $b_t(x, a)$ is inversely related to $q_t(x, a)$ as mentioned, then $V^{\pi_t}(x_0; b_t) = \sum_{x,a} q_t(x, a)b_t(x, a)$ is small no matter how small $q_t(x, a)$ could be for some (x, a). This observation, together with the linearity property $V^\pi(x_0; \ell_t - b_t) = V^\pi(x_0; \ell_t) - V^\pi(x_0; b_t)$, suggests that we treat $\ell_t - b_t$ as the loss function of the problem, or in other words, add a (negative) bonus to each state-action pair, which intuitively encourages exploration due to underestimation. Indeed, assume for a moment that Eq. (1) still roughly holds even if we treat $\ell_t - b_t$ as the loss function:
$\sum_{t=1}^{T} V^{\pi_t}(x_0; \ell_t - b_t) - \sum_{t=1}^{T} V^{\pi^\star}(x_0; \ell_t - b_t) \lesssim o(T) + \sum_{t=1}^{T} V^{\pi^\star}(x_0; b_t)$. (2)
Then by linearity and rearranging, we have
$\text{Reg} = \sum_{t=1}^{T} V_t^{\pi_t}(x_0) - \sum_{t=1}^{T} V_t^{\pi^\star}(x_0) \lesssim o(T) + \sum_{t=1}^{T} V^{\pi_t}(x_0; b_t)$. (3)
Due to the switch from $\pi^\star$ to $\pi_t$ in the last term compared to Eq. (1), this is usually enough to prove a desirable regret bound without making extra assumptions.
The caveat of this discussion is the assumption of Eq. (2). Indeed, the added bonuses themselves contribute some more bias and variance, so one should expect that $b_t$ on the right-hand side of Eq. (2) becomes something larger, breaking the desired cancellation effect needed to achieve Eq. (3). In this sense, the definition of $b_t$ essentially becomes circular.
Dilated Bonuses for Policy Optimization. To address this issue, we take a closer look at the policy optimization algorithm specifically. As mentioned, policy optimization decomposes the problem into individual multi-armed bandit problems in each state and then performs local optimization.
This is based on the well-known performance difference lemma [17]:
$\text{Reg} = \sum_{x} q^\star(x) \sum_{t=1}^{T} \sum_{a} \big(\pi_t(a|x) - \pi^\star(a|x)\big)\, Q_t^{\pi_t}(x, a)$,
showing that in each state x, the learner is facing a bandit problem with $Q_t^{\pi_t}(x, a)$ being the loss for action a. Correspondingly, incorporating the bonuses $b_t$ into policy optimization means subtracting the bonus $Q^{\pi_t}(x, a; b_t)$ from $Q_t^{\pi_t}(x, a)$ for each action a in each state x. Recall that $Q^{\pi_t}(x, a; b_t)$ satisfies the Bellman equation $Q^{\pi_t}(x, a; b_t) = b_t(x, a) + \mathbb{E}_{x' \sim P(\cdot|x,a)}\mathbb{E}_{a' \sim \pi_t(\cdot|x')}\left[Q^{\pi_t}(x', a'; b_t)\right]$. To resolve the issue mentioned earlier, we propose to replace this bonus function $Q^{\pi_t}(x, a; b_t)$ with its dilated version $B_t(x, a)$, satisfying the following dilated Bellman equation:
$B_t(x, a) = b_t(x, a) + \left(1 + \tfrac{1}{H}\right) \mathbb{E}_{x' \sim P(\cdot|x,a)}\mathbb{E}_{a' \sim \pi_t(\cdot|x')}\left[B_t(x', a')\right]$ (4)
(with $B_t(x_H, a) = 0$ for all a). The only difference compared to the standard Bellman equation is the extra $(1 + \frac{1}{H})$ factor, which slightly increases the weight for deeper layers and thus intuitively induces more exploration for those layers. Due to the extra bonus compared to $Q^{\pi_t}(x, a; b_t)$, the regret bound also increases accordingly. In all our applications, this extra amount of regret turns out to be of the form $\frac{1}{H}\sum_{t=1}^{T}\sum_{x,a} q^\star(x)\pi_t(a|x)B_t(x, a)$, leading to
$\sum_{x} q^\star(x) \sum_{t=1}^{T} \sum_{a} \big(\pi_t(a|x) - \pi^\star(a|x)\big)\big(Q_t^{\pi_t}(x, a) - B_t(x, a)\big) \le o(T) + \sum_{t=1}^{T} V^{\pi^\star}(x_0; b_t) + \frac{1}{H}\sum_{t=1}^{T}\sum_{x,a} q^\star(x)\pi_t(a|x)B_t(x, a)$. (5)
With some direct calculation, one can show that this is enough to prove a regret bound that is only a constant factor larger than the desired bound in Eq. (3)! This is summarized in the following lemma.
Lemma 3.1. If Eq. (5) holds with $B_t$ defined in Eq. (4), then $\text{Reg} \le o(T) + 3\sum_{t=1}^{T} V^{\pi_t}(x_0; b_t)$.
The high-level idea of the proof is to show that the bonuses added to a layer h are enough to cancel the large bias/variance term (including those coming from the bonus itself) from layer h + 1. Therefore, cancellation happens in a layer-by-layer manner except for layer 0, where the total amount of bonus can be shown to be at most $(1 + \frac{1}{H})^H \sum_{t=1}^{T} V^{\pi_t}(x_0; b_t) \le 3\sum_{t=1}^{T} V^{\pi_t}(x_0; b_t)$. Recalling again that $V^{\pi_t}(x_0; b_t)$ is usually nicely bounded, we thus arrive at a favorable regret guarantee without making extra assumptions.
Of course, since the transition is unknown, we cannot compute $B_t$ exactly. However, Lemma 3.1 is robust enough to handle either a good approximate version of $B_t$ (see Lemma B.1) or a version where Eq. (4) and Eq. (5) only hold in expectation (see Lemma B.2), which is enough for us to handle the unknown transition. In the next three sections, we apply this general idea to different settings, showing what $b_t$ and $B_t$ are concretely in each case.
4 The Tabular Case
In this section, we study the tabular case where the number of states is finite. We propose a policy optimization algorithm with $\tilde{O}(\sqrt{T})$ regret, improving the $\tilde{O}(T^{2/3})$ regret of [27]. See Algorithm 1 for the complete pseudocode.
Algorithm design. First, to handle the unknown transition, we follow the common practice (dating back to [13]) of maintaining a confidence set of the transition, which is updated whenever the visitation count of a certain state-action pair is doubled. We call the period between two model updates an epoch, and use $\mathcal{P}_k$ to denote the confidence set for epoch k, formally defined in Eq. (10).
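To make the dilated Bellman equation concrete before detailing the algorithm, the following minimal Python sketch (our own illustration, not code from the paper) computes $B_t$ by backward induction over the layers, directly implementing Eq. (4). For simplicity it assumes a known transition P; Algorithm 1 instead uses optimistic estimates from the confidence set.

```python
def dilated_bonus(b, P, pi, layers, H):
    """Backward recursion for the dilated Bellman equation (Eq. 4):
    B(x, a) = b(x, a) + (1 + 1/H) * E_{x'~P(.|x,a)} E_{a'~pi(.|x')}[B(x', a')],
    with B(x_H, a) = 0.  Known transition P, for illustration only.
    layers: list of lists of state ids; P[(x, a)][x2]: transition probability;
    pi[x]: list of action probabilities; b[(x, a)]: per-pair bonus."""
    B = {}
    # expected bonus E_{a~pi}[B(x, a)] of the layer below; zero at the terminal layer
    eb_next = {x: 0.0 for x in layers[-1]}
    for h in range(len(layers) - 2, -1, -1):
        eb = {}
        for x in layers[h]:
            for a in range(len(pi[x])):
                cont = sum(P[(x, a)][x2] * eb_next[x2] for x2 in layers[h + 1])
                B[(x, a)] = b[(x, a)] + (1.0 + 1.0 / H) * cont
            eb[x] = sum(pi[x][a] * B[(x, a)] for a in range(len(pi[x])))
        eb_next = eb
    return B
```

Replacing the expectation over the next state with a single sampled successor, as Algorithm 3 later does in the linear case, gives an unbiased version of the same recursion.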
In episode t, the policy πt is defined via the standard multiplicative weight algorithm (also connected to Natural Policy Gradient [18, 3, 30]), but importantly with the dilated bonuses incorporated such that πt(a|x) ∝ exp(−η ∑t−1 τ=1(Q̂τ (x, a)−Bτ (x, a))). Here, η is a step size parameter, Q̂τ (x, a) is an importance-weighted estimator for Qπττ (x, a) defined in Eq. (7), and Bτ (x, a) is the dilated bonus defined in Eq. (9). More specifically, for a state x in layer h, Q̂t(x, a) is defined as Lt,h1t(x,a) qt(x,a)+γ , where 1t(x, a) is the indicator of whether (x, a) is visited during episode t; Lt,h is the total loss suffered by the learner starting from layer h till the end of the episode; qt(x, a) = maxP̂∈Pk q P̂ ,πt(x, a) is the largest plausible value of qt(x, a) within the confidence set, which can be computed efficiently using the COMP-UOB procedure of [15] (see also Appendix C.1); and finally γ is a parameter used to control the maximum magnitude of Q̂t(x, a), inspired by the work of [23]. To get a sense of this estimator, consider the special case when γ = 0 and the transition is known so that we can set Pk = {P} and thus qt = qt. Then, since the expectation of Lt,h conditioned on (x, a) being visited is Q πt t (x, a) and the expectation of 1t(x, a) is qt(x, a), we know that Q̂t(x, a) is an unbiased estimator for Qπtt (x, a). The extra complication is simply due to the transition being unknown, forcing us to use qt and γ > 0 to make sure that Q̂t(x, a) is an optimistic underestimator, an idea similar to [15]. Next, we explain the design of the dilated bonus Bt. Following the discussions of Section 3, we first figure out what the corresponding bt function is in Eq. (1), by analyzing the regret bound without using any bonuses. The concrete form of bt turns out to be Eq. (8), whose value at (x, a) is independent of a and thus written as bt(x) for simplicity. Note that Eq. (8) depends on the occupancy measure lower bound q t (s, a) = minP̂∈Pk q P̂ ,πt(x, a), the opposite of qt(s, a), which can also be computed efficiently using a procedure similar to COMP-UOB (see Appendix C.1). Once again, to get a sense of this, consider the special case with a known transition so that we can set Pk = {P} and thus qt = qt = qt. Then, one see that bt(x) is simply upper bounded by Ea∼πt(·|x) [3γH/qt(x,a)] = 3γH|A|/qt(x), which is inversely related to the probability of visiting state x, matching the intuition we provided in Section 3 (that bt(x) is large if x is rarely visited). The extra complication of Eq. (8) is again just due to the unknown transition. With bt(x) ready, the final form of the dilated bonus Bt is defined following the dilated Bellman equation of Eq. (4), except that since P is unknown, we once again apply optimism and find the 3We use y +← z as a shorthand for the increment operation y ← y + z. Algorithm 1 Policy Optimization with Dilated Bonuses (Tabular Case) Parameters: δ ∈ (0, 1), η = min {1/24H3, 1/√|X||A|HT}, γ = 2ηH . Initialization: Set epoch index k = 1 and confidence set P1 as the set of all transition functions. For all (x, a, x′), initialize counters N0(x, a) = N1(x, a) = 0, N0(x, a, x′) = N1(x, a, x′) = 0. for t = 1, 2, . . . , T do Step 1: Compute and execute policy. Execute πt for one episode, where πt(a|x) ∝ exp ( −η t−1∑ τ=1 ( Q̂τ (x, a)−Bτ (x, a) )) , (6) and obtain trajectory {(xt,h, at,h, `t(xt,h, at,h))}H−1h=0 . Step 2: Construct Q-function estimators. For all h ∈ {0, . . . 
,H − 1} and (x, a) ∈ Xh ×A, Q̂t(x, a) = Lt,h qt(x, a) + γ 1t(x, a), (7) with Lt,h = H−1∑ i=h `t(xt,i, at,i), qt(x, a) = max P̂∈Pk qP̂ ,πt(x, a),1t(x, a) = 1{xt,h = x, at,h = a}. Step 3: Construct bonus functions. For all (x, a) ∈ X ×A, bt(x) = Ea∼πt(·|x) [ 3γH +H(qt(x, a)− qt(x, a)) qt(x, a) + γ ] (8) Bt(x, a) = bt(x) + ( 1 + 1 H ) max P̂∈Pk Ex′∼P̂ (·|x,a)Ea′∼πt(·|x′) [Bt(x ′, a′)] (9) where q t (x, a) = minP̂∈Pk q P̂ ,πt(x, a) and Bt(xH , a) = 0 for all a. Step 4: Update model estimation. ∀h < H , Nk(xt,h, at,h) +← 1, Nk(xt,h, at,h, xt,h+1) +← 1.3 if ∃h, Nk(xt,h, at,h) ≥ max{1, 2Nk−1(xt,h, at,h)} then Increment epoch index k +← 1 and copy counters: Nk ← Nk−1, Nk ← Nk−1. Compute empirical transition P k(x′|x, a) = Nk(x,a,x ′) max{1,Nk(x,a)} and confidence set: Pk = { P̂ : ∣∣∣P̂ (x′|x, a)− P k(x′|x, a)∣∣∣ ≤ confk(x′|x, a), ∀(x, a, x′) ∈ Xh ×A×Xh+1, h = 0, 1, . . . ,H − 1 } , (10) where confk(x ′|x, a) = 4 √ Pk(x′|x,a) ln(T |X||A|δ ) max{1,Nk(x,a)} + 28 ln(T |X||A|δ ) 3 max{1,Nk(x,a)} . largest possible value within the confidence set (see Eq. (9)). This can again be efficiently computed; see Appendix C.1. This concludes the complete algorithm design. Regret analysis. The regret guarantee of Algorithm 1 is presented below: Theorem 4.1. Algorithm 1 ensures that with probability 1−O(δ), Reg = Õ ( H2|X| √ AT +H4 ) . Again, this improves the Õ(T 2/3) regret of [27]. It almost matches the best existing upper bound for this problem, which is Õ(H|X| √ |A|T ) [15]. While it is unclear to us whether this small gap can be closed using policy optimization, we point out that our algorithm is arguably more efficient than that of [15], which performs global convex optimization over the set of all plausible occupancy measures in each episode. The complete proof of this theorem is deferred to Appendix C. Here, we only sketch an outline of proving Eq. (5), which, according to the discussions in Section 3, is the most important part of the analysis. Specifically, we decompose the left-hand side of Eq. (5),∑ x q ?(x) ∑ t 〈πt(·|x)− π?(·|x), Qt(x, ·)−Bt(x, ·)〉, as BIAS-1 + BIAS-2 + REG-TERM, where • BIAS-1 = ∑ x q ?(x) ∑ t〈πt(·|x), Qt(x, ·)− Q̂t(x, ·)〉 measures the amount of underestimation of Q̂t related to πt, which can be bounded by ∑ t ∑ x,a q ?(x)πt(a|x) ( 2γH+H(qt(x,a)−qt(x,a)) qt(x,a)+γ ) + Õ (H/η) with high probability (Lemma C.1); • BIAS-2 = ∑ x q ?(x) ∑ t〈π?(·|x), Q̂t(x, ·)−Qt(x, ·)〉 measures the amount of overestimation of Q̂t related to π?, which can be bounded by Õ (H/η) since Q̂t is an underestimator (Lemma C.2); • REG-TERM = ∑ x q ?(x) ∑ t〈πt(·|x)− π?(·|x), Q̂t(x, ·)−Bt(x, ·)〉 is directly controlled by the multiplicative weight update, and is bounded by ∑ t ∑ x,a q ?(x)πt(a|x) ( γH qt(x,a)+γ + Bt(x,a)H ) + Õ (H/η) with high probability (Lemma C.3). Combining all with the definition of bt proves the key Eq. (5) (with the o(T ) term being Õ(H/η)). 5 The Linear-Q Case In this section, we move on to the more challenging setting where the number of states might be infinite, and function approximation is used to generalize the learner’s experience to unseen states. We consider the most basic linear function approximation scheme where for any π, the Q-function Qπt (x, a) is linear in some known feature vector φ(x, a), formally stated below. Assumption 1 (Linear-Q). Let φ(x, a) ∈ Rd be a known feature vector of the state-action pair (x, a). 
We assume that for any episode t, policy π, and layer h, there exists an unknown weight vector θπt,h ∈ Rd such that for all (x, a) ∈ Xh × A, Qπt (x, a) = φ(x, a)>θπt,h. Without loss of generality, we assume ‖φ(x, a)‖ ≤ 1 for all (x, a) and ‖θπt,h‖ ≤ √ dH for all t, h, π. For justification on the last condition on norms, see [30, Lemma 8]. This linear-Q assumption has been made in several recent works with stationary losses [1, 30] and also in [24] with the same adversarial losses.4 It is weaker than the linear MDP assumption (see Section 6) as it does not pose explicit structure requirements on the loss and transition functions. Due to this generality, however, our algorithm also requires access to a simulator to obtain samples drawn from the transition, formally stated below. Assumption 2 (Simulator). The learner has access to a simulator, which takes a state-action pair (x, a) ∈ X ×A as input, and generates a random outcome of the next state x′ ∼ P (·|x, a). Note that this assumption is also made by [24] and more earlier works with stationary losses (see e.g., [4, 28]).5 In this setting, we propose a new policy optimization algorithm with Õ(T 2/3) regret. See Algorithm 2 for the pseudocode. Algorithm design. The algorithm still follows the multiplicative weight update Eq. (11) in each state x ∈ Xh (for some h), but now with φ(x, a)>θ̂t,h as an estimator for Qπtt (x, a) = φ(x, a)>θ πt t,h, and BONUS(t, x, a) as the dilated bonus Bt(x, a). Specifically, the construction of the weight estimator θ̂t,h follows the idea of [24] (which itself is based on the linear bandit literature) and is defined in Eq. (12) as Σ̂+t,hφ(xt,h, at,h)Lt,h. Here, Σ̂ + t,h is an -accurate estimator of (γI + Σt,h) −1, where γ is a small parameter and Σt,h = Et[φ(xt,h, at,h)φ(xt,h, at,h)>] is the covariance matrix for layer h under policy πt; Lt,h = ∑H−1 i=h `t(xt,i, at,i) is again the loss suffered by the learner starting from layer h, whose conditional expectation is Qπtt (xt,h, at,h) = φ(xt,h, at,h) >θπtt,h. Therefore, 4The assumption in [24] is stated slightly differently (e.g., their feature vectors are independent of the action). However, it is straightforward to verify that the two versions are equivalent. 5The simulator required by [24] is in fact slightly weaker than ours and those from earlier works — it only needs to be able to generate a trajectory starting from x0 for any policy. Algorithm 2 Policy Optimization with Dilated Bonuses (Linear-Q Case) parameters: γ, β, η, , M = ⌈ 24 ln(dHT ) 2γ2 ⌉ , N = ⌈ 2 γ ln 1 γ ⌉ . for t = 1, 2, . . . , T do Step 1: Interact with the environment. Execute πt, which is defined such that for each x ∈ Xh, πt(a|x) ∝ exp ( −η t−1∑ τ=1 ( φ(x, a)>θ̂τ,h − BONUS(τ, x, a) )) , (11) and obtain trajectory {(xt,h, at,h, `t(xt,h, at,h))}H−1h=0 . Step 2: Construct covariance matrix inverse estimators.{ Σ̂+t,h }H−1 h=0 = GEOMETRICRESAMPLING (t,M,N, γ) . (see Algorithm 7) Step 3: Construct Q-function weight estimators. For h = 0, . . . ,H − 1, compute θ̂t,h = Σ̂ + t,hφ(xt,h, at,h)Lt,h, where Lt,h = H−1∑ i=h `t(xt,i, at,i). (12) Algorithm 3 BONUS(t, x, a) if BONUS(t, x, a) has been called before then return the value of BONUS(t, x, a) calculated last time. Let h be such that x ∈ Xh. if h = H then return 0. Compute πt(·|x), defined in Eq. (11) (which involves recursive calls to BONUS for smaller t). Get a sample of the next state x′ ← SIMULATOR(x, a). Compute πt(·|x′) (again, defined in Eq. (11)), and sample an action a′ ∼ πt(·|x′). 
return β‖φ(x, a)‖2 Σ̂+t,h + Ej∼πt(·|x) [ β‖φ(x, j)‖2 Σ̂+t,h ] + ( 1 + 1H ) BONUS(t, x′, a′). when γ and approach 0, one see that θ̂t,h is indeed an unbiased estimator of θπtt,h. We adopt the GEOMETRICRESAMPLING procedure (see Algorithm 7) of [24] to compute Σ̂+t,h, which involves calling the simulator multiple times. Next, we explain the design of the dilated bonus. Again, following the general principle discussed in Section 3, we identify bt(x, a) in this case as β‖φ(x, a)‖2Σ̂+t,h + Ej∼πt(·|x) [ β‖φ(x, j)‖2 Σ̂+t,h ] for some parameter β > 0. Further following the dilated Bellman equation Eq. (4), we thus define BONUS(t, x, a) recursively as the last line of Algorithm 3, where we replace the expectation E(x′,a′)[BONUS(t, x′, a′)] with one single sample for efficient implementation. However, even more care is needed to actually implement the algorithm. First, since the state space is potentially infinite, one cannot actually calculate and store the value of BONUS(t, x, a) for all (x, a), but can only calculate them on-the-fly when needed. Moreover, unlike the estimators for Qπtt (x, a), which can be succinctly represented and stored via the weight estimator θ̂t,h, this is not possible for BONUS(t, x, a) due to the lack of any structure. Even worse, the definition of BONUS(t, x, a) itself depends on πt(·|x) and also πt(·|x′) for the afterstate x′, which, according to Eq. (11), further depends on BONUS(τ, x, a) for τ < t, resulting in a complicated recursive structure. This is also why we present it as a procedure in Algorithm 3 (instead of Bt(x, a)). In total, this leads to (TAH)O(H) number of calls to the simulator. Whether this can be improved is left as a future direction. Regret guarantee By showing that Eq. (5) holds in expectation for our algorithm, we obtain the following regret guarantee. (See Appendix D for the proof.) Theorem 5.1. Under Assumption 1 and Assumption 2, with appropriate choices of the parameters γ, β, η, , Algorithm 2 ensures E[Reg] = Õ ( H2(dT )2/3 ) (the dependence on |A| is only logarithmic). This matches the Õ(T 2/3) regret of [24, Theorem 1], without the need of their assumption which essentially says that the learner is given an exploratory policy to start with.6 To our knowledge, this is the first no-regret algorithm for linear function approximation (with adversarial losses and bandit feedback) when no exploratory assumptions are made. 6 Improvements with an Exploratory Policy Previous sections have demonstrated the role of dilated bonuses in providing global exploration. In this section, we further discuss what dilated bonuses can achieve when an exploratory policy π0 is given in linear function approximation settings. Formally, let Σh = E[φ(xh, ah)φ(xh, ah)>] denote the covariance matrix for features in layer h following π0 (that is, the expectation is taken over a trajectory {(xh, ah)}H−1h=0 with ah ∼ π0(·|xh)), then we assume the following. Assumption 3 (An exploratory policy). An exploratory policy π0 is given to the learner ahead of time, and guarantees that for any h, the eigenvalues of Σh are at least λmin > 0. The same assumption is made by [24] (where they simply let π0 be the uniform exploration policy). As mentioned, under this assumption they achieve Õ(T 2/3) regret. By slightly modifying our Algorithm 2 (specifically, executing π0 with a small probability in each episode and setting the parameters differently), we achieve the following improved result. Theorem 6.1. 
Under Assumptions 1, 2, and 3, Algorithm 8 ensures E[Reg] = Õ (√ H4T λmin + √ H5dT ) . Removing the simulator One drawback of our algorithm is that it requires exponential in H number of calls to the simulator. To address this issue, and in fact, to also completely remove the need of a simulator, we further consider a special case where the transition function also has a low-rank structure, known as the linear MDP setting. Assumption 4 (Linear MDP). The MDP satisfies Assumption 1 and that for any h and x′ ∈ Xh+1, there exists a weight vector νx ′ h ∈ Rd such that P (x′|x, a) = φ(x, a)>νx ′ h for all (x, a) ∈ Xh ×A. There is a surge of works studying this setting, with [7] being the closest to us. They achieve Õ( √ T ) regret but require full-information feedback of the loss functions, and there are no existing results for the bandit feedback setting without a simulator. We propose the first algorithm with sublinear regret for this problem, shown in Algorithm 10 of Appendix F due to space limit. The structure of Algorithm 10 is very similar to that of Algorithm 2, with the same definition of bt(x, a). However, due to the low-rank transition structure, we are now able to efficiently construct estimators of Bt(x, a) even for unseen state-action pairs using function approximation, bypassing the requirement of a simulator. Specifically, observe that according to Eq. (4), for each x ∈ Xh, under Assumption 4 Bt(x, a) can be written as bt(x, a) + φ(x, a)>Λπtt,h, where Λ πt t,h = (1 + 1 H ) ∫ x′∈Xh+1 Ea′∼πt(·|x′)[Bt(x ′, a′)]νx ′ h dx ′ is a vector independent of (x, a). Thus, by the same idea of estimating θπtt,h, we can estimate Λ πt t,h as well, thus succinctly representing Bt(x, a) for all (x, a). Recall that estimating θπtt,h (and thus also Λ πt t,h) requires constructing the covariance matrix inverse estimate Σ̂+t,h. Due to the lack of a simulator, another important change in the algorithm is to construct Σ̂+t,h using online samples. To do so, we divide the entire horizon into epochs with equal length, and only update the policy optimization algorithm at the beginning of an epoch. Within an epoch, we keep executing the same policy and collect several trajectories, which are then used to construct Σ̂+t,h. With these changes, we successfully remove the need of a simulator, and prove the guarantee below. Theorem 6.2. Under Assumption 3 and Assumption 4, Algorithm 10 ensures E[Reg] = Õ ( T 6/7 ) (see Appendix F for dependence on other parameters). One potential direction to further improve our algorithm is to reuse data across different epochs, an idea adopted by several recent works [35, 19] for different problems. We also conjecture that 6Under an even strong assumption that every policy is exploratory, they also improve the regret to Õ( √ T ); see [24, Theorem 2]. Assumption 3 can be removed, but we meet some technical difficulty in proving so. We leave these for future investigation. Acknowledgments and Disclosure of Funding We thank Gergely Neu and Julia Olkhovskaya for discussions on the technical details of their GEOMETRICRESAMPLING procedure. This work is supported by NSF Award IIS-1943607 and a Google Faculty Research Award.
1. What is the focus of the paper regarding policy optimization for adversarial MDPs?
2. What are the strengths of the proposed approach, particularly in terms of exploration and generalization?
3. Are there any concerns or suggestions regarding the notation used in the paper?
4. How does the reviewer assess the theoretical contributions and improvements compared to prior works?
5. Can you provide more details about the dilated bonuses and their role in the algorithm?
Summary Of The Paper The paper studies the regret of policy optimization for adversarial MDPs with bandit feedback. It develops a general solution that adds dilated bonuses to the policy update for exploration and applies the algorithm to the tabular, linear-Q, and linear MDP settings. It shows that such a solution improves and generalizes the state-of-the-art. Review
(+) For the tabular setting, the paper improves the regret analysis to $O(\sqrt{T})$ with the dilated bonus. The idea is novel and the theoretical contribution is solid.
(+) For the linear-Q setting, the paper establishes a regret of $O(T^{2/3})$ under the assumption that a simulator exists, and $O(T^{1/2})$ with an exploratory policy. The assumptions are weaker than those in previous work. The theoretical contribution of this part is solid.
(+) The paper also provides a regret bound for linear MDPs without a simulator.
(+) The mechanism of the dilated bonus is well discussed in Section 3.
(-) The notations $Q(x_h, a)$ and $V(x_h)$ are used without specification. Specifying the timestep h by a subscript on x is a confusing notation for value functions. I suggest the authors use, e.g., $V(x, h)$ and $Q(x, a, h)$.
NIPS
Title Adaptive Reduced Rank Regression Abstract We study the low rank regression problem y = Mx + ε, where x and y are d1- and d2-dimensional vectors respectively. We consider the extreme high-dimensional setting where the number of observations n is less than d1 + d2. Existing algorithms are designed for settings where n is typically as large as rank(M)(d1 + d2). This work provides an efficient algorithm which only involves two SVDs, and establishes statistical guarantees on its performance. The algorithm decouples the problem by first estimating the precision matrix of the features, and then solving the matrix denoising problem. To complement the upper bound, we introduce new techniques for establishing lower bounds on the performance of any algorithm for this problem. Our preliminary experiments confirm that our algorithm often outperforms existing baselines, and is always at least competitive. 1 Introduction We consider the regression problem y = Mx + ε in the high dimensional setting, where x ∈ R^{d1} is the vector of features, y ∈ R^{d2} is a vector of responses, M ∈ R^{d2×d1} are the learnable parameters, and ε ∼ N(0, σ² I_{d2×d2}) is a noise term. The high-dimensional setting refers to the case where the number of observations n is insufficient for recovery and hence regularization for estimation is necessary [26, 30, 12]. This high-dimensional model is widely used in practice, such as for identifying biomarkers [48], understanding risks associated with various diseases [18, 7], image recognition [34, 17], forecasting equity returns in financial markets [33, 39, 28, 8], and analyzing social networks [46, 35]. We consider the "large feature size" setting, in which the number of features d1 is excessively large and can be even larger than the number of observations n. This setting frequently arises in practice because it is often straightforward to perform feature-engineering and produce a large number of potentially useful features in many machine learning problems. For example, in a typical equity forecasting model, n is around 3,000 (i.e., using 10 years of market data), whereas the number of potentially relevant features can be in the order of thousands [33, 22, 25, 13]. In predicting the popularity of a user in an online social network, n is in the order of hundreds (each day is an observation and a typical dataset contains less than three years of data) whereas the feature size can easily be more than 10k [36, 6, 38]. Existing low-rank regularization techniques (e.g., [3, 23, 26, 30, 27]) are not optimized for the large feature size setting. These results assume that either the features possess the so-called restricted isometry property [10], or that their covariance matrix can be accurately estimated [30]. Therefore, their sample complexity n depends on either d1 or the smallest eigenvalue λmin of x's covariance matrix. For example, a mean-squared error (MSE) result that appeared in [30] is of the form $O\big(\frac{r(d_1+d_2)}{n\lambda_{\min}^2}\big)$. When $n \le d_1/\lambda_{\min}^2$, this result becomes trivial because the forecast ŷ = 0 produces a comparable MSE. (∗ Correspondence to: Qiong Wu <[email protected]>. † Currently at Google. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.) We design an efficient algorithm for the large feature size setting. Our algorithm is a simple two-stage algorithm. Let X ∈ R^{n×d1} be a matrix that stacks together all features and Y ∈ R^{n×d2} be the one that stacks the responses.
In the first stage, we run a principal component analysis (PCA) on X to obtain a set of uncorrelated features Ẑ. In the second stage, we run another PCA to obtain a low rank approximation of ẐᵀY and use it to construct an output. While the algorithm is operationally simple, we show a powerful and generic result on using PCA to process features, a widely used practice for "dimensionality reduction" [11, 21, 19]. PCA is known to be effective at orthogonalizing features by keeping only the subspace explaining large variations. But its performance can only be analyzed under the so-called factor model [40, 39]. We show the efficacy of PCA without the factor model assumption. Instead, PCA should be interpreted as a robust estimator of x's covariance matrix. The empirical estimator $C = \frac{1}{n}X^\top X$ in the high-dimensional setting cannot be directly used because $n \ll d_1 \times d_2$, but it exhibits an interesting regularity: the leading eigenvectors of C are closer to the ground truth than the remaining ones. In addition, the number of reliable eigenvectors grows as the sample size grows, so our PCA procedure projects the features along reliable eigenvectors and dynamically adjusts Ẑ's rank to maximally utilize the raw features. Under mild conditions on the ground-truth covariance matrix C∗ of x, we show that it is always possible to decompose x into a set of near-independent features and a set of (discarded) features that have an inconsequential impact on a model's MSE. When the features x are transformed into uncorrelated ones z, our original problem becomes y = Nz + ε, which can be reduced to a matrix denoising problem [16] and solved by the second stage. Our algorithm guarantees that we can recover all singular vectors of N whose associated singular values are larger than a certain threshold τ. The performance guarantee can be translated into MSE bounds parametrized by commonly used variables (though these translations usually lead to looser bounds). For example, when N's rank is r, our result reduces the MSE from $O\big(\frac{r(d_1+d_2)}{n\lambda_{\min}^2}\big)$ to $O\big(\frac{rd_2}{n} + n^{-c}\big)$ for a suitably small constant c. The improvement is most pronounced when $n \ll d_1$. We also provide a new matching lower bound. Our lower bound asserts that no algorithm can recover a fraction of the singular vectors of N whose associated singular values are smaller than ρτ, where ρ is a "gap parameter". Our lower bound contribution is twofold. First, we introduce a notion of "local minimax", which enables us to define a lower bound parametrized by the singular values of N. This is a stronger lower bound than those delivered by the standard minimax framework, which are often parametrized by the rank r of N [26]. Second, we develop a new probabilistic technique for establishing lower bounds under the new local minimax framework. Roughly speaking, our techniques assemble a large collection of matrices that share the same singular values as N but are far from each other, so no algorithm can successfully distinguish between these matrices with identical spectra. 2 Preliminaries Notation. Let X ∈ R^{n×d1} and Y ∈ R^{n×d2} be data matrices with their i-th rows representing the i-th observation. For a matrix A, we denote its singular value decomposition as $A = U^A \Sigma^A (V^A)^\top$, and $P_r(A) \triangleq U^A_r \Sigma^A_r (V^A_r)^\top$ is the rank-r approximation obtained by keeping the top r singular values and the corresponding singular vectors. When the context is clear, we drop the superscript A and use U, Σ, and V (Ur, Σr, and Vr) instead. Both σi(A) and $\sigma^A_i$ are used to refer to the i-th singular value of A.
We use MATLAB notation when we refer to a specific row or column, e.g., V1,: is the first row of V and V:,1 is the first column. ‖A‖F, ‖A‖2, and ‖A‖∗ are the Frobenius, spectral, and nuclear norms of A. In general, we use boldface upper case (e.g., X) to denote data matrices and boldface lower case (e.g., x) to denote one sample. Regular fonts denote other matrices. Let $C^* = \mathbb{E}[\mathbf{x}\mathbf{x}^\top]$, and let $C = \frac{1}{n}\mathbf{X}^\top\mathbf{X}$ be the empirical estimate of C∗. Let $C^* = V^*\Lambda^*(V^*)^\top$ be the eigen-decomposition of the matrix C∗, and $\lambda^*_1 \ge \lambda^*_2 \ge \cdots \ge \lambda^*_{d_1} \ge 0$ be the diagonal entries of Λ∗. Let {u1, u2, . . . , uℓ} be an arbitrary set of column vectors, and Span({u1, u2, . . . , uℓ}) be the subspace spanned by it. An event happens with high probability means that it happens with probability ≥ 1 − n^{−5}, where 5 is an arbitrarily chosen large constant and is not optimized.
Our model. We consider the model y = Mx + ε, where x ∈ R^{d1} is a multivariate Gaussian, y ∈ R^{d2}, M ∈ R^{d2×d1}, and ε ∼ N(0, σ² I_{d2×d2}). We can relax the Gaussian assumptions on x and ε for most results we develop. Our algorithm is summarized in Figure 1:
STEP-1-PCA-X(X)
1 [U, Σ, V] = svd(X)
2 Λ = (1/n)(Σ²); λi = Λi,i.
3 Gap thresholding.
4 δ = n^{−O(1)} is a tunable parameter.
5 k1 = max{k1 : λ_{k1} − λ_{k1+1} ≥ δ}.
6 Λ_{k1}: diagonal matrix comprised of {λi}_{i≤k1}.
7 U_{k1}, V_{k1}: k1 leading columns of U and V.
8 Π̂ = (Λ_{k1})^{−1/2} V_{k1}^⊤.
9 Ẑ+ = √n U_{k1} (= XΠ̂^⊤).
10 return {Ẑ+, Π̂}.
STEP-2-PCA-DENOISE(Ẑ+, Y)
1 N̂+^⊤ ← (1/n) Ẑ+^⊤ Y.
2 Absolute value thresholding.
3 θ is a suitable constant; σ is the std. of the noise.
4 k2 = max{k2 : σ_{k2}(N̂+) ≥ θσ√(d2/n)}.
5 return P_{k2}(N̂+).
ADAPTIVE-RRR(X, Y)
1 [Ẑ+, Π̂] = STEP-1-PCA-X(X).
2 P_{k2}(N̂+) = STEP-2-PCA-DENOISE(Ẑ+, Y).
3 return M̂ = P_{k2}(N̂+)Π̂.
Figure 1: Our algorithm (ADAPTIVE-RRR) for solving the regression y = Mx + ε.
We assume a PAC learning framework, i.e., we observe a sequence {(xi, yi)}_{i≤n} of independent samples and our goal is to find an M̂ that minimizes the test error $\mathbb{E}_{\mathbf{x},\mathbf{y}}[\|\hat{M}\mathbf{x} - M\mathbf{x}\|_2^2]$. We are specifically interested in the setting in which d2 ≈ n ≤ d1. The key assumption we make to circumvent the d1 ≥ n issue is that the features are correlated. This assumption can be justified for the following reasons: (i) In practice, it is difficult, if not impossible, to construct completely uncorrelated features. (ii) When $n \ll d_1$, it is not even possible to test whether the features are uncorrelated [5]. (iii) When we indeed know that the features are independent, there are significantly simpler methods to design models. For example, we can build multiple models such that each model regresses on an individual feature of x, and then use a boosting/bagging method [19, 37] to consolidate the predictions. The correlatedness assumption implies that the eigenvalues of C∗ decay. The only (full rank) positive semidefinite matrices that have non-decaying (uniform) eigenvalues are identity matrices (up to scaling). In other words, when C∗ has uniform eigenvalues, x has to be uncorrelated. We aim to design an algorithm that works even when the decay is slow, such as when λi(C∗) has a heavy tail. Specifically, our algorithm assumes the λi's are bounded by a heavy-tail power law series: Assumption 2.1. The λi(C∗) series satisfies λi(C∗) ≤ c · i^{−ω} for a constant c and ω ≥ 2. We do not make functional form assumptions on the λi's. This assumption also covers many benign cases, such as when C∗ has low rank or its eigenvalues decay exponentially. Many empirical studies report power law distributions of data covariance matrices [2, 31, 44, 14].
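As a concrete companion to Figure 1, here is a minimal numpy sketch of the two-stage pipeline (our own illustration; the specific values of delta, theta, and sigma below are placeholders, not the paper's tuned parameters):

```python
import numpy as np

def adaptive_rrr(X, Y, delta=1e-2, theta=2.0, sigma=1.0):
    """Minimal sketch of ADAPTIVE-RRR (Figure 1). X: (n, d1), Y: (n, d2)."""
    n = X.shape[0]
    d2 = Y.shape[1]
    # STEP-1-PCA-X: eigenvalues of C = X^T X / n come from the SVD of X
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    lam = S**2 / n
    # gap thresholding: largest k1 whose following eigenvalue gap is >= delta
    gaps = lam[:-1] - lam[1:]
    k1 = int(np.max(np.nonzero(gaps >= delta)[0]) + 1) if np.any(gaps >= delta) else 1
    Pi = np.diag(lam[:k1] ** -0.5) @ Vt[:k1]   # Pi-hat = Lambda_{k1}^{-1/2} V_{k1}^T
    Z = np.sqrt(n) * U[:, :k1]                 # Z-hat_+ = sqrt(n) U_{k1}
    # STEP-2-PCA-DENOISE: hard-threshold the singular values of N-hat_+
    N = (Z.T @ Y / n).T                        # (d2, k1)
    Un, Sn, Vnt = np.linalg.svd(N, full_matrices=False)
    k2 = int(np.sum(Sn >= theta * sigma * np.sqrt(d2 / n)))
    N_k2 = Un[:, :k2] @ np.diag(Sn[:k2]) @ Vnt[:k2]
    return N_k2 @ Pi                           # M-hat = P_{k2}(N-hat_+) Pi-hat
```

The fallback to k1 = 1 when no gap exceeds delta is our simplification; the analysis below shows that under Assumption 2.1 a suitable gap always exists.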
Next, we make standard normalization assumptions: $\mathbb{E}\|x\|_2^2 = 1$, $\|M\|_2 \le \Upsilon = O(1)$, and σ ≥ 1. Remark that we assume only the spectral norm of M is bounded, while its Frobenius norm can be unbounded. Also, we assume the noise σ ≥ 1 is sufficiently large, which is the more important case in practice. The case when σ is small can be tackled in a similar fashion. Finally, our studies avoid examining excessively unrealistic cases, so we assume $d_1 \le d_2^3$. We examine the setting where existing algorithms fail to deliver non-trivial MSE, so we assume that $n \le rd_1 \le d_2^4$.
3 Upper bound
Our algorithm (see Fig. 1) consists of two steps.
Step 1. Producing uncorrelated features. We run a PCA to obtain a total number of k1 orthogonalized features. See STEP-1-PCA-X in Fig. 1. Let the SVD of X be $X = U\Sigma V^\top$. Let k1 be a suitable rank chosen by inspecting the gaps of X's singular values (Line 5 in STEP-1-PCA-X). $\hat{Z}_+ = \sqrt{n}U_{k_1}$ is the set of transformed features output by this step. The subscript + in Ẑ+ reflects that a dimension reduction happens, so the number of columns in Ẑ+ is smaller than that in X. Compared to standard PCA dimension reduction, there are two differences: (i) We use the leading left singular vectors of X (with a re-scaling factor $\sqrt{n}$) as the output, whereas the PCA reduction outputs $P_{k_1}(X)$. (ii) We design a specialized rule to choose k1, whereas PCA usually uses hard thresholding or other ad-hoc rules.
Step 2. Matrix denoising. We run a second PCA on the matrix $(\hat{N}_+)^\top \triangleq \frac{1}{n}\hat{Z}_+^\top Y$. The rank k2 is chosen by a hard thresholding rule (Line 4 in STEP-2-PCA-DENOISE). Our final estimator is $P_{k_2}(\hat{N}_+)\hat{\Pi}$, where $\hat{\Pi} = (\Lambda_{k_1})^{-\frac12}V_{k_1}^\top$ is computed in STEP-1-PCA-X(X).
3.1 Intuition of the design
While the algorithm is operationally simple, its design is motivated by carefully unfolding the statistical structure of the problem. We shall realize that applying PCA on the features should not be viewed as removing noise from a factor model, or as finding subspaces that maximize the variation explained by the subspaces, as suggested in the standard literature [19, 40, 41]. Instead, it implicitly implements a robust estimator for x's precision matrix, and the design of the estimator needs to be coupled with our objective of forecasting y, thus resulting in a new way of choosing the rank.
Design motivation: warm up. We first examine a simplified problem y = Nz + ε, where the variables in z are assumed to be uncorrelated. Assume d = d1 = d2 in this simplified setting. Observe that
$\frac{1}{n}Z^\top Y = \frac{1}{n}Z^\top(ZN^\top + E) = \big(\tfrac{1}{n}Z^\top Z\big)N^\top + \frac{1}{n}Z^\top E \approx I_{d_1\times d_1}N^\top + \frac{1}{n}Z^\top E = N^\top + \mathcal{E}$, (1)
where E is the noise term and $\mathcal{E}$ can be approximated by a matrix with independent zero-mean noises.
Solving the matrix denoising problem. Eq. 1 implies that when we compute $Z^\top Y$, the problem reduces to an extensively studied matrix denoising problem [16, 20]. We include the intuition for solving this problem for completeness. The signal $N^\top$ is overlaid with a noise matrix $\mathcal{E}$. $\mathcal{E}$ will elevate all the singular values of $N^\top$ by an order of $\sigma\sqrt{d/n}$. We run a PCA to extract reliable signals: when the singular value of a subspace is $\gg \sigma\sqrt{d/n}$, the subspace contains significantly more signal than noise and thus we keep the subspace. Similarly, a subspace associated with a singular value $\lesssim \sigma\sqrt{d/n}$ mostly contains noise. This leads to a hard thresholding algorithm that sets $\hat{N}^\top = P_r(N^\top + \mathcal{E})$, where r is the maximum index such that $\sigma_r(N^\top + \mathcal{E}) \ge c\sqrt{d/n}$ for some constant c. In the general setting y = Mx + ε, x may not be uncorrelated. But when we set $z = (\Lambda^*)^{-\frac12}(V^*)^\top x$, we see that $\mathbb{E}[zz^\top] = I$. This means knowing C∗ suffices to reduce the original problem to a simplified one.
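Before returning to the general algorithm, the following small simulation (ours, with arbitrary illustrative dimensions) demonstrates the reduction in Eq. (1) and the hard-thresholding intuition: the singular values of the low-rank signal N survive well above the $\sigma\sqrt{d/n}$ noise level, while the remaining ones concentrate near it.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r, sigma = 2000, 50, 3, 1.0

# low-rank signal N with a few strong directions (illustrative scaling)
N = (rng.standard_normal((d, r)) @ rng.standard_normal((r, d))) / np.sqrt(d)
Z = rng.standard_normal((n, d))            # uncorrelated features, E[zz^T] = I
Y = Z @ N.T + sigma * rng.standard_normal((n, d))

N_hat = (Z.T @ Y / n).T                    # approximately N + noise, as in Eq. (1)
s = np.linalg.svd(N_hat, compute_uv=False)
noise_level = sigma * np.sqrt(d / n)
print("top singular values:", np.round(s[:6], 3))
print("noise level sigma*sqrt(d/n):", round(noise_level, 3))
print("kept rank:", int(np.sum(s >= 3 * noise_level)))  # hard thresholding, c = 3
```

With these dimensions the three signal directions sit an order of magnitude above the threshold, so the kept rank recovers r.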
Therefore, our algorithm uses Step 1 to estimate C∗ and Z, and uses Step 2 to reduce the problem to a matrix denoising one and solve it by standard thresholding techniques.
Relationship between PCA and precision matrix estimation. In Step 1, while we plan to estimate C∗, our algorithm runs a PCA on X. We observe that the empirical covariance matrix $C = \frac{1}{n}X^\top X = \frac{1}{n}V\Sigma^2 V^\top$, i.e., C's eigenvectors coincide with X's right singular vectors. When we use the empirical estimator to construct ẑ, we obtain $\hat{z} = \sqrt{n}\Sigma^{-1}V^\top x$. When we apply this map to every training point and assemble the new feature matrix, we exactly get $\hat{Z} = \sqrt{n}XV\Sigma^{-1} = \sqrt{n}U$. It means that using C to construct ẑ is the same as running a PCA in STEP-1-PCA-X with k1 = d1. (Fig. 2, omitted here, plots the angle matrix between the eigenvectors of C and those of C∗.) When C and C∗ are unrelated, the plot behaves like a block of white Gaussian noise. We observe a pronounced pattern: the angle matrix can be roughly divided into two sub-blocks (see the red lines in Fig. 2). The upper left sub-block behaves like an identity matrix, suggesting that the leading eigenvectors of C are close to those of C∗. The lower right block behaves like a white noise matrix, suggesting that the "small" eigenvectors of C are far from those of C∗. When n grows, one can observe that the upper left block becomes larger, and thus the eigenvectors of C sequentially stabilize. Leading eigenvectors are stabilized first, followed by the smaller ones. Our algorithm leverages this regularity by keeping only a suitable number of reliable eigenvectors from C, while ensuring that not much information is lost when we throw away the "small" eigenvectors.
Implementing the rank selection. We rely on three interacting building blocks:
1. Dimension-free matrix concentration. First, we need to find a concentration behavior of C for n ≤ d1 to decouple d1 from the MSE bound. We utilize a dimension-free matrix concentration inequality [32]. Roughly speaking, the concentration behaves as $\|C - C^*\|_2 \approx n^{-\frac12}$. This guarantees that $|\lambda_i(C) - \lambda_i(C^*)| \le n^{-\frac12}$ by standard matrix perturbation results [24].
2. Davis-Kahan perturbation result. However, the pairwise closeness of the λi's does not imply that the eigenvectors are also close. When λi(C∗) and λi+1(C∗) are close, the corresponding eigenvectors in C can be "jammed" together. Thus, we need to identify an index i at which λi(C∗) − λi+1(C∗) exhibits a significant gap, and use a Davis-Kahan result to show that Pi(C) is close to Pi(C∗). On the other hand, the map $\Pi^* (\triangleq (\Lambda^*)^{-\frac12}(V^*)^\top)$ we aim to find depends on the square root of the inverse, $(\Lambda^*)^{-\frac12}$, so we need additional manipulation to argue that our estimate is close to $(\Lambda^*)^{-\frac12}(V^*)^\top$.
3. The connection between gap and tail. Finally, the performance of our procedure is also characterized by the total volume of signals that are discarded, i.e., $\sum_{i > k_1}\lambda_i(C^*)$, where k1 is the location that exhibits the gap. The question becomes whether it is possible to identify a k1 that simultaneously exhibits a large gap and ensures the tail after it is well-controlled, e.g., the sum of the tail is $O(n^{-c})$ for a constant c. We develop a combinatorial analysis to show that it is always possible to find such a gap under the assumption that λi(C∗) is bounded by a power law distribution with exponent ω ≥ 2.
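The regularity behind Fig. 2 is easy to reproduce. The following self-contained simulation (ours; the dimensions and the 0.9 alignment threshold are arbitrary illustrative choices) draws samples from a power-law covariance and reports how many leading empirical eigenvectors align with the ground truth as n grows:

```python
import numpy as np

rng = np.random.default_rng(1)
d1 = 60
# ground-truth covariance with a power-law spectrum (as in Assumption 2.1)
lam_star = 1.0 / np.arange(1, d1 + 1) ** 2.0
Q = np.linalg.qr(rng.standard_normal((d1, d1)))[0]  # random eigenbasis V*
C_star = Q @ np.diag(lam_star) @ Q.T

for n in [100, 1000, 10000]:
    X = rng.multivariate_normal(np.zeros(d1), C_star, size=n)
    C = X.T @ X / n
    V = np.linalg.eigh(C)[1][:, ::-1]       # eigenvectors, decreasing eigenvalue order
    align = np.abs(np.sum(V * Q, axis=0))   # |cos angle| between i-th pairs
    stable = int(np.argmax(align < 0.9)) if np.any(align < 0.9) else d1
    print(n, "samples ->", stable, "leading eigenvectors stabilized")
```

As n grows, the count of stabilized leading eigenvectors grows, which is exactly the regularity the gap-based rank selection exploits.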
Combining all these three building blocks, we have:
Proposition 1. Let ξ and δ be two tunable parameters such that $\xi = \omega(\log^3 n/\sqrt{n})$ and $\delta^3 = \omega(\xi)$. Assume that $\lambda^*_i \le c \cdot i^{-\omega}$. Consider running STEP-1-PCA-X in Fig. 1. With high probability, we have (i) Leading eigenvectors/values are close: there exists a unitary matrix W and a constant c1 such that $\|V_{k_1}(\Lambda_{k_1})^{-\frac12} - V^*_{k_1}(\Lambda^*_{k_1})^{-\frac12}W\| \le \frac{c_1\xi}{\delta^3}$. (ii) Small tail: $\sum_{i \ge k_1}\lambda^*_i \le c_2\delta^{\frac{\omega-1}{\omega+1}}$ for a constant c2.
Prop. 1 implies that our estimate $\hat{z}_+ = \hat{\Pi}(x)$ is sufficiently close to $z = \Pi^*(x)$, up to a unitary transform. We then execute STEP-2-PCA-DENOISE to reduce the problem to a matrix denoising one and solve it by hard-thresholding. Let us refer to y = Nz + ε, where z is a standard multivariate Gaussian and $N = MV^*(\Lambda^*)^{\frac12}$, as the orthogonalized form of the problem. While we do not directly observe z, our performance is characterized by the spectral structure of N.
Theorem 1. Consider running ADAPTIVE-RRR in Fig. 1 on n independent samples (x, y) from the model y = Mx + ε, where x ∈ R^{d1} and y ∈ R^{d2}. Let $C^* = \mathbb{E}[xx^\top]$. Assume that (i) $\|M\|_2 \le \Upsilon = O(1)$; (ii) x is a multivariate Gaussian with $\mathbb{E}\|x\|_2^2 = 1$; in addition, λ1(C∗) < 1 and, for all i, $\lambda_i(C^*) \le c/i^{\omega}$ for a constant c; and (iii) $\varepsilon \sim N(0, \sigma^2 I_{d_2})$, where σ ≥ min{Υ, 1}. Let $\xi = \omega(\log^3 n/\sqrt{n})$, $\delta^3 = \omega(\xi)$, and θ be a suitably large constant. Let y = Nz + ε be the orthogonalized form of the problem. Let ℓ∗ be the largest index such that $\sigma^N_{\ell^*} > \theta\sigma\sqrt{\frac{d_2}{n}}$. Let ŷ be our testing forecast. With high probability over the training data:
$\mathbb{E}[\|\hat{y} - y\|_2^2] \le \sum_{i > \ell^*}(\sigma^N_i)^2 + O\Big(\frac{\ell^* d_2\theta^2\sigma^2}{n}\Big) + O\Big(\sqrt{\tfrac{\xi}{\delta^3}}\Big) + O\Big(\delta^{\frac{\omega-1}{4(\omega+1)}}\Big)$ (2)
The expectation is over the randomness of the test data. Theorem 1 also implies that there exists a way to parametrize ξ and δ such that $\mathbb{E}[\|\hat{y} - y\|_2^2] \le \sum_{i > \ell^*}(\sigma^N_i)^2 + O\big(\frac{\ell^* d_2\theta^2\sigma^2}{n}\big) + O(n^{-c_0})$ for some constant c0. We next interpret each term in (2). The terms $\sum_{i > \ell^*}(\sigma^N_i)^2 + O\big(\frac{\ell^* d_2\theta^2\sigma^2}{n}\big)$ are typical for solving a matrix denoising problem $\hat{N}_+^\top (\approx N^\top + \mathcal{E})$: we can extract the signals associated with the ℓ∗ leading singular vectors of N, so $\sum_{i > \ell^*}(\sigma^N_i)^2$ starts at i > ℓ∗. For each direction we extract, we need to pay a noise term of order $\frac{\theta^2\sigma^2 d_2}{n}$, leading to the term $O\big(\frac{\ell^* d_2\theta^2\sigma^2}{n}\big)$. The terms $O\big(\sqrt{\xi/\delta^3}\big) + O\big(\delta^{\frac{\omega-1}{4(\omega+1)}}\big)$ come from the estimation error of $\hat{z}_+$ characterized in Prop. 1, consisting of both the estimation error of C∗'s leading eigenvectors and the error of cutting out a tail. We pay an exponent of 1/4 on both terms (e.g., $\delta^{\frac{\omega-1}{\omega+1}}$ in Prop. 1 becomes $\delta^{\frac{\omega-1}{4(\omega+1)}}$) because we used Cauchy-Schwarz (CS) twice: once for running the matrix denoising algorithm with the inaccurate ẑ+, and once to bound the impact of cutting a tail. It remains open whether the two applications of CS can be circumvented. Sec. 4 explains how Thm 1 and the lower bound imply that the algorithm is near-optimal. Sec. 5 compares our result with existing ones under other parametrizations, e.g., rank(M).
4 Lower bound
Our algorithm accurately estimates the singular vectors of N that correspond to singular values above the threshold $\tau = \theta\sigma\sqrt{\frac{d_2}{n}}$. However, it may well happen that most of the spectral 'mass' of N lies only slightly below this threshold τ. In this section, we establish that no algorithm can do better than us in a bi-criteria sense, i.e., we show that any algorithm that has a slightly smaller sample than ours can only minimally outperform ours in terms of MSE. We establish 'instance dependent' lower bounds: when there is more 'spectral mass' below the threshold, the performance of our algorithm will be worse, and we will need to establish that no algorithm can do much better.
This departs from the standard minimax framework, in which one examines the entire parameter space of N , e.g. all rank r matrices, and produces a large set of statistically indistinguishable ‘bad’ instances [43]. These lower bounds are not sensitive to instancespecific quantities such as the spectrum of N , and in particular, if prior knowledge suggests that the unknown parameter N is far from these bad instances, the minimax lower bound cannot be applied. We introduce the notion of local minimax. We partition the space into parts so that similar matrices are together. Similar matrices are those N that have the same singular values and right singular vectors; we establish strong lower bounds even against algorithms that know the singular values and right singular vectors of N . An equivalent view is to assume that the algorithm has oracle access to C∗, M ’s singular values, and M ’s right singular vectors. This algorithm can solve the orthogonalized form as N ’s singular values and right singular vectors can easily be deduced. Thus, the only reason why the algorithm needs data is to learn the left singular vectors of N . The lower bound we establish is the minimax bound for this ‘unfair’ comparison, where the competing algorithm is given more information. In fact, this can be reduced further, i.e., even if the algorithm ‘knows’ that the left singular vectors of N are sparse, identifying the locations of the non-zero entries is the key difficulty that leads to the lower bound. Definition 1 (Local minimax bound). Consider a model y = Mx + , where x is a random vector, so C∗(x) = IE[xxT] represents the co-variance matrix of the data distribution, and M = UMΣM (VM )T. The relation (M,x) ∼ (M ′,x′)⇔ (ΣM = ΣM ′∧VM = VM ′∧C∗(x) = C∗(x′)) is an equivalence relation and let the equivalence class of (M,x) beR(M,x) = {(M ′,x′) : ΣM ′ = ΣM , VM ′ = VM , and C∗(x′) = C∗(x)}. The local minimax bound for y = Mx + with n independent samples and ∼ N(0, σ2 Id2×d2) is r(x,M, n, σ ) = min M̂ max (M ′,x′)∈R(M,x) E X,Y from y∼M′x′+ [ IEx′ [‖M̂(X,Y)x′ −M ′x′‖22 | X,Y] ] . (3) It is worth interpreting (3) in some detail. For any two (M,x), (M ′,x′) inR(M,x), the algorithm has the same ‘prior knowledge’, so it can only distinguish between the two instances by using the observed data, in particular M̂ is a function only of X and Y, and we denote it as M̂(X,Y) to emphasize this. Thus, we can evaluate the performance of M̂ by looking at the worst possible (M ′,x′) and considering the MSE IE‖M̂(X,Y)x′ −M ′x′‖2. Proposition 2. Consider the problem y = Mx + with normalized form y = Nz + . Let ξ be a sufficient small constant. There exists a sufficiently small constant ρ0 (that depends on ξ) and a constant c such that for any ρ ≤ ρ0, r(x,M, n, σ ) ≥ (1− cρ 1 2−ξ) ∑ i≥t(σ N i ) 2 −O ( ρ 1 2 −ξ dω−12 ) , where t is the smallest index such that σNt ≤ ρσ √ d2 n . Proposition 2 gives the lower bound on the MSE in expectation; it can be turned into a high probability result with suitable modifications. The proof of the lower bound uses a similar ‘trick’ to the one used in the analysis of the upper bound analysis to cut the tail. This results in an additional term O ( ρ 1 2 −ξ dω−12 ) which is generally smaller than the n−c0 tail term in Theorem 1 and does not dominate the gap. Gap requirement and bi-criteria approximation algorithms. Let τ = σ √ d2 n . 
Theorem 1 asserts that any signal above the threshold θτ can be detected, i.e., the MSE is at most ∑ σNi >θτ σ2i (N) (plus inevitable noise), whereas Proposition 2 asserts that any signal below the threshold ρτ cannot be detected, i.e., the MSE is approximately at least ∑ σNi ≥ρτ (1− poly(ρ))σ2i (N). There is a ‘gap’ between θτ and ρτ , as θ > 1 and ρ < 1. See Fig. 3(a). This kind of gap is inevitable because both bounds are ‘high probability’ statements. This gap phenomenon appears naturally when the sample size is small as can be illustrated by this simple example. Consider the problem of estimating µ when we see one sample from N(µ, σ2). Roughly speaking, when µ σ, the estimation is feasible, and whereas µ σ, the estimation is impossible. For the region µ ≈ σ, algorithms fail with constant probability and we cannot prove a high probability lower bound either. While many of the signals can ‘hide’ in the gap, the inability to detect signals in the gap is a transient phenomenon. When the number of samples n is modestly increased, our detection threshold τ = θσ √ d2 n shrinks, and this hidden signal can be fully recovered. This observation naturally leads to a notion of bi-criteria optimization that frequently arises in approximation algorithms. Definition 2. An algorithm for solving the y = Mx + problem is (α, β)-optimal if, when given an i.i.d. sample of size αn as input, it outputs an estimator whose MSE is at most β worse than the local minimax bound, i.e., IE[‖ŷ − y‖22] ≤ r(x,M, n, σ ) + β. Corollary 1. Let ξ and c0 be small constants and ρ be a tunable parameter. Our algorithm is (α, β)-optimal for α = θ 2 ρ 5 2 and β = O(ρ 1 2−ξ)‖Mx‖22 +O(n−c0) The error term β consists of ρ 1 2− ‖Mx‖22 that is directly characterized by the signal strength and an additive term O(n−c0) = o(1). Assuming that ‖Mx‖ = Ω(1), i.e., the signal is not too weak, the term β becomes a single multiplicative bound O(ρ 1 2−ξ + n−c0)‖Mx‖22. This gives an easily interpretable result. For example, when our data size is n log n, the performance gap between our algorithm and any algorithm that uses n samples is at most o(‖Mx‖22). The improvement is significant when other baselines deliver MSE in the additive form that could be larger than ‖Mx‖22 in the regime n ≤ d1. Preview of techniques. Let N = UNΣN (V N )T be the instance (in orthogonalized form). Our goal is to construct a collection N = {N1, . . . , NK} of K matrices so that (i) For any Ni ∈ N , ΣNi = ΣN and V Ni = V N . (ii) For any two Ni, Nj ∈ N , ‖N − N ′‖F is large, and (iii) K = exp(Ω(poly(ρ)d2)) (cf. [43, Chap. 2]) Condition (i) ensures that it suffices to construct unitary matrices UNi ’s for N , and that the resulting instances will be in the same equivalence class. Conditions (ii) and (iii) resemble standard construction of codes in information theory: we need a large ‘code rate’, corresponding to requiring a large K as well as large distances between codewords, corresponding to requiring that ‖Ui − Uj‖F be large. Standard approaches for constructing such collections run into difficulties. Getting a sufficiently tight concentration bound on the distance between two random unitary matrices is difficult as the matrix entries, by necessity, are correlated. On the other hand, starting with a large collection of random unit vectors and using its Cartesian product to build matrices does not necessarily yield unitary matrices. We design a two-stage approach to decouple condition (iii) from (i) and (ii) by only generating sparse matrices UNi . See Fig. 3(b)-(d). 
In the first stage (Steps 1 & 2 in Fig. 3(b)-(c)), we only specify the non-zero positions (the sparsity pattern) of each $U^{N_i}$. It suffices to guarantee that the sparsity patterns of the matrices $U^{N_i}$ and $U^{N_j}$ have little overlap. The existence of such objects can easily be proved using the probabilistic method, so in the first stage we can build up a large number of sparsity patterns. In the second stage (Step 3 in Fig. 3(d)), we carefully fill in values at the non-zero positions of each $U^{N_i}$. When the number of non-zero entries is not too small, satisfying the unitary constraint is feasible. As the overlap between the sparsity patterns of any two matrices is small, we can argue that the distance between them is large. By carefully trading off the number of non-zero positions and the portion of overlap, we can simultaneously satisfy all three conditions.

5 Related work and comparison

In this section, we compare our results to other regression algorithms that place low rank constraints on $M$. Most existing MSE results are parametrized by the rank or spectral properties of $M$; e.g., [30] defined a generalized notion of rank, $B_q(R_q^A) = \{A \in \mathbb{R}^{d_2 \times d_1} : \sum_{i=1}^{d_2} |\sigma_i^A|^q \leq R_q\}$, where $q \in [0, 1]$ and $A \in \{N, M\}$; i.e., $R_q^N$ characterizes the generalized rank of $N$ whereas $R_q^M$ characterizes that of $M$. When $q = 0$, $R_q^N = R_q^M$ is the rank of $N$, because $\mathrm{rank}(N) = \mathrm{rank}(M)$ in our setting. In their setting, the MSE is parametrized by $R_q^M$ and is shown to be $O\big(R_q^M \big(\frac{\sigma^2 \lambda_1^* (d_1 + d_2)}{(\lambda_{\min}^*)^2 n}\big)^{1 - q/2}\big)$. In the special case when $q = 0$, this reduces to $O\big(\frac{\sigma^2 \lambda_1^* \mathrm{rank}(M)(d_1 + d_2)}{(\lambda_{\min}^*)^2 n}\big)$. On the other hand, the MSE in our case (cf. Thm. 1) is bounded by $\mathrm{IE}[\|\hat{y} - y\|_2^2] = O\big(R_q^N \big(\frac{\sigma^2 d_2}{n}\big)^{1 - q/2} + n^{-c_0}\big)$. When $q = 0$, this becomes $O\big(\frac{\sigma^2 \mathrm{rank}(M)\, d_2}{n} + n^{-c_0}\big)$. The improvement here is twofold. First, our bound is directly characterized by $N$ in orthogonalized form, whereas the result of [30] needs to examine the interaction between $M$ and $C^*$, so their MSE depends on both $R_q^M$ and $\lambda_{\min}^*$. Second, our bound no longer depends on $d_1$ and pays only an additive factor $n^{-c_0}$; thus, when $n < d_1$, our result is significantly better.

Other works have different parameters in their upper bounds, but all of these existing results require $n > d_1$ to obtain nontrivial upper bounds [9, 12, 26]. Unlike these prior works, we require a stochastic assumption on X (the rows are i.i.d.) to ensure that the model is identifiable when $n < d_1$: without it, e.g., there could be two sets of disjoint features that fit the training data equally well. Our algorithm produces an adaptive model whose complexity is controlled by $k_1$ and $k_2$, which are adjusted dynamically depending on the sample size and noise level. [9] and [12] also point out the need for adaptivity; however, they still require $n > d_1$ and make some strong assumptions. For instance, [9] assumes that there is a gap between $\sigma_i(XM^T)$ and $\sigma_{i+1}(XM^T)$ for some $i$. In comparison, our sufficient condition, the decay of $\lambda_i^*$, is more natural. Our work is not directly comparable to standard variable selection techniques such as LASSO [42] because they handle univariate $y$. Column selection algorithms [15] generalize variable selection methods to vector responses, but they cannot address the identifiability concern.

6 Experiments

We apply our algorithm to an equity market dataset and a social network dataset to predict equity returns and user popularity, respectively.
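For concreteness, here is a minimal NumPy sketch of the two-SVD ADAPTIVE-RRR procedure summarized in Fig. 1. It is illustrative only: the gap parameter δ, the threshold constant θ, and the noise level σ are treated as given inputs, and the fallback to k1 = 1 when no δ-gap is found is our simplification rather than part of the paper's specification.

```python
import numpy as np

def adaptive_rrr(X, Y, delta, theta, sigma):
    """Minimal sketch of ADAPTIVE-RRR (two SVDs).
    X: n x d1 feature matrix, Y: n x d2 response matrix."""
    n = X.shape[0]

    # Stage 1 (STEP-1-PCA-X): gap thresholding on the spectrum of X.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    lam = S ** 2 / n
    gaps = lam[:-1] - lam[1:]
    idx = np.nonzero(gaps >= delta)[0]
    k1 = int(idx.max()) + 1 if idx.size else 1   # fallback is our simplification
    Pi = np.diag(lam[:k1] ** -0.5) @ Vt[:k1]     # Pi-hat = Lambda_{k1}^{-1/2} V_{k1}^T
    Z = np.sqrt(n) * U[:, :k1]                   # Z-hat_+ = X Pi-hat^T

    # Stage 2 (STEP-2-PCA-DENOISE): hard thresholding on N-hat_+^T = Z^T Y / n.
    Nt = Z.T @ Y / n                             # shape k1 x d2
    d2 = Y.shape[1]
    U2, S2, Vt2 = np.linalg.svd(Nt, full_matrices=False)
    k2 = int(np.sum(S2 >= theta * sigma * np.sqrt(d2 / n)))
    Nt_k2 = (U2[:, :k2] * S2[:k2]) @ Vt2[:k2]
    return Nt_k2.T @ Pi                          # M-hat = P_{k2}(N-hat_+) Pi-hat
```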
Our baselines include ridge regression (“Ridge”), reduced rank ridge regression [29] (“Reduced ridge”), LASSO (“Lasso”), nuclear norm regularized regression (“Nuclear norm”), reduced rank regression [45] (“RRR”), and principal component regression [1] (“PCR”).

Predicting equity returns. We use a stock market dataset from an emerging market that consists of approximately 3600 stocks between 2011 and 2018. We focus on predicting the next 5-day returns. For each asset in the universe, we compute its past 1-day, past 5-day, and past 10-day returns as features. We use a standard approach to translate forecasts into positions [4, 47]. We examine two universes in this market: (i) Universe 1, the market's equivalent of the S&P 500, consists of 983 stocks, and (ii) the Full universe consists of all stocks except for illiquid ones.

Results. Table 1 (left) reports the forecasting power and portfolio return for out-of-sample periods in the Full universe (see our full version for Universe 1). We observe that (i) the data has a low signal-to-noise ratio: the out-of-sample R² values of all the methods are close to 0; (ii) ADAPTIVE-RRR has the highest forecasting power; and (iii) ADAPTIVE-RRR has the smallest gap between in-sample and out-of-sample performance (see column out − in), suggesting that our model is better at avoiding spurious signals.

Predicting user popularity in social networks. We collected tweet data on political topics from Oct. 2016 to Dec. 2017. Our goal is to predict a user’s next 1-day popularity, defined as the sum of retweets, quotes, and replies received by the user. There are a total of 19 million distinct users; due to the huge size, we extract the subset of 2000 users with the most interactions for evaluation. For each user in the 2000-user set, we use its past 5 days’ popularity as features. We further randomly sample 200 users and make predictions for them, i.e., we set d2 = 200 so that d2 is of the same magnitude as n.

Results. We repeat the random sampling of users 10 times and report the average MSE and correlation (with standard deviations) for both in-sample and out-of-sample data (see the full version for more results). In Table 1 (right) we see results consistent with the equity returns experiment: (i) ADAPTIVE-RRR yields the best out-of-sample MSE and correlation; (ii) ADAPTIVE-RRR achieves the best generalization by having a much smaller gap between training and test metrics.

7 Conclusion

This paper examines the low-rank regression problem in the high-dimensional setting. We design the first learning algorithm with provable statistical guarantees under a mild condition on the features’ covariance matrix. Our algorithm is simple and computationally more efficient than low rank methods based on optimizing nuclear norms. Our theoretical analyses of the upper and lower bounds may be of independent interest. Our preliminary experimental results demonstrate the efficacy of our algorithm. The full version explains why our algorithmic result is unlikely to be known or trivial.

Broader Impact

The main contribution of this work is theoretical. Productionizing the downstream applications stated in the paper would take six months or more, so there is no immediate societal impact from this project.

Acknowledgement

We thank anonymous reviewers for helpful comments and suggestions. Varun Kanade is supported in part by the Alan Turing Institute under the EPSRC grant EP/N510129/1. Yanhua Li was supported in part by NSF grants IIS-1942680 (CAREER), CNS-1952085, CMMI-1831140, and DGE-2021871.
Qiong Wu and Zhenming Liu are supported by NSF grants NSF-2008557, NSF-1835821, and NSF-1755769. The authors acknowledge William & Mary Research Computing for providing computational resources and technical support that have contributed to the results reported within this paper.
1. What is the focus of the paper in terms of the problem it addresses?
2. What is the contribution of the paper, particularly in terms of the proposed algorithm and its efficiency?
3. What are the strengths of the paper regarding its theoretical analysis and experimental performance?
4. What are the weaknesses of the paper, specifically concerning the assumption made about the covariance matrix?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions: This paper studies the low-rank regression problem in a high-dimensional setting. It provides an efficient algorithm with small mean squared error under an eigenvalue decay assumption about the covariance matrix of the features.

Strengths: The paper provides a simple two-stage algorithm with provable guarantees under a power-law eigenvalue decay assumption. The algorithm is natural and seems to perform well experimentally. The theoretical claims appear to be sound.

Weaknesses: It is not clear to me how restrictive the assumption on the covariance matrix is.
NIPS
1. What is the focus and contribution of the paper on low rank regression?
2. What are the strengths of the proposed approach, particularly in terms of its simplicity and theoretical analysis?
3. What are the weaknesses of the paper, especially regarding the assumptions made?
4. Do you have any concerns or questions about the methodology used in the paper?
5. Can you provide any suggestions or alternative approaches that could be explored in future works?
Summary and Contributions: This paper studies the low rank regression problem: given observed responses Y in R^{d_2 x n} and features X in R^{d_1 x n}, the goal is to recover a low rank matrix M such that y = Mx + eps. Unlike the previous literature, this paper gives the first result with a theoretical guarantee when n << d_1, i.e., when the number of observations is much less than the dimension of the features x.

Strengths: This paper introduces a simple and clean algorithm that is easy to implement in practice. The analysis of the error bound is not trivial, and the authors give several new views of the PCA method. In particular, they give a clever way to first choose a rank k_1 and reduce the features to a rank-k_1 subspace, and they show that the samples give a good estimate of these features. The authors then apply a relatively standard denoising step to find a rank-k_2 result.

Weaknesses: My first concern is that the authors assume the feature vector x is multivariate Gaussian, which is usually not the case in practice, and I am not sure whether this is a standard assumption in the literature. Though the authors claim in line 86 that they can relax this assumption for most of the results, it is not clear to me how the extension works. My second concern is that, since n << d_1, I am curious why the following simpler algorithm would not work: compute a rank-k PCA Y' of Y, where k is a tunable parameter, and then solve the equation Y' = MX to find M. Notice that since n << d_1, a solution of this equation exists. I would appreciate more intuition from the authors on why this simple denoising approach for Y does not work.
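To make the reviewer's question concrete, here is a minimal sketch of the baseline they describe. The function name, the minimum-norm choice via the pseudoinverse, and the row-wise data convention (X is n x d1, Y is n x d2, as in the paper) are our assumptions for illustration, not the reviewer's specification.

```python
import numpy as np

def pca_then_solve(X, Y, k):
    """Reviewer-suggested baseline (sketch): denoise Y with a rank-k PCA,
    then take the minimum-norm M solving X M^T = Y_k, which exists
    because n << d1 makes the linear system underdetermined."""
    U, S, Vt = np.linalg.svd(Y, full_matrices=False)
    Y_k = (U[:, :k] * S[:k]) @ Vt[:k]   # rank-k approximation of Y
    M_T = np.linalg.pinv(X) @ Y_k       # min-norm solution, shape d1 x d2
    return M_T.T                        # M-hat, shape d2 x d1
```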
Title Adaptive Reduced Rank Regression Abstract We study the low rank regression problem y = Mx + , where x and y are d1 and d2 dimensional vectors respectively. We consider the extreme high-dimensional setting where the number of observations n is less than d1+d2. Existing algorithms are designed for settings where n is typically as large as rank(M)(d1 + d2). This work provides an efficient algorithm which only involves two SVD, and establishes statistical guarantees on its performance. The algorithm decouples the problem by first estimating the precision matrix of the features, and then solving the matrix denoising problem. To complement the upper bound, we introduce new techniques for establishing lower bounds on the performance of any algorithm for this problem. Our preliminary experiments confirm that our algorithm often out-performs existing baselines, and is always at least competitive. 1 Introduction We consider the regression problem y = Mx + in the high dimensional setting, where x ∈ Rd1 is the vector of features, y ∈ Rd2 is a vector of responses, M ∈ Rd2×d1 are the learnable parameters, and ∼ N(0, σ2 Id2×d2) is a noise term. High-dimensional setting refers to the case where the number of observations n is insufficient for recovery and hence regularization for estimation is necessary [26, 30, 12]. This high-dimensional model is widely used in practice, such as identifying biomarkers [48], understanding risks associated with various diseases [18, 7], image recognition [34, 17], forecasting equity returns in financial markets [33, 39, 28, 8], and analyzing social networks [46, 35]. We consider the “large feature size” setting, in which the number of features d1 is excessively large and can be even larger than the number of observations n. This setting frequently arises in practice because it is often straightforward to perform feature-engineering and produce a large number of potentially useful features in many machine learning problems. For example, in a typical equity forecasting model, n is around 3,000 (i.e., using 10 years of market data), whereas the number of potentially relevant features can be in the order of thousands [33, 22, 25, 13]. In predicting the popularity of a user in an online social network, n is in the order of hundreds (each day is an observation and a typical dataset contains less than three years of data) whereas the feature size can easily be more than 10k [36, 6, 38]. Existing low-rank regularization techniques (e.g., [3, 23, 26, 30, 27] ) are not optimized for the large feature size setting. These results assume that either the features possess the so-called restricted isometry property [10], or their covariance matrix can be accurately estimated [30]. Therefore, their sample complexity n depends on either d1 or the smallest eigenvalue value λmin of x’s covariance matrix. For example, a mean-squared error (MSE) result that appeared in [30] is of the form ∗ Correspondence to: Qiong Wu <[email protected]>. † Currently at Google. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. O ( r(d1+d2) nλ2min ) . When n ≤ d1/λ2min, this result becomes trivial because the forecast ŷ = 0 produces a comparable MSE. We design an efficient algorithm for the large feature size setting. Our algorithm is a simple two-stage algorithm. Let X ∈ Rn×d1 be a matrix that stacks together all features and Y ∈ Rn×d2 be the one that stacks the responses. 
In the first stage, we run a principal component analysis (PCA) on X to obtain a set of uncorrelated features Ẑ. In the second stage, we run another PCA to obtain a low rank approximation of ẐTY and use it to construct an output. While the algorithm is operationally simple, we show a powerful and generic result on using PCA to process features, a widely used practice for “dimensionality reduction” [11, 21, 19]. PCA is known to be effective to orthogonalize features by keeping only the subspace explaining large variations. But its performance can only be analyzed under the so-called factor model [40, 39]. We show the efficacy of PCA without the factor model assumption. Instead, PCA should be interpreted as a robust estimator of x’s covariance matrix. The empirical estimator C = 1nXX T in the high-dimensional setting cannot be directly used because n d1 × d2, but it exhibits an interesting regularity: the leading eigenvectors of C are closer to ground truth than the remaining ones. In addition, the number of reliable eigenvectors grows as the sample size grows, so our PCA procedure projects the features along reliable eigenvectors and dynamically adjusts Ẑ’s rank to maximally utilize the raw features. Under mild conditions on the ground-truth covariance matrix C∗ of x, we show that it is always possible to decompose x into a set of near-independent features and a set of (discarded) features that have an inconsequential impact on a model’s MSE. When features x are transformed into uncorrelated ones z, our original problem becomes y = Nz+ , which can be reduced to a matrix denoising problem [16] and be solved by the second stage. Our algorithm guarantees that we can recover all singular vectors of N whose associated singular values are larger than a certain threshold τ . The performance guarantee can be translated into MSE bounds parametrized by commonly used variables (though, these translations usually lead to looser bounds). For example, when N ’s rank is r, our result reduces the MSE from O( r(d1+d2) nλ2min ) to O( rd2n + n −c) for a suitably small constant c. The improvement is most pronounced when n d1. We also provide a new matching lower bound. Our lower bound asserts that no algorithm can recover a fraction of singular vectors of N whose associated singular values are smaller than ρτ , where ρ is a “gap parameter”. Our lower bound contribution is twofold. First, we introduce a notion of “local minimax”, which enables us to define a lower bound parametrized by the singular values of N . This is a stronger lower bound than those delivered by the standard minimax framework, which are often parametrized by the rank r of N [26]. Second, we develop a new probabilistic technique for establishing lower bounds under the new local minimax framework. Roughly speaking, our techniques assemble a large collection of matrices that share the same singular values of N but are far from each other, so no algorithm can successfully distinguish these matrices with identical spectra. 2 Preliminaries Notation. Let X ∈ Rn×d1 and Y ∈ Rn×d2 be data matrices with their i-th rows representing the i-th observation. For matrix A, we denote its singular value decomposition as A = UAΣA(V A)T and Pr(A) , UAr Σ A r V A r T is the rank r approximation obtained by keeping the top r singular values and the corresponding singular vectors. When the context is clear, we drop the superscript A and use U,Σ, and V (Ur, Σr, and Vr) instead. Both σi(A) and σAi are used to refer to i-th singular value of A. 
We use MATLAB notation when we refer to a specific row or column, e.g., V1,: is the first row of V and V:,1 is the first column. ‖A‖F , ‖A‖2, and ‖A‖∗ are Frobenius, spectral, and nuclear norms of A. In general, we use boldface upper case (e.g., X) to denote data matrices and boldface lower case (e.g., x) to denote one sample. Regular fonts denote other matrices. Let C∗ = IE[xxT] and C = 1nX TX be the empirical estimate of C∗. Let C∗ = V ∗Λ∗(V ∗)T be the eigen-decomposition of the matrix C∗, and λ∗1 ≥ λ∗2, . . . ,≥ λ∗d1 ≥ 0 be the diagonal entries of Λ ∗. Let {u1,u2, . . .u`} be an arbitrary set of column vectors, and Span({u1,u2, . . . ,u`}) be the subspace spanned by it. An event happens with high probability means that it happens with probability ≥ 1− n−5, where 5 is an arbitrarily chosen large constant and is not optimized. Our model. We consider the model y = Mx + , where x ∈ Rd1 is a multivariate Gaussian, y ∈ Rd2 , M ∈ Rd2×d1 , and ∼ N(0, σ2 Id2×d2). We can relax the Gaussian assumptions on x and STEP-1-PCA-X(X) 1 [U,Σ, V ] = svd(X) 2 Λ = 1n (Σ 2); λi = Λi,i. 3 Gap thresholding. 4 δ = n−O(1) is a tunable parameter. 5 k1 = max{k1 : λk1 − λk1+1 ≥ δ}, 6 Λk1 : diagonal matrix comprised of {λi}i≤k1 . 7 Uk1 , Vk1 : k1 leading columns of U and V . 8 Π̂ = (Λk1) − 12V Tk1 9 Ẑ+ = √ nUk1(= XΠ̂ T). 10 return {Ẑ+, Π̂}. STEP-2-PCA-DENOISE(Ẑ+,Y) 1 N̂T+ ← 1n Ẑ T +Y. 2 Absolute value thresholding. 3 θ is a suitable constant; σ is std. of the noise. 4 k2 = max { k2 : σk2(N̂+) ≥ θσ √ d2 n } . 5 return Pk2(N̂+) ADAPTIVE-RRR(X,Y) 1 [Ẑ+, Π̂] = STEP-1-PCA-A(X). 2 Pk2(N̂+) = STEP-2-PCA-DENOISE(Ẑ+,Y). 3 return M̂ = Pk2(N̂+)Π̂ Figure 1: Our algorithm (ADAPTIVE-RRR) for solving the regression y =Mx+ . for most results we develop. We assume a PAC learning framework, i.e., we observe a sequence {(xi,yi)}i≤n of independent samples and our goal is to find an M̂ that minimizes the test error IEx,y[‖M̂x−Mx‖22]. We are specifically interested in the setting in which d2 ≈ n ≤ d1. The key assumption we make to circumvent the d1 ≥ n issue is that the features are correlated. This assumption can be justified for the following reasons: (i) In practice, it is difficult, if not impossible, to construct completely uncorrelated features. (ii) When n d1, it is not even possible to test whether the features are uncorrelated [5]. (iii) When we indeed know that the features are independent, there are significantly simpler methods to design models. For example, we can build multiple models such that each model regresses on an individual feature of x, and then use a boosting/bagging method [19, 37] to consolidate the predictions. The correlatedness assumption implies that the eigenvalues of C∗ decays. The only (full rank) positive semidefinite matrices that have non-decaying (uniform) eigenvalues are the identity matrix (up to some scaling). In other words, when C∗ has uniform eigenvalues, x has to be uncorrelated. We aim to design an algorithm that works even when the decay is slow, such as when λi(C∗) has a heavy tail. Specifically, our algorithm assumes λi’s are bounded by a heavy-tail power law series: Assumption 2.1. The λi(C∗) series satisfies λi(C∗) ≤ c · i−ω for a constant c and ω ≥ 2. We do not make functional form assumptions on λi’s. This assumption also covers many benign cases, such as when C∗ has low rank or its eigenvalues decay exponentially. Many empirical studies report power law distributions of data covariance matrices [2, 31, 44, 14]. Next, we make standard normalization assumptions. 
IE‖x‖22 = 1, ‖M‖2 ≤ Υ = O(1), and σ ≥ 1. Remark that we assume only the spectral norm of M is bounded, while its Frobenius norm can be unbounded. Also, we assume the noise σ ≥ 1 is sufficiently large, which is more important in practice. The case when σ is small can be tackled in a similar fashion. Finally, our studies avoid examining excessively unrealistic cases, so we assume d1 ≤ d32. We examine the setting where existing algorithms fail to deliver non-trivial MSE, so we assume that n ≤ rd1 ≤ d42. 3 Upper bound Our algorithm (see Fig. 1) consists of two steps. Step 1. Producing uncorrelated features. We run a PCA to obtain a total number of k1 orthogonalized features. See STEP-1-PCA-X in Fig. 1. Let the SVD of X be X = UΣ(V )T. Let k1 be a suitable rank chosen by inspecting the gaps of X’s singular values (Line 5 in STEP-1-PCA-X). Ẑ+ = √ nUk1 is the set of transformed features output by this step. The subscript + in Ẑ+ reflects that a dimension reduction happens so the number of columns in Ẑ+ is smaller than that in X. Compared to standard PCA dimension reduction, there are two differences: (i) We use the left leading singular vectors of X (with a re-scaling factor √ n) as the output, whereas the PCA reduction outputs Pk1(X). (ii) We design a specialized rule to choose k1 whereas PCA usually uses a hard thresholding or other ad-hoc rules. Step 2. Matrix denoising. We run a second PCA on the matrix (N̂+)T , 1n Ẑ T +Y. The rank k2 is chosen by a hard thresholding rule (Line 4 in STEP-2-PCA-DENOISE). Our final estimator is Pk2(N̂+)Π̂, where Π̂ = (Λk1) − 12V Tk1 is computed in STEP-1-PCA-X(X). 3.1 Intuition of the design While the algorithm is operationally simple, its design is motivated by carefully unfolding the statistical structure of the problem. We shall realize that applying PCA on the features should not be viewed as removing noise from a factor model, or finding subspaces that maximize variations explained by the subspaces as suggested in the standard literature [19, 40, 41]. Instead, it implicitly implements a robust estimator for x’s precision matrix, and the design of the estimator needs to be coupled with our objective of forecasting y, thus resulting in a new way of choosing the rank. Design motivation: warm up. We first examine a simplified problem y = Nz+ , where variables in z are assumed to be uncorrelated. Assume d = d1 = d2 in this simplified setting. Observe that 1 n ZTY = 1 n ZT(ZNT + E) = ( 1 n ZTZ)NT + 1 n ZTE ≈ Id1×d1NT + 1 n ZTE = NT + E , (1) where E is the noise term and E can be approximated by a matrix with independent zero-mean noises. Solving the matrix denoising problem. Eq. 1 implies that when we compute ZTY, the problem reduces to an extensively studied matrix denoising problem [16, 20]. We include the intuition for solving this problem for completeness. The signalNT is overlaid with a noise matrix E . E will elevate all the singular values of NT by an order of σ √ d/n. We run a PCA to extract reliable signals: when the singular value of a subspace is σ √ d/n, the subspace contains significantly more signal than noise and thus we keep the subspace. Similarly, a subspace associated a singular value . σ √ d/n mostly contains noise. This leads to a hard thresholding algorithm that sets N̂T = Pr(NT + E), where r is the maximum index such that σr(NT + E) ≥ c √ d/n for some constant c. In the general setting y = Mx + , x may not be uncorrelated. But when we set z = (Λ∗)− 1 2 (V ∗)Tx, we see that IE[zzT] = I . 
In the general setting y = Mx + ε, x may not be uncorrelated. But when we set z = (Λ*)^{-1/2}(V*)ᵀx, we see that 𝔼[zzᵀ] = I. This means knowing C* suffices to reduce the original problem to the simplified one. Therefore, our algorithm uses Step 1 to estimate C* and Z, and uses Step 2 to reduce the problem to a matrix denoising one and solve it by standard thresholding techniques.

Relationship between PCA and precision matrix estimation. In Step 1, while we plan to estimate C*, our algorithm runs a PCA on X. We observe that the empirical covariance matrix satisfies C = (1/n)·XᵀX = (1/n)·VΣ²Vᵀ, i.e., C's eigenvectors coincide with X's right singular vectors. When we use the empirical estimator to construct ẑ, we obtain ẑ = √n·Σ^{-1}Vᵀx. When we apply this map to every training point and assemble the new feature matrix, we get exactly Ẑ = √n·XVΣ^{-1} = √n·U. It means that using C to construct ẑ is the same as running STEP-1-PCA-X with k_1 = d_1.

Figure 2 (omitted here) plots the "angle matrix" between the eigenvectors of C and those of C*; when C and C* are unrelated, this plot behaves like a block of white Gaussian noise. We observe a pronounced pattern: the angle matrix can be roughly divided into two sub-blocks (see the red lines in Fig. 2). The upper left sub-block behaves like an identity matrix, suggesting that the leading eigenvectors of C are close to those of C*. The lower right block behaves like a white noise matrix, suggesting that the "small" eigenvectors of C are far from those of C*. When n grows, the upper left block becomes larger, and thus the eigenvectors of C sequentially get stabilized: leading eigenvectors are stabilized first, followed by the smaller ones. Our algorithm leverages this regularity by keeping only a suitable number of reliable eigenvectors from C, while ensuring that not much information is lost when we throw away the "small" eigenvectors.

Implementing the rank selection. We rely on three interacting building blocks:

1. Dimension-free matrix concentration. First, we need to find a concentration behavior of C for n ≤ d_1 to decouple d_1 from the MSE bound. We utilize a dimension-free matrix concentration inequality [32]. Roughly speaking, the concentration behaves as ‖C − C*‖_2 ≈ n^{-1/2}. This guarantees that |λ_i(C) − λ_i(C*)| ≤ n^{-1/2} by standard matrix perturbation results [24].

2. Davis-Kahan perturbation result. However, pairwise closeness of the λ_i's does not imply that the eigenvectors are also close. When λ_i(C*) and λ_{i+1}(C*) are close, the corresponding eigenvectors in C can be "jammed" together. Thus, we need to identify an index i at which λ_i(C*) − λ_{i+1}(C*) exhibits a significant gap, and use a Davis-Kahan result to show that P_i(C) is close to P_i(C*). On the other hand, the map Π* (≜ (Λ*)^{-1/2}(V*)ᵀ) we aim to find depends on the inverse square root (Λ*)^{-1/2}, so we need additional manipulation to argue that our estimate is close to (Λ*)^{-1/2}(V*)ᵀ.

3. The connection between gap and tail. Finally, the performance of our procedure is also characterized by the total volume of signal that is discarded, i.e., Σ_{i>k_1} λ_i(C*), where k_1 is the location that exhibits the gap. The question becomes whether it is possible to identify a k_1 that simultaneously exhibits a large gap and ensures that the tail after it is well-controlled, e.g., the sum of the tail is O(n^{-c}) for a constant c. We develop a combinatorial analysis to show that it is always possible to find such a gap under the assumption that λ_i(C*) is bounded by a power law distribution with exponent ω ≥ 2; see the toy illustration below.
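The following toy numpy illustration of the gap-and-tail phenomenon assumes a power-law spectrum λ_i = i^{-2} and an n^{-1/2}-sized perturbation; the seed and the constant 5 in the gap threshold are arbitrary choices made purely for illustration.

import numpy as np

rng = np.random.default_rng(1)
d1, n = 500, 2000
lam_true = np.arange(1, d1 + 1) ** -2.0                     # power-law spectrum, omega = 2
lam_emp = lam_true + rng.normal(scale=n ** -0.5, size=d1)   # |lam_i(C) - lam_i(C*)| ~ n^{-1/2}
lam_emp = np.sort(lam_emp)[::-1]

delta = 5 * n ** -0.5                                       # gap threshold (tunable)
gaps = lam_emp[:-1] - lam_emp[1:]
k1 = np.where(gaps >= delta)[0].max() + 1                   # last index with a delta-sized gap

print("chosen k1 :", k1)
print("gap at k1 :", round(gaps[k1 - 1], 4))
print("tail sum  :", round(lam_true[k1:].sum(), 4))         # discarded signal; small when omega >= 2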
Combining all three building blocks, we have:

Proposition 1. Let ξ and δ be two tunable parameters such that ξ = ω(log³n/√n) and δ³ = ω(ξ). Assume that λ*_i ≤ c·i^{-ω}. Consider running STEP-1-PCA-X in Fig. 1. With high probability, we have (i) leading eigenvectors/eigenvalues are close: there exist a unitary matrix W and a constant c_1 such that ‖V_{k_1}(Λ_{k_1})^{-1/2} − V*_{k_1}(Λ*_{k_1})^{-1/2}W‖ ≤ c_1·ξ/δ³; and (ii) small tail: Σ_{i≥k_1} λ*_i ≤ c_2·δ^{(ω−1)/(ω+1)} for a constant c_2.

Prop. 1 implies that our estimate ẑ_+ = Π̂x is sufficiently close to z = Π*x, up to a unitary transform. We then execute STEP-2-PCA-DENOISE to reduce the problem to a matrix denoising one and solve it by hard thresholding. We refer to y = Nz + ε, where z is a standard multivariate Gaussian and N = MV*(Λ*)^{1/2}, as the orthogonalized form of the problem. While we do not directly observe z, our performance is characterized by the spectral structure of N.

Theorem 1. Consider running ADAPTIVE-RRR in Fig. 1 on n independent samples (x, y) from the model y = Mx + ε, where x ∈ R^{d_1} and y ∈ R^{d_2}. Let C* = 𝔼[xxᵀ]. Assume that (i) ‖M‖_2 ≤ Υ = O(1); (ii) x is a multivariate Gaussian with 𝔼‖x‖_2² = 1; in addition, λ_1(C*) < 1 and, for all i, λ_i(C*) ≤ c/i^ω for a constant c; and (iii) ε ∼ N(0, σ_ε²·I_{d_2×d_2}), where σ_ε ≥ min{Υ, 1}. Let ξ = ω(log³n/√n), δ³ = ω(ξ), and let θ be a suitably large constant. Let y = Nz + ε be the orthogonalized form of the problem. Let ℓ* be the largest index such that σ^N_{ℓ*} > θσ_ε√(d_2/n). Let ŷ be our testing forecast. With high probability over the training data:

𝔼[‖ŷ − y‖_2²] ≤ Σ_{i>ℓ*}(σ^N_i)² + O(ℓ*·d_2·θ²σ_ε²/n) + O(√(ξ/δ³)) + O(δ^{(ω−1)/(4(ω+1))}),   (2)

where the expectation is over the randomness of the test data.

Theorem 1 also implies that there exists a way to parametrize ξ and δ such that 𝔼[‖ŷ − y‖_2²] ≤ Σ_{i>ℓ*}(σ^N_i)² + O(ℓ*·d_2·θ²σ_ε²/n) + O(n^{-c_0}) for some constant c_0. We next interpret each term in (2). The terms Σ_{i>ℓ*}(σ^N_i)² + O(ℓ*·d_2·θ²σ_ε²/n) are typical for solving the matrix denoising problem on N̂_+ᵀ (≈ Nᵀ + ℰ): we can extract the signals associated with the ℓ* leading singular vectors of N, so the sum Σ_{i>ℓ*}(σ^N_i)² starts at i > ℓ*. For each direction we extract, we pay a noise term of order θ²σ_ε²·d_2/n, leading to the term O(ℓ*·d_2·θ²σ_ε²/n). The terms O(√(ξ/δ³)) + O(δ^{(ω−1)/(4(ω+1))}) come from the estimation error of ẑ_+ produced by Prop. 1, consisting of both the estimation error of C*'s leading eigenvectors and the error of cutting out a tail. We pay an extra exponent of 1/4 on these terms (e.g., δ^{(ω−1)/(ω+1)} in Prop. 1 becomes δ^{(ω−1)/(4(ω+1))}) because we use Cauchy-Schwarz (CS) twice: once in running the matrix denoising algorithm with the inaccurate ẑ_+, and once to bound the impact of cutting the tail. It remains open whether these two applications of CS can be circumvented. Sec. 4 explains how Thm. 1 and the lower bound imply that the algorithm is near-optimal. Sec. 5 compares our result with existing ones under other parametrizations, e.g., rank(M).
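To see how the first two terms of (2) behave on a concrete spectrum, the following snippet picks an assumed power-law spectrum for N and computes ℓ* and the resulting bias/variance split. All numbers are placeholders chosen for illustration, not values from the paper.

import numpy as np

n, d2, theta, sigma_eps = 4000, 200, 2.0, 1.0   # toy values (assumptions)
sigma_N = np.arange(1, d2 + 1) ** -1.0          # assumed singular values of N

thresh = theta * sigma_eps * np.sqrt(d2 / n)    # detection threshold from Theorem 1
ell_star = int(np.sum(sigma_N > thresh))        # largest index with sigma_i^N above it

bias = np.sum(sigma_N[ell_star:] ** 2)          # sum_{i > ell*} (sigma_i^N)^2
variance = ell_star * d2 * theta**2 * sigma_eps**2 / n
print(f"ell* = {ell_star}, bias term = {bias:.4f}, variance term = {variance:.4f}")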
4 Lower bound

Our algorithm accurately estimates the singular vectors of N that correspond to singular values above the threshold θτ, where τ = σ_ε√(d_2/n). However, it may well happen that most of the spectral "mass" of N lies only slightly below this threshold. In this section, we establish that no algorithm can do better than ours in a bi-criteria sense, i.e., we show that any algorithm given a slightly smaller sample than ours can only minimally outperform ours in terms of MSE. We establish instance-dependent lower bounds: when there is more spectral mass below the threshold, the performance of our algorithm is worse, and we need to establish that no algorithm can do much better.

This departs from the standard minimax framework, in which one examines the entire parameter space of N, e.g., all rank-r matrices, and produces a large set of statistically indistinguishable "bad" instances [43]. Such lower bounds are not sensitive to instance-specific quantities such as the spectrum of N; in particular, if prior knowledge suggests that the unknown parameter N is far from these bad instances, the minimax lower bound cannot be applied.

We introduce the notion of local minimax. We partition the space into parts so that similar matrices are grouped together. Similar matrices are those N that have the same singular values and right singular vectors; we establish strong lower bounds even against algorithms that know the singular values and right singular vectors of N. An equivalent view is to assume that the algorithm has oracle access to C*, M's singular values, and M's right singular vectors. Such an algorithm can solve the orthogonalized form, as N's singular values and right singular vectors can easily be deduced. Thus, the only reason the algorithm needs data is to learn the left singular vectors of N. The lower bound we establish is the minimax bound for this "unfair" comparison, in which the competing algorithm is given more information. In fact, this can be reduced further: even if the algorithm "knows" that the left singular vectors of N are sparse, identifying the locations of the non-zero entries is the key difficulty that leads to the lower bound.

Definition 1 (Local minimax bound). Consider a model y = Mx + ε, where x is a random vector, so C*(x) = 𝔼[xxᵀ] represents the covariance matrix of the data distribution, and M = U^M Σ^M (V^M)ᵀ. The relation (M, x) ∼ (M′, x′) ⇔ (Σ^M = Σ^{M′} ∧ V^M = V^{M′} ∧ C*(x) = C*(x′)) is an equivalence relation; let the equivalence class of (M, x) be R(M, x) = {(M′, x′) : Σ^{M′} = Σ^M, V^{M′} = V^M, and C*(x′) = C*(x)}. The local minimax bound for y = Mx + ε with n independent samples and ε ∼ N(0, σ_ε²·I_{d_2×d_2}) is

r(x, M, n, σ_ε) = min_{M̂} max_{(M′,x′)∈R(M,x)} 𝔼_{X,Y from y=M′x′+ε} [ 𝔼_{x′}[‖M̂(X,Y)x′ − M′x′‖_2² | X, Y] ].   (3)

It is worth interpreting (3) in some detail. For any two (M, x), (M′, x′) in R(M, x), the algorithm has the same "prior knowledge", so it can only distinguish between the two instances by using the observed data; in particular, M̂ is a function only of X and Y, and we write it as M̂(X, Y) to emphasize this. Thus, we can evaluate the performance of M̂ by looking at the worst possible (M′, x′) and considering the MSE 𝔼‖M̂(X,Y)x′ − M′x′‖².

Proposition 2. Consider the problem y = Mx + ε with orthogonalized form y = Nz + ε. Let ξ be a sufficiently small constant. There exist a sufficiently small constant ρ_0 (depending on ξ) and a constant c such that, for any ρ ≤ ρ_0,

r(x, M, n, σ_ε) ≥ (1 − c·ρ^{1/2−ξ}) Σ_{i≥t}(σ^N_i)² − O(ρ^{1/2−ξ}/d_2^{ω−1}),

where t is the smallest index such that σ^N_t ≤ ρσ_ε√(d_2/n).

Proposition 2 gives a lower bound on the MSE in expectation; it can be turned into a high-probability result with suitable modifications. The proof of the lower bound uses a trick similar to the one used in the upper bound analysis to cut the tail. This results in an additional term O(ρ^{1/2−ξ}/d_2^{ω−1}), which is generally smaller than the n^{-c_0} tail term in Theorem 1 and does not dominate the gap.
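To make the equivalence class of Definition 1 concrete, here is a small numpy check (a hedged illustration, with arbitrary toy sizes): two matrices sharing singular values and right singular vectors, paired with the same design distribution x, lie in the same class R(M, x), yet their left singular vectors, and hence the instances themselves, can be far apart. This is exactly the part the data must reveal.

import numpy as np

rng = np.random.default_rng(2)
d2, d1 = 6, 4
U1, _ = np.linalg.qr(rng.standard_normal((d2, d2)))   # two different left factors
U2, _ = np.linalg.qr(rng.standard_normal((d2, d2)))
V, _ = np.linalg.qr(rng.standard_normal((d1, d1)))    # shared right singular vectors
S = np.zeros((d2, d1)); np.fill_diagonal(S, [3.0, 2.0, 1.0, 0.5])

M1, M2 = U1 @ S @ V.T, U2 @ S @ V.T                   # same Sigma^M and V^M
print(np.allclose(np.linalg.svd(M1, compute_uv=False),
                  np.linalg.svd(M2, compute_uv=False)))   # True: identical spectra
print(np.round(np.linalg.norm(M1 - M2, "fro"), 2))        # yet the instances are far apart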
Gap requirement and bi-criteria approximation algorithms. Recall τ = σ_ε√(d_2/n). Theorem 1 asserts that any signal above the threshold θτ can be detected, i.e., the MSE is at most Σ_{i: σ^N_i ≤ θτ} (σ^N_i)² (plus inevitable noise), whereas Proposition 2 asserts that any signal below the threshold ρτ cannot be detected, i.e., the MSE is approximately at least Σ_{i: σ^N_i ≤ ρτ} (1 − poly(ρ))(σ^N_i)². There is a "gap" between θτ and ρτ, as θ > 1 and ρ < 1; see Fig. 3(a). This kind of gap is inevitable because both bounds are high-probability statements. The gap phenomenon appears naturally when the sample size is small, as a simple example illustrates: consider estimating µ from one sample drawn from N(µ, σ²). Roughly speaking, when µ ≫ σ, estimation is feasible, whereas when µ ≪ σ, it is impossible. In the region µ ≈ σ, algorithms fail with constant probability, and we cannot prove a high-probability lower bound either.

While many signals can "hide" in the gap, the inability to detect signals in the gap is a transient phenomenon. When the number of samples n is modestly increased, our detection threshold θτ = θσ_ε√(d_2/n) shrinks, and the hidden signal can be fully recovered. This observation naturally leads to a notion of bi-criteria optimization that frequently arises in approximation algorithms.

Definition 2. An algorithm for solving the y = Mx + ε problem is (α, β)-optimal if, when given an i.i.d. sample of size αn as input, it outputs an estimator whose MSE is at most β worse than the local minimax bound, i.e., 𝔼[‖ŷ − y‖_2²] ≤ r(x, M, n, σ_ε) + β.

Corollary 1. Let ξ and c_0 be small constants and let ρ be a tunable parameter. Our algorithm is (α, β)-optimal for α = θ²/ρ^{5/2} and β = O(ρ^{1/2−ξ})‖Mx‖_2² + O(n^{-c_0}).

The error term β consists of ρ^{1/2−ξ}‖Mx‖_2², which is directly characterized by the signal strength, and an additive term O(n^{-c_0}) = o(1). Assuming that ‖Mx‖ = Ω(1), i.e., the signal is not too weak, the term β becomes a single multiplicative bound O(ρ^{1/2−ξ} + n^{-c_0})‖Mx‖_2². This gives an easily interpretable result: for example, when our data size is n log n, the performance gap between our algorithm and any algorithm that uses n samples is at most o(‖Mx‖_2²). The improvement is significant when other baselines deliver MSE in an additive form that can be larger than ‖Mx‖_2² in the regime n ≤ d_1.
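As a quick sanity check on the tradeoff in Corollary 1 (treating the hidden constants as 1 and ignoring the n^{-c_0} term, an assumption made purely for illustration), halving ρ trades a constant-factor increase in samples for a modest decrease in error:

α(ρ/2)/α(ρ) = (θ²·(ρ/2)^{-5/2}) / (θ²·ρ^{-5/2}) = 2^{5/2} ≈ 5.66,    β(ρ/2)/β(ρ) ≈ 2^{-(1/2−ξ)} ≈ 2^{-1/2} ≈ 0.71.

So driving the sub-optimality β toward zero is cheap in error but polynomially expensive in samples, matching the bi-criteria reading of Definition 2.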
Preview of techniques. Let N = U^N Σ^N (V^N)ᵀ be the instance (in orthogonalized form). Our goal is to construct a collection N = {N_1, …, N_K} of K matrices such that (i) for any N_i ∈ N, Σ^{N_i} = Σ^N and V^{N_i} = V^N; (ii) for any two N_i, N_j ∈ N, ‖N_i − N_j‖_F is large; and (iii) K = exp(Ω(poly(ρ)·d_2)) (cf. [43, Chap. 2]). Condition (i) ensures that it suffices to construct the unitary matrices U^{N_i}, and that the resulting instances are all in the same equivalence class. Conditions (ii) and (iii) resemble the standard construction of codes in information theory: we need a large "code rate", corresponding to requiring a large K, as well as large distances between codewords, corresponding to requiring that ‖U^{N_i} − U^{N_j}‖_F be large.

Standard approaches for constructing such collections run into difficulties. Obtaining a sufficiently tight concentration bound on the distance between two random unitary matrices is difficult, since the matrix entries are, by necessity, correlated. On the other hand, starting with a large collection of random unit vectors and using their Cartesian product to build matrices does not necessarily yield unitary matrices. We design a two-stage approach that decouples condition (iii) from (i) and (ii) by generating only sparse matrices U^{N_i}; see Fig. 3(b)-(d) and the sketch below.

In the first stage (Steps 1 & 2 in Fig. 3(b)-(c)), we only specify the non-zero positions (the sparsity pattern) of each U^{N_i}. It suffices to guarantee that the sparsity patterns of the matrices U^{N_i} and U^{N_j} have little overlap. The existence of such objects can easily be proved using the probabilistic method, so in the first stage we can build up a large number of sparsity patterns. In the second stage (Step 3 in Fig. 3(d)), we carefully fill in values at the non-zero positions of each U^{N_i}. When the number of non-zero entries is not too small, satisfying the unitary constraint is feasible. As the overlap between the sparsity patterns of any two matrices is small, we can argue that the distance between them is large. By carefully trading off the number of non-zero positions and the portion of overlap, we can simultaneously satisfy all three conditions.
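The following numpy sketch mimics the two-stage construction on a toy scale: disjointly supported columns are filled with random unit vectors, which makes each matrix column-orthonormal by construction, while low support overlap between two independent draws keeps them far apart in Frobenius norm. The sizes are arbitrary illustrations, not the parameters used in the proof.

import numpy as np

def sparse_orthonormal(d2, r, s, rng):
    """Stage 1: pick disjoint supports of size s per column.
    Stage 2: fill each support with a random unit vector."""
    U = np.zeros((d2, r))
    support = rng.permutation(d2)[: r * s].reshape(r, s)  # disjoint supports across columns
    for j in range(r):
        v = rng.standard_normal(s)
        U[support[j], j] = v / np.linalg.norm(v)          # unit column on its support
    return U

rng = np.random.default_rng(3)
d2, r, s = 200, 5, 10
U1 = sparse_orthonormal(d2, r, s, rng)
U2 = sparse_orthonormal(d2, r, s, rng)
print(np.allclose(U1.T @ U1, np.eye(r)))      # True: columns are orthonormal
print(np.round(np.linalg.norm(U1 - U2, "fro"), 2))  # ~ sqrt(2r) when supports barely overlap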
5 Related work and comparison

In this section, we compare our results to those of other regression algorithms that place low-rank constraints on M. Most existing MSE results are parametrized by the rank or spectral properties of M. For example, [30] defined a generalized notion of rank, B_q(R^A_q) = {A ∈ R^{d_2×d_1} : Σ_{i=1}^{d_2} |σ^A_i|^q ≤ R_q}, where q ∈ [0, 1] and A ∈ {N, M}; i.e., R^N_q characterizes the generalized rank of N, whereas R^M_q characterizes that of M. When q = 0, R^N_q = R^M_q is the rank of N, because rank(N) = rank(M) in our setting. In their setting, the MSE is parametrized by R^M_q and is shown to be O(R^M_q · (σ_ε²·λ*_1·(d_1+d_2)/((λ*_min)²·n))^{1−q/2}). In the special case q = 0, this reduces to O(σ_ε²·λ*_1·rank(M)·(d_1+d_2)/((λ*_min)²·n)). On the other hand, the MSE in our case (cf. Thm. 1) is bounded as 𝔼[‖ŷ − y‖_2²] = O(R^N_q·(σ_ε²·d_2/n)^{1−q/2} + n^{-c_0}). When q = 0, this becomes O(σ_ε²·rank(M)·d_2/n + n^{-c_0}). The improvement here is twofold. First, our bound is directly characterized by N in orthogonalized form, whereas the result of [30] needs to examine the interaction between M and C*, so their MSE depends on both R^M_q and λ*_min. Second, our bound no longer depends on d_1 and pays only an additive factor n^{-c_0}; thus, when n < d_1, our result is significantly better. Other works have different parameters in their upper bounds, but all of these existing results require n > d_1 to obtain nontrivial upper bounds [9, 12, 26]. Unlike these prior works, we require a stochastic assumption on X (the rows are i.i.d.) to ensure that the model is identifiable when n < d_1; without it, there could be two sets of disjoint features that fit the training data equally well.

Our algorithm produces an adaptive model whose complexity is controlled by k_1 and k_2, which are adjusted dynamically depending on the sample size and noise level. [9] and [12] also point out the need for adaptivity; however, they still require n > d_1 and make some strong assumptions. For instance, [9] assumes that there is a gap between σ_i(XMᵀ) and σ_{i+1}(XMᵀ) for some i. In comparison, our sufficient condition, the decay of the λ*_i, is more natural. Our work is not directly comparable to standard variable selection techniques such as LASSO [42] because they handle univariate y. Column selection algorithms [15] generalize variable selection methods to vector responses, but they cannot address the identifiability concern.

6 Experiments

We apply our algorithm to an equity market dataset and a social network dataset, to predict equity returns and user popularity respectively. Our baselines include ridge regression ("Ridge"), reduced rank ridge regression [29] ("Reduced ridge"), LASSO ("Lasso"), nuclear norm regularized regression ("Nuclear norm"), reduced rank regression [45] ("RRR"), and principal component regression [1] ("PCR").

Predicting equity returns. We use a stock market dataset from an emerging market that consists of approximately 3,600 stocks between 2011 and 2018. We focus on predicting the next 5-day returns. For each asset in the universe, we compute its past 1-day, past 5-day, and past 10-day returns as features. We use a standard approach to translate forecasts into positions [4, 47]. We examine two universes in this market: (i) Universe 1 is equivalent to the S&P 500 and consists of 983 stocks, and (ii) the Full universe consists of all stocks except illiquid ones.

Results. Table 1 (left) reports the forecasting power and portfolio return for out-of-sample periods in the Full universe (see the full version for Universe 1). We observe that: (i) the data has a low signal-to-noise ratio, as the out-of-sample R² values of all methods are close to 0; (ii) ADAPTIVE-RRR has the highest forecasting power; and (iii) ADAPTIVE-RRR has the smallest gap between in-sample and out-of-sample performance (see column "out − in"), suggesting that our model is better at avoiding spurious signals.

Predicting user popularity in social networks. We collected tweet data on political topics from Oct. 2016 to Dec. 2017. Our goal is to predict a user's next 1-day popularity, defined as the sum of retweets, quotes, and replies received by the user. There are 19 million distinct users in total; due to the huge size, we extract the subset of the 2,000 users with the most interactions for evaluation. For each user in this 2,000-user set, we use its past 5 days' popularity as features. We further randomly sample 200 users and make predictions for them, i.e., setting d_2 = 200 so that d_2 is of the same magnitude as n.

Results. We randomly sample users 10 times and report the average MSE and correlation (with standard deviations) for both in-sample and out-of-sample data (see the full version for more results). In Table 1 (right) we see results consistent with the equity-returns experiment: (i) ADAPTIVE-RRR yields the best out-of-sample MSE and correlation; (ii) ADAPTIVE-RRR achieves the best generalization by having a much smaller gap between training and test metrics.

7 Conclusion

This paper examines the low-rank regression problem in the high-dimensional setting. We design the first learning algorithm with provable statistical guarantees under a mild condition on the features' covariance matrix. Our algorithm is simple and computationally more efficient than low-rank methods based on optimizing nuclear norms. Our theoretical analysis of the upper and lower bounds may be of independent interest. Our preliminary experimental results demonstrate the efficacy of our algorithm. The full version explains why our algorithmic result is unlikely to be known or trivial.

Broader Impact

The main contribution of this work is theoretical. Productionizing the downstream applications stated in the paper would take six months or more, so there is no immediate societal impact from this project.

Acknowledgement

We thank the anonymous reviewers for helpful comments and suggestions. Varun Kanade is supported in part by the Alan Turing Institute under the EPSRC grant EP/N510129/1. Yanhua Li was supported in part by NSF grants IIS-1942680 (CAREER), CNS-1952085, CMMI-1831140, and DGE-2021871.
Qiong Wu and Zhenming Liu are supported by NSF grants NSF-2008557, NSF-1835821, and NSF-1755769. The authors acknowledge William & Mary Research Computing for providing computational resources and technical support that have contributed to the results reported within this paper.
1. What is the focus of the paper in terms of the problem it addresses?
2. What are the key contributions of the paper, particularly in regards to the proposed algorithm?
3. Are there any concerns or suggestions regarding the experimental results presented in the paper?
4. Is there anything else the reviewer would like to know or see in the paper, such as simplified corollaries to Theorem 1?
Summary and Contributions
This paper studies the problem of low rank regression where the number of samples is much less than the number of features. The authors give an algorithm achieving improved results in some parameter regimes, and the algorithm is arguably simpler than the previous state of the art (requiring two applications of PCA rather than solving an SDP). They also give lower bounds suggesting that their algorithm is essentially optimal in their setting. Experiments bear out that their algorithm is competitive.

Strengths
The algorithm described here is very simple, and solves a foundational problem nearly optimally. It's great that the authors are able to say something new just by analyzing PCA.

Weaknesses
I would be happy to see more compelling experimental datasets. What about predicting height based on genome, which seems well-suited for this algorithm? It would also be kind to provide one or two simplified corollaries to Theorem 1 that state the result without so many parameters (e.g., under a rank assumption).
1. What is the focus of the paper regarding reduced-rank regression?
2. What are the strengths of the proposed approach, particularly in terms of its simplicity?
3. What are the weaknesses of the paper, especially regarding its claims and comparisons with other works?
4. How does the reviewer assess the novelty and impact of the paper's contributions?
Summary and Contributions

This paper suggests a reduced-rank regression (RRR) estimator suitable for the high-dimensional n << p setting. The estimator is very simple and consists of two steps: (1) reduce X with PCA to Z; (2) do SVD on the cross-covariance between Z and Y. The paper claims that this procedure has good statistical guarantees and outperforms all existing competitors.

Strengths

The paper addresses an important problem of estimating regression coefficients of multivariate regression in the n << p setting. It develops a detailed mathematical treatment (mostly in the Appendix) to provide some statistical guarantees on the performance.

Weaknesses

That said, I am not convinced that this paper provides a contribution of NeurIPS level. The suggested estimator is extremely simple; one could even say "naive". Both ingredients are standard in statistics and machine learning: step (1) is the same as in principal component regression (PCR); step (2) is the same as in partial least squares (PLS). Together, this method is something like a PCR-PLS hybrid, with some singular value thresholding. The authors claim that it is very novel and outperforms all the competitors, but I remain not entirely convinced by this (see below). Disclaimer: I did not attempt to follow the mathematical proofs in the Appendix.
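To make the two steps described in this summary concrete, here is a minimal numpy sketch of one plausible reading of the procedure. The function name and the use of least squares for the final coefficients are our own illustrative choices; the paper's ADAPTIVE-RRR additionally applies singular value thresholding and chooses k1, k2 adaptively, which is not shown here.

```python
import numpy as np

def two_step_estimator(X, Y, k1, k2):
    """Step 1: PCA-reduce X to k1 components Z.
    Step 2: SVD of the cross-covariance between Z and Y, keep top k2."""
    n = X.shape[0]
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    P = Vt[:k1].T                      # d1 x k1 principal loadings
    Z = X @ P                          # n x k1 reduced design
    C = Z.T @ Y / n                    # k1 x d2 cross-covariance
    U, _, _ = np.linalg.svd(C, full_matrices=False)
    U = U[:, :k2]                      # top-k2 left singular directions
    B = np.linalg.lstsq(Z, Y, rcond=None)[0]   # k1 x d2 OLS on Z
    return P @ (U @ U.T @ B)           # d1 x d2 coefficients of rank <= k2
```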
NIPS
Title
Consistent Estimation for PCA and Sparse Regression with Oblivious Outliers

Abstract
We develop machinery to design efficiently computable and consistent estimators, achieving estimation error approaching zero as the number of observations grows, when facing an oblivious adversary that may corrupt responses in all but an α fraction of the samples. As concrete examples, we investigate two problems: sparse regression and principal component analysis (PCA). For sparse regression, we achieve consistency for optimal sample size n ≳ (k log d)/α² and optimal error rate O(√((k log d)/(n·α²))), where n is the number of observations, d is the number of dimensions and k is the sparsity of the parameter vector, allowing the fraction of inliers to be inverse-polynomial in the number of samples. Prior to this work, no estimator was known to be consistent when the fraction of inliers α is o(1/log log n), even for (non-spherical) Gaussian design matrices. Results holding under weak design assumptions and in the presence of such general noise have only been shown in the dense setting (i.e., general linear regression) very recently by d'Orsi et al. (dNS21). In the context of PCA, we attain optimal error guarantees under broad spikiness assumptions on the parameter matrix (usually used in matrix completion). Previous works could obtain non-trivial guarantees only under the assumption that the measurement noise corresponding to the inliers is polynomially small in n (e.g., Gaussian with variance 1/n²). To devise our estimators, we equip the Huber loss with non-smooth regularizers such as the ℓ1 norm or the nuclear norm, and extend d'Orsi et al.'s approach (dNS21) in a novel way to analyze the loss function. Our machinery appears to be easily applicable to a wide range of estimation problems. We complement these algorithmic results with statistical lower bounds showing that the fraction of inliers that our PCA estimator can deal with is optimal up to a constant factor.
1 Introduction

Estimating information from structured data is a central theme in statistics that by now has found applications in a wide array of disciplines. On a high level, a typical assumption in an estimation problem is the existence of a family, known a priori, of probability distributions P := { P_β : β ∈ Ω } over some space Z, each indexed by a parameter β ∈ Ω. We then observe a collection of n independent observations Z = (Z_1, ..., Z_n) drawn from an unknown probability distribution P_{β*} ∈ P. The goal is to (approximately) recover the hidden parameter β*. (Throughout, we assume Ω ⊆ ℝ^d and Z ⊆ ℝ^D for some d, D, denote random variables in boldface, and hide absolute constant factors with the notation O(·), Ω(·), ≳, ≲, and logarithmic factors with Õ(·), Ω̃(·).)

Oftentimes, real-world data may contain skewed, imprecise or corrupted measurements. Hence, a desirable property for an estimator is to be robust to significant, possibly malicious, noise perturbations of the given observations. Indeed, in the last two decades, a large body of work has been developed on designing robust algorithms (e.g., see (BGN09; BBC11; MMYSB19)). However, proving strong guarantees often either demands strong assumptions on the noise model or requires that the fraction of perturbed observations be small. More concretely, when we allow the noise to be chosen adaptively, i.e., dependently on the observations and hidden parameters, a common theme is that consistent estimators (estimators whose error tends to zero as the number of observations grows) can be attained only when the fraction of outliers is small.

In order to make vanishing error possible in the presence of large fractions of outliers, it is necessary to consider weaker adversary models that are oblivious to the underlying structured data. In recent years, a flurry of works has investigated oblivious noise models (CLMW11; ZLW+10; TJSO14; BJKK17; SZF18; SBRJ19; PJL21; dNS21). These results, however, are tailor-made to specific models and problems. To overcome this limitation, in this paper we aim to provide a simple blueprint for designing provably robust estimators under minimalistic noise assumptions for a large class of estimation problems. As a testbed for our blueprint, we investigate two well-studied problems:

Principal component analysis (PCA): Given a matrix Y := L* + N, where L* ∈ ℝ^{n×n} is an unknown parameter matrix and N is an n-by-n random noise matrix, the goal is to find an estimator L̂ for L* that is as close as possible to L* in Frobenius norm.

Sparse regression: Given observations (X_1, y_1), ..., (X_n, y_n) following the linear model y_i = ⟨X_i, β*⟩ + η_i, where X_i ∈ ℝ^d, β* ∈ ℝ^d is the k-sparse parameter vector of interest (by k-sparse we mean that it has at most k nonzero entries) and η_1, ..., η_n is noise, the goal is to find an estimator β̂ for β* achieving small squared prediction error (1/n)·‖X(β̂ − β*)‖², where X is the matrix whose rows are X_1, ..., X_n. (Our analysis also works for the parameter error ‖β* − β̂‖.)
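For intuition, both observation models can be instantiated with a few lines of numpy. This is a hedged sketch with hypothetical parameter choices, not the paper's setup; it only illustrates what "an α fraction of inliers, oblivious outliers elsewhere" looks like.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_instance(n=200, r=5, alpha=0.3, zeta=0.1, rho=None):
    """Y = L* + N with rank(L*) <= r, |L*_ij| <= rho/n, and an alpha
    fraction of noise entries bounded by zeta (the inliers)."""
    rho = n if rho is None else rho
    U = rng.choice([-1.0, 1.0], size=(n, r)) / np.sqrt(r)
    L = (rho / n) * (U @ U.T)               # entries bounded by rho/n
    inlier = rng.random((n, n)) < alpha
    N = np.where(inlier,
                 rng.uniform(-zeta, zeta, size=(n, n)),   # small symmetric noise
                 10.0 * n * rng.standard_normal((n, n)))  # gross outliers
    return L, L + N

def regression_instance(n=300, d=1000, k=10, alpha=0.3):
    """y = X beta* + eta with k-sparse beta* and P(|eta_i| <= 1) >= alpha."""
    X = rng.standard_normal((n, d))
    beta = np.zeros(d)
    beta[rng.choice(d, size=k, replace=False)] = 1.0
    inlier = rng.random(n) < alpha
    eta = np.where(inlier, rng.uniform(-1, 1, size=n),
                   100.0 * rng.standard_normal(n))
    return X, X @ beta + eta, beta
```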
Principal component analysis. A natural way to describe principal component analysis under oblivious perturbations is to assume the noise matrix N to be an n-by-n matrix with a uniformly random set of α·n² entries bounded by some small ζ > 0 in absolute value. In these settings, we may think of the α fraction of entries with small noise as the set of uncorrupted observations. Moreover, for ζ > 0, the fact that the noise is non-zero even for uncorrupted observations allows us to capture both gross sparse errors and small entry-wise noise in the measurements at the same time (for example, if ζ = 1 then the model captures settings with additional standard Gaussian noise). Remarkably, for ζ = 0, Candès et al.'s seminal work (CLMW11) provided an algorithm that exactly recovers L* even for a vanishing fraction of inliers, under incoherence conditions on the signal L*. The result was slightly extended in (ZLW+10), where the authors provided an algorithm recovering L* up to squared error O(ζ²·n⁴), thus allowing polynomially small measurement noise ζ but still failing to capture settings where standard Gaussian measurement noise is added to the sparse noise. Even for simple signal matrices L*, prior to this work it remained an open question whether consistent estimators could be designed in the presence of both oblivious noise and more reasonable measurement noise (e.g., standard Gaussian).

Linear regression. Similarly to the context of principal component analysis, a convenient model for oblivious adversarial corruptions is to assume the noise vector η to have a random set of α·n entries bounded by ζ in absolute value (here all results of interest extend to any ζ > 0 just by scaling, so we consider only the case ζ = 1). This model trivially captures the classical setting with Gaussian noise (a Gaussian vector with variance σ² will have an α = Θ(1/σ) fraction of inliers) and, again, allows us to think of the α fraction of entries with small noise as the set of uncorrupted observations. Early works on consistent regression focused on the regime with Gaussian design X_1, ..., X_n ∼ N(0, Σ) and deterministic noise η. (For Gaussian design X one may also consider a deterministic noise model; it is subsumed by the random noise model discussed here. As shown in (dNS21), roughly speaking the underlying reason is that the Gaussianity of X allows one to obtain several other desirable properties "for free": for example, one could ensure that the noise vector is symmetric by randomly flipping the sign of each observation (y_i, X_i), as the design matrix will still be Gaussian. See Section 2 for a more in-depth discussion.) (BJKK17) presented an estimator achieving error Õ(d/(α²·n)) for any α larger than a fixed constant. (SBRJ19) extended this result by achieving comparable error rates even for a vanishing fraction of inliers α ≳ 1/log log n. Assuming n ≳ d²/α², (TJSO14) proved that the Huber-loss estimator (Hub64) achieves the optimal error rate O(d/(α²·n)) even for a polynomially small fraction of inliers α ≳ √(d/n). This line of work culminated in (dNS21), which extended the result of (TJSO14), achieving the same guarantees with optimal sample complexity n ≳ d/α². For Gaussian design, a similar result to (dNS21) can be extracted from the independent work (PF20). Furthermore, the authors of (dNS21) extended these guarantees to deterministic design matrices satisfying only a spreadness condition (trivially satisfied by sub-Gaussian design matrices). Huber loss was also analyzed in the context of linear regression in (SZF18; DT19; SF20; PJL21).
These works studied models that differ from ours, and their results do not apply in the case α = o(1): (SZF18) used assumptions on the moments of the noise, (DT19) and (SF20) studied the model with a non-oblivious adversary, and the model from (PJL21) allows corruptions in the covariates.

In the context of sparse regression, less is known. When the fraction of uncorrupted observations is constant, α ≥ Ω(1), (NTN11) provided a first consistent estimator. Later, (DT19) and (SF20) improved that result (still assuming α ≥ Ω(1), but their results also hold against a non-oblivious adversary). In the case α = o(1), the algorithm of (SBRJ19) (for Gaussian designs) achieves the nearly optimal error convergence Õ((k log² d)/(α²·n)), but requires α ≳ 1/log log n. More recently, (dNS21) presented an algorithm for standard Gaussian design X ∼ N(0,1)^{n×d}, achieving the nearly optimal error convergence Õ(k/(α²·n)) for the nearly optimal inlier fraction α ≳ √((k·log² d)/n). Both (SBRJ19) and (dNS21) use an iterative process and require a bound ‖β*‖ ≤ d^{O(1)} (these algorithms also work for larger ‖β*‖, but then the fraction of inliers or the error convergence is worse). The algorithm from (dNS21), however, relies heavily on the assumption that X ∼ N(0,1)^{n×d} and appears unlikely to generalize to broader families of matrices (including non-spherical Gaussian designs).

Our contribution. We propose new machinery to design efficiently computable consistent estimators achieving optimal error rates and sample complexity against oblivious outliers. In particular, we extend the approach of (dNS21) to structured estimation problems by finding a way to adequately exploit the structure therein. While consistent estimators have already been designed under more benign noise assumptions (e.g., the LASSO estimator for sparse linear regression under Gaussian noise), it was previously unclear how to exploit this structure in the setting of oblivious noise. One key consequence of our work is hence to demonstrate what minimal assumptions on the noise suffice to make effective recovery (in the sense above) possible. Concretely, we show:

Oblivious PCA: Under mild assumptions on the noise matrix N and a common assumption on the parameter matrix L*, traditionally applied in the context of matrix completion (NW12), we provide an algorithm that achieves optimal error guarantees.

Sparse regression: Under mild assumptions on the design matrix and the noise vector, similar to the ones used in (dNS21) for dense parameter vectors β*, we provide an algorithm that achieves optimal error guarantees and sample complexity.

For both problems, our analysis improves over the state of the art and recovers the classical optimal guarantees, not only for Gaussian noise but also under much less restrictive noise assumptions. At a high level, we achieve the above results by equipping the Huber loss estimator with appropriate regularizers. Our techniques closely follow standard analyses of M-estimators, but crucially depart from them when dealing with observations with large perturbations. Furthermore, our analysis appears to be mechanical and thus easily applicable to many different estimation problems.

2 Results

Our estimators are based on regularized versions of the Huber loss. The regularizer we choose depends on the underlying structure of the estimation problem: we use ℓ1 regularization to enforce sparsity in linear regression and nuclear norm regularization to enforce a low-rank structure in the context of PCA.
More formally, the Huber penalty is defined as the function f_h : ℝ → ℝ_{≥0} with

f_h(t) := t²/2 for |t| ≤ h, and f_h(t) := h·(|t| − h/2) otherwise,   (2.1)

where h > 0 is a penalty parameter. For X ∈ ℝ^D, the Huber loss is defined as the function F_h(X) := Σ_{i∈[D]} f_h(X_i). We will define the regularized versions in the following sections. For a matrix A, we use ‖A‖, ‖A‖_nuc, ‖A‖_F, ‖A‖_max to denote its spectral, nuclear, Frobenius, and maximum norms, respectively (for an n×m matrix A, ‖A‖_max = max_{i∈[n], j∈[m]} |A_ij|). For a vector v, we use ‖v‖ and ‖v‖_1 to denote its ℓ2 and ℓ1 norms.

2.1 Oblivious principal component analysis

For oblivious PCA, we provide guarantees for the following estimator (ζ will be defined shortly):

L̂ := argmin_{L ∈ ℝ^{n×n}, ‖L‖_max ≤ ρ/n} ( F_h(Y − L) + 100·√n·(ζ + ρ/n)·‖L‖_nuc ).   (2.2)

Theorem 2.1. Let L* ∈ ℝ^{n×n} be an unknown deterministic matrix and let N be an n-by-n random matrix with independent, symmetrically distributed (about zero) entries and α := min_{i,j∈[n]} ℙ{|N_ij| ≤ ζ} for some ζ > 0. Suppose that rank(L*) = r and ‖L*‖_max ≤ ρ/n. Then, with probability at least 1 − 2^{−n} over N, given Y = L* + N, ζ and ρ, the estimator (2.2) with Huber parameter h = ζ + ρ/n satisfies

‖L̂ − L*‖_F ≤ O(√(rn)/α) · (ζ + ρ/n).

We first compare the guarantees of Theorem 2.1 with the previous results on robust PCA (CLMW11; ZLW+10). (We remark that in (CLMW11) the authors showed that they can handle non-symmetric noise when the fraction of inliers is large, α > 1/2; for smaller fractions of inliers, their analysis requires the entries of the noise to be symmetric and independent, so for α < 1/2 their assumptions are captured by Theorem 2.1.) The first difference is that they require L* to satisfy certain incoherence conditions: a rank-r n×n matrix M with singular value decomposition M = UΣV^⊤ is µ-incoherent if max_{i∈[n]} ‖U^⊤e_i‖² ≤ µr/n, max_{i∈[n]} ‖V^⊤e_i‖² ≤ µr/n, and ‖UV^⊤‖_∞ ≤ √(µr)/n. Concretely, they provide theoretical guarantees for r ≤ O(µ^{−1}·n·(log n)^{−2}), where µ is the incoherence parameter. In certain regimes, such a constraint strongly binds the eigenvectors of L*, restricting the set of admissible signal matrices. Using the different assumption ‖L*‖_max ≤ ρ/n (commonly used for matrix completion, see (NW12)), we can obtain nontrivial guarantees (i.e., ‖L̂ − L*‖_F/‖L*‖_F → 0 as n → ∞) even when the µ-incoherence conditions are not satisfied for any µ ≤ n/log² n, and hence the results of (CLMW11; ZLW+10) cannot be applied. We remark that, without assuming incoherence, the dependence of the error on the maximal entry of L* is inherent (see Remark 3.2). The second difference is that Theorem 2.1 provides a significantly better dependence on the magnitude ζ of the entry-wise measurement error. Specifically, in the settings of Theorem 2.1, (ZLW+10) showed that if L* satisfies the incoherence conditions, the error of their estimator is O(n²·ζ). If the entries of N are standard Gaussian with probability α (and hence ζ ≤ O(1)), and the entries of L* are bounded by O(1), then the error of our estimator is O(√(rn)/α), which is considerably better than the O(n²) of (ZLW+10). On the other hand, their error does not depend on the magnitude ρ/n of the signal entries, so in the extreme regimes where the singular vectors of L* satisfy the incoherence conditions but L* has very large singular values (so that the magnitude of the entries of L* is significantly larger than n), their analysis provides better guarantees than Theorem 2.1.
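A minimal sketch of how one might compute the estimator (2.2) by proximal gradient descent, using the paper's parameter choices h = ζ + ρ/n and regularization weight 100√n(ζ + ρ/n). Applying singular value thresholding followed by entrywise clipping is a heuristic for handling the two non-smooth terms (it is not the exact proximal map of their sum), and the step size and iteration count are illustrative assumptions.

```python
import numpy as np

def huber(t, h):
    # f_h(t) = t^2/2 if |t| <= h, else h(|t| - h/2); mirrors Eq. (2.1).
    a = np.abs(t)
    return np.where(a <= h, 0.5 * t**2, h * (a - 0.5 * h))

def svt(A, tau):
    # Singular value thresholding: prox of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def oblivious_pca(Y, zeta, rho, iters=200, step=1.0):
    n = Y.shape[0]
    h = zeta + rho / n
    lam = 100 * np.sqrt(n) * h              # weight from Eq. (2.2)
    L = np.zeros_like(Y)
    for _ in range(iters):
        G = -np.clip(Y - L, -h, h)          # gradient of F_h(Y - L) in L
        L = svt(L - step * G, step * lam)   # nuclear-norm prox step
        L = np.clip(L, -rho / n, rho / n)   # heuristic max-norm projection
    return L
```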
As another observation to understand Theorem 2.1, notice that our robust PCA model also captures the classical matrix completion settings. In fact, any instance of matrix completion can easily be transformed into an instance of our PCA model: for the entries (i, j) that we do not observe, we can set N_ij to some arbitrarily large value ±C(ρ, n) ≫ ρ/n, making the signal-to-noise ratio of the entry arbitrarily small. The observed (i.e., uncorrupted) Θ(α·n²) entries may additionally be perturbed by Gaussian noise with variance Θ(ζ²). The error guarantee of the estimator in Theorem 2.1 is O((ρ/n + ζ)·√(rn)/α). Thus, the dependency on the parameters ρ, n, ζ, and r is the same as in matrix completion, and the error is within a factor of Θ(√(1/α)) of the optimum for matrix completion. However, this worse dependency on α is intrinsic to the more general model considered, and it turns out to be optimal (see Theorem 2.4). On a high level, the additional factor of Θ(√(1/α)) comes from the fact that in our PCA model we do not know which entries are corrupted. The main consequence of this phenomenon is that a condition of the form α ≳ √(r/n) appears inherent to achieving consistency. To get some intuition on why this condition is necessary, consider the Wigner model, where we are given a matrix Y = xx^⊤ + σW for a flat vector x ∈ {±1}^n and a standard Gaussian matrix W. Note that the entries of W fit our noise model for ζ = 1, ρ/n = 1, r = 1 and α = Θ(1/σ). The spectral norm of σW concentrates around 2σ√n, and thus it is information-theoretically impossible to approximately recover the vector x for σ = 1/α = ω(√n) (see (PWBM16)).

2.2 Sparse regression

Our regression model considers a fixed design matrix X ∈ ℝ^{n×d} and observations y := Xβ* + η ∈ ℝ^n, where β* is an unknown k-sparse parameter vector and η is random noise with ℙ(|η_i| ≤ 1) ≥ α for all i ∈ [n]. Earlier works (BJKK17; SBRJ19) focused on the setting where the design matrix consists of i.i.d. rows with Gaussian distribution N(0, Σ) and the noise is η = ζ + w, where ζ is a deterministic (α·n)-sparse vector and w is sub-Gaussian. As in (dNS21), our results for a fixed design and random noise in fact extend to yield the same guarantees for this earlier setting (see Theorem 2.3). Hence, a key advantage of our results is that the design X does not have to consist of Gaussian entries. Remarkably, we can handle arbitrary deterministic designs as long as they satisfy some mild conditions. Concretely, we make the following three assumptions, the first two of which are standard in the sparse regression literature (see, e.g., (Wai19), Section 7.3). For a vector v ∈ ℝ^d and a set S ⊆ [d], we denote by v_S the restriction of v to the coordinates in S.

1. For every column X^i of X, ‖X^i‖ ≤ √n.
2. Restricted eigenvalue property (RE-property): for every vector u ∈ ℝ^d such that ‖u_{supp(β*)}‖_1 ≥ 0.1·‖u‖_1, we have (1/n)·‖Xu‖² ≥ λ·‖u‖² for some parameter λ > 0.
3. Well-spreadness property: for some (large enough) m ∈ [n], for every vector u ∈ ℝ^d such that ‖u_{supp(β*)}‖_1 ≥ 0.1·‖u‖_1, and for every subset S ⊆ [n] with |S| ≥ n − m, it holds that ‖(Xu)_S‖ ≥ (1/2)·‖Xu‖.

Denote F_2(β) := Σ_{i=1}^n f_2(y_i − ⟨X_i, β⟩), where the X_i are the rows of X and f_2 is as in Eq. (2.1).
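As a quick numerical aside illustrating the translation between the two noise viewpoints used above: for Gaussian noise of standard deviation σ, the inlier fraction α = ℙ(|η_i| ≤ 1) indeed scales as Θ(1/σ). A sketch with hypothetical sample sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
for sigma in [2.0, 5.0, 20.0, 100.0]:
    eta = sigma * rng.standard_normal(1_000_000)
    alpha = np.mean(np.abs(eta) <= 1.0)
    # alpha * sigma tends to sqrt(2/pi) ~ 0.80 as sigma grows
    print(f"sigma={sigma:6.1f}  alpha={alpha:.4f}  alpha*sigma={alpha * sigma:.3f}")
```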
We devise our estimator for sparse regression and state its statistical guarantees below:

β̂ := argmin_{β ∈ ℝ^d} ( F_2(β) + 100·√(n log d)·‖β‖_1 ).   (2.3)

Theorem 2.2. Let β* ∈ ℝ^d be an unknown k-sparse vector and let X ∈ ℝ^{n×d} be a deterministic matrix whose every column X^i satisfies ‖X^i‖ ≤ √n, satisfying the RE-property with λ > 0 and the well-spreadness property with m ≳ (k log d)/(λ·α²) (recall that m ≤ n). Further, let η be an n-dimensional random vector with independent, symmetrically distributed (about zero) entries and α = min_{i∈[n]} ℙ{|η_i| ≤ 1}. Then with probability at least 1 − d^{−10} over η, given X and y = Xβ* + η, the estimator (2.3) satisfies

(1/n)·‖X(β̂ − β*)‖² ≤ O( (1/λ)·(k log d)/(α²·n) )  and  ‖β̂ − β*‖² ≤ O( (1/λ²)·(k log d)/(α²·n) ).

There are important considerations when interpreting this theorem. The first is the special case η ∼ N(0, σ²)^n, which satisfies our model for α = Θ(1/σ). For this case, it is well known (see, e.g., (Wai19), Section 7.3) that under the same RE assumption, the LASSO estimator achieves a prediction error rate of O( (σ²/λ)·(k log d)/n ) = O( (k log d)/(λ·α²·n) ), matching our result. Moreover, this error rate is essentially optimal. Under a standard assumption in complexity theory (NP ⊄ P/poly), the RE assumption is necessary when considering polynomial-time estimators (ZWJ14); this also shows that the dependence on the RE constant seems unavoidable. Under mild conditions on the design matrix, trivially satisfied if the rows are i.i.d. Gaussian with a covariance Σ whose condition number is constant, our guarantees are optimal up to constant factors for all estimators if k ≤ d^{1−Ω(1)} (e.g., k ≤ d^{0.99}); see (RWY11). This optimality also shows that our bound on the number of samples is best possible, since otherwise we would not be able to achieve vanishing error. The (non-sparse version of the) well-spreadness property was first used in the context of regression in (dNS21); in the same work the authors also showed that, under oblivious noise assumptions, some weak form of spreadness is indeed necessary.

The second consideration is the optimal dependence on α: Theorem 2.2 achieves consistency as long as the fraction of inliers satisfies α = ω(√((k log d)/n)). To get an intuition, observe that lower bounds for standard sparse regression show that, already for η ∼ N(0, σ²·Id_n), it is possible to achieve consistency only for n = ω(σ²·k log d) (if k ≤ d^{1−Ω(1)}). As for this η the number of entries of magnitude at most 1 is O(n/σ) with high probability, it follows that for α = Θ(1/σ) ≤ O(√((k log d)/n)), no estimator is consistent. To the best of our knowledge, Theorem 2.2 is the first result to achieve consistency under such minimalistic noise settings and deterministic designs.

Previous results (BJKK17; SBRJ19; dNS21) focused on the simpler setting of Gaussian design X and deterministic noise, and provide no guarantees for more general models. Our techniques for Theorem 2.2 also extend to this case.

Theorem 2.3. Let β* ∈ ℝ^d be an unknown k-sparse vector and let X be an n-by-d random matrix with i.i.d. rows X_1, ..., X_n ∼ N(0, Σ) for a positive definite matrix Σ. Further, let η ∈ ℝ^n be a deterministic vector with α·n coordinates bounded by 1 in absolute value. Suppose that n ≳ (ν(Σ)·k log d)/(σ_min(Σ)·α²), where ν(Σ) is the maximum diagonal entry of Σ and σ_min(Σ) is its smallest eigenvalue. Then, with probability at least 1 − d^{−10} over X, given X and y = Xβ* + η, the estimator (2.3) satisfies

(1/n)·‖X(β̂ − β*)‖² ≤ O( (ν(Σ)·k log d)/(σ_min(Σ)·α²·n) )  and  ‖β̂ − β*‖² ≤ O( (ν(Σ)·k log d)/(σ_min(Σ)²·α²·n) ).
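A minimal sketch of how one might compute the estimator (2.3) with proximal gradient descent (ISTA). The regularization weight follows Eq. (2.3); the step size and iteration count are illustrative assumptions, and this is not the paper's reference implementation.

```python
import numpy as np

def soft_threshold(v, tau):
    # Prox of tau * l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def sparse_huber_regression(X, y, iters=500):
    n, d = X.shape
    lam = 100 * np.sqrt(n * np.log(d))       # weight from Eq. (2.3)
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1/L for the smooth Huber part
    beta = np.zeros(d)
    for _ in range(iters):
        r = y - X @ beta
        grad = -X.T @ np.clip(r, -2.0, 2.0)  # gradient of F_2 at beta
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta
```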
Even for standard Gaussian design X ∼ N(0,1)^{n×d}, the above theorem improves over previous results, which required the sub-optimal sample complexity n ≳ (k/α²)·log d·log‖β*‖. For non-spherical Gaussian designs, the improvement over the state of the art (SBRJ19) is more substantial: their algorithm requires α ≥ Ω(1/log log n), while our Theorem 2.3 has no such restriction and works for all α ≳ √( (ν(Σ)/σ_min(Σ))·(k log d)/n ); in many interesting regimes, α is allowed to be smaller than n^{−Ω(1)}. The dependence on α is nearly optimal: the estimator is consistent as long as α = ω( √( (ν(Σ)/σ_min(Σ))·(k log d)/n ) ), and by the discussion after Theorem 2.2, if α ≤ O(√((k log d)/n)), no estimator is consistent. Note that while we can deal with general covariance matrices, to compare Theorem 2.3 with Theorem 2.2 it is easier to consider Σ in a normalized form with ν(Σ) ≤ 1; this can easily be achieved by scaling X. Also note that Theorem 2.2 can be generalized to the case ‖X^i‖ ≤ √(νn) for arbitrary ν > 0, in which case the error bounds and the bound on m are multiplied by ν.

The RE-property of Theorem 2.2 is a standard assumption in sparse regression and is satisfied by a large family of matrices. For example, with high probability a random matrix X with i.i.d. rows sampled from N(0, Σ), with positive definite Σ ∈ ℝ^{d×d} whose diagonal entries are bounded by 1, satisfies the RE-property with parameter Ω(σ_min(Σ)) for all subsets of [d] of size k (so for every possible support of β*) as long as n ≳ (1/σ_min(Σ))·k log d (see (Wai19), Section 7.3.3). The well-spreadness assumption is satisfied for such X, for all sets S ⊂ [n] of size at most m ≤ cn (for sufficiently small c) and for all subsets of [d] of size k, as long as n ≳ (1/σ_min(Σ))·k log d.

2.3 Optimal fraction of inliers for principal component analysis under oblivious noise

We show here that the dependence on α obtained in Theorem 2.1 is information-theoretically optimal up to constant factors. Concretely, let L*, N, Y, α, ρ and ζ be as in Theorem 2.1, and let 0 < ε < 1 and 0 < δ < 1. A successful (ε, δ)-weak recovery algorithm for PCA is an algorithm that takes Y as input and returns a matrix L̂ such that ‖L̂ − L*‖_F ≤ ε·ρ with probability at least 1 − δ. It can easily be seen that the Huber-loss estimator of Theorem 2.1 fails to be a successful weak-recovery algorithm if α = o(√(r/n)) (in both cases ζ ≤ ρ/n and ρ/n ≤ ζ, we need α ≥ Ω(√(r/n))). A natural question is whether the condition α ≥ Ω(√(r/n)) is necessary in general. The following theorem shows that if α = o(√(r/n)), then weak recovery is information-theoretically impossible. This means that the (polynomially small) fraction of inliers that the Huber-loss estimator of Theorem 2.1 can deal with is optimal up to a constant factor.

Theorem 2.4. There exists a universal constant C₀ > 0 such that for every 0 < ε < 1 and 0 < δ < 1, if α := min_{i,j∈[n]} ℙ[|N_ij| ≤ ζ] satisfies

α < C₀ · (1 − ε²)² · (1 − δ) · √(r/n),

and n is large enough, then it is information-theoretically impossible to have a successful (ε, δ)-weak recovery algorithm.

The problem remains information-theoretically impossible (for the same regime of parameters) even if we assume that L* is incoherent; more precisely, even if we know that L* has incoherence parameters as good as those of a random flat matrix of rank r, the theorem still holds.
3 Techniques

To illustrate our techniques for proving statistical guarantees for the Huber-loss estimator, we first use sparse linear regression as a running example. Then we discuss how the same ideas apply to principal component analysis. Finally, we remark on our techniques for the lower bounds.

3.1 Sparse linear regression under oblivious noise

We consider the model of Theorem 2.2. Our starting point for the guarantees of our estimator (2.3), i.e., β̂ := argmin_{β∈ℝ^d} ( F_2(β) + 100·√(n log d)·‖β‖_1 ), is a classical approach for M-estimators (see, e.g., (Wai19), Chapter 9). For simplicity, we refer to F_2(β) as the loss function and to ‖β‖_1 as the regularizer. At a high level, the approach consists of two ingredients:

(I) an upper bound on some norm of the gradient of the loss function at the parameter β*;
(II) a lower bound on the curvature of the loss function (in the form of a local strong convexity bound) within a structured neighborhood of β*.

The structure of this neighborhood can roughly be controlled by choosing an appropriate regularizer. The key aspect of this strategy is that the strength of the statistical guarantees of the estimator crucially depends on the directions and the radius in which we can establish lower bounds on the curvature of the function. Since these features inherently depend on the landscape of the loss function and the regularizer, they may differ significantly from problem to problem. This strategy has been applied successfully to many related problems, such as compressed sensing or matrix completion, albeit under standard noise assumptions (by "standard noise assumptions" we mean, as a concrete example, (sub)-Gaussian noise distributions; see again Chapter 9 of (Wai19) for a survey). Under oblivious noise, (dNS21) used a particular instantiation of this framework to prove optimal convergence of the Huber loss, without any regularizer, for standard linear regression. Such an estimator, however, does not impose any structure on the neighborhood of β* considered in (II) and thus can only yield sub-optimal guarantees for sparse regression.

In the context of sparse regression, the above two conditions translate to: (I) an upper bound on the largest entry in absolute value of the gradient of the loss function at β*, and (II) a lower bound on the curvature of F_2 within the set of approximately k-sparse vectors close to β* (we clarify this notion in the subsequent paragraphs). We use this recipe to show that all approximate minimizers of F_2 are close to β*. While the idea of restricting to approximately sparse directions has also been applied to the LASSO estimator in sparse regression under standard (sub)-Gaussian noise, in the presence of oblivious noise our analysis of the Huber loss requires a more careful approach. More precisely, under the assumptions of Theorem 2.2, the error bound can be computed as

O( s·‖G‖*_reg / κ ),   (3.1)

where G is the gradient of the Huber loss at β*, ‖·‖*_reg is the norm dual to the regularization norm (which is ‖·‖_max for the ℓ1 regularizer), s is a structure parameter, equal to √k/λ, and κ is a restricted strong convexity parameter. Note that by error here we mean (1/√n)·‖X(β̂ − β*)‖. Similarly, under the assumptions of Theorem 2.1, we get the error bound (3.1), where G is the gradient of the Huber loss at L*, ‖·‖*_reg is the norm dual to the nuclear norm (i.e.,
the spectral norm), the structure parameter s is √r, and κ is a restricted strong convexity parameter. For more details on the conditions of the error bound, see the supplementary material. Below, we explore the bounds on the norm of the gradient and on the restricted strong convexity parameter.

Bounding the gradient of the Huber loss. The gradient of the Huber loss F_2(·) at β* has the form ∇F_2(β*) = Σ_{i=1}^n f_2'[η_i]·X_i, where X_i is the i-th row of X. The random variables f_2'[η_i], i ∈ [n], are independent, centered, symmetric and bounded by 2. Since we assume that each column of X has norm at most √n, the entries of the row X_i are easily bounded by √n. Thus, ∇F_2(β*) is a vector with independent, symmetric entries with bounded variance, so its behavior can easily be studied through standard concentration bounds. In particular, by a simple application of Hoeffding's inequality, we obtain, with high probability,

‖∇F_2(β*)‖_max = max_{j∈[d]} | Σ_{i∈[n]} f_2'[η_i]·X_ij | ≤ O(√(n log d)).   (3.2)
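The concentration bound Eq. (3.2) is easy to check empirically. The following sketch (hypothetical dimensions and noise scale) draws a design with columns of norm √n and a symmetric noise vector, and verifies that ‖∇F_2(β*)‖_max / √(n log d) stays bounded by a small constant:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 2000, 500
ratios = []
for _ in range(20):
    X = rng.standard_normal((n, d))
    X *= np.sqrt(n) / np.linalg.norm(X, axis=0)   # columns scaled to norm sqrt(n)
    eta = 10.0 * rng.standard_normal(n)           # symmetric oblivious noise
    g = X.T @ np.clip(eta, -2.0, 2.0)             # gradient of F_2 at beta*
    ratios.append(np.abs(g).max() / np.sqrt(n * np.log(d)))
print(min(ratios), max(ratios))  # bounded by a small constant, as Eq. (3.2) predicts
```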
Local strong convexity of the Huber loss. Proving local strong convexity presents additional challenges. Without the sparsity constraint, (dNS21) showed that under slightly stronger spreadness assumptions than those of Theorem 2.2, the Huber loss is locally strongly convex within a constant-radius ball centered at β* whenever n ≳ d/α². (The function is not globally strongly convex due to its linear parts.) Using their result as a black box, one can obtain the error guarantees of Theorem 2.2, but with suboptimal sample complexity. The issue is that with the substantially smaller sample size n ≥ Õ(k/α²) that resembles the usual considerations in the context of sparse regression, the Huber loss is not locally strongly convex around β* uniformly across all directions, so we cannot hope to prove convergence with optimal sample complexity using this argument. To overcome this obstacle, we make use of the framework of M-estimators: since we consider a regularized version of the Huber loss, it is enough to show local strong convexity within radius R uniformly across all directions which are approximately k-sparse. For this substantially weaker condition, Õ(k/α²) samples will suffice.

In more detail, for observations y = Xβ* + η and an arbitrary u ∈ ℝ^d of norm ‖u‖ ≤ R, it is possible to lower bound the Hessian of the Huber loss at β* + u by

H_{F_2}(β* + u) = Σ_{i=1}^n f_2''[(Xu)_i − η_i]·X_iX_i^⊤ = Σ_{i=1}^n 1[|(Xu)_i − η_i| ≤ 2]·X_iX_i^⊤ ⪰ M(u) := Σ_{i=1}^n 1[|⟨X_i, u⟩| ≤ 1]·1[|η_i| ≤ 1]·X_iX_i^⊤.

(The Hessian does not exist everywhere; nevertheless, the second derivative of the penalty function f_2 exists as an L1 function in the sense that f_2'(b) − f_2'(a) = ∫_a^b 1[|t| ≤ 2] dt, which is enough for our purposes. A more extensive explanation of the first part of this analysis can be found in (dNS21).) As can be observed, we do not attempt to exploit cancellations between Xu and η. Let Q := { i ∈ [n] : |η_i| ≤ 1 } be the set of uncorrupted entries of η. Given that with high probability Q has size Ω(α·n), the best outcome we can hope for is a lower bound of the form ⟨u, M(u)u⟩ ≳ α·n in the direction β̂ − β*. In (dNS21), it was shown that if the span of the measurement matrix X is well spread, then ⟨u, M(u)u⟩ ≥ Ω(α·n).

If the direction β̂ − β* were fixed, it would suffice to show curvature in that single direction through the above reasoning. However, β̂ depends on the unknown random noise vector η. Without the regularizer in Eq. (2.3), this dependence means that the vector β̂ may take any possible direction, so one needs to ensure local strong convexity in a constant-radius ball centered at β*. That is,

min_{‖u‖ ≤ R} λ_min(M(u)) ≥ Ω(α·n).

It can be shown through a covering argument (of the ball) that this bound holds for n ≳ d/α². This is the approach of (dNS21).

The minimizer of the Huber loss follows a sparse direction. The main issue with the above approach is that it uses no information about the direction β̂ − β*. In the settings of sparse regression, however, our estimator contains the regularizer ‖β‖_1. The main consequence of the regularizer is that the direction β̂ − β* is approximately flat, in the sense that ‖β̂ − β*‖_1 ≤ O(√k·‖β̂ − β*‖). The reason (a consequence of the decomposability of the ℓ1 norm; see the supplementary material) is that due to the structure of the objective function in Eq. (2.3) and the concentration of the gradient, Eq. (3.2), the penalty for dense vectors is larger than the absolute value of the inner product ⟨∇F_2(β*), β̂ − β*⟩ (which, as previously argued, concentrates around its zero expectation). This specific structure of the minimizer implies that it suffices to prove local strong convexity only in approximately sparse directions. For this set of directions, we carefully construct a sufficiently small covering set so that n ≥ Õ(k/α²) samples suffice to ensure local strong convexity over it.

Remark 3.1 (Comparison with LASSO). It is important to remark that while this approach of considering only approximately sparse directions has also been used in the context of sparse regression under Gaussian noise (e.g., the LASSO estimator), obtaining the desired lower bound is considerably easier in those settings, as it directly follows from the restricted eigenvalue property of the design matrix. In our case, we require an additional careful probabilistic analysis, which uses a covering argument for the set of approximately sparse vectors. As it turns out, however, we do not need any additional assumptions on the design matrix compared with the LASSO estimator, except for the well-spreadness property (recall that some weak version of well-spreadness is indeed necessary in robust settings; see (dNS21)).

3.2 Principal component analysis under oblivious noise

A convenient feature of the approach in Section 3.1 for sparse regression is that it can easily be applied to additional problems. We briefly explain here how to apply it to principal component analysis. We consider the model defined in Theorem 2.1 and use an estimator based on the Huber loss equipped with the nuclear norm as a regularizer to enforce the low-rank structure:

L̂ := argmin_{L ∈ ℝ^{n×n}, ‖L‖_max ≤ ρ/n} ( F_{ζ+ρ/n}(Y − L) + 100·√n·(ζ + ρ/n)·‖L‖_nuc ).   (3.3)

In this setting, the gradient ∇F_{ζ+ρ/n}(Y − L*) is a matrix with independent, symmetric entries which are bounded (by ζ + ρ/n), and hence its spectral norm is O((ζ + ρ/n)·√n) with high probability. Local strong convexity can be obtained in a similar fashion as in Section 3.1: due to the choice of the Huber transition point, all entries with small noise lie in the quadratic part of F. Moreover, the nuclear norm regularizer ensures that the minimizer is an approximately low-rank matrix, in the sense that ‖M‖_nuc ≤ O(√r·‖M‖_F). So again, it suffices to establish curvature of the loss function only on this subset of structured directions.

Remark 3.2 (Incoherence vs. spikiness). Recall the discussion on incoherence in Section 2.
If L* does not satisfy the µ-incoherence conditions for any µ ≤ n/log² n, the results of (CLMW11; ZLW+10) cannot be applied, while our estimator still achieves error ‖L̂ − L*‖_F/‖L*‖_F → 0 as n → ∞. Indeed, let f(n) satisfy ω(1) ≤ f(n) ≤ o(log² n) and assume ζ = 0. Let u ∈ ℝ^n be an f(n)-sparse unit vector whose nonzero entries are equal to 1/√f(n), and let v ∈ ℝ^n be the vector with all entries equal to 1/√n. Then uv^⊤ does not satisfy µ-incoherence for any µ < n/f(n). We have ‖uv^⊤‖_F = 1, and the error of our estimator is O(1/(α·√f(n))), so it tends to zero for constant (or even some sub-constant) α.

Furthermore, notice that the dependence of the error in Theorem 2.1 on the maximal entry of L* is inherent if we do not require incoherence. Indeed, consider L_1 = b·e_1e_1^⊤ for large enough b > 0 and L_2 = e_2e_2^⊤. For constant α, let |N_ij| be 1 with probability α/2, 0 with probability α/2, and b with probability 1 − α. Then, given Y, we cannot even distinguish between the cases L* = L_1 and L* = L_2, and since ‖L_1 − L_2‖_F ≥ b, the error must also depend on b.

Remark 3.3 (α vs. α²: what if one knows which entries are corrupted?). As observed in Section 2, the error bound of our estimator is worse than the error for matrix completion by a factor of √(1/α). We observe a similar effect in linear regression: if, as in matrix completion, we are given a randomly chosen α fraction of observations {(X_i, y_i = ⟨X_i, β*⟩ + η_i)}_{i=1}^n with η ∼ N(0,1)^n, and for the remaining samples we may not assume any bound on the signal-to-noise ratio, then this problem is essentially the same as linear regression with αn observations; thus the optimal prediction error rate is Θ(√(d/(αn))). If instead we have y = Xβ* + η with η ∼ N(0, 1/α²)^n, then each |η_i| ≤ 1 with probability Θ(α), but the optimal prediction error rate in this case is Θ(√(d/(α²n))). So in both linear regression and robust PCA, prior knowledge of the set of corrupted entries makes the problem easier.

3.3 Optimal fraction of inliers for principal component analysis under oblivious noise

In order to prove Theorem 2.4, we adopt a generative model for the hidden matrix L*: we generate L* randomly but assume that its distribution is known to the algorithm. This makes the problem easier; therefore, any impossibility result for this generative model implies impossibility for the more restrictive model in which L* is deterministic but unknown. We generate a random flat matrix L* using n·r independent and uniform random bits, in such a way that L* is of rank r and incoherent with high probability. Then, for every constant 0 < ξ < 1, we find a distribution for the random noise N such that the fraction of inliers satisfies α := ℙ[|N_ij| ≤ ζ] = Θ(ξ·√(r/n)), and such that the mutual information between L* and Y = L* + N can be upper bounded as I(L*; Y) ≤ O(ξ·n·r). Roughly speaking, the smaller ξ gets, the more independent L* and Y become. Now, using an inequality similar to the standard Fano inequality but adapted to weak recovery, we show that if there is a successful (ε, δ)-weak recovery algorithm for L* and N, then I(L*; Y) ≥ Ω((1 − ε²)²·(1 − δ)·n·r). Combining these observations, we deduce that if ξ is small enough, it is impossible to have a successful (ε, δ)-weak recovery algorithm for L* and N.

Acknowledgments and Disclosure of Funding

The authors thank the anonymous reviewers for useful comments.
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 815464).
1. What is the focus of the paper regarding sparse regression and principal component analysis?
2. What are the strengths of the proposed approach, particularly in its ability to handle overwhelming oblivious noise?
3. Are there any concerns or suggestions regarding the paper's citations and references?
4. How does the proposed method compare to other existing works on robust linear regression using Huber loss?
5. Can you provide more details regarding the condition on α and its optimality?
6. What is the significance of the obtained error rates and dependence on α for a wide range of distribution assumptions?
Summary Of The Paper

The paper studies algorithms for sparse regression and principal component analysis using the Huber loss in the presence of overwhelming oblivious noise. In particular, the fraction of inliers, α, goes to 0, but it is still possible to achieve consistency because the noise model is benign (oblivious, symmetric noise). The paper obtains optimal error rates and dependence on α for a wide range of distributional assumptions. These rates are achieved by using a regularized Huber loss (nuclear norm for the matrix case and ℓ1-norm for the regression case).

Review

I think the paper makes significant contributions for sparse (and rank-constrained) models in the presence of overwhelming noise. The proof strategy follows the standard analysis of regularized high-dimensional models [Wai19], in combination with a recent line of work on Huber regression for optimal error in robust regression ([SFZ19, PJL20, SF20, dNS21]). The paper is also well-written and usually clear. I thus recommend "accept". Some comments and suggestions are attached below.

Main Comment

Citations to several existing works on robust linear regression that use the Huber loss are missing: [SFZ19, PJL20, SF20]. (Surprisingly, [SFZ19] is listed in the references but not mentioned in the paper; [SFZ19] is not even cited in the full paper attached as the supplementary material.) These papers also follow the same proof strategy. In particular, regarding Line 201: the (non-sparse version of the) well-spreadness condition looks very similar to the weak stability of [PJL20] (also see [PJL20, Theorem 3.1]). Although the aforementioned works consider a different corruption model, there seems to be a strong relation in techniques, which should therefore be discussed.

Other Comments

• Please make it explicit early on that the regression setting only allows corruption in the responses (and not the covariates). Assuming clean covariates simplifies the problem considerably.
• Lines 48-55: It says that the error will be measured in ‖·‖, but the following text in Section 1 states rates for the error in ‖·‖².
• Footnote 3: The second sentence should make it explicit that the reduction is for Gaussian design. I found it confusing at first.
• Line 123: Define the maximum norm; it is less standard than the others.
• Lines 204-209: It is not clear from these lines that the condition on α is optimal. Please add more details for clarity.
• When referencing books, please mention the specific result being used (or a chapter for a broad topic).

References

[SF20] Sasai, T. & Fujisawa, H. Robust estimation with Lasso when outputs are adversarially contaminated. arXiv:2004.05990 (2020).
[PJL20] Pensia, A., Jog, V. & Loh, P. Robust regression with covariate filtering: Heavy tails and adversarial contamination. arXiv:2009.12976 (2020).
[SFZ19] Sun, Q., Zhou, W. & Fan, J. Adaptive Huber Regression. arXiv:1706.06991 (2019).

After the Author Feedback

I thank the authors for their thoughtful response. I am maintaining my score and thus recommend acceptance of the paper.
NIPS
Title Consistent Estimation for PCA and Sparse Regression with Oblivious Outliers Abstract We develop machinery to design efficiently computable and consistent estimators, achieving estimation error approaching zero as the number of observations grows, when facing an oblivious adversary that may corrupt responses in all but an α fraction of the samples. As concrete examples, we investigate two problems: sparse regression and principal component analysis (PCA). For sparse regression, we achieve consistency for optimal sample size n & (k log d)/α2 and optimal error rate O( √ (k log d)/(n · α2)) where n is the number of observations, d is the number of dimensions and k is the sparsity of the parameter vector, allowing the fraction of inliers to be inverse-polynomial in the number of samples. Prior to this work, no estimator was known to be consistent when the fraction of inliers α is o(1/log log n), even for (non-spherical) Gaussian design matrices. Results holding under weak design assumptions and in the presence of such general noise have only been shown in dense setting (i.e., general linear regression) very recently by d’Orsi et al. (dNS21). In the context of PCA, we attain optimal error guarantees under broad spikiness assumptions on the parameter matrix (usually used in matrix completion). Previous works could obtain non-trivial guarantees only under the assumptions that the measurement noise corresponding to the inliers is polynomially small in n (e.g., Gaussian with variance 1/n2). To devise our estimators, we equip the Huber loss with non-smooth regularizers such as the `1 norm or the nuclear norm, and extend d’Orsi et al.’s approach (dNS21) in a novel way to analyze the loss function. Our machinery appears to be easily applicable to a wide range of estimation problems. We complement these algorithmic results with statistical lower bounds showing that the fraction of inliers that our PCA estimator can deal with is optimal up to a constant factor. N/A We develop machinery to design efficiently computable and consistent estimators, achieving estimation error approaching zero as the number of observations grows, when facing an oblivious adversary that may corrupt responses in all but an α fraction of the samples. As concrete examples, we investigate two problems: sparse regression and principal component analysis (PCA). For sparse regression, we achieve consistency for optimal sample size n & (k log d)/α2 and optimal error rate O( √ (k log d)/(n · α2)) where n is the number of observations, d is the number of dimensions and k is the sparsity of the parameter vector, allowing the fraction of inliers to be inverse-polynomial in the number of samples. Prior to this work, no estimator was known to be consistent when the fraction of inliers α is o(1/log log n), even for (non-spherical) Gaussian design matrices. Results holding under weak design assumptions and in the presence of such general noise have only been shown in dense setting (i.e., general linear regression) very recently by d’Orsi et al. (dNS21). In the context of PCA, we attain optimal error guarantees under broad spikiness assumptions on the parameter matrix (usually used in matrix completion). Previous works could obtain non-trivial guarantees only under the assumptions that the measurement noise corresponding to the inliers is polynomially small in n (e.g., Gaussian with variance 1/n2). 
To devise our estimators, we equip the Huber loss with non-smooth regularizers such as the `1 norm or the nuclear norm, and extend d’Orsi et al.’s approach (dNS21) in a novel way to analyze the loss function. Our machinery appears to be easily applicable to a wide range of estimation problems. We complement these algorithmic results with statistical lower bounds showing that the fraction of inliers that our PCA estimator can deal with is optimal up to a constant factor. 1 Introduction Estimating information from structured data is a central theme in statistics that by now has found applications in a wide array of disciplines. On a high level, a typical assumption in an estimation problem is the existence of a –known a priori– family of probability distributions P : { β β ∈ Ω} over some spaceZ that are each indexed by some parameter β ∈ Ω. We then observe a collection 35th Conference on Neural Information Processing Systems (NeurIPS 2021). of n independent observations Z (Z1 , . . . , Zn) drawn from an unknown probability distribution β∗ ∈ P. The goal is to (approximately) recover the hidden parameter β∗.1 Oftentimes, real-world data may contain skewed, imprecise or corrupted measurements. Hence, a desirable property for an estimator is to be robust to significant, possibly malicious, noise perturbations on the given observations. Indeed, in the last two decades, a large body of work has been developed on designing robust algorithms (e.g see (BGN09; BBC11; MMYSB19)). However, proving strong guarantees often either demands strong assumptions on the noise model or requires that the fraction of perturbed observations is small. More concretely, when we allow the noise to be chosen adaptively, i.e., chosen dependently on the observations and hidden parameters, a common theme is that consistent estimators – estimators whose error tends to zero as the number of observations grows – can be attained only when the fraction of outliers is small. In order to make vanishing error possible in the presence of large fractions of outliers, it is necessary to consider weaker adversary models that are oblivious to the underlying structured data. In recent years, a flurry of works have investigated oblivious noise models (CLMW11; ZLW+10; TJSO14; BJKK17; SZF18; SBRJ19; PJL21; dNS21). These results however are tailor-made to the specific models and problems. To overcome this limitation, in this paper we aim to provide a simple blueprint to design provably robust estimators under minimalistic noise assumptions for a large class of estimation problems. As a testbed for our blueprint, we investigate two well-studied problems: Principal component analysis (PCA): Given a matrix Y : L∗ + N where L∗ ∈ n×n is an unknown parameter matrix and N is an n-by-n random noise matrix, the goal is to find an estimator L̂ for L∗ which is as close as possible to L∗ in Frobenius norm. Sparse regression: Given observations (X1 , y1), . . . , (Xn , yn) following the linear model yi 〈Xi , β∗〉 + ηi where Xi ∈ d , β∗ ∈ d is the k-sparse parameter vector of interest (by k-sparse we mean that it has at most k nonzero entries) and η1 , . . . ηn is noise, the goal is to find an estimator β̂ for β∗ achieving small squared prediction error 1n ‖X(β̂ − β∗)‖2, where X is the matrix whose rows are X1 , . . . ,Xn .2 Principal component analysis. 
A natural way to describe principal component analysis under oblivious perturbations is that of assuming the noise matrix N to be an n-by-n matrix with a uniformly random set of α · n2 entries bounded by some small ζ > 0 in absolute value. In these settings, we may think of the α fraction of entries with small noise as the set of uncorrupted observations. Moreover, for ζ > 0, the fact that even for uncorrupted observations the noise is non-zero allows us to capture both gross sparse errors and small entry-wise noise in the measurements at the same time (for example if ζ 1 then the model captures settings with additional standard Gaussian noise). Remarkably, for ζ 0, Candès et al.’s seminal work (CLMW11) provided an algorithm that exactly recovers L∗ even for a vanishing fraction of inliers, under incoherence conditions on the signal L∗. The result was slightly extended in (ZLW+10) where the authors provided an algorithm recovering L∗ up to squared error O(ζ2 · n4) thus allowing polynomially small measurement noise ζ, but still failing to capture settings where standard Gaussian measurement noise is added to the sparse noise. Even for simple signal matrices L∗, prior to this work, it remained an open question whether consistent estimators could be designed in presence of both oblivious noise and more reasonable measurement noise (e.g., standard Gaussian). Linear regression. Similarly to the context of principal component analysis, a convenient model for oblivious adversarial corruptions is that of assuming the noise vector η to have a random set of α · n entries bounded by ζ in absolute value (here all results of interest can be extended to any ζ > 0 just by scaling, so we consider only the case ζ 1). This model trivially captures the classical settings with Gaussian noise (a Gaussian vector with variance σ2 will have an α Θ( 1σ ) fraction of inliers) and, again, allows us to think of the α-fraction of entries with small noise as the set of uncorrupted observations. Early works on consistent regression focused on the regime with Gaussian design X1 , . . . ,Xn ∼ N(0,Σ) and deterministic noise η.3 (BJKK17) presented an estimator achieving error 1In this paper, we assume Ω ⊆ d andZ ⊆ D for some d ,D, denote random variables in bold face, and hide absolute constant factors with the notation O(·),Ω(·) , & , . and logarithmic factors with Õ(·), Ω̃(·). 2Our analysis also works for the parameter error ‖β∗ − β̂‖. 3For Gaussian design X one may also consider a deterministic noise model. This noise model is subsumed by the random noise model discussed here (if X is Gaussian). As shown in (dNS21), roughly speaking the Õ(d/(α2 · n)) for any α larger than a fixed constant. (SBRJ19) extended this result by achieving comparable error rates even for a vanishing fraction of inliers α & 1/log log n. Assuming n & d2/α2 (TJSO14) proved that the Huber-loss estimator (Hub64) achieves optimal error rate O(d/α2 · n) even for polynomially small fraction of inliers α & √ d/n. This line of work culminated in (dNS21) which extended the result of (TJSO14) achieving the same guarantees with optimal sample complexity n & d/α2. For Gaussian design, similar result as (dNS21) can be extracted from independent work (PF20). Furthermore, the authors of (dNS21) extended these guarantees to deterministic design matrices satisfying only a spreadness condition (trivially satisfied by sub-Gaussian design matrices). Huber loss was also analyzed in context of linear regression in (SZF18; DT19; SF20; PJL21). 
Huber loss was also analyzed in the context of linear regression in (SZF18; DT19; SF20; PJL21). These works studied models that differ from ours, and their results do not apply in the case $\alpha = o(1)$: (SZF18) used assumptions on the moments of the noise, (DT19) and (SF20) studied the model with a non-oblivious adversary, and the model from (PJL21) allows corruptions in the covariates.

In the context of sparse regression, less is known. When the fraction of uncorrupted observations is constant, $\alpha \ge \Omega(1)$, (NTN11) provided the first consistent estimator. Later, (DT19) and (SF20) improved that result (assuming $\alpha \ge \Omega(1)$; these results also hold against a non-oblivious adversary). In the case $\alpha = o(1)$, the algorithm of (SBRJ19) (for Gaussian designs) achieves the nearly optimal error convergence $\tilde{O}\big((k \log^2 d)/(\alpha^2 n)\big)$, but requires $\alpha \gtrsim 1/\log\log n$. More recently, (dNS21) presented an algorithm for standard Gaussian design $X \sim N(0,1)^{n \times d}$, achieving the nearly optimal error convergence $\tilde{O}(k/(\alpha^2 \cdot n))$ for the nearly optimal inlier fraction $\alpha \gtrsim \sqrt{(k \cdot \log^2 d)/n}$. Both (SBRJ19) and (dNS21) use an iterative process and require a bound $\|\beta^*\| \le d^{O(1)}$ (for larger $\|\beta^*\|$ these algorithms also work, but the fraction of inliers or the error convergence is worse). The algorithm from (dNS21), however, heavily relies on the assumption that $X \sim N(0,1)^{n \times d}$ and appears unlikely to generalize to broader families of matrices (including non-spherical Gaussian designs).

Our contribution. We propose new machinery to design efficiently computable consistent estimators achieving optimal error rates and sample complexity against oblivious outliers. In particular, we extend the approach of (dNS21) to structured estimation problems by finding a way to adequately exploit the structure therein. While consistent estimators have already been designed under more benign noise assumptions (e.g., the LASSO estimator for sparse linear regression under Gaussian noise), it was previously unclear how to exploit this structure in the setting of oblivious noise. One key consequence of our work is hence to demonstrate what minimal assumptions on the noise are sufficient to make effective recovery (in the sense above) possible. Concretely, we show:

Oblivious PCA: Under mild assumptions on the noise matrix $N$ and a common assumption on the parameter matrix $L^*$ (traditionally applied in the context of matrix completion (NW12)), we provide an algorithm that achieves optimal error guarantees.

Sparse regression: Under mild assumptions on the design matrix and the noise vector (similar to the ones used in (dNS21) for dense parameter vectors $\beta^*$), we provide an algorithm that achieves optimal error guarantees and sample complexity.

For both problems, our analysis improves over the state of the art and recovers the classical optimal guarantees, not only for Gaussian noise, but also under much less restrictive noise assumptions. At a high level, we achieve the above results by equipping the Huber-loss estimator with appropriate regularizers. Our techniques closely follow standard analyses for M-estimators, but crucially depart from them when dealing with the observations with large perturbations. Furthermore, our analysis appears to be mechanical and thus easily applicable to many different estimation problems.

2 Results

Our estimators are based on regularized versions of the Huber loss. The regularizer we choose depends on the underlying structure of the estimation problem: we use $\ell_1$ regularization to enforce sparsity in linear regression and nuclear-norm regularization to enforce a low-rank structure in the context of PCA.
More formally, the Huber penalty is defined as the function $f_h : \mathbb{R} \to \mathbb{R}_{\ge 0}$ given by
$$f_h(t) := \begin{cases} \frac{1}{2} t^2 & \text{for } |t| \le h\,, \\ h\big(|t| - \frac{h}{2}\big) & \text{otherwise,} \end{cases} \tag{2.1}$$
where $h > 0$ is a penalty parameter. For $X \in \mathbb{R}^D$, the Huber loss is defined as the function $F_h(X) := \sum_{i \in [D]} f_h(X_i)$. We will define the regularized versions in the following sections. For a matrix $A$, we use $\|A\|$, $\|A\|_{\mathrm{nuc}}$, $\|A\|_F$, $\|A\|_{\max}$ to denote its spectral, nuclear, Frobenius, and maximum⁴ norms, respectively. For a vector $v$, we use $\|v\|$ and $\|v\|_1$ to denote its $\ell_2$ and $\ell_1$ norms.

⁴For an $n \times m$ matrix $A$, $\|A\|_{\max} = \max_{i \in [n], j \in [m]} |A_{ij}|$.

2.1 Oblivious principal component analysis

For oblivious PCA, we provide guarantees for the following estimator ($\zeta$ will be defined shortly):
$$\hat{L} := \operatorname*{argmin}_{L \in \mathbb{R}^{n \times n},\ \|L\|_{\max} \le \rho/n} \Big( F_h(Y - L) + 100\sqrt{n}\,(\zeta + \rho/n)\,\|L\|_{\mathrm{nuc}} \Big)\,. \tag{2.2}$$

Theorem 2.1. Let $L^* \in \mathbb{R}^{n \times n}$ be an unknown deterministic matrix and let $N$ be an $n$-by-$n$ random matrix with independent, symmetrically distributed (about zero) entries and $\alpha := \min_{i,j \in [n]} \mathbb{P}\{|N_{ij}| \le \zeta\}$ for some $\zeta > 0$. Suppose that $\mathrm{rank}(L^*) = r$ and $\|L^*\|_{\max} \le \rho/n$. Then, with probability at least $1 - 2^{-n}$ over $N$, given $Y = L^* + N$, $\zeta$ and $\rho$, the estimator Eq. (2.2) with Huber parameter $h = \zeta + \rho/n$ satisfies
$$\big\|\hat{L} - L^*\big\|_F \le O\Big(\frac{\sqrt{rn}}{\alpha}\Big) \cdot (\zeta + \rho/n)\,.$$

We first compare the guarantees of Theorem 2.1 with the previous results on robust PCA (CLMW11; ZLW+10).⁵ The first difference is that they require $L^*$ to satisfy certain incoherence conditions.⁶ Concretely, they provide theoretical guarantees for $r \le O\big(\mu^{-1} n (\log n)^{-2}\big)$, where $\mu$ is the incoherence parameter. In certain regimes, such a constraint strongly binds with the eigenvectors of $L^*$, restricting the set of admissible signal matrices. Using the different assumption $\|L^*\|_{\max} \le \rho/n$ (commonly used for matrix completion; see (NW12)), we can obtain nontrivial guarantees (i.e., $\|\hat{L} - L^*\|_F / \|L^*\|_F \to 0$ as $n \to \infty$) even when the $\mu$-incoherence conditions are not satisfied for any $\mu \le n/\log^2 n$, and hence the results of (CLMW11; ZLW+10) cannot be applied. We remark that, without assuming incoherence, the dependence of the error on the maximal entry of $L^*$ is inherent (see Remark 3.2). The second difference is that Theorem 2.1 provides a significantly better dependence on the magnitude $\zeta$ of the entry-wise measurement error. Specifically, in the setting of Theorem 2.1, (ZLW+10) showed that if $L^*$ satisfies the incoherence conditions, the error of their estimator is $O(n^2 \zeta)$. If the entries of $N$ are standard Gaussian with probability $\alpha$ (and hence $\zeta \le O(1)$), and the entries of $L^*$ are bounded by $O(1)$, then the error of our estimator is $O(\sqrt{rn}/\alpha)$, which is considerably better than the $O(n^2)$ of (ZLW+10). On the other hand, their error does not depend on the magnitude $\rho/n$ of the signal entries, so in the extreme regimes where the singular vectors of $L^*$ satisfy the incoherence conditions but $L^*$ has very large singular values (so that the magnitude of the entries of $L^*$ is significantly larger than $n$), their analysis provides better guarantees than Theorem 2.1.

⁵We remark that in (CLMW11) the authors showed that they can consider non-symmetric noise when the fraction of inliers is large, $\alpha > 1/2$. However, for a smaller fraction of inliers their analysis requires the entries of the noise to be symmetric and independent, so for $\alpha < 1/2$ their assumptions are captured by Theorem 2.1.

⁶A rank-$r$ $n \times n$ matrix $M$ is $\mu$-incoherent if its singular value decomposition $M = U \Sigma V^\top$ satisfies $\max_{i \in [n]} \|U^\top e_i\|^2 \le \frac{\mu r}{n}$, $\max_{i \in [n]} \|V^\top e_i\|^2 \le \frac{\mu r}{n}$, and $\|U V^\top\|_\infty \le \sqrt{\frac{\mu r}{n}}$.
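The estimator Eq. (2.2) is a convex program, so for moderate $n$ it can be prototyped directly with an off-the-shelf solver. The sketch below is our own minimal illustration, not the paper's implementation; note that cvxpy's `huber(x, M)` equals $2 f_M(x)$ in the notation of Eq. (2.1), hence the factor $1/2$.

```python
import cvxpy as cp
import numpy as np

def oblivious_pca(Y, rho, zeta):
    """Prototype of estimator (2.2): Huber loss plus nuclear-norm penalty,
    over matrices whose entries are bounded by rho/n in absolute value."""
    n = Y.shape[0]
    h = zeta + rho / n                       # Huber parameter of Theorem 2.1
    L = cp.Variable((n, n))
    # cvxpy's huber(x, M) equals 2 * f_M(x), so halve it to match Eq. (2.1).
    loss = 0.5 * cp.sum(cp.huber(Y - L, h))
    reg = 100 * np.sqrt(n) * h * cp.normNuc(L)
    cp.Problem(cp.Minimize(loss + reg), [cp.abs(L) <= rho / n]).solve()
    return L.value
```

Nuclear-norm programs solved this way scale poorly, so this is only practical for small $n$; a first-order alternative is sketched in Section 3.2 below.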
As another observation to understand Theorem 2.1, notice that our robust PCA model also captures the classical matrix completion setting. In fact, any instance of matrix completion can easily be transformed into an instance of our PCA model: for the entries $(i,j)$ that we do not observe, we can set $N_{ij}$ to some arbitrarily large value $\pm C(\rho, n) \gg \rho/n$, making the signal-to-noise ratio of the entry arbitrarily small. The observed (i.e., uncorrupted) $\Theta(\alpha \cdot n^2)$ entries may additionally be perturbed by Gaussian noise with variance $\Theta(\zeta^2)$. The error guarantee of the estimator in Theorem 2.1 is $O\big((\rho/n + \zeta)\sqrt{rn}/\alpha\big)$. Thus, the dependence on the parameters $\rho$, $n$, $\zeta$, and $r$ is the same as in matrix completion, and the error is within a factor of $\Theta(\sqrt{1/\alpha})$ of the optimum for matrix completion. However, this worse dependence on $\alpha$ is intrinsic to the more general model considered, and it turns out to be optimal (see Theorem 2.4). On a high level, the additional factor of $\Theta(\sqrt{1/\alpha})$ comes from the fact that in our PCA model we do not know which entries are corrupted. The main consequence of this phenomenon is that a condition of the form $\alpha \gtrsim \sqrt{r/n}$ appears inherent to achieving consistency. To get some intuition on why this condition is necessary, consider the Wigner model where we are given a matrix $Y = xx^\top + \sigma W$ for a flat vector $x \in \{\pm 1\}^n$ and a standard Gaussian matrix $W$. Note that the entries of $\sigma W$ fit our noise model for $\zeta = 1$, $\rho/n = 1$, $r = 1$ and $\alpha = \Theta(1/\sigma)$. The spectral norm of $\sigma W$ concentrates around $2\sigma\sqrt{n}$, and thus it is information-theoretically impossible to approximately recover the vector $x$ for $\sigma = 1/\alpha = \omega(\sqrt{n})$ (see (PWBM16)).
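The $2\sigma\sqrt{n}$ concentration driving this lower-bound intuition is easy to see numerically; the quick check below (with arbitrary $n$ and $\sigma$ of our choosing) compares the spectral norms of the signal and the noise.

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma = 1000, 5.0
W = rng.normal(size=(n, n))                  # standard Gaussian matrix
x = rng.choice([-1.0, 1.0], size=n)          # flat signal vector
print(f"||x x^T||       = {np.linalg.norm(np.outer(x, x), 2):.0f}  (= n)")
print(f"||sigma * W||   = {np.linalg.norm(sigma * W, 2):.0f}")
print(f"2 sigma sqrt(n) = {2 * sigma * np.sqrt(n):.0f}")
```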
2.2 Sparse regression

Our regression model considers a fixed design matrix $X \in \mathbb{R}^{n \times d}$ and observations $y := X\beta^* + \eta \in \mathbb{R}^n$, where $\beta^*$ is an unknown $k$-sparse parameter vector and $\eta$ is random noise with $\mathbb{P}(|\eta_i| \le 1) \ge \alpha$ for all $i \in [n]$. Earlier works (BJKK17; SBRJ19) focused on the setting where the design matrix consists of i.i.d. rows with Gaussian distribution $N(0, \Sigma)$ and the noise is $\eta = \zeta + w$, where $\zeta$ is a deterministic $(\alpha \cdot n)$-sparse vector and $w$ is subgaussian. As in (dNS21), our results for a fixed design and random noise can, in fact, be extended to yield the same guarantees for this earlier setting (see Theorem 2.3). Hence, a key advantage of our results is that the design $X$ does not have to consist of Gaussian entries. Remarkably, we can handle arbitrary deterministic designs as long as they satisfy some mild conditions. Concretely, we make the following three assumptions, the first two of which are standard in the sparse regression literature (e.g., see (Wai19), Section 7.3):

1. Column norms: for every column $X^i$ of $X$, $\|X^i\| \le \sqrt{n}$.

2. Restricted eigenvalue property (RE-property): for every vector $u \in \mathbb{R}^d$ such that⁷ $\|u_{\mathrm{supp}(\beta^*)}\|_1 \ge 0.1 \cdot \|u\|_1$, we have $\frac{1}{n}\|Xu\|^2 \ge \lambda \cdot \|u\|^2$ for some parameter $\lambda > 0$.

3. Well-spreadness property: for some (large enough) $m \in [n]$, for every vector $u \in \mathbb{R}^d$ such that $\|u_{\mathrm{supp}(\beta^*)}\|_1 \ge 0.1 \cdot \|u\|_1$, and for every subset $S \subseteq [n]$ with $|S| \ge n - m$, it holds that $\|(Xu)_S\| \ge \frac{1}{2}\|Xu\|$.

⁷For a vector $v \in \mathbb{R}^d$ and a set $S \subseteq [d]$, we denote by $v_S$ the restriction of $v$ to the coordinates in $S$.

Denote $F_2(\beta) := \sum_{i=1}^n f_2\big(y_i - \langle X_i, \beta\rangle\big)$, where $X_i$ are the rows of $X$ and $f_2$ is as in Eq. (2.1). We devise our estimator for sparse regression and state its statistical guarantees below:
$$\hat{\beta} := \operatorname*{argmin}_{\beta \in \mathbb{R}^d} \Big( F_2(\beta) + 100\sqrt{n \log d} \cdot \|\beta\|_1 \Big)\,. \tag{2.3}$$

Theorem 2.2. Let $\beta^* \in \mathbb{R}^d$ be an unknown $k$-sparse vector, and let $X \in \mathbb{R}^{n \times d}$ be a deterministic matrix whose columns satisfy $\|X^i\| \le \sqrt{n}$, satisfying the RE-property with $\lambda > 0$ and the well-spreadness property with $m \gtrsim \frac{k \log d}{\lambda \cdot \alpha^2}$ (recall that $n \ge m$). Further, let $\eta$ be an $n$-dimensional random vector with independent, symmetrically distributed (about zero) entries and $\alpha = \min_{i \in [n]} \mathbb{P}\{|\eta_i| \le 1\}$. Then, with probability at least $1 - d^{-10}$ over $\eta$, given $X$ and $y = X\beta^* + \eta$, the estimator Eq. (2.3) satisfies
$$\frac{1}{n}\big\|X(\hat{\beta} - \beta^*)\big\|^2 \le O\Big(\frac{1}{\lambda} \cdot \frac{k \log d}{\alpha^2 \cdot n}\Big) \quad\text{and}\quad \big\|\hat{\beta} - \beta^*\big\|^2 \le O\Big(\frac{1}{\lambda^2} \cdot \frac{k \log d}{\alpha^2 \cdot n}\Big)\,.$$

There are important considerations when interpreting this theorem. The first is the special case $\eta \sim N(0, \sigma^2)^n$, which satisfies our model for $\alpha = \Theta(1/\sigma)$. For this case, it is well known (e.g., see (Wai19), Section 7.3) that under the same RE assumption, the LASSO estimator achieves a prediction error rate of $O\big(\frac{\sigma^2}{\lambda} \cdot \frac{k \log d}{n}\big) = O\big(\frac{k \log d}{\lambda \cdot \alpha^2 \cdot n}\big)$, matching our result. Moreover, this error rate is essentially optimal. Under a standard assumption in complexity theory ($\mathbf{NP} \not\subseteq \mathbf{P}/\mathrm{poly}$), the RE assumption is necessary when considering polynomial-time estimators (ZWJ14). Further, this also shows that the dependence on the RE constant seems unavoidable. Under mild conditions on the design matrix (trivially satisfied if the rows are i.i.d. Gaussian with covariance $\Sigma$ whose condition number is constant), our guarantees are optimal up to constant factors for all estimators if $k \le d^{1-\Omega(1)}$ (e.g., $k \le d^{0.99}$); see (RWY11). This optimality also shows that our bound on the number of samples is best possible, since otherwise we would not be able to achieve vanishing error. The (non-sparse version of the) well-spreadness property was first used in the context of regression in (dNS21). In the same work, the authors also showed that, under oblivious noise assumptions, some weak form of spreadness property is indeed necessary.

The second consideration is the optimal dependence on $\alpha$: Theorem 2.2 achieves consistency as long as the fraction of inliers satisfies $\alpha = \omega\big(\sqrt{k \log d / n}\big)$. To get an intuition, observe that lower bounds for standard sparse regression show that, already for $\eta \sim N(0, \sigma \cdot \mathrm{Id}_n)$, it is possible to achieve consistency only for $n = \omega(\sigma^2 k \log d)$ (if $k \le d^{1-\Omega(1)}$). As for this $\eta$ the number of entries of magnitude at most 1 is $O(n/\sigma)$ with high probability, it follows that for $\alpha = \Theta(1/\sigma) \le O\big(\sqrt{(k \log d)/n}\big)$, no estimator is consistent. To the best of our knowledge, Theorem 2.2 is the first result to achieve consistency under such minimalistic noise settings and deterministic designs. Previous results (BJKK17; SBRJ19; dNS21) focused on the simpler setting of Gaussian design $X$ and deterministic noise, and provide no guarantees for more general models. Our techniques for Theorem 2.2 also extend to this case.
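Like (2.2), the estimator (2.3) is a convex program. The following is our own minimal cvxpy prototype (again with the `huber`/2 correction), run on synthetic data in the spirit of the earlier sketch; the toy sizes and the smaller regularization multiplier in the demo call are our assumptions, not the paper's.

```python
import cvxpy as cp
import numpy as np

def sparse_huber_regression(X, y, c=100.0):
    """Prototype of estimator (2.3): Huber loss (h = 2) plus l1 penalty;
    c = 100 is the conservative constant appearing in (2.3)."""
    n, d = X.shape
    beta = cp.Variable(d)
    loss = 0.5 * cp.sum(cp.huber(y - X @ beta, 2))   # F_2(beta)
    reg = c * np.sqrt(n * np.log(d)) * cp.norm1(beta)
    cp.Problem(cp.Minimize(loss + reg)).solve()
    return beta.value

# Toy run with heavy symmetric outliers; the smaller multiplier c = 3
# (our choice, not the paper's) recovers beta* better at this small scale.
rng = np.random.default_rng(3)
n, d, k, alpha = 500, 100, 5, 0.5
X = rng.normal(size=(n, d))
beta_star = np.zeros(d); beta_star[:k] = 1.0
eta = np.where(rng.random(n) < alpha,
               rng.uniform(-1, 1, n), rng.standard_cauchy(n) * 1e3)
beta_hat = sparse_huber_regression(X, X @ beta_star + eta, c=3.0)
print("||beta_hat - beta*|| =", np.linalg.norm(beta_hat - beta_star))
```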
Theorem 2.3. Let $\beta^* \in \mathbb{R}^d$ be an unknown $k$-sparse vector, and let $X$ be an $n$-by-$d$ random matrix with i.i.d. rows $X_1, \ldots, X_n \sim N(0, \Sigma)$ for a positive definite matrix $\Sigma$. Further, let $\eta \in \mathbb{R}^n$ be a deterministic vector with $\alpha \cdot n$ coordinates bounded by 1 in absolute value. Suppose that $n \gtrsim \frac{\nu(\Sigma) \cdot k \log d}{\sigma_{\min}(\Sigma) \cdot \alpha^2}$, where $\nu(\Sigma)$ is the maximum diagonal entry of $\Sigma$ and $\sigma_{\min}(\Sigma)$ is its smallest eigenvalue. Then, with probability at least $1 - d^{-10}$ over $X$, given $X$ and $y = X\beta^* + \eta$, the estimator Eq. (2.3) satisfies
$$\frac{1}{n}\big\|X(\hat{\beta} - \beta^*)\big\|^2 \le O\Big(\frac{\nu(\Sigma) \cdot k \log d}{\sigma_{\min}(\Sigma) \cdot \alpha^2 \cdot n}\Big) \quad\text{and}\quad \big\|\hat{\beta} - \beta^*\big\|^2 \le O\Big(\frac{\nu(\Sigma) \cdot k \log d}{\sigma_{\min}^2(\Sigma) \cdot \alpha^2 \cdot n}\Big)\,.$$

Even for standard Gaussian design $X \sim N(0,1)^{n \times d}$, the above theorem improves over previous results, which required the sub-optimal sample complexity $n \gtrsim (k/\alpha^2) \cdot \log d \cdot \log \|\beta^*\|$. For non-spherical Gaussian designs, the improvement over the state of the art (SBRJ19) is more substantial: their algorithm requires $\alpha \ge \Omega(1/\log\log n)$, while our Theorem 2.3 has no such restriction and works for all $\alpha \gtrsim \sqrt{\frac{\nu(\Sigma)}{\sigma_{\min}(\Sigma)} \cdot \frac{k \log d}{n}}$; in many interesting regimes, $\alpha$ is allowed to be smaller than $n^{-\Omega(1)}$. The dependence on $\alpha$ is nearly optimal: the estimator is consistent as long as $\alpha \ge \omega\big(\sqrt{\frac{\nu(\Sigma)}{\sigma_{\min}(\Sigma)} \cdot \frac{k \log d}{n}}\big)$, and by the discussion after Theorem 2.2, if $\alpha \le O\big(\sqrt{(k \log d)/n}\big)$, no estimator is consistent. Note that while we can handle general covariance matrices, to compare Theorem 2.3 with Theorem 2.2 it is easiest to consider $\Sigma$ in normalized form, with $\nu(\Sigma) \le 1$; this can always be achieved by scaling $X$. Also note that Theorem 2.2 can be generalized to the case $\|X^i\| \le \sqrt{\nu n}$ for arbitrary $\nu > 0$, in which case the error bounds and the bound on $m$ are multiplied by $\nu$.

The RE-property of Theorem 2.2 is a standard assumption in sparse regression and is satisfied by a large family of matrices. For example, with high probability, a random matrix $X$ with i.i.d. rows sampled from $N(0, \Sigma)$, with positive definite $\Sigma \in \mathbb{R}^{d \times d}$ whose diagonal entries are bounded by 1, satisfies the RE-property with parameter $\Omega(\sigma_{\min}(\Sigma))$ for all subsets of $[d]$ of size $k$ (so for every possible support of $\beta^*$) as long as $n \gtrsim \frac{1}{\sigma_{\min}(\Sigma)} \cdot k \log d$ (see (Wai19), Section 7.3.3). The well-spreadness assumption is satisfied for such $X$, for all sets $S \subset [n]$ of size $m \le cn$ (for sufficiently small $c$) and for all subsets of $[d]$ of size $k$, as long as $n \gtrsim \frac{1}{\sigma_{\min}(\Sigma)} \cdot k \log d$.

2.3 Optimal fraction of inliers for principal component analysis under oblivious noise

We show here that the dependence on $\alpha$ obtained in Theorem 2.1 is information-theoretically optimal up to constant factors. Concretely, let $L^*, N, Y, \alpha, \rho$ and $\zeta$ be as in Theorem 2.1, and let $0 < \varepsilon < 1$ and $0 < \delta < 1$. A successful $(\varepsilon, \delta)$-weak recovery algorithm for PCA is an algorithm that takes $Y$ as input and returns a matrix $\hat{L}$ such that $\|\hat{L} - L^*\|_F \le \varepsilon \cdot \rho$ with probability at least $1 - \delta$. It is easy to see that the Huber-loss estimator of Theorem 2.1 fails to be a successful weak-recovery algorithm if $\alpha = o(\sqrt{r/n})$ (in both cases $\zeta \le \rho/n$ and $\rho/n \le \zeta$, we need $\alpha = \Omega(\sqrt{r/n})$). A natural question is whether the condition $\alpha = \Omega(\sqrt{r/n})$ is necessary in general. The following theorem shows that if $\alpha = o(\sqrt{r/n})$, then weak recovery is information-theoretically impossible. This means that the (polynomially small) fraction of inliers that the Huber-loss estimator of Theorem 2.1 can deal with is optimal up to a constant factor.

Theorem 2.4. There exists a universal constant $C_0 > 0$ such that for every $0 < \varepsilon < 1$ and $0 < \delta < 1$, if $\alpha := \min_{i,j \in [n]} \mathbb{P}[|N_{i,j}| \le \zeta]$ satisfies $\alpha < C_0 \cdot (1 - \varepsilon^2)^2 \cdot (1 - \delta) \cdot \sqrt{r/n}$, and $n$ is large enough, then it is information-theoretically impossible to have a successful $(\varepsilon, \delta)$-weak recovery algorithm.

The problem remains information-theoretically impossible (for the same regime of parameters) even if we assume that $L^*$ is incoherent; more precisely, even if we know that $L^*$ has incoherence parameters that are as good as those of a random flat matrix of rank $r$, the theorem still holds.
3 Techniques

To illustrate our techniques for proving statistical guarantees for the Huber-loss estimator, we first use sparse linear regression as a running example. Then we discuss how the same ideas apply to principal component analysis. Finally, we remark on our techniques for the lower bounds.

3.1 Sparse linear regression under oblivious noise

We consider the model of Theorem 2.2. Our starting point for attaining the guarantees for our estimator Eq. (2.3), i.e., $\hat{\beta} := \operatorname{argmin}_{\beta \in \mathbb{R}^d} F_2(\beta) + 100\sqrt{n \log d}\,\|\beta\|_1$, is a classical approach for M-estimators (see, e.g., (Wai19), Chapter 9). For simplicity, we will refer to $F_2(\beta)$ as the loss function and to $\|\beta\|_1$ as the regularizer. At a high level, the approach consists of the following two ingredients:

(I) an upper bound on some norm of the gradient of the loss function at the parameter $\beta^*$;

(II) a lower bound on the curvature of the loss function (in the form of a local strong convexity bound) within a structured neighborhood of $\beta^*$.

The structure of this neighborhood can roughly be controlled by choosing an appropriate regularizer. The key aspect of this strategy is that the strength of the statistical guarantees of the estimator crucially depends on the directions and the radius in which we can establish lower bounds on the curvature of the function. Since these features are inherently dependent on the landscape of the loss function and the regularizer, they may differ significantly from problem to problem. This strategy has been applied successfully to many related problems, such as compressed sensing and matrix completion, albeit under standard noise assumptions.⁸ Under oblivious noise, (dNS21) used a particular instantiation of this framework to prove optimal convergence of the Huber loss (without any regularizer) for standard linear regression. Such an estimator, however, does not impose any structure on the neighborhood of $\beta^*$ considered in (II), and thus can only be used to obtain sub-optimal guarantees for sparse regression.

⁸The term "standard noise assumptions" is deliberately vague; as a concrete example, we will refer to (sub-)Gaussian noise distributions. See again Chapter 9 of (Wai19) for a survey.

In the context of sparse regression, the above two conditions translate to: (I) an upper bound on the largest entry in absolute value of the gradient of the loss function at $\beta^*$, and (II) a lower bound on the curvature of $F_2$ within the set of approximately $k$-sparse vectors⁹ close to $\beta^*$. We use this recipe to show that all approximate minimizers of $F_2$ are close to $\beta^*$. While the idea of restricting attention to approximately sparse directions has also been applied to the LASSO estimator in sparse regression under standard (sub-)Gaussian noise, in the presence of oblivious noise our analysis of the Huber loss requires a more careful approach.

⁹We will clarify this notion in the subsequent paragraphs.

More precisely, under the assumptions of Theorem 2.2, the error bound can be computed as
$$O\Big(\frac{s\,\|G\|^*_{\mathrm{reg}}}{\kappa}\Big)\,, \tag{3.1}$$
where $G$ is the gradient of the Huber loss at $\beta^*$, $\|\cdot\|^*_{\mathrm{reg}}$ is the norm dual to the regularization norm (which is equal to $\|\cdot\|_{\max}$ for the $\ell_1$ regularizer), $s$ is a structure parameter, equal to $\sqrt{k/\lambda}$ here, and $\kappa$ is a restricted strong convexity parameter. Note that by "error" here we mean $\frac{1}{\sqrt{n}}\|X(\hat{\beta} - \beta^*)\|$.
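To see how (3.1) yields Theorem 2.2, one can plug in the gradient bound (3.2) derived below, the structure parameter $s = \sqrt{k/\lambda}$, and a curvature bound $\kappa \gtrsim \alpha n$ in the spirit of the bound $\langle u, M(u)u\rangle \ge \Omega(\alpha n)$ established below. The following back-of-the-envelope chain (constants suppressed) is our own consistency check, not a display from the paper:
$$\frac{1}{\sqrt{n}}\big\|X(\hat{\beta} - \beta^*)\big\| \;\lesssim\; \frac{s\,\|G\|^*_{\mathrm{reg}}}{\kappa} \;\lesssim\; \sqrt{\frac{k}{\lambda}} \cdot \frac{\sqrt{n \log d}}{\alpha n} \;=\; \sqrt{\frac{1}{\lambda} \cdot \frac{k \log d}{\alpha^2 n}}\,,$$
which squares to the prediction-error bound of Theorem 2.2.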
Similarly, under the assumptions of Theorem 2.1, we get the error bound Eq. (3.1), where $G$ is the gradient of the Huber loss at $L^*$, $\|\cdot\|^*_{\mathrm{reg}}$ is the norm dual to the nuclear norm (i.e., the spectral norm), the structure parameter $s$ is $\sqrt{r}$, and $\kappa$ is a restricted strong convexity parameter. For more details on the conditions behind the error bound, see the supplementary material. Below, we explore the bounds on the norm of the gradient and on the restricted strong convexity parameter.

Bounding the gradient of the Huber loss. The gradient of the Huber loss $F_2(\cdot)$ at $\beta^*$ has the form $\nabla F_2(\beta^*) = \sum_{i=1}^n f_2'[\eta_i] \cdot X_i$, where $X_i$ is the $i$-th row of $X$. The random variables $f_2'[\eta_i]$, $i \in [n]$, are independent, centered, symmetric and bounded by 2. Since we assume that each column of $X$ has norm at most $\sqrt{n}$, the entries of the row $X_i$ are easily bounded by $\sqrt{n}$. Thus, each entry of $\nabla F_2(\beta^*)$ is a sum of independent, symmetric terms with bounded variance, so its behavior can easily be studied through standard concentration bounds. In particular, by a simple application of Hoeffding's inequality, we obtain, with high probability,
$$\big\|\nabla F_2(\beta^*)\big\|_{\max} = \max_{j \in [d]} \Big|\sum_{i \in [n]} f_2'[\eta_i] \cdot X_{ij}\Big| \le O\big(\sqrt{n \log d}\big)\,. \tag{3.2}$$
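Eq. (3.2) is easy to probe empirically. The sketch below (illustrative sizes; the Gaussian design and Cauchy outliers are our assumptions) compares $\max_j |\sum_i f_2'(\eta_i) X_{ij}|$ to $\sqrt{n \log d}$; the two agree up to the constant hidden in the $O(\cdot)$.

```python
import numpy as np

def huber_grad(t, h=2.0):
    """f_h'(t): identity on [-h, h], clipped to +/- h outside."""
    return np.clip(t, -h, h)

rng = np.random.default_rng(4)
n, d, alpha = 2000, 500, 0.3
X = rng.normal(size=(n, d))                 # columns have norm about sqrt(n)
eta = np.where(rng.random(n) < alpha,
               rng.uniform(-1, 1, n), rng.standard_cauchy(n) * 1e3)
grad = X.T @ huber_grad(eta)                # gradient of F_2 at beta*
print(f"max_j |grad_j| = {np.max(np.abs(grad)):.0f}")
print(f"sqrt(n log d)  = {np.sqrt(n * np.log(d)):.0f}")
```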
Local strong convexity of the Huber loss. Proving local strong convexity presents additional challenges. Without the sparsity constraint, (dNS21) showed that, under slightly stronger spreadness assumptions than those of Theorem 2.2, the Huber loss is locally strongly convex within a constant-radius ball centered at $\beta^*$ whenever $n \gtrsim d/\alpha^2$. (The function is not globally strongly convex due to its linear parts.) Using their result as a black box, one can obtain the error guarantees of Theorem 2.2, but with suboptimal sample complexity. The issue is that, with the substantially smaller sample size $n \ge \tilde{O}(k/\alpha^2)$ that resembles the usual considerations in the context of sparse regression, the Huber loss is not locally strongly convex around $\beta^*$ uniformly across all directions, so we cannot hope to prove convergence with optimal sample complexity using this argument. To overcome this obstacle, we make use of the framework of M-estimators: since we consider a regularized version of the Huber loss, it is enough to show local strong convexity within a radius $R$ uniformly across all directions that are approximately $k$-sparse. For this substantially weaker condition, $\tilde{O}(k/\alpha^2)$ samples are enough.

In more detail, for observations $y = X\beta^* + \eta$ and an arbitrary $u \in \mathbb{R}^d$ of norm $\|u\| \le R$, it is possible to lower bound the Hessian¹⁰ of the Huber loss at $\beta^* + u$ by:¹¹
$$H_{F_2}(\beta^* + u) = \sum_{i=1}^n f_2''\big[(Xu)_i - \eta_i\big] \cdot X_i X_i^\top = \sum_{i=1}^n \mathbf{1}_{[|(Xu)_i - \eta_i| \le 2]} \cdot X_i X_i^\top \;\succeq\; M(u) := \sum_{i=1}^n \mathbf{1}_{[|\langle X_i, u\rangle| \le 1]} \cdot \mathbf{1}_{[|\eta_i| \le 1]} \cdot X_i X_i^\top\,.$$
As can be observed, we do not attempt to exploit cancellations between $Xu$ and $\eta$. Let $Q := \{i \in [n] : |\eta_i| \le 1\}$ be the set of uncorrupted entries of $\eta$. Given that with high probability $Q$ has size $\Omega(\alpha \cdot n)$, the best outcome we can hope for is a lower bound of the form $\langle u, M(u)u\rangle \gtrsim \alpha n$ in the direction $\hat{\beta} - \beta^*$. In (dNS21), it was shown that if the span of the measurement matrix $X$ is well spread, then $\langle u, M(u)u\rangle \ge \Omega(\alpha \cdot n)$.

¹⁰The Hessian does not exist everywhere. Nevertheless, the second derivative of the penalty function $f_2$ exists as an $L_1$ function in the sense that $f_2'(b) - f_2'(a) = \int_a^b \mathbf{1}_{[|t| \le 2]}\,dt$. This property is enough for our purposes.

¹¹A more extensive explanation of the first part of this analysis can be found in (dNS21).

If the direction $\hat{\beta} - \beta^*$ were fixed, it would suffice to show curvature in that single direction through the above reasoning. However, $\hat{\beta}$ depends on the unknown random noise vector $\eta$. Without the regularizer in Eq. (2.3), this dependence means that the vector $\hat{\beta}$ may take any possible direction, so one needs to ensure local strong convexity in a constant-radius ball centered at $\beta^*$, that is,
$$\min_{\|u\| \le R} \lambda_{\min}(M(u)) \ge \Omega(\alpha \cdot n)\,.$$
It can be shown through a covering argument (over the ball) that this bound holds for $n \gtrsim d/\alpha^2$. This is the approach of (dNS21).

The minimizer of the Huber loss follows a sparse direction. The main issue with the above approach is that it uses no information about the direction $\hat{\beta} - \beta^*$. In the setting of sparse regression, however, our estimator contains the regularizer $\|\beta\|_1$. The main consequence of the regularizer is that the direction $\hat{\beta} - \beta^*$ is approximately flat, in the sense that $\|\hat{\beta} - \beta^*\|_1 \le O\big(\sqrt{k}\,\|\hat{\beta} - \beta^*\|\big)$. The reason¹² is that, due to the structure of the objective function in Eq. (2.3) and the concentration of the gradient Eq. (3.2), the penalty for dense vectors is larger than the absolute value of the inner product $\langle \nabla F_2(\beta^*), \hat{\beta} - \beta^*\rangle$ (which, as previously argued, concentrates around its zero expectation). This specific structure of the minimizer implies that it suffices to prove local strong convexity only in approximately sparse directions. For this set of directions, we carefully construct a sufficiently small covering set, so that $n \ge \tilde{O}(k/\alpha^2)$ samples suffice to ensure local strong convexity over it.

¹²This phenomenon is a consequence of the decomposability of the $\ell_1$ norm; see the supplementary material.

Remark 3.1 (Comparison with LASSO). It is important to remark that, while this approach of considering only approximately sparse directions has also been used in the context of sparse regression under Gaussian noise (e.g., for the LASSO estimator), obtaining the desired lower bound is considerably easier in that setting, as it follows directly from the restricted eigenvalue property of the design matrix. In our case, we require an additional careful probabilistic analysis, which uses a covering argument for the set of approximately sparse vectors. As it turns out, however, we do not need any additional assumptions on the design matrix compared with the LASSO estimator, except for the well-spreadness property (recall that some weak version of well-spreadness is indeed necessary in robust settings; see (dNS21)).

3.2 Principal component analysis under oblivious noise

A convenient feature of the approach of Section 3.1 for sparse regression is that it can easily be applied to additional problems. We briefly explain here how to apply it to principal component analysis. We consider the model defined in Theorem 2.1 and use an estimator based on the Huber loss equipped with the nuclear norm as a regularizer, to enforce the low-rank structure in our estimator:
$$\hat{L} := \operatorname*{argmin}_{L \in \mathbb{R}^{n \times n},\ \|L\|_{\max} \le \rho/n} \Big( F_{\zeta + \rho/n}(Y - L) + 100\sqrt{n}\,(\zeta + \rho/n)\,\|L\|_{\mathrm{nuc}} \Big)\,. \tag{3.3}$$
In this setting, the gradient $\nabla F_{\zeta + \rho/n}(Y - L^*)$ is a matrix with independent, symmetric entries which are bounded (by $\zeta + \rho/n$), and hence its spectral norm is $O\big((\zeta + \rho/n)\sqrt{n}\big)$ with high probability. Local strong convexity can be obtained in a similar fashion as in Section 3.1: due to the choice of the Huber transition point, all entries with small noise lie in the quadratic part of $F$. Moreover, the nuclear-norm regularizer ensures that the minimizer is an approximately low-rank matrix, in the sense that $\|M\|_{\mathrm{nuc}} \le O\big(\sqrt{r}\,\|M\|_F\big)$. So, again, it suffices to establish curvature of the loss function only on this subset of structured directions.
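For larger $n$, where generic conic solvers become impractical, a natural first-order approach to (3.3) is proximal gradient descent, alternating a Huber-gradient step with singular-value soft-thresholding (the prox of the nuclear norm) and a projection onto the max-norm ball. This is one standard way to compute such estimators, sketched below under illustrative step-size choices; it is not taken from the paper.

```python
import numpy as np

def prox_nuclear(A, tau):
    """Singular-value soft-thresholding: prox of tau * ||.||_nuc."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def oblivious_pca_pgd(Y, rho, zeta, steps=300, lr=0.5):
    n = Y.shape[0]
    h = zeta + rho / n
    lam = 100 * np.sqrt(n) * h              # nuclear-norm weight from (3.3)
    L = np.zeros_like(Y)
    for _ in range(steps):
        G = -np.clip(Y - L, -h, h)          # gradient of F_h(Y - L) in L
        L = prox_nuclear(L - lr * G, lr * lam)
        # Heuristic: enforce the max-norm constraint by projection; applying
        # the two proxes in sequence is not the exact prox of their sum.
        L = np.clip(L, -rho / n, rho / n)
    return L
```

Since $f_h'' \le 1$, the gradient of the loss is 1-Lipschitz, so any step size below 1 is safe here.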
Remark 3.2 (Incoherence vs. spikiness). Recall the discussion on incoherence in Section 2. If, for every $\mu \le n/\log^2 n$, the matrix $L^*$ does not satisfy the $\mu$-incoherence conditions, then the results of (CLMW11; ZLW+10) cannot be applied. Our estimator, however, achieves error $\|\hat{L} - L^*\|_F / \|L^*\|_F \to 0$ as $n \to \infty$. Indeed, let $\omega(1) \le f(n) \le o(\log^2 n)$ and assume $\zeta = 0$. Let $u \in \mathbb{R}^n$ be an $f(n)$-sparse unit vector whose nonzero entries are equal to $1/\sqrt{f(n)}$, and let $v \in \mathbb{R}^n$ be the vector with all entries equal to $1/\sqrt{n}$. Then $uv^\top$ does not satisfy $\mu$-incoherence for any $\mu < n/f(n)$. We have $\|uv^\top\|_F = 1$, and the error of our estimator is $O\big(1/(\alpha\sqrt{f(n)})\big)$, so it tends to zero for constant (or even some subconstant) $\alpha$. Furthermore, notice that the dependence of the error in Theorem 2.1 on the maximal entry of $L^*$ is inherent if we do not require incoherence. Indeed, consider $L_1 = b \cdot e_1 e_1^\top$ for large enough $b > 0$ and $L_2 = e_2 e_2^\top$. For constant $\alpha$, let $|N_{ij}|$ be 1 with probability $\alpha/2$, 0 with probability $\alpha/2$, and $b$ with probability $1 - \alpha$. Then, given $Y$, we cannot even distinguish between the cases $L^* = L_1$ and $L^* = L_2$, and since $\|L_1 - L_2\|_F > b$, the error must also depend on $b$.

Remark 3.3 ($\alpha$ vs. $\alpha^2$: what if one knows which entries are corrupted?). As observed in Section 2, the error bound of our estimator is worse than the error for matrix completion by a factor of $\sqrt{1/\alpha}$. We observe a similar effect in linear regression: if, as in matrix completion, we are given a randomly chosen $\alpha$ fraction of observations $\{(X_i, y_i = \langle X_i, \beta^*\rangle + \eta_i)\}_{i=1}^n$ with $\eta \sim N(0,1)^n$ (and since for the remaining samples we may not assume any bound on the signal-to-noise ratio), then this problem is essentially the same as linear regression with $\alpha n$ observations, so the optimal prediction error rate is $\Theta(\sqrt{d/(\alpha n)})$. If, instead, we have $y = X\beta^* + \eta$ with $\eta \sim N(0, 1/\alpha^2)^n$, then $|\eta_i| \le 1$ with probability $\Theta(\alpha)$, but the optimal prediction error rate in this case is $\Theta(\sqrt{d/(\alpha^2 n)})$. So, in both linear regression and robust PCA, prior knowledge of the set of corrupted entries makes the problem easier.

3.3 Optimal fraction of inliers for principal component analysis under oblivious noise

In order to prove Theorem 2.4, we adopt a generative model for the hidden matrix $L^*$: we generate $L^*$ randomly but assume that its distribution is known to the algorithm. This makes the problem easier; therefore, any impossibility result for this generative model implies impossibility for the more restrictive model in which $L^*$ is deterministic but unknown. We generate a random flat matrix $L^*$ using $n \cdot r$ independent and uniform random bits, in such a way that $L^*$ is of rank $r$ and incoherent with high probability. Then, for every constant $0 < \xi < 1$, we find a distribution for the random noise $N$ such that the fraction of inliers satisfies $\alpha := \mathbb{P}[|N_{ij}| \le \zeta] = \Theta\big(\xi\sqrt{r/n}\big)$, and such that the mutual information between $L^*$ and $Y = L^* + N$ can be upper bounded as $I(L^*; Y) \le O(\xi \cdot n \cdot r)$. Roughly speaking, the smaller $\xi$ gets, the more independent $L^*$ and $Y$ become. Using an inequality similar to the standard Fano inequality, but adapted to weak recovery, we then show that if there is a successful $(\varepsilon, \delta)$-weak recovery algorithm for $L^*$ and $N$, then $I(L^*; Y) \ge \Omega\big((1 - \varepsilon^2)^2 \cdot (1 - \delta) \cdot n \cdot r\big)$. Combining all these observations, we deduce that if $\xi$ is small enough, it is impossible to have a successful $(\varepsilon, \delta)$-weak recovery algorithm for $L^*$ and $N$.
Acknowledgments and Disclosure of Funding

The authors thank the anonymous reviewers for useful comments. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 815464).
Title Consistent Estimation for PCA and Sparse Regression with Oblivious Outliers Abstract We develop machinery to design efficiently computable and consistent estimators, achieving estimation error approaching zero as the number of observations grows, when facing an oblivious adversary that may corrupt responses in all but an α fraction of the samples. As concrete examples, we investigate two problems: sparse regression and principal component analysis (PCA). For sparse regression, we achieve consistency for optimal sample size n & (k log d)/α2 and optimal error rate O( √ (k log d)/(n · α2)) where n is the number of observations, d is the number of dimensions and k is the sparsity of the parameter vector, allowing the fraction of inliers to be inverse-polynomial in the number of samples. Prior to this work, no estimator was known to be consistent when the fraction of inliers α is o(1/log log n), even for (non-spherical) Gaussian design matrices. Results holding under weak design assumptions and in the presence of such general noise have only been shown in dense setting (i.e., general linear regression) very recently by d’Orsi et al. (dNS21). In the context of PCA, we attain optimal error guarantees under broad spikiness assumptions on the parameter matrix (usually used in matrix completion). Previous works could obtain non-trivial guarantees only under the assumptions that the measurement noise corresponding to the inliers is polynomially small in n (e.g., Gaussian with variance 1/n2). To devise our estimators, we equip the Huber loss with non-smooth regularizers such as the `1 norm or the nuclear norm, and extend d’Orsi et al.’s approach (dNS21) in a novel way to analyze the loss function. Our machinery appears to be easily applicable to a wide range of estimation problems. We complement these algorithmic results with statistical lower bounds showing that the fraction of inliers that our PCA estimator can deal with is optimal up to a constant factor. N/A We develop machinery to design efficiently computable and consistent estimators, achieving estimation error approaching zero as the number of observations grows, when facing an oblivious adversary that may corrupt responses in all but an α fraction of the samples. As concrete examples, we investigate two problems: sparse regression and principal component analysis (PCA). For sparse regression, we achieve consistency for optimal sample size n & (k log d)/α2 and optimal error rate O( √ (k log d)/(n · α2)) where n is the number of observations, d is the number of dimensions and k is the sparsity of the parameter vector, allowing the fraction of inliers to be inverse-polynomial in the number of samples. Prior to this work, no estimator was known to be consistent when the fraction of inliers α is o(1/log log n), even for (non-spherical) Gaussian design matrices. Results holding under weak design assumptions and in the presence of such general noise have only been shown in dense setting (i.e., general linear regression) very recently by d’Orsi et al. (dNS21). In the context of PCA, we attain optimal error guarantees under broad spikiness assumptions on the parameter matrix (usually used in matrix completion). Previous works could obtain non-trivial guarantees only under the assumptions that the measurement noise corresponding to the inliers is polynomially small in n (e.g., Gaussian with variance 1/n2). 
To devise our estimators, we equip the Huber loss with non-smooth regularizers such as the `1 norm or the nuclear norm, and extend d’Orsi et al.’s approach (dNS21) in a novel way to analyze the loss function. Our machinery appears to be easily applicable to a wide range of estimation problems. We complement these algorithmic results with statistical lower bounds showing that the fraction of inliers that our PCA estimator can deal with is optimal up to a constant factor. 1 Introduction Estimating information from structured data is a central theme in statistics that by now has found applications in a wide array of disciplines. On a high level, a typical assumption in an estimation problem is the existence of a –known a priori– family of probability distributions P : { β β ∈ Ω} over some spaceZ that are each indexed by some parameter β ∈ Ω. We then observe a collection 35th Conference on Neural Information Processing Systems (NeurIPS 2021). of n independent observations Z (Z1 , . . . , Zn) drawn from an unknown probability distribution β∗ ∈ P. The goal is to (approximately) recover the hidden parameter β∗.1 Oftentimes, real-world data may contain skewed, imprecise or corrupted measurements. Hence, a desirable property for an estimator is to be robust to significant, possibly malicious, noise perturbations on the given observations. Indeed, in the last two decades, a large body of work has been developed on designing robust algorithms (e.g see (BGN09; BBC11; MMYSB19)). However, proving strong guarantees often either demands strong assumptions on the noise model or requires that the fraction of perturbed observations is small. More concretely, when we allow the noise to be chosen adaptively, i.e., chosen dependently on the observations and hidden parameters, a common theme is that consistent estimators – estimators whose error tends to zero as the number of observations grows – can be attained only when the fraction of outliers is small. In order to make vanishing error possible in the presence of large fractions of outliers, it is necessary to consider weaker adversary models that are oblivious to the underlying structured data. In recent years, a flurry of works have investigated oblivious noise models (CLMW11; ZLW+10; TJSO14; BJKK17; SZF18; SBRJ19; PJL21; dNS21). These results however are tailor-made to the specific models and problems. To overcome this limitation, in this paper we aim to provide a simple blueprint to design provably robust estimators under minimalistic noise assumptions for a large class of estimation problems. As a testbed for our blueprint, we investigate two well-studied problems: Principal component analysis (PCA): Given a matrix Y : L∗ + N where L∗ ∈ n×n is an unknown parameter matrix and N is an n-by-n random noise matrix, the goal is to find an estimator L̂ for L∗ which is as close as possible to L∗ in Frobenius norm. Sparse regression: Given observations (X1 , y1), . . . , (Xn , yn) following the linear model yi 〈Xi , β∗〉 + ηi where Xi ∈ d , β∗ ∈ d is the k-sparse parameter vector of interest (by k-sparse we mean that it has at most k nonzero entries) and η1 , . . . ηn is noise, the goal is to find an estimator β̂ for β∗ achieving small squared prediction error 1n ‖X(β̂ − β∗)‖2, where X is the matrix whose rows are X1 , . . . ,Xn .2 Principal component analysis. 
A natural way to describe principal component analysis under oblivious perturbations is that of assuming the noise matrix N to be an n-by-n matrix with a uniformly random set of α · n2 entries bounded by some small ζ > 0 in absolute value. In these settings, we may think of the α fraction of entries with small noise as the set of uncorrupted observations. Moreover, for ζ > 0, the fact that even for uncorrupted observations the noise is non-zero allows us to capture both gross sparse errors and small entry-wise noise in the measurements at the same time (for example if ζ 1 then the model captures settings with additional standard Gaussian noise). Remarkably, for ζ 0, Candès et al.’s seminal work (CLMW11) provided an algorithm that exactly recovers L∗ even for a vanishing fraction of inliers, under incoherence conditions on the signal L∗. The result was slightly extended in (ZLW+10) where the authors provided an algorithm recovering L∗ up to squared error O(ζ2 · n4) thus allowing polynomially small measurement noise ζ, but still failing to capture settings where standard Gaussian measurement noise is added to the sparse noise. Even for simple signal matrices L∗, prior to this work, it remained an open question whether consistent estimators could be designed in presence of both oblivious noise and more reasonable measurement noise (e.g., standard Gaussian). Linear regression. Similarly to the context of principal component analysis, a convenient model for oblivious adversarial corruptions is that of assuming the noise vector η to have a random set of α · n entries bounded by ζ in absolute value (here all results of interest can be extended to any ζ > 0 just by scaling, so we consider only the case ζ 1). This model trivially captures the classical settings with Gaussian noise (a Gaussian vector with variance σ2 will have an α Θ( 1σ ) fraction of inliers) and, again, allows us to think of the α-fraction of entries with small noise as the set of uncorrupted observations. Early works on consistent regression focused on the regime with Gaussian design X1 , . . . ,Xn ∼ N(0,Σ) and deterministic noise η.3 (BJKK17) presented an estimator achieving error 1In this paper, we assume Ω ⊆ d andZ ⊆ D for some d ,D, denote random variables in bold face, and hide absolute constant factors with the notation O(·),Ω(·) , & , . and logarithmic factors with Õ(·), Ω̃(·). 2Our analysis also works for the parameter error ‖β∗ − β̂‖. 3For Gaussian design X one may also consider a deterministic noise model. This noise model is subsumed by the random noise model discussed here (if X is Gaussian). As shown in (dNS21), roughly speaking the Õ(d/(α2 · n)) for any α larger than a fixed constant. (SBRJ19) extended this result by achieving comparable error rates even for a vanishing fraction of inliers α & 1/log log n. Assuming n & d2/α2 (TJSO14) proved that the Huber-loss estimator (Hub64) achieves optimal error rate O(d/α2 · n) even for polynomially small fraction of inliers α & √ d/n. This line of work culminated in (dNS21) which extended the result of (TJSO14) achieving the same guarantees with optimal sample complexity n & d/α2. For Gaussian design, similar result as (dNS21) can be extracted from independent work (PF20). Furthermore, the authors of (dNS21) extended these guarantees to deterministic design matrices satisfying only a spreadness condition (trivially satisfied by sub-Gaussian design matrices). Huber loss was also analyzed in context of linear regression in (SZF18; DT19; SF20; PJL21). 
These works studied models that are different from our model, and these results do not work in the case α o(1). (SZF18) used assumptions on the moments of the noise, (DT19) and (SF20) studied the model with non-oblivious adversary, and the model from (PJL21) allows corruptions in covariates. In the context of sparse regression, less is known. When the fraction of observations is constant α > Ω(1), (NTN11) provided a first consistent estimator. Later (DT19) and (SF20) improved that result (assuming α > Ω(1), but these results also work with non-oblivious adversary). In the case α o(1), the algorithm of (SBRJ19) (for Gaussian designs) achieves nearly optimal error convergence Õ ( (k log2 d)/α2n ) , but requires α & 1/log log n. More recently, (dNS21) presented an algorithm for standard Gaussian design X ∼ N(0, 1)n×d , achieving the nearly optimal error convergence Õ(k/(α2 · n)) for nearly optimal inliers fraction α & √ (k · log2 d)/n. Both (SBRJ19) and (dNS21) use an iterative process, and require a bound ‖β∗‖ 6 dO(1) (for larger ‖β∗‖ these algorithms also work, but the fraction of inliers or the error convergence is worse). The algorithm from (dNS21) , however, heavily relies on the assumption that X ∼ N(0, 1)n×d and appears unlikely to be generalizable to more general families of matrices (including non-spherical Gaussian designs). Our contribution. We propose new machinery to design efficiently computable consistent estimators achieving optimal error rates and sample complexity against oblivious outliers. In particular, we extend the approach of (dNS21) to structured estimation problems, by finding a way to exploit adequately the structure therein. While consistent estimators have already been designed under more benign noise assumptions (e.g. the LASSO estimator for sparse linear regression under Gaussian noise), it was previously unclear how to exploit this structure in the setting of oblivious noise. One key consequence of our work is hence to demonstrate what minimal assumptions on the noise are sufficient to make effective recovery (in the sense above) possible. Concretely, we show Oblivious PCA: Under mild assumptions on the noise matrix N and common assumption on the parameter matrix L∗ –traditionally applied in the context of matrix completion (NW12)– we provide an algorithm that achieves optimal error guarantees. Sparse regression: Under mild assumptions on the design matrix and the noise vector –similar to the ones used in (dNS21) for dense parameter vectors β∗ – we provide an algorithm that achieves optimal error guarantees and sample complexity. For both problems, our analysis improves over the state-of-the-art and recovers the classical optimal guarantees, not only for Gaussian noise, but also under much less restrictive noise assumptions. At a high-level, we achieve the above results by equipping the Huber loss estimator with appropriate regularizers. Our techniques closely follow standard analyses for M-estimators, but crucially depart from them when dealing with the observations with large perturbations. Furthermore, our analysis appears to be mechanical and thus easily applicable to many different estimation problems. 2 Results Our estimators are based on regularized versions of the Huber loss. The regularizer we choose depends on the underlying structure of the estimation problem: We use `1 regularization to enforce sparsity in linear regression and nuclear norm regularization to enforce a low-rank structure in the context of PCA. 
More formally, the Huber penalty is defined as the function fh : → >0 such that underlying reason is that the Gaussianity of X allows one to obtain several other desirable properties "for free". For example, one could ensure that the noise vector is symmetric by randomly flipping the sign of each observation (yi ,Xi), as the design matrix will still be Gaussian. See Section 2 for a more in-depth discussion. fh(t) : { 1 2 t 2 for |t | 6 h , h(|t | − h2 ) otherwise. (2.1) where h > 0 is a penalty parameter. For X ∈ D , the Huber loss is defined as the function Fh(X) : ∑ i∈[D] fh(Xi). We will define the regularized versions in the following sections. For a matrix A, we use ‖A‖, ‖A‖nuc, ‖A‖F, ‖A‖max to denote its spectral, nuclear, Frobenius, maximum4 norms, respectively. For a vector v, we use ‖v‖ and ‖v‖1 to denote its `2 and `1 norms. 2.1 Oblivious principal component analysis For oblivious PCA, we provide guarantees for the following estimator (ζ will be defined shortly): L̂ B argmin L∈ n×n , ‖L‖max6ρ/n ( Fh(Y − L) + 100 √ n ( ζ + ρ/n ) ‖L‖nuc ) . (2.2) Theorem 2.1. Let L∗ ∈ n×n be an unknown deterministic matrix and let N be an n-byn random matrix with independent, symmetrically distributed (about zero) entries and α : mini , j∈[n] { Ni j 6 ζ} for some ζ > 0. Suppose that rank(L∗) r and ‖L∗‖max 6 ρ/n. Then, with probability at least 1 − 2−n over N , given Y L∗ + N , ζ and ρ, the estimator Eq. (2.2) with Huber parameter h ζ + ρ/n satisfies L̂ − L∗ F 6 O (√rnα ) · (ζ + ρ/n) . We first compare the guarantees of Theorem 2.1 with the previous results on robust PCA (CLMW11; ZLW+10).5 The first difference is that they require L∗ to satisfy certain incoherent conditions.6 Concretely, they provide theoretical guarantees for r 6 O ( µ−1n(log n)−2 ) , where µ is the incoherence parameter. In certain regimes, such a constraint strongly binds with the eigenvectors of L∗, restricting the set of admissible signal matrices. Using the different assumption ‖L∗‖max 6 ρ/n (commonly used for matrix completion, see (NW12)), we can obtain nontrivial guarantees (i.e. L̂ − L∗ F/‖L∗‖F → 0 as n →∞) even when the µ-incoherence conditions are not satisfied for any µ 6 n/log2 n, and hence the results (CLMW11; ZLW+10) cannot be applied. We remark that, without assuming incoherence, the dependence of the error on the maximal entry of L∗ is inherent (see Remark 3.2). The second difference is that Theorem 2.1 provides a significantly better dependence on the magnitude ζ of the entry-wise measurement error. Specifically, in the settings of Theorem 2.1, (ZLW+10) showed that if L∗ satisfies the incoherence conditions, the error of their estimator is O ( n2ζ ) . If the entries of N are standard Gaussian with probability α (and hence ζ 6 O(1)), and the entries of L∗ are bounded by O(1), then the error of our estimator is O( √ rn/α), which is considerably better than O(n2) as in (ZLW+10). On the other hand, their error does not depend on the magnitude ρ/n of the signal entries, so in the extreme regimes when the singular vectors of L∗ satisfy the incoherence conditions but L∗ has very large singular values (so that the magnitude of the entries of L is significantly larger than n), their analysis provides better guarantees than Theorem 2.1. As another observation to understand Theorem 2.1, notice that our robust PCA model also captures the classical matrix completion settings. 
In fact, any instance of matrix completion can be easily transformed into an instance of our PCA model: for the entries (i , j) that we do not observe, we can set Ni , j to some arbitrarily large value ±C(ρ, n) ρ/n, making the signal-to-noise ratio of the entry arbitrarily small. The observed (i.e. uncorrupted) Θ(α · n2) entries may additionally be perturbed by Gaussian noise with variance Θ(ζ2). The error guarantees of the estimator in Theorem 2.1 is 4For n × m matrix A, ‖A‖max maxi∈[n], j∈[m] |Ai j |. 5We remark that in (CLMW11) the authors showed that they can consider non-symmetric noise when the fraction of inliers is large α > 1/2. However for smaller fraction of inliers their analysis requires the entries of the noise to be symmetric and independent, so for α < 1/2, their assumptions are captured by Theorem 2.1. 6A rank-r n × n dimensional matrix M is µ-incoherent if its singular vector decomposition M : UΣV> satisfies maxi∈[n]‖U>ei ‖2 6 µr n , maxi∈[n]‖V>ei ‖2 6 µr n and ‖UV>‖∞ 6 √ µr n . O ( ( ρ/n + ζ )√ rn/α ) . Thus, the dependency on the parameters ρ, n , ζ, and r is the same as in matrix completion and the error is within a factor of Θ( √ 1/α) from the optimum for matrix completion. However, this worse dependency on α is intrinsic to the more general model considered and it turns out to be optimal (see Theorem 2.4). On a high level, the additional factor of Θ( √ 1/α) comes from the fact that in our PCA model we do not know which entries are corrupted. The main consequence of this phenomenon is that a condition of the form α & √ r/n appears inherent to achieve consistency. To get some intuition on why this condition is necessary, consider the Wigner model where we are given a matrix Y xx> + σW for a flat vector x ∈ {±1}n and a standard Gaussian matrix W . Note, that the entries of W fit our noise model for ζ 1 , ρ/n 1 , r 1 and α Θ(1/σ). The spectral norm of σW concentrates around 2σ √ n and thus it is information-theoretically impossible to approximately recover the vector x for σ 1/α ω( √ n) (see (PWBM16)). 2.2 Sparse regression Our regression model considers a fixed design matrix X ∈ n×d and observations y : Xβ∗+η ∈ n where β∗ is an unknown k-sparse parameter vector and η is random noise with (|ηi | 6 1) > α for all i ∈ [n]. Earlier works (BJKK17), (SBRJ19) focused on the setting that the design matrix consists of i.i.d. rows with Gaussian distribution N(0,Σ) and the noise is η ζ + w where ζ is deterministic (α · n)-sparse vector and w is subgaussian. As in (dNS21), our results for a fixed design and random noise can, in fact, extend to yield the same guarantees for this early setting (see Theorem 2.3). Hence, a key advantage of our results is that the design X does not have to consist of Gaussian entries. Remarkably, we can handle arbitrary deterministic designs as long as they satisfy some mild conditions. Concretely, we make the following three assumptions, the first two of which are standard in the literature of sparse regression (e.g., see (Wai19), section 7.3): 1. For every column X i of X, X i 6 √n. 2. Restricted eigenvalue property (RE-property): For every vector u ∈ d such that7 usupp(β∗) 1 > 0.1 · ‖u‖1, we have 1n ‖Xu‖2 > λ · ‖u‖2 for some parameter λ > 0. 3. Well-spreadness property: For some (large enough) m ∈ [n] and for every vector u ∈ d such that usupp(β∗) 1 > 0.1 · ‖u‖1 and for every subset S ⊆ [n] with |S | > n − m, it holds that ‖(Xu)S‖ > 12 ‖Xu‖. Denote F2(β) : n∑ i 1 f2 ( yi − 〈Xi , β〉 ) , where Xi are the rows of X, and f2 is as in Eq. (2.1). 
We devise our estimator for sparse regression and state its statistical guarantees below: β̂ B arg min β∈ d ( F2(β) + 100 √ n log d · β 1) . (2.3) Theorem 2.2. Let β∗ ∈ d be an unknown k-sparse vector and let X ∈ n×d be a deterministic matrix such that for each column X i of X, ‖X i ‖ 6 √ n, satisfying the RE-property with λ > 0 and well-spreadness property with m & k log d λ·α2 (recall that n > m). Further, let η be an n-dimensional random vector with independent, symmetrically distributed (about zero) entries and α mini∈[n] { ηi 6 1}. Then with probability at least 1 − d−10 over η, given X and y Xβ∗ + η, the estimator Eq. (2.3) satisfies 1 n X (β̂ − β∗) 2 6 O ( 1 λ · k log d α2 · n ) and β̂ − β∗ 2 6 O ( 1 λ2 · k log d α2 · n ) . There are important considerations when interpreting this theorem. The first is the special case η ∼ N(0, σ2)n , which satisfies our model for α Θ(1/σ). For this case, it is well known (e.g., 7For a vector v ∈ d and a set S ⊆ [d], we denote by vS the restriction of v to the coordinates in S. see (Wai19), section 7.3) that under the same RE assumption, the LASSO estimator achieves a prediction error rate of O( σ2λ · k log d n ) O( k log d λ·α2 ·n ), matching our result. Moreover, this error rate is essentially optimal. Under a standard assumption in complexity theory (NP 1 P/poly), the RE assumption is necessary when considering polynomial-time estimators (ZWJ14). Further, this also shows that the dependence on the RE constant seems unavoidable. Under mild conditions on the design matrix, trivially satisfied if the rows are i.i.d. Gaussian with covariance Σ whose condition number is constant, our guarantees are optimal up to constant factors for all estimators if k 6 d1−Ω(1) (e.g. k 6 d0.99), see (RWY11). This optimality also shows that our bound on the number of samples is best possible since otherwise we would not be able to achieve vanishing error. The (non-sparse version of) well-spreadness property was first used in the context of regression in (dNS21). In the same work the authors also showed that, under oblivious noise assumptions, some weak form of spreadness property is indeed necessary. The second consideration is the optimal dependence on α: Theorem 2.2 achieves consistency as long as the fraction of inliers satisfies α ω( √ k log d/n). To get an intuition, observe that lower bounds for standard sparse regression show that, already for η ∼ N(0, σ · Idn), it is possible to achieve consistency only for n ω(σ2k log d) (if k 6 d1−Ω(1)). As for this η, the number of entries of magnitude at most 1 is O(n/σ) with high probability, it follows that for α Θ(1/σ) 6 O (√ (k log d)/n ) , no estimator is consistent. To the best of our knowledge, Theorem 2.2 is the first result to achieve consistency under such minimalistic noise settings and deterministic designs. Previous results (BJKK17; SBRJ19; dNS21) focused on simpler settings of Gaussian design X and deterministic noise, and provide no guarantees for more general models. The techniques for Theorem 2.2 also extend to this case. Theorem 2.3. Let β∗ ∈ d be an unknown k-sparse vector and let X be a n-by-d random matrix with i.i.d. rows X1 , . . .Xn ∼ N(0,Σ) for a positive definite matrix Σ. Further, let η ∈ n be a deterministic vector with α·n coordinates bounded by 1 in absolute value. Suppose that n & ν(Σ)·k log d σmin(Σ)·α2 , where ν(Σ) is the maximum diagonal entry of Σ and σmin(Σ) is its smallest eigenvalue. Then, with probability at least 1 − d−10 over X , given X and y Xβ∗ + η, the estimator Eq. 
There are important considerations when interpreting this theorem. The first is the special case η ∼ N(0, σ²)ⁿ, which satisfies our model with α = Θ(1/σ). For this case, it is well known (e.g., see (Wai19), Section 7.3) that under the same RE assumption the LASSO estimator achieves a prediction error rate of O( (σ²/λ)·(k log d)/n ) = O( (k log d)/(λ·α²·n) ), matching our result. Moreover, this error rate is essentially optimal. Under a standard assumption in complexity theory (NP ⊄ P/poly), the RE assumption is necessary when considering polynomial-time estimators (ZWJ14); this also indicates that the dependence on the RE constant is unavoidable. Under mild conditions on the design matrix (trivially satisfied if the rows are i.i.d. Gaussian with a covariance Σ whose condition number is constant), our guarantees are optimal up to constant factors for all estimators whenever k ≤ d^{1−Ω(1)} (e.g., k ≤ d^{0.99}); see (RWY11). This optimality also shows that our bound on the number of samples is best possible, since otherwise we could not achieve vanishing error. The (non-sparse version of the) well-spreadness property was first used in the context of regression in (dNS21); in the same work, the authors also showed that, under oblivious noise assumptions, some weak form of spreadness is indeed necessary.

The second consideration is the optimal dependence on α: Theorem 2.2 achieves consistency as long as the fraction of inliers satisfies α = ω(√((k log d)/n)). To get an intuition, observe that lower bounds for standard sparse regression show that, already for η ∼ N(0, σ²·Id_n), consistency is achievable only for n = ω(σ²·k log d) (if k ≤ d^{1−Ω(1)}). Since for this η the number of entries of magnitude at most 1 is O(n/σ) with high probability, it follows that for α = Θ(1/σ) ≤ O(√((k log d)/n)), no estimator is consistent. To the best of our knowledge, Theorem 2.2 is the first result to achieve consistency under such minimalistic noise assumptions and deterministic designs. Previous results (BJKK17; SBRJ19; dNS21) focused on the simpler setting of Gaussian design X and deterministic noise, and provide no guarantees for more general models. The techniques behind Theorem 2.2 also extend to this setting.

Theorem 2.3. Let β* ∈ ℝ^d be an unknown k-sparse vector, and let X be an n-by-d random matrix with i.i.d. rows X₁, …, X_n ∼ N(0, Σ) for a positive definite matrix Σ. Further, let η ∈ ℝⁿ be a deterministic vector with α·n coordinates bounded by 1 in absolute value. Suppose that n ≳ (ν(Σ)·k log d)/(σ_min(Σ)·α²), where ν(Σ) is the maximum diagonal entry of Σ and σ_min(Σ) is its smallest eigenvalue. Then, with probability at least 1 − d^{−10} over X, given X and y = Xβ* + η, the estimator Eq. (2.3) satisfies

    (1/n)·‖X(β̂ − β*)‖² ≤ O( (ν(Σ)·k log d)/(σ_min(Σ)·α²·n) )    and    ‖β̂ − β*‖² ≤ O( (ν(Σ)·k log d)/(σ_min(Σ)²·α²·n) ).

Even for standard Gaussian design X ∼ N(0, 1)^{n×d}, the above theorem improves over previous results, which required the sub-optimal sample complexity n ≳ (k/α²)·log d·log‖β*‖. For non-spherical Gaussian designs, the improvement over the state of the art (SBRJ19) is more substantial: their algorithm requires α ≥ Ω(1/log log n), while our Theorem 2.3 has no such restriction and works for all α ≳ √( (ν(Σ)/σ_min(Σ))·(k log d)/n ); in many interesting regimes, α is allowed to be smaller than n^{−Ω(1)}. The dependence on α is nearly optimal: the estimator is consistent as long as α = ω( √( (ν(Σ)/σ_min(Σ))·(k log d)/n ) ), and by the discussion after Theorem 2.2, if α ≤ O(√((k log d)/n)), no estimator is consistent.

Note that while we can deal with general covariance matrices, to compare Theorem 2.3 with Theorem 2.2 it is easiest to consider Σ in normalized form, with ν(Σ) ≤ 1; this can always be achieved by scaling X. Also note that Theorem 2.2 can be generalized to the case ‖Xⁱ‖ ≤ √(νn) for arbitrary ν > 0, in which case the error bounds and the bound on m are multiplied by ν.

The RE-property of Theorem 2.2 is a standard assumption in sparse regression and is satisfied by a large family of matrices. For example, with high probability a random matrix X with i.i.d. rows sampled from N(0, Σ), for positive definite Σ ∈ ℝ^{d×d} whose diagonal entries are bounded by 1, satisfies the RE-property with parameter Ω(σ_min(Σ)) for all subsets of [d] of size k (so for every possible support of β*) as long as n ≳ (1/σ_min(Σ))·k log d (see (Wai19), Section 7.3.3). The well-spreadness assumption is satisfied by such X for all sets S ⊂ [n] of size m ≤ cn (for sufficiently small c > 0) and for all subsets of [d] of size k, again as long as n ≳ (1/σ_min(Σ))·k log d.

2.3 Optimal fraction of inliers for principal component analysis under oblivious noise

We show here that the dependence on α obtained in Theorem 2.1 is information-theoretically optimal up to constant factors. Concretely, let L*, N, Y, α, ρ and ζ be as in Theorem 2.1, and let 0 < ε < 1 and 0 < δ < 1. A successful (ε, δ)-weak recovery algorithm for PCA is an algorithm that takes Y as input and returns a matrix L̂ such that ‖L̂ − L*‖_F ≤ ε·ρ with probability at least 1 − δ. It is easy to see that the Huber-loss estimator of Theorem 2.1 fails to be a successful weak-recovery algorithm if α = o(√(r/n)) (in both cases ζ ≤ ρ/n and ρ/n ≤ ζ, we need α ≥ Ω(√(r/n))). A natural question is whether the condition α ≥ Ω(√(r/n)) is necessary in general. The following theorem shows that if α = o(√(r/n)), then weak recovery is information-theoretically impossible. This means that the (polynomially small) fraction of inliers that the Huber-loss estimator of Theorem 2.1 can deal with is optimal up to a constant factor.

Theorem 2.4. There exists a universal constant C₀ > 0 such that for every 0 < ε < 1 and 0 < δ < 1, if α := min_{i,j∈[n]} P(|N_ij| ≤ ζ) satisfies α < C₀·(1 − ε²)²·(1 − δ)·√(r/n), and n is large enough, then it is information-theoretically impossible to have a successful (ε, δ)-weak recovery algorithm.

The problem remains information-theoretically impossible (in the same regime of parameters) even if we assume that L* is incoherent; more precisely, even if we know that L* has incoherence parameters as good as those of a random flat matrix of rank r, the theorem still holds.
3 Techniques

To illustrate our techniques for proving statistical guarantees for the Huber-loss estimator, we first use sparse linear regression as a running example. Then we discuss how the same ideas apply to principal component analysis. Finally, we remark on our techniques for the lower bounds.

3.1 Sparse linear regression under oblivious noise

We consider the model of Theorem 2.2. Our starting point for attaining the guarantees of our estimator Eq. (2.3), i.e., β̂ := argmin_{β∈ℝ^d} F₂(β) + 100·√(n log d)·‖β‖₁, is a classical approach for M-estimators (see, e.g., (Wai19), Chapter 9). For simplicity, we refer to F₂(β) as the loss function and to ‖β‖₁ as the regularizer. At a high level, the approach consists of the following two ingredients:

(I) an upper bound on some norm of the gradient of the loss function at the parameter β*;
(II) a lower bound on the curvature of the loss function (in the form of a local strong convexity bound) within a structured neighborhood of β*.

The structure of this neighborhood can roughly be controlled by choosing an appropriate regularizer. The key aspect of this strategy is that the strength of the statistical guarantees crucially depends on the directions and the radius in which we can establish lower bounds on the curvature of the loss function. Since these features inherently depend on the landscape of the loss function and the regularizer, they may differ significantly from problem to problem. This strategy has been applied successfully to many related problems, such as compressed sensing and matrix completion, albeit under standard noise assumptions.⁸ Under oblivious noise, (dNS21) used a particular instantiation of this framework to prove optimal convergence of the Huber loss, without any regularizer, for standard linear regression. Such an estimator, however, does not impose any structure on the neighborhood of β* considered in (II), and thus can only yield sub-optimal guarantees for sparse regression.

In the context of sparse regression, the two conditions above translate to: (I) an upper bound on the largest entry in absolute value of the gradient of the loss function at β*, and (II) a lower bound on the curvature of F₂ within the set of approximately k-sparse vectors⁹ close to β*. We use this recipe to show that all approximate minimizers of F₂ are close to β*. While the idea of restricting to approximately sparse directions has also been applied to the LASSO estimator for sparse regression under standard (sub-)Gaussian noise, in the presence of oblivious noise our analysis of the Huber loss requires a more careful approach.

⁸ The term "standard noise assumptions" is deliberately vague; as a concrete example, think of (sub-)Gaussian noise distributions. See again Chapter 9 of (Wai19) for a survey.
⁹ We clarify this notion in the subsequent paragraphs.

More precisely, under the assumptions of Theorem 2.2, the error bound can be computed as

    O( s·‖G‖*_reg / κ ),    (3.1)

where G is the gradient of the Huber loss at β*, ‖·‖*_reg is the norm dual to the regularization norm (equal to ‖·‖_max for the ℓ₁ regularizer), s is a structure parameter, here equal to √(k/λ), and κ is a restricted strong convexity parameter. Note that by "error" here we mean (1/√n)·‖X(β̂ − β*)‖. Similarly, under the assumptions of Theorem 2.1, we get the error bound Eq. (3.1), where G is the gradient of the Huber loss at L*, ‖·‖*_reg is the norm dual to the nuclear norm (i.e., the spectral norm), the structure parameter s is √r, and κ is a restricted strong convexity parameter. For more details on the conditions behind the error bound, see the supplementary material.
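For intuition, the following is a schematic version of the standard M-estimator argument behind Eq. (3.1), with constants suppressed and stated only loosely; F denotes the loss, R the regularizer, G = ∇F(β*), and Δ = β̂ − β*. The precise conditions are in the supplementary material.

```latex
\begin{align*}
&F(\hat\beta) + \lambda R(\hat\beta) \le F(\beta^*) + \lambda R(\beta^*)
  && \text{(optimality of $\hat\beta$)}\\
&F(\beta^* + \Delta) - F(\beta^*) \ge \langle G, \Delta\rangle + \kappa\,\|\Delta\|^2
  && \text{(restricted strong convexity)}\\
&\;\Longrightarrow\;
\kappa\,\|\Delta\|^2
 \le -\langle G, \Delta\rangle + \lambda\bigl(R(\beta^*) - R(\hat\beta)\bigr)
 \le \bigl(\|G\|^*_{\mathrm{reg}} + \lambda\bigr)\,R(\Delta).
\end{align*}
% Choosing \lambda of the order of \|G\|^*_{reg} and using the structural bound
% R(\Delta) <= s * ||\Delta|| (approximate sparsity / low rank) yields
% ||\Delta|| <= O( s * ||G||^*_{reg} / \kappa ), i.e., Eq. (3.1).
```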
Below, we explore the bounds on the norm of the gradient and on the restricted strong convexity parameter.

Bounding the gradient of the Huber loss. The gradient of the Huber loss F₂(·) at β* has the form ∇F₂(β*) = Σ_{i=1}^n f₂′(η_i)·X_i, where X_i is the i-th row of X. The random variables f₂′(η_i), i ∈ [n], are independent, centered, symmetric, and bounded by 2. Since we assume that each column of X has norm at most √n, the entries of the row X_i are bounded by √n. Thus, ∇F₂(β*) is a vector with independent, symmetric entries of bounded variance, so its behavior can easily be studied through standard concentration bounds. In particular, a simple application of Hoeffding's inequality yields, with high probability,

    ‖∇F₂(β*)‖_max = max_{j∈[d]} | Σ_{i∈[n]} f₂′(η_i)·X_ij | ≤ O( √(n log d) ).    (3.2)
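The bound in Eq. (3.2) is easy to probe empirically. The following small simulation (our own illustration, using a ±1 design whose columns have norm exactly √n and Cauchy noise) compares ‖∇F₂(β*)‖_max with √(n log d).

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, trials = 2000, 200, 20

ratios = []
for _ in range(trials):
    X = rng.choice([-1.0, 1.0], size=(n, d))  # every column has norm sqrt(n)
    eta = rng.standard_cauchy(n)              # symmetric, heavy-tailed noise
    # f_2'(t) = t on [-2, 2] and +/-2 outside, i.e., clipping:
    grad = X.T @ np.clip(eta, -2.0, 2.0)      # = grad F_2(beta*) up to sign
    ratios.append(np.abs(grad).max() / np.sqrt(n * np.log(d)))

print("||grad F_2(beta*)||_max / sqrt(n log d) ~", round(np.mean(ratios), 2))
```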
Local strong convexity of the Huber loss. Proving local strong convexity presents additional challenges. Without the sparsity constraint, (dNS21) showed that, under slightly stronger spreadness assumptions than those of Theorem 2.2, the Huber loss is locally strongly convex within a constant-radius ball centered at β* whenever n ≳ d/α². (The function is not globally strongly convex due to its linear parts.) Using their result as a black box, one can obtain the error guarantees of Theorem 2.2, but with suboptimal sample complexity. The issue is that with the substantially smaller sample size n ≥ Õ(k/α²), which resembles the usual considerations in the context of sparse regression, the Huber loss is not locally strongly convex around β* uniformly across all directions, so we cannot hope to prove convergence with optimal sample complexity using this argument. To overcome this obstacle, we make use of the framework of M-estimators: since we consider a regularized version of the Huber loss, it is enough to show local strong convexity within radius R uniformly across all directions that are approximately k-sparse. For this substantially weaker condition, Õ(k/α²) samples suffice.

In more detail, for observations y = Xβ* + η and an arbitrary u ∈ ℝ^d with ‖u‖ ≤ R, the Hessian¹⁰ of the Huber loss at β* + u can be lower bounded as follows:¹¹

    H_{F₂}(β* + u) = Σ_{i=1}^n f₂″((Xu)_i − η_i)·X_iX_iᵀ = Σ_{i=1}^n 1[|(Xu)_i − η_i| ≤ 2]·X_iX_iᵀ
                   ⪰ M(u) := Σ_{i=1}^n 1[|⟨X_i, u⟩| ≤ 1]·1[|η_i| ≤ 1]·X_iX_iᵀ.

As can be seen, we do not attempt to exploit cancellations between Xu and η. Let Q := { i ∈ [n] : |η_i| ≤ 1 } be the set of uncorrupted entries of η. Given that with high probability Q has size Ω(α·n), the best outcome we can hope for is a lower bound of the form ⟨u, M(u)u⟩ ≳ α·n in the direction β̂ − β*. In (dNS21), it was shown that if the span of the measurement matrix X is well spread, then ⟨u, M(u)u⟩ ≥ Ω(α·n).

¹⁰ The Hessian does not exist everywhere. Nevertheless, the second derivative of the penalty function f₂ exists as an L¹ function, in the sense that f₂′(b) − f₂′(a) = ∫_a^b 1[|t| ≤ 2] dt. This property is enough for our purposes.
¹¹ A more extensive explanation of the first part of this analysis can be found in (dNS21).

If the direction β̂ − β* were fixed, it would suffice to establish curvature in that single direction through the above reasoning. However, β̂ depends on the unknown random noise vector η. Without the regularizer in Eq. (2.3), this dependence means that the vector β̂ may take any possible direction, so one needs local strong convexity to hold in a constant-radius ball centered at β*; that is,

    min_{‖u‖≤R} λ_min(M(u)) ≥ Ω(α·n).

A covering argument (over the ball) shows that this bound holds for n ≳ d/α². This is the approach of (dNS21).

The minimizer of the Huber loss follows a sparse direction. The main issue with the above approach is that it uses no information about the direction β̂ − β*. In the setting of sparse regression, however, our estimator contains the regularizer ‖β‖₁. The main consequence of the regularizer is that the direction β̂ − β* is approximately flat, in the sense that ‖β̂ − β*‖₁ ≤ O(√k·‖β̂ − β*‖). The reason¹² is that, due to the structure of the objective function in Eq. (2.3) and the concentration of the gradient in Eq. (3.2), the penalty for dense vectors is larger than the absolute value of the inner product ⟨∇F₂(β*), β̂ − β*⟩ (which, as argued above, concentrates around its zero expectation). This specific structure of the minimizer implies that it suffices to prove local strong convexity only in approximately sparse directions. For this set of directions, we carefully construct a sufficiently small covering set, so that n ≥ Õ(k/α²) samples suffice to ensure local strong convexity over it.

¹² This phenomenon is a consequence of the decomposability of the ℓ₁ norm; see the supplementary material.

Remark 3.1 (Comparison with LASSO). It is important to remark that while this approach of considering only approximately sparse directions has also been used in the context of sparse regression under Gaussian noise (e.g., for the LASSO estimator), obtaining the desired lower bound is considerably easier in that setting, as it follows directly from the restricted eigenvalue property of the design matrix. In our case, we require an additional careful probabilistic analysis, which uses a covering argument for the set of approximately sparse vectors. As it turns out, however, we need no additional assumptions on the design matrix compared with the LASSO estimator, except for the well-spreadness property (recall that some weak version of well-spreadness is indeed necessary in robust settings; see (dNS21)).

3.2 Principal component analysis under oblivious noise

A convenient feature of the approach in Section 3.1 for sparse regression is that it can easily be applied to additional problems. We briefly explain here how to apply it to principal component analysis. We consider the model defined in Theorem 2.1 and use an estimator based on the Huber loss, equipped with the nuclear norm as a regularizer to enforce a low-rank structure:

    L̂ := argmin_{L ∈ ℝ^{n×n}, ‖L‖_max ≤ ρ/n} ( F_{ζ+ρ/n}(Y − L) + 100·√n·(ζ + ρ/n)·‖L‖_nuc ).    (3.3)

In this setting, the gradient ∇F_{ζ+ρ/n}(Y − L*) is a matrix with independent, symmetric entries which are bounded (by ζ + ρ/n), and hence its spectral norm is O((ζ + ρ/n)·√n) with high probability. Local strong convexity can be obtained in a similar fashion as in Section 3.1: due to the choice of the Huber transition point, all entries with small noise lie in the quadratic part of F. Moreover, the nuclear norm regularizer ensures that the minimizer is approximately low-rank, in the sense that the error matrix M = L̂ − L* satisfies ‖M‖_nuc ≤ O(√r·‖M‖_F). So, again, it suffices to establish curvature of the loss function only on this subset of structured directions.
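As with sparse regression, Eq. (3.3) is a convex program that can be prototyped with cvxpy. In the sketch below, the rank-one flat signal and the two-component noise (a small Gaussian inlier part and gross ±10 corruptions) are our own illustration; at this toy size, the conservative constant 100 from the estimator mainly illustrates the form of the program rather than tuned recovery.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, rho, zeta, alpha = 40, 40.0, 0.1, 0.5

# Rank-1 flat signal with |L*_ij| = rho / n.
u = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)
L_star = rho * np.outer(u, u)

# Oblivious noise: small symmetric noise w.p. alpha, gross corruption otherwise.
inlier = rng.random((n, n)) < alpha
N = np.where(inlier,
             zeta * rng.standard_normal((n, n)),
             10.0 * rng.choice([-1.0, 1.0], size=(n, n)))
Y = L_star + N

# Estimator of Eq. (3.3): Huber loss + nuclear norm, with a max-entry constraint.
h = zeta + rho / n
L = cp.Variable((n, n))
loss = 0.5 * cp.sum(cp.huber(Y - L, h))        # cvxpy huber = 2 * f_h
reg = 100 * np.sqrt(n) * h * cp.normNuc(L)
prob = cp.Problem(cp.Minimize(loss + reg), [cp.max(cp.abs(L)) <= rho / n])
prob.solve()

print("relative error:", np.linalg.norm(L.value - L_star) / np.linalg.norm(L_star))
```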
Remark 3.2 (Incoherence vs. spikiness). Recall the discussion on incoherence in Section 2. If L* fails to satisfy the µ-incoherence conditions for every µ ≤ n/log²n, the results of (CLMW11; ZLW+10) cannot be applied; our estimator, however, still achieves error ‖L̂ − L*‖_F/‖L*‖_F → 0 as n → ∞. Indeed, let ω(1) ≤ f(n) ≤ o(log²n) and assume ζ = 0. Let u ∈ ℝⁿ be an f(n)-sparse unit vector whose nonzero entries all equal 1/√f(n), and let v ∈ ℝⁿ be the vector with all entries equal to 1/√n. Then uvᵀ does not satisfy incoherence for any µ < n/f(n). We have ‖uvᵀ‖_F = 1, and the error of our estimator is O(1/(α·√f(n))), so it tends to zero for constant (or even some subconstant) α.

Furthermore, notice that the dependence of the error in Theorem 2.1 on the maximal entry of L* is inherent if we do not require incoherence. Indeed, consider L₁ = b·e₁e₁ᵀ for large enough b > 0 and L₂ = e₂e₂ᵀ. For constant α, let |N_ij| equal 1 with probability α/2, 0 with probability α/2, and b with probability 1 − α. Then, given Y, we cannot even distinguish between the cases L* = L₁ and L* = L₂, and since ‖L₁ − L₂‖_F ≥ b, the error must also depend on b.

Remark 3.3 (α vs. α²: what if one knows which entries are corrupted?). As observed in Section 2, the error bound of our estimator is worse than the error for matrix completion by a factor of 1/√α. We observe a similar effect in linear regression. If, as in matrix completion, we are given a randomly chosen α fraction of observations {(X_i, y_i = ⟨X_i, β*⟩ + η_i)}_{i=1}^n with η ∼ N(0, 1)ⁿ, then, since for the remaining samples we may not assume any bound on the signal-to-noise ratio, the problem is essentially the same as linear regression with α·n observations; thus the optimal prediction error rate is Θ(√(d/(α·n))). If instead we have y = Xβ* + η with η ∼ N(0, 1/α²)ⁿ, then |η_i| ≤ 1 with probability Θ(α), but the optimal prediction error rate in this case is Θ(√(d/(α²·n))). So, in both linear regression and robust PCA, prior knowledge of the set of corrupted entries makes the problem easier.

3.3 Optimal fraction of inliers for principal component analysis under oblivious noise

In order to prove Theorem 2.4, we adopt a generative model for the hidden matrix L*: we generate L* randomly but assume that its distribution is known to the algorithm. This makes the problem easier; therefore, any impossibility result for this generative model implies impossibility for the more restrictive model in which L* is deterministic but unknown. We generate a random flat matrix L* using n·r independent and uniform random bits, in such a way that L* has rank r and is incoherent with high probability. Then, for every constant 0 < ξ < 1, we find a distribution for the random noise N such that the fraction of inliers satisfies α := P(|N_ij| ≤ ζ) = Θ(ξ·√(r/n)), and such that the mutual information between L* and Y = L* + N can be upper bounded as I(L*; Y) ≤ O(ξ·n·r). Roughly speaking, the smaller ξ gets, the more independent L* and Y become. Using an inequality similar to the standard Fano inequality, but adapted to weak recovery, we then show that if there is a successful (ε, δ)-weak recovery algorithm for L* and N, then I(L*; Y) ≥ Ω((1 − ε²)²·(1 − δ)·n·r). Combining these observations, we deduce that if ξ is small enough, it is impossible to have a successful (ε, δ)-weak recovery algorithm for L* and N.

Acknowledgments and Disclosure of Funding

The authors thank the anonymous reviewers for useful comments.
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 815464).
1. What are the main contributions of the paper regarding sparse regression and principal component analysis?
2. How does the proposed method achieve optimal error guarantees for PCA and sparse regression under oblivious perturbations?
3. What is the potential application of the proposed machinery in other estimation problems?
4. What are the strengths and weaknesses of the paper regarding its theoretical analysis and experimental results?
5. Could you provide more details about the desired size of m in the well-spreadness property?
6. Are there any typos or errors in the paper that need correction?
Summary Of The Paper

This paper studies sparse regression and principal component analysis under oblivious perturbations. Main contributions: the authors propose estimators that achieve the optimal error guarantees for PCA and sparse regression under oblivious perturbations, respectively, by minimizing the Huber loss function with suitable regularization. For sparse regression, the estimator also achieves the optimal sample complexity. This machinery has the potential to be applied to other estimation problems.

Post rebuttal: I acknowledge that I have read the authors' responses and the other reviewers' comments.

Review

This paper is reasonably well written, with a clear introduction of the problem and comparison with related work. The result of the paper improves over the state of the art. Some comments on improving the paper: the current version is very theoretical; it would be better if some experimental results could be added. The well-spreadness property at line 181 is not clear: as currently stated, it sounds as if the property is satisfied as long as it holds for an arbitrary choice of m, but my understanding is that m needs some lower bound (otherwise setting m = 0 is trivial); what is the desired size of m? Typos in the paper: (1) at line 57, "preturbations" should be "perturbations".
In fact, any instance of matrix completion can be easily transformed into an instance of our PCA model: for the entries (i , j) that we do not observe, we can set Ni , j to some arbitrarily large value ±C(ρ, n) ρ/n, making the signal-to-noise ratio of the entry arbitrarily small. The observed (i.e. uncorrupted) Θ(α · n2) entries may additionally be perturbed by Gaussian noise with variance Θ(ζ2). The error guarantees of the estimator in Theorem 2.1 is 4For n × m matrix A, ‖A‖max maxi∈[n], j∈[m] |Ai j |. 5We remark that in (CLMW11) the authors showed that they can consider non-symmetric noise when the fraction of inliers is large α > 1/2. However for smaller fraction of inliers their analysis requires the entries of the noise to be symmetric and independent, so for α < 1/2, their assumptions are captured by Theorem 2.1. 6A rank-r n × n dimensional matrix M is µ-incoherent if its singular vector decomposition M : UΣV> satisfies maxi∈[n]‖U>ei ‖2 6 µr n , maxi∈[n]‖V>ei ‖2 6 µr n and ‖UV>‖∞ 6 √ µr n . O ( ( ρ/n + ζ )√ rn/α ) . Thus, the dependency on the parameters ρ, n , ζ, and r is the same as in matrix completion and the error is within a factor of Θ( √ 1/α) from the optimum for matrix completion. However, this worse dependency on α is intrinsic to the more general model considered and it turns out to be optimal (see Theorem 2.4). On a high level, the additional factor of Θ( √ 1/α) comes from the fact that in our PCA model we do not know which entries are corrupted. The main consequence of this phenomenon is that a condition of the form α & √ r/n appears inherent to achieve consistency. To get some intuition on why this condition is necessary, consider the Wigner model where we are given a matrix Y xx> + σW for a flat vector x ∈ {±1}n and a standard Gaussian matrix W . Note, that the entries of W fit our noise model for ζ 1 , ρ/n 1 , r 1 and α Θ(1/σ). The spectral norm of σW concentrates around 2σ √ n and thus it is information-theoretically impossible to approximately recover the vector x for σ 1/α ω( √ n) (see (PWBM16)). 2.2 Sparse regression Our regression model considers a fixed design matrix X ∈ n×d and observations y : Xβ∗+η ∈ n where β∗ is an unknown k-sparse parameter vector and η is random noise with (|ηi | 6 1) > α for all i ∈ [n]. Earlier works (BJKK17), (SBRJ19) focused on the setting that the design matrix consists of i.i.d. rows with Gaussian distribution N(0,Σ) and the noise is η ζ + w where ζ is deterministic (α · n)-sparse vector and w is subgaussian. As in (dNS21), our results for a fixed design and random noise can, in fact, extend to yield the same guarantees for this early setting (see Theorem 2.3). Hence, a key advantage of our results is that the design X does not have to consist of Gaussian entries. Remarkably, we can handle arbitrary deterministic designs as long as they satisfy some mild conditions. Concretely, we make the following three assumptions, the first two of which are standard in the literature of sparse regression (e.g., see (Wai19), section 7.3): 1. For every column X i of X, X i 6 √n. 2. Restricted eigenvalue property (RE-property): For every vector u ∈ d such that7 usupp(β∗) 1 > 0.1 · ‖u‖1, we have 1n ‖Xu‖2 > λ · ‖u‖2 for some parameter λ > 0. 3. Well-spreadness property: For some (large enough) m ∈ [n] and for every vector u ∈ d such that usupp(β∗) 1 > 0.1 · ‖u‖1 and for every subset S ⊆ [n] with |S | > n − m, it holds that ‖(Xu)S‖ > 12 ‖Xu‖. Denote F2(β) : n∑ i 1 f2 ( yi − 〈Xi , β〉 ) , where Xi are the rows of X, and f2 is as in Eq. (2.1). 
We devise our estimator for sparse regression and state its statistical guarantees below: β̂ B arg min β∈ d ( F2(β) + 100 √ n log d · β 1) . (2.3) Theorem 2.2. Let β∗ ∈ d be an unknown k-sparse vector and let X ∈ n×d be a deterministic matrix such that for each column X i of X, ‖X i ‖ 6 √ n, satisfying the RE-property with λ > 0 and well-spreadness property with m & k log d λ·α2 (recall that n > m). Further, let η be an n-dimensional random vector with independent, symmetrically distributed (about zero) entries and α mini∈[n] { ηi 6 1}. Then with probability at least 1 − d−10 over η, given X and y Xβ∗ + η, the estimator Eq. (2.3) satisfies 1 n X (β̂ − β∗) 2 6 O ( 1 λ · k log d α2 · n ) and β̂ − β∗ 2 6 O ( 1 λ2 · k log d α2 · n ) . There are important considerations when interpreting this theorem. The first is the special case η ∼ N(0, σ2)n , which satisfies our model for α Θ(1/σ). For this case, it is well known (e.g., 7For a vector v ∈ d and a set S ⊆ [d], we denote by vS the restriction of v to the coordinates in S. see (Wai19), section 7.3) that under the same RE assumption, the LASSO estimator achieves a prediction error rate of O( σ2λ · k log d n ) O( k log d λ·α2 ·n ), matching our result. Moreover, this error rate is essentially optimal. Under a standard assumption in complexity theory (NP 1 P/poly), the RE assumption is necessary when considering polynomial-time estimators (ZWJ14). Further, this also shows that the dependence on the RE constant seems unavoidable. Under mild conditions on the design matrix, trivially satisfied if the rows are i.i.d. Gaussian with covariance Σ whose condition number is constant, our guarantees are optimal up to constant factors for all estimators if k 6 d1−Ω(1) (e.g. k 6 d0.99), see (RWY11). This optimality also shows that our bound on the number of samples is best possible since otherwise we would not be able to achieve vanishing error. The (non-sparse version of) well-spreadness property was first used in the context of regression in (dNS21). In the same work the authors also showed that, under oblivious noise assumptions, some weak form of spreadness property is indeed necessary. The second consideration is the optimal dependence on α: Theorem 2.2 achieves consistency as long as the fraction of inliers satisfies α ω( √ k log d/n). To get an intuition, observe that lower bounds for standard sparse regression show that, already for η ∼ N(0, σ · Idn), it is possible to achieve consistency only for n ω(σ2k log d) (if k 6 d1−Ω(1)). As for this η, the number of entries of magnitude at most 1 is O(n/σ) with high probability, it follows that for α Θ(1/σ) 6 O (√ (k log d)/n ) , no estimator is consistent. To the best of our knowledge, Theorem 2.2 is the first result to achieve consistency under such minimalistic noise settings and deterministic designs. Previous results (BJKK17; SBRJ19; dNS21) focused on simpler settings of Gaussian design X and deterministic noise, and provide no guarantees for more general models. The techniques for Theorem 2.2 also extend to this case. Theorem 2.3. Let β∗ ∈ d be an unknown k-sparse vector and let X be a n-by-d random matrix with i.i.d. rows X1 , . . .Xn ∼ N(0,Σ) for a positive definite matrix Σ. Further, let η ∈ n be a deterministic vector with α·n coordinates bounded by 1 in absolute value. Suppose that n & ν(Σ)·k log d σmin(Σ)·α2 , where ν(Σ) is the maximum diagonal entry of Σ and σmin(Σ) is its smallest eigenvalue. Then, with probability at least 1 − d−10 over X , given X and y Xβ∗ + η, the estimator Eq. 
(2.3) satisfies 1 n X (β̂ − β∗) 2 6 O ( ν(Σ) · k log d σmin(Σ) · α2 · n ) and β̂ − β∗ 2 6 O ( ν(Σ) · k log d σ2min(Σ) · α2 · n ) . Even for standard Gaussian design X ∼ N(0, 1)n×d , the above theorem improves over previous results, which required sub-optimal sample complexity n & ( k/α2 ) · log d · log β∗ . For non-spherical Gaussian designs, the improvement over state of the art (SBRJ19) is more serious: their algorithm requires α > Ω(1/log log n), while our Theorem 2.3 doesn’t have such restrictions and works for all α & √ ν(Σ) σmin(Σ) · k log d n ; in many interesting regimes α is allowed to be smaller than n −Ω(1). The dependence on α is nearly optimal: the estimator is consistent as long as α > ω (√ ν(Σ) σmin(Σ) · k log d n ) , and from the discussion after Theorem 2.3, if α 6 O( √ (k log d)/n), no estimator is consistent. Note that while we can deal with general covariance matrices, to compare Theorem 2.3 with Theorem 2.2 it is easier to consider Σ in a normalized form, when ν(Σ) 6 1. This can be easily achieved by scaling X . Also note that Theorem 2.2 can be generalized to the case ‖X i ‖ 6 √ νn for arbitrary ν > 0, and then the error bounds and the bound on m should be multiplied by ν. The RE-property of Theorem 2.2 is a standard assumption in sparse regression and is satisfied by a large family of matrices. For example, with high probability a random matrix X with i.i.d. rows sampled from N(0,Σ), with positive definite Σ ∈ d×d whose diagonal entries are bounded by 1, satisfies the RE-property with parameter Ω(σmin(Σ)) for all subsets of [d] of size k (so for every possible support of β∗) as long as long as n & 1σmin(Σ) · k log d (see (Wai19), section 7.3.3). The wellspread assumption is satisfied for such X with for all sets S ⊂ [n] of size m 6 cn (for sufficiently small c) and for all subsets of [d] of size k as long as n & 1σmin(Σ) · k log d. 2.3 Optimal fraction of inliers for principal component analysis under oblivious noise We show here that the dependence on α we obtain in Theorem 2.1 is information theroetically optimal up to constant factors. Concretely, let L∗ ,N , Y , α, ρ and ζ be as in Theorem 2.1, and let 0 < ε < 1 and 0 < δ < 1. A successful (ε, δ)-weak recovery algorithm for PCA is an algorithm that takes Y as input and returns a matrix L̂ such that L̂ − L∗ F 6 ε · ρ with probability at least 1 − δ. It can be easily seen that the Huber-loss estimator of Theorem 2.1 fails to be a successful weakrecovery algorithm if α o( √ r/n) (for both cases ζ 6 ρ/n and ρ/n 6 ζ, we need α Ω( √ r/n).) A natural question to ask is whether the condition α Ω( √ r/n) is necessary in general. The following theorem shows that if α o( √ r/n), then weak-recovery is information-theoretically impossible. This means that the (polynomially small) fraction of inliers that the Huber-loss estimator of Theorem 2.1 can deal with is optimal up to a constant factor. Theorem 2.4. There exists a universal constant C0 > 0 such that for every 0 < ε < 1 and 0 < δ < 1, if α : mini , j∈[n] [|Ni , j | 6 ζ] satisfies α < C0 · (1 − ε2)2 · (1 − δ) · √ r/n, and n is large enough, then it is information-theoretically impossible to have a successful (ε, δ)-weak recovery algorithm. The problem remains information-theoretically impossible (for the same regime of parameters) even if we assume that L∗ is incoherent; more precisely, even if we know that L∗ has incoherence parameters that are as good as those of a random flat matrix of rank r, the theorem still holds. 
3 Techniques To illustrate our techniques in proving statistical guarantees for the Huber-loss estimator, we first use sparse linear regression as a running example. Then, we discuss how the same ideas apply to principal component analysis. Finally, we also remark our techniques for lower bounds. 3.1 Sparse linear regression under oblivious noise We consider the model of Theorem 2.2. Our starting point to attain the guarantees for our estimator Eq. (2.3), i.e., β̂ B arg minβ∈ d F2(β) + 100 √ n log d β 1, is a classical approach for M-estimators (see e.g. (Wai19), chapter 9). For simplicity, we will refer to F2(β) as the loss function and to β 1 as the regularizer. At a high level, it consists of the following two ingredients: (I) an upper bound on some norm of the gradient of the loss function at the parameter β∗, (II) a lower bound on the curvature of the loss function (in form of a local strong convexity bound) within a structured neighborhood of β∗. The structure of this neighborhood can roughly be controlled by choosing the appropriate regularizer. The key aspect of this strategy is that the strength of the statistical guarantees of the estimator crucially depends on the directions and the radius in which we can establish lower bounds on the curvature of the function. Since these features are inherently dependent on the landscape of the loss function and the regularizer, they may differ significantly from problem to problem. This strategy has been applied successfully for many related problems such as compressed sensing or matrix completion albeit with standard noise assumptions.8 Under oblivious noise, (dNS21) used a particular instantiation of this framework to prove optimal convergence of the Huber-loss – without any regularizer – for standard linear regression. Such estimator, however, doesn’t impose any structure on the neighborhood of β∗ considered in (II) and thus, can only be used to obtain sub-optimal guarantees for sparse regression. In the context of sparse regression, the above two conditions translate to: (I) an upper bound on the largest entry in absolute value of the gradient of the loss function at β∗, and (II) a lower bound on the curvature of F2 within the set of approximately k-sparse vectors9 close to β∗. We use this recipe to show that all approximate minimizers of F are close to β∗. While the idea of restricting to only approximately sparse directions has also been applied for the LASSO estimator in sparse 8The term "standard noise assumptions" is deliberately vague; as a concrete example, we will refer to (sub)-Gaussian noise distributions. See again chapter 9 of (Wai19) for a survey. 9We will clarify this notion in the subsequent paragraphs. regression under standard (sub)-Gaussian noise, in the presence of oblivious noise, our analysis of the Huber-loss function requires a more careful approach. More precisely, under the assumptions of Theorem 2.2, the error bound can be computed as O ( s‖G‖∗reg κ ) , (3.1) where G is a gradient of Huber loss at β∗, ‖·‖∗reg is a norm dual to the regularization norm (which is equal to ‖·‖max for `1 regularizer), s is a structure parameter, which is equal to √ k/λ, and κ is a restricted strong convexity parameter. Note that by error here we mean 1√ n ‖X(β̂ − β∗)‖. Similarly, under under assumptions of Theorem 2.1, we get the error bound Eq. (3.1), where G is a gradient of Huber loss at L∗, ‖·‖∗reg is a norm dual to the nuclear norm (i.e. 
the spectral norm), structure parameter s is √ r, and κ is a restricted strong convexity parameter. For more details on the conditions of the error bound, see the supplementary material. Below, we explore the bounds on the norm of the gradient and on the restricted strong convexity parameter. Bounding the gradient of the Huber loss. The gradient of the Huber-loss F2(·) at β∗ has the form ∇F2(β∗) ∑n i 1 f ′ 2 [ ηi ] · Xi , where Xi is the i-th row of X. The random variables f ′2 [ ηi ] , i ∈ [n], are independent, centered, symmetric and bounded by 2. Since we assume that each column of X has norm at most √ n, the entries of the row Xi are easily bounded by √ n. Thus, ∇F2(β∗) is a vector with independent, symmetric entries with bounded variance, so its behavior can be easily studied through standard concentration bounds. In particular, by a simple application of Hoeffding’s inequality, we obtain, with high probability, ∇F2(β∗) max maxj∈[d] ∑ i∈[n] f ′2 [ ηi ] · Xi j 6 O (√n log d) . (3.2) Local strong convexity of the Huber loss. Proving local strong convexity presents additional challenges. Without the sparsity constraint, (dNS21) showed that under a slightly stronger spreadness assumptions than Theorem 2.2, the Huber loss is locally strongly convex within a constant radius R ball centered at β∗ whenever n & d/α2. (This function is not globally strongly convex due to its linear parts.) Using their result as a black-box, one can obtain the error guarantees of Theorem 2.2, but with suboptimal sample complexity. The issue is that with the substantially smaller sample size of n > Õ(k/α2) that resembles the usual considerations in the context of sparse regression, the Huber loss is not locally strongly convex around β∗ uniformly across all directions, so we cannot hope to prove convergence with optimal sample complexity using this argument. To overcome this obstacle, we make use of the framework of M-estimators: Since we consider a regularized version of the Huber loss, it will be enough to show local strong convexity in a radius R uniformly across all directions which are approximately k-sparse. For this substantially weaker condition, Õ(k/α2) will be enough. More in details, for observations y Xβ∗ + η and an arbitrary u ∈ d of norm ‖u‖ 6 R, it is possible to lower bound the Hessian10 of the Huber loss at β∗ + u by:11 HF2(β∗ + u) n∑ i 1 f ′′2 [ (Xu)i − ηi ] · XiXiT n∑ i 1 1[|(Xu)i−ηi |62] · XiXi T M(u) : n∑ i 1 1[|〈Xi ,u〉|61] · 1[|ηi |61] · XiXi T As can be observed, we do not attempt to exploit cancellations between Xu and η. Let Q : { i ∈ [n] ηi 6 1} be the set of uncorrupted entries of η. Given that with high probability Q has size Ω(α ·n), the best outcome we can hope for is to provide a lower bound of the form 〈u ,M(u)u〉 > αn in the direction β̂ − β∗ . In (dNS21), it was shown that if the span of the measurement matrix X is well spread, then 〈u ,M(u)u〉 > Ω(α · n). 10The Hessian does not exist everywhere. Nevertheless, the second derivative of the penalty function f2 exists as an L1 function in the sense that f ′2(b) − f ′ 2(a) 2 ∫ b a 1[|t |62]dt. This property is enough for our purposes. 11A more extensive explanation of the first part of this analysis can be found in (dNS21). If the direction β̂− β∗ was fixed, it would suffice to show the curvature in that single direction through the above reasoning. However, β̂ depends on the unknown random noise vector η . Without the regularizer in Eq. 
(2.3), this dependence indicates that the vector β̂ may take any possible direction, so one needs to ensure local strong convexity to hold in a constant-radius ball centered at β∗. That is, min‖u‖6R λmin(M(u)) > Ω(α · n) . It can be shown through a covering argument (of the ball) that this bound holds true for n > d/α2. This is the approach of (dNS21). The minimizer of the Huber loss follows a sparse direction. The main issue with the above approach is that no information concerning the direction β̂ − β∗ is used. In the settings of sparse regression, however, our estimator contains the regularizer ‖β‖1. The main consequence of the regularizer is that the direction β̂−β∗ is approximately flat in the sense ‖β̂−β∗‖1 6 O (√ k‖β̂ − β∗‖ ) . The reason12 is that due to the structure of the objective function in Eq. (2.3) and concentration of the gradient Eq. (3.2), the penalty for dense vectors is larger than the absolute value of the inner product 〈∇F(β∗), β̂ − β∗〉 (which, as previously argued, concentrates around its (zero) expectation). This specific structure of the minimizer implies that it suffices to prove local strong convexity only in approximately sparse directions. For these set of directions, we carefully construct a sufficiently small covering set so that n > Õ(k/α2) samples suffice to ensure local strong convexity over it. Remark 3.1 (Comparison with LASSO). it is important to remark that while this approach of only considering approximately sparse directions has also been used in the context of sparse regression under Gaussian noise (e.g. the LASSO estimator), obtaining the desired lower bound is considerably easier in these settings as it directly follows from the restricted eigenvalue property of the design matrix. In our case, we require an additional careful probabilistic analysis which uses a covering argument for the set of approximately sparse vectors. As we see however, it turns out that we do not need any additional assumptions on the design matrix when compared with the LASSO estimator except for the well-spreadness property (recall that some weak version of well-spreadness is indeed necessary in robust settings, see (dNS21)). 3.2 Principal component analysis under oblivious noise A convenient feature of the approach in Section 3.1 for sparse regression, is that it can be easily applied to additional problems. We briefly explain here how to apply it for principal component analysis. We consider the model defined in Theorem 2.1. We use an estimator based on the Huber loss equipped with the nuclear norm as a regularizer to enforce the low-rank structure in our estimator L̂ B argmin L∈ n×n , ‖L‖max6ρ/n ( Fζ+ρ/n(Y − L) + 100 √ n ( ζ + ρ/n ) ‖L‖nuc ) . (3.3) In this setting, the gradient ∇Fζ+ρ/n(Y − L∗) is a matrix with independent, symmetric entries which are bounded (by ζ + ρ/n) and hence its spectral norm is O ( (ζ + ρ/n) √ n ) with high probability. Local strong convexity can be obtained in a similar fashion as shown in Section 3.1: due to the choice of the Huber transition point all entries with small noise are in the quadratic part of F. Moreover, the nuclear norm regularizer ensures that the minimizer is an approximately low-rank matrix in the sense that ‖M‖nuc 6 O (√ r‖M‖F ) . So again, it suffices to provide curvature of the loss function only on these subset of structured directions. Remark 3.2 (Incoherence vs. spikiness). Recall the discussion on incoherence in Section 2. 
Remark 3.2 (Incoherence vs. spikiness). Recall the discussion on incoherence in Section 2. If L∗ does not satisfy the µ-incoherence conditions for any µ ≤ n/log² n, the results of (CLMW11; ZLW+10) cannot be applied. However, our estimator still achieves error ‖L̂ − L∗‖F/‖L∗‖F → 0 as n → ∞. Indeed, let ω(1) ≤ f(n) ≤ o(log² n) and assume ζ = 0. Let u ∈ ℝⁿ be an f(n)-sparse unit vector whose nonzero entries are equal to 1/√f(n), and let v ∈ ℝⁿ be the vector with all entries equal to 1/√n. Then uvᵀ does not satisfy incoherence with any µ < n/f(n). We have ‖uvᵀ‖F = 1, and the error of our estimator is O(1/(α√f(n))), so it tends to zero for constant (or even some subconstant) α. Furthermore, notice that the dependence of the error in Theorem 2.1 on the maximal entry of L∗ is inherent if we do not require incoherence. Indeed, consider L1 = b · e1e1ᵀ for large enough b > 0 and L2 = e2e2ᵀ. For constant α, let |Nij| be 1 with probability α/2, 0 with probability α/2, and b with probability 1 − α. Then, given Y, we cannot even distinguish between the cases L∗ = L1 and L∗ = L2, and since ‖L1 − L2‖F ≥ b, the error must also depend on b.

Remark 3.3 (α vs. α²: what if one knows which entries are corrupted?). As observed in Section 2, the error bound of our estimator is worse than the error for matrix completion by a factor of 1/√α. We observe a similar effect in linear regression: if, as in matrix completion, we are given a randomly chosen α fraction of the observations {(Xi, yi = ⟨Xi, β∗⟩ + ηi)}_{i=1}^n, where η ∼ N(0, 1)ⁿ, and for the remaining samples we may not assume any bound on the signal-to-noise ratio, then this problem is essentially the same as linear regression with αn observations. Thus the optimal prediction error rate is Θ(√(d/(αn))). Now, if instead we have y = Xβ∗ + η with η ∼ N(0, 1/α²)ⁿ, then |ηi| ≤ 1 with probability Θ(α), but the optimal prediction error rate in this case is Θ(√(d/(α²n))). So in both linear regression and robust PCA, prior knowledge of the set of corrupted entries makes the problem easier.

3.3 Optimal fraction of inliers for principal component analysis under oblivious noise

In order to prove Theorem 2.4, we adopt a generative model for the hidden matrix L∗: we generate L∗ randomly, but assume that its distribution is known to the algorithm. This makes the problem easier; therefore, any impossibility result for this generative model implies impossibility for the more restrictive model in which L∗ is deterministic but unknown. We generate a random flat matrix L∗ using n · r independent and uniform random bits, in such a way that L∗ has rank r and is incoherent with high probability. Then, for every constant 0 < ξ < 1, we find a distribution for the random noise N such that the fraction of inliers satisfies α := ℙ[|Nij| ≤ ζ] = Θ(ξ · √(r/n)), and such that the mutual information between L∗ and Y = L∗ + N can be upper bounded as I(L∗; Y) ≤ O(ξ · n · r). Roughly speaking, the smaller ξ gets, the more independent L∗ and Y become. Now, using an inequality similar to the standard Fano inequality but adapted to weak recovery, we show that if there is a successful (ε, δ)-weak-recovery algorithm for L∗ and N, then I(L∗; Y) ≥ Ω((1 − ε²)² · (1 − δ) · n · r). By combining these observations, we deduce that if ξ is small enough, no successful (ε, δ)-weak-recovery algorithm for L∗ and N can exist.
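Spelling out the final step (a direct combination of the two displayed bounds, in the same notation):

Ω((1 − ε²)² (1 − δ) · n r) ≤ I(L∗; Y) ≤ O(ξ · n r)  ⟹  ξ ≥ Ω((1 − ε²)² (1 − δ)),

so choosing ξ below this constant (which depends only on ε and δ) contradicts the existence of a successful (ε, δ)-weak-recovery algorithm.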
Acknowledgments and Disclosure of Funding

The authors thank the anonymous reviewers for useful comments. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 815464).
1. What is the focus of the paper in terms of robust estimation? 2. What are the strengths of the paper regarding its comparison to prior works and clarity of technique discussions? 3. What are the weaknesses of the paper regarding its claims of providing a general framework? 4. Why did the authors choose to focus on the Huber loss function specifically? 5. How could the paper be improved to make it more general and acceptable?
Summary Of The Paper Review
Summary Of The Paper The authors prove non-trivial estimation error bounds for a type of robust estimator for sparse linear regression and PCA. Specifically, the estimator is constructed by minimizing the Huber loss function with L1 regularization. Review I think the paper is overall well written. For each problem analyzed in this paper, there are sufficient comparisons with prior work in terms of the sharpness of the error bound, and the paper shows when the error bounds derived here are tighter. The discussion of the techniques is clear. My major concerns are: While the authors claim that their techniques provide a general framework for proving tight error bounds for Huber-loss-based robust estimators, besides the proof details specific to the two problems studied in the paper, I do not find anything new in the approach. To claim a general framework, I would expect a statement that, as long as the problem satisfies certain general properties, a tight error bound holds for a type of robust estimator. Related to the above point, it is unclear to me why the authors only looked at the Huber loss function. It would be nicer and more general to show that the error bounds hold as long as the loss function satisfies some assumptions, then point out that the Huber loss is one example, and hopefully show a second example of a robust loss function. Overall, despite these concerns, I think it is a good paper and should be accepted.
NIPS
Title Rethinking Space-Time Networks with Improved Memory Coverage for Efficient Video Object Segmentation Abstract This paper presents a simple yet effective approach to modeling space-time correspondences in the context of video object segmentation. Unlike most existing approaches, we establish correspondences directly between frames without re-encoding the mask features for every object, leading to a highly efficient and robust framework. With the correspondences, every node in the current query frame is inferred by aggregating features from the past in an associative fashion. We cast the aggregation process as a voting problem and find that the existing inner-product affinity leads to poor use of memory, with a small (fixed) subset of memory nodes dominating the votes regardless of the query. In light of this phenomenon, we propose using the negative squared Euclidean distance instead to compute the affinities. We validate that every memory node now has a chance to contribute, and experimentally show that such diversified voting is beneficial to both memory efficiency and inference accuracy. The synergy of correspondence networks and diversified voting works exceedingly well and achieves new state-of-the-art results on both the DAVIS and YouTubeVOS datasets, while running significantly faster at 20+ FPS for multiple objects, without bells and whistles.

1 Introduction

Video object segmentation (VOS) aims to identify and segment target instances in a video sequence. This work focuses on the semi-supervised setting, where the first-frame segmentation is given and the algorithm needs to infer the segmentation for the remaining frames. This task is an extension of video object tracking [1, 2], requiring detailed object masks instead of simple bounding boxes. A high-performing algorithm should be able to delineate an object from the background or other distractors (e.g., similar instances) under partial or complete occlusion, appearance changes, and object deformation [3]. Most current methods either fit a model using the initial segmentation [4, 5, 6, 7, 8, 9] or leverage temporal propagation [10, 11, 12, 13, 14, 15, 16], particularly with spatio-temporal matching [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]. Space-Time Memory networks [18] have recently been especially popular due to their high performance and simplicity – many variants [22, 16, 23, 21, 24, 28, 29, 30], including competition winners [31, 32], have been developed to improve the speed, reduce the memory usage, or regularize the memory readout process of STM. In this work, we aim to subtract from STM to arrive at a minimalistic form of matching networks, dubbed Space-Time Correspondence Network (STCN)1. Specifically, we start from the basic premise that correspondences are target-agnostic. Instead of building a specific memory bank (and therefore affinity) for every object in the video as in STM, we build a single affinity matrix using only RGB relations. For querying, each target object passes through the same affinity matrix for feature transfer. This is not only more efficient but also more robust – the model is forced to learn all object relations beyond just the labeled ones.

†This work was done at The Hong Kong University of Science and Technology.
1Training/inference code and pretrained models: https://github.com/hkchengrex/STCN
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
With the learned affinity, the algorithm can propagate features from the first frame to the rest of the video sequence, with intermediate features stored as memory. While STCN already reaches state-of-the-art performance and speed in this simple form, we probe further into the inner workings of the construction of affinities. Traditionally, affinities are constructed from dot products followed by a softmax, as in attention mechanisms [18, 33]. This, however, implicitly encodes "confidence" (magnitude), with high-confidence points dominating the affinities all the time, regardless of the query features. Some memory nodes will therefore always be suppressed, and the (large) memory bank will be underutilized, reducing effective diversity and robustness. We find this to be harmful and propose using the negative squared Euclidean distance as the similarity measure, with an efficient implementation. Though simple, this small change ensures that every memory node has a chance to contribute significantly (given the right query), leading to better performance, higher robustness, and more efficient use of memory. Our contribution is three-fold:

• We propose STCN with direct image-to-image correspondence that is simpler, more efficient, and more effective than STM.
• We examine the affinity in detail, and propose using L2 similarity in place of the dot product for better memory coverage, where every memory node contributes instead of just a few.
• The synergy of the above two results in a simple and strong method, which surpasses previous state-of-the-art performance without additional complications while running fast at 20+ FPS.

2 Related Works

Correspondence Learning Finding correspondences is one of the most fundamental problems in computer vision. Local correspondences have been used heavily in optical flow [34, 35, 36] and object tracking [37, 38, 39] with fast running times and high performance. More explicit correspondence learning has also been achieved with deep learning [40, 41, 42]. Few-shot learning can be considered a matching problem, where the query is compared with every element in the support set [43, 44, 45, 46]. Typical approaches use a Siamese network [47] and compare the embedded query/support features using a similarity measure such as cosine similarity [43], squared Euclidean distance [48], or even a learned function [49]. Our task can also be formulated as a few-shot problem, where our memory bank acts as the support set. This connection helps us with the choice of similarity function, although we are dealing with a million times more pointwise comparisons.

Video Object Segmentation Early VOS methods [4, 5, 50] employ online first-frame finetuning, which is very slow at inference, and have been gradually phased out. Faster approaches have been proposed, such as more efficient online learning algorithms [8, 6, 7], MRF graph inference [51], temporal CNNs [52], capsule routing [53], tracking [11, 13, 15, 54, 55, 56, 57], embedding learning [10, 58, 59], and space-time matching [17, 18, 19, 20]. Embedding learning bears a high similarity to space-time matching, both attempting to learn a deep feature representation of an object that remains consistent across a video. Usually, embedding learning methods are more constrained [10, 58], adopting a local search window and hard one-to-one matching. We are particularly interested in the class of Space-Time Memory networks (STM) [18], which are the backbone for many follow-up state-of-the-art VOS methods.
STM constructs a memory bank for each object in the video and matches every query frame to the memory bank to perform "memory readout". Newly inferred frames can be added to the memory, and the algorithm then propagates forward in time. Derivatives either apply STM to other tasks [21, 60], improve the training data or augmentation policy [21, 22], augment the memory readout process [16, 21, 22, 24, 28], use optical flow [29], or reduce the size of the memory bank by limiting its growth [23, 30]. MAST [61] is an adjacent line of research focused on unsupervised learning with a photometric reconstruction loss. Without the input mask, they use Siamese networks on RGB images to build the correspondence out of necessity. In this work, we deliberately build such connections and establish that building correspondences between images is a better choice, even when input masks are available, rather than a concession. We propose to overhaul STM into STCN, where the construction of the affinity is redefined to be between frames only. We also take a close look at the similarity function, which has always been the dot product in all STM variants, and make changes and comparisons according to our findings. The resultant framework is both faster and better while still principled. STCN is even fundamentally simpler than STM, and we hope that STCN can be adopted as a new and efficient backbone for future works.

3 Space-Time Correspondence Networks (STCN)

Given a video sequence and the first-frame annotation, we process the frames sequentially and maintain a memory bank of features. For each query frame, we extract a key feature, which is compared with the keys in the memory bank, and retrieve corresponding value features from memory using key affinities, as in STM [18].

3.1 Feature Extraction

Figure 1 illustrates the overall flow of STCN. While STM [18] parameterizes a Query Encoder (image as input) and a Memory Encoder (image and mask as input) with two ResNet50s [62], we instead construct a Key Encoder (image as input) and a Value Encoder (image and mask as input) with a ResNet50 and a ResNet18, respectively. Thus, unlike in STM [18], the key features (and thus the resultant affinity) can be extracted independently of the mask, computed only once for each frame, and are symmetric between memory and query.2 The rationales are that 1) correspondences (key features) are more difficult to extract than values, hence the deeper network, and 2) correspondences should exist between frames in a video, and there is little reason to introduce the mask as a distraction. From another perspective, we are using a Siamese structure [47], widely adopted in few-shot learning [63, 49], for computing the key features, as if our memory bank were the few-shot support set. As the key features are independent of the mask, we can reuse the "query key" later as a "memory key" if we decide to turn the query frame into a memory frame during propagation (the strategy is discussed in Section 3.3). This means the key encoder is used exactly once per image in the entire process, despite its two appearances in Figure 1 (drawn twice for brevity).

2That is, matching between two points does not depend on whether they are query or memory points (not true in STM [18], as they come from different encoders).

Architecture. Following STM practice [18], we take the res4 features with stride 16 from the base ResNets as our backbone features and discard res5.
A 3×3 convolutional layer without non-linearity is used as a projection head from the backbone feature to either the key space (Ck-dimensional) or the value space (Cv-dimensional). We set Cv to 512 following STM and discuss the choice of Ck in Section 4.1.

Feature reuse. As seen from Figure 1, both the key encoder and the value encoder process the same frame, albeit with different inputs. It is natural to reuse features from the key encoder (with fewer inputs and a deeper network) at the value encoder. To avoid bloating the feature dimensions, and for simplicity, we concatenate the last-layer features from both encoders (before the projection head) and process them with two ResBlocks [62] and a CBAM block3 [64] to produce the final value output.

3.2 Memory Reading and Decoding

Given T memory frames and a query frame, the feature extraction step generates the following: memory key kM ∈ R^(Ck×THW), memory value vM ∈ R^(Cv×THW), and query key kQ ∈ R^(Ck×HW), where H and W are the (stride 16) spatial dimensions. Then, for any similarity measure c : R^Ck × R^Ck → R, we can compute the pairwise affinity matrix S and the softmax-normalized affinity matrix W, where S, W ∈ R^(THW×HW), with:

Sij = c(kMi, kQj),  Wij = exp(Sij) / ∑n exp(Snj),  (1)

where ki denotes the feature vector at the i-th position. The similarities are normalized by √Ck, as is standard practice [18, 33]; this is omitted for brevity. In STM [18], the dot product is used as c. Memory reading regularization, like KMN [22] or top-k filtering [21], can be applied at this step. With the normalized affinity matrix W, the aggregated readout feature vQ ∈ R^(Cv×HW) for the query frame can be computed as a weighted sum of the memory features with an efficient matrix multiplication:

vQ = vMW,  (2)

which is then passed to the decoder for mask generation. In the case of multi-object segmentation, only Equation 2 has to be repeated, as W is defined between image features only and is thus the same for different objects. In the case of STM [18], W must be recomputed instead. A detailed running time analysis can be found in Section 6.2.

Decoder. Our decoder structure stays close to that of STM [18], as it is not the focus of this paper. Features are processed and gradually upsampled by a factor of two at each stage, with higher-resolution features from the key encoder incorporated using skip-connections. The final layer of the decoder produces a stride 4 mask, which is bilinearly upsampled to the original resolution. In the case of multiple objects, soft aggregation [18] of the output masks is used.
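The readout step of Eqs. (1)-(2) takes only a few lines of PyTorch. The sketch below is our own illustration (the function and argument names are hypothetical; the similarity function sim is a placeholder chosen in Section 4):

import torch

def memory_readout(k_mem, v_mem, k_qry, sim):
    # k_mem: (Ck, T*H*W) memory key, v_mem: (Cv, T*H*W) memory value,
    # k_qry: (Ck, H*W) query key; sim returns the (T*H*W, H*W) affinity S.
    Ck = k_mem.shape[0]
    S = sim(k_mem, k_qry) / Ck ** 0.5   # Eq. (1), with the sqrt(Ck) normalization
    W = torch.softmax(S, dim=0)         # normalize over the memory axis
    return v_mem @ W                    # Eq. (2): (Cv, H*W) readout feature

For multiple objects, S and W are computed once; only the final matrix product is repeated with each object's value tensor.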
3.3 Memory Management

So far we have assumed the existence of a memory bank of size T. Here, we describe the construction of the memory bank. For each memory frame, we store two items: a memory key and a memory value. Note that all memory frames (except the first one) were once query frames. The memory key is simply reused from the query key, as described in Section 3.1, without extra computation. The memory value is computed after mask generation for that frame, independently for each object, as the value encoder takes both the image and the object mask as inputs. STM [18] considers every fifth query frame as a memory frame, and the immediately preceding frame as a temporary memory frame to ensure accurate matching. In the case of STCN, we find that it is unnecessary, and in fact harmful, to include the last frame as temporary memory. This is a direct consequence of using shared key encoders – 1) the key features are sufficiently robust to match well without close-range (temporal) propagation, and 2) the temporary memory key would otherwise be too similar to that of the query, as the image context usually changes smoothly and we do not have the encoding noise resulting from distinct encoders, which leads to drifting.4 This modification also reduces the number of calls to the value encoder, contributing a significant speedup.

3We find this block to be non-essential in a later experiment, but it is kept for consistency.
4This effect is amplified by the use of L2 similarity. See the supplementary material for a full comparison.

Table 1 tabulates the performance comparison between STM and STCN. For a video of length L with m ≥ 1 objects and a final memory bank of size T < L, STM [18] needs to invoke the memory encoder and compute the affinity mL times. Our proposed STCN, on the other hand, invokes the value encoder only mT times and computes the affinity L times. It is therefore evident that STCN is significantly faster. Section 6.2 provides a breakdown of the running time.

4 Computing Affinity

The similarity function c : R^Ck × R^Ck → R plays a crucial role in both STM and STCN, as it supports the construction of the affinity that is central to both correspondences and memory reading. It also has to be fast and memory-efficient, as there can be up to 50M pairwise relations (THW × HW) to compute for just one query frame. To recap, we need to compute the similarity between a memory key kM ∈ R^(Ck×THW) and a query key kQ ∈ R^(Ck×HW). The resultant pairwise affinity matrix is denoted S ∈ R^(THW×HW), with Sij = c(kMi, kQj) denoting the similarity between kMi (the memory feature vector at the i-th position) and kQj (the query feature vector at the j-th position). In the case of the dot product, it can be implemented very efficiently with a matrix multiplication:

S^dot_ij = kMi · kQj  ⇒  S^dot = (kM)ᵀ kQ  (3)

In the following, we also discuss the use of cosine similarity and the negative squared Euclidean distance as similarity functions. They are defined as (with efficient implementations discussed later):

S^cos_ij = (kMi · kQj) / (‖kMi‖2 · ‖kQj‖2),  S^L2_ij = −‖kMi − kQj‖2²  (4)

For brevity, we will use the shorthand "L2" or "L2 similarity" to denote the negative squared Euclidean distance in the rest of the paper. The ranges of the dot product, cosine similarity, and L2 similarity are (−∞,∞), [−1, 1], and (−∞, 0], respectively. Note that cosine similarity has a limited range. Non-related points are encouraged, through back-propagation, to have a low similarity score, such that they have a close-to-zero affinity (Eq. 1) and thus propagate no value (Eq. 2).
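For concreteness, the three candidates of Eqs. (3)-(4) can be written directly in PyTorch (a naive sketch of the definitions; the memory-efficient L2 form follows in Section 4.1):

import torch
import torch.nn.functional as F

def sim_dot(km, kq):          # Eq. (3): (Ck, THW) x (Ck, HW) -> (THW, HW)
    return km.transpose(0, 1) @ kq

def sim_cos(km, kq):          # Eq. (4), cosine: normalize, then dot product
    return F.normalize(km, dim=0).transpose(0, 1) @ F.normalize(kq, dim=0)

def sim_l2_naive(km, kq):     # Eq. (4), L2: materializes a (Ck, THW, HW) tensor
    diff = km.unsqueeze(2) - kq.unsqueeze(1)
    return -(diff ** 2).sum(dim=0)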
4.1 A Closer Look at the Affinity

The affinity matrix is core to STCN and deserves close attention. Previous works [18, 21, 22, 23, 24] use the dot product as the similarity function almost by default – but is this a good choice? Cosine similarity computes the angle between two vectors and is often regarded as the normalized dot product. Conversely, we can consider the dot product a scaled version of cosine similarity, with the scale equal to the product of the vectors' norms. Note that this scale is query-agnostic, meaning that every similarity with a memory key kMi will be scaled by its norm. If we cast the aggregation process (Eq. 2) as voting, with similarity representing the weights, memory keys with large magnitudes will predominantly suppress any representation from other memory nodes. Figure 2 visualizes this phenomenon in a 2D feature space. For the dot product, only a subset of points (labeled as triangles) has a chance to contribute the most for any query. Outliers (top-right, red) can suppress existing clusters; clusters with a dominant value in one dimension (top-left, cyan) can suppress other clusters; and some points may contribute the most in a region even if they lie outside it (bottom-right, beige). These undesirable situations do not happen if the proposed L2 similarity is used: a Voronoi diagram [66] is formed, and every memory point can be fully utilized, naturally leading to a diversified, query-specific voting mechanism. Figure 3 takes a closer look at the same problem with soft weights. With the dot product, the blue/green point has low weights for every possible query in the first quadrant, while a smooth transition is created with our proposed L2 similarity. Note that cosine similarity has the same benefits, but its limited range [−1, 1] means that an extra softmax temperature hyperparameter is required to shape the affinity distribution – one more parameter to tune. L2 works well without extra temperature tuning in our experiments.

Connection to self-attention, and whether some points are more important than others. Dot products have been used extensively in self-attention models [33, 67, 68]. One way to view the dot-product affinity positively is to consider points with large magnitudes as more important – naturally, they should enjoy a higher influence. Admittedly, this is probably true in NLP [33], where a stop word ("the") is almost useless compared to a noun ("London"), or in video classification [67], where the foreground human is far more important than a pixel in the plain blue sky. This is, however, not true for STCN, where pixels are more or less equal. It is beneficial to match every pixel in the query frame accurately, including the background (also noted by [10]). After all, if we know that a pixel is part of the background, we also know that it does not belong to the foreground. In fact, we find STCN can track the background fairly well (floor, lake, etc.) even though it is never explicitly trained to do so. The notion of relative importance therefore does not generally apply in our context.

Efficient implementation. The naïve implementation of the negative squared Euclidean distance in Eq. 4 needs to materialize a Ck × THW × HW element-wise difference matrix, which is then squared and summed. This process is much slower than the simple dot product and cannot be run on the same hardware. A simple decomposition greatly simplifies the implementation, as noted in [69]:

S^L2_ij = −‖kMi − kQj‖2² = 2 kMi · kQj − ‖kMi‖2² − ‖kQj‖2²  (5)

which requires only slightly more computation than the baseline dot product and can be implemented with standard matrix operations. In fact, we can further drop the last term, as softmax is invariant to translation along the target dimension (details in the supplementary material). For cosine similarity, we first normalize the input vectors and then compute the dot product. Table 2 tabulates the actual computational and memory costs.
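A sketch of the decomposition in Eq. (5), including the dropped query-norm term (again our own illustration, not the released implementation):

import torch

def sim_l2(km, kq):
    # Eq. (5): -||km_i - kq_j||^2 = 2 km_i . kq_j - ||km_i||^2 - ||kq_j||^2.
    # The query term -||kq_j||^2 is constant along the memory (softmax) axis,
    # so it can be dropped without changing W.
    return 2 * (km.transpose(0, 1) @ kq) - (km ** 2).sum(0).unsqueeze(1)

Plugging sim_l2 into the readout sketch of Section 3.2 reproduces the full pipeline at essentially the cost of the dot product plus one extra row-norm computation.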
4.2 Experimental Verification

Here, we verify three claims: 1) the aforementioned phenomenon does happen in a high-dimensional key space, with real data and a fully trained model; 2) using L2 similarity diversifies the voting; 3) L2 similarity brings higher efficiency and performance.

Affinity distribution. We verify the first two claims by training two models, with the dot product and L2 similarity respectively as the similarity function, and plotting the maximum contribution made by each memory node in its lifetime. We use the same setting for the two models and report the distribution on the DAVIS 2017 [65] dataset. Figure 4 shows the pertinent distributions. Under the L2 similarity measure, many more memory nodes contribute a fair share. Specifically, around 3% of memory nodes never contribute more than 1% weight under the dot product, while only 0.06% suffer the same fate with L2. Under the dot product, 31% of memory nodes contribute less than 10% weight at best, while the same happens for only 7% of the memory with L2 similarity. To measure the distribution inequality, we additionally compute the Gini coefficient [70] (the higher it is, the more unequal the distribution). The Gini coefficient for the dot product is 44.0, while the Gini coefficient for L2 similarity is much lower at 31.8.

Performance and efficiency. Next, we show that using L2 similarity improves performance with negligible overhead. We compare three similarity measures: dot product, cosine similarity, and L2 similarity. For cosine similarity, we use a softmax temperature of 0.01, while the default temperature of 1 is used for both the dot product and L2 similarity. This scaling is crucial for cosine similarity only, since it is the only one with a limited output range [−1, 1]. Searching for an extra hyperparameter is computationally demanding – we simply picked one that converges fairly quickly without collapsing. Table 2 tabulates the main results. Interestingly, we find that reducing the key space dimension (Ck) is beneficial to both cosine similarity and L2 similarity, but not to the dot product. This can be explained in the context of Section 4.1 – the network needs more dimensions to spread the memory key features out, saving them from being suppressed by high-magnitude points. Cosine similarity and L2 similarity do not suffer from this problem and can utilize the full key space. The reduced key space in turn benefits memory efficiency and improves the running time.

5 Implementation Details

Models are trained with two 11GB 2080Ti GPUs with the Adam optimizer [71] using PyTorch [72]. Following previous practice [18, 21], we first pretrain the model on static image datasets [73, 74, 75, 76, 77] with synthetic deformation, and then perform main training on YouTubeVOS [78] and DAVIS [3, 65]. We also experimented with the synthetic dataset BL30K [79, 80] proposed in [21], which is not used unless otherwise specified. We use a batch size of 16 during pretraining and a batch size of 8 during main training. Pretraining takes about 36 hours and main training around 16 hours, with the batchnorm layers frozen during training, following [18]. Bootstrapped cross entropy is used, following [21]. The full set of hyperparameters can be found in the open-sourced code. In each iteration, we pick three temporally ordered frames (with the ground-truth mask for the first frame) from a video to form a training sample [18]. First, we predict the second frame using the first frame as memory.
The prediction is saved as the second memory frame, and the third frame is then predicted using the union of the first and second frames. The temporal distance between the frames first gradually increases from 5 to 25, as a curriculum learning schedule, and is annealed back to 5 towards the end of training. This process follows the implementation of MiVOS [21]. For memory-read augmentation, we experimented with kernelized memory reading [22] and top-k filtering [21]. We find that top-k works well universally and improves the running time, while kernelized memory reading is slower and does not always help. We find that k = 20 always works better for STCN (the original paper uses k = 50), and we adopt top-k filtering in all our experiments with k = 20. For fairness, we also re-run all experiments in MiVOS [21] with k = 20 and pick the best result in their favor. We use L2 similarity with Ck = 64 in all experiments unless otherwise specified. For inference, a 2080Ti GPU is used with full floating-point precision for a fair running time comparison. We memorize every 5th frame, and no temporary frame is used, as discussed in Section 3.3.

6 Experiments

We mainly conduct experiments on the DAVIS 2017 validation set [65] and the YouTubeVOS 2018 validation set [78]. For completeness, we also include results on the single-object DAVIS 2016 validation set [3] and the expanded YouTubeVOS 2019 validation set [78]. Results for the DAVIS 2017 test-dev set [65] are included in the supplementary material. We first conduct quantitative comparisons with previous methods, and then analyze the running time of each component in STCN. For reference, we also present results without pretraining on static images. Ablation studies have been included in previous sections (Table 1 and Table 2).

6.1 Evaluations

Table 3 tabulates the comparison of STCN with previous methods on semi-supervised video object segmentation benchmarks. For DAVIS 2017 [65], we compare the standard metrics: region similarity J, contour accuracy F, and their average J&F. For YouTubeVOS [78], we report J and F for both seen and unseen categories, and the averaged overall score G. For comparing speed, we compute the multi-object FPS, that is, the total number of output frames divided by the total processing time for the entire DAVIS 2017 [65] validation set. We either copy the FPS directly from papers/project websites, or estimate it based on the reported single-object inference FPS (simply labeled as <5). We use 480p resolution videos for both DAVIS and YouTubeVOS. Tables 4, 5, 6, and 7 tabulate additional results. For the interactive setting, we replace the propagation module of MiVOS [21] with STCN.

Visualizations. Figure 6 visualizes the learned correspondences. Note that our correspondences are general and mask-free, naturally associating every pixel (including background bystanders) even though the model is trained only with foreground masks. Figure 7 visualizes our semi-supervised mask propagation results, with the last row being a failure case (Section 7).

Leaderboard results. Our method is also very competitive on the public VOS challenge leaderboard [78]. Methods on the leaderboard are typically cutting-edge, with engineering extensions like deeper networks, multi-scale inference, and model ensembles. They usually represent the highest achievable performance at the time. On the latest YouTubeVOS 2019 validation split [78], our base model (84.2 G) outperforms the previous challenge winner [32] (based on STM [18], 82.0 G) by a large margin.
With ensemble and multi-scale testing (details in the supplementary material), our method ranked first (86.7 G) at the time of submission on the still-active leaderboard.

6.2 Running Time Analysis

Here, we analyze the running time of each component of STM and STCN on DAVIS 2017 [65]. For a fair comparison, we use our own implementation of STM, enable top-k filtering [21], and set Ck = 64 for both methods, so that all speed improvements come from the fundamental differences between STM and STCN. Our affinity matching time is lower because we compute a single affinity between raw images, while STM [18] computes one for every object. Our value encoder takes much less time than the memory encoder in STM [18] because of our lighter network, feature reuse, and robust memory bank/management, as discussed in Section 3.3.

5A linear extrapolation would severely underestimate the performance of many previous methods.

7 Limitations

To isolate our method from other possible enhancements, we use only fundamentally simple global matching. Like STM [18], we have no notion of temporal consistency, as we do not employ local matching [58, 10, 17] or optical flow [29]. This means we may incorrectly segment objects that are far away but similar in appearance. One such failure case is shown in the last row of Figure 7. We expect that, given our framework's simplicity, our method can be readily extended with temporal consistency considerations for further improvement.

8 Conclusion

We present STCN, a simple, effective, and efficient framework for video object segmentation. We propose to use direct image-to-image correspondence for efficiency and more robust matching, and examine the inner workings of the affinity in detail – L2 similarity is proposed as a result of our observations. With its clear technical advantages, we hope that STCN can serve as a new baseline backbone for future contributions.

Table 4: Results on the DAVIS 2016 validation set.
Method              J&F   J     F
OSMN [8]            73.5  74.0  72.9
MaskTrack [15]      77.6  79.7  75.4
OSVOS [5]           80.2  79.8  80.6
FAVOS [14]          81.0  82.4  79.5
FEELVOS [58]        81.7  81.1  82.2
RGMP [55]           81.8  81.5  82.0
Track-Seg [54]      83.1  82.6  83.6
FRTM-VOS [6]        83.5  -     -
CINN [51]           84.2  83.4  85.0
OnAVOS [50]         85.5  86.1  84.9
PReMVOS [81]        86.8  84.9  88.6
GC [24]             86.8  87.6  85.7
RMNet [29]          88.8  88.9  88.7
STM [18]            89.3  88.7  89.9
CFBI [10]           89.4  88.3  90.5
CFBI+ [83]          89.9  88.7  91.1
MiVOS [21]          90.0  88.9  91.1
SwiftNet [30]       90.4  90.5  90.3
KMN [22]            90.5  89.5  91.5
LCM [28]            90.7  91.4  89.9
Ours                91.6  90.8  92.5
MiVOS [21] + BL30K  91.0  89.6  92.4
Ours + BL30K        91.7  90.4  93.0

Method      AUC-J&F  J&F @ 60s  Time (s)
ATNet [84]  80.9     82.7       55+
STM [85]    80.3     84.8       37
GIS [86]    85.6     86.6       34
MiVOS [21]  87.9     88.5       12
Ours        88.4     88.8       7.3

Table 7: Effects of pretraining on static images/main training on the DAVIS 2017 validation set.
                    J&F   J     F
Pre-training only   75.8  73.1  78.6
Main training only  82.5  79.3  85.7
Both                85.4  82.2  88.6

Figure 7: Visualization of semi-supervised VOS results, with the first column being the reference masks to be propagated. The first two examples show comparisons of our method with STM [18] and MiVOS [21]. In the second example, zoom-in insets (orange) are shown with the corresponding ground-truth insets (green) to highlight their differences. The last row shows a failure case: we cannot distinguish the real duck from the duck picture, as no temporal consistency clue is used in our method.
Broader Impacts Malicious use of VOS software can have negative societal impacts, including but not limited to unauthorized mass surveillance or privacy-infringing human/vehicle tracking. We believe that the task itself is neutral, with positive uses as well, such as video editing for amateurs or making safe self-driving cars. Acknowledgment This research is supported in part by Kuaishou Technology and the Research Grants Council of the Hong Kong SAR under grant no. 16201818.
1. What is the focus and contribution of the paper on space-time correspondence networks? 2. What are the strengths of the proposed approach, particularly in its novel architecture and efficient L2 similarity computation? 3. Do you have any concerns regarding the comparison between the proposed method and recent works, including the work mentioned in [1]? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any minor questions or comments regarding the paper's content, such as the necessity of Resnet+CBAM layers or the difference between the light value encoder and resizing the binary segmentation mask?
Summary Of The Paper Review
Summary Of The Paper This work proposes a space-time correspondence network which improves performance and speed over the standard STM network by finding pixel-level correspondences between individual frames rather than frames+segmentations. Furthermore, this work shows that the dot product, which is commonly used in STM, is suboptimal, and proposes an efficient L2 similarity computation which leads to improved correspondences throughout a video. It is evaluated on two standard VOS benchmarks - Youtube-VOS and DAVIS-2017 - and there are extensive analyses and ablations which give insights into the proposed method. Review Overall, the paper is well written and there are no major issues. The proposed approach seems novel and achieves strong VOS performance when compared with previous SOTA methods. Furthermore, the major claims made by the work are supported by experiments or ablations. There are some questions/issues (see below), so I suggest this work is marginally above the acceptance threshold until these are addressed. Questions/Issues A recent paper [1] has a similar network architecture, where the correspondence/affinity between pixels is computed separately from the value/query. Although the specific problems to which the networks are applied are different - [1] attempts to perform VOS while training in an unsupervised manner, whereas this work addresses the traditional semi-supervised setting - can the authors comment on the differences between the two approaches? Some discussion differentiating the two methods/architectures would help highlight the novelty of this approach. Table 2 shows that L2 leads to better results with STCN than the dot product and cosine similarity. Is this, however, specific to STCN, or would replacing the dot product in another network, e.g. STM, lead to a similar performance improvement? Table 1 uses the dot product for STM and L2 similarity for STCN. Would the drop in performance when using the last frames (results in col 3 and col 4) be found when using the dot product with STCN? Is this specific to STCN or to the L2 similarity? An ablation exploring this would be beneficial to better understand the strengths/weaknesses of the proposed approach. Minor Questions/Comments Is the use of Resnet+CBAM layers to combine query+values (line 117) necessary for strong performance? How would the light value encoder differ from resizing the binary segmentation mask and using the resized mask as vM? It would be useful to see scores when not using static image pretraining, so that the method could be compared with other methods that do not use such data (i.e. only training on Youtube-VOS). [1] Lai, Z., Lu, E., & Xie, W. (2020). MAST: A memory-augmented self-supervised tracker. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 6479-6488).
NIPS
Title Rethinking Space-Time Networks with Improved Memory Coverage for Efficient Video Object Segmentation Abstract This paper presents a simple yet effective approach to modeling space-time correspondences in the context of video object segmentation. Unlike most existing approaches, we establish correspondences directly between frames without reencoding the mask features for every object, leading to a highly efficient and robust framework. With the correspondences, every node in the current query frame is inferred by aggregating features from the past in an associative fashion. We cast the aggregation process as a voting problem and find that the existing inner-product affinity leads to poor use of memory with a small (fixed) subset of memory nodes dominating the votes, regardless of the query. In light of this phenomenon, we propose using the negative squared Euclidean distance instead to compute the affinities. We validate that every memory node now has a chance to contribute, and experimentally show that such diversified voting is beneficial to both memory efficiency and inference accuracy. The synergy of correspondence networks and diversified voting works exceedingly well, achieves new state-of-the-art results on both DAVIS and YouTubeVOS datasets while running significantly faster at 20+ FPS for multiple objects without bells and whistles. 1 Introduction Video object segmentation (VOS) aims to identify and segment target instances in a video sequence. This work focuses on the semi-supervised setting where the first-frame segmentation is given and the algorithm needs to infer the segmentation for the remaining frames. This task is an extension of video object tracking [1, 2], requiring detailed object masks instead of simple bounding boxes. A high-performing algorithm should be able to delineate an object from the background or other distractors (e.g., similar instances) under partial or complete occlusion, appearance changes, and object deformation [3]. Most current methods either fit a model using the initial segmentation [4, 5, 6, 7, 8, 9] or leverage temporal propagation [10, 11, 12, 13, 14, 15, 16], particularly with spatio-temporal matching [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]. Space-Time Memory networks [18] are especially popular recently due to its high performance and simplicity – many variants [22, 16, 23, 21, 24, 28, 29, 30], including competitions’ winners [31, 32], have been developed to improve the speed, reduce memory usage, or to regularize the memory readout process of STM. In this work, we aim to subtract from STM to arrive at a minimalistic form of matching networks, dubbed Space-Time Correspondence Network (STCN) 1. Specifically, we start from the basic premise †This work was done in The Hong Kong University of Science and Technology. 1Training/inference code and pretrained models: https://github.com/hkchengrex/STCN 35th Conference on Neural Information Processing Systems (NeurIPS 2021). that correspondences are target-agnostic. Instead of building a specific memory bank and therefore affinity for every object in the video as in STM, we build a single affinity matrix using only RGB relations. For querying, each target object passes through the same affinity matrix for feature transfer. This is not only more efficient but also more robust – the model is forced to learn all object relations beyond just the labeled ones. 
With the learned affinity, the algorithm can propagate features from the first frame to the rest of the video sequence, with intermediate features stored as memory. While STCN already reaches state-of-the-art performance and speed in this simple form, we further probe into the inner workings of the construction of affinities. Traditionally, affinities are constructed from dot products followed by a softmax as in attention mechanisms [18, 33]. This however implicitly encoded “confidence” (magnitude) with high-confidence points dominating the affinities all the time, regardless of query features. Some memory nodes will therefore be always suppressed, and the (large) memory bank will be underutilized, reducing effective diversity and robustness. We find this to be harmful, and propose using the negative squared Euclidean distance as a similarity measure with an efficient implementation instead. Though simple, this small change ensures that every memory node has a chance to contribute significantly (given the right query), leading to better performance, higher robustness, and more efficient use of memory. Our contribution is three-fold: • We propose STCN with direct image-to-image correspondence that is simpler, more efficient, and more effective than STM. • We examine the affinity in detail, and propose using L2 similarity in place of dot product for a better memory coverage, where every memory node contributes instead of just a few. • The synergy of the above two results in a simple and strong method, which suppresses previous state-of-the-art performance without additional complications while running fast at 20+ FPS. 2 Related Works Correspondence Learning Finding correspondences is one of the most fundamental problems in computer vision. Local correspondences have been used heavily in optical flow [34, 35, 36] and object tracking [37, 38, 39] with fast running time and high performance. More explicit correspondence learning has also been achieved with deep learning [40, 41, 42]. Few-shots learning can be considered as a matching problem where the query is compared with every element in the support set [43, 44, 45, 46]. Typical approaches use a Siamese network [47] and compare the embedded query/support features using a similarity measure such as cosine similarity [43], squared Euclidean distance [48], or even a learned function [49]. Our task can also be formulated as a few-shots problem, where our memory bank acts as the support set. This connection helps us with the choice of similarity function, albeit we are dealing with a million times more pointwise comparisons. Video Object Segmentation Early VOS methods [4, 5, 50] employ online first-frame finetuning which is very slow in inference and have been gradually phased out. Faster approaches have been proposed such as a more efficient online learning algorithm [8, 6, 7], MRF graph inference [51], temporal CNN [52], capsule routing [53], tracking [11, 13, 15, 54, 55, 56, 57], embedding learning [10, 58, 59] and space-time matching [17, 18, 19, 20]. Embedding learning bears a high similarity to space-time matching, both attempting to learn a deep feature representation of an object that remains consistent across a video. Usually embedding learning methods are more constrained [10, 58], adopting local search window and hard one-to-one matching. We are particularly interested in the class of Space-Time Memory networks (STM) [18] which are the backbone for many follow-up state-of-the-art VOS methods. 
STM constructs a memory bank for each object in the video, and matches every query frame to the memory bank to perform “memory readout”. Newly inferred frames can be added to the memory, and then the algorithm propagates forward in time. Derivatives either apply STM at other tasks [21, 60], improve the training data or augmentation policy [21, 22], augment the memory readout process [16, 21, 22, 24, 28], use optical flow [29], or reduce the size of the memory bank by limiting its growth [23, 30]. MAST [61] is an adjacent research that focused on unsupervised learning with a photometric reconstruction loss. Without the input mask, they use Siamese networks on RGB images to build the correspondence out of necessity. In this work, we deliberately build such connections and establish that building correspondences between images is a better choice, even when input masks are available, rather than a concession. We propose to overhaul STM into STCN where the construction of affinity is redefined to be between frames only. We also take a close look at the similarity function, which has always been the dot product in all STM variants, make changes and comparisons according to our findings. The resultant framework is both faster and better while still principled. STCN is even fundamentally simpler than STM, and we hope that STCN can be adopted as the new and efficient backbone for future works. 3 Space-Time Correspondence Networks (STCN) Given a video sequence and the first-frame annotation, we process the frames sequentially and maintain a memory bank of features. For each query frame, we extract a key feature which is compared with the keys in the memory bank, and retrieve corresponding value features from memory using key affinities as in STM [18]. 3.1 Feature Extraction Figure 1 illustrates the overall flow of STCN. While STM [18] parameterizes a Query Encoder (image as input) and a Memory Encoder (image and mask as input) with two ResNet50 [62], we instead construct a Key Encoder (image as input) and a Value Encoder (image and mask as input) with a ResNet50 and a ResNet18 respectively. Thus, unlike in STM [18], the key features (and thus the resultant affinity) can be extracted independently without the mask, computed only once for each frame, and symmetric between memory and query.2 The rationales are 1) Correspondences (key features) are more difficult to extract than value, hence a deeper network, and 2) Correspondences should exist between frames in a video, and there is little reason to introduce the mask as a distraction. From another perspective, we are using a Siamese structure [47] which is widely adopted in few-shots learning [63, 49] for computing the key features, as if our memory bank is the few-shots support set. As the key features are independent of the mask, we can reuse the “query key” later as a “memory key” if we decide to turn the query frame into a memory frame during propagation (strategy to be discussed in Section 3.3). This means the key encoder is used exactly once per image in the entire process, despite the two appearances in Figure 1 (which is for brevity). 2That is, matching between two points does not depend on whether they are query or memory points (not true in STM [18] as they are from different encoders). Architecture. Following the STM practice [18], we take res4 features with stride 16 from the base ResNets as our backbone features and discard res5. 
A 3×3 convolutional layer without non-linearity is used as a projection head from the backbone feature to either the key space (Ck dimensional) or the value space (Cv dimensional). We set Cv to be 512 following STM and discuss the choice of Ck in Section 4.1. Feature reuse. As seen from Figure 1, both the key encoder and the value encoder are processing the same frame, albeit with different inputs. It is natural to reuse features from the key encoder (with fewer inputs and a deeper network) at the value encoder. To avoid bloating the feature dimensions and for simplicity, we concatenate the last layer features from both encoders (before the projection head) and process them with two ResBlocks [62] and a CBAM block3 [64] as the final value output. 3.2 Memory Reading and Decoding Given T memory frames and a query frame, the feature extraction step would generate the followings: memory key kM ∈ RCk×THW , memory value vM ∈ RCv×THW , and query key kQ ∈ RCk×HW , where H and W are (stride 16) spatial dimensions. Then, for any similarity measure c : RCk×RCk → R, we can compute the pairwise affinity matrix S and the softmax-normalized affinity matrix W, where S,W ∈ RTHW×HW with: Sij = c(k M i ,k Q j ) Wij = exp (Sij)∑ n (exp (Snj)) , (1) where ki denotes the feature vector at the i-th position. The similarities are normalized by √ Ck as in standard practice [18, 33] and is not shown for brevity. In STM [18], the dot product is used as c. Memory reading regularization like KMN [22] or top-k filtering [21] can be applied at this step. With the normalized affinity matrix W, the aggregated readout feature vQ ∈ RCv×HW for the query frame can be computed as a weighted sum of the memory features with an efficient matrix multiplication: vQ = vMW, (2) which is then passed to the decoder for mask generation. In the case of multi-object segmentation, only Equation 2 has to be repeated as W is defined between image features only, and thus is the same for different objects. In the case of STM [18], W must be recomputed instead. Detailed running time analysis can be found in Section 6.2. Decoder. Our decoder structure stays close to that of the STM [18] as it is not the focus of this paper. Features are processed and upsampled at a scale of two gradually with higher-resolution features from the key encoder incorporated using skip-connections. The final layer of the decoder produces a stride 4 mask which is bilinearly upsampled to the original resolution. In the case of multiple objects, soft aggregation [18] of the output masks is used. 3.3 Memory Management So far we have assumed the existence of a memory bank of size T . Here, we will describe the construction of the memory bank. For each memory frame, we store two items: memory key and memory value. Note that all memory frames (except the first one) are once query frames. The memory key is simply reused from the query key, as described in Section 3.1 without extra computation. The memory value is computed after mask generation of that frame, independently for each object as the value encoder takes both the image and the object mask as inputs. STM [18] consider every fifth query frame as a memory frame, and the immediately previous frame as a temporary memory frame to ensure accurate matching. In the case of STCN, we find that it is unnecessary, and in fact harmful, to include the last frame as temporary memory. 
This is a direct consequence of using shared key encoders – 1) key features are sufficiently robust to match well without the need for close-range (temporal) propagation, and 2) the temporary memory key would otherwise be too similar to that of the query, as the image context usually changes smoothly and we do not have the encoding noises resultant from distinct encoders, leading to drifting.4 This modification also reduces the number of calls to the value encoder, contributing a significant speedup. 3We find this block to be non-essential in a later experiment but it is kept for consistency. 4This effect is amplified by the use of L2 similarity. See the supplementary material for a full comparison. Table 1 tabulates the performance comparisons between STM and STCN. For a video of length L with m ≥ 1 objects, and a final memory bank of size T < L, STM [18] would need to invoke the memory encoder and compute the affinity mL times. Our proposed STCN, on the other hand, only invokes the value encoder mT times and computes the affinity L times. It is therefore evident that STCN is significantly faster. Section 6.2 provides a breakdown of running time. 4 Computing Affinity The similarity function c : RCk × RCk → R plays a crucial role in both STM and STCN, as it supports the construction of affinity that is central to both correspondences and memory reading. It also has to be fast and memory-efficient as there can be up to 50M pairwise relations (THW ×HW ) to compute for just one query frame. To recap, we need to compute the similarity between a memory key kM ∈ RCk×HW and a query key kQ ∈ RCk×HW . The resultant pairwise affinity matrix is denoted as S ∈ RTHW×HW , with Sij = c(k M i ,k Q j ) denoting the similarity between k M i (the memory feature vector at the i-th position) and kQj (the query feature vector at the j-th position). In the case of dot product, it can be implemented very efficiently with a matrix multiplication: Sdotij = k M i · k Q j ⇒ S dot = ( kM )T kQ (3) In the following, we will also discuss the use of cosine similarity and negative squared Euclidean distance as similarity functions. They are defined as (with efficient implementation discussed later): Scosij = kMi · k Q j∥∥kMi ∥∥2 × ∥∥kQj ∥∥2 SL2ij = − ∥∥∥kMi − kQj ∥∥∥2 2 (4) For brevity, we will use the shorthand “L2” or “L2 similarity” to denote the negative squared Euclidean distance in the rest of the paper. The ranges for dot product, cosine similarity and L2 similarity are (−∞,∞), [−1, 1], and (−∞, 0] respectively. Note that cosine similarity has a limited range. Non-related points are encouraged to have a low similarity score through back-propagation such that they have a close-to-zero affinity (Eq. 1), and thus no value is propagated (Eq. 2). 4.1 A Closer Look at the Affinity The affinity matrix is core to STCN and deserves close attention. Previous works [18, 21, 22, 23, 24], almost by default, use the dot product as the similarity function – but is this a good choice? Cosine similarity computes the angle between two vectors and is often regarded as the normalized dot product. Reversely, we can consider dot product as a scaled version of cosine similarity, with the scale equals to the product of vectors’ norms. Note that this is query-agnostic, meaning that every similarity with a memory key kMi will be scaled by its norm. If we cast the aggregation process (Eq. 
4.1 A Closer Look at the Affinity

The affinity matrix is core to STCN and deserves close attention. Previous works [18, 21, 22, 23, 24], almost by default, use the dot product as the similarity function – but is this a good choice? Cosine similarity computes the cosine of the angle between two vectors and is often regarded as the normalized dot product. Conversely, we can consider the dot product as a scaled version of cosine similarity, with the scale equal to the product of the vectors' norms. Note that this scale is query-agnostic, meaning that every similarity with a memory key $k^M_i$ will be scaled by its norm. If we cast the aggregation process (Eq. 2) as voting, with similarity representing the weights, memory keys with large magnitudes will predominantly suppress any representation from other memory nodes.

Figure 2 visualizes this phenomenon in a 2D feature space. For the dot product, only a subset of points (labeled as triangles) has a chance to contribute the most for any query. Outliers (top-right red) can suppress existing clusters; clusters with a dominant value in one dimension (top-left cyan) can suppress other clusters; some points may contribute the most in a region even when they lie outside of that region (bottom-right beige). These undesirable situations will, however, not arise if the proposed L2 similarity is used: a Voronoi diagram [66] is formed and every memory point can be fully utilized, leading to a diversified, query-specific voting mechanism. Figure 3 shows a closer look at the same problem with soft weights. With the dot product, the blue/green point has low weights for every possible query in the first quadrant, while a smooth transition is created with our proposed L2 similarity. Note that cosine similarity has the same benefits, but its limited range $[-1, 1]$ means that an extra softmax temperature hyperparameter is required to shape the affinity distribution, i.e., one more parameter to tune. L2 similarity works well without extra temperature tuning in our experiments.

Connection to self-attention, and whether some points are more important than others. Dot products have been used extensively in self-attention models [33, 67, 68]. One way to look at the dot-product affinity positively is to consider the points with large magnitudes as more important – naturally, they should enjoy a higher influence. Admittedly, this is probably true in NLP [33], where a stop word (“the”) is almost useless compared to a noun (“London”), or in video classification [67], where the foreground human is far more important than a pixel in the plain blue sky. This is, however, not true for STCN, where pixels are more or less equal. It is beneficial to match every pixel in the query frame accurately, including the background (also noted by [10]). After all, if we know that a pixel is part of the background, we also know that it does not belong to the foreground. In fact, we find that STCN can track the background fairly well (floor, lake, etc.) even though it is never explicitly trained to do so. The notion of relative importance therefore does not generally apply in our context.

Efficient implementation. The naïve implementation of the negative squared Euclidean distance in Eq. 4 needs to materialize a $C_k \times THW \times HW$ element-wise difference tensor, which is then squared and summed. This process is much slower than the simple dot product and cannot be run on the same hardware due to its memory footprint. A simple decomposition greatly simplifies the implementation, as noted in [69]:

$$S^{\text{L2}}_{ij} = -\left\|k^M_i - k^Q_j\right\|_2^2 = 2\, k^M_i \cdot k^Q_j - \left\|k^M_i\right\|_2^2 - \left\|k^Q_j\right\|_2^2 \qquad (5)$$

which requires only slightly more computation than the baseline dot product and can be implemented with standard matrix operations. In fact, we can further drop the last term, as softmax is invariant to translation in the target dimension (details in the supplementary material). For cosine similarity, we first normalize the input vectors, then compute the dot product. Table 2 tabulates the actual computational and memory costs.
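The decomposition in Eq. 5 can be sketched as follows; dropping the query-norm term relies on the softmax shift-invariance mentioned above. This is a minimal rendering of the idea, not the paper's exact implementation.

```python
import torch

def l2_similarity(k_mem, k_qry):
    # Decomposed Eq. 5: -||k^M - k^Q||^2 = 2 k^M . k^Q - ||k^M||^2 - ||k^Q||^2.
    # k_mem: (Ck, THW), k_qry: (Ck, HW) -> S: (THW, HW)
    two_dot = 2.0 * (k_mem.transpose(0, 1) @ k_qry)
    mem_sq = (k_mem ** 2).sum(dim=0).unsqueeze(1)   # (THW, 1), broadcasts
    # The query term ||k^Q||^2 is constant within each softmax column,
    # so it is dropped here: softmax is shift-invariant.
    return two_dot - mem_sq
```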
4.2 Experimental Verification

Here, we verify three claims: 1) the aforementioned phenomenon does happen in a high-dimensional key space for real data and a fully trained model; 2) using L2 similarity diversifies the voting; 3) L2 similarity brings about higher efficiency and performance.

Affinity distribution. We verify the first two claims by training two different models, with dot product and L2 similarity respectively as the similarity function, and plotting the maximum contribution given by each memory node over its lifetime. We use the same setting for the two models and report the distribution on the DAVIS 2017 [65] dataset. Figure 4 shows the pertinent distributions. Under the L2 similarity measure, many more memory nodes contribute a fair share. Specifically, around 3% of memory nodes never contribute more than 1% weight under the dot product, while only 0.06% suffer the same fate with L2. Under the dot product, 31% of memory nodes contribute less than 10% weight at best, while the same happens for only 7% of the memory with L2 similarity. To measure the distribution inequality, we additionally compute the Gini coefficient [70] (the higher it is, the more unequal the distribution). The Gini coefficient for the dot product is 44.0, while the Gini coefficient for L2 similarity is much lower at 31.8.

Performance and efficiency. Next, we show that using L2 similarity improves performance with negligible overhead. We compare three similarity measures: dot product, cosine similarity, and L2 similarity. For cosine similarity, we use a softmax temperature of 0.01, while a default temperature of 1 is used for both dot product and L2 similarity. This scaling is crucial for cosine similarity only, since it is the only one with a limited output range $[-1, 1]$. Searching for an extra hyperparameter is computationally demanding – we simply picked one that converges fairly quickly without collapsing. Table 2 tabulates the main results. Interestingly, we find that reducing the key space dimension ($C_k$) is beneficial to both cosine similarity and L2 similarity, but not to the dot product. This can be explained in the context of Section 4.1 – the network needs more dimensions so that it can spread the memory key features out to save them from being suppressed by high-magnitude points. Cosine similarity and L2 similarity do not suffer from this problem and can utilize the full key space. The reduced key space in turn benefits memory efficiency and improves running time.
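For readers who want to reproduce the inequality measurement, the following is a small sketch of a standard Gini-coefficient estimator applied to per-node contribution weights; that the paper uses exactly this estimator, and reports it on a 0–100 scale, are assumptions on our part.

```python
import numpy as np

def gini_coefficient(weights):
    # weights: non-negative per-node contribution weights (e.g., each
    # memory node's maximum softmax contribution over its lifetime).
    w = np.sort(np.asarray(weights, dtype=np.float64))
    n = len(w)
    idx = np.arange(1, n + 1)
    # Standard closed form for sorted data:
    # G = 2 * sum(i * w_i) / (n * sum(w)) - (n + 1) / n
    g = 2.0 * np.sum(idx * w) / (n * np.sum(w)) - (n + 1.0) / n
    return 100.0 * g  # 0 = perfectly equal; 0-100 scale is an assumption
```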
5 Implementation Details

Models are trained with two 11GB 2080Ti GPUs with the Adam optimizer [71] using PyTorch [72]. Following previous practice [18, 21], we first pretrain the model on static image datasets [73, 74, 75, 76, 77] with synthetic deformation, then perform main training on YouTubeVOS [78] and DAVIS [3, 65]. We also experimented with the synthetic dataset BL30K [79, 80] proposed in [21], which is not used unless otherwise specified. We use a batch size of 16 during pretraining and a batch size of 8 during main training. Pretraining takes about 36 hours and main training takes around 16 hours, with batch-norm layers frozen during training following [18]. Bootstrapped cross entropy is used following [21]. The full set of hyperparameters can be found in the open-sourced code. In each iteration, we pick three temporally ordered frames (with the ground-truth mask for the first frame) from a video to form a training sample [18]. First, we predict the second frame using the first frame as memory. The prediction is saved as the second memory frame, and the third frame is then predicted using the union of the first and second frames as memory. The temporal distance between the frames first increases gradually from 5 to 25 as a curriculum learning schedule, and is annealed back to 5 towards the end of training. This process follows the implementation of MiVOS [21].

For memory-read augmentation, we experimented with kernelized memory reading [22] and top-k filtering [21]. We find that top-k works well universally and improves running time, while kernelized memory reading is slower and does not always help. We find that k = 20 always works better for STCN (the original paper uses k = 50), and we adopt top-k filtering in all our experiments with k = 20. For fairness, we also re-run all experiments in MiVOS [21] with k = 20 and pick the best result in their favor. We use L2 similarity with $C_k = 64$ in all experiments unless otherwise specified. For inference, a 2080Ti GPU is used with full floating-point precision for a fair running time comparison. We memorize every 5th frame, and no temporary frame is used, as discussed in Section 3.3.
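A minimal sketch of the top-k filtering [21] used above, applied per query position before the softmax, is shown below; the exact implementation in MiVOS may differ.

```python
import torch

def topk_softmax(S, k=20):
    # S: (THW, HW) raw affinity. For each query position (column), keep
    # only the k largest similarities; everything else gets zero weight.
    values, indices = torch.topk(S, k=k, dim=0)
    W = torch.zeros_like(S)
    W.scatter_(0, indices, torch.softmax(values, dim=0))
    return W  # columns still sum to 1
```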
6 Experiments

We mainly conduct experiments on the DAVIS 2017 [65] validation set and the YouTubeVOS 2018 [78] validation set. For completeness, we also include results on the single-object DAVIS 2016 [3] validation set and the expanded YouTubeVOS 2019 [78] validation set. Results for the DAVIS 2017 test-dev [65] set are included in the supplementary material. We first conduct quantitative comparisons with previous methods, and then analyze the running time of each component in STCN. For reference, we also present results without pretraining on static images. Ablation studies have been included in previous sections (Table 1 and Table 2).

6.1 Evaluations

Table 3 tabulates the comparisons of STCN with previous methods on semi-supervised video object segmentation benchmarks. For DAVIS 2017 [65], we compare the standard metrics: region similarity J, contour accuracy F, and their average J&F. For YouTubeVOS [78], we report J and F for both seen and unseen categories, and the averaged overall score G. For comparing speed, we compute the multi-object FPS, that is, the total number of output frames divided by the total processing time for the entire DAVIS 2017 [65] validation set. We either copy the FPS directly from papers/project websites, or estimate it based on the single-object inference FPS (simply labeled as <5).5 We use 480p resolution videos for both DAVIS and YouTubeVOS. Tables 4, 5, 6, and 7 tabulate additional results. For the interactive setting, we replace the propagation module of MiVOS [21] with STCN.

5 A linear extrapolation would severely underestimate the performance of many previous methods.

Visualizations. Figure 6 visualizes the learned correspondences. Note that our correspondences are general and mask-free, naturally associating every pixel (including background bystanders) even though the network is trained only with foreground masks. Figure 7 visualizes our semi-supervised mask propagation results, with the last row being a failure case (Section 7).

Leaderboard results. Our method is also very competitive on the public VOS challenge leaderboard [78]. Methods on the leaderboard are typically cutting-edge, with engineering extensions like deeper networks, multi-scale inference, and model ensembles. They usually represent the highest achievable performance at the time. On the latest YouTubeVOS 2019 validation split [78], our base model (84.2 G) outperforms the previous challenge winner [32] (based on STM [18], 82.0 G) by a large margin. With ensemble and multi-scale testing (details in the supplementary material), our method is ranked first (86.7 G) at the time of submission on the still-active leaderboard.

6.2 Running Time Analysis

Here, we analyze the running time of each component in STM and STCN on DAVIS 2017 [65]. For a fair comparison, we use our own implementation of STM, enable top-k filtering [21], and set $C_k = 64$ for both methods, such that all the speed improvements come from the fundamental differences between STM and STCN. Our affinity matching time is lower because we compute a single affinity between raw images, while STM [18] computes one for every object. Our value encoder takes much less time than the memory encoder in STM [18] because of our light network, feature reuse, and robust memory bank/management, as discussed in Section 3.3.

7 Limitations

To isolate our method from other possible enhancements, we use only fundamentally simple global matching. Like STM [18], we have no notion of temporal consistency, as we do not employ local matching [58, 10, 17] or optical flow [29]. This means we may incorrectly segment far-away objects with similar appearance. One such failure case is shown in the last row of Figure 7. We expect that, given our framework's simplicity, our method can be readily extended to include temporal consistency considerations for further improvement.

8 Conclusion

We present STCN, a simple, effective, and efficient framework for video object segmentation. We propose to use direct image-to-image correspondence for efficiency and more robust matching, and examine the inner workings of the affinity in detail; L2 similarity is proposed as a result of our observations. With its clear technical advantages, we hope that STCN can serve as a new baseline backbone for future contributions.

Table 4: Results on the DAVIS 2016 validation set.
Method J&F J F
OSMN [8] 73.5 74.0 72.9
MaskTrack [15] 77.6 79.7 75.4
OSVOS [5] 80.2 79.8 80.6
FAVOS [14] 81.0 82.4 79.5
FEELVOS [58] 81.7 81.1 82.2
RGMP [55] 81.8 81.5 82.0
Track-Seg [54] 83.1 82.6 83.6
FRTM-VOS [6] 83.5 - -
CINN [51] 84.2 83.4 85.0
OnAVOS [50] 85.5 86.1 84.9
PReMVOS [81] 86.8 84.9 88.6
GC [24] 86.8 87.6 85.7
RMNet [29] 88.8 88.9 88.7
STM [18] 89.3 88.7 89.9
CFBI [10] 89.4 88.3 90.5
CFBI+ [83] 89.9 88.7 91.1
MiVOS [21] 90.0 88.9 91.1
SwiftNet [30] 90.4 90.5 90.3
KMN [22] 90.5 89.5 91.5
LCM [28] 90.7 91.4 89.9
Ours 91.6 90.8 92.5
MiVOS [21] + BL30K 91.0 89.6 92.4
Ours + BL30K 91.7 90.4 93.0

Results for the interactive setting (cf. Section 6.1):
Method AUC-J&F J&F @ 60s Time (s)
ATNet [84] 80.9 82.7 55+
STM [85] 80.3 84.8 37
GIS [86] 85.6 86.6 34
MiVOS [21] 87.9 88.5 12
Ours 88.4 88.8 7.3

Table 7: Effects of pretraining on static images/main training on the DAVIS 2017 validation set.
Setting J&F J F
Pre-training only 75.8 73.1 78.6
Main training only 82.5 79.3 85.7
Both 85.4 82.2 88.6

Figure 7: Visualization of semi-supervised VOS results, with the first column being the reference masks to be propagated (row labels: STM, Ours, MiVOS, Ours). The first two examples show comparisons of our method with STM [18] and MiVOS [21]. In the second example, zoom-in insets (orange) are shown with the corresponding ground-truth insets (green) to highlight their differences. The last row shows a failure case: we cannot distinguish the real duck from the duck picture, as no temporal consistency cue is used in our method.
Broader Impacts

Malicious use of VOS software can bring potential negative societal impacts, including but not limited to unauthorized mass surveillance or privacy-infringing human/vehicle tracking. We believe that the task itself is neutral and has positive uses as well, such as video editing for amateurs or making self-driving cars safer.

Acknowledgment

This research is supported in part by Kuaishou Technology and the Research Grant Council of the Hong Kong SAR under grant no. 16201818.
1. What is the focus and contribution of the paper on semi-supervised video object segmentation?
2. What are the strengths of the proposed Space-Time Correspondence Network (STCN)?
3. Do you have any concerns or questions regarding the computation of similarity and its visualization?
4. Are there any limitations or inconsistencies in the experimental results presented in the paper?
5. How does the reviewer assess the novelty and relevance of the paper compared to other recent works in fast video object segmentation?
Summary Of The Paper Review
Summary Of The Paper

This paper presents the Space-Time Correspondence Network (STCN) to tackle semi-supervised video object segmentation. STCN improves on the previous STM by building the affinity from RGB-only input and reused key features. Moreover, the authors further investigate the computation of similarity and find that the negative squared Euclidean distance leads to better performance. Experiments on multiple datasets are conducted to demonstrate the effectiveness of the proposed method.

Review

Pros:
- This paper is clear and easy to read.
- The proposed method achieves state-of-the-art accuracy and high efficiency.
- The proposed STCN is reasonable and effective.
- The investigation of the similarity computation is meaningful.

Cons:
- More intuition may be needed for why L2 similarity is better than dot-product similarity. Even though the dot product provides a less smooth similarity, this can be relieved by the softmax function. Sec. 3.1 tries to provide explanations for this, yet the visualization is based on toy examples. It would be better to visualize with real examples.
- In Fig. 4, the numbers of total nodes for dot product and L2 similarity are different. Shouldn't they be the same?
- Are the results in Tab. 1 both based on L2 similarity? If not, this should be clarified. Also, it would be better to report the performance of STM with L2 similarity.
- I noticed that all the experiments are conducted on the validation sets. It would be better to incorporate results on the test sets into the paper. From Tab. 1 of the supplementary, I noticed that the "ours" model actually performs worse than the previous method on the test-dev split of DAVIS 2017. This is contradictory to the performance on the validation set. Any explanation?
- Some related works on fast VOS are missing [a, b].
[a] Fast video object segmentation via dynamic targeting network, CVPR 2019
[b] Motion-guided cascaded refinement network for video object segmentation, CVPR 2018

In general, I like this work for its motivation and effective/efficient performance. I would like to see the authors' responses.
NIPS
Title
Rethinking Space-Time Networks with Improved Memory Coverage for Efficient Video Object Segmentation

Abstract
This paper presents a simple yet effective approach to modeling space-time correspondences in the context of video object segmentation. Unlike most existing approaches, we establish correspondences directly between frames without re-encoding the mask features for every object, leading to a highly efficient and robust framework. With the correspondences, every node in the current query frame is inferred by aggregating features from the past in an associative fashion. We cast the aggregation process as a voting problem and find that the existing inner-product affinity leads to poor use of memory, with a small (fixed) subset of memory nodes dominating the votes regardless of the query. In light of this phenomenon, we propose using the negative squared Euclidean distance instead to compute the affinities. We validate that every memory node now has a chance to contribute, and experimentally show that such diversified voting is beneficial to both memory efficiency and inference accuracy. The synergy of correspondence networks and diversified voting works exceedingly well, achieving new state-of-the-art results on both DAVIS and YouTubeVOS datasets while running significantly faster at 20+ FPS for multiple objects without bells and whistles.

1 Introduction
Video object segmentation (VOS) aims to identify and segment target instances in a video sequence. This work focuses on the semi-supervised setting, where the first-frame segmentation is given and the algorithm needs to infer the segmentation for the remaining frames. This task is an extension of video object tracking [1, 2], requiring detailed object masks instead of simple bounding boxes. A high-performing algorithm should be able to delineate an object from the background or other distractors (e.g., similar instances) under partial or complete occlusion, appearance changes, and object deformation [3]. Most current methods either fit a model using the initial segmentation [4, 5, 6, 7, 8, 9] or leverage temporal propagation [10, 11, 12, 13, 14, 15, 16], particularly with spatio-temporal matching [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]. Space-Time Memory networks [18] have been especially popular recently due to their high performance and simplicity – many variants [22, 16, 23, 21, 24, 28, 29, 30], including competition winners [31, 32], have been developed to improve the speed, reduce memory usage, or regularize the memory readout process of STM. In this work, we aim to subtract from STM to arrive at a minimalistic form of matching networks, dubbed Space-Time Correspondence Networks (STCN).1 Specifically, we start from the basic premise that correspondences are target-agnostic. Instead of building a specific memory bank, and therefore affinity, for every object in the video as in STM, we build a single affinity matrix using only RGB relations. For querying, each target object passes through the same affinity matrix for feature transfer. This is not only more efficient but also more robust – the model is forced to learn all object relations beyond just the labeled ones.

†This work was done at The Hong Kong University of Science and Technology.
1Training/inference code and pretrained models: https://github.com/hkchengrex/STCN
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
With the learned affinity, the algorithm can propagate features from the first frame to the rest of the video sequence, with intermediate features stored as memory. While STCN already reaches state-of-the-art performance and speed in this simple form, we further probe into the inner workings of the construction of affinities. Traditionally, affinities are constructed from dot products followed by a softmax, as in attention mechanisms [18, 33]. This, however, implicitly encodes “confidence” (magnitude), with high-confidence points dominating the affinities all the time, regardless of the query features. Some memory nodes will therefore always be suppressed, and the (large) memory bank will be underutilized, reducing effective diversity and robustness. We find this to be harmful, and propose using the negative squared Euclidean distance as a similarity measure with an efficient implementation instead. Though simple, this small change ensures that every memory node has a chance to contribute significantly (given the right query), leading to better performance, higher robustness, and more efficient use of memory. Our contribution is three-fold:

• We propose STCN with direct image-to-image correspondence that is simpler, more efficient, and more effective than STM.
• We examine the affinity in detail, and propose using L2 similarity in place of the dot product for better memory coverage, where every memory node contributes instead of just a few.
• The synergy of the above two results in a simple and strong method, which surpasses previous state-of-the-art performance without additional complications while running fast at 20+ FPS.

2 Related Works

Correspondence Learning. Finding correspondences is one of the most fundamental problems in computer vision. Local correspondences have been used heavily in optical flow [34, 35, 36] and object tracking [37, 38, 39] with fast running times and high performance. More explicit correspondence learning has also been achieved with deep learning [40, 41, 42]. Few-shot learning can be considered a matching problem, where the query is compared with every element in the support set [43, 44, 45, 46]. Typical approaches use a Siamese network [47] and compare the embedded query/support features using a similarity measure such as cosine similarity [43], squared Euclidean distance [48], or even a learned function [49]. Our task can also be formulated as a few-shot problem, where our memory bank acts as the support set. This connection helps us with the choice of similarity function, albeit we are dealing with a million times more pointwise comparisons.

Video Object Segmentation. Early VOS methods [4, 5, 50] employ online first-frame fine-tuning, which is very slow at inference time, and have been gradually phased out. Faster approaches have been proposed, such as more efficient online learning algorithms [8, 6, 7], MRF graph inference [51], temporal CNNs [52], capsule routing [53], tracking [11, 13, 15, 54, 55, 56, 57], embedding learning [10, 58, 59], and space-time matching [17, 18, 19, 20]. Embedding learning bears a high similarity to space-time matching, both attempting to learn a deep feature representation of an object that remains consistent across a video. Usually, embedding learning methods are more constrained [10, 58], adopting a local search window and hard one-to-one matching. We are particularly interested in the class of Space-Time Memory networks (STM) [18], which are the backbone for many follow-up state-of-the-art VOS methods.
STM constructs a memory bank for each object in the video, and matches every query frame to the memory bank to perform “memory readout”. Newly inferred frames can be added to the memory, and the algorithm then propagates forward in time. Derivatives either apply STM to other tasks [21, 60], improve the training data or augmentation policy [21, 22], augment the memory readout process [16, 21, 22, 24, 28], use optical flow [29], or reduce the size of the memory bank by limiting its growth [23, 30]. MAST [61] is an adjacent line of research that focuses on unsupervised learning with a photometric reconstruction loss. Without the input mask, they use Siamese networks on RGB images to build the correspondence out of necessity. In this work, we deliberately build such connections and establish that building correspondences between images is a better choice rather than a concession, even when input masks are available. We propose to overhaul STM into STCN, where the construction of the affinity is redefined to be between frames only. We also take a close look at the similarity function, which has always been the dot product in all STM variants, and make changes and comparisons according to our findings. The resultant framework is both faster and better while still principled. STCN is even fundamentally simpler than STM, and we hope that STCN can be adopted as the new and efficient backbone for future works.

3 Space-Time Correspondence Networks (STCN)

Given a video sequence and the first-frame annotation, we process the frames sequentially and maintain a memory bank of features. For each query frame, we extract a key feature, which is compared with the keys in the memory bank, and retrieve the corresponding value features from memory using key affinities, as in STM [18].

3.1 Feature Extraction

Figure 1 illustrates the overall flow of STCN. While STM [18] parameterizes a Query Encoder (image as input) and a Memory Encoder (image and mask as input) with two ResNet50s [62], we instead construct a Key Encoder (image as input) and a Value Encoder (image and mask as input) with a ResNet50 and a ResNet18 respectively. Thus, unlike in STM [18], the key features (and thus the resultant affinity) can be extracted independently without the mask, are computed only once for each frame, and are symmetric between memory and query.2 The rationales are: 1) correspondences (key features) are more difficult to extract than values, hence a deeper network; and 2) correspondences should exist between frames in a video, and there is little reason to introduce the mask as a distraction. From another perspective, we are using a Siamese structure [47], widely adopted in few-shot learning [63, 49], for computing the key features, as if our memory bank were the few-shot support set. As the key features are independent of the mask, we can reuse the “query key” later as a “memory key” if we decide to turn the query frame into a memory frame during propagation (the strategy is discussed in Section 3.3). This means the key encoder is used exactly once per image in the entire process, despite its two appearances in Figure 1 (which are for brevity).

2That is, matching between two points does not depend on whether they are query or memory points (not true in STM [18], as they come from different encoders).

Architecture. Following the STM practice [18], we take res4 features with stride 16 from the base ResNets as our backbone features and discard res5.
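To make the two-encoder design concrete, here is a hedged PyTorch/torchvision sketch of the key and value encoders truncated at res4; the value-encoder input channels (RGB + target mask) and the single 3×3 projection in place of the ResBlocks-and-CBAM value head described in Section 3.1 are simplifying assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class STCNEncoders(nn.Module):
    # Sketch only: backbones truncated after layer3 (res4, stride 16);
    # res5 is discarded as in the paper. Ck/Cv defaults and the 4-channel
    # value input are illustrative assumptions.
    def __init__(self, Ck=64, Cv=512):
        super().__init__()
        r50, r18 = models.resnet50(), models.resnet18()
        self.key_backbone = nn.Sequential(               # image only, deeper
            r50.conv1, r50.bn1, r50.relu, r50.maxpool,
            r50.layer1, r50.layer2, r50.layer3)          # -> 1024 channels
        r18.conv1 = nn.Conv2d(4, 64, 7, stride=2, padding=3, bias=False)
        self.value_backbone = nn.Sequential(             # image + mask, lighter
            r18.conv1, r18.bn1, r18.relu, r18.maxpool,
            r18.layer1, r18.layer2, r18.layer3)          # -> 256 channels
        # 3x3 projection heads without non-linearity (Sec. 3.1). The real
        # value head processes the concatenation with two ResBlocks and a
        # CBAM block; a single conv stands in for it here.
        self.key_proj = nn.Conv2d(1024, Ck, 3, padding=1)
        self.value_proj = nn.Conv2d(1024 + 256, Cv, 3, padding=1)

    def encode_key(self, img):                           # img: (B, 3, H, W)
        feat = self.key_backbone(img)
        return self.key_proj(feat), feat                 # key + feature to reuse

    def encode_value(self, img_with_mask, key_feat):     # feature reuse
        feat = self.value_backbone(img_with_mask)        # (B, 256, H/16, W/16)
        return self.value_proj(torch.cat([feat, key_feat], dim=1))
```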
1. What is the focus of the paper regarding space-time correspondence?
2. What are the strengths of the proposed approach, particularly in terms of simplification and efficiency?
3. What are the weaknesses of the paper, especially regarding the description and limitations?
4. Do you have any concerns about the experimental analysis and results?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper

This paper simplifies space-time correspondence with shared Siamese key encoders for all raw frames and gets rid of the unnecessary re-encoding procedure for every object in the same frame. The authors further discuss the similarity functions used in the affinity computation and provide comprehensive experiments on memory composition and running time.

Review

Pros:
- I enjoy reading the manuscript, which is well-structured. The ideas are well-presented and easy to follow.
- The proposed pipeline is clear and reasonable, albeit with some minor contradictions in the description.
- I appreciate the analysis and experiments on similarity functions and running time, which help a lot in better understanding the proposed method and potential issues in previous works.
- The authors promise to release the code, which would be truly valuable.

Cons:
- About the counterintuitive result on memory management, where short-term consistency seems to be harmful to STCN when applying L2 similarity: I wonder what would happen if cosine similarity and dot-product similarity were adopted instead of L2. Additionally, I recommend an ablation on merely maintaining the last frame in memory, with corresponding qualitative results, to support the drifting assumption in Ln152.
- I appreciate the assumption that 'every pixel counts' and accept the extra robustness from additional 'meaningful' memory nodes. But I think the proposed method does not explicitly explore the correspondence between background pixels, as [9] did, but just spreads the 'attention' to more foreground pixels, and thus introduces the mismatching issues described in the limitations section, which makes the discussion in Ln206-215 not that convincing, where the authors claim they could benefit from distinguishing the background.
- The authors also analyze the different situations faced by STCN versus NLP methods or video classification methods, but I could not get the point, and the discussion seems a little bit redundant. I would recommend a refinement or some detailed explanation.
[9]: Collaborative video object segmentation by foreground-background integration. In ECCV 2020.
NIPS
Title Rethinking Space-Time Networks with Improved Memory Coverage for Efficient Video Object Segmentation Abstract This paper presents a simple yet effective approach to modeling space-time correspondences in the context of video object segmentation. Unlike most existing approaches, we establish correspondences directly between frames without reencoding the mask features for every object, leading to a highly efficient and robust framework. With the correspondences, every node in the current query frame is inferred by aggregating features from the past in an associative fashion. We cast the aggregation process as a voting problem and find that the existing inner-product affinity leads to poor use of memory with a small (fixed) subset of memory nodes dominating the votes, regardless of the query. In light of this phenomenon, we propose using the negative squared Euclidean distance instead to compute the affinities. We validate that every memory node now has a chance to contribute, and experimentally show that such diversified voting is beneficial to both memory efficiency and inference accuracy. The synergy of correspondence networks and diversified voting works exceedingly well, achieves new state-of-the-art results on both DAVIS and YouTubeVOS datasets while running significantly faster at 20+ FPS for multiple objects without bells and whistles. 1 Introduction Video object segmentation (VOS) aims to identify and segment target instances in a video sequence. This work focuses on the semi-supervised setting where the first-frame segmentation is given and the algorithm needs to infer the segmentation for the remaining frames. This task is an extension of video object tracking [1, 2], requiring detailed object masks instead of simple bounding boxes. A high-performing algorithm should be able to delineate an object from the background or other distractors (e.g., similar instances) under partial or complete occlusion, appearance changes, and object deformation [3]. Most current methods either fit a model using the initial segmentation [4, 5, 6, 7, 8, 9] or leverage temporal propagation [10, 11, 12, 13, 14, 15, 16], particularly with spatio-temporal matching [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]. Space-Time Memory networks [18] are especially popular recently due to its high performance and simplicity – many variants [22, 16, 23, 21, 24, 28, 29, 30], including competitions’ winners [31, 32], have been developed to improve the speed, reduce memory usage, or to regularize the memory readout process of STM. In this work, we aim to subtract from STM to arrive at a minimalistic form of matching networks, dubbed Space-Time Correspondence Network (STCN) 1. Specifically, we start from the basic premise †This work was done in The Hong Kong University of Science and Technology. 1Training/inference code and pretrained models: https://github.com/hkchengrex/STCN 35th Conference on Neural Information Processing Systems (NeurIPS 2021). that correspondences are target-agnostic. Instead of building a specific memory bank and therefore affinity for every object in the video as in STM, we build a single affinity matrix using only RGB relations. For querying, each target object passes through the same affinity matrix for feature transfer. This is not only more efficient but also more robust – the model is forced to learn all object relations beyond just the labeled ones. 
With the learned affinity, the algorithm can propagate features from the first frame to the rest of the video sequence, with intermediate features stored as memory. While STCN already reaches state-of-the-art performance and speed in this simple form, we further probe into the inner workings of the construction of affinities. Traditionally, affinities are constructed from dot products followed by a softmax as in attention mechanisms [18, 33]. This however implicitly encoded “confidence” (magnitude) with high-confidence points dominating the affinities all the time, regardless of query features. Some memory nodes will therefore be always suppressed, and the (large) memory bank will be underutilized, reducing effective diversity and robustness. We find this to be harmful, and propose using the negative squared Euclidean distance as a similarity measure with an efficient implementation instead. Though simple, this small change ensures that every memory node has a chance to contribute significantly (given the right query), leading to better performance, higher robustness, and more efficient use of memory. Our contribution is three-fold: • We propose STCN with direct image-to-image correspondence that is simpler, more efficient, and more effective than STM. • We examine the affinity in detail, and propose using L2 similarity in place of dot product for a better memory coverage, where every memory node contributes instead of just a few. • The synergy of the above two results in a simple and strong method, which suppresses previous state-of-the-art performance without additional complications while running fast at 20+ FPS. 2 Related Works Correspondence Learning Finding correspondences is one of the most fundamental problems in computer vision. Local correspondences have been used heavily in optical flow [34, 35, 36] and object tracking [37, 38, 39] with fast running time and high performance. More explicit correspondence learning has also been achieved with deep learning [40, 41, 42]. Few-shots learning can be considered as a matching problem where the query is compared with every element in the support set [43, 44, 45, 46]. Typical approaches use a Siamese network [47] and compare the embedded query/support features using a similarity measure such as cosine similarity [43], squared Euclidean distance [48], or even a learned function [49]. Our task can also be formulated as a few-shots problem, where our memory bank acts as the support set. This connection helps us with the choice of similarity function, albeit we are dealing with a million times more pointwise comparisons. Video Object Segmentation Early VOS methods [4, 5, 50] employ online first-frame finetuning which is very slow in inference and have been gradually phased out. Faster approaches have been proposed such as a more efficient online learning algorithm [8, 6, 7], MRF graph inference [51], temporal CNN [52], capsule routing [53], tracking [11, 13, 15, 54, 55, 56, 57], embedding learning [10, 58, 59] and space-time matching [17, 18, 19, 20]. Embedding learning bears a high similarity to space-time matching, both attempting to learn a deep feature representation of an object that remains consistent across a video. Usually embedding learning methods are more constrained [10, 58], adopting local search window and hard one-to-one matching. We are particularly interested in the class of Space-Time Memory networks (STM) [18] which are the backbone for many follow-up state-of-the-art VOS methods. 
STM constructs a memory bank for each object in the video, and matches every query frame to the memory bank to perform "memory readout". Newly inferred frames can be added to the memory, and then the algorithm propagates forward in time. Derivatives either apply STM to other tasks [21, 60], improve the training data or augmentation policy [21, 22], augment the memory readout process [16, 21, 22, 24, 28], use optical flow [29], or reduce the size of the memory bank by limiting its growth [23, 30]. MAST [61] is adjacent research that focuses on unsupervised learning with a photometric reconstruction loss. Without the input mask, they use Siamese networks on RGB images to build the correspondence out of necessity. In this work, we deliberately build such connections and establish that building correspondences between images is a better choice, even when input masks are available, rather than a concession. We propose to overhaul STM into STCN, where the construction of the affinity is redefined to be between frames only. We also take a close look at the similarity function, which has always been the dot product in all STM variants, and make changes and comparisons according to our findings. The resultant framework is both faster and better while still principled. STCN is even fundamentally simpler than STM, and we hope that STCN can be adopted as the new and efficient backbone for future works.

3 Space-Time Correspondence Networks (STCN)

Given a video sequence and the first-frame annotation, we process the frames sequentially and maintain a memory bank of features. For each query frame, we extract a key feature which is compared with the keys in the memory bank, and retrieve corresponding value features from memory using key affinities, as in STM [18].

3.1 Feature Extraction

Figure 1 illustrates the overall flow of STCN. While STM [18] parameterizes a Query Encoder (image as input) and a Memory Encoder (image and mask as input) with two ResNet50s [62], we instead construct a Key Encoder (image as input) and a Value Encoder (image and mask as input) with a ResNet50 and a ResNet18 respectively. Thus, unlike in STM [18], the key features (and thus the resultant affinity) can be extracted independently without the mask, computed only once for each frame, and are symmetric between memory and query.² The rationales are 1) correspondences (key features) are more difficult to extract than values, hence the deeper network, and 2) correspondences should exist between frames in a video, and there is little reason to introduce the mask as a distraction. From another perspective, we are using a Siamese structure [47], which is widely adopted in few-shot learning [63, 49], for computing the key features, as if our memory bank were the few-shot support set. As the key features are independent of the mask, we can reuse the "query key" later as a "memory key" if we decide to turn the query frame into a memory frame during propagation (strategy to be discussed in Section 3.3). This means the key encoder is used exactly once per image in the entire process, despite appearing twice in Figure 1 (drawn that way for brevity). ²That is, matching between two points does not depend on whether they are query or memory points (not true in STM [18], as they are from different encoders). Architecture. Following STM practice [18], we take res4 features with stride 16 from the base ResNets as our backbone features and discard res5. A 3×3 convolutional layer without non-linearity is used as a projection head from the backbone feature to either the key space ($C_k$-dimensional) or the value space ($C_v$-dimensional). We set $C_v$ to be 512 following STM and discuss the choice of $C_k$ in Section 4.1.
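The following sketch shows, under our own assumptions (torchvision backbones, a single-channel mask, and a hand-rolled truncation helper), how the two encoders and the key projection head could be wired; it is an illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, resnet18

def truncate(net):
    # Keep layers up to res4 (stride 16): drop layer4 (res5), avgpool, and fc.
    return nn.Sequential(*list(net.children())[:-3])

key_encoder = truncate(resnet50())            # image only -> 1024-ch res4 features
key_proj = nn.Conv2d(1024, 64, 3, padding=1)  # 3x3 projection head to Ck = 64, no non-linearity

value_backbone = resnet18()
value_backbone.conv1 = nn.Conv2d(4, 64, 7, stride=2, padding=3, bias=False)  # RGB + mask input
value_encoder = truncate(value_backbone)      # image + mask -> 256-ch res4 features

img = torch.randn(1, 3, 384, 384)
mask = torch.randn(1, 1, 384, 384)
key = key_proj(key_encoder(img))                        # (1, 64, 24, 24)
value_feat = value_encoder(torch.cat([img, mask], 1))   # (1, 256, 24, 24)
```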
Feature reuse. As seen from Figure 1, both the key encoder and the value encoder process the same frame, albeit with different inputs. It is natural to reuse features from the key encoder (with fewer inputs and a deeper network) at the value encoder. To avoid bloating the feature dimensions and for simplicity, we concatenate the last-layer features from both encoders (before the projection head) and process them with two ResBlocks [62] and a CBAM block³ [64] to produce the final value output.

3.2 Memory Reading and Decoding

Given T memory frames and a query frame, the feature extraction step generates the following: memory key $k^M \in \mathbb{R}^{C_k \times THW}$, memory value $v^M \in \mathbb{R}^{C_v \times THW}$, and query key $k^Q \in \mathbb{R}^{C_k \times HW}$, where H and W are (stride 16) spatial dimensions. Then, for any similarity measure $c: \mathbb{R}^{C_k} \times \mathbb{R}^{C_k} \to \mathbb{R}$, we can compute the pairwise affinity matrix S and the softmax-normalized affinity matrix W, where $S, W \in \mathbb{R}^{THW \times HW}$, with:

$$S_{ij} = c(k^M_i, k^Q_j), \qquad W_{ij} = \frac{\exp(S_{ij})}{\sum_n \exp(S_{nj})}, \qquad (1)$$

where $k_i$ denotes the feature vector at the i-th position. The similarities are normalized by $\sqrt{C_k}$ as in standard practice [18, 33]; this is not shown for brevity. In STM [18], the dot product is used as c. Memory reading regularization like KMN [22] or top-k filtering [21] can be applied at this step (see the sketch after this subsection). With the normalized affinity matrix W, the aggregated readout feature $v^Q \in \mathbb{R}^{C_v \times HW}$ for the query frame can be computed as a weighted sum of the memory features with an efficient matrix multiplication:

$$v^Q = v^M W, \qquad (2)$$

which is then passed to the decoder for mask generation. In the case of multi-object segmentation, only Equation 2 has to be repeated, as W is defined between image features only and thus is the same for different objects. In the case of STM [18], W must be recomputed instead. A detailed running time analysis can be found in Section 6.2.

Decoder. Our decoder structure stays close to that of STM [18], as it is not the focus of this paper. Features are processed and gradually upsampled at a scale of two, with higher-resolution features from the key encoder incorporated using skip-connections. The final layer of the decoder produces a stride 4 mask, which is bilinearly upsampled to the original resolution. In the case of multiple objects, soft aggregation [18] of the output masks is used.
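A minimal sketch of Eqs. 1 and 2 (our own illustration, using dot-product affinity for simplicity), with the optional top-k filtering [21] mentioned above applied over the memory dimension:

```python
import torch

Ck, Cv, T, H, W = 64, 512, 4, 24, 24
kM = torch.randn(Ck, T * H * W)          # memory keys
vM = torch.randn(Cv, T * H * W)          # memory values
kQ = torch.randn(Ck, H * W)              # query keys

S = kM.t() @ kQ / Ck ** 0.5              # pairwise affinity, scaled by sqrt(Ck)  (Eq. 1)

k = 20                                   # optional top-k filtering [21]
vals, idx = S.topk(k, dim=0)             # keep k largest affinities per query position
Wmat = torch.zeros_like(S).scatter_(0, idx, torch.softmax(vals, dim=0))

vQ = vM @ Wmat                           # aggregated readout feature             (Eq. 2)
```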
3.3 Memory Management

So far we have assumed the existence of a memory bank of size T. Here, we describe the construction of the memory bank. For each memory frame, we store two items: memory key and memory value. Note that all memory frames (except the first one) were once query frames. The memory key is simply reused from the query key, as described in Section 3.1, without extra computation. The memory value is computed after mask generation of that frame, independently for each object, as the value encoder takes both the image and the object mask as inputs. STM [18] considers every fifth query frame as a memory frame, and the immediately previous frame as a temporary memory frame to ensure accurate matching. In the case of STCN, we find that it is unnecessary, and in fact harmful, to include the last frame as temporary memory. This is a direct consequence of using shared key encoders – 1) key features are sufficiently robust to match well without the need for close-range (temporal) propagation, and 2) the temporary memory key would otherwise be too similar to that of the query, as the image context usually changes smoothly and we do not have the encoding noise resulting from distinct encoders, leading to drifting.⁴ This modification also reduces the number of calls to the value encoder, contributing a significant speedup.

³We find this block to be non-essential in a later experiment but it is kept for consistency. ⁴This effect is amplified by the use of L2 similarity. See the supplementary material for a full comparison.

Table 1 tabulates the performance comparisons between STM and STCN. For a video of length L with m ≥ 1 objects, and a final memory bank of size T < L, STM [18] would need to invoke the memory encoder and compute the affinity mL times. Our proposed STCN, on the other hand, only invokes the value encoder mT times and computes the affinity L times. It is therefore evident that STCN is significantly faster. Section 6.2 provides a breakdown of running time.

4 Computing Affinity

The similarity function $c: \mathbb{R}^{C_k} \times \mathbb{R}^{C_k} \to \mathbb{R}$ plays a crucial role in both STM and STCN, as it supports the construction of the affinity that is central to both correspondences and memory reading. It also has to be fast and memory-efficient, as there can be up to 50M pairwise relations ($THW \times HW$) to compute for just one query frame. To recap, we need to compute the similarity between a memory key $k^M \in \mathbb{R}^{C_k \times THW}$ and a query key $k^Q \in \mathbb{R}^{C_k \times HW}$. The resultant pairwise affinity matrix is denoted as $S \in \mathbb{R}^{THW \times HW}$, with $S_{ij} = c(k^M_i, k^Q_j)$ denoting the similarity between $k^M_i$ (the memory feature vector at the i-th position) and $k^Q_j$ (the query feature vector at the j-th position). In the case of the dot product, it can be implemented very efficiently with a matrix multiplication:

$$S^{\mathrm{dot}}_{ij} = k^M_i \cdot k^Q_j \;\Rightarrow\; S^{\mathrm{dot}} = (k^M)^T k^Q \qquad (3)$$

In the following, we also discuss the use of cosine similarity and the negative squared Euclidean distance as similarity functions. They are defined as (with efficient implementations discussed later):

$$S^{\mathrm{cos}}_{ij} = \frac{k^M_i \cdot k^Q_j}{\|k^M_i\|_2 \times \|k^Q_j\|_2} \qquad S^{\mathrm{L2}}_{ij} = -\left\|k^M_i - k^Q_j\right\|_2^2 \qquad (4)$$

For brevity, we will use the shorthand "L2" or "L2 similarity" to denote the negative squared Euclidean distance in the rest of the paper. The ranges for dot product, cosine similarity and L2 similarity are $(-\infty, \infty)$, $[-1, 1]$, and $(-\infty, 0]$ respectively. Note that cosine similarity has a limited range. Non-related points are encouraged through back-propagation to have a low similarity score, such that they have a close-to-zero affinity (Eq. 1) and thus no value is propagated (Eq. 2).

4.1 A Closer Look at the Affinity

The affinity matrix is core to STCN and deserves close attention. Previous works [18, 21, 22, 23, 24], almost by default, use the dot product as the similarity function – but is this a good choice? Cosine similarity computes the angle between two vectors and is often regarded as the normalized dot product. Conversely, we can consider the dot product as a scaled version of cosine similarity, with the scale equal to the product of the vectors' norms. Note that this is query-agnostic, meaning that every similarity with a memory key $k^M_i$ will be scaled by its norm.
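A toy numerical illustration of this query-agnostic scaling (ours, on random vectors): one key with a large norm dominates or vanishes under dot product regardless of the query, while L2 affinity favors the genuinely nearest key.

```python
import numpy as np

rng = np.random.default_rng(0)
keys = rng.normal(size=(5, 8))
keys[0] *= 10.0                                       # outlier key with a large magnitude

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

for _ in range(3):
    q = rng.normal(size=8)
    w_dot = softmax(keys @ q)                         # dot-product affinity
    w_l2 = softmax(-((keys - q) ** 2).sum(axis=1))    # negative squared Euclidean
    print(w_dot.round(2), w_l2.round(2))              # key 0 saturates w_dot, not w_l2
```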
If we cast the aggregation process (Eq. 2) as voting, with similarity representing the weights, memory keys with large magnitudes will predominantly suppress any representation from other memory nodes. Figure 2 visualizes this phenomenon in a 2D feature space. For dot product, only a subset of points (labeled as triangles) has a chance to contribute the most for any query. Outliers (top-right red) can suppress existing clusters; clusters with a dominant value in one dimension (top-left cyan) can suppress other clusters; some points may be able to contribute the most in a region even if they lie outside of it (bottom-right beige). These undesirable situations do not happen if the proposed L2 similarity is used: a Voronoi diagram [66] is formed and every memory point can be fully utilized, leading to a diversified, query-specific voting mechanism with ease. Figure 3 shows a closer look at the same problem with soft weights. With dot product, the blue/green point has low weights for every possible query in the first quadrant, while a smooth transition is created with our proposed L2 similarity. Note that cosine similarity has the same benefits, but its limited range $[-1, 1]$ means that an extra softmax temperature hyperparameter is required to shape the affinity distribution – one more parameter to tune. L2 works well without extra temperature tuning in our experiments.

Connection to self-attention, and whether some points are more important than others. Dot products have been used extensively in self-attention models [33, 67, 68]. One way to view the dot-product affinity positively is to consider the points with large magnitudes as more important – naturally they should enjoy a higher influence. Admittedly, this is probably true in NLP [33], where a stop word ("the") is almost useless compared to a noun ("London"), or in video classification [67], where the foreground human is far more important than a pixel in the plain blue sky. This is however not true for STCN, where pixels are more or less equal. It is beneficial to match every pixel in the query frame accurately, including the background (also noted by [10]). After all, if we know that a pixel is part of the background, we also know that it does not belong to the foreground. In fact, we find STCN can track the background fairly well (floor, lake, etc.) even though it is never explicitly trained to do so. The notion of relative importance therefore does not generally apply in our context.

Efficient implementation. The naïve implementation of the negative squared Euclidean distance in Eq. 4 needs to materialize a $C_k \times THW \times HW$ element-wise difference matrix, which is then squared and summed. This process is much slower than a simple dot product and cannot be run on the same hardware. A simple decomposition greatly simplifies the implementation, as noted in [69]:

$$S^{\mathrm{L2}}_{ij} = -\left\|k^M_i - k^Q_j\right\|_2^2 = 2 k^M_i \cdot k^Q_j - \left\|k^M_i\right\|_2^2 - \left\|k^Q_j\right\|_2^2 \qquad (5)$$

which requires only slightly more computation than the baseline dot product, and can be implemented with standard matrix operations. In fact, we can further drop the last term, as softmax is invariant to translation in the target dimension (details in the supplementary material). For cosine similarity, we first normalize the input vectors, then compute the dot product. Table 2 tabulates the actual computational and memory costs.
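A minimal sketch of Eq. 5 (our own illustration): L2 affinity via standard matrix operations, dropping the query-norm term since the softmax over the memory dimension is invariant to it.

```python
import torch

Ck, THW, HW = 64, 4096, 1024
kM = torch.randn(Ck, THW)
kQ = torch.randn(Ck, HW)

def l2_affinity(kM, kQ):
    ab = kM.t() @ kQ                               # (THW, HW) inner products
    m_sq = (kM ** 2).sum(dim=0, keepdim=True).t()  # (THW, 1) squared memory norms
    return 2 * ab - m_sq                           # query norms omitted (softmax-invariant)

S = l2_affinity(kM, kQ)
Wmat = torch.softmax(S, dim=0)                     # normalize over the memory dimension
```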
4.2 Experimental Verification

Here, we verify three claims: 1) the aforementioned phenomenon does happen in a high-dimensional key space, for real data and a fully-trained model; 2) using L2 similarity diversifies the voting; 3) L2 similarity brings about higher efficiency and performance.

Affinity distribution. We verify the first two claims by training two different models, with dot product and L2 similarity respectively as the similarity function, and plotting the maximum contribution given by each memory node in its lifetime. We use the same setting for the two models and report the distribution on the DAVIS 2017 [65] dataset. Figure 4 shows the pertinent distributions. Under the L2 similarity measure, many more memory nodes contribute a fair share. Specifically, around 3% of memory nodes never contribute more than 1% weight under dot product, while only 0.06% suffer the same fate with L2. Under dot product, 31% of memory nodes contribute less than 10% weight at best, while the same happens for only 7% of the memory with L2 similarity. To measure the distribution inequality, we additionally compute the Gini coefficient [70] (the higher it is, the more unequal the distribution); a small helper is sketched after this subsection. The Gini coefficient for dot product is 44.0, while the Gini coefficient for L2 similarity is much lower at 31.8.

Performance and efficiency. Next, we show that using L2 similarity does improve performance, with negligible overhead. We compare three similarity measures: dot product, cosine similarity, and L2 similarity. For cosine similarity, we use a softmax temperature of 0.01, while a default temperature of 1 is used for both dot product and L2 similarity. This scaling is crucial for cosine similarity only, since it is the only one with a limited output range $[-1, 1]$. Searching for an extra hyperparameter is computationally demanding – we simply picked one that converges fairly quickly without collapsing. Table 2 tabulates the main results. Interestingly, we find that reducing the key space dimension ($C_k$) is beneficial to both cosine similarity and L2 similarity but not to dot product. This can be explained in the context of Section 4.1 – the network needs more dimensions so that it can spread the memory key features out, to save them from being suppressed by high-magnitude points. Cosine similarity and L2 similarity do not suffer from this problem and can utilize the full key space. The reduced key space in turn benefits memory efficiency and improves running time.
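A small helper for the Gini coefficient referenced above (our own sketch; the input is the per-node maximum contribution, and the values reported in the paper appear to be on a 0-100 scale):

```python
import numpy as np

def gini(x):
    # Standard formula on sorted data: G = sum_i (2i - n - 1) * x_i / (n * sum(x))
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    idx = np.arange(1, n + 1)
    return ((2 * idx - n - 1) * x).sum() / (n * x.sum())

print(gini([1, 1, 1, 1]))   # 0.0  - perfectly equal contributions
print(gini([0, 0, 0, 1]))   # 0.75 - one node dominates
```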
5 Implementation Details

Models are trained on two 11GB 2080Ti GPUs with the Adam optimizer [71] using PyTorch [72]. Following previous practice [18, 21], we first pretrain the model on static image datasets [73, 74, 75, 76, 77] with synthetic deformation and then perform main training on YouTubeVOS [78] and DAVIS [3, 65]. We also experimented with the synthetic dataset BL30K [79, 80] proposed in [21], which is not used unless otherwise specified. We use a batch size of 16 during pretraining and a batch size of 8 during main training. Pretraining takes about 36 hours and main training takes around 16 hours, with batchnorm layers frozen during training following [18]. Bootstrapped cross entropy is used following [21]. The full set of hyperparameters can be found in the open-sourced code. In each iteration, we pick three temporally ordered frames (with the ground-truth mask for the first frame) from a video to form a training sample [18]. First, we predict the second frame using the first frame as memory. The prediction is saved as the second memory frame, and then the third frame is predicted using the union of the first and the second frame. The temporal distance between the frames first gradually increases from 5 to 25 as a curriculum learning schedule, and anneals back to 5 towards the end of training. This process follows the implementation of MiVOS [21]. For memory-read augmentation, we experimented with kernelized memory reading [22] and top-k filtering [21]. We find that top-k works well universally and improves running time, while kernelized memory reading is slower and does not always help. We find that k = 20 always works better for STCN (the original paper uses k = 50), and we adopt top-k filtering in all our experiments with k = 20. For fairness, we also re-run all experiments in MiVOS [21] with k = 20, and pick the best result in their favor. We use L2 similarity with $C_k = 64$ in all experiments unless otherwise specified. For inference, a 2080Ti GPU is used with full floating point precision for a fair running time comparison. We memorize every 5th frame, and no temporary frame is used, as discussed in Section 3.3.

6 Experiments

We mainly conduct experiments on the DAVIS 2017 validation [65] set and the YouTubeVOS 2018 [78] validation set. For completeness, we also include results on the single-object DAVIS 2016 validation [3] set and the expanded YouTubeVOS 2019 [78] validation set. Results for the DAVIS 2017 test-dev [65] set are included in the supplementary material. We first conduct quantitative comparisons with previous methods, and then analyze the running time of each component in STCN. For reference, we also present results without pretraining on static images. Ablation studies have been included in previous sections (Table 1 and Table 2).

6.1 Evaluations

Table 3 tabulates the comparisons of STCN with previous methods on semi-supervised video object segmentation benchmarks. For DAVIS 2017 [65], we compare the standard metrics: region similarity J, contour accuracy F, and their average J&F. For YouTubeVOS [78], we report J and F for both seen and unseen categories, and the averaged overall score G. For comparing speed, we compute the multi-object FPS, that is, the total number of output frames divided by the total processing time for the entire DAVIS 2017 [65] validation set. We either copy the FPS directly from papers/project websites, or estimate it based on the reported single-object inference FPS (simply labeled as <5). We use 480p resolution videos for both DAVIS and YouTubeVOS. Tables 4, 5, 6, and 7 tabulate additional results. For the interactive setting, we replace the propagation module of MiVOS [21] with STCN.

Visualizations. Figure 6 visualizes the learned correspondences. Note that our correspondences are general and mask-free, naturally associating every pixel (including background bystanders) even though the network is only trained with foreground masks. Figure 7 visualizes our semi-supervised mask propagation results, with the last row being a failure case (Section 7).

Leaderboard results. Our method is also very competitive on the public VOS challenge leaderboard [78]. Methods on the leaderboard are typically cutting-edge, with engineering extensions like deeper networks, multi-scale inference, and model ensembles. They usually represent the highest achievable performance at the time. On the latest YouTubeVOS 2019 validation split [78], our base model (84.2 G) outperforms the previous challenge winner [32] (based on STM [18], 82.0 G) by a large margin.
With ensemble and multi-scale testing (details in the supplementary material), our method is ranked first (86.7 G) at the time of submission on the still-active leaderboard.

6.2 Running Time Analysis

Here, we analyze the running time of each component in STM and STCN on DAVIS 2017 [65]. For a fair comparison, we use our own implementation of STM, enable top-k filtering [21], and set $C_k = 64$ for both methods, such that all the speed improvements come from the fundamental differences between STM and STCN. Our affinity matching time is lower because we compute a single affinity between raw images, while STM [18] computes one for every object. Our value encoder takes much less time than the memory encoder in STM [18] because of our light network, feature reuse, and robust memory bank/management, as discussed in Section 3.3. ⁵A linear extrapolation would severely underestimate the performance of many previous methods.

7 Limitations

To isolate our method from other possible enhancements, we only use fundamentally simple global matching. Like STM [18], we have no notion of temporal consistency, as we do not employ local matching [58, 10, 17] or optical flow [29]. This means we may incorrectly segment far-away objects that have a similar appearance. One such failure case is shown in the last row of Figure 7. We expect that, given our framework's simplicity, our method can be readily extended to include temporal consistency considerations for further improvement.

8 Conclusion

We present STCN, a simple, effective, and efficient framework for video object segmentation. We propose to use direct image-to-image correspondence for efficiency and more robust matching, and examine the inner workings of the affinity in detail – L2 similarity is proposed as a result of our observations. With its clear technical advantages, we hope that STCN can serve as a new baseline backbone for future contributions.

Table 4: Results on the DAVIS 2016 validation set.
Method               J&F   J     F
OSMN [8]             73.5  74.0  72.9
MaskTrack [15]       77.6  79.7  75.4
OSVOS [5]            80.2  79.8  80.6
FAVOS [14]           81.0  82.4  79.5
FEELVOS [58]         81.7  81.1  82.2
RGMP [55]            81.8  81.5  82.0
Track-Seg [54]       83.1  82.6  83.6
FRTM-VOS [6]         83.5  -     -
CINN [51]            84.2  83.4  85.0
OnAVOS [50]          85.5  86.1  84.9
PReMVOS [81]         86.8  84.9  88.6
GC [24]              86.8  87.6  85.7
RMNet [29]           88.8  88.9  88.7
STM [18]             89.3  88.7  89.9
CFBI [10]            89.4  88.3  90.5
CFBI+ [83]           89.9  88.7  91.1
MiVOS [21]           90.0  88.9  91.1
SwiftNet [30]        90.4  90.5  90.3
KMN [22]             90.5  89.5  91.5
LCM [28]             90.7  91.4  89.9
Ours                 91.6  90.8  92.5
MiVOS [21] + BL30K   91.0  89.6  92.4
Ours + BL30K         91.7  90.4  93.0

Method       AUC-J&F  J&F @ 60s  Time (s)
ATNet [84]   80.9     82.7       55+
STM [85]     80.3     84.8       37
GIS [86]     85.6     86.6       34
MiVOS [21]   87.9     88.5       12
Ours         88.4     88.8       7.3

Table 7: Effects of pretraining on static images/main training on the DAVIS 2017 validation set.
                    J&F   J     F
Pre-training only   75.8  73.1  78.6
Main training only  82.5  79.3  85.7
Both                85.4  82.2  88.6

Figure 7: Visualization of semi-supervised VOS results (row labels: STM, Ours, MiVOS, Ours), with the first column being the reference masks to be propagated. The first two examples show comparisons of our method with STM [18] and MiVOS [21]. In the second example, zoom-in insets (orange) are shown with the corresponding ground-truth insets (green) to highlight their differences. The last row shows a failure case: we cannot distinguish the real duck from the duck picture, as no temporal consistency clue is used in our method.
Broader Impacts Malicious use of VOS software can bring potential negative societal impacts, including but not limited to unauthorized mass surveillance or privacy-infringing human/vehicle tracking. We believe that the task itself is neutral, with positive uses as well, such as video editing for amateurs or making self-driving cars safer. Acknowledgment This research is supported in part by Kuaishou Technology and the Research Grants Council of the Hong Kong SAR under grant no. 16201818.
1. What is the focus of the paper in terms of computer vision tasks? 2. What are the modifications made by the authors compared to previous works? 3. How does the paper analyze and improve the feature reuse aspect? 4. What are the benefits of the proposed approach regarding speed and performance? 5. Are there any concerns or suggestions regarding the simplicity of the method?
Summary Of The Paper Review
Summary Of The Paper The authors propose STCN for video object segmentation. Compared to STM, the dot-product similarity is replaced with the L2 similarity. Moreover, the key/value encoders are separated and thus the features can be reused. Though simple, these small changes let STCN achieve state-of-the-art results and faster inference speed. Moreover, the paper gives a closer look at the similarity measure. Review The paper is clearly written. Section 4 gives a good analysis of different similarity measures. The proposed STCN achieves state-of-the-art results on mainstream benchmarks.
NIPS
Title Sub-exponential time Sum-of-Squares lower bounds for Principal Components Analysis

Abstract Principal Components Analysis (PCA) is a dimension-reduction technique widely used in machine learning and statistics. However, due to the dependence of the principal components on all the dimensions, the components are notoriously hard to interpret. Therefore, a variant known as sparse PCA is often preferred. Sparse PCA learns principal components of the data but enforces that such components must be sparse. This has applications in diverse fields such as computational biology and image processing. To learn sparse principal components, it's well known that standard PCA will not work, especially in high dimensions, and therefore algorithms for sparse PCA are often studied as a separate endeavor. Various algorithms have been proposed for Sparse PCA over the years, but given how fundamental it is for applications in science, the limits of efficient algorithms are only partially understood. In this work, we study the limits of the powerful Sum of Squares (SoS) family of algorithms for Sparse PCA. SoS algorithms have recently revolutionized robust statistics, leading to breakthrough algorithms for long-standing open problems in machine learning, such as optimally learning mixtures of gaussians, robust clustering, robust regression, etc. Moreover, SoS is believed to be the optimal robust algorithm for many statistical problems. Therefore, for sparse PCA, it's plausible that SoS can beat simpler algorithms such as diagonal thresholding that have been traditionally used. In this work, we show that this is not the case, by exhibiting strong tradeoffs between the number of samples required, the sparsity and the ambient dimension, for which SoS algorithms, even if allowed sub-exponential time, will fail to optimally recover the component. Our results are complemented by known algorithms in the literature, thereby painting an almost complete picture of the behavior of efficient algorithms for sparse PCA. Since SoS algorithms encapsulate many algorithmic techniques such as spectral or statistical query algorithms, this solidifies the message that known algorithms are optimal for sparse PCA. Moreover, our techniques are strong enough to obtain similar tradeoffs for Tensor PCA, another important higher order variant of PCA with applications in topic modeling, video processing, etc.

1 Introduction

Principal components analysis (PCA) [62] is a popular data processing and dimension reduction routine that is widely used. It has numerous applications in Machine Learning, Statistics, Engineering, Biology, etc. Given a dataset, PCA projects the data to a lower dimensional space spanned by the principal components. The intuition is that PCA sheds lower order information such as noise, but importantly preserves much of the intrinsic information present in the data that is needed for downstream tasks. However, despite great optimality properties, PCA has its drawbacks. Firstly, because the principal components are linear combinations of all the original variables, it's notoriously hard to interpret them [84]. Secondly, it's well known that PCA does not yield good estimators in high dimensional settings [13, 97, 61]. To address these issues, a variant of PCA known as Sparse PCA is often used.

*Equal contribution. †A.P. was supported in part by NSF grant CCF-2008920 and G.R. was supported in part by NSF grants CCF-1816372 and CCF-2008920.
Sparse PCA searches for principal components of the data with the added constraint of sparsity. Concretely, consider given data $v_1, v_2, \ldots, v_m \in \mathbb{R}^d$. In Sparse PCA, we want to find the top principal component of the data under the extra constraint that it has sparsity at most k. That is, we want to find a vector $v \in \mathbb{R}^d$ that maximizes $\sum_{i=1}^m \langle v, v_i \rangle^2$ such that $\|v\|_0 \le k$. Sparse PCA has enjoyed applications in a diverse range of fields ranging from medicine, computational biology, economics, image and signal processing, and finance to, of course, machine learning and statistics (e.g. [117, 89, 85, 115, 31, 2]). It's worth noting that in some of these applications, other algorithms are also often used to learn statistical models with sparse structure, such as greedy algorithms (e.g. [60, 81, 59, 124]) and score-based algorithms (e.g. [28, 90, 107]), but in this work, we focus on the widely used sparse PCA technique. Sparse PCA comes with the important benefit that the learnt components are easier to interpret. A notable example of this is recovering topics from documents [32, 95]. Moreover, this has important benefits for algorithmic fairness in machine learning. A large volume of research has been devoted to studying Sparse PCA and its variants. Algorithms have been proposed and studied by several works, e.g. [4, 83, 73, 33, 118, 20, 82, 34, 54, 23, 35, 29, 36]. For example, simple variants of PCA such as thresholding on top of standard PCA [61, 29] work well in certain parameter settings. This leads to the natural question of whether more sophisticated algorithms can do better, either for these settings or for other parameter settings. On the other hand, there have been works from the inapproximability perspective as well (e.g. [20, 54, 23, 73, 35, 118]; see Section 3.1 for a more detailed overview). In particular, a lot of these inapproximability results have relied on various other conjectures, due to the difficulty of proving unconditional lower bounds. Despite these prior works, exactly understanding the limits of efficient algorithms for this problem is still an active research area. This is natural considering the importance of sparse PCA and how fundamental it is to a multitude of applications. In this work, we focus on the powerful Sum-of-Squares (SoS) family of algorithms [113, 92, 96, 48] based on semidefinite programming relaxations. SoS algorithms have recently revolutionized robust machine learning, a branch of machine learning where the underlying dataset is noisy, with the noise being either random or adversarial. Robust machine learning has gotten a lot of attention in recent years because of its wide variety of use cases in machine learning and other downstream applications, including safety-critical ones like autonomous driving. For example, there has been a high volume of practical works in computer vision [114, 47, 121, 50, 112, 122, 42, 76] and speech recognition [57, 119, 106, 108, 78, 3, 91, 94]. In this important field, SoS has recently led to breakthrough algorithms for long-standing open problems [16, 80, 51, 70, 43, 72, 14, 15, 111]. Highlights include - Robustly learning mixtures of high dimensional Gaussians. This is an extremely important problem that has been subjected to intense scrutiny, with a long line of work culminating in [16, 80]. - Efficient algorithms for the fundamental problems of regression [70], moment estimation [72], clustering [14] and subspace recovery [15] in the presence of outliers.
Also known as robust machine learning, this setting is more akin to real life data, which almost always has outliers or corrupted entries. Moreover, SoS algorithms are believed to be the optimal robust algorithm for many statistical problems. In a different direction, SoS algorithms have led to the design of fast algorithms for problems such as tensor decomposition [53, 111]. Put more concretely, SoS algorithms, also known as the SoS hierarchy or the Lasserre hierarchy, offer a series of convex semidefinite programming (SDP) based relaxations to optimization problems. Due to its ability to capture a wide variety of algorithmic techniques, it has become a fundamental tool in algorithms and optimization. It was and still remains an extremely versatile tool for combinatorial optimization [46, 9, 49, 102], but recently it has been extensively used in Statistics and Machine Learning (apart from the references above, see also [17, 18, 52, 100]). Therefore, we ask (a question also raised and posed as an open problem in the works [82, 54, 55]): Can Sum-of-Squares algorithms beat known algorithms for Sparse PCA? In this work, we show that SoS algorithms cannot beat known spectral algorithms, even if we allow sub-exponential time! Therefore, this suggests that currently used algorithms such as thresholding or other spectral algorithms are in a sense optimal for this problem. To prove our results, we will consider random instances of Sparse PCA and show that they are naturally hard for SoS. In particular, we focus on the Wishart random model of Sparse PCA. This model is a more natural modeling assumption compared to other random models that have been studied before, such as the Wigner random model. Note importantly that our model assumptions only strengthen our results, because we are proving impossibility results. In other words, if SoS algorithms do not work for this restricted version of sparse PCA, then they will not work for more general models, e.g. with general covariance or multiple spikes. We now describe the model. The Wishart model of Sparse PCA, also known as the Spiked Covariance model, was originally proposed by [61]. In this model, we observe m vectors $v_1, \ldots, v_m \in \mathbb{R}^d$ from the distribution $N(0, I_d + \lambda uu^T)$, where u is a k-sparse unit vector, that is, $\|u\|_0 \le k$, and we would like to recover the principal component u. Here, the sparsity of a vector is the number of nonzero entries, and $\lambda$ is known as the signal-to-noise ratio. As the signal-to-noise ratio gets lower, it becomes harder and maybe even impossible to recover u, since the signature left by u in the data becomes fainter. But it's possible that this may be mitigated if the number of samples m grows. Therefore, there is a tradeoff between $m$, $d$, $k$ and $\lambda$ at play here. Algorithms proposed earlier have been able to recover u in various regimes. For example, if the number of samples is really large, namely $m \gtrsim \max(\frac{d}{\lambda}, \frac{d}{\lambda^2})$, then standard PCA will work. But if this is not the case, we may still be able to recover u by assuming that the sparsity is not too large compared to the number of samples, namely $m \gtrsim \frac{k^2}{\lambda^2}$. To do this, we use a variant of standard PCA known as diagonal thresholding. Similar results have been obtained for various regimes, while some regimes have resisted attack by algorithms. Our results here complete the picture by showing that in the regimes that have so far resisted attack by efficient algorithms, the powerful Sum of Squares algorithms also cannot recover the principal component.
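An illustrative sketch (ours, with arbitrary parameters) of sampling from the spiked Wishart model and running diagonal thresholding: pick the k coordinates with the largest diagonal entries of the sample covariance, then take the top eigenvector of the corresponding submatrix.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, m, lam = 200, 10, 2000, 2.0

u = np.zeros(d)
u[rng.choice(d, k, replace=False)] = rng.choice([-1, 1], k) / np.sqrt(k)
cov = np.eye(d) + lam * np.outer(u, u)
V = rng.multivariate_normal(np.zeros(d), cov, size=m)      # samples v_1..v_m

Sigma = V.T @ V / m                                        # sample covariance
support = np.argsort(np.diag(Sigma))[-k:]                  # k largest diagonal entries
eigvals, eigvecs = np.linalg.eigh(Sigma[np.ix_(support, support)])
u_hat = np.zeros(d)
u_hat[support] = eigvecs[:, -1]                            # top eigenvector on the support
print(abs(u_hat @ u))                                      # close to 1 when m >> k^2 / lam^2
```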
We now state our theorem informally, with the formal statement in Theorem 3.1.

Theorem 1.1. For the Wishart model of Sparse PCA, sub-exponential time SoS algorithms fail to recover the principal component when the number of samples $m \ll \min\left(\frac{d}{\lambda^2}, \frac{k^2}{\lambda^2}\right)$.

In particular, this theorem resolves an open problem posed by [82] and [54, 55]. In almost all other regimes, algorithms to recover the principal component u exist. We give a summary of such algorithms in Section 3, captured succinctly in Fig. 1. We say almost all other regimes because there is one interesting regime, namely $\frac{d}{\lambda^2} \lesssim m \lesssim \frac{\min(d, k)}{\lambda}$ (marked by light green in Fig. 1), where we can show that, information theoretically, we cannot recover u, but it is possible to do hypothesis testing of Sparse PCA. That is, in this regime, we can distinguish purely random unspiked samples from the spiked samples. However, we will not be able to recover the principal component even if we use an exponential time brute-force algorithm. We use our techniques to also obtain strong results for the related Tensor Principal Components Analysis (Tensor PCA) problem. Tensor PCA, originally introduced by [110], is a generalization of PCA to higher order tensors. Formally, given an order k tensor of the form $\lambda u^{\otimes k} + B$, where $u \in \mathbb{R}^n$ is a unit vector and $B \in \mathbb{R}^{[n]^k}$ has independent Gaussian entries, we would like to recover the principal component u. Here, $\lambda$ is known as the signal-to-noise ratio. Tensor PCA is a remarkably useful statistical and computational technique to exploit higher order moments of the data. It was originally envisaged to be applied in latent variable modeling and indeed, it has found multiple applications in this context (e.g. [5, 68, 69, 6]). Here, a tensor containing statistics of the input data is computed and then decomposed in order to recover the latent variables. Because of the technique's versatility, it has gathered a lot of attention in machine learning, with applications in topic modeling, video processing, collaborative filtering, community detection, etc. (see e.g. [56, 7, 110, 5, 6, 38, 79] and references therein). For Tensor PCA, similar to sparse PCA, there has been wide interest in the community to study algorithms (e.g. [11, 22, 52, 53, 110, 125, 120, 67, 8]) as well as approximability and hardness (e.g. [88, 75, 24, 54]; see Section 3.2 for a more detailed overview). It's worth noting that many of these hardness results are conditional, that is, they rely on various conjectures, sometimes stronger than P ≠ NP. Moreover, there has been widespread interest from the statistics community as well, e.g. [58, 98, 77, 26, 27], due to fascinating connections to random matrix theory and statistical physics. In this work, we study the performance of sub-exponential time Sum of Squares algorithms for Tensor PCA. Our main result is stated informally below and formally in Theorem 3.2.

Theorem 1.2. For Tensor PCA, sub-exponential time SoS algorithms fail to recover the principal component when the signal-to-noise ratio $\lambda \ll n^{k/4}$.

In particular, this resolves an open question posed by the works [52, 22, 54, 55]. Therefore, our main contributions can be summarized as follows. 1. Despite the huge breakthroughs achieved by Sum-of-Squares algorithms in recent works on high dimensional statistics, we show barriers to them for the fundamental problems of Sparse PCA and Tensor PCA. 2. We achieve optimal tradeoffs compared to known algorithms, thereby painting a full picture of the computational thresholds of tractable algorithms.
This suggests that existing algorithms are preferable for PCA and its variants. 3. Prior lower bounds for these problems have either focused on weaker classes of algorithms or were obtained assuming other hardness conjectures, whereas we prove high degree sub-exponential time SoS lower bounds without relying on any conjectures.

Acknowledgements and Bibliographic note We thank Sam Hopkins, Pravesh Kothari, Prasad Raghavendra, Tselil Schramm, David Steurer and Madhur Tulsiani for helpful discussions. We also thank Sam Hopkins and Pravesh Kothari for assistance in drafting the informal description of the machinery (Section C). Parts of this work have also appeared in [99, 104].

2 Sum-of-Squares algorithms

The Sum of Squares (SoS) hierarchy is a powerful class of algorithms that utilizes the power of semidefinite programming for optimization problems, and it has achieved breakthrough algorithms for many problems in machine learning and statistics. In this section, we briefly describe the sum of squares hierarchy of algorithms. For a more detailed treatment with an eye towards applications to machine learning and statistics, see the ICM survey [102] or the monograph [43]. Given an optimization problem specified by a program with polynomial constraints, the SoS hierarchy gives a family of convex relaxations parameterized by an integer known as its degree. As the degree gets higher, the running time to solve the convex relaxation increases, but on the other hand the relaxation gets stronger and hence serves as a better algorithm. This offers a smooth tradeoff between running time and the quality of approximation. In general, we can solve degree-$D_{sos}$ SoS in $n^{O(D_{sos})}$ time†. Therefore, constant degree SoS corresponds to polynomial time algorithms, which in general translates to efficient algorithms. In this work, we focus on and show limitations of degree $n^{\varepsilon}$ SoS, which corresponds to sub-exponential running time. Suppose we are given multivariate polynomials $p, g_1, \ldots, g_m$ on n variables $x_1, \ldots, x_n$ (denoted collectively by x) taking real values. Consider the task: maximize $p(x)$ such that $g_1(x) = 0, \ldots, g_m(x) = 0$. In general, we could also allow inequality constraints, e.g., $g_i(x) \ge 0$. In this work, we only have equality constraints, but much of the theory generalizes when we have inequality constraints instead. We now formally describe the Sum of Squares hierarchy of algorithms, via the so-called pseudo-expectation operators.

Definition 2.1 (Pseudo-expectation values). Given multivariate polynomial constraints $g_1 = 0, \ldots, g_m = 0$ on n variables $x_1, \ldots, x_n$, degree $D_{sos}$ pseudo-expectation values are a linear map $\tilde{E}$ from polynomials of $x_1, \ldots, x_n$ of degree at most $D_{sos}$ to $\mathbb{R}$ satisfying the following conditions:
1. $\tilde{E}[1] = 1$,
2. $\tilde{E}[f \cdot g_i] = 0$ for every $i \in [m]$ and polynomial $f$ such that $\deg(f \cdot g_i) \le D_{sos}$,
3. $\tilde{E}[f^2] \ge 0$ for every polynomial $f$ such that $\deg(f^2) \le D_{sos}$.
Any linear map $\tilde{E}$ satisfying the above properties is known as a degree $D_{sos}$ pseudo-expectation operator satisfying the constraints $g_1 = 0, \ldots, g_m = 0$. The intuition behind pseudo-expectation values is that the conditions on them would be satisfied by any actual expectation operator that takes expected values over a distribution of true optimal solutions, so optimizing over pseudo-expectation values gives a relaxation of the problem.

†In pathological cases, there may be issues with bit complexity, but that will not appear in our settings. For details, see [93, 101].
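As a toy illustration of optimizing over pseudo-expectation values (our own example, not the paper's sparse PCA program), consider the hypercube constraints $x_i^2 = 1$: the degree-2 moments $X_{ij} = \tilde{E}[x_i x_j]$ form a PSD matrix with unit diagonal, and maximizing $\tilde{E}[x^T A x] = \langle A, X \rangle$ over such X is a semidefinite program. A sketch using the cvxpy library (assuming it and its bundled SDP solver are installed):

```python
import cvxpy as cp
import numpy as np

n = 5
A = np.random.default_rng(0).normal(size=(n, n))
A = (A + A.T) / 2

X = cp.Variable((n, n), symmetric=True)   # X_ij plays the role of E~[x_i x_j]
constraints = [X >> 0, cp.diag(X) == 1]   # PSD-ness and the constraints E~[x_i^2] = 1
prob = cp.Problem(cp.Maximize(cp.trace(A @ X)), constraints)
prob.solve()
print(prob.value)  # upper-bounds max over x in {-1,1}^n of x^T A x
```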
Definition 2.2 (Degree $D_{sos}$ SoS). The degree $D_{sos}$ SoS relaxation for the polynomial optimization problem "maximize $p(x)$ such that $g_1(x) = 0, \ldots, g_m(x) = 0$" is the program that maximizes $\tilde{E}[p(x)]$ over all degree $D_{sos}$ pseudo-expectation operators $\tilde{E}$ satisfying the constraints $g_1 = 0, \ldots, g_m = 0$.

The main advantage is that the SoS relaxation can be efficiently solved via convex programming! In particular, Item 3 in Definition 2.1 is equivalent to a matrix being positive semidefinite, therefore the degree $D_{sos}$ SoS relaxation can be solved via semidefinite programming [116]. This meta-algorithm is known as a degree-$D_{sos}$ SoS algorithm. It runs in $n^{O(D_{sos})}$ time†. Therefore, constant degree SoS can be solved in polynomial time. In the next section, we apply SoS to PCA and formally state our results.

2.1 Related algorithmic techniques

Statistical Query algorithms Statistical query algorithms are another popular restricted class of algorithms, introduced by [66]. In this model, for a given data distribution, we are allowed to query expected values of functions. Concretely, for a distribution D on $\mathbb{R}^n$, we have access to it via an oracle that, given as query a function $f: \mathbb{R}^n \to [-1, 1]$, returns $E_{x \sim D} f(x)$ up to some additive adversarial error. SQ algorithms capture a broad class of algorithms in statistics and machine learning and have been used to study information-computation tradeoffs [109, 40, 30]. There has also been significant work trying to understand the limits of SQ algorithms (e.g. [40, 41, 34]). Formally, SQ algorithms and SoS are in general incomparable. However, the recent work [25] showed that under mild conditions, low-degree polynomial algorithms (defined next) and statistical query algorithms have equivalent power. But also, under these conditions, it's easy to see that SoS is more powerful than low degree algorithms, and hence SoS algorithms are stronger than statistical query algorithms. Therefore, the SoS lower bounds shown in this work give strictly stronger evidence of hardness than SQ lower bounds.

Low degree polynomial algorithms In statistics, a hypothesis testing problem is a problem where the input is sampled from one of two distributions and we would like to identify which distribution it was sampled from. In this setting, a low degree polynomial algorithm compares the expectation of a low-degree polynomial to try to distinguish the two distributions. This method has been used to conjecture hardness thresholds for various problems [54, 55, 75]. However, under mild conditions, the SoS hierarchy of algorithms is more powerful than low degree polynomial algorithms [54] and therefore potentially yields better algorithms. Therefore, the SoS lower bounds shown in this work are stronger than low degree polynomial lower bounds as well.

3 Lower bounds for Sparse Principal Components Analysis

In this section, we state our main results for Sparse PCA and Tensor PCA.

3.1 Sparse PCA

We recall the setting of the Wishart model of Sparse PCA: we are given $v_1, \ldots, v_m \in \mathbb{R}^d$ sampled from $N(0, I_d + \lambda uu^T)$, where u is a k-sparse unit vector, and we wish to recover u. We will further assume that the entries of u are in $\{-\frac{1}{\sqrt{k}}, 0, \frac{1}{\sqrt{k}}\}$, chosen such that the sparsity is k (and hence the norm is 1). Note importantly that this assumption only strengthens our result: if SoS cannot solve this problem even for this specific u, it cannot do any better for the general problem with arbitrary u.
Let the vectors from the given dataset be $v_1, \ldots, v_m$, and let them form the rows of a matrix $S \in \mathbb{R}^{m \times d}$. Let $\Sigma = \frac{1}{m}\sum_{i=1}^m v_i v_i^T$ be the sample covariance matrix. Then the standard PCA objective is to maximize $x^T \Sigma x$ and recover $x = \sqrt{k}\, u$. Therefore, the sparse PCA problem can be rephrased as

maximize $\frac{m}{k} \cdot x^T \Sigma x = \frac{1}{k} \sum_{i=1}^m \langle x, v_i \rangle^2$ such that $x_i^3 = x_i$ for all $i \le d$ and $\sum_{i=1}^d x_i^2 = k$,

where the program variables are $x_1, \ldots, x_d$. The constraint $x_i^3 = x_i$ enforces that the entries of x are in $\{-1, 0, 1\}$, and together with the last condition $\sum_{i=1}^d x_i^2 = k$, this enforces k-sparsity. Now, we will consider the series of convex relaxations for Sparse PCA obtained by SoS algorithms. In particular, we will consider SoS degree $d^{\varepsilon}$ for a small constant $\varepsilon > 0$. Note that this corresponds to SoS algorithms of sub-exponential running time in the input size $d^{O(1)}$. Our main result states that for choices of m below a certain threshold, when the vectors $v_1, \ldots, v_m$ are sampled from the unspiked standard Gaussian $N(0, I_d)$, sub-exponential time SoS algorithms will have optimal value at least $m + m\lambda$. This is also the optimal value of the objective in the case when the vectors $v_1, \ldots, v_m$ are indeed sampled from the spiked Gaussian $N(0, I_d + \lambda uu^T)$ and $x = \sqrt{k}\, u$. Therefore, SoS is unable to distinguish $N(0, I_d)$ from $N(0, I_d + \lambda uu^T)$ and hence cannot solve sparse PCA. Formally,

Theorem 3.1. For all sufficiently small constants $\varepsilon > 0$, suppose $m \le \frac{d^{1-\varepsilon}}{\lambda^2}$, $m \le \frac{k^{2-\varepsilon}}{\lambda^2}$, and for some $A > 0$, $d^A \le k \le d^{1 - A\varepsilon}$ and $\sqrt{\lambda} \le \frac{\sqrt{k}}{d^{A\varepsilon}}$; then for an absolute constant $C > 0$, with high probability over a random $m \times d$ input matrix S with Gaussian entries, the sub-exponential time SoS algorithm of degree $d^{C\varepsilon}$ for sparse PCA has optimal value at least $m + m\lambda(1 - o(1))$.

In other words, sub-exponential time SoS cannot certify that, for a random dataset with Gaussian entries, there is no unit vector u with k nonzero entries and $m \cdot u^T \Sigma u \approx m + m\lambda$. The proof of Theorem 3.1 is deferred to the appendix. A few remarks are in order. 1. Note here that $m + m\lambda$ is approximately the value of the objective when the input vectors $v_1, \ldots, v_m$ are indeed sampled from the spiked model $N(0, I_d + \lambda uu^T)$ and $x = \sqrt{k}\, u$. Therefore, sub-exponential time SoS is unable to distinguish a completely random distribution from the spiked distribution and hence is unable to solve sparse PCA. 2. The constant A can be thought of as $\approx 0$; it appears for technical reasons, to ensure that we have sufficient decay in our bounds (see Remark K.8). In particular, most values of k fall under the conditions of the theorem. Informally, our main result says that when $m \ll \min\left(\frac{d}{\lambda^2}, \frac{k^2}{\lambda^2}\right)$, sub-exponential time SoS cannot recover the principal component u. This is the content of Theorem 1.1.

Prior work on algorithms Due to its widespread importance, a tremendous amount of work has been devoted to obtaining algorithms for sparse PCA, both theoretically and practically, [4, 83, 73, 33, 118, 20, 82, 34, 54, 23, 35, 29, 36] to cite a few. We now place our result in the context of known algorithms for Sparse PCA and explain why it offers tight tradeoffs between approximability and inapproximability. Between this work and prior works, we completely understand the parameter regimes where sparse PCA is easy or conjectured to be hard, up to polylogarithmic factors. In Fig. 1a and Fig. 1b, we assign the different parameter regimes into the following categories. - Diagonal thresholding: In this regime, Diagonal thresholding [61, 4] recovers the sparse vector.
Covariance thresholding [73, 33] and SoS algorithms [37] can also be used in this regime. The benefits of these alternate algorithms are that covariance thresholding has better dependence on logarithmic factors and SoS algorithms work in the presence of adversarial errors. - Vanilla PCA: Vanilla PCA (i.e. standard PCA) can recover the vector, i.e. we do not need to use the fact that the vector is sparse (see e.g. [21, 37]). - Spectral: An efficient spectral algorithm recovers the sparse vector (see e.g. [37]). - Can test but not recover: A simple spectral algorithm can solve the hypothesis testing version of Sparse PCA, but it is information theoretically impossible to recover the sparse vector [37, Appendix E]; see the sketch after this discussion. - Hard: A regime where it is conjectured to be hard for algorithms to recover the sparse principal component. We discuss this in more detail below. In Fig. 1a and Fig. 1b, the regimes corresponding to Diagonal thresholding, Vanilla PCA and Spectral are dark green, while the regimes corresponding to Spectral* (can test but not recover) and Hard are light green and red respectively.

Prior work on hardness Prior works have explored statistical query lower bounds [25], basic SDP lower bounds [73], reductions from conjectured hard problems [21, 20, 23, 44, 118], lower bounds via the low-degree conjecture [35, 37], lower bounds via statistical physics [35, 12], etc. We note that threshold behaviors similar to ours have been predicted by [37], but importantly, they assume a conjecture known as the low-degree likelihood conjecture. Similarly, many of these other lower bounds rely on various conjectures. To put this in context, the low-degree likelihood conjecture is a stronger assumption than P ≠ NP. In contrast, our results are unconditional and do not assume any conjectures. Compared to these other lower bounds, there have only been two prior works on lower bounds against SoS algorithms [73, 21, 82], which are only for degree 2 and degree 4 SoS. In particular, degree 2 SoS lower bounds have been studied in [73, 21], although they don't state it this way. And [82] obtained degree 4 SoS lower bounds, but they were very lossy, i.e. they hold only for a strict subset of the Hard regime $m \ll \frac{k^2}{\lambda^2}$ and $m \ll \frac{d}{\lambda^2}$. Moreover, the ideas used in these prior works do not generalize to higher degrees. The lack of other SoS lower bounds can be attributed to the difficulty of proving such lower bounds. In this paper, we vastly strengthen these known results and show almost-tight lower bounds for SoS algorithms of degree $d^{\varepsilon}$, which correspond to sub-exponential running time $d^{d^{O(\varepsilon)}}$. We note that SoS algorithms get stronger as the degree increases; therefore our results immediately imply these prior results, and even in the special case of degree 4 SoS, we improve the known lossy bounds. In summary, Theorem 3.1 subsumes all these earlier known results and is a vast improvement over prior known SoS lower bounds, which provides compelling evidence for the hardness of Sparse PCA in this parameter range.
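A toy sketch (ours) of the spectral test behind the "Can test but not recover" bullet above: the top eigenvalue of the sample covariance separates spiked from unspiked inputs. The parameters below are arbitrary and are not tuned to sit exactly in that regime.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, m, lam = 400, 20, 300, 3.0

u = np.zeros(d)
u[:k] = 1 / np.sqrt(k)

def top_eigenvalue(spiked):
    cov = np.eye(d) + (lam * np.outer(u, u) if spiked else 0.0)
    V = rng.multivariate_normal(np.zeros(d), cov, size=m)
    return np.linalg.eigvalsh(V.T @ V / m)[-1]

print(top_eigenvalue(False), top_eigenvalue(True))   # spiked value is noticeably larger
```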
The work [54] also states SoS lower bounds for Sparse PCA, but it differs from our work in three important aspects. First, they handle the related but qualitatively different Wigner model of Sparse PCA. Their techniques fail for the Wishart model of Sparse PCA, which is more natural in practice. We overcome this shortcoming and work with the Wishart model. We emphasize that their techniques are insufficient to handle this generality and overcoming this is far from being a mere technicality. On the other hand, our techniques can easily recover their results. Second, while they sketch a high level proof overview for their lower bound, they don't give a proof. On the other hand, our proofs are fully explicit. Finally, they assume the input distribution has entries in {±1}, that is, they work with the ±1 variant of PCA. On the other hand, we work with the more realistic setting where the distribution is N(0, 1). Again, our techniques can easily recover their results as well.

3.2 Tensor PCA

We will now state our main result for Tensor PCA. Let $k \ge 2$ be an integer. We are given an order k tensor A of the form $A = \lambda u^{\otimes k} + B$, where $u \in \mathbb{R}^n$ is a unit vector and $B \in \mathbb{R}^{[n]^k}$ has independent Gaussian entries, and we would like to recover the principal component u. Tensor PCA can be rephrased as the program

maximize $\langle A, x^{\otimes k} \rangle = \langle A, \underbrace{x \otimes \ldots \otimes x}_{k \text{ times}} \rangle$ such that $\sum_{i=1}^n x_i^2 = 1$,

where the program variables are $x_1, \ldots, x_n$. The principal component u will then just be the returned solution x. We will again consider sub-exponential time SoS algorithms, in particular degree $n^{\varepsilon}$ SoS, for this problem. This is sub-exponential time because the input size is $n^{O(1)}$. We then show that if the signal-to-noise ratio $\lambda$ is below a certain threshold, then sub-exponential time SoS on the unspiked input $A \sim N(0, I_{[n]^k})$ will have optimal value close to $\lambda$, which is also the optimal objective value in the spiked case when $A = \lambda u^{\otimes k} + B$, $B \sim N(0, I_{[n]^k})$ and $x = u$. In other words, SoS cannot distinguish the unspiked and spiked distributions and hence cannot recover the principal component u.

Theorem 3.2. Let $k \ge 2$ be an integer. For all sufficiently small $\varepsilon > 0$, if $\lambda \le n^{k/4 - \varepsilon}$, then for an absolute constant $C > 0$, with high probability over a random tensor $A \sim N(0, I_{[n]^k})$, the sub-exponential time SoS algorithm of degree $n^{C\varepsilon}$ for Tensor PCA has optimal value at least $\lambda(1 - o(1))$.

Therefore, sub-exponential time SoS cannot certify that, for a random tensor $A \sim N(0, I_{[n]^k})$, there is no unit vector u such that $\langle A, \underbrace{u \otimes \ldots \otimes u}_{k \text{ times}} \rangle \approx \lambda$. The proof of Theorem 3.2 is deferred to the appendix. We again remark that when the tensor A is actually sampled from the spiked model $A = \lambda u^{\otimes k} + B$, the optimal objective value is approximately $\lambda$ when $x = u$. Therefore, this shows that sub-exponential time SoS algorithms cannot solve Tensor PCA. Informally, the theorem says that when the signal-to-noise ratio $\lambda \ll n^{k/4}$, SoS algorithms cannot solve Tensor PCA, as stated in Theorem 1.2.

Prior work Algorithms for Tensor PCA have been studied in the works [11, 22, 52, 53, 110, 125, 120, 67, 8]. It was shown in [22] that the degree q SoS algorithm certifies an upper bound of $\frac{2^{O(k)} (n \cdot \mathrm{polylog}(n))^{k/4}}{q^{k/4 - 1/2}}$ for the Tensor PCA problem. When $q = n^{\varepsilon}$, this gives an upper bound of $n^{k/4 - O(\varepsilon)}$. Therefore, our result is tight, giving insight into the computational threshold for Tensor PCA. Lower bounds for Tensor PCA have been studied in various forms, including statistical query lower bounds [25, 39], reductions from conjectured hard problems [123, 24], lower bounds from the low-degree conjecture [54, 55, 75], evidence based on the landscape behavior [10, 88], etc. Compared to a lot of these works, which rely on various conjectures, we remark that our lower bounds are unconditional and do not rely on any conjectures.
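A concrete illustration (ours, with arbitrary parameters) of the order-3 spiked tensor model just defined and its objective value at the planted solution:

```python
import numpy as np

rng = np.random.default_rng(2)
n, lam = 30, 8.0

u = rng.normal(size=n)
u /= np.linalg.norm(u)
A = lam * np.einsum('i,j,k->ijk', u, u, u) + rng.normal(size=(n, n, n))

def objective(x):
    # <A, x (tensor) x (tensor) x>
    return np.einsum('ijk,i,j,k->', A, x, x, x)

print(objective(u))   # approximately lam in the spiked model (the noise term has std 1)
```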
In particular, they state their result without proof for the $\pm 1$ variant of Tensor PCA, whereas we work with the more realistic setting where the distribution is $\mathcal{N}(0, 1)$. We remark that their techniques do not recover our results but, on the other hand, our techniques can recover theirs.

4 Related work
As stated in their respective sections, there have been some prior works on (degree at most 4) SoS lower bounds for Sparse and Tensor PCA, and various other lower bounds that have mostly relied on hardness conjectures, some of which are stronger than $P \neq NP$. The lack of results on higher degree SoS, compared to other models, can be attributed to the difficulty of proving such lower bounds, which we undertake in this work. Sum of Squares lower bounds have been obtained for various problems of interest, such as the Sherrington-Kirkpatrick Hamiltonian [45, 74, 63, 104], Maximum Cut [87], Maximum Independent Set [64, 104], Constraint Satisfaction Problems [71], Densest k-Subgraph [65], etc. The techniques used in this work are closely related to the work [19], which proved Sum of Squares lower bounds for a problem known as Planted Clique. Some of the ideas and techniques we employ in this work, namely pseudo-calibration and graph matrices, have also appeared in other works [87, 103, 45, 1, 64, 105, 63, 65]. It's plausible that our generalized techniques could be applied to other high dimensional statistical problems, which we leave for future work.

5 Conclusion
In this work, we show sub-exponential time lower bounds for the powerful Sum-of-Squares algorithms for Sparse PCA and Tensor PCA. With the ever-growing research into better algorithms for Sparse PCA [4, 83, 73, 33, 118, 20, 82, 34, 54, 23, 35, 36] and Tensor PCA [11, 22, 52, 53, 110, 125, 120, 67, 8], combined with the recent breakthroughs of Sum of Squares algorithms in statistics [16, 80, 51, 70, 43, 72, 14, 15, 111], it is an important goal to understand whether Sum of Squares algorithms can beat state-of-the-art algorithms for these problems. In this work, we answer this negatively and show that even sub-exponential time SoS algorithms cannot do much better than relatively simple algorithms. In particular, we settle open problems raised by [82, 54, 55, 52, 22]. Our work does not handle exponential time degree $\Omega(n)$ SoS, so analyzing these algorithms is a potential future direction. Another important direction is to understand the limits of powerful algorithms such as SoS for other statistical problems of importance, such as mixture modeling or clustering. For algorithm designers, our results illustrate the intrinsic difficulty of PCA problems and shed light on information-computation gaps exhibited by PCA. For practitioners, this result provides strong evidence that existing algorithms work relatively well.
1. What is the focus and contribution of the paper on lower bounds for sparse PCA and tensor PCA? 2. What are the strengths of the proposed approach, particularly in its proof techniques? 3. What are the weaknesses of the paper regarding its notation, organization, and accessibility? 4. Do you have any questions or suggestions regarding the paper's content, such as the choice of parameters or the inclusion of technical details in the appendix? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper develops high-degree lower bounds for the sparse PCA and tensor PCA problems. For both of these problems, there is a well-known discrepancy between the statistical complexity (i.e. #samples or snr required to detect the hidden signal by any means feasible) and the computational complexity (i.e. #samples or snr required to approximately recover the hidden signal by computationally efficient, i.e. poly time, algorithms). (I phrase the above in terms of detection to align with the paper; analogous definitions using recovery may be made, with the obvious ordering that detection is easier than recovery.) Along with the planted clique problem, sparse and tensor PCA are arguably some of the most well-studied examples of statistical problems exhibiting this computational-statistical phenomenon. While one way of developing hardness arguments is to construct problem reductions, this paper contributes to the line of unconditional hardness results by analyzing the sum-of-squares hierarchy of convex programs for sparse/tensor PCA. The first two steps (also called degrees) in this hierarchy for sparse PCA were shown ineffective by Krauthgamer et al and Wigderson and Ma respectively. This paper develops bounds for the next dimension^{a small constant} many degrees.
Strengths And Weaknesses Strengths: I think the paper is an interesting and solid contribution to the hardness results on sparse PCA and tensor PCA. Both problems are (sparse PCA somewhat more than tensor PCA) well-motivated and used in applications. The proof techniques are (to my mind) also potentially interesting to the theory-inclined audience. Proving sum-of-squares lower bounds tends to be a technically challenging, intricate and involved task, and the authors do a good job in the beginning of the appendix explaining some of the high-level ideas and challenges.
Weaknesses: 1. At the outset, and given 2, Neurips might not be the best fit for this submission. I would expect COLT, theoretical CS conferences such as STOC/FOCS, and of course a number of journals, on the other hand, to be quite well-suited. 2. The interesting and generalizable part of the paper seems to be some of the proof techniques, and these are relegated to the appendix. I did not have time to read the full appendix (65 pages) but the first 15-20 pages were useful. 3. Separately from 1, the paper is made inaccessible by a somewhat loose use of notation that is widespread. As a small example, after using 'd' for dimension, the authors also use it for the 'degree' of the SOS program in the beginning of Sec. 2. For accessibility to a stats/ml audience, this paper would require significant polishing. 4. Related to 3, I trust the results despite point 2, but some of the main results in the text (Theorems 3.1 and 3.2) likely should be rephrased.
Questions The following questions are roughly "chronological" in the paper. L101: what is n for sparse pca? This should be m perhaps? L251: Technically this is a lower bound (by plugging in u in the objective). It is not obvious (though it probably is) to me that this is an upper bound. As I understand it, Thm 3.1 proves a lower bound on the value of the SDP applied to pure noise, i.e. the input covariance matrix is the identity and has no dependence on lambda. If so, why is lambda involved in the value? This confused me for a while before I realized that this is presumably because your solution (or certificate) is via pseudo-calibration and explicitly involves an alternative "planted" distribution depending on lambda.
But then why not take a sup over lambda in the feasible set? The same question holds for Thm 3.2. Some suggestions regarding notation/organization: 1. For sparse PCA it is best to keep the parameters (n, k, d, lambda) as (sample size, sparsity, dimension, snr). Similarly for tensor pca, it is probably better to keep something like (d, r, lambda) for (dimension, tensor order/rank, snr) respectively and avoid the colliding use of "k". The choice in the paper appears to be first choosing dimension = n for tensor pca and then avoiding collision by choosing the somewhat confusing 'm' for the sparse pca sample size. In terms of readability, this is significantly sub-optimal. 2. I'd relegate the "general definition" of sos programs to the appendix, and just write the specific programs for sparse/tensor pca. Perhaps just refer to a textbook for the general definition; it anyway collides rather unfortunately (d, n etc.) with the notational choices in 1. Alternatively, use different fonts to make things obvious. 3. There are multiple points where things were also quite confusing to read in the appendix, which makes it difficult to understand/penetrate the proof techniques. For example, in L.1044 there are V(Id_U), U_{Id_U}, V_{Id_U} and E(Id_U), which are (at least in the same font) very confusing at first go. After a while I realized the first and the last were vertex/edge sets for the whole graph and the middle two were left/right parts respectively. The notation did not help though. Limitations NA
NIPS
Title Sub-exponential time Sum-of-Squares lower bounds for Principal Components Analysis (⇤Equal contribution. †A.P. was supported in part by NSF grant CCF-2008920 and G.R. was supported in part by NSF grants CCF-1816372 and CCF-2008920.)
Abstract Principal Components Analysis (PCA) is a dimension-reduction technique widely used in machine learning and statistics. However, due to the dependence of the principal components on all the dimensions, the components are notoriously hard to interpret. Therefore, a variant known as sparse PCA is often preferred. Sparse PCA learns principal components of the data but enforces that such components must be sparse. This has applications in diverse fields such as computational biology and image processing. To learn sparse principal components, it's well known that standard PCA will not work, especially in high dimensions, and therefore algorithms for sparse PCA are often studied as a separate endeavor. Various algorithms have been proposed for Sparse PCA over the years, but given how fundamental it is for applications in science, the limits of efficient algorithms are only partially understood. In this work, we study the limits of the powerful Sum of Squares (SoS) family of algorithms for Sparse PCA. SoS algorithms have recently revolutionized robust statistics, leading to breakthrough algorithms for long-standing open problems in machine learning, such as optimally learning mixtures of Gaussians, robust clustering, robust regression, etc. Moreover, SoS is believed to be the optimal robust algorithm for many statistical problems. Therefore, for sparse PCA, it's plausible that it can beat simpler algorithms such as diagonal thresholding that have been traditionally used. In this work, we show that this is not the case, by exhibiting strong tradeoffs between the number of samples required, the sparsity and the ambient dimension, for which SoS algorithms, even if allowed sub-exponential time, will fail to optimally recover the component. Our results are complemented by known algorithms in the literature, thereby painting an almost complete picture of the behavior of efficient algorithms for sparse PCA. Since SoS algorithms encapsulate many algorithmic techniques such as spectral or statistical query algorithms, this solidifies the message that known algorithms are optimal for sparse PCA. Moreover, our techniques are strong enough to obtain similar tradeoffs for Tensor PCA, another important higher order variant of PCA with applications in topic modeling, video processing, etc.
1 Introduction
Principal components analysis (PCA) [62] is a popular data processing and dimension reduction routine that is widely used. It has numerous applications in Machine Learning, Statistics, Engineering, Biology, etc. Given a dataset, PCA projects the data to a lower dimensional space spanned by the principal components. The intuition is that PCA sheds lower order information such as noise but importantly preserves much of the intrinsic information present in the data that is needed for downstream tasks. However, despite great optimality properties, PCA has its drawbacks. Firstly, because the principal components are linear combinations of all the original variables, they are notoriously hard to interpret [84]. Secondly, it's well known that PCA does not yield good estimators in high dimensional settings [13, 97, 61]. To address these issues, a variant of PCA known as Sparse PCA is often used.
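As a quick numerical illustration of the high-dimensional failure of standard PCA mentioned above (ours, with arbitrary illustrative parameters): with few samples and a weak spike, the top sample eigenvector is nearly uncorrelated with the planted sparse direction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, m, lam = 2000, 20, 500, 1.0   # high dimension, few samples, weak spike

u = np.zeros(d)
u[:k] = 1 / np.sqrt(k)              # a planted k-sparse unit direction

# Each row is distributed as N(0, I_d + lam * u u^T)
V = rng.normal(size=(m, d)) + np.sqrt(lam) * rng.normal(size=(m, 1)) * u

Sigma = V.T @ V / m                 # sample covariance
eigvals, eigvecs = np.linalg.eigh(Sigma)
print("overlap |<top eigenvector, u>| =", abs(eigvecs[:, -1] @ u))  # small here
```

Here $m \ll d/\lambda^2$, so vanilla PCA is below its recovery threshold; exploiting sparsity is what rescues the problem, which is the point of the sparse variant.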
Sparse PCA searches for principal components of the data with the added constraint of sparsity. Concretely, suppose we are given data $v_1, v_2, \ldots, v_m \in \mathbb{R}^d$. In Sparse PCA, we want to find the top principal component of the data under the extra constraint that it has sparsity at most $k$. That is, we want to find a vector $v \in \mathbb{R}^d$ that maximizes $\sum_{i=1}^{m} \langle v, v_i \rangle^2$ subject to $\|v\|_0 \le k$. Sparse PCA has enjoyed applications in a diverse range of fields ranging from medicine, computational biology, economics, image and signal processing and finance to, of course, machine learning and statistics (e.g. [117, 89, 85, 115, 31, 2]). It's worth noting that in some of these applications, other algorithms are also often used to learn statistical models with sparse structure, such as greedy algorithms (e.g. [60, 81, 59, 124]) and score-based algorithms (e.g. [28, 90, 107]), but in this work, we focus on the widely used sparse PCA technique. Sparse PCA comes with the important benefit that the learnt components are easier to interpret. A notable example of this is recovering topics from documents [32, 95]. Moreover, this has important benefits for algorithmic fairness in machine learning.
A large volume of research has been devoted to studying Sparse PCA and its variants. Algorithms have been proposed and studied by several works, e.g. [4, 83, 73, 33, 118, 20, 82, 34, 54, 23, 35, 29, 36]. For example, simple variants of PCA such as thresholding on top of standard PCA [61, 29] work well in certain parameter settings. This leads to the natural question of whether more sophisticated algorithms can do better, either in these settings or in other parameter settings. On the other hand, there have been works from the inapproximability perspective as well (e.g. [20, 54, 23, 73, 35, 118]; see Section 3.1 for a more detailed overview). In particular, many of these inapproximability results have relied on various other conjectures, due to the difficulty of proving unconditional lower bounds. Despite these prior works, exactly understanding the limits of efficient algorithms for this problem is still an active research area. This is natural considering the importance of sparse PCA and how fundamental it is to a multitude of applications.
In this work, we focus on the powerful Sum-of-Squares (SoS) family of algorithms [113, 92, 96, 48] based on semidefinite programming relaxations. SoS algorithms have recently revolutionized robust machine learning, a branch of machine learning where the underlying dataset is noisy, with the noise being either random or adversarial. Robust machine learning has gotten a lot of attention in recent years because of its wide variety of use cases in machine learning and other downstream applications, including safety-critical ones like autonomous driving. For example, there has been a high volume of practical works in computer vision [114, 47, 121, 50, 112, 122, 42, 76] and speech recognition [57, 119, 106, 108, 78, 3, 91, 94]. In this important field, SoS has recently led to breakthrough algorithms for long-standing open problems [16, 80, 51, 70, 43, 72, 14, 15, 111]. Highlights include
- Robustly learning mixtures of high dimensional Gaussians. This is an extremely important problem that has been subjected to intense scrutiny, with a long line of work culminating in [16, 80].
- Efficient algorithms for the fundamental problems of regression [70], moment estimation [72], clustering [14] and subspace recovery [15] in the presence of outliers.
This setting of robust machine learning is more akin to real-life data, which almost always contain outliers or corrupted entries. Moreover, SoS algorithms are believed to be the optimal robust algorithms for many statistical problems. In a different direction, SoS algorithms have led to the design of fast algorithms for problems such as tensor decomposition [53, 111]. Put more concretely, SoS algorithms, also known as the SoS hierarchy or the Lasserre hierarchy, offer a series of convex semidefinite programming (SDP) based relaxations to optimization problems. Due to its ability to capture a wide variety of algorithmic techniques, the hierarchy has become a fundamental tool in algorithms and optimization. It was and still remains an extremely versatile tool for combinatorial optimization [46, 9, 49, 102], but recently it has been extensively used in Statistics and Machine Learning as well (apart from the references above, see also [17, 18, 52, 100]). Therefore, we ask (a question also raised and posed as an open problem in the works [82, 54, 55]):
Can Sum-of-Squares algorithms beat known algorithms for Sparse PCA?
In this work, we show that SoS algorithms cannot beat known spectral algorithms, even if we allow sub-exponential time! Therefore, this suggests that currently used algorithms such as thresholding or other spectral algorithms are in a sense optimal for this problem. To prove our results, we will consider random instances of Sparse PCA and show that they are naturally hard for SoS. In particular, we focus on the Wishart random model of Sparse PCA. This model is a more natural modeling assumption than other random models that have been studied before, such as the Wigner random model. Note importantly that our model assumptions only strengthen our results because we are proving impossibility results: if SoS algorithms do not work for this restricted version of sparse PCA, then they will not work for more general models, e.g. with general covariance or multiple spikes. We now describe the model.
The Wishart model of Sparse PCA, also known as the Spiked Covariance model, was originally proposed by [61]. In this model, we observe $m$ vectors $v_1, \ldots, v_m \in \mathbb{R}^d$ from the distribution $\mathcal{N}(0, I_d + \lambda uu^T)$, where $u$ is a $k$-sparse unit vector, that is, $\|u\|_0 \le k$, and we would like to recover the principal component $u$. Here, the sparsity of a vector is its number of nonzero entries, and $\lambda$ is known as the signal-to-noise ratio. As the signal-to-noise ratio gets lower, it becomes harder and maybe even impossible to recover $u$, since the signature left by $u$ in the data becomes fainter. But it's possible that this may be mitigated if the number of samples $m$ grows. Therefore, there is a tradeoff between $m$, $d$, $k$ and $\lambda$ at play here. Algorithms proposed earlier have been able to recover $u$ in various regimes. For example, if the number of samples is really large, namely $m \gtrsim \max\left(\frac{d}{\lambda}, \frac{d}{\lambda^2}\right)$, then standard PCA will work. But if this is not the case, we may still be able to recover $u$ by assuming that the sparsity is not too large compared to the number of samples, namely $m \gtrsim \frac{k^2}{\lambda^2}$. To do this, we use a variant of standard PCA known as diagonal thresholding. Similar results have been obtained for various regimes, while some regimes have resisted algorithmic attack. Our results here complete the picture by showing that in the regimes that have so far resisted attack by efficient algorithms, the powerful Sum of Squares algorithms also cannot recover the principal component.
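Diagonal thresholding, mentioned above, is simple enough to state in a few lines. The following is a hedged sketch (ours, with arbitrary illustrative parameters), run in the regime $m \gtrsim k^2/\lambda^2$ where the method is known to succeed:

```python
import numpy as np

rng = np.random.default_rng(2)
d, k, m, lam = 500, 10, 400, 4.0
u = np.zeros(d)
u[:k] = 1 / np.sqrt(k)                         # planted k-sparse unit spike
V = rng.multivariate_normal(np.zeros(d), np.eye(d) + lam * np.outer(u, u), size=m)

Sigma = V.T @ V / m                            # sample covariance
S = np.argsort(np.diag(Sigma))[-k:]            # k coordinates of largest variance
w, Q = np.linalg.eigh(Sigma[np.ix_(S, S)])     # PCA restricted to those coordinates
u_hat = np.zeros(d)
u_hat[S] = Q[:, -1]                            # top eigenvector of the k x k block
print("overlap |<u_hat, u>| =", abs(u_hat @ u))
```

The diagonal carries the signal because $\Sigma_{ii} \approx 1 + \lambda u_i^2$, so support coordinates stick out once $m$ is large enough for the diagonal to concentrate.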
We now state our theorem informally, with the formal statement in Theorem 3.1.
Theorem 1.1. For the Wishart model of Sparse PCA, sub-exponential time SoS algorithms fail to recover the principal component when the number of samples $m \ll \min\left(\frac{d}{\lambda^2}, \frac{k^2}{\lambda^2}\right)$.
In particular, this theorem resolves an open problem posed by [82] and [54, 55]. In almost all other regimes, algorithms to recover the principal component $u$ exist. We give a summary of such algorithms in Section 3, captured succinctly in Fig. 1. We say almost all other regimes because there is one interesting regime, namely $\frac{d}{\lambda^2} \le m \le \frac{\min(d, k)}{\lambda}$, marked light green in Fig. 1, where we can show that information theoretically, we cannot recover $u$ but it's possible to do hypothesis testing of Sparse PCA. That is, in this regime, we can distinguish purely random unspiked samples from the spiked samples. However, we will not be able to recover the principal component even if we use an exponential time brute-force algorithm.
We use our techniques to also obtain strong results for the related Tensor Principal Components Analysis (Tensor PCA) problem. Tensor PCA, originally introduced by [110], is a generalization of PCA to higher order tensors. Formally, given an order $k$ tensor of the form $\lambda u^{\otimes k} + B$, where $u \in \mathbb{R}^n$ is a unit vector and $B \in \mathbb{R}^{[n]^k}$ has independent Gaussian entries, we would like to recover the principal component $u$. Here, $\lambda$ is known as the signal-to-noise ratio. Tensor PCA is a remarkably useful statistical and computational technique to exploit higher order moments of the data. It was originally envisaged to be applied in latent variable modeling and indeed, it has found multiple applications in this context (e.g. [5, 68, 69, 6]). Here, a tensor containing statistics of the input data is computed and then decomposed in order to recover the latent variables. Because of the technique's versatility, it has gathered a lot of attention in machine learning, with applications in topic modeling, video processing, collaborative filtering, community detection, etc. (see e.g. [56, 7, 110, 5, 6, 38, 79] and references therein). For Tensor PCA, similar to sparse PCA, there has been wide interest in the community in studying algorithms (e.g. [11, 22, 52, 53, 110, 125, 120, 67, 8]) as well as approximability and hardness (e.g. [88, 75, 24, 54]; see Section 3.2 for a more detailed overview). It's worth noting that many of these hardness results are conditional, that is, they rely on various conjectures, sometimes stronger than $P \neq NP$. Moreover, there has been widespread interest from the statistics community as well, e.g. [58, 98, 77, 26, 27], due to fascinating connections to random matrix theory and statistical physics. In this work, we study the performance of sub-exponential time Sum of Squares algorithms for Tensor PCA. Our main result is stated informally below and formally in Theorem 3.2.
Theorem 1.2. For Tensor PCA, sub-exponential time SoS algorithms fail to recover the principal component when the signal-to-noise ratio $\lambda \ll n^{k/4}$.
In particular, this resolves an open question posed by the works [52, 22, 54, 55]. Therefore, our main contributions can be summarized as follows:
1. Despite the huge breakthroughs achieved by Sum-of-Squares algorithms in recent works on high dimensional statistics, we show barriers to it for the fundamental problems of Sparse PCA and Tensor PCA.
2. We achieve optimal tradeoffs compared to known algorithms, thereby painting a full picture of the computational thresholds of tractable algorithms.
This suggests that existing algorithms are preferable for PCA and its variants.
3. Prior lower bounds for these problems have either focused on weaker classes of algorithms or were obtained assuming other hardness conjectures, whereas we prove high degree sub-exponential time SoS lower bounds without relying on any conjectures.
Acknowledgements and Bibliographic note We thank Sam Hopkins, Pravesh Kothari, Prasad Raghavendra, Tselil Schramm, David Steurer and Madhur Tulsiani for helpful discussions. We also thank Sam Hopkins and Pravesh Kothari for assistance in drafting the informal description of the machinery (Section C). Parts of this work have also appeared in [99, 104].
2 Sum-of-Squares algorithms
The Sum of Squares (SoS) hierarchy is a powerful class of algorithms that utilizes the power of semidefinite programming for optimization problems and has achieved breakthrough algorithms for many problems in machine learning and statistics. In this section, we briefly describe the Sum of Squares hierarchy of algorithms. For a more detailed treatment with an eye towards applications to machine learning and statistics, see the ICM survey [102] or the monograph [43].
Given an optimization problem specified by a program with polynomial constraints, the SoS hierarchy gives a family of convex relaxations parameterized by an integer known as its degree. As the degree gets higher, the running time to solve the convex relaxation increases, but on the other hand the relaxation gets stronger and hence serves as a better algorithm. This offers a smooth tradeoff between running time and the quality of approximation. In general, we can solve degree-$D_{sos}$ SoS in $n^{O(D_{sos})}$ time†. Therefore, constant degree SoS corresponds to polynomial time algorithms, which in general translates to efficient algorithms. In this work, we focus on and show limitations of degree $n^{\varepsilon}$ SoS, which corresponds to sub-exponential running time.
Suppose we are given multivariate polynomials $p, g_1, \ldots, g_m$ on $n$ variables $x_1, \ldots, x_n$ (denoted collectively by $x$) taking real values. Consider the task: maximize $p(x)$ such that $g_1(x) = 0, \ldots, g_m(x) = 0$. In general, we could also allow inequality constraints, e.g. $g_i(x) \ge 0$. In this work, we only have equality constraints, but much of the theory generalizes when we have inequality constraints instead. We now formally describe the Sum of Squares hierarchy of algorithms, via the so-called pseudo-expectation operators.
Definition 2.1 (Pseudo-expectation values). Given multivariate polynomial constraints $g_1 = 0, \ldots, g_m = 0$ on $n$ variables $x_1, \ldots, x_n$, degree $D_{sos}$ pseudo-expectation values are a linear map $\tilde{E}$ from polynomials in $x_1, \ldots, x_n$ of degree at most $D_{sos}$ to $\mathbb{R}$ satisfying the following conditions:
1. $\tilde{E}[1] = 1$,
2. $\tilde{E}[f \cdot g_i] = 0$ for every $i \in [m]$ and polynomial $f$ such that $\deg(f \cdot g_i) \le D_{sos}$,
3. $\tilde{E}[f^2] \ge 0$ for every polynomial $f$ such that $\deg(f^2) \le D_{sos}$.
Any linear map $\tilde{E}$ satisfying the above properties is known as a degree $D_{sos}$ pseudo-expectation operator satisfying the constraints $g_1 = 0, \ldots, g_m = 0$. The intuition behind pseudo-expectation values is that these conditions would be satisfied by any actual expectation operator that takes expected values over a distribution of true optimal solutions, so optimizing over pseudo-expectation values gives a relaxation of the problem.
†In pathological cases, there may be issues with bit complexity, but these will not appear in our settings.
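For degree 2, Definition 2.1 is easy to make tangible: a degree-2 pseudo-expectation is determined by the candidate moments $\tilde{E}[x_i]$ and $\tilde{E}[x_i x_j]$, and item 3 is exactly positive semidefiniteness of the moment matrix indexed by $\{1, x_1, \ldots, x_n\}$. A toy check (ours, using the unit-sphere constraint $\sum_i x_i^2 = 1$):

```python
import numpy as np

n = 3
E_x = np.zeros(n)          # candidate pseudo-moments E~[x_i]
E_xx = np.eye(n) / n       # candidate E~[x_i x_j]; trace 1 matches sum_i x_i^2 = 1

# Moment matrix M[a, b] = E~[monomial_a * monomial_b] over {1, x_1, ..., x_n}
M = np.zeros((n + 1, n + 1))
M[0, 0] = 1.0              # E~[1] = 1 (item 1)
M[0, 1:] = E_x
M[1:, 0] = E_x
M[1:, 1:] = E_xx

# Item 3 (E~[f^2] >= 0 for all degree-1 f) holds iff M is PSD
print("min eigenvalue of M:", np.linalg.eigvalsh(M).min())  # nonnegative here
```

These particular moments come from the uniform distribution over the sphere, so they are an actual expectation; pseudo-expectations are interesting precisely when no such distribution exists yet the moment matrix is still PSD.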
For details, see [93, 101].
Definition 2.2 (Degree $D_{sos}$ SoS). The degree $D_{sos}$ SoS relaxation for the polynomial optimization problem "maximize $p(x)$ such that $g_1(x) = 0, \ldots, g_m(x) = 0$" is the program that maximizes $\tilde{E}[p(x)]$ over all degree $D_{sos}$ pseudo-expectation operators $\tilde{E}$ satisfying the constraints $g_1 = 0, \ldots, g_m = 0$.
The main advantage is that the SoS relaxation can be efficiently solved via convex programming! In particular, Item 3 in Definition 2.1 is equivalent to a certain moment matrix being positive semidefinite; therefore the degree $D_{sos}$ SoS relaxation can be carried out via semidefinite programming [116]. This meta-algorithm is known as a degree-$D_{sos}$ SoS algorithm and runs in $n^{O(D_{sos})}$ time†. Therefore, constant degree SoS can be solved in polynomial time. In the next section, we apply SoS to PCA and formally state our results.
2.1 Related algorithmic techniques
Statistical Query algorithms Statistical query (SQ) algorithms are another popular restricted class of algorithms, introduced by [66]. In this model, for a given data distribution, we are allowed to query expected values of functions. Concretely, for a distribution $D$ on $\mathbb{R}^n$, we have access to it via an oracle that, given as query a function $f : \mathbb{R}^n \to [-1, 1]$, returns $\mathbb{E}_{x \sim D} f(x)$ up to some additive adversarial error. SQ algorithms capture a broad class of algorithms in statistics and machine learning and have been used to study information-computation tradeoffs [109, 40, 30]. There has also been significant work trying to understand the limits of SQ algorithms (e.g. [40, 41, 34]). Formally, SQ algorithms and SoS are in general incomparable. However, the recent work [25] showed that under mild conditions, low-degree polynomial algorithms (defined next) and statistical query algorithms have equivalent power. Under these conditions, it's moreover easy to see that SoS is a more powerful algorithm than low degree algorithms, and hence SoS algorithms are stronger than statistical query algorithms. Therefore, SoS lower bounds as shown in this work give strictly stronger evidence of hardness than SQ lower bounds.
Low degree polynomial algorithms In statistics, a hypothesis testing problem is a problem where the input is sampled from one of two distributions and we would like to identify which distribution it was sampled from. In this setting, a low degree polynomial algorithm compares the expectation of a low-degree polynomial in order to distinguish the two distributions. This method has been used to conjecture hardness thresholds for various problems [54, 55, 75]. However, under mild conditions, the SoS hierarchy of algorithms is more powerful than low degree polynomial algorithms [54] and therefore potentially yields better algorithms. Therefore, the SoS lower bounds shown in this work are stronger than low degree polynomial lower bounds as well.
3 Lower bounds for Sparse Principal Components Analysis
In this section, we will state our main results for Sparse PCA and Tensor PCA.
3.1 Sparse PCA
We recall the setting of the Wishart model of Sparse PCA: we are given $v_1, \ldots, v_m \in \mathbb{R}^d$ sampled from $\mathcal{N}(0, I_d + \lambda uu^T)$, where $u$ is a $k$-sparse unit vector, and we wish to recover $u$. We will further assume that the entries of $u$ are in $\{-\frac{1}{\sqrt{k}}, 0, \frac{1}{\sqrt{k}}\}$, chosen such that the sparsity is $k$ (and hence the norm is 1). Note importantly that this assumption only strengthens our result: if SoS cannot solve the problem even for this specific $u$, it cannot do any better for the general problem with arbitrary $u$. Let the vectors from the given dataset be $v_1, \ldots, v_m$,
and let them form the rows of a matrix $S \in \mathbb{R}^{m \times d}$. Let $\Sigma = \frac{1}{m}\sum_{i=1}^{m} v_i v_i^T$ be the sample covariance matrix. Then the standard PCA objective is to maximize $x^T \Sigma x$ and recover $x = \sqrt{k}\,u$. Therefore, the sparse PCA problem can be rephrased as: maximize $\frac{m}{k} \cdot x^T \Sigma x = \frac{1}{k}\sum_{i=1}^{m} \langle x, v_i \rangle^2$ such that $x_i^3 = x_i$ for all $i \le d$ and $\sum_{i=1}^{d} x_i^2 = k$, where the program variables are $x_1, \ldots, x_d$. The constraint $x_i^3 = x_i$ enforces that the entries of $x$ are in $\{-1, 0, 1\}$, and together with this, the condition $\sum_{i=1}^{d} x_i^2 = k$ enforces $k$-sparsity. Now, we will consider the series of convex relaxations for Sparse PCA obtained by SoS algorithms. In particular, we will consider SoS degree $d^{\varepsilon}$ for a small constant $\varepsilon > 0$; note that this corresponds to SoS algorithms of sub-exponential running time in the input size $d^{O(1)}$ (a toy semidefinite relaxation in this spirit is sketched at the end of this subsection). Our main result states that for choices of $m$ below a certain threshold, when the vectors $v_1, \ldots, v_m$ are sampled from the unspiked standard Gaussian $\mathcal{N}(0, I_d)$, sub-exponential time SoS algorithms will have optimal value at least $m + m\lambda$. This is also the optimal value of the objective when the vectors $v_1, \ldots, v_m$ are indeed sampled from the spiked Gaussian $\mathcal{N}(0, I_d + \lambda uu^T)$ and $x = \sqrt{k}\,u$. Therefore, SoS is unable to distinguish $\mathcal{N}(0, I_d)$ from $\mathcal{N}(0, I_d + \lambda uu^T)$ and hence cannot solve sparse PCA. Formally,
Theorem 3.1. For all sufficiently small constants $\varepsilon > 0$, suppose $m \le \frac{d^{1-\varepsilon}}{\lambda^2}$, $m \le \frac{k^{2-\varepsilon}}{\lambda^2}$, and for some constant $A > 0$, $d^{A} \le k \le d^{1-A\varepsilon}$ and $\sqrt{\lambda} \le \frac{\sqrt{k}}{d^{A\varepsilon}}$. Then, for an absolute constant $C > 0$, with high probability over a random $m \times d$ input matrix $S$ with Gaussian entries, the sub-exponential time SoS algorithm of degree $d^{C\varepsilon}$ for sparse PCA has optimal value at least $m + m\lambda(1 - o(1))$.
In other words, sub-exponential time SoS cannot certify that for a random dataset with Gaussian entries, there is no unit vector $u$ with $k$ nonzero entries and $m \cdot u^T \Sigma u \approx m + m\lambda$. The proof of Theorem 3.1 is deferred to the appendix. A few remarks are in order.
1. Note here that $m + m\lambda$ is approximately the value of the objective when the input vectors $v_1, \ldots, v_m$ are indeed sampled from the spiked model $\mathcal{N}(0, I_d + \lambda uu^T)$ and $x = \sqrt{k}\,u$. Therefore, sub-exponential time SoS is unable to distinguish a completely random distribution from the spiked distribution and hence is unable to solve sparse PCA.
2. The constant $A$ can be thought of as $\approx 0$; it appears for technical reasons, to ensure that we have sufficient decay in our bounds (see Remark K.8). In particular, most values of $k$ fall under the conditions of the theorem.
Informally, our main result says that when $m \ll \min\left(\frac{d}{\lambda^2}, \frac{k^2}{\lambda^2}\right)$, sub-exponential time SoS cannot recover the principal component $u$. This is the content of Theorem 1.1.
Prior work on algorithms Due to its widespread importance, a tremendous amount of work has been devoted to obtaining algorithms for sparse PCA, both theoretically and practically, e.g. [4, 83, 73, 33, 118, 20, 82, 34, 54, 23, 35, 29, 36] to cite a few. We now place our result in the context of known algorithms for Sparse PCA and explain why it offers tight tradeoffs between approximability and inapproximability. Between this work and prior works, we completely understand, up to polylogarithmic factors, the parameter regimes where sparse PCA is easy or conjectured to be hard. In Fig. 1a and Fig. 1b, we assign the different parameter regimes to the following categories.
- Diagonal thresholding: In this regime, Diagonal thresholding [61, 4] recovers the sparse vector.
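To ground the discussion, here is the toy relaxation promised above: a hedged sketch of the classical basic SDP relaxation of sparse PCA, in the spirit of the degree-2 SoS lower bounds cited earlier. This is an illustration, not the exact degree-$d^{\varepsilon}$ relaxation analyzed in this paper; it uses cvxpy, and the $\ell_1$ constraint is the usual convex surrogate for $k$-sparsity.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
d, k, m = 50, 5, 40
V = rng.normal(size=(m, d))              # unspiked data: pure Gaussian noise
Sigma = V.T @ V / m

X = cp.Variable((d, d), PSD=True)        # relaxation of x x^T for a unit vector x
objective = cp.Maximize(cp.trace(Sigma @ X))
constraints = [
    cp.trace(X) == 1,                    # relaxes ||x||_2^2 = 1
    cp.sum(cp.abs(X)) <= k,              # l1 surrogate for ||x||_0 <= k
]
prob = cp.Problem(objective, constraints)
prob.solve()
print("SDP value on pure noise:", prob.value)
```

A lower bound in the style of this paper amounts to showing that, on pure noise, relaxations like this (and their much stronger high-degree SoS analogues) still report a large value, so they cannot certify the absence of a spike.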
1. What is the main contribution of the paper regarding understanding information-computation tradeoffs for statistical inference? 2. How does the paper provide evidence for the computational threshold of the sparse PCA problem, and how does it add to our understanding of the problem? 3. What are the strengths and weaknesses of the paper, particularly in substantiating predictions for sparse PCA? 4. How does the paper progress in substantiating the predictions for sparse PCA, especially compared to existing lower bounds in the style of #1 and #2? 5. Can the authors comment on whether the lower bound provided by the paper relies less on problem-specific structure, and try to abstract conditions needed?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper fits into the broad area of theoretically understanding information-computation tradeoffs for statistical inference. The abstract setup is that there is some signal x, typically in the vector space R^d, and as input we are given noisy observations about x -- v_1, v_2, ..., v_m. The goal is to infer x up to the best accuracy possible via an efficient algorithm. When m is too small, the problem is impossible to solve, even information theoretically. At some threshold for m, the problem becomes information theoretically identifiable (the information threshold) but could possibly remain computationally hard. And at a (possibly distinct) threshold for m (the computational threshold), the problem also becomes algorithmically easy to solve. Understanding the computational threshold for average-case problems, and in particular whether there is a gap between the information and computational thresholds, is a topic that has attracted a lot of attention in the theoretical computer science community. To understand the computational threshold, one must give an algorithm along with some evidence that the problem is hard under the threshold, which usually comes in the form of lower bounds against particular algorithms (semidefinite programs, statistical queries, polynomial-based methods, eigenvalue methods, iterative algorithms, etc). A ubiquitous phenomenon for a wide variety of such average-case algorithmic problems is that at some threshold several simple algorithms (most notably some spectral algorithm) succeed at solving the problem, and under the threshold many seemingly more powerful algorithms fail. This work is concerned with providing more evidence toward our understanding of this computational threshold for the sparse PCA (and also tensor PCA) problems. In the sparse PCA problem, the hidden signal x is a random d-dimensional vector supported on a much smaller number of coordinates (k coordinates), where the nonzero entries are random in {±1}. The observations are independent samples from the gaussian distribution with covariance spiked by x: N(0, Identity + lambda xx*), where lambda is a positive scalar (to be interpreted as a "signal-to-noise" parameter). The main result of this paper is that a certain powerful subexponential-sized semidefinite programming-based algorithm (the degree-n^epsilon Sum-of-Squares algorithm) fails to solve sparse PCA better than a simple spectral algorithm, which adds to our understanding of how hard this problem should be. Similar results are presented for the tensor PCA problem as well.
Strengths And Weaknesses The current state of affairs in obtaining computational hardness for restricted classes of algorithms is that when we want to argue that it's computationally hard to beat a simple spectral algorithm, we study the following methods of hardness, listed in roughly increasing order of difficulty: 1. Low-degree polynomials; 2. Statistical queries; 3. Semidefinite programming lower bounds (Sum-of-Squares and this paper's contributions fall here); 4. Reduction-based hardness. (I must note that the gap between the difficulty of 2 and 3 is quite large.) In the current sociological setting of the subarea, it is a generally widespread belief that a hardness result against 1 or 2 does mean that our current algorithmic techniques likely won't surmount the barriers they are facing, and likely new algorithmic ideas are needed.
An important question to answer to support this belief is to show that hardness results for 1 and 2 (which are often easy to establish) imply hardness results for other algorithms. Such meta-theorems are hard to come by, with no particularly promising attacks for the case of Sum-of-Squares, and therefore the current approach is to substantiate the conjecture via a rich set of example lower bounds, which together hopefully illuminate a path to such meta-theorems. With respect to this paper's strengths and weaknesses I would like to touch upon two points: A) Sparse PCA is a problem that has received a lot of attention in theoretical algorithmic statistics, and in light of that, evidence for computational thresholds is interesting. However, I would not say that the paper gave us a surprising answer as to where the computational threshold should lie; the pre-existing lower bounds in the style of #1 and #2 already told us what we think the answer should be. B) I think the paper's main strength is its progress in substantiating the predictions that #1 and #2 make for sparse PCA, since Sum-of-Squares lower bounds are known only for a handful of problems, especially since they are technically difficult to achieve. It is worth noting that the best reduction-based hardness (conditional on some conjecture about planted clique) only rules out quasipolynomial time algorithms, as opposed to subexponential algorithms. Some of the other existing SOS lower bounds (planted clique, CSPs, SK model) rely heavily on problem-specific structure, and the most desirable aspect for any new SOS lower bound is to rely less on problem-specific structure and try to abstract the conditions needed; it is hard for me to tell whether this lower bound does this given the time constraint of the review, but it would be nice if the authors could comment on this. Overall, I recommend acceptance.
Questions I encourage the authors to highlight that the distinguishing problem is information theoretically solvable (given exponential time) in some interesting regime covered by the theorem, because if it isn't, an SOS lower bound holds for a silly reason. The discussion about SOS being "clearly" more powerful than low-degree polynomials and statistical queries is slightly questionable, and I would advise the authors to remove that line. For a wide variety of settings we expect SOS to capture what low-degree polynomials can do, and in some settings we think low-degree lower bounds imply SOS lower bounds, but they're formally incomparable. One example is: if you plant a random coloring in a sparse graph with signal-to-noise ratio below the so-called "Kesten-Stigum threshold", a low-degree polynomial should succeed at distinguishing this planted distribution from a random graph with constant probability, whereas the lingering belief is that SOS SDPs shouldn't solve this problem (since the intuition is that they're limited to what can be solved "with high probability"). Limitations --
NIPS
Title Sub-exponential time Sum-of-Squares lower bounds for Principal Components Analysis Abstract Principal Components Analysis (PCA) is a dimension-reduction technique widely used in machine learning and statistics. However, due to the dependence of the principal components on all the dimensions, the components are notoriously hard to interpret. Therefore, a variant known as sparse PCA is often preferred. Sparse PCA learns principal components of the data but enforces that such components must be sparse. This has applications in diverse fields such as computational biology and image processing. To learn sparse principal components, it’s well known that standard PCA will not work, especially in high dimensions, and therefore algorithms for sparse PCA are often studied as a separate endeavor. Various algorithms have been proposed for Sparse PCA over the years, but given how fundamental it is for applications in science, the limits of efficient algorithms are only partially understood. In this work, we study the limits of the powerful Sum of Squares (SoS) family of algorithms for Sparse PCA. SoS algorithms have recently revolutionized robust statistics, leading to breakthrough algorithms for long-standing open problems in machine learning, such as optimally learning mixtures of gaussians, robust clustering, robust regression, etc. Moreover, it is believed to be the optimal robust algorithm for many statistical problems. Therefore, for sparse PCA, it’s plausible that it can beat simpler algorithms such as diagonal thresholding that have been traditionally used. In this work, we show that this is not the case, by exhibiting strong tradeoffs between the number of samples required, the sparsity and the ambient dimension, for which SoS algorithms, even if allowed sub-exponential time, will fail to optimally recover the component. Our results are complemented by known algorithms in literature, thereby painting an almost complete picture of the behavior of efficient algorithms for sparse PCA. Since SoS algorithms encapsulate many algorithmic techniques such as spectral or statistical query algorithms, this solidifies the message that known algorithms are optimal for sparse PCA. Moreover, our techniques are strong enough to obtain similar tradeoffs for Tensor PCA, another important higher order variant of PCA with applications in topic modeling, video processing, etc. 1 Introduction Principal components analysis (PCA) [62] is a popular data processing and dimension reduction routine that is widely used. It has numerous applications in Machine Learning, Statistics, Engineering, Biology, etc. Given a dataset, PCA projects the data to a lower dimensional space spanned by the principal components. The intuition is that PCA sheds lower order information such as noise ⇤Equal contribution †A.P. was supported in part by NSF grant CCF-2008920 and G.R. was supported in part by NSF grants CCF-1816372 and CCF-2008920 36th Conference on Neural Information Processing Systems (NeurIPS 2022). but importantly preserves much of the intrinsic information present in the data that are needed for downstream tasks. However, despite great optimality properties, PCA has its drawbacks. Firstly, because the principal components are linear combinations of all the original variables, it’s notoriously hard to interpret them [84]. Secondly, it’s well known that PCA does not yield good estimators in high dimensional settings [13, 97, 61]. To address these issues, a variant of PCA known as Sparse PCA is often used. 
Sparse PCA searches for principal components of the data with the added constraint of sparsity. Concretely, consider given data v1, v2, . . . , vm 2 Rd. In Sparse PCA, we want to find the top principal component of the data under the extra constraint that it has sparsity at most k. That is, we want to find a vector v 2 Rd that maximizes Pm i=1hv, vii2 such that kvk0 k. Sparse PCA has enjoyed applications in a diverse range of fields ranging from medicine, computational biology, economics, image and signal processing, finance and of course, machine learning and statistics (e.g. [117, 89, 85, 115, 31, 2]). It’s worth noting that in some of these applications, other algorithms are also often used to learn statistical models with sparse structure, such as greedy algorithms (e.g. [60, 81, 59, 124]) and score-based algorithms (e.g. [28, 90, 107]) but in this work, we focus on the widely used sparse PCA technique. Sparse PCA comes with the important benefit that the learnt components are easier to interpret. A notable example of this is to recover topics from documents [32, 95]. Moreover, this has important benefits for algorithmic fairness in machine learning. A large volume of research has been devoted to study Sparse PCA and its variants. Algorithms have been proposed and studied by several works, e.g. [4, 83, 73, 33, 118, 20, 82, 34, 54, 23, 35, 29, 36]. For example, simple variants of PCA such as thresholding on top of standard PCA [61, 29] work well in certain parameter settings. This leads to the natural question whether more sophisticated algorithms can do better either for these settings or other parameter settings. On the other hand, there have been works from the inapproximability perspective as well (e.g. [20, 54, 23, 73, 35, 118], see Section 3.1 for a more detailed overview) In particular, a lot of these inapproximability results have relied on various other conjectures, due to the difficulty of proving unconditional lower bounds. Despite these prior works, exactly understanding the limits of efficient algorithms to this problem is still an active research area. This is natural considering the importance of sparse PCA and how fundamental it is to a multitude of applications. In this work, we focus on the powerful Sum-of-Squares (SoS) family of algorithms [113, 92, 96, 48] based on semidefinite programming relaxations. SoS algorithms have recently revolutionized robust machine learning, a branch of machine learning where the underlying dataset is noisy, with the noise being either random or adversarial. Robust machine learning has gotten a lot of attention in recent years because of its wide variety of use cases in machine learning and other downstream applications, including safety-critical ones like autonomous driving. For example, there has been a high volume of practical works in computer vision [114, 47, 121, 50, 112, 122, 42, 76] and speech recognition [57, 119, 106, 108, 78, 3, 91, 94]. In this important field, SoS has recently lead to breakthrough algorithms for long-standing open problems [16, 80, 51, 70, 43, 72, 14, 15, 111]. Highlights include - Robustly learning mixtures of high dimensional Gaussians. This is an extremely important problem that has been subjected to intense scrutiny, with a long line of work culminating in [16, 80]. - Efficient algorithms for the fundamental problems of regression [70], moment estimation [72], clustering [14] and subspace recovery [15] in the presence of outliers. 
Moreover, SoS algorithms are believed to be the optimal robust algorithm for many statistical problems. In a different direction, SoS algorithms have led to the design of fast algorithms for problems such as tensor decomposition [53, 111].

Put more concretely, SoS algorithms, also known as the SoS hierarchy or the Lasserre hierarchy, offer a series of convex semidefinite programming (SDP) based relaxations to optimization problems. Due to its ability to capture a wide variety of algorithmic techniques, the hierarchy has become a fundamental tool in algorithms and optimization. It was, and still remains, an extremely versatile tool for combinatorial optimization [46, 9, 49, 102], but recently it has been used extensively in statistics and machine learning (apart from the references above, see also [17, 18, 52, 100]). Therefore, we ask the following question, also raised and posed as an open problem in [82, 54, 55]:

Can Sum-of-Squares algorithms beat known algorithms for Sparse PCA?

In this work, we show that SoS algorithms cannot beat known spectral algorithms, even if we allow sub-exponential time! This suggests that currently used algorithms, such as thresholding or other spectral algorithms, are in a sense optimal for this problem. To prove our results, we consider random instances of Sparse PCA and show that they are naturally hard for SoS. In particular, we focus on the Wishart random model of Sparse PCA. This model is a more natural modeling assumption than other random models that have been studied before, such as the Wigner random model. Note importantly that our model assumptions only strengthen our results, because we are proving impossibility results: if SoS algorithms do not work for this restricted version of sparse PCA, then they will not work for more general models, e.g. with general covariance or multiple spikes. We now describe the model.

The Wishart model of Sparse PCA, also known as the Spiked Covariance model, was originally proposed by [61]. In this model, we observe $m$ vectors $v_1, \ldots, v_m \in \mathbb{R}^d$ from the distribution $N(0, I_d + \lambda uu^T)$, where $u$ is a $k$-sparse unit vector (that is, $\|u\|_0 \le k$), and we would like to recover the principal component $u$. Here, the sparsity of a vector is its number of nonzero entries, and $\lambda$ is known as the signal-to-noise ratio. As the signal-to-noise ratio gets lower, it becomes harder, and perhaps even impossible, to recover $u$, since the signature left by $u$ in the data becomes fainter. But it's possible that this may be mitigated if the number of samples $m$ grows. Therefore, there is a tradeoff between $m$, $d$, $k$ and $\lambda$ at play here.

Previously proposed algorithms recover $u$ in various regimes. For example, if the number of samples is very large, namely $m \gg \max\left(\frac{d}{\lambda}, \frac{d}{\lambda^2}\right)$, then standard PCA works. If this is not the case, we may still be able to recover $u$ by assuming that the sparsity is not too large compared to the number of samples, namely $m \gg \frac{k^2}{\lambda^2}$; here one uses a variant of standard PCA known as diagonal thresholding (sketched below). Similar results have been obtained in various other regimes, while some regimes have resisted attack by efficient algorithms. Our results complete the picture by showing that in the regimes that have so far resisted attack, the powerful Sum of Squares algorithms also cannot recover the principal component.
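To make the Wishart model and diagonal thresholding concrete, here is an illustrative sketch. The parameter values, the sampling construction, and numpy are assumptions for illustration; diagonal thresholding keeps the k coordinates with the largest diagonal entries of the sample covariance and runs PCA on the corresponding submatrix.

import numpy as np

rng = np.random.default_rng(0)
d, k, m, lam = 500, 10, 2000, 2.0   # here m >> k^2 / lam^2, so recovery should succeed

# A k-sparse unit spike with entries in {-1/sqrt(k), 0, 1/sqrt(k)}.
u = np.zeros(d)
u[rng.choice(d, size=k, replace=False)] = rng.choice([-1.0, 1.0], size=k) / np.sqrt(k)

# v_i = g_i + sqrt(lam) * z_i * u with g_i ~ N(0, I_d) and z_i ~ N(0, 1)
# has covariance exactly I_d + lam * u u^T.
V = rng.normal(size=(m, d)) + np.sqrt(lam) * rng.normal(size=(m, 1)) * u

Sigma = V.T @ V / m

# Diagonal thresholding: keep the k largest diagonal entries, then do PCA there.
S = np.argsort(np.diag(Sigma))[-k:]
_, vecs = np.linalg.eigh(Sigma[np.ix_(S, S)])
u_hat = np.zeros(d)
u_hat[S] = vecs[:, -1]
print(abs(u @ u_hat))   # close to 1 in this regime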
We now state our theorem informally, with the formal statement in Theorem 3.1.

Theorem 1.1. For the Wishart model of Sparse PCA, sub-exponential time SoS algorithms fail to recover the principal component when the number of samples satisfies $m \ll \min\left(\frac{d}{\lambda^2}, \frac{k^2}{\lambda^2}\right)$.

In particular, this theorem resolves an open problem posed by [82] and [54, 55]. In almost all other regimes, algorithms to recover the principal component $u$ exist. We give a summary of such algorithms in Section 3, captured succinctly in Fig. 1. We say almost all other regimes because there is one interesting regime, namely $\frac{d}{\lambda^2} \ll m \ll \frac{\min(d, k)}{\lambda}$, marked by light green in Fig. 1, where we can show that, information theoretically, we cannot recover $u$ but it is possible to do hypothesis testing of Sparse PCA. That is, in this regime we can distinguish purely random unspiked samples from the spiked samples; however, we will not be able to recover the principal component even if we use an exponential time brute-force algorithm.

We use our techniques to also obtain strong results for the related Tensor Principal Components Analysis (Tensor PCA) problem. Tensor PCA, originally introduced by [110], is a generalization of PCA to higher order tensors. Formally, given an order-$k$ tensor of the form $\lambda u^{\otimes k} + B$, where $u \in \mathbb{R}^n$ is a unit vector and $B \in \mathbb{R}^{[n]^k}$ has independent Gaussian entries, we would like to recover the principal component $u$. Here, $\lambda$ is known as the signal-to-noise ratio. Tensor PCA is a remarkably useful statistical and computational technique to exploit higher order moments of the data. It was originally envisaged to be applied in latent variable modeling and indeed, it has found multiple applications in this context (e.g. [5, 68, 69, 6]): a tensor containing statistics of the input data is computed and then decomposed in order to recover the latent variables. Because of the technique's versatility, it has gathered a lot of attention in machine learning, with applications in topic modeling, video processing, collaborative filtering, community detection, etc. (see e.g. [56, 7, 110, 5, 6, 38, 79] and references therein). For Tensor PCA, similarly to sparse PCA, there has been wide interest in the community in studying algorithms (e.g. [11, 22, 52, 53, 110, 125, 120, 67, 8]) as well as approximability and hardness (e.g. [88, 75, 24, 54]; see Section 3.2 for a more detailed overview). It's worth noting that many of these hardness results are conditional, that is, they rely on various conjectures, sometimes stronger than $P \ne NP$. Moreover, there has been widespread interest from the statistics community as well, e.g. [58, 98, 77, 26, 27], due to fascinating connections to random matrix theory and statistical physics.

In this work, we study the performance of sub-exponential time Sum of Squares algorithms for Tensor PCA. Our main result is stated informally below, and formally in Theorem 3.2.

Theorem 1.2. For Tensor PCA, sub-exponential time SoS algorithms fail to recover the principal component when the signal-to-noise ratio satisfies $\lambda \ll n^{k/4}$.

In particular, this resolves an open question posed by the works [52, 22, 54, 55]. Therefore, our main contributions can be summarized as follows.

1. Despite the huge breakthroughs achieved by Sum-of-Squares algorithms in recent works on high dimensional statistics, we exhibit barriers for them on the fundamental problems of Sparse PCA and Tensor PCA.
2. We achieve optimal tradeoffs compared to known algorithms, thereby painting a full picture of the computational thresholds of tractable algorithms. This suggests that existing algorithms are preferable for PCA and its variants.
3. Prior lower bounds for these problems have either focused on weaker classes of algorithms or were obtained assuming other hardness conjectures, whereas we prove high degree, sub-exponential time SoS lower bounds without relying on any conjectures.
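Purely as an illustration of these thresholds, the following toy function tabulates the sparse PCA regimes discussed above (and in Fig. 1), suppressing constants and polylogarithmic factors. The function name and the coarse four-way split are our own simplification, not from the paper.

def sparse_pca_regime(m, d, k, lam):
    """Coarse classifier for the sparse PCA parameter regimes (polylogs suppressed)."""
    if m >= max(d / lam, d / lam**2):
        return "vanilla PCA recovers u"
    if m >= k**2 / lam**2:
        return "diagonal thresholding recovers u"
    if m <= min(d / lam**2, k**2 / lam**2):
        return "hard: even sub-exponential time SoS fails (Theorem 1.1)"
    return "intermediate: testing may be possible even when recovery is not"

print(sparse_pca_regime(m=50, d=10000, k=100, lam=1.0))   # the hard regime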
Acknowledgements and Bibliographic note. We thank Sam Hopkins, Pravesh Kothari, Prasad Raghavendra, Tselil Schramm, David Steurer and Madhur Tulsiani for helpful discussions. We also thank Sam Hopkins and Pravesh Kothari for assistance in drafting the informal description of the machinery (Section C). Parts of this work have also appeared in [99, 104].

2 Sum-of-Squares algorithms

The Sum of Squares (SoS) hierarchy is a powerful class of algorithms that harnesses semidefinite programming for optimization problems, and it has achieved breakthrough algorithms for many problems in machine learning and statistics. In this section, we briefly describe the hierarchy. For a more detailed treatment with an eye towards applications to machine learning and statistics, see the ICM survey [102] or the monograph [43].

Given an optimization problem specified by a program with polynomial constraints, the SoS hierarchy gives a family of convex relaxations parameterized by an integer known as its degree. As the degree gets higher, the running time to solve the convex relaxation increases, but on the other hand the relaxation gets stronger and hence serves as a better algorithm. This offers a smooth tradeoff between running time and quality of approximation. In general, degree-$D_{sos}$ SoS can be solved in $n^{O(D_{sos})}$ time†. Therefore, constant degree SoS corresponds to polynomial time algorithms, which in general translates to efficient algorithms. In this work, we focus on, and show limitations of, degree $n^{\varepsilon}$ SoS, which corresponds to sub-exponential running time.

Suppose we are given multivariate polynomials $p, g_1, \ldots, g_m$ on $n$ real-valued variables $x_1, \ldots, x_n$ (denoted collectively by $x$). Consider the task:

maximize $p(x)$ such that $g_1(x) = 0, \ldots, g_m(x) = 0$.

In general, we could also allow inequality constraints, e.g., $g_i(x) \ge 0$. In this work, we only have equality constraints, but much of the theory generalizes when we have inequality constraints instead. We now formally describe the Sum of Squares hierarchy of algorithms, via the so-called pseudo-expectation operators.

Definition 2.1 (Pseudo-expectation values). Given multivariate polynomial constraints $g_1 = 0, \ldots, g_m = 0$ on $n$ variables $x_1, \ldots, x_n$, degree $D_{sos}$ pseudo-expectation values are given by a linear map $\tilde{\mathbb{E}}$ from polynomials in $x_1, \ldots, x_n$ of degree at most $D_{sos}$ to $\mathbb{R}$ satisfying the following conditions:

1. $\tilde{\mathbb{E}}[1] = 1$,
2. $\tilde{\mathbb{E}}[f \cdot g_i] = 0$ for every $i \in [m]$ and polynomial $f$ such that $\deg(f \cdot g_i) \le D_{sos}$,
3. $\tilde{\mathbb{E}}[f^2] \ge 0$ for every polynomial $f$ such that $\deg(f^2) \le D_{sos}$.

Any linear map $\tilde{\mathbb{E}}$ satisfying the above properties is known as a degree $D_{sos}$ pseudo-expectation operator satisfying the constraints $g_1 = 0, \ldots, g_m = 0$. The intuition behind pseudo-expectation values is that these conditions would be satisfied by any actual expectation operator that takes expected values over a distribution of true optimal solutions, so optimizing over pseudo-expectation values gives a relaxation of the problem. For details, see [93, 101].

†In pathological cases, there may be issues with bit complexity, but these will not appear in our settings.
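As an illustration of Definition 2.1, a degree-2 pseudo-expectation (with no constraints $g_i$) is determined by the values $\tilde{\mathbb{E}}[x_i]$ and $\tilde{\mathbb{E}}[x_i x_j]$, and condition 3 is exactly positive semidefiniteness of the corresponding moment matrix. The following sketch checks this; it is our own illustration (numpy assumed), and the constraint conditions of item 2, which would add linear equalities, are omitted.

import numpy as np

def is_degree2_pseudoexpectation(mean, second, tol=1e-9):
    """Check conditions 1 and 3 of Definition 2.1 at degree 2 (no constraints g_i):
    the moment matrix M = E~[(1, x)(1, x)^T] must have M[0, 0] = 1 and be PSD."""
    n = len(mean)
    M = np.empty((n + 1, n + 1))
    M[0, 0] = 1.0            # condition 1: E~[1] = 1
    M[0, 1:] = mean          # E~[x_i]
    M[1:, 0] = mean
    M[1:, 1:] = second       # E~[x_i x_j]
    # Condition 3 for deg(f) <= 1 says f^T M f >= 0 for all f, i.e. M is PSD.
    return np.all(np.linalg.eigvalsh((M + M.T) / 2) >= -tol)

# A genuine expectation always passes, e.g. x uniform on {-1, 1}^2:
print(is_degree2_pseudoexpectation(np.zeros(2), np.eye(2)))   # True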
Definition 2.2 (Degree $D_{sos}$ SoS). The degree $D_{sos}$ SoS relaxation for the polynomial optimization problem "maximize $p(x)$ such that $g_1(x) = 0, \ldots, g_m(x) = 0$" is the program that maximizes $\tilde{\mathbb{E}}[p(x)]$ over all degree $D_{sos}$ pseudo-expectation operators $\tilde{\mathbb{E}}$ satisfying the constraints $g_1 = 0, \ldots, g_m = 0$.

The main advantage is that the SoS relaxation can be efficiently solved via convex programming! In particular, condition 3 in Definition 2.1 is equivalent to a certain matrix being positive semidefinite, so the degree $D_{sos}$ SoS relaxation can be solved via semidefinite programming [116]. This meta-algorithm is known as a degree-$D_{sos}$ SoS algorithm and runs in $n^{O(D_{sos})}$ time†. Therefore, constant degree SoS can be solved in polynomial time. In the next section, we apply SoS to PCA and formally state our results.

2.1 Related algorithmic techniques

Statistical Query algorithms. Statistical query (SQ) algorithms are another popular restricted class of algorithms, introduced by [66]. In this model, for a given data distribution, we are allowed to query expected values of functions. Concretely, for a dataset $D$ on $\mathbb{R}^n$, we have access to it via an oracle that, given as query a function $f : \mathbb{R}^n \to [-1, 1]$, returns $\mathbb{E}_{x \sim D} f(x)$ up to some additive adversarial error. SQ algorithms capture a broad class of algorithms in statistics and machine learning and have been used to study information-computation tradeoffs [109, 40, 30]. There has also been significant work on understanding the limits of SQ algorithms (e.g. [40, 41, 34]). Formally, SQ algorithms and SoS are in general incomparable. However, the recent work [25] showed that under mild conditions, low-degree polynomial algorithms (defined next) and statistical query algorithms have equivalent power. Under these conditions, it is also easy to see that SoS is more powerful than low degree polynomial algorithms, and hence SoS algorithms are stronger than statistical query algorithms. Therefore, the SoS lower bounds shown in this work give strictly stronger evidence of hardness than SQ lower bounds.

Low degree polynomial algorithms. In statistics, a hypothesis testing problem is one where the input is sampled from one of two distributions and we would like to identify which distribution it was sampled from. In this setting, a low-degree polynomial algorithm compares the expectation of a low-degree polynomial to try to distinguish the two distributions. This method has been used to conjecture hardness thresholds for various problems [54, 55, 75]. However, under mild conditions, the SoS hierarchy of algorithms is more powerful than low degree polynomial algorithms [54] and therefore potentially yields better algorithms. Therefore, the SoS lower bounds shown in this work are stronger than low degree polynomial lower bounds as well.

3 Lower bounds for Sparse Principal Components Analysis

In this section, we state our main results for Sparse PCA and Tensor PCA.

3.1 Sparse PCA

We recall the setting of the Wishart model of Sparse PCA: we are given $v_1, \ldots, v_m \in \mathbb{R}^d$ sampled from $N(0, I_d + \lambda uu^T)$, where $u$ is a $k$-sparse unit vector, and we wish to recover $u$. We will further assume that the entries of $u$ are in $\{-\frac{1}{\sqrt{k}}, 0, \frac{1}{\sqrt{k}}\}$, chosen such that the sparsity is $k$ (and hence the norm is 1). Note importantly that this assumption only strengthens our result: if SoS cannot solve the problem even for this specific $u$, it cannot do any better for the general problem with arbitrary $u$.
Arrange the given vectors $v_1, \ldots, v_m$ as the rows of a matrix $S \in \mathbb{R}^{m \times d}$, and let $\Sigma = \frac{1}{m}\sum_{i=1}^{m} v_i v_i^T$ be the sample covariance matrix. The standard PCA objective is to maximize $x^T \Sigma x$ and recover $x = \sqrt{k}\,u$. Therefore, the sparse PCA problem can be rephrased as

maximize $\frac{m}{k} \cdot x^T \Sigma x = \frac{1}{k} \sum_{i=1}^{m} \langle x, v_i \rangle^2$ such that $x_i^3 = x_i$ for all $i \le d$ and $\sum_{i=1}^{d} x_i^2 = k$,

where the program variables are $x_1, \ldots, x_d$. The constraints $x_i^3 = x_i$ enforce that the entries of $x$ are in $\{-1, 0, 1\}$, and together with them, the last condition $\sum_{i=1}^{d} x_i^2 = k$ enforces $k$-sparsity.

We now consider the series of convex relaxations for Sparse PCA obtained by SoS algorithms. In particular, we consider SoS degree $d^{\varepsilon}$ for a small constant $\varepsilon > 0$. Note that this corresponds to SoS algorithms of sub-exponential running time in the input size $d^{O(1)}$. Our main result states that for choices of $m$ below a certain threshold, when the vectors $v_1, \ldots, v_m$ are sampled from the unspiked standard Gaussian $N(0, I_d)$, sub-exponential time SoS algorithms will have optimal value at least $m + m\lambda - o(1)$. This is also the optimal value of the objective in the case when the vectors $v_1, \ldots, v_m$ are indeed sampled from the spiked Gaussian $N(0, I_d + \lambda uu^T)$ and $x = \sqrt{k}\,u$. Therefore, SoS is unable to distinguish $N(0, I_d)$ from $N(0, I_d + \lambda uu^T)$ and hence cannot solve sparse PCA. Formally,

Theorem 3.1. For all sufficiently small constants $\varepsilon > 0$, suppose $m \le \frac{d^{1-\varepsilon}}{\lambda^2}$, $m \le \frac{k^{2-\varepsilon}}{\lambda^2}$, and for some constant $A > 0$, $d^{A} \le k \le d^{1 - A\varepsilon}$ and $\frac{\lambda}{\sqrt{k}} \le d^{-A\varepsilon}$. Then for an absolute constant $C > 0$, with high probability over a random $m \times d$ input matrix $S$ with Gaussian entries, the sub-exponential time SoS algorithm of degree $d^{C\varepsilon}$ for sparse PCA has optimal value at least $m + m\lambda - o(1)$.

In other words, sub-exponential time SoS cannot certify that, for a random dataset with Gaussian entries, there is no unit vector $u$ with $k$ nonzero entries such that $m \cdot u^T \Sigma u \approx m + m\lambda$. The proof of Theorem 3.1 is deferred to the appendix. A few remarks are in order.

1. Note here that $m + m\lambda$ is approximately the value of the objective when the input vectors $v_1, \ldots, v_m$ are indeed sampled from the spiked model $N(0, I_d + \lambda uu^T)$ and $x = \sqrt{k}\,u$. Therefore, sub-exponential time SoS is unable to distinguish a completely random distribution from the spiked distribution and hence is unable to solve sparse PCA.
2. The constant $A$ can be thought of as $\approx 0$; it appears for technical reasons, to ensure that we have sufficient decay in our bounds (see Remark K.8). In particular, most values of $k$ and $\lambda$ fall under the conditions of the theorem.

Informally, our main result says that when $m \ll \min\left(\frac{d}{\lambda^2}, \frac{k^2}{\lambda^2}\right)$, sub-exponential time SoS cannot recover the principal component $u$. This is the content of Theorem 1.1.
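To give a feel for how such relaxations look in practice, here is a sketch of a basic SDP relaxation of sparse PCA in the spirit of degree-2 SoS, using the classical l1 surrogate for sparsity. This is an illustration only, not the degree-$d^{\varepsilon}$ relaxation analyzed in this paper; cvxpy and an installed SDP solver are assumed.

import cvxpy as cp
import numpy as np

def sparse_pca_sdp(Sigma, k):
    """Maximize <Sigma, X> over X PSD with tr(X) = 1 and sum |X_ij| <= k.
    X plays the role of a (rescaled) pseudo-moment matrix E~[x x^T]."""
    d = Sigma.shape[0]
    X = cp.Variable((d, d), symmetric=True)
    constraints = [X >> 0, cp.trace(X) == 1, cp.sum(cp.abs(X)) <= k]
    problem = cp.Problem(cp.Maximize(cp.trace(Sigma @ X)), constraints)
    problem.solve()
    return X.value, problem.value

rng = np.random.default_rng(0)
G = rng.normal(size=(40, 30))
X_hat, val = sparse_pca_sdp(G.T @ G / 40, k=5)   # e.g. on a sample covariance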
Prior work on algorithms. Due to its widespread importance, a tremendous amount of work has been devoted to obtaining algorithms for sparse PCA, both theoretically and practically; see [4, 83, 73, 33, 118, 20, 82, 34, 54, 23, 35, 29, 36] to cite a few. We now place our result in the context of known algorithms for Sparse PCA and explain why it offers tight tradeoffs between approximability and inapproximability. Between this work and prior works, we completely understand the parameter regimes where sparse PCA is easy or conjectured to be hard, up to polylogarithmic factors. In Fig. 1a and Fig. 1b, we assign the different parameter regimes to the following categories.

- Diagonal thresholding: In this regime, diagonal thresholding [61, 4] recovers the sparse vector. Covariance thresholding [73, 33] and SoS algorithms [37] can also be used in this regime; covariance thresholding has a better dependence on logarithmic factors, and the SoS algorithms work in the presence of adversarial errors.
- Vanilla PCA: Vanilla PCA (i.e. standard PCA) can recover the vector; that is, we do not need to use the fact that the vector is sparse (see e.g. [21, 37]).
- Spectral: An efficient spectral algorithm recovers the sparse vector (see e.g. [37]).
- Can test but not recover (Spectral*): A simple spectral algorithm can solve the hypothesis testing version of Sparse PCA, but it is information theoretically impossible to recover the sparse vector [37, Appendix E].
- Hard: A regime where it is conjectured to be hard for algorithms to recover the sparse principal component. We discuss this in more detail below.

In Fig. 1a and Fig. 1b, the regimes corresponding to Diagonal thresholding, Vanilla PCA and Spectral are dark green, while the regimes corresponding to Spectral* and Hard are light green and red, respectively.

Prior work on hardness. Prior works have explored statistical query lower bounds [25], basic SDP lower bounds [73], reductions from conjectured hard problems [21, 20, 23, 44, 118], lower bounds via the low-degree conjecture [35, 37], lower bounds via statistical physics [35, 12], etc. Threshold behaviors similar to ours have been predicted by [37], but importantly, they assume a conjecture known as the low-degree likelihood conjecture; to put this in context, the low-degree likelihood conjecture is a stronger assumption than $P \ne NP$. Similarly, many of these other lower bounds rely on various conjectures. In contrast, our results are unconditional and do not assume any conjectures.

Compared to these other lower bounds, there have been only a few prior works on lower bounds against SoS algorithms [73, 21, 82], and they only cover degree 2 and degree 4 SoS. In particular, degree 2 SoS lower bounds were studied in [73, 21], although they do not state it this way. [82] obtained degree 4 SoS lower bounds, but these were very lossy, i.e., they hold only for a strict subset of the Hard regime $m \ll \frac{k^2}{\lambda^2}$ and $m \ll \frac{d}{\lambda^2}$. Moreover, the ideas used in these prior works do not generalize to higher degrees. The lack of other SoS lower bounds can be attributed to the difficulty of proving such bounds. In this paper, we vastly strengthen these known results and show almost-tight lower bounds for SoS algorithms of degree $d^{\varepsilon}$, which corresponds to sub-exponential running time $d^{O(d^{\varepsilon})}$. We note that SoS algorithms get stronger as the degree increases, so our results immediately imply these prior results, and even in the special case of degree 4 SoS we improve the known lossy bounds. In summary, Theorem 3.1 subsumes all these earlier known results and is a vast improvement over prior SoS lower bounds, providing compelling evidence for the hardness of Sparse PCA in this parameter range.

The work [54] also states SoS lower bounds for Sparse PCA, but it differs from our work in three important aspects. First, they handle the related but qualitatively different Wigner model of Sparse PCA. Their techniques fail for the Wishart model of Sparse PCA, which is more natural in practice; we overcome this shortcoming and work with the Wishart model. We emphasize that their techniques are insufficient to handle this generality, and overcoming this is far from being a mere technicality; on the other hand, our techniques can easily recover their results. Second, while they sketch a high level proof overview for their lower bound, they do not give a proof, whereas our proofs are fully explicit. Finally, they assume the input distribution has entries in $\{\pm 1\}$, that is, they work with the $\pm 1$ variant of PCA, whereas we work with the more realistic setting where the distribution is $N(0, 1)$. Again, our techniques can easily recover their results as well.
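To make the distinction between the two models concrete, the following sketch generates an instance of each (illustrative parameter choices; numpy assumed). The Wigner model reveals a single symmetric noise matrix with a rank-one spike, whereas the Wishart model reveals m independent samples whose covariance is spiked.

import numpy as np

rng = np.random.default_rng(0)
d, k, m, lam = 200, 10, 500, 2.0
u = np.zeros(d)
u[rng.choice(d, size=k, replace=False)] = rng.choice([-1.0, 1.0], size=k) / np.sqrt(k)

# Wigner model: one observed matrix, a rank-one spike plus symmetric Gaussian noise.
G = rng.normal(size=(d, d))
W = lam * np.outer(u, u) + (G + G.T) / np.sqrt(2)

# Wishart (spiked covariance) model: m samples from N(0, I_d + lam * u u^T).
V = rng.normal(size=(m, d)) + np.sqrt(lam) * rng.normal(size=(m, 1)) * u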
3.2 Tensor PCA

We now state our main result for Tensor PCA. Let $k \ge 2$ be an integer. We are given an order-$k$ tensor $A$ of the form $A = \lambda u^{\otimes k} + B$, where $u \in \mathbb{R}^n$ is a unit vector and $B \in \mathbb{R}^{[n]^k}$ has independent Gaussian entries, and we would like to recover the principal component $u$. Tensor PCA can be rephrased as the program

maximize $\langle A, x^{\otimes k} \rangle = \langle A, x \otimes \cdots \otimes x \rangle$ ($k$ times) such that $\sum_{i=1}^{n} x_i^2 = 1$,

where the program variables are $x_1, \ldots, x_n$. The principal component $u$ will then just be the returned solution $x$. We again consider sub-exponential time SoS algorithms, in particular degree $n^{\varepsilon}$ SoS, for this problem; this is sub-exponential time because the input size is $n^{O(1)}$. We then show that if the signal-to-noise ratio $\lambda$ is below a certain threshold, then sub-exponential time SoS on the unspiked input $A \sim N(0, I_{[n]^k})$ will have optimal value close to $\lambda$, which is also the optimal objective value in the spiked case when $A = \lambda u^{\otimes k} + B$, $B \sim N(0, I_{[n]^k})$ and $x = u$. In other words, SoS cannot distinguish the unspiked and spiked distributions and hence cannot recover the principal component $u$.

Theorem 3.2. Let $k \ge 2$ be an integer. For all sufficiently small $\varepsilon > 0$, if $\lambda \le n^{\frac{k}{4} - \varepsilon}$, then for an absolute constant $C > 0$, with high probability over a random tensor $A \sim N(0, I_{[n]^k})$, the sub-exponential time SoS algorithm of degree $n^{C\varepsilon}$ for Tensor PCA has optimal value at least $\lambda - o(1)$.

Therefore, sub-exponential time SoS cannot certify that, for a random tensor $A \sim N(0, I_{[n]^k})$, there is no unit vector $u$ such that $\langle A, u \otimes \cdots \otimes u \rangle \approx \lambda$ ($k$ times). The proof of Theorem 3.2 is deferred to the appendix. We again remark that when the tensor $A$ is actually sampled from the spiked model $A = \lambda u^{\otimes k} + B$, the optimal objective value is approximately $\lambda$ when $x = u$. Therefore, this shows that sub-exponential time SoS algorithms cannot solve Tensor PCA. Informally, the theorem says that when the signal-to-noise ratio satisfies $\lambda \ll n^{k/4}$, SoS algorithms cannot solve Tensor PCA, as stated in Theorem 1.2.

Prior work. Algorithms for Tensor PCA have been studied in the works [11, 22, 52, 53, 110, 125, 120, 67, 8]. It was shown in [22] that the degree $q$ SoS algorithm certifies an upper bound of $\frac{2^{O(k)} (n \cdot \mathrm{polylog}(n))^{k/4}}{q^{k/4 - 1/2}}$ for the Tensor PCA problem. When $q = n^{\varepsilon}$, this gives an upper bound of $n^{\frac{k}{4} - O(\varepsilon)}$. Therefore, our result is tight, giving insight into the computational threshold for Tensor PCA. Lower bounds for Tensor PCA have been studied in various forms, including statistical query lower bounds [25, 39], reductions from conjectured hard problems [123, 24], lower bounds from the low-degree conjecture [54, 55, 75], and evidence based on the landscape behavior [10, 88]. Compared to many of these works, which rely on various conjectures, our lower bounds are unconditional and do not rely on any conjectures. In [54], similarly to Sparse PCA, a related theorem is stated for a different variant of Tensor PCA; however, they do not give a proof, whereas we give explicit proofs. In particular, they state their result without proof for the $\pm 1$ variant of Tensor PCA, whereas we work with the more realistic setting where the distribution is $N(0, 1)$. We remark that their techniques do not recover our results, but on the other hand, our techniques can recover theirs.
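To ground the spiked tensor model of this subsection, here is a minimal order-3 sketch that samples $A = \lambda u^{\otimes 3} + B$ and evaluates the Tensor PCA objective $\langle A, x^{\otimes 3} \rangle$. The parameter values are illustrative assumptions; numpy is assumed.

import numpy as np

rng = np.random.default_rng(0)
n, lam = 30, 50.0
u = rng.normal(size=n)
u /= np.linalg.norm(u)          # unit principal component

# A = lam * u (x) u (x) u + B, with B having i.i.d. standard Gaussian entries.
A = lam * np.einsum('i,j,k->ijk', u, u, u) + rng.normal(size=(n, n, n))

def tpca_objective(A, x):
    """<A, x (x) x (x) x>, the Tensor PCA objective for a unit vector x."""
    return np.einsum('ijk,i,j,k->', A, x, x, x)

print(tpca_objective(A, u))       # approximately lam, up to O(1) Gaussian noise
x_rand = rng.normal(size=n)
x_rand /= np.linalg.norm(x_rand)
print(tpca_objective(A, x_rand))  # typically much smaller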
4 Related work

As discussed in the respective sections, there have been some prior works on (degree at most 4) SoS lower bounds for Sparse and Tensor PCA, as well as various other lower bounds that have mostly relied on hardness conjectures, some of which are stronger than $P \ne NP$. The lack of results on higher degree SoS, compared to other models, can be attributed to the difficulty of proving such lower bounds, which we undertake in this work. Sum of Squares lower bounds have been obtained for various problems of interest, such as the Sherrington-Kirkpatrick Hamiltonian [45, 74, 63, 104], Maximum Cut [87], Maximum Independent Set [64, 104], Constraint Satisfaction Problems [71], Densest $k$-Subgraph [65], etc. The techniques used in this work are closely related to the work [19], which proved Sum of Squares lower bounds for a problem known as Planted Clique. Some of the ideas and techniques we employ in this work, namely pseudo-calibration and graph matrices, have also appeared in other works [87, 103, 45, 1, 64, 105, 63, 65]. It is plausible that our generalized techniques could be applied to other high dimensional statistical problems, which we leave for future work.

5 Conclusion

In this work, we show sub-exponential time lower bounds for the powerful Sum-of-Squares algorithms for Sparse PCA and Tensor PCA. With the ever-growing research into better algorithms for Sparse PCA [4, 83, 73, 33, 118, 20, 82, 34, 54, 23, 35, 36] and Tensor PCA [11, 22, 52, 53, 110, 125, 120, 67, 8], combined with the recent breakthroughs of Sum of Squares algorithms in statistics [16, 80, 51, 70, 43, 72, 14, 15, 111], it is an important goal to understand whether Sum of Squares algorithms can beat state-of-the-art algorithms for these problems. In this work, we answer this negatively and show that even sub-exponential time SoS algorithms cannot do much better than relatively simple algorithms. In particular, we settle open problems raised by [82, 54, 55, 52, 22]. Our work does not handle degree $\Omega(n)$ SoS, which corresponds to exponential time, so analyzing these algorithms is a potential future direction. Another important direction is to understand the limits of powerful algorithms such as SoS for other statistical problems of importance, such as mixture modeling or clustering. For algorithm designers, our results illustrate the intrinsic difficulty of PCA problems and shed light on the information-computation gaps exhibited by PCA. For practitioners, this result provides strong evidence that existing algorithms work relatively well.
1. What is the focus of the paper regarding sparse PCA and its connection to the Sum-of-Squares algorithm? 2. What are the strengths of the paper, particularly in providing insights into the complexity of sparse/tensor PCA and the power of SoS algorithms? 3. What are the weaknesses of the paper, such as the lack of a detailed comparison to previous work and the difficulty in following the proofs for readers unfamiliar with graph matrices? 4. What are some questions raised by the reviewer regarding the proof techniques used in the paper and their differences from previous work? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper Sparse PCA is a fundamental problem in the machine learning and statistical inference community. Suppose we are given vectors v 1 , … , v m ∈ R d sampled from the mean-zero Gaussian distribution with covariance I d + λ u u ⊤ where u is a k -sparse vector, the task is to recover the hidden vector u (principal component). This is the Wishart model. There are several parameters in play here: the number of samples m , the dimension d , the signal-to-noise ratio λ , and the sparsity k . A long line of work has established an almost complete picture of the tractability of sparse PCA in different parameter regimes; in particular, in the “hard” regime where m λ 2 ≪ d and m λ 2 ≪ k 2 , no known algorithm exists. This paper shows that the powerful Sum-of-Squares (SoS) algorithm fails to solve sparse PCA in this hard regime even with degree d O ( 1 ) . Specifically, the authors show that the natural SoS relaxation at degree d O ( ϵ ) fails at the (easier) task of distinguishing between vectors sampled randomly vs vectors sampled from the spiked model when m λ 2 ≪ d 1 − ϵ and m λ 2 ≪ k 2 − ϵ . Their techniques also apply to tensor PCA. For k ≥ 2 , we are given an order k tensor A = λ u ⊗ k + B where u ∈ R n is a unit vector and B is a tensor with Gaussian entries, and the task is to recover u . They prove analogous SoS lower bounds for the regime λ ≪ n k / 4 . Techniques The authors showed that in the hard regime, even when given random inputs (no planted spike), the canonical SoS relaxation has a large optimal value of m + m λ − o ( 1 ) , thus failing to distinguish from the case when the inputs are drawn from the spiked Gaussian. To prove this, the authors constructed a candidate pseudo-distribution from the spiked distribution (aka planted distribution) using the pseudo-calibration method, which is by now a standard technique to prove SoS lower bounds. As in the SoS lower bound for planted clique [18], they decompose the pseudo-moment matrix into a linear combination of “graph matrices”, which are structured random matrices that can be represented as graphs (called shapes) and whose spectral norms can be bounded by analyzing “vertex separators” in the shape. Following planted clique [18], they factorize each shape into “left”, “right”, and “middle” shapes and then bound the “intersection terms” (the errors in the factorization). One technicality is that the coefficients in the linear combination also depend on the shapes, which they handle by analyzing the “coefficient matrix”. They show that the middle part is PSD by charging non-trivial shapes to the diagonal, which implies the whole matrix is PSD. The bulk of their proof is in fact handling the intersection terms. Strengths And Weaknesses Strengths This paper is important as it gives insights to the complexity of sparse/tensor PCA and also the power of SoS algorithms. It provides a strong piece of evidence that the hardness regimes are indeed hard. Previous work by Hopkins et al [49] claimed SoS lower bounds for sparse PCA in the spiked Wigner model, though their proofs are not online. This paper generalizes their ideas and proves an SoS lower bound for the more natural Wishart model. I view this as the main contribution of this paper. As in previous papers, proving SoS lower bounds requires a significant amount of work, especially for polynomial-degree. Therefore, even though many techniques in this paper are not new (e.g. 
factorization of shapes, charging arguments, etc), grinding out all the technical details is a strength in my opinion. Weaknesses I would like to see a detailed comparison to [49] and why their techniques fail for the Wishart model. I believe that [49] has most of the main ideas even though a full proof hasn’t been posted. There must be some technical challenges hidden in the details, but this was not explained in either the main paper or the appendix. Finally, it is almost impossible for readers unfamiliar with graph matrices to follow the proofs. Maybe include some example graphs? Questions What are the key innovations in the proofs that are different from Hopkins et al [49]. In particular, why do their techniques fail for the Wishart model? Where are the requirements d A ≤ k ≤ d 1 − A ϵ and λ / k ≤ d − A ϵ used in the proof? They seem to be buried in the details in the proof, and it would be nice if you can explain this in Appendix A. Other comments Line 102: “tradeoff between m, n and k” should be m, \lambda, k Line 256: what does “sufficient decay” mean? Line 757: I don’t think “trivial shapes” were defined before. Remark C.18: “improper” not defined (it’s only defined later). Limitations The authors adequately addressed the limitations.
NIPS
Title Sub-exponential time Sum-of-Squares lower bounds for Principal Components Analysis Abstract Principal Components Analysis (PCA) is a dimension-reduction technique widely used in machine learning and statistics. However, due to the dependence of the principal components on all the dimensions, the components are notoriously hard to interpret. Therefore, a variant known as sparse PCA is often preferred. Sparse PCA learns principal components of the data but enforces that such components must be sparse. This has applications in diverse fields such as computational biology and image processing. To learn sparse principal components, it’s well known that standard PCA will not work, especially in high dimensions, and therefore algorithms for sparse PCA are often studied as a separate endeavor. Various algorithms have been proposed for Sparse PCA over the years, but given how fundamental it is for applications in science, the limits of efficient algorithms are only partially understood. In this work, we study the limits of the powerful Sum of Squares (SoS) family of algorithms for Sparse PCA. SoS algorithms have recently revolutionized robust statistics, leading to breakthrough algorithms for long-standing open problems in machine learning, such as optimally learning mixtures of gaussians, robust clustering, robust regression, etc. Moreover, it is believed to be the optimal robust algorithm for many statistical problems. Therefore, for sparse PCA, it’s plausible that it can beat simpler algorithms such as diagonal thresholding that have been traditionally used. In this work, we show that this is not the case, by exhibiting strong tradeoffs between the number of samples required, the sparsity and the ambient dimension, for which SoS algorithms, even if allowed sub-exponential time, will fail to optimally recover the component. Our results are complemented by known algorithms in literature, thereby painting an almost complete picture of the behavior of efficient algorithms for sparse PCA. Since SoS algorithms encapsulate many algorithmic techniques such as spectral or statistical query algorithms, this solidifies the message that known algorithms are optimal for sparse PCA. Moreover, our techniques are strong enough to obtain similar tradeoffs for Tensor PCA, another important higher order variant of PCA with applications in topic modeling, video processing, etc. 1 Introduction Principal components analysis (PCA) [62] is a popular data processing and dimension reduction routine that is widely used. It has numerous applications in Machine Learning, Statistics, Engineering, Biology, etc. Given a dataset, PCA projects the data to a lower dimensional space spanned by the principal components. The intuition is that PCA sheds lower order information such as noise ⇤Equal contribution †A.P. was supported in part by NSF grant CCF-2008920 and G.R. was supported in part by NSF grants CCF-1816372 and CCF-2008920 36th Conference on Neural Information Processing Systems (NeurIPS 2022). but importantly preserves much of the intrinsic information present in the data that are needed for downstream tasks. However, despite great optimality properties, PCA has its drawbacks. Firstly, because the principal components are linear combinations of all the original variables, it’s notoriously hard to interpret them [84]. Secondly, it’s well known that PCA does not yield good estimators in high dimensional settings [13, 97, 61]. To address these issues, a variant of PCA known as Sparse PCA is often used. 
Sparse PCA searches for principal components of the data with the added constraint of sparsity. Concretely, consider given data v1, v2, . . . , vm 2 Rd. In Sparse PCA, we want to find the top principal component of the data under the extra constraint that it has sparsity at most k. That is, we want to find a vector v 2 Rd that maximizes Pm i=1hv, vii2 such that kvk0 k. Sparse PCA has enjoyed applications in a diverse range of fields ranging from medicine, computational biology, economics, image and signal processing, finance and of course, machine learning and statistics (e.g. [117, 89, 85, 115, 31, 2]). It’s worth noting that in some of these applications, other algorithms are also often used to learn statistical models with sparse structure, such as greedy algorithms (e.g. [60, 81, 59, 124]) and score-based algorithms (e.g. [28, 90, 107]) but in this work, we focus on the widely used sparse PCA technique. Sparse PCA comes with the important benefit that the learnt components are easier to interpret. A notable example of this is to recover topics from documents [32, 95]. Moreover, this has important benefits for algorithmic fairness in machine learning. A large volume of research has been devoted to study Sparse PCA and its variants. Algorithms have been proposed and studied by several works, e.g. [4, 83, 73, 33, 118, 20, 82, 34, 54, 23, 35, 29, 36]. For example, simple variants of PCA such as thresholding on top of standard PCA [61, 29] work well in certain parameter settings. This leads to the natural question whether more sophisticated algorithms can do better either for these settings or other parameter settings. On the other hand, there have been works from the inapproximability perspective as well (e.g. [20, 54, 23, 73, 35, 118], see Section 3.1 for a more detailed overview) In particular, a lot of these inapproximability results have relied on various other conjectures, due to the difficulty of proving unconditional lower bounds. Despite these prior works, exactly understanding the limits of efficient algorithms to this problem is still an active research area. This is natural considering the importance of sparse PCA and how fundamental it is to a multitude of applications. In this work, we focus on the powerful Sum-of-Squares (SoS) family of algorithms [113, 92, 96, 48] based on semidefinite programming relaxations. SoS algorithms have recently revolutionized robust machine learning, a branch of machine learning where the underlying dataset is noisy, with the noise being either random or adversarial. Robust machine learning has gotten a lot of attention in recent years because of its wide variety of use cases in machine learning and other downstream applications, including safety-critical ones like autonomous driving. For example, there has been a high volume of practical works in computer vision [114, 47, 121, 50, 112, 122, 42, 76] and speech recognition [57, 119, 106, 108, 78, 3, 91, 94]. In this important field, SoS has recently lead to breakthrough algorithms for long-standing open problems [16, 80, 51, 70, 43, 72, 14, 15, 111]. Highlights include - Robustly learning mixtures of high dimensional Gaussians. This is an extremely important problem that has been subjected to intense scrutiny, with a long line of work culminating in [16, 80]. - Efficient algorithms for the fundamental problems of regression [70], moment estimation [72], clustering [14] and subspace recovery [15] in the presence of outliers. 
Also known as robust machine learning, this setting is more akin to real life data which almost always has outliers or corrupted data. Moreover, SoS algorithms are believed to be the optimal robust algorithm for many statistical problems. In a different direction, SoS algorithms have led to the design of fast algorithms for problems such as tensor decomposition [53, 111]. Put more concretely, SoS algorithms, also known as the SoS hierarchy or the Lasserre hieararchy, offers a series of convex semidefinite programming (SDP) based relaxations to optimization problems. Due to its ability to capture a wide variety of algorithmic techniques, it has become a fundamental tool in algorithms and optimization. It was and still remains an extremely versatile tool for combinatorial optimization [46, 9, 49, 102]) but recently, it is being extensively used in Statistics and Machine Learning (apart from the references above, see also [17, 18, 52, 100]). Therefore, we ask (also raised by and posed as an open problem in the works [82, 54, 55]) Can Sum-of-Squares algorithms beat known algorithms for Sparse PCA? In this work, we show that SoS algorithms cannot beat known spectral algorithms, even if we allow sub-exponential time! Therefore, this suggests that currently used algorithms such as thresholding or other spectral algorithms are in a sense optimal for this problem. To prove our results, we will consider random instances of Sparse PCA and show that they are naturally hard for SoS. In particular, we focus on the Wishart random model of Sparse PCA. This model is a more natural modeling assumption compared to other random models that have been studied before, such as the Wigner random model. Note importantly that our model assumptions only strengthen our results because we are proving impossibility results. In other words, if SoS algorithms do not work for this restricted version of sparse PCA, then it will not work for more general models, e.g. with general covariance or multiple spikes. We now describe the model. The Wishart model of Sparse PCA, also known as the Spiked Covariance model, was originally proposed by [61]. In this model, we observe m vectors v1, . . . , vm 2 Rd from the distribution N (0, Id + uuT ) where u is a k-sparse unit vector, that is, kuk0 k and we would like to recover the principal component u. Here, the sparsity of a vector is the number of nonzero entries and is known as the signal-to-noise ratio. As the signal to noise ratio gets lower, it becomes harder and maybe even impossible to recover u since the signature left by u in the data becomes fainter. But it’s possible that this may be mitigated if the number of samples m grows. Therefore, there is a tradeoff between m, d, k and at play here. Algorithms proposed earlier have been able to recover u at various regimes. For example, if the number of samples is really large, namely m max( d , d 2 ), then standard PCA will work. But if this is not the case, we may still be able to recover u by assuming that the sparsity is not too large compared to the number of samples, namely m k 2 2 . To do this, we use a variant of standard PCA known as diagonal thresholding. Similar results have been obtained for various regimes, while some regimes have resisted attack to algorithms. Our results here complete the picture by showing that in the regimes that have so far resisted attack by efficient algorithms, the powerful Sum of Squares algorithms also cannot recover the principal component. 
We now state our theorem informally, with the formal statement in Theorem 3.1. Theorem 1.1. For the Wishart model of Sparse PCA, sub-exponential time SoS algorithms fail to recover the principal component when the number of samples m ⌧ min( d 2 , k2 2 ) . In particular, this theorem resolves an open problem posed by [82] and [54, 55]. In almost all other regimes, algorithms to recover the principal component u exist. We give a summary of such algorithms in Section 3, captured succinctly in Fig. 1. We say almost all other regimes because there is one interesting regime, namely d 2 m min(d,k) marked by light green in Fig. 1, where we can show that information theoretically, we cannot recover u but it’s possible to do hypothesis testing of Sparse PCA. That is, in this regime, we can distinguish purely random unspiked samples from the spiked samples. However, we will not be able to recover the principal component even if we use an exponential time bruteforce algorithm. We use our techniques to also obtain strong results for the related Tensor Principal components analysis (Tensor PCA) problem. Tensor PCA, originally introduced by [110], is a generalization of PCA to higher order tensors. Formally, given an order k tensor of the form u⌦k + B where u 2 Rn is a unit vector and B 2 R[n]k has independent Gaussian entries, we would like to recover the principal component u. Here, is known as the signal-to-noise ratio. Tensor PCA is a remarkably useful statistical and computational technique to exploit higher order moments of the data. It was originally envisaged to be applied in latent variable modeling and indeed, it has found multiple applications in this context (e.g. [5, 68, 69, 6]). Here, a tensor containing statistics of the input data is computed and then it’s decomposed in order to recover the latent variables. Because of the technique’s versatility, it has gathered a lot of attention in machine learning with applications in topic modeling, video processing, collaborative filtering, community detection, etc. (see e.g. [56, 7, 110, 5, 6, 38, 79] and references therein.) For Tensor PCA, similar to sparse PCA, there has been wide interest in the community to study algorithms (e.g. [11, 22, 52, 53, 110, 125, 120, 67, 8]) as well as approximability and hardness (e.g. [88, 75, 24, 54], see Section 3.2 for a more detailed overview). It’s worth noting that many of these hardness results are conditional, that is, they rely on various conjectures, sometimes stronger than P 6= NP. Moreover, there has been widespread interest from the statistics community as well, e.g. [58, 98, 77, 26, 27], due to fascinating connections to random matrix theory and statistical physics. In this work, we study the performance of sub-exponential time Sum of Squares algorithms for Tensor PCA. Our main result is stated informally below and formally in Theorem 3.2. Theorem 1.2. For Tensor PCA, sub-exponential time SoS algorithms fail to recover the principal component when the signal to noise ratio ⌧ n k4 . In particular, this resolves an open question posed by the works [52, 22, 54, 55]. Therefore, our main contributions can be summarized as follows 1. Despite the huge breakthroughs achieved by Sum-of-Squares algorithms in recent works on high dimensional statistics, we show barriers to it for the fundamental problems of Sparse PCA and Tensor PCA. 2. We achieve optimal tradeoffs compared to known algorithms, thereby painting a full picture of the computational thresholds of tractable algorithms. 
This suggests that existing algorithms are preferrable for PCA and its variants. 3. Prior lower bounds for these problems have either focused on weaker classes of algorithms or were obtained assuming other hardness conjectures, whereas we prove high degree sub-exponential time SoS lower bounds without relying on any conjectures. Acknowledgements and Bibliographic note We thank Sam Hopkins, Pravesh Kothari, Prasad Raghavendra, Tselil Schramm, David Steurer and Madhur Tulsiani for helpful discussions. We also thank Sam Hopkins and Pravesh Kothari for assistance in drafting the informal description of the machinery (Section C). Parts of this work have also appeared in [99, 104]. 2 Sum-of-Squares algorithms The Sum of Squares (SoS) hierarchy is a powerful class of algorithms that utilizes the power of semidefinite programming for optimization problems, which has achieved breakthrough algorithms for many problems in machine learning and statistics. In this section, we briefly describe the sum of squares hierarchy of algorithms. For a more detailed treament with an eye towards applications to machine learning and statistics, see the ICM survey [102] or the monograph [43]. Given an optimization problem given by a program with polynomial constraints, the SoS hierarchy of algorithms gives a family of convex relaxations parameterized by an integer known as its degree. As the degree gets higher, the running time to solve the convex relaxation increases but on the other hand, the relaxation gets stronger and hence serves as a better algorithm. This offers a smooth tradeoff between running time and the quality of approximation. In general, we can solve degree-Dsos SoS in nO(Dsos) time †. Therefore, constant degree SoS corresponds to polynomial time algorithms which in general translates to efficient algorithms. In this work, we focus on and show limitations of degree n" SoS which corresponds to subexponential running time. Suppose we are given multivariate polynomials p, g1, . . . , gm on n variables x1, . . . , xn (denoted collectively by x) taking real values. Consider the task: maximize p(x) such that g1(x) = 0, . . . , gm(x) = 0 In general, we could also allow inequality constraints, e.g., gi(x) 0. In this work, we only have equality constraints but much of the theory generalizes when we have inequality constraints instead. We now formally describe the Sum of Squares hierarchy of algorithms, via the so-called pseudoexpectation operators. Definition 2.1 (Pseudo-expectation values). Given multivariate polynomial constraints g1 = 0,. . . ,gm = 0 on n variables x1, . . . , xn, degree Dsos pseudo-expectation values are a linear map Ẽ from polynomials of x1, . . . , xn of degree at most Dsos to R satisfying the following conditions: 1. Ẽ[1] = 1, 2. Ẽ[f · gi] = 0 for every i 2 [m] and polynomial f such that deg(f · gi) Dsos. 3. Ẽ[f2] 0 for every polynomial f such that deg(f2) Dsos. Any linear map Ẽ satisfying the above properties is known as a degree Dsos pseudoexpectation operator satisfying the constraints g1 = 0, . . . , gm = 0. The intuition behind pseudo-expectation values is that the conditions on the pseudo-expectation values are conditions that would be satisfied by any actual expectation operator that takes expected values over a distribution of true optimal solutions, so optimizing over pseudo-expectation values gives a relaxation of the problem. †In pathological cases, there may be issues with bit complexity but that will not appear in our settings. 
For details, see [93, 101] Definition 2.2 (Degree Dsos SoS). The degree Dsos SoS relaxation for the polynomial optimization problem maximize p(x) such that g1(x) = 0, . . . , gm(x) = 0 is the program that maximizes Ẽ[p(x)] over all degree Dsos pseudoexpectation operators Ẽ satisfying the constraints g1 = 0, . . . , gm = 0. The main advantage is that the SoS relaxation can be efficiently solved via convex programming! In particular, Item 3 in Definition 2.1 is equivalent to a matrix being positive semidefinite, therefore the degree Dsos SoS relaxation can be done via semidefinite programming [116]. This meta-algorithm is known as a degree-Dsos SoS algorithm. This algorithm runs in nO(Dsos) time†. Therefore, constant degree SoS can be solved in polynomial time. In the next section, we apply SoS on PCA and formally state our results. 2.1 Related algorithmic techniques Statistical Query algorithms Statistical query algorithms are another popular restricted class of algorithms introduced by [66]. In this model, for a given data distribution, we are allowed to query expected value of functions. Concretely, for a dataset D on Rn, we have access to it via an oracle that given as query a function f : Rn ! [ 1, 1] returns Ex⇠D f(x) up to some additive adversarial error. SQ algorithms capture a broad class of algorithms in statistics and machine learning and have been used to study information-computation tradeoffs [109, 40, 30]. There has also been significant work trying to understand the limits of SQ algorithms (e.g. [40, 41, 34]). Formally, SQ algorithms and SoS are in general incomparable. However, the recent work [25] showed that under mild conditions, low-degree polynomial algorithms (defined next) and statistical query algorithms have equivalent power. But also, under these conditions, it’s easy to see that SoS is a more powerful algorithm than low degree algorithms and hence, SoS algorithms are stronger than statistical query algorithms. Therefore, SoS lower bounds as shown in this work give strictly stronger evidence of hardness than SQ lower bounds. Low degree polynomial algorithms In statistics, a hypothesis testing problem is a problem where the input is sampled from one of two distributions and we would like to identify which distribution it was sampled from. In this setting, a low degree polynomial algorithm is to compare the expectation of a low-degree polynomial to try and distinguish the two distributions. This method has been used to conjecture hardness thresholds for various problems [54, 55, 75]. However, under mild conditions, the SoS hierarchy of algorithms is more powerful than low degree polynomial algorithms [54] and therefore potentially yields better algorithms. Therefore, the SoS lower bounds shown in this work are stronger than low degree polynomial lower bounds as well. 3 Lower bounds for Sparse Principal Components Analysis In this section, we will state our main results for Sparse PCA and Tensor PCA. 3.1 Sparse PCA We recall the setting of the Wishart model of Sparse PCA: We are given v1, . . . , vm 2 Rd sampled from N (0, Id + uuT ) where u is a k-sparse unit vector and we wish to recover u. We will further assume that the entries of u are in { 1p k , 0, 1p k } chosen such that the sparsity is k (and hence, the norm is 1). Note importantly that this assumption is only strengthening our result: If SoS cannot solve this problem even for this specific u, it cannot do any better for the general problem with arbitrary u. Let the vectors from the given dataset be v1, . . . 
, vm. Let them form the rows of a matrix S 2 Rm⇥d. Let ⌃ = 1m Pm i=1 viv T i be the sample covariance matrix. Then the standard PCA objective is to maximize xT⌃x and recover x = p ku. Therefore, the sparse PCA problem can be rephrased as maximize m k · xT⌃x = 1 k mX i=1 hx, vii2 such that x3i = xi for all i d and dX i=1 x2i = k where the program variables are x1, . . . , xd. The constraint x3i = xi enforces that the entries of x are in { 1, 0, 1} and along with these constraints, the last condition Pd i=1 x 2 i = k enforces k-sparsity. Now, we will consider the series of convex relaxations for Sparse PCA obtained by SoS algorithms. In particular, we will consider SoS degree of d" for a small constant " > 0. Note that this corresponds to SoS algorithms of subexponential running time in the input size dO(1). Our main result states that for choices of m below a certain threshold, when the vectors v1, . . . , vm are sampled from the unspiked standard Gaussian N (0, Id), then sub-exponential time SoS algorithms will have optimal value at least m + m . This is also the optimal value of the objective in the case when the vectors v1, . . . , vm are indeed sampled from the spiked Gaussian N (0, Id + uuT ) and x = p ku. Therefore, SoS is unable to distinguish N (0, Id) from N (0, Id + uuT ) and hence cannot solve sparse PCA. Formally, Theorem 3.1. For all sufficiently small constants " > 0, suppose m d 1 " 2 ,m k2 " 2 , and for some A > 0, dA k d1 A", p p k d A", then for an absolute constant C > 0, with high probability over a random m⇥ d input matrix S with Gaussian entries, the sub-exponential time SoS algorithm of degree dC" for sparse PCA has optimal value at least m+m o(1). In other words, sub-exponential time SoS cannot certify that for a random dataset with Gaussian entries, there is no unit vector u with k nonzero entries and m · uT⌃u ⇡ m +m . The proof of Theorem 3.1 is deferred to the appendix. A few remarks are in order. 1. Note here that m+m is approximately the value of the objective when the input vectors v1, . . . , vm are indeed sampled from the spiked model N (0, Id + uuT ) and x = p ku. Therefore, sub-exponential time SoS is unable to distinguish a completely random distribution from the spiked distribution and hence is unable to solve sparse PCA. 2. The constant A can be thought of as ⇡ 0 and it appears for technical reasons, to ensure that we have sufficient decay in our bounds (see Remark K.8). In particular, most values of k, fall under the conditions of the theorem. Informally, our main result says that when m ⌧ min ⇣ d 2 , k2 2 ⌘ , then subexponential time SoS cannot recover the principal component u. This is the content of Theorem 1.1. Prior work on algorithms Due to its widespread importance, a tremendous amount of work has been devoted to obtaining algorithms for sparse PCA, both theoretically and practically, [4, 83, 73, 33, 118, 20, 82, 34, 54, 23, 35, 29, 36] to cite a few. We now place our result in the context of known algorithms for Sparse PCA and explain why it offers tight tradeoffs between approximability and inapproximability. Between this work and prior works, we completely understand the parameter regimes where sparse PCA is easy or conjectured to be hard up to polylogarithmic factors. In Fig. 1a and Fig. 1b, we assign the different parameter regimes into the following categories. - Diagonal thresholding: In this regime, Diagonal thresholding [61, 4] recovers the sparse vector. 
Covariance thresholding [73, 33] and SoS algorithms [37] can also be used in this regime; covariance thresholding has a better dependence on logarithmic factors, and the SoS algorithms work in the presence of adversarial errors.
- Vanilla PCA: Vanilla (i.e., standard) PCA can recover the vector; that is, we do not need to use the fact that the vector is sparse (see, e.g., [21, 37]).
- Spectral: An efficient spectral algorithm recovers the sparse vector (see, e.g., [37]).
- Spectral* (can test but not recover): A simple spectral algorithm can solve the hypothesis testing version of Sparse PCA, but it is information-theoretically impossible to recover the sparse vector [37, Appendix E].
- Hard: A regime where it is conjectured to be hard for algorithms to recover the sparse principal component. We discuss this in more detail below.

In Fig. 1a and Fig. 1b, the regimes corresponding to Diagonal thresholding, Vanilla PCA and Spectral are dark green, while the regimes corresponding to Spectral* and Hard are light green and red, respectively.

Prior work on hardness. Prior works have explored statistical query lower bounds [25], basic SDP lower bounds [73], reductions from conjectured hard problems [21, 20, 23, 44, 118], lower bounds via the low-degree conjecture [35, 37], lower bounds via statistical physics [35, 12], etc. We note that threshold behaviors similar to ours have been predicted by [37], but importantly, they assume a conjecture known as the low-degree likelihood conjecture. Similarly, many of these other lower bounds rely on various conjectures; to put this in context, the low-degree likelihood conjecture is a stronger assumption than P ≠ NP. In contrast, our results are unconditional and do not assume any conjectures.

Compared to these other lower bounds, there have been only two prior works on lower bounds against SoS algorithms [73, 21, 82], and these hold only for degree 2 and degree 4 SoS. In particular, degree 2 SoS lower bounds were studied in [73, 21], although they do not state it this way. And [82] obtained degree 4 SoS lower bounds, but these were very lossy, i.e., they hold only for a strict subset of the Hard regime m ≪ k²/λ² and m ≪ d/λ². Moreover, the ideas used in these prior works do not generalize to higher degrees; the lack of other SoS lower bounds can be attributed to the difficulty of proving them. In this paper, we vastly strengthen these known results and show almost-tight lower bounds for SoS algorithms of degree d^ε, which corresponds to sub-exponential running time d^{d^{O(ε)}}. We note that SoS algorithms get stronger as the degree increases; therefore, our results immediately imply these prior results, and even in the special case of degree 4 SoS, we improve the known lossy bounds. In summary, Theorem 3.1 subsumes all these earlier known results and is a vast improvement over prior known SoS lower bounds, providing compelling evidence for the hardness of Sparse PCA in this parameter range.

The work [54] also states SoS lower bounds for Sparse PCA, but it differs from our work in three important aspects. First, they handle the related but qualitatively different Wigner model of Sparse PCA. Their techniques fail for the Wishart model of Sparse PCA, which is more natural in practice. We overcome this shortcoming and work with the Wishart model; we emphasize that their techniques are insufficient to handle this generality, and overcoming this is far from being a mere technicality.
On the other hand, our techniques can easily recover their results. Second, while they sketch a high-level proof overview for their lower bound, they do not give a proof; our proofs are fully explicit. Finally, they assume the input distribution has entries in {±1}, that is, they work with the ±1 variant of PCA, whereas we work with the more realistic setting where the distribution is N(0, 1). Again, our techniques can easily recover their results as well.

3.2 Tensor PCA

We now state our main result for Tensor PCA. Let k ≥ 2 be an integer. We are given an order-k tensor A of the form A = λ·u^{⊗k} + B, where u ∈ ℝ^n is a unit vector and B ∈ ℝ^{[n]^k} has independent Gaussian entries, and we would like to recover the principal component u. Tensor PCA can be rephrased as the program

maximize ⟨A, x^{⊗k}⟩ = ⟨A, x ⊗ … ⊗ x (k times)⟩  such that  Σ_{i=1}^n x_i² = 1,

where the program variables are x_1, …, x_n. The principal component u will then just be the returned solution x. We again consider sub-exponential time SoS algorithms, in particular degree n^ε SoS, for this problem; this is sub-exponential time because the input size is n^{O(1)}. We show that if the signal-to-noise ratio λ is below a certain threshold, then sub-exponential time SoS on the unspiked input A ∼ N(0, I_{[n]^k}) will have optimal value close to λ, which is also the optimal objective value in the spiked case when A = λ·u^{⊗k} + B, B ∼ N(0, I_{[n]^k}) and x = u. In other words, SoS cannot distinguish the unspiked and spiked distributions and hence cannot recover the principal component u.

Theorem 3.2. Let k ≥ 2 be an integer. For all sufficiently small ε > 0, if λ ≤ n^{k/4 − ε}, then for an absolute constant C > 0, with high probability over a random tensor A ∼ N(0, I_{[n]^k}), the sub-exponential time SoS algorithm of degree n^{Cε} for Tensor PCA has optimal value at least λ(1 − o(1)).

Therefore, sub-exponential time SoS cannot certify that, for a random tensor A ∼ N(0, I_{[n]^k}), there is no unit vector u such that ⟨A, u ⊗ … ⊗ u (k times)⟩ ≈ λ. The proof of Theorem 3.2 is deferred to the appendix. We again remark that when the tensor A is actually sampled from the spiked model A = λ·u^{⊗k} + B, the optimal objective value is approximately λ when x = u. Therefore, this shows that sub-exponential time SoS algorithms cannot solve Tensor PCA. Informally, the theorem says that when the signal-to-noise ratio λ ≪ n^{k/4}, SoS algorithms cannot solve Tensor PCA, as stated in Theorem 1.2.

Prior work. Algorithms for Tensor PCA have been studied in [11, 22, 52, 53, 110, 125, 120, 67, 8]. It was shown in [22] that the degree-q SoS algorithm certifies an upper bound of 2^{O(k)} (n·polylog(n))^{k/4} / q^{k/4 − 1/2} for the Tensor PCA problem. When q = n^ε, this gives an upper bound of n^{k/4 − O(ε)}. Therefore, our result is tight, giving insight into the computational threshold for Tensor PCA. Lower bounds for Tensor PCA have been studied in various forms, including statistical query lower bounds [25, 39], reductions from conjectured hard problems [123, 24], lower bounds from the low-degree conjecture [54, 55, 75], and evidence based on the landscape behavior [10, 88]. Compared to many of these works, which rely on various conjectures, our lower bounds are unconditional and do not rely on any conjectures. In [54], similarly to Sparse PCA, they state a similar theorem for a different variant of Tensor PCA. However, they do not give a proof, whereas we give explicit proofs.
In particular, they state their result without proof for the ±1 variant of Tensor PCA, whereas we work with the more realistic setting where the distribution is N(0, 1). We remark that their techniques do not recover our results, but our techniques can recover theirs.

4 Related work

As stated in their respective sections, there have been some prior works on (degree at most 4) SoS lower bounds for Sparse and Tensor PCA, as well as various other lower bounds that mostly rely on hardness conjectures, some of which are stronger than P ≠ NP. The lack of results on higher-degree SoS, compared to other models, can be attributed to the difficulty of proving such lower bounds, which we undertake in this work. Sum-of-Squares lower bounds have been obtained for various problems of interest, such as the Sherrington-Kirkpatrick Hamiltonian [45, 74, 63, 104], Maximum Cut [87], Maximum Independent Set [64, 104], Constraint Satisfaction Problems [71], Densest k-Subgraph [65], etc. The techniques used in this work are closely related to [19], which proved Sum-of-Squares lower bounds for the Planted Clique problem. Some of the ideas and techniques we employ, namely pseudo-calibration and graph matrices, have also appeared in other works [87, 103, 45, 1, 64, 105, 63, 65]. It is plausible that our generalized techniques could be applied to other high-dimensional statistical problems, which we leave for future work.

5 Conclusion

In this work, we show sub-exponential time lower bounds for the powerful Sum-of-Squares algorithms for Sparse PCA and Tensor PCA. With the ever-growing research into better algorithms for Sparse PCA [4, 83, 73, 33, 118, 20, 82, 34, 54, 23, 35, 36] and Tensor PCA [11, 22, 52, 53, 110, 125, 120, 67, 8], combined with the recent breakthroughs of Sum-of-Squares algorithms in statistics [16, 80, 51, 70, 43, 72, 14, 15, 111], it is an important goal to understand whether Sum-of-Squares algorithms can beat state-of-the-art algorithms for these problems. In this work, we answer this negatively and show that even sub-exponential time SoS algorithms cannot do much better than relatively simple algorithms. In particular, we settle open problems raised by [82, 54, 55, 52, 22]. Our work does not handle exponential time Ω(n)-degree SoS, so analyzing those algorithms is a potential future direction. Another important direction is to understand the limits of powerful algorithms such as SoS for other statistical problems of importance, such as mixture modeling or clustering. For algorithm designers, our results illustrate the intrinsic difficulty of PCA problems and shed light on the information-computation gaps exhibited by PCA. For practitioners, they provide strong evidence that existing algorithms work relatively well.
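Since the take-away for practitioners is that the simple algorithms already in use remain the methods of choice in the tractable regimes, a minimal numerical sketch of the diagonal-thresholding idea [61, 4] may help make this concrete. This is an illustrative simplification of those algorithms, not their exact procedure, and all parameter values are made up for the demo.

import numpy as np

def diagonal_thresholding(samples, k):
    # Sample covariance (rows of `samples` are the observations v_i).
    cov = samples.T @ samples / samples.shape[0]
    # Spiked coordinates have inflated variance ~ 1 + lam * u_i^2,
    # so keep the k coordinates with the largest diagonal entries.
    support = np.argsort(np.diag(cov))[-k:]
    # Top eigenvector of the covariance restricted to that support.
    vals, vecs = np.linalg.eigh(cov[np.ix_(support, support)])
    u_hat = np.zeros(samples.shape[1])
    u_hat[support] = vecs[:, -1]
    return u_hat

rng = np.random.default_rng(0)
d, k, lam, m = 200, 10, 3.0, 5000   # m comfortably above k^2 / lam^2
u = np.zeros(d)
u[rng.choice(d, size=k, replace=False)] = rng.choice([-1.0, 1.0], size=k) / np.sqrt(k)
# v_i ~ N(0, I_d + lam * u u^T), realized as v_i = g_i + sqrt(lam) * z_i * u.
V = rng.standard_normal((m, d)) + np.sqrt(lam) * rng.standard_normal((m, 1)) * u
print(abs(diagonal_thresholding(V, k) @ u))   # correlation with u, close to 1

In the diagonal-thresholding regime m ≥ k²/λ² the spiked diagonal entries (about 1 + λ/k) separate cleanly from the unspiked ones, which is exactly what the thresholding step exploits.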
1. What is the focus and contribution of the paper regarding Sum-of-Squares algorithms? 2. What are the strengths of the proposed approach, particularly in terms of theoretical analysis? 3. Do you have any concerns or questions about the paper's claims and comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
Sum-of-squares is a family of algorithms that captures many known techniques for combinatorial optimization problems, and it has recently been shown to be very powerful for many ML/robust statistics problems. In the theoretical computer science community, some have seen it as a proxy for strong algorithmic impossibility results, especially in the average-case setting. This paper rigorously establishes strong Sum-of-Squares lower bounds at degree n^\eps for several problems in the colloquial "dense-graph" regime, including tensor PCA and sparse PCA, both of which have been claimed by a previous work of Hopkins et al. [HKPRSS17] but without an explicit, spelled-out proof. On the technical level, it builds upon the SoS lower bound for Planted Clique by Barak et al. [BHKKMP16], using the recursive factorization framework to show PSDness of the candidate moment matrix obtained from pseudo-calibration, with intense matrix analysis. For the specific hardness this work obtains, not much was known before except degree-2 and degree-4 lower bounds in the SoS realm, and this work obtains almost tight hardness matching known algorithms.

Strengths And Weaknesses
Strengths: 1) In general, this is a detailed, carefully written paper; 2) Though the hardness result may not be too surprising, the almost tight hardness result of this work is certainly nice to have, and necessary for the community considering how universal these two problems are in ML practice; 3) Their generalization of the framework from planted clique is likely to find applications in other problems.
Weaknesses: As the paper is filled with details, it would be nice to incorporate more exposition in the technical section. It is hard to imagine someone going through this generalization as it stands; as technical as it already is, more intuition between the technical proofs would help readability. It is also not immediately clear what technical challenges the authors claim to have overcome relative to planted clique: they note that the method from planted clique does not immediately work, but the challenges are not explicitly stated.

Questions
1. "Their techniques fail for the Wishart model of Sparse PCA, which is more natural in practice. We overcome this shortcoming and work with the Wishart model. We emphasize that their techniques are insufficient to handle this generality and overcoming this is far from being a mere technicality." As surprising as it may be, this is quite a strong and cryptic statement. Apologies if I missed something in the supplementary material, but is there a way to highlight/encapsulate how your technique differs, should one try to apply the recursive factorization framework from planted clique in a somewhat straightforward manner?
2. For the Sparse PCA constraint of being supported on k coordinates, is this constraint exactly satisfied by the pseudo-expectation or only on average? I suppose it's the former, and it seems nice (and even important) to point this out in the work, if only in view of the struggle to satisfy exact constraints in SoS lower bounds.

Limitations
N/A
NIPS
Title
Sub-exponential time Sum-of-Squares lower bounds for Principal Components Analysis

Abstract
Principal Components Analysis (PCA) is a dimension-reduction technique widely used in machine learning and statistics. However, due to the dependence of the principal components on all the dimensions, the components are notoriously hard to interpret. Therefore, a variant known as sparse PCA is often preferred. Sparse PCA learns principal components of the data but enforces that such components must be sparse. This has applications in diverse fields such as computational biology and image processing. To learn sparse principal components, it is well known that standard PCA will not work, especially in high dimensions, and therefore algorithms for sparse PCA are often studied as a separate endeavor. Various algorithms have been proposed for Sparse PCA over the years, but given how fundamental it is for applications in science, the limits of efficient algorithms are only partially understood. In this work, we study the limits of the powerful Sum of Squares (SoS) family of algorithms for Sparse PCA. SoS algorithms have recently revolutionized robust statistics, leading to breakthrough algorithms for long-standing open problems in machine learning, such as optimally learning mixtures of Gaussians, robust clustering, robust regression, etc. Moreover, SoS is believed to be the optimal robust algorithm for many statistical problems. Therefore, for sparse PCA, it is plausible that it can beat simpler algorithms such as diagonal thresholding that have traditionally been used. In this work, we show that this is not the case, by exhibiting strong tradeoffs between the number of samples required, the sparsity and the ambient dimension, for which SoS algorithms, even if allowed sub-exponential time, will fail to optimally recover the component. Our results are complemented by known algorithms in the literature, thereby painting an almost complete picture of the behavior of efficient algorithms for sparse PCA. Since SoS algorithms encapsulate many algorithmic techniques such as spectral or statistical query algorithms, this solidifies the message that known algorithms are optimal for sparse PCA. Moreover, our techniques are strong enough to obtain similar tradeoffs for Tensor PCA, another important higher order variant of PCA with applications in topic modeling, video processing, etc.

1 Introduction

Principal components analysis (PCA) [62] is a popular data processing and dimension reduction routine that is widely used. It has numerous applications in Machine Learning, Statistics, Engineering, Biology, etc. Given a dataset, PCA projects the data to a lower dimensional space spanned by the principal components. The intuition is that PCA sheds lower order information such as noise but importantly preserves much of the intrinsic information present in the data that is needed for downstream tasks. However, despite great optimality properties, PCA has its drawbacks. Firstly, because the principal components are linear combinations of all the original variables, they are notoriously hard to interpret [84]. Secondly, it is well known that PCA does not yield good estimators in high dimensional settings [13, 97, 61]. To address these issues, a variant of PCA known as Sparse PCA is often used.
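Both drawbacks are easy to see numerically. The following toy sketch (hypothetical parameter values, not taken from the paper) runs standard PCA on data drawn from the spiked model defined later in this introduction: the estimated component is dense, and with fewer samples than dimensions it is nearly uncorrelated with the sparse truth.

import numpy as np

rng = np.random.default_rng(1)
d, k, m = 1000, 10, 200                 # many more dimensions than samples
u = np.zeros(d)
u[:k] = 1.0 / np.sqrt(k)                # a k-sparse ground-truth component
X = rng.standard_normal((m, d)) + rng.standard_normal((m, 1)) * u  # lambda = 1

v = np.linalg.eigh(X.T @ X / m)[1][:, -1]   # standard PCA: top eigenvector

print(np.count_nonzero(np.abs(v) > 1e-3))   # ~d nonzeros: dense, hard to interpret
print(abs(v @ u))                           # far from 1: poor estimate when m << d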
Sparse PCA searches for principal components of the data with the added constraint of sparsity. Concretely, consider data v_1, v_2, …, v_m ∈ ℝ^d. In Sparse PCA, we want to find the top principal component of the data under the extra constraint that it has sparsity at most k. That is, we want to find a vector v ∈ ℝ^d that maximizes Σ_{i=1}^m ⟨v, v_i⟩² such that ‖v‖₀ ≤ k. Sparse PCA has enjoyed applications in a diverse range of fields, including medicine, computational biology, economics, image and signal processing, finance and, of course, machine learning and statistics (e.g., [117, 89, 85, 115, 31, 2]). It is worth noting that in some of these applications, other algorithms are also often used to learn statistical models with sparse structure, such as greedy algorithms (e.g., [60, 81, 59, 124]) and score-based algorithms (e.g., [28, 90, 107]), but in this work we focus on the widely used sparse PCA technique. Sparse PCA comes with the important benefit that the learnt components are easier to interpret; a notable example is recovering topics from documents [32, 95]. Moreover, this has important benefits for algorithmic fairness in machine learning.

A large volume of research has been devoted to studying Sparse PCA and its variants. Algorithms have been proposed and studied by several works, e.g., [4, 83, 73, 33, 118, 20, 82, 34, 54, 23, 35, 29, 36]. For example, simple variants of PCA, such as thresholding on top of standard PCA [61, 29], work well in certain parameter settings. This leads to the natural question of whether more sophisticated algorithms can do better, either in these settings or in other parameter settings. On the other hand, there have been works from the inapproximability perspective as well (e.g., [20, 54, 23, 73, 35, 118]; see Section 3.1 for a more detailed overview). In particular, many of these inapproximability results have relied on various other conjectures, due to the difficulty of proving unconditional lower bounds. Despite these prior works, exactly understanding the limits of efficient algorithms for this problem is still an active research area. This is natural considering the importance of sparse PCA and how fundamental it is to a multitude of applications.

In this work, we focus on the powerful Sum-of-Squares (SoS) family of algorithms [113, 92, 96, 48] based on semidefinite programming relaxations. SoS algorithms have recently revolutionized robust machine learning, a branch of machine learning where the underlying dataset is noisy, with the noise being either random or adversarial. Robust machine learning has received a lot of attention in recent years because of its wide variety of use cases in machine learning and other downstream applications, including safety-critical ones like autonomous driving. For example, there has been a high volume of practical work in computer vision [114, 47, 121, 50, 112, 122, 42, 76] and speech recognition [57, 119, 106, 108, 78, 3, 91, 94]. In this important field, SoS has recently led to breakthrough algorithms for long-standing open problems [16, 80, 51, 70, 43, 72, 14, 15, 111]. Highlights include:
- Robustly learning mixtures of high dimensional Gaussians, an extremely important problem that has been subjected to intense scrutiny, with a long line of work culminating in [16, 80].
- Efficient algorithms for the fundamental problems of regression [70], moment estimation [72], clustering [14] and subspace recovery [15] in the presence of outliers.
This setting, known as robust machine learning, is more akin to real-life data, which almost always contains outliers or corrupted entries. Moreover, SoS algorithms are believed to be the optimal robust algorithms for many statistical problems. In a different direction, SoS algorithms have led to the design of fast algorithms for problems such as tensor decomposition [53, 111].

Put more concretely, SoS algorithms, also known as the SoS hierarchy or the Lasserre hierarchy, offer a series of convex semidefinite programming (SDP) based relaxations of optimization problems. Due to its ability to capture a wide variety of algorithmic techniques, the hierarchy has become a fundamental tool in algorithms and optimization. It was and remains an extremely versatile tool for combinatorial optimization [46, 9, 49, 102], and recently it has been used extensively in Statistics and Machine Learning (apart from the references above, see also [17, 18, 52, 100]). Therefore, we ask (a question also raised and posed as an open problem in [82, 54, 55]):

Can Sum-of-Squares algorithms beat known algorithms for Sparse PCA?

In this work, we show that SoS algorithms cannot beat known spectral algorithms, even if we allow sub-exponential time! This suggests that currently used algorithms, such as thresholding or other spectral algorithms, are in a sense optimal for this problem. To prove our results, we consider random instances of Sparse PCA and show that they are naturally hard for SoS. In particular, we focus on the Wishart random model of Sparse PCA, which is a more natural modeling assumption than other random models that have been studied before, such as the Wigner random model. Note importantly that our model assumptions only strengthen our results, because we are proving impossibility results: if SoS algorithms do not work for this restricted version of sparse PCA, they will not work for more general models, e.g., with general covariance or multiple spikes.

We now describe the model. The Wishart model of Sparse PCA, also known as the Spiked Covariance model, was originally proposed by [61]. In this model, we observe m vectors v_1, …, v_m ∈ ℝ^d from the distribution N(0, I_d + λuu^T), where u is a k-sparse unit vector (that is, ‖u‖₀ ≤ k), and we would like to recover the principal component u. Here, the sparsity of a vector is its number of nonzero entries, and λ is known as the signal-to-noise ratio. As the signal-to-noise ratio gets lower, it becomes harder, and maybe even impossible, to recover u, since the signature left by u in the data becomes fainter; but this may be mitigated if the number of samples m grows. Therefore, there is a tradeoff between m, d, k and λ at play here. Algorithms proposed earlier recover u in various regimes. For example, if the number of samples is very large, namely m ≥ max(d/λ, d/λ²), then standard PCA will work. If this is not the case, we may still be able to recover u by assuming that the sparsity is not too large compared to the number of samples, namely m ≥ k²/λ²; to do this, we use a variant of standard PCA known as diagonal thresholding. Similar results have been obtained for various regimes, while some regimes have resisted algorithmic attack. Our results complete the picture by showing that in the regimes that have so far resisted attack by efficient algorithms, the powerful Sum of Squares algorithms also cannot recover the principal component.
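The thresholds just quoted can be summarized in a few lines of code. The sketch below is a coarse paraphrase of those conditions with constants and polylogarithmic factors dropped, so it is illustrative only and not a precise restatement of any theorem.

def sparse_pca_regime(m, d, k, lam):
    # Coarse regime map for the spiked Wishart model, paraphrasing the
    # thresholds quoted above (constants and polylog factors dropped).
    if m >= max(d / lam, d / lam**2):
        return "vanilla PCA recovers u"
    if m >= k**2 / lam**2:
        return "diagonal thresholding recovers u"
    if m < min(d / lam**2, k**2 / lam**2):
        return "conjectured hard; this paper: even degree-d^eps SoS fails"
    return "intermediate regime (see Fig. 1)"

print(sparse_pca_regime(m=950, d=1000, k=30, lam=1.0))   # diagonal thresholding
print(sparse_pca_regime(m=100, d=1000, k=30, lam=1.0))   # conjectured hard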
We now state our theorem informally, with the formal statement in Theorem 3.1.

Theorem 1.1. For the Wishart model of Sparse PCA, sub-exponential time SoS algorithms fail to recover the principal component when the number of samples m ≪ min(d/λ², k²/λ²).

In particular, this theorem resolves an open problem posed by [82] and [54, 55]. In almost all other regimes, algorithms to recover the principal component u exist; we give a summary of such algorithms in Section 3, captured succinctly in Fig. 1. We say almost all other regimes because there is one interesting regime, namely d/λ² ≤ m ≤ min(d, k), marked by light green in Fig. 1, where we can show that, information-theoretically, we cannot recover u but it is possible to do hypothesis testing for Sparse PCA. That is, in this regime we can distinguish purely random unspiked samples from spiked samples; however, we cannot recover the principal component even with an exponential time brute-force algorithm.

We use our techniques to also obtain strong results for the related Tensor Principal Components Analysis (Tensor PCA) problem. Tensor PCA, originally introduced by [110], is a generalization of PCA to higher order tensors. Formally, given an order-k tensor of the form λ·u^{⊗k} + B, where u ∈ ℝ^n is a unit vector and B ∈ ℝ^{[n]^k} has independent Gaussian entries, we would like to recover the principal component u. Here, λ is known as the signal-to-noise ratio. Tensor PCA is a remarkably useful statistical and computational technique for exploiting higher order moments of the data. It was originally envisaged for latent variable modeling and indeed has found multiple applications in this context (e.g., [5, 68, 69, 6]): a tensor containing statistics of the input data is computed and then decomposed in order to recover the latent variables. Because of the technique's versatility, it has gathered a lot of attention in machine learning, with applications in topic modeling, video processing, collaborative filtering, community detection, etc. (see, e.g., [56, 7, 110, 5, 6, 38, 79] and references therein).

For Tensor PCA, similarly to sparse PCA, there has been wide interest in the community in studying algorithms (e.g., [11, 22, 52, 53, 110, 125, 120, 67, 8]) as well as approximability and hardness (e.g., [88, 75, 24, 54]; see Section 3.2 for a more detailed overview). It is worth noting that many of these hardness results are conditional, that is, they rely on various conjectures, sometimes stronger than P ≠ NP. Moreover, there has been widespread interest from the statistics community as well, e.g., [58, 98, 77, 26, 27], due to fascinating connections to random matrix theory and statistical physics. In this work, we study the performance of sub-exponential time Sum of Squares algorithms for Tensor PCA. Our main result is stated informally below and formally in Theorem 3.2.

Theorem 1.2. For Tensor PCA, sub-exponential time SoS algorithms fail to recover the principal component when the signal-to-noise ratio λ ≪ n^{k/4}.

In particular, this resolves an open question posed by the works [52, 22, 54, 55]. Our main contributions can be summarized as follows:

1. Despite the huge breakthroughs achieved by Sum-of-Squares algorithms in recent works on high dimensional statistics, we show barriers to them for the fundamental problems of Sparse PCA and Tensor PCA.
2. We achieve optimal tradeoffs compared to known algorithms, thereby painting a full picture of the computational thresholds of tractable algorithms.
This suggests that existing algorithms are preferable for PCA and its variants.
3. Prior lower bounds for these problems have either focused on weaker classes of algorithms or were obtained assuming other hardness conjectures, whereas we prove high degree, sub-exponential time SoS lower bounds without relying on any conjectures.

Acknowledgements and Bibliographic note. We thank Sam Hopkins, Pravesh Kothari, Prasad Raghavendra, Tselil Schramm, David Steurer and Madhur Tulsiani for helpful discussions. We also thank Sam Hopkins and Pravesh Kothari for assistance in drafting the informal description of the machinery (Section C). Parts of this work have also appeared in [99, 104].

2 Sum-of-Squares algorithms

The Sum of Squares (SoS) hierarchy is a powerful class of algorithms that utilizes the power of semidefinite programming for optimization problems and has achieved breakthrough algorithms for many problems in machine learning and statistics. In this section, we briefly describe the hierarchy; for a more detailed treatment with an eye towards applications to machine learning and statistics, see the ICM survey [102] or the monograph [43].

Given an optimization problem specified by a program with polynomial constraints, the SoS hierarchy gives a family of convex relaxations parameterized by an integer known as its degree. As the degree gets higher, the running time to solve the convex relaxation increases, but the relaxation gets stronger and hence serves as a better algorithm. This offers a smooth tradeoff between running time and quality of approximation. In general, we can solve degree-D_sos SoS in n^{O(D_sos)} time†. Therefore, constant degree SoS corresponds to polynomial time and, in general, efficient algorithms. In this work, we focus on and show limitations of degree n^ε SoS, which corresponds to subexponential running time.

Suppose we are given multivariate polynomials p, g_1, …, g_m on n variables x_1, …, x_n (denoted collectively by x) taking real values. Consider the task:

maximize p(x) such that g_1(x) = 0, …, g_m(x) = 0.

In general, we could also allow inequality constraints, e.g., g_i(x) ≥ 0. In this work, we only have equality constraints, but much of the theory generalizes when we have inequality constraints instead. We now formally describe the Sum of Squares hierarchy of algorithms via the so-called pseudo-expectation operators.

Definition 2.1 (Pseudo-expectation values). Given multivariate polynomial constraints g_1 = 0, …, g_m = 0 on n variables x_1, …, x_n, degree D_sos pseudo-expectation values are a linear map Ẽ from polynomials in x_1, …, x_n of degree at most D_sos to ℝ satisfying the following conditions:
1. Ẽ[1] = 1;
2. Ẽ[f · g_i] = 0 for every i ∈ [m] and polynomial f such that deg(f · g_i) ≤ D_sos;
3. Ẽ[f²] ≥ 0 for every polynomial f such that deg(f²) ≤ D_sos.

Any linear map Ẽ satisfying the above properties is known as a degree D_sos pseudo-expectation operator satisfying the constraints g_1 = 0, …, g_m = 0. The intuition behind pseudo-expectation values is that these conditions would be satisfied by any actual expectation operator that takes expected values over a distribution of true optimal solutions, so optimizing over pseudo-expectation values gives a relaxation of the problem.

†In pathological cases, there may be issues with bit complexity, but these will not appear in our settings; for details, see [93, 101].
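To make Definition 2.1 concrete, here is a minimal sketch (assuming the cvxpy package with its bundled SDP solver) of the simplest case: restricted to second moments, a degree-2 pseudo-expectation over the single constraint Σ_i x_i² = 1 is just a positive semidefinite matrix with unit trace, so the relaxation is a small semidefinite program.

import cvxpy as cp
import numpy as np

n = 20
A = np.random.default_rng(2).standard_normal((n, n))
A = (A + A.T) / 2                            # objective p(x) = x^T A x

# M[i, j] = E~[x_i x_j]: Item 3 of Definition 2.1 says M must be PSD,
# and the constraint sum_i x_i^2 - 1 = 0 becomes trace(M) = 1.
M = cp.Variable((n, n), PSD=True)
problem = cp.Problem(cp.Maximize(cp.trace(A @ M)), [cp.trace(M) == 1])
problem.solve()

print(problem.value)                         # SoS value: a certified upper bound
print(np.linalg.eigvalsh(A).max())           # true optimum; degree 2 is tight here

For this particular problem (maximizing a quadratic form over the unit sphere) degree 2 already certifies the exact optimum, the largest eigenvalue; the lower bounds in this paper concern problems where even degree d^ε fails to certify the truth.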
1. What is the main contribution of the paper regarding sum of squares (SoS) algorithms for sparse principal component analysis (PCA) and tensor PCA? 2. What are the strengths and weaknesses of the paper, particularly in terms of its theoretical analysis and relevance to practice? 3. Do you have any questions or concerns regarding the assumptions and limitations of the proposed theory? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any suggestions or recommendations for improving the paper or its applications in practice?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This is a theoretical work that studies the applicability of sum of squares (SoS) algorithms for sparse principal component analysis (PCA) and tensor PCA. Specifically, this work provides theoretical proof that despite the success of SoS algorithms in high dimensional statistics, they underperform when applied to sparse PCA and tensor PCA. The paper claims that SoS algorithms cannot beat traditional sparse PCA algorithms, even if allowed sub-exponential time.

Strengths And Weaknesses
Strengths: The text is easy to follow. Visual illustrations (e.g., in Figure 1) assist in understanding the claims of the paper. The paper is theoretical and appears to propose a broad result on the usefulness of SoS algorithms for sparse PCA and tensor PCA.
Weaknesses: The flow of the paper can be further improved. Is the proposed theory relevant in practice? Please elaborate. For example, theorem 3.1 assumes a range for m and theorem 3.2 assumes an upper limit for \lambda. Are these valid assumptions in practice? Figures are referred to as "Fig.". Please correct them. Equations are not referenced. Please number them for easy reference. Line 124 contains tensor product notation that is not explained. Please provide an explanation.

Questions
For sparse PCA, it seems like this work focuses on recovering a single principal component. If so, could the authors clarify why they focus on just a single component and how one could extend this to multiple components? The paper claims that the proposed theory is more general compared to existing results because the existing results assume an input distribution over ±1, whereas the proposed work relies on the distribution N(0,1). How does this make the proposed theory more general? Please provide intuition in terms of real-world examples. Have SoS algorithms been used for sparse PCA in the past? If so, do they perform well? Can the authors comment on this? Although the proposed theory is useful in itself, what is its practical applicability? It would be nice to see an empirical demonstration of the theory in practice. For example, demonstrate on real-world (or synthetic) data that sub-exponential time SoS algorithms are unable to recover the sparse PCA solution while simple traditional algorithms succeed. Repeated word "devise" in line 383. Please remove one of them.

Limitations
The paper identifies relevant future directions of research based on limitations of the current work.
NIPS
Title
Explaining Deep Learning Models -- A Bayesian Non-parametric Approach

Abstract
Understanding and interpreting how machine learning (ML) models make decisions has been a big challenge. While recent research has proposed various technical approaches that provide some clues as to how an ML model makes individual predictions, they cannot provide users with the ability to inspect a model as a complete entity. In this work, we propose a novel technical approach that augments a Bayesian non-parametric regression mixture model with multiple elastic nets. Using the enhanced mixture model, we can extract generalizable insights for a target model through a global approximation. To demonstrate the utility of our approach, we evaluate it on different ML models in the context of image recognition. The empirical results indicate that our proposed approach not only outperforms state-of-the-art techniques in explaining individual decisions but also provides users with the ability to discover the vulnerabilities of the target ML models.

1 Introduction

Compared with relatively simple learning techniques such as decision trees and K-nearest neighbors, it is well acknowledged that complex learning models, particularly deep neural networks (DNNs), usually demonstrate superior performance in classification and prediction. However, they are almost completely opaque, even to the engineers that build them [20]. Presumably for this reason, they have not yet been widely adopted in critical problem domains, such as diagnosing deadly diseases [13] and making million-dollar trading decisions [14].

To address this problem, prior research proposes to derive an interpretable explanation for the output of a DNN. With that, people could understand, trust and effectively manage a deep learning model. From a technical perspective, this can be interpreted as pinpointing the most important features in the input of a deep learning model. The techniques designed and developed in the past primarily focus on two kinds of methods: (1) whitebox explanation, which derives an interpretation for a deep learning model through forward or backward propagation approaches [26, 36], and (2) blackbox explanation, which infers explanations for individual decisions through local approximation [21, 23]. While both demonstrate great potential to help users interpret an individual decision, they lack the ability to extract insights from the target ML model that could be generalized to future cases. In other words, existing methods cannot shed light on the general sensitivity level of a target model to specific input dimensions and hence fall short in foreseeing when prediction errors might occur for future cases.

In this work, we propose a new technical approach that not only explains an individual decision but, more importantly, extracts generalizable insights from the target model. As we will show in Section 4, we define such insights as the general sensitivity level of a target model to specific input dimensions. We demonstrate that model developers could use them to identify model strengths as well as model vulnerabilities. Technically, our approach introduces multiple elastic nets to a Bayesian non-parametric regression mixture model; it then utilizes this model to approximate a target model and thereby derives both its generalizable insights and explanations for its individual decisions. The rationale behind this approach is as follows.
A Bayesian non-parametric regression mixture model can approximate an arbitrary probability density with high accuracy [22]. As we will discuss in Section 3, with multiple elastic nets we can augment a regression mixture model with the ability to extract patterns (generalizable insights) even from a learning model whose input data exhibit different extents of correlation. Given such a pattern, we can extrapolate the input features that are critical to the overall performance of an ML model. This information can be used to help one scrutinize a model's overall strengths and weaknesses. Besides extracting generalizable insights, the proposed model can also provide users with more understandable and accountable explanations. We demonstrate this characteristic in Section 4.

2 Related Work

Most of the works related to model interpretation aim at demystifying complicated ML models through whitebox and blackbox mechanisms. Here, we summarize these works and discuss their limitations. Note that we do not include works that identify the training samples most responsible for a given prediction (e.g., [12, 15]), nor works that build self-interpretable deep learning models [7, 33].

The whitebox mechanism augments a learning model with the ability to yield explanations for individual predictions. Generally, techniques of this kind follow two lines of approach: (i) occluding a fraction of a single input sample and identifying what portions of the features are important for classification [4, 6, 17, 36, 37], and (ii) computing the gradient of an output with respect to a given input sample and pinpointing what features are sensitive to the prediction of that sample [1, 8, 24, 25, 26, 29, 32]. While both can give users an explanation for a single decision that a learning model reaches, they are not sufficient to provide a global understanding of a learning model, nor are they capable of exposing its strengths and weaknesses. In addition, they typically cannot be applied to explaining the prediction outcomes of other ML models, because most techniques following this mechanism are designed for a specific ML model and require altering that learning model.

The blackbox mechanism treats an ML model as a black box and produces explanations by locally learning an interpretable model around a prediction. For example, LIME [23] and SHAP [21] are explanation techniques of the same kind, which sample perturbed instances around a single data sample and fit a linear model to perform local explanations. Going beyond the explanation of a single prediction, both can be extended to explain the model as a complete entity by selecting a small number of representative individual predictions and their explanations. However, explanations obtained through such approaches cannot describe the full mapping learned by an ML model. In this work, our proposed technique derives generalizable insights directly from a target model, which provides us with the ability to unveil model weaknesses and strengths.

3 Technical Approach

3.1 Background

A Bayesian non-parametric regression mixture model (mixture model for short) consists of multiple Gaussian distributions:

y_i | x_i, Θ ∼ Σ_{j=1}^∞ π_j · N(y_i | x_i β_j, σ_j²),    (1)

where Θ denotes the parameter set, x_i ∈ ℝ^p is the i-th data sample of the sample feature matrix X^T ∈ ℝ^{p×n}, and y_i is the corresponding prediction in y ∈ ℝ^n, the vector of predictions for the n samples.
$\pi_{1:\infty}$ are the mixing probabilities tied to the distributions, which sum to 1, and $\beta_{1:\infty}$ and $\sigma^2_{1:\infty}$ represent the parameters of the regression models, with $\beta_j \in \mathbb{R}^p$ and $\sigma_j^2 \in \mathbb{R}$. In general, model (1) can be viewed as a combination of an infinite number of regression models and can be used to approximate any learning model with high accuracy. Given a learning model $g: \mathbb{R}^p \to \mathbb{R}$, we can therefore approximate $g(\cdot)$ with a mixture model using $\{X, \mathbf{y}\}$, a set of data samples together with their corresponding predictions obtained from model $g$, i.e., $y_i = g(x_i)$. For any data sample $x_i$, we can then identify a regression model $\hat{y}_i = x_i \beta_j + \epsilon_i$ which best approximates the local decision boundary near $x_i$.¹ Note that in this paper we assume a single mixture component is sufficient to approximate the local decision boundary around $x_i$. Although the assumption does not hold in some cases, the proposed model can be relaxed and extended to deal with them. More specifically, instead of directly assigning each instance to one mixture component, we can assign an instance at the mode level [10] (i.e., assign the instance to a combination of multiple mixture components). When explaining a single instance, we can then linearly combine the corresponding regression coefficients within a mode.

Recent research [23] has demonstrated that such a linear regression model can be used for assessing how the feature space affects a decision, by inspecting the weights (model coefficients) of the features present in the input. As a result, similar to prior research [23], we can use this linear regression model to pinpoint the important features and take them as an explanation for the corresponding individual decision. In addition to the model approximation and explanation mentioned above, another characteristic of a mixture model is that it enables multiple training samples to share the same regression model and thus preserves only the dominant patterns in the data. With this, we can significantly reduce the number of explanations derived from the training data and utilize them as the generalizable insight of a target model.
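To make this explanation procedure concrete, the sketch below (Python/NumPy) shows how a fitted mixture of linear regressions can explain a single prediction: pick the component most responsible for the observation and read off its coefficients as feature importances. The parameters `pi`, `beta`, and `sigma2` are hypothetical stand-ins for a trained mixture (filled here with toy values), so this illustrates the idea above rather than reproducing the exact implementation used in this work.

```python
import numpy as np

def explain_instance(x, y, pi, beta, sigma2, top_k=5):
    """Explain one prediction with a fitted mixture of linear regressions.

    pi: (J,) mixing weights; beta: (J, p) coefficients; sigma2: (J,) noise
    variances. All are assumed to come from an already-fitted mixture.
    """
    # Responsibility of each component j for (x, y): r_j ∝ pi_j * N(y | x·beta_j, sigma2_j)
    resid = y - beta @ x                                    # (J,)
    log_r = np.log(pi) - 0.5 * (np.log(2 * np.pi * sigma2)
                                + resid**2 / sigma2)
    j_star = int(np.argmax(log_r))                          # most responsible component
    # Rank features by the magnitude of that component's coefficients.
    order = np.argsort(-np.abs(beta[j_star]))[:top_k]
    return j_star, order, beta[j_star][order]

# Toy usage with random "fitted" parameters (purely illustrative).
rng = np.random.default_rng(0)
J, p = 3, 10
pi = np.full(J, 1.0 / J)
beta = rng.normal(size=(J, p))
sigma2 = np.full(J, 0.1)
x = rng.normal(size=p)
y = beta[1] @ x + 0.05 * rng.normal()
print(explain_instance(x, y, pi, beta, sigma2))
```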
3.2 Challenge and Technical Overview
Despite the great characteristics of a mixture model, it is still challenging to use it for deriving generalizable insights or individual explanations. This is because a regression mixture model does not always guarantee success in model approximation, especially when it deals with samples with diverse feature correlations and data sparsity. To tackle this challenge, an instinctive reaction is to introduce an elastic net to a Bayesian regression mixture model. Past research [9, 18, 38] has demonstrated that an elastic net encourages grouping effects among variables, so that highly correlated variables tend to be in or out of a mixture model together. Therefore, it can potentially augment the aforementioned method with the ability to deal with the situation where the features of a high-dimensional sample are highly correlated. However, a key limitation of this approach can still manifest when it deals with samples with diverse feature correlation and data sparsity. In the following, we address this issue by establishing a Dirichlet process mixture model with multiple elastic nets (DMM-MEN). Different from previous research [35], our approach gives the regularization terms the flexibility to reduce to a lasso or ridge penalty under some sample categories, while maintaining the properties of the elastic net under other categories. With the multiple elastic nets, the model is able to capture the different levels of feature correlation and sparsity in the data. In the following, we provide more details of this hierarchical Bayesian non-parametric model.

3.3 Technical Details
Dirichlet Process Regression Mixture Model. As specified in Equation (1), the number of Gaussian distributions is infinite, which means there are infinitely many parameters to estimate. In practice, however, the number of available data samples is limited, and it is therefore necessary to restrict the number of distributions. To do this, a truncated Dirichlet process prior [11] can be applied, and Equation (1) can be written as

$$y_i \mid x_i, \Theta \sim \sum_{j=1}^{J} \pi_j\, N(y_i \mid x_i \beta_j, \sigma_j^2), \qquad (2)$$

where $J$ is the hyperparameter that specifies the upper bound on the number of mixture components.

¹For multi-class classification tasks, this work approximates each class separately; thus $X$ denotes the samples in the same class and $g(X)$ represents the corresponding predictions. Given that $\mathbf{y}$ is a probability vector, we apply a logit transformation before fitting the regression mixture model.

To estimate the parameters $\Theta$, a Bayesian non-parametric approach first models $\pi_{1:J}$ through a "stick-breaking" prior process. With such modeling, the parameters $\pi_{1:J}$ can be computed by

$$\pi_j = u_j \prod_{l=1}^{j-1} (1 - u_l) \quad \text{for } j = 2, \dots, J-1, \qquad (3)$$

with $\pi_1 = u_1$ and $\pi_J = 1 - \sum_{l=1}^{J-1} \pi_l$. Here, $u_l$ follows a beta prior distribution, $\mathrm{Beta}(1, \alpha)$, parameterized by $\alpha$, where $\alpha$ can be drawn from $\mathrm{Gamma}(e, f)$ with hyperparameters $e$ and $f$. To make the computation efficient, $\sigma_j^2$ is set to follow an inverse-Gamma prior, i.e., $\sigma_j^2 \sim \text{Inv-Gamma}(a, b)$ with hyperparameters $a$ and $b$. Given $\sigma^2_{1:J}$, for a conventional Bayesian regression mixture model, $\beta_{1:J}$ can be drawn from the Gaussian distribution $N(m_\beta, \sigma_j^2 V_\beta)$ with hyperparameters $m_\beta$ and $V_\beta$.

As described above, when using a mixture model to approximate a learning model, we can identify for any data sample a regression model that best approximates the prediction of that sample. This is because a mixture model can be interpreted as arising from a clustering procedure which depends on the underlying latent component indicators $z_{1:n}$. For each observation $(x_i, y_i)$, $z_i = j$ indicates that the observation was generated from the $j$-th Gaussian distribution, i.e., $y_i \mid z_i = j \sim N(x_i \beta_j, \sigma_j^2)$ with $P(z_i = j) = \pi_j$.
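As a small illustration of the truncated stick-breaking construction in Equation (3), the following sketch draws the weights $\pi_{1:J}$ and a few component indicators $z_i$. The values of $\alpha$ and $J$ are illustrative; this is a toy demonstration of the prior, not the inference code used in this work.

```python
import numpy as np

def stick_breaking_weights(alpha, J, rng):
    """Draw truncated stick-breaking weights pi_1..pi_J (Eq. 3)."""
    u = rng.beta(1.0, alpha, size=J - 1)        # u_j ~ Beta(1, alpha)
    pi = np.empty(J)
    remaining = 1.0                             # length of the unbroken stick
    for j in range(J - 1):
        pi[j] = u[j] * remaining                # break off a piece
        remaining *= (1.0 - u[j])
    pi[J - 1] = remaining                       # pi_J = 1 - sum of the rest
    return pi

rng = np.random.default_rng(0)
pi = stick_breaking_weights(alpha=1.0, J=10, rng=rng)
print(pi, pi.sum())                             # the weights sum to 1
# Component indicators z_i are then drawn with P(z_i = j) = pi_j:
z = rng.choice(10, size=5, p=pi)
```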
Dirichlet Process Mixture Model with Multiple Elastic Nets. Recall that a conventional mixture model has difficulty not only in dealing with high-dimensional data and highly correlated features but also in handling different types of data heterogeneity. We modify the conventional mixture model by resetting the prior distribution of $\beta_{1:J}$ to realize multiple elastic nets. Specifically, we first define the mixture distribution

$$P(\beta_j \mid \lambda_{1,1:K}, \lambda_{2,1:K}, \sigma_j^2) = \sum_{k=1}^{K} w_k f_k(\beta_j \mid \lambda_{1,k}, \lambda_{2,k}, \sigma_j^2), \qquad (4)$$

where $K$ denotes the total number of component distributions, and $w_{1:K}$ represent the component probabilities with $\sum_{k=1}^{K} w_k = 1$. The $w_k$ follow a Dirichlet distribution, i.e., $(w_1, w_2, \cdots, w_K) \sim \mathrm{Dir}(1/K)$. Since we add elastic-net regularization to the regression coefficients $\beta_{1:J}$, we adopt the Orthant Gaussian distribution as the prior distribution in place of the aforementioned normal distribution, following [9]. To be specific, each $\beta_j$ follows an Orthant Gaussian prior, whose density function $f_k$ can be defined as

$$f_k\left(\beta_j \mid \lambda_{1,k}, \lambda_{2,k}, \sigma_j^2\right) \propto \Phi\!\left(-\frac{\lambda_{1,k}}{2\sigma\sqrt{\lambda_{2,k}}}\right)^{-p} \times \sum_{Z \in \mathcal{Z}} N\!\left(\beta_j \,\Big|\, -\frac{\lambda_{1,k}}{2\lambda_{2,k}} Z,\ \frac{\sigma_j^2}{\lambda_{2,k}} I_p\right) \mathbf{1}(\beta_j \in O_Z). \qquad (5)$$

Here, $\lambda_{1,k}$ and $\lambda_{2,k}$ are a pair of parameters which control the lasso and ridge penalties of the $k$-th component, respectively. We set both to follow Gamma conjugate priors, with $\lambda_{1,k} \sim \mathrm{Gamma}(R, V/2)$ and $\lambda_{2,k} \sim \mathrm{Gamma}(L, V/2)$, where $R$, $L$, and $V$ are hyperparameters. $\Phi(\cdot)$ is the cumulative distribution function of the univariate standard Gaussian distribution, and $\mathcal{Z} = \{-1, +1\}^p$ is the collection of all possible $p$-vectors with elements $\pm 1$. Let $Z_l = 1$ for $\beta_{jl} \geq 0$ and $Z_l = -1$ for $\beta_{jl} < 0$. Then $O_Z \subset \mathbb{R}^p$ is determined by the vector $Z \in \mathcal{Z}$, indicating the corresponding orthant.

Given the prior distribution $f_k$ defined in (5), it is difficult to compute the posterior distribution and sample from it. To obtain a simpler form, we use a scale-mixture representation of the prior distribution (5). To be specific, we introduce latent variables $\tau_{1:p}$ and rewrite (5) in the following hierarchical form²:

$$\beta_j \mid \tau_j, \sigma_j^2, \lambda_{2,c_j} \sim N\!\left(\beta_j \,\Big|\, 0,\ \frac{\sigma_j^2}{\lambda_{2,c_j}} S_{\tau_j}\right), \qquad (6)$$

$$\tau_j \mid \sigma_j^2, \lambda_{1,c_j}, \lambda_{2,c_j} \sim \prod_{l=1}^{p} \text{Inv-Gamma}_{(0,1)}\!\left(\tau_{jl} \,\Big|\, \frac{1}{2},\ \frac{1}{2}\left(\frac{\lambda_{1,c_j}}{2\sigma_j\sqrt{\lambda_{2,c_j}}}\right)^2\right), \qquad (7)$$

where $\tau_j \in \mathbb{R}^p$ denotes the latent variables, $S_{\tau_j} \in \mathbb{R}^{p \times p}$ with $S_{\tau_j} = \mathrm{diag}(1 - \tau_{jl})$ for $l = 1, \cdots, p$, and $\text{Inv-Gamma}_{(0,1)}$ denotes the inverse-Gamma distribution truncated to $(0, 1)$. Similar to the component indicators $z_i$ introduced in the previous section, here we introduce a set of latent regularization indicators $c_{1:J}$. For each parameter $\beta_j$, $c_j = k$ indicates that the parameter follows the distribution $f_k(\cdot)$ with $P(c_j = k) = w_k$.

²More details about the derivation of the scale-mixture representation and the proof of equivalence can be found in [9, 18].

Posterior Computation and Post-MCMC Analysis. We develop a customized MCMC method involving a combination of Gibbs sampling and the Metropolis-Hastings algorithm for parameter inference [28]. Basically, it involves augmenting the model parameter space with the aforementioned mixture component indicators $z_{1:n}$ and $c_{1:J}$. These indicators enable simulation of the relevant conditional distributions for the model parameters. As the MCMC proceeds, they can be estimated from the relevant conditional posteriors, and thus we can jointly obtain posterior simulations for the model parameters and the mixture component indicators. We provide the details of the posterior distributions and the implementation of the parameter updates in the supplementary material. Considering that fitting a mixture model with MCMC suffers from the well-known label-switching problem, we use the iterative relabeling algorithm introduced in [3].
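To illustrate why the hierarchical form in (6)-(7) is convenient, the sketch below draws $\beta_j$ from this prior: each $\tau_{jl}$ is sampled from the inverse-Gamma distribution truncated to $(0, 1)$ (here by simple rejection, one of several possible strategies), after which $\beta_j$ is an ordinary diagonal Gaussian draw. The parameter values are illustrative, and the rejection sampler is an assumption of this sketch rather than the update used in our MCMC procedure.

```python
import numpy as np

def sample_beta_prior(p, sigma2, lam1, lam2, rng, max_tries=1000):
    """Draw beta_j from the scale-mixture prior of Eqs. (6)-(7).

    Each tau_l follows Inv-Gamma(1/2, b) truncated to (0, 1), sampled here
    by rejection; beta_j | tau is then a diagonal Gaussian.
    """
    b = 0.5 * (lam1 / (2.0 * np.sqrt(sigma2 * lam2))) ** 2
    tau = np.empty(p)
    for l in range(p):
        for _ in range(max_tries):
            # If G ~ Gamma(1/2, rate=b), then 1/G ~ Inv-Gamma(1/2, b).
            t = 1.0 / rng.gamma(shape=0.5, scale=1.0 / b)
            if 0.0 < t < 1.0:                   # truncate to (0, 1)
                tau[l] = t
                break
        else:
            tau[l] = 0.5                        # fallback (rarely reached)
    var = (sigma2 / lam2) * (1.0 - tau)         # diagonal of (sigma2/lam2) * S_tau
    return rng.normal(0.0, np.sqrt(var))        # beta_j | tau, as in Eq. (6)

rng = np.random.default_rng(1)
print(sample_beta_prior(p=8, sigma2=1.0, lam1=2.0, lam2=1.0, rng=rng))
```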
4 Evaluation
Recall that the motivation of our proposed method is to increase the transparency of complex ML models, so that users could leverage our approach not only to understand an individual decision (explainability) but also to obtain insights into the strengths and vulnerabilities of the target model (scrutability). The experimental evaluation of the proposed method thus focuses on these two aspects -- scrutability and explainability.

4.1 Scrutability
Methodology. As a first step, we utilize Keras [2] to train an MLP on the MNIST dataset [16] and CNNs to classify clothing images in the Fashion-MNIST dataset [34], respectively. These architectures represent the techniques most commonly used for the corresponding classification tasks, and we trained the models to achieve more than decent classification performance. We then treat these two models as our target models and apply our proposed approach to establish scrutability. We define the scrutability of an explanation method as its ability to distill generalizable insights from the model under examination. In this work, generalizable insights refer to feature-importance inferences that can be generalized across all cases. Admittedly, the fidelity of our proposed solution to the target model is an important prerequisite for any generalizable insights our solution extracts. In this section, we carry out experiments to empirically evaluate the fidelity while also demonstrating the scrutability of our solution. We apply the following procedures to obtain the experimentation data:
1. Construct bootstrapped samples from the training data, and nullify the top important pixels identified by our approach among positive cases while replacing the same pixels in negative cases with the mean value of those features among positive samples.
2. Apply random pixel nullification/replacement to the same bootstrapped samples used in the previous step.
3. Construct test cases that register positive properties for the top important pixels while randomly assigning values to the remaining pixels.
4. Construct randomly created test cases (i.e., assigning random values to all pixels) as baseline samples for the new test cases.
We then compare the target models' classification performance among the synthetic samples crafted via the procedures above. The intuition behind this exercise is that, if the fidelity/scrutability of our proposed solution holds, we should see a significant impact on classification accuracy; moreover, the magnitude of the impact should significantly outweigh that observed from randomly manipulating features. In the following, we describe our experimental tactics and findings in greater detail.
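A minimal sketch of procedures 1-2 is given below; `predict_fn` (the target model's positive/negative decision function) and `top_idx` (pixels ranked by importance) are hypothetical placeholders, so this conveys the shape of the experiment rather than the exact evaluation harness.

```python
import numpy as np

def positive_rate_after_nullification(predict_fn, images, top_idx, k, rng):
    """Procedures 1-2: nullify pixels, measure the positive-classification rate.

    predict_fn maps a batch of flattened images to 0/1 positive labels;
    top_idx holds pixel indices ranked by the explanation method.
    """
    flat = images.reshape(len(images), -1)
    important = flat.copy()
    important[:, top_idx[:k]] = 0.0                       # nullify top-k pixels
    randomized = flat.copy()
    random_idx = rng.choice(flat.shape[1], size=k, replace=False)
    randomized[:, random_idx] = 0.0                       # random baseline
    return predict_fn(important).mean(), predict_fn(randomized).mean()

# Hypothetical usage: if the extracted insights are faithful, the rate should
# drop far more when the important pixels are nullified than random ones.
# rate_top, rate_rand = positive_rate_after_nullification(
#     model_predict, positive_images, top_idx, k=75, rng=rng)
```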
Experimental Results. Figure 1 illustrates the generalizable insights (i.e., important pixels in the MNIST and Fashion-MNIST datasets) that our proposed solution distilled from the target MLP and CNN models, respectively. To validate the faithfulness of these insights and establish the fidelity of our proposed solution, we conduct the following experiment. First, bootstrapped samples, each containing a random draw of 30% of the original cases, are constructed from the MNIST and Fashion-MNIST datasets. For cases originally identified as positive for the corresponding classes by the target models (i.e., MLP and CNNs), we nullify the top 50/75/100/125/150 important features identified by our proposed solution, respectively, while forcing the values of the corresponding features in the negative samples to equal the mean values of those among the positive samples. These manipulated cases are then supplied to the target models, and we measure the proportion of cases that those models classify as positive under each condition. In addition, we apply the same perturbations to randomly selected 50/75/100/125/150 features in the same bootstrapped samples and measure the target models' positive-classification rate after the manipulation as a baseline for comparison. We repeat this process 50 times for both datasets to account for the statistical uncertainty in the measured classification rate. Figure 3a, Figure 3b, and the supplementary material showcase some of the aforementioned bootstrapped samples.

Figure 2a and Figure 2b summarize the experimental results obtained from the procedures above. As illustrated in both figures, the classification rates of the target models on these perturbed samples are impacted dramatically once we start manipulating the top 50/75 important features (i.e., around 9% of the pixels in each image) identified by our proposed solution. However, we do not observe any significant impact on the models' classification performance when we randomly perturb the same number of pixels. Non-overlapping 95% confidence intervals of the post-manipulation classification performance also reveal that the impact of these top features is significantly greater than that of features selected at random. Moreover, the fact that we start observing a dramatic impact on the target models' classification performance after manipulating less than 9% of the total features attests to the faithfulness of our proposed approach to the ML models under examination.

To further validate the fidelity of the insights illustrated in Figure 1, we construct new testing cases based on the top 50/75/100/125/150 pixels deemed important by our proposed solution, respectively, and measure the proportion of these testing samples that are classified as positive cases by the target models. We also create testing cases by randomly filling 50/75/100/125/150 pixels within the images and measure the positive-classification rate as a baseline. The intuition behind this exercise is that, similar to the experiments described earlier, we would like to see significantly higher positive-classification rates when leveraging the insights from our proposed solution than when creating cases around randomly selected pixels. In Figure 3c and the supplementary material, we showcase some insights-driven testing cases. As shown in Figure 2c, insights-driven testing cases have much higher success rates than the cases created around random pixels. In fact, we observe that even if we randomly fill 150 pixels (close to 20% of the pixels in an image), the positive-classification rate remains extremely low across classes. On the contrary, with the cases created based on the top 50 important pixels (i.e., 9% of all pixels in an image) identified by our solution, we can already achieve around a 50% success rate, and for some outcome categories a much higher one.

It is worth noting that the aforementioned experiments also unveil the vulnerabilities and sensitivities of the target MLP and CNN models. It does not seem to matter whether a handwritten digit or a fashion product is visually recognizable in an image: the model will classify it into the corresponding category with high confidence as long as the important features indicated in the heat map are filled with greater values (see Figure 3b). In other words, both the MLP and CNN models evaluated in this study are very sensitive to these pixels and could thus be vulnerable to pathological image samples crafted based on such insights. Figure 3a and Figure 3c are two additional examples. Even if a sample (Figure 3a) carries the right semantics, the learning model might still be blind to it if the pixels corresponding to important features are filled with smaller values. On the other hand, a very noisy sample (Figure 3c) can still be correctly classified as long as the pixels corresponding to important features are assigned sufficiently large values.
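Procedures 3-4 can be sketched analogously; here the "positive properties" of the top pixels are approximated by the mean pixel values among positive samples, which is an assumption of this illustration rather than the exact crafting rule.

```python
import numpy as np

def insights_driven_cases(n, p, top_idx, k, positive_mean, rng):
    """Procedures 3-4: craft test cases around the top-k important pixels.

    Important pixels take the mean value observed among positive samples
    (positive_mean, a length-p vector); all other pixels are random noise.
    """
    cases = rng.uniform(0.0, 1.0, size=(n, p))            # random background
    cases[:, top_idx[:k]] = positive_mean[top_idx[:k]]    # plant the insight
    baseline = rng.uniform(0.0, 1.0, size=(n, p))         # fully random cases
    return cases, baseline

# Hypothetical usage: faithful insights should give the crafted cases a far
# higher positive-classification rate than the fully random baseline.
# crafted, baseline = insights_driven_cases(
#     1000, 784, top_idx, 50, positive_images.mean(axis=0), rng)
```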
4.2 Explainability
Our proposed solution not only extracts generalizable insights from the target models but also demonstrates superior performance in explaining individual decisions. To illustrate this, we compare our approach with two state-of-the-art explanation approaches, namely LIME and SHAP. In particular, we evaluate the explainability of these approaches by comparing their explanation feature maps and, more importantly, by quantitatively measuring their relative ability to identify influential features in individual decisions. In addition, we evaluate the explainability of our proposed solution on the VGG16 model [27] trained on the ImageNet dataset [5]. Due to the ultra-high-dimensionality concern, which we discuss in the following section, we adopt the methodology in [23] to generate the data for explaining individual decisions. More specifically, we create a new dataset by randomly sampling around the data sample to be explained, reduce the dimensionality of the newly crafted dataset with a dimensionality-reduction method [23], and fit the approximation model.

Figure 4a and the supplementary material illustrate ten handwritten digits and ten fashion products randomly selected from each of the classes in the MNIST and Fashion-MNIST datasets, respectively. We apply our solution as well as LIME and SHAP to each of the images shown in the figure, then select and highlight the top 20 segments that each approach deems important to the decision made by the deep neural network classifiers. The results are presented in Figure 4b, Figure 4c, Figure 4d, and the supplementary material for our approach, LIME, and SHAP, respectively. As we can observe in these figures, our approach nearly perfectly highlights the contour of each digit and fashion product, whereas LIME and SHAP identify only partial contours and select more background parts than our approach. Figure 4a also includes two images randomly selected from the ImageNet dataset; the left image contains one object and the other contains two. Figure 4b to Figure 4d demonstrate the top 10 segments pinpointed by the three explanation techniques. The results shown in these figures are consistent with those of MNIST and Fashion-MNIST. More specifically, the proposed approach can precisely highlight the objects in the images, while the other approaches only partly identify the objects and even select some background noise as important features. In order to evaluate the fidelity of these explanation results, we feed these feature images back to VGG16 and record the prediction probabilities of the true labels (tiger cat, lion, and tiger cat). Figure 4b achieves the highest probabilities on each feature map, which from left to right are 93.20%, 78.51%, and 92.70%. Note that in the fourth image of Figure 4b, while identifying a lion in the image, our approach highlights the whiskers of the cat, which seems like a wrong selection. However, if we exclude this part from the image, the probability of the object belonging to the lion class drops from 78.51% to 20.31%. This result showcases a false positive of VGG16 and indicates that we can find weaknesses of the target model even from individual explanations.

To further quantify the relative performance in explainability, we also conduct the following experiment. First, we randomly select 10,000 data samples from the aforementioned datasets. Then, we apply our approach as well as the two state-of-the-art solutions (i.e., LIME and SHAP) to extract the top 20 important segments (top 10 segments for the ImageNet dataset). We then manipulate these samples based on the segments identified by the three approaches: to be specific, we keep only the top important pixels intact while nullifying the remaining pixels, supply these manipulated samples to the target models, and evaluate the classification accuracy.
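A hedged sketch of this masking experiment is shown below; `predict_fn` and the per-explainer `masks` are hypothetical placeholders standing in for the target model and the top segments extracted by each technique.

```python
import numpy as np

def masked_accuracy(predict_fn, images, labels, masks):
    """Keep only each explainer's top segments and measure accuracy.

    masks: dict mapping an explainer name (e.g., 'ours', 'LIME', 'SHAP')
    to a boolean array marking the pixels inside its top segments.
    """
    results = {}
    for name, keep in masks.items():
        masked = images * keep                  # nullify everything outside
        results[name] = (predict_fn(masked) == labels).mean()
    return results

# Hypothetical usage: higher accuracy on the masked images suggests the
# explainer's selected segments carry more of the evidence the model uses.
# print(masked_accuracy(model_predict, samples, truth, masks))
```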
Table 1 shows the accuracy with which these feature images are classified into the corresponding ground-truth categories, as well as the means and 95% confidence intervals of the prediction probabilities. The results indicate that our approach offers better resolution and more granular explanations for individual predictions. One possible explanation is that both LIME and SHAP assume the local decision boundary of the target model to be linear, while the proposed approach performs variable selection through a non-linear approximation.

It is known that Bayesian non-parametric models are computationally expensive. However, this does not mean that we cannot use the proposed approach in real-world applications. In fact, we recorded the latency of the proposed approach when explaining individual samples in the three datasets: the running times for MNIST, Fashion-MNIST, and ImageNet are 37.5 s, 44 s, and 139.2 s, respectively. As for approximating the global decision boundary, the running times are 105 minutes on MNIST and 115 minutes on Fashion-MNIST. We believe the latency of our approach is still within the range of normal training times for complex ML models.

5 Discussion
Scalability. As shown in Section 4, our proposed solution does not introduce an additional scalability challenge, and the algorithm can be further accelerated. More specifically, recent advances in Bayesian computation allow MCMC methods to be used for big-data analysis, for example by adopting the Bootstrap Metropolis-Hastings algorithm [19], applying divide-and-conquer approaches [30], and even taking advantage of GPU programming to speed up the computation [31].

Data Dimensionality. Our evaluation in Section 4 indicates that the proposed solution (DMM-MEN) can extract generalizable insights even from high-dimensional data (e.g., Fashion-MNIST). However, when it comes to ultra-high-dimensional data, extracting generalizable insights can still be a challenge. One obvious reason is that we do not have sufficient data to infer all the parameters; more importantly, even if we had enough data, it would be very computationally expensive. Arguably, one solution is to reduce the dimensionality of such ultra-high-dimensional data while preserving the original data distribution. Taking the ImageNet dataset as an example, however, even state-of-the-art dimensionality-reduction methods (i.e., the one used in [23]) cannot satisfactorily preserve the whole data distribution. This indeed speaks to the limitation of our proposed solution in extracting generalizable insights from such datasets. Nevertheless, it does not affect our solution's ability to precisely explain individual predictions, even for ultra-high-dimensional data. As shown in Section 4, our solution significantly outperforms the state-of-the-art solutions in explaining individual decisions made on ultra-high-dimensional data samples.

Other Applications and Learning Models. While we evaluate and demonstrate the capability of our proposed technique only on image recognition with deep learning models, the proposed approach is not limited to such learning tasks and models. In fact, we have also evaluated our technique on other learning tasks with various learning models.
We observed consistent superiority in extracting global insights and explaining individual decisions. Due to the space limit, we report those experimental results in the supplementary material submitted along with this manuscript.

6 Conclusion and Future Work
This work introduces a new technical approach to deriving generalizable insights for complicated ML models. Technically, it treats a target ML model as a black box and approximates its decision boundary through DMM-MEN. With this approach, model developers and users can approximate complex ML models with low error and obtain better explanations of individual decisions. More importantly, they can extract the generalizable insights learned by a target model and use them to scrutinize model strengths and weaknesses. While our proposed approach exhibits outstanding performance in explaining individual decisions and provides users with the ability to discover model weaknesses, its performance may not be good enough when applied to interpreting temporal learning models (e.g., recurrent neural networks). This is because our approach treats features independently, whereas time-series analysis deals with temporally dependent features. As part of future work, we will therefore equip our approach with the ability to dissect temporal learning models.

Acknowledgments
We gratefully acknowledge the funding from NSF grant CNS-1718459 and the support of NVIDIA Corporation with the donation of the GPU. We also would like to thank the anonymous reviewers, Kaixuan Zhang, Xinran Li, and Chenxin Ma for their helpful comments.
1. What is the focus of the review, and what are the reviewer's main concerns?
2. What are the strengths and weaknesses of the proposed method in terms of its technical quality?
3. How does the reviewer assess the clarity and originality of the paper's content?
4. What is the significance of the paper regarding its contribution to the field of machine learning interpretability?
5. Are there any minor issues or suggestions that the reviewer has regarding the paper's writing or content?
Review
I think the rebuttal is prepared very well. Although the assumption of a single component approximating the local decision boundary is quite strong, the paper nonetheless offers a good, systematic approach to interpreting black-box ML systems. It is an important topic and I don't see a lot of studies in this area.

Overview
In an effort to improve scrutability (ability to extract generalizable insight) and explainability of a black-box target learning algorithm, the current paper proposes to use infinite Dirichlet mixture models with multiple elastic nets (DMM-MEN) to map the inputs to the predicted outputs. Any target model can be approximated by a non-parametric Bayesian regression mixture model. However, when samples exhibit diverse feature correlations, sparsity, and heterogeneity, standard non-parametric Bayesian regression models may not be sufficient. The authors integrate multiple elastic nets to deal with diverse data characteristics. Extensive experiments are performed on two benchmark data sets to evaluate the proposed method in terms of scrutability and explainability. For scrutability, a bootstrap sample of the training set is used to replace the most important pixels (as determined by the proposed algorithm) in images with random values. The intuition behind this exercise is that if the selected pixels are indeed strong features then the classification accuracy should suffer significantly, and the degree of impact should outweigh other scenarios with randomly manipulated features. This intuition is demonstrated to be correct on both data sets. For explainability, the proposed method is compared against two other methods from the literature (LIME and SHAP). The proposed method can identify parts of images that contribute to decisions with a greater level of accuracy than the other two techniques. Results suggest that the proposed technique can also be used to identify anomalies (inconsistent predictions of the target system) with some level of success.

Technical Quality
I would consider the technical quality to be good, with some reservations. The main idea relies on the assumption that one component in the mixture model can be sufficient to explain a single prediction. What happens if a single component cannot approximate the local decision boundary near a single instance with acceptable accuracy? In reality one would expect that many Normal components might be needed to explain a single prediction, so using a single component can make the prediction uninterpretable. What aspects of the model guarantee or stipulate sparse mixture components? The motivation behind using an Orthant Gaussian prior on the regression coefficients is not well justified. Does this really serve its purpose (data dimensionality and heterogeneity)? What would happen if a standard Gaussian prior were used instead? I also do not follow the need for truncation, when the finite number of components can be easily determined during inference without any truncation, especially when MCMC inference is used.

Clarity
The paper reads well, with some minor issues outlined below (list is not exhaustive).
Line 48: "Most of the works that related to" -- "that" here is redundant.
Line 81: "any learning models" -- "model".
Line 320: "when it comes specific datasets" -- "to" missing.
Line 320: "it dose not affect" -- "does".

Originality
I would consider both the method (Dirichlet mixture models with multiple elastic nets) and the approach (evaluation in terms of scrutability and explainability) quite original.
Significance: I expect the significance of this work to be high, as there is a dire need in the ML literature for models that can make the outputs of complex learning systems more interpretable. Toward this end, the current paper proposes a systematic approach to explaining and scrutinizing the outputs of deep learning classifiers, with promising results.

Other Comments: Please define PCR (Principal Component Regression) and explain why PCR is used as opposed to classification accuracy.
NIPS
Title Explaining Deep Learning Models -- A Bayesian Non-parametric Approach Abstract Understanding and interpreting how machine learning (ML) models make decisions have been a big challenge. While recent research has proposed various technical approaches to provide some clues as to how an ML model makes individual predictions, they cannot provide users with an ability to inspect a model as a complete entity. In this work, we propose a novel technical approach that augments a Bayesian non-parametric regression mixture model with multiple elastic nets. Using the enhanced mixture model, we can extract generalizable insights for a target model through a global approximation. To demonstrate the utility of our approach, we evaluate it on different ML models in the context of image recognition. The empirical results indicate that our proposed approach not only outperforms the state-of-the-art techniques in explaining individual decisions but also provides users with an ability to discover the vulnerabilities of the target ML models. 1 Introduction When comparing with relatively simple learning techniques such as decision trees and K-nearest neighbors, it is well acknowledged that complex learning models – particularly, deep neural networks (DNN) – usually demonstrate superior performance in classification and prediction. However, they are almost completely opaque, even to the engineers that build them [20]. Presumably as such, they have not yet been widely adopted in critical problem domains, such as diagnosing deadly diseases [13] and making million-dollar trading decisions [14]. To address this problem, prior research proposes to derive an interpretable explanation for the output of a DNN. With that, people could understand, trust and effectively manage a deep learning model. From a technical prospective, this can be interpreted as pinpointing the most important features in the input of a deep learning model. In the past, the techniques designed and developed primarily focus on two kinds of methods – (1) whitebox explanation that derives interpretation for a deep learning model through forward or backward propagation approach [26, 36], and (2) blackbox explanation that infers explanations for individual decisions through local approximation [21, 23]. While both demonstrate a great potential to help users interpret an individual decision, they lack an ability to extract insights from the target ML model that could be generalized to future cases. In other words, existing methods could not shed lights on the general sensitivity level of a target model to specific input dimensions and hence fall short in foreseeing when prediction errors might occur for future cases. In this work, we propose a new technical approach that not only explains an individual decision but, more importantly, extracts generalizable insights from the target model. As we will show in 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. Section 4, we define such insights as the general sensitivity level of a target model to specific input dimensions. We demonstrate that model developers could use them to identify model strengths as well as model vulnerabilities. Technically, our approach introduces multiple elastic nets to a Bayesian non-parametric regression mixture model. Then, it utilizes this model to approximate a target model and thus derives its generalizable insight and explanation for its individual decision. The rationale behind this approach is as follows. 
A Bayesian non-parametric regression mixture model can approximate arbitrary probability density with high accuracy [22]. As we will discuss in Section 3, with multiple elastic nets, we can augment a regression mixture model with an ability to extract patterns (generalizable insights) even from a learning model that takes as input data of different extent of correlations. Given the pattern, we could extrapolate input features that are critical to the overall performance of an ML model. This information can be used to facilitate one to scrutinize a model’s overall strengths and weaknesses. Besides extracting generalizable insights, the proposed model can also provide users with more understandable and accountable explanations. We will demonstrate this characteristic in Section 4. 2 Related Work Most of the works related to model interpretation lie in demystifying complicated ML models through whitebox and blackbox mechanisms. Here, we summarize these works and discuss their limitations. It should be noticed that we do not include those works that identify training samples that are most responsible for a given prediction (e.g., [12, 15]) and those works that build a self-interpretable deep learning model [7, 33]. The whitebox mechanism augments a learning model with the ability to yield explanations for individual predictions. Generally, the techniques in this kind of mechanism follow two lines of approaches – Ê occluding a fraction of a single input sample and identifying what portions of the features are important for classification [4, 6, 17, 36, 37], and Ë computing the gradient of an output with respect to a given input sample and pinpointing what features are sensitive to the prediction of that sample [1, 8, 24, 25, 26, 29, 32]. While both can give users an explanation for a single decision that a learning model reach, they are not sufficient to provide a global understanding of a learning model, nor capable of exposing its strengths and weaknesses. In addition, they typically cannot be generally applied to explaining prediction outcomes of other ML models because most of the techniques following this mechanism are designed for a specific ML model and require altering that learning model. The blackbox mechanism treats an ML model as a black box, and produces explanations by locally learning an interpretable model around a prediction. For example, LIME [23] and SHAP [21] are the same kind of explanation techniques that sample perturbed instances around a single data sample and fit a linear model to perform local explanations. Going beyond the explanation of a single prediction, they both can be extended to explain the model as a complete entity by selecting a small number of representative individual predictions and their explanations. However, explanations obtained through such approaches cannot describe the full mapping learned by an ML model. In this work, our proposed technique derives a generalizable insight directly from a target model, which provides us with the ability to unveil model weaknesses and strengths. 3 Technical Approach 3.1 Background A Bayesian non-parametric regression mixture model (i.e., mixture model for short) consists of multiple Gaussian distributions: yi|xi,Θ ∼ ∞∑ j=1 πjN(yi | xiβj , σ2j ), (1) where Θ denotes the parameter set, xi ∈ Rp is the i-th data sample of the sample feature matrix XT ∈ Rp×n, and yi is the corresponding prediction in y ∈ Rn, which is the predictions of n samples. 
π1:∞ are the probabilities tied to the distributions with the sum equal to 1, and β1:∞ and σ 2 1:∞ represent the parameters of regression models, with βj ∈ Rp and σ2j ∈ R. In general, model (1) can be viewed as a combination of infinite number of regression models and be used to approximate any learning model with high accuracy. Given a learning model g : Rp → R, we can therefore approximate g(·) with a mixture model using {X,y}, a set of data samples as well as their corresponding predictions obtained from model g, i.e., yi = g(xi). For any data sample xi, we can then identify a regression model ŷi = xiβj + i, which best approximates the local decision boundary near xi1. Note that in this paper, we assume that a single mixture component is sufficient to approximate the local decision boundary around xi. Despite the assumption doesnot hold in some cases, the proposed model can be relaxed and extended to deal with these cases. More specifically, instead of directly assigning each instance to one mixture component, we can assign an instance at a mode level [10], (i.e., assigning the instance to a combination of multiple mixture components). When explaining a single instance, we can linearly combine the corresponding regression coefficients in a mode. Recent research [23] has demonstrated that such a linear regression model can be used for assessing how the feature space affects a decision by inspecting the weights (model coefficients) of the features present in the input. As a result, similar to prior research [23], we can take this linear regression model to pinpoint the important features and take them as an explanation for the corresponding individual decision. In addition to model approximation and explanation mentioned above, another characteristic of a mixture model is that it can enable multiple training samples to share the same regression model and thus preserve only dominant patterns in data. With this, we can significantly reduce the amount of explanations derived from training data and utilize them as the generalizable insight of a target model. 3.2 Challenge and Technical Overview Despite the great characteristics of a mixture model, it is still challenging for us to use it for deriving generalizable insights or individual explanation. This is because a regression mixture model does not always guarantee a success in model approximation, especially when it deals with samples with diverse feature correlations and data sparsity. To tackle this challenge, an instinctive reaction is to introduce an elastic net to a Bayesian regression mixture model. Past research [9, 18, 38] has demonstrated that an elastic net encourages the grouping effects among variables so that highly correlated variables tend to be in or out of a mixture model together. Therefore, it can potentially augment the aforementioned method with the ability of dealing with the situation where the features of a high dimensional sample are highly correlated. However, a key limitation of this approach could manifest, especially when it deals with samples with diverse feature correlation and data sparsity. In the following, we address this issue by establishing a dirichlet process mixture model with multiple elastic nets (DMM-MEN). Different from previous research [35], our approach allows the regularization terms to has the flexibility to reduce a lasso or ridge under some sample categories, while maintaining the properties of the elastic net under other categories. 
With the multiple elastic nets, the model is able to capture the different levels of feature correlation and sparsity in the data. the In the following, we provide more details of this hierarchical Bayesian non-parametric model. 3.3 Technical Details Dirichlet Process Regression Mixture Model. As is specified in Equation (1), the amount of Gaussian distributions is infinite, which indicates that there are infinite number of parameters that need to be estimated. In practice, however, the amount of available data samples is limited and therefore it is necessary to restrict the number of distributions. To do this, truncated Dirichlet process prior [11] can be applied, and Equation (1) can be written as yi|xi,Θ ∼ J∑ j=1 πjN(yi | xiβj , σ2j ). (2) 1For multi-class classification tasks, this work approximates each class separately, and thus X denotes the samples in the same class and g(X) represents the corresponding predictions. Given that y is a probability vector, we conduct logit transformation before fitting a regression mixture model. Where J is the hyper-parameter that specify the upper bound of the number of mixture components. To estimate the parameters Θ, a Bayesian non-parametric approach first models π1:J through a “stick-breaking” prior process. With such modeling, parameters π1:J can then be computed by πj = uj j−1∏ l=1 (1− ul) for j = 2, ..., J − 1, (3) with π1 = u1 and πJ = 1− ∑J−1 l=1 πl. Here, ul follows a beta prior distribution, Beta(1, α), parameterized by α, where α can be drawn from Gamma(e, f) with hyperparameters e and f . To make the computation efficient, σ2j is set to follow an inverse Gamma prior, i.e., σ 2 j ∼ Inv-Gamma(a, b) with hyperparameters a and b. Given σ21:J , for conventional Bayesian regression mixture model, β1:J can be drawn from Gaussian distribution N(mβ , σ2jVβ) with hyperparameters mβ and Vβ . As is described above, using a mixture model to approximate a learning model, for any data sample we can identify a regression model to best approximate the prediction of that sample. This is due to the fact that a mixture model can be interpreted as arising from a clustering procedure which depends on the underlying latent component indicators z1:n. For each observation (xi, yi), zi = j indicates that the observation was generated from the j-th Gaussian distribution, i.e., yi|zi = j ∼ N(xiβj , σ2j ) with P (zi = j) = πj . Dirichlet Process Mixture Model with Multiple Elastic Nets. Recall that a conventional mixture model has difficulty not only in dealing with high dimensional data and highly correlated features but also in handling different types of data heterogeneity. We modify the conventional mixture model by resetting the prior distribution of β1:J to realize multiple elastic nets. Specifically, we first define mixture distribution P (βj |λ1,1:K , λ2,1:K , σ2j ) = K∑ k=1 wkfk(βj |λ1,k, λ2,k, σ2j ), (4) where K denotes the total number of component distributions, and w1:K represent component probabilities with ∑K k=1 wk = 1. Let w ′ ks follow a Dirichlet distribution, i.e., w1, w2, · · · , wK ∼ Dir(1/K). Since we add elastic net regularization to the regression coefficient β1:J , instead of the aforementioned normal distribution, we adopt the Orthant Gaussian distribution as the prior distribution according to [9]. To be specific, each βk follows a Orthant Gaussian prior, whose density function fk can be defined as fk ( βj |λ1,k, λ2,k, σ2j ) ∝ Φ ( −λ1,k 2σ √ λ2,k )−p × ∑ Z∈Z N ( βj ∣∣∣ − λ1,k 2λ2,k Z, σ2j λ2,k Ip ) 1(βj ∈ OZ). 
(5) Here, λi,k (i = 1, 2) is a pair of parameters which controls lasso and ridge regression for the k-th component, respectively. We set both to follow Gamma conjugate prior with λ1,k ∼ Gamma(R, V/2) and λ2,k ∼ Gamma(L, V/2), where R, L, and V are hyperparameters. Φ(·) is the cumulative distribution function of the univariate standard Gaussian distribution, and Z = {−1,+1}p is a collection of all possible p-vectors with elements ±1. Let Zl = 1 for βjl ≥ 0 and Zl = −1 for βjl < 0. Then, OZ ⊂ Rp can be determined by vector Z ∈ Z , indicating the corresponding orthant. Given the prior distribution of fk defined in (5), it is difficult to compute the posterior distribution and sample from it. To obtain a simpler form, we use the mixture representation of the prior distribution (5). To be specific, we introduce a latent variable τ 1:p and rewrite the (5) into the following hierarchical form2 βj | τ j , σ2j , λ2,cj ∼ N ( βj ∣∣∣ 0, σ2j λ2,cj Sτ j ) , and (6) τ j | σ2j , λ1,cj , λ2,cj ∼ p∏ l=1 Inv-Gamma(0,1) τjl ∣∣∣∣∣ 12 , 12 ( λ1,cj 2σj √ λ2,cj )2 , (7) 2More details about the derivation of the scale mixture representation and the proof of equivalence can be found in [9, 18]. where τ j ∈ Rp denotes latent variables and Sτ j ∈ Rp×p, with Sτ j = diag(1− τjl) for l = 1, · · · , p. Similar to component indicator zi introduced in the previous section, here, we introduce a set of latent regularization indicators c1:J . For each parameter βj , cj = k indicates that parameter follows distribution fk(·) with P (cj = k) = wk. Posterior Computation and Post-MCMC Analysis. We develop a customized MCMC method involving a combination of Gibbs sampling and Metropolis-Hastings algorithm for parameter inference [28]. Basically, it involves augmentation of the model parameter space by the aforementioned mixture component indicators z1:n and c1:J . These indicators enable simulation of relevant conditional distributions for model parameters. As the MCMC proceeds, they can be estimated from relevant conditional posteriors and thus we can jointly obtain posterior simulations for model parameters and mixture component indicators. We provide the details of posterior distribution and the implementation of updating the parameters in the supplementary material. Considering that fitting a mixture model with MCMC suffers from the well-known label switching problem, we use an iterative relabeling algorithm introduced in [3]. 4 Evaluation Recall that the motivation of our proposed method is to increase the transparency for complex ML models so that users could leverage our approach to not only understand an individual decision (explainability) but also to obtain insights into the strength and vulnerabilities of the target model (scrutability). The experimental evaluation of the proposed method thus focuses on the aforementioned two aspects – scrutability and explainability. 4.1 Scrutability Methodology. As a first step, we utilize Keras [2] to train an MLP on MNIST dataset [16] and CNNs to classify clothing images in Fashion-MNIST dataset [34] respectively. These machine learning methods represent the techniques most commonly used for the corresponding classification tasks. We trained these model to achieve more than decent classification performance. We then treat these two models as our target models and apply our proposed approach to establish scrutability. We define the scrutability of an explanation method as the ability to distill generalizable insights from the model under examination. 
In this work, generalizable insights refer to feature importance inferences that could be generalized across all cases. Admittedly, the fidelity of our proposed solution to the target model is an important prerequisite to any generalizable insights our solution extracts. In this section, we carry out experiments to empirically evaluate the fidelity while also demonstrating scrutability of our solution. We apply the following procedures to obtain experimentation data. 1. Construct bootstrapped samples from the training data and nullify the top important pixels identified by our approach among positive cases while replacing the same pixels in negative cases with the mean value of those features among positive samples. 2. Apply random pixel nullification/replacement to the same bootstrapped samples used in previous step from the training data. 3. Construct test cases that register positive properties for the top important pixels while randomly assign value for the remaining pixels. 4. Construct randomly created test cases (i.e., assigning random value to all pixels) as baseline samples for the new test cases. We then compare the target model classification performance among synthetic samples crafted via procedures mentioned above. The intuition behind this exercise is that if the fidelity/scrutability of our proposed solution holds, we should be able to see significant impact on the classification accuracy. Moreover, the magnitude of the impact should significantly outweigh that observed from randomly manipulating features. In the following, we describe our experiment tactics and findings in greater details. Experimental Results. Figure 1 illustrates the generalizable insights (i.e., important pixels in MNIST and Fashion-MNIST datasets) that our proposed solution distilled from the target MLP and CNNs models, respectively. To validate the faithfulness of these insights and establish fidelity of our proposed solution, we conduct the following experiment. First, bootstrapped samples, each contains a random draw of 30% of the original cases, are constructed from the MNIST and Fashion-MNIST datasets. For cases that are originally identified as positive for corresponding classes by the target models (i.e., MLP and CNNs), we nullify top 50/75/100/125/150 important features identified by our proposed solution respectively, while forcing the value of corresponding features in the negative samples equal to the mean value of those among the positive samples. These manipulated cases are then supplied to the the target model and we measure the proportion of cases that those models would classify as positive under each condition. In addition, we apply the same perturbations on randomly selected 50/75/100/125/150 features in the same bootstrapped sample and measure the target model’s positive classification rate after the manipulation as a baseline for comparison. We repeat such a process for 50 times for both datasets to account for the statistical uncertainty in the measured classification rate. Figure 3a, Figure 3b and supplementary material showcase some of the aforementioned bootstrapped samples. Figure 2a and Figure 2b summarize the experimental results we obtain from the procedures mentioned above. As is illustrated in both figures, the classification rates of the target models on these perturbed samples are impacted dramatically once we start manipulating top 50/75 important features (i.e., around 9% of the pixels in each image) identified by our proposed solution in these images. 
However, we do not observe any significant impact to the model’s classification performance if we randomly perturb the same number of pixels. Non-overlapping 95% confidence intervals of the post-manipulation classification performance also reveal that the impact of these top features is significantly greater than the features selected at random. Moreover, the fact that we start observing dramatic impact in the target models’ classification performance after we manipulate less than 9% of the total features justifies the faithfulness of our proposed approach to the ML models under examination. To further validate the fidelity of the insights illustrated in Figure 1, we construct new testing cases based on top 50/75/100/125/150 pixels deemed important by our proposed solution respectively and measure the proportion of these testing samples that are classified as positive cases by the target models. We also create testing cases by randomly filling 50/75/100/125/150 pixels within the images and measure the positive classification rate as a baseline. The intuition behind this exercise is that, similar to the experiments described earlier, we would like to see significantly higher positive classification rates leveraging the insights from our proposed solution than creating cases around randomly selected pixels. In Figure 3c and supplementary material, we showcase some insights driven testing cases. As is shown in Figure 2c, insights driven testing cases have much higher success rates than the cases created around random pixels. In fact, we observe that even if we randomly fill 150 pixels (which is close to 20% of the pixels in an image), the positive classification rate remains extremely low across classes. On the contrary, we notice that with the cases created based on the top 50 important pixels (i.e., 9% of all pixels in an image) deemed by our solution, we could already achieve around 50% success rate. For some specific outcome categories, we could even achieve a much higher success rate. It is worth noting that aforementioned experiments also unveil the vulnerabilities and sensitivities of the target MLP and CNNs models. It does not seem to matter if a handwritten digit or a fashion product is visually recognizable in an image, the model will classify it to the corresponding category with a high confidence as long as the important features indicated in the heat map are filled with greater values (see Figure 3b). In other words, both the MLP and CNNs models evaluated in this study are very sensitive to these pixels but could also be vulnerable to pathological image samples crafted based on such insights. Figure 3a and Figure 3c are two additional examples. A sample (Figure 3a) might carry the right semantics, the learning model still might be blind to that sample if the pixels corresponding to important features are filled with smaller values. On the other hand, a very noisy sample (Figure 3c) could still be correctly classified as long as the pixels corresponding to important features are assigned with decent values. 4.2 Explainability Our proposed solution does not only extract generalizable insights from the target models but also demonstrate superior performance in explaining individual decisions. To illustrate its superiority, we compare our approach with a couple of state-of-the-art explainable approaches, namely LIME and SHAP. 
In particular, we evaluate the explainability of these approaches by comparing the explanation feature maps and more importantly quantitatively measuring their relative superiority in identifying influential features in individual decisions. As is introduced in the aforementioned section, we also evaluate the explainability of our proposed solution on the VGG16 model [27] trained from ImageNet dataset [5]. Due to the ultra high dimensionality concern, which we will discuss in the following section, we adopt the methodology in [23] to generate data to explain individual decisions. More specifically, we create a new dataset by randomly sampling around the data sample that needed to be explained, reducing the dimensionality of the newly crafted dataset by certain dimension reduction method [23] and fitting the approximation model. Figure 4a and supplementary material illustrate ten handwritten digits and ten fashion products randomly selected from each of the classes in MNIST and Fashion-MNIST datasets, respectively. We apply our solution as well as LIME and SHAP to each of the images shown in the figure and then select and highlight the top 20 segments that each approach deems important to the decision made by deep neural network classifiers. The results are presented in Figure 4b, Figure 4c, Figure 4d and supplementary material for our approach, LIME and SHAP, respectively. As we can observe in these figures, our approach nearly perfectly highlights the contour of each digit and fashion product, whereas LIME and SHAP identify only the partial contour of each digit and product and select more background parts than our approach. Figure 4a also has two images we randomly selected from ImageNet dataset. The left image has only one object and the other image has two. Figure 4b to Figure 4d demonstrate the top 10 segments pinpointed by three explanation techniques. The results shown in these figures are consistent with those of MNIST and Fashion-MNIST. More specifically, the proposed approach can precisely highlight the object in the images, while the other approaches only partly identify the object and even select some background noise as important features. In order to evaluate the fidelity of these explanation results, we input these feature images back to VGG16 and record the prediction probabilities of the true labels (tiger cat, lion and tiger cat). Figure 4b achieved the highest probabilities on each feature map, which from the left to right are 93.20%, 78.51% and 92.70%. Note that in the fourth image of Figure 4b, while identifying a lion in the image, our approach highlights the moustache of the cat, which seems like a wrong selection. However, if we exclude this part from the image, the probability of the object belonging to lion drops from 78.51% to 20.31%. This result showcases a false positive of VGG16 and indicates that we can still find the weakness of the target model even from the individual explanations. To further quantify the relative performance in explainability, we also conduct the following experiment. First we randomly select 10000 data samples from aforementioned datasets. Then, we apply our approach as well as two state-of-the-art solutions (i.e., LIME and SHAP) to extract top 20 important segments (top 10 segments for ImageNet dataset). We then manipulate these samples based on the segments identified via three approaches. 
Specifically, we keep only the top important pixels intact while nullifying the remaining pixels, supply these manipulated samples to the target models, and evaluate the classification accuracy. Table 1 shows the accuracy with which these feature images are classified into the corresponding ground-truth categories, as well as the means and 95% confidence intervals of the prediction probabilities. The results indicate that our approach offers better resolution and more granular explanations of individual predictions. One possible explanation is that both LIME and SHAP assume the local decision boundary of the target model to be linear, while the proposed approach performs variable selection via a non-linear approximation.

It is known that Bayesian non-parametric models are computationally expensive. However, this does not mean that the proposed approach cannot be used in real-world applications. In fact, we have recorded the latency of the proposed approach in explaining individual samples in the three datasets. The running times for MNIST, Fashion-MNIST and ImageNet are 37.5 s, 44 s and 139.2 s, respectively. As for approximating the global decision boundary, the running times are 105 minutes on MNIST and 115 minutes on Fashion-MNIST. This latency is still within the range of normal training times for complex ML models.

5 Discussion

Scalability. As shown in Section 4, our proposed solution does not impose prohibitive scalability challenges, and the algorithm can be further accelerated. More specifically, recent advances in Bayesian computation allow MCMC methods to be used for big-data analysis, such as adopting the bootstrap Metropolis–Hastings algorithm [19], applying divide-and-conquer approaches [30], and taking advantage of GPU programming to speed up the computation [31].

Data Dimensionality. Our evaluation in Section 4 indicates that the proposed solution (DMM-MEN) can extract generalizable insights even from high-dimensional data (e.g., Fashion-MNIST). However, when it comes to ultra-high-dimensional data, extracting generalizable insights can still be a challenge. One obvious reason is that we do not have sufficient data to infer all the parameters. More importantly, even if we had enough data, it would be very computationally expensive. Arguably, one solution is to reduce the dimensionality of such ultra-high-dimensional data while preserving the original data distribution. However, taking the ImageNet dataset as an example, even state-of-the-art dimensionality reduction methods (e.g., the one used in [23]) cannot satisfactorily preserve the whole data distribution. This speaks to the limitation of our proposed solution in extracting generalizable insights for such datasets. Nevertheless, it does not affect our solution's ability to precisely explain individual predictions, even for ultra-high-dimensional data. As shown in Section 4, our solution significantly outperforms the state-of-the-art solutions in explaining individual decisions made on ultra-high-dimensional data samples.

Other Applications and Learning Models. While we evaluate and demonstrate the capability of our proposed technique only on image recognition with deep learning models, the approach is not limited to this learning task or these models. In fact, we also evaluated our technique on other learning tasks with various learning models.
We observed consistent superiority in extracting global insights and explaining individual decisions. Due to the space limit, we report those experimental results in the supplementary material submitted along with this manuscript.

6 Conclusion and Future Work

This work introduces a new technical approach to derive generalizable insights for complicated ML models. Technically, it treats a target ML model as a black box and approximates its decision boundary through DMM-MEN. With this approach, model developers and users can approximate complex ML models with low error and obtain better explanations of individual decisions. More importantly, they can extract the generalizable insights learned by a target model and use them to scrutinize model strengths and weaknesses. While our proposed approach exhibits outstanding performance in explaining individual decisions and provides users with the ability to discover model weaknesses, its performance may degrade when applied to interpreting temporal learning models (e.g., recurrent neural networks). This is because our approach treats features as independent, whereas time-series analysis deals with temporally dependent features. As part of future work, we will therefore equip our approach with the ability to dissect temporal learning models.

Acknowledgments

We gratefully acknowledge the funding from NSF grant CNS-1718459 and the support of NVIDIA Corporation with the donation of the GPU. We also would like to thank the anonymous reviewers, Kaixuan Zhang, Xinran Li and Chenxin Ma for their helpful comments.
1. What is the main contribution of the paper, and what are the concerns regarding the approach?
2. How does the reviewer assess the prior distribution on the regression coefficients, and what are their questions regarding the implementation of the MCMC algorithm?
3. What are the issues with reproducibility in the experiments, and which hyperparameters are not specified?
4. How does the reviewer evaluate the link between the BNP mixture model and the subsequent sections on interpretability and explainability?
5. Are there any unclear or vague statements in the paper that need further explanation?
Review
Review The goal of the paper is to provide a methodology to explain deep learning models in generality. The approach is based on Bayesian nonparametric (BNP) methods, more specifically on a BNP regression mixture model with a (truncated) Dirichlet process mixing measure. The results of the linear regression model are used to decide which features are important, grouping them to explain the decisions taken by the deep model. I have some concerns about the prior distribution on the regression coefficients \beta_j's. This prior is defined in two different places, i.e., (4-5) and (6-7), but they are not consistent. Could you comment on why the variance for the prior on \beta_j is the same as the variance for the model, \sigma_j^2? Commenting on the choice of the covariance matrix in (6-7) would also be useful, as it does not seem to be common practice. The authors should comment on the implementation of the MCMC algorithm, such as running time, mixing properties, and convergence: ensuring good mixing of the chains in dimension p=28x28 does not seem trivial at all. Reproducibility of the experiments: many (hyper)parameters of the model are not specified (including R, L, V in line 144). Section 3 describes the technical details of the BNP mixture model. However, the link with the next sections is not clearly stated, so it is difficult to see how scrutability and explainability are derived from the BNP model results. Additionally, some phrasings are vague and difficult to understand. See, e.g., the paragraph on data dimensionality, lines 311-324.
NIPS
Title Explaining Deep Learning Models -- A Bayesian Non-parametric Approach

Abstract Understanding and interpreting how machine learning (ML) models make decisions have been a big challenge. While recent research has proposed various technical approaches to provide some clues as to how an ML model makes individual predictions, they cannot provide users with an ability to inspect a model as a complete entity. In this work, we propose a novel technical approach that augments a Bayesian non-parametric regression mixture model with multiple elastic nets. Using the enhanced mixture model, we can extract generalizable insights for a target model through a global approximation. To demonstrate the utility of our approach, we evaluate it on different ML models in the context of image recognition. The empirical results indicate that our proposed approach not only outperforms the state-of-the-art techniques in explaining individual decisions but also provides users with an ability to discover the vulnerabilities of the target ML models.

1 Introduction

When compared with relatively simple learning techniques such as decision trees and K-nearest neighbors, it is well acknowledged that complex learning models – particularly, deep neural networks (DNN) – usually demonstrate superior performance in classification and prediction. However, they are almost completely opaque, even to the engineers that build them [20]. Presumably as such, they have not yet been widely adopted in critical problem domains, such as diagnosing deadly diseases [13] and making million-dollar trading decisions [14]. To address this problem, prior research proposes to derive an interpretable explanation for the output of a DNN. With that, people could understand, trust and effectively manage a deep learning model. From a technical perspective, this can be interpreted as pinpointing the most important features in the input of a deep learning model. In the past, the techniques designed and developed primarily focus on two kinds of methods – (1) whitebox explanation, which derives an interpretation for a deep learning model through a forward or backward propagation approach [26, 36], and (2) blackbox explanation, which infers explanations for individual decisions through local approximation [21, 23]. While both demonstrate great potential to help users interpret an individual decision, they lack an ability to extract insights from the target ML model that could be generalized to future cases. In other words, existing methods could not shed light on the general sensitivity level of a target model to specific input dimensions and hence fall short in foreseeing when prediction errors might occur for future cases. In this work, we propose a new technical approach that not only explains an individual decision but, more importantly, extracts generalizable insights from the target model. As we will show in Section 4, we define such insights as the general sensitivity level of a target model to specific input dimensions. We demonstrate that model developers could use them to identify model strengths as well as model vulnerabilities. Technically, our approach introduces multiple elastic nets to a Bayesian non-parametric regression mixture model. Then, it utilizes this model to approximate a target model and thus derives its generalizable insights and explanations for its individual decisions. The rationale behind this approach is as follows.
A Bayesian non-parametric regression mixture model can approximate arbitrary probability densities with high accuracy [22]. As we will discuss in Section 3, with multiple elastic nets, we can augment a regression mixture model with the ability to extract patterns (generalizable insights) even from a learning model that takes as input data with different extents of correlation. Given the pattern, we can extrapolate the input features that are critical to the overall performance of an ML model. This information can be used to help one scrutinize a model's overall strengths and weaknesses. Besides extracting generalizable insights, the proposed model can also provide users with more understandable and accountable explanations. We will demonstrate this characteristic in Section 4.

2 Related Work

Most of the work related to model interpretation lies in demystifying complicated ML models through whitebox and blackbox mechanisms. Here, we summarize these works and discuss their limitations. Note that we do not include works that identify the training samples most responsible for a given prediction (e.g., [12, 15]) or works that build self-interpretable deep learning models [7, 33].

The whitebox mechanism augments a learning model with the ability to yield explanations for individual predictions. Generally, the techniques in this mechanism follow two lines of approach – (1) occluding a fraction of a single input sample and identifying what portions of the features are important for classification [4, 6, 17, 36, 37], and (2) computing the gradient of an output with respect to a given input sample and pinpointing what features are sensitive to the prediction of that sample [1, 8, 24, 25, 26, 29, 32]. While both can give users an explanation for a single decision that a learning model reaches, they are not sufficient to provide a global understanding of a learning model, nor capable of exposing its strengths and weaknesses. In addition, they typically cannot be applied to explaining the prediction outcomes of other ML models, because most techniques following this mechanism are designed for a specific ML model and require altering that model.

The blackbox mechanism treats an ML model as a black box and produces explanations by locally learning an interpretable model around a prediction. For example, LIME [23] and SHAP [21] are explanation techniques of this kind that sample perturbed instances around a single data sample and fit a linear model to perform local explanations. Going beyond the explanation of a single prediction, both can be extended to explain a model as a complete entity by selecting a small number of representative individual predictions and their explanations. However, explanations obtained through such approaches cannot describe the full mapping learned by an ML model. In this work, our proposed technique derives generalizable insights directly from a target model, which provides us with the ability to unveil model weaknesses and strengths.

3 Technical Approach

3.1 Background

A Bayesian non-parametric regression mixture model (i.e., mixture model for short) consists of multiple Gaussian distributions:

$$y_i \mid x_i, \Theta \sim \sum_{j=1}^{\infty} \pi_j N(y_i \mid x_i\beta_j, \sigma_j^2), \quad (1)$$

where $\Theta$ denotes the parameter set, $x_i \in \mathbb{R}^p$ is the $i$-th data sample of the sample feature matrix $X^T \in \mathbb{R}^{p \times n}$, and $y_i$ is the corresponding prediction in $y \in \mathbb{R}^n$, the vector of predictions for the $n$ samples.
$\pi_{1:\infty}$ are the probabilities tied to the distributions, with sum equal to 1, and $\beta_{1:\infty}$ and $\sigma^2_{1:\infty}$ represent the parameters of the regression models, with $\beta_j \in \mathbb{R}^p$ and $\sigma_j^2 \in \mathbb{R}$. In general, model (1) can be viewed as a combination of an infinite number of regression models and can be used to approximate any learning model with high accuracy. Given a learning model $g: \mathbb{R}^p \to \mathbb{R}$, we can therefore approximate $g(\cdot)$ with a mixture model using $\{X, y\}$, a set of data samples and their corresponding predictions obtained from model $g$, i.e., $y_i = g(x_i)$. For any data sample $x_i$, we can then identify a regression model $\hat{y}_i = x_i\beta_j + \epsilon_i$ which best approximates the local decision boundary near $x_i$.¹ Note that in this paper, we assume that a single mixture component is sufficient to approximate the local decision boundary around $x_i$. Although this assumption does not hold in some cases, the proposed model can be relaxed and extended to deal with them. More specifically, instead of directly assigning each instance to one mixture component, we can assign an instance at a mode level [10] (i.e., assign the instance to a combination of multiple mixture components). When explaining a single instance, we can then linearly combine the corresponding regression coefficients in a mode. Recent research [23] has demonstrated that such a linear regression model can be used to assess how the feature space affects a decision by inspecting the weights (model coefficients) of the features present in the input. As a result, similar to prior research [23], we can take this linear regression model to pinpoint the important features and treat them as an explanation for the corresponding individual decision.

In addition to the model approximation and explanation mentioned above, another characteristic of a mixture model is that it enables multiple training samples to share the same regression model and thus preserves only the dominant patterns in the data. With this, we can significantly reduce the number of explanations derived from the training data and utilize them as the generalizable insights of a target model.

3.2 Challenge and Technical Overview

Despite the great characteristics of a mixture model, it is still challenging to use it to derive generalizable insights or individual explanations. This is because a regression mixture model does not always guarantee success in model approximation, especially when dealing with samples with diverse feature correlations and data sparsity. To tackle this challenge, a natural reaction is to introduce an elastic net into a Bayesian regression mixture model. Past research [9, 18, 38] has demonstrated that an elastic net encourages a grouping effect among variables, so that highly correlated variables tend to be in or out of a mixture model together. Therefore, it can potentially augment the aforementioned method with the ability to deal with situations where the features of a high-dimensional sample are highly correlated. However, a key limitation of this approach can manifest, especially when it deals with samples with diverse feature correlation and data sparsity. In the following, we address this issue by establishing a Dirichlet process mixture model with multiple elastic nets (DMM-MEN). Different from previous research [35], our approach allows the regularization terms the flexibility to reduce to a lasso or ridge penalty for some sample categories, while maintaining the properties of the elastic net for others.
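To build intuition for this per-component flexibility, below is a minimal sketch of the elastic-net penalty. It is illustrative only: the paper places Gamma priors on each pair $(\lambda_{1,k}, \lambda_{2,k})$ rather than fixing the values as done here.

```python
import numpy as np

def elastic_net_penalty(beta, lam1, lam2):
    """lam1 * ||beta||_1 + lam2 * ||beta||_2^2.
    (lam1, 0) recovers the lasso penalty, (0, lam2) the ridge penalty."""
    return lam1 * np.abs(beta).sum() + lam2 * np.square(beta).sum()

beta = np.array([0.5, -1.2, 0.0, 2.0])
print(elastic_net_penalty(beta, 1.0, 0.0))  # pure lasso behavior
print(elastic_net_penalty(beta, 0.0, 1.0))  # pure ridge behavior
print(elastic_net_penalty(beta, 0.5, 0.5))  # elastic-net mixture

# With K such pairs, each mixture component k can favor a different
# sparsity/correlation regime, which is the idea behind DMM-MEN.
```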
With the multiple elastic nets, the model is able to capture the different levels of feature correlation and sparsity in the data. In the following, we provide more details of this hierarchical Bayesian non-parametric model.

3.3 Technical Details

Dirichlet Process Regression Mixture Model. As specified in Equation (1), the number of Gaussian distributions is infinite, which implies an infinite number of parameters to estimate. In practice, however, the number of available data samples is limited, and it is therefore necessary to restrict the number of distributions. To do this, a truncated Dirichlet process prior [11] can be applied, and Equation (1) can be written as

$$y_i \mid x_i, \Theta \sim \sum_{j=1}^{J} \pi_j N(y_i \mid x_i\beta_j, \sigma_j^2), \quad (2)$$

¹For multi-class classification tasks, this work approximates each class separately; thus $X$ denotes the samples in the same class and $g(X)$ represents the corresponding predictions. Given that $y$ is a probability vector, we conduct a logit transformation before fitting a regression mixture model.

where $J$ is the hyperparameter that specifies the upper bound on the number of mixture components. To estimate the parameters $\Theta$, a Bayesian non-parametric approach first models $\pi_{1:J}$ through a "stick-breaking" prior process. With such modeling, the parameters $\pi_{1:J}$ can be computed by

$$\pi_j = u_j \prod_{l=1}^{j-1}(1 - u_l) \quad \text{for } j = 2, \ldots, J-1, \quad (3)$$

with $\pi_1 = u_1$ and $\pi_J = 1 - \sum_{l=1}^{J-1}\pi_l$. Here, $u_l$ follows a beta prior distribution, $\mathrm{Beta}(1, \alpha)$, parameterized by $\alpha$, where $\alpha$ can be drawn from $\mathrm{Gamma}(e, f)$ with hyperparameters $e$ and $f$. To make the computation efficient, $\sigma_j^2$ is set to follow an inverse-Gamma prior, i.e., $\sigma_j^2 \sim \mathrm{Inv\text{-}Gamma}(a, b)$ with hyperparameters $a$ and $b$. Given $\sigma^2_{1:J}$, for a conventional Bayesian regression mixture model, $\beta_{1:J}$ can be drawn from the Gaussian distribution $N(m_\beta, \sigma_j^2 V_\beta)$ with hyperparameters $m_\beta$ and $V_\beta$.

As described above, using a mixture model to approximate a learning model, for any data sample we can identify a regression model that best approximates the prediction of that sample. This is because a mixture model can be interpreted as arising from a clustering procedure which depends on the underlying latent component indicators $z_{1:n}$. For each observation $(x_i, y_i)$, $z_i = j$ indicates that the observation was generated from the $j$-th Gaussian distribution, i.e., $y_i \mid z_i = j \sim N(x_i\beta_j, \sigma_j^2)$ with $P(z_i = j) = \pi_j$.

Dirichlet Process Mixture Model with Multiple Elastic Nets. Recall that a conventional mixture model has difficulty not only in dealing with high-dimensional data and highly correlated features but also in handling different types of data heterogeneity. We modify the conventional mixture model by resetting the prior distribution of $\beta_{1:J}$ to realize multiple elastic nets. Specifically, we first define the mixture distribution

$$P(\beta_j \mid \lambda_{1,1:K}, \lambda_{2,1:K}, \sigma_j^2) = \sum_{k=1}^{K} w_k f_k(\beta_j \mid \lambda_{1,k}, \lambda_{2,k}, \sigma_j^2), \quad (4)$$

where $K$ denotes the total number of component distributions and $w_{1:K}$ represent the component probabilities with $\sum_{k=1}^{K} w_k = 1$. We let the $w_k$'s follow a Dirichlet distribution, i.e., $w_1, w_2, \cdots, w_K \sim \mathrm{Dir}(1/K)$. Since we add elastic-net regularization to the regression coefficients $\beta_{1:J}$, instead of the aforementioned normal distribution we adopt the Orthant Gaussian distribution as the prior, following [9]. To be specific, each component density $f_k$ is an Orthant Gaussian, defined as

$$f_k(\beta_j \mid \lambda_{1,k}, \lambda_{2,k}, \sigma_j^2) \propto \Phi\left(-\frac{\lambda_{1,k}}{2\sigma\sqrt{\lambda_{2,k}}}\right)^{-p} \times \sum_{Z \in \mathcal{Z}} N\left(\beta_j \,\Big|\, -\frac{\lambda_{1,k}}{2\lambda_{2,k}} Z,\; \frac{\sigma_j^2}{\lambda_{2,k}} I_p\right) \mathbf{1}(\beta_j \in O_Z). \quad (5)$$
Here, $\lambda_{1,k}$ and $\lambda_{2,k}$ are a pair of parameters controlling the lasso and ridge regularization for the $k$-th component, respectively. We set both to follow Gamma conjugate priors, with $\lambda_{1,k} \sim \mathrm{Gamma}(R, V/2)$ and $\lambda_{2,k} \sim \mathrm{Gamma}(L, V/2)$, where $R$, $L$, and $V$ are hyperparameters. $\Phi(\cdot)$ is the cumulative distribution function of the univariate standard Gaussian distribution, and $\mathcal{Z} = \{-1, +1\}^p$ is the collection of all possible $p$-vectors with elements $\pm 1$. Let $Z_l = 1$ for $\beta_{jl} \geq 0$ and $Z_l = -1$ for $\beta_{jl} < 0$. Then, $O_Z \subset \mathbb{R}^p$ is determined by the vector $Z \in \mathcal{Z}$, indicating the corresponding orthant.

Given the prior $f_k$ defined in (5), it is difficult to compute the posterior distribution and sample from it. To obtain a simpler form, we use the scale mixture representation of the prior distribution (5). To be specific, we introduce latent variables $\tau_{1:p}$ and rewrite (5) in the following hierarchical form:²

$$\beta_j \mid \tau_j, \sigma_j^2, \lambda_{2,c_j} \sim N\left(\beta_j \,\Big|\, 0,\; \frac{\sigma_j^2}{\lambda_{2,c_j}} S_{\tau_j}\right), \quad (6)$$

$$\tau_j \mid \sigma_j^2, \lambda_{1,c_j}, \lambda_{2,c_j} \sim \prod_{l=1}^{p} \mathrm{Inv\text{-}Gamma}_{(0,1)}\left(\tau_{jl} \,\Big|\, \frac{1}{2},\; \frac{1}{2}\left(\frac{\lambda_{1,c_j}}{2\sigma_j\sqrt{\lambda_{2,c_j}}}\right)^2\right), \quad (7)$$

²More details about the derivation of the scale mixture representation and the proof of equivalence can be found in [9, 18].

where $\tau_j \in \mathbb{R}^p$ denotes the latent variables and $S_{\tau_j} \in \mathbb{R}^{p \times p}$, with $S_{\tau_j} = \mathrm{diag}(1 - \tau_{jl})$ for $l = 1, \cdots, p$. Similar to the component indicators $z_i$ introduced in the previous section, we introduce a set of latent regularization indicators $c_{1:J}$. For each parameter $\beta_j$, $c_j = k$ indicates that the parameter follows distribution $f_k(\cdot)$, with $P(c_j = k) = w_k$.

Posterior Computation and Post-MCMC Analysis. We develop a customized MCMC method involving a combination of Gibbs sampling and the Metropolis-Hastings algorithm for parameter inference [28]. Basically, it involves augmenting the model parameter space with the aforementioned mixture component indicators $z_{1:n}$ and $c_{1:J}$. These indicators enable simulation of the relevant conditional distributions for the model parameters. As the MCMC proceeds, they can be estimated from the relevant conditional posteriors, and thus we can jointly obtain posterior simulations for the model parameters and mixture component indicators. We provide the details of the posterior distributions and the implementation of the parameter updates in the supplementary material. Considering that fitting a mixture model with MCMC suffers from the well-known label-switching problem, we use the iterative relabeling algorithm introduced in [3].

4 Evaluation

Recall that the motivation of our proposed method is to increase the transparency of complex ML models, so that users can leverage our approach not only to understand an individual decision (explainability) but also to obtain insights into the strengths and vulnerabilities of the target model (scrutability). The experimental evaluation of the proposed method thus focuses on these two aspects – scrutability and explainability.

4.1 Scrutability

Methodology. As a first step, we utilize Keras [2] to train an MLP on the MNIST dataset [16] and CNNs to classify clothing images in the Fashion-MNIST dataset [34], respectively. These machine learning methods represent the techniques most commonly used for the corresponding classification tasks. We trained these models to achieve more than decent classification performance. We then treat these two models as our target models and apply our proposed approach to establish scrutability. We define the scrutability of an explanation method as the ability to distill generalizable insights from the model under examination.
In this work, generalizable insights refer to feature importance inferences that can be generalized across all cases. Admittedly, the fidelity of our proposed solution to the target model is an important prerequisite to any generalizable insights our solution extracts. In this section, we carry out experiments that empirically evaluate the fidelity while also demonstrating the scrutability of our solution. We apply the following procedures to obtain the experimental data.

1. Construct bootstrapped samples from the training data and nullify the top important pixels identified by our approach among positive cases, while replacing the same pixels in negative cases with the mean value of those features among positive samples.
2. Apply random pixel nullification/replacement to the same bootstrapped samples used in the previous step.
3. Construct test cases that register positive properties for the top important pixels while randomly assigning values to the remaining pixels.
4. Construct randomly created test cases (i.e., assigning random values to all pixels) as baseline samples for the new test cases.

We then compare the target models' classification performance among the synthetic samples crafted via the procedures above. The intuition behind this exercise is that, if the fidelity/scrutability of our proposed solution holds, we should see a significant impact on the classification accuracy. Moreover, the magnitude of the impact should significantly outweigh that observed from randomly manipulating features. In the following, we describe our experiment tactics and findings in greater detail.

Experimental Results. Figure 1 illustrates the generalizable insights (i.e., important pixels in the MNIST and Fashion-MNIST datasets) that our proposed solution distilled from the target MLP and CNN models, respectively. To validate the faithfulness of these insights and establish the fidelity of our proposed solution, we conduct the following experiment. First, bootstrapped samples, each containing a random draw of 30% of the original cases, are constructed from the MNIST and Fashion-MNIST datasets. For cases that are originally identified as positive for the corresponding classes by the target models (i.e., MLP and CNNs), we nullify the top 50/75/100/125/150 important features identified by our proposed solution, respectively, while forcing the value of the corresponding features in the negative samples to equal the mean value of those among the positive samples. These manipulated cases are then supplied to the target model, and we measure the proportion of cases that those models classify as positive under each condition. In addition, we apply the same perturbations to randomly selected 50/75/100/125/150 features in the same bootstrapped sample and measure the target model's positive classification rate after the manipulation as a baseline for comparison. We repeat this process 50 times for both datasets to account for the statistical uncertainty in the measured classification rate. Figure 3a, Figure 3b and the supplementary material showcase some of the aforementioned bootstrapped samples. Figure 2a and Figure 2b summarize the experimental results obtained from the procedures above. As illustrated in both figures, the classification rates of the target models on these perturbed samples are impacted dramatically once we start manipulating the top 50/75 important features (i.e., around 9% of the pixels in each image) identified by our proposed solution.
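Concretely, the nullification/replacement manipulation just described can be sketched as follows. This is illustrative only: `X_pos`/`X_neg` (flattened images), the per-pixel `importance` array, and `predict_fn` are hypothetical stand-ins for the actual experimental pipeline.

```python
import numpy as np

def manipulate(X_pos, X_neg, pixel_idx):
    """Zero out `pixel_idx` in positive cases; set the same pixels in
    negative cases to their mean value among the positives."""
    Xp, Xn = X_pos.copy(), X_neg.copy()
    Xp[:, pixel_idx] = 0.0
    Xn[:, pixel_idx] = X_pos[:, pixel_idx].mean(axis=0)
    return Xp, Xn

def run_test(predict_fn, X_pos, X_neg, importance, k, target, rng):
    """Compare the positive classification rate after manipulating the
    top-k important pixels versus k randomly selected pixels."""
    top = np.argsort(importance)[-k:]                     # insight-driven pixels
    rand = rng.choice(importance.size, k, replace=False)  # random baseline
    rates = {}
    for name, idx in [("top", top), ("random", rand)]:
        Xp, _ = manipulate(X_pos, X_neg, idx)             # positives only here
        rates[name] = (predict_fn(Xp).argmax(1) == target).mean()
    return rates  # expect rates["top"] to drop far below rates["random"]
```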
1. What is the primary contribution of the paper regarding interpreting complex ML algorithms?
2. Are there any recent related papers that propose ways of interpreting complex ML algorithms? If so, please mention them.
3. Is the idea of approximating a complex model with a less complex one new? If not, please provide references to previous works.
4. How does the proposed Bayesian non-parametric regression mixture model improve the performance of the interpretable model?
5. What are the weaknesses of the paper, particularly in terms of clarity and technical details?
6. How would you rate the overall quality of the paper?
Review
Review The paper proposes approximating a complex ML model with a less complicated one for easier interpretation, and proposes a specific model, a Bayesian non-parametric regression mixture model, to do this. Recent related papers that also propose ways of interpreting complex ML algorithms are:
• Piotr Dabkowski and Yarin Gal. Real time image saliency for black box classifiers. In Advances in Neural Information Processing Systems, pages 6970–6979, 2017.
• Springenberg, J., et al. "Striving for Simplicity: The All Convolutional Net." ICLR (workshop track). 2015.
• Koh, Pang Wei, and Percy Liang. "Understanding Black-box Predictions via Influence Functions." International Conference on Machine Learning. 2017.
In particular, the idea of approximating a complex model with a less complex one is not new:
• Wu, Mike, et al. "Beyond sparsity: Tree regularization of deep models for interpretability." AAAI 2018.
• Frosst, Nicholas, and Geoffrey Hinton. "Distilling a Neural Network Into a Soft Decision Tree." (2018).
To my knowledge, the key innovation of this paper is using a Bayesian non-parametric regression mixture model, which allows for better performance of the interpretable model, one of the main weaknesses of less complex approaches. The experiments used to evaluate the method were extensive and compared with current popular approaches. Regarding the clarity of the paper, some improvements are still possible. This is especially visible in the explanation of the proposed approach. The specific restrictions and modifications to the model seem very ad hoc and could be motivated better. Section 3.3 (Technical details) should be written more clearly and concisely. E.g., "where alpha can be drawn from Gamma(e,f), a Gamma conjugate prior with hyperparameters e and f." can be shortened to "where alpha can be drawn from Gamma(e,f)". Overall I think the paper proposes and evaluates a novel idea in a satisfactory way. Minor comments: better labeling for Figure 1. Edit after author rebuttal: I have updated my score to a strong 7. Ultimately I think the idea in the paper is novel and has potential. However, the presentation of this idea is unclear and vague enough to prevent an 8 for me.
NIPS
Title Noise2Same: Optimizing A Self-Supervised Bound for Image Denoising

Abstract Self-supervised frameworks that learn denoising models with merely individual noisy images have shown strong capability and promising performance in various image denoising tasks. Existing self-supervised denoising frameworks are mostly built upon the same theoretical foundation, where the denoising models are required to be J-invariant. However, our analyses indicate that the current theory and the J-invariance may lead to denoising models with reduced performance. In this work, we introduce Noise2Same, a novel self-supervised denoising framework. In Noise2Same, a new self-supervised loss is proposed by deriving a self-supervised upper bound of the typical supervised loss. In particular, Noise2Same requires neither J-invariance nor extra information about the noise model and can be used in a wider range of denoising applications. We analyze our proposed Noise2Same both theoretically and experimentally. The experimental results show that our Noise2Same consistently outperforms previous self-supervised denoising methods in terms of denoising performance and training efficiency.

1 Introduction

The quality of deep learning methods for signal reconstruction from noisy images, also known as deep image denoising, has benefited from advanced neural network architectures such as ResNet [8], U-Net [19] and their variants [29, 16, 26, 31, 25, 14]. While more powerful deep image denoising models are developed over time, the problem of data availability becomes more critical. Most deep image denoising algorithms are supervised methods that require matched pairs of noisy and clean images for training [27, 29, 2, 7]. The problem with these supervised methods is that, in many denoising applications, clean images are hard to obtain due to instrument or cost limitations. To overcome this problem, Noise2Noise [13] explores an alternative training framework, where pairs of noisy images are used for training. Here, each pair of noisy images should correspond to the same but unknown clean image. Note that Noise2Noise is basically still a supervised method, just with noisy supervision. Despite the success of Noise2Noise, its application scenarios are still limited, as pairs of noisy images are not available in some cases and may have registration problems. Recently, various denoising frameworks that can be trained on individual noisy images [23, 17, 28, 10, 1, 12] have been developed. These studies can be divided into two categories according to the amount of extra information required. Methods in the first category require the noise model to be known. For example, the simulation-based methods [17, 28] use the noise model to generate simulated noise and make individual noisy images noisier. Then a framework similar to Noise2Noise can be applied to train the model with pairs of the noisier image and the original noisy image. The limitation is obvious, as the noise model may be too complicated or even unavailable. On the other hand, algorithms in the second category target more general cases where only individual noisy images are available without any extra information [23, 10, 1, 12]. In this category, self-supervised learning [30, 6, 24] has been widely explored, such as Noise2Void [10], Noise2Self [1], and the convolutional blind-spot neural network [12].
Note that these self-supervised models can be improved as well if information about the noise model is given. For example, Laine et al. [12] and Krull et al. [11] propose Bayesian post-processing to utilize the noise model. However, with the proposed post-processing, these methods fall into the first category, where applicability is limited. In this work, we stick to the most general cases where only individual noisy images are provided and focus on the self-supervised framework itself, without any post-processing step. We note that all of these existing self-supervised denoising frameworks are built upon the same theoretical background, where the denoising models are required to be J-invariant (Section 2). We perform in-depth analyses of the J-invariance property and argue that it may lead to denoising models with reduced performance. Based on this insight, we propose Noise2Same, a novel self-supervised denoising framework, with a new theoretical foundation. Noise2Same comes with a new self-supervised loss, derived as a self-supervised upper bound of the typical supervised loss. In particular, Noise2Same requires neither J-invariance nor extra information about the noise model. We analyze the effect of the new loss theoretically and conduct thorough experiments to evaluate Noise2Same. Results show that our Noise2Same consistently outperforms previous self-supervised denoising methods.

2 Background and Related Studies

Self-Supervised Denoising with J-Invariant Functions. We consider the reconstruction of a noisy image $x \in \mathbb{R}^m$, where $m = (d\times)\,h \times w \times c$ depends on the spatial and channel dimensions. Let $y \in \mathbb{R}^m$ denote the clean image. Given noisy and clean image pairs $(x, y)$, supervised methods learn a denoising function $f: \mathbb{R}^m \to \mathbb{R}^m$ by minimizing the supervised loss $L(f) = \mathbb{E}_{x,y}\|f(x) - y\|^2$. When neither clean images nor paired noisy images are available, various self-supervised denoising methods have been developed [10, 1, 12] by assuming that the noise is zero-mean and independent among all dimensions. These methods are trained on individual noisy images to minimize the self-supervised loss $L(f) = \mathbb{E}_x\|f(x) - x\|^2$. In particular, in order to prevent the self-supervised training from collapsing into learning the identity function, Batson et al. [1] point out that the denoising function $f$ should be J-invariant, as defined below.

Definition 1. For a given partition $\mathcal{J} = \{J_1, \cdots, J_k\}$ ($|J_1| + \cdots + |J_k| = m$) of the dimensions of an image $x \in \mathbb{R}^m$, a function $f: \mathbb{R}^m \to \mathbb{R}^m$ is $\mathcal{J}$-invariant if $f(x)_J$ does not depend on $x_J$ for all $J \in \mathcal{J}$, where $f(x)_J$ and $x_J$ denote the values of $f(x)$ and $x$ on $J$, respectively.

Intuitively, J-invariance means that, when denoising $x_J$, $f$ only uses its context $x_{J^c}$, where $J^c$ denotes the complement of $J$. With a J-invariant function $f$, we have

$$\mathbb{E}_x\|f(x) - x\|^2 = \mathbb{E}_{x,y}\|f(x) - y\|^2 + \mathbb{E}_{x,y}\|x - y\|^2 - 2\,\mathbb{E}_{x,y}\langle f(x) - y,\, x - y\rangle \quad (1)$$

$$= \mathbb{E}_{x,y}\|f(x) - y\|^2 + \mathbb{E}_{x,y}\|x - y\|^2. \quad (2)$$

Here, the third term in Equation (1) becomes zero when $f$ is J-invariant and the zero-mean assumption about the noise holds [1]. We can see from Equation (2) that when $f$ is J-invariant, minimizing the self-supervised loss $\mathbb{E}_x\|f(x) - x\|^2$ indirectly minimizes the supervised loss $\mathbb{E}_{x,y}\|f(x) - y\|^2$. All existing self-supervised denoising methods [10, 1, 12] compute the J-invariant denoising function $f$ through a blind-spot network. Concretely, a subset $J$ of the dimensions is sampled from the noisy image $x$ as "blind spots". The blind-spot network $f$ is asked to predict the values of these "blind spots" based on the context $x_{J^c}$.
In other words, $f$ is blind on $J$. In previous studies, the blindness on $J$ is achieved in two ways. Specifically, Noise2Void [10] and Noise2Self [1] use masking, while the convolutional blind-spot neural network [12] shifts the receptive field. With the blind-spot network, the self-supervised loss $\mathbb{E}_x\|f(x) - x\|^2$ can be written as

$$L(f) = \mathbb{E}_J\mathbb{E}_x\|f(x_{J^c})_J - x_J\|^2. \quad (3)$$

While these methods have achieved good performance, our analysis in this work indicates that minimizing the self-supervised loss in Equation (3) with J-invariant $f$ is not optimal for self-supervised denoising. Based on this insight, we propose a novel self-supervised denoising framework, known as Noise2Same. In particular, our Noise2Same minimizes a new self-supervised loss without requiring the denoising function $f$ to be J-invariant.

Bayesian Post-Processing. From the probabilistic view, the blind-spot network $f$ attempts to model $p(y_J \mid x_{J^c})$, where the information from $x_J$ is not utilized, thus limiting the performance. This limitation can be overcome through Bayesian deep learning [9] if the noise model $p(x \mid y)$ is known, as proposed by [12, 11]. Specifically, they propose to compute the posterior by

$$p(y_J \mid x_J, x_{J^c}) \propto p(x_J \mid y_J)\, p(y_J \mid x_{J^c}), \quad \forall J \in \mathcal{J}. \quad (4)$$

Here, the prior $p(y_J \mid x_{J^c})$ is Gaussian, whose mean comes from the original outputs of the blind-spot network $f$ and whose variance is estimated by extra outputs added to $f$. The computation of the posterior is a post-processing step, which takes information from $x_J$ into consideration. Despite the improved performance, the Bayesian post-processing has limited applicability, as it requires the noise model $p(x_J \mid y_J)$ to be known. Besides, it assumes that a single type of noise is present for all dimensions. In practice, it is common to have unknown noise models, inconsistent noise, or combined noise of different types, where the Bayesian post-processing is no longer applicable. In contrast, our proposed Noise2Same can make use of the entire input image without any post-processing. Most importantly, Noise2Same does not require the noise model to be known and thus can be used in a much wider range of denoising applications.

3 Analysis of the J-Invariance Property

In this section, we analyze the J-invariance property and motivate our work. In Section 3.1, we experimentally show that the denoising functions trained through mask-based blind-spot methods are not strictly J-invariant. Next, in Section 3.2, we argue that minimizing $\mathbb{E}_x\|f(x) - x\|^2$ with J-invariant $f$ is not optimal for self-supervised denoising.

3.1 Mask-Based Blind-Spot Denoising: Is the Optimal Function Strictly J-Invariant?

We show that, in mask-based blind-spot approaches, the optimal denoising function obtained through training is not strictly J-invariant, which contradicts the theory behind these methods. As introduced in Section 2, mask-based blind-spot methods implement blindness on $J$ through masking. Original values on $J$ are masked out and replaced by other values. Concretely, in Equation (3), $x_{J^c}$ becomes $\mathbf{1}_{J^c} \cdot x + \mathbf{1}_J \cdot r$, where $r$ denotes the new values at the masked locations ($J$). As introduced in Section 2, Noise2Void [10] and Noise2Self [1] are the current mask-based blind-spot methods. The main difference between them is the choice of the replacement strategy, i.e., how to select $r$. Specifically, Noise2Void applies Uniform Pixel Selection (UPS) to randomly select $r$ from local neighbors of the masked locations, while Noise2Self directly uses a random value.
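A minimal sketch of the masked blind-spot loss in Equation (3) follows, using a Noise2Self-style random-value replacement (illustrative only; `f` stands for any denoising network applied to an image batch, and the mask fraction is an arbitrary choice):

```python
import numpy as np

def masked_blindspot_loss(f, x, mask_frac=0.005, rng=None):
    """Sample J, replace x_J with random values, and score f only on J."""
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(x.shape) < mask_frac           # J: the masked locations
    r = rng.uniform(x.min(), x.max(), size=x.shape)  # replacement values
    x_masked = np.where(mask, r, x)                  # x_{J^c} with r filled on J
    out = f(x_masked)
    return np.mean((out[mask] - x[mask]) ** 2)       # MSE restricted to J

# With an identity "network", the loss stays high: the identity cannot
# predict the replaced values from context, so masking rules it out.
x = np.random.default_rng(0).normal(size=(4, 64, 64))
print(masked_blindspot_loss(lambda z: z, x))
```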
Although the masking prevents $f$ from accessing the original values on $J$ during training, we point out that, during inference, $f$ still shows a weak dependency on the values on $J$, and thus does not strictly satisfy the J-invariance property. In other words, mask-based blind-spot methods do not guarantee the learning of a J-invariant function $f$. We conduct experiments to verify the above statement. Concretely, given a denoising function $f$ trained through mask-based blind-spot methods, we quantify the strictness of J-invariance by computing the following metric:

$$D(f) = \mathbb{E}_J\mathbb{E}_x\|f(x_{J^c})_J - f(x)_J\|^2 / |J|, \quad (5)$$

where $x$ is the raw noisy image and $x_{J^c}$ denotes the image whose values on $J$ are replaced with random Gaussian noise ($\sigma = 0.5$). Note that the replacement here is unrelated to the replacement strategy used in mask-based blind-spot methods. If the function $f$ is strictly J-invariant, $D(f)$ should be close to 0 for all $x$. A smaller $D(f)$ indicates a more J-invariant $f$. To mitigate mutual influences among the locations within $J$, we use stratified sampling [10] to sample $J$ and make the sampling sparse enough (at a proportion of 0.01%). $D(f)$ is computed on the output of $f$ before re-scaling back to [0, 255]. In our experiments, we compare $D(f)$ and the testing PSNR for $f$ trained with different replacement strategies and on different datasets.

Table 1 provides the comparison between functions $f$ trained with different replacement strategies on the BSD68 dataset [15]. We also include the scores of the convolutional blind-spot neural network [12] for reference, which guarantees strict J-invariance by shifting the receptive field, as discussed in Section 3.2. As expected, it has a close-to-zero $D(f)$, where the non-zero value comes from mutual influences among the locations within $J$ and from numerical precision. The large $D(f)$ for all the mask-based blind-spot methods indicates that J-invariance is not strictly guaranteed and that the strictness varies significantly across replacement strategies. We also compare results on different datasets with a fixed replacement strategy, as shown in Table 2. We can see that the dataset has a strong influence on the strictness of J-invariance as well. Note that such influences are not under the control of the denoising approach itself. In addition, although the results shown in Tables 1 and 2 are computed on the testing dataset at the end of training, similar trends with $D(f) \gg 0$ are observed during training.

Given the results in Tables 1 and 2, we draw our conclusions from two aspects. We first consider the mask together with the network $f$ as a J-invariant function $g$, i.e., $g(x) := f(\mathbf{1}_{J^c} \cdot x + \mathbf{1}_J \cdot r)$. In this case, the function $g$ is guaranteed to be J-invariant during training, and thus Equation (2) is valid. However, during testing, the mask is removed and a different, non-J-invariant function $f$ is used, because $f$ achieves better performance than $g$, according to [1]. This contradicts the theoretical results of [1]. On the other hand, we can consider the network $f$ and the mask separately and perform training and testing with the same function $f$. In this case, the use of the mask aims to help $f$ learn to be J-invariant during training so that Equation (2) becomes valid. However, our experiments show that $f$ is not strictly J-invariant either during training or at the end of training, indicating that Equation (2) is not valid.
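The strictness metric D(f) in Equation (5) can be sketched as follows (illustrative only; `f` is any denoising function, and the sparse-sampling and replacement parameters mirror the description above):

```python
import numpy as np

def j_invariance_gap(f, x, frac=1e-4, sigma=0.5, rng=None):
    """Replace a sparse subset J with Gaussian noise and measure how much
    the output changes on J; a strictly J-invariant f gives a value near 0."""
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(x.shape) < frac                       # sparse subset J
    x_rep = np.where(mask, rng.normal(0.0, sigma, x.shape), x)
    diff = f(x_rep) - f(x)
    return np.mean(diff[mask] ** 2)                         # D(f) on J only

x = np.random.default_rng(1).normal(size=(256, 256))
print(j_invariance_gap(lambda z: z, x))                 # identity: large D(f)
print(j_invariance_gap(lambda z: np.zeros_like(z), x))  # constant: D(f) = 0
```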
With findings interpreted from both aspects, we ask whether minimizing $\mathbb{E}_x \|f(x) - x\|^2$ with $\mathcal{J}$-invariant $f$ yields optimal performance for self-supervised denoising.

3.2 Shifting the Receptive Field: How Do Strictly $\mathcal{J}$-Invariant Models Perform?

We directly show that, with a strictly $\mathcal{J}$-invariant $f$, minimizing $\mathbb{E}_x \|f(x) - x\|^2$ does not necessarily lead to the best performance. Different from mask-based blind-spot methods, Laine et al. [12] propose the convolutional blind-spot neural network, which achieves the blindness on $J$ by shifting the receptive field (RF). Specifically, each pixel in the output image excludes its corresponding pixel in the input image from its receptive field. As values outside the receptive field cannot affect the output, the convolutional blind-spot neural network is strictly $\mathcal{J}$-invariant by design.

According to Table 1, the shift-RF method outperforms all the mask-based blind-spot approaches with Gaussian replacement strategies, indicating the advantage of strict $\mathcal{J}$-invariance. However, we notice that the UPS replacement strategy shows a different result. Here, a denoising function with less strict $\mathcal{J}$-invariance performs the best. One possible explanation is that the UPS replacement has a certain probability of replacing a masked location with its original value. This weakens the $\mathcal{J}$-invariance of the mask-based denoising model but boosts the performance by yielding a result that is equivalent to computing a linear combination of the noisy input and the output of a strictly $\mathcal{J}$-invariant blind-spot network [1]. This result shows that minimizing $\mathbb{E}_x \|f(x) - x\|^2$ with a strictly $\mathcal{J}$-invariant $f$ does not necessarily give the best performance. Further evidence is the Bayesian post-processing introduced in Section 2, which also makes the final denoising function not strictly $\mathcal{J}$-invariant while boosting the performance.

To conclude, we argue that minimizing $\mathbb{E}_x \|f(x) - x\|^2$ with $\mathcal{J}$-invariant $f$ can lead to a reduction in performance for self-supervised denoising. In this work, we propose a new self-supervised loss that does not require $\mathcal{J}$-invariance. In addition, our proposed method can take advantage of the information from the entire noisy input without any post-processing step or extra assumption about the noise.

4 The Proposed Noise2Same Method

In this section, we introduce Noise2Same, a novel self-supervised denoising framework. Noise2Same comes with a new self-supervised loss. In particular, Noise2Same requires neither $\mathcal{J}$-invariant denoising functions nor the noise models.

4.1 Noise2Same: A Self-Supervised Upper Bound without the $\mathcal{J}$-Invariance Requirement

As introduced in Section 2, the $\mathcal{J}$-invariance requirement sets the inner product term $\langle f(x) - y, x - y \rangle$ in Equation (1) to zero. The resulting Equation (2) shows that minimizing $\mathbb{E}_x \|f(x) - x\|^2$ with $\mathcal{J}$-invariant $f$ indirectly minimizes the supervised loss, leading to the current self-supervised denoising framework. However, we have pointed out that this framework yields reduced performance. In order to overcome this limitation, we propose to control the right side of Equation (2) with a self-supervised upper bound, instead of approximating $\langle f(x) - y, x - y \rangle$ to zero. The upper bound holds without requiring the denoising function $f$ to be $\mathcal{J}$-invariant.

Theorem 1. Consider a normalized noisy image $x \in \mathbb{R}^m$ (obtained by subtracting the mean and dividing by the standard deviation) and its ground truth signal $y \in \mathbb{R}^m$. Assume the noise is zero-mean and i.i.d. among all the dimensions, and let $J$ be a subset of $m$ dimensions uniformly sampled from the image $x$.
For any $f: \mathbb{R}^m \to \mathbb{R}^m$, we have

$$\mathbb{E}_{x,y} \left\| f(x) - y \right\|^2 + \mathbb{E}_{x,y} \left\| x - y \right\|^2 \le \mathbb{E}_x \left\| f(x) - x \right\|^2 + 2m\, \mathbb{E}_J \left[ \frac{\mathbb{E}_x \left\| f(x)_J - f(x_{J^c})_J \right\|^2}{|J|} \right]^{1/2}. \quad (6)$$

The proof of Theorem 1 is provided in Appendix A. With Theorem 1, we can perform self-supervised denoising by minimizing the right side of Inequality (6) instead. Following the theoretical result, we propose our new self-supervised denoising framework, Noise2Same, with the following self-supervised loss:

$$L(f) = \mathbb{E}_x \left\| f(x) - x \right\|^2 / m + \lambda_{\mathrm{inv}}\, \mathbb{E}_J \left[ \mathbb{E}_x \left\| f(x)_J - f(x_{J^c})_J \right\|^2 / |J| \right]^{1/2}. \quad (7)$$

This new self-supervised loss consists of two terms: a reconstruction mean squared error (MSE) $L_{\mathrm{rec}} = \mathbb{E}_x \|f(x) - x\|^2$ and the square root of an invariance MSE $L_{\mathrm{inv}} = \mathbb{E}_J \left( \mathbb{E}_x \|f(x)_J - f(x_{J^c})_J\|^2 / |J| \right)^{1/2}$. Intuitively, $L_{\mathrm{inv}}$ prevents our model from learning the identity function when minimizing $L_{\mathrm{rec}}$, without any requirement on $f$. In fact, by comparing $L_{\mathrm{inv}}$ with $D(f)$ in Equation (5), we can see that $L_{\mathrm{inv}}$ implicitly controls how strictly $f$ should be $\mathcal{J}$-invariant, avoiding the explicit $\mathcal{J}$-invariance requirement. We balance $L_{\mathrm{rec}}$ and $L_{\mathrm{inv}}$ with a positive scalar weight $\lambda_{\mathrm{inv}}$. By default, we set $\lambda_{\mathrm{inv}} = 2$ according to Theorem 1. In some cases, setting $\lambda_{\mathrm{inv}}$ to a different value according to the scale of the observed $L_{\mathrm{inv}}$ during training could help achieve better denoising performance.

Figure 1 compares our proposed Noise2Same with mask-based blind-spot denoising methods. Mask-based blind-spot denoising methods employ the self-supervised loss in Equation (3), where the reconstruction MSE $L_{\mathrm{rec}}$ is computed only on $J$. In contrast, our proposed Noise2Same computes $L_{\mathrm{rec}}$ between the entire noisy image $x$ and the output of the neural network $f(x)$. To compute the invariance term $L_{\mathrm{inv}}$, we still feed the masked noisy image $x_{J^c}$ to the neural network and compute the MSE between $f(x)$ and $f(x_{J^c})$ on $J$, i.e., between $f(x)_J$ and $f(x_{J^c})_J$. Note that, while Noise2Same also samples $J$ from $x$, it does not require $f$ to be $\mathcal{J}$-invariant. A minimal code sketch of this loss is given at the end of Section 4.2.

4.2 Analysis of the Invariance Term

The invariance term $L_{\mathrm{inv}}$ is a unique and important part of our proposed self-supervised loss. In this section, we further analyze the effect of this term. To make the analysis concrete, we perform the analysis based on an example case, where the noise model is given as additive Gaussian noise $N(0, \sigma^2)$. Note that the example is for analysis purposes only, and the application of our proposed Noise2Same does not require the noise model to be known.

Theorem 2. Consider a noisy image $x \in \mathbb{R}^m$ and its ground truth signal $y \in \mathbb{R}^m$. Assume the noise is i.i.d. among all the dimensions, and let $J$ be a subset of $m$ dimensions uniformly sampled from the image $x$. If the noise is additive Gaussian with zero mean and standard deviation $\sigma$, we have

$$\mathbb{E}_{x,y} \left\| f(x) - y \right\|^2 + \mathbb{E}_{x,y} \left\| x - y \right\|^2 \le \mathbb{E}_x \left\| f(x) - x \right\|^2 + 2m\sigma\, \mathbb{E}_J \left[ \frac{\mathbb{E}_x \left\| f(x)_J - f(x_{J^c})_J \right\|^2}{|J|} \right]^{1/2}. \quad (8)$$

The proof of Theorem 2 is provided in Appendix B. Note that the noisy image $x$ here does not require normalization as in Theorem 1. Compared to Theorem 1, the $\sigma$ from the noise model is added to balance the invariance term. As introduced in Section 4.1, the invariance term controls how strictly $f$ should be $\mathcal{J}$-invariant, and a higher weight on the invariance term pushes the model to learn a more strictly $\mathcal{J}$-invariant $f$. Therefore, Theorem 2 indicates that, when the noise is stronger with a larger $\sigma$, $f$ should be more strictly $\mathcal{J}$-invariant. Based on the definition of $\mathcal{J}$-invariance, a more strictly $\mathcal{J}$-invariant $f$ will depend more on the context $x_{J^c}$ and less on the noisy input $x_J$.
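Putting Equation (7) and Theorem 2 together, below is a minimal sketch of the Noise2Same objective (our illustration; the mask fraction and the Gaussian replacement on $J$ are assumptions). The default $\lambda_{\mathrm{inv}} = 2$ follows Theorem 1; when the noise is known to be Gaussian with standard deviation $\sigma$, Theorem 2 suggests $\lambda_{\mathrm{inv}} = 2\sigma$:

```python
import numpy as np

def noise2same_loss(f, x, fraction=0.005, lambda_inv=2.0, rng=None):
    """Equation (7): L_rec / m + lambda_inv * sqrt(L_inv), where L_rec is the
    full-image reconstruction MSE and L_inv is the invariance MSE on J."""
    rng = rng or np.random.default_rng()
    m = x.size
    out = f(x)                                 # full, unmasked forward pass
    l_rec = float(np.sum((out - x) ** 2)) / m  # reconstruction MSE over all pixels

    mask = rng.random(x.shape) < fraction      # sample the subset J
    x_rep = x.copy()                           # Gaussian replacement on J (assumption)
    x_rep[mask] = rng.normal(0.0, 0.5, size=int(mask.sum()))
    out_masked = f(x_rep)                      # masked forward pass
    n_j = max(int(mask.sum()), 1)
    l_inv = float(np.sum((out[mask] - out_masked[mask]) ** 2)) / n_j

    # No J-invariance is required of f; the invariance term only regularizes
    # how much f(x)_J may depend on x_J.
    return l_rec + lambda_inv * np.sqrt(l_inv)
```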
This implication of Theorem 2 is consistent with the findings in previous studies. Batson et al. [1] propose to compute a linear combination of the noisy image and the output of the blind-spot network as a post-processing step, leading to better performance. The weights in the linear combination are determined by the variance of the noise, with a higher weight given to the output of the blind-spot network when the noise variance is larger. Laine et al. [12] derive a similar result through the Bayesian post-processing. This explains how the invariance term in our proposed Noise2Same improves denoising performance.

However, a critical difference between our Noise2Same and previous studies is that the post-processing in [1, 12] cannot be performed when the noise model is unknown. On the contrary, Noise2Same is able to control how strictly $f$ should be $\mathcal{J}$-invariant through the invariance term, without any assumption about the noise or requirement on $f$. This property allows Noise2Same to be used in a much wider range of denoising tasks with unknown noise models, inconsistent noise, or combined noises of different types.

5 Experiments

We evaluate our Noise2Same on four datasets, including RGB natural images (ImageNet ILSVRC 2012 Val [21]), generated hand-written Chinese character images (HànZì [1]), physically captured 3D microscopy data (Planaria [27]), and grey-scale natural images (BSD68 [15]). The four datasets have different noise types and levels. The constructions of the four datasets are described in Appendix C.

5.1 Comparisons with Baselines

The baselines include traditional denoising algorithms (NLM [3], BM3D [5]), supervised methods (Noise2True, Noise2Noise [13]), and previous self-supervised methods (Noise2Void [10], Noise2Self [1], and the convolutional blind-spot neural network [12]). Note that we consider Noise2Noise a supervised model since it requires pairs of noisy images, where the supervision is noisy. While Noise2Void and Noise2Self are similar methods following the blind-spot approach, they mainly differ in the strategy of mask replacement. To be more specific, Noise2Void proposes to use Uniform Pixel Selection (UPS), and Noise2Self proposes to exclude the information of the masked pixel and use a random value over the range of the given image data. As an additional mask strategy using the local average excluding the center pixel (donut) is mentioned in [1], we also include it for comparison. We use the same neural network architecture for all deep learning methods. Detailed experimental settings are provided in Appendices D and E.

Note that ImageNet and HànZì have combined noises and Planaria has unknown noise models. As a result, the post-processing steps in Noise2Self [1] and the convolutional blind-spot neural network [12] are not applicable, as explained in Section 2. In order to make fair comparisons under the self-supervised category, we train and evaluate all models using only the images, without extra information about the noise. In this case, among self-supervised methods, only our Noise2Same and Noise2Void with the UPS replacement strategy can make use of information from the entire input image, as demonstrated in Section 3.2. We also include the complete version of the convolutional blind-spot neural network with post-processing, which is only applicable on BSD68, where the noise is not combined and the noise type is known. Following previous studies, we use Peak Signal-to-Noise Ratio (PSNR) as the evaluation metric, computed as sketched below.
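For completeness, PSNR can be computed with the standard definition below (the peak value is a parameter; this is generic code, not the paper's evaluation script):

```python
import numpy as np

def psnr(denoised, clean, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between a denoised image and its ground truth."""
    mse = np.mean((np.asarray(denoised, float) - np.asarray(clean, float)) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))
```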
The comparison results between our Noise2Same and the baselines in terms of PSNR on the four datasets are summarized in Table 3 and visualized in Figure 2 and Appendix F. The results show that our Noise2Same achieves remarkable improvements over previous self-supervised baselines on ImageNet, HànZì, and Planaria. In particular, on the ImageNet and HànZì datasets, our Noise2Same and Noise2Void demonstrate the advantage of utilizing information from the entire input image. Although the use of donut masking can achieve better performance on the BSD68 dataset, it leads to model collapse on the ImageNet dataset and hence can be unstable. On the other hand, the convolutional blind-spot neural network [12] suffers significant performance losses without the Bayesian post-processing, which requires information about noise models that are unknown here. We note that, in our fair settings, supervised methods still perform better than self-supervised models, especially on the Planaria and BSD68 datasets. One explanation is that the supervision usually carries extra information implicitly, such as information about the noise model. Here, we draw a conclusion different from Batson et al. [1]: there are still performance gaps between self-supervised and supervised denoising methods. Our Noise2Same moves one step towards closing the gap by proposing a new self-supervised denoising framework.

In addition to the performance, we compare the training efficiency among self-supervised methods as well. Specifically, we plot how the PSNR changes during training on the ImageNet dataset. We compare Noise2Same with Noise2Self and the convolutional blind-spot neural network. The plot shows that our Noise2Same has a convergence speed similar to that of the convolutional blind-spot neural network. On the other hand, as the mask-based method Noise2Self uses only a subset of output pixels to compute the loss function in each step, its training is expected to be slower [12].

5.2 Effect of the Invariance Term

In Section 4.2, we analyzed the effect of the invariance term using an example where the noise model is given as additive Gaussian noise. In this example, the variance of the noise controls the strictness of the optimal $f$ through the coefficient $\lambda_{\mathrm{inv}}$ of the invariance term. Here, we conduct experiments to verify this insight. Specifically, we construct four noisy datasets from the HànZì dataset with only additive Gaussian noise at different levels ($\sigma_{\mathrm{noise}} = 0.9, 0.7, 0.5, 0.3$). Then we train Noise2Same with $\lambda_{\mathrm{inv}} = 2\sigma_{\mathrm{loss}}$, varying $\sigma_{\mathrm{loss}}$ from 0.1 to 1.0 for each dataset. According to Theorem 2, the best performance on each dataset should be achieved when $\sigma_{\mathrm{loss}}$ is close to $\sigma_{\mathrm{noise}}$. The results, as reported in Figure 4, are consistent with our theoretical results in Theorem 2.

6 Conclusion and Future Work

We analyzed the existing blind-spot-based denoising methods and introduced Noise2Same, a novel self-supervised denoising method, which removes the assumption of and over-restriction to a $\mathcal{J}$-invariant neural network. We provided further analysis of Noise2Same and experimentally demonstrated its denoising capability. As an orthogonal line of work, combining the self-supervised denoising result with the noise model has been shown to provide additional performance gains. We would like to further explore noise model-augmented Noise2Same in future work.

Broader Impact

In this paper, we introduce Noise2Same, a self-supervised framework for deep image denoising.
As Noise2Same needs neither paired clean data, paired noisy data, nor the noise model, its application scenarios could be much broader than those of both traditional supervised and existing self-supervised denoising frameworks. The most direct application of Noise2Same is to perform denoising on digital images captured under poor conditions. Individuals and corporations related to photography may benefit from our work. Besides, Noise2Same could be applied as a pre-processing step for computer vision tasks such as object detection and segmentation [18], making the downstream algorithms more robust to noisy images. Specific research communities could benefit from the development of Noise2Same as well. For example, the capture of high-quality microscopy data of live cells, tissue, or nanomaterials is expensive in terms of budget and time [27]. Proper denoising algorithms allow researchers to obtain high-quality data from low-quality data and hence remove the need to capture high-quality data directly. In addition to image denoising applications, the self-supervised denoising framework could be extended to other domains such as audio noise reduction and single-cell data [1]. On the negative side, as many imaging-based research tasks and computer vision applications may be built upon denoising algorithms, failures of Noise2Same could potentially lead to biases or failures in these tasks and applications.

Acknowledgments and Disclosure of Funding

This work was supported in part by National Science Foundation grant DBI-2028361.
1. What is the main contribution of the paper regarding self-supervised denoising? 2. What are the strengths of the proposed method, particularly in avoiding the identity mapping and keeping center pixel values accessible? 3. What are the weaknesses of the paper, especially in terms of the experimental comparisons and the confusion in its claims? 4. Do you have any concerns about the modification of the original implementations in the paper's comparison? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper argues that denoising models trained through mask-based blind-spot approaches are not strictly J-invariant, and that minimizing the self-supervised loss with a J-invariant function is not optimal for self-supervised denoising. A regularization term that serves as a relaxed J-invariance measure of a model is derived from a self-supervised upper bound. The regularization term is then combined with a non-masked self-supervised reconstruction loss for training the model. Such a method can avoid learning the identity mapping while keeping the center pixel value accessible during self-supervised reconstruction training. This makes it different from existing methods, which achieve strict J-invariance during training with blind-spot schemes or specific network designs. The proposed method also shows slightly higher efficiency than the existing blind-spot-based approaches.

Strengths 1. The proposed J-invariance regularization term implicitly avoids learning the identity mapping while keeping the center pixel value accessible to the model during self-supervised reconstruction training, which makes it different from existing approaches. 2. Theoretical analysis of the proposed regularization term is provided. 3. A simple yet intuitive self-supervised loss is proposed. The center pixel value is very useful for denoising. Blind-spot methods like N2V cannot access the center pixel values, so the way they learn to exploit the center pixel for denoising is not optimal. Specific network architectures with strict J-invariance cannot access the center pixel value during either training or testing, which is not optimal either, and some post-processing is needed. This paper provides a nice way to learn the dependency on the center pixel value for denoising, in which the center pixel value is directly accessible.

Weaknesses With a double check on the paper after reading the rebuttal, I have the following concerns. 1. The major one is the correctness of the experimental comparison. I would like to share the results of the two most related blind-spot-based methods, N2S and Laine et al. [12], in their original implementations. The results tell a quite different story from what is shown in the paper, which used versions with the authors' own modifications. The original implementations of both N2S [1] and Laine et al. [12] achieve much better results on BSD68, by more than 1.2 dB PSNR, than those reported in the paper. Concretely, we use the model trained with the original authors' code and training scheme on BSD400, the same training dataset used in this paper. The test results on BSD68 are as follows: (1) The original N2S can achieve a PSNR of 28.12 dB vs. the 26.98 dB reported in this paper with the authors' modifications. (2) The original Laine et al. [12] can achieve a PSNR of 28.84 dB vs. the 27.15 dB reported in the paper with the authors' modifications. In short, the original implementations of N2S and [12] noticeably outperform the proposed method. There is a big gap between the results using the original implementations and those reported in the paper. In the supplementary materials, it seems that the authors modified the original implementations (not sure, as the description is not very clear). To be honest, I do not see good reasons to report only the results from the modified implementations of the original papers, which are much lower than the results from their original implementations.
***********************************************************************
2.
The claim in Section 3.1 is rather confusing. It says that "the denoising function f trained through mask-based blind-spot approaches is not strictly J-invariant, making Equation (2) not valid". Also, the rebuttal says that "it is stated that the results in Table 1 indicate that the model f does not have the J-invariance, thus violating the assumption behind using the loss in Eqn. (3)." In N2S, during training, the denoising function f is J-invariant if we view f as the function equipped with masking. Thus, it does not violate Equation (2). Further, since Equation (2) is the loss for training and is not used at test time, relating the J-invariance of a trained model at test time to the conditions of Equations (2) and (3) for training does not make sense.
NIPS
1. What is the focus and contribution of the paper on image denoising? 2. What are the strengths of the proposed approach, particularly in terms of the objective function and its mathematical derivation? 3. What are the weaknesses of the paper, especially regarding computational expense and limitation in incorporating noise models? 4. Do you have any concerns or questions about the method's ability to leverage the value of the noisy pixel without collapsing the denoising function into identity? 5. How do you assess the novelty and relevance of the proposed work compared to other self-supervised methods?
Summary and Contributions Strengths Weaknesses
Summary and Contributions After Rebuttal Summary: During the review phase, I did not realize that the PSNR scores of Laine et al. and N2S reported in the paper are lower than those reported in the original papers. I thank R1 for pointing this out. Given this new observation, I'm afraid that the conclusions in Section 3.2, and hence the motivation of the paper, no longer hold. However, I still think that characterizing the weak dependencies that networks trained with masking have on the center pixel, and a new objective function that learns these dependencies systematically, are interesting. The paper can be accepted if the inaccuracies in the results are fixed. I've updated my score to a weak accept.
===========
The authors propose a new framework to train regular (non-J-invariant) neural networks for image denoising without ground truth. They do this via a new objective function that combines the MSE between the network output and the noisy image with another term that enforces that the network doesn't use the noisy pixel it is denoising. This loss function enables one to leverage the value of the noisy pixel being denoised without collapsing the denoising function into the identity.

Strengths The proposed objective function is very straightforward and intuitive. The authors combine the MSE between the network output and the noisy image with the MSE between the network output on the masked noisy image and, again, the noisy image as the target. The authors support this cost function by mathematically deriving that it is an upper bound of the supervised loss (eq. 6). The proposed cost function works out of the box for different noise types, even when we don't know the distribution. To the best of my knowledge, the proposed work is novel. The work is relevant to the NeurIPS community and of practical importance for denoising.

Weaknesses The proposed method is computationally expensive during training, but this is a drawback suffered by all masking-based methods. Further, knowing the noise model may give an easy prior to incorporate into the framework, which I currently see no way of doing in this framework. Overall, in my opinion, the work doesn't have any significant limitations compared to the existing self-supervised methods.
NIPS
Title Noise2Same: Optimizing A Self-Supervised Bound for Image Denoising Abstract Self-supervised frameworks that learn denoising models with merely individual noisy images have shown strong capability and promising performance in various image denoising tasks. Existing self-supervised denoising frameworks are mostly built upon the same theoretical foundation, where the denoising models are required to be J -invariant. However, our analyses indicate that the current theory and the J -invariance may lead to denoising models with reduced performance. In this work, we introduce Noise2Same, a novel self-supervised denoising framework. In Noise2Same, a new self-supervised loss is proposed by deriving a self-supervised upper bound of the typical supervised loss. In particular, Noise2Same requires neither J -invariance nor extra information about the noise model and can be used in a wider range of denoising applications. We analyze our proposed Noise2Same both theoretically and experimentally. The experimental results show that our Noise2Same consistently outperforms previous self-supervised denoising methods in terms of denoising performance and training efficiency. 1 Introduction The quality of deep learning methods for signal reconstruction from noisy images, also known as deep image denoising, has benefited from the advanced neural network architectures such as ResNet [8], U-Net [19] and their variants [29, 16, 26, 31, 25, 14]. While more powerful deep image denoising models are developed over time, the problem of data availability becomes more critical. Most deep image denoising algorithms are supervised methods that require matched pairs of noisy and clean images for training [27, 29, 2, 7]. The problem of these supervised methods is that, in many denoising applications, the clean images are hard to obtain due to instrument or cost limitations. To overcome this problem, Noise2Noise [13] explores an alternative training framework, where pairs of noisy images are used for training. Here, each pair of noisy images should correspond to the same but unknown clean image. Note that Noise2Noise is basically still a supervised method, just with noisy supervision. Despite the success of Noise2Noise, its application scenarios are still limited as pairs of noisy images are not available in some cases and may have registration problems. Recently, various of denoising frameworks that can be trained on individual noisy images [23, 17, 28, 10, 1, 12] have been developed. These studies can be divided into two categories according to the amount of extra information required. Methods in the first category requires the noise model to be known. For example, the simulation-based methods [17, 28] use the noise model to generate simulated noises and make individual noisy images noisier. Then a framework similar to Noise2Noise can be applied to train the model with pairs of noisier image and the original noisy image. The limitation is obvious as the noise model may be too complicated or even not available. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. On the other hand, algorithms in the second category target at more general cases where only individual noisy images are available without any extra information [23, 10, 1, 12]. In this category, self-supervised learning [30, 6, 24] has been widely explored, such as Noise2Void [10], Noise2Self [1], and the convolutional blind-spot neural network [12]. 
Note that these self-supervised models can be improved as well if information about the noise model is given. For example, Laine et al. [12] and Krull et al. [11] propose the Bayesian post-processing to utilize the noise model. However, with the proposed post-processing, these methods fall into the first category where applicability is limited. In this work, we stick to the most general cases where only individual noisy images are provided and focus on the self-supervised framework itself without any post-processing step. We note that all of these existing self-supervised denoising frameworks are built upon the same theoretical background, where the denoising models are required to be J -invariant (Section 2). We perform in-depth analyses on the J -invariance property and argue that it may lead to denoising models with reduced performance. Based on this insight, we propose Noise2Same, a novel self-supervised denoising framework, with a new theoretical foundation. Noise2Same comes with a new self-supervised loss by deriving a self-supervised upper bound of the typical supervised loss. In particular, Noise2Same requires neither J -invariance nor extra information about the noise model. We analyze the effect of the new loss theoretically and conduct thorough experiments to evaluate Noise2Same. Result show that our Noise2Same consistently outperforms previous self-supervised denoising methods. 2 Background and Related Studies Self-Supervised Denoising with J -Invariant Functions. We consider the reconstruction of a noisy image x 2 Rm, where m = (d⇥)h⇥w⇥c depends on the spatial and channel dimensions. Let y 2 Rm denotes the clean image. Given noisy and clean image pairs (x,y), supervised methods learn a denoising function f : Rm ! Rm by minimizing the supervised loss L(f) = Ex,y kf(x) yk2. When neither clean images nor paired noisy images are available, various self-supervised denoising methods have been developed [10, 1, 12] by assuming that the noise is zero-mean and independent among all dimensions. These methods are trained on individual noisy images to minimize the self-supervised loss L(f) = Ex kf(x) xk2. Particularly, in order to prevent the self-supervised training from collapsing into leaning the identity function, Batson et al. [1] point out that the denoising function f should be J -invariant, as defined below. Definition 1. For a given partition J = {J1, · · · , Jk} (|J1|+ · · ·+ |Jk| = m) of the dimensions of an image x 2 Rm, a function f : Rm ! Rm is J -invariant if f(x)J does not depend on xJ for all J 2 J , where f(x)J and xJ denotes the values of f(x) and x on J , respectively. Intuitively, J -invariance means that, when denoising xJ , f only uses its context xJc , where Jc denotes the complement of J . With a J -invariant function f , we have Ex kf(x) xk2 = Ex,y kf(x) yk2 + Ex,y kx yk2 2 hf(x) y,x yi (1) = Ex,y kf(x) yk2 + Ex,y kx yk2 . (2) Here, the third term in Equation (1) becomes zero when f is J -invariant and the zero-mean assumption about the noise holds [1]. We can see from Equation (2) that when f is J -invariant, minimizing the self-supervised loss Ex kf(x) xk2 indirectly minimizes the supervised loss Ex,y kf(x) yk2. All existing self-supervised denoising methods [10, 1, 12] compute the J -invariant denoising function f through a blind-spot network. Concretely, a subset J of the dimensions are sampled from the noisy image x as “blind spots”. The blind-spot network f is asked to predict the values of these “blind spots” based on the context xJc . 
In other words, f is blind on J . In previous studies, the blindness on J is achieved in two ways. Specifically, Noise2Void [10] and Noise2Self [1] use masking, while the convolutional blind-spot neural network [12] shifts the receptive field. With the blind-spot network, the self-supervised loss Ex kf(x) xk2 can be written as L(f) = EJEx kf(xJc)J xJk2 . (3) While these methods have achieved good performance, our analysis in this work indicates that minimizing the self-supervised loss in Equation (3) with J -invariant f is not optimal for selfsupervised denoising. Based on this insight, we propose a novel self-supervised denoising framework, known as Noise2Same. In particular, our Noise2Same minimizes a new self-supervised loss without requiring the denoising function f to be J -invariant. Bayesian Post-Processing. From the probabilistic view, the blind-spot network f attempts to model p(yJ |xJc), where the information from xJ is not utilized thus limiting the performance. This limitation can be overcome through the Bayesian deep learning [9] if the noise model p(x|y) is known, as proposed by [12, 11]. Specifically, they propose to compute the posterior by p(yJ |xJ ,xJc) / p(xJ |yJ) p(yJ |xJc), 8J 2 J . (4) Here, the prior p(yJ |xJc) is Gaussian, whose the mean comes from the original outputs of the blind-spot network f and the variance is estimated by extra outputs added to f . The computation of the posterior is a post-processing step, which takes information from xJ into consideration. Despite the improved performance, the Bayesian post-processing has limited applicability as it requires the noise model p(xJ |yJ) to be knwon. Besides, it assumes that a single type of noise is present for all dimensions. In practice, it is common to have unknown noise models, inconsistent noises, or combined noises with different types, where the Bayesian post-processing is no longer applicable. In contrast, our proposed Noise2Same can make use of the entire input image without any postprocessing. Most importantly, Noise2Same does not require the noise model to be known and thus can be used in a much wider range of denoising applications. 3 Analysis of the J -Invariance Property In this section, we analyze the J -invariance property and motivate our work. In section 3.1, we experimentally show that the denoising functions trained through mask-based blind-spot methods are not strictly J -invariant. Next, in Section 3.2, we argue that minimizing Ex kf(x) xk2 with J -invariant f is not optimal for self-supervised denoising. 3.1 Mask-Based Blind-Spot Denoising: Is the Optimal Function Strictly J -Invariant? We show that, in mask-based blind-spot approaches, the optimal denoising function obtained through training is not strictly J -invariant, which contradicts the theory behind these methods. As introduced in Section 2, mask-based blind-spot methods implement blindness on J through masking. Original values on J are masked out and replaced by other values. Concretely, in Equation (3), xJc becomes 1Jc · x + 1J · r, where r denotes the new values on the masked locations (J). As introduced in Section 2, Noise2Void [10] and Noise2Self [1] are current mask-based blind-spot methods. The main difference between them is the choice of the replacement strategy, i.e., how to select r. Specifically, Noise2Void applies the Uniform Pixel Selection (UPS) to randomly select r from local neighbors of the masked locations, while Noise2Self directly uses a random value. 
Although the masking prevents $f$ from accessing the original values on $J$ during training, we point out that, during inference, $f$ still shows a weak dependency on the values on $J$, and thus does not strictly satisfy the $\mathcal{J}$-invariance property. In other words, mask-based blind-spot methods do not guarantee the learning of a $\mathcal{J}$-invariant function $f$. We conduct experiments to verify the above statement. Concretely, given a denoising function $f$ trained through mask-based blind-spot methods, we quantify the strictness of $\mathcal{J}$-invariance by computing the following metric:

$D(f) = \mathbb{E}_J \mathbb{E}_x \|f(x_{J^c})_J - f(x)_J\|^2 / |J|$,  (5)

where $x$ is the raw noisy image and $x_{J^c}$ denotes the image whose values on $J$ are replaced with random Gaussian noise ($\sigma_m = 0.5$). Note that the replacement here is unrelated to the replacement strategy used in mask-based blind-spot methods. If the function $f$ is strictly $\mathcal{J}$-invariant, $D(f)$ should be close to 0 for all $x$; a smaller $D(f)$ indicates a more $\mathcal{J}$-invariant $f$. To mitigate mutual influences among the locations within $J$, we use stratified sampling [10] to sample $J$ and make the sampling sparse enough (at a proportion of 0.01%). $D(f)$ is computed on the output of $f$ before re-scaling back to [0, 255]. In our experiments, we compare $D(f)$ and the testing PSNR for $f$ trained with different replacement strategies and on different datasets.

Table 1 provides the comparison results between $f$ trained with different replacement strategies on the BSD68 dataset [15]. We also include the scores of the convolutional blind-spot neural network [12] for reference, which guarantees strict $\mathcal{J}$-invariance by shifting the receptive field, as discussed in Section 3.2. As expected, it has a close-to-zero $D(f)$, where the non-zero value comes from mutual influences among the locations within $J$ and from numerical precision. The large $D(f)$ values for all the mask-based blind-spot methods indicate that $\mathcal{J}$-invariance is not strictly guaranteed and that the strictness varies significantly over different replacement strategies. We also compare results on different datasets when we fix the replacement strategy, as shown in Table 2. We can see that different datasets have strong influences on the strictness of $\mathcal{J}$-invariance as well. Note that such influences are not under the control of the denoising approach itself. In addition, although the results shown in Tables 1 and 2 are computed on the testing dataset at the end of training, similar trends with $D(f) \gg 0$ are observed during training.

Given the results in Tables 1 and 2, we draw our conclusions from two aspects. We first consider the mask together with the network $f$ as a $\mathcal{J}$-invariant function $g$, i.e., $g(x) := f(\mathbf{1}_{J^c} \cdot x + \mathbf{1}_J \cdot r)$. In this case, the function $g$ is guaranteed to be $\mathcal{J}$-invariant during training, and thus Equation (2) is valid. However, during testing, the mask is removed and a different, non-$\mathcal{J}$-invariant function $f$ is used, because $f$ achieves better performance than $g$, according to [1]. This contradicts the theoretical results of [1]. On the other hand, we can consider the network $f$ and the mask separately and perform training and testing with the same function $f$. In this case, the use of the mask aims to help $f$ learn to be $\mathcal{J}$-invariant during training so that Equation (2) becomes valid. However, our experiments show that $f$ is not strictly $\mathcal{J}$-invariant either during training or at the end of training, indicating that Equation (2) is not valid.
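As a concrete illustration, the metric $D(f)$ in Equation (5) can be estimated as in the sketch below. The uniform Monte Carlo sampling of $J$, the number of samples, and the function names are our own simplifying assumptions; the paper itself uses sparse stratified sampling.

import torch

def j_invariance_gap(model, x, portion=1e-4, sigma=0.5, n_samples=16):
    # Estimate D(f) from Equation (5): how much f's output on J changes
    # when the input values on J are replaced with Gaussian noise.
    with torch.no_grad():
        fx = model(x)                                      # f(x)
        gaps = []
        for _ in range(n_samples):
            mask = (torch.rand_like(x) < portion).float()  # a sparse subset J
            x_repl = (1 - mask) * x + mask * sigma * torch.randn_like(x)
            fx_repl = model(x_repl)                        # f(x_{J^c})
            gaps.append(((fx_repl - fx) ** 2 * mask).sum() / mask.sum().clamp(min=1))
        return torch.stack(gaps).mean()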
With the findings interpreted from both aspects, we ask whether minimizing $\mathbb{E}_x\|f(x) - x\|^2$ with $\mathcal{J}$-invariant $f$ yields optimal performance for self-supervised denoising.

3.2 Shifting the Receptive Field: How Do Strictly $\mathcal{J}$-Invariant Models Perform?

We directly show that, with a strictly $\mathcal{J}$-invariant $f$, minimizing $\mathbb{E}_x\|f(x) - x\|^2$ does not necessarily lead to the best performance. Different from mask-based blind-spot methods, Laine et al. [12] propose the convolutional blind-spot neural network, which achieves the blindness on $J$ by shifting the receptive field (RF). Specifically, each pixel in the output image excludes its corresponding pixel in the input image from its receptive field. As values outside the receptive field cannot affect the output, the convolutional blind-spot neural network is strictly $\mathcal{J}$-invariant by design. According to Table 1, the shift-RF method outperforms all the mask-based blind-spot approaches with Gaussian replacement strategies, indicating the advantage of strict $\mathcal{J}$-invariance. However, we notice that the UPS replacement strategy shows a different result. Here, a denoising function with less strict $\mathcal{J}$-invariance performs the best. One possible explanation is that the UPS replacement has a certain probability of replacing a masked location by its original value. It weakens the $\mathcal{J}$-invariance of the mask-based denoising model but boosts the performance by yielding a result that is equivalent to computing a linear combination of the noisy input and the output of a strictly $\mathcal{J}$-invariant blind-spot network [1]. This result shows that minimizing $\mathbb{E}_x\|f(x) - x\|^2$ with a strictly $\mathcal{J}$-invariant $f$ does not necessarily give the best performance. Further evidence is the Bayesian post-processing introduced in Section 2, which also makes the final denoising function not strictly $\mathcal{J}$-invariant while boosting the performance.

To conclude, we argue that minimizing $\mathbb{E}_x\|f(x) - x\|^2$ with $\mathcal{J}$-invariant $f$ can reduce performance for self-supervised denoising. In this work, we propose a new self-supervised loss that does not require $\mathcal{J}$-invariance. In addition, our proposed method can take advantage of the information from the entire noisy input without any post-processing step or extra assumption about the noise.

4 The Proposed Noise2Same Method

In this section, we introduce Noise2Same, a novel self-supervised denoising framework. Noise2Same comes with a new self-supervised loss. In particular, Noise2Same requires neither $\mathcal{J}$-invariant denoising functions nor knowledge of the noise model.

4.1 Noise2Same: A Self-Supervised Upper Bound without the $\mathcal{J}$-Invariance Requirement

As introduced in Section 2, the $\mathcal{J}$-invariance requirement sets the inner product term $\langle f(x) - y,\, x - y\rangle$ in Equation (1) to zero. The resulting Equation (2) shows that minimizing $\mathbb{E}_x\|f(x) - x\|^2$ with $\mathcal{J}$-invariant $f$ indirectly minimizes the supervised loss, leading to the current self-supervised denoising framework. However, we have pointed out that this framework yields reduced performance. In order to overcome this limitation, we propose to control the right side of Equation (2) with a self-supervised upper bound, instead of approximating $\langle f(x) - y,\, x - y\rangle$ by zero. The upper bound holds without requiring the denoising function $f$ to be $\mathcal{J}$-invariant.

Theorem 1. Consider a normalized noisy image $x \in \mathbb{R}^m$ (obtained by subtracting the mean and dividing by the standard deviation) and its ground truth signal $y \in \mathbb{R}^m$. Assume the noise is zero-mean and i.i.d. among all the dimensions, and let $J$ be a subset of the $m$ dimensions uniformly sampled from the image $x$.
For any $f: \mathbb{R}^m \to \mathbb{R}^m$, we have

$\mathbb{E}_{x,y}\left[\|f(x) - y\|^2 + \|x - y\|^2\right] \le \mathbb{E}_x\|f(x) - x\|^2 + 2m\,\mathbb{E}_J\left[\frac{\mathbb{E}_x\|f(x)_J - f(x_{J^c})_J\|^2}{|J|}\right]^{1/2}$  (6)

The proof of Theorem 1 is provided in Appendix A. With Theorem 1, we can perform self-supervised denoising by minimizing the right side of Inequality (6) instead. Following the theoretical result, we propose our new self-supervised denoising framework, Noise2Same, with the following self-supervised loss:

$\mathcal{L}(f) = \mathbb{E}_x\|f(x) - x\|^2 / m + \lambda_{\mathrm{inv}}\, \mathbb{E}_J\left[\mathbb{E}_x\|f(x)_J - f(x_{J^c})_J\|^2 / |J|\right]^{1/2}$.  (7)

This new self-supervised loss consists of two terms: a reconstruction mean squared error (MSE) $\mathcal{L}_{\mathrm{rec}} = \mathbb{E}_x\|f(x) - x\|^2$ and the square root of an invariance MSE $\mathcal{L}_{\mathrm{inv}} = \mathbb{E}_J\left[\mathbb{E}_x\|f(x)_J - f(x_{J^c})_J\|^2 / |J|\right]^{1/2}$. Intuitively, $\mathcal{L}_{\mathrm{inv}}$ prevents our model from learning the identity function when minimizing $\mathcal{L}_{\mathrm{rec}}$, without imposing any requirement on $f$. In fact, by comparing $\mathcal{L}_{\mathrm{inv}}$ with $D(f)$ in Equation (5), we can see that $\mathcal{L}_{\mathrm{inv}}$ implicitly controls how strictly $f$ should be $\mathcal{J}$-invariant, avoiding the explicit $\mathcal{J}$-invariance requirement. We balance $\mathcal{L}_{\mathrm{rec}}$ and $\mathcal{L}_{\mathrm{inv}}$ with a positive scalar weight $\lambda_{\mathrm{inv}}$. By default, we set $\lambda_{\mathrm{inv}} = 2$ according to Theorem 1. In some cases, setting $\lambda_{\mathrm{inv}}$ to different values according to the scale of the observed $\mathcal{L}_{\mathrm{inv}}$ during training could help achieve better denoising performance.

Figure 1 compares our proposed Noise2Same with mask-based blind-spot denoising methods. Mask-based blind-spot denoising methods employ the self-supervised loss in Equation (3), where the reconstruction MSE $\mathcal{L}_{\mathrm{rec}}$ is computed only on $J$. In contrast, our proposed Noise2Same computes $\mathcal{L}_{\mathrm{rec}}$ between the entire noisy image $x$ and the output of the neural network $f(x)$. To compute the invariance term $\mathcal{L}_{\mathrm{inv}}$, we still feed the masked noisy image $x_{J^c}$ to the neural network and compute the MSE between $f(x)$ and $f(x_{J^c})$ on $J$, i.e., between $f(x)_J$ and $f(x_{J^c})_J$. Note that, while Noise2Same also samples $J$ from $x$, it does not require $f$ to be $\mathcal{J}$-invariant.
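The loss in Equation (7) translates into a short training-step sketch. The following is a minimal PyTorch-style rendering under our own assumptions (Gaussian replacement on the masked subset, a fixed masking ratio, illustrative function names), not the authors' exact implementation.

import torch

def noise2same_loss(model, x, lam_inv=2.0, mask_ratio=0.005):
    # Sketch of the Noise2Same objective in Equation (7).
    mask = (torch.rand_like(x) < mask_ratio).float()        # sampled subset J
    x_masked = (1 - mask) * x + mask * torch.randn_like(x)  # masked input x_{J^c}
    fx = model(x)                                           # full, unmasked forward pass
    fx_masked = model(x_masked)                             # f(x_{J^c})
    l_rec = ((fx - x) ** 2).mean()                          # E||f(x) - x||^2 / m
    l_inv = ((fx - fx_masked) ** 2 * mask).sum() / mask.sum().clamp(min=1)
    return l_rec + lam_inv * torch.sqrt(l_inv)              # L_rec/m + lam_inv * sqrt(L_inv)

Two design points stand out: the reconstruction term is computed over every pixel of the unmasked forward pass, and the square root on the invariance term follows directly from the bound in Inequality (6).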
4.2 Analysis of the Invariance Term

The invariance term $\mathcal{L}_{\mathrm{inv}}$ is a unique and important part of our proposed self-supervised loss. In this section, we further analyze the effect of this term. To make the analysis concrete, we perform the analysis on an example case where the noise model is given as additive Gaussian noise $N(0, \sigma)$. Note that the example is for analysis purposes only; applying our proposed Noise2Same does not require the noise model to be known.

Theorem 2. Consider a noisy image $x \in \mathbb{R}^m$ and its ground truth signal $y \in \mathbb{R}^m$. Assume the noise is i.i.d. among all the dimensions, and let $J$ be a subset of the $m$ dimensions uniformly sampled from the image $x$. If the noise is additive Gaussian with zero mean and standard deviation $\sigma$, we have

$\mathbb{E}_{x,y}\left[\|f(x) - y\|^2 + \|x - y\|^2\right] \le \mathbb{E}_x\|f(x) - x\|^2 + 2m\sigma\,\mathbb{E}_J\left[\frac{\mathbb{E}_x\|f(x)_J - f(x_{J^c})_J\|^2}{|J|}\right]^{1/2}$  (8)

The proof of Theorem 2 is provided in Appendix B. Note that the noisy image $x$ here does not require normalization as in Theorem 1. Compared to Theorem 1, the $\sigma$ from the noise model is added to balance the invariance term. As introduced in Section 4.1, the invariance term controls how strictly $f$ should be $\mathcal{J}$-invariant, and a higher weight on the invariance term pushes the model to learn a more strictly $\mathcal{J}$-invariant $f$. Therefore, Theorem 2 indicates that, when the noise is stronger with a larger $\sigma$, $f$ should be more strictly $\mathcal{J}$-invariant. Based on the definition of $\mathcal{J}$-invariance, a more strictly $\mathcal{J}$-invariant $f$ will depend more on the context $x_{J^c}$ and less on the noisy input $x_J$. This result is consistent with the findings in previous studies. Batson et al. [1] propose to compute a linear combination of the noisy image and the output of the blind-spot network as a post-processing step, leading to better performance. The weights in the linear combination are determined by the variance of the noise, with a higher weight given to the output of the blind-spot network when the noise variance is larger. Laine et al. [12] derive a similar result through the Bayesian post-processing. This explains how the invariance term in our proposed Noise2Same improves denoising performance.

However, a critical difference between our Noise2Same and previous studies is that the post-processing in [1, 12] cannot be performed when the noise model is unknown. On the contrary, Noise2Same is able to control how strictly $f$ should be $\mathcal{J}$-invariant through the invariance term, without any assumption about the noise or requirement on $f$. This property allows Noise2Same to be used in a much wider range of denoising tasks with unknown noise models, inconsistent noise, or combinations of different noise types.

5 Experiments

We evaluate our Noise2Same on four datasets, including RGB natural images (ImageNet ILSVRC 2012 Val [21]), generated hand-written Chinese character images (HànZì [1]), physically captured 3D microscopy data (Planaria [27]), and grey-scale natural images (BSD68 [15]). The four datasets have different noise types and levels. The constructions of the four datasets are described in Appendix C.

5.1 Comparisons with Baselines

The baselines include traditional denoising algorithms (NLM [3], BM3D [5]), supervised methods (Noise2True, Noise2Noise [13]), and previous self-supervised methods (Noise2Void [10], Noise2Self [1], and the convolutional blind-spot neural network [12]). Note that we consider Noise2Noise a supervised model since it requires pairs of noisy images, where the supervision is noisy. While Noise2Void and Noise2Self are similar methods following the blind-spot approach, they mainly differ in the strategy of mask replacement. To be more specific, Noise2Void proposes to use Uniform Pixel Selection (UPS), while Noise2Self proposes to exclude the information of the masked pixel and uses a random value in the range of the given image data. Since an additional masking strategy using the local average excluding the center pixel (donut) is mentioned in [1], we also include it for comparison. We use the same neural network architecture for all deep learning methods. Detailed experimental settings are provided in Appendices D and E.

Note that ImageNet and HànZì have combined noises and Planaria has an unknown noise model. As a result, the post-processing steps in Noise2Self [1] and the convolutional blind-spot neural network [12] are not applicable, as explained in Section 2. In order to make fair comparisons under the self-supervised category, we train and evaluate all models using only the images, without extra information about the noise. In this case, among self-supervised methods, only our Noise2Same and Noise2Void with the UPS replacement strategy can make use of information from the entire input image, as demonstrated in Section 3.2. We also include the complete version of the convolutional blind-spot neural network with post-processing, which is only available on BSD68, where the noise is not combined and the noise type is known. Following previous studies, we use Peak Signal-to-Noise Ratio (PSNR) as the evaluation metric.
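For reference, PSNR measures fidelity against the clean image on a logarithmic scale; a minimal sketch of the standard definition follows (the function name and NumPy interface are our own):

import numpy as np

def psnr(clean, denoised, data_range=255.0):
    # Peak Signal-to-Noise Ratio in dB for images with the given dynamic range.
    clean = np.asarray(clean, dtype=np.float64)
    denoised = np.asarray(denoised, dtype=np.float64)
    mse = np.mean((clean - denoised) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)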
The comparison results between our Noise2Same and the baselines in terms of PSNR on the four datasets are summarized in Table 3 and visualized in Figure 2 and Appendix F. The results show that our Noise2Same achieves remarkable improvements over previous self-supervised baselines on ImageNet, HànZì, and Planaria. In particular, on the ImageNet and HànZì datasets, our Noise2Same and Noise2Void demonstrate the advantage of utilizing information from the entire input image. Although the use of donut masking can achieve better performance on the BSD68 dataset, it leads to model collapse on the ImageNet dataset and hence can be unstable. On the other hand, the convolutional blind-spot neural network [12] suffers from significant performance losses without the Bayesian post-processing, which requires information about noise models that is unavailable here. We note that, in our fair settings, supervised methods still perform better than self-supervised models, especially on the Planaria and BSD68 datasets. One explanation is that the supervision usually carries extra information implicitly, such as information about the noise model. Here, we draw a conclusion different from Batson et al. [1]: there are still performance gaps between self-supervised and supervised denoising methods. Our Noise2Same moves one step towards closing the gap by proposing a new self-supervised denoising framework.

In addition to the denoising performance, we compare the training efficiency among self-supervised methods as well. Specifically, we plot how the PSNR changes during training on the ImageNet dataset, comparing Noise2Same with Noise2Self and the convolutional blind-spot neural network. The plot shows that our Noise2Same has a convergence speed similar to that of the convolutional blind-spot neural network. On the other hand, as the mask-based method Noise2Self uses only a subset of output pixels to compute the loss function in each step, its training is expected to be slower [12].

5.2 Effect of the Invariance Term

In Section 4.2, we analyzed the effect of the invariance term using an example where the noise model is given as additive Gaussian noise. In this example, the variance of the noise controls the strictness of the optimal $f$ through the coefficient $\lambda_{\mathrm{inv}}$ of the invariance term. Here, we conduct experiments to verify this insight. Specifically, we construct four noisy datasets from the HànZì dataset with only additive Gaussian noise at different levels ($\sigma_{\mathrm{noise}} = 0.9, 0.7, 0.5, 0.3$). Then we train Noise2Same with $\lambda_{\mathrm{inv}} = 2\sigma_{\mathrm{loss}}$ by varying $\sigma_{\mathrm{loss}}$ from 0.1 to 1.0 for each dataset. According to Theorem 2, the best performance on each dataset should be achieved when $\sigma_{\mathrm{loss}}$ is close to $\sigma_{\mathrm{noise}}$. The results, as reported in Figure 4, are consistent with our theoretical results in Theorem 2.

6 Conclusion and Future Work

We analyzed the existing blind-spot-based denoising methods and introduced Noise2Same, a novel self-supervised denoising method that removes the over-restrictive assumption that the neural network must be a $\mathcal{J}$-invariant function. We provided further analysis of Noise2Same and experimentally demonstrated its denoising capability. As an orthogonal line of work, combining the self-supervised denoising result with the noise model has been shown to provide additional performance gains. We would like to further explore such noise model-augmented Noise2Same in future work.

Broader Impact

In this paper, we introduce Noise2Same, a self-supervised framework for deep image denoising.
As Noise2Same needs neither paired clean data, paired noisy data, nor the noise model, its application scenarios could be much broader than those of both traditional supervised and existing self-supervised denoising frameworks. The most direct application of Noise2Same is to perform denoising on digital images captured under poor conditions. Individuals and corporations related to photography may benefit from our work. Besides, Noise2Same could be applied as a pre-processing step for computer vision tasks such as object detection and segmentation [18], making the downstream algorithms more robust to noisy images. Specific research communities could benefit from the development of Noise2Same as well. For example, the capture of high-quality microscopy data of live cells, tissue, or nanomaterials is expensive in terms of budget and time [27]. Proper denoising algorithms allow researchers to obtain high-quality data from low-quality data and hence remove the need to capture high-quality data directly. In addition to image denoising applications, the self-supervised denoising framework could be extended to other domains such as audio noise reduction and single-cell analysis [1]. On the negative side, as many imaging-based research tasks and computer vision applications may be built upon denoising algorithms, the failure of Noise2Same could potentially lead to biases or failures in these tasks and applications.

Acknowledgments and Disclosure of Funding

This work was supported in part by National Science Foundation grant DBI-2028361.
1. What is the main contribution of the paper on self-supervised image denoising? 2. What are the strengths of the proposed approach, particularly in addressing the sub-optimal results caused by the J-invariance requirement? 3. What are the weaknesses of the paper, especially regarding its experimental results and comparisons with other works? 4. How can the authors improve their comparisons using visually perceptual metrics and real noisy datasets? 5. Why is the result of BM3D for BSD68 highlighted in Table 3?
Summary and Contributions Strengths Weaknesses
Summary and Contributions Based on the observation that J-invariance, which is required by most self-supervised denoising networks, leads to sub-optimal results, this paper proposes a novel loss for self-supervised image denoising without this constraint. Strengths 1. This paper is well-written and its logic is easy to follow. 2. The authors observe that J-invariance leads to sub-optimal results and propose a new loss to tackle it. 3. The proposed method does not need to know the noise model information. Weaknesses My only concern about this paper is the experimental results. Apart from the PSNR, I do not see a big visual difference relative to existing self-supervised methods. I recommend the authors provide comparisons using perceptual metrics, e.g., NIQE and BRISQUE. In addition, I think the authors need to provide comparisons on real noisy datasets, e.g., SIDD, NAM, and DND, since the noise model for real noisy images is unknown, which should be more suitable for a self-supervised framework. Minor issue: Why is the result of BM3D for BSD68 bolded in Table 3?
NIPS
1. What is the focus and contribution of the paper on self-supervised image denoising? 2. What are the strengths of the proposed approach, particularly in its organization and idea? 3. What are the weaknesses of the paper regarding its claims and comparisons with other works? 4. Do you have any concerns about the experimental results only being applied to synthetic noisy images? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper proposes a new method called Noise2Same for self-supervised image denoising. The proposed method optimizes an upper bound of the typical supervised loss, which contains a reconstruction loss and an invariance loss. Experimental results showed the effectiveness of the proposed method on synthetic noisy images. Strengths 1. The paper is well-organized and easy to follow. The idea of using an upper bound for self-supervised image denoising is very interesting. 2. There is extensive analysis and explanation of existing methods and the proposed method, which makes the paper more convincing. 3. The method achieves consistently better results than existing self-supervised methods in the experiments. Weaknesses 1. The idea of the paper is interesting but seems to be over-claimed. For example, the abstract says the existing methods may be sub-optimal, but this is also true for the proposed method: it is derived from "an" upper bound of the typical supervised loss, which has not been proven to be the tightest bound. This is somewhat misleading. In addition, the analysis in Sec. 4.2 did not show why the proposed method is better than the baselines; most conclusions are obtained empirically. 2. The results are on synthetic noise. It would be great to apply the proposed method to real noisy image benchmarks.
NIPS
Title Training and Evaluating Multimodal Word Embeddings with Large-scale Web Annotated Images

Abstract In this paper, we focus on training and evaluating effective word embeddings with both text and visual information. More specifically, we introduce a large-scale dataset with 300 million sentences describing over 40 million images crawled and downloaded from publicly available Pins (i.e. an image with sentence descriptions uploaded by users) on Pinterest [2]. This dataset is more than 200 times larger than MS COCO [22], the standard large-scale image dataset with sentence descriptions. In addition, we construct an evaluation dataset to directly assess the effectiveness of word embeddings in terms of finding semantically similar or related words and phrases. The word/phrase pairs in this evaluation dataset are collected from the click data of millions of users in an image search system, and thus contain rich semantic relationships. Based on these datasets, we propose and compare several Recurrent Neural Network (RNN) based multimodal (text and image) models. Experiments show that our model benefits from incorporating the visual information into the word embeddings, and that a weight sharing strategy is crucial for learning such multimodal embeddings. The project page is: http://www.stat.ucla.edu/~junhua.mao/multimodal_embedding.html (the datasets introduced in this work will be gradually released on the project page).

1 Introduction

Word embeddings are dense vector representations of words with semantic and relational information. In this vector space, semantically related or similar words should be close to each other. A large-scale training dataset with billions of words is crucial to train effective word embedding models. The trained word embeddings are very useful in various tasks and real-world applications that involve searching for semantically similar or related words and phrases. A large proportion of the state-of-the-art word embedding models are trained on pure text data only. Since one of the most important functions of language is to describe the visual world, we argue that effective word embeddings should contain rich visual semantics. Previous work has shown that visual information is important for training effective embedding models. However, due to the lack of large training datasets at the same scale as the pure text datasets, the models are either trained on relatively small datasets (e.g. [13]), or the visual constraints are only applied to a limited number of pre-defined visual concepts (e.g. [21]). Therefore, such work did not fully explore the potential of visual information in learning word embeddings. In this paper, we introduce a large-scale dataset with both text descriptions and images, crawled and collected from Pinterest, one of the largest databases of annotated web images. On Pinterest, users save web images onto their boards (i.e. image collectors) and supply their descriptions of the images. More descriptions are collected when the same images are saved and commented on by other users. Compared to MS COCO (i.e. the image benchmark with sentence descriptions [22]), our dataset is much larger (40 million images with 300 million sentences, compared to 0.2 million images and 1 million sentences in the current release of MS COCO) and is at the same scale as standard pure text training datasets (e.g. the Wikipedia Text Corpus).
Some sample images and their descriptions are shown in Figure 1 in Section 3.1. We believe training on this large-scale dataset will lead to richer and better generalized models. We denote this dataset as the Pinterest40M dataset.
One challenge of word embedding learning is how to directly evaluate the quality of the model with respect to the target tasks (e.g. the task of finding related or similar words and phrases). State-of-the-art neural language models often use the negative log-likelihood of the predicted words as their training loss, which is not always correlated with the effectiveness of the learned embeddings. Current evaluation datasets (e.g. [5, 14, 11]) for word similarity or relatedness contain fewer than a thousand word pairs and cannot comprehensively evaluate the embeddings of all the words appearing in the training set.
The challenge of constructing large-scale evaluation datasets is partly due to the difficulty of finding a large number of semantically similar or related word/phrase pairs. In this paper, we utilize user click information collected from Pinterest's image search system to generate millions of candidate word/phrase pairs. Because user click data are somewhat noisy, we removed inaccurate entries in the dataset using crowdsourced human annotations. This led to a final gold standard evaluation dataset consisting of 10,674 entries.
Equipped with these datasets, we propose, train and evaluate several Recurrent Neural Network (RNN [10]) based models that take both text descriptions and images as input. Some of these models directly minimize the Euclidean distance between the visual features and the word embeddings or RNN states, similar to previous work (e.g. [13, 21]). The best performing model is inspired by recent image captioning models [9, 24, 36], with an additional weight-sharing strategy originally proposed in [23] to learn novel visual concepts. This strategy imposes soft constraints between the visual features and all the related words in the sentences. Our experiments validate the effectiveness and importance of incorporating visual information into the learned word embeddings.
We make three major contributions: Firstly, we constructed a large-scale multimodal dataset with both text descriptions and images, which is at the same scale as pure text training sets. Secondly, we collected and labeled a large-scale evaluation dataset for word and phrase similarity and relatedness evaluation. Finally, we proposed and compared several RNN based models for learning multimodal word embeddings effectively. To facilitate research in this area, we will gradually release the datasets proposed in this paper on our project page.
2 Related Work
Image-Sentence Description Datasets Image description datasets, such as Flickr8K [15], Flickr30K [37], IAPR-TC12 [12], and MS COCO [22], have greatly facilitated the development of models for language and vision tasks such as image captioning. Because it takes a lot of resources to label images with sentence descriptions, the scale of these datasets is relatively small (MS COCO, the largest among them, contains only 1 million sentences, while our Pinterest40M dataset has 300 million sentences). In addition, the language used to describe images in these datasets is relatively simple (e.g. MS COCO has only around 10,000 unique words appearing at least 3 times, while there are 335,323 unique words appearing at least 50 times in Pinterest40M).
The Im2Text dataset proposed in [28] adopts a data collection process similar to ours, using 1 million images with 1 million user annotated captions from Flickr, but its scale is still much smaller than our Pinterest40M dataset. Recently, [34] proposed and released the YFCC100M dataset, a large-scale multimedia dataset containing metadata of 100 million Flickr images. It provides rich information about the images, such as tags, titles, and the locations where they were taken. The users' comments can be obtained by querying the Flickr API. Because of the different functionality and user groups of Flickr and Pinterest, the users' comments on Flickr images are quite different from those on Pinterest (e.g. on Flickr, users tend to comment more on photography techniques). This dataset provides complementary information to our Pinterest40M dataset.
Word Similarity-Relatedness Evaluation The standard benchmarks, such as WordSim-353/WSSim [11, 3], MEN [5], and SimLex-999 [14], consist of a few hundred word pairs and their similarity or relatedness scores. The word pairs are composed by asking human subjects to write the first related, or similar, word that comes to their mind when presented with a concept word (e.g. [27, 11]), or by randomly selecting frequent words in a large text corpus and manually searching for useful pairs (e.g. [5]). In this work, we are able to collect a large number of word/phrase pairs of good quality by mining them from the click data of Pinterest's image search system, which is used by millions of users. In addition, because this dataset is collected through a visual search system, it is more suitable for evaluating multimodal embedding models. Another related evaluation is the analogy task proposed in [25], which asks the model questions like "man is to woman as king is to what?". But such questions do not directly measure word similarity or relatedness, and cannot cover all the semantic relationships of the millions of words in the dictionary.
RNN for Language and Vision Our models are inspired by recent RNN-CNN based image captioning models [9, 24, 36, 16, 6, 18, 23], which can be viewed as a special case of the sequence-to-sequence learning framework [33, 7]. We adopt Gated Recurrent Units (GRUs [7]), a variation of the simple RNN model.
Multimodal Word Embedding Models For pure text, one of the most effective approaches to learning word embeddings is to train neural network models to predict a word given its context words in a sentence (i.e. the continuous bag-of-words model [4]) or to predict the context words given the current word (i.e. the skip-gram model [25]). There is a large literature on word embedding models that utilize visual information. One type of method takes a two-step strategy that first extracts text and image features separately and then fuses them together using singular value decomposition [5], stacked autoencoders [31], or even simple concatenation [17]. [13, 21, 19] learn the text and image features jointly by fusing visual or perceptual information into a skip-gram model [25]. However, because of the lack of large-scale multimodal datasets, they only associate visual content with a pre-defined set of nouns (e.g. [21]) or perception domains (e.g. [14]) in the sentences, or focus on abstract scenes (e.g. [19]). By contrast, our best performing model places a soft constraint between visual features and all the words in the sentences via a weight sharing strategy, as shown in Section 4.
3 Datasets
We constructed two datasets: one for training our multimodal word embeddings (see Section 3.1) and another for the evaluation of the learned word embeddings (see Section 3.2).
3.1 Training Dataset
Table 1: Scale comparison with other image description benchmarks.
Dataset           Images   Sentences
Flickr8K [15]     8K       40K
Flickr30K [37]    30K      150K
IAPR-TC12 [12]    20K      34K
MS COCO [22]      200K     1M
Im2Text [28]      1M       1M
Pinterest40M      40M      300M
Pinterest is one of the largest repositories of Web images. Users commonly tag images with short descriptions and share the images (and descriptions) with others. Since a given image can be shared and tagged by multiple, sometimes thousands of, users, many images have a very rich set of descriptions, making this source of data ideal for training models with both text and image inputs.
The dataset is prepared in the following way: first, we crawled the publicly available data on Pinterest to construct our training dataset of more than 40 million images. Each image is associated with an average of 12 sentences, and we removed duplicate sentences as well as short sentences with fewer than 4 words. Duplication detection is conducted by calculating the overlapping word unigram ratios. Some sample images and descriptions are shown in Figure 1. We denote this dataset as the Pinterest40M dataset.
Our dataset contains 40 million images with 300 million sentences (around 3 billion words), which is much larger than the previous image description datasets (see Table 1). In addition, because the descriptions are annotated by users who expressed interest in the images, the descriptions in our dataset are more natural and richer than those in the annotated image description datasets. Our dataset contains 335,323 unique words with a minimum occurrence count of 50, compared with 10,232 and 65,552 words appearing at least 3 times in the MS COCO and Im2Text datasets, respectively. To the best of our knowledge, no previous paper has trained a multimodal RNN model on a dataset of this scale.
3.2 Evaluation Datasets
This work proposes to use labeled phrase triplets: each triplet is a three-phrase tuple containing phrase A, phrase B and phrase C, where A is considered semantically closer to B than A is to C. At testing time, we compute the distance in the word embedding space between A/B and A/C, and consider a test triplet as positive if d(A,B) < d(A,C). This relative comparison approach is commonly used to evaluate and compare different word embedding models [30]. To generate a large number of phrase triplets, we rely on user-click data collected from Pinterest's image search system. In the end, we construct a large-scale evaluation dataset with 9.8 million triplets (see Section 3.2.1), and a cleaned-up gold standard version with 10 thousand triplets (see Section 3.2.2).
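As a concrete illustration of this evaluation protocol (and of the average-pooling phrase representation described later in Section 5.2), the following is a minimal Python sketch. Here `emb` is assumed to be a mapping from words to vectors that covers all words in the triplets; all names are illustrative, not taken from the released code.

```python
import numpy as np

def phrase_vec(phrase, emb):
    """Average-pool word embeddings to obtain a phrase representation."""
    return np.mean([emb[w] for w in phrase.split()], axis=0)

def cos_dist(a, b):
    """Cosine distance between two vectors."""
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def triplet_accuracy(triplets, emb):
    """Fraction of (base, positive, negative) triplets with d(A,B) < d(A,C)."""
    correct = sum(
        cos_dist(phrase_vec(a, emb), phrase_vec(b, emb))
        < cos_dist(phrase_vec(a, emb), phrase_vec(c, emb))
        for a, b, c in triplets
    )
    return correct / len(triplets)
```

A triplet counts as correct exactly when the cosine distance of the (base, positive) pair is smaller than that of the (base, negative) pair, matching the criterion d(A,B) < d(A,C) above.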
3.2.1 The Raw Evaluation Dataset from User Clickthrough Data
It is very hard to obtain a large number of semantically similar or related word and phrase pairs; this is one of the main challenges in constructing a large-scale word/phrase similarity and relatedness evaluation dataset. We address this challenge by utilizing the user clickthrough data from Pinterest's image search system; see Figure 2 for an illustration. More specifically, given a query from a user (e.g. "hair styles"), the search system returns a list of items, and each item is composed of an image and a set of annotations (i.e. short phrases or words that describe the item). Note that the same annotation can appear in multiple items, e.g., "hair tutorial" can describe items related to prom hair styles or ponytails. We derive a matching score for each annotation by aggregating the click frequency of the items containing the annotation. The annotations are then ranked according to their matching scores, and the top ranked annotations are considered the positive set of phrases or words with respect to the user query.
To increase the difficulty of this dataset, we remove from the initial list of positive phrases any phrase that shares a common word with the user query. E.g. "hair tutorials" will be removed because the word "hair" is contained in the query phrase "hair styles". A stemmer from Python's "stemmer" package is also adopted to find words with the same root (e.g. "cake" and "cakes" are considered the same word). This pruning step also prevents a bias toward methods that measure the similarity between the positive phrase and the query phrase by counting the number of words they have in common. In this way, we collected 9,778,508 semantically similar phrase pairs.
Previous word similarity/relatedness datasets (e.g. [11, 14]) manually annotated each word pair with an absolute score reflecting how semantically related the words in the pair are. In the testing stage, the model's predicted similarity scores for the word pairs are compared with the ground-truth scores, and the Spearman's rank correlation between the two lists is taken as the score of the model. However, it is often too hard and expensive to label such absolute relatedness scores and maintain consistency across all the pairs in a large-scale dataset, even if the scores of several annotators are averaged. We therefore adopt a simple strategy of composing triplets from the phrase pairs. More specifically, we randomly sample negative phrases from a pool of 1 billion phrases. The negative phrase must not contain any overlapping word (a stemmer is again adopted) with either of the phrases in the original phrase pair. In this way, we construct 9,778,508 triplets of the form (base phrase, positive phrase, negative phrase). In the evaluation, a model should be able to distinguish the positive phrase from the negative phrase by calculating their similarities with the base phrase in the embedding space. We denote this dataset as the Related Phrase 10M (RP10M) dataset.
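The negative sampling and word-overlap filtering just described can be summarized by the short sketch below. The paper uses Python's "stemmer" package; this sketch substitutes NLTK's PorterStemmer as a stand-in, and the function names and the structure of `phrase_pool` (a list of phrase strings) are illustrative assumptions.

```python
import random
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def stems(phrase):
    """Set of word roots in a phrase (e.g. 'cakes' -> 'cake')."""
    return {stemmer.stem(w) for w in phrase.lower().split()}

def overlaps(p, q):
    """True if two phrases share any word root."""
    return bool(stems(p) & stems(q))

def make_triplet(base, positive, phrase_pool):
    """Sample a negative phrase with no word overlap with base or positive."""
    while True:
        negative = random.choice(phrase_pool)
        if not overlaps(negative, base) and not overlaps(negative, positive):
            return (base, positive, negative)
```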
3.2.2 The Cleaned-up Gold Standard Dataset
Because the raw RP10M dataset is built upon user click information, it contains some noisy triplets (e.g. the positive and base phrases are not related, or the negative phrase is strongly related to the base phrase). To create a gold standard dataset, we conduct a clean-up step using the crowdsourcing platform CrowdFlower [1] to remove these inaccurate triplets. A sample question with choices for the crowdsourcing annotators is shown in Figure 3. The positive and negative phrases in a triplet are randomly assigned as choice "A" or "B". The annotators are required to choose which phrase is more related to the base phrase, or to indicate that both are related or unrelated. To help the annotators understand the meaning of the phrases, they can click on the phrases to get Google search results. We annotate 21,000 triplets randomly sampled from the raw RP10M dataset. Three to five annotators are assigned to each question.
A triplet is accepted and added to the final cleaned-up dataset only if more than 50% of the annotators agree with its original positive and negative labels (note that the annotators do not know which phrase is positive during the annotation process). In practice, for 70% of the accepted phrase triplets more than 3 annotators agree. This leads to a gold standard dataset with 10,674 triplets. We denote this dataset as the Gold Phrase Query 10K (Gold RP10K) dataset. This dataset is very challenging, and a successful model should be able to capture a variety of semantic relationships between words or phrases. Some sample triplets are shown in Table 2.
4 The Multimodal Word Embedding Models
We propose three RNN-CNN based models to learn multimodal word embeddings, as illustrated in Figure 4. All of the models have two parts in common: a Convolutional Neural Network (CNN [20]) to extract visual representations and a Recurrent Neural Network (RNN [10]) to model sentences. For the CNN part, we resize the images to 224 × 224 and adopt the 16-layer VGGNet [32] as the visual feature extractor. The binarized activations of the layer before the SoftMax layer (i.e. a 4096-dimensional binary vector) are used as the image features, and are mapped to the same space as the RNN state (Models A, B) or the word embeddings (Model C), depending on the structure of the model, by a fully connected layer followed by a Rectified Linear Unit (ReLU [26], $\mathrm{ReLU}(x) = \max(0, x)$).
For the RNN part, we use a Gated Recurrent Unit (GRU [7]), a recently popular RNN structure, with a 512-dimensional state cell. The state of the GRU, $h_t$, for the word with index $t$ in a sentence is computed as:
$r_t = \sigma(W_r[e_t, h_{t-1}] + b_r)$ (1)
$u_t = \sigma(W_u[e_t, h_{t-1}] + b_u)$ (2)
$c_t = \tanh(W_c[e_t, r_t \odot h_{t-1}] + b_c)$ (3)
$h_t = u_t \odot h_{t-1} + (1 - u_t) \odot c_t$ (4)
where $\odot$ denotes the element-wise product, $\sigma(\cdot)$ is the sigmoid function, $e_t$ denotes the word embedding of the word $w_t$, and $r_t$ and $u_t$ are the reset gate and update gate, respectively. The input of the GRU is the words of a sentence, and it is trained to predict the next word given the previous words.
We add all the words that appear more than 50 times in the Pinterest40M dataset to the dictionary; the final vocabulary size is 335,323. Because the vocabulary is very large, we adopt the sampled SoftMax loss [8] to accelerate the training. For each training step, we sample 1024 negative words according to their log frequency in the training data and calculate the sampled SoftMax loss for the positive word. This sampled SoftMax loss for the RNN part is used in Models A, B and C. Minimizing this loss can be considered as approximately maximizing the probability of the sentences in the training set.
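To make Eqns. (1)-(4) concrete, here is a minimal PyTorch sketch of a single GRU update following the notation above, where [·,·] denotes concatenation. The weight shapes and function names are illustrative assumptions, not the paper's implementation; in practice one would use a built-in cell such as torch.nn.GRUCell, which implements a closely related update.

```python
import torch

def gru_step(e_t, h_prev, W_r, W_u, W_c, b_r, b_u, b_c):
    """One GRU update following Eqns. (1)-(4).

    e_t: (n, d_e) word embedding; h_prev: (n, d_h) previous state.
    W_r, W_u, W_c: (d_h, d_e + d_h) gate weights; b_*: (d_h,) biases.
    """
    x = torch.cat([e_t, h_prev], dim=-1)
    r = torch.sigmoid(x @ W_r.T + b_r)  # reset gate, Eqn. (1)
    u = torch.sigmoid(x @ W_u.T + b_u)  # update gate, Eqn. (2)
    c = torch.tanh(
        torch.cat([e_t, r * h_prev], dim=-1) @ W_c.T + b_c
    )                                   # candidate state, Eqn. (3)
    return u * h_prev + (1.0 - u) * c   # new state h_t, Eqn. (4)
```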
As illustrated in Figure 4, Models A, B and C fuse the visual information into the word embeddings in different ways. Model A is inspired by the CNN-RNN based image captioning models [36, 23]. We map the visual representation into the same space as the GRU states and use it to initialize them (i.e. we set $h_0 = \mathrm{ReLU}(W_I f_I)$). Since the visual information is fed in after the embedding layer, it is usually hard to ensure that this information is fused into the learned embeddings. We therefore adopt the transposed weight sharing strategy proposed in [23], which was originally used to enhance a model's ability to learn novel visual concepts. More specifically, we share the weight matrix of the SoftMax layer, $U_M$, with the matrix $U_w$ of the word embedding layer in a transposed manner. In this way, $U_w^T$ is learned to decode the visual information and is forced to incorporate this information into the word embedding matrix $U_w$. In the experiments, we show that this strategy significantly improves the performance of the trained embeddings. Model A is trained by maximizing the log-likelihood of the next words given the previous words, conditioned on the visual representations, similar to the image captioning models.
Compared to Model A, Models B and C utilize the visual information in a more direct way. We add direct supervision on the final state of the GRU (Model B) or on the word embeddings (Model C) by adding new loss terms, in addition to the negative log-likelihood loss from the sampled SoftMax layer:
$L_{\mathrm{state}} = \frac{1}{n} \sum_{s} \| h_{l_s} - \mathrm{ReLU}(W_I f_{I_s}) \|$ (5)
$L_{\mathrm{emb}} = \frac{1}{n} \sum_{s} \frac{1}{l_s} \sum_{t} \| e_t - \mathrm{ReLU}(W_I f_{I_s}) \|$ (6)
where $l_s$ is the length of sentence $s$ in a mini-batch of $n$ sentences; Eqn. 5 and Eqn. 6 denote the additional losses for Models B and C, respectively. The added loss term is balanced against the negative log-likelihood loss from the sampled SoftMax layer by a weight hyperparameter $\lambda$.
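A minimal PyTorch sketch of these two supervision terms is given below. The tensor shapes and names (h_last, word_embs, W_I, f_I, lengths) are illustrative assumptions, and the padding handling is simplified relative to whatever the authors actually did.

```python
import torch
import torch.nn.functional as F

def direct_supervision_losses(h_last, word_embs, W_I, f_I, lengths):
    """Sketch of Eqns. (5) and (6): pull the final GRU state (Model B) or
    every word embedding (Model C) towards the mapped visual feature.

    h_last:    (n, d)    final GRU state h_{l_s} per sentence
    word_embs: (n, T, d) zero-padded word embeddings e_t
    f_I:       (n, k)    image features; W_I: (d, k) projection
    lengths:   (n,)      sentence lengths l_s
    """
    v = F.relu(f_I @ W_I.T)                             # ReLU(W_I f_I), shape (n, d)
    L_state = (h_last - v).norm(dim=-1).mean()          # Eqn. (5), Model B
    diffs = (word_embs - v.unsqueeze(1)).norm(dim=-1)   # (n, T) per-word distances
    T = word_embs.size(1)
    mask = torch.arange(T)[None, :] < lengths[:, None]  # ignore padded positions
    L_emb = ((diffs * mask).sum(dim=1) / lengths).mean()  # Eqn. (6), Model C
    return L_state, L_emb
```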
5 Experiments
5.1 Training Details
We convert the words in all sentences of the Pinterest40M dataset to lowercase. All non-alphanumeric characters are removed. A start sign 〈bos〉 and an end sign 〈eos〉 are added at the beginning and the end of every sentence, respectively. We use stochastic gradient descent with a mini-batch size of 256 sentences and a learning rate of 1.0. The gradient is clipped to 10.0. We train the models until the loss no longer decreases on a small validation set of 10,000 images and their descriptions; the models scan the dataset for roughly five epochs. The bias terms of the gates (i.e. $b_r$ and $b_u$ in Eqns. 1 and 2) in the GRU layer are initialized to 1.0.
5.2 Evaluation Details
We use the trained embedding models to extract embeddings for all the words in a phrase and aggregate them by average pooling to obtain the phrase representation. We then check whether the cosine distance of the (base phrase, positive phrase) pair is smaller than that of the (base phrase, negative phrase) pair. The average precision over all the triplets is reported for both the raw Related Phrase 10M (RP10M) dataset and the gold standard Related Phrase 10K (Gold RP10K) dataset.
5.3 Results on the Gold RP10K and RP10M Datasets
We evaluate and compare our Models A, B and C, their variants, and several strong baselines on our RP10M and Gold RP10K datasets. The results are shown in Table 3. "Pure Text RNN" denotes the baseline model trained on Pinterest40M without the visual features as input; it has the same model structure as our Model A except that the hidden state of the GRU is initialized with a zero vector. "Model A without weight sharing" denotes a variant of Model A in which the weight matrix $U_w$ of the word embedding layer is not shared with the weight matrix $U_M$ of the sampled SoftMax layer (see Figure 4 for details). (We also tried the weight sharing strategy in Models B and C, but the performance was very similar to the non-weight-sharing versions.) "Word2Vec-GoogleNews" denotes the state-of-the-art off-the-shelf Word2Vec [25] word embedding model trained on the Google-News data (about 300 billion words). "GloVe-Twitter" denotes the GloVe model [29] trained on Twitter data (about 27 billion words). These are pure text models, but they are trained on very large datasets (our model is trained on only 3 billion words).
Comparing these models, we can draw the following conclusions:
• Under our evaluation criteria, visual information significantly helps the learning of word embeddings when the model successfully fuses the visual and text information together. E.g., our Model A outperforms the Word2Vec model by 9.5% and 9.2% on the Gold RP10K and RP10M datasets, respectively. Model C also outperforms the pure text RNN baselines.
• The weight sharing strategy is crucial for enhancing the ability of Model A to fuse visual information into the learned embeddings. E.g., our Model A outperforms the baseline without this sharing strategy by 7.0% and 4.4% on Gold RP10K and RP10M, respectively.
• Model A performs the best among the three models. This shows that the soft supervision imposed by the weight-sharing strategy is more effective than direct supervision. This is not surprising, since not all words are semantically related to the content of the image, and a direct, hard constraint might hinder the learning of the embeddings for these words.
• Model B does not perform very well. The reason might be that most of the sentences have more than 8 words, so the gradient from the final-state loss term $L_{\mathrm{state}}$ cannot easily reach the embeddings of all the words in the sentence.
• All the models trained on the Pinterest40M dataset perform better than the skip-gram model [25] trained on a much larger dataset of 300 billion words.
6 Discussion
In this paper, we investigate the task of training and evaluating word embedding models. We introduce Pinterest40M, to the best of our knowledge the largest image dataset with sentence descriptions, and construct two evaluation datasets (i.e. RP10M and Gold RP10K) for word/phrase similarity and relatedness evaluation. Based on these datasets, we propose several CNN-RNN based multimodal models to learn effective word embeddings. Experiments show that visual information significantly helps the training of word embeddings, and that our proposed model successfully incorporates such information into the learned embeddings.
There are many possible extensions of the proposed models and datasets. E.g., we plan to separate semantically similar from semantically related phrase pairs in the Gold RP10K dataset to better understand the performance of the methods, similar to [3]. We will also provide relatedness or similarity scores for the (base phrase, positive phrase) pairs to enable the same evaluation strategy as previous datasets (e.g. [5, 11]). Finally, we plan to propose better models for phrase representations.
Acknowledgement
We are grateful to James Rubinstein for setting up the crowdsourcing experiments for dataset cleanup. We thank Veronica Mapes, Pawel Garbacki, and Leon Wong for discussions and support. We appreciate the comments and suggestions from the anonymous reviewers of NIPS 2016. This work is partly supported by the Center for Brains, Minds and Machines NSF STC award CCF-1231216 and the Army Research Office ARO 62250-CS.
1. What is the focus of the paper regarding multi-modal word embeddings?
2. What are the strengths and weaknesses of the proposed method, particularly in terms of its comparison to previous works?
3. Do you have any concerns regarding the data collection process and evaluation method?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any limitations or biases in the paper that need to be addressed?
Review
Review This paper examines training and evaluating multi-modal word embeddings with large datasets. It contributes a new dataset of images with captions derived from Pinterest, and a new dataset of phrase similarity judgments automatically derived from click-through data. A subset of these judgments was checked in a user study. A number of RNN-CNN models are trained on the Pinterest data and evaluated on the newly created datasets. It is found that a multi-modal RNN with a transposed weight-sharing scheme between the input word embeddings and the output layer fed to the softmax word prediction layer achieves the best performance on the evaluations.
The paper is clear and well written. The dataset and the evaluation that was conducted could be useful to the community. However, the paper unfairly characterizes or omits some previous work, and is not clear enough about the limitations and biases of its evaluation strategy. These points detract from a paper that otherwise makes an interesting contribution.
First, there is an implied criticism of WordSim-353 and MEN at the bottom of page 2 that they only contain similarity judgments at the word level. However, there is a large amount of work on learning phrase- and sentence-level embeddings in the recent literature that overcomes these issues (see representative work by Mirella Lapata, Marco Baroni, Stephen Clark, Richard Socher, among many others), which the paper does not mention. Thus, learning 2- or 3-word embeddings is already well investigated, rather than a source of new challenges. The criticism of the analogical reasoning task on page 3 is also misplaced. The paper criticizes this task for not covering all the semantic relationships of millions of words in the dictionary. In my view, relatedness judgments are much worse than analogical reasoning, because they reduce all semantic relations down to a single scalar.
The paper should be up front about its limitations and biases. The data collection process and the resulting evaluation are clearly biased towards multi-modal methods, because an image is displayed in the interface along with the text. This is not a problem in itself, but the fact that multi-modal representations outperform pure text ones is then less meaningful, and by no means spells the end of models trained purely on text. Also, the paper should discuss possible confounds in the construction of the click-through evaluation data. A link can be clicked on for reasons other than relevance or similarity between the query and the phrase that is presented. It seems that other factors are involved as well (e.g., catchiness of the phrase or the perceived informativeness of the linked article).
The paper would be stronger if the multi-modal methods were evaluated on the WS-353 and MEN datasets as well. This would give an indication of whether they might outperform pure text models on tasks that were originally conceived for text models, at least on words related to visual imagery. Finally, the decision to omit cases involving word overlap is understandable, but it does not come without a cost: homonymy and polysemy are not tested by their approach.
NIPS
Title Training and Evaluating Multimodal Word Embeddings with Large-scale Web Annotated Images
Abstract In this paper, we focus on training and evaluating effective word embeddings with both text and visual information. More specifically, we introduce a large-scale dataset with 300 million sentences describing over 40 million images crawled and downloaded from publicly available Pins (i.e. images with sentence descriptions uploaded by users) on Pinterest [2]. This dataset is more than 200 times larger than MS COCO [22], the standard large-scale image dataset with sentence descriptions. In addition, we construct an evaluation dataset to directly assess the effectiveness of word embeddings in terms of finding semantically similar or related words and phrases. The word/phrase pairs in this evaluation dataset are collected from the click data of millions of users in an image search system, and thus contain rich semantic relationships. Based on these datasets, we propose and compare several Recurrent Neural Network (RNN) based multimodal (text and image) models. Experiments show that our model benefits from incorporating the visual information into the word embeddings, and that a weight sharing strategy is crucial for learning such multimodal embeddings. The project page is http://www.stat.ucla.edu/~junhua.mao/multimodal_embedding.html, where the datasets introduced in this work will be gradually released.
1 Introduction
Word embeddings are dense vector representations of words that carry semantic and relational information. In this vector space, semantically related or similar words should be close to each other. A large-scale training dataset with billions of words is crucial for training effective word embedding models. The trained word embeddings are very useful in various tasks and real-world applications that involve searching for semantically similar or related words and phrases.
A large proportion of the state-of-the-art word embedding models are trained on pure text data only. Since one of the most important functions of language is to describe the visual world, we argue that effective word embeddings should contain rich visual semantics. Previous work has shown that visual information is important for training effective embedding models. However, due to the lack of training datasets at the same scale as the pure text datasets, existing models are either trained on relatively small datasets (e.g. [13]), or the visual constraints are applied only to a limited number of pre-defined visual concepts (e.g. [21]). Therefore, such work did not fully explore the potential of visual information in learning word embeddings.
In this paper, we introduce a large-scale dataset with both text descriptions and images, crawled and collected from Pinterest, one of the largest databases of annotated web images. On Pinterest, users save web images onto their boards (i.e. image collectors) and supply their descriptions of the images. More descriptions are collected when the same images are saved and commented on by other users. Compared to MS COCO (i.e. the image benchmark with sentence descriptions [22]), our dataset is much larger (40 million images with 300 million sentences, compared to 0.2 million images and 1 million sentences in the current release of MS COCO) and is at the same scale as the standard pure text training datasets (e.g. the Wikipedia Text Corpus).
Some sample images and their descriptions are shown in Figure 1 in Section 3.1. We believe training on this large-scale dataset will lead to richer and better generalized models. We denote this dataset as the Pinterest40M dataset.
One challenge of word embedding learning is how to directly evaluate the quality of the model with respect to the target tasks (e.g. the task of finding related or similar words and phrases). State-of-the-art neural language models often use the negative log-likelihood of the predicted words as their training loss, which is not always correlated with the effectiveness of the learned embeddings. Current evaluation datasets (e.g. [5, 14, 11]) for word similarity or relatedness contain fewer than a thousand word pairs and cannot comprehensively evaluate the embeddings of all the words appearing in the training set.
The challenge of constructing large-scale evaluation datasets is partly due to the difficulty of finding a large number of semantically similar or related word/phrase pairs. In this paper, we utilize user click information collected from Pinterest's image search system to generate millions of candidate word/phrase pairs. Because user click data are somewhat noisy, we removed inaccurate entries in the dataset using crowdsourced human annotations. This led to a final gold standard evaluation dataset consisting of 10,674 entries.
Equipped with these datasets, we propose, train and evaluate several Recurrent Neural Network (RNN [10]) based models that take both text descriptions and images as input. Some of these models directly minimize the Euclidean distance between the visual features and the word embeddings or RNN states, similar to previous work (e.g. [13, 21]). The best performing model is inspired by recent image captioning models [9, 24, 36], with an additional weight-sharing strategy originally proposed in [23] to learn novel visual concepts. This strategy imposes soft constraints between the visual features and all the related words in the sentences. Our experiments validate the effectiveness and importance of incorporating visual information into the learned word embeddings.
We make three major contributions: Firstly, we constructed a large-scale multimodal dataset with both text descriptions and images, which is at the same scale as pure text training sets. Secondly, we collected and labeled a large-scale evaluation dataset for word and phrase similarity and relatedness evaluation. Finally, we proposed and compared several RNN based models for learning multimodal word embeddings effectively. To facilitate research in this area, we will gradually release the datasets proposed in this paper on our project page.
2 Related Work
Image-Sentence Description Datasets Image description datasets, such as Flickr8K [15], Flickr30K [37], IAPR-TC12 [12], and MS COCO [22], have greatly facilitated the development of models for language and vision tasks such as image captioning. Because it takes a lot of resources to label images with sentence descriptions, the scale of these datasets is relatively small (MS COCO, the largest among them, contains only 1 million sentences, while our Pinterest40M dataset has 300 million sentences). In addition, the language used to describe images in these datasets is relatively simple (e.g. MS COCO has only around 10,000 unique words appearing at least 3 times, while there are 335,323 unique words appearing at least 50 times in Pinterest40M).
The Im2Text dataset proposed in [28] adopts a data collection process similar to ours, using 1 million images with 1 million user annotated captions from Flickr, but its scale is still much smaller than our Pinterest40M dataset. Recently, [34] proposed and released the YFCC100M dataset, a large-scale multimedia dataset containing metadata of 100 million Flickr images. It provides rich information about the images, such as tags, titles, and the locations where they were taken. The users' comments can be obtained by querying the Flickr API. Because of the different functionality and user groups of Flickr and Pinterest, the users' comments on Flickr images are quite different from those on Pinterest (e.g. on Flickr, users tend to comment more on photography techniques). This dataset provides complementary information to our Pinterest40M dataset.
Word Similarity-Relatedness Evaluation The standard benchmarks, such as WordSim-353/WSSim [11, 3], MEN [5], and SimLex-999 [14], consist of a few hundred word pairs and their similarity or relatedness scores. The word pairs are composed by asking human subjects to write the first related, or similar, word that comes to their mind when presented with a concept word (e.g. [27, 11]), or by randomly selecting frequent words in a large text corpus and manually searching for useful pairs (e.g. [5]). In this work, we are able to collect a large number of word/phrase pairs of good quality by mining them from the click data of Pinterest's image search system, which is used by millions of users. In addition, because this dataset is collected through a visual search system, it is more suitable for evaluating multimodal embedding models. Another related evaluation is the analogy task proposed in [25], which asks the model questions like "man is to woman as king is to what?". But such questions do not directly measure word similarity or relatedness, and cannot cover all the semantic relationships of the millions of words in the dictionary.
RNN for Language and Vision Our models are inspired by recent RNN-CNN based image captioning models [9, 24, 36, 16, 6, 18, 23], which can be viewed as a special case of the sequence-to-sequence learning framework [33, 7]. We adopt Gated Recurrent Units (GRUs [7]), a variation of the simple RNN model.
Multimodal Word Embedding Models For pure text, one of the most effective approaches to learning word embeddings is to train neural network models to predict a word given its context words in a sentence (i.e. the continuous bag-of-words model [4]) or to predict the context words given the current word (i.e. the skip-gram model [25]). There is a large literature on word embedding models that utilize visual information. One type of method takes a two-step strategy that first extracts text and image features separately and then fuses them together using singular value decomposition [5], stacked autoencoders [31], or even simple concatenation [17]. [13, 21, 19] learn the text and image features jointly by fusing visual or perceptual information into a skip-gram model [25]. However, because of the lack of large-scale multimodal datasets, they only associate visual content with a pre-defined set of nouns (e.g. [21]) or perception domains (e.g. [14]) in the sentences, or focus on abstract scenes (e.g. [19]). By contrast, our best performing model places a soft constraint between visual features and all the words in the sentences via a weight sharing strategy, as shown in Section 4.
3 Datasets
We constructed two datasets: one for training our multimodal word embeddings (see Section 3.1) and another for the evaluation of the learned word embeddings (see Section 3.2).
3.1 Training Dataset
Table 1: Scale comparison with other image description benchmarks.
Dataset           Images   Sentences
Flickr8K [15]     8K       40K
Flickr30K [37]    30K      150K
IAPR-TC12 [12]    20K      34K
MS COCO [22]      200K     1M
Im2Text [28]      1M       1M
Pinterest40M      40M      300M
Pinterest is one of the largest repositories of Web images. Users commonly tag images with short descriptions and share the images (and descriptions) with others. Since a given image can be shared and tagged by multiple, sometimes thousands of, users, many images have a very rich set of descriptions, making this source of data ideal for training models with both text and image inputs.
The dataset is prepared in the following way: first, we crawled the publicly available data on Pinterest to construct our training dataset of more than 40 million images. Each image is associated with an average of 12 sentences, and we removed duplicate sentences as well as short sentences with fewer than 4 words. Duplication detection is conducted by calculating the overlapping word unigram ratios. Some sample images and descriptions are shown in Figure 1. We denote this dataset as the Pinterest40M dataset.
Our dataset contains 40 million images with 300 million sentences (around 3 billion words), which is much larger than the previous image description datasets (see Table 1). In addition, because the descriptions are annotated by users who expressed interest in the images, the descriptions in our dataset are more natural and richer than those in the annotated image description datasets. Our dataset contains 335,323 unique words with a minimum occurrence count of 50, compared with 10,232 and 65,552 words appearing at least 3 times in the MS COCO and Im2Text datasets, respectively. To the best of our knowledge, no previous paper has trained a multimodal RNN model on a dataset of this scale.
3.2 Evaluation Datasets
This work proposes to use labeled phrase triplets: each triplet is a three-phrase tuple containing phrase A, phrase B and phrase C, where A is considered semantically closer to B than A is to C. At testing time, we compute the distance in the word embedding space between A/B and A/C, and consider a test triplet as positive if d(A,B) < d(A,C). This relative comparison approach is commonly used to evaluate and compare different word embedding models [30]. To generate a large number of phrase triplets, we rely on user-click data collected from Pinterest's image search system. In the end, we construct a large-scale evaluation dataset with 9.8 million triplets (see Section 3.2.1), and a cleaned-up gold standard version with 10 thousand triplets (see Section 3.2.2).
3.2.1 The Raw Evaluation Dataset from User Clickthrough Data
It is very hard to obtain a large number of semantically similar or related word and phrase pairs; this is one of the main challenges in constructing a large-scale word/phrase similarity and relatedness evaluation dataset. We address this challenge by utilizing the user clickthrough data from Pinterest's image search system; see Figure 2 for an illustration. More specifically, given a query from a user (e.g. "hair styles"), the search system returns a list of items, and each item is composed of an image and a set of annotations (i.e. short phrases or words that describe the item).
Note that the same annotation can appear in multiple items, e.g., "hair tutorial" can describe items related to prom hair styles or ponytails. We derive a matching score for each annotation by aggregating the click frequency of the items containing the annotation. The annotations are then ranked according to their matching scores, and the top ranked annotations are considered the positive set of phrases or words with respect to the user query.
To increase the difficulty of this dataset, we remove from the initial list of positive phrases any phrase that shares a common word with the user query. E.g. "hair tutorials" will be removed because the word "hair" is contained in the query phrase "hair styles". A stemmer from Python's "stemmer" package is also adopted to find words with the same root (e.g. "cake" and "cakes" are considered the same word). This pruning step also prevents a bias toward methods that measure the similarity between the positive phrase and the query phrase by counting the number of words they have in common. In this way, we collected 9,778,508 semantically similar phrase pairs.
Previous word similarity/relatedness datasets (e.g. [11, 14]) manually annotated each word pair with an absolute score reflecting how semantically related the words in the pair are. In the testing stage, the model's predicted similarity scores for the word pairs are compared with the ground-truth scores, and the Spearman's rank correlation between the two lists is taken as the score of the model. However, it is often too hard and expensive to label such absolute relatedness scores and maintain consistency across all the pairs in a large-scale dataset, even if the scores of several annotators are averaged. We therefore adopt a simple strategy of composing triplets from the phrase pairs. More specifically, we randomly sample negative phrases from a pool of 1 billion phrases. The negative phrase must not contain any overlapping word (a stemmer is again adopted) with either of the phrases in the original phrase pair. In this way, we construct 9,778,508 triplets of the form (base phrase, positive phrase, negative phrase). In the evaluation, a model should be able to distinguish the positive phrase from the negative phrase by calculating their similarities with the base phrase in the embedding space. We denote this dataset as the Related Phrase 10M (RP10M) dataset.
3.2.2 The Cleaned-up Gold Standard Dataset
Because the raw RP10M dataset is built upon user click information, it contains some noisy triplets (e.g. the positive and base phrases are not related, or the negative phrase is strongly related to the base phrase). To create a gold standard dataset, we conduct a clean-up step using the crowdsourcing platform CrowdFlower [1] to remove these inaccurate triplets. A sample question with choices for the crowdsourcing annotators is shown in Figure 3. The positive and negative phrases in a triplet are randomly assigned as choice "A" or "B". The annotators are required to choose which phrase is more related to the base phrase, or to indicate that both are related or unrelated. To help the annotators understand the meaning of the phrases, they can click on the phrases to get Google search results. We annotate 21,000 triplets randomly sampled from the raw RP10M dataset. Three to five annotators are assigned to each question.
A triplet is accepted and added to the final cleaned-up dataset only if more than 50% of the annotators agree with its original positive and negative labels (note that the annotators do not know which phrase is positive during the annotation process). In practice, for 70% of the accepted phrase triplets more than 3 annotators agree. This leads to a gold standard dataset with 10,674 triplets. We denote this dataset as the Gold Phrase Query 10K (Gold RP10K) dataset. This dataset is very challenging, and a successful model should be able to capture a variety of semantic relationships between words or phrases. Some sample triplets are shown in Table 2.
4 The Multimodal Word Embedding Models
We propose three RNN-CNN based models to learn multimodal word embeddings, as illustrated in Figure 4. All of the models have two parts in common: a Convolutional Neural Network (CNN [20]) to extract visual representations and a Recurrent Neural Network (RNN [10]) to model sentences. For the CNN part, we resize the images to 224 × 224 and adopt the 16-layer VGGNet [32] as the visual feature extractor. The binarized activations of the layer before the SoftMax layer (i.e. a 4096-dimensional binary vector) are used as the image features, and are mapped to the same space as the RNN state (Models A, B) or the word embeddings (Model C), depending on the structure of the model, by a fully connected layer followed by a Rectified Linear Unit (ReLU [26], $\mathrm{ReLU}(x) = \max(0, x)$).
For the RNN part, we use a Gated Recurrent Unit (GRU [7]), a recently popular RNN structure, with a 512-dimensional state cell. The state of the GRU, $h_t$, for the word with index $t$ in a sentence is computed as:
$r_t = \sigma(W_r[e_t, h_{t-1}] + b_r)$ (1)
$u_t = \sigma(W_u[e_t, h_{t-1}] + b_u)$ (2)
$c_t = \tanh(W_c[e_t, r_t \odot h_{t-1}] + b_c)$ (3)
$h_t = u_t \odot h_{t-1} + (1 - u_t) \odot c_t$ (4)
where $\odot$ denotes the element-wise product, $\sigma(\cdot)$ is the sigmoid function, $e_t$ denotes the word embedding of the word $w_t$, and $r_t$ and $u_t$ are the reset gate and update gate, respectively. The input of the GRU is the words of a sentence, and it is trained to predict the next word given the previous words.
We add all the words that appear more than 50 times in the Pinterest40M dataset to the dictionary; the final vocabulary size is 335,323. Because the vocabulary is very large, we adopt the sampled SoftMax loss [8] to accelerate the training. For each training step, we sample 1024 negative words according to their log frequency in the training data and calculate the sampled SoftMax loss for the positive word. This sampled SoftMax loss for the RNN part is used in Models A, B and C. Minimizing this loss can be considered as approximately maximizing the probability of the sentences in the training set.
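As an illustration of the negative-word sampling behind the sampled SoftMax loss, the following sketch draws candidates with probability proportional to the log of their training-set frequency. Real candidate samplers (e.g. the log-uniform samplers used in common deep learning frameworks) differ in detail; all names here are illustrative, and k is assumed not to exceed the vocabulary size.

```python
import numpy as np

def sample_negative_words(word_counts, k=1024):
    """Draw k distinct negative words with probability proportional to the
    log of their frequency in the training data (illustrative sketch)."""
    words = np.array(list(word_counts.keys()))
    p = np.log(np.array([word_counts[w] for w in words], dtype=np.float64) + 1.0)
    p /= p.sum()  # normalize to a probability distribution
    return np.random.choice(words, size=k, replace=False, p=p)
```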
As illustrated in Figure 4, Models A, B and C fuse the visual information into the word embeddings in different ways. Model A is inspired by the CNN-RNN based image captioning models [36, 23]. We map the visual representation into the same space as the GRU states and use it to initialize them (i.e. we set $h_0 = \mathrm{ReLU}(W_I f_I)$). Since the visual information is fed in after the embedding layer, it is usually hard to ensure that this information is fused into the learned embeddings. We therefore adopt the transposed weight sharing strategy proposed in [23], which was originally used to enhance a model's ability to learn novel visual concepts. More specifically, we share the weight matrix of the SoftMax layer, $U_M$, with the matrix $U_w$ of the word embedding layer in a transposed manner. In this way, $U_w^T$ is learned to decode the visual information and is forced to incorporate this information into the word embedding matrix $U_w$. In the experiments, we show that this strategy significantly improves the performance of the trained embeddings. Model A is trained by maximizing the log-likelihood of the next words given the previous words, conditioned on the visual representations, similar to the image captioning models.
Compared to Model A, Models B and C utilize the visual information in a more direct way. We add direct supervision on the final state of the GRU (Model B) or on the word embeddings (Model C) by adding new loss terms, in addition to the negative log-likelihood loss from the sampled SoftMax layer:
$L_{\mathrm{state}} = \frac{1}{n} \sum_{s} \| h_{l_s} - \mathrm{ReLU}(W_I f_{I_s}) \|$ (5)
$L_{\mathrm{emb}} = \frac{1}{n} \sum_{s} \frac{1}{l_s} \sum_{t} \| e_t - \mathrm{ReLU}(W_I f_{I_s}) \|$ (6)
where $l_s$ is the length of sentence $s$ in a mini-batch of $n$ sentences; Eqn. 5 and Eqn. 6 denote the additional losses for Models B and C, respectively. The added loss term is balanced against the negative log-likelihood loss from the sampled SoftMax layer by a weight hyperparameter $\lambda$.
5 Experiments
5.1 Training Details
We convert the words in all sentences of the Pinterest40M dataset to lowercase. All non-alphanumeric characters are removed. A start sign 〈bos〉 and an end sign 〈eos〉 are added at the beginning and the end of every sentence, respectively. We use stochastic gradient descent with a mini-batch size of 256 sentences and a learning rate of 1.0. The gradient is clipped to 10.0. We train the models until the loss no longer decreases on a small validation set of 10,000 images and their descriptions; the models scan the dataset for roughly five epochs. The bias terms of the gates (i.e. $b_r$ and $b_u$ in Eqns. 1 and 2) in the GRU layer are initialized to 1.0.
5.2 Evaluation Details
We use the trained embedding models to extract embeddings for all the words in a phrase and aggregate them by average pooling to obtain the phrase representation. We then check whether the cosine distance of the (base phrase, positive phrase) pair is smaller than that of the (base phrase, negative phrase) pair. The average precision over all the triplets is reported for both the raw Related Phrase 10M (RP10M) dataset and the gold standard Related Phrase 10K (Gold RP10K) dataset.
5.3 Results on the Gold RP10K and RP10M Datasets
We evaluate and compare our Models A, B and C, their variants, and several strong baselines on our RP10M and Gold RP10K datasets. The results are shown in Table 3. "Pure Text RNN" denotes the baseline model trained on Pinterest40M without the visual features as input; it has the same model structure as our Model A except that the hidden state of the GRU is initialized with a zero vector. "Model A without weight sharing" denotes a variant of Model A in which the weight matrix $U_w$ of the word embedding layer is not shared with the weight matrix $U_M$ of the sampled SoftMax layer (see Figure 4 for details). (We also tried the weight sharing strategy in Models B and C, but the performance was very similar to the non-weight-sharing versions.) "Word2Vec-GoogleNews" denotes the state-of-the-art off-the-shelf Word2Vec [25] word embedding model trained on the Google-News data (about 300 billion words). "GloVe-Twitter" denotes the GloVe model [29] trained on Twitter data (about 27 billion words). These are pure text models, but they are trained on very large datasets (our model is trained on only 3 billion words).
Comparing these models, we can draw the following conclusions:
• Under our evaluation criteria, visual information significantly helps the learning of word embeddings when the model successfully fuses the visual and text information together. E.g., our Model A outperforms the Word2Vec model by 9.5% and 9.2% on the Gold RP10K and RP10M datasets, respectively. Model C also outperforms the pure text RNN baselines.
• The weight sharing strategy is crucial for enhancing the ability of Model A to fuse visual information into the learned embeddings. E.g., our Model A outperforms the baseline without this sharing strategy by 7.0% and 4.4% on Gold RP10K and RP10M, respectively.
• Model A performs the best among the three models. This shows that the soft supervision imposed by the weight-sharing strategy is more effective than direct supervision. This is not surprising, since not all words are semantically related to the content of the image, and a direct, hard constraint might hinder the learning of the embeddings for these words.
• Model B does not perform very well. The reason might be that most of the sentences have more than 8 words, so the gradient from the final-state loss term $L_{\mathrm{state}}$ cannot easily reach the embeddings of all the words in the sentence.
• All the models trained on the Pinterest40M dataset perform better than the skip-gram model [25] trained on a much larger dataset of 300 billion words.
6 Discussion
In this paper, we investigate the task of training and evaluating word embedding models. We introduce Pinterest40M, to the best of our knowledge the largest image dataset with sentence descriptions, and construct two evaluation datasets (i.e. RP10M and Gold RP10K) for word/phrase similarity and relatedness evaluation. Based on these datasets, we propose several CNN-RNN based multimodal models to learn effective word embeddings. Experiments show that visual information significantly helps the training of word embeddings, and that our proposed model successfully incorporates such information into the learned embeddings.
There are many possible extensions of the proposed models and datasets. E.g., we plan to separate semantically similar from semantically related phrase pairs in the Gold RP10K dataset to better understand the performance of the methods, similar to [3]. We will also provide relatedness or similarity scores for the (base phrase, positive phrase) pairs to enable the same evaluation strategy as previous datasets (e.g. [5, 11]). Finally, we plan to propose better models for phrase representations.
Acknowledgement
We are grateful to James Rubinstein for setting up the crowdsourcing experiments for dataset cleanup. We thank Veronica Mapes, Pawel Garbacki, and Leon Wong for discussions and support. We appreciate the comments and suggestions from the anonymous reviewers of NIPS 2016. This work is partly supported by the Center for Brains, Minds and Machines NSF STC award CCF-1231216 and the Army Research Office ARO 62250-CS.
1. What is the focus of the paper regarding image-caption style datasets and multimodal embedding models?
2. What are the strengths and weaknesses of the proposed datasets?
3. How does the evaluation method have multiple issues in its current form?
4. Why did the authors choose to work only on word similarity and not show other image-language tasks?
5. What are the concerns regarding the quality of the training dataset?
6. Can the authors provide more clarity on the main model figure and suggest new model variants for this task/dataset?
7. Are there any typos or errors in the paper that need to be addressed?
Review
Review This paper presents a new multi-million image-caption style dataset and uses it for training multimodal embedding models. The authors also present a new user-click-based similarity dataset for evaluation. Finally, they try 2-3 CNN-RNN style models for multimodal phrase embedding similarity. The datasets are decent contributions (but with some issues). However, the evaluation has multiple issues in its current form, and the models are borrowed from previous work.
After author response: Thanks for answering some of my questions -- I updated some of my scores accordingly. I still encourage the authors to answer the rest of the questions, especially evaluation on better downstream tasks like captioning and visual zero-shot learning.
---------------------
Evaluation issues:
-- No existing datasets have been used for evaluation, e.g., WordSim-353, SimLex-999, or the new visual similarity datasets VisSim and SemSim from Lazaridou et al., 2015. The authors have not compared to any existing paper/model on these datasets to show the advantage of either their larger/better training dataset or their models, hence leaving no takeaways.
-- Why only work on word similarity? Why not also show other image-language tasks, e.g., captioning or visual zero-shot learning? This would better demonstrate the advantages of the training dataset.
-- For the word embedding baselines, please use stronger recent models such as GloVe, Skip-thought, and paragram.
Evaluation dataset issues:
-- The dataset mixes multiple types of similarity, e.g., relatedness versus paraphrase/synonymy, and this has been discussed to be a big issue when evaluating embedding models.
-- The data uses triplets instead of the standard ranking+correlation method, and the negative phrases for these triplets are chosen randomly (which makes them easy to detect), so the task becomes much easier.
-- In the CrowdFlower experiment, the turkers will assume they need to choose at least one of the two phrases most of the time, and even if they choose the correct phrase, it might only be because it is more related to the query phrase than the other random negative phrase; overall, the "positive" phrase might still be only slightly related to the query phrase (because of the initial recommendation-system-based retrieval). To verify this, the authors should also have turkers rate how much these "positive" phrases are related to the query phrase in absolute terms/ratings.
-- They use 3-5 annotators and then keep phrases where >50% of the annotators agree, but this will mean agreement between just two people in many cases. For such a noisy initial dataset, the filtering should be stricter than 50%, or the number of annotators should be higher.
Training dataset:
-- Since the data is from Pinterest and the "captions" are just user comments, I am worried that some of these captions might not be standard description-style sentences but might instead contain some non-visual story or other information -- the authors should investigate and report on this, and also verify whether this corpus can be used for training captioning systems.
Model:
-- The main model figure is very unclear and should be expanded and perhaps split up.
-- All models are from previous work; it would have been good to also suggest some new model variants for this task/dataset.
-- Why use average pooling at test time for a phrase representation instead of running the trained RNN model?
Other:
-- Lots of typos throughout the paper, e.g., Line 15: "crutial" --> "crucial"; Line 34: "commemted" --> "commented"; Table 3: "weigh" --> "weight"; Line 232: "to Model A, We" --> "to Model A, we".
NIPS
Title: Training and Evaluating Multimodal Word Embeddings with Large-scale Web Annotated Images

Abstract: In this paper, we focus on training and evaluating effective word embeddings with both text and visual information. More specifically, we introduce a large-scale dataset with 300 million sentences describing over 40 million images crawled and downloaded from publicly available Pins (i.e. images with sentence descriptions uploaded by users) on Pinterest [2]. This dataset is more than 200 times larger than MS COCO [22], the standard large-scale image dataset with sentence descriptions. In addition, we construct an evaluation dataset to directly assess the effectiveness of word embeddings in terms of finding semantically similar or related words and phrases. The word/phrase pairs in this evaluation dataset are collected from the click data of millions of users in an image search system, and thus contain rich semantic relationships. Based on these datasets, we propose and compare several Recurrent Neural Network (RNN) based multimodal (text and image) models. Experiments show that our model benefits from incorporating the visual information into the word embeddings, and that a weight sharing strategy is crucial for learning such multimodal embeddings. The project page is: http://www.stat.ucla.edu/~junhua.mao/multimodal_embedding.html (the datasets introduced in this work will be gradually released on the project page).

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

1 Introduction

Word embeddings are dense vector representations of words with semantic and relational information. In this vector space, semantically related or similar words should be close to each other. A large-scale training dataset with billions of words is crucial to train effective word embedding models. The trained word embeddings are very useful in various tasks and real-world applications that involve searching for semantically similar or related words and phrases. A large proportion of the state-of-the-art word embedding models are trained on pure text data only. Since one of the most important functions of language is to describe the visual world, we argue that effective word embeddings should contain rich visual semantics. Previous work has shown that visual information is important for training effective embedding models. However, due to the lack of large training datasets of the same scale as the pure text datasets, models are either trained on relatively small datasets (e.g. [13]), or the visual constraints are only applied to a limited number of pre-defined visual concepts (e.g. [21]). Therefore, such work did not fully explore the potential of visual information in learning word embeddings. In this paper, we introduce a large-scale dataset with both text descriptions and images, crawled and collected from Pinterest, one of the largest databases of annotated web images. On Pinterest, users save web images onto their boards (i.e. image collections) and supply their descriptions of the images. More descriptions are collected when the same images are saved and commented on by other users. Compared to MS COCO (i.e. the image benchmark with sentence descriptions [22]), our dataset is much larger (40 million images with 300 million sentences, compared to 0.2 million images and 1 million sentences in the current release of MS COCO) and is at the same scale as standard pure text training datasets (e.g. the Wikipedia Text Corpus).
Some sample images and their descriptions are shown in Figure 1 in Section 3.1. We believe training on this large-scale dataset will lead to richer and better generalized models. We denote this dataset as the Pinterest40M dataset. One challenge for word embedding learning is how to directly evaluate the quality of the model with respect to the tasks of interest (e.g. the task of finding related or similar words and phrases). State-of-the-art neural language models often use the negative log-likelihood of the predicted words as their training loss, which is not always correlated with the effectiveness of the learned embedding. Current evaluation datasets (e.g. [5, 14, 11]) for word similarity or relatedness contain fewer than a thousand word pairs and cannot comprehensively evaluate all the embeddings of the words appearing in the training set. The challenge of constructing large-scale evaluation datasets is partly due to the difficulty of finding a large number of semantically similar or related word/phrase pairs. In this paper, we utilize user click information collected from Pinterest's image search system to generate millions of these candidate word/phrase pairs. Because user click data are somewhat noisy, we removed inaccurate entries in the dataset by using crowdsourced human annotations. This led to a final gold standard evaluation dataset consisting of 10,674 entries. Equipped with these datasets, we propose, train and evaluate several Recurrent Neural Network (RNN [10]) based models with inputs of both text descriptions and images. Some of these models directly minimize the Euclidean distance between the visual features and the word embeddings or RNN states, similar to previous work (e.g. [13, 21]). The best performing model is inspired by recent image captioning models [9, 24, 36], with the additional weight-sharing strategy originally proposed in [23] to learn novel visual concepts. This strategy imposes soft constraints between the visual features and all the related words in the sentences. Our experiments validate the effectiveness and importance of incorporating visual information into the learned word embeddings. We make three major contributions: Firstly, we constructed a large-scale multimodal dataset with both text descriptions and images, which is at the same scale as pure text training sets. Secondly, we collected and labeled a large-scale evaluation dataset for word and phrase similarity and relatedness evaluation. Finally, we proposed and compared several RNN based models for learning multimodal word embeddings effectively. To facilitate research in this area, we will gradually release the datasets proposed in this paper on our project page.

2 Related Work

Image-Sentence Description Datasets. Image description datasets, such as Flickr8K [15], Flickr30K [37], IAPR-TC12 [12], and MS COCO [22], greatly facilitated the development of models for language and vision tasks such as image captioning. Because it takes substantial resources to label images with sentence descriptions, the scale of these datasets is relatively small (MS COCO, the largest dataset among them, contains only 1 million sentences, while our Pinterest40M dataset has 300 million sentences). In addition, the language used to describe images in these datasets is relatively simple (e.g. MS COCO has only around 10,000 unique words appearing at least 3 times, while there are 335,323 unique words appearing at least 50 times in Pinterest40M).
The Im2Text dataset proposed in [28] adopts a data collection process similar to ours, using 1 million images with 1 million user annotated captions from Flickr. But its scale is still much smaller than our Pinterest40M dataset. Recently, [34] proposed and released the YFCC100M dataset, a large-scale multimedia dataset containing metadata of 100 million Flickr images. It provides rich information about images, such as tags, titles, and the locations where they were taken. The users' comments can be obtained by querying the Flickr API. Because of the different functionality and user groups of Flickr and Pinterest, the users' comments on Flickr images are quite different from those on Pinterest (e.g. on Flickr, users tend to comment more on the photography techniques). This dataset provides complementary information to our Pinterest40M dataset.

Word Similarity-Relatedness Evaluation. The standard benchmarks, such as WordSim-353/WS-Sim [11, 3], MEN [5], and SimLex-999 [14], consist of a few hundred word pairs and their similarity or relatedness scores. The word pairs are composed by asking human subjects to write the first related, or similar, word that comes into their mind when presented with a concept word (e.g. [27, 11]), or by randomly selecting frequent words in a large text corpus and manually searching for useful pairs (e.g. [5]). In this work, we are able to collect a large number of word/phrase pairs of good quality by mining them from the click data of Pinterest's image search system used by millions of users. In addition, because this dataset is collected through a visual search system, it is more suitable for evaluating multimodal embedding models. Another related evaluation is the analogy task proposed in [25]. They ask the model questions like "'man' is to 'woman' as 'king' is to what?" as their evaluation. But such questions do not directly measure word similarity or relatedness, and cannot cover all the semantic relationships of the millions of words in the dictionary.

RNN for Language and Vision. Our models are inspired by recent RNN-CNN based image captioning models [9, 24, 36, 16, 6, 18, 23], which can be viewed as a special case of the sequence-to-sequence learning framework [33, 7]. We adopt Gated Recurrent Units (GRUs [7]), a variation of the simple RNN model.

Multimodal Word Embedding Models. For pure text, one of the most effective approaches to learning word embeddings is to train neural network models to predict a word given its context words in a sentence (i.e. the continuous bag-of-words model [4]) or to predict the context words given the current word (i.e. the skip-gram model [25]). There is a large literature on word embedding models that utilize visual information. One type of method takes a two-step strategy that first extracts text and image features separately and then fuses them together using singular value decomposition [5], stacked autoencoders [31], or even simple concatenation [17]. [13, 21, 19] learn the text and image features jointly by fusing visual or perceptual information into a skip-gram model [25]. However, because of the lack of large-scale multimodal datasets, they only associate visual content with a pre-defined set of nouns (e.g. [21]) or perception domains (e.g. [14]) in the sentences, or focus on abstract scenes (e.g. [19]). By contrast, our best performing model places a soft constraint between visual features and all the words in the sentences via a weight sharing strategy, as shown in Section 4.
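For reference, the two pure-text objectives mentioned above (CBOW and skip-gram) can be trained with off-the-shelf tools. The following is a minimal sketch using the gensim library -- a choice of ours, not something used in the paper; parameter names follow recent gensim versions:

```python
from gensim.models import Word2Vec

# Toy corpus: each sentence is a list of tokens.
sentences = [["hair", "styles", "tutorial"], ["wedding", "cake", "ideas"]]

# sg=1 selects the skip-gram objective (predict context words from the current word);
# sg=0 selects CBOW (predict the current word from its context).
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1)

vector = model.wv["cake"]  # 100-dimensional embedding of "cake"
```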
3 Datasets

We constructed two datasets: one for training our multimodal word embeddings (see Section 3.1) and another for the evaluation of the learned word embeddings (see Section 3.2).

3.1 Training Dataset

Table 1: Scale comparison with other image description benchmarks.

  Dataset           Images   Sentences
  Flickr8K [15]     8K       40K
  Flickr30K [37]    30K      150K
  IAPR-TC12 [12]    20K      34K
  MS COCO [22]      200K     1M
  Im2Text [28]      1M       1M
  Pinterest40M      40M      300M

Pinterest is one of the largest repositories of Web images. Users commonly tag images with short descriptions and share the images (and descriptions) with others. Since a given image can be shared and tagged by multiple, sometimes thousands of, users, many images have a very rich set of descriptions, making this source of data ideal for training models with both text and image inputs. The dataset is prepared in the following way: first, we crawled the publicly available data on Pinterest to construct our training dataset of more than 40 million images. Each image is associated with an average of 12 sentences, and we removed duplicated sentences as well as short sentences with fewer than 4 words. The duplication detection is conducted by calculating the overlapping word unigram ratios. Some sample images and descriptions are shown in Figure 1. We denote this dataset as the Pinterest40M dataset. Our dataset contains 40 million images with 300 million sentences (around 3 billion words), which is much larger than previous image description datasets (see Table 1). In addition, because the descriptions are annotated by users who expressed interest in the images, the descriptions in our dataset are more natural and richer than those of the annotated image description datasets. In our dataset, there are 335,323 unique words with a minimum occurrence count of 50, compared with 10,232 and 65,552 words appearing at least 3 times in the MS COCO and Im2Text datasets respectively. To the best of our knowledge, there is no previous paper that trains a multimodal RNN model on a dataset of such scale.

3.2 Evaluation Datasets

This work proposes to use labeled phrase triplets -- each triplet is a three-phrase tuple containing phrase A, phrase B and phrase C, where A is considered semantically closer to B than A is to C. At testing time, we compute the distance in the word embedding space between A/B and A/C, and consider a test triplet as positive if d(A,B) < d(A,C). This relative comparison approach is commonly used to evaluate and compare different word embedding models [30]. In order to generate a large number of phrase triplets, we rely on user-click data collected from Pinterest's image search system. In the end, we construct a large-scale evaluation dataset with 9.8 million triplets (see Section 3.2.1), and a cleaned-up gold standard version with 10 thousand triplets (see Section 3.2.2).

3.2.1 The Raw Evaluation Dataset from User Clickthrough Data

It is very hard to obtain a large number of semantically similar or related word and phrase pairs. This is one of the challenges in constructing a large-scale word/phrase similarity and relatedness evaluation dataset. We address this challenge by utilizing the user clickthrough data from Pinterest's image search system; see Figure 2 for an illustration. More specifically, given a query from a user (e.g. "hair styles"), the search system returns a list of items, and each item is composed of an image and a set of annotations (i.e. short phrases or words that describe the item).
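To make the triplet criterion above concrete, a minimal sketch of the d(A,B) < d(A,C) check might look as follows. The distance function and the embedding lookup are placeholders; Section 5.2 later instantiates d with cosine distance over average-pooled word embeddings:

```python
import numpy as np

def euclidean(u, v):
    # One possible instantiation of d(.,.); Section 5.2 uses cosine distance instead.
    return np.linalg.norm(u - v)

def triplet_is_positive(emb_a, emb_b, emb_c, d=euclidean):
    # A test triplet (A, B, C) counts as positive if d(A, B) < d(A, C).
    return d(emb_a, emb_b) < d(emb_a, emb_c)

def triplet_accuracy(triplet_embeddings, d=euclidean):
    # triplet_embeddings: iterable of (emb_a, emb_b, emb_c) vectors.
    results = [triplet_is_positive(a, b, c, d) for a, b, c in triplet_embeddings]
    return sum(results) / len(results)
```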
Please note that the same annotation can appear in multiple items; e.g., "hair tutorial" can describe items related to prom hair styles or ponytails. We derive a matching score for each annotation by aggregating the click frequency of the items containing the annotation. The annotations are then ranked according to the matching scores, and the top ranked annotations are considered as the positive set of phrases or words with respect to the user query. To increase the difficulty of this dataset, we remove the phrases that share common words with the user query from the initial list of positive phrases. E.g., "hair tutorials" will be removed because the word "hair" is contained in the query phrase "hair styles". A stemmer from Python's "stemmer" package is also adopted to find words with the same root (e.g. "cake" and "cakes" are considered as the same word). This pruning step also avoids biasing the evaluation toward methods that measure the similarity between the positive phrase and the query phrase by counting the number of overlapping words between them. In this way, we collected 9,778,508 semantically similar phrase pairs. Previous word similarity/relatedness datasets (e.g. [11, 14]) manually annotated each word pair with an absolute score reflecting how much the words in the pair are semantically related. In the testing stage, the list of similarity scores predicted by the model for the word pairs in the dataset is compared with the groundtruth score list, and the Spearman's rank correlation between the two lists is calculated as the score of the model. However, it is often too hard and expensive to label absolute relatedness scores and maintain consistency across all the pairs in a large-scale dataset, even if we average the scores of several annotators. We instead adopt a simple strategy of composing triplets from the phrase pairs. More specifically, we randomly sample negative phrases from a pool of 1 billion phrases. The negative phrase should not contain any overlapping word (a stemmer is again adopted) with either of the phrases in the original phrase pair; a sketch of this sampling step is given after this paragraph. In this way, we construct 9,778,508 triplets in the format (base phrase, positive phrase, negative phrase). In the evaluation, a model should be able to distinguish the positive phrase from the negative phrase by calculating their similarities to the base phrase in the embedding space. We denote this dataset as the Related Phrase 10M (RP10M) dataset.

3.2.2 The Cleaned-up Gold Standard Dataset

Because the raw Related Phrase 10M dataset is built upon user click information, it contains some noisy triplets (e.g. the positive and base phrases are not related, or the negative phrase is strongly related to the base phrase). To create a gold standard dataset, we conduct a clean-up step using the crowdsourcing platform CrowdFlower [1] to remove these inaccurate triplets. A sample question and choices for the crowdsourcing annotators are shown in Figure 3. The positive and negative phrases in a triplet are randomly assigned as choice "A" or "B". The annotators are required to choose which phrase is more related to the base phrase, or to indicate that they are both related or both unrelated. To help the annotators understand the meaning of the phrases, they can click on the phrases to get Google search results. We annotate 21,000 triplets randomly sampled from the raw Related Phrase 10M dataset. Three to five annotators are assigned to each question.
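A rough sketch of the negative sampling with the stem-overlap filter is given below. This is a minimal illustration under our own assumptions: we substitute NLTK's Porter stemmer for the unspecified "stemmer" package, and the helper names are hypothetical:

```python
import random
from nltk.stem.porter import PorterStemmer  # stand-in for the "stemmer" package

stemmer = PorterStemmer()

def stems(phrase):
    # Set of stemmed words, so "cake" and "cakes" compare equal.
    return {stemmer.stem(w) for w in phrase.lower().split()}

def sample_negative(base_phrase, positive_phrase, phrase_pool):
    # Redraw until the candidate shares no stemmed word with either phrase.
    banned = stems(base_phrase) | stems(positive_phrase)
    while True:
        candidate = random.choice(phrase_pool)
        if stems(candidate).isdisjoint(banned):
            return candidate

# Example triplet: (base, positive, sampled negative)
# triplet = ("hair styles", "updo ideas", sample_negative("hair styles", "updo ideas", pool))
```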
A triplet is accepted and added to the final cleaned-up dataset only if more than 50% of the annotators agree with the original positive and negative labels of the phrases (note that the annotators do not know which one is positive during the annotation process). In practice, for 70% of the accepted triplets more than 3 annotators agree. This leads to a gold standard dataset with 10,674 triplets. We denote this dataset as the Gold Phrase Query 10K (Gold RP10K) dataset. This dataset is very challenging, and a successful model should be able to capture a variety of semantic relationships between words or phrases. Some sample triplets are shown in Table 2.

4 The Multimodal Word Embedding Models

We propose three RNN-CNN based models to learn multimodal word embeddings, as illustrated in Figure 4. All of the models have two parts in common: a Convolutional Neural Network (CNN [20]) to extract visual representations and a Recurrent Neural Network (RNN [10]) to model sentences. For the CNN part, we resize the images to 224×224 and adopt the 16-layer VGGNet [32] as the visual feature extractor. The binarized activations (i.e. 4096-dimensional binary vectors) of the layer before its SoftMax layer are used as the image features, and are mapped to the same space as the state of the RNN (Models A, B) or the word embeddings (Model C), depending on the structure of the model, by a fully connected layer followed by a Rectified Linear Unit (ReLU [26], ReLU(x) = max(0, x)). For the RNN part, we use a Gated Recurrent Unit (GRU [7]), a recently popular RNN variant, with a 512-dimensional state cell. The state of the GRU, h_t, for the word with index t in a sentence can be computed as:

  r_t = σ(W_r [e_t, h_{t−1}] + b_r)                  (1)
  u_t = σ(W_u [e_t, h_{t−1}] + b_u)                  (2)
  c_t = tanh(W_c [e_t, r_t ⊙ h_{t−1}] + b_c)         (3)
  h_t = u_t ⊙ h_{t−1} + (1 − u_t) ⊙ c_t              (4)

where ⊙ represents the element-wise product, σ(·) is the sigmoid function, e_t denotes the word embedding of the word w_t, and r_t and u_t are the reset gate and update gate respectively. The inputs of the GRU are the words in a sentence, and it is trained to predict the next word given the previous words. We add all the words that appear at least 50 times in the Pinterest40M dataset to the dictionary. The final vocabulary size is 335,323. Because the vocabulary is very large, we adopt the sampled SoftMax loss [8] to accelerate the training. For each training step, we sample 1024 negative words according to their log frequency in the training data and calculate the sampled SoftMax loss for the positive word. This sampled SoftMax loss function of the RNN part is used in Models A, B and C. Minimizing this loss function can be considered as approximately maximizing the probability of the sentences in the training set. As illustrated in Figure 4, Models A, B and C fuse the visual information into the word embeddings in different ways. Model A is inspired by the CNN-RNN based image captioning models [36, 23]. We map the visual representation into the same space as the GRU states to initialize them (i.e. we set h_0 = ReLU(W_I f_I)). Since the visual information is fed in after the embedding layer, it is usually hard to ensure that this information is fused into the learned embeddings. We adopt the transposed weight sharing strategy proposed in [23], originally used to enhance a model's ability to learn novel visual concepts. More specifically, we share the weight matrix U_M of the SoftMax layer with the matrix U_w of the word embedding layer in a transposed manner.
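As a reading aid, the GRU state update in Eqns. (1)-(4) can be sketched in a few lines of NumPy. This is an illustrative sketch of ours, not the authors' code; the weight shapes and parameter container are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(e_t, h_prev, p):
    # p holds W_r, W_u, W_c (each of shape [H, E+H]) and b_r, b_u, b_c (each [H]).
    x = np.concatenate([e_t, h_prev])          # [e_t, h_{t-1}]
    r = sigmoid(p["W_r"] @ x + p["b_r"])       # reset gate, Eqn. (1)
    u = sigmoid(p["W_u"] @ x + p["b_u"])       # update gate, Eqn. (2)
    x_r = np.concatenate([e_t, r * h_prev])    # [e_t, r_t ⊙ h_{t-1}]
    c = np.tanh(p["W_c"] @ x_r + p["b_c"])     # candidate state, Eqn. (3)
    return u * h_prev + (1.0 - u) * c          # new state h_t, Eqn. (4)
```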
In this way, U_w^T is learned to decode the visual information and is forced to incorporate this information into the word embedding matrix U_w. In the experiments, we show that this strategy significantly improves the performance of the trained embeddings. Model A is trained by maximizing the log-likelihood of the next word given the previous words, conditioned on the visual representations, similar to the image captioning models. Compared to Model A, we adopt a more direct way to utilize the visual information in Models B and C. We add direct supervision on the final state of the GRU (Model B) or on the word embeddings (Model C), by adding new loss terms in addition to the negative log-likelihood loss from the sampled SoftMax layer:

  L_state = (1/n) Σ_s ‖ h_{l_s} − ReLU(W_I f_{I_s}) ‖              (5)
  L_emb  = (1/n) Σ_s (1/l_s) Σ_t ‖ e_t − ReLU(W_I f_{I_s}) ‖       (6)

where l_s is the length of the sentence s in a mini-batch with n sentences; Eqn. (5) and Eqn. (6) denote the additional losses for Models B and C respectively. The added loss term is balanced against the negative log-likelihood loss from the sampled SoftMax layer by a weight hyperparameter λ.

5 Experiments

5.1 Training Details

We convert the words in all sentences of the Pinterest40M dataset to lowercase. All the non-alphanumeric characters are removed. A start sign ⟨bos⟩ and an end sign ⟨eos⟩ are added at the beginning and the end of each sentence respectively. We use the stochastic gradient descent method with a mini-batch size of 256 sentences and a learning rate of 1.0. The gradient is clipped to 10.0. We train the models until the loss no longer decreases on a small validation set with 10,000 images and their descriptions; the models scan the dataset for roughly 5 epochs. The bias terms of the gates (i.e. b_r and b_u in Eqns. (1) and (2)) in the GRU layer are initialized to 1.0.

5.2 Evaluation Details

We use the trained embedding models to extract embeddings for all the words in a phrase and aggregate them by average pooling to get the phrase representation. We then check whether the cosine distance of the (base phrase, positive phrase) pair is smaller than that of the (base phrase, negative phrase) pair. The average precision over all the triplets in the raw Related Phrase 10M (RP10M) dataset and the gold standard Related Phrase 10K (Gold RP10K) dataset is reported.

5.3 Results on the Gold RP10K and RP10M datasets

We evaluate and compare our Models A, B and C, their variants, and several strong baselines on our RP10M and Gold RP10K datasets. The results are shown in Table 3. "Pure Text RNN" denotes the baseline model trained on Pinterest40M without input of the visual features. It has the same model structure as our Model A, except that we initialize the hidden state of the GRU with a zero vector. "Model A without weight sharing" denotes a variant of Model A where the weight matrix U_w of the word embedding layer is not shared with the weight matrix U_M of the sampled SoftMax layer (see Figure 4 for details). (We also tried to adopt the weight sharing strategy in Models B and C, but the performance was very similar to the non-weight-sharing versions.) "Word2Vec-GoogleNews" denotes the state-of-the-art off-the-shelf word embedding model of Word2Vec [25] trained on the Google News data (about 300 billion words). "GloVe-Twitter" denotes the GloVe model [29] trained on Twitter data (about 27 billion words). These are pure text models, but trained on very large datasets (our models train on only 3 billion words).
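Before turning to the results, the evaluation protocol of Section 5.2 can be summarized in a short sketch. This is our own illustrative code, not the authors'; word_vectors stands for any trained embedding table:

```python
import numpy as np

def phrase_embedding(phrase, word_vectors):
    # Section 5.2: average-pool the embeddings of the words in the phrase.
    vecs = [word_vectors[w] for w in phrase.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0)

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def precision(triplets, word_vectors):
    # Fraction of triplets where the positive phrase is closer (in cosine terms)
    # to the base phrase than the negative phrase is.
    correct = 0
    for base, pos, neg in triplets:
        b = phrase_embedding(base, word_vectors)
        if cosine(b, phrase_embedding(pos, word_vectors)) > \
           cosine(b, phrase_embedding(neg, word_vectors)):
            correct += 1
    return correct / len(triplets)
```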
Comparing these models, we can draw the following conclusions:
• Under our evaluation criteria, visual information significantly helps the learning of word embeddings when the model successfully fuses the visual and text information together. E.g., our Model A outperforms the Word2Vec model by 9.5% and 9.2% on the Gold RP10K and RP10M datasets respectively. Model C also outperforms the pure text RNN baselines.
• The weight sharing strategy is crucial for enhancing the ability of Model A to fuse visual information into the learned embeddings. E.g., our Model A outperforms the baseline without this sharing strategy by 7.0% and 4.4% on Gold RP10K and RP10M respectively.
• Model A performs the best among the three models. This shows that the soft supervision imposed by the weight-sharing strategy is more effective than direct supervision. This is not surprising, since not all words are semantically related to the content of the image, and a direct, hard constraint might hinder the learning of the embeddings of these words.
• Model B does not perform very well. The reason might be that most of the sentences have more than 8 words, and the gradient from the final-state loss term L_state cannot easily be passed back to the embeddings of all the words in the sentence.
• All the models trained on the Pinterest40M dataset perform better than the skip-gram model [25] trained on a much larger dataset of 300 billion words.

6 Discussion

In this paper, we investigate the task of training and evaluating word embedding models. We introduce Pinterest40M, the largest image dataset with sentence descriptions to the best of our knowledge, and construct two evaluation datasets (i.e. RP10M and Gold RP10K) for word/phrase similarity and relatedness evaluation. Based on these datasets, we propose several CNN-RNN based multimodal models to learn effective word embeddings. Experiments show that visual information significantly helps the training of word embeddings, and that our proposed model successfully incorporates such information into the learned embeddings. There are many possible extensions of the proposed model and the datasets. E.g., we plan to separate semantically similar phrase pairs from related phrase pairs in the Gold RP10K dataset to better understand the performance of the methods, similar to [3]. We will also provide relatedness or similarity scores for the (base phrase, positive phrase) pairs to enable the same evaluation strategy as previous datasets (e.g. [5, 11]). Finally, we plan to propose better models for phrase representations.

Acknowledgements

We are grateful to James Rubinstein for setting up the crowdsourcing experiments for dataset clean-up. We thank Veronica Mapes, Pawel Garbacki, and Leon Wong for discussions and support. We appreciate the comments and suggestions from the anonymous reviewers of NIPS 2016. This work is partly supported by the Center for Brains, Minds and Machines NSF STC award CCF-1231216 and the Army Research Office ARO 62250-CS.
1. What is the focus of the paper, and what is the reviewer's opinion of its contribution?
2. What are the strengths and weaknesses of the proposed dataset, according to the reviewer?
3. Does the reviewer have any concerns or suggestions regarding the licensing of the dataset?
4. How does the reviewer assess the impact of the dataset on multimodal research and training?
5. Are there any specific aspects of the dataset that the reviewer would like to see analyzed in greater detail?
Review
This paper introduces a wonderful new image-sentence dataset. It should be a great resource for multimodal research and training; I hope it will come with a good license. The model isn't that interesting, and NIPS historically cares more about models, hence missing out on some impactful dataset papers. I'd like to see some more analysis of the dataset: the number and distribution of unique words, and problems like personal comments (vs. visual descriptions), ungrammaticality, etc.
1. What is the focus of the paper regarding introducing new large scale datasets?
2. What are the strengths of the proposed datasets, particularly in their potential impact on research?
3. Do you have any concerns about the paper's claims regarding its novelty?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any additional questions or suggestions regarding the paper's contribution and potential improvements?
Review
This paper introduces a new large-scale dataset of annotated images from the web. More precisely, the authors crawled approximately 40 million images from the website Pinterest, along with descriptions submitted by users. There is an average of 12 sentences per image, and many images are described by multiple users. The authors also introduce a new dataset for evaluating word representations. This dataset is made of triplets of short phrases, the first two phrases being semantically closer than the first and the third. The positive phrase pairs were obtained using click data, while the negative pairs were randomly sampled. This dataset contains approximately 9.8 million triplets. The authors also manually cleaned 10,000 triplets, using a crowdsourcing platform. Finally, the authors propose different baselines to learn word vector representations using visual information, based on this dataset. More precisely, they describe three RNN-CNN models, inspired by models for caption generation. They apply these models to the proposed dataset, showing that using multimodal data is helpful for this evaluation dataset. In particular, they show that on the proposed evaluation set, the proposed models outperform the pre-trained word2vec vectors (which were trained on approx. 300 billion words).

This paper is very clearly written. It introduces two large-scale datasets, which could have a big impact for researchers working on learning models from multimodal data. I believe that collecting and sharing high quality datasets is important for the machine learning community, and it seems to me that the Pinterest 40M images could be such a dataset. However, I have a couple of concerns regarding this paper. First, the paper does not mention the Yahoo Flickr Creative Commons (YFCC) dataset, which contains approximately 100 million images from Yahoo Flickr. This dataset also contains images with descriptions provided by users, as well as other metadata, such as tags, location or time. While I believe the two datasets are different, I think the authors should discuss the difference between the two (and not claim that they propose a dataset "200 times larger than the current multimodal datasets"). Second, I think that baselines simpler than RNN-CNN should be considered in the paper. Examples of such baselines are:
- use the skip-gram or cbow models from word2vec on the descriptions (pure text baseline);
- use the multimodal skip-gram described in [Lazaridou et al.].
Overall, I enjoyed reading this paper and I am looking forward to the release of this dataset. However, I believe that this paper would be stronger with a better discussion of existing datasets and baselines for multimodal data.

== Additional comments ==
A classical evaluation methodology for multimodal data is retrieval: given a description, is the model able to retrieve the corresponding image? Have the authors considered this task?

[Thomee et al.] YFCC100M: The New Data in Multimedia Research (http://webscope.sandbox.yahoo.com/catalog.php?datatype=i&did=67)
[Lazaridou et al.] Combining Language and Vision with a Multimodal Skip-gram Model
NIPS
Title Training and Evaluating Multimodal Word Embeddings with Large-scale Web Annotated Images Abstract In this paper, we focus on training and evaluating effective word embeddings with both text and visual information. More specifically, we introduce a large-scale dataset with 300 million sentences describing over 40 million images crawled and downloaded from publicly available Pins (i.e. an image with sentence descriptions uploaded by users) on Pinterest [2]. This dataset is more than 200 times larger than MS COCO [22], the standard large-scale image dataset with sentence descriptions. In addition, we construct an evaluation dataset to directly assess the effectiveness of word embeddings in terms of finding semantically similar or related words and phrases. The word/phrase pairs in this evaluation dataset are collected from the click data of millions of users in an image search system, and thus contain rich semantic relationships. Based on these datasets, we propose and compare several Recurrent Neural Network (RNN) based multimodal (text and image) models. Experiments show that our model benefits from incorporating the visual information into the word embeddings, and a weight sharing strategy is crucial for learning such multimodal embeddings. The project page is: http://www.stat.ucla.edu/~junhua.mao/multimodal_embedding.html (the datasets introduced in this work will be gradually released on the project page). 1 Introduction Word embeddings are dense vector representations of words with semantic and relational information. In this vector space, semantically related or similar words should be close to each other. A large-scale training dataset with billions of words is crucial to train effective word embedding models. The trained word embeddings are very useful in various tasks and real-world applications that involve searching for semantically similar or related words and phrases. A large proportion of the state-of-the-art word embedding models are trained on pure text data only. Since one of the most important functions of language is to describe the visual world, we argue that effective word embeddings should contain rich visual semantics. Previous work has shown that visual information is important for training effective embedding models. However, due to the lack of large training datasets of the same scale as the pure text datasets, the models are either trained on relatively small datasets (e.g. [13]), or the visual constraints are only applied to a limited number of pre-defined visual concepts (e.g. [21]). Therefore, such work did not fully explore the potential of visual information in learning word embeddings. In this paper, we introduce a large-scale dataset with both text descriptions and images, crawled and collected from Pinterest, one of the largest databases of annotated web images. On Pinterest, users save web images onto their boards (i.e. image collectors) and supply their descriptions of the images. More descriptions are collected when the same images are saved and commented on by other users. Compared to MS COCO (i.e. the image benchmark with sentence descriptions [22]), our dataset is much larger (40 million images with 300 million sentences compared to 0.2 million images and 1 million sentences in the current release of MS COCO) and is at the same scale as the standard pure text training datasets (e.g. Wikipedia Text Corpus).
Some sample images and their descriptions are shown in Figure 1 in Section 3.1. We believe training on this large-scale dataset will lead to richer and better generalized models. We denote this dataset as the Pinterest40M dataset. One challenge for word embedding learning is how to directly evaluate the quality of the model with respect to the tasks (e.g. the task of finding related or similar words and phrases). State-of-the-art neural language models often use the negative log-likelihood of the predicted words as their training loss, which is not always correlated with the effectiveness of the learned embedding. Current evaluation datasets (e.g. [5, 14, 11]) for word similarity or relatedness contain less than a thousand word pairs and cannot comprehensively evaluate all the embeddings of the words appearing in the training set. The challenge of constructing large-scale evaluation datasets is partly due to the difficulty of finding a large number of semantically similar or related word/phrase pairs. In this paper, we utilize user click information collected from Pinterest's image search system to generate millions of these candidate word/phrase pairs. Because user click data are somewhat noisy, we removed inaccurate entries in the dataset by using crowdsourced human annotations. This led to a final gold standard evaluation dataset consisting of 10,674 entries. Equipped with these datasets, we propose, train and evaluate several Recurrent Neural Network (RNN [10]) based models with input of both text descriptions and images. Some of these models directly minimize the Euclidean distance between the visual features and the word embeddings or RNN states, similar to previous work (e.g. [13, 21]). The best performing model is inspired by recent image captioning models [9, 24, 36], with the additional weight-sharing strategy originally proposed in [23] to learn novel visual concepts. This strategy imposes soft constraints between the visual features and all the related words in the sentences. Our experiments validate the effectiveness and importance of incorporating visual information into the learned word embeddings. We make three major contributions: Firstly, we constructed a large-scale multimodal dataset with both text descriptions and images, which is at the same scale as the pure text training sets. Secondly, we collected and labeled a large-scale evaluation dataset for word and phrase similarity and relatedness evaluation. Finally, we proposed and compared several RNN based models for learning multimodal word embeddings effectively. To facilitate research in this area, we will gradually release the datasets proposed in this paper on our project page. 2 Related Work Image-Sentence Description Datasets The image description datasets, such as Flickr8K [15], Flickr30K [37], IAPR-TC12 [12], and MS COCO [22], greatly facilitated the development of models for language and vision tasks such as image captioning. Because it takes a lot of resources to label images with sentence descriptions, the scale of these datasets is relatively small (MS COCO, the largest dataset among them, only contains 1 million sentences while our Pinterest40M dataset has 300 million sentences). In addition, the language used to describe images in these datasets is relatively simple (e.g. MS COCO only has around 10,000 unique words appearing at least 3 times, while there are 335,323 unique words appearing at least 50 times in Pinterest40M).
The Im2Text dataset proposed in [28] adopts a similar data collection process to ours, using 1 million images with 1 million user annotated captions from Flickr. But its scale is still much smaller than our Pinterest40M dataset. Recently, [34] proposed and released the YFCC100M dataset, a large-scale multimedia dataset containing metadata of 100 million Flickr images. It provides rich information about images, such as tags, titles, and locations where they were taken. The users' comments can be obtained by querying the Flickr API. Because of the different functionality and user groups of Flickr and Pinterest, the users' comments on Flickr images are quite different from those on Pinterest (e.g. on Flickr, users tend to comment more on the photography techniques). This dataset provides complementary information to our Pinterest40M dataset. Word Similarity-Relatedness Evaluation The standard benchmarks, such as WordSim-353/WS-Sim [11, 3], MEN [5], and SimLex-999 [14], consist of a few hundred word pairs and their similarity or relatedness scores. The word pairs are composed by asking human subjects to write the first related, or similar, word that comes into their mind when presented with a concept word (e.g. [27, 11]), or by randomly selecting frequent words in a large text corpus and manually searching for useful pairs (e.g. [5]). In this work, we are able to collect a large number of word/phrase pairs with good quality by mining them from the click data of Pinterest's image search system used by millions of users. In addition, because this dataset is collected through a visual search system, it is more suitable for evaluating multimodal embedding models. Another related evaluation is the analogy task proposed in [25]. They ask the model questions like "man is to woman as king is to what?" as their evaluation. But such questions do not directly measure word similarity or relatedness, and cannot cover all the semantic relationships of the millions of words in the dictionary. RNN for Language and Vision Our models are inspired by recent RNN-CNN based image captioning models [9, 24, 36, 16, 6, 18, 23], which can be viewed as a special case of the sequence-to-sequence learning framework [33, 7]. We adopt Gated Recurrent Units (GRUs [7]), a variation of the simple RNN model. Multimodal Word Embedding Models For pure text, one of the most effective approaches to learn word embeddings is to train neural network models to predict a word given its context words in a sentence (i.e. the continuous bag-of-words model [4]) or to predict the context words given the current word (i.e. the skip-gram model [25]). There is a large literature on word embedding models that utilize visual information. One type of method takes a two-step strategy that first extracts text and image features separately and then fuses them together using singular value decomposition [5], stacked autoencoders [31], or even simple concatenation [17]. [13, 21, 19] learn the text and image features jointly by fusing visual or perceptual information into a skip-gram model [25]. However, because of the lack of large-scale multimodal datasets, they only associate visual content with a pre-defined set of nouns (e.g. [21]) or perception domains (e.g. [14]) in the sentences, or focus on abstract scenes (e.g. [19]). By contrast, our best performing model places a soft constraint between visual features and all the words in the sentences by a weight sharing strategy, as shown in Section 4.
3 Datasets We constructed two datasets: one for training our multimodal word embeddings (see Section 3.1) and another one for the evaluation of the learned word embeddings (see Section 3.2). 3.1 Training Dataset Table 1: Scale comparison with other image description benchmarks.
  Flickr8K [15]: 8K images, 40K sentences
  Flickr30K [37]: 30K images, 150K sentences
  IAPR-TC12 [12]: 20K images, 34K sentences
  MS COCO [22]: 200K images, 1M sentences
  Im2Text [28]: 1M images, 1M sentences
  Pinterest40M: 40M images, 300M sentences
Pinterest is one of the largest repositories of Web images. Users commonly tag images with short descriptions and share the images (and descriptions) with others. Since a given image can be shared and tagged by multiple (sometimes thousands of) users, many images have a very rich set of descriptions, making this source of data ideal for training models with both text and image inputs. The dataset is prepared in the following way: first, we crawled the publicly available data on Pinterest to construct our training dataset of more than 40 million images. Each image is associated with an average of 12 sentences, and we removed duplicated sentences as well as short sentences with fewer than 4 words. The duplication detection is conducted by calculating the overlapping word unigram ratios. Some sample images and descriptions are shown in Figure 1. We denote this dataset as the Pinterest40M dataset. Our dataset contains 40 million images with 300 million sentences (around 3 billion words), which is much larger than the previous image description datasets (see Table 1). In addition, because the descriptions are annotated by users who expressed interest in the images, the descriptions in our dataset are more natural and richer than those in the annotated image description datasets. In our dataset, there are 335,323 unique words with a minimum number of occurrences of 50, compared with 10,232 and 65,552 words appearing at least 3 times in the MS COCO and Im2Text datasets respectively. To the best of our knowledge, there is no previous paper that trains a multimodal RNN model on a dataset of such scale. 3.2 Evaluation Datasets This work proposes to use labeled phrase triplets – each triplet is a three-phrase tuple containing phrase A, phrase B and phrase C, where A is considered semantically closer to B than A is to C. At testing time, we compute the distance in the word embedding space between A/B and A/C, and consider a test triplet as positive if d(A,B) < d(A,C). This relative comparison approach is commonly used to evaluate and compare different word embedding models [30]. In order to generate a large number of phrase triplets, we rely on user-click data collected from Pinterest's image search system. In the end, we construct a large-scale evaluation dataset with 9.8 million triplets (see Section 3.2.1), and its cleaned-up gold standard version with 10 thousand triplets (see Section 3.2.2). 3.2.1 The Raw Evaluation Dataset from User Clickthrough Data It is very hard to obtain a large number of semantically similar or related word and phrase pairs. This is one of the challenges for constructing a large-scale word/phrase similarity and relatedness evaluation dataset. We address this challenge by utilizing the user clickthrough data from Pinterest's image search system; see Figure 2 for an illustration. More specifically, given a query from a user (e.g. "hair styles"), the search system returns a list of items, and each item is composed of an image and a set of annotations (i.e. short phrases or words that describe the item).
Please note that the same annotation can appear in multiple items, e.g., "hair tutorial" can describe items related to prom hair styles or ponytails. We derive a matching score for each annotation by aggregating the click frequency of the items containing the annotation. The annotations are then ranked according to the matching scores, and the top ranked annotations are considered as the positive set of phrases or words with respect to the user query. To increase the difficulty of this dataset, we remove the phrases that share common words with the user query from the initial list of positive phrases. E.g. "hair tutorials" will be removed because the word "hair" is contained in the query phrase "hair styles". A stemmer from Python's "stemmer" package is also adopted to find words with the same root (e.g. "cake" and "cakes" are considered as the same word). This pruning step also prevents biasing the evaluation toward methods which measure the similarity between the positive phrase and the query phrase by counting the number of overlapping words between them. In this way, we collected 9,778,508 semantically similar phrase pairs. Previous word similarity/relatedness datasets (e.g. [11, 14]) manually annotated each word pair with an absolute score reflecting how much the words in this pair are semantically related. In the testing stage, the model's predicted similarity scores for the word pairs in the dataset are compared with the groundtruth score list. The Spearman's rank correlation between the two lists is calculated as the score of the model. However, it is often too hard and expensive to label the absolute relatedness score and maintain consistency across all the pairs in a large-scale dataset, even if we average the scores of several annotators. We adopt a simple strategy by composing triplets from the phrase pairs. More specifically, we randomly sample negative phrases from a pool of 1 billion phrases. The negative phrase should not contain any overlapping word (a stemmer is also adopted) with either of the phrases in the original phrase pair. In this way, we construct 9,778,508 triplets with the format of (base phrase, positive phrase, negative phrase). In the evaluation, a model should be able to distinguish the positive phrase from the negative phrase by calculating their similarities with the base phrase in the embedding space. We denote this dataset as the Related Phrase 10M (RP10M) dataset. 3.2.2 The Cleaned-up Gold Standard Dataset Because the raw Related Phrase 10M dataset is built upon user click information, it contains some noisy triplets (e.g. the positive and base phrases are not related, or the negative phrase is strongly related to the base phrase). To create a gold standard dataset, we conduct a cleanup step using the crowdsourcing platform CrowdFlower [1] to remove these inaccurate triplets. A sample question and choices for the crowdsourcing annotators are shown in Figure 3. The positive and negative phrases in a triplet are randomly given as choice "A" or "B". The annotators are required to choose which phrase is more related to the base phrase, or whether they are both related or unrelated. To help the annotators understand the meaning of the phrases, they can click on the phrases to get Google search results. We annotate 21,000 triplets randomly sampled from the raw Related Phrase 10M dataset. Three to five annotators are assigned to each question.
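Before turning to the cleanup acceptance criterion below, the raw-triplet construction just described (word-overlap pruning with stemming, plus negative sampling) can be sketched in a few lines. This is a hypothetical reconstruction, not the authors' code; `stem` stands in for any stemmer such as the one in Python's "stemmer" package, and the helper names are invented.

```python
import random

def shares_root(phrase_a, phrase_b, stem):
    """True if the two phrases share any word root (after stemming)."""
    return bool({stem(w) for w in phrase_a.split()} &
                {stem(w) for w in phrase_b.split()})

def build_triplets(positive_pairs, phrase_pool, stem, seed=0):
    """Turn (base, positive) pairs mined from click data into
    (base, positive, negative) triplets, as in Section 3.2.1."""
    rng = random.Random(seed)
    triplets = []
    for base, pos in positive_pairs:
        # Pruning step: drop positives sharing a word root with the query.
        if shares_root(base, pos, stem):
            continue
        # Negative sampling: the negative must overlap neither phrase.
        while True:
            neg = rng.choice(phrase_pool)
            if not shares_root(base, neg, stem) and not shares_root(pos, neg, stem):
                break
        triplets.append((base, pos, neg))
    return triplets
```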
A triplet is accepted and added to the final cleaned-up dataset only if more than 50% of the annotators agree with the original positive and negative labels of the queries (note that they do not know which one is positive in the annotation process). In practice, for 70% of the selected phrase triplets, more than 3 annotators agree. This leads to a gold standard dataset with 10,674 triplets. We denote this dataset as the Gold Phrase Query 10K (Gold RP10K) dataset. This dataset is very challenging and a successful model should be able to capture a variety of semantic relationships between words or phrases. Some sample triplets are shown in Table 2. 4 The Multimodal Word Embedding Models We propose three RNN-CNN based models to learn the multimodal word embeddings, as illustrated in Figure 4. All of the models have two parts in common: a Convolutional Neural Network (CNN [20]) to extract visual representations and a Recurrent Neural Network (RNN [10]) to model sentences. For the CNN part, we resize the images to 224 × 224, and adopt the 16-layer VGGNet [32] as the visual feature extractor. The binarized activations (i.e. 4096-dimensional binary vectors) of the layer before its SoftMax layer are used as the image features, and are mapped to the same space as the state of the RNN (Models A, B) or the word embeddings (Model C), depending on the structure of the model, by a fully connected layer and a Rectified Linear Unit function (ReLU [26], ReLU(x) = max(0, x)). For the RNN part, we use a Gated Recurrent Unit (GRU [7]), a recently very popular RNN structure, with a 512-dimensional state cell. The state of the GRU h_t for each word with index t in a sentence can be represented as:
r_t = σ(W_r [e_t, h_{t−1}] + b_r)   (1)
u_t = σ(W_u [e_t, h_{t−1}] + b_u)   (2)
c_t = tanh(W_c [e_t, r_t ⊙ h_{t−1}] + b_c)   (3)
h_t = u_t ⊙ h_{t−1} + (1 − u_t) ⊙ c_t   (4)
where ⊙ represents the element-wise product, σ(.) is the sigmoid function, e_t denotes the word embedding for the word w_t, and r_t and u_t are the reset gate and update gate respectively. The inputs of the GRU are the words in a sentence, and it is trained to predict the next word given the previous words. We add all the words that appear more than 50 times in the Pinterest40M dataset into the dictionary. The final vocabulary size is 335,323. Because the vocabulary size is very large, we adopt the sampled SoftMax loss [8] to accelerate the training. For each training step, we sample 1024 negative words according to their log frequency in the training data and calculate the sampled SoftMax loss for the positive word. This sampled SoftMax loss function of the RNN part is adopted in Models A, B and C. Minimizing this loss function can be considered as approximately maximizing the probability of the sentences in the training set. As illustrated in Figure 4, Models A, B and C have different ways to fuse the visual information into the word embeddings. Model A is inspired by the CNN-RNN based image captioning models [36, 23]. We map the visual representation into the same space as the GRU states to initialize them (i.e. set h_0 = ReLU(W_I f_I)). Since the visual information is fed in after the embedding layer, it is usually hard to ensure that this information is fused into the learned embeddings. We adopt a transposed weight sharing strategy proposed in [23] that was originally used to enhance the models' ability to learn novel visual concepts. More specifically, we share the weight matrix of the SoftMax layer U_M with the matrix U_w of the word embedding layer in a transposed manner.
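To make the transposed weight sharing concrete, here is a minimal PyTorch-style sketch. It is a hypothetical reconstruction rather than the authors' implementation: the sampled SoftMax is replaced by a full softmax for brevity, and the class and attribute names are invented; only U_w and W_I correspond to symbols in the text.

```python
import torch
import torch.nn as nn

class TiedGRULanguageModel(nn.Module):
    """Sketch of Model A: image-initialized GRU with tied output weights."""

    def __init__(self, vocab_size=335323, embed_dim=512, img_dim=4096):
        super().__init__()
        # U_w: word embedding matrix, shape (vocab_size, embed_dim).
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, embed_dim, batch_first=True)
        # W_I: maps image features into the GRU state space.
        self.img_proj = nn.Linear(img_dim, embed_dim)
        self.out_bias = nn.Parameter(torch.zeros(vocab_size))

    def forward(self, word_ids, img_feat):
        # h_0 = ReLU(W_I f_I): the image initializes the GRU state.
        h0 = torch.relu(self.img_proj(img_feat)).unsqueeze(0)
        states, _ = self.gru(self.embed(word_ids), h0)
        # Transposed weight sharing: the output layer reuses U_w^T,
        # so there is no separate softmax matrix U_M.
        return states @ self.embed.weight.t() + self.out_bias
```

The key line is the reuse of `self.embed.weight.t()` in place of a separate output matrix, so gradients from the next-word prediction loss update the word embedding matrix U_w directly.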
In this way, U_w^T is learned to decode the visual information and is enforced to incorporate this information into the word embedding matrix U_w. In the experiments, we show that this strategy significantly improves the performance of the trained embeddings. Model A is trained by maximizing the log likelihood of the next words given the previous words conditioned on the visual representations, similar to the image captioning models. Compared to Model A, we adopt a more direct way to utilize the visual information for Model B and Model C. We add direct supervision on the final state of the GRU (Model B) or the word embeddings (Model C) by adding new loss terms, in addition to the negative log-likelihood loss from the sampled SoftMax layer:
L_state = (1/n) Σ_s ‖ h_{l_s} − ReLU(W_I f_{I_s}) ‖   (5)
L_emb = (1/n) Σ_s (1/l_s) Σ_t ‖ e_t − ReLU(W_I f_{I_s}) ‖   (6)
where l_s is the length of the sentence s in a mini-batch with n sentences, and Eqn. 5 and Eqn. 6 denote the additional losses for Models B and C respectively. The added loss term is balanced against the negative log-likelihood loss from the sampled SoftMax layer by a weight hyperparameter λ. 5 Experiments 5.1 Training Details We convert the words in all sentences of the Pinterest40M dataset to lower case. All the non-alphanumeric characters are removed. A start sign 〈bos〉 and an end sign 〈eos〉 are added at the beginning and the end of all the sentences respectively. We use the stochastic gradient descent method with a mini-batch size of 256 sentences and a learning rate of 1.0. The gradient is clipped to 10.0. We train the models until the loss does not decrease on a small validation set with 10,000 images and their descriptions. The models scan the dataset for roughly 5 epochs. The bias terms of the gates (i.e. b_r and b_u in Eqn. 1 and 2) in the GRU layer are initialized to 1.0. 5.2 Evaluation Details We use the trained embedding models to extract embeddings for all the words in a phrase and aggregate them by average pooling to get the phrase representation. We then check whether the cosine distance of the (base phrase, positive phrase) pair is smaller than that of the (base phrase, negative phrase) pair. The average precision over all the triplets in the raw Related Phrases 10M (RP10M) dataset and the gold standard Related Phrases 10K (Gold RP10K) dataset is reported. 5.3 Results on the Gold RP10K and RP10M datasets We evaluate and compare our Models A, B, C, their variants and several strong baselines on our RP10M and Gold RP10K datasets. The results are shown in Table 3. "Pure Text RNN" denotes the baseline model without input of the visual features, trained on Pinterest40M. It has the same model structure as our Model A except that we initialize the hidden state of the GRU with a zero vector. "Model A without weight sharing" denotes a variant of Model A where the weight matrix U_w of the word embedding layer is not shared with the weight matrix U_M of the sampled SoftMax layer (see Figure 4 for details). (We also tried to adopt the weight sharing strategy in Models B and C, but the performance is very similar to the non-weight-sharing version.) "Word2Vec-GoogleNews" denotes the state-of-the-art off-the-shelf word embedding models of Word2Vec [25] trained on the Google-News data (about 300 billion words). "GloVe-Twitter" denotes the GloVe model [29] trained on the Twitter data (about 27 billion words). They are pure text models, but trained on very large datasets (our models are trained on only 3 billion words).
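The evaluation protocol of Section 5.2 (average-pooled phrase vectors compared by cosine distance over triplets) amounts to the following short sketch. This is an illustrative reconstruction, not the authors' code; `embeddings` is assumed to be a dictionary mapping a word to its NumPy vector, and the quantity returned is what the text reports as the average precision over triplets.

```python
import numpy as np

def phrase_vector(phrase, embeddings):
    """Average-pool the word embeddings of a phrase (Section 5.2)."""
    vecs = [embeddings[w] for w in phrase.lower().split() if w in embeddings]
    return np.mean(vecs, axis=0)

def cosine_distance(a, b):
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def triplet_accuracy(triplets, embeddings):
    """Fraction of (base, positive, negative) triplets for which
    d(base, positive) < d(base, negative) in the embedding space."""
    correct = 0
    for base, pos, neg in triplets:
        b = phrase_vector(base, embeddings)
        d_pos = cosine_distance(b, phrase_vector(pos, embeddings))
        d_neg = cosine_distance(b, phrase_vector(neg, embeddings))
        correct += d_pos < d_neg
    return correct / len(triplets)
```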
Comparing these models, we can draw the following conclusions: • Under our evaluation criteria, visual information significantly helps the learning of word embeddings when the model successfully fuses the visual and text information together. E.g., our Model A outperforms the Word2Vec model by 9.5% and 9.2% on the Gold RP10K and RP10M datasets respectively. Model C also outperforms the pure text RNN baselines. • The weight sharing strategy is crucial to enhance the ability of Model A to fuse visual information into the learned embeddings. E.g., our Model A outperforms the baseline without this sharing strategy by 7.0% and 4.4% on Gold RP10K and RP10M respectively. • Model A performs the best among all the three models. This shows that the soft supervision imposed by the weight-sharing strategy is more effective than direct supervision. This is not surprising since not all the words are semantically related to the content of the image, and a direct and hard constraint might hinder the learning of the embeddings for these words. • Model B does not perform very well. The reason might be that most of the sentences have more than 8 words and the gradient from the final state loss term L_state cannot be easily passed to the embeddings of all the words in the sentence. • All the models trained on the Pinterest40M dataset perform better than the skip-gram model [25] trained on a much larger dataset of 300 billion words. 6 Discussion In this paper, we investigate the task of training and evaluating word embedding models. We introduce Pinterest40M, the largest image dataset with sentence descriptions to the best of our knowledge, and construct two evaluation datasets (i.e. RP10M and Gold RP10K) for word/phrase similarity and relatedness evaluation. Based on these datasets, we propose several CNN-RNN based multimodal models to learn effective word embeddings. Experiments show that visual information significantly helps the training of word embeddings, and our proposed model successfully incorporates such information into the learned embeddings. There are many possible extensions of the proposed model and the dataset. E.g., we plan to separate semantically similar and related phrase pairs in the Gold RP10K dataset to better understand the performance of the methods, similar to [3]. We will also give relatedness or similarity scores to the pairs (base phrase, positive phrase) to enable the same evaluation strategy as previous datasets (e.g. [5, 11]). Finally, we plan to propose better models for phrase representations. Acknowledgement We are grateful to James Rubinstein for setting up the crowdsourcing experiments for dataset cleanup. We thank Veronica Mapes, Pawel Garbacki, and Leon Wong for discussions and support. We appreciate the comments and suggestions from anonymous reviewers of NIPS 2016. This work is partly supported by the Center for Brains, Minds and Machines NSF STC award CCF-1231216 and the Army Research Office ARO 62250-CS.
1. How does the proposed method fuse visual information into word embeddings? 2. What is the significance of proving that visual information improves performance on semantic similarity? 3. What are the concerns regarding the conclusion and comparison with other models? 4. Can the author provide more details about the baseline model without visual information?
Review
Review The paper provides a method to fuse visual information into word embeddings, and it tries to prove that the visual information is able to improve the performance of the word embeddings in terms of semantic similarity. 1. The conclusion in lines 281-282 is not fair, since T. Mikolov's method and the authors' model are trained on different datasets. It is not possible to tell whether the performance is improved by the authors' model or just by the dataset. Is it possible to train T. Mikolov's model with your dataset? 2. Could you describe a little more about your baseline (Model A without visual) in lines 259-260? Otherwise, it is not clear whether the performance improvement comes from the visual information of the image or just the relationship between the descriptions and images.
NIPS
1. What is the main contribution of the paper regarding multimodal trained word embeddings? 2. What are the strengths and weaknesses of the proposed approach compared to prior works, particularly in terms of data quality and quantity? 3. How does the reviewer assess the effectiveness of the three multi-modal trained word embedding models, especially the best model with weight sharing strategy? 4. What are the limitations of the paper regarding its focus on image-language models and their applications in captioning and bidirectional retrieval? 5. Do you have any questions regarding the image extraction pipeline, specifically on using VGG features and multiple crops?
Review
Review Pinterest is crawled to generate a sentence/image aligned corpus of 300M sentences/40M images. This data is used to train joint image-language models in the spirit of image caption training, with the intent of learning word embeddings with visual information. Another corpus is collected which contains 10M semantically similar phrase triplets of the form (base phrase, positive phrase, negative phrase). 22k randomly selected samples from this corpus are cleaned up, resulting in 10k triplets used for evaluation of the multi-modal trained word embeddings: given three phrases 1, 2, 3, where it is known that 1 is semantically closer to 2 than to 3, the system must declare d(1,2) < d(1,3) in word embedding space to be correct. Three multi-modal trained word embedding models are investigated, with all 3 using VGG based image features and a GRU to model the language. Two models augment the base loss of maximizing the sentence probability with MSE losses from (1) the embedded GRU hidden state to the embedded image and (2) the embedded words to the embedded image. The best model uses only the base loss of sentence probability and a weight sharing strategy between the word embedding and softmax output. This model shows a strong advantage of multimodal-trained word embeddings vs. text-only trained word embeddings. The collection of multi-modal data from a new source, Pinterest, and the large semantic relatedness dataset is very important to the community. Comparisons are made to MSCOCO/Flickr30K/Flickr8K and SBU Im2Text based on corpus size, but there is no discussion of data quality in the comparison. MSCOCO and Flickr data are carefully labelled for tight alignment between what is in the image and the sentences, whereas Im2Text is not, using the raw Flickr sentences from the people who posted the images. Because of this we may train and evaluate caption and bidirectional retrieval systems reliably on MSCOCO and Flickr, but this is not the case with Im2Text; often the data contains sentences that do not talk about the image. The sample images of Figure 1 in the paper show that Pinterest sentences may include a lot of information outside of the context of the image, so it is not clear how generally useful this new set is. Certainly you can train better word embeddings using the visual information to measure semantic relatedness, so there is some alignment, but this might be more forgiving than the difficult tasks of captioning and multi-modal retrieval. More should be said about the image extraction pipeline: since VGG is used at the level before the softmax, did you use a single crop or multiple crops averaged? Also, regarding the inclusion of MSE objectives in Models B and C: it seems reasonable to force sentence embeddings, either from the hidden state or the word embeddings, to match the embedded image, but it is not working; this probably requires more investigation, and also more details of the RNN/GRU used. Was this a single-layer or two-layer GRU? If single-layer, does it make sense that the hidden state responsible for generating the words should also be able to converge to the embedded image?
NIPS
Title Efficient Algorithms for Non-convex Isotonic Regression through Submodular Optimization

Abstract We consider the minimization of submodular functions subject to ordering constraints. We show that this potentially non-convex optimization problem can be cast as a convex optimization problem on a space of uni-dimensional measures, with ordering constraints corresponding to first-order stochastic dominance. We propose new discretization schemes that lead to simple and efficient algorithms based on zero-th, first, or higher order oracles; these algorithms also lead to improvements without isotonic constraints. Finally, our experiments show that non-convex loss functions can be much more robust to outliers for isotonic regression, while still being solvable in polynomial time.

1 Introduction

Shape constraints such as ordering constraints appear everywhere in estimation problems in machine learning, signal processing and statistics. They typically correspond to prior knowledge, and are imposed for the interpretability of models, or to allow non-parametric estimation with improved convergence rates [16, 8]. In this paper, we focus on imposing ordering constraints onto an estimation problem, a setting typically referred to as isotonic regression [4, 26, 22], and we aim to generalize the set of problems for which efficient (i.e., polynomial-time) algorithms exist. We thus focus on the following optimization problem:

$$\min_{x \in [0,1]^n} H(x) \quad \text{such that} \quad \forall (i,j) \in E, \; x_i \geq x_j, \qquad (1)$$

where $E \subset \{1,\dots,n\}^2$ represents the set of constraints, which form a directed acyclic graph. For simplicity, we restrict $x$ to the set $[0,1]^n$, but our results extend to general products of (potentially unbounded) intervals. As convex constraints, isotonic constraints are well adapted to estimation problems formulated as convex optimization problems where $H$ is convex, such as for linear supervised learning problems, with many efficient algorithms for separable convex problems [4, 26, 22, 30], which can thus be used as inner loops in more general convex problems by using projected gradient methods (see, e.g., [3]). In this paper, we show that another form of structure can be leveraged. We will assume that $H$ is submodular, which is equivalent, when twice continuously differentiable, to having nonpositive cross second-order derivatives. This notably includes all (potentially non-convex) separable functions (i.e., sums of functions that depend on single variables), but also many other examples (see Section 2). Minimizing submodular functions on continuous domains has recently been shown to be equivalent to a convex optimization problem on a space of uni-dimensional measures [2], and given that the functions $x \mapsto \lambda (x_j - x_i)_+$ are submodular for any $\lambda > 0$, it is natural that by letting $\lambda$ tend to $+\infty$ we recover as well a convex optimization problem; the main contribution of this paper is to provide a simple framework based on stochastic dominance, for which we design efficient algorithms based on simple oracles on the function $H$ (typically access to function values and derivatives). In order to obtain such algorithms, we go significantly beyond [2] by introducing novel discretization algorithms that also provide improvements without any isotonic constraints.
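To make the setup concrete, here is a minimal sketch (not from the paper; all names are illustrative) of a toy instance of problem (1), together with a finite-difference check of the cross-derivative characterization of submodularity mentioned above.

```python
import numpy as np

n = 4
E = [(0, 1), (1, 2), (2, 3)]  # a chain: x_0 >= x_1 >= x_2 >= x_3

def H(x):
    # sum of a separable non-convex part and a concave function of the
    # (positive) sum of coordinates; both are submodular, and the class
    # is closed under addition.
    return np.sum(np.log1p((x - 0.5) ** 2)) + np.sqrt(1.0 + x.sum())

def cross_derivative(H, x, i, j, h=1e-4):
    """Finite-difference estimate of d^2 H / (dx_i dx_j) at x; for twice
    continuously differentiable H, submodularity is equivalent to this
    being <= 0 for all i != j."""
    ei = np.zeros(len(x)); ei[i] = h
    ej = np.zeros(len(x)); ej[j] = h
    return (H(x + ei + ej) - H(x + ei) - H(x + ej) + H(x)) / h ** 2

x0 = np.full(n, 0.3)
print(cross_derivative(H, x0, 0, 1))  # negative, as expected
```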
More precisely, we make the following contributions:

– We show in Section 3 that minimizing a submodular function with isotonic constraints can be cast as a convex optimization problem on a space of uni-dimensional measures, with isotonic constraints corresponding to first-order stochastic dominance.
– On top of the naive discretization schemes presented in Section 4, we propose in Section 5 new discretization schemes that lead to simple and efficient algorithms based on zero-th, first, or higher order oracles. They go from requiring $O(1/\varepsilon^3) = O(1/\varepsilon^{2+1})$ function evaluations to reach a precision $\varepsilon$, to $O(1/\varepsilon^{5/2}) = O(1/\varepsilon^{2+1/2})$ and $O(1/\varepsilon^{7/3}) = O(1/\varepsilon^{2+1/3})$.
– Our experiments in Section 6 show that non-convex loss functions can be much more robust to outliers for isotonic regression.

2 Submodular Analysis in Continuous Domains

In this section, we review the framework of [2] that shows how to minimize submodular functions using convex optimization.

Definition. Throughout this paper, we consider a continuous function $H : [0,1]^n \to \mathbb{R}$. The function $H$ is said to be submodular if and only if [21, 29]:

$$\forall (x,y) \in [0,1]^n \times [0,1]^n, \quad H(x) + H(y) \geq H(\min\{x,y\}) + H(\max\{x,y\}), \qquad (2)$$

where the min and max operations are applied component-wise. If $H$ is continuously twice differentiable, then this is equivalent to $\frac{\partial^2 H}{\partial x_i \partial x_j}(x) \leq 0$ for any $i \neq j$ and $x \in [0,1]^n$ [29]. The cone of submodular functions on $[0,1]^n$ is invariant by marginal strictly increasing transformations, and includes all functions that depend on a single variable (which play the role of linear functions for convex functions), which we refer to as separable functions.

Examples. The classical examples are: (a) any separable function, (b) convex functions of the difference of two components, (c) concave functions of a positive linear combination, (d) negative log-densities of multivariate totally positive distributions [17]. See Section 6 for a concrete example.

Extension on a space of measures. We consider the convex set $\mathcal{P}([0,1])$ of Radon probability measures [24] on $[0,1]$, which is the closure (for the weak topology) of the convex hull of all Dirac measures. In order to get an extension, we look for a function defined on the set of products of probability measures $\mu \in \mathcal{P}([0,1])^n$, such that if all $\mu_i$, $i = 1,\dots,n$, are Dirac measures at points $x_i \in [0,1]$, then we have a function value equal to $H(x_1,\dots,x_n)$. Note that $\mathcal{P}([0,1])^n$ is different from $\mathcal{P}([0,1]^n)$, which is the set of probability measures on $[0,1]^n$. For a probability distribution $\mu_i \in \mathcal{P}([0,1])$ defined on $[0,1]$, we can define the (reversed) cumulative distribution function $F_{\mu_i} : [0,1] \to [0,1]$ as $F_{\mu_i}(x_i) = \mu_i([x_i, 1])$. This is a non-increasing left-continuous function from $[0,1]$ to $[0,1]$, such that $F_{\mu_i}(0) = 1$ and $F_{\mu_i}(1) = \mu_i(\{1\})$. See illustrations in the left plot of Figure 1. We can then define the "inverse" cumulative function from $[0,1]$ to $[0,1]$ as $F_{\mu_i}^{-1}(t_i) = \sup\{x_i \in [0,1],\ F_{\mu_i}(x_i) \geq t_i\}$. The function $F_{\mu_i}^{-1}$ is non-increasing and right-continuous, and such that $F_{\mu_i}^{-1}(1) = \min \operatorname{supp}(\mu_i)$ and $F_{\mu_i}^{-1}(0) = 1$. Moreover, we have $F_{\mu_i}(x_i) \geq t_i \Leftrightarrow F_{\mu_i}^{-1}(t_i) \geq x_i$. The extension from $[0,1]^n$ to the set of product probability measures is obtained by considering a single threshold $t$ applied to all $n$ cumulative distribution functions, that is:

$$\forall \mu \in \mathcal{P}([0,1])^n, \quad h(\mu_1,\dots,\mu_n) = \int_0^1 H\big[F_{\mu_1}^{-1}(t),\dots,F_{\mu_n}^{-1}(t)\big]\, dt. \qquad (3)$$
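For measures supported on a common finite grid, the inverse cumulative functions are piecewise constant in $t$, so the extension (3) can be evaluated exactly; a minimal sketch (illustrative names, not the paper's code):

```python
import numpy as np

def extension_h(H, probs, grid):
    """Exact evaluation of the extension (3) when each mu_i is supported
    on a common finite increasing grid. probs: (n, m) rows of probability
    vectors; grid: (m,) support points in [0, 1]."""
    n, m = probs.shape
    # reversed CDFs F_i(grid[j]) = mu_i([grid[j], 1]) (tail sums)
    F = np.cumsum(probs[:, ::-1], axis=1)[:, ::-1]
    # between consecutive breakpoints in t, all inverse CDFs are constant
    ts = np.unique(np.concatenate([F.ravel(), [1.0]]))
    ts = ts[ts > 0.0]
    total, t_prev = 0.0, 0.0
    for t in ts:
        t_mid = 0.5 * (t_prev + t)
        # F_i^{-1}(t) = sup{x : F_i(x) >= t}: largest grid point with F >= t
        x = np.array([grid[np.max(np.nonzero(F[i] >= t_mid)[0])]
                      for i in range(n)])
        total += (t - t_prev) * H(x)
        t_prev = t
    return total

# sanity check: Dirac measures recover H itself
H = lambda x: np.sqrt(1.0 + x.sum())
grid = np.linspace(0.0, 1.0, 5)
probs = np.zeros((3, 5)); probs[0, 4] = probs[1, 2] = probs[2, 0] = 1.0
print(extension_h(H, probs, grid), H(grid[[4, 2, 0]]))  # equal
```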
We have the following properties when $H$ is submodular: (a) it is an extension, that is, if for all $i$, $\mu_i$ is a Dirac at $x_i$, then $h(\mu) = H(x)$; (b) it is convex; (c) minimizing $h$ on $\mathcal{P}([0,1])^n$ and minimizing $H$ on $[0,1]^n$ are equivalent; moreover, the minimal values are equal and $\mu$ is a minimizer if and only if $\big[F_{\mu_1}^{-1}(t),\dots,F_{\mu_n}^{-1}(t)\big]$ is a minimizer of $H$ for almost all $t \in [0,1]$. Thus, submodular minimization is equivalent to a convex optimization problem in a space of uni-dimensional measures. Note that the extension is defined on all tuples of measures $\mu = (\mu_1,\dots,\mu_n)$, but it can equivalently be defined through non-increasing functions from $[0,1]$ to $[0,1]$, e.g., the representation in terms of cumulative distribution functions $F_{\mu_i}$ defined above (this representation will be used in Section 4, where algorithms based on the discretization of the resulting convex problem are discussed).

3 Isotonic Constraints and Stochastic Dominance

In this paper, we consider the following problem:

$$\inf_{x \in [0,1]^n} H(x) \quad \text{such that} \quad \forall (i,j) \in E, \; x_i \geq x_j, \qquad (4)$$

where $E$ is the edge set of a directed acyclic graph on $\{1,\dots,n\}$ and $H$ is submodular. We denote by $X \subset \mathbb{R}^n$ (not necessarily a subset of $[0,1]^n$) the set of $x \in \mathbb{R}^n$ satisfying the isotonic constraints. In order to define an extension in a space of measures, we consider a specific order on measures on $[0,1]$, namely first-order stochastic dominance [20], defined as follows. Given two distributions $\mu$ and $\nu$ on $[0,1]$, with (inverse) cumulative distribution functions $F_\mu$ and $F_\nu$, we have $\mu \succcurlyeq \nu$ if and only if $\forall x \in [0,1]$, $F_\mu(x) \geq F_\nu(x)$, or equivalently, $\forall t \in [0,1]$, $F_\mu^{-1}(t) \geq F_\nu^{-1}(t)$. As shown in the right plot of Figure 1, the densities may still overlap. An equivalent characterization [19, 9] is the existence of a joint distribution of a vector $(X, X') \in \mathbb{R}^2$ with marginals $\mu(x)$ and $\nu(x')$ and such that $X \geq X'$ almost surely.¹ We now prove the main proposition of the paper:

Proposition 1 We consider the convex minimization problem:

$$\inf_{\mu \in \mathcal{P}([0,1])^n} h(\mu) \quad \text{such that} \quad \forall (i,j) \in E, \; \mu_i \succcurlyeq \mu_j. \qquad (5)$$

Problems in Eq. (4) and Eq. (5) have the same objective values. Moreover, $\mu$ is a minimizer of Eq. (5) if and only if $F_\mu^{-1}(t)$ is a minimizer of $H$ in Eq. (4) for almost all $t \in [0,1]$.

Proof We denote by $\mathcal{M}$ the set of $\mu \in \mathcal{P}([0,1])^n$ satisfying the stochastic ordering constraints. For any $x \in [0,1]^n$ that satisfies the constraints in Eq. (4), i.e., $x \in X \cap [0,1]^n$, the associated Dirac measures satisfy the constraints in Eq. (5). Therefore, the objective value $M$ of Eq. (4) is greater than or equal to the objective value $M'$ of Eq. (5). Given a minimizer $\mu$ for the convex problem in Eq. (5), we have: $M \geq M' = h(\mu) = \int_0^1 H\big[F_{\mu_1}^{-1}(t),\dots,F_{\mu_n}^{-1}(t)\big]\,dt \geq \int_0^1 M\,dt = M$. This shows the proposition by studying the equality cases above.

¹Such a joint distribution may be built as the distribution of $(F_\mu^{-1}(T), F_\nu^{-1}(T))$, where $T$ is uniformly distributed in $[0,1]$.

Alternatively, we could add the penalty term $\lambda \sum_{(i,j) \in E} \int_{-\infty}^{+\infty} \big(F_{\mu_j}(z) - F_{\mu_i}(z)\big)_+\, dz$, which corresponds to the unconstrained minimization of $H(x) + \lambda \sum_{(i,j) \in E} (x_j - x_i)_+$. For $\lambda > 0$ big enough², this is equivalent to the problem above, but with a submodular function which has a large Lipschitz constant (and is thus harder to optimize with the iterative methods presented below).
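First-order stochastic dominance between two measures on a common grid reduces to a pointwise comparison of tail sums; a small sketch (illustrative, not the paper's code):

```python
import numpy as np

def dominates(p_mu, p_nu, tol=1e-12):
    """Check mu >= nu in first-order stochastic dominance for two
    probability vectors on the same increasing grid: the reversed CDFs
    F(x) = mu([x, 1]) must satisfy F_mu >= F_nu pointwise."""
    F_mu = np.cumsum(np.asarray(p_mu)[::-1])[::-1]  # tail sums
    F_nu = np.cumsum(np.asarray(p_nu)[::-1])[::-1]
    return bool(np.all(F_mu >= F_nu - tol))

# a measure shifted to the right dominates the original one
print(dominates([0.0, 0.5, 0.5], [0.5, 0.5, 0.0]))  # True
print(dominates([0.5, 0.5, 0.0], [0.0, 0.5, 0.5]))  # False
```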
4 Discretization algorithms

Prop. 1 shows that the isotonic regression problem with a submodular cost can be cast as a convex optimization problem; however, this is achieved in a space of measures, which cannot be handled directly computationally in polynomial time. Following [2], we consider a polynomial-time and -space discretization scheme of each interval $[0,1]$ (and not of $[0,1]^n$), but we propose in Section 5 a significant improvement that allows the number of discrete points to be reduced substantially. All pseudo-codes for the algorithms are available in Appendix B.

4.1 Review of submodular optimization in discrete domains

All our algorithms will end up approximately minimizing a submodular function $F$ on $\{0,\dots,k-1\}^n$, that is, one which satisfies Eq. (2). Isotonic constraints will be added in Section 4.2. Following [2], this can be formulated as minimizing a convex function $f_\#$ on the set of $\rho \in [0,1]^{n \times (k-1)}$ such that for each $i \in \{1,\dots,n\}$, $(\rho_{ij})_{j \in \{1,\dots,k-1\}}$ is a non-increasing sequence (we denote by $S$ this set of constraints), corresponding to the cumulative distribution function. For any feasible $\rho$, a subgradient of $f_\#$ may be computed by sorting all $n(k-1)$ elements of the matrix $\rho$ and computing at most $n(k-1)$ values of $F$. An approximate minimizer of $F$ (which exactly inherits approximation properties from the approximate optimality of $\rho$) is then obtained by selecting the minimum value of $F$ in the computation of the subgradient. Projected subgradient methods can then be used, and if $\Delta F$ is the largest absolute difference in values of $F$ when a single variable is changed by $\pm 1$, we obtain an $\varepsilon$-minimizer (for function values) after $t$ iterations, with $\varepsilon \leq nk\,\Delta F/\sqrt{t}$. The projection step is composed of $n$ simple separable quadratic isotonic regressions with chain constraints in dimension $k$, which can be solved easily in $O(nk)$ using the pool-adjacent-violators algorithm [4]. Computing a subgradient requires a sorting operation, which is thus $O(nk \log(nk))$. See more details in [2]. Alternatively, we can minimize the strongly convex $f_\#(\rho) + \frac{1}{2}\|\rho\|_F^2$ on the set of $\rho \in \mathbb{R}^{n \times (k-1)}$ such that for each $i$, $(\rho_{ij})_j$ is a non-increasing sequence, that is, $\rho \in S$ (the constraints that $\rho_{ij} \in [0,1]$ are dropped). We then get a minimizer $z$ of $F$ by looking, for each $i \in \{1,\dots,n\}$, at the largest $j \in \{1,\dots,k-1\}$ such that $\rho_{ij} \geq 0$; we then take $z_i = j$ (and if no such $j$ exists, $z_i = 0$). A gap of $\varepsilon$ in the problem above leads to a gap of $\sqrt{\varepsilon n k}$ for the original problem (see more details in [2]). The subgradient method in the primal, or the Frank–Wolfe algorithm in the dual, may be used for this problem. We obtain an $\varepsilon$-minimizer (for function values) after $t$ iterations, with $\varepsilon \leq \Delta F/t$, which leads, for the original submodular minimization problem, to the same optimality guarantees as above, but with a faster algorithm in practice. See the detailed computations and comparisons in [2].

4.2 Naive discretization scheme

Following [2], we simply discretize $[0,1]$ by selecting the $k$ values $\frac{i}{k-1}$ or $\frac{2i+1}{2k}$, for $i \in \{0,\dots,k-1\}$. If the function $H : [0,1]^n \to \mathbb{R}$ is $L_1$-Lipschitz-continuous with respect to the $\ell_1$-norm, that is, $|H(x) - H(x')| \leq L_1 \|x - x'\|_1$, the function $F$ is $(L_1/k)$-Lipschitz-continuous with respect to the $\ell_1$-norm (and thus we have $\Delta F \leq L_1/k$ above). Moreover, if $F$ is minimized up to $\varepsilon$, $H$ is optimized up to $\varepsilon + nL_1/k$. In order to take into account the isotonic constraints, we simply minimize with respect to $\rho \in [0,1]^{n \times (k-1)} \cap S$, with the additional constraint that for all $j \in \{1,\dots,k-1\}$, $\forall (a,b) \in E$, $\rho_{a,j} \geq \rho_{b,j}$. This corresponds to an additional constraint set $T \subset \mathbb{R}^{n \times (k-1)}$.
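The projection onto $S$ used above consists of $n$ independent least-squares fits under chain constraints; a minimal sketch of the pool-adjacent-violators algorithm (illustrative, not the paper's code):

```python
import numpy as np

def pav_nonincreasing(y):
    """Least-squares projection of y onto non-increasing sequences via
    pool-adjacent-violators, in linear time (amortized)."""
    # project -y onto non-decreasing sequences, then negate back
    vals, counts = [], []
    for v in -np.asarray(y, dtype=float):
        vals.append(v); counts.append(1)
        # merge adjacent blocks while they violate monotonicity
        while len(vals) > 1 and vals[-2] > vals[-1]:
            c = counts[-2] + counts[-1]
            merged = (counts[-2] * vals[-2] + counts[-1] * vals[-1]) / c
            vals[-2:] = [merged]; counts[-2:] = [c]
    return -np.repeat(vals, counts)

def project_S(rho):
    """Row-wise projection onto S (each row non-increasing), followed by
    clipping to [0, 1], as in the box-constraint handling of Section 4.4."""
    return np.clip(np.apply_along_axis(pav_nonincreasing, 1, rho), 0.0, 1.0)

print(pav_nonincreasing([3.0, 1.0, 2.0, 0.0]))  # [3.  1.5 1.5 0. ]
```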
²A short calculation shows that when $H$ is differentiable, the first-order optimality condition (which is only necessary here) implies that if $\lambda$ is strictly larger than $n$ times the largest possible partial first-order derivative of $H$, the isotonic constraints have to be satisfied.

Following Section 4.1, we can either choose to solve the convex problem $\min_{\rho \in [0,1]^{n \times (k-1)} \cap S \cap T} f_\#(\rho)$, or the strongly convex problem $\min_{\rho \in S \cap T} f_\#(\rho) + \frac{1}{2}\|\rho\|_F^2$. In the two situations, after $t$ iterations, that is, $tnk$ accesses to values of $H$, we get a constrained minimizer of $H$ with approximation guarantee $nL_1/k + nL_1/\sqrt{t}$. Thus, in order to get a precision $\varepsilon$, it suffices to select $k \geq 2nL_1/\varepsilon$ and $t \geq 4n^2L_1^2/\varepsilon^2$, leading to an overall $8n^4L_1^3/\varepsilon^3$ accesses to function values of $H$, which is the same as obtained in [2] (except for an extra factor of $n$ due to a different definition of $L_1$).

4.3 Improved behavior for smooth functions

We consider the discretization points $\frac{i}{k-1}$ for $i \in \{0,\dots,k-1\}$, and we assume that all first-order (resp. second-order) partial derivatives are bounded by $L_1$ (resp. $L_2^2$). In the reasoning above, we may upper-bound the infimum of the discrete function in a finer way, going from $\inf_{x \in X} H(x) + nL_1/k$ to $\inf_{x \in X} H(x) + \frac{1}{2}n^2L_2^2/k^2$ (by doing a Taylor expansion around the global optimum, where the first-order terms are always zero, either because the partial derivative is zero or the deviation is zero). We now select $k \geq nL_2/\sqrt{\varepsilon}$, leading to a number of accesses to $H$ that scales as $4n^4L_1^2L_2/\varepsilon^{5/2}$. We thus gain a factor $\sqrt{\varepsilon}$ with the exact same algorithm, but different assumptions.

4.4 Algorithms for the isotonic problem

Compared to plain submodular minimization, where we need to project onto $S$, we need to take into account the extra isotonic constraints, i.e., $\rho \in T$, and thus use more complex orthogonal projections.

Orthogonal projections. We now require the orthogonal projections onto $S \cap T$ or $[0,1]^{n \times (k-1)} \cap S \cap T$, which are themselves isotonic regression problems with $nk$ variables. If there are $m$ original isotonic constraints in Eq. (4), the number of isotonic constraints for the projection step is $O(nk + mk)$, which is typically $O(mk)$ if $m \geq n$, which we now assume. Thus, we can use existing parametric max-flow algorithms, which can solve these in $O(nmk^2 \log(nk))$ [13] or in $O(nmk^2 \log(n^2k/m))$ [11]. See in Appendix A a description of the reformulation of isotonic regression as a parametric max-flow problem, and the link with minimum cut. Following [7, Prop. 5.3], we incorporate the $[0,1]$ box constraints by first ignoring them, thus projecting onto the plain isotonic constraints, and then thresholding the result through $x \mapsto \max\{\min\{x, 1\}, 0\}$. Alternatively, we can explicitly consider a sequence of max-flow problems (with at most $\log(1/\varepsilon)$ of these, where $\varepsilon$ is the required precision) [28, 15]. Finally, we may consider (approximate) alternating projection algorithms such as Dykstra's algorithm and its accelerated variants [6], since the set $S$ is easy to project onto, while, in some cases, such as chain isotonic constraints for the original problem, $T$ is also easy to project onto. Finally, we could also use algorithms dedicated to special structures for isotonic regression (see [27]), in particular when our original set of isotonic constraints in Eq. (4) is a chain, and the orthogonal projection corresponds to a two-dimensional grid [26].
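When $E$ is a chain, both $S$ and $T$ are sets of chain constraints, so Dykstra's algorithm only needs the PAV routine sketched earlier; a minimal sketch (illustrative, not the paper's code) for the projection $\min_{\rho \in S \cap T} \frac{1}{2}\|\rho - u\|_F^2$:

```python
import numpy as np

def dykstra_chain(u, iters=100):
    """Dykstra's alternating projections onto S (rows non-increasing)
    and T (columns non-increasing, i.e., chain isotonic constraints),
    computing the Frobenius-norm projection of u onto S ∩ T."""
    x = np.asarray(u, dtype=float).copy()
    p = np.zeros_like(x)  # correction term for the S-projection
    q = np.zeros_like(x)  # correction term for the T-projection
    for _ in range(iters):
        y = np.apply_along_axis(pav_nonincreasing, 1, x + p)  # onto S
        p = x + p - y
        x = np.apply_along_axis(pav_nonincreasing, 0, y + q)  # onto T
        q = y + q - x
    return x
```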
In our experiments, we use a standard max-flow code [5] and the usual divide-and-conquer algorithms [28, 15] for parametric max-flow.

Separable problems. The function $f_\#$ from Section 4.2 is then a linear function of the form $f_\#(\rho) = \operatorname{tr}(w^\top \rho)$, and a single max-flow algorithm can be used. For these separable problems, the alternative strongly convex problem of minimizing $f_\#(\rho) + \frac{1}{2}\|\rho\|_F^2$ becomes that of minimizing $\min_{\rho \in S \cap T} \frac{1}{2}\|\rho + w\|_F^2$, which is simply the problem of projecting onto the intersection of two convex sets, for which an accelerated Dykstra algorithm may be used [6], with convergence rate in $O(1/t^2)$ after $t$ iterations. Each step is $O(kn)$ for projecting onto $S$, while projecting onto $T$ amounts to $k$ parametric network flows with $n$ variables and $m$ constraints, in $O(knm \log n)$ for the general case and $O(kn)$ for chains and rooted trees [4, 30]. In our experiments in Section 6, we show that Dykstra's algorithm converges quickly for separable problems. Note that when the underlying losses are convex³, Dykstra converges in a single iteration. Indeed, in this situation, the sequences $(-w_{ij})_j$ are non-increasing, and isotonic regression along one direction preserves monotonicity in the other direction, which implies that after two alternating projections the algorithm has converged to the optimal solution. Alternatively, for the non-strongly-convex formulation, this is a single network flow problem with $n(k-1)$ nodes and $mk$ constraints, thus in $O(nmk^2 \log(nk))$ [25]. When $E$ corresponds to a chain, this is a two-dimensional grid, with an algorithm in $O(n^2k^2)$ [26]. For a precision $\varepsilon$, and thus $k$ proportional to $n/\varepsilon$ under the assumptions of Section 4.2, this makes a number of function calls for $H$ equal to $O(kn) = O(n^2/\varepsilon)$, and a running-time complexity of $O(n^3m/\varepsilon^2 \cdot \log(n^2/\varepsilon))$; for smooth functions, as shown in Section 4.3, we get $k$ proportional to $n/\sqrt{\varepsilon}$ and thus an improved behavior.

³This is a situation where direct algorithms such as the ones by [22] are much more efficient than our discretization schemes.

5 Improved discretization algorithms

We now consider a different discretization scheme that can take advantage of access to higher-order derivatives. We divide $[0,1]$ into $k$ disjoint pieces $A_0 = [0, \frac{1}{k})$, $A_1 = [\frac{1}{k}, \frac{2}{k})$, ..., $A_{k-1} = [\frac{k-1}{k}, 1]$. This defines a new function $\tilde{H} : \{0,\dots,k-1\}^n \to \mathbb{R}$, defined only for elements $z \in \{0,\dots,k-1\}^n$ that satisfy the isotonic constraints, i.e., $z \in \{0,\dots,k-1\}^n \cap X$:

$$\tilde{H}(z) = \min_{x \in \prod_{i=1}^n A_{z_i}} H(x) \quad \text{such that} \quad \forall (i,j) \in E, \; x_i \geq x_j. \qquad (6)$$

The function $\tilde{H}(z)$ is equal to $+\infty$ if $z$ does not satisfy the isotonic constraints.

Proposition 2 The function $\tilde{H}$ is submodular, and minimizing $\tilde{H}(z)$ for $z \in \{0,\dots,k-1\}^n$ such that $\forall (i,j) \in E$, $z_i \geq z_j$, is equivalent to minimizing Eq. (4).

Proof We consider $z$ and $z'$ that satisfy the isotonic constraints, with minimizers $x$ and $x'$ in the definition in Eq. (6). We have $\tilde{H}(z) + \tilde{H}(z') = H(x) + H(x') \geq H(\min\{x,x'\}) + H(\max\{x,x'\}) \geq \tilde{H}(\min\{z,z'\}) + \tilde{H}(\max\{z,z'\})$. Thus $\tilde{H}$ is submodular on the sub-lattice $\{0,\dots,k-1\}^n \cap X$. Note that in order to minimize $\tilde{H}$, we need to make sure that we only access $H$ for elements $z$ that satisfy the isotonic constraints, that is, $\rho \in S \cap T$ (which our algorithms impose).
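The grid sizes and iteration counts prescribed by the analyses of Sections 4.2, 4.3, and 5.1 (below) can be tabulated; a small sketch with illustrative rounding (not the paper's code):

```python
import math

def oracle_budget(n, L1, eps, q=0, Lq=None):
    """Grid size k, iteration count t, and total number t*n*k of oracle
    accesses: q = 0 is the Lipschitz scheme of Section 4.2 (O(1/eps^3)
    overall); q >= 1 uses the q-th order smoothness bound of Section 5.1,
    with k ~ (q! eps / 2)^(-1/q) * n * Lq / 2."""
    if q == 0:
        k = math.ceil(2 * n * L1 / eps)
        t = math.ceil(4 * n**2 * L1**2 / eps**2)
    else:
        k = math.ceil((math.factorial(q) * eps / 2) ** (-1.0 / q) * n * Lq / 2)
        t = math.ceil(16 * n**2 * L1**2 / eps**2)
    return k, t, t * n * k

# naive vs. second-order scheme for the same target precision
print(oracle_budget(n=10, L1=1.0, eps=1e-2))
print(oracle_budget(n=10, L1=1.0, eps=1e-2, q=2, Lq=1.0))
```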
5.1 Approximation from high-order smoothness

The main idea behind our discretization scheme is to use high-order smoothness to approximate, for any required $z$, the function value $\tilde{H}(z)$. If we assume that $H$ is $q$-times differentiable, with uniform bounds $L_r^r$ on all $r$-th order derivatives, then the $(q-1)$-th order Taylor expansion of $H$ around $y$ is equal to $H_q(x|y) = H(y) + \sum_{r=1}^{q-1} \sum_{|\alpha|=r} \frac{1}{\alpha!} (x-y)^\alpha H^{(\alpha)}(y)$, where $\alpha \in \mathbb{N}^n$ and $|\alpha|$ is the sum of its elements, $(x-y)^\alpha$ is the product of the components $(x_i - y_i)^{\alpha_i}$, $\alpha!$ the product of the factorials of the elements of $\alpha$, and $H^{(\alpha)}(y)$ is the partial derivative of $H$ with order $\alpha_i$ for each $i$. We thus approximate $\tilde{H}(z)$, for any $z$ that satisfies the isotonic constraints (i.e., $z \in X$), by $\hat{H}(z) = \min_{x \in (\prod_{i=1}^n A_{z_i}) \cap X} H_q\big(x \,\big|\, \frac{z + 1/2}{k}\big)$. We have, for any $z$, $|\tilde{H}(z) - \hat{H}(z)| \leq (nL_q/2k)^q / q!$. Moreover, when moving a single element of $z$ by one, the maximal deviation is $L_1/k + 2(nL_q/2k)^q/q!$. If $\hat{H}$ is submodular, then the same reasoning as in Section 4.2 leads to an approximate error of $(nk/\sqrt{t})\big[L_1/k + 2(nL_q/2k)^q/q!\big]$ after $t$ iterations, on top of $(nL_q/2k)^q/q!$; thus, with $t \geq 16n^2L_1^2/\varepsilon^2$ and $k \geq (q!\,\varepsilon/2)^{-1/q} nL_q/2$ (assuming $\varepsilon$ small enough such that $t \geq 16n^2k^2$), this leads to a number of accesses to the $(q-1)$-th order oracle equal to $O(n^4L_1^2L_q/\varepsilon^{2+1/q})$. We thus get an improvement in the power of $\varepsilon$, which tends to $\varepsilon^{-2}$ for infinitely smooth problems. Note that when $q = 1$, we recover the same rate as in Section 4.3 (with the same assumptions but a slightly different algorithm). However, unless $q = 1$, the function $\hat{H}(z)$ is not submodular, and we cannot directly apply the bounds for convex optimization of the extension. We show in Appendix D that the bound still holds for $q > 1$ by using the special structure of the convex problem. What remains unknown is the computation of $\hat{H}$, which requires minimizing polynomials on a small cube. We can always use the generic algorithms from Section 4.2 for this, which do not access extra function values but can be slow. For quadratic functions, we can use a convex relaxation which is not tight but already allows strong improvements with much faster local steps, and which we now present. See the pseudo-code in Appendix B. In any case, using expansions of higher order is only practically useful in situations where function evaluations are expensive.
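With access to values, gradients, and Hessians, the per-cell Taylor model is a quadratic; a sketch of building the surrogate around a cell center (illustrative; `grad` and `hess` are assumed callables, and the inner minimization over the cell would use, e.g., the relaxation of Section 5.2 below):

```python
import numpy as np

def taylor_surrogate(H, grad, hess, z, k):
    """Second-order Taylor model H_q(. | y) from Section 5.1, expanded
    around the center y = (z + 1/2) / k of the cell indexed by z."""
    y = (np.asarray(z, dtype=float) + 0.5) / k
    g, A = grad(y), hess(y)

    def model(x):
        d = np.asarray(x) - y
        return H(y) + g @ d + 0.5 * d @ A @ d

    return model  # to be minimized over the cell intersected with X
```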
5.2 Quadratic problems

In this section, we consider the minimization of a quadratic submodular function $H(x) = \frac{1}{2}x^\top A x + c^\top x$ (thus with all off-diagonal elements of $A$ non-positive) on $[0,1]^n$, subject to isotonic constraints $x_i \geq x_j$ for all $(i,j) \in E$. This is the sub-problem required in Section 5.1 when using second-order Taylor expansions. It could be solved iteratively (and approximately) with the algorithm from Section 4.2; in this section, we consider a semidefinite relaxation which is tight for certain problems ($A$ positive semidefinite, $c$ non-positive, or $A$ with non-positive diagonal elements), but not in general (we have found counter-examples, but it is most often tight). The relaxation is based on considering the set of $(Y, y) \in \mathbb{R}^{n \times n} \times \mathbb{R}^n$ such that there exists $x \in [0,1]^n \cap X$ with $Y = xx^\top$ and $y = x$. Our problem is thus equivalent to minimizing $\frac{1}{2}\operatorname{tr}(AY) + c^\top y$ such that $(Y, y)$ is in the convex hull $\mathcal{Y}$ of this set, which is NP-hard to characterize in polynomial time [10]. However, following ideas from [18], we can find a simple relaxation by considering the following constraints: (a) for all $i \neq j$,

$$\begin{pmatrix} Y_{ii} & Y_{ij} & y_i \\ Y_{ij} & Y_{jj} & y_j \\ y_i & y_j & 1 \end{pmatrix}$$

is positive semidefinite, (b) for all $i \neq j$, $Y_{ij} \leq \inf\{y_i, y_j\}$, which corresponds to $x_i x_j \leq \inf\{x_i, x_j\}$ for any $x \in [0,1]^n$, (c) for all $i$, $Y_{ii} \leq y_i$, which corresponds to $x_i^2 \leq x_i$, and (d) for all $(i,j) \in E$: $y_i \geq y_j$, $Y_{ii} \geq Y_{jj}$, $Y_{ij} \geq \max\{Y_{jj},\, y_j - y_i + Y_{ii}\}$ and $Y_{ij} \leq \max\{Y_{ii},\, y_i - y_j + Y_{jj}\}$, which correspond to $x_i \geq x_j$, $x_i^2 \geq x_j^2$, $x_i x_j \geq x_j^2$, $x_i(1-x_i) \leq x_i(1-x_j)$, $x_i x_j \leq x_i^2$, and $x_i(1-x_j) \geq x_j(1-x_j)$. This leads to a semidefinite program which provides a lower bound on the optimal value of the problem. See Appendix E for a proof of tightness for special cases and a counter-example for tightness in general.

6 Experiments

We consider experiments aiming to show (a) that the new possibility of minimizing submodular functions with isotonic constraints brings new possibilities and (b) that the new discretization algorithms are faster than the naive one.

Robust isotonic regression. Given some $z \in \mathbb{R}^n$, we consider a separable function $H(x) = \frac{1}{n}\sum_{i=1}^n G(x_i - z_i)$ with various possibilities for $G$: (a) the square loss $G(t) = \frac{1}{2}t^2$, (b) the absolute loss $G(t) = |t|$, and (c) a logarithmic loss $G(t) = \frac{\varepsilon^2}{2}\log(1 + t^2/\varepsilon^2)$, which is the negative log-density of a Student distribution and non-convex. The non-convexity of the cost function, and the fact that it has vanishing derivatives for large values, make it a good candidate for robust estimation [12]. The first two losses may be dealt with by methods for separable convex isotonic regression [22, 30], but the non-convex loss can only be dealt with exactly by the new optimization routine that we present; majorization-minimization algorithms [14] based on the concavity of $G$ as a function of $t^2$ can be used with such non-convex losses, but as shown below, they converge to bad local optima. For simplicity, we consider chain constraints $1 \geq x_1 \geq x_2 \geq \cdots \geq x_n \geq 0$. We consider two set-ups: (a) a separable set-up where maximum-flow algorithms can be used directly (with $n = 200$), and (b) a general submodular set-up (with $n = 25$ and $n = 200$), where we add a smoothness penalty of the form $\frac{\lambda}{2}\sum_{i=1}^{n-1}(x_i - x_{i+1})^2$, which is submodular (but not separable).

Data generation. We generate the data $z \in \mathbb{R}^n$, with $n = 200$, as follows: we first generate a simple decreasing function of $i \in \{1,\dots,n\}$ (here an affine function); we then perturb this ground truth by (a) adding some independent noise and (b) corrupting the data by changing a random subset of the $n$ values through the application of another function which is increasing (see Figure 2, left). This is an adversarial perturbation, while the independent noise is not adversarial; the presence of the adversarial noise makes the problem harder as the proportion of corrupted data increases.

Optimization of separable problems with maximum-flow algorithms. We solve the discretized version by a single maximum-flow problem of size $nk$. We compare the various losses for $k = 1000$ on data which lie along a decreasing line (plus noise), but are corrupted (i.e., replaced, for a certain proportion) by data along an increasing line. See an example in the left plot of Figure 2 for 50% corrupted data. We see that the square loss is highly non-robust, while the (still convex) absolute loss is slightly more robust; the robust non-convex loss still approximates the decreasing function correctly with 50% of corrupted data when optimized globally, while the method with no guarantee (based on majorization-minimization, dashed line) does not converge to an acceptable solution.
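For a separable loss with chain constraints, the discretized problem can also be solved exactly by dynamic programming in $O(nk)$, an alternative to max-flow for this special case; a minimal sketch (illustrative, not the paper's code):

```python
import numpy as np

def robust_isotonic_chain(z, G, k=1000):
    """Globally optimal discretized isotonic regression under chain
    constraints x_1 >= ... >= x_n for a separable (possibly non-convex)
    loss G, by dynamic programming over a grid of k values in [0, 1]."""
    grid = np.linspace(0.0, 1.0, k)
    n = len(z)
    cost = G(grid - z[0])          # C_1(j): best cost with x_1 = grid[j]
    back = []
    for i in range(1, n):
        # suffix minima enforce x_{i-1} >= x_i (predecessor index >= j)
        suffix = np.minimum.accumulate(cost[::-1])[::-1]
        arg = np.empty(k, dtype=int)
        best, best_j = np.inf, k - 1
        for j in range(k - 1, -1, -1):
            if cost[j] <= best:
                best, best_j = cost[j], j
            arg[j] = best_j
        back.append(arg)
        cost = G(grid - z[i]) + suffix
    idx = np.empty(n, dtype=int)
    idx[-1] = int(np.argmin(cost))
    for i in range(n - 2, -1, -1):
        idx[i] = back[i][idx[i + 1]]
    return grid[idx]

# the non-convex logarithmic loss from above (eps value assumed)
eps = 0.1
G = lambda t: (eps**2 / 2) * np.log1p(t**2 / eps**2)
z = np.linspace(1.0, 0.0, 50) + 0.05 * np.random.randn(50)
x_hat = robust_isotonic_chain(np.asarray(z), G, k=200)
```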
In Appendix C, we show additional examples where the non-convex loss is robust up to 75% corruption. In the right plot of Figure 2, we also show the robustness to an increasing proportion of outliers (for the same type of data as for the left plot), by plotting the mean-squared error in log-scale, averaged over 20 replications. Overall, this shows the benefits of non-convex isotonic regression with guaranteed global optimization, even for large proportions of corrupted data.

Optimization of separable problems with the pool-adjacent-violators (PAV) algorithm. As shown in Section 4.2, discretized separable submodular optimization corresponds to the orthogonal projection of a matrix onto the intersection of chain isotonic constraints in each row and isotonic constraints in each column equal to the original set of isotonic constraints (in these simulations, these are also chain constraints). This can be done by Dykstra's alternating projection algorithm or its accelerated version [6], for which each projection step can be performed with the PAV algorithm because each of them corresponds to chain constraints. In the left plot of Figure 3, we show the difference in function values (in log-scale) for various discretization levels (defined by the integer $k$, spaced by 1/4 in base-10 logarithm), as a function of the number of iterations (averaged over 20 replications). For large $k$ (small difference of function values), we see a spacing between the ends of the plots of approximately 1/2, highlighting the dependence in $1/k^2$ of the final error on the discretization level $k$, as our analysis in Section 4.3 suggests.

Effect of the discretization for separable problems. In order to highlight the effect of discretization and its interplay with the differentiability properties of the function to minimize, we consider, in the middle plot of Figure 3, the distance in function values after full optimization of the discrete submodular function for various values of $k$. We see that for the simple smooth function (quadratic loss) we have a decay in $1/k^2$, while for the simple non-smooth function (absolute loss) we have a final decay in $1/k$, as predicted by our analysis. For the logarithm-based loss, whose smoothness constant depends on $\varepsilon$, when $\varepsilon$ is large it behaves like a smooth function immediately, while for smaller $\varepsilon$, $k$ needs to be large enough to reach that behavior.

Non-separable problems. We consider adding a smoothness penalty to encode the prior knowledge that values should be decreasing and close. In Appendix C, we show the effect of adding a smoothness prior (for $n = 200$): it leads to better estimation. In the right plot of Figure 3, we show the effect of various discretization schemes (for $n = 25$), from order 0 (naive discretization) to orders 1 and 2 (our new schemes based on Taylor expansions from Section 5.1), and we plot the difference in function values after 50 steps of subgradient descent: in each plot, the quantity $\Delta H$ is equal to $H(x^*_k) - H^*$, where $x^*_k$ is an approximate minimizer of the discretized problem with $k$ values and $H^*$ the minimum of $H$ (taking into account the isotonic constraints). As outlined in our analysis, the first-order scheme does not help because our function has bounded Hessians, while the second-order scheme helps significantly.
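A sketch of the data-generation protocol described above (decreasing affine ground truth, independent noise, adversarial corruption by an increasing trend); the specific parameter values are illustrative:

```python
import numpy as np

def make_corrupted_data(n=200, prop=0.5, noise=0.05, seed=0):
    """Decreasing affine signal plus independent noise, with a random
    subset of proportion `prop` replaced by an increasing trend."""
    rng = np.random.default_rng(seed)
    truth = np.linspace(1.0, 0.0, n)
    z = truth + noise * rng.standard_normal(n)
    corrupt = rng.choice(n, size=int(prop * n), replace=False)
    z[corrupt] = np.linspace(0.0, 1.0, n)[corrupt] \
        + noise * rng.standard_normal(corrupt.size)
    return z, truth
```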
7 Conclusion

In this paper, we have shown how submodularity can be leveraged to obtain polynomial-time algorithms for isotonic regression with a submodular cost, based on convex optimization in a space of measures; although based on convexity arguments, our algorithms apply to all separable non-convex functions. The final algorithms are based on discretization, with a new scheme that also provides improvements based on smoothness (also without isotonic constraints). Our framework is worth extending in the following directions: (a) we currently consider a fixed discretization; it would be advantageous to consider adaptive schemes, potentially improving the dependence on the number of variables $n$ and the precision $\varepsilon$; (b) other shape constraints can be considered in a similar submodular framework, such as $x_i x_j \geq 0$ for certain pairs $(i,j)$; (c) a direct convex formulation without discretization could probably be found for quadratic programming with submodular costs (which are potentially non-convex but solvable in polynomial time); (d) a statistical study of isotonic regression with adversarial corruption could now rely on formulations with polynomial-time algorithms.

Acknowledgements We acknowledge support from the European Research Council (grant SEQUOIA 724063).
1. What is the focus of the paper regarding submodular minimization? 2. What are the strengths of the proposed approach, particularly in extending the settings for efficient algorithms? 3. What are the weaknesses of the paper, especially regarding the discretization approach and the lack of concrete applications? 4. Do you have any concerns or questions about the theoretical analysis, such as the improvement from 1/eps^3 to 1/eps^2.33? 5. How does the reviewer assess the clarity and readability of the paper's content?
Review
This paper studies continuous submodular minimization under ordering constraints. The main motivation is to extend the settings in which efficient algorithms for isotonic regression exist; in particular, the authors extend such settings from convex objectives to submodular objectives. First, the authors show that this problem can be reformulated as a convex optimization problem with isotonic constraints. Since this convex problem lives in a space of measures, it cannot be optimized exactly. Instead, the authors propose a discretization approach as in Bach18. The discretization approach is improved from the naive 1/eps^3 evaluations to 1/eps^2.33. Finally, the authors experimentally consider robustness to outliers with their approach. Examples of continuous submodular functions are listed, as well as a separate motivation for isotonic constraints, but it would have been nice if a concrete application of the problem studied had been explained in detail. On the theoretical side, I don't find the improvement from a naive 1/eps^3 number of evaluations to 1/eps^2.33 to be of major significance. I also found the paper to be very dense and often hard to follow. An example is that it is not clear how the noise and corruptions fit into the model. It would have been nice to have additional discussion of what corrupted data corresponds to in this setting.
NIPS
Title Efficient Algorithms for Non-convex Isotonic Regression through Submodular Optimization Abstract We consider the minimization of submodular functions subject to ordering constraints. We show that this potentially non-convex optimization problem can be cast as a convex optimization problem on a space of uni-dimensional measures, with ordering constraints corresponding to first-order stochastic dominance. We propose new discretization schemes that lead to simple and efficient algorithms based on zero-th, first, or higher order oracles; these algorithms also lead to improvements without isotonic constraints. Finally, our experiments show that non-convex loss functions can be much more robust to outliers for isotonic regression, while still being solvable in polynomial time. 1 Introduction Shape constraints such as ordering constraints appear everywhere in estimation problems in machine learning, signal processing and statistics. They typically correspond to prior knowledge, and are imposed for the interpretability of models, or to allow non-parametric estimation with improved convergence rates [16, 8]. In this paper, we focus on imposing ordering constraints into an estimation problem, a setting typically referred to as isotonic regression [4, 26, 22], and we aim to generalize the set of problems for which efficient (i.e., polynomial-time) algorithms exist. We thus focus on the following optimization problem: min x2[0,1]n H(x) such that 8(i, j) 2 E, xi > xj , (1) where E ⇢ {1, . . . , n}2 represents the set of constraints, which form a directed acyclic graph. For simplicity, we restrict x to the set [0, 1]n, but our results extend to general products of (potentially unbounded) intervals. As convex constraints, isotonic constraints are well-adapted to estimation problems formulated as convex optimization problems where H is convex, such as for linear supervised learning problems, with many efficient algorithms for separable convex problems [4, 26, 22, 30], which can thus be used as inner loops in more general convex problems by using projected gradient methods (see, e.g., [3]). In this paper, we show that another form of structure can be leveraged. We will assume that H is submodular, which is equivalent, when twice continuously differentiable, to having nonpositive cross second-order derivatives. This notably includes all (potentially non convex) separable functions (i.e., sums of functions that depend on single variables), but also many other examples (see Section 2). Minimizing submodular functions on continuous domains has been recently shown to be equivalent to a convex optimization problem on a space of uni-dimensional measures [2], and given that the functions x 7! (xj xi)+ are submodular for any > 0, it is natural that by using tending to +1, 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. we recover as well a convex optimization problem; the main contribution of this paper is to provide a simple framework based on stochastic dominance, for which we design efficient algorithms which are based on simple oracles on the function H (typically access to function values and derivatives). In order to obtain such algorithms, we go significantly beyond [2] by introducing novel discretization algorithms that also provide improvements without any isotonic constraints. 
More precisely, we make the following contributions: – We show in Section 3 that minimizing a submodular function with isotonic constraints can be cast as a convex optimization problem on a space of uni-dimensional measures, with isotonic constraints corresponding to first-order stochastic dominance. – On top of the naive discretization schemes presented in Section 4, we propose in Section 5 new discretization schemes that lead to simple and efficient algorithms based on zero-th, first, or higher order oracles. They go from requiring O(1/"3) = O(1/"2+1) function evaluations to reach a precision ", to O(1/"5/2) = O(1/"2+1/2) and O(1/"7/3) = O(1/"2+1/3). – Our experiments in Section 6 show that non-convex loss functions can be much more robust to outliers for isotonic regression. 2 Submodular Analysis in Continuous Domains In this section, we review the framework of [2] that shows how to minimize submodular functions using convex optimization. Definition. Throughout this paper, we consider a continuous function H : [0, 1]n ! R. The function H is said to be submodular if and only if [21, 29]: 8(x, y) 2 [0, 1]n ⇥ [0, 1]n, H(x) +H(y) > H(min{x, y}) +H(max{x, y}), (2) where the min and max operations are applied component-wise. If H is continuously twice differentiable, then this is equivalent to @ 2H @xi@xj (x) 6 0 for any i 6= j and x 2 [0, 1]n [29]. The cone of submodular functions on [0, 1]n is invariant by marginal strictly increasing transformations, and includes all functions that depend on a single variable (which play the role of linear functions for convex functions), which we refer to as separable functions. Examples. The classical examples are: (a) any separable function, (b) convex functions of the difference of two components, (c) concave functions of a positive linear combination, (d) negative log densities of multivariate totally positive distributions [17]. See Section 6 for a concrete example. Extension on a space of measures. We consider the convex set P([0, 1]) of Radon probability measures [24] on [0, 1], which is the closure (for the weak topology) of the convex hull of all Dirac measures. In order to get an extension, we look for a function defined on the set of products of probability measures µ 2 P([0, 1])n, such that if all µi, i = 1, . . . , n, are Dirac measures at points xi 2 [0, 1], then we have a function value equal to H(x1, . . . , xn). Note that P([0, 1])n is different from P([0, 1]n), which is the set of probability measures on [0, 1]n. For a probability distribution µi 2 P([0, 1]) defined on [0, 1], we can define the (reversed) cumulative distribution function Fµi : [0, 1] ! [0, 1] as Fµi(xi) = µi [xi, 1] . This is a non-increasing left-continuous function from [0, 1] to [0, 1], such that Fµi(0) = 1 and Fµi(1) = µi({1}). See illustrations in the left plot of Figure 1. We can then define the “inverse” cumulative function from [0, 1] to [0, 1] as F 1µi (ti) = sup{xi 2 [0, 1], Fµi(xi) > ti}. The function F 1µi is non-increasing and right-continuous, and such that F 1µi (1) = min supp(µi) and F 1 µi (0) = 1. Moreover, we have Fµi(xi) > ti , F 1µi (ti) > xi. The extension from [0, 1]n to the set of product probability measures is obtained by considering a single threshold t applied to all n cumulative distribution functions, that is: 8µ 2 P([0, 1])n, h(µ 1 , . . . , µn) = Z 1 0 H ⇥ F 1µ1 (t), . . . , F 1 µn (t) ⇤ dt. 
(3) We have the following properties when H is submodular: (a) it is an extension, that is, if for all i, µi is a Dirac at xi, then h(µ) = H(x); (b) it is convex; (c) minimizing h on P([0, 1])n and minimizing H on [0, 1]n is equivalent; moreover, the minimal values are equal and µ is a minimizer if and only if⇥ F 1µ1 (t), . . . , F 1 µn (t) ⇤ is a minimizer of H for almost all t2 [0, 1]. Thus, submodular minimization is equivalent to a convex optimization problem in a space of uni-dimensional measures. Note that the extension is defined on all tuples of measures µ = (µ 1 , . . . , µn) but it can equivalently be defined through non-increasing functions from [0, 1] to [0, 1], e.g., the representation in terms of cumulative distribution functions Fµi defined above (this representation will be used in Section 4 where algorithms based on the discretization of the equivalent obtained convex problem are discussed). 3 Isotonic Constraints and Stochastic Dominance In this paper, we consider the following problem: inf x2[0,1]n H(x) such that 8(i, j) 2 E, xi > xj , (4) where E is the edge set of a directed acyclic graph on {1, . . . , n} and H is submodular. We denote by X ⇢ Rn (not necessarily a subset of [0, 1]n) the set of x 2 Rn satisfying the isotonic constraints. In order to define an extension in a space of measures, we consider a specific order on measures on [0, 1], namely first-order stochastic dominance [20], defined as follows. Given two distributions µ and ⌫ on [0, 1], with (inverse) cumulative distribution functions Fµ and F⌫ , we have µ < ⌫, if and only if 8x 2 [0, 1], Fµ(x) > F⌫(x), or equivalently, 8t 2 [0, 1], F 1µ (t) > F 1⌫ (t). As shown in the right plot of Figure 1, the densities may still overlap. An equivalent characterization [19, 9] is the existence of a joint distribution on a vector (X,X 0) 2 R2 with marginals µ(x) and ⌫(x0) and such that X > X 0 almost surely1. We now prove the main proposition of the paper: Proposition 1 We consider the convex minimization problem: inf µ2P([0,1])n h(µ) such that 8(i, j) 2 E, µi < µj . (5) Problems in Eq. (4) and Eq. (5) have the same objective values. Moreover, µ is a minimizer of Eq. (5) if and only if F 1µ (t) is a minimizer of H of Eq. (4) for almost all t 2 [0, 1]. Proof We denote by M the set of µ 2 P([0, 1])n satisfying the stochastic ordering constraints. For any x 2 [0, 1]n that satisfies the constraints in Eq. (4), i.e., x 2 X \ [0, 1]n, the associated Dirac measures satisfy the constraint in Eq. (5). Therefore, the objective value M of Eq. (4) is greater or equal to the one M 0 of Eq. (5). Given a minimizer µ for the convex problem in Eq. (5), we have: M > M 0 = h(µ) = R 1 0 H ⇥ F 1µ1 (t), . . . , F 1 µn (t) ⇤ dt > R 1 0 Mdt = M. This shows the proposition by studying the equality cases above. 1Such a joint distribution may be built as the distribution of (F 1µ (T ), F 1⌫ (T )), where T is uniformly distributed in [0, 1]. Alternatively, we could add the penalty term P (i,j)2E R +1 1 (Fµj (z) Fµi(z))+dz, which corresponds to the unconstrained minimization of H(x) + P (i,j)2E(xj xi)+. For > 0 big enough2, this is equivalent to the problem above, but with a submodular function which has a large Lipschitz constant (and is thus harder to optimize with the iterative methods presented below). 4 Discretization algorithms Prop. 
1 shows that the isotonic regression problem with a submodular cost can be cast as a convex optimization problem; however, this is achieved in a space of measures, which cannot be handled directly computationally in polynomial time. Following [2], we consider a polynomial time and space discretization scheme of each interval [0, 1] (and not of [0, 1]n), but we propose in Section 5 a significant improvement that allows to reduce the number of discrete points significantly. All pseudo-codes for the algorithms are available in Appendix B. 4.1 Review of submodular optimization in discrete domains All our algorithms will end up minimizing approximately a submodular function F on {0, . . . , k 1}n, that is, which satisfies Eq. (2). Isotonic constraints will be added in Section 4.2. Following [2], this can be formulated as minimizing a convex function f# on the set of ⇢ 2 [0, 1]n⇥(k 1) so that for each i 2 {1, . . . , n}, (⇢ij)j2{1,...,k 1} is a non-increasing sequence (we denote by S this set of constraints) corresponding to the cumulative distribution function. For any feasible ⇢, a subgradient of f# may be computed by sorting all n(k 1) elements of the matrix ⇢ and computing at most n(k 1) values of F . An approximate minimizer of F (which exactly inherits approximation properties from the approximate optimality of ⇢) is then obtained by selecting the minimum value of F in the computation of the subgradient. Projected subgradient methods can then be used, and if F is the largest absolute difference in values of F when a single variable is changed by ±1, we obtain an "-minimizer (for function values) after t iterations, with " 6 nk F/ p t. The projection step is composed of n simple separable quadratic isotonic regressions with chain constraints in dimension k, which can be solved easily in O(nk) using the pool-adjacent-violator algorithm [4]. Computing a subgradient requires a sorting operation, which is thus O(nk log(nk)). See more details in [2]. Alternatively, we can minimize the strongly-convex f#(⇢) + 1 2 k⇢k2F on the set of ⇢ 2 Rn⇥(k 1) so that for each i, (⇢ij)j is a non-increasing sequence, that is, ⇢ 2 S (the constraints that ⇢ij 2 [0, 1] are dropped). We then get a minimizer z of F by looking for all i 2 {1, . . . , n} at the largest j 2 {1, . . . , k 1} such that ⇢ij > 0. We take then zi = j (and if no such j exists, zi = 0). A gap of " in the problem above, leads to a gap of p "nk for the original problem (see more details in [2]). The subgradient method in the primal, or Frank-Wolfe algorithm in the dual may be used for this problem. We obtain an "-minimizer (for function values) after t iterations, with " 6 F/t, which leads for the original submodular minimization problem to the same optimality guarantees as above, but with a faster algorithm in practice. See the detailed computations and comparisons in [2]. 4.2 Naive discretization scheme Following [2], we simply discretize [0, 1] by selecting the k values ik 1 or 2i+1 2k , for i 2 {0, . . . , k 1}. If the function H : [0, 1]n is L 1 -Lipschitz-continuous with respect to the ` 1 -norm, that is |H(x) H(x0)| 6 L 1 kx x0k 1 , the function F is (L 1 /k)-Lipschitz-continuous with respect to the ` 1 -norm (and thus we have F 6 L 1 /k above). Moreover, if F is minimized up to ", H is optimized up to "+ nL 1 /k. In order to take into account the isotonic constraints, we simply minimize with respect to ⇢ 2 [0, 1]n⇥(k 1) \ S, with the additional constraint that for all j 2 {1, . . . , k 1}, 8(a, b) 2 E, ⇢a,j > ⇢b,j . 
This corresponds to additional contraints T ⇢ Rn⇥(k 1). 2A short calculation shows that when H is differentiable, the first order-optimality condition (which is only necessary here) implies that if is strictly larger than n times the largest possible partial first-order derivative of H , the isotonic constraints have to be satisfied. Following Section 4.1, we can either choose to solve the convex problem min⇢2[0,1]n⇥k\S\T f#(⇢), or the strongly-convex problem min⇢2S\T f#(⇢) + 1 2 k⇢k2F . In the two situations, after t iterations, that is tnk accesses to values of H , we get a constrained minimizer of H with approximation guarantee nL 1 /k + nL 1 / p t. Thus in order to get a precision ", it suffices to select k > 2nL 1 /" and t > 4n2L2 1 /"2, leading to an overall 8n4L3 1 /"3 accesses to function values of H , which is the same as obtained in [2] (except for an extra factor of n due to a different definition of L 1 ). 4.3 Improved behavior for smooth functions We consider the discretization points ik 1 for i 2 {0, . . . , k 1}, and we assume that all first-order (resp. second-order) partial derivatives are bounded by L 1 (resp. L2 2 ). In the reasoning above, we may upper-bound the infimum of the discrete function in a finer way, going from infx2X H(x) + nL1/k to infx2X H(x) + 1 2 n2L2 2 /k2 (by doing a Taylor expansion around the global optimum, where the first-order terms are always zero, either because the partial derivative is zero or the deviation is zero). We now select k > nL 2 / p ", leading to a number of accesses to H that scales as 4n4L2 1 L 2 /"5/2. We thus gain a factor p " with the exact same algorithm, but different assumptions. 4.4 Algorithms for isotonic problem Compared to plain submodular minimization where we need to project onto S, we need to take into account the extra isotonic constraints, i.e., ⇢ 2 T, and thus use more complex orthogonal projections. Orthogonal projections. We now require the orthogonal projections on S\T or [0, 1]n⇥k \ S\T, which are themselves isotonic regression problems with nk variables. If there are m original isotonic constraints in Eq. (4), the number of isotonic constraints for the projection step is O(nk+mk), which is typically O(mk) if m > n, which we now assume. Thus, we can use existing parametric max-flow algorithms which can solve these in O(nmk2 log(nk)) [13] or in O(nmk2 log(n2k/m)) [11]. See in Appendix A a description of the reformulation of isotonic regression as a parametric max-flow problem, and the link with minimum cut. Following [7, Prop. 5.3], we incorporate the [0, 1] box constraints, by first ignoring them and thus by projecting onto the regular isotonic constraints, and then thresholding the result through x ! max{min{x, 1}, 0}. Alternatively, we can explicitly consider a sequence of max-flow problems (with at most log(1/") of these, where " is the required precision) [28, 15]. Finally, we may consider (approximate) alternate projection algorithms such as Dykstra’s algorithm and its accelerated variants [6], since the set S is easy to project to, while, in some cases, such as chain isotonic constraints for the original problem, T is easy to project to. Finally, we could also use algorithms dedicated to special structures for isotonic regression (see [27]), in particular when our original set of isotonic constraints in Eq. (4) is a chain, and the orthogonal projection corresponds to a two-dimensional grid [26]. 
In our experiments, we use a standard max-flow code [5] and the usual divide-and-conquer algorithms [28, 15] for parametric max-flow. Separable problems. The function f# from Section 4.2 is then a linear function of the form f#(⇢) = trw>⇢, and then, a single max-flow algorithm can be used. For these separable problerms, the alternative strongly-convex problem of minimizing f#(⇢)+ 1 2 k⇢k2F becomes the one of minimizing min⇢2S\T 1 2 k⇢+ wk2F , which is simply the problem of projecting on the intersection of two convex sets, for which an accelerated Dykstra algorithm may be used [6], with convergence rate in O(1/t2) after t iterations. Each step is O(kn) for projecting onto S, while this is k parametric network flows with n variables and m constraints for projecting onto T, in O(knm log n) for the general case and O(kn) for chains and rooted trees [4, 30]. In our experiments in Section 6, we show that Dykstra’s algorithm converges quickly for separable problems. Note that when the underlying losses are convex3, then Dykstra converges in a single iteration. Indeed, in this situation, the sequences ( wij)j are non-increasing and isotonic regression 3This is a situation where direct algorithms such as the ones by [22] are much more efficient than our discretization schemes. along a direction preserves decreasingness in the other direction, which implies that after two alternate projections, the algorithm has converged to the optimal solution. Alternatively, for the non-strongly convex formulation, this is a single network flow problem with n(k 1) nodes, and mk constraints, in thus O(nmk2 log(nk)) [25]. When E corresponds to a chain, then this is a 2-dimensional-grid with an algorithm in O(n2k2) [26]. For a precision ", and thus k proportional to n/" with the assumptions of Section 4.2, this makes a number of function calls for H , equal to O(kn) = O(n2/") and a running-time complexity of O(n3m/"2 · log(n2/"))—for smooth functions, as shown in Section 4.3, we get k proportional to n/ p " and thus an improved behavior. 5 Improved discretization algorithms We now consider a different discretization scheme that can take advantage of access to higher-order derivatives. We divide [0, 1] into k disjoint pieces A 0 = [0, 1k ), A1 = [ 1 k , 2 k ), . . . , Ak 1 = [ k 1 k , 1]. This defines a new function ˜H : {0, . . . , k 1}n ! R defined only for elements z 2 {0, . . . , k 1}n that satisfy the isotonic constraint, i.e., z 2 {0, . . . , k 1}n \ X: ˜H(z) = min x2 Qn i=1 Azi H(x) such that 8(i, j) 2 E, xi > xj . (6) The function ˜H(z) is equal to +1 if z does not satisfy the isotonic constraints. Proposition 2 The function ˜H is submodular, and minimizing ˜H(z) for z 2 {0, . . . , k 1}n such that 8(i, j) 2 E, zi > zj is equivalent to minimizing Eq. (4). Proof We consider z and z0 that satisfy the isotonic constraints, with minimizers x and x0 in the definition in Eq. (6). We have H(z) +H(z0) = H(x) +H(x0) > H(min{x, x0}) +H(max{x, x0}) > H(min{z, z0}) +H(max{z, z0}). Thus it is submodular on the sub-lattice {0, . . . , k 1}n \ X. Note that in order to minimize ˜H , we need to make sure that we only access H for elements z that satisfy the isotonic constraints, that is ⇢ 2 S \ T (which our algorithms impose). 5.1 Approximation from high-order smoothness The main idea behind our discretization scheme is to use high-order smoothness to approximate for any required z, the function value ˜H(z). 
If we assume that H is q-times differentiable, with uniform bounds Lrr on all r-th order derivatives, then, the (q 1)-th order Taylor expansion of H around y is equal to Hq(x|y) = H(y) + Pq 1 r=1 P |↵|=r 1 ↵! (x y) ↵H(↵)(y), where ↵ 2 Nn and |↵| is the sum of elements, (x y)↵ is the vector with components (xi yi)↵i , ↵! the products of all factorials of elements of ↵, and H(↵)(y) is the partial derivative of H with order ↵i for each i. We thus approximate ˜H(z), for any z that satisfies the isotonic constraint (i.e., z 2 X), by ˆH(z) = minx2( Qn i=1 Azi )\X Hq(x| z+1/2 k ). We have for any z, | ˜H(z) ˆH(z)| 6 (nLq/2k)q/q!. Moreover, when moving a single element of z by one, the maximal deviation is L 1 /k + 2(nLq/2k)q/q!. If ˆH is submodular, then the same reasoning as in Section 4.2 leads to an approximate error of (nk/ p t) L 1 /k + 2(nLq/2k)q/q! after t iterations, on top of (nLq/2k)q/q!, thus, with t > 16n2L2 1 /"2 and k > (q!"/2) 1/qnLq/2 (assuming " small enough such that t > 16n2k2), this leads to a number of accesses to the (q 1)-th order oracle equal to O(n4L2 1 Lq/"2+1/q). We thus get an improvement in the power of ", which tend to " 2 for infinitely smooth problems. Note that when q = 1 we recover the same rate as in Section 4.3 (with the same assumptions but a slightly different algorithm). However, unless q = 1, the function ˆH(z) is not submodular, and we cannot apply directly the bounds for convex optimization of the extension. We show in Appendix D that the bound still holds for q > 1 by using the special structure of the convex problem. What remains unknown is the computation of ˆH which requires to minimize polynomials on a small cube. We can always use the generic algorithms from Section 4.2 for this, which do not access extra function values but can be slow. For quadratic functions, we can use a convex relaxation which is not tight but already allows strong improvements with much faster local steps, and which we now present. See the pseudo-code in Appendix B. In any case, using expansions of higher order is only practically useful in situations where function evaluations are expensive. 5.2 Quadratic problems In this section, we consider the minimization of a quadratic submodular function H(x) = 1 2 x>Ax+ c>x (thus with all off-diagonal elements of A non-negative) on [0, 1]n, subject to isotonic constraints xi > xj for all (i, j) 2 E. This is the sub-problem required in Section 5.1 when using second-order Taylor expansions. It could be solved iteratively (and approximately) with the algorithm from Section 4.2; in this section, we consider a semidefinite relaxation which is tight for certain problems (A positive semidefinite, c non-positive, or A with non-positive diagonal elements), but not in general (we have found counter-examples but it is most often tight). The relaxation is based on considering the set of (Y, y) 2 Rn⇥n ⇥ Rn such that there exists x 2 [0, 1]n \ X with Y = xx> and y = x. Our problem is thus equivalent to minimizing 1 2 trAY + c>y such that (Y, y) is in the convex-hull Y of this set, which is NP-hard to characterize in polynomial time [10]. However, following ideas from [18], we can find a simple relaxation by considering the following constraints: (a) for all i 6= j, Yii Yij yi Yij Yjj yj yi yj 1 ! 
5.2 Quadratic problems

In this section, we consider the minimization of a quadratic submodular function $H(x) = \frac{1}{2} x^\top A x + c^\top x$ (thus with all off-diagonal elements of $A$ non-positive) on $[0,1]^n$, subject to isotonic constraints $x_i \geq x_j$ for all $(i,j) \in E$. This is the sub-problem required in Section 5.1 when using second-order Taylor expansions. It could be solved iteratively (and approximately) with the algorithm from Section 4.2; in this section, we consider a semidefinite relaxation which is tight for certain problems ($A$ positive semidefinite, $c$ non-positive, or $A$ with non-positive diagonal elements), but not in general (we have found counter-examples, but the relaxation is most often tight).

The relaxation is based on considering the set of $(Y, y) \in \mathbb{R}^{n \times n} \times \mathbb{R}^n$ such that there exists $x \in [0,1]^n \cap X$ with $Y = xx^\top$ and $y = x$. Our problem is thus equivalent to minimizing $\frac{1}{2} \mathrm{tr}(AY) + c^\top y$ such that $(Y, y)$ is in the convex hull $\mathcal{Y}$ of this set, which is NP-hard to characterize in polynomial time [10]. However, following ideas from [18], we can find a simple relaxation by considering the following constraints: (a) for all $i \neq j$, the matrix
$$\begin{pmatrix} Y_{ii} & Y_{ij} & y_i \\ Y_{ij} & Y_{jj} & y_j \\ y_i & y_j & 1 \end{pmatrix}$$
is positive semidefinite; (b) for all $i \neq j$, $Y_{ij} \leq \inf\{y_i, y_j\}$, which corresponds to $x_i x_j \leq \inf\{x_i, x_j\}$ for any $x \in [0,1]^n$; (c) for all $i$, $Y_{ii} \leq y_i$, which corresponds to $x_i^2 \leq x_i$; and (d) for all $(i,j) \in E$: $y_i \geq y_j$, $Y_{ii} \geq Y_{jj}$, $Y_{ij} \geq \max\{Y_{jj}, \, y_j - y_i + Y_{ii}\}$ and $Y_{ij} \leq \min\{Y_{ii}, \, y_i - y_j + Y_{jj}\}$, which correspond to $x_i \geq x_j$, $x_i^2 \geq x_j^2$, $x_i x_j \geq x_j^2$, $x_i(1 - x_i) \leq x_i(1 - x_j)$, $x_i x_j \leq x_i^2$, and $x_i(1 - x_j) \geq x_j(1 - x_j)$. This leads to a semidefinite program which provides a lower bound on the optimal value of the problem. See Appendix E for a proof of tightness for special cases and a counter-example for tightness in general.

6 Experiments

We consider experiments aiming to show (a) that the ability to minimize submodular functions under isotonic constraints opens up new applications, and (b) that the new discretization algorithms are faster than the naive one.

Robust isotonic regression. Given some $z \in \mathbb{R}^n$, we consider a separable function $H(x) = \frac{1}{n} \sum_{i=1}^n G(x_i - z_i)$ with various possibilities for $G$: (a) the square loss $G(t) = \frac{1}{2} t^2$, (b) the absolute loss $G(t) = |t|$, and (c) a logarithmic loss $G(t) = \frac{\kappa^2}{2} \log(1 + t^2/\kappa^2)$, which is the negative log-density of a Student distribution and non-convex. The non-convexity of this cost function and the fact that it has vanishing derivatives for large values make it a good candidate for robust estimation [12]. The first two losses may be dealt with by methods for separable convex isotonic regression [22, 30], but the non-convex loss can only be dealt with exactly by the new optimization routine that we present; majorization-minimization algorithms [14] based on the concavity of $G$ as a function of $t^2$ can be used with such non-convex losses, but as shown below, they converge to bad local optima. For simplicity, we consider chain constraints $1 \geq x_1 \geq x_2 \geq \cdots \geq x_n \geq 0$.

We consider two set-ups: (a) a separable set-up where maximum-flow algorithms can be used directly (with $n = 200$), and (b) a general submodular set-up (with $n = 25$ and $n = 200$), where we add a smoothness penalty $\frac{\lambda}{2} \sum_{i=1}^{n-1} (x_i - x_{i+1})^2$, which is submodular (but not separable).

Data generation. We generate the data $z \in \mathbb{R}^n$, with $n = 200$, as follows: we first generate a simple decreasing function of $i \in \{1, \dots, n\}$ (here an affine function); we then perturb this ground truth by (a) adding some independent noise and (b) corrupting the data by changing a random subset of the $n$ values by the application of another function which is increasing (see Figure 2, left). This is an adversarial perturbation, while the independent noise is not adversarial; the presence of the adversarial noise makes the problem harder as the proportion of corrupted data increases.
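A minimal Python sketch of this set-up (the slope, noise level, and $\kappa$ below are our own illustrative values, not the paper's exact parameters): it generates the corrupted decreasing data, defines the three losses $G$, and fits the non-robust square-loss baseline with scikit-learn's PAV-based isotonic regression.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
n, corrupt_frac, kappa = 200, 0.5, 0.25    # illustrative values

truth = np.linspace(1.0, 0.0, n)           # decreasing affine ground truth
z = truth + 0.05 * rng.standard_normal(n)  # independent (non-adversarial) noise
idx = rng.choice(n, int(corrupt_frac * n), replace=False)
z[idx] = np.linspace(0.0, 1.0, n)[idx]     # adversarial: an increasing function

# The three separable losses G; H(x) = (1/n) sum_i G(x_i - z_i).
square   = lambda t: 0.5 * t ** 2
absolute = lambda t: np.abs(t)
student  = lambda t: 0.5 * kappa ** 2 * np.log1p(t ** 2 / kappa ** 2)  # non-convex
H = lambda x, G: np.mean(G(x - z))

# Non-robust baseline: square-loss isotonic regression on the decreasing chain.
x_sq = IsotonicRegression(increasing=False).fit_transform(np.arange(n), z)
print("square-loss fit, Student objective:", H(x_sq, student))
```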
Optimization of separable problems with maximum-flow algorithms. We solve the discretized version by a single maximum-flow problem of size $nk$. We compare the various losses for $k = 1000$ on data which lies along a decreasing line (plus noise), but is corrupted (i.e., replaced for a certain proportion) by data along an increasing line. See an example in the left plot of Figure 2 for 50% of corrupted data. We see that the square loss is highly non-robust, while the (still convex) absolute loss is slightly more robust; the robust non-convex loss still approximates the decreasing function correctly with 50% of corrupted data when optimized globally, while the method with no guarantee (based on majorization-minimization, dashed line) does not converge to an acceptable solution. In Appendix C, we show additional examples where it is robust up to 75% of corruption. In the right plot of Figure 2, we also show the robustness to an increasing proportion of outliers (for the same type of data as for the left plot), by plotting the mean-squared error in log-scale, averaged over 20 replications. Overall, this shows the benefits of non-convex isotonic regression with guaranteed global optimization, even for large proportions of corrupted data.

Optimization of separable problems with the pool-adjacent-violators (PAV) algorithm. As shown in Section 4.2, discretized separable submodular optimization corresponds to the orthogonal projection of a matrix onto the intersection of chain isotonic constraints in each row and isotonic constraints in each column equal to the original set of isotonic constraints (in these simulations, these are also chain constraints). This can be done by Dykstra's alternating projection algorithm or its accelerated version [6], for which each projection step can be performed with the PAV algorithm, because each of them corresponds to chain constraints. In the left plot of Figure 3, we show the difference in function values (in log-scale) for various discretization levels (defined by the integer $k$, spaced by $1/4$ in base-10 logarithm), as a function of the number of iterations (averaged over 20 replications). For large $k$ (small difference of function values), we see a spacing between the ends of the plots of approximately $1/2$, highlighting the dependence in $1/k^2$ of the final error on the discretization level $k$, which our analysis in Section 4.3 suggests.

Effect of the discretization for separable problems. In order to highlight the effect of discretization and its interplay with the differentiability properties of the function to minimize, we consider in the middle plot of Figure 3 the distance in function values after full optimization of the discrete submodular function, for various values of $k$. We see that for the simple smooth function (quadratic loss) we have a decay in $1/k^2$, while for the simple non-smooth function (absolute loss) we have a final decay in $1/k$, as predicted by our analysis. For the logarithm-based loss, whose smoothness constant depends on $\kappa$: when $\kappa$ is large, it behaves like a smooth function immediately, while for smaller $\kappa$, $k$ needs to be large enough to reach that behavior.

Non-separable problems. We consider adding a smoothness penalty to encode the prior knowledge that values should be decreasing and close. In Appendix C, we show the effect of adding a smoothness prior (for $n = 200$): it leads to better estimation. In the right plot of Figure 3, we show the effect of various discretization schemes (for $n = 25$), from order 0 (naive discretization) to orders 1 and 2 (our new schemes based on Taylor expansions from Section 5.1), and we plot the difference in function values after 50 steps of subgradient descent: in each plot, the quantity $\Delta H$ is equal to $H(x_k^*) - H^*$, where $x_k^*$ is an approximate minimizer of the discretized problem with $k$ values and $H^*$ is the minimum of $H$ (taking into account the isotonic constraints). As outlined in our analysis, the first-order scheme does not help because our function has bounded Hessians, while the second-order scheme helps significantly.
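For reference, here is a minimal sketch of the majorization-minimization baseline shown as the dashed line above (our own implementation, not the authors' code): the concavity of $G$ in $t^2$ majorizes the Student loss by a weighted square loss, so each step reduces to a weighted isotonic regression; the method carries no global guarantee and can converge to poor local optima, which is exactly what Figure 2 illustrates.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def mm_student_isotonic(z, kappa=0.25, n_iter=50):
    """Majorization-minimization for the non-convex Student loss:
    each iteration re-weights the residuals using the concave majorizer
    and solves a weighted square-loss isotonic regression (PAV)."""
    z = np.asarray(z, dtype=float)
    pos = np.arange(len(z))
    x = z.copy()                                      # initialize at the data
    for _ in range(n_iter):
        w = 1.0 / (1.0 + (x - z) ** 2 / kappa ** 2)   # majorizer weights
        x = IsotonicRegression(increasing=False).fit_transform(
            pos, z, sample_weight=w)
    return x   # a local optimum only; no global guarantee
```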
7 Conclusion

In this paper, we have shown how submodularity can be leveraged to obtain polynomial-time algorithms for isotonic regression with a submodular cost, based on convex optimization in a space of measures; although based on convexity arguments, our algorithms apply to all separable non-convex functions. The final algorithms are based on discretization, with a new scheme that also provides improvements based on smoothness (also without isotonic constraints). Our framework is worth extending in the following directions: (a) we currently consider a fixed discretization; it would be advantageous to consider adaptive schemes, potentially improving the dependence on the number of variables $n$ and the precision $\varepsilon$; (b) other shape constraints can be considered in a similar submodular framework, such as $x_i x_j \geq 0$ for certain pairs $(i,j)$; (c) a direct convex formulation without discretization could probably be found for quadratic programming with submodular costs (which are potentially non-convex but solvable in polynomial time); (d) a statistical study of isotonic regression with adversarial corruption could now rely on formulations with polynomial-time algorithms.

Acknowledgements We acknowledge support from the European Research Council (grant SEQUOIA 724063).
1. What is the main contribution of the paper regarding submodular function minimization?
2. What are the strengths and weaknesses of the proposed approach, particularly in its application to ordering constraints?
3. Do you have any concerns or suggestions regarding the authors' use of convex optimization and parametric max-flow?
4. How does the paper address the issue of discretization and box constraints?
5. Are there any limitations or open questions regarding the theoretical guarantees provided in the paper?
Review
In this paper the authors consider the problem of minimizing a continuous submodular function subject to ordering (isotonic) constraints. They first show that the problem can be solved if we first discretize it (per coordinate, not in $[0,1]^n$), and then solve the resulting discrete optimization problem using convex optimization. The fact that the problem is solvable in polynomial time is of course not surprising, because, as pointed out by the authors in lines 29-36, we can add a penalty to the objective that will implicitly enforce the constraints. However, this can significantly increase the Lipschitz constant of the objective, and that is why the authors take an alternative approach. First, they prove that, seen in the space of measures, the isotonic constraints correspond to dominating inequalities of the CDFs, which I guess is an intuitive result given the results known for the unconstrained case. For the discrete problems this adds another set of inequality constraints that have to be satisfied. Projection onto these constraints can be done using parametric max-flow among other techniques, so that the authors are able to achieve rates for this problem similar to those for the unconstrained one (Sections 4.2 and 4.3). How exactly this is done is not clear, and I would suggest the authors show how they reduce their problem to, say, parametric max-flow in Section 4.4, or at least in the supplementary.

The authors later go on to discuss improved discretization algorithms. I would like to point out that all these schemes are uniform, in the sense that the points are equally spaced. What the authors analyze is the number of points that you need under different smoothness assumptions. Under uniform bounds on the gradients, they construct a surrogate for the function that is submodular and whose minimization results in faster rates. However, it is hard to evaluate, as it requires the minimization of polynomials over box constraints; hence, I consider this algorithm to be of a more theoretical nature. Furthermore, the given guarantees, if I'm not mistaken, assume an exact evaluation of the surrogate. It is not clear what holds if we instead use approximate methods (e.g., SDP relaxations). Finally, I would like to remark that I did not read the proofs of the numerous convergence rates provided in the paper.

Questions / Comments
----
1. You project onto the constraints using parametric max-flow by solving a graph-cut problem with very large weights corresponding to E? How do you incorporate the box [0, 1] constraints?
2. Don't orthogonal projections and separable problems (Section 4.4) reduce to the same problem? Can't you use one parametric flow also for the separable problems? If yes, I would suggest presenting these together.
3. What is the purpose of Section 5.2? I cannot see how it is related to the discretization strategies.
4. l286 - Why only chain constraints? It is not clear from Section 4.2, as there you add one constraint for each edge in E.
5. Is $\Delta H$ in Fig. 3 the difference of the computed optimum between two consecutive discretizations?

Post-rebuttal: The rebuttal addressed all of my questions and comments. However, the more fundamental issues with the method that I have outlined (surrogate hard to evaluate, no guarantee under approximate surrogate evaluation) seem to hold, and that is why I will keep my score.
1. What is the focus of the paper regarding submodular minimization and its application?
2. What are the strengths of the proposed approach, particularly in dealing with ordering constraints?
3. Do you have any concerns or questions about the nonconvex loss function used in the paper, especially its robustness to corrupted data?
4. How does the reviewer assess the improvement of the discretization schemes and their impact on the overall performance?
5. Are there any limitations or tradeoffs in the proposed method that need further consideration?
Review
This paper studies continuous submodular minimization subject to ordering constraints. The main motivation is isotonic regression with a separable, non-convex loss function. The starting point is the relationship between submodular minimization and convex minimization over a space of measures. It is demonstrated that ordering constraints correspond to stochastic dominance, and algorithms are proposed to enforce these constraints during optimization. The basic approach is a simple way of discretizing the CDF corresponding to each variable. Then, improved discretization schemes are presented, depending on the availability of higher-order derivatives. Experimental results show that using a non-convex loss function can improve robustness to corrupted data in isotonic regression.

This paper is well-written, and makes a good set of technical contributions in showing how to incorporate ordering constraints and improve the discretization schemes. The experimental results nicely show the conditions under which the new techniques improve over standard methods.

A couple of questions:
1) Is there some intuition for why non-convex losses like the proposed logarithmic function are more robust to outliers than the squared or absolute-value loss?
2) It makes sense that higher-order derivatives would improve the convergence rate in terms of the number of iterations. However, this requires additional computation to get the Hessian (or even higher-order terms). Is the tradeoff worth it in terms of actual runtime?
NIPS
Title Efficient Algorithms for Non-convex Isotonic Regression through Submodular Optimization Abstract We consider the minimization of submodular functions subject to ordering constraints. We show that this potentially non-convex optimization problem can be cast as a convex optimization problem on a space of uni-dimensional measures, with ordering constraints corresponding to first-order stochastic dominance. We propose new discretization schemes that lead to simple and efficient algorithms based on zero-th, first, or higher order oracles; these algorithms also lead to improvements without isotonic constraints. Finally, our experiments show that non-convex loss functions can be much more robust to outliers for isotonic regression, while still being solvable in polynomial time. 1 Introduction Shape constraints such as ordering constraints appear everywhere in estimation problems in machine learning, signal processing and statistics. They typically correspond to prior knowledge, and are imposed for the interpretability of models, or to allow non-parametric estimation with improved convergence rates [16, 8]. In this paper, we focus on imposing ordering constraints into an estimation problem, a setting typically referred to as isotonic regression [4, 26, 22], and we aim to generalize the set of problems for which efficient (i.e., polynomial-time) algorithms exist. We thus focus on the following optimization problem: min x2[0,1]n H(x) such that 8(i, j) 2 E, xi > xj , (1) where E ⇢ {1, . . . , n}2 represents the set of constraints, which form a directed acyclic graph. For simplicity, we restrict x to the set [0, 1]n, but our results extend to general products of (potentially unbounded) intervals. As convex constraints, isotonic constraints are well-adapted to estimation problems formulated as convex optimization problems where H is convex, such as for linear supervised learning problems, with many efficient algorithms for separable convex problems [4, 26, 22, 30], which can thus be used as inner loops in more general convex problems by using projected gradient methods (see, e.g., [3]). In this paper, we show that another form of structure can be leveraged. We will assume that H is submodular, which is equivalent, when twice continuously differentiable, to having nonpositive cross second-order derivatives. This notably includes all (potentially non convex) separable functions (i.e., sums of functions that depend on single variables), but also many other examples (see Section 2). Minimizing submodular functions on continuous domains has been recently shown to be equivalent to a convex optimization problem on a space of uni-dimensional measures [2], and given that the functions x 7! (xj xi)+ are submodular for any > 0, it is natural that by using tending to +1, 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. we recover as well a convex optimization problem; the main contribution of this paper is to provide a simple framework based on stochastic dominance, for which we design efficient algorithms which are based on simple oracles on the function H (typically access to function values and derivatives). In order to obtain such algorithms, we go significantly beyond [2] by introducing novel discretization algorithms that also provide improvements without any isotonic constraints. 
More precisely, we make the following contributions: – We show in Section 3 that minimizing a submodular function with isotonic constraints can be cast as a convex optimization problem on a space of uni-dimensional measures, with isotonic constraints corresponding to first-order stochastic dominance. – On top of the naive discretization schemes presented in Section 4, we propose in Section 5 new discretization schemes that lead to simple and efficient algorithms based on zero-th, first, or higher order oracles. They go from requiring O(1/"3) = O(1/"2+1) function evaluations to reach a precision ", to O(1/"5/2) = O(1/"2+1/2) and O(1/"7/3) = O(1/"2+1/3). – Our experiments in Section 6 show that non-convex loss functions can be much more robust to outliers for isotonic regression. 2 Submodular Analysis in Continuous Domains In this section, we review the framework of [2] that shows how to minimize submodular functions using convex optimization. Definition. Throughout this paper, we consider a continuous function H : [0, 1]n ! R. The function H is said to be submodular if and only if [21, 29]: 8(x, y) 2 [0, 1]n ⇥ [0, 1]n, H(x) +H(y) > H(min{x, y}) +H(max{x, y}), (2) where the min and max operations are applied component-wise. If H is continuously twice differentiable, then this is equivalent to @ 2H @xi@xj (x) 6 0 for any i 6= j and x 2 [0, 1]n [29]. The cone of submodular functions on [0, 1]n is invariant by marginal strictly increasing transformations, and includes all functions that depend on a single variable (which play the role of linear functions for convex functions), which we refer to as separable functions. Examples. The classical examples are: (a) any separable function, (b) convex functions of the difference of two components, (c) concave functions of a positive linear combination, (d) negative log densities of multivariate totally positive distributions [17]. See Section 6 for a concrete example. Extension on a space of measures. We consider the convex set P([0, 1]) of Radon probability measures [24] on [0, 1], which is the closure (for the weak topology) of the convex hull of all Dirac measures. In order to get an extension, we look for a function defined on the set of products of probability measures µ 2 P([0, 1])n, such that if all µi, i = 1, . . . , n, are Dirac measures at points xi 2 [0, 1], then we have a function value equal to H(x1, . . . , xn). Note that P([0, 1])n is different from P([0, 1]n), which is the set of probability measures on [0, 1]n. For a probability distribution µi 2 P([0, 1]) defined on [0, 1], we can define the (reversed) cumulative distribution function Fµi : [0, 1] ! [0, 1] as Fµi(xi) = µi [xi, 1] . This is a non-increasing left-continuous function from [0, 1] to [0, 1], such that Fµi(0) = 1 and Fµi(1) = µi({1}). See illustrations in the left plot of Figure 1. We can then define the “inverse” cumulative function from [0, 1] to [0, 1] as F 1µi (ti) = sup{xi 2 [0, 1], Fµi(xi) > ti}. The function F 1µi is non-increasing and right-continuous, and such that F 1µi (1) = min supp(µi) and F 1 µi (0) = 1. Moreover, we have Fµi(xi) > ti , F 1µi (ti) > xi. The extension from [0, 1]n to the set of product probability measures is obtained by considering a single threshold t applied to all n cumulative distribution functions, that is: 8µ 2 P([0, 1])n, h(µ 1 , . . . , µn) = Z 1 0 H ⇥ F 1µ1 (t), . . . , F 1 µn (t) ⇤ dt. 
(3) We have the following properties when H is submodular: (a) it is an extension, that is, if for all i, µi is a Dirac at xi, then h(µ) = H(x); (b) it is convex; (c) minimizing h on P([0, 1])n and minimizing H on [0, 1]n is equivalent; moreover, the minimal values are equal and µ is a minimizer if and only if⇥ F 1µ1 (t), . . . , F 1 µn (t) ⇤ is a minimizer of H for almost all t2 [0, 1]. Thus, submodular minimization is equivalent to a convex optimization problem in a space of uni-dimensional measures. Note that the extension is defined on all tuples of measures µ = (µ 1 , . . . , µn) but it can equivalently be defined through non-increasing functions from [0, 1] to [0, 1], e.g., the representation in terms of cumulative distribution functions Fµi defined above (this representation will be used in Section 4 where algorithms based on the discretization of the equivalent obtained convex problem are discussed). 3 Isotonic Constraints and Stochastic Dominance In this paper, we consider the following problem: inf x2[0,1]n H(x) such that 8(i, j) 2 E, xi > xj , (4) where E is the edge set of a directed acyclic graph on {1, . . . , n} and H is submodular. We denote by X ⇢ Rn (not necessarily a subset of [0, 1]n) the set of x 2 Rn satisfying the isotonic constraints. In order to define an extension in a space of measures, we consider a specific order on measures on [0, 1], namely first-order stochastic dominance [20], defined as follows. Given two distributions µ and ⌫ on [0, 1], with (inverse) cumulative distribution functions Fµ and F⌫ , we have µ < ⌫, if and only if 8x 2 [0, 1], Fµ(x) > F⌫(x), or equivalently, 8t 2 [0, 1], F 1µ (t) > F 1⌫ (t). As shown in the right plot of Figure 1, the densities may still overlap. An equivalent characterization [19, 9] is the existence of a joint distribution on a vector (X,X 0) 2 R2 with marginals µ(x) and ⌫(x0) and such that X > X 0 almost surely1. We now prove the main proposition of the paper: Proposition 1 We consider the convex minimization problem: inf µ2P([0,1])n h(µ) such that 8(i, j) 2 E, µi < µj . (5) Problems in Eq. (4) and Eq. (5) have the same objective values. Moreover, µ is a minimizer of Eq. (5) if and only if F 1µ (t) is a minimizer of H of Eq. (4) for almost all t 2 [0, 1]. Proof We denote by M the set of µ 2 P([0, 1])n satisfying the stochastic ordering constraints. For any x 2 [0, 1]n that satisfies the constraints in Eq. (4), i.e., x 2 X \ [0, 1]n, the associated Dirac measures satisfy the constraint in Eq. (5). Therefore, the objective value M of Eq. (4) is greater or equal to the one M 0 of Eq. (5). Given a minimizer µ for the convex problem in Eq. (5), we have: M > M 0 = h(µ) = R 1 0 H ⇥ F 1µ1 (t), . . . , F 1 µn (t) ⇤ dt > R 1 0 Mdt = M. This shows the proposition by studying the equality cases above. 1Such a joint distribution may be built as the distribution of (F 1µ (T ), F 1⌫ (T )), where T is uniformly distributed in [0, 1]. Alternatively, we could add the penalty term P (i,j)2E R +1 1 (Fµj (z) Fµi(z))+dz, which corresponds to the unconstrained minimization of H(x) + P (i,j)2E(xj xi)+. For > 0 big enough2, this is equivalent to the problem above, but with a submodular function which has a large Lipschitz constant (and is thus harder to optimize with the iterative methods presented below). 4 Discretization algorithms Prop. 
1 shows that the isotonic regression problem with a submodular cost can be cast as a convex optimization problem; however, this is achieved in a space of measures, which cannot be handled directly computationally in polynomial time. Following [2], we consider a polynomial time and space discretization scheme of each interval [0, 1] (and not of [0, 1]n), but we propose in Section 5 a significant improvement that allows to reduce the number of discrete points significantly. All pseudo-codes for the algorithms are available in Appendix B. 4.1 Review of submodular optimization in discrete domains All our algorithms will end up minimizing approximately a submodular function F on {0, . . . , k 1}n, that is, which satisfies Eq. (2). Isotonic constraints will be added in Section 4.2. Following [2], this can be formulated as minimizing a convex function f# on the set of ⇢ 2 [0, 1]n⇥(k 1) so that for each i 2 {1, . . . , n}, (⇢ij)j2{1,...,k 1} is a non-increasing sequence (we denote by S this set of constraints) corresponding to the cumulative distribution function. For any feasible ⇢, a subgradient of f# may be computed by sorting all n(k 1) elements of the matrix ⇢ and computing at most n(k 1) values of F . An approximate minimizer of F (which exactly inherits approximation properties from the approximate optimality of ⇢) is then obtained by selecting the minimum value of F in the computation of the subgradient. Projected subgradient methods can then be used, and if F is the largest absolute difference in values of F when a single variable is changed by ±1, we obtain an "-minimizer (for function values) after t iterations, with " 6 nk F/ p t. The projection step is composed of n simple separable quadratic isotonic regressions with chain constraints in dimension k, which can be solved easily in O(nk) using the pool-adjacent-violator algorithm [4]. Computing a subgradient requires a sorting operation, which is thus O(nk log(nk)). See more details in [2]. Alternatively, we can minimize the strongly-convex f#(⇢) + 1 2 k⇢k2F on the set of ⇢ 2 Rn⇥(k 1) so that for each i, (⇢ij)j is a non-increasing sequence, that is, ⇢ 2 S (the constraints that ⇢ij 2 [0, 1] are dropped). We then get a minimizer z of F by looking for all i 2 {1, . . . , n} at the largest j 2 {1, . . . , k 1} such that ⇢ij > 0. We take then zi = j (and if no such j exists, zi = 0). A gap of " in the problem above, leads to a gap of p "nk for the original problem (see more details in [2]). The subgradient method in the primal, or Frank-Wolfe algorithm in the dual may be used for this problem. We obtain an "-minimizer (for function values) after t iterations, with " 6 F/t, which leads for the original submodular minimization problem to the same optimality guarantees as above, but with a faster algorithm in practice. See the detailed computations and comparisons in [2]. 4.2 Naive discretization scheme Following [2], we simply discretize [0, 1] by selecting the k values ik 1 or 2i+1 2k , for i 2 {0, . . . , k 1}. If the function H : [0, 1]n is L 1 -Lipschitz-continuous with respect to the ` 1 -norm, that is |H(x) H(x0)| 6 L 1 kx x0k 1 , the function F is (L 1 /k)-Lipschitz-continuous with respect to the ` 1 -norm (and thus we have F 6 L 1 /k above). Moreover, if F is minimized up to ", H is optimized up to "+ nL 1 /k. In order to take into account the isotonic constraints, we simply minimize with respect to ⇢ 2 [0, 1]n⇥(k 1) \ S, with the additional constraint that for all j 2 {1, . . . , k 1}, 8(a, b) 2 E, ⇢a,j > ⇢b,j . 
4.2 Naive discretization scheme

Following [2], we simply discretize $[0,1]$ by selecting the $k$ values $\frac{i}{k-1}$ or $\frac{2i+1}{2k}$, for $i \in \{0, \dots, k-1\}$. If the function $H : [0,1]^n \to \mathbb{R}$ is $L_1$-Lipschitz-continuous with respect to the $\ell_1$-norm, that is, $|H(x) - H(x')| \leq L_1 \|x - x'\|_1$, the function $F$ is $(L_1/k)$-Lipschitz-continuous with respect to the $\ell_1$-norm (and thus we have $\Delta F \leq L_1/k$ above). Moreover, if $F$ is minimized up to $\varepsilon$, then $H$ is optimized up to $\varepsilon + nL_1/k$. In order to take into account the isotonic constraints, we simply minimize with respect to $\rho \in [0,1]^{n \times (k-1)} \cap S$, with the additional constraint that for all $j \in \{1, \dots, k-1\}$, $\forall (a,b) \in E$, $\rho_{a,j} \geq \rho_{b,j}$. This corresponds to additional constraints $T \subset \mathbb{R}^{n \times (k-1)}$.

²A short calculation shows that when $H$ is differentiable, the first-order optimality condition (which is only necessary here) implies that if $\lambda$ is strictly larger than $n$ times the largest possible partial first-order derivative of $H$, the isotonic constraints have to be satisfied.

Following Section 4.1, we can either choose to solve the convex problem $\min_{\rho \in [0,1]^{n \times (k-1)} \cap S \cap T} f_\downarrow(\rho)$, or the strongly convex problem $\min_{\rho \in S \cap T} f_\downarrow(\rho) + \frac{1}{2}\|\rho\|_F^2$. In the two situations, after $t$ iterations, that is, $tnk$ accesses to values of $H$, we get a constrained minimizer of $H$ with approximation guarantee $nL_1/k + nL_1/\sqrt{t}$. Thus, in order to get a precision $\varepsilon$, it suffices to select $k \geq 2nL_1/\varepsilon$ and $t \geq 4n^2L_1^2/\varepsilon^2$, leading to an overall $8n^4L_1^3/\varepsilon^3$ accesses to function values of $H$, which is the same as obtained in [2] (except for an extra factor of $n$ due to a different definition of $L_1$).

4.3 Improved behavior for smooth functions

We consider the discretization points $\frac{i}{k-1}$ for $i \in \{0, \dots, k-1\}$, and we assume that all first-order (resp. second-order) partial derivatives are bounded by $L_1$ (resp. $L_2^2$). In the reasoning above, we may upper-bound the infimum of the discrete function in a finer way, going from $\inf_{x \in X} H(x) + nL_1/k$ to $\inf_{x \in X} H(x) + \frac{1}{2}n^2L_2^2/k^2$ (by doing a Taylor expansion around the global optimum, where the first-order terms are always zero, either because the partial derivative is zero or the deviation is zero). We now select $k \geq nL_2/\sqrt{\varepsilon}$, leading to a number of accesses to $H$ that scales as $4n^4L_1^2L_2/\varepsilon^{5/2}$. We thus gain a factor $\sqrt{\varepsilon}$ with the exact same algorithm, but under different assumptions.

4.4 Algorithms for the isotonic problem

Compared to plain submodular minimization, where we only need to project onto $S$, we must take into account the extra isotonic constraints, i.e., $\rho \in T$, and thus use more complex orthogonal projections.

Orthogonal projections. We now require the orthogonal projections onto $S \cap T$ or $[0,1]^{n \times (k-1)} \cap S \cap T$, which are themselves isotonic regression problems with $nk$ variables. If there are $m$ original isotonic constraints in Eq. (4), the number of isotonic constraints for the projection step is $O(nk + mk)$, which is typically $O(mk)$ if $m \geq n$, which we now assume. Thus, we can use existing parametric max-flow algorithms, which can solve these in $O(nmk^2 \log(nk))$ [13] or in $O(nmk^2 \log(n^2k/m))$ [11]. See in Appendix A a description of the reformulation of isotonic regression as a parametric max-flow problem, and the link with minimum cut. Following [7, Prop. 5.3], we incorporate the $[0,1]$ box constraints by first ignoring them, thus projecting onto the regular isotonic constraints, and then thresholding the result through $x \mapsto \max\{\min\{x, 1\}, 0\}$. Alternatively, we can explicitly consider a sequence of max-flow problems (with at most $\log(1/\varepsilon)$ of these, where $\varepsilon$ is the required precision) [28, 15]. We may also consider (approximate) alternating projection algorithms such as Dykstra's algorithm and its accelerated variants [6], since the set $S$ is easy to project onto, while, in some cases, such as chain isotonic constraints for the original problem, $T$ is easy to project onto as well. Finally, we could also use algorithms dedicated to special structures for isotonic regression (see [27]), in particular when our original set of isotonic constraints in Eq. (4) is a chain, so that the orthogonal projection corresponds to a two-dimensional grid [26]. In our experiments, we use a standard max-flow code [5] and the usual divide-and-conquer algorithms [28, 15] for parametric max-flow.
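The elementary building block in these projections is one-dimensional isotonic regression under chain constraints. A minimal sketch (our own illustration, assuming a plain squared-error projection) of the pool-adjacent-violators routine for projecting one row of $\rho$ onto a non-increasing sequence:

```python
import numpy as np

def project_nonincreasing(v):
    """Euclidean projection of v onto {u : u_1 >= u_2 >= ... >= u_k} in O(k)."""
    y = v[::-1].astype(float)              # run PAV for non-decreasing on reversed data
    means, sizes = [], []                  # block means and block sizes
    for yi in y:
        means.append(yi); sizes.append(1)
        while len(means) > 1 and means[-2] > means[-1]:   # merge violating blocks
            tot = sizes[-2] + sizes[-1]
            m = (sizes[-2] * means[-2] + sizes[-1] * means[-1]) / tot
            means[-2:] = [m]; sizes[-2:] = [tot]
    out = np.concatenate([np.full(s, m) for m, s in zip(means, sizes)])
    return out[::-1]

print(project_nonincreasing(np.array([0.2, 0.9, 0.5, 0.5, 0.1])))
# [0.55 0.55 0.5  0.5  0.1]: the violating pair is averaged into one block
```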
Separable problems. The function $f_\downarrow$ from Section 4.2 is then a linear function of the form $f_\downarrow(\rho) = \mathrm{tr}(w^\top \rho)$, and a single max-flow algorithm can be used. For these separable problems, the alternative strongly convex problem of minimizing $f_\downarrow(\rho) + \frac{1}{2}\|\rho\|_F^2$ becomes that of minimizing $\min_{\rho \in S \cap T} \frac{1}{2}\|\rho + w\|_F^2$, which is simply the problem of projecting onto the intersection of two convex sets, for which an accelerated Dykstra algorithm may be used [6], with a convergence rate in $O(1/t^2)$ after $t$ iterations. Each step is $O(kn)$ for projecting onto $S$, while projecting onto $T$ amounts to $k$ parametric network flows with $n$ variables and $m$ constraints, in $O(knm \log n)$ for the general case and $O(kn)$ for chains and rooted trees [4, 30]. In our experiments in Section 6, we show that Dykstra's algorithm converges quickly for separable problems. Note that when the underlying losses are convex³, Dykstra converges in a single iteration. Indeed, in this situation, the sequences $(-w_{ij})_j$ are non-increasing, and isotonic regression along one direction preserves monotonicity in the other direction, which implies that after two alternating projections, the algorithm has converged to the optimal solution.

³This is a situation where direct algorithms such as the ones by [22] are much more efficient than our discretization schemes.

Alternatively, for the non-strongly-convex formulation, this is a single network flow problem with $n(k-1)$ nodes and $mk$ constraints, thus in $O(nmk^2 \log(nk))$ [25]. When $E$ corresponds to a chain, this is a two-dimensional grid, with an algorithm in $O(n^2k^2)$ [26]. For a precision $\varepsilon$, and thus $k$ proportional to $n/\varepsilon$ with the assumptions of Section 4.2, this makes a number of function calls for $H$ equal to $O(kn) = O(n^2/\varepsilon)$ and a running-time complexity of $O(n^3m/\varepsilon^2 \cdot \log(n^2/\varepsilon))$; for smooth functions, as shown in Section 4.3, we get $k$ proportional to $n/\sqrt{\varepsilon}$ and thus an improved behavior.
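For the chain-constrained separable case, Dykstra's alternating projections reduce to row-wise and column-wise PAV steps. A sketch (our own illustration, using scikit-learn's PAV implementation rather than the paper's max-flow code, and the plain rather than accelerated variant):

```python
import numpy as np
from sklearn.isotonic import isotonic_regression

def proj_S(M):   # projection onto S: each row non-increasing
    return np.stack([isotonic_regression(r, increasing=False) for r in M])

def proj_T(M):   # projection onto T: each column non-increasing (chain case)
    return proj_S(M.T).T

def dykstra(w, n_iter=100):
    """min_{rho in S ∩ T} 0.5 * ||rho + w||_F^2 by alternating PAV projections."""
    rho = -w.astype(float)                          # the point being projected
    p = np.zeros_like(rho); q = np.zeros_like(rho)  # Dykstra correction terms
    for _ in range(n_iter):
        y = proj_S(rho + p); p = rho + p - y
        rho = proj_T(y + q); q = y + q - rho
    return rho
```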
5 Improved discretization algorithms

We now consider a different discretization scheme that can take advantage of access to higher-order derivatives. We divide $[0,1]$ into $k$ disjoint pieces $A_0 = [0, \frac{1}{k})$, $A_1 = [\frac{1}{k}, \frac{2}{k})$, ..., $A_{k-1} = [\frac{k-1}{k}, 1]$. This defines a new function $\tilde{H} : \{0, \dots, k-1\}^n \to \mathbb{R}$, defined only for elements $z \in \{0, \dots, k-1\}^n$ that satisfy the isotonic constraints, i.e., $z \in \{0, \dots, k-1\}^n \cap X$:

$$\tilde{H}(z) = \min_{x \in \prod_{i=1}^n A_{z_i}} H(x) \quad \text{such that} \quad \forall (i,j) \in E,\ x_i \geq x_j. \qquad (6)$$

The function $\tilde{H}(z)$ is equal to $+\infty$ if $z$ does not satisfy the isotonic constraints.

Proposition 2 The function $\tilde{H}$ is submodular, and minimizing $\tilde{H}(z)$ for $z \in \{0, \dots, k-1\}^n$ such that $\forall (i,j) \in E,\ z_i \geq z_j$ is equivalent to minimizing Eq. (4).

Proof We consider $z$ and $z'$ that satisfy the isotonic constraints, with minimizers $x$ and $x'$ in the definition in Eq. (6). We have $\tilde{H}(z) + \tilde{H}(z') = H(x) + H(x') \geq H(\min\{x, x'\}) + H(\max\{x, x'\}) \geq \tilde{H}(\min\{z, z'\}) + \tilde{H}(\max\{z, z'\})$. Thus $\tilde{H}$ is submodular on the sub-lattice $\{0, \dots, k-1\}^n \cap X$.

Note that in order to minimize $\tilde{H}$, we need to make sure that we only access $\tilde{H}$ for elements $z$ that satisfy the isotonic constraints, that is, $\rho \in S \cap T$ (which our algorithms impose).

5.1 Approximation from high-order smoothness

The main idea behind our discretization scheme is to use high-order smoothness to approximate, for any required $z$, the function value $\tilde{H}(z)$. If we assume that $H$ is $q$-times differentiable, with uniform bounds $L_r^r$ on all $r$-th order derivatives, then the $(q-1)$-th order Taylor expansion of $H$ around $y$ is equal to $H_q(x|y) = H(y) + \sum_{r=1}^{q-1} \sum_{|\alpha|=r} \frac{1}{\alpha!} (x-y)^\alpha H^{(\alpha)}(y)$, where $\alpha \in \mathbb{N}^n$ and $|\alpha|$ is the sum of its elements, $(x-y)^\alpha$ is the product of the components $(x_i - y_i)^{\alpha_i}$, $\alpha!$ is the product of all factorials of the elements of $\alpha$, and $H^{(\alpha)}(y)$ is the partial derivative of $H$ of order $\alpha_i$ in each variable $i$.

We thus approximate $\tilde{H}(z)$, for any $z$ that satisfies the isotonic constraints (i.e., $z \in X$), by $\hat{H}(z) = \min_{x \in (\prod_{i=1}^n A_{z_i}) \cap X} H_q\big(x \,\big|\, \frac{z + 1/2}{k}\big)$. We have, for any $z$, $|\tilde{H}(z) - \hat{H}(z)| \leq (nL_q/(2k))^q/q!$. Moreover, when moving a single element of $z$ by one, the maximal deviation is $L_1/k + 2(nL_q/(2k))^q/q!$. If $\hat{H}$ is submodular, then the same reasoning as in Section 4.2 leads to an approximation error of $(nk/\sqrt{t})\big[L_1/k + 2(nL_q/(2k))^q/q!\big]$ after $t$ iterations, on top of $(nL_q/(2k))^q/q!$; thus, with $t \geq 16n^2L_1^2/\varepsilon^2$ and $k \geq (q!\,\varepsilon/2)^{-1/q}\,nL_q/2$ (assuming $\varepsilon$ small enough so that $t \geq 16n^2k^2$), this leads to a number of accesses to the $(q-1)$-th order oracle equal to $O(n^4L_1^2L_q/\varepsilon^{2+1/q})$. We thus get an improvement in the power of $\varepsilon$, which tends to $\varepsilon^{-2}$ for infinitely smooth problems. Note that when $q = 2$ we recover the same rate as in Section 4.3 (with the same assumptions but a slightly different algorithm).

However, unless $q = 1$, the function $\hat{H}(z)$ is not submodular, and we cannot directly apply the bounds for convex optimization of the extension. We show in Appendix D that the bound still holds for $q > 1$ by using the special structure of the convex problem.

What remains unknown is the computation of $\hat{H}$, which requires minimizing polynomials on a small cube. We can always use the generic algorithms from Section 4.2 for this, which do not access extra function values but can be slow. For quadratic functions, we can use a convex relaxation which is not tight but already allows strong improvements with much faster local steps, and which we now present; see the pseudo-code in Appendix B. In any case, using expansions of higher order is only practically useful in situations where function evaluations are expensive.
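A small sketch (our own illustration; the oracle interface is an assumption) of the surrogate construction for the second-order Taylor expansion, i.e., the case $q = 3$ in the notation above, which yields the quadratic sub-problems of the next section:

```python
import numpy as np

def taylor_surrogate(H, grad, hess, z, k):
    """Quadratic model H_3(. | y) at the cell center y = (z + 1/2)/k of A_z;
    minimizing it over the cell (intersected with X) approximates H_hat(z)."""
    y = (np.asarray(z, dtype=float) + 0.5) / k
    H0, g, A = H(y), grad(y), hess(y)
    def model(x):
        d = np.asarray(x, dtype=float) - y
        return H0 + g @ d + 0.5 * d @ A @ d   # value + linear + quadratic terms
    return model
```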
5.2 Quadratic problems

In this section, we consider the minimization of a quadratic submodular function $H(x) = \frac{1}{2}x^\top A x + c^\top x$ (thus with all off-diagonal elements of $A$ non-positive) on $[0,1]^n$, subject to isotonic constraints $x_i \geq x_j$ for all $(i,j) \in E$. This is the sub-problem required in Section 5.1 when using second-order Taylor expansions. It could be solved iteratively (and approximately) with the algorithm from Section 4.2; in this section, we consider a semidefinite relaxation which is tight for certain problems ($A$ positive semidefinite, $c$ non-positive, or $A$ with non-positive diagonal elements), but not in general (we have found counter-examples, but it is most often tight).

The relaxation is based on considering the set of $(Y, y) \in \mathbb{R}^{n \times n} \times \mathbb{R}^n$ such that there exists $x \in [0,1]^n \cap X$ with $Y = xx^\top$ and $y = x$. Our problem is thus equivalent to minimizing $\frac{1}{2}\mathrm{tr}(AY) + c^\top y$ such that $(Y, y)$ is in the convex hull $\mathcal{Y}$ of this set, which is NP-hard to characterize [10]. However, following ideas from [18], we can find a simple relaxation by considering the following constraints: (a) for all $i \neq j$,

$$\begin{pmatrix} Y_{ii} & Y_{ij} & y_i \\ Y_{ij} & Y_{jj} & y_j \\ y_i & y_j & 1 \end{pmatrix}$$

is positive semidefinite; (b) for all $i \neq j$, $Y_{ij} \leq \inf\{y_i, y_j\}$, which corresponds to $x_i x_j \leq \inf\{x_i, x_j\}$ for any $x \in [0,1]^n$; (c) for all $i$, $Y_{ii} \leq y_i$, which corresponds to $x_i^2 \leq x_i$; and (d) for all $(i,j) \in E$: $y_i \geq y_j$, $Y_{ii} \geq Y_{jj}$, $Y_{ij} \geq \max\{Y_{jj},\ y_j - y_i + Y_{ii}\}$ and $Y_{ij} \leq \min\{Y_{ii},\ y_i - y_j + Y_{jj}\}$, which correspond to $x_i \geq x_j$, $x_i^2 \geq x_j^2$, $x_i x_j \geq x_j^2$, $x_i(1-x_i) \leq x_i(1-x_j)$, $x_i x_j \leq x_i^2$, and $x_i(1-x_j) \geq x_j(1-x_j)$. This leads to a semidefinite program which provides a lower bound on the optimal value of the problem. See Appendix E for a proof of tightness for special cases and a counter-example for tightness in general.

6 Experiments

We consider experiments aiming to show (a) that the possibility of minimizing submodular functions with isotonic constraints opens up new applications and (b) that the new discretization algorithms are faster than the naive one.

Robust isotonic regression. Given some $z \in \mathbb{R}^n$, we consider a separable function $H(x) = \frac{1}{n}\sum_{i=1}^n G(x_i - z_i)$ with various possibilities for $G$: (a) the square loss $G(t) = \frac{1}{2}t^2$, (b) the absolute loss $G(t) = |t|$, and (c) a logarithmic loss $G(t) = \frac{\kappa^2}{2}\log(1 + t^2/\kappa^2)$, which is the negative log-density of a Student distribution and non-convex. The non-convexity of the cost function, and the fact that it has vanishing derivatives for large values, make it a good candidate for robust estimation [12]. The first two losses may be dealt with by methods for separable convex isotonic regression [22, 30], but the non-convex loss can only be dealt with exactly by the new optimization routine that we present; majorization-minimization algorithms [14] based on the concavity of $G$ as a function of $t^2$ can be used with such non-convex losses, but as shown below, they converge to bad local optima. For simplicity, we consider chain constraints $1 \geq x_1 \geq x_2 \geq \cdots \geq x_n \geq 0$. We consider two set-ups: (a) a separable set-up where maximum-flow algorithms can be used directly (with $n = 200$), and (b) a general submodular set-up (with $n = 25$ and $n = 200$), where we add a smoothness penalty $\frac{\lambda}{2}\sum_{i=1}^{n-1}(x_i - x_{i+1})^2$, which is submodular (but not separable).

Data generation. We generate the data $z \in \mathbb{R}^n$, with $n = 200$, as follows: we first generate a simple decreasing function of $i \in \{1, \dots, n\}$ (here an affine function); we then perturb this ground truth by (a) adding some independent noise and (b) corrupting the data by changing a random subset of the $n$ values by the application of another function which is increasing (see Figure 2, left). This is an adversarial perturbation, while the independent noise is not adversarial; the presence of the adversarial noise makes the problem harder as the proportion of corrupted data increases.

Optimization of separable problems with maximum-flow algorithms. We solve the discretized version by a single maximum-flow problem of size $nk$. We compare the various losses for $k = 1000$ on data which lies along a decreasing line (plus noise), but is corrupted (i.e., replaced, for a certain proportion) by data along an increasing line. See an example in the left plot of Figure 2 for 50% corrupted data. We see that the square loss is highly non-robust, while the (still convex) absolute loss is slightly more robust; the robust non-convex loss still approximates the decreasing function correctly with 50% of corrupted data when optimized globally, while the method with no guarantee (based on majorization-minimization, dashed line) does not converge to an acceptable solution.
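For reference, the no-guarantee majorization-minimization baseline just mentioned can be sketched as follows (our own illustration; the weighted PAV routine is re-implemented here to keep the sketch self-contained). Since $G$ is concave in $t^2$, each MM step replaces $G$ by its tangent quadratic, which yields a weighted isotonic regression:

```python
import numpy as np

def weighted_pav_nonincreasing(z, w):
    """argmin_x sum_i w_i (x_i - z_i)^2 s.t. 1 >= x_1 >= ... >= x_n >= 0."""
    y, wt = z[::-1].astype(float), w[::-1].astype(float)
    means, weights, sizes = [], [], []
    for yi, wi in zip(y, wt):
        means.append(yi); weights.append(wi); sizes.append(1)
        while len(means) > 1 and means[-2] > means[-1]:   # merge violating blocks
            tot = weights[-2] + weights[-1]
            m = (weights[-2] * means[-2] + weights[-1] * means[-1]) / tot
            means[-2:] = [m]; weights[-2:] = [tot]
            sizes[-2:] = [sizes[-2] + sizes[-1]]
    x = np.concatenate([np.full(s, m) for m, s in zip(means, sizes)])
    return np.clip(x[::-1], 0.0, 1.0)      # box constraints via final clipping

def mm_robust_isotonic(z, kappa, n_iter=50):
    """MM for G(t) = (kappa^2/2) log(1 + t^2/kappa^2): may hit local optima."""
    x = weighted_pav_nonincreasing(z, np.ones_like(z))  # square-loss warm start
    for _ in range(n_iter):
        w = 1.0 / (1.0 + (x - z) ** 2 / kappa ** 2)     # tangent majorizer weights
        x = weighted_pav_nonincreasing(z, w)
    return x
```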
In Appendix C, we show additional examples where it is robust up to 75% corruption. In the right plot of Figure 2, we also show the robustness to an increasing proportion of outliers (for the same type of data as for the left plot), by plotting the mean-squared error in log-scale, averaged over 20 replications. Overall, this shows the benefits of non-convex isotonic regression with guaranteed global optimization, even for large proportions of corrupted data.

Optimization of separable problems with the pool-adjacent-violators (PAV) algorithm. As shown in Section 4.2, discretized separable submodular optimization corresponds to the orthogonal projection of a matrix onto the intersection of chain isotonic constraints in each row and isotonic constraints in each column equal to the original set of isotonic constraints (in these simulations, these are also chain constraints). This can be done by Dykstra's alternating projection algorithm or its accelerated version [6], for which each projection step can be performed with the PAV algorithm, because each of them corresponds to chain constraints. In the left plot of Figure 3, we show the difference in function values (in log-scale) for various discretization levels (defined by the integer $k$, spaced by $1/4$ in base-10 logarithm), as a function of the number of iterations (averaged over 20 replications). For large $k$ (small difference of function values), we see a spacing between the ends of the plots of approximately $1/2$, highlighting the dependence in $1/k^2$ of the final error on the discretization level $k$, as our analysis in Section 4.3 suggests.

Effect of the discretization for separable problems. In order to highlight the effect of discretization and its interplay with the differentiability properties of the function to minimize, we consider, in the middle plot of Figure 3, the distance in function values after full optimization of the discrete submodular function for various values of $k$. We see that for the simple smooth function (quadratic loss) we have a decay in $1/k^2$, while for the simple non-smooth function (absolute loss) we have a final decay in $1/k$, as predicted by our analysis. For the logarithm-based loss, whose smoothness constant depends on $\kappa$: when $\kappa$ is large, it behaves like a smooth function immediately, while for smaller $\kappa$, $k$ needs to be large enough to reach that behavior.

Non-separable problems. We consider adding a smoothness penalty to encode the prior knowledge that values should be decreasing and close to each other. In Appendix C, we show the effect of adding a smoothness prior (for $n = 200$): it leads to better estimation. In the right plot of Figure 3, we show the effect of various discretization schemes (for $n = 25$), from order 0 (naive discretization) to orders 1 and 2 (our new schemes based on Taylor expansions from Section 5.1), and we plot the difference in function values after 50 steps of subgradient descent: in each plot, the quantity $\Delta H$ is equal to $H(x_k^*) - H^*$, where $x_k^*$ is an approximate minimizer of the discretized problem with $k$ values and $H^*$ is the minimum of $H$ (taking into account the isotonic constraints). As outlined in our analysis, the first-order scheme does not help because our function has bounded Hessians, while the second-order scheme helps significantly.
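The second-order scheme relies on the quadratic sub-problems of Section 5.2. A hedged sketch of the corresponding semidefinite relaxation in CVXPY (our own illustration; the solver choice and toy data are assumptions, and the constraints are the (a)-(d) listed above):

```python
import cvxpy as cp
import numpy as np

def sdp_lower_bound(A, c, E):
    """Lower bound on min 0.5 x'Ax + c'x over [0,1]^n with constraints E."""
    n = len(c)
    Y = cp.Variable((n, n), symmetric=True)
    y = cp.Variable(n)
    cons = [y >= 0, y <= 1, cp.diag(Y) <= y]            # box and constraint (c)
    for i in range(n):
        for j in range(i + 1, n):
            cons += [Y[i, j] <= y[i], Y[i, j] <= y[j]]  # constraint (b)
            cons += [cp.bmat([[Y[i, i], Y[i, j], y[i]],
                              [Y[i, j], Y[j, j], y[j]],
                              [y[i],    y[j],    1.0]]) >> 0]   # constraint (a)
    for (i, j) in E:                                    # constraints (d)
        cons += [y[i] >= y[j], Y[i, i] >= Y[j, j],
                 Y[i, j] >= Y[j, j], Y[i, j] >= y[j] - y[i] + Y[i, i],
                 Y[i, j] <= Y[i, i], Y[i, j] <= y[i] - y[j] + Y[j, j]]
    prob = cp.Problem(cp.Minimize(0.5 * cp.trace(A @ Y) + c @ y), cons)
    prob.solve(solver=cp.SCS)
    return prob.value

# Tiny chain example x_1 >= x_2 with a submodular quadratic (off-diagonals <= 0).
A = np.array([[1.0, -0.5], [-0.5, 1.0]])
c = np.array([-0.2, -0.1])
print(sdp_lower_bound(A, c, [(0, 1)]))
```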
7 Conclusion

In this paper, we have shown how submodularity can be leveraged to obtain polynomial-time algorithms for isotonic regression with a submodular cost, based on convex optimization in a space of measures; although based on convexity arguments, our algorithms apply to all separable non-convex functions. The final algorithms are based on discretization, with a new scheme that also provides improvements based on smoothness (including without isotonic constraints). Our framework is worth extending in the following directions: (a) we currently consider a fixed discretization; it would be advantageous to consider adaptive schemes, potentially improving the dependence on the number of variables $n$ and the precision $\varepsilon$; (b) other shape constraints can be considered in a similar submodular framework, such as $x_i x_j \geq 0$ for certain pairs $(i,j)$; (c) a direct convex formulation without discretization could probably be found for quadratic programming with submodular costs (which are potentially non-convex but solvable in polynomial time); (d) a statistical study of isotonic regression with adversarial corruption could now rely on formulations with polynomial-time algorithms.

Acknowledgements We acknowledge support from the European Research Council (grant SEQUOIA 724063).
1. What is the focus and contribution of the paper on continuous submodular minimization?
2. What are the strengths and weaknesses of the proposed approach, particularly in its experimental evaluation and comparison to prior works?
3. How does the reviewer assess the clarity, quality, originality, and significance of the paper's content?
4. What is the relationship between isotonic constraints and other constraint types, such as matroid constraints and down-closed convex constraints?
5. Were there any typos or errors in the review that need correction?
Review
Review

This paper extends results of [1] on continuous submodular minimization by adding isotonic constraints to the objective function. The authors first apply the discretization procedure in [1] followed by projection onto the isotonic constraint set (e.g., max-flow), then propose to incorporate isotonic constraints into a refined discretization. Empirical results show increased robustness to outliers on synthetic data.

Quality: The arguments appear to be correct, but the experimental evaluation is not very detailed. For example, only small synthetic examples with chain constraints are used, and the only comparison of naive vs. improved discretization is Figure 3 (right), which does not verify the improved convergence rates. Moreover, the paper does not compare against previous work (separable convex isotonic regression [2]) where applicable.

Clarity: While the paper reads well at a sentence level, it is often unclear where the exposition ends and the contributions begin. I think the paper would be much clearer if all of Sections 4-5 followed the format of Section 3, with Proposition/Theorem statements followed by proofs.

Originality: While the combination of submodular minimization and isotonic constraints seems novel, this paper builds heavily on the results of [1]. The proofs of key propositions are less than 6 lines long, which suggests that the theory contribution is a very straightforward extension.

Significance: I am unsure of the impact of these results, given other recent work on constrained continuous submodular and DR-submodular minimization [3]. This is especially relevant because the current paper considers special cases of separable functions and quadratic functions. The authors have not clearly motivated the case for continuous submodular + isotonic constraints, and what advantages it has over other formulations.

Question: How are isotonic constraints related to matroid constraints and/or down-closed convex constraints?

Typos:
Line 116 "Review of optimization of submodular optimization"
Line 126 "When a single variables"
Line 257 "dicretization"

[1] Bach. Submodular functions: from discrete to continuous domains. Mathematical Programming, 2018.
[2] Luss and Rosset. Generalized Isotonic Regression, 2014.
[3] Staib and Jegelka. Robust Budget Allocation via Continuous Submodular Functions, 2017.

------------ UPDATE: While the author response addressed some of the reviewers' concerns regarding clarity, I agree with Reviewers 1 and 2 that the paper should be rejected
NIPS
1. What is the focus of the paper regarding continuous submodular minimization?
2. What are the strengths of the proposed algorithm, particularly in leveraging isotonic structure?
3. Do you have any concerns or questions about the practical relevance of the techniques in the paper?
4. How does the paper contribute to the broader topic of nonconvex optimization in machine learning?
5. Are there any minor issues or suggestions you have for improving the paper?
Review
Review

This paper considers the problem of continuous submodular minimization with box constraints and extra isotonic constraints, motivated by nonconvex isotonic regression. The paper gives an elegant algorithm for solving these constrained problems, by leveraging isotonic structure that is already present in the algorithms for continuous submodular minimization with box constraints from [1]. Beyond generalizing the approach of [1] to these constrained problems, the paper further shows that an extra smoothness condition on the objective yields better dependence on the suboptimality criterion, for free, than thought before. Higher-order smoothness can yield even better dependence. Finally, the paper shows the practical relevance of these techniques via a seemingly simple robust isotonic regression problem where natural convex formulations fail. A different (Student-t) model succeeds, but the objective is nonconvex; heuristics give poor performance, while the techniques from the paper solve the problem optimally and yield good performance on the regression task.

Overall I think the paper has clean, well-executed theory and a motivating practical problem. Continuous submodular optimization is an important topic relevant to the NIPS community, as it presents a class of tractable nonconvex problems of use in machine learning applications (like nonconvex isotonic regression). Within this area of work, the paper is original and significant: while continuous submodular _maximization_ has seen much activity and many positive results in recent years, continuous submodular _minimization_ (this paper) seems more elusive (beyond this paper I can only think of Bach '18 [1] and Staib and Jegelka '17, versus a multitude of maximization papers). This paper grants to us a new class of tractable constrained nonconvex problems. The paper is also clearly written and was a joy to read.

I do have a few minor concerns/questions -- these are all practical considerations as I think the math is clean and not fluffy:
- what is the wall-clock runtime like?
- is it obvious that majorization-minimization is the most competitive alternative? (e.g. vs running projected gradient descent from random initialization)
- I find the practical benefit of extra Taylor expansion terms in the approximation somewhat suspect -- it seems like you have to do a lot of extra work (both algorithmically and just by having to query q different derivative oracles) to get the improvement in epsilon dependence. It seems the fancy discretization was not really even used in the experiments (in "Non-separable problems" the authors compare the quality of approximations, but they seem to just use the usual discretization for the main regression experiments and just note that smooth objectives automatically get better approximation). Thus I am skeptical that "the new discretization algorithms are faster than the naive one." However the observation that smoothness implies better speed for free is still important.

Overall, I think this is a solid paper with significant theoretical and algorithmic contribution, and it is also well-written. Some aspects of the experiments could be strengthened, but still I think this is a clear accept.

minor comments:
- the SDP relaxation is based on [15] which should be cited in the main text (it is cited in the appendix)
- the point that \hat H is not technically submodular but we can still approximately solve probably should be stated more formally
NIPS
Title BERT Loses Patience: Fast and Robust Inference with Early Exit Abstract In this paper, we propose Patience-based Early Exit, a straightforward yet effective inference method that can be used as a plug-and-play technique to simultaneously improve the efficiency and robustness of a pretrained language model (PLM). To achieve this, our approach couples an internal classifier with each layer of a PLM and dynamically stops inference when the intermediate predictions of the internal classifiers remain unchanged for a pre-defined number of steps. Our approach improves inference efficiency as it allows the model to make a prediction with fewer layers. Meanwhile, experimental results with an ALBERT model show that our method can improve the accuracy and robustness of the model by preventing it from overthinking and by exploiting multiple classifiers for prediction, yielding a better accuracy-speed trade-off compared to existing early exit methods.2 1 Introduction In Natural Language Processing (NLP), pretraining and fine-tuning have become the new norm for many tasks. Pretrained language models (PLMs) (e.g., BERT [1], XLNet [2], RoBERTa [3], ALBERT [4]) contain many layers and millions or even billions of parameters, making them computationally expensive and inefficient regarding both memory consumption and latency. This drawback hinders their application in scenarios where inference speed and computational costs are crucial. Another bottleneck of overparameterized PLMs that stack dozens of Transformer layers is the “overthinking” problem [5] during their decision-making process. That is, for many input samples, their shallow representations at an earlier layer are adequate to make a correct classification, whereas the representations in the final layer may be distracted by over-complicated or irrelevant features that do not generalize well. The overthinking problem in PLMs leads to wasted computation, hinders model generalization, and may also make them vulnerable to adversarial attacks [6]. In this paper, we propose a novel Patience-based Early Exit (PABEE) mechanism to enable models to stop inference dynamically. PABEE is inspired by the widely used Early Stopping [7, 8] strategy for model training. It enables better input-adaptive inference of PLMs to address the aforementioned limitations. Specifically, our approach couples an internal classifier with each layer of a PLM and dynamically stops inference when the intermediate predictions of the internal classifiers remain unchanged for t times consecutively (see Figure 1b), where t is a pre-defined patience. We first show that our method is able to improve the accuracy compared to conventional inference under certain assumptions. Then we conduct extensive experiments on the GLUE benchmark and show that PABEE outperforms existing prediction probability distribution-based exit criteria by a large margin. In addition, PABEE can simultaneously improve inference speed and adversarial robustness of the original model while retaining or even improving its original accuracy with minor additional effort in terms of model size and training time. (∗Equal contribution. Work done during these two authors’ internship at Microsoft Research Asia. 2 Code available at https://github.com/JetRunner/PABEE.)
Also, our method can dynamically adjust the accuracy-efficiency trade-off to fit different devices and resource constraints by tuning the patience hyperparameter without retraining the model, which is favored in real-world applications [9]. Although we focus on PLMs in this paper, we have also conducted experiments on image classification tasks with the popular ResNet [10] as the backbone model and present the results in Appendix A to verify the generalization ability of PABEE. To summarize, our contribution is two-fold: (1) We propose Patience-based Early Exit, a novel and effective inference mechanism, and demonstrate its feasibility for improving both the efficiency and the accuracy of deep neural networks with theoretical analysis. (2) Our empirical results on the GLUE benchmark highlight that our approach can simultaneously improve the accuracy and robustness of a competitive ALBERT model, while speeding up inference across different tasks with trivial additional training resources in terms of both time and parameters. 2 Related Work Existing research on improving the efficiency of deep neural networks can be categorized into two streams: (1) Static approaches design compact models or compress heavy models, and the models remain static for all instances at inference (i.e., every input goes through the same layers); (2) Dynamic approaches allow the model to choose different computational paths for different instances at inference time. In this way, simpler inputs usually require less computation to make predictions. Our proposed PABEE falls into the second category. Static Approaches: Compact Network Design and Model Compression Many lightweight neural network architectures have been specifically designed for resource-constrained applications, including MobileNet [11], ShuffleNet [12], EfficientNet [13], and ALBERT [4], to name a few. For model compression, Han et al. [14] first proposed to sparsify deep models by removing non-significant synapses and then re-training to restore performance. Weight Quantization [15] and Knowledge Distillation [16] have also proved to be effective for compressing neural models. More recently, studies have employed Knowledge Distillation [17–19], Weight Pruning [20–22], and Module Replacing [23] to accelerate PLMs. Dynamic Approaches: Input-Adaptive Inference A parallel line of research for improving the efficiency of neural networks is to enable adaptive inference for various input instances. Adaptive Computation Time [24, 25] proposed to use a trainable halting mechanism to perform input-adaptive inference. However, training the halting model requires extra effort and also introduces additional parameters and inference cost. To alleviate this problem, BranchyNet [26] calculated the entropy of the prediction probability distribution as a proxy for the confidence of branch classifiers to enable early exit. Shallow-Deep Nets [5] leveraged the softmax scores of predictions of branch classifiers to mitigate the overthinking problem of DNNs. More recently, Hu et al. [27] leveraged this approach in adversarial training to improve the adversarial robustness of DNNs. In addition, existing approaches [24, 28] trained separate models to determine whether to pass through or skip each layer. Very recently, FastBERT [29] and DeeBERT [30] adapted the confidence-based BranchyNet [26] for PLMs, while RightTool [31] leveraged the same early-exit criterion as in the Shallow-Deep Network [5]. However, Schwartz et al.
[31] recently revealed that prediction-probability-based methods often lead to a substantial performance drop compared to an oracle that identifies the smallest model needed to solve a given instance. In addition, these methods only support classification and leave out regression, which limits their applications. Different from recent work that directly employs existing efficient inference methods on top of PLMs, PABEE is a novel early-exit criterion that captures the inner agreement between earlier and later internal classifiers and exploits multiple classifiers for inference, leading to better accuracy. 3 Patience-based Early Exit Patience-based Early Exit (PABEE) is a plug-and-play method that works well with minimal adjustment to training. 3.1 Motivation We first conduct experiments to investigate the overthinking problem in PLMs. As shown in Figure 2b, we illustrate the prediction distribution entropy [26] and the error rate of the model on the development set as more layers join the prediction. Although the model becomes more “confident” (lower entropy indicates higher confidence in BranchyNet [26]) with its prediction as more layers join, the actual error rate instead increases after 10 layers. This phenomenon was discovered and named “overthinking” by Kaya et al. [5]. Similarly, as shown in Figure 2a, after 2.5 epochs of training, the model continues to improve its accuracy on the training set but begins to deteriorate on the development set. This is the well-known overfitting problem, which can be resolved by applying an early stopping mechanism [7, 8]. From this aspect, overfitting in training and overthinking in inference are naturally alike, inspiring us to adopt an approach similar to early stopping for inference. 3.2 Inference The inference process of PABEE is illustrated in Figure 1b. Formally, we define a common inference process as the input instance $x$ going through layers $L_1 \ldots L_n$ and the classifier/regressor $C_n$ to predict a class label distribution $y$ (for classification) or a value $y$ (for regression; we assume the output dimension is 1 for brevity). We couple an internal classifier/regressor $C_1 \ldots C_{n-1}$ with each of the layers $L_1 \ldots L_{n-1}$, respectively. For each layer $L_i$, we first calculate its hidden state $h_i$:

$$h_i = L_i(h_{i-1}), \qquad h_0 = \mathrm{Embedding}(x) \tag{1}$$

Then, we use its internal classifier/regressor to output a distribution or value as a per-layer prediction $y_i = C_i(h_i)$. We use a counter $cnt$ to store the number of times that the predictions remain “unchanged”. For classification, $cnt_i$ is calculated by:

$$cnt_i = \begin{cases} cnt_{i-1} + 1 & \arg\max(y_i) = \arg\max(y_{i-1}), \\ 0 & \arg\max(y_i) \neq \arg\max(y_{i-1}) \ \lor\ i = 0. \end{cases} \tag{2}$$

While for regression, $cnt_i$ is calculated by:

$$cnt_i = \begin{cases} cnt_{i-1} + 1 & |y_i - y_{i-1}| < \tau, \\ 0 & |y_i - y_{i-1}| \geq \tau \ \lor\ i = 0, \end{cases} \tag{3}$$

where $\tau$ is a pre-defined threshold. We stop inference early at layer $L_j$ when $cnt_j = t$. If this condition is never fulfilled, we use the final classifier $C_n$ for prediction. In this way, the model can exit early without passing through all layers to make a prediction. As shown in Figure 1a, prediction-score-based early exit relies on the softmax score. As revealed by prior work [32, 33], prediction probability distributions (i.e., softmax scores) suffer from being over-confident in one class, making them an unreliable metric for confidence. Moreover, the capability of a shallow layer may not match its high confidence score. In Figure 1a, the second classifier outputs a high confidence score and incorrectly terminates inference.
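To make Eqs. (1)–(3) concrete, here is a minimal, self-contained sketch of the PABEE inference loop for classification. It is an illustration written for this text, not the authors' released implementation; `embed`, `layers`, and `classifiers` are hypothetical module handles standing in for the PLM's components.

```python
import torch

def pabee_classify(x, embed, layers, classifiers, patience):
    """Patience-based early exit for classification (Eqs. 1-2).

    embed:       callable mapping the input to h_0 = Embedding(x)
    layers:      modules L_1 .. L_n, applied sequentially
    classifiers: internal classifiers C_1 .. C_n, one per layer
    patience:    t, the number of consecutive agreeing predictions
    """
    h = embed(x)                            # h_0
    cnt, prev = 0, None
    for layer, clf in zip(layers, classifiers):
        h = layer(h)                        # h_i = L_i(h_{i-1})
        label = clf(h).argmax(dim=-1)       # argmax(y_i)
        # Eq. 2: increment the counter on agreement, otherwise reset to 0.
        cnt = cnt + 1 if prev is not None and torch.equal(label, prev) else 0
        prev = label
        if cnt == patience:                 # exit early at L_j when cnt_j = t
            return label
    return prev  # condition never met: prediction of the final classifier C_n
```

The regression variant (Eq. 3) only changes the agreement test from label equality to $|y_i - y_{i-1}| < \tau$.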
With Patience-based Early Exit, the stopping criterion operates in a cross-layer fashion, preventing errors from any single classifier. Also, since PABEE comprehensively considers results from multiple classifiers, it can benefit from an ensemble learning [34] effect. 3.3 Training PABEE requires that we train internal classifiers to predict based on their corresponding layers’ hidden states. For classification, the loss function $\mathcal{L}_i$ for classifier $C_i$ is calculated with cross entropy:

$$\mathcal{L}_i = -\sum_{z \in Z} \mathbb{1}[\hat{y} = z] \cdot \log P(y_i = z \mid h_i) \tag{4}$$

where $z$ and $Z$ denote a class label and the set of class labels, respectively. For regression, the loss is instead calculated by a (mean) squared error:

$$\mathcal{L}_i = (y_i - \hat{y})^2 \tag{5}$$

where $\hat{y}$ is the ground truth. Then, we train the model to minimize the total loss $\mathcal{L}$, a weighted average following Kaya et al. [5]:

$$\mathcal{L} = \frac{\sum_{j=1}^{n} j \cdot \mathcal{L}_j}{\sum_{j=1}^{n} j} \tag{6}$$

In this way, every possible inference branch is covered in the training process. Also, the weights correspond to the relative inference cost of each internal classifier. 3.4 Theoretical Analysis It is straightforward to see that Patience-based Early Exit reduces inference latency. To understand whether and under what conditions it can also improve accuracy, we conduct a theoretical comparison of a model’s accuracy with and without PABEE under a simplified condition. We consider the case of binary classification for simplicity and conclude that: Theorem 1 Assume the patience of PABEE inference is $t$, the total number of internal classifiers (ICs) is $n$, the misclassification probability (i.e., error rate) of all internal classifiers (excluding the final classifier) is $q$, and the misclassification probability of the final classifier and of the original classifier (without ICs) is $p$. Then the PABEE mechanism improves the accuracy of conventional inference as long as $n - t < \left(\frac{1}{2q}\right)^{t} \frac{p}{q} - p$ (the proof is detailed in Appendix B). This inequality is easily satisfied. For instance, when $n = 12$, $q = 0.2$, and $p = 0.1$, it holds as long as the patience $t \geq 4$. However, it is notable that assuming the error rates of the internal classifiers to be equal and independent is generally not attainable in practice. Additionally, we verify the statistical feasibility of PABEE with Monte Carlo simulation in Appendix C. To further test PABEE with real data and tasks, we also conduct extensive experiments in the following section.
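As a quick, self-contained illustration (added here for clarity; not taken from the paper's code), the snippet below implements the weighted training objective of Eq. (6) and numerically checks the bound of Theorem 1 for the worked example above. `per_layer_losses` is a hypothetical list holding the per-layer losses $\mathcal{L}_1 \ldots \mathcal{L}_n$.

```python
def weighted_total_loss(per_layer_losses):
    """Eq. 6: weighted average of per-layer losses, with weight j for layer j."""
    n = len(per_layer_losses)
    weighted = sum(j * loss for j, loss in enumerate(per_layer_losses, start=1))
    return weighted / (n * (n + 1) / 2)   # sum_{j=1}^{n} j = n(n+1)/2

def pabee_improves(n, q, p, t):
    """Theorem 1's condition: n - t < (1 / (2q))**t * (p / q) - p."""
    return n - t < (1.0 / (2.0 * q)) ** t * (p / q) - p

if __name__ == "__main__":
    # The worked example from Section 3.4: n = 12, q = 0.2, p = 0.1.
    for t in range(1, 13):
        print(t, pabee_improves(12, 0.2, 0.1, t))
    # Prints False for t = 1..3 and True for t >= 4, matching the text.
```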
4 Experiments 4.1 Tasks and Datasets We evaluate our proposed approach on the GLUE benchmark [35]. Specifically, we test on Microsoft Research Paraphrase Corpus (MRPC) [36], Quora Question Pairs (QQP; https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), and STS-B [37] for Paraphrase Similarity Matching; Stanford Sentiment Treebank (SST-2) [38] for Sentiment Classification; Multi-Genre Natural Language Inference Matched (MNLI-m), Multi-Genre Natural Language Inference Mismatched (MNLI-mm) [39], Question Natural Language Inference (QNLI) [40], and Recognizing Textual Entailment (RTE) [35] for the Natural Language Inference (NLI) task; and the Corpus of Linguistic Acceptability (CoLA) [41] for Linguistic Acceptability. We exclude WNLI [42] from GLUE following previous work [1, 19, 23]. For datasets with more than one metric, we report the arithmetic mean of the metrics. 4.2 Baselines For GLUE tasks, we compare our approach with four types of baselines: (1) Backbone models: We choose ALBERT-base and BERT-base, which have approximately the same inference latency and accuracy. (2) Directly reducing layers: We experiment with the first 6 and 9 layers of the original (AL)BERT with a single output layer on top, denoted by (AL)BERT-6L and (AL)BERT-9L, respectively. These two baselines help set a lower bound for methods that do not employ any special technique. (3) Static model compression approaches: For pruning, we include the results of LayerDrop [22] and attention head pruning [20] on ALBERT. For reference, we also report the performance of state-of-the-art methods for compressing the BERT-base model with knowledge distillation or module replacing, including DistillBERT [17], BERT-PKD [18], and BERT-of-Theseus [23]. (4) Input-adaptive inference: Following the settings in concurrent studies [31, 29, 30], we add internal classifiers after each layer and apply different early exit criteria, including those employed by BranchyNet [26] and Shallow-Deep [5]. To make a fair comparison, the internal classifiers and their insertion points are exactly the same in both the baselines and Patience-based Early Exit. We search over a set of thresholds to find the one delivering the best accuracy for the baselines while targeting a speed-up ratio between 1.30× and 1.96× (the speed-up ratios of (AL)BERT-9L and -6L, respectively). 4.3 Experimental Setting Training We add a linear output layer after each intermediate layer of the pretrained BERT/ALBERT model as the internal classifiers. We perform grid search over batch sizes of {16, 32, 128} and learning rates of {1e-5, 2e-5, 3e-5, 5e-5} with an Adam optimizer. We apply an early stopping mechanism and select the model with the best performance on the development set. We implement PABEE on top of Hugging Face’s Transformers [43]. We conduct our experiments on a single Nvidia V100 16GB GPU. [Table 1 excerpt (Test Set): ALBERT-base [4] — 12M params, 1.00× speed-up — 54.1 / 84.3 / 87.0 / 90.8 / 71.1 / 76.4 / 94.1 / 85.5 / 80.4; PABEE (ours) — 12M params, 1.57× speed-up — 55.7 / 84.8 / 87.4 / 91.0 / 71.2 / 77.3 / 94.1 / 85.7 / 80.9.] Inference Following prior work on input-adaptive inference [26, 5], inference is performed on a per-instance basis, i.e., the batch size for inference is set to 1. This is a common latency-sensitive production scenario when processing individual requests from different users [31]. We report the median performance over 5 runs with different random seeds because the performance on relatively small datasets such as CoLA and RTE usually has large variance. For PABEE, we set the patience t = 6 in the overall comparison to keep the speed-up ratio between 1.30× and 1.96× while obtaining good performance, following Figure 4. We further analyze the behavior of the PABEE mechanism under different patience settings in Section 4.5. 4.4 Overall Comparison We first report our main results on GLUE with ALBERT as the backbone model in Table 1. This choice is made because: (1) ALBERT is a state-of-the-art PLM for natural language understanding; (2) ALBERT is already very efficient in terms of the number of parameters and memory use because of its layer-sharing mechanism, but still suffers from high inference latency. We can see that our approach outperforms all compared approaches at improving the inference efficiency of PLMs, demonstrating the effectiveness of the proposed PABEE mechanism. Surprisingly, our approach consistently improves the performance of the original ALBERT model by a relatively large margin while speeding up inference by 1.57×.
This is, to the best of our knowledge, the first inference strategy that can improve both the speed and the performance of a fine-tuned PLM. To better compare the efficiency of PABEE with the methods employed in BranchyNet and Shallow-Deep, we illustrate speed-accuracy curves in Figure 3 under different trade-off hyperparameters (i.e., the threshold for BranchyNet and Shallow-Deep, and the patience for PABEE). Notably, PABEE retains higher accuracy than BranchyNet and Shallow-Deep under the same speed-up ratio, showing its superiority over prediction-score-based methods. To demonstrate the versatility of our method with different PLMs, we report results on a representative subset of GLUE with BERT [1] as the backbone model in Table 2. We can see that our BERT-based model significantly outperforms the other compared methods based on either knowledge distillation or prediction-probability-based input-adaptive inference. Notably, the performance is slightly lower than the original BERT model, whereas PABEE improves the accuracy on ALBERT. We suspect that this is because the intermediate layers of BERT have never been connected to an output layer during pretraining, which leads to a mismatch between pretraining and fine-tuning when adding the internal classifiers. However, PABEE still achieves higher accuracy than various knowledge distillation-based approaches as well as prediction probability distribution-based models, showing its potential as a generic method for deep neural networks of different kinds. As for the cost of training, we present parameter counts and training times with and without PABEE for both BERT and ALBERT backbones in Table 3. Although more classifiers need to be trained, training with PABEE is no slower (even slightly faster) than conventional fine-tuning, which may be attributed to the additional loss functions of the added internal classifiers. This makes our approach appealing compared with other approaches for accelerating inference, such as pruning or distillation, because they require separately training another model for each speed-up ratio in addition to training the full model. Also, PABEE introduces fewer than 40K parameters (0.33% of the original 12M parameters). 4.5 Analysis Impact of Patience As illustrated in Figure 4, different patience values lead to different speed-up ratios and performance. For a 12-layer ALBERT model, PABEE reaches peak performance with a patience of 6 or 7. On MNLI, SST-2 and STS-B, PABEE always outperforms the baseline with patience between 5 and 8. Notably, unlike BranchyNet and Shallow-Deep, whose accuracy drops as the inference speed goes up, PABEE has an inverted-U curve. We confirm this observation statistically with Monte Carlo simulation in Appendix C. To explain: when the patience t is set too large, later internal classifiers may suffer from the overthinking problem and make a wrong prediction that breaks the stable state among previous internal classifiers, which have not yet met the early-exit criterion because t is large. This causes PABEE to leave more samples to be classified by the final classifier Cn, which itself suffers from the aforementioned overthinking problem. Thus, the ensemble learning effect vanishes, undermining performance. Similarly, when t is relatively small, more samples may meet the early-exit criterion by accident, before actually reaching the stable state in which consecutive internal classifiers agree with each other. Impact of Model Depth We also investigate the impact of model depth on the performance of PABEE.
We apply PABEE to a 24-layer ALBERT-large model. As shown in Table 4, our approach consistently improves the accuracy as more layers and classifiers are added, while producing an even larger speed-up ratio. This finding demonstrates the potential of PABEE for the burgeoning class of deeper PLMs [44–46]. 4.6 Defending Against Adversarial Attack Deep learning models have been found to be vulnerable to adversarial examples that are slightly altered with perturbations often indistinguishable to humans [47]. Jin et al. [6] revealed that PLMs can also be attacked with a high success rate. Recent studies [5, 27] attribute the vulnerability partially to the overthinking problem, arguing that it can be mitigated by early exit mechanisms. In our experiments, we use a state-of-the-art adversarial attack method, TextFooler [6], which has demonstrated effectiveness in attacking BERT. We conduct black-box attacks on three datasets: SNLI [48], MNLI [39], and Yelp [49]. Note that since we use the pre-tokenized data provided by Jin et al. [6], the results on MNLI differ slightly from the ones in Table 1. We attack the original ALBERT-base model, ALBERT-base with Shallow-Deep [5], and ALBERT-base with Patience-based Early Exit. In Table 5, we report the original accuracy, the after-attack accuracy, and the number of queries TextFooler needs to attack each model. Our approach successfully defends against more than 3× as many attacks as the original ALBERT on the NLI tasks, and 2× as many on the Yelp sentiment analysis task. Also, PABEE increases the number of queries needed for an attack by a large margin, providing more protection to the model. Compared to Shallow-Deep [5], our model demonstrates significant robustness improvements. To explain: although the early exit mechanism of Shallow-Deep can prevent the aforementioned overthinking problem, it still relies on a single classifier to make the final prediction, which leaves it vulnerable to adversarial attacks. In comparison, since Patience-based Early Exit exploits multiple layers and classifiers, the attacker has to fool multiple classifiers (which may exploit different features) at the same time, making the model much more difficult to attack. This effect is similar to the merits of ensemble learning against adversarial attack discussed in previous studies [50–52]. 5 Discussion In this paper, we proposed PABEE, a novel efficient inference method that can yield a better accuracy-speed trade-off than existing methods. We verify its effectiveness and efficiency on GLUE and provide theoretical analysis. Empirical results show that PABEE can simultaneously improve the efficiency, accuracy, and adversarial robustness of a competitive ALBERT model. However, a limitation is that PABEE currently only works on models with a single branch (e.g., ResNet, Transformer); some adaptation is needed for multi-branch networks (e.g., NASNet [53]). For future work, we would like to explore our method on more tasks and settings. Also, since PABEE is orthogonal to prediction-distribution-based early exit approaches, it would be interesting to see whether they can be combined with PABEE for better performance. Broader Impact As an efficient inference technique, our proposed PABEE can facilitate more applications in mobile and edge computing, and also help reduce energy use and carbon emissions [54].
Since our method serves as a plug-in for existing pretrained language models, it does not introduce significant new ethical concerns, but more work is needed to determine its effect on biases (e.g., gender bias) that have already been encoded in a PLM. Acknowledgments and Disclosure of Funding We are grateful for the comments from the anonymous reviewers. We would like to thank the authors of TextFooler [6], Di Jin and Zhijing Jin, for their help with the data for the adversarial attack experiments. Tao Ge is the corresponding author. The authors did not receive third-party funding or support for this work.
1. What is the main contribution of the paper?
2. What are the strengths of the proposed approach, particularly its simplicity and potential impact?
3. What are the weaknesses of the method, such as limited speed improvements and architecture specificity?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The authors propose early stopping at test time to improve inference speed and accuracy. The idea is to train a classifier at each layer of a multi-layered embedding model like BERT and perform classification one layer at a time, stopping when the prediction stops changing. They demonstrate empirically that the method improves both the speed and accuracy of BERT/ALBERT on the GLUE benchmarks. My opinion of the work remains the same after the response. Strengths Simple, straightforward idea that would be easy to implement directly from the description in the paper and that performs better in some cases than more complicated methods. The prevalence of these models means the idea could be quite impactful, especially in industry, where there is a heavy reliance on out-of-the-box methods like BERT and throughput and latency concerns really matter. The method could presumably be combined with other work like head pruning, since it is somewhat orthogonal. Weaknesses The primary weakness is that the speed improvements are modest, only about 50% faster. Another weakness is that it is specific to architectures with many layers, like 12-layer BERT. An architecture like MUSE, with fewer layers, may have less opportunity for improvement.
1. What is the main contribution of the paper regarding inference efficiency and end-task accuracy?
2. What are the strengths of the paper, particularly in its empirical findings?
3. What are the weaknesses of the paper, especially in its theoretical explanations and assumptions?
4. Do you have any concerns or alternative explanations for the observed behavior in the paper's experiments?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions After author response: thanks for adding the suggested baselines and clarifications to the paper. === The paper introduces a simple approach for improving inference efficiency via early exits. Like past work, additional classification heads are added and trained at each layer. But unlike past work, the early-exit criterion is based on the consistency of the classification decisions over several layers, rather than the entropy of the prediction distribution. The authors find that the proposed approach often improves end-task accuracy over the original baseline model (which makes predictions after all layers have been executed), an observation referred to by the authors as "overthinking." While I find some of the explanations for the observed behavior a bit unconvincing, the approach and empirical findings are very exciting. This work is the first I'm aware of that shows it's possible to simultaneously improve inference efficiency and end-task accuracy via early exits. Strengths This work is the first to show that it's possible to simultaneously improve inference efficiency and end-task accuracy via early exits. The experimental methodology is sound and supports the paper's main *empirical* claims. Weaknesses While the empirical claims relating to the proposed approach are well supported, I'm not completely convinced by the theoretical explanations provided (e.g., the analogy between "overthinking" and overfitting, the theorem suggesting that the proposed approach is guaranteed to improve end-task accuracy). For instance, I find the analogy in Section 3.1 between overfitting and the observed "overthinking" phenomenon to be pretty unconvincing. The former is about generalization, while the latter is about the representations learned at different layers and their suitability to a particular input. Maybe this was the inspiration for your idea, but I don't think you've shown a strong enough connection to say that the two are analogous. I'm also unconvinced by Theorem 1, which seems to try to guarantee that the proposed early exit approach would improve end-task accuracy. This theorem seems to assume that the misclassification probabilities at different layers are independent of one another. This assumption should be stated more clearly, but also isn't a reasonable assumption in my opinion. Misclassifications at different layers are almost certainly not independent. Consider Pr(misclassified at layer i+1 | correctly classified at layer i) vs. Pr(misclassified at layer i+1 | misclassified at layer i). Are these equal? I suspect they are not, since some examples are inherently more challenging and will be misclassified at multiple successive layers. To my eyes, there are several alternative explanations for the observed behavior, for example: (1) different layers are comparatively better suited to some inputs than others, which is why early exits perform better; and/or (2) there's an ensembling effect from the proposed early exit criteria, which is why end-task accuracy improves compared to the baseline. These other explanations should ideally be considered and evaluated.
NIPS
Title BERT Loses Patience: Fast and Robust Inference with Early Exit Abstract In this paper, we propose Patience-based Early Exit, a straightforward yet effective inference method that can be used as a plug-and-play technique to simultaneously improve the efficiency and robustness of a pretrained language model (PLM). To achieve this, our approach couples an internal-classifier with each layer of a PLM and dynamically stops inference when the intermediate predictions of the internal classifiers remain unchanged for a pre-defined number of steps. Our approach improves inference efficiency as it allows the model to make a prediction with fewer layers. Meanwhile, experimental results with an ALBERT model show that our method can improve the accuracy and robustness of the model by preventing it from overthinking and exploiting multiple classifiers for prediction, yielding a better accuracy-speed trade-off compared to existing early exit methods.2 1 Introduction In Natural Language Processing (NLP), pretraining and fine-tuning have become a new norm for many tasks. Pretrained language models (PLMs) (e.g., BERT [1], XLNet [2], RoBERTa [3], ALBERT [4]) contain many layers and millions or even billions of parameters, making them computationally expensive and inefficient regarding both memory consumption and latency. This drawback hinders their application in scenarios where inference speed and computational costs are crucial. Another bottleneck of overparameterized PLMs that stack dozens of Transformer layers is the “overthinking” problem [5] during their decision-making process. That is, for many input samples, their shallow representations at an earlier layer are adequate to make a correct classification, whereas the representations in the final layer may be otherwise distracted by over-complicated or irrelevant features that do not generalize well. The overthinking problem in PLMs leads to wasted computation, hinders model generalization, and may also make them vulnerable to adversarial attacks [6]. In this paper, we propose a novel Patience-based Early Exit (PABEE) mechanism to enable models to stop inference dynamically. PABEE is inspired by the widely used Early Stopping [7, 8] strategy for model training. It enables better input-adaptive inference of PLMs to address the aforementioned limitations. Specifically, our approach couples an internal classifier with each layer of a PLM and dynamically stops inference when the intermediate predictions of the internal classifiers remain unchanged for t times consecutively (see Figure 1b), where t is a pre-defined patience. We first show that our method is able to improve the accuracy compared to conventional inference under certain assumptions. Then we conduct extensive experiments on the GLUE benchmark and show that PABEE outperforms existing prediction probability distribution-based exit criteria by a large margin. In addition, PABEE can simultaneously improve inference speed and adversarial robustness of the original ∗Equal contribution. Work done during these two authors’ internship at Microsoft Research Asia. 2Code available at https://github.com/JetRunner/PABEE. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. model while retaining or even improving its original accuracy with minor additional effort in terms of model size and training time. 
Also, our method can dynamically adjust the accuracy-efficiency trade-off to fit different devices and resource constraints by tuning the patience hyperparameter without retraining the model, which is favored in real-world applications [9]. Although we focus on PLMs in this paper, we have also conducted experiments on image classification tasks with the popular ResNet [10] as the backbone model and present the results in Appendix A to verify the generalization ability of PABEE. To summarize, our contribution is two-fold: (1) We propose Patience-based Early Exit, a novel and effective inference mechanism, and show with theoretical analysis that it can improve both the efficiency and the accuracy of deep neural networks. (2) Our empirical results on the GLUE benchmark highlight that our approach can simultaneously improve the accuracy and robustness of a competitive ALBERT model, while speeding up inference across different tasks with trivial additional training resources in terms of both time and parameters. 2 Related Work Existing research on improving the efficiency of deep neural networks can be categorized into two streams: (1) Static approaches design compact models or compress heavy models; the models remain static for all instances at inference (i.e., every input goes through the same layers). (2) Dynamic approaches allow the model to choose different computational paths for different instances at inference time. In this way, simpler inputs usually require less computation to make predictions. Our proposed PABEE falls into the second category. Static Approaches: Compact Network Design and Model Compression Many lightweight neural network architectures have been specifically designed for resource-constrained applications, including MobileNet [11], ShuffleNet [12], EfficientNet [13], and ALBERT [4], to name a few. For model compression, Han et al. [14] first proposed to sparsify deep models by removing non-significant synapses and then re-training to restore performance. Weight Quantization [15] and Knowledge Distillation [16] have also proved to be effective for compressing neural models. More recently, studies have employed Knowledge Distillation [17–19], Weight Pruning [20–22] and Module Replacing [23] to accelerate PLMs. Dynamic Approaches: Input-Adaptive Inference A parallel line of research on improving the efficiency of neural networks is to enable adaptive inference for different input instances. Adaptive Computation Time [24, 25] proposed a trainable halting mechanism to perform input-adaptive inference. However, training the halting model requires extra effort and also introduces additional parameters and inference cost. To alleviate this problem, BranchyNet [26] calculated the entropy of the prediction probability distribution as a proxy for the confidence of branch classifiers to enable early exit. Shallow-Deep Nets [5] leveraged the softmax scores of the predictions of branch classifiers to mitigate the overthinking problem of DNNs. More recently, Hu et al. [27] leveraged this approach in adversarial training to improve the adversarial robustness of DNNs. In addition, existing approaches [24, 28] have trained separate models to decide whether to execute or skip each layer. Very recently, FastBERT [29] and DeeBERT [30] adapted confidence-based BranchyNet [26] for PLMs, while RightTool [31] leveraged the same early-exit criterion as the Shallow-Deep Network [5]. However, Schwartz et al.
[31] recently revealed that prediction probability based methods often lead to a substantial performance drop compared to an oracle that identifies the smallest model needed to solve a given instance. In addition, these methods only support classification and leave out regression, which limits their applications. Different from recent work that directly employs existing efficient inference methods on top of PLMs, PABEE is a novel early-exit criterion that captures the inner agreement between earlier and later internal classifiers and exploits multiple classifiers for inference, leading to better accuracy. 3 Patience-based Early Exit Patience-based Early Exit (PABEE) is a plug-and-play method that works well with minimal adjustment to training. 3.1 Motivation We first conduct experiments to investigate the overthinking problem in PLMs. As shown in Figure 2b, we illustrate the prediction distribution entropy [26] and the error rate of the model on the development set as more layers join the prediction. Although the model becomes more “confident” (lower entropy indicates higher confidence in BranchyNet [26]) in its prediction as more layers join, the actual error rate instead increases after 10 layers. This phenomenon was discovered and named “overthinking” by Kaya et al. [5]. Similarly, as shown in Figure 2a, after 2.5 epochs of training, the model continues to improve in accuracy on the training set but begins to deteriorate on the development set. This is the well-known overfitting problem, which can be resolved by applying an early stopping mechanism [7, 8]. From this perspective, overfitting in training and overthinking in inference are naturally alike, inspiring us to adopt an approach similar to early stopping for inference. 3.2 Inference The inference process of PABEE is illustrated in Figure 1b. Formally, we define a common inference process as the input instance x going through layers L_1, ..., L_n and the classifier/regressor C_n to predict a class label distribution y (for classification) or a value y (for regression; we assume the output dimension is 1 for brevity). We couple an internal classifier/regressor C_1, ..., C_{n-1} with each of the layers L_1, ..., L_{n-1}, respectively. For each layer L_i, we first calculate its hidden state h_i: h_i = L_i(h_{i-1}), with h_0 = Embedding(x). (1) Then, we use its internal classifier/regressor to output a per-layer prediction y_i = C_i(h_i): a distribution for classification or a scalar value for regression. We use a counter cnt to store the number of times that the predictions remain “unchanged”. For classification, cnt_i is calculated by: cnt_i = cnt_{i-1} + 1 if argmax(y_i) = argmax(y_{i-1}), and cnt_i = 0 if argmax(y_i) ≠ argmax(y_{i-1}) or i = 0. (2) For regression, cnt_i is calculated by: cnt_i = cnt_{i-1} + 1 if |y_i − y_{i-1}| < τ, and cnt_i = 0 if |y_i − y_{i-1}| ≥ τ or i = 0, (3) where τ is a pre-defined threshold. We stop inference early at layer L_j when cnt_j = t. If this condition is never fulfilled, we use the final classifier C_n for prediction. In this way, the model can exit early without passing through all layers to make a prediction. As shown in Figure 1a, prediction score-based early exit relies on the softmax score. As revealed by prior work [32, 33], predicted probability distributions (i.e., softmax scores) tend to be over-confident in one class, making them an unreliable measure of confidence. Moreover, the capability of a low layer may not match its high confidence score. In Figure 1a, the second classifier outputs a high confidence score and incorrectly terminates inference.
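To make the exit rule concrete, below is a minimal sketch of the classification-mode PABEE inference loop (Eqs. 1-2). The names embed, layers, and classifiers are illustrative stand-ins for the PLM's embedding function, Transformer layers, and internal heads; this is a sketch of the mechanism, not the authors' implementation:

    import numpy as np

    # Minimal sketch of PABEE inference for classification (Eqs. 1-2).
    # `embed`, `layers`, and `classifiers` are assumed callables; `classifiers`
    # includes the final classifier C_n as its last element.
    def pabee_classify(x, embed, layers, classifiers, patience):
        h = embed(x)                       # h_0 = Embedding(x)
        prev_pred, cnt = None, 0           # patience counter cnt, Eq. (2)
        for layer, clf in zip(layers, classifiers):
            h = layer(h)                   # h_i = L_i(h_{i-1}), Eq. (1)
            pred = int(np.argmax(clf(h)))  # per-layer prediction argmax(y_i)
            cnt = cnt + 1 if pred == prev_pred else 0
            if cnt == patience:            # t consecutive agreements: exit early
                return pred
            prev_pred = pred
        return prev_pred                   # no early exit: final classifier C_n decides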
With Patience-based Early Exit, the stopping criterion is cross-layer, preventing errors caused by a single classifier. Also, since PABEE comprehensively considers results from multiple classifiers, it can benefit from an ensemble learning [34] effect. 3.3 Training PABEE requires that we train the internal classifiers to predict based on their corresponding layers’ hidden states. For classification, the loss function L_i for classifier C_i is calculated with cross entropy: L_i = − Σ_{z∈Z} 1[y_i = z] · log P(y_i = z | h_i), (4) where z and Z denote a class label and the set of class labels, respectively. For regression, the loss is instead calculated by a (mean) squared error: L_i = (y_i − ŷ_i)^2, (5) where ŷ denotes the ground truth. Then, we train the model to minimize the total loss L, a weighted average of the per-layer losses following Kaya et al. [5]: L = (Σ_{j=1}^{n} j · L_j) / (Σ_{j=1}^{n} j). (6) In this way, every possible inference branch is covered in the training process. Also, the weights correspond to the relative inference cost of each internal classifier. 3.4 Theoretical Analysis It is straightforward to see that Patience-based Early Exit is able to reduce inference latency. To understand whether and under what conditions it can also improve accuracy, we conduct a theoretical comparison of a model’s accuracy with and without PABEE under a simplified condition. We consider the case of binary classification for simplicity and conclude that: Theorem 1 Assume the patience of PABEE inference is t, the total number of internal classifiers (ICs) is n, the misclassification probability (i.e., error rate) of all internal classifiers (excluding the final classifier) is q, and the misclassification probability of the final classifier and of the original classifier (without ICs) is p. Then the PABEE mechanism improves the accuracy of conventional inference as long as n − t < (1/(2q))^t · (p/q) − p (the proof is detailed in Appendix B). We can see that the above inequality can be easily satisfied. For instance, when n = 12, q = 0.2, and p = 0.1, the inequality is satisfied as long as the patience t ≥ 4. However, it is notable that the assumption that the error rates of the internal classifiers are equal and independent is generally not attainable in practice. Additionally, we verify the statistical feasibility of PABEE with a Monte Carlo simulation in Appendix C. To further test PABEE with real data and tasks, we also conduct extensive experiments in the following section. 4 Experiments 4.1 Tasks and Datasets We evaluate our proposed approach on the GLUE benchmark [35]. Specifically, we test on Microsoft Research Paraphrase Matching (MRPC) [36], Quora Question Pairs (QQP)3 and STS-B [37] for Paraphrase Similarity Matching; Stanford Sentiment Treebank (SST-2) [38] for Sentiment Classification; Multi-Genre Natural Language Inference Matched (MNLI-m), Multi-Genre Natural Language Inference Mismatched (MNLI-mm) [39], Question Natural Language Inference (QNLI) [40] and Recognizing Textual Entailment (RTE) [35] for the Natural Language Inference (NLI) task; and the Corpus of Linguistic Acceptability (CoLA) [41] for Linguistic Acceptability. We exclude WNLI [42] from GLUE following previous work [1, 19, 23]. For datasets with more than one metric, we report the arithmetic mean of the metrics. 4.2 Baselines For GLUE tasks, we compare our approach with four types of baselines: (1) Backbone models: We choose ALBERT-base and BERT-base, which have approximately the same inference latency and accuracy.
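The worked example above (n = 12, q = 0.2, p = 0.1, t ≥ 4) can be checked directly against the reconstructed inequality; a minimal sketch:

    # Check for which patience values t the Theorem 1 condition
    # n - t < (1/(2q))^t * (p/q) - p holds, using the example from the text.
    n, q, p = 12, 0.2, 0.1
    for t in range(1, n + 1):
        rhs = (1.0 / (2.0 * q)) ** t * (p / q) - p
        print(f"t={t:2d}: n-t={n-t:2d} < rhs={rhs:9.2f} -> {n - t < rhs}")
    # The condition first holds at t=4 and for all larger t, matching the claim.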
(2) Directly reducing layers: We experiment with the first 6 and 9 layers of the original (AL)BERT with a single output layer on top, denoted by (AL)BERT-6L and (AL)BERT-9L, respectively. These two baselines help to set a lower bound for methods that do not employ any technique. (3) Static model compression approaches: For pruning, we include the results of LayerDrop [22] and attention head pruning [20] on ALBERT. For reference, we also report the performance of state-of-the-art methods for compressing the BERT-base model with knowledge distillation or module replacing, including DistilBERT [17], BERT-PKD [18] and BERT-of-Theseus [23]. (4) Input-adaptive inference: Following the settings in concurrent studies [31, 29, 30], we add internal classifiers after each layer and apply different early-exit criteria, including those employed by BranchyNet [26] and Shallow-Deep [5]. To make a fair comparison, the internal classifiers and their insertion points are exactly the same in the baselines and in Patience-based Early Exit. We search over a set of thresholds to find the one delivering the best accuracy for the baselines while targeting a speed-up ratio between 1.30× and 1.96× (the speed-up ratios of (AL)BERT-9L and -6L, respectively). 4.3 Experimental Setting Training We add a linear output layer after each intermediate layer of the pretrained BERT/ALBERT model as the internal classifiers. We perform grid search over batch sizes of {16, 32, 128} and learning rates of {1e-5, 2e-5, 3e-5, 5e-5} with an Adam optimizer. We apply an early stopping mechanism and select the model with the best performance on the development set. We implement PABEE on top of Hugging Face’s Transformers [43]. We conduct our experiments on a single Nvidia V100 16GB GPU. 3 https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs
[Table 1 excerpt, GLUE test set; columns: model, #params, speed-up, per-task scores, macro average]
ALBERT-base [4]: 12M, 1.00×, 54.1 / 84.3 / 87.0 / 90.8 / 71.1 / 76.4 / 94.1 / 85.5, avg 80.4
PABEE (ours): 12M, 1.57×, 55.7 / 84.8 / 87.4 / 91.0 / 71.2 / 77.3 / 94.1 / 85.7, avg 80.9
Inference Following prior work on input-adaptive inference [26, 5], inference is on a per-instance basis, i.e., the batch size for inference is set to 1. This is a common latency-sensitive production scenario when processing individual requests from different users [31]. We report the median performance over 5 runs with different random seeds because the performance on relatively small datasets such as CoLA and RTE usually has large variance. For PABEE, we set the patience t = 6 in the overall comparison to keep the speed-up ratio between 1.30× and 1.96× while obtaining good performance following Figure 4. We further analyze the behavior of the PABEE mechanism with different patience settings in Section 4.5. 4.4 Overall Comparison We first report our main result on GLUE with ALBERT as the backbone model in Table 1. This choice is made because: (1) ALBERT is a state-of-the-art PLM for natural language understanding; (2) ALBERT is already very efficient in terms of the number of parameters and memory use because of its layer sharing mechanism, but still suffers from high inference latency. We can see that our approach outperforms all compared approaches at improving the inference efficiency of PLMs, demonstrating the effectiveness of the proposed PABEE mechanism. Surprisingly, our approach consistently improves the performance of the original ALBERT model by a relatively large margin while speeding up inference by 1.57×.
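The speed-up ratios quoted above are, as is common for early-exit methods, effectively the ratio between the layers a full forward pass would execute and the layers actually executed; a minimal sketch under that assumption (not the authors' measurement code):

    # Speed-up ratio as (layers of a full pass) / (layers actually executed),
    # a common proxy metric for early exit; this definition is an assumption.
    def speedup_ratio(exit_layers, n_layers=12):
        """exit_layers: 1-based exit layer per instance (n_layers if no early exit)."""
        full_cost = n_layers * len(exit_layers)
        actual_cost = sum(exit_layers)
        return full_cost / actual_cost

    print(speedup_ratio([6, 8, 12, 7, 5]))  # ~1.58x for these example exits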
This is, to the best of our knowledge, the first inference strategy that can improve both the speed and performance of a fine-tuned PLM. To better compare the efficiency of PABEE with the methods employed in BranchyNet and Shallow-Deep, we illustrate speed-accuracy curves in Figure 3 for different trade-off hyperparameters (i.e., the threshold for BranchyNet and Shallow-Deep, the patience for PABEE). Notably, PABEE retains higher accuracy than BranchyNet and Shallow-Deep under the same speed-up ratio, showing its superiority over prediction score based methods. To demonstrate the versatility of our method across different PLMs, we report results on a representative subset of GLUE with BERT [1] as the backbone model in Table 2. We can see that our BERT-based model significantly outperforms the compared methods based on either knowledge distillation or prediction probability based input-adaptive inference. Notably, the performance is slightly lower than the original BERT model, whereas PABEE improves the accuracy on ALBERT. We suspect that this is because the intermediate layers of BERT have never been connected to an output layer during pretraining, which leads to a mismatch between pretraining and fine-tuning when adding the internal classifiers. However, PABEE still achieves higher accuracy than various knowledge distillation-based approaches as well as prediction probability distribution based models, showing its potential as a generic method for deep neural networks of different kinds. As for the cost of training, we present parameter counts and training times with and without PABEE for both BERT and ALBERT backbones in Table 3. Although more classifiers need to be trained, training with PABEE is no slower (even slightly faster) than conventional fine-tuning, which may be attributed to the additional loss functions of the added internal classifiers. This makes our approach appealing compared with other approaches for accelerating inference, such as pruning or distillation, because those require separately training another model for each speed-up ratio in addition to training the full model. Also, PABEE introduces fewer than 40K parameters (0.33% of the original 12M parameters). 4.5 Analysis Impact of Patience As illustrated in Figure 4, different patience values lead to different speed-up ratios and performance. For a 12-layer ALBERT model, PABEE reaches peak performance with a patience of 6 or 7. On MNLI, SST-2 and STS-B, PABEE always outperforms the baseline with patience between 5 and 8. Notably, unlike BranchyNet and Shallow-Deep, whose accuracy drops as the inference speed goes up, PABEE has an inverted-U curve. We confirm this observation statistically with a Monte Carlo simulation in Appendix C. To explain, when the patience t is set too large, a later internal classifier may itself suffer from the overthinking problem and make a wrong prediction that breaks the stable state among the previous internal classifiers, which have not yet met the early-exit criterion because t is large. This leaves more samples to be classified by the final classifier C_n, which suffers from the aforementioned overthinking problem. Thus, the ensemble learning effect vanishes and performance is undermined. Similarly, when t is relatively small, more samples may meet the early-exit criterion by accident, before actually reaching a stable state in which consecutive internal classifiers agree with each other. Impact of Model Depth We also investigate the impact of model depth on the performance of PABEE.
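The Monte Carlo simulation referenced above (Appendix C) can be sketched under the same simplifying assumptions as Theorem 1 (equal, independent per-layer error rate q; final-classifier error rate p); the snippet below estimates accuracy as a function of the patience t. This is an illustrative reconstruction, not the authors' Appendix C code:

    import random

    # Monte Carlo sketch: internal classifiers err independently with prob. q,
    # the final classifier with prob. p; the true label is 1. Accuracy vs. patience.
    def simulate(n=12, q=0.2, p=0.1, patience=6, trials=100_000):
        correct = 0
        for _ in range(trials):
            prev, cnt, exited = None, 0, False
            for _ in range(n - 1):                      # internal classifiers
                pred = 1 if random.random() > q else 0  # independent error rate q
                cnt = cnt + 1 if pred == prev else 0
                prev = pred
                if cnt == patience:                     # early exit on agreement
                    correct += pred
                    exited = True
                    break
            if not exited:                              # fall back to final classifier
                correct += 1 if random.random() > p else 0
        return correct / trials

    for t in range(2, 11):
        print(f"patience={t}: simulated accuracy {simulate(patience=t):.4f}")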
We apply PABEE to a 24-layer ALBERT-large model. As shown in Table 4, our approach consistently improves the accuracy as more layers and classifiers are added, while producing an even larger speed-up ratio. This finding demonstrates the potential of PABEE for the burgeoning generation of deeper PLMs [44–46]. 4.6 Defending Against Adversarial Attack Deep learning models have been found to be vulnerable to adversarial examples, inputs slightly altered with perturbations that are often indistinguishable to humans [47]. Jin et al. [6] revealed that PLMs can also be attacked with a high success rate. Recent studies [5, 27] attribute the vulnerability partially to the overthinking problem, arguing that it can be mitigated by early exit mechanisms. In our experiments, we use a state-of-the-art adversarial attack method, TextFooler [6], which has demonstrated effectiveness in attacking BERT. We conduct black-box attacks on three datasets: SNLI [48], MNLI [39] and Yelp [49]. Note that since we use the pre-tokenized data provided by Jin et al. [6], the results on MNLI differ slightly from those in Table 1. We attack the original ALBERT-base model, ALBERT-base with Shallow-Deep [5], and ALBERT-base with Patience-based Early Exit. In Table 5, we report the original accuracy, the after-attack accuracy, and the number of queries needed by TextFooler to attack each model. Our approach successfully defends against more than 3× as many attacks as the original ALBERT on the NLI tasks, and more than 2× as many on the Yelp sentiment analysis task. Also, PABEE increases the number of queries needed for an attack by a large margin, providing more protection to the model. Compared to Shallow-Deep [5], our model demonstrates significant robustness improvements. To explain, although the early exit mechanism of Shallow-Deep can prevent the aforementioned overthinking problem, it still relies on a single classifier to make the final prediction, which makes it vulnerable to adversarial attacks. In comparison, since Patience-based Early Exit exploits multiple layers and classifiers, the attacker has to fool multiple classifiers (which may exploit different features) at the same time, making it much more difficult to attack the model. This effect is similar to the merits of ensemble learning against adversarial attack discussed in previous studies [50–52]. 5 Discussion In this paper, we proposed PABEE, a novel efficient inference method that yields a better accuracy-speed trade-off than existing methods. We verify its effectiveness and efficiency on GLUE and provide theoretical analysis. Empirical results show that PABEE can simultaneously improve the efficiency, accuracy, and adversarial robustness of a competitive ALBERT model. However, a limitation is that PABEE currently only works on models with a single branch (e.g., ResNet, Transformer). Some adaptation is needed for multi-branch networks (e.g., NASNet [53]). For future work, we would like to explore our method on more tasks and settings. Also, since PABEE is orthogonal to prediction distribution based early exit approaches, it would be interesting to see whether they can be combined with PABEE for better performance. Broader Impact As an efficient inference technique, our proposed PABEE can facilitate more applications in mobile and edge computing, and also help reduce energy use and carbon emissions [54].
Since our method serves as a plug-in for existing pretrained language models, it does not introduce significant new ethical concerns, but more work is needed to determine its effect on biases (e.g., gender bias) that have already been encoded in a PLM. Acknowledgments and Disclosure of Funding We are grateful for the comments from the anonymous reviewers. We would like to thank the authors of TextFooler [6], Di Jin and Zhijing Jin, for their help with the data for the adversarial attack experiments. Tao Ge is the corresponding author. The authors did not receive third-party funding or support for this work.
1. What is the main contribution of the paper regarding layer-wise early-stopping in multi-layer pre-trained language models? 2. What are the strengths of the proposed approach compared to previous works? 3. What are the weaknesses of the paper, particularly in terms of theoretical analysis and comparisons with prior works? 4. How does the reviewer assess the significance of the paper's contributions and its potential impact on the field?
Summary and Contributions Strengths Weaknesses
Summary and Contributions In this paper the authors propose a new criterion for layer-wise early stopping in multi-layer pretrained language models. Like previous work, they add a classifier after each layer; unlike previous work, which uses confidence from the classifier to determine when to stop, their proposed approach is to stop when the same prediction is made by k consecutive layers, which they denote "patience"-based early exit. Their approach also directly applies to regression, unlike previous approaches designed for classification. This approach appears to outperform alternative criteria in terms of accuracy, but not speed (not entirely clear), which is to be expected since by design their approach is likely to (must?) perform more computation before early exit. Following author response: I'm sympathetic to the ACL papers being too recent, and in light of that and the rest of the response I've updated my review and score to "Marginally above the acceptance threshold." Strengths - A new criterion for early stopping in fine-tuned language models. The criterion appears to obtain better accuracy than others, though it remains unclear to me exactly how the speed-accuracy trade-off compares to previous work. Weaknesses - The theoretical analysis is weak: they only consider the (unrealistic) case where all internal classifiers have the same error rate, and the final classifier has the same error rate with and without internal classifiers. The former constraint is very unlikely to be true; see e.g. Figure 2b, where the error rate initially decreases as a function of the number of layers. Personally I don't believe this analysis adds much insight to the paper, and it certainly shouldn't be touted as a contribution. - The paper could use a more thorough comparison with and discussion of previous work, and is missing some citations.
NIPS
1. What is the main contribution of the paper? 2. What are the strengths of the proposed approach, particularly in terms of efficiency and performance? 3. What are the weaknesses of the paper, especially regarding the limitation of the approach? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper proposes a method to improve efficiency during inference as well as reduce the problem of "overthinking" in deep models, by using classifiers that produce predictions from internal layers of a model, allowing early exit if the predictions of consecutive layers remain consistent for more than a predefined patience limit. Due to the combination of predictions from the internal classifiers, the paper claims that this approach has an ensembling effect which allows it to improve performance over the full model. Strengths 1) Provides a speed-up during inference while also providing gains in performance over the full-model baseline using ALBERT. 2) Compared to prediction score based approaches, has the advantage of being able to handle both classification and regression. 3) The dependence on the patience value t as well as on model depth is explored. 4) Good results using the state-of-the-art ALBERT model. Good empirical evaluation; a simple and straightforward approach which seems to work well. Weaknesses The performance gains are on the ALBERT framework, where the internal layer weights are shared. Experiments on BERT do not show similar improvements, which the authors conjecture is due to this discrepancy between the two models, but this needs to be examined further.
NIPS
Title Learning to Schedule Heuristics in Branch and Bound Abstract Primal heuristics play a crucial role in exact solvers for Mixed Integer Programming (MIP). While solvers are guaranteed to find optimal solutions given sufficient time, real-world applications typically require finding good solutions early on in the search to enable fast decision-making. While much of MIP research focuses on designing effective heuristics, the question of how to manage multiple MIP heuristics in a solver has not received equal attention. Generally, solvers follow hard-coded rules derived from empirical testing on broad sets of instances. Since the performance of heuristics is problem-dependent, using these general rules for a particular problem might not yield the best performance. In this work, we propose the first data-driven framework for scheduling heuristics in an exact MIP solver. By learning from data describing the performance of primal heuristics, we obtain a problem-specific schedule of heuristics that collectively find many solutions at minimal cost. We formalize the learning task and propose an efficient algorithm for computing such a schedule. Compared to the default settings of a state-of-the-art academic MIP solver, we are able to reduce the average primal integral by up to 49% on two classes of challenging instances. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). 1 Introduction Many decision-making problems arising from real-world applications can be formulated using Mixed Integer Programming (MIP). The Branch and Bound (B&B) framework is a general approach to solving MIPs to global optimality. In recent years, the idea of using machine learning (ML) to improve optimization techniques has gained renewed interest. Various approaches exist to tackle different aspects of the solving process using classical ML techniques. For instance, ML has been used to find good parameter configurations for a solver (Hutter et al., 2009, 2011), improve node (He et al., 2014), variable (Khalil et al., 2016; Gasse et al., 2019; Nair et al., 2020) or cut (Baltean-Lugojan et al., 2019) selection strategies, and detect decomposable structures (Kruber et al., 2017). Even though exact MIP solvers aim for global optimality, finding good feasible solutions fast is at least as important, especially in the presence of a time limit. The use of primal heuristics is crucial to ensuring good primal performance in modern solvers. For instance, Berthold (2013a) showed that the primal bound, i.e., the objective value of the best solution, improved on average by around 80% when primal heuristics were used. Generally, a solver includes a variety of primal heuristics, where each class of heuristics (e.g., rounding, diving, large-neighborhood search) exploits a different idea to find good solutions. During B&B, some of these heuristics are executed successively at each node of the search tree, and improved solutions, if any, are reported back to the solver. An extensive overview of different primal heuristics, their computational costs, and their impact in MIP solving can be found in Lodi (2013) and Berthold (2013b, 2018). Since most heuristics can be very costly, it is necessary to be strategic about the order in which the heuristics are executed and the number of iterations allocated to each. Such decisions are often made by following hard-coded rules derived from testing on broad benchmark test sets.
While these static rules yield good performance on average, their performance can be far from satisfactory on specific families of instances. To illustrate this fact, Figure 1 compares the solution success rates, i.e., the fraction of calls to a heuristic in which a solution was found, of different primal heuristics for two problem classes: the Generalized Independent Set Problem (GISP) (Hochbaum and Pathria, 1997; Colombi et al., 2017) and the Fixed-Charge Multicommodity Network Flow Problem (FCMNF) (Hewitt et al., 2010). In this paper, we propose a data-driven approach to systematically improve the use of primal heuristics in B&B. By learning from data about the duration and success of every heuristic call for a set of training instances, we construct a schedule of heuristics that specifies the ordering and the duration for which each heuristic should be executed to obtain good primal solutions early on. As a result, we are able to significantly improve the use of primal heuristics, as shown in Figure 2 for one MIP instance. Contributions. Our main contributions can be summarized as follows: 1. We formalize the learning task of finding an effective, cost-efficient heuristic schedule on a training dataset as a Mixed Integer Quadratic Program (Section 3); 2. We propose an efficient heuristic for solving the training (scheduling) problem and a scalable data collection strategy (Sections 4 and 5); 3. We perform extensive computational experiments on a class of challenging instances and demonstrate the benefits of our approach (Section 6). Related Work. Optimizing the use of primal heuristics is a topic of ongoing research. For instance, by characterizing nodes with different features, Khalil et al. (2017) propose an ML method to decide when to execute heuristics to improve primal performance. After that decision, all heuristics are executed according to the predefined rules set by the solver. Hendel (2018) and Hendel et al. (2018) use bandit algorithms for the online learning of a heuristic ordering. The method proposed in this paper jointly adapts the ordering and the duration for which each heuristic runs. Primal performance can also be improved using algorithm configuration (Hutter et al., 2009, 2011), a technique which is generally computationally expensive, since it relies on many black-box evaluations of the solver to evaluate parameter configurations and does not exploit detailed information about the effect of parameter values on performance, e.g., how the parameters of primal heuristics affect their success rates. There has also been work on how to schedule algorithms optimally. Kadioglu et al. (2011) solved this problem for a portfolio of different MIP solvers, whereas Hoos et al. (2014) focused on Answer Set Programming. Furthermore, Seipp et al. (2015) propose an algorithm that greedily finds a schedule of different parameter configurations for automated planning. 2 Preliminaries Let us consider a MIP of the form min_{x ∈ R^n} c^T x s.t. Ax ≤ b, x_i ∈ Z for all i ∈ I, (P_MIP) with matrix A ∈ R^{m×n}, vectors c ∈ R^n and b ∈ R^m, and a non-empty index set I ⊆ [n] of integer variables. A MIP can be solved using B&B, a tree search algorithm that finds an optimal solution to (P_MIP) by recursively partitioning the original problem into linear subproblems. The nodes in the resulting search tree correspond to these subproblems. Throughout this work, we assume that each node has a unique index that identifies the node even across B&B trees obtained for different MIP instances.
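For concreteness, here is a toy instance of (P_MIP), our illustration rather than an example from the paper: min −x_1 − x_2 s.t. 2x_1 + 2x_2 ≤ 5, x ≥ 0, x_1, x_2 ∈ Z (so I = {1, 2}). Its LP relaxation attains the value −2.5, e.g., at x = (2.5, 0), which is fractional, while the best integer solution only attains −2, e.g., at x = (2, 0); B&B closes this gap by branching on a fractional variable (here x_1 ≤ 2 vs. x_1 ≥ 3) and solving the resulting linear subproblems recursively.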
For a set of instances X, we denote the union of the corresponding node indices by N_X.

Primal Performance Metrics. Since we are interested in finding good solutions fast, we consider a collection of different metrics for primal performance. Besides statistics like the time to the first/best solution and the solution/incumbent success rate, we mainly focus on the primal integral (Berthold, 2013a) as a comprehensive measure of primal performance. Intuitively, this metric can be interpreted as a normalized average of the incumbent value over time. A formal definition can be found in Appendix A. Figure 2 gives an example of the primal gap function. The primal integrals are the areas under each of the curves. It is easy to see that finding near-optimal incumbents earlier shrinks the area under the graph of the primal gap, resulting in a smaller primal integral.

3 Data-Driven Heuristic Scheduling

Since the performance of heuristics is highly problem-dependent, it is natural to consider data-driven approaches for optimizing the use of primal heuristics for the instances of interest. Concretely, we consider the following practically relevant setting. We are given a set of heuristics H and a homogeneous set of training instances X from the same problem class. In a data collection phase, we are allowed to execute the B&B algorithm on the training instances, observing how each heuristic performs at each node of each search tree. At a high level, our goal is then to leverage this data to obtain a schedule of heuristics that minimizes a primal performance metric. The specifics of how such data collection is carried out will be discussed later on in the paper. First, let us examine the decisions that could potentially benefit from a data-driven approach. Our discussion is inspired by an in-depth analysis of how the open-source MIP solver SCIP (Gamrath et al., 2020) manages primal heuristics. However, our approach is generic and is likely to apply to other solvers.

Controlling the Order. One important degree of freedom in scheduling heuristics is the order in which a set of heuristics H is executed by the solver at a given node. This can be controlled by assigning a priority to each heuristic. In a heuristic loop, the solver then iterates over the heuristics in decreasing priority. The loop is terminated if a heuristic finds a new incumbent solution that cuts off the current node. As such, an ordering ⟨h_1, ..., h_k⟩ that prioritizes effective heuristics can lead to time savings without sacrificing primal performance.

Controlling the Duration. Furthermore, solvers use working limits to control the computational effort spent on heuristics. Consider diving heuristics as an example. Increasing the maximal diving depth increases the likelihood of finding an integer feasible solution. At the same time, this increases the overall running time. Figure 3 visualizes this cost-benefit trade-off empirically for three different diving heuristics, highlighting the need for a careful "balancing act". For a heuristic h ∈ H, let τ ∈ R_{>0} denote its time budget. Then, we are interested in finding a schedule S := ⟨(h_1, τ_1), ..., (h_k, τ_k)⟩ with h_i ∈ H. Since controlling the time budget directly can be unreliable and lead to nondeterministic behavior in practice (see Appendix E for details), a deterministic proxy measure is preferable. For diving heuristics, the maximal diving depth provides a suitable measure, as demonstrated by Figure 3.
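As an aside, the primal integral introduced above can be computed directly from an incumbent history. The following minimal sketch follows the standard definition of Berthold (2013a) (the exact definition used in the paper is in Appendix A); the event-list representation of incumbents is an assumption of this sketch.

    def primal_gap(opt, inc):
        # Primal gap in [0, 1]: 1 if there is no incumbent or the signs differ,
        # 0 if the incumbent is optimal, |opt - inc| / max(|opt|, |inc|) otherwise.
        if inc is None or inc * opt < 0:
            return 1.0
        if inc == opt:
            return 0.0
        return abs(opt - inc) / max(abs(opt), abs(inc))

    def primal_integral(events, opt, horizon):
        # events: chronologically sorted (time, incumbent_value) pairs.
        # Integrates the piecewise-constant primal gap up to the time limit.
        total, last_t, last_inc = 0.0, 0.0, None
        for t, inc in events:
            total += primal_gap(opt, last_inc) * (t - last_t)
            last_t, last_inc = t, inc
        return total + primal_gap(opt, last_inc) * (horizon - last_t)

Finding near-optimal incumbents earlier pushes the gap function down sooner, which is exactly the shrinking of the area described above.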
Similar measures can be used for other types of heuristics, as we will demonstrate with Large Neighborhood Search heuristics in Section 6. In general, we will refer to τ_i as the maximal number of iterations that is allocated to a heuristic h_i in schedule S.

Deriving the Scheduling Problem. Having argued for order and duration as suitable control decisions, we will now formalize our heuristic scheduling problem. Ideally, we would like to construct a schedule S that minimizes the primal integral, averaged over the training set X. Unfortunately, it is very difficult to optimize the primal integral directly, as it depends on the sequence of incumbents found over time during B&B. It also depends on the way the search tree is explored, which is affected by pruning, further complicating any attempt at directly optimizing this primal metric.

We address this difficulty by considering a more tractable surrogate objective. Recall that N_X denotes the collection of search tree nodes of the set of training instances X. We will construct a schedule S that finds feasible solutions for a large fraction of the nodes in N_X, while also minimizing the number of iterations expended by schedule S. Note that we consider feasible solutions instead of incumbents here: this way, we are able to obtain more data faster, since a heuristic finds a feasible solution more often than a new incumbent. The framework we propose in the following can handle incumbents instead, but we have found no benefit in doing so in preliminary experiments.

For a heuristic h and node N, denote by t(h, N) the number of iterations necessary for h to find a solution at node N, and set t(h, N) = ∞ if h does not succeed at N. Now suppose a schedule S is successful at node N, i.e., some heuristic finds a solution within the budget allocated to it in S. Let j_S = min{ j ∈ [|H|] : t(h_j, N) ≤ τ_j } be the index of the first successful heuristic. Following the execution of h_{j_S}, the heuristic loop is terminated, and the time spent by S at node N is given by

    T(S, N) := Σ_{i ∈ [j_S − 1]} τ_i + t(h_{j_S}, N).

Otherwise, set T(S, N) := Σ_{i=1}^{k} τ_i + 1, where the additional 1 penalizes unsolved nodes. Furthermore, let N_S denote the set of nodes at which schedule S is successful in finding a solution. Then, we consider the heuristic scheduling problem given by

    min_{S ∈ S} Σ_{N ∈ N_X} T(S, N)   s.t.   |N_S| ≥ α |N_X|.   (P_S)

Here, α ∈ [0, 1] denotes the minimum fraction of nodes for which the schedule must find a feasible solution. Problem (P_S) can be formulated as a Mixed-Integer Quadratic Program (MIQP); the complete formulation can be found in Appendix B.

To find such a schedule, we need to know t(h, N) for every heuristic h and node N. Hence, when collecting data for the instances in the training set X, we track, for every B&B node N at which a heuristic h was called, the number of iterations τ_N^h it took h to find a feasible solution; we set τ_N^h = ∞ if h does not succeed at N. Formally, we require a training dataset

    D := { (h, N, τ_N^h) | h ∈ H, N ∈ N_X, τ_N^h ∈ R_{>0} ∪ {∞} }.

Section 5 describes a computationally efficient approach for building D using a single B&B run per training instance.

4 Solving the Scheduling Problem

Problem (P_S) is a generalization of the Pipelined Set Cover Problem, which is known to be NP-hard (Munagala et al., 2005). As for the MIQP in Appendix B, tackling it with a non-linear integer programming solver is challenging: the MIQP has O(|H| |N_X|) variables and constraints.
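The cost T(S, N) and the objective of (P_S) translate almost literally into code. A minimal sketch, assuming D is stored as a dictionary t_of mapping (heuristic, node) pairs to τ_N^h, with failures encoded as math.inf:

    import math

    def time_spent(schedule, t_of, node):
        # T(S, N) for a schedule S = [(h_1, tau_1), ..., (h_k, tau_k)].
        elapsed = 0.0
        for h, budget in schedule:
            t = t_of.get((h, node), math.inf)
            if t <= budget:        # first successful heuristic; the loop terminates
                return elapsed + t
            elapsed += budget      # unsuccessful; its full budget is consumed
        return elapsed + 1.0       # schedule fails at N; penalty term of 1

    def objective(schedule, t_of, nodes):
        # Objective of (P_S), summed over all training nodes in N_X.
        return sum(time_spent(schedule, t_of, n) for n in nodes)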
Since a single instance may involve thousands of search tree nodes, this leads to an MIQP with hundreds of thousands of variables and constraints even with a handful of heuristics and tens of training instances. As mentioned in Related Work, algorithm configuration tools such as SMAC (Hutter et al., 2011) could be used to solve (P_S) heuristically. Since SMAC is a sequential algorithm that searches for a good parameter configuration by successively adapting and re-evaluating its best configurations, its running time can be quite substantial. In the following, we present a more efficient approach.

We now direct our attention towards designing an efficient heuristic algorithm for (P_S). A similar problem was studied by Streeter (2007) in the context of decision problems. Among other things, the author discusses how to find a schedule of (randomized) heuristics that minimizes the expected time necessary to solve a set of training instances X of a decision problem. Although this setting is somewhat similar to ours, there are multiple aspects in which the two differ significantly:
1. Decision problems are considered instead of MIPs: Solving a MIP is generally quite different from solving a decision problem. When using B&B, we normally have to solve many linear subproblems. Since, in theory, every such LP is an opportunity for a heuristic to find a new incumbent, we consider the set of nodes N_X instead of X as the "instances" we want to solve.
2. A heuristic call can be suspended and resumed: In the work of Streeter, a heuristic can be executed in a "suspend-and-resume model": if h was executed before, the action (h, τ) represents continuing a heuristic run for an additional τ iterations. When h reaches the iteration limit, the run is suspended and its state is kept in memory such that it can be resumed later in the schedule. This model is not used in MIP solving due to the challenges of maintaining the states of heuristics in memory. As such, we allow every heuristic to be included in the schedule at most once.
3. Time is used to control the duration of a heuristic run: Controlling time directly is unreliable in practice and can lead to nondeterministic behavior of the solver. Instead, we rely on different proxy measures for different classes of heuristics. Thus, when building a schedule that contains heuristics of distinct types, we need to ensure that these measures are comparable.

Despite these differences, it is useful to examine the greedy scheduling approach proposed in Streeter (2007). A schedule G is built by successively adding the action (h, τ) that maximizes the ratio of the marginal increase in the number of instances solved to the cost (i.e., τ) of including (h, τ). As shown in Corollary 2 of Streeter (2007), the greedy schedule G yields a 4-approximation to that version of the scheduling problem. In an attempt to leverage this elegant heuristic for our problem (P_S), we describe it formally. Let us denote the greedy schedule by G := ⟨g_1, ..., g_k⟩. Then, G is defined inductively by setting G_0 = ⟨⟩ and G_j = ⟨g_1, ..., g_j⟩ with

    g_j = argmax_{(h, τ) ∈ H_{j−1} × T} |{ N ∈ N_X^{j−1} : τ_N^h ≤ τ }| / τ.

Here, H_j denotes the set of heuristics that are not in G_j, N_X^j denotes the subset of nodes not solved by G_j, and T is the interval generated by all possible iteration limits in D, i.e., T := [ min{ τ_N^h : (h, N, τ_N^h) ∈ D }, max{ τ_N^h : (h, N, τ_N^h) ∈ D } ]. We stop adding actions g_j when G_j finds a solution at all nodes in N_X or when all heuristics are contained in the schedule, i.e., H_j = ∅.
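As a reference point for the modification introduced next, one greedy step of this rule can be sketched as follows. Enumerating only the distinct iteration limits observed in D, rather than the whole interval T, is an assumption of this sketch; it loses nothing, since the ratio only changes at those values.

    import math

    def greedy_step(candidates, unsolved, t_of, limits):
        # Streeter's rule: among heuristics not yet scheduled, pick the action
        # (h, tau) maximizing (# newly solved nodes) / tau.
        best, best_ratio = None, 0.0
        for h in candidates:
            for tau in limits:   # distinct finite tau_N^h values from D
                solved = sum(1 for n in unsolved
                             if t_of.get((h, n), math.inf) <= tau)
                if solved / tau > best_ratio:
                    best, best_ratio = (h, tau), solved / tau
        return best              # None if no remaining action solves a new node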
Unfortunately, the resulting schedule can perform arbitrarily badly in our setting. Assume we have |N_X| = 100 and only one heuristic h. This heuristic solves one node in just one iteration and requires 100 iterations for each of the other 99 nodes. Following the greedy approach, the resulting schedule would be G = ⟨(h, 1)⟩, since 1/1 > 99/100. Whenever α > 0.01, G would be infeasible for our constrained problem (P_S). Since we are not allowed to add a heuristic more than once, this cannot be fixed with the current algorithm.

To avoid this situation, we propose the following modification. Instead of only considering the heuristics that are not in G_{j−1} when choosing the next action g_j, we also consider the option of running the last heuristic h_{j−1} of G_{j−1} for longer. That is, we allow choosing (h_{j−1}, τ) with τ > τ_{j−1}. Note that the cost of adding (h_{j−1}, τ) to the schedule is not τ but τ − τ_{j−1}, since we decide to run h_{j−1} for τ − τ_{j−1} additional iterations, not to rerun h_{j−1} for τ iterations.

Furthermore, when including different classes of heuristics in the schedule, the respective time measures are not necessarily comparable. We observed that not taking the difference in iteration cost into account led to an increase of the primal integral of up to 23% compared to default SCIP. To circumvent this problem, we use the average time per iteration to normalize the different notions of iterations. We denote the average cost of an iteration of heuristic h by t_avg^h. Note that t_avg^h can easily be computed by tracking the running time of a heuristic during data collection. Hence, we redefine g_j and obtain

    g_j = argmax_{(h, τ) ∈ A_{j−1}} |{ N ∈ N_X^{j−1} : τ_N^h ≤ τ }| / c_{j−1}(h, τ),

with A_j := (H_j × T) ∪ { (h_j, τ) : τ > τ_j, τ ∈ T } and

    c_j(h, τ) := t_avg^h · τ, if h ≠ h_j;   c_j(h, τ) := t_avg^h · (τ − τ_j), otherwise.

We set A_0 := H × T and c_0(h, τ) := t_avg^h · τ. With this modification, we obtain the schedule G = ⟨(h, 100)⟩ (which solves all 100 nodes) in the above example. Additionally, it is also possible to consider the quality of the found solutions when choosing the next action g_j. Since we observed that the resulting schedules increased the primal integral by up to 11%, we omit this here. Finally, note that this greedy procedure still does not explicitly enforce that the schedule is successful at a fraction of at least α of the nodes. In our experiments, however, we observe that the resulting schedules reach a success rate of 98% or above. The final algorithm can be found in Appendix C.

Example. Figure 4 shows an example of how we obtain a schedule with three heuristics and three nodes. As indicated by the left figure, the dataset is given by D = {(h_1, N_1, 1), (h_1, N_2, ∞), (h_1, N_3, ∞), (h_2, N_1, 4), (h_2, N_2, 3), (h_2, N_3, 3), (h_3, N_1, ∞), (h_3, N_2, 4), (h_3, N_3, 2)}. Let us now assume that all three heuristics have the same cost per iteration, i.e., t_avg^{h_1} = t_avg^{h_2} = t_avg^{h_3}. We build the schedule G as follows. First, we add the action (h_1, 1), since h_1 solves one node with only one iteration, yielding the best ratio. Since N_1 is "solved" by the current schedule and h_1 cannot solve any other nodes, both N_1 and h_1 need not be considered anymore. Among the remaining possibilities, the action (h_2, 3) is best, since h_2 solves both remaining nodes in three iterations, yielding a ratio of 2/3. In contrast, executing h_3 for two and four iterations, respectively, yields a ratio of 1/2. Hence, we add (h_2, 3) to G and obtain G = ⟨(h_1, 1), (h_2, 3)⟩. The schedule then solves all three nodes, as shown on the right of Figure 4.
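Putting both modifications together (extending the last action at marginal cost, and normalizing by the per-iteration cost t_avg^h), the full greedy construction can be sketched as follows. It stops as soon as no action solves an additional node, which subsumes the two stopping criteria above; this early exit is an assumption of the sketch.

    import math

    def build_schedule(heuristics, nodes, t_of, t_avg):
        # Modified greedy of Section 4: the last scheduled heuristic may be
        # extended, and budgets are weighted by the average iteration cost.
        schedule, unsolved = [], set(nodes)
        while unsolved:
            last_h, last_tau = schedule[-1] if schedule else (None, 0.0)
            used = {h for h, _ in schedule} - {last_h}
            best, best_ratio = None, 0.0
            for h in heuristics:
                if h in used:
                    continue
                base = last_tau if h == last_h else 0.0
                taus = {t for (g, n), t in t_of.items()
                        if g == h and base < t < math.inf}
                for tau in taus:
                    solved = sum(1 for n in unsolved
                                 if t_of.get((h, n), math.inf) <= tau)
                    ratio = solved / (t_avg[h] * (tau - base))
                    if ratio > best_ratio:
                        best, best_ratio = (h, tau), ratio
            if best is None:                 # no action solves another node
                break
            h, tau = best
            if h == last_h:
                schedule[-1] = (h, tau)      # extend the last action in place
            else:
                schedule.append((h, tau))
            unsolved = {n for n in unsolved
                        if t_of.get((h, n), math.inf) > tau}
        return schedule

On the Figure 4 data with equal per-iteration costs, this returns ⟨(h_1, 1), (h_2, 3)⟩, matching the worked example; on the 100-node example above, it returns ⟨(h, 100)⟩ as intended.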
Note that this schedule is an optimal solution of (P_S) for α > 1/3.

5 Data Collection

The scheduling approach described thus far rests on the availability of a dataset D. Among other things, each entry in D stores the number of iterations τ_N^h required by heuristic h to find a feasible solution at node N. This piece of information must be collected by executing the heuristic and observing its performance. Two main challenges arise in collecting such a dataset for multiple heuristics:
1. Efficient data collection: Solving MIPs by B&B remains computationally expensive, even given the sophisticated techniques implemented in today's solvers. This poses difficulties for ML approaches that create a single reward signal per MIP evaluation, which may take several minutes up to hours. In other words, even with a handful of heuristics, i.e., a small set H, it is prohibitive to run B&B once for each heuristic-training instance pair in order to construct the dataset D.
2. Obtaining unbiased data: Executing multiple heuristics at each node of the search tree during data collection can have dangerous side effects: if a heuristic finds an incumbent, subsequent heuristics are no longer executed at the same node, as described in Section 3.

We address the first point by using a specially crafted version of the MIP solver that collects multiple reward signals for the execution of multiple heuristics per single MIP evaluation during the training phase. As a result, we obtain a large number of data points that scales with the running time of the MIP solves. This has the clear advantage that the efficiency of our data collection does not automatically decrease when the time to evaluate a single MIP increases for more challenging problems. To prevent bias from the mutual interaction of different heuristics during training, we engineered the MIP solver to be executed in a special shadow mode, where heuristics are called in a sandbox environment and interaction with the main solving path is maximally reduced. In particular, this means that new incumbents and primal bounds are not communicated back, but only recorded as training data. This setting is an improved version of the shadow mode introduced in Khalil et al. (2017). As a result of these measures, we have instrumented the SCIP solver in a way that allows for the collection of a proper dataset D with a single run of the B&B algorithm per training instance.
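A minimal sketch of the kind of record such a shadow-mode run emits, and of one way to estimate the per-iteration cost t_avg^h from it, follows; the field names are assumptions of this sketch (the paper only specifies that iteration counts and running times are tracked), and the estimator uses successful calls only, for simplicity.

    import math
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class HeuristicCall:
        heuristic: str     # h
        node: int          # globally unique node index N
        iterations: float  # tau_N^h; math.inf if no feasible solution was found
        wall_time: float   # measured running time of this call

    def t_avg(records, h):
        # Average wall-clock cost per iteration of heuristic h.
        calls = [r for r in records
                 if r.heuristic == h and r.iterations < math.inf]
        return sum(r.wall_time for r in calls) / sum(r.iterations for r in calls)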
6 Computational Results

The code we use for data collection and scheduling is publicly available.1

1 https://github.com/antoniach/heuristic-scheduling

6.1 Heuristics and Instances

We can build a schedule containing arbitrary heuristics as long as a time measure is available. We focus on two broad groups of complex heuristics: Diving and Large Neighborhood Search (LNS). Both classes are much more computationally expensive than simpler heuristics such as rounding (for which scheduling is not necessary and executions are extremely fast), but are generally also more likely to find (good) solutions (Berthold, 2006). That is why it is particularly important to schedule these heuristics most economically.

Diving Heuristics. Diving heuristics examine a single probing path by successively fixing variables according to a specific rule. There are multiple ways of controlling the duration of a dive. After careful consideration, we decided on using the maximum diving depth to limit the cost of a call to a diving heuristic: it is related both to the effort spent by the heuristic and to its likelihood of success.

LNS Heuristics. These heuristics first build a neighborhood of some reference point, which is then searched for improving solutions by solving a sub-MIP. To control the duration, we choose to limit the number of nodes in the sub-MIP. The idea behind this measure is similar to limiting the diving depth of diving heuristics: in both cases, we control the number of subproblems a heuristic considers within its execution. Nevertheless, the two measures are not directly comparable: the most expensive LNS heuristic was on average around 892 times more expensive than the cheapest diving heuristic.

To summarize, we schedule 16 primal heuristics: ten diving and six LNS heuristics. By controlling this set, we cover about 2/3 of the more complex heuristics implemented in SCIP. The remaining heuristics are executed after the schedule according to their default settings.

We focus on two problem classes which are challenging on the primal side: the Generalized Independent Set Problem (GISP) (Hochbaum and Pathria, 1997; Colombi et al., 2017) and the Fixed-Charge Multicommodity Network Flow Problem (FCMNF) (Hewitt et al., 2010). For GISP, we generate two types of instances: the first takes graphs from the 1993 DIMACS Challenge, which is also used by Khalil et al. (2017) and Colombi et al. (2017) (120 for training and testing), and the second type uses randomly generated graphs as a base (25 for training and 10 for testing). The latter is also used to obtain the FCMNF instances (20 for training and 120 for testing). A detailed description of the problems and of how we generate and partition the instances can be found in Appendix D.

6.2 Results

To study the performance of our approach, we used the state-of-the-art solver SCIP 7.0 (Gamrath et al., 2020) with CPLEX 12.10.0.0 as the underlying LP solver. To this end, we needed to modify SCIP's source code to collect data as described in Section 5, as well as to control heuristic parameters that are not already exposed by default. For our experiments, we used a Linux cluster of Intel Xeon CPU E5-2660 v3 2.60GHz machines with 25MB cache and 128GB main memory. The time limit in all experiments was set to two hours; for data collection, it was four hours. Because the primal integral depends on time, we ran one process at a time on every machine, allowing for accurate time measurements. Furthermore, since MIP solver performance can be highly sensitive to even small and seemingly performance-neutral perturbations during the solving process (Lodi and Tramontani, 2013), we implemented an exhaustive testing framework that uses four random seeds and evaluates schedules trained on one data distribution on other data distributions, a form of transfer learning.

The main baseline we compare against is default SCIP. Note that since the adaptive diving and LNS methods presented in Hendel (2018) and Hendel et al. (2018) are included in default SCIP as heuristics, we implicitly compare to these methods when comparing to default SCIP; improvements due to our method reflect improvements over Hendel's approach. Furthermore, we also consider SCIP_TUNED, a hand-tuned version of SCIP's default settings for GISP.2 Since, in practice, a MIP expert would try to manually optimize some parameters when dealing with a homogeneous set of instances, we emulated that process to create an even stronger baseline to compare against.

2 We set the frequency offset to 0 for all diving heuristics.
GISP – Random graph instances. Table 1 (rows DIVING) shows partial results of the transfer learning experiments for schedules with diving heuristics (see Table 3 in Appendix F for the complete table). Our scheduling framework yields a significant improvement w.r.t. the primal integral on all test sets. Since this improvement is consistent over all schedules and test sets, we can confirm that the improvement is indeed attributable to our procedure. Especially remarkable is the fact that the schedules trained on smaller instances also perform well on much larger instances. Furthermore, we can see that the schedules perform especially well on instances of increasing difficulty (size). This behavior is intuitive: since our method aims to improve the primal performance of a solver, there is more room for improvement when an instance is more challenging on the primal side. Over all test sets, the schedules terminated with a strictly better primal integral on 69–76% and with a strictly better primal bound on 59–70% of the instances compared to SCIP_TUNED (see Table 4 in Appendix F for details). In addition, the number of incumbents found by the heuristics considered in the schedule increased significantly: 49–61% of the incumbents were found by heuristics in the schedule, compared to only 33% when running with default SCIP (see Table 4 in Appendix F for details).

Table 1 (rows DIVING+LNS) shows the transfer learning experiments for schedules containing diving and LNS heuristics. By including both types of heuristics, we are able to improve over the diving-only schedule in around half of the cases, since on the instances we consider, diving seems to perform significantly better than LNS. Furthermore, we also observe less consistent performance among the schedules, which leads us to conclude that LNS's behavior is harder to predict. How to further improve our scheduling procedure to better fit LNS is part of future work.

GISP – Finding a schedule with SMAC. As mentioned earlier, we can also find a schedule by using the algorithm configuration tool SMAC. To test SMAC's performance on the random graph instances, we trained ten SMAC schedules, each with a different random seed, on each of the five training sets. We used the primal integral as the performance metric. To make the problem easier for SMAC, we only considered diving heuristics. We gave SMAC the same total computational time for training as we used for data collection: with 25 training instances per set and a four-hour time limit each, this comes to 100 hours per training set and schedule. Note that since SMAC runs sequentially, training the SMAC schedules took over four days per schedule, whereas training a schedule with the greedy algorithm took only four hours when parallelized over enough machines. To pick the best-performing SMAC schedule for each training set, we ran all ten schedules on the test set of the same size as the corresponding training set and chose the best-performing one. The results can be found in Table 1 (rows SMAC). As we can see, on all test sets, all schedules are significantly better than default SCIP. However, when comparing these results to the performance of the greedy schedules, we can see that SMAC performs worse on average. Over all five test sets, the SMAC schedules terminated with a strictly better primal integral on 36–54% and with a strictly better primal bound on 37–55% of the instances compared to their greedy counterparts.

GISP – DIMACS graph instances.
The first three columns of Table 2 summarize the results on the instances derived from DIMACS graphs. As we can see, the schedule setting dominates default SCIP in all metrics, with an especially drastic improvement w.r.t. the primal integral: the schedule reduces the primal integral by 49%. Furthermore, 92% of the instances terminated with a strictly better primal integral and 57% with a strictly better primal bound. Even though SCIP_TUNED finds its best incumbent faster than the schedule, the latter terminates with a better primal bound (GISP is a maximization problem), explaining the small increase in time. When looking at the total time spent in heuristics, we see that the heuristics run for significantly less time but with more success: on average, the incumbent success rate is higher compared to default SCIP. That the learned schedule not only improves the primal side of the problem but also translates into better overall performance is shown by the last two rows: SCHEDULE significantly dominates DEFAULT in the gap at termination as well as in the primal-dual integral. Compared to the results of the method in Khalil et al. (2017), where node features were used to decide whether a heuristic should be executed, our scheduling procedure yields competitive performance: on average, their method reduced both the primal integral and the time to the best incumbent by 60% (our method: 49% and 47%). It is important to note here that our baseline (SCIP 7.0) is much faster than theirs (SCIP 3.2): for DIMACS instances, default SCIP terminated with a gap of 201.95% in Khalil et al. (2017), compared to 144.59% in our experiments. Furthermore, SCIP's technical reports show that version 7.0 is 58% faster than version 3.2 on a standard benchmark test set.

FCMNF. The last three columns of Table 2 summarize the results on the FCMNF instances. For this problem, too, the schedule setting dominates both DEFAULT and SCIP_TUNED in almost all metrics. In particular, we are able to almost double the number of solutions found and to triple the incumbent success rate. Even though the improvement in the primal integral is not as drastic as we observed for GISP, it is still consistent over the whole test set: 62% of the instances terminated with a strictly better primal integral and 92% with a strictly better primal bound. Similar to the GISP results, SCHEDULE needs more time than DEFAULT to find the best incumbent, since it again terminates with a better primal bound (FCMNF is a minimization problem). Finally, it is important to note that the trained schedules differ significantly from SCIP's default settings for all training sets. The improvements we observed when using these schedules support our starting hypothesis, namely that the way default MIP solver parameters are set does not yield the best performance when considering specific use cases.

7 Conclusion and Discussion

In this work, we propose a data-driven framework for scheduling primal heuristics in a MIP solver such that primal performance is optimized. Central to our approach are a novel formulation of the learning task as a scheduling problem, an efficient data collection procedure, and a fast, effective heuristic for solving the learning problem on a training dataset. A comprehensive experimental evaluation shows that our approach consistently learns heuristic schedules with better primal performance than SCIP's default settings.
Furthermore, by replacing our greedy algorithm with the algorithm configuration tool SMAC within our scheduling framework, we obtain a smaller, but still significant, performance improvement over SCIP's defaults. Together with SMAC's prohibitive computational cost, this leads us to conclude that, for our heuristic scheduling problem, the proposed greedy algorithm constitutes an efficient alternative to existing methods.

A possible limitation of our approach is that it produces a single, "one-size-fits-all" schedule for a class of training instances. It is thus natural to wonder whether alternative formulations of the learning problem that leverage additional contextual data about an input MIP instance and/or a heuristic could be useful. We note that learning a mapping from the space of MIP instances to the space of possible schedules is not trivial. The latter is a highly structured output space that involves both the permutation over heuristics and their respective iteration limits. The approach proposed here is much simpler in nature, which makes it easy to implement and to incorporate into a sophisticated MIP solver.

Although we have framed the heuristic scheduling problem in ML terms, we have not yet analyzed the learning-theoretic aspects of the problem. More specifically, our approach is justified on empirical grounds in Section 6, but we have not yet attempted to establish generalization guarantees. We view the recent foundational results by Balcan et al. (2019) as a promising framework that may apply to our setting, as it has been used for the branching problem in MIP (Balcan et al., 2018).

Disclosure of Funding

This work was partially funded by the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy – The Berlin Mathematics Research Center MATH+ (EXC-2046/1, project ID: 390685689), and by the German Federal Ministry of Education and Research (BMBF) within the Research Campus MODAL (grant numbers 05M14ZAM, 05M20ZBM). Elias B. Khalil acknowledges support from the Scale AI Research Chair Program and an IVADO Postdoctoral Scholarship.
1. What is the main contribution of the paper regarding primal heuristics scheduling? 2. How does the proposed approach differ from Streeter's algorithm? 3. What are the limitations of the data-driven approach used in the paper? 4. How do the results compare with other methods in terms of primal integral reduction? 5. What are some concerns regarding the reproducibility of the results? 6. Are there any issues with the literature review or comparisons made in the paper? 7. What are some minor comments or typos in the paper that can be improved?
Summary Of The Paper Review
Summary Of The Paper The paper focuses on scheduling primal heuristics in an exact MIP solver so that good feasible solutions are found fast, rather than focusing only on the expensive global optimum. An exact MIP solver runs a variety of primal heuristics during the branch-and-bound algorithm. This paper aims to propose a data-driven and generic approach to improve the scheduling of primal heuristics in the B&B algorithm. The authors formalize the problem as a mixed-integer (quadratic) program, where the goal is to find the order of the heuristics and their iteration budgets. The paper proposes a greedy heuristic inspired by Streeter's algorithm to compute the schedule. It is claimed that the schedule is constructed by learning from data about the duration and success of two kinds of heuristics, collected on sets of training instances. In the data collection phase, the objective is to record the heuristics' performance at each node of the search tree. For this purpose, they use strategies that require just one B&B execution per instance. The authors assert that they obtain solutions at lower cost, in terms of the primal integral, in comparison to the SCIP baseline and SMAC. Based on the results, the average primal integral is reduced by up to 49% on two classes of instances. Review The problem statement is interesting, but the contributions do not seem very strong. This is because the proposed heuristic is very simple, and the methodology the authors implement is a variation of Streeter's approach, so I do not see a significant scientific contribution. The structure of the paper looks good, but a more in-depth literature review in the related work section would be better. The approach is asserted to be data-driven, but I don't think it is, because no instance structures or features are used to inform the approach. Also, the authors use only a homogeneous set of instances, which means there is little diversity in the set of problems. The results can be hard to reproduce because of: the randomly generated instances, for which there is no explanation in the paper; the lack of information about the cross-validation method for line 278; and the fact that, of the large number of instances used, only 121 are available. In line 154, the authors mention a fact without any reference: it is stated that "the SMAC schedule can get very expensive quickly", but how can we verify this fact without any reference? In line 170, it is mentioned that "we allow every heuristic to be included in the schedule at most once", which suggests that the approach is a simplified version of Streeter's and defines fixed parameters for scheduling. The shadow mode and the sandbox environment used in the data collection phase do not seem to yield realistic data. This is because, in practice, no sandbox environment will be used, and in the shadow mode, the state of the current heuristic can have an impact on the next heuristic. In line 277, they claim to have implemented "a more exhaustive testing framework than the commonly used benchmark methodology". The reader does not know what the commonly used benchmark methodology is without a reference. In the results section, in line 296, they discuss the dominance of their approach in "the number of incumbents found by the heuristics" in comparison to SCIP, and I wish I could see the exact difference as a table in the appendices.
For tables, it would be preferable to have statistical plots to show the comparisons more clearly. Comparing SMAC with the greedy approach's results in Table 1, the average primal integral of the greedy schedules dominates SMAC in just 11 out of 25 cases, which is not a very significant improvement. For Table 2, it is recommended to include the SMAC results in order to compare the other performance metrics. Line 312: what does "enough machines" mean here? This can make the evaluation biased. Minor comments: 1. In the introduction section (line 22), "classical ML techniques" is mentioned, but among the cited papers I found some neural-network-related papers, and I guess they are not using classical machine learning. So the term "classical" should be removed. 2. The y-axis of the left plot in Figure 3 is said to be "in percent", but it is not. So the term "in percent" should be removed. 3. Every table and plot should be self-explanatory, but in Table 1 the number of instances is not given in the headers, which makes the table hard to understand. 4. In line 142, there is one extra closing parenthesis in the mathematical description. 5. For the formulation in lines 204 and 205, I could not find any definition of the variable "c" in the text, and I guess it is cost. The same holds for the variable "A". 6. In line 248 ("It is both related to the effort spent by the heuristic and its likelihood of success."), only one working limit is discussed, not two. 7. Typos: ◦ lines 275, 291 and 350: typo in "Furtheremore" ◦ line 286: "partitial" ◦ line 297: "heurisitcs" ◦ line 323: "stricly" ◦ line 337: "improvment"
NIPS
Title Learning to Schedule Heuristics in Branch and Bound Abstract Primal heuristics play a crucial role in exact solvers for Mixed Integer Programming (MIP). While solvers are guaranteed to find optimal solutions given sufficient time, real-world applications typically require finding good solutions early on in the search to enable fast decision-making. While much of MIP research focuses on designing effective heuristics, the question of how to manage multiple MIP heuristics in a solver has not received equal attention. Generally, solvers follow hard-coded rules derived from empirical testing on broad sets of instances. Since the performance of heuristics is problem-dependent, using these general rules for a particular problem might not yield the best performance. In this work, we propose the first data-driven framework for scheduling heuristics in an exact MIP solver. By learning from data describing the performance of primal heuristics, we obtain a problem-specific schedule of heuristics that collectively find many solutions at minimal cost. We formalize the learning task and propose an efficient algorithm for computing such a schedule. Compared to the default settings of a state-of-the-art academic MIP solver, we are able to reduce the average primal integral by up to 49% on two classes of challenging instances. 1 Introduction Many decision-making problems arising from real-world applications can be formulated using Mixed Integer Programming (MIP). The Branch and Bound (B&B) framework is a general approach to solving MIPs to global optimality. Over the recent years, the idea of using machine learning (ML) to improve optimization techniques has gained renewed interest. There exist various approaches to tackle different aspects of the solving process using classical ML techniques. For instance, ML has been used to find good parameter configurations for a solver (Hutter et al., 2009, 2011), improve node (He et al., 2014), variable (Khalil et al., 2016; Gasse et al., 2019; Nair et al., 2020) or cut (Baltean-Lugojan et al., 2019) selection strategies, and detect decomposable structures (Kruber et al., 2017). Even though exact MIP solvers aim for global optimality, finding good feasible solutions fast is at least as important, especially in the presence of a time limit. The use of primal heuristics is crucial to 35th Conference on Neural Information Processing Systems (NeurIPS 2021). ensuring good primal performance in modern solvers. For instance, Berthold (2013a) showed that the primal bound–the objective value of the best solution–improved on average by around 80% when primal heuristics were used. Generally, a solver includes a variety of primal heuristics, where each class of heuristics (e.g., rounding, diving, large-neighborhood search) exploits a different idea to find good solutions. During B&B, some of these heuristics are executed successively at each node of the search tree, and improved solutions, if any, are reported back to the solver. An extensive overview of different primal heuristics, their computational costs, and their impact in MIP solving can be found in Lodi (2013); Berthold (2013b, 2018). Since most heuristics can be very costly, it is necessary to be strategic about the order in which the heuristics are executed and the number of iterations allocated to each. Such decisions are often made by following hard-coded rules derived from testing on broad benchmark test sets. 
While these static rules yield good performance on average, their performance can be far from satisfactory when considering specific families of instances. To illustrate this fact, Figure 1 compares the solution success rates, i.e., the fraction of calls to a heuristic where a solution was found, of different primal heuristics for two problem classes: the Generalized Independent Set Problem (GISP) (Hochbaum and Pathria, 1997; Colombi et al., 2017) and the Fixed-Charge Multicommodity Network Flow Problem (FCMNF) (Hewitt et al., 2010). In this paper, we propose a data-driven approach to systematically improve the use of primal heuristics in B&B. By learning from data about the duration and success of every heuristic call for a set of training instances, we construct a schedule of heuristics that specifies the ordering and duration for which each heuristic should be executed to obtain good primal solutions early on. As a result, we are able to significantly improve the use of primal heuristics as shown in Figure 2 for one MIP instance. Contributions. Our main contributions can be summarized as follows: 1. We formalize the learning task of finding an effective, cost-efficient heuristic schedule on a training dataset as a Mixed Integer Quadratic Program (Section 3); 2. We propose an efficient heuristic for solving the training (scheduling) problem and a scalable data collection strategy (Sections 4 and 5); 3. We perform extensive computational experiments on a class of challenging instances and demonstrate the benefits of our approach (Section 6). Related Work. Optimizing the use of primal heuristics is a topic of ongoing research. For instance, by characterizing nodes with different features, Khalil et al. (2017) propose an ML method to decide when to execute heuristics to improve primal performance. After that decision, all heuristics are executed according to the predefined rules set by the solver. Hendel (2018) and Hendel et al. (2018) use bandit algorithms for the online learning of a heuristic ordering. The method proposed in this paper jointly adapts the ordering and duration for which each heuristic runs. Primal performance can also be improved using algorithm configuration (Hutter et al., 2009, 2011), a technique which is generally computational expensive since it relies on many black-box evaluations of the solver as its parameter configurations are evaluated and does not exploit detailed information about the effect of parameter values on performance, e.g., how parameters of primal heuristics affect their success rates. There has also been work done on how to schedule algorithms optimally. Kadioglu et al. (2011) solved the problem for a portfolio of different MIP solvers whereas Hoos et al. (2014) focused on Answer Set Programming. Furthermore, Seipp et al. (2015) propose an algorithm that greedily finds a schedule of different parameter configurations for automated planning. 2 Preliminaries Let us consider a MIP of the form min x∈Rn cTx s.t. Ax ≤ b, xi ∈ Z,∀i ∈ I, (PMIP) with matrix A ∈ Rm×n, vectors c ∈ Rn, b ∈ Rm, and a non-empty index set I ⊆ [n] for integer variables. A MIP can be solved using B&B, a tree search algorithm that finds an optimal solution to (PMIP) by recursively partitioning the original problem into linear subproblems. The nodes in the resulting search tree correspond to these subproblems. Throughout this work, we assume that each node has a unique index that identifies the node even across B&B trees obtained for different MIP instances. 
For a set of instances X , we denote the union of the corresponding node indices by NX . Primal Performance Metrics. Since we are interested in finding good solutions fast, we consider a collection of different metrics for primal performance. Beside statistics like the time to the first/best solution and the solution/incumbent success rate, we mainly focus on the primal integral (Berthold, 2013a) as a comprehensive measure of primal performance. Intuitively, this metric can be interpreted as a normalized average of the incumbent value over time. A formal definition can be found in Appendix A. Figure 2 gives an example for the primal gap function. The primal integrals are the areas under each of the curves. It is easy to see that finding near-optimal incumbents earlier shrinks the area under the graph of the primal gap, resulting in a smaller primal integral. 3 Data-Driven Heuristic Scheduling Since the performance of heuristics is highly problem-dependent, it is natural to consider data-driven approaches for optimizing the use of primal heuristics for the instances of interest. Concretely, we consider the following practically relevant setting. We are given a set of heuristics H and a homogeneous set of training instances X from the same problem class. In a data collection phase, we are allowed to execute the B&B algorithm on the training instances, observing how each heuristic performs at each node of each search tree. At a high level, our goal is to then leverage this data to obtain a schedule of heuristics that minimizes a primal performance metric. The specifics of how such data collection is carried out will be discussed later on in the paper. First, let us examine the decisions that could potentially benefit from a data-driven approach. Our discussion is inspired by an in-depth analysis of how the open-source MIP solver SCIP (Gamrath et al., 2020) manages primal heuristics. However, our approach is generic and is likely to apply to other solvers. Controlling the Order. One important degree of freedom in scheduling heuristics is the order in which a set of heuristics H is executed by the solver at a given node. This can be controlled by assigning a priority for each heuristic. In a heuristic loop, the solver then iterates over the heuristics in decreasing priority. The loop is terminated if a heuristic finds a new incumbent solution that cuts off the current node. As such, an ordering 〈h1, . . . , hk〉 that prioritizes effective heuristics can lead to time savings without sacrificing primal performance. Controlling the Duration. Furthermore, solvers use working limits to control the computational effort spent on heuristics. Consider diving heuristics as an example. Increasing the maximal diving depth increases the likelihood of finding an integer feasible solution. At the same time, this increases the overall running time. Figure 3 visualizes this cost-benefit trade-off empirically for three different diving heuristics, highlighting the need for a careful “balancing act”. For a heuristic h ∈ H, let τ ∈ R>0 denote its time budget. Then, we are interested in finding a schedule S := 〈(h1, τ1), . . . , (hk, τk)〉, hi ∈ H. Since controlling the time budget directly can be unreliable and lead to nondeterministic behavior in practice (see Appendix E for details), a deterministic proxy measure is preferable. For diving heuristics, the maximal diving depth provides a suitable measure as demonstrated by Figure 3. 
Similar measures can be used for other types of heuristics, as we will demonstrate with Large Neighborhood Search heuristics in Section 6. In general, we will refer to τi as the maximal number of iterations that is allocated to a heuristic hi in schedule S. Deriving the Scheduling Problem. Having argued for order and duration as suitable control decisions, we will now formalize our heuristic scheduling problem. Ideally, we would like to construct a schedule S that minimizes the primal integral, averaged over the training set X . Unfortunately, it is very difficult to optimize the primal integral directly, as it depends on the sequence of incumbents found over time during B&B. It also depends on the way the search tree is explored, which is affected by pruning, further complicating any attempt at directly optimizing this primal metric. We address this difficulty by considering a more tractable surrogate objective. Recall thatNX denotes the collection of search tree nodes of the set of training instances X . We will construct a schedule S that finds feasible solutions for a large fraction of the nodes in NX , while also minimizing the number of iterations expended by schedule S. Note that we consider feasible solutions instead of incumbents here: this way, we are able to obtain more data faster since a heuristic finds a feasible solution more often than a new incumbent. The framework we propose in the following can handle incumbents instead, but we have found no benefit in doing so in preliminary experiments. For a heuristic h and node N , denote by t(h,N) the iterations necessary for h to find a solution at node N , and set t(h,N) = ∞ if h does not succeed at N . Now suppose a schedule S is successful at node N , i.e., some heuristic finds a solution within the budget allocated to it in S. Let jS = min{j ∈ [|H|] : t(hj , N) ≤ τj} be the index of the first successful heuristic. Following the execution of hjS , the heuristic loop is terminated, and the time spent by S at node N is given by T (S,N) := ∑ i∈[jS−1] τi + t(hjS , N). Otherwise, set T (S,N) := ∑k i=1 τi + 1, where the additional 1 penalizes unsolved nodes. Furthermore, let NS denote the set of nodes at which schedule S is successful in finding a solution. Then, we consider the heuristic scheduling problem given by min S∈S ∑ N∈NX T (S,N) s.t. |NS | ≥ α|NX |. (PS ) Here α ∈ [0, 1] denotes the minimum fraction of nodes for which the schedule must find a feasible solution. Problem (PS ) can be formulated as a Mixed-Integer Quadratic Program (MIQP); the complete formulation can be found in Appendix B. To find such a schedule, we need to know t(h,N) for every heuristic h and node N . Hence, when collecting data for the instances in the training set X , we track for every B&B node N at which a heuristic h was called, the number of iterations τhN it took h to find a feasible solution; we set τhN = ∞ if h does not succeed at N . Formally, we require a training dataset D := { (h,N, τhN ) | h ∈ H, N ∈ NX , τhN ∈ R>0 ∪ {∞} } . Section 5 describes a computationally efficient approach for building D using a single B&B run per training instance. 4 Solving the Scheduling Problem Problem (PS ) is a generalization of the Pipelined Set Cover Problem which is known to be NP-hard (Munagala et al., 2005). As for the MIQP in Appendix B, tackling it using a non-linear integer programming solver is challenging: the MIQP has O(|H||NX |) variables and constraints. 
Since a single instance may involve thousands of search tree nodes, this leads to an MIQP with hundreds of thousands of variables and constraints even with a handful of heuristics and tens of training instances. As mentioned in Related Work, algorithm configuration tools such as SMAC (Hutter et al., 2011) could be used to solve (PS ) heuristically. Since SMAC is a sequential algorithm that searches for a good parameter configuration by successively adapting and re-evaluating its best configurations, its running time can be quite substantial. In the following, we present a more efficient approach. We now direct our attention towards designing an efficient heuristic algorithm for (PS ). A similar problem was studied by Streeter (2007) in the context of decision problems. Among other things, the author discusses how to find a schedule of (randomized) heuristics that minimizes the expected time necessary to solve a set of training instances X of a decision problem. Although this setting is somewhat similar to ours, there exist multiple aspects in which they differ significantly: 1. Decision problems are considered instead of MIPs: Solving a MIP is generally much different from solving a decision problem. When using B&B, we normally have to solve many linear subproblems. Since in theory, every such LP is an opportunity for a heuristic to find a new incumbent, we consider the set of nodes NX instead of X as the “instances” we want to solve. 2. A heuristic call can be suspended and resumed: In the work of Streeter, a heuristic can be executed in a “suspend-and-resume model”: If h was executed before, the action (h, τ) represents continuing a heuristic run for an additional τ iterations. When h reaches the iteration limit, the run is suspended and its state kept in memory such that it can be resumed later in the schedule. This model is not used in MIP solving due to challenges in maintaining the states of heuristics in memory. As such, we allow every heuristic to be included in the schedule at most once. 3. Time is used to control the duration of a heuristic run: Controlling time directly is unreliable in practice and can lead to nondeterministic behavior of the solver. Instead, we rely on different proxy measures for different classes of heuristics. Thus, when building a schedule that contains heuristics of distinct types, we need to ensure that these measures are comparable. Despite these differences, it is useful to examine the greedy scheduling approach proposed in Streeter (2007). A schedule G is built by successively adding the action (h, τ) that maximizes the ratio of the marginal increase in the number of instances solved to the cost (i.e., τ ) of including (h, τ). As shown in Corollary 2 of Streeter (2007), the greedy schedule G yields a 4-approximation to that version of the scheduling problem. In an attempt to leverage this elegant heuristic in our problem (PS ), we will describe it formally. Let us denote the greedy schedule by G := 〈g1, . . . , gk〉. Then, G is defined inductively by setting G0 = 〈〉 and Gj = 〈g1, . . . , gj〉 with gj = argmax (h,τ)∈Hj−1×T |{N ∈ N j−1X | τhN ≤ τ}| τ . Here,Hj denotes the set of heuristics that are not in Gj , N jX denotes the subset of nodes not solved by Gj , and T is the interval generated by all possible iteration limits in D, i.e., T := [min{τhN | (N,h, τhN ) ∈ D},max{τhN | (N,h, τhN ) ∈ D}]. We stop adding actions gj when Gj finds a solution at all nodes in NX or all heuristics are contained in the schedule, i.e.,Hj = ∅. 
Unfortunately, the resulting schedule can perform arbitrarily bad in our setting: Assume we have |NX | = 100 and only one heuristic h. This heuristic solves one node in just one iteration and requires 100 iterations for each of the other 99 nodes. Following the greedy approach, the resulting schedule would be G = 〈(h, 1)〉 since 11 > 99 100 . Whenever α > 0.01, G would be infeasible for our constrained problem (PS ). Since we are not allowed to add a heuristic more than once, this cannot be fixed with the current algorithm. To avoid this situation, we propose the following modification. Instead of only considering the heuristics that are not in Gj−1 when choosing the next action gj , we also consider the option to run the last heuristic hj−1 of Gj−1 for longer. That is, we allow to choose (hj−1, τ) with τ > τj−1. Note that the cost of adding (hj−1, τ) to the schedule is not τ , but τ − τj−1, since we decide to run hj−1 for τ − τj−1 iterations longer and not to rerun hj−1 for τ iterations. Furthermore, when including different classes of heuristics in the schedule, the respective time measures are not necessarily comparable. We observed that not taking the difference of iteration cost into account led to an increase of the primal integral of up to 23% compared to default SCIP. To circumvent this problem, we use the average time per iteration to normalize different notions of iterations. We denote the average cost of an iteration by thavg for heuristic h. Note that t h avg can be easily computed by tracking the running time of a heuristic during data collection. Hence, we redefine gj and obtain gj = argmax (h,τ)∈Aj−1 |{N ∈ N j−1X | τhN ≤ τ}| cj−1(h, τ) , with Aj := (Hj × T ) ∪ {(hj , τ) | τ > τj , τ ∈ T } and cj(h, τ) := { thavgτ, if h 6= hj thavg(τ − τj), otherwise. We set A0 := H × T and c0(h, τ) = thavgτ . With this modification, we obtain the schedule G = 〈(h, 100)〉 (which solves all 100 nodes) in the above example. Additionally, it is also possible to consider the quality of the found solutions when choosing the next action gj . Since we observed that the resulting schedules increased the primal integral by up to 11%, we omit this here. Finally, note that this greedy procedure still does not explicitly enforce that the schedule is successful at a fraction of at least α nodes. In our experiments, however, we observe that the resulting schedules reach a success rate of α = 98% or above. The final algorithm can be found in Appendix C. Example. Figure 4 shows an example of how we obtain a schedule with three heuristics and nodes. As indicated by the left figure, the dataset is given by D = {(h1, N1, 1), (h1, N2,∞), (h1, N3,∞), (h2, N1, 4), (h2, N2, 3), (h2, N3, 3), (h3, N1,∞), (h3, N2, 4), (h3, N3, 2)}. Let us now assume that all three heuristic have the same costs, i.e., th1avg = t h2 avg = t h3 avg . We build the schedule G as follows. First, we add action (h1, 1), since h1 solves one node with only one iteration, yielding the best ratio. Since N1 is “solved” by the current schedule and h1 cannot solve any other nodes, both N1 and h1 do not need to be considered anymore. Among the remaining possibilities, the action (h2, 3) is the best, since h2 solves both nodes in three iterations yielding a ratio of 23 . In contrast, executing h3 for two and four iterations, respectively, yields a ratio of 12 . Hence, we add (h2, 3) to G and obtain G = 〈(h1, 1), (h2, 3)〉. The schedule then solves all three nodes as shown on the right of Figure 4. 
Note that this schedule is an optimal solution of (PS ) for α > 13 . 5 Data Collection The scheduling approach described thus far rests on the availability of a dataset D. Among others, each entry in D stores the number of iterations τhN required by heuristic h to find a feasible solution at node N . This piece of information must be collected by executing the heuristic and observing its performance. Two main challenges arise in collecting such a dataset for multiple heuristics: 1. Efficient data collection: Solving MIPs by B&B remains computationally expensive, even given the sophisticated techniques implemented in today’s solvers. This poses difficulties to ML approaches that create a single reward signal per MIP evaluation, which may take several minutes up to hours. In other words, even with a handful of heuristics, i.e., a small setH, it is prohibitive to run B&B once for each heuristic-training instance pair in order to construct the dataset D. 2. Obtaining unbiased data: Executing multiple heuristics at each node of the search tree during data collection can have dangerous side effects: if a heuristic finds an incumbent, subsequent heuristics are no longer executed at the same node, as described in Section 3. We address the first point by using a specially-crafted version of the MIP solver for collecting multiple reward signals for the execution of multiple heuristics per single MIP evaluation during the training phase. As a result, we obtain a large amount of data points that scales with the running time of the MIP solves. This has the clear advantage that the efficiency of our data collection does not automatically decrease when the time to evaluate a single MIP increases for more challenging problems. To prevent bias from mutual interaction of different heuristics during training, we engineered the MIP solver to be executed in a special shadow mode, where heuristics are called in a sandbox environment and interaction with the main solving path is maximally reduced. In particular, this means that new incumbents and primal bounds are not communicated back, but only recorded for training data. This setting is an improved version of the shadow mode introduced in Khalil et al. (2017). As a result of these measures, we have instrumented the SCIP solver in a way that allows for the collection of a proper dataset D with a single run of the B&B algorithm per training instance. 6 Computational Results The code we use for data collection and scheduling is publicly available.1 6.1 Heuristics and Instances We can build a schedule containing arbitrary heuristics as long as there is a time measure available. We focus on two broad groups of complex heuristics: Diving and Large Neighborhood Search (LNS). Both classes are much more computationally expensive than simpler heuristics such as rounding (for which scheduling is not necessary and executions are extremely fast), but are generally also more likely to find (good) solutions (Berthold, 2006). That is why it is particularly important to schedule these heuristics most economically. Diving Heuristics. Diving heuristics examine a single probing path by successively fixing variables according to a specific rule. There are multiple ways of controlling the duration of a dive. After careful consideration, we decided on using the maximum diving depth to limit the cost of a call to a diving heuristic: It is both related to the effort spent by the heuristic and its likelihood of success. LNS Heuristics. 
LNS Heuristics. These heuristics first build a neighborhood of some reference point, which is then searched for improving solutions by solving a sub-MIP. To control the duration, we choose to limit the number of nodes in the sub-MIP. The idea behind this measure is similar to limiting the diving depth of diving heuristics: In both cases, we control the number of subproblems a heuristic considers within its execution. Nevertheless, the two measures are not directly comparable: The most expensive LNS heuristic was on average around 892 times more expensive than the cheapest diving heuristic.

To summarize, we schedule 16 primal heuristics: ten diving and six LNS heuristics. By controlling this set, we cover about two thirds of the more complex heuristics implemented in SCIP. The remaining heuristics are executed after the schedule according to their default settings.

We focus on two problem classes which are challenging on the primal side: The Generalized Independent Set Problem (GISP) (Hochbaum and Pathria, 1997; Colombi et al., 2017) and the Fixed Charge Multicommodity Network Flow Problem (FCMNF) (Hewitt et al., 2010). For GISP, we generate two types of instances: The first one takes graphs from the 1993 DIMACS Challenge, which are also used by Khalil et al. (2017) and Colombi et al. (2017) (120 for training and testing), and the second type uses randomly generated graphs as a base (25 for training and 10 for testing). The latter is also used to obtain FCMNF instances (20 for training and 120 for testing). A detailed description of the problems and of how we generate and partition the instances can be found in Appendix D.

6.2 Results

To study the performance of our approach, we used the state-of-the-art solver SCIP 7.0 (Gamrath et al., 2020) with CPLEX 12.10.0.0 as the underlying LP solver. To this end, we needed to modify SCIP's source code to collect data as described in Section 5, as well as to control heuristic parameters that are not exposed by default. For our experiments, we used a Linux cluster of Intel Xeon CPU E5-2660 v3 2.60GHz machines with 25MB cache and 128GB main memory. The time limit in all experiments was set to two hours; for data collection, it was set to four hours. Because the primal integral depends on time, we ran one process at a time on every machine, allowing for accurate time measurements. Furthermore, since MIP solver performance can be highly sensitive to even small and seemingly performance-neutral perturbations during the solving process (Lodi and Tramontani, 2013), we implemented an exhaustive testing framework that uses four random seeds and evaluates schedules trained with one data distribution on other data distributions, a form of transfer learning.

The main baseline we compare against is default SCIP. Note that since the adaptive diving and LNS methods presented in Hendel (2018) and Hendel et al. (2018) are included in default SCIP as heuristics, we implicitly compare to these methods when comparing to default SCIP; improvements due to our method reflect improvements over Hendel's approach. Furthermore, we also consider SCIP_TUNED, a hand-tuned version of SCIP's default settings for GISP, for which we set the frequency offset to 0 for all diving heuristics. Since in practice a MIP expert would try to manually optimize some parameters when dealing with a homogeneous set of instances, we emulated that process to create an even stronger baseline to compare against.

GISP – Random graph instances.
Table 1 (rows DIVING) shows partial results of the transfer learning experiments for schedules with diving heuristics (see Table 3 in Appendix F for the complete table). Our scheduling framework yields a significant improvement w.r.t. the primal integral on all test sets. Since this improvement is consistent over all schedules and test sets, we are able to confirm that the behavior actually comes from our procedure. Especially remarkable is the fact that the schedules trained on smaller instances also perform well on much larger instances. Furthermore, we can see that the schedules perform especially well on instances of increasing difficulty (size). This behavior is intuitive: Since our method aims to improve the primal performance of a solver, there is more room for improvement when an instance is more challenging on the primal side. Over all test sets, the schedules terminated with a strictly better primal integral on 69–76% and with a strictly better primal bound on 59–70% of the instances compared to SCIP_TUNED (see Table 4 in Appendix F for details). In addition, the number of incumbents found by the heuristics considered in the schedule increased significantly: 49–61% of the incumbents were found by heuristics in the schedule, compared to only 33% when running with default SCIP (see Table 4 in Appendix F for details).

Table 1 (rows DIVING+LNS) shows the transfer learning experiments for schedules containing diving and LNS heuristics. By including both types of heuristics, we are able to improve over the diving-only schedule in around half of the cases, since on the instances we consider, diving seems to perform significantly better than LNS. Furthermore, we also observe less consistent performance among the schedules, which leads us to the conclusion that LNS's behavior is harder to predict. How to further improve our scheduling procedure to better fit LNS is part of future work.

GISP – Finding a schedule with SMAC. As mentioned earlier, we can also find a schedule by using the algorithm configuration tool SMAC. To test SMAC's performance on the random graph instances, we trained ten SMAC schedules, each with a different random seed, on each of the five training sets. We used the primal integral as a performance metric. To make it easier for SMAC, we only considered diving heuristics. We gave SMAC the same total computational time for training as we did in data collection: With 25 training instances per set using a four-hour time limit each, this comes to 100 hours per training set and schedule. Note that since SMAC runs sequentially, training the SMAC schedules took over four days per schedule, whereas training a schedule following the greedy algorithm only took four hours with enough machines. To pick the best performing SMAC schedule for each training set, we ran all ten schedules on the test set of the same size as the corresponding training set and chose the best performing one. The results can be found in Table 1 (rows SMAC). As we can see, on all test sets, all schedules are significantly better than default SCIP. However, when comparing these results to the performance of the greedy schedules, we can see that SMAC performs worse on average. Over all five test sets, the SMAC schedules terminated with a strictly better primal integral on only 36–54% and with a strictly better primal bound on only 37–55% of the instances compared to their greedy counterparts.

GISP – DIMACS graph instances.
The first three columns of Table 2 summarize the results on the instances derived from DIMACS graphs. As we can see, the schedule setting dominates default SCIP in all metrics, and an especially drastic improvement is obtained w.r.t. the primal integral: the schedule reduces the primal integral by 49%. Furthermore, 92% of the instances terminated with a strictly better primal integral and 57% with a strictly better primal bound. Even though SCIP_TUNED finds the best incumbent faster than the schedule, the latter terminates with a better primal bound (GISP is a maximization problem), explaining the small increase in time. When looking at the total time spent in heuristics, we see that heuristics run significantly shorter but with more success: On average, the incumbent success rate is higher compared to default SCIP. That the learned schedule not only improves the primal side of the problem but also translates to an overall better performance is shown by the last two rows: SCHEDULE significantly dominates DEFAULT in the gap at termination as well as in the primal–dual integral. Compared to the results of the method in Khalil et al. (2017), where node features were used to decide if a heuristic should be executed, our scheduling procedure yields competitive performance: On average, their method reduced both the primal integral and the time to best incumbent by 60% (our method: 49% and 47%). Here it is important to note that our baseline (SCIP 7.0) is much faster than theirs (SCIP 3.2): for DIMACS instances, default SCIP terminated with a gap of 201.95% in Khalil et al. (2017) compared to 144.59% in our experiments. Furthermore, SCIP's technical reports show that version 7.0 is 58% faster than version 3.2 on a standard benchmark test set.

FCMNF. The last three columns of Table 2 summarize the results on the FCMNF instances. Also for this problem, we can see that the schedule setting dominates both DEFAULT and SCIP_TUNED in almost all metrics. In particular, we are able to almost double the number of solutions found and to triple the incumbent success rate. Even though the improvement in the primal integral is not as drastic as we observed with GISP, it is still consistent over the whole test set: 62% of the instances terminated with a strictly better primal integral and 92% with a strictly better primal bound. Similar to the GISP results, SCHEDULE needs more time than DEFAULT to find the best incumbent, since it again terminates with a better primal bound (FCMNF is a minimization problem). Finally, it is important to note that the trained schedules differ significantly from SCIP's default settings for all training sets. The improvements we observed when using these schedules support our starting hypothesis, namely that the way default MIP solver parameters are set does not yield the best performance when considering specific use cases.

7 Conclusion and Discussion

In this work, we propose a data-driven framework for scheduling primal heuristics in a MIP solver such that the primal performance is optimized. Central to our approach is a novel formulation of the learning task as a scheduling problem, an efficient data collection procedure, and a fast, effective heuristic for solving the learning problem on a training dataset. A comprehensive experimental evaluation shows that our approach consistently learns heuristic schedules with better primal performance than SCIP's default settings.
Furthermore, by replacing our heuristic algorithm with the algorithm configuration tool SMAC within our scheduling framework, we are able to obtain a smaller but still significant performance improvement w.r.t. SCIP's default. Together with the prohibitive computational costs of SMAC, this leads us to conclude that for our heuristic scheduling problem, the proposed heuristic algorithm constitutes an efficient alternative to existing methods.

A possible limitation of our approach is that it produces a single, “one-size-fits-all” schedule for a class of training instances. It is thus natural to wonder whether alternative formulations of the learning problem leveraging additional contextual data about an input MIP instance and/or a heuristic can be useful. We note that learning a mapping from the space of MIP instances to the space of possible schedules is not trivial. The latter is a highly structured output space that involves both the permutation over heuristics and their respective iteration limits. The approach proposed here is much simpler in nature, which makes it easy to implement and to incorporate into a sophisticated MIP solver.

Although we have framed the heuristic scheduling problem in ML terms, we have not yet analyzed the learning-theoretic aspects of the problem. More specifically, our approach is justified on empirical grounds in Section 6, but we have not yet attempted to analyze potential generalization guarantees. We view the recent foundational results by Balcan et al. (2019) as a promising framework that may apply to our setting, as it has been used for the branching problem in MIP (Balcan et al., 2018).

Disclosure of Funding

This work was partially funded by the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy – The Berlin Mathematics Research Center MATH+ (EXC-2046/1, project ID: 390685689), and by the German Federal Ministry of Education and Research (BMBF) within the Research Campus MODAL (grant numbers 05M14ZAM, 05M20ZBM). Elias B. Khalil acknowledges support from the Scale AI Research Chair Program and an IVADO Postdoctoral Scholarship.
1. What is the main contribution of the paper regarding mixed integer programming (MIP) solvers? 2. What are the strengths of the proposed method, particularly in its empirical effectiveness? 3. What are the weaknesses of the paper, especially in terms of theory and justification? 4. How does the reviewer assess the relevance and novelty of the paper's content? 5. Are there any suggestions or recommendations for improving the paper?
Summary Of The Paper Review
Summary Of The Paper This paper considers integrating data-driven approaches into the design of mixed integer programming (MIP) solvers. In particular, the problem of scheduling primal heuristics is the main focus. A MIP solver may use various primal heuristics - efficient techniques for generating feasible integer solutions - at various nodes of the branch and bound tree. Generating good feasible solutions can speed up the solution process, and some primal heuristics may work better for certain classes of problems than others. Thus it makes sense to use data-driven/learning approaches to tailor the choice of primal heuristics to run. This paper proposes a learning approach to generating a fixed heuristic schedule (i.e., the learned schedule does not depend on the current instance being solved by the MIP solver). The schedule must specify the order in which to run the heuristics and (roughly) how long to run each. Finding the learned schedule is formulated as an MIQP. A greedy approach to construct the schedule and a heuristic for data collection (which is needed to avoid running B&B multiple times) are proposed. The effectiveness of these heuristics is demonstrated empirically on two types of benchmark problems (generalized independent set and fixed charge multicommodity flow), with the main metric of interest being the primal integral. The proposed method shows a significant improvement over standard heuristic schedules (those used by SCIP) and the hyperparameter tuning method SMAC (which can also be used to set the heuristic schedule). The downside of SMAC compared to the proposed methods is that SMAC has a high computational cost for the learning stage. Review Improving MIP solvers using data-driven approaches is a relevant problem and has seen some recent interest (along with other data-driven approaches to algorithm design). This paper focuses on the primal aspect of MIP solving and shows promising results. The main weakness of this paper is the lack of theory. In addition to studying this problem from the perspective of learning theory (which the authors make a note of in the conclusion), it would also be interesting to see further justification for the greedy approach to designing the schedule and the data collection procedure. Does the proposed greedy algorithm (Algorithm 1 in the Appendix) have a constant approximation ratio? Can it be made more formal that the data collection procedure (Section 5) is unbiased? Overall, the strengths of this paper outweigh the weaknesses, so I would vote to accept this paper. Edit after rebuttal period Thanks to the authors for responding to my comments; my overall (positive) opinion is unchanged.
NIPS
Title Learning to Schedule Heuristics in Branch and Bound Abstract Primal heuristics play a crucial role in exact solvers for Mixed Integer Programming (MIP). While solvers are guaranteed to find optimal solutions given sufficient time, real-world applications typically require finding good solutions early on in the search to enable fast decision-making. While much of MIP research focuses on designing effective heuristics, the question of how to manage multiple MIP heuristics in a solver has not received equal attention. Generally, solvers follow hard-coded rules derived from empirical testing on broad sets of instances. Since the performance of heuristics is problem-dependent, using these general rules for a particular problem might not yield the best performance. In this work, we propose the first data-driven framework for scheduling heuristics in an exact MIP solver. By learning from data describing the performance of primal heuristics, we obtain a problem-specific schedule of heuristics that collectively find many solutions at minimal cost. We formalize the learning task and propose an efficient algorithm for computing such a schedule. Compared to the default settings of a state-of-the-art academic MIP solver, we are able to reduce the average primal integral by up to 49% on two classes of challenging instances.

1 Introduction

Many decision-making problems arising from real-world applications can be formulated using Mixed Integer Programming (MIP). The Branch and Bound (B&B) framework is a general approach to solving MIPs to global optimality. In recent years, the idea of using machine learning (ML) to improve optimization techniques has gained renewed interest. There exist various approaches to tackling different aspects of the solving process using classical ML techniques. For instance, ML has been used to find good parameter configurations for a solver (Hutter et al., 2009, 2011), improve node (He et al., 2014), variable (Khalil et al., 2016; Gasse et al., 2019; Nair et al., 2020) or cut (Baltean-Lugojan et al., 2019) selection strategies, and detect decomposable structures (Kruber et al., 2017).

Even though exact MIP solvers aim for global optimality, finding good feasible solutions fast is at least as important, especially in the presence of a time limit. The use of primal heuristics is crucial to ensuring good primal performance in modern solvers. For instance, Berthold (2013a) showed that the primal bound (the objective value of the best solution) improved on average by around 80% when primal heuristics were used. Generally, a solver includes a variety of primal heuristics, where each class of heuristics (e.g., rounding, diving, large-neighborhood search) exploits a different idea to find good solutions. During B&B, some of these heuristics are executed successively at each node of the search tree, and improved solutions, if any, are reported back to the solver. An extensive overview of different primal heuristics, their computational costs, and their impact in MIP solving can be found in Lodi (2013); Berthold (2013b, 2018). Since most heuristics can be very costly, it is necessary to be strategic about the order in which the heuristics are executed and the number of iterations allocated to each. Such decisions are often made by following hard-coded rules derived from testing on broad benchmark test sets.
While these static rules yield good performance on average, their performance can be far from satisfactory when considering specific families of instances. To illustrate this fact, Figure 1 compares the solution success rates, i.e., the fraction of calls to a heuristic where a solution was found, of different primal heuristics for two problem classes: the Generalized Independent Set Problem (GISP) (Hochbaum and Pathria, 1997; Colombi et al., 2017) and the Fixed-Charge Multicommodity Network Flow Problem (FCMNF) (Hewitt et al., 2010).

In this paper, we propose a data-driven approach to systematically improve the use of primal heuristics in B&B. By learning from data about the duration and success of every heuristic call for a set of training instances, we construct a schedule of heuristics that specifies the ordering and duration for which each heuristic should be executed to obtain good primal solutions early on. As a result, we are able to significantly improve the use of primal heuristics, as shown in Figure 2 for one MIP instance.

Contributions. Our main contributions can be summarized as follows: 1. We formalize the learning task of finding an effective, cost-efficient heuristic schedule on a training dataset as a Mixed Integer Quadratic Program (Section 3); 2. We propose an efficient heuristic for solving the training (scheduling) problem and a scalable data collection strategy (Sections 4 and 5); 3. We perform extensive computational experiments on a class of challenging instances and demonstrate the benefits of our approach (Section 6).

Related Work. Optimizing the use of primal heuristics is a topic of ongoing research. For instance, by characterizing nodes with different features, Khalil et al. (2017) propose an ML method to decide when to execute heuristics to improve primal performance. After that decision, all heuristics are executed according to the predefined rules set by the solver. Hendel (2018) and Hendel et al. (2018) use bandit algorithms for the online learning of a heuristic ordering. The method proposed in this paper jointly adapts the ordering and the duration for which each heuristic runs. Primal performance can also be improved using algorithm configuration (Hutter et al., 2009, 2011), a technique which is generally computationally expensive, since it relies on many black-box evaluations of the solver and does not exploit detailed information about the effect of parameter values on performance, e.g., how parameters of primal heuristics affect their success rates. There has also been work on how to schedule algorithms optimally. Kadioglu et al. (2011) solved the problem for a portfolio of different MIP solvers, whereas Hoos et al. (2014) focused on Answer Set Programming. Furthermore, Seipp et al. (2015) propose an algorithm that greedily finds a schedule of different parameter configurations for automated planning.

2 Preliminaries

Let us consider a MIP of the form
$$\min_{x \in \mathbb{R}^n} c^T x \quad \text{s.t.} \quad Ax \leq b, \;\; x_i \in \mathbb{Z} \;\; \forall i \in I, \tag{P$_{\mathrm{MIP}}$}$$
with matrix $A \in \mathbb{R}^{m \times n}$, vectors $c \in \mathbb{R}^n$ and $b \in \mathbb{R}^m$, and a non-empty index set $I \subseteq [n]$ of integer variables. A MIP can be solved using B&B, a tree search algorithm that finds an optimal solution to (P$_{\mathrm{MIP}}$) by recursively partitioning the original problem into linear subproblems. The nodes in the resulting search tree correspond to these subproblems. Throughout this work, we assume that each node has a unique index that identifies the node even across B&B trees obtained for different MIP instances.
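For concreteness, a toy instance of (P$_{\mathrm{MIP}}$) can be stated and solved with any off-the-shelf B&B implementation; the sketch below uses SciPy's milp interface (which dispatches to the HiGHS solver) purely for illustration:

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Toy instance of (P_MIP): min c^T x  s.t.  Ax <= b, x >= 0, x integer.
c = np.array([-1.0, -2.0])               # objective coefficients
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([4.0, 6.0])

res = milp(c=c,
           constraints=LinearConstraint(A, ub=b),
           integrality=np.ones_like(c),   # every variable is in I
           bounds=Bounds(0, np.inf))
print(res.x, res.fun)                     # -> [0. 4.] -8.0
```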
For a set of instances $X$, we denote the union of the corresponding node indices by $\mathcal{N}_X$.

Primal Performance Metrics. Since we are interested in finding good solutions fast, we consider a collection of different metrics for primal performance. Besides statistics such as the time to the first/best solution and the solution/incumbent success rate, we mainly focus on the primal integral (Berthold, 2013a) as a comprehensive measure of primal performance. Intuitively, this metric can be interpreted as a normalized average of the incumbent value over time. A formal definition can be found in Appendix A. Figure 2 gives an example of the primal gap function. The primal integrals are the areas under each of the curves. It is easy to see that finding near-optimal incumbents earlier shrinks the area under the graph of the primal gap, resulting in a smaller primal integral.

3 Data-Driven Heuristic Scheduling

Since the performance of heuristics is highly problem-dependent, it is natural to consider data-driven approaches for optimizing the use of primal heuristics for the instances of interest. Concretely, we consider the following practically relevant setting. We are given a set of heuristics $\mathcal{H}$ and a homogeneous set of training instances $X$ from the same problem class. In a data collection phase, we are allowed to execute the B&B algorithm on the training instances, observing how each heuristic performs at each node of each search tree. At a high level, our goal is to then leverage this data to obtain a schedule of heuristics that minimizes a primal performance metric. The specifics of how such data collection is carried out will be discussed later in the paper. First, let us examine the decisions that could potentially benefit from a data-driven approach. Our discussion is inspired by an in-depth analysis of how the open-source MIP solver SCIP (Gamrath et al., 2020) manages primal heuristics. However, our approach is generic and is likely to apply to other solvers.

Controlling the Order. One important degree of freedom in scheduling heuristics is the order in which a set of heuristics $\mathcal{H}$ is executed by the solver at a given node. This can be controlled by assigning a priority to each heuristic. In a heuristic loop, the solver then iterates over the heuristics in order of decreasing priority. The loop is terminated if a heuristic finds a new incumbent solution that cuts off the current node. As such, an ordering $\langle h_1, \ldots, h_k \rangle$ that prioritizes effective heuristics can lead to time savings without sacrificing primal performance.

Controlling the Duration. Furthermore, solvers use working limits to control the computational effort spent on heuristics. Consider diving heuristics as an example. Increasing the maximal diving depth increases the likelihood of finding an integer feasible solution. At the same time, it increases the overall running time. Figure 3 visualizes this cost–benefit trade-off empirically for three different diving heuristics, highlighting the need for a careful “balancing act”. For a heuristic $h \in \mathcal{H}$, let $\tau \in \mathbb{R}_{>0}$ denote its time budget. Then, we are interested in finding a schedule $S := \langle (h_1, \tau_1), \ldots, (h_k, \tau_k) \rangle$ with $h_i \in \mathcal{H}$. Since controlling the time budget directly can be unreliable and lead to nondeterministic behavior in practice (see Appendix E for details), a deterministic proxy measure is preferable. For diving heuristics, the maximal diving depth provides a suitable measure, as demonstrated by Figure 3.
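Since the primal integral introduced above serves as the central evaluation metric throughout, the following self-contained sketch may help fix ideas; it follows the common definition from Berthold (2013a), and the exact convention in Appendix A may differ in minor details:

```python
def primal_gap(v, opt):
    """Primal gap in [0, 1] of incumbent value v w.r.t. the optimum opt."""
    if v is None or v * opt < 0:   # no incumbent yet, or opposite signs
        return 1.0
    if v == opt:
        return 0.0
    return abs(opt - v) / max(abs(opt), abs(v))

def primal_integral(events, opt, t_end):
    """Integrate the primal gap over [0, t_end].

    events: chronologically sorted (time, incumbent_value) pairs,
            one per new incumbent found during the solve.
    """
    integral, t_prev, v_prev = 0.0, 0.0, None
    for t, v in events:
        integral += primal_gap(v_prev, opt) * (t - t_prev)
        t_prev, v_prev = t, v
    return integral + primal_gap(v_prev, opt) * (t_end - t_prev)

# Incumbents at t=10 (gap 0.5) and t=30 (optimal), with a 60s time limit:
print(primal_integral([(10.0, 2.0), (30.0, 4.0)], opt=4.0, t_end=60.0))
# -> 10*1.0 + 20*0.5 + 30*0.0 = 20.0
```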
Proxy measures of this kind can be used for other types of heuristics as well, as we will demonstrate with Large Neighborhood Search heuristics in Section 6. In general, we will refer to $\tau_i$ as the maximal number of iterations that is allocated to a heuristic $h_i$ in schedule $S$.

Deriving the Scheduling Problem. Having argued for order and duration as suitable control decisions, we will now formalize our heuristic scheduling problem. Ideally, we would like to construct a schedule $S$ that minimizes the primal integral, averaged over the training set $X$. Unfortunately, it is very difficult to optimize the primal integral directly, as it depends on the sequence of incumbents found over time during B&B. It also depends on the way the search tree is explored, which is affected by pruning, further complicating any attempt at directly optimizing this primal metric. We address this difficulty by considering a more tractable surrogate objective. Recall that $\mathcal{N}_X$ denotes the collection of search tree nodes of the set of training instances $X$. We will construct a schedule $S$ that finds feasible solutions for a large fraction of the nodes in $\mathcal{N}_X$, while also minimizing the number of iterations expended by schedule $S$. Note that we consider feasible solutions instead of incumbents here: this way, we are able to obtain more data faster, since a heuristic finds a feasible solution more often than a new incumbent. The framework we propose in the following can handle incumbents instead, but we have found no benefit in doing so in preliminary experiments.

For a heuristic $h$ and node $N$, denote by $t(h, N)$ the number of iterations necessary for $h$ to find a solution at node $N$, and set $t(h, N) = \infty$ if $h$ does not succeed at $N$. Now suppose a schedule $S$ is successful at node $N$, i.e., some heuristic finds a solution within the budget allocated to it in $S$. Let $j_S = \min\{j \in [|\mathcal{H}|] : t(h_j, N) \leq \tau_j\}$ be the index of the first successful heuristic. Following the execution of $h_{j_S}$, the heuristic loop is terminated, and the time spent by $S$ at node $N$ is given by
$$T(S, N) := \sum_{i \in [j_S - 1]} \tau_i + t(h_{j_S}, N).$$
Otherwise, set $T(S, N) := \sum_{i=1}^{k} \tau_i + 1$, where the additional 1 penalizes unsolved nodes. Furthermore, let $\mathcal{N}_S$ denote the set of nodes at which schedule $S$ is successful in finding a solution. Then, we consider the heuristic scheduling problem given by
$$\min_{S \in \mathcal{S}} \sum_{N \in \mathcal{N}_X} T(S, N) \quad \text{s.t.} \quad |\mathcal{N}_S| \geq \alpha |\mathcal{N}_X|. \tag{P$_S$}$$
Here $\alpha \in [0, 1]$ denotes the minimum fraction of nodes for which the schedule must find a feasible solution. Problem (P$_S$) can be formulated as a Mixed-Integer Quadratic Program (MIQP); the complete formulation can be found in Appendix B.

To find such a schedule, we need to know $t(h, N)$ for every heuristic $h$ and node $N$. Hence, when collecting data for the instances in the training set $X$, we track for every B&B node $N$ at which a heuristic $h$ was called the number of iterations $\tau_N^h$ it took $h$ to find a feasible solution; we set $\tau_N^h = \infty$ if $h$ does not succeed at $N$. Formally, we require a training dataset
$$D := \{ (h, N, \tau_N^h) \mid h \in \mathcal{H},\, N \in \mathcal{N}_X,\, \tau_N^h \in \mathbb{R}_{>0} \cup \{\infty\} \}.$$
Section 5 describes a computationally efficient approach for building $D$ using a single B&B run per training instance.

4 Solving the Scheduling Problem

Problem (P$_S$) is a generalization of the Pipelined Set Cover Problem, which is known to be NP-hard (Munagala et al., 2005). As for the MIQP in Appendix B, tackling it with a non-linear integer programming solver is challenging: the MIQP has $O(|\mathcal{H}|\,|\mathcal{N}_X|)$ variables and constraints. Since a single instance may involve thousands of search tree nodes, this leads to an MIQP with hundreds of thousands of variables and constraints even with a handful of heuristics and tens of training instances.
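While (P$_S$) itself is hard to solve, the surrogate objective $T(S, N)$ is cheap to evaluate on the dataset $D$; a sketch under an illustrative data layout (D as a dict mapping (heuristic, node) pairs to iteration counts, math.inf on failure):

```python
import math

def schedule_cost(schedule, D, node):
    """T(S, N): iterations spent by schedule S at node N (a sketch).

    Returns the cost and whether some heuristic succeeded in its budget."""
    spent = 0
    for h, tau in schedule:
        t = D.get((h, node), math.inf)
        if t <= tau:              # first success terminates the loop
            return spent + t, True
        spent += tau              # failure: the full budget was consumed
    return spent + 1, False       # the +1 penalizes unsolved nodes

def evaluate(schedule, D, nodes, alpha):
    """Objective of (P_S) together with the success-rate constraint."""
    results = [schedule_cost(schedule, D, N) for N in nodes]
    total = sum(cost for cost, _ in results)
    feasible = sum(ok for _, ok in results) >= alpha * len(nodes)
    return total, feasible
```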
As mentioned in Related Work, algorithm configuration tools such as SMAC (Hutter et al., 2011) could be used to solve (P$_S$) heuristically. Since SMAC is a sequential algorithm that searches for a good parameter configuration by successively adapting and re-evaluating its best configurations, its running time can be quite substantial. In the following, we present a more efficient approach.

We now direct our attention towards designing an efficient heuristic algorithm for (P$_S$). A similar problem was studied by Streeter (2007) in the context of decision problems. Among other things, the author discusses how to find a schedule of (randomized) heuristics that minimizes the expected time necessary to solve a set of training instances $X$ of a decision problem. Although this setting is somewhat similar to ours, there are multiple aspects in which the two differ significantly:

1. Decision problems are considered instead of MIPs: Solving a MIP is generally much different from solving a decision problem. When using B&B, we normally have to solve many linear subproblems. Since, in theory, every such LP is an opportunity for a heuristic to find a new incumbent, we consider the set of nodes $\mathcal{N}_X$ instead of $X$ as the “instances” we want to solve.

2. A heuristic call can be suspended and resumed: In the work of Streeter, a heuristic can be executed in a “suspend-and-resume model”: If $h$ was executed before, the action $(h, \tau)$ represents continuing a heuristic run for an additional $\tau$ iterations. When $h$ reaches the iteration limit, the run is suspended and its state kept in memory such that it can be resumed later in the schedule. This model is not used in MIP solving due to the challenges of maintaining the states of heuristics in memory. As such, we allow every heuristic to be included in the schedule at most once.

3. Time is used to control the duration of a heuristic run: Controlling time directly is unreliable in practice and can lead to nondeterministic behavior of the solver. Instead, we rely on different proxy measures for different classes of heuristics. Thus, when building a schedule that contains heuristics of distinct types, we need to ensure that these measures are comparable.

Despite these differences, it is useful to examine the greedy scheduling approach proposed in Streeter (2007). A schedule $G$ is built by successively adding the action $(h, \tau)$ that maximizes the ratio of the marginal increase in the number of instances solved to the cost (i.e., $\tau$) of including $(h, \tau)$. As shown in Corollary 2 of Streeter (2007), the greedy schedule $G$ yields a 4-approximation to that version of the scheduling problem. In an attempt to leverage this elegant heuristic for our problem (P$_S$), we describe it formally. Let us denote the greedy schedule by $G := \langle g_1, \ldots, g_k \rangle$. Then, $G$ is defined inductively by setting $G_0 = \langle \rangle$ and $G_j = \langle g_1, \ldots, g_j \rangle$ with
$$g_j = \operatorname*{argmax}_{(h,\tau) \in \mathcal{H}_{j-1} \times T} \frac{\left|\{N \in \mathcal{N}_X^{j-1} \mid \tau_N^h \leq \tau\}\right|}{\tau}.$$
Here, $\mathcal{H}_j$ denotes the set of heuristics that are not in $G_j$, $\mathcal{N}_X^j$ denotes the subset of nodes not solved by $G_j$, and $T$ is the interval generated by all possible iteration limits in $D$, i.e., $T := [\min\{\tau_N^h \mid (h, N, \tau_N^h) \in D\}, \max\{\tau_N^h \mid (h, N, \tau_N^h) \in D\}]$. We stop adding actions $g_j$ when $G_j$ finds a solution at all nodes in $\mathcal{N}_X$ or all heuristics are contained in the schedule, i.e., $\mathcal{H}_j = \emptyset$.
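A minimal sketch of this basic greedy rule, with the candidate budgets discretized to the finite iteration counts observed in $D$ (illustrative code only; each heuristic appears at most once, per restriction 2 above):

```python
import math

def basic_greedy(D, nodes):
    """Streeter-style greedy schedule: repeatedly add the action (h, tau)
    maximizing newly solved nodes per iteration; each heuristic at most once."""
    budgets = sorted({t for t in D.values() if math.isfinite(t)})
    remaining = {h for (h, _) in D}
    unsolved, schedule = set(nodes), []
    while unsolved and remaining:
        best, best_ratio = None, 0.0
        for h in remaining:
            for tau in budgets:
                gain = sum(1 for N in unsolved
                           if D.get((h, N), math.inf) <= tau)
                if gain / tau > best_ratio:
                    best, best_ratio = (h, tau), gain / tau
        if best is None:          # no action solves a new node
            break
        h, tau = best
        schedule.append((h, tau))
        remaining.discard(h)
        unsolved = {N for N in unsolved if D.get((h, N), math.inf) > tau}
    return schedule
```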
1. What is the main contribution of the paper regarding branch-and-bound algorithms? 2. What are the strengths and weaknesses of the proposed approach in learning heuristics for mixed-integer linear programs? 3. How does the reviewer assess the significance of the paper's topic and its potential impact on MILP experts? 4. What are the concerns regarding the feature used for generalization in the paper? 5. What are some suggested ways to provide strong indications of the quality of a good generalization?
Summary Of The Paper Review
Summary Of The Paper Branch-and-bound (B&B) algorithms systematically enumerate all solutions of a combinatorial optimization problem. In order to prune the enumeration tree early, these algorithms periodically solve problem relaxations and execute problem heuristics, thus generating upper and lower bounds on the value of an optimum solution. Usually, the better the bounds are, the smaller the enumeration tree is. In general, we have at our disposal several heuristics that may be run at certain recurring points during the execution of the B&B algorithm. Each heuristic may be run for a given number of iterations, with more iterations increasing the chance of successfully improving the bound at the expense of computation time. Classically, a B&B algorithm would include simple fixed decision rules that govern the execution of the heuristics. It is, however, understood that a more sophisticated decision framework potentially improves the overall running time of the algorithm. Hence, the authors wish to learn when -- and for how many iterations -- best to execute the problem heuristics in the case that B&B is used to solve a mixed-integer linear program (MILP). To that aim, they: derive a formal model for the heuristic scheduling problem, formulate a learning problem as a mixed-integer quadratic program (MIQP), propose a greedy heuristic for the learning problem (the MIQP is too difficult to solve exactly), propose a scheme to efficiently collect training data from B&B runs (B&B runs are very expensive), and seem to use the training data and the training problem to derive a fixed schedule (see comments below) that schedules the heuristics based on the index of a node in the enumeration (B&B) tree. Review The paper certainly covers an interesting and potentially high-impact topic, as having fewer parameters to optimize is something that every MILP expert is very happy about. It is clearly geared towards applications, with little theoretical contribution. The general idea -- learning parameters for MILP solvers -- has been seen before, but a successful execution seems to have been difficult so far. This is because -- in my experience -- (a) even the B&B trees of near-identical instances of the same problem may bear little to no resemblance to one another and (b) decisions that are superb in one B&B tree may prove disastrous in another seemingly very similar B&B tree. Hence, to me a big challenge seems to be to derive meaningful features from a B&B tree that generalize well. Having said that, the feature used for generalization here seems to be the index of a node in the B&B tree. Aside from the general experiments, which seem to largely favor the authors' approach, there is no argument why this measure should be a good feature for distinguishing B&B nodes with different characteristics. In my view, the index is a measure of how far into the procedure a node was processed, and the important question is whether a good choice of heuristics is merely a function of this progress. The answer could also depend on the traversal strategy, since for BFS traversal the node index correlates with the depth of the node in the tree, whereas it could correspond to any random position in the tree for best-first or DFS traversals. I am not entirely sure if I have understood correctly how the generalization works, and I would like the authors' opinion on the above concern.
If my reasoning is correct, however, I would propose the following two ways to provide strong indications of the quality of a good generalization: Could you check how your proposed schedule performs against a purely random schedule? Generally, we cannot expect to understand learned models due to their complexity. In this particular case, however, the learned schedule should be interpretable. In fact, what does a schedule produced by your algorithm look like? Do we see trends like a preference for certain heuristics in early or late nodes of the tree, a progression over a (largely) contiguous sequence of heuristics over time, the number of iterations being proportional to the node index, a correlation between heuristics and the number of iterations (i.e., if the same heuristic appears twice in the schedule, does it have a similar number of iterations?), ... Further researching this point could also become a strong point of this work, since it would enable us to understand when and maybe why certain heuristics should be used. I feel these two points would be reasonable sanity checks of the approach.
NIPS
Title Learning to Schedule Heuristics in Branch and Bound

Abstract
Primal heuristics play a crucial role in exact solvers for Mixed Integer Programming (MIP). While solvers are guaranteed to find optimal solutions given sufficient time, real-world applications typically require finding good solutions early on in the search to enable fast decision-making. While much of MIP research focuses on designing effective heuristics, the question of how to manage multiple MIP heuristics in a solver has not received equal attention. Generally, solvers follow hard-coded rules derived from empirical testing on broad sets of instances. Since the performance of heuristics is problem-dependent, using these general rules for a particular problem might not yield the best performance. In this work, we propose the first data-driven framework for scheduling heuristics in an exact MIP solver. By learning from data describing the performance of primal heuristics, we obtain a problem-specific schedule of heuristics that collectively find many solutions at minimal cost. We formalize the learning task and propose an efficient algorithm for computing such a schedule. Compared to the default settings of a state-of-the-art academic MIP solver, we are able to reduce the average primal integral by up to 49% on two classes of challenging instances.

1 Introduction
Many decision-making problems arising from real-world applications can be formulated using Mixed Integer Programming (MIP). The Branch and Bound (B&B) framework is a general approach to solving MIPs to global optimality. In recent years, the idea of using machine learning (ML) to improve optimization techniques has gained renewed interest. There exist various approaches to tackling different aspects of the solving process using classical ML techniques. For instance, ML has been used to find good parameter configurations for a solver (Hutter et al., 2009, 2011), improve node (He et al., 2014), variable (Khalil et al., 2016; Gasse et al., 2019; Nair et al., 2020), or cut (Baltean-Lugojan et al., 2019) selection strategies, and detect decomposable structures (Kruber et al., 2017).

35th Conference on Neural Information Processing Systems (NeurIPS 2021).

Even though exact MIP solvers aim for global optimality, finding good feasible solutions fast is at least as important, especially in the presence of a time limit. The use of primal heuristics is crucial to ensuring good primal performance in modern solvers. For instance, Berthold (2013a) showed that the primal bound (the objective value of the best solution) improved on average by around 80% when primal heuristics were used. Generally, a solver includes a variety of primal heuristics, where each class of heuristics (e.g., rounding, diving, large-neighborhood search) exploits a different idea to find good solutions. During B&B, some of these heuristics are executed successively at each node of the search tree, and improved solutions, if any, are reported back to the solver. An extensive overview of different primal heuristics, their computational costs, and their impact on MIP solving can be found in Lodi (2013); Berthold (2013b, 2018). Since most heuristics can be very costly, it is necessary to be strategic about the order in which the heuristics are executed and the number of iterations allocated to each. Such decisions are often made by following hard-coded rules derived from testing on broad benchmark test sets.
While these static rules yield good performance on average, their performance can be far from satisfactory when considering specific families of instances. To illustrate this fact, Figure 1 compares the solution success rates, i.e., the fraction of calls to a heuristic where a solution was found, of different primal heuristics for two problem classes: the Generalized Independent Set Problem (GISP) (Hochbaum and Pathria, 1997; Colombi et al., 2017) and the Fixed-Charge Multicommodity Network Flow Problem (FCMNF) (Hewitt et al., 2010).

In this paper, we propose a data-driven approach to systematically improve the use of primal heuristics in B&B. By learning from data about the duration and success of every heuristic call for a set of training instances, we construct a schedule of heuristics that specifies the ordering and duration for which each heuristic should be executed to obtain good primal solutions early on. As a result, we are able to significantly improve the use of primal heuristics, as shown in Figure 2 for one MIP instance.

Contributions. Our main contributions can be summarized as follows:
1. We formalize the learning task of finding an effective, cost-efficient heuristic schedule on a training dataset as a Mixed Integer Quadratic Program (Section 3);
2. We propose an efficient heuristic for solving the training (scheduling) problem and a scalable data collection strategy (Sections 4 and 5);
3. We perform extensive computational experiments on a class of challenging instances and demonstrate the benefits of our approach (Section 6).

Related Work. Optimizing the use of primal heuristics is a topic of ongoing research. For instance, by characterizing nodes with different features, Khalil et al. (2017) propose an ML method to decide when to execute heuristics to improve primal performance. After that decision, all heuristics are executed according to the predefined rules set by the solver. Hendel (2018) and Hendel et al. (2018) use bandit algorithms for the online learning of a heuristic ordering. The method proposed in this paper jointly adapts the ordering and the duration for which each heuristic runs. Primal performance can also be improved using algorithm configuration (Hutter et al., 2009, 2011), a technique which is generally computationally expensive, since it relies on many black-box evaluations of the solver as parameter configurations are evaluated, and which does not exploit detailed information about the effect of parameter values on performance, e.g., how the parameters of primal heuristics affect their success rates. There has also been work on how to schedule algorithms optimally. Kadioglu et al. (2011) solved the problem for a portfolio of different MIP solvers, whereas Hoos et al. (2014) focused on Answer Set Programming. Furthermore, Seipp et al. (2015) propose an algorithm that greedily finds a schedule of different parameter configurations for automated planning.

2 Preliminaries
Let us consider a MIP of the form

$$\min_{x \in \mathbb{R}^n} c^T x \quad \text{s.t.} \quad Ax \le b,\; x_i \in \mathbb{Z} \;\; \forall i \in I, \tag{$P_{\mathrm{MIP}}$}$$

with matrix $A \in \mathbb{R}^{m \times n}$, vectors $c \in \mathbb{R}^n$, $b \in \mathbb{R}^m$, and a non-empty index set $I \subseteq [n]$ for integer variables. A MIP can be solved using B&B, a tree search algorithm that finds an optimal solution to $(P_{\mathrm{MIP}})$ by recursively partitioning the original problem into linear subproblems. The nodes in the resulting search tree correspond to these subproblems. Throughout this work, we assume that each node has a unique index that identifies the node even across B&B trees obtained for different MIP instances.
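For concreteness, a MIP of the form $(P_{\mathrm{MIP}})$ can be sketched in a few lines of PySCIPOpt (the Python interface to SCIP); the toy data below is illustrative only and not taken from the paper:

```python
from pyscipopt import Model

model = Model("toy_pmip")
# two integer variables (I = {1, 2}), minimizing c^T x
x1 = model.addVar(vtype="I", name="x1", lb=0)
x2 = model.addVar(vtype="I", name="x2", lb=0)
model.setObjective(3 * x1 + 2 * x2, sense="minimize")
# rows of Ax <= b (written in >= form for readability)
model.addCons(x1 + 2 * x2 >= 4)
model.addCons(3 * x1 + x2 >= 6)
model.optimize()  # runs branch and bound, including SCIP's primal heuristics
```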
For a set of instances $X$, we denote the union of the corresponding node indices by $N_X$.

Primal Performance Metrics. Since we are interested in finding good solutions fast, we consider a collection of different metrics for primal performance. Besides statistics like the time to the first/best solution and the solution/incumbent success rate, we mainly focus on the primal integral (Berthold, 2013a) as a comprehensive measure of primal performance. Intuitively, this metric can be interpreted as a normalized average of the incumbent value over time. A formal definition can be found in Appendix A. Figure 2 gives an example of the primal gap function. The primal integrals are the areas under each of the curves. It is easy to see that finding near-optimal incumbents earlier shrinks the area under the graph of the primal gap, resulting in a smaller primal integral.

3 Data-Driven Heuristic Scheduling
Since the performance of heuristics is highly problem-dependent, it is natural to consider data-driven approaches for optimizing the use of primal heuristics for the instances of interest. Concretely, we consider the following practically relevant setting. We are given a set of heuristics $\mathcal{H}$ and a homogeneous set of training instances $X$ from the same problem class. In a data collection phase, we are allowed to execute the B&B algorithm on the training instances, observing how each heuristic performs at each node of each search tree. At a high level, our goal is then to leverage this data to obtain a schedule of heuristics that minimizes a primal performance metric. The specifics of how such data collection is carried out will be discussed later on in the paper. First, let us examine the decisions that could potentially benefit from a data-driven approach. Our discussion is inspired by an in-depth analysis of how the open-source MIP solver SCIP (Gamrath et al., 2020) manages primal heuristics. However, our approach is generic and is likely to apply to other solvers.

Controlling the Order. One important degree of freedom in scheduling heuristics is the order in which a set of heuristics $\mathcal{H}$ is executed by the solver at a given node. This can be controlled by assigning a priority to each heuristic. In a heuristic loop, the solver then iterates over the heuristics in decreasing priority. The loop is terminated if a heuristic finds a new incumbent solution that cuts off the current node. As such, an ordering $\langle h_1, \ldots, h_k \rangle$ that prioritizes effective heuristics can lead to time savings without sacrificing primal performance.

Controlling the Duration. Furthermore, solvers use working limits to control the computational effort spent on heuristics. Consider diving heuristics as an example. Increasing the maximal diving depth increases the likelihood of finding an integer feasible solution. At the same time, this increases the overall running time. Figure 3 visualizes this cost-benefit trade-off empirically for three different diving heuristics, highlighting the need for a careful "balancing act". For a heuristic $h \in \mathcal{H}$, let $\tau \in \mathbb{R}_{>0}$ denote its time budget. Then, we are interested in finding a schedule $S := \langle (h_1, \tau_1), \ldots, (h_k, \tau_k) \rangle$, $h_i \in \mathcal{H}$. Since controlling the time budget directly can be unreliable and lead to nondeterministic behavior in practice (see Appendix E for details), a deterministic proxy measure is preferable. For diving heuristics, the maximal diving depth provides a suitable measure, as demonstrated by Figure 3.
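The two objects just introduced admit a compact sketch. The following Python fragment encodes the primal-gap integral under the convention of Berthold (2013a) and a schedule with proxy budgets; all names and values are illustrative assumptions, not the authors' implementation:

```python
import math
from typing import List, Optional, Tuple

def primal_gap(pb: Optional[float], opt: float) -> float:
    """Gap in [0, 1] between incumbent value pb and best known value opt."""
    if pb is None or pb * opt < 0:
        return 1.0                      # no incumbent yet, or opposite signs
    if pb == opt:
        return 0.0
    return abs(opt - pb) / max(abs(opt), abs(pb))

def primal_integral(events: List[Tuple[float, float]], opt: float, t_end: float) -> float:
    """Integrate the primal-gap step function over [0, t_end];
    `events` holds (time, incumbent value) pairs of incumbent updates."""
    area, t_prev, pb = 0.0, 0.0, None
    for t, val in sorted(events):
        area += primal_gap(pb, opt) * (t - t_prev)
        t_prev, pb = t, val
    return area + primal_gap(pb, opt) * (t_end - t_prev)

# A schedule S = <(h1, tau1), ..., (hk, tauk)>, with each budget tau_i given
# in the heuristic's deterministic proxy unit (e.g., maximal diving depth).
Schedule = List[Tuple[str, int]]
schedule: Schedule = [("fracdiving", 12), ("coefdiving", 30)]
```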
Similar measures can be used for other types of heuristics, as we will demonstrate with Large Neighborhood Search heuristics in Section 6. In general, we will refer to $\tau_i$ as the maximal number of iterations that is allocated to a heuristic $h_i$ in schedule $S$.

Deriving the Scheduling Problem. Having argued for order and duration as suitable control decisions, we will now formalize our heuristic scheduling problem. Ideally, we would like to construct a schedule $S$ that minimizes the primal integral, averaged over the training set $X$. Unfortunately, it is very difficult to optimize the primal integral directly, as it depends on the sequence of incumbents found over time during B&B. It also depends on the way the search tree is explored, which is affected by pruning, further complicating any attempt at directly optimizing this primal metric. We address this difficulty by considering a more tractable surrogate objective. Recall that $N_X$ denotes the collection of search tree nodes of the set of training instances $X$. We will construct a schedule $S$ that finds feasible solutions for a large fraction of the nodes in $N_X$, while also minimizing the number of iterations expended by schedule $S$. Note that we consider feasible solutions instead of incumbents here: this way, we are able to obtain more data faster, since a heuristic finds a feasible solution more often than a new incumbent. The framework we propose in the following can handle incumbents instead, but we have found no benefit in doing so in preliminary experiments.

For a heuristic $h$ and node $N$, denote by $t(h, N)$ the iterations necessary for $h$ to find a solution at node $N$, and set $t(h, N) = \infty$ if $h$ does not succeed at $N$. Now suppose a schedule $S$ is successful at node $N$, i.e., some heuristic finds a solution within the budget allocated to it in $S$. Let $j_S = \min\{j \in [|\mathcal{H}|] : t(h_j, N) \le \tau_j\}$ be the index of the first successful heuristic. Following the execution of $h_{j_S}$, the heuristic loop is terminated, and the time spent by $S$ at node $N$ is given by

$$T(S, N) := \sum_{i \in [j_S - 1]} \tau_i + t(h_{j_S}, N).$$

Otherwise, set $T(S, N) := \sum_{i=1}^{k} \tau_i + 1$, where the additional 1 penalizes unsolved nodes. Furthermore, let $N_S$ denote the set of nodes at which schedule $S$ is successful in finding a solution. Then, we consider the heuristic scheduling problem given by

$$\min_{S \in \mathbb{S}} \; \sum_{N \in N_X} T(S, N) \quad \text{s.t.} \quad |N_S| \ge \alpha |N_X|. \tag{$P_S$}$$

Here $\alpha \in [0, 1]$ denotes the minimum fraction of nodes for which the schedule must find a feasible solution. Problem $(P_S)$ can be formulated as a Mixed-Integer Quadratic Program (MIQP); the complete formulation can be found in Appendix B. To find such a schedule, we need to know $t(h, N)$ for every heuristic $h$ and node $N$. Hence, when collecting data for the instances in the training set $X$, we track, for every B&B node $N$ at which a heuristic $h$ was called, the number of iterations $\tau_N^h$ it took $h$ to find a feasible solution; we set $\tau_N^h = \infty$ if $h$ does not succeed at $N$. Formally, we require a training dataset

$$D := \left\{ (h, N, \tau_N^h) \;\middle|\; h \in \mathcal{H},\; N \in N_X,\; \tau_N^h \in \mathbb{R}_{>0} \cup \{\infty\} \right\}.$$

Section 5 describes a computationally efficient approach for building $D$ using a single B&B run per training instance.

4 Solving the Scheduling Problem
Problem $(P_S)$ is a generalization of the Pipelined Set Cover Problem, which is known to be NP-hard (Munagala et al., 2005). As for the MIQP in Appendix B, tackling it using a non-linear integer programming solver is challenging: the MIQP has $O(|\mathcal{H}||N_X|)$ variables and constraints.
Since a single instance may involve thousands of search tree nodes, this leads to an MIQP with hundreds of thousands of variables and constraints, even with a handful of heuristics and tens of training instances. As mentioned in Related Work, algorithm configuration tools such as SMAC (Hutter et al., 2011) could be used to solve $(P_S)$ heuristically. Since SMAC is a sequential algorithm that searches for a good parameter configuration by successively adapting and re-evaluating its best configurations, its running time can be quite substantial. In the following, we present a more efficient approach.

We now direct our attention towards designing an efficient heuristic algorithm for $(P_S)$. A similar problem was studied by Streeter (2007) in the context of decision problems. Among other things, the author discusses how to find a schedule of (randomized) heuristics that minimizes the expected time necessary to solve a set of training instances $X$ of a decision problem. Although this setting is somewhat similar to ours, there exist multiple aspects in which they differ significantly:
1. Decision problems are considered instead of MIPs: Solving a MIP is generally much different from solving a decision problem. When using B&B, we normally have to solve many linear subproblems. Since in theory every such LP is an opportunity for a heuristic to find a new incumbent, we consider the set of nodes $N_X$ instead of $X$ as the "instances" we want to solve.
2. A heuristic call can be suspended and resumed: In the work of Streeter, a heuristic can be executed in a "suspend-and-resume model": If $h$ was executed before, the action $(h, \tau)$ represents continuing a heuristic run for an additional $\tau$ iterations. When $h$ reaches the iteration limit, the run is suspended and its state kept in memory such that it can be resumed later in the schedule. This model is not used in MIP solving due to the challenges of maintaining the states of heuristics in memory. As such, we allow every heuristic to be included in the schedule at most once.
3. Time is used to control the duration of a heuristic run: Controlling time directly is unreliable in practice and can lead to nondeterministic behavior of the solver. Instead, we rely on different proxy measures for different classes of heuristics. Thus, when building a schedule that contains heuristics of distinct types, we need to ensure that these measures are comparable.

Despite these differences, it is useful to examine the greedy scheduling approach proposed in Streeter (2007). A schedule $G$ is built by successively adding the action $(h, \tau)$ that maximizes the ratio of the marginal increase in the number of instances solved to the cost (i.e., $\tau$) of including $(h, \tau)$. As shown in Corollary 2 of Streeter (2007), the greedy schedule $G$ yields a 4-approximation to that version of the scheduling problem. In an attempt to leverage this elegant heuristic for our problem $(P_S)$, we will describe it formally. Let us denote the greedy schedule by $G := \langle g_1, \ldots, g_k \rangle$. Then, $G$ is defined inductively by setting $G_0 = \langle\rangle$ and $G_j = \langle g_1, \ldots, g_j \rangle$ with

$$g_j = \operatorname*{argmax}_{(h, \tau) \in \mathcal{H}_{j-1} \times T} \frac{\left|\{N \in N_X^{j-1} \mid \tau_N^h \le \tau\}\right|}{\tau}.$$

Here, $\mathcal{H}_j$ denotes the set of heuristics that are not in $G_j$, $N_X^j$ denotes the subset of nodes not solved by $G_j$, and $T$ is the interval generated by all possible iteration limits in $D$, i.e., $T := [\min\{\tau_N^h \mid (h, N, \tau_N^h) \in D\}, \max\{\tau_N^h \mid (h, N, \tau_N^h) \in D\}]$. We stop adding actions $g_j$ when $G_j$ finds a solution at all nodes in $N_X$ or all heuristics are contained in the schedule, i.e., $\mathcal{H}_j = \emptyset$.
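A compact Python sketch of the surrogate cost $T(S, N)$ from Section 3 and of the greedy selection ratio, assuming the dataset $D$ is stored as a dictionary mapping (heuristic, node) pairs to $\tau_N^h$ (with math.inf on failure); this is an illustration, not the authors' implementation:

```python
import math

def time_spent(schedule, node, data, penalty=1):
    """T(S, N): budgets of unsuccessful actions plus t(h_j, N) of the first
    success; all budgets plus a penalty if the schedule fails at node N."""
    total = 0
    for h, tau in schedule:
        t = data.get((h, node), math.inf)
        if t <= tau:
            return total + t            # first success terminates the heuristic loop
        total += tau
    return total + penalty

def greedy_ratio(h, tau, unsolved, data):
    """Marginal number of newly solved nodes per unit of budget tau."""
    solved = sum(1 for n in unsolved if data.get((h, n), math.inf) <= tau)
    return solved / tau
```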
Unfortunately, the resulting schedule can perform arbitrarily badly in our setting: Assume we have $|N_X| = 100$ and only one heuristic $h$. This heuristic solves one node in just one iteration and requires 100 iterations for each of the other 99 nodes. Following the greedy approach, the resulting schedule would be $G = \langle (h, 1) \rangle$, since $\frac{1}{1} > \frac{99}{100}$. Whenever $\alpha > 0.01$, $G$ would be infeasible for our constrained problem $(P_S)$. Since we are not allowed to add a heuristic more than once, this cannot be fixed with the current algorithm.

To avoid this situation, we propose the following modification. Instead of only considering the heuristics that are not in $G_{j-1}$ when choosing the next action $g_j$, we also consider the option to run the last heuristic $h_{j-1}$ of $G_{j-1}$ for longer. That is, we allow choosing $(h_{j-1}, \tau)$ with $\tau > \tau_{j-1}$. Note that the cost of adding $(h_{j-1}, \tau)$ to the schedule is not $\tau$, but $\tau - \tau_{j-1}$, since we decide to run $h_{j-1}$ for $\tau - \tau_{j-1}$ iterations longer and not to rerun $h_{j-1}$ for $\tau$ iterations. Furthermore, when including different classes of heuristics in the schedule, the respective time measures are not necessarily comparable. We observed that not taking the difference in iteration cost into account led to an increase of the primal integral of up to 23% compared to default SCIP. To circumvent this problem, we use the average time per iteration to normalize different notions of iterations. We denote the average cost of an iteration by $t_{avg}^h$ for heuristic $h$. Note that $t_{avg}^h$ can be easily computed by tracking the running time of a heuristic during data collection. Hence, we redefine $g_j$ and obtain

$$g_j = \operatorname*{argmax}_{(h, \tau) \in A_{j-1}} \frac{\left|\{N \in N_X^{j-1} \mid \tau_N^h \le \tau\}\right|}{c_{j-1}(h, \tau)},$$

with $A_j := (\mathcal{H}_j \times T) \cup \{(h_j, \tau) \mid \tau > \tau_j,\; \tau \in T\}$ and

$$c_j(h, \tau) := \begin{cases} t_{avg}^h \, \tau, & \text{if } h \ne h_j, \\ t_{avg}^h \, (\tau - \tau_j), & \text{otherwise.} \end{cases}$$

We set $A_0 := \mathcal{H} \times T$ and $c_0(h, \tau) = t_{avg}^h \tau$. With this modification, we obtain the schedule $G = \langle (h, 100) \rangle$ (which solves all 100 nodes) in the above example. Additionally, it is also possible to consider the quality of the found solutions when choosing the next action $g_j$. Since we observed that the resulting schedules increased the primal integral by up to 11%, we omit this here. Finally, note that this greedy procedure still does not explicitly enforce that the schedule is successful at a fraction of at least $\alpha$ nodes. In our experiments, however, we observe that the resulting schedules reach a success rate of 98% or above. The final algorithm can be found in Appendix C.

Example. Figure 4 shows an example of how we obtain a schedule with three heuristics and three nodes. As indicated by the left figure, the dataset is given by $D = \{(h_1, N_1, 1), (h_1, N_2, \infty), (h_1, N_3, \infty), (h_2, N_1, 4), (h_2, N_2, 3), (h_2, N_3, 3), (h_3, N_1, \infty), (h_3, N_2, 4), (h_3, N_3, 2)\}$. Let us now assume that all three heuristics have the same costs, i.e., $t_{avg}^{h_1} = t_{avg}^{h_2} = t_{avg}^{h_3}$. We build the schedule $G$ as follows. First, we add action $(h_1, 1)$, since $h_1$ solves one node with only one iteration, yielding the best ratio. Since $N_1$ is "solved" by the current schedule and $h_1$ cannot solve any other nodes, both $N_1$ and $h_1$ do not need to be considered anymore. Among the remaining possibilities, the action $(h_2, 3)$ is the best, since $h_2$ solves both nodes in three iterations, yielding a ratio of $2/3$. In contrast, executing $h_3$ for two and four iterations, respectively, yields a ratio of $1/2$. Hence, we add $(h_2, 3)$ to $G$ and obtain $G = \langle (h_1, 1), (h_2, 3) \rangle$. The schedule then solves all three nodes, as shown on the right of Figure 4.
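The following sketch runs this modified greedy rule on the example dataset of Figure 4, assuming unit per-iteration costs; it recovers the schedule G = ⟨(h1, 1), (h2, 3)⟩ derived above:

```python
import math

D = {("h1", "N1"): 1, ("h1", "N2"): math.inf, ("h1", "N3"): math.inf,
     ("h2", "N1"): 4, ("h2", "N2"): 3, ("h2", "N3"): 3,
     ("h3", "N1"): math.inf, ("h3", "N2"): 4, ("h3", "N3"): 2}
t_avg = {"h1": 1.0, "h2": 1.0, "h3": 1.0}
nodes = {"N1", "N2", "N3"}
budgets = sorted({t for t in D.values() if t < math.inf})

schedule = []
while nodes:
    last_h, last_tau = schedule[-1] if schedule else (None, 0)
    used = {h for h, _ in schedule}
    best, best_ratio = None, 0.0
    # candidates: unused heuristics, plus extending the last heuristic's budget
    for h in (set(t_avg) - used) | ({last_h} if last_h else set()):
        for tau in budgets:
            cost = t_avg[h] * (tau - last_tau if h == last_h else tau)
            if cost <= 0:
                continue
            solved = sum(1 for n in nodes if D.get((h, n), math.inf) <= tau)
            if solved / cost > best_ratio:
                best, best_ratio = (h, tau), solved / cost
    if best is None:
        break                            # no action solves any remaining node
    h, tau = best
    if h == last_h:
        schedule[-1] = (h, tau)          # extend the last heuristic's budget
    else:
        schedule.append((h, tau))
    nodes -= {n for n in nodes if D.get((h, n), math.inf) <= tau}

print(schedule)  # [('h1', 1), ('h2', 3)]
```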
Note that this schedule is an optimal solution of $(P_S)$ for $\alpha > 1/3$.

5 Data Collection
The scheduling approach described thus far rests on the availability of a dataset $D$. Among other things, each entry in $D$ stores the number of iterations $\tau_N^h$ required by heuristic $h$ to find a feasible solution at node $N$. This piece of information must be collected by executing the heuristic and observing its performance. Two main challenges arise in collecting such a dataset for multiple heuristics:
1. Efficient data collection: Solving MIPs by B&B remains computationally expensive, even given the sophisticated techniques implemented in today's solvers. This poses difficulties for ML approaches that create a single reward signal per MIP evaluation, which may take from several minutes up to hours. In other words, even with a handful of heuristics, i.e., a small set $\mathcal{H}$, it is prohibitive to run B&B once for each heuristic-training instance pair in order to construct the dataset $D$.
2. Obtaining unbiased data: Executing multiple heuristics at each node of the search tree during data collection can have dangerous side effects: if a heuristic finds an incumbent, subsequent heuristics are no longer executed at the same node, as described in Section 3.

We address the first point by using a specially crafted version of the MIP solver that collects multiple reward signals for the execution of multiple heuristics per single MIP evaluation during the training phase. As a result, we obtain a large number of data points that scales with the running time of the MIP solves. This has the clear advantage that the efficiency of our data collection does not automatically decrease when the time to evaluate a single MIP increases for more challenging problems. To prevent bias from the mutual interaction of different heuristics during training, we engineered the MIP solver to be executed in a special shadow mode, where heuristics are called in a sandbox environment and interaction with the main solving path is maximally reduced. In particular, this means that new incumbents and primal bounds are not communicated back, but only recorded as training data. This setting is an improved version of the shadow mode introduced in Khalil et al. (2017). As a result of these measures, we have instrumented the SCIP solver in a way that allows for the collection of a proper dataset $D$ with a single run of the B&B algorithm per training instance.

6 Computational Results
The code we use for data collection and scheduling is publicly available.1

6.1 Heuristics and Instances
We can build a schedule containing arbitrary heuristics as long as there is a time measure available. We focus on two broad groups of complex heuristics: Diving and Large Neighborhood Search (LNS). Both classes are much more computationally expensive than simpler heuristics such as rounding (for which scheduling is not necessary and executions are extremely fast), but are generally also more likely to find (good) solutions (Berthold, 2006). That is why it is particularly important to schedule these heuristics most economically.

Diving Heuristics. Diving heuristics examine a single probing path by successively fixing variables according to a specific rule. There are multiple ways of controlling the duration of a dive. After careful consideration, we decided on using the maximum diving depth to limit the cost of a call to a diving heuristic: it is both related to the effort spent by the heuristic and to its likelihood of success.

LNS Heuristics.
These heuristics first build a neighborhood of some reference point, which is then searched for improving solutions by solving a sub-MIP. To control the duration, we choose to limit the number of nodes in the sub-MIP. The idea behind this measure is similar to limiting the diving depth of diving heuristics: in both cases, we control the number of subproblems a heuristic considers within its execution. Nevertheless, the two measures are not directly comparable: the most expensive LNS heuristic was on average around 892 times more expensive than the cheapest diving heuristic. To summarize, we schedule 16 primal heuristics: ten diving and six LNS heuristics. By controlling this set, we cover about 2/3 of the more complex heuristics implemented in SCIP. The remaining heuristics are executed after the schedule according to their default settings.

1 https://github.com/antoniach/heuristic-scheduling

We focus on two problem classes which are challenging on the primal side: the Generalized Independent Set Problem (GISP) (Hochbaum and Pathria, 1997; Colombi et al., 2017) and the Fixed-Charge Multicommodity Network Flow Problem (FCMNF) (Hewitt et al., 2010). For GISP, we generate two types of instances: the first one takes graphs from the 1993 DIMACS Challenge, which is also used by Khalil et al. (2017) and Colombi et al. (2017) (120 for training and testing), and the second type uses randomly generated graphs as a base (25 for training and 10 for testing). The latter is also used to obtain FCMNF instances (20 for training and 120 for testing). A detailed description of the problems and how we generate and partition the instances can be found in Appendix D.

6.2 Results
To study the performance of our approach, we used the state-of-the-art solver SCIP 7.0 (Gamrath et al., 2020) with CPLEX 12.10.0.0 as the underlying LP solver. To this end, we needed to modify SCIP's source code to collect data as described in Section 5, as well as to control heuristic parameters that are not already exposed by default. For our experiments, we used a Linux cluster of Intel Xeon CPU E5-2660 v3 2.60GHz machines with 25MB cache and 128GB main memory. The time limit in all experiments was set to two hours; for data collection, it was set to four hours. Because the primal integral depends on time, we ran one process at a time on every machine, allowing for accurate time measurements. Furthermore, since MIP solver performance can be highly sensitive to even small and seemingly performance-neutral perturbations during the solving process (Lodi and Tramontani, 2013), we implemented an exhaustive testing framework that uses four random seeds and evaluates schedules trained on one data distribution against other data distributions, a form of transfer learning.

The main baseline we compare against is default SCIP. Note that since the adaptive diving and LNS methods presented in Hendel (2018) and Hendel et al. (2018) are included in default SCIP as heuristics, we implicitly compare to these methods when comparing to default SCIP; improvements due to our method reflect improvements over Hendel's approach. Furthermore, we also consider SCIP_TUNED, a hand-tuned version of SCIP's default settings for GISP.2 Since in practice a MIP expert would try to manually optimize some parameters when dealing with a homogeneous set of instances, we emulated that process to create an even stronger baseline to compare against.

GISP – Random graph instances.
Table 1 (rows DIVING) shows partial results of the transfer learning experiments for schedules with diving heuristics (see Table 3 in Appendix F for the complete table). Our scheduling framework yields a significant improvement w.r.t. the primal integral on all test sets. Since this improvement is consistent over all schedules and test sets, we are able to confirm that the behavior actually comes from our procedure. Especially remarkable is the fact that the schedules trained on smaller instances also perform well on much larger instances. Furthermore, we can see that the schedules perform especially well on instances of increasing difficulty (size). This behavior is intuitive: since our method aims to improve the primal performance of a solver, there is more room for improvement when an instance is more challenging on the primal side. Over all test sets, the schedules terminated with a strictly better primal integral on 69–76% and with a strictly better primal bound on 59–70% of the instances compared to SCIP_TUNED (see Table 4 in Appendix F for details). In addition, the number of incumbents found by the heuristics considered in the schedule increased significantly: 49–61% of the incumbents were found by heuristics in the schedule, compared to only 33% when running with default SCIP (see Table 4 in Appendix F for details).

Table 1 (rows DIVING+LNS) shows the transfer learning experiments for schedules containing diving and LNS heuristics. By including both types of heuristics, we are able to improve over the diving-only schedule in around half of the cases, since on the instances we consider, diving seems to perform significantly better than LNS. Furthermore, we also observe less consistent performance among the schedules, which leads us to the conclusion that LNS's behavior is harder to predict. How to further improve our scheduling procedure to better fit LNS is part of future work.

GISP – Finding a schedule with SMAC. As mentioned earlier, we can also find a schedule by using the algorithm configuration tool SMAC. To test SMAC's performance on the random graph instances, we trained ten SMAC schedules, each with a different random seed, on each of the five training sets. We used the primal integral as a performance metric. To make it easier for SMAC, we only considered diving heuristics. We gave SMAC the same total computational time for training as we used in data collection: with 25 training instances per set using a four-hour time limit each, this comes to 100 hours per training set and schedule.

2 We set the frequency offset to 0 for all diving heuristics.

Note that since SMAC runs sequentially, training the SMAC schedules took over four days per schedule, whereas training a schedule following the greedy algorithm only took four hours given enough machines. To pick the best-performing SMAC schedule for each training set, we ran all ten schedules on the test set of the same size as the corresponding training set and chose the best-performing one. The results can be found in Table 1 (rows SMAC). As we can see, on all test sets, all schedules are significantly better than default SCIP. However, when comparing these results to the performance of the greedy schedules, we can see that SMAC performs worse on average. Over all five test sets, the SMAC schedules terminated with a strictly better primal integral on only 36–54% and with a strictly better primal bound on only 37–55% of the instances compared to their greedy counterparts.

GISP – DIMACS graph instances.
The first three columns of Table 2 summarize the results on the instances derived from DIMACS graphs. As we can see, the schedule setting dominates default SCIP in all metrics, but an especially drastic improvement can be obtained w.r.t. the primal integral: the schedule reduces the primal integral by 49%. Furthermore, 92% of instances terminated with a strictly better primal integral and 57% with a strictly better primal bound. Even though SCIP_TUNED finds the best incumbent faster than the schedule, the latter terminates with a better primal bound (GISP is a maximization problem), explaining the small increase in time. When looking at the total time spent in heuristics, we see that heuristics run significantly shorter but with more success: on average, the incumbent success rate is higher compared to default SCIP. The last two rows show that the learned schedule not only improves the primal side of the problem but also translates into better overall performance: SCHEDULE significantly dominates DEFAULT in the gap at termination as well as in the primal-dual integral.

Compared to the results of the method in Khalil et al. (2017), where node features were used to decide if a heuristic should be executed, our scheduling procedure yields competitive performance: on average, their method reduced both the primal integral and the time to the best incumbent by 60% (our method: 49% and 47%). It is important to note here that our baseline (SCIP 7.0) is much faster than theirs (SCIP 3.2): for DIMACS instances, default SCIP terminated with a gap of 201.95% in Khalil et al. (2017), compared to 144.59% in our experiments. Furthermore, SCIP's technical reports show that version 7.0 is 58% faster than version 3.2 on a standard benchmark test set.

FCMNF. The last three columns of Table 2 summarize the results on the FCMNF instances. For this problem as well, we can see that the schedule setting dominates both DEFAULT and SCIP_TUNED in almost all metrics. In particular, we are able to almost double the number of solutions found and triple the incumbent success rate. Even though the improvement in the primal integral is not as drastic as we observed with GISP, it is still consistent over the whole test set: 62% of the instances terminated with a strictly better primal integral and 92% with a strictly better primal bound. Similar to the GISP results, SCHEDULE needs more time than DEFAULT to find the best incumbent, since it again terminates with a better primal bound (FCMNF is a minimization problem). Finally, it is important to note that the trained schedules differ significantly from SCIP's default settings for all training sets. The improvements we observed when using these schedules support our starting hypothesis, namely that the way default MIP solver parameters are set does not yield the best performance when considering specific use cases.

7 Conclusion and Discussion
In this work, we propose a data-driven framework for scheduling primal heuristics in a MIP solver such that primal performance is optimized. Central to our approach is a novel formulation of the learning task as a scheduling problem, an efficient data collection procedure, and a fast, effective heuristic for solving the learning problem on a training dataset. A comprehensive experimental evaluation shows that our approach consistently learns heuristic schedules with better primal performance than SCIP's default settings.
Furthermore, by replacing our heuristic algorithm with the algorithm configuration tool SMAC in our scheduling framework, we are able to obtain a weaker, yet still significant, performance improvement w.r.t. SCIP's default settings. Together with the prohibitive computational cost of SMAC, this leads us to conclude that, for our heuristic scheduling problem, the proposed heuristic algorithm constitutes an efficient alternative to existing methods.

A possible limitation of our approach is that it produces a single, "one-size-fits-all" schedule for a class of training instances. It is thus natural to wonder whether alternative formulations of the learning problem leveraging additional contextual data about an input MIP instance and/or a heuristic can be useful. We note that learning a mapping from the space of MIP instances to the space of possible schedules is not trivial. The latter is a highly structured output space that involves both the permutation over heuristics and their respective iteration limits. The approach proposed here is much simpler in nature, which makes it easy to implement and incorporate into a sophisticated MIP solver. Although we have framed the heuristic scheduling problem in ML terms, we are yet to analyze the learning-theoretic aspects of the problem. More specifically, our approach is justified on empirical grounds in Section 6, but we are yet to attempt an analysis of potential generalization guarantees. We view the recent foundational results by Balcan et al. (2019) as a promising framework that may apply to our setting, as it has been used for the branching problem in MIP (Balcan et al., 2018).

Disclosure of Funding
This work was partially funded by the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy – The Berlin Mathematics Research Center MATH+ (EXC-2046/1, project ID: 390685689), and by the German Federal Ministry of Education and Research (BMBF) within the Research Campus MODAL (grant numbers 05M14ZAM, 05M20ZBM). Elias B. Khalil acknowledges support from the Scale AI Research Chair Program and an IVADO Postdoctoral Scholarship.
1. What is the main contribution of the paper on learning a schedule of heuristics for MIP solvers?
2. What are the strengths of the proposed approach, particularly in comparison to previous work?
3. How does the reviewer assess the clarity and quality of the writing in the paper?
4. Are there any concerns or suggestions regarding the computational setup and results?
5. What are some potential ablation experiments that could provide further insight into the effectiveness of the proposed method?
Summary Of The Paper Review
Summary Of The Paper
Modern MIP solvers run multiple heuristics throughout their branch-and-bound trees using a fixed order. This paper introduces the problem of learning a schedule of heuristics, that is, the order and computational effort spent by each heuristic, with the goal of reducing the solving time of a MIP solver given a distribution of instances to train on. The learning task is formalized as a Mixed-Integer Quadratic Program but solved with a greedy heuristic. The paper also discusses an efficient approach to collect training data. Computational experiments are done on the Generalized Independent Set Problem and the Fixed-Charge Multicommodity Network Flow Problem, using only diving and LNS heuristics, and they show that this learning method significantly improves upon SCIP with tuned hyperparameters, and produces similar or slightly better results than a schedule produced by the hyperparameter tuning tool SMAC.

Review
This is overall a very good paper that I believe advances the state-of-the-art in learning for MIP solvers, an important area to enable us to better solve discrete optimization problems. The idea of learning the schedule of heuristics is interesting and novel as far as I am aware. Both the proposed learning task and the greedy learning method are simple and clean, which is appropriate for the scope of this paper, and it opens the door for future exploration of more sophisticated methods. The paper is overall well-written and does a good job of relating the work to previous literature. I particularly like the example plots that help the reader understand the context.

The computational setup is sound and adopts good practices for MIP computational research, such as using a tuned version of the MIP solver as a baseline and random seeds to account for inherent MIP solver variability. They also do a fair amount of cross-validation for the random graph instances. The final computational results show that this approach works well overall. They are not particularly striking, especially compared to SMAC, but still represent significant improvements in the context of MIP solvers. Perhaps one aspect that feels missing from the paper is ablation experiments that could give us more insight into which components of this approach are important (e.g., does the ordering matter more than the number of iterations?), but I believe this paper is good enough for NeurIPS without them, although there is one ablation experiment below in particular that I would like to suggest.

Overall I recommend this paper for acceptance, though I have a few requests:
In Section 3, the paper says: "The loop is terminated if a heuristic finds a new incumbent solution." This is used to explain that a good ordering can reduce solve time. However, in the function SCIPprimalHeuristics in solve.c in SCIP 7.0.1, I see that the heuristics are interrupted only if a feasible solution that cuts off the current node is found; otherwise, the loop seems to continue. Please correct me if I am incorrect, but if this is the case, then the statement above would not be correct and would overstate the value of choosing a good ordering. It is still valuable because you want to be able to cut off nodes early (which could happen fairly often, not just at infeasible nodes but also when finding a feasible solution that meets the local dual bound), but otherwise, if you have a node where no heuristic is able to do that, then does the ordering really matter? Or is the benefit here solely to be able to cut off nodes with primal heuristics?
In any case, I would like to ask the authors to double-check whether the statement about heuristics cited above is correct. If not, this should be corrected and clarified in the paper. It is fine if a better ordering is meant to produce earlier node cut-offs, but this should be clear in the paper.

The notion of using the number of iterations needed to find any feasible solution instead of an incumbent or high-quality solution is counter-intuitive to me. I would be interested in a clarifying discussion on this in the paper. There are a few situations where I would expect considering the quality of the solution to be important, such as the case where feasibility is easy (i.e., when most heuristics should find a feasible solution early) and the case with improvement heuristics where a starting feasible solution is given. For example, for GISP, a schedule where a trivial heuristic that sets all variables to zero comes first is always an optimal schedule. This does not happen in your case because you use diving and LNS heuristics, but I suspect that your approach is not robust to this type of behavior. I imagine that the dependency on the node is relevant here (e.g., the incumbent solution passed to the improvement heuristic might not be feasible for a particular node), but I could use some clarification as to why the cases above are not problematic. Could you add a discussion on this to the paper and, if possible, an ablation experiment in the appendix showing your attempt at rewarding improving incumbents? In addition (and this is only out of curiosity, not a request), have you tried making your reward dependent on the quality of the feasible solution (and/or the node cut-off) rather than just on whether it improves the incumbent?

Section 5 appears to be too vague. What is the special version of the MIP solver that collects multiple reward signals? How is the shadow mode improved from previous work? It is fine if the discussion is done in the appendix, but the description of how these challenges were overcome needs to be complete.

Writing notes:
- Many citations throughout the text are supposed to be parenthetical citations but are not.
- Section 2: It is worth briefly defining "incumbent" and its distinction from "feasible solution", since that is important later in the paper.
- Section 3: I suggest briefly mentioning here that the average iteration cost can be taken into account as described in Section 4, since this is an immediate concern that one might have when reading this section.

Typos:
- Line 143: Missing period before "Section 5".
- Line 275: "Furtheremore".
- Line 291: "Furtheremore".
- Line 323: "stricly".
- Line 337: "improvment".
- Table 2 caption: "Statictics".
- Formulation in Appendix B.4: In constraint (9), there is an extra ")" and an extra space before H. In constraint (12), $s_n$ should be $s_N$.
- Line 564 (Appendix): "unifromly".
- Line 567 (Appendix): "insatnces".
NIPS
Title Causal Identification under Markov equivalence: Calculus, Algorithm, and Completeness

Abstract
One common task in many data sciences applications is to answer questions about the effect of new interventions, like: 'what would happen to Y if we make X equal to x while observing covariates Z = z?'. Formally, this is known as conditional effect identification, where the goal is to determine whether a post-interventional distribution is computable from the combination of an observational distribution and assumptions about the underlying domain represented by a causal diagram. A plethora of methods has been developed for solving this problem, including the celebrated do-calculus [Pearl, 1995]. In practice, these results are not always applicable since they require a fully specified causal diagram as input, which is usually not available. In this paper, we assume as the input of the task a less informative structure known as a partial ancestral graph (PAG), which represents a Markov equivalence class of causal diagrams, learnable from observational data. We make the following contributions under this relaxed setting. First, we introduce a new causal calculus, which subsumes the current state-of-the-art PAG-calculus. Second, we develop an algorithm for conditional effect identification given a PAG and prove it to be both sound and complete. In words, failure of the algorithm to identify a certain effect implies that this effect is not identifiable by any method. Third, we prove the proposed calculus to be complete for the same task.

1 Introduction
Despite the recent advances in AI and machine learning, the current generation of intelligent systems still lacks the pivotal ability to represent, learn, and reason with cause and effect relationships. The discipline of causal inference aims to 'algorithmitize' causal reasoning capabilities towards producing human-like machine intelligence and rational decision-making [Pearl and Mackenzie, 2018, Pearl, 2019, Bareinboim and Pearl, 2016]. One fundamental type of inference in this setting is concerned with the effect of new interventions, e.g., 'what would happen to outcome Y if X were set to x?' More generally, we may be interested in Y's distribution in a sub-population picked out by the value of some covariates Z = z. For example, a legislator might be interested in the impact that increasing the minimum wage (X = x) has on profits (Y) in small businesses (Z = z), which is written in causal language as the interventional distribution $P(y \mid do(x), z)$, or $P_x(y|z)$. One method capable of answering such questions is controlled experimentation [Fisher, 1951]. In many practical settings found throughout the empirical sciences, AI, and machine learning, it is not always possible to perform a controlled experiment due to ethical, financial, and technical considerations. This motivates the study of a problem known as causal effect identification [Pearl, 2000, Ch. 3].

36th Conference on Neural Information Processing Systems (NeurIPS 2022).

The idea is to use the observational distribution $P(\mathbf{V})$ along with assumptions about the underlying domain, articulated in the form of a causal diagram $D$, to infer the interventional distribution $P_x(y|z)$ when possible. For instance, Fig. 1a represents a causal diagram in which nodes correspond to measured variables, directed edges represent direct causal relations, and bidirected dashed edges encode spurious associations due to unmeasured confounders.
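As an illustration of this representation, a causal diagram can be encoded as two edge lists. The structure below (a chain Z → X → Y with a Z↔Y confounding arc) is an assumption consistent with the later discussion of Fig. 1a in this paper, not a reproduction of the figure:

```python
# Hypothetical encoding of a semi-Markovian causal diagram over {Z, X, Y}.
diagram = {
    "directed":   [("Z", "X"), ("X", "Y")],  # direct causal relations
    "bidirected": [("Z", "Y")],              # unmeasured confounder (dashed arc)
}
```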
A plethora of methods has been developed to address the identification task, including the celebrated causal calculus proposed by Pearl [1995] as well as complete algorithms [Tian, 2004, Shpitser and Pearl, 2006, Huang and Valtorta, 2006]. For instance, given the causal diagram in Fig. 1a and the query $P_x(y|z)$, the calculus sanctions the identity $P_x(y|z) = P(y|z, x)$. In words, the interventional distribution on the l.h.s. equates to the observational distribution on the right, which is available as input. Despite the power of these results, requiring the diagram as the input of the task is an Achilles' heel for those methods, since background knowledge is usually not sufficient to pin down the single, true diagram.

To circumvent these challenges, a growing literature develops data-driven methods that attempt to learn the causal diagram from data first, and then perform identification from there. In practice, however, only an equivalence class (EC) of diagrams can be inferred from observational data without making substantial assumptions about the causal mechanisms [Verma, 1993, Spirtes et al., 2001, Pearl, 2000]. A prominent representation of this class is known as partial ancestral graphs (PAGs) [Zhang, 2008b]. Fig. 1c illustrates the PAG learned from observational data consistent with both causal diagrams in Figs. 1a and 1b, since they are in the same Markov equivalence class. The directed edges in a PAG encode ancestral relations, not necessarily direct, and the circle marks stand for structural uncertainty. Directed edges labeled with v signify the absence of unmeasured confounders.

Causal effect identification from a PAG is usually more challenging than from a single diagram due to the structural uncertainties and the infeasibility of enumerating each member of the EC in most cases. The do-calculus was extended to PAGs to account for the inherent structural uncertainties without the need for enumeration [Zhang, 2007]. Still, the calculus falls short of capturing all identifiable effects, as we will see in Sec. 3. On the other hand, it is computationally hard to decide whether there exists (and, if so, to find) a sequence of derivations in the generalized calculus to identify an effect of interest. In a more systematic manner, a complete algorithm has been developed to identify marginal effects (i.e., $P_x(\mathbf{y})$) given a PAG [Jaber et al., 2019a]. This algorithm can be used to identify conditional effects whenever the joint distribution $P_x(\mathbf{y} \cup \mathbf{z})$ is identifiable. Still, many conditional effects are identifiable even if the corresponding joint effect is not (Sec. 4.2). Finally, an algorithm to identify conditional effects has been proposed in [Jaber et al., 2019b], but it was not proven to be complete.1

In this paper, we pursue a data-driven formulation for the task of identification of any conditional causal effect from a combination of an observational distribution and the corresponding PAG (instead of a fully specified causal diagram). Accordingly, we make the following contributions:
1. We propose a causal calculus for PAGs that subsumes the state-of-the-art calculus introduced in [Zhang, 2007]. We prove the rules are atomically complete, i.e., a rule is not applicable in some causal diagram in the underlying EC whenever it is not applicable given the PAG.
2. Building on these results, we develop an algorithm for the identification of conditional causal effects given a PAG.
We prove the algorithm is complete, i.e., the effect is not identifiable in some causal diagram in the equivalence class whenever the algorithm fails.
3. Finally, we prove the calculus is complete for the task of identifying conditional effects.

1 Another approach is based on SAT (Boolean constraint satisfaction) solvers [Hyttinen et al., 2015]. Given its somewhat distinct nature, a closer comparison lies outside the scope of this paper.

2 Preliminaries
In this section, we introduce the basic setup and notation. Boldface capital letters denote sets of variables, while boldface lowercase letters stand for value assignments to those variables.2

Structural Causal Models. We use Structural Causal Models (SCMs) as our basic semantical framework [Pearl, 2000]. Formally, an SCM $M$ is a 4-tuple $\langle \mathbf{U}, \mathbf{V}, \mathbf{F}, P(\mathbf{U}) \rangle$, where $\mathbf{U}$ is a set of exogenous (unmeasured) variables and $\mathbf{V}$ is a set of endogenous (measured) variables. $\mathbf{F}$ represents a collection of functions such that each endogenous variable $V_i \in \mathbf{V}$ is determined by a function $f_i \in \mathbf{F}$. Finally, $P(\mathbf{U})$ encodes the uncertainty over the exogenous variables. Every SCM is associated with one causal diagram, where every variable in $\mathbf{V} \cup \mathbf{U}$ is a node, and arrows are drawn between nodes in accordance with the functions in $\mathbf{F}$. Following standard practice, we omit the exogenous nodes and add a bidirected dashed arc between two endogenous nodes if they share an exogenous parent. We only consider recursive systems; thus, the corresponding diagram is acyclic. The marginal distribution induced over the endogenous variables $P(\mathbf{V})$ is called observational. The d-separation criterion captures the conditional independence relations entailed by a causal diagram in $P(\mathbf{V})$. For $\mathbf{C} \subseteq \mathbf{V}$, $Q[\mathbf{C}]$ denotes the post-intervention distribution of $\mathbf{C}$ under an intervention on $\mathbf{V} \setminus \mathbf{C}$, i.e., $P_{\mathbf{v} \setminus \mathbf{c}}(\mathbf{c})$.3

Ancestral Graphs. We now introduce a graphical representation of equivalence classes of causal diagrams. A maximal ancestral graph (MAG) represents a set of causal diagrams with the same set of observed variables that entail the same conditional independence and ancestral relations among the observed variables [Richardson and Spirtes, 2002]. M-separation extends d-separation to MAGs such that d-separation in a causal diagram corresponds to m-separation in its unique MAG over the observed variables, and vice versa.

Definition 1 (m-separation). A path p between X and Y is active (or m-connecting) relative to Z (X, Y ∉ Z) if every non-collider on p is not in Z, and every collider on p is an ancestor of some Z ∈ Z. X and Y are m-separated by Z if there is no active path between X and Y relative to Z.

MAGs that entail the same independence model are Markov equivalent. A PAG represents an equivalence class of MAGs [M], which shares the same adjacencies as every MAG in [M] and displays all and only the invariant edge marks. A circle indicates an edge mark that is not invariant. A PAG is learnable from the independence model over the observed variables, and the FCI algorithm is a standard method to learn such an object [Zhang, 2008b]. In this work, an oracle for conditional independences is assumed to be available, which leads to the true PAG.

Graphical Notions. Given a PAG, a path between X and Y is potentially directed (causal) from X to Y if there is no arrowhead on the path pointing towards X. Y is called a possible descendant of X, and X a possible ancestor of Y, if there is a potentially directed path from X to Y.
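A simplified Python sketch of the possible-descendant relation just defined, assuming a PAG is stored as a map from ordered node pairs to the edge marks at each endpoint ("tail", "arrow", or "circle"); this is a hypothetical helper, not the authors' code:

```python
def possible_descendants(pag, x):
    """Nodes reachable from x along potentially directed paths, i.e. paths on
    which no edge has an arrowhead at the endpoint nearer to x.
    `pag[(u, v)] = (mark_at_u, mark_at_v)`, with each edge stored in both
    orientations for convenience."""
    reached, frontier = {x}, [x]
    while frontier:
        u = frontier.pop()
        for (a, b), (mark_a, mark_b) in pag.items():
            # traverse a -> b only if the edge is not into a (no arrowhead at a)
            if a == u and mark_a != "arrow" and b not in reached:
                reached.add(b)
                frontier.append(b)
    return reached
```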
For a set of nodes $\mathbf{X}$, let $An(\mathbf{X})$ ($De(\mathbf{X})$) denote the union of $\mathbf{X}$ and the set of possible ancestors (descendants) of $\mathbf{X}$. Given two sets of nodes $\mathbf{X}$ and $\mathbf{Y}$, a path between them is called proper if one of the endpoints is in $\mathbf{X}$ and the other is in $\mathbf{Y}$, and no other node on the path is in $\mathbf{X}$ or $\mathbf{Y}$. Let ⟨A, B, C⟩ be any consecutive triple along a path p. B is a collider on p if both edges are into B. B is a (definite) non-collider on p if one of the edges is out of B, or both edges have circle marks at B and there is no edge between A and C. A path is definite status if every non-endpoint node along it is either a collider or a non-collider. If the edge marks on a path between X and Y are all circles, we call the path a circle path. We refer to the closure of nodes connected with circle paths as a bucket. A directed edge X → Y in a PAG is visible if there exists no causal diagram in the corresponding equivalence class where the relation between X and Y is confounded. Which directed edges are visible is easily decidable by a graphical condition [Zhang, 2008a], so we mark visible edges by v.

Manipulations in PAGs. Let $\mathcal{P}$ denote a PAG over $\mathbf{V}$ and $\mathbf{X} \subseteq \mathbf{V}$. $\mathcal{P}_{\mathbf{X}}$ denotes the induced subgraph of $\mathcal{P}$ over $\mathbf{X}$. The $\mathbf{X}$-lower-manipulation of $\mathcal{P}$ deletes all those edges that are visible in $\mathcal{P}$ and are out of variables in $\mathbf{X}$, replaces all those edges that are out of variables in $\mathbf{X}$ but are invisible in $\mathcal{P}$ with bi-directed edges, and otherwise keeps $\mathcal{P}$ as it is. The resulting graph is denoted as $\mathcal{P}_{\underline{\mathbf{X}}}$. The $\mathbf{X}$-upper-manipulation of $\mathcal{P}$ deletes all those edges in $\mathcal{P}$ that are into variables in $\mathbf{X}$, and otherwise keeps $\mathcal{P}$ as it is. The resulting graph is denoted as $\mathcal{P}_{\overline{\mathbf{X}}}$.

2 A more comprehensive discussion of the background is provided in the full report [Jaber et al., 2022].
3 Without loss of generality, we assume the model is semi-Markovian. Tian [Tian, 2002, Sec. 5.6] shows that the identification of a causal effect in a non-Markovian model is equivalent to the identification of the same effect in a derived semi-Markovian model via a procedure known as 'projection'.

3 Causal Calculus for PAGs
The causal calculus introduced in [Pearl, 1995] is a seminal work that has been instrumental for understanding and eventually solving the task of effect identification from causal diagrams. Zhang [2007] generalized this result to the context of ancestral graphs, where a PAG is taken as the input of the task instead of a specific causal diagram. In Sec. 3.1, we discuss Zhang's rules and try to understand the reasons they are insufficient to solve the identification problem in full generality. Further, in Sec. 3.2, we introduce another generalization of the original calculus and prove that it is complete for atomic identification. This result will be further strengthened in subsequent sections.

3.1 Zhang's Calculus
An obvious extension of the m-separation criterion shown in Def. 1 to PAGs blocks all possibly m-connecting paths, as defined next.

Definition 2 (Possibly m-connecting path). In a PAG, a path p between X and Y is a possibly m-connecting path relative to a (possibly empty) set of nodes Z (X, Y ∉ Z) if every definite non-collider on p is not a member of Z, and every collider on p is a possible ancestor of some member of Z. X and Y are m̂-separated by Z if there is no possibly m-connecting path between them relative to Z.

Using this notion of separation, Zhang [2007] proposed a calculus given a PAG, as shown next.

Proposition 1 (Zhang's Calculus). Let $\mathcal{P}$ be the PAG over $\mathbf{V}$, and $\mathbf{X}, \mathbf{Y}, \mathbf{W}, \mathbf{Z}$ be disjoint subsets of $\mathbf{V}$.
The following rules are valid, in the sense that if the antecedent of the rule holds, then the consequent holds in every MAG and consequently every causal diagram represented by P.

1. P(y|do(w), x, z) = P(y|do(w), z), if X and Y are m̂-separated by W ∪ Z in $P_{\overline{W}}$.
2. P(y|do(w), do(x), z) = P(y|do(w), x, z), if X and Y are m̂-separated by W ∪ Z in $P_{\overline{W}, \underline{X}}$.
3. P(y|do(w), do(x), z) = P(y|do(w), z), if X and Y are m̂-separated by W ∪ Z in $P_{\overline{W}, \overline{X(Z)}}$, where $X(Z) := X \setminus \mathrm{PossAn}(Z)_{P_{\overline{W}}}$.

In words, rule 1 generalizes m-separation to interventional settings. Further, rule 2 licenses alternating a subset X between intervention and conditioning. Finally, rule 3 allows the addition/removal of an intervention do(X = x). The next two examples illustrate the shortcomings of this result: the first reveals the drawback of using Def. 2 to establish graphical separation, and the second inspects the evaluation of X(Z) in rule 3 (where the notion of possible ancestors is evoked).

Example 1. Consider the PAG P shown in Fig. 2. Since X and Y are not adjacent in P, it is easy to show that X and Y are separable given {Z1, Z2} in every causal diagram in the equivalence class. If rule 3 of Pearl's calculus is used in each diagram, then Px(y|z1, z2) = P(y|z1, z2). Further, applying rule 2 of the do-calculus in each diagram, it is also the case that Px(y|z1, z2) = P(y|z1, z2, x). However, due to the possibly m-connecting path ⟨X, Z1, Z2, Y⟩, rules 3 and 2 in Prop. 1 are not applicable to P. In other words, even though Pearl's calculus rules 2 and 3 are applicable to each diagram in the equivalence class, the same results cannot be established by Zhang's calculus.

Example 2. Consider the PAG in Fig. 3a, and the evaluation of whether the equality Pw,x1,x2(y|z) = Pw(y|z) holds. In order to apply rule 3 of Prop. 1, we need to evaluate whether {X1, X2} is separated from {Y} given {W, Z} in the manipulated graph in Fig. 3b, which is not true in this case. However, the rule can be improved to be applicable in this case, as we will show later on (Sec. 3.2). The critical step will be the evaluation of the set X(Z), which Prop. 1 computes from $P_{\overline{W}}$.

3.2 A New Calculus

Building on the analysis of the calculus proposed in [Zhang, 2007], we next introduce a set of rules centered around blocking definite m-connecting paths, as defined next.

Definition 3 (Definite m-connecting path). In a PAG, a path p between X and Y is a definite m-connecting path relative to a (possibly empty) set Z (X, Y ∉ Z) if p is of definite status, every definite non-collider on p is not a member of Z, and every collider on p is an ancestor of some member of Z. X and Y are m-separated by Z if there is no definite m-connecting path between them relative to Z.

It is easy to see that every definite m-connecting path is a possibly m-connecting path, according to Def. 2; however, the converse is not true. For example, given the PAG in Figure 2, we have two definite status paths between X and Y. The first is X ◦−◦ Z1 ◦−◦ Y and the second is X ◦−◦ Z2 ◦−◦ Y, where Z1 and Z2 are definite non-colliders. Given the set Z = {Z1, Z2}, Z blocks all definite status paths between X and Y. Alternatively, the path X ◦−◦ Z1 ◦−◦ Z2 ◦−◦ Y is not of definite status, since Z1 and Z2 are neither colliders nor definite non-colliders on this path. Hence, the path is a possibly m-connecting path relative to Z by Def. 2 but not a definite m-connecting path by Def. 3.
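The distinction between Defs. 2 and 3 is mechanical enough to script. Below is a minimal sketch, not from the paper, that encodes the circle PAG of Fig. 2 as reconstructed from the text (edges X ◦−◦ Z1, X ◦−◦ Z2, Z1 ◦−◦ Z2, Z1 ◦−◦ Y, Z2 ◦−◦ Y, with X and Y non-adjacent) and classifies each intermediate node of a path; the encoding with 'o' for a circle mark is an assumption of this sketch.

# marks[(a, b)] is the mark at b on the edge a-b: '>', '-', or 'o'.
marks = {}
def add_edge(a, b, mark_a, mark_b):
    marks[(b, a)] = mark_a
    marks[(a, b)] = mark_b

# Fig. 2 has only circle edges, per the discussion above.
for u, v in [('X', 'Z1'), ('X', 'Z2'), ('Z1', 'Z2'),
             ('Z1', 'Y'), ('Z2', 'Y')]:
    add_edge(u, v, 'o', 'o')

def adjacent(a, b):
    return (a, b) in marks

def status(prev, node, nxt):
    """Classify `node` on the triple <prev, node, nxt> (Graphical Notions)."""
    if marks[(prev, node)] == '>' and marks[(nxt, node)] == '>':
        return 'collider'
    if marks[(prev, node)] == '-' or marks[(nxt, node)] == '-':
        return 'definite non-collider'  # one edge is out of node
    circles = marks[(prev, node)] == 'o' and marks[(nxt, node)] == 'o'
    if circles and not adjacent(prev, nxt):
        return 'definite non-collider'
    return 'neither'

def definite_status(path):
    return all(status(path[i - 1], path[i], path[i + 1]) != 'neither'
               for i in range(1, len(path) - 1))

# Z1 (resp. Z2) is a definite non-collider because X, Y are non-adjacent:
print(definite_status(['X', 'Z1', 'Y']))        # True
print(definite_status(['X', 'Z2', 'Y']))        # True
# On <X, Z1, Z2, Y>, Z1 has circle marks but X and Z2 are adjacent, so Z1
# is neither; the path is possibly m-connecting (Def. 2) yet not of
# definite status, hence never definite m-connecting (Def. 3):
print(definite_status(['X', 'Z1', 'Z2', 'Y']))  # False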
We are now ready to use this new definition and formulate a more powerful calculus.

Theorem 1. Let P be the PAG over V, and X, Y, W, Z be disjoint subsets of V. The following rules are valid, in the sense that if the antecedent of the rule holds, then the consequent holds in every MAG and consequently every causal diagram represented by P.4

1. P(y|do(w), x, z) = P(y|do(w), z), if X and Y are m-separated by W ∪ Z in $P_{\overline{W}}$.
2. P(y|do(w), do(x), z) = P(y|do(w), x, z), if X and Y are m-separated by W ∪ Z in $P_{\overline{W}, \underline{X}}$.
3. P(y|do(w), do(x), z) = P(y|do(w), z), if X and Y are m-separated by W ∪ Z in $P_{\overline{W}, \overline{X(Z)}}$, where $X(Z) := X \setminus \mathrm{PossAn}(Z)_{P_{V \setminus W}}$.

A few observations are important at this point. Despite the visual similarity to Prop. 1, there are two pivotal differences between these calculi. First, Thm. 1 only requires blocking the definite status paths, hence the use of 'm-separation' in Thm. 1 instead of 'm̂-separation'. Consider the PAG P in Fig. 2. We want to evaluate whether Px(y|z1, z2) = P(y|x, z1, z2) by applying rule 2 of Theorem 1. Since all the edges in the PAG are circle edges, $P_{\underline{X}} = P$. As discussed earlier, the set Z = {Z1, Z2} blocks all the definite status paths between X and Y. Hence, X and Y are m-separated by Z, and Px(y|z1, z2) = P(y|x, z1, z2) holds true by rule 2.

Second, Thm. 1 defines X(Z) as the subset of X that is not in the possible ancestors of Z in $P_{V \setminus W}$, as opposed to $P_{\overline{W}}$ in Prop. 1. We revisit the query in Ex. 2 to clarify this subtle but significant difference. Given the PAG in Fig. 3a, we want to evaluate whether Pw,x1,x2(y|z) = Pw(y|z) by applying rule 3 from Thm. 1 instead of Prop. 1. Fig. 3c shows $P_{V \setminus \{W\}}$, where X = {X1, X2} are not possible ancestors of Z. Therefore, X(Z) = X, the edges into X1 are cut in $P_{\overline{W}, \overline{X(Z)}}$, and X and Y are m-separated therein.

Third, the proof of Theorem 1 is provided in the appendix; it follows from the relationship between an m-connecting path in a manipulated MAG and a definite m-connecting path in the corresponding manipulated PAG. It was conjectured in [Zhang, 2008a, Footnote 15] that, for X, Y ⊂ V, if there is an m-connecting path in $\mathcal{M}_{\overline{Y}, \underline{X}}$, then there is a definite m-connecting path in $P_{\overline{Y}, \underline{X}}$. In this work, we prove that conjecture to be true for the special class of manipulations required in the rules of the calculus. Finally, the next result establishes the necessity of the antecedents in Thm. 1 in order to apply the corresponding rule given every diagram in the equivalence class.

4All the proofs can be found in the full report [Jaber et al., 2022].

Theorem 2 (Atomic Completeness). The calculus in Theorem 1 is atomically complete, meaning that whenever a rule is not applicable given a PAG, the corresponding rule in Pearl's calculus is not applicable given some causal diagram in the Markov equivalence class.

For instance, considering PAG P in Fig. 1c, we note that Z and Y are not m-separated given X in $P_{\overline{X}, \underline{Z}}$, which means rule 2 is not applicable. Clearly, the diagram in Fig. 1a is in the equivalence class of P, and the corresponding rule of Pearl's calculus is not applicable due to the latent confounder between Z and Y.

4 Effect Identification: A Complete Algorithm

It is challenging to use the calculus rules in Thm. 1 to identify causal effects, since it is computationally hard to decide whether there exists (and, if so, to find) a sequence of derivations in the generalized calculus that identifies an effect of interest. The goal of this section is to formulate an algorithm to identify conditional causal effects. The next definition formalizes the notion of identifiability from a PAG, generalizing the causal-diagram-specific notion introduced in [Pearl, 2000, Tian, 2004].

Definition 4 (Causal-Effect Identifiability).
Let X, Y, Z be disjoint sets of endogenous variables V. The causal effect of X on Y conditioned on Z is said to be identifiable from a PAG P if the quantity Px(y|z) can be computed uniquely from the observational distribution P(V) given every causal diagram D in the Markov equivalence class represented by P.

The remainder of the section is organized as follows. Sec. 4.1 introduces a version of the IDP algorithm [Jaber et al., 2019a] to identify marginal causal effects. The attractiveness of this version is that it yields simpler expressions whenever the effect is identifiable while preserving the same expressive power, i.e., completeness for marginal identification. Sec. 4.2 utilizes the new algorithm along with the calculus in Thm. 1 to formulate a complete algorithm for conditional identification.

4.1 Marginal Effect Identification

We introduce the notion of pc-component next, which generalizes the notion of c-component that is instrumental in solving identification problems in a causal diagram [Tian and Pearl, 2002].

Definition 5 (PC-Component). In a PAG, or any induced subgraph thereof, two nodes are in the same possible c-component (pc-component) if there is a path between them such that (1) all non-endpoint nodes along the path are colliders, and (2) none of the edges is visible.

Following Def. 5, e.g., W and Z in Fig. 1c are in the same pc-component due to W ◦→ X ←◦ Z. By contrast, X and Y are not in the same pc-component, since the direct edge between them is visible and Z along ⟨X, Z, Y⟩ is not a collider. Building on pc-components, we define the key notion of regions.

Definition 6 (Region $R^{C}_{A}$). Given a PAG P over V, and A ⊆ C ⊆ V, the region of A with respect to C, denoted $R^{C}_{A}$, is the union of the buckets that contain nodes in the pc-component of A in $P_{C}$.

A region expands a pc-component and will prove to be useful in the identification algorithm. For example, the pc-component of X in Fig. 2 is {X, Z1, Z2} and the region $R^{V}_{X} = \{X, Z1, Z2, Y\}$. Building further on these definitions and the new calculus, we derive a new identification criterion.

Proposition 2. Let P denote a PAG over V, T be a union of a subset of the buckets in P, and X ⊂ T be a bucket. Given $P_{v \setminus t}$ (i.e., an expression for Q[T]), Q[T \ X] is identifiable by the following expression if, in $P_{T}$, $C_{X} \cap \mathrm{PossDe}(X) \subseteq X$, where $C_{X}$ is the pc-component of X:

$$Q[T \setminus X] = \frac{P_{v \setminus t}}{P_{v \setminus t}(X \mid T \setminus \mathrm{PossDe}(X))} \quad (1)$$

Note that the interventions are over buckets, which may or may not be single nodes. Since there is little to no causal information inside a bucket, marginal effects of interventions over proper subsets of buckets are not identifiable. Also, the input distribution is possibly interventional, which licenses recursive applications of the criterion. The next example illustrates the power of the new criterion.

Example 3. Consider PAG P in Fig. 3a and the query Px1,x2,w(y, z, a). Starting with the observational distribution P(V) as input, let T = V and X = {X1, W}. We have $C_{X} = \{X1, W, A, X2\}$, $\mathrm{PossDe}(X) = \{X1, W, Z, Y\}$, and $C_{X} \cap \mathrm{PossDe}(X) = X$. Hence, the criterion in Prop. 2 is applicable, and we have $P_{x}(y, z, a, x_2) = \frac{P(v)}{P(x_1, w \mid a, x_2)} = P(a, x_2) \times P(y, z \mid a, w)$ after simplification. Next, we consider intervening on X2 given Px1,w(y, z, a, x2). Notice that X2 is disconnected from the other nodes in $P_{V \setminus \{X1, W\}}$ and trivially satisfies the criterion in Prop. 2. Therefore, we get the expression $P_{x_1, x_2, w}(y, z, a) = \frac{P_{x_1, w}(y, z, a, x_2)}{P_{x_1, w}(x_2 \mid y, z, a)} = P(a) \times P(y, z \mid w, a)$ after simplification.
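Conditions like $C_{X} \cap \mathrm{PossDe}(X) \subseteq X$ are straightforward to check once pc-components (Def. 5) are computable. A minimal sketch, not from the paper, is given below; the endpoint-mark encoding, the visibility flag, and the toy graph (a hypothetical fragment in the spirit of Fig. 1c as described in the text) are assumptions of the sketch, with the collider-path search done by brute-force enumeration.

# marks[(a, b)] = mark at b on edge a-b: '>', '-', or 'o'.
# visible = set of frozensets {a, b} for edges marked visible (v).
class PAG:
    def __init__(self):
        self.marks = {}
        self.visible = set()

    def add_edge(self, a, b, mark_a, mark_b, visible=False):
        self.marks[(b, a)] = mark_a
        self.marks[(a, b)] = mark_b
        if visible:
            self.visible.add(frozenset((a, b)))

    def neighbors(self, a):
        return [b for (u, b) in self.marks if u == a]

    def pc_component(self, x):
        """Nodes in the same pc-component as x (Def. 5), by path search."""
        found = {x}

        def search(path):
            last = path[-1]
            for nxt in self.neighbors(last):
                if nxt in path:
                    continue
                if frozenset((last, nxt)) in self.visible:
                    continue  # condition (2): no visible edge on the path
                if len(path) >= 2:
                    prev = path[-2]
                    # condition (1): every intermediate node is a collider
                    if not (self.marks[(prev, last)] == '>' and
                            self.marks[(nxt, last)] == '>'):
                        continue
                found.add(nxt)
                search(path + [nxt])

        search([x])
        return found

# Hypothetical fragment: W o-> X <-o Z, plus a visible edge X -> Y.
g = PAG()
g.add_edge('W', 'X', 'o', '>')
g.add_edge('Z', 'X', 'o', '>')
g.add_edge('X', 'Y', '-', '>', visible=True)
print(g.pc_component('W'))  # {'W', 'X', 'Z'}: W o-> X <-o Z is a collider path
print(g.pc_component('X'))  # {'X', 'W', 'Z'}: the visible edge X -> Y is excluded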
A more general criterion was introduced in [Jaber et al., 2018, Thm. 1] based on the possible children of the intervention bucket X instead of the possible descendants. However, the corresponding expression is convoluted and usually large, which could be intractable even when the effect is identifiable. Alg. 1 shows the proposed version of IDP, which builds on the new criterion (Prop. 2). Specifically, the key difference between this algorithm and the one proposed in [Jaber et al., 2019a] is in lines 6-7, where the criterion in Prop. 2 is used as opposed to that in [Jaber et al., 2018, Thm. 1]. Interestingly enough, the new criterion is "just right": it is also sufficient to obtain a complete algorithm for marginal effect identification, as shown in the next result.5

Algorithm 1 IDP(P, x, y)
Input: PAG P and two disjoint sets X, Y ⊂ V
Output: Expression for Px(y) or FAIL
1: Let $D = \mathrm{PossAn}(Y)_{P_{V \setminus X}}$
2: return $\sum_{d \setminus y}$ IDENTIFY(D, V, P)
3: function IDENTIFY(C, T, Q = Q[T])
4:   if C = ∅ then return 1
5:   if C = T then return Q
     /* In $P_{T}$, let B denote a bucket, and let $C_{B}$ denote the pc-component of B */
6:   if ∃B ⊂ T \ C such that $C_{B} \cap \mathrm{PossDe}(B)_{P_{T}} \subseteq B$ then
7:     Compute Q[T \ B] from Q ▷ via Prop. 2
8:     return IDENTIFY(C, T \ B, Q[T \ B])
9:   else if ∃B ⊂ C such that $R_{B} \neq C$ then ▷ $R_{B}$ abbreviates $R^{C}_{B}$
10:    return IDENTIFY($R_{B}$, T, Q) × IDENTIFY($R_{C \setminus R_{B}}$, T, Q) / IDENTIFY($R_{B} \cap R_{C \setminus R_{B}}$, T, Q)
11:  else throw FAIL

Theorem 3 (Completeness). Alg. 1 is complete for identifying marginal effects Px(y). Moreover, the calculus in Thm. 1, together with standard probability manipulations, is complete for the same task.

4.2 Conditional Effect Identification

We start by making a couple of observations, and then build on those observations to formulate an algorithm that identifies conditional causal effects. The proposed algorithm leverages the calculus in Thm. 1 and the IDP algorithm in Alg. 1. Obs. 1 notes that a conditional effect Px(y|z) can be rewritten as $\frac{P_{x}(y, z)}{\sum_{y'} P_{x}(y', z)}$, and hence it is identifiable if Px(y, z) is identifiable by Alg. 1.

Observation 1 (Marginal Effect). Consider PAG P1 in Fig. 4a, where the goal is to identify the causal effect Pb(a, c|d). We notice that the effect Pb(a, c, d) is identifiable using the IDP algorithm. Let E := P(a, d) × P(c|b, d) denote the expression for the marginal effect Pb(a, c, d), which can be obtained from IDP. Consequently, the target effect can be computed using the expression $E / \sum_{a', c'} E$.

Whenever the marginal effect Px(y, z) is not identifiable using Alg. 1, Observations 2 and 3 propose techniques to identify the conditional effect using the calculus in Thm. 1, namely rule 2. Obs. 2 uses rule 2 of Thm. 1, when applicable, to move variables from the conditioning set to the intervention set. The marginal effect of the resulting conditional query turns out to be identifiable, and consequently so is the conditional effect. We note that the work in [Shpitser and Pearl, 2006] uses the same trick to formulate an algorithm for conditional effect identification given a causal diagram.

5A more detailed comparison of the two algorithms, along with illustrative examples, is provided in the full report [Jaber et al., 2022].

Algorithm 2 CIDP(P, x, y, z)
Input: PAG P and three disjoint sets X, Y, Z ⊂ V
Output: Expression for Px(y|z) or FAIL
1: $D \leftarrow \mathrm{PossAn}(Y \cup Z)_{P_{V \setminus X}}$
   /* Let B1, ..., Bm denote the buckets in P */
2: while ∃Bi s.t. Bi ∩ D ≠ ∅ ∧ Bi ⊄ D do
3:   X′ ← Bi ∩ X
4:   if (X′ ⊥⊥ Y | (X \ X′) ∪ Z) in $P_{\overline{X \setminus X'}, \underline{X'}}$ then
5:     x ← x \ x′; z ← z ∪ x′ ▷ apply rule 2 of Thm. 1
6:     $D \leftarrow \mathrm{PossAn}(Y \cup Z)_{P_{V \setminus X}}$
7:   else throw FAIL
   /* Let Z1, ..., Zm partition Z such that Zi := Z ∩ Bi */
8: while ∃Zi s.t. (Zi ⊥⊥ Y | X ∪ (Z \ Zi)) in $P_{\overline{X}, \underline{Z_i}}$ do
9:   x ← x ∪ zi; z ← z \ zi ▷ apply rule 2 of Thm. 1
10: E ← IDP(P, x, y ∪ z)
11: return $E / \sum_{y'} E$
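Line 11 is plain probability arithmetic. As a sanity check, the following sketch, not from the paper, normalizes an identification expression of the kind returned for Obs. 1, E = P(a, d) × P(c|b, d), over a small synthetic joint distribution; the numbers are random and purely illustrative.

from itertools import product
import random

# Synthetic joint distribution over binary A, B, C, D (illustrative only).
random.seed(0)
vals = list(product([0, 1], repeat=4))
weights = [random.random() for _ in vals]
total = sum(weights)
P = {v: w / total for v, w in zip(vals, weights)}  # P[(a, b, c, d)]

def marg(P, keep):
    """Marginal over the index positions in `keep` (0=A, 1=B, 2=C, 3=D)."""
    out = {}
    for v, p in P.items():
        key = tuple(v[i] for i in keep)
        out[key] = out.get(key, 0.0) + p
    return out

P_ad = marg(P, [0, 3])       # P(a, d)
P_bcd = marg(P, [1, 2, 3])   # P(b, c, d)
P_bd = marg(P, [1, 3])       # P(b, d)

def E(a, c, b, d):
    # E := P(a, d) * P(c | b, d), the IDP expression from Obs. 1
    return P_ad[(a, d)] * P_bcd[(b, c, d)] / P_bd[(b, d)]

# Line 11 of Alg. 2: P_b(a, c | d) = E / sum over a', c' of E
b, d = 1, 0
denom = sum(E(a2, c2, b, d) for a2 in (0, 1) for c2 in (0, 1))
cond = {(a, c): E(a, c, b, d) / denom for a in (0, 1) for c in (0, 1)}
print(sum(cond.values()))  # 1.0 up to floating point: a proper conditional

Here the denominator collapses to P(d), so the normalization recovers a proper conditional distribution regardless of the particular joint.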
Observation 2 (Flip Observations to Interventions). Consider PAG P1 in Fig. 4a and the causal query Pa(c|b, d). Unlike the case in Obs. 1, the marginal effect Pa(b, c, d) is not identifiable by the IDP algorithm. Using rule 2 of Thm. 1, we have (B ⊥⊥ C | D) in $P_{\overline{A}, \underline{B}}$, and we move B from conditioning to intervention, i.e., Pa(c|b, d) = Pa,b(c|d). The marginal effect Pa,b(c, d) is identifiable by IDP, and we get the expression E := P(d) × P(c|b, d). Hence, we have Pa(c|b, d) = Pa,b(c|d) = $E / \sum_{c'} E$.

Finally, Obs. 3 comes as a surprise, since it requires flipping interventions to observations, contrary to Obs. 2. A key graphical structure in the PAG that requires such a treatment is the presence of a proper possibly directed path from X to Y ∪ Z that starts with a circle edge.

Observation 3 (Flip Interventions to Observations). We revisit the query in Example 1. First, the marginal effect Px(y, z1, z2) is not identifiable by the IDP algorithm. Also, we cannot use rule 2 in Thm. 1 to flip Z1 or Z2 into interventions, since they are both adjacent to Y with a circle edge (◦−◦). However, we can use rule 2 to flip X to the conditioning set, since there are no definite m-connecting paths between X and Y given {Z1, Z2} in the PAG. Hence, we obtain Px(y|z1, z2) = P(y|z1, z2, x). Alternatively, consider PAG P2 in Fig. 4b with the causal query Px(y|z). We cannot use rule 2 to flip X to an observation, since ⟨X, W, Y⟩ is active given Z in $(P_2)_{\underline{X}}$. In fact, the causal diagram G in Fig. 4c is in the equivalence class of P2 and is such that Px(y|z) is provably not identifiable [Shpitser and Pearl, 2006, Corol. 2]. Hence, the effect is not identifiable given P2, according to Def. 4.

Putting these observations together, we formulate the CIDP algorithm (Alg. 2) for identifying conditional causal effects given a PAG. The algorithm is divided into three phases. In Phase I (lines 1-7), Obs. 3 is used to check for proper possibly directed paths from X to Y ∪ Z that start with a circle edge. This is checked algorithmically by computing $D = \mathrm{PossAn}(Y \cup Z)_{P_{V \setminus X}}$, iteratively, and checking whether some bucket Bi in P intersects with, but is not a subset of, D. If such a bucket exists, CIDP flips Bi ∩ X from interventions to observations using rule 2, when applicable; otherwise the algorithm throws a fail and the effect is not computable. In Phase II (lines 8-9), Obs. 2 is used to flip the subset of observations in each bucket into interventions by applying rule 2 of Thm. 1, whenever applicable. Finally, in Phase III (line 10), the marginal effect Px(y ∪ z) is computed from the modified sets X and Z using the IDP algorithm in Alg. 1. If the call is successful, an expression for the conditional effect is returned at line 11. The example below illustrates CIDP in action. An empirical evaluation of CIDP is provided in the full report [Jaber et al., 2022].6

Example 4. Consider PAG P in Fig. 5a and the conditional query Px(y|z) := Pa,f(y|b, e). In Phase I, we have $D = \mathrm{PossAn}(Y \cup Z)_{P_{V \setminus X}} = \{Y, B, E, C, G\}$, and the bucket {A, B} satisfies the conditions at line 2, since A ∉ D. In $P_{\overline{F}, \underline{A}}$ (as shown in Fig. 5b), X′ = {A} is m-separated from Y given {B, E, F}, which satisfies the if condition at line 4.
Hence, we flip A to the conditioning set Z via rule 2 of Thm. 1 to obtain the updated query Px(y|z) = Pf(y|a, b, e). In Phase II (lines 8-9), let Z1 = {E} and Z2 = {A, B}. In $P_{\overline{F}, \underline{E}}$ (see Fig. 5c), we have E m-separated from Y given {F} ∪ Z2, which satisfies the if condition at line 8. Hence, we flip E to the intervention set using rule 2 of Thm. 1, and we get the updated query Px(y|z) = Pe,f(y|a, b). Next, we check whether Z = Z2 is m-separated from Y given X in $P_{\overline{X}, \underline{Z}}$, which does not hold due to a bidirected edge between B and Y. Hence, rule 2 is not applicable, and Z remains in the conditioning set. Finally, we call IDP to compute the marginal effect Pe,f(y, a, b), if possible. The effect is identifiable with the simplified expression P(y|b, e, f) × P(a, b). Hence, $P_{a,f}(y \mid b, e) = P_{e,f}(y \mid a, b) = \frac{P(y \mid b, e, f) \times P(a, b)}{\sum_{y'} P(y' \mid b, e, f) \times P(a, b)} = P(y \mid b, e, f)$.

The soundness of Alg. 2 follows from that of Alg. 1 and Thm. 1. Next, we turn to its completeness. According to Def. 4, whenever CIDP fails, we need to establish one of two conditions for completeness: either there exist two causal diagrams in the equivalence class with different identifications, or the effect is not identifiable in some causal diagram according to the criterion in [Shpitser and Pearl, 2006, Corol. 2]. Thm. 4 establishes completeness by proving that the latter is always the case. This result, along with the completeness of the calculus rules for the identification of marginal effects (see Thm. 3), implies that the rules are complete for conditional effects as well.

Theorem 4 (Completeness). Alg. 2 is complete for identifying conditional effects Px(y|z). Also, the calculus in Thm. 1, together with standard probability manipulations, is complete for the same task.

5 Discussion

In this work, an oracle for conditional independences is assumed to be available, which leads to the true PAG. Assuming the presence of such an oracle encapsulates the challenge of dealing with finite data and of testing for conditional independence therefrom. Another challenge lies in the computational complexity of learning the PAG in the first place [Colombo et al., 2012], and of estimating the expression when the effect is identifiable [Pearl and Robins, 1995, Jung et al., 2021]. In light of this, it is important to distinguish between the task of causal effect identification and that of causal effect estimation. This set of results is concerned with the first task (causal identification), which asks whether a target conditional effect is uniquely computable from P(V), the observational distribution, and given a PAG learnable from P(V). The objective of CIDP in Algorithm 2 is to decide whether the effect is identifiable and to provide an expression for it when the answer is yes, while being agnostic as to whether P(V) can be accurately estimated from the available samples. As for the second task, the estimation of the conditional causal effect using the identification formula provided by Algorithm 2 poses several challenges under finite data. The number of samples sufficient to estimate a given effect would depend on the size of the expression, among other factors, and naive estimation methods exacerbate this problem. Recent work such as [Jung et al., 2021] proposes a double machine learning estimator for marginal effects that are identifiable given a PAG.

6Code is available at https://github.com/CausalAILab/PAGId
An interesting direction for future work is to generalize this approach to conditional causal effects that are identifiable by CIDP.

6 Conclusions

In this work, we investigate the problem of identifying conditional interventional distributions given a Markov equivalence class of causal diagrams represented by a PAG. We introduce a new generalization of the do-calculus for the identification of interventional distributions in PAGs (Thm. 1) and show it to be atomically complete (Thm. 2). Building on these results, we develop the CIDP algorithm (Alg. 2), which is both sound and complete, i.e., it identifies every conditional effect of the form Px(y|z) that is identifiable (Thm. 4). Finally, we show that the new calculus rules, along with standard probability manipulations, are complete for the same task. These results close the problem of effect identification under Markov equivalence in that they completely delineate the theoretical boundaries of what is, in principle, computable from a certain data collection. We expect the newly introduced machinery to help data scientists identify novel effects in real-world settings.

Acknowledgments and Disclosure of Funding

Bareinboim and Ribeiro's research was supported in part by the NSF, ONR, AFOSR, DoE, Amazon, JP Morgan, and The Alfred P. Sloan Foundation. Zhang's research was supported by the RGC of Hong Kong under GRF13602720.
1. What is the focus and contribution of the paper regarding conditional causal effect identification?
2. What are the strengths of the proposed approach, particularly in relaxing the assumption of knowing the underlying causal graph?
3. What are the weaknesses of the paper, especially regarding the clarity and originality of the content?
4. How does the reviewer assess the significance and novelty of the paper's contributions?
5. Are there any concerns or suggestions regarding the presentation and technical aspects of the paper?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper

This paper studies the problem of conditional causal effect identification, which asks to recover P(y|do(x), z) from the observational distribution. There is a sequence of works that propose algorithms for this problem under the assumption that the underlying causal graph is known. This paper relaxes this assumption by assuming that only the Markov equivalence class of the underlying causal graph is known. More specifically, the paper assumes that the so-called partial ancestral graph (PAG) is known. The paper proposes a new calculus, which is a modified version of Zhang's [2007] calculus. For the new calculus, the authors prove that it is complete and atomically complete. Moreover, they propose an algorithm for the identification of conditional causal effects given a PAG and show that the algorithm is complete, that is, if some causal effect cannot be identified by this algorithm, then it is not identifiable in some causal graph in the equivalence class of the given PAG.

Strengths And Weaknesses

Significance: The problem of identifiability of interventional causal effects is an important and long-studied problem. This paper improves over the state-of-the-art result by relaxing the assumption that the ground-truth causal graph is known. Instead, the paper assumes that only the partial ancestral graph is known. This is a significant relaxation of the assumption, as the true underlying causal graph is frequently not known, while a PAG can be recovered from conditional independencies of variables. Zhang [2007] proposed a calculus to solve the problem considered in this paper which relies on the notion of "possibly m-connected paths". However, Zhang's calculus is not complete, i.e., it is possible to construct a PAG in which the effect is identifiable but is not deducible from Zhang's calculus. The key observation of this paper is that if one relies on the notion of "definite m-connected paths" instead of "possibly m-connected paths", then a minor modification of Zhang's calculus becomes a complete calculus. Moreover, the paper proposes an algorithm that uses the rules of the newly introduced calculus to recover causal effects and shows that this algorithm is complete.

Clarity: Despite the fact that the paper is reasonably well-written, it is extremely technical and, as a result, quite hard to read. I was not able to verify the correctness of the proofs in the appendix within a reasonable time.

Originality: The paper relies mostly on ideas that are well-known in the field. The proposed algorithm appears to be a minor modification of the prior algorithm by Jaber et al. 2019a. However, the proposed calculus and its completeness seem to be novel.

Questions

What is the main difference between the algorithm proposed by Jaber et al. 2019a and your algorithm? If the difference is insignificant, I believe the algorithm of Jaber et al. 2019a deserves more credit in the discussion; in particular, it should be acknowledged that your algorithm is a minor modification. Alternatively, if the difference is significant, I would kindly suggest including a more detailed discussion of the distinction between these algorithms.

Are there any runtime or space complexity guarantees for your algorithm? Is it polynomial time?

Is anything known about the approximate version of the problem: how many samples are sufficient to identify the causal effect?

Are there any open problems left related to conditional effect identification (that the authors consider interesting)?
I would kindly suggest renaming Proposition 1 (by Zhang 2007) into a Theorem, as calling it a proposition appears to be a bit judgemental.

Typo in line 213: X, Y, Y.

I believe the notation in line 265 is poorly chosen: y serves the role of both a fixed variable and a summation index (which makes no mathematical sense). I kindly suggest renaming the summation index. The same notation problem occurs in other places.

Limitations

I believe the authors should comment on the runtime and sample complexity guarantees of the proposed algorithm. If the problem of establishing such guarantees is not resolved, I kindly suggest that the authors acknowledge this. Alternatively, if such guarantees (lower or upper bounds) follow from prior work, it will be useful if they are included in the paper.
We prove the algorithm is complete, i.e., the effect is not identifiable in some causal diagram in the equivalence class whenever the algorithm fails. 3. Finally, we prove the calculus is complete for the task of identifying conditional effects. 1Another approach is based on SAT (Boolean constraint satisfaction) solvers [Hyttinen et al., 2015]. Given its somewhat distinct nature, a closer comparison lies outside the scope of this paper. 2 Preliminaries In this section, we introduce the basic setup and notations. Boldface capital letters denote sets of variables, while boldface lowercase letters stand for value assignments to those variables.2 Structural Causal Models. We use Structural Causal Models (SCMs) as our basic semantical framework [Pearl, 2000]. Formally, an SCM M is a 4-tuple ⟨U,V,F, P (U)⟩, where U is a set of exogenous (unmeasured) variables and V is a set of endogenous (measured) variables. F represents a collection of functions such that each endogenous variable Vi ∈ V is determined by a function fi ∈ F. Finally, P (U) encodes the uncertainty over the exogenous variables. Every SCM is associated with one causal diagram where every variable in V ∪U is a node, and arrows are drawn between nodes in accordance with the functions in F. Following standard practice, we omit the exogenous nodes and add a bidirected dashed arc between two endogenous nodes if they share an exogenous parent. We only consider recursive systems, thus the corresponding diagram is acyclic. The marginal distribution induced over the endogenous variables P (V) is called observational. The d-separation criterion captures the conditional independence relations entailed by a causal diagram in P (V). For C ⊆ V, Q[C] denotes the post-intervention distribution of C under an intervention on V \C, i.e. Pv\c(c).3 Ancestral Graphs. We now introduce a graphical representation of equivalence classes of causal diagrams. A MAG represents a set of causal diagrams with the same set of observed variables that entail the same conditional independence and ancestral relations among the observed variables [Richardson and Spirtes, 2002]. M-separation extends d-separation to MAGs such that d-separation in a causal diagram corresponds to m-separation in its unique MAG over the observed variables, and vice versa. Definition 1 (m-separation). A path p between X and Y is active (or m-connecting) relative to Z (X,Y ̸∈ Z) if every non-collider on p is not in Z, and every collider on p is an ancestor of some Z ∈ Z. X and Y are m-separated by Z if there is no active path between X and Y relative to Z. Different MAGs entail the same independence model and hence are Markov equivalent. A PAG represents an equivalence class of MAGs [M], which shares the same adjacencies as every MAG in [M] and displays all and only the invariant edge marks. A circle indicates an edge mark that is not invariant. A PAG is learnable from the independence model over the observed variables, and the FCI algorithm is a standard method to learn such an object [Zhang, 2008b]. In this work, an oracle for conditional independences is assumed to be available, which leads to the true PAG. Graphical Notions. Given a PAG, a path between X and Y is potentially directed (causal) from X to Y if there is no arrowhead on the path pointing towards X . Y is called a possible descendant of X and X a possible ancestor of Y if there is a potentially directed path from X to Y . 
For a set of nodes X, let An(X) (De(X)) denote the union of X and the set of possible ancestors (descendants) of X. Given two sets of nodes X and Y, a path between them is called proper if one of the endpoints is in X and the other is in Y, and no other node on the path is in X or Y. Let ⟨A,B,C⟩ be any consecutive triple along a path p. B is a collider on p if both edges are into B. B is a (definite) non-collider on p if one of the edges is out of B, or both edges have circle marks at B and there is no edge between A and C. A path is definite status if every non-endpoint node along it is either a collider or a non-collider. If the edge marks on a path between X and Y are all circles, we call the path a circle path. We refer to the closure of nodes connected with circle paths as a bucket. A directed edge X → Y in a PAG is visible if there exists no causal diagram in the corresponding equivalence class where the relation between X and Y is confounded. Which directed edges are visible is easily decidable by a graphical condition [Zhang, 2008a], so we mark visible edges by v. Manipulations in PAGs. Let P denote a PAG over V and X ⊆ V. PX denotes the induced subgraph of P over X. The X-lower-manipulation of P deletes all those edges that are visible in P and are out of variables in X, replaces all those edges that are out of variables in X but are invisible in P with bi-directed edges, and otherwise keeps P as it is. The resulting graph is denoted as PX. The X-upper-manipulation of P deletes all those edges in P that are into variables in X, and otherwise keeps P as it is. The resulting graph is denoted as PX. 2A more comprehensive discussion about the background is provided in the full report [Jaber et al., 2022]. 3Without loss of generality, we assume the model is semi-Markovian. Tian [Tian, 2002, Sec. 5.6] shows that the identification of a causal effect in a non-Markovian model is equivalent to the identification of the same effect in a derived semi-Markovian model via a procedure known as ‘projection’. 3 Causal Calculus for PAGs The causal calculus introduced in [Pearl, 1995] is a seminal work that has been instrumental for understanding and eventually solving the task of effect identification from causal diagrams. Zhang [2007] generalized this result to the context of ancestral graphs, where a PAG is taken as the input of the task, instead of the specific causal diagram. In Sec. 3.1, we discuss Zhang’s rules and try to understand the reasons they are insufficient to solve the identification problem in full generality. Further, in Sec.3.2, we introduce another generalization of the original calculus and prove that it is complete for atomic identification. This result will be further strengthened in subsequent sections. 3.1 Zhang’s Calculus An obvious extension of the m-separation criterion shown in Def. 1 to PAGs blocks all possibly m-connecting paths, as defined next. Definition 2 (Possibly m-connecting path). In a PAG, a path p between X and Y is a possibly mconnecting path relative to a (possibly empty) set of nodes Z (X,Y /∈ Z) if every definite non-collider on p is not a member of Z, and every collider on p is a possible ancestor of some member of Z. X and Y are m̂-separated by Z if there is no possibly m-connecting path between them relative to Z. Using this notion of separation, Zhang [2007] proposed a calculus given a PAG as shown next. Proposition 1 (Zhang’s Calculus). Let P be the PAG over V, and X,Y,W,Z be disjoint subsets of V. 
The following rules are valid, in the sense that if the antecedent of the rule holds, then the consequent holds in every MAG and consequently every causal diagram represented by P . 1. P (y|do(w),x, z) = P (y|do(w), z), if X and Y are m̂-separated by W ∪ Z in PW. 2. P (y|do(w), do(x), z) = P (y|do(w),x, z), if X and Y are m̂-separated by W ∪ Z in PW,X. 3. P (y|do(w), do(x), z) = P (y|do(w), z), if X and Y are m̂-separated by W∪Z inP W,X(Z) . where X(Z) := X \ PossAn(Z)PW . In words, rule 1 generalizes m-separation to interventional settings. Further, rule 2 licenses alternating a subset X between intervention and conditioning. Finally, rule 3 allows the adding/removal of an intervention do(X = x). The next two examples illustrate the shortcomings of this result, where the first reveals the drawback of using Def. 2 to establish graphical separation and the second inspects evaluating X(Z) in rule 3 (where the notion of possible ancestors are evoked). Example 1. Consider the PAG P shown in Fig. 2. Since X and Y are not adjacent in P , it is easy to show that X and Y are separable given {Z1, Z2} in every causal diagram in the equivalence class. If rule 3 of Pearl’s calculus is used in each diagram, then Px(y|z1, z2) = P (y|z1, z2). Further, applying rule 2 of do-calculus in each diagram, it’s also the case that Px(y|z1, z2) = P (y|z1, z2, x). However, due to the possibly m-connecting path ⟨X,Z1, Z2, Y ⟩, rules 3 and 2 in Prop. 1 are not applicable to P . In other words, even though Pearl’s calculus rules 2 and 3 are applicable to each diagram in the equivalence class, the same results cannot be established by Zhang’s calculus. Example 2. Consider the PAG in Fig. 3a, and the evaluation on whether the equality Pw,x1,x2(y|z) = Pw(y|z) holds. In order to apply rule 3 of Prop. 1, we need to evaluate whether {X1, X2} is separated from {Y } given {W,Z} in the manipulated graph in Fig. 3b, which is not true in this case. However, the rule can be improved to be applicable in this case, as we will show later on (Sec. 3.2). The critical step will be the evaluation of the set X(Z) from PW. 3.2 A New Calculus Building on the analysis of the calculus proposed in [Zhang, 2007], we introduce next a set of rules centered around blocking definite m-connecting paths, as defined next. Definition 3 (Definite m-connecting path). In a PAG, a path p between X and Y is a definite m-connecting path relative to a (possibly empty) set Z (X,Y ̸∈ Z) if p is definite status, every definite non-collider on p is not a member of Z, and every collider on p is an ancestor of some member of Z. X and Y are m-separated by Z if there is no definite m-connecting path between them relative to Z. It is easy to see that every definite m-connecting path is a possibly m-connecting path, according to Def. 2; however, the converse is not true. For example, given the PAG in Figure 2, we have two definite status paths between X and Y . The first is X ◦−◦Z1 ◦−◦Y and the second is X ◦−◦Z2 ◦−◦Y where Z1 and Z2 are definite non-colliders. Given set Z = {Z1, Z2}, Z blocks all definite status paths between X and Y . Alternatively, the path X ◦−◦ Z1 ◦−◦ Z2 ◦−◦ Y is not definite status since Z1, Z2 are not colliders or non-colliders on this path. Hence, the path is a possibly m-connecting path relative to Z by Def. 2 but not a definite m-connecting path by Def. 3. We are now ready to use this new definition and formulate a more powerful calculus. Theorem 1. Let P be the PAG over V, and X,Y,W,Z be disjoint subsets of V. 
The following rules are valid, in the sense that if the antecedent of the rule holds, then the consequent holds in every MAG and consequently every causal diagram represented by P .4 1. P (y|do(w),x, z) = P (y|do(w), z), if X and Y are m-separated by W ∪ Z in PW. 2. P (y|do(w), do(x), z) = P (y|do(w),x, z), if X and Y are m-separated by W ∪ Z in PW,X. 3. P (y|do(w), do(x), z) = P (y|do(w), z), if X and Y are m-separated by W∪Z inP W,X(Z) . where X(Z) := X \ PossAn(Z)PV\W . A few observations are important at this point. Despite the visual similarity to Prop. 1, there are two pivotal differences between these calculi. First, Thm. 1 only requires blocking the definite status paths, hence the use of ‘m-separation’ in Thm. 1 instead of ‘m̂-separation’. Consider the PAG P in Fig. 2. We want to evaluate whether Px(y|z1, z2) = P (y|x, z1, z2) by applying Rule 2 in Theorem 1. Since all the edges in the PAG are circle edges, then PX = P . As discussed earlier, the set Z = {Z1, Z2} blocks all the definite status path between X and Y . Hence, X and Y are m-separated by Z and Px(y|z1, z2) = P (y|x, z1, z2) holds true by Rule 2. Second, Thm. 1 defines X(Z) as the subset of X that is not in the possible ancestors of Z in PV\W, as opposed to PW in Prop. 1. We revisit the query in Ex. 2 to clarify this subtle but significant difference. Given the PAG in Fig. 3a, we want to evaluate whether Pw,x1,x2(y|z) = Pw(y|z) by applying Rule 3 from Thm. 1 instead of Prop. 1. Fig. 3c shows PV\{W} where X = {X1, X2} are not possible ancestors of Z. Therefore, X(Z) = X, the edges into X1 are cut in PW,X(Z), and X and Y are m-separated therein. Third, the proof of the Theorem 1 is provided in the appendix, but it follows from the relationship between m-connecting path in a manipulated MAG to a definite m-connecting path in the corresponding manipulated PAG. It was conjectured in [Zhang, 2008a, Footnote 15] that, for X,Y ⊂ V, if there is an m-connecting path inMY,X, then there is a definite m-connecting path in PY,X. In this work, we prove that conjecture to be true for the special class of manipulations required in the rules of the calculus. Finally, the next proposition establishes the necessity of the antecedents in Thm. 1 in order to apply the corresponding rule given every diagram in the equivalence class. 4All the proofs can be found in the full report [Jaber et al., 2022]. Theorem 2 (Atomic Completeness). The calculus in Theorem 1 is atomically complete; meaning, whenever a rule is not applicable given a PAG, then the corresponding rule in Pearl’s calculus is not applicable given some causal diagram in the Markov equivalence class. For instance, considering PAG P in Fig. 1c, we note that (Z ⊥⊥Y |X)PX,Z , which means rule 2 is not applicable. Clearly, the diagram in Fig. 1a is in the equivalence class of P and the corresponding rule of Pearl’s calculus is not applicable due to the latent confounder between Z and Y . 4 Effect Identification: A Complete Algorithm It is challenging to use the calculus rules in Thm. 1 to identify causal effects since it is computationally hard to decide whether there exists (and, if so, to find) a sequence of derivations in the generalized calculus to identify an effect of interest. The goal of this section is to formulate an algorithm to identify conditional causal effects. The next definition formalizes the notion of identifiability from a PAG, generalizing the causal-diagram-specific notion introduced in [Pearl, 2000, Tian, 2004]. Definition 4 (Causal-Effect Identifiability). 
Let X,Y,Z be disjoint sets of endogenous variables, V. The causal effect of X on Y conditioned on Z is said to be identifiable from a PAG P if the quantity Px(y|z) can be computed uniquely from the observational distribution P (V) given every causal diagram D in the Markov equivalence class represented by P . The remainder of the section is organized as follows. Sec. 4.1 introduces a version of the IDP algorithm [Jaber et al., 2019a] to identify marginal causal effects. The attractiveness of this version is that it yields simpler expressions whenever the effect is identifiable while preserving the same expressive power, i.e., completeness for marginal identification. Sec. 4.2 utilizes the new algorithm along with the calculus in Thm. 1 to formulate a complete algorithm for conditional identification. 4.1 Marginal Effect Identification We introduce the notion of pc-component next, which generalizes the notion of c-component that is instrumental to solve identification problems in a causal diagram [Tian and Pearl, 2002]. Definition 5 (PC-Component). In a PAG, or any induced subgraph thereof, two nodes are in the same possible c-component (pc-component) if there is a path between them such that (1) all non-endpoint nodes along the path are colliders, and (2) none of the edges is visible. Following Def. 5, e.g., W and Z in Fig. 1c are in the same pc-component due to W◦→ X ←◦Z. By contrast, X,Y are not in the same pc-component since the direct edge between them is visible and Z along ⟨X,Z, Y ⟩ is not a collider. Building on pc-components, we define the key notion of regions. Definition 6 (RegionRCA). Given PAG P over V, and A ⊆ C ⊆ V. The region of A with respect to C, denotedRCA, is the union of the buckets that contain nodes in the pc-component of A in PC. A region expands a pc-component and will prove to be useful in the identification algorithm. For example, the pc-component of X in Fig. 2 is {X,Z1, Z2} and the region RVX = {X,Z1, Z2, Y }. Building further on these definitions and the new calculus, we derive a new identification criterion. Proposition 2. Let P denote a PAG over V, T be a union of a subset of the buckets in P , and X ⊂ T be a bucket. Given Pv\t (i.e., an observational expression for Q[T]), Q[T \X] is identifiable by the following expression if, in PT, CX ∩ PossDe(X) ⊆ X, where CX is the pc-component of X. Q[T \X] = Pv\t Pv\t(X|T \ PossDe(X)) (1) Note the interventions are over buckets which may or may not be single nodes. Since there is little to no causal information inside a bucket, marginal effects of interventions over subsets of buckets are not identifiable. Also, the input distribution is possibly interventional which licenses recursive applications of the criterion. The next example illustrates the power of the new criterion. Example 3. Consider PAG P in Fig. 3a and the query Px1,x2,w(y, z, a). Starting with the observational distribution P (V) as input, let T = V and X = {X1,W}. We have CX = {X1,W,A,X2}, Algorithm 1 IDP(P,x,y) Input: PAG P and two disjoint sets X,Y ⊂ V Output: Expression for Px(y) or FAIL 1: Let D = PossAn(Y)PV\X 2: return ∑ d\y IDENTIFY(D,V, P ) 3: function IDENTIFY(C, T, Q = Q[T]) 4: if C = ∅ then return 1 5: if C = T then return Q /* In PT, let B denote a bucket, and let CB denote the pc-component of B */ 6: if ∃B ⊂ T \C such that CB ∩ PossDe(B)PT ⊆ B then 7: Compute Q[T \B] from Q; ▷ via Prop. 
2 8: return IDENTIFY(C,T \B,Q[T \B]) 9: else if ∃B ⊂ C such thatRB ̸= C then ▷RB is equivalent toRCB 10: return IDENTIFY(RB,T,Q) × IDENTIFY(RC\RB ,T,Q) IDENTIFY(RB∩RC\RB ,T,Q) 11: else throw FAIL PossDe(X) = {X1,W,Z, Y }, and CX ∩ PossDe(X) = X. Hence, the criterion in Prop. 2 is applicable and we have Px(y, z, a, x2) = P (v) P (x1,w|a,x2) = P (a, x2)× P (y, z|a,w) after simplification. Next, we consider intervening on X2 given Px1,w(y, z, a, x2). Notice X2 is disconnected from the other nodes in PV\{X1,W} and it trivially satisfies the criterion in Prop. 2. Therefore, we get the expression Px1,x2,w(y, z, a) = Px1,w(y,z,a,x2) Px1,w(x2|y,z,a) = P (a)× P (y, z|w, a) after simplification. A more general criterion was introduced in [Jaber et al., 2018, Thm. 1] based on the possible children of the intervention bucket X instead of the possible descendants. However, the corresponding expression is convoluted and usually large, which could be intractable even if the effect is identifiable. Alg. 1 shows the proposed version of IDP, which builds on the new criterion (Prop. 2). Specifically, the key difference between this algorithm and the one proposed in [Jaber et al., 2019a] is in Lines 6-7, where the criterion in Prop. 2 is used as opposed to that in [Jaber et al., 2018, Thm. 1]. Interestingly enough, the new criterion is “just right,” namely, it is also sufficient to obtain a complete algorithm for marginal effect identification, as shown in the next result.5 Theorem 3 (completeness). Alg. 1 is complete for identifying marginal effects Px(y). Moreover, the calculus in Thm. 1, together with standard probability manipulations are complete for the same task. 4.2 Conditional Effect Identification We start by making a couple of observations, and then build on those observations to formulate an algorithm to identify conditional causal effects. The proposed algorithm leverages the calculus in Thm. 1 and the IDP algorithm in Alg. 1. Obs. 1 notes that a conditional effect Px(y|z) can be rewritten as Px(y,z)∑ y′ Px(y ′,z) , and hence it is identifiable if Px(y, z) is identifiable by Alg. 1. Observation 1 (Marginal Effect). Consider PAG P1 in Fig. 4a where the goal is to identify the causal effect Pb(a, c|d). We notice that the effect Pb(a, c, d) is identifiable using the IDP algorithm. Let E := P (a, d)× P (c|b, d) denote the expression for the marginal effect Pb(a, c, d) which can be obtained from IDP. Consequently, the target effect can be computed using the expression E/ ∑ a′,c′ E. Whenever the marginal effect Px(y, z) is not identifiable using Alg. 1, Observations 2 and 3 propose techniques to identify the conditional effect using the calculus in Thm. 1, namely rule 2. Obs. 2 uses rule 2 of Thm. 1, when applicable, to move variables from the conditioning to the intervention set. The marginal effect of the resulting conditional query turns out to be identifiable, and consequently does the conditional effect. We note that the work in [Shpitser and Pearl, 2006] uses the same trick to formulate an algorithm for conditional effect identification given a causal diagram. 5A more detailed comparison of the two algorithms along with illustrative examples is provided in the full report [Jaber et al., 2022]. Algorithm 2 CIDP(P,x,y, z) Input: PAG P and three disjoint sets X,Y,Z ⊂ V Output: Expression for Px(y|z) or FAIL 1: D← PossAn(Y ∪ Z)PV\X /* Let B1, . . . ,Bm denote the buckets in P */ 2: while ∃Bi s.t. 
Bi ∩D ̸= ∅ ∧Bi ̸⊆ D do 3: X′ ← Bi ∩X 4: if (X′ ⊥⊥ Y|(X \X′) ∪ Z)P X\X′,X′ then 5: x← x \ x′; z← z ∪ x′ ▷ Apply rule 2 of Thm. 1 6: D← PossAn(Y ∪ Z)PV\X 7: else throw FAIL /* Let Z1, . . . ,Zm partition Z such that Zi := Z ∩Bi */ 8: while ∃Zi s.t. (Zi ⊥⊥ Y|X ∪ (Z \ Zi))PX,Zi do 9: x← x ∪ zi; z← z \ zi ▷ Apply rule 2 of Thm. 1 10: E ← IDP(P,x,y ∪ z) 11: return E/ ∑ y′ E Observation 2 (Flip Observations to Interventions). Consider PAG P1 in Fig. 4a and the causal query Pa(c|b, d). Unlike the case in Obs. 1, the marginal effect Pa(b, c, d) is not identifiable by the IDP algorithm. Using rule 2 of Thm. 1, we have (B ⊥⊥ C|D)PA,B and we move B from conditioning to intervention, i.e., Pa(c|b, d) = Pa,b(c|d). The marginal effect Pa,b(c, d) is identifiable by IDP and we get the expression E := P (d)× P (c|b, d). Hence, we have Pa(c|b, d) = Pa,b(c|d) = E/ ∑ c′ E. Finally, Obs. 3 comes as a surprise since it requires flipping interventions to observations, contrary to Obs. 2. A key graphical structure in the PAG that requires such a treatment is the presence of a proper possibly directed path from X to Y ∪ Z that starts with a circle edge. Observation 3 (Flip Interventions to Observations). We revisit the query in Example. 1. First, the marginal effect Px(y, z1, z2) is not identifiable by the IDP algorithm. Also, we cannot use rule 2 in Thm. 1 to flip Z1 or Z2 into interventions since they are both adjacent to Y with a circle edge (◦−◦). However, we can use rule 2 to flip X to the conditioning set since there are no definite m-connecting paths between X and Y given {Z1, Z2} in the PAG. Hence, we obtain Px(y|z1, z2) = P (y|z1, z2, x). Alternatively, consider PAG P2 in Fig. 4b with the causal query Px(y|z). We cannot use rule 2 to flip X to an observation since ⟨X,W, Y ⟩ is active given Z in P2X . In fact, the causal diagram G in Fig. 4c is in the equivalence class of P2 and such that Px(y|z) is provably not identifiable [Shpitser and Pearl, 2006, Corol. 2]. Hence, the effect is not identifiable given P2 according to Def. 4. Putting these observations together, we formulate the CIDP algorithm (Alg. 2) for identifying conditional causal effects given a PAG. The algorithm is divided into three phases. In Phase I (lines 1-7), Obs. 3 is used to check for proper possibly directed paths from X to Y ∪Z that start with a circle edge. This is checked algorithmically by computing D = PossAn(Y ∪ Z)PV\X , iteratively, and checking if some bucket Bi in P intersects with, but is not a subset of, D. If such a bucket exists, CIDP flips Bi ∩X from interventions to observations using rule 2, when applicable, else the algorithm throws a fail and the effect is not computable. In Phase II (lines 8-9), Obs. 2 is used to flip the subset of observations in each bucket into interventions by applying rule 2 of Thm. 1, whenever applicable. Finally, in Phase III (line 10), the marginal effect Px(y ∪ z) is computed from the modified sets X and Z, using the IDP algorithm in Alg. 1. If the call is successful, an expression for the conditional effect is returned at line 11. The example below illustrates CIDP in action. An empirical evaluation of CIDP is provided in the full report [Jaber et al., 2022].6 Example 4. Consider PAG P in Fig. 5a and the conditional query Px(y|z) := Pa,f (y|b, e). In Phase I, we have D = PossAn(Y ∪Z)PV\X = {Y,B,E,C,G}, and the bucket {A,B} satisfies the conditions at line 2 since A ̸∈ D. In PF,A (as shown in Fig. 5b), X′ = {A} is m-separated from Y given {B,E, F} which satisfies the if condition at line 4. 
Example 4. Consider PAG P in Fig. 5a and the conditional query Px(y|z) := Pa,f(y|b, e). In Phase I, we have D = PossAn(Y ∪ Z)PV\X = {Y, B, E, C, G}, and the bucket {A, B} satisfies the conditions at line 2 since A ∉ D. In PF,A (as shown in Fig. 5b), X′ = {A} is m-separated from Y given {B, E, F}, which satisfies the if condition at line 4. Hence, we flip A to the conditioning set Z via rule 2 of Thm. 1 to obtain the updated query Px(y|z) = Pf(y|a, b, e). In Phase II (lines 8-9), let Z1 = {E} and Z2 = {A, B}. In PF,E (see Fig. 5c), we have E m-separated from Y given {F} ∪ Z2, which satisfies the if condition at line 8. Hence, we flip E to the intervention set using rule 2 of Thm. 1 and we get the updated query Px(y|z) = Pe,f(y|a, b). Next, we check if Z = Z2 is m-separated from Y given X in PX,Z, which does not hold due to a bidirected edge between B and Y. Hence, rule 2 is not applicable and Z remains in the conditioning set. Finally, we call IDP to compute the marginal effect Pe,f(y, a, b), if possible. The effect is identifiable with the simplified expression P(y|b, e, f) × P(a, b). Hence, Pa,f(y|b, e) = Pe,f(y|a, b) = [P(y|b, e, f) × P(a, b)] / Σy′ [P(y′|b, e, f) × P(a, b)] = P(y|b, e, f).

The soundness of Alg. 2 follows from that of Alg. 1 and Thm. 1. Next, we turn to its completeness. According to Def. 4, whenever CIDP fails, we need to establish one of two conditions for completeness. Either there exist two causal diagrams in the equivalence class with different identifications, or the effect is not identifiable in some causal diagram according to the criterion in [Shpitser and Pearl, 2006, Corol. 2]. Thm. 4 establishes completeness by proving that the latter is always the case. This result, along with the completeness of the calculus rules for the identification of marginal effects (see Thm. 3), implies that the rules are complete for conditional effects as well.

Theorem 4 (completeness). Alg. 2 is complete for identifying conditional effects Px(y|z). Also, the calculus in Thm. 1, together with standard probability manipulations, is complete for the same task.

5 Discussion

In this work, an oracle for conditional independences is assumed to be available, which leads to the true PAG. Assuming the presence of such an oracle abstracts away the challenges of dealing with finite data and of testing for conditional independence therefrom. Another challenge lies in the computational complexity of learning the PAG in the first place [Colombo et al., 2012], and of estimating the expression when the effect is identifiable [Pearl and Robins, 1995, Jung et al., 2021]. In light of this, it is important to distinguish between the task of causal effect identification and that of causal effect estimation. This set of results is concerned with the first task (causal identification), which asks whether a target conditional effect is uniquely computable from the observational distribution P(V), given a PAG learnable from P(V). The objective of CIDP in Algorithm 2 is to decide whether the effect is identifiable and to provide an expression for it when the answer is yes, while being agnostic as to whether P(V) can be accurately estimated from the available samples. As for the second task, the estimation of the conditional causal effect using the identification formula provided by Algorithm 2 poses several challenges under finite data. The number of samples sufficient to identify a given effect would depend on the size of the expression, among other factors, and naive methods for estimation exacerbate this problem. Recent work such as [Jung et al., 2021] proposes a double machine learning estimator for marginal effects that are identifiable given a PAG. An interesting direction of future work is to generalize this approach to conditional causal effects that are identifiable by CIDP.
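As a toy illustration of the estimation point above, the snippet below computes a naive plug-in estimate of the formula identified in Example 4, Pa,f(y|b, e) = P(y|b, e, f). The data frame is a hypothetical sample of binary observations, not data from the paper; with larger identified expressions, the conditioning cells thin out quickly, which is exactly the finite-sample concern raised here.

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Hypothetical i.i.d. binary samples of (Y, B, E, F).
df = pd.DataFrame(rng.integers(0, 2, size=(10_000, 4)), columns=["Y", "B", "E", "F"])

def plug_in(df, y, b, e, f):
    # Empirical P(Y = y | B = b, E = e, F = f): restrict to the matching cell.
    cell = df[(df["B"] == b) & (df["E"] == e) & (df["F"] == f)]
    return float((cell["Y"] == y).mean())

print(plug_in(df, y=1, b=0, e=1, f=1))   # estimate of P_{a,f}(Y=1 | B=0, E=1)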
6 Conclusions

In this work, we investigate the problem of identifying conditional interventional distributions given a Markov equivalence class of causal diagrams represented by a PAG. We introduce a new generalization of the do-calculus for the identification of interventional distributions in PAGs (Thm. 1) and show it to be atomically complete (Thm. 2). Building on these results, we develop the CIDP algorithm (Alg. 2), which is both sound and complete, i.e., it identifies any conditional effect of the form Px(y|z) that is identifiable (Thm. 4). Finally, we show that the new calculus rules, along with standard probability manipulations, are complete for the same task. These results close the problem of effect identification under Markov equivalence in that they completely delineate the theoretical boundaries of what is, in principle, computable from a certain data collection. We expect the newly introduced machinery to help data scientists identify novel effects in real-world settings.

Acknowledgments and Disclosure of Funding

Bareinboim and Ribeiro's research was supported in part by the NSF, ONR, AFOSR, DoE, Amazon, JP Morgan, and The Alfred P. Sloan Foundation. Zhang's research was supported by the RGC of Hong Kong under GRF13602720.
1. What is the focus and contribution of the paper regarding causal effect identification in PAGs?
2. What are the strengths of the proposed approach, particularly in terms of its soundness and completeness?
3. What are the weaknesses of the paper, especially regarding its comparisons with other works and lack of references?
4. Can you provide a high-level intuition about the differences between the proposed algorithm and previous works, such as the one by A. Jaber?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
Summary Of The Paper
This work studies the question of identification of conditional causal effects given a PAG. The authors propose a sound and complete algorithm for the identification of causal effects of the form P(y|do(x), z).

Strengths And Weaknesses
It studies an important and quite interesting problem in causality, namely causal effect identification in PAGs. The authors propose a new set of do-calculus rules for PAGs and prove their soundness and completeness in terms of identifiability of conditional causal effects. They also propose a sound and complete algorithm for the identification of causal effects P(y|do(x), z) given a PAG.
There are statements that require more references. For example, the IDP algorithm is similar (except for slight changes in one line) to the algorithm in "Causal Identification under Markov Equivalence: Completeness Results" by A. Jaber. This needs to be clarified in the main text. The results resemble the ones presented by Shpitser for identification of conditional causal effects in DAGs. Moreover, the problem of identification of causal effects in PAGs has been addressed before. Therefore, the natural question is what the contribution of this work is. It is very important to clearly state the main difference between the proposed method and the other work, and what the significance of this work is compared to the others, e.g., proofs are different and more involved, assumptions have been relaxed, ...

Questions
It is important to provide high-level intuition about the difference between the algorithm for identifying conditional causal effects proposed in "Identification of Conditional Causal Effects under Markov Equivalence" by A. Jaber and the algorithm proposed in this work. It would improve readability to add more details about what has already been done (like Algorithm 1) and what is newly proposed in the paper. Papers on identifiability commonly assume that the underlying DAG is semi-Markovian. I would like to ask whether it is also needed here and whether this has any effect (limitation) on the set of PAGs considered in this work. Will the proofs still hold if the underlying DAG of the corresponding PAG were not semi-Markovian?

Limitations
Please see above comments.
NIPS
1. What is the focus and contribution of the paper regarding causal queries and structural causal models?
2. What are the strengths of the proposed approach, particularly in its novelty and technical correctness?
3. What are the weaknesses of the paper, especially in the presentation of the algorithms?
4. Do you have any concerns or questions about the applicability of the results to counterfactual queries?
5. How does the reviewer assess the clarity and accessibility of the notation, derivations, and examples in the paper?
6. What are the limitations of the paper regarding societal impact?
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
Summary Of The Paper
This is a paper about the identification of causal queries in structural causal models. While most existing works assume the availability of the causal graph, here only the PAG (partial ancestral graph) structure (corresponding to a collection of causal graphs, CGs) is assumed to be available. The authors derive an analogue of the do-calculus for this setup. This is an extension of Zhang's calculus, but here completeness guarantees are provided. Algorithms for the identification of causal effects are provided.

Strengths And Weaknesses
Strengths: Although representing an evolution of existing results (Zhang's results and calculus for DAGs, and Jaber's algorithms and completeness result), the results in this paper are clearly novel, and the authors' final claim about their paper "clos[ing] the problem of effect identification under Markov equivalence" is hard to question. This makes the contribution highly significant. The paper is very technical, but the results and the proofs seem to be correct (I only partially checked the long proofs in the supplementary material). The derivations and the notation are quite accessible to people working with SCMs. I also appreciate the insights provided by the different examples reported in the paper.
Weaknesses: While the statements of the theorems and the examples are generally very clear, the presentation of the algorithms can probably be improved.

Questions
It is not perfectly clear to me whether or not these results can also be applied to counterfactual queries (I think so, but this point might be made explicit). The considered approach allows one to address identifiability when the causal graph is not known and only the PAG is available. As I understand it, PAG identifiability corresponds to identifiability in all the CGs compatible with the PAG, together with the fact that the query gives the same result in each CG. What I don't understand is whether non-identifiability in PAGs means that there is at least one non-identifiable CG compatible with the PAG, or whether it might be the case that the effect is identifiable in every CG but leads to different values. A clarification on these points would be helpful, at least from my point of view.

Limitations
I don't see critical issues in terms of societal impact.
NIPS
Title Causal Identification under Markov equivalence: Calculus, Algorithm, and Completeness Abstract One common task in many data sciences applications is to answer questions about the effect of new interventions, like: ‘what would happen to Y if we make X equal to x while observing covariates Z = z?’. Formally, this is known as conditional effect identification, where the goal is to determine whether a post-interventional distribution is computable from the combination of an observational distribution and assumptions about the underlying domain represented by a causal diagram. A plethora of methods was developed for solving this problem, including the celebrated do-calculus [Pearl, 1995]. In practice, these results are not always applicable since they require a fully specified causal diagram as input, which is usually not available. In this paper, we assume as the input of the task a less informative structure known as a partial ancestral graph (PAG), which represents a Markov equivalence class of causal diagrams, learnable from observational data. We make the following contributions under this relaxed setting. First, we introduce a new causal calculus, which subsumes the current state-of-the-art, PAG-calculus. Second, we develop an algorithm for conditional effect identification given a PAG and prove it to be both sound and complete. In words, failure of the algorithm to identify a certain effect implies that this effect is not identifiable by any method. Third, we prove the proposed calculus to be complete for the same task. 1 Introduction Despite the recent advances in AI and machine learning, the current generation of intelligent systems still lacks the pivotal ability to represent, learn, and reason with cause and effect relationships. The discipline of causal inference aims to ‘algorithmitize’ causal reasoning capabilities towards producing human-like machine intelligence and rational decision-making [Pearl and Mackenzie, 2018, Pearl, 2019, Bareinboim and Pearl, 2016]. One fundamental type of inference in this setting is concerned with the effect of new interventions, e.g., ‘what would happen to outcome Y if X were set to x?’ More generally, we may be interested in Y ’s distribution in a sub-population picked out by the value of some covariates Z = z’. For example, a legislator might be interested in the impact that increasing the minimum wage (X = x) has on profits (Y ) in small businesses (Z = z), which is written in causal language as the interventional distribution P (y|do(x), z), or Px(y|z). One method capable of answering such questions is through controlled experimentation [Fisher, 1951]. In many practical settings found throughout the empirical sciences, AI, and machine learning, it is not always possible to perform a controlled experiment due to ethical, financial, and technical considerations. This motivates the study of a problem known as causal effect identification [Pearl, 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 2000, Ch. 3]. The idea is to use the observational distribution P (V) along with assumptions about the underlying domain, articulated in the form of a causal diagram D, to infer the interventional distribution Px(y|z) when possible. For instance, Fig. 1a represents a causal diagram in which nodes correspond to measured variables, directed edges represent direct causal relations, and bidirecteddashed edges encode spurious associations due to unmeasured confounders. 
A plethora of methods have been developed to address the identification task including the celebrated causal calculus proposed by Pearl [1995] as well as complete algorithms [Tian, 2004, Shpitser and Pearl, 2006, Huang and Valtorta, 2006]. For instance, given the causal diagram in Fig. 1a and the query Px(y|z), the calculus sanctions the identity Px(y|z) = P (y|z, x). In words, the interventional distribution on the l.h.s. equates to the observational distribution on the right, which is available as input. Despite the power of these results, requiring the diagram as the input of the task is an Achilles heel for those methods, since background knowledge is usually not sufficient to pin down the single, true diagram. To circumvent these challenges, a growing literature develops data-driven methods that attempt to learn the causal diagram from data first, and then perform identification from there. In practice, however, only an equivalence class (EC) of diagrams can be inferred from observational data without making substantial assumptions about the causal mechanisms [Verma, 1993, Spirtes et al., 2001, Pearl, 2000]. A prominent representation of this class is known as partial ancestral graphs (PAGs) [Zhang, 2008b]. Fig. 1c illustrates the PAG learned from observational data consistent with both causal diagrams in Figs. 1a and 1b since they are in the same Markov equivalence class. The directed edges in a PAG encode ancestral relations, not necessarily direct, and the circle marks stand for structural uncertainty. Directed edges labeled with v signify the absence of unmeasured confounders. Causal effect identification in a PAG is usually more challenging than from a single diagram due to the structural uncertainties and the infeasibility of enumerating each member of the EC in most cases. The do-calculus was extended for PAGs to account for the inherent structure uncertainties without the need for enumeration [Zhang, 2007]. Still, the calculus falls short of capturing all identifiable effects as we will see in Sec. 3. On the other hand, it is computationally hard to decide whether there exists (and, if so, to find) a sequence of derivations in the generalized calculus to identify an effect of interest. In a more systematic manner, a complete algorithm has been developed to identify marginal effects (i.e., Px(y)) given a PAG [Jaber et al., 2019a]. This algorithm can be used to identify conditional effects whenever the joint distribution Px(y ∪ z) is identifiable. Still, many conditional effects are identifiable even if the corresponding joint effect is not (Sec. 4.2). Finally, an algorithm to identify conditional effects has been proposed in [Jaber et al., 2019b], but not proven to be complete.1 In this paper, we pursue a data-driven formulation for the task of identification of any conditional causal effect from a combination of an observational distribution and the corresponding PAG (instead of a fully specified causal diagram). Accordingly, we makes the following contributions: 1. We propose a causal calculus for PAGs that subsumes the stat-of-the-art calculus introduced in [Zhang, 2007]. We prove the rules are atomic complete, i.e., a rule is not applicable in some causal diagram in the underlying EC whenever it is not applicable given the PAG. 2. Building on these results, we develop an algorithm for the identification of conditional causal effects given a PAG. 
We prove the algorithm is complete, i.e., the effect is not identifiable in some causal diagram in the equivalence class whenever the algorithm fails. 3. Finally, we prove the calculus is complete for the task of identifying conditional effects. 1Another approach is based on SAT (Boolean constraint satisfaction) solvers [Hyttinen et al., 2015]. Given its somewhat distinct nature, a closer comparison lies outside the scope of this paper. 2 Preliminaries In this section, we introduce the basic setup and notations. Boldface capital letters denote sets of variables, while boldface lowercase letters stand for value assignments to those variables.2 Structural Causal Models. We use Structural Causal Models (SCMs) as our basic semantical framework [Pearl, 2000]. Formally, an SCM M is a 4-tuple ⟨U,V,F, P (U)⟩, where U is a set of exogenous (unmeasured) variables and V is a set of endogenous (measured) variables. F represents a collection of functions such that each endogenous variable Vi ∈ V is determined by a function fi ∈ F. Finally, P (U) encodes the uncertainty over the exogenous variables. Every SCM is associated with one causal diagram where every variable in V ∪U is a node, and arrows are drawn between nodes in accordance with the functions in F. Following standard practice, we omit the exogenous nodes and add a bidirected dashed arc between two endogenous nodes if they share an exogenous parent. We only consider recursive systems, thus the corresponding diagram is acyclic. The marginal distribution induced over the endogenous variables P (V) is called observational. The d-separation criterion captures the conditional independence relations entailed by a causal diagram in P (V). For C ⊆ V, Q[C] denotes the post-intervention distribution of C under an intervention on V \C, i.e. Pv\c(c).3 Ancestral Graphs. We now introduce a graphical representation of equivalence classes of causal diagrams. A MAG represents a set of causal diagrams with the same set of observed variables that entail the same conditional independence and ancestral relations among the observed variables [Richardson and Spirtes, 2002]. M-separation extends d-separation to MAGs such that d-separation in a causal diagram corresponds to m-separation in its unique MAG over the observed variables, and vice versa. Definition 1 (m-separation). A path p between X and Y is active (or m-connecting) relative to Z (X,Y ̸∈ Z) if every non-collider on p is not in Z, and every collider on p is an ancestor of some Z ∈ Z. X and Y are m-separated by Z if there is no active path between X and Y relative to Z. Different MAGs entail the same independence model and hence are Markov equivalent. A PAG represents an equivalence class of MAGs [M], which shares the same adjacencies as every MAG in [M] and displays all and only the invariant edge marks. A circle indicates an edge mark that is not invariant. A PAG is learnable from the independence model over the observed variables, and the FCI algorithm is a standard method to learn such an object [Zhang, 2008b]. In this work, an oracle for conditional independences is assumed to be available, which leads to the true PAG. Graphical Notions. Given a PAG, a path between X and Y is potentially directed (causal) from X to Y if there is no arrowhead on the path pointing towards X . Y is called a possible descendant of X and X a possible ancestor of Y if there is a potentially directed path from X to Y . 
For a set of nodes X, let An(X) (De(X)) denote the union of X and the set of possible ancestors (descendants) of X. Given two sets of nodes X and Y, a path between them is called proper if one of the endpoints is in X and the other is in Y, and no other node on the path is in X or Y. Let ⟨A, B, C⟩ be any consecutive triple along a path p. B is a collider on p if both edges are into B. B is a (definite) non-collider on p if one of the edges is out of B, or if both edges have circle marks at B and there is no edge between A and C. A path is of definite status if every non-endpoint node along it is either a collider or a non-collider. If the edge marks on a path between X and Y are all circles, we call the path a circle path. We refer to the closure of nodes connected with circle paths as a bucket. A directed edge X → Y in a PAG is visible if there exists no causal diagram in the corresponding equivalence class where the relation between X and Y is confounded. Which directed edges are visible is easily decidable by a graphical condition [Zhang, 2008a], so we mark visible edges by v.
Manipulations in PAGs. Let P denote a PAG over V and X ⊆ V. $P_{\mathbf{X}}$ denotes the induced subgraph of P over X. The X-lower-manipulation of P deletes all those edges that are visible in P and are out of variables in X, replaces all those edges that are out of variables in X but are invisible in P with bi-directed edges, and otherwise keeps P as it is. The resulting graph is denoted as $P_{\underline{\mathbf{X}}}$. The X-upper-manipulation of P deletes all those edges in P that are into variables in X, and otherwise keeps P as it is. The resulting graph is denoted as $P_{\overline{\mathbf{X}}}$.
2A more comprehensive discussion about the background is provided in the full report [Jaber et al., 2022].
3Without loss of generality, we assume the model is semi-Markovian. Tian [Tian, 2002, Sec. 5.6] shows that the identification of a causal effect in a non-Markovian model is equivalent to the identification of the same effect in a derived semi-Markovian model via a procedure known as 'projection'.
3 Causal Calculus for PAGs
The causal calculus introduced in [Pearl, 1995] is a seminal work that has been instrumental for understanding and eventually solving the task of effect identification from causal diagrams. Zhang [2007] generalized this result to the context of ancestral graphs, where a PAG is taken as the input of the task instead of a specific causal diagram. In Sec. 3.1, we discuss Zhang's rules and try to understand the reasons they are insufficient to solve the identification problem in full generality. Further, in Sec. 3.2, we introduce another generalization of the original calculus and prove that it is complete for atomic identification. This result will be further strengthened in subsequent sections.
3.1 Zhang's Calculus
An obvious extension of the m-separation criterion shown in Def. 1 to PAGs blocks all possibly m-connecting paths, as defined next.
Definition 2 (Possibly m-connecting path). In a PAG, a path p between X and Y is a possibly m-connecting path relative to a (possibly empty) set of nodes Z (X, Y ∉ Z) if every definite non-collider on p is not a member of Z, and every collider on p is a possible ancestor of some member of Z. X and Y are m̂-separated by Z if there is no possibly m-connecting path between them relative to Z.
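To make Definition 2 concrete, here is a minimal Python sketch that classifies the middle nodes of a path and tests whether the path is possibly m-connecting. The edge encoding (a dict of endpoint marks) and all helper names are our own illustrative assumptions, and the example graph is the all-circle PAG used in Fig. 2 and Example 1 below, not an implementation from the paper.

```python
# Minimal sketch of Def. 2 (possibly m-connecting path). Assumed encoding:
# marks[(a, b)] is the edge mark at b on the edge a *-* b ('arrow'/'tail'/'circle').
marks, adj = {}, set()

def add_edge(a, b, mark_at_a, mark_at_b):
    marks[(b, a)], marks[(a, b)] = mark_at_a, mark_at_b
    adj.update({(a, b), (b, a)})

def is_collider(a, b, c):
    # B is a collider on <A,B,C> iff both edges point into B.
    return marks[(a, b)] == 'arrow' and marks[(c, b)] == 'arrow'

def is_definite_noncollider(a, b, c):
    tail_at_b = 'tail' in (marks[(a, b)], marks[(c, b)])
    circles = marks[(a, b)] == marks[(c, b)] == 'circle'
    return tail_at_b or (circles and (a, c) not in adj)

def possibly_m_connecting(path, Z, poss_an):
    """poss_an[v]: set of possible ancestors of v (including v itself)."""
    for a, b, c in zip(path, path[1:], path[2:]):
        if is_collider(a, b, c) and not any(b in poss_an[z] for z in Z):
            return False        # collider not a possible ancestor of any Z-member
        if is_definite_noncollider(a, b, c) and b in Z:
            return False        # definite non-collider inside Z
    return True

# Four-node PAG with only circle edges (cf. Fig. 2): X, Z1, Z2, Y.
for u, v in [('X','Z1'), ('X','Z2'), ('Z1','Z2'), ('Z1','Y'), ('Z2','Y')]:
    add_edge(u, v, 'circle', 'circle')
poss_an = {'Z1': {'Z1'}, 'Z2': {'Z2'}}
# Z1 on <X,Z1,Z2> is neither a collider nor a definite non-collider (X, Z2 are
# adjacent), so Def. 2 imposes no constraint and the path stays connecting:
print(possibly_m_connecting(['X','Z1','Z2','Y'], {'Z1','Z2'}, poss_an))  # True
```

The printed True is exactly the failure mode of m̂-separation discussed next: a non-definite path keeps X and Y m̂-connected even though they are separable in every diagram of the class.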
Using this notion of separation, Zhang [2007] proposed a calculus given a PAG, shown next.
Proposition 1 (Zhang's Calculus). Let P be the PAG over V, and let X, Y, W, Z be disjoint subsets of V. The following rules are valid, in the sense that if the antecedent of the rule holds, then the consequent holds in every MAG, and consequently every causal diagram, represented by P.
1. P(y | do(w), x, z) = P(y | do(w), z), if X and Y are m̂-separated by W ∪ Z in $P_{\overline{\mathbf{W}}}$.
2. P(y | do(w), do(x), z) = P(y | do(w), x, z), if X and Y are m̂-separated by W ∪ Z in $P_{\overline{\mathbf{W}},\underline{\mathbf{X}}}$.
3. P(y | do(w), do(x), z) = P(y | do(w), z), if X and Y are m̂-separated by W ∪ Z in $P_{\overline{\mathbf{W}},\overline{\mathbf{X}(\mathbf{Z})}}$, where $\mathbf{X}(\mathbf{Z}) := \mathbf{X} \setminus PossAn(\mathbf{Z})_{P_{\overline{\mathbf{W}}}}$.
In words, rule 1 generalizes m-separation to interventional settings. Further, rule 2 licenses alternating a subset X between intervention and conditioning. Finally, rule 3 allows adding/removing an intervention do(X = x). The next two examples illustrate the shortcomings of this result: the first reveals the drawback of using Def. 2 to establish graphical separation, and the second inspects the evaluation of X(Z) in rule 3 (where the notion of possible ancestors is evoked).
Example 1. Consider the PAG P shown in Fig. 2. Since X and Y are not adjacent in P, it is easy to show that X and Y are separable given {Z1, Z2} in every causal diagram in the equivalence class. If rule 3 of Pearl's calculus is used in each diagram, then P_x(y|z1, z2) = P(y|z1, z2). Further, applying rule 2 of the do-calculus in each diagram, it is also the case that P_x(y|z1, z2) = P(y|z1, z2, x). However, due to the possibly m-connecting path ⟨X, Z1, Z2, Y⟩, rules 3 and 2 in Prop. 1 are not applicable to P. In other words, even though Pearl's calculus rules 2 and 3 are applicable to each diagram in the equivalence class, the same results cannot be established by Zhang's calculus.
Example 2. Consider the PAG in Fig. 3a, and the evaluation of whether the equality P_{w,x1,x2}(y|z) = P_w(y|z) holds. In order to apply rule 3 of Prop. 1, we need to evaluate whether {X1, X2} is separated from {Y} given {W, Z} in the manipulated graph in Fig. 3b, which is not true in this case. However, the rule can be improved to be applicable in this case, as we will show later on (Sec. 3.2). The critical step will be the evaluation of the set X(Z) from $P_{\overline{\mathbf{W}}}$.
3.2 A New Calculus
Building on the analysis of the calculus proposed in [Zhang, 2007], we introduce next a set of rules centered around blocking definite m-connecting paths, as defined next.
Definition 3 (Definite m-connecting path). In a PAG, a path p between X and Y is a definite m-connecting path relative to a (possibly empty) set Z (X, Y ∉ Z) if p is of definite status, every definite non-collider on p is not a member of Z, and every collider on p is an ancestor of some member of Z. X and Y are m-separated by Z if there is no definite m-connecting path between them relative to Z.
It is easy to see that every definite m-connecting path is a possibly m-connecting path, according to Def. 2; however, the converse is not true. For example, given the PAG in Figure 2, we have two definite status paths between X and Y. The first is X ◦−◦ Z1 ◦−◦ Y and the second is X ◦−◦ Z2 ◦−◦ Y, where Z1 and Z2 are definite non-colliders. Given the set Z = {Z1, Z2}, Z blocks all definite status paths between X and Y. Alternatively, the path X ◦−◦ Z1 ◦−◦ Z2 ◦−◦ Y is not of definite status, since Z1, Z2 are neither colliders nor non-colliders on this path. Hence, the path is a possibly m-connecting path relative to Z by Def. 2 but not a definite m-connecting path by Def. 3.
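To see the difference operationally, the following self-contained sketch enumerates simple paths and tests m-separation via definite m-connecting paths (Def. 3) on the same hypothetical all-circle PAG; again, the encoding and helper names are our own assumptions, not code from the paper.

```python
# Sketch of m-separation via Def. 3 on the all-circle PAG of Fig. 2.
marks, adj = {}, {}

def add_edge(a, b, mark_at_a, mark_at_b):
    marks[(b, a)], marks[(a, b)] = mark_at_a, mark_at_b
    adj.setdefault(a, set()).add(b); adj.setdefault(b, set()).add(a)

def simple_paths(s, t):
    stack = [(s, [s])]
    while stack:
        u, path = stack.pop()
        if u == t:
            yield path; continue
        for v in adj[u] - set(path):
            stack.append((v, path + [v]))

def definite_m_connecting(path, Z, anc):
    """anc[v]: ancestors of v (including v). Checks Def. 3 triple by triple."""
    for a, b, c in zip(path, path[1:], path[2:]):
        collider = marks[(a, b)] == marks[(c, b)] == 'arrow'
        tail_at_b = 'tail' in (marks[(a, b)], marks[(c, b)])
        circles = marks[(a, b)] == marks[(c, b)] == 'circle'
        noncollider = tail_at_b or (circles and c not in adj[a])
        if collider:
            if not any(b in anc[z] for z in Z): return False
        elif noncollider:
            if b in Z: return False
        else:
            return False            # not of definite status: path is discarded
    return True

def m_separated(s, t, Z, anc):
    return not any(definite_m_connecting(p, Z, anc) for p in simple_paths(s, t))

for u, v in [('X','Z1'), ('X','Z2'), ('Z1','Z2'), ('Z1','Y'), ('Z2','Y')]:
    add_edge(u, v, 'circle', 'circle')
anc = {'Z1': {'Z1'}, 'Z2': {'Z2'}}
# {Z1, Z2} blocks both definite status paths X o-o Zi o-o Y, and the longer
# path X o-o Z1 o-o Z2 o-o Y is not of definite status, so:
print(m_separated('X', 'Y', {'Z1', 'Z2'}, anc))  # True
```

In contrast to the Def. 2 sketch above, the non-definite path is simply discarded here, which is what lets the new calculus license the applications of rules 2 and 3 in Example 1.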
We are now ready to use this new definition and formulate a more powerful calculus.
Theorem 1. Let P be the PAG over V, and let X, Y, W, Z be disjoint subsets of V. The following rules are valid, in the sense that if the antecedent of the rule holds, then the consequent holds in every MAG, and consequently every causal diagram, represented by P.4
1. P(y | do(w), x, z) = P(y | do(w), z), if X and Y are m-separated by W ∪ Z in $P_{\overline{\mathbf{W}}}$.
2. P(y | do(w), do(x), z) = P(y | do(w), x, z), if X and Y are m-separated by W ∪ Z in $P_{\overline{\mathbf{W}},\underline{\mathbf{X}}}$.
3. P(y | do(w), do(x), z) = P(y | do(w), z), if X and Y are m-separated by W ∪ Z in $P_{\overline{\mathbf{W}},\overline{\mathbf{X}(\mathbf{Z})}}$, where $\mathbf{X}(\mathbf{Z}) := \mathbf{X} \setminus PossAn(\mathbf{Z})_{P_{\mathbf{V}\setminus\mathbf{W}}}$.
A few observations are important at this point. Despite the visual similarity to Prop. 1, there are two pivotal differences between these calculi. First, Thm. 1 only requires blocking the definite status paths, hence the use of 'm-separation' in Thm. 1 instead of 'm̂-separation'. Consider the PAG P in Fig. 2. We want to evaluate whether P_x(y|z1, z2) = P(y|x, z1, z2) by applying rule 2 in Theorem 1. Since all the edges in the PAG are circle edges, $P_{\underline{\mathbf{X}}} = P$. As discussed earlier, the set Z = {Z1, Z2} blocks all the definite status paths between X and Y. Hence, X and Y are m-separated by Z, and P_x(y|z1, z2) = P(y|x, z1, z2) holds true by rule 2.
Second, Thm. 1 defines X(Z) as the subset of X that is not in the possible ancestors of Z in $P_{\mathbf{V}\setminus\mathbf{W}}$, as opposed to $P_{\overline{\mathbf{W}}}$ in Prop. 1. We revisit the query in Ex. 2 to clarify this subtle but significant difference. Given the PAG in Fig. 3a, we want to evaluate whether P_{w,x1,x2}(y|z) = P_w(y|z) by applying rule 3 from Thm. 1 instead of Prop. 1. Fig. 3c shows $P_{\mathbf{V}\setminus\{W\}}$, where X = {X1, X2} are not possible ancestors of Z. Therefore, X(Z) = X, the edges into X1 are cut in $P_{\overline{\mathbf{W}},\overline{\mathbf{X}(\mathbf{Z})}}$, and X and Y are m-separated therein.
Third, the proof of Theorem 1 is provided in the appendix, but it follows from the relationship between an m-connecting path in a manipulated MAG and a definite m-connecting path in the corresponding manipulated PAG. It was conjectured in [Zhang, 2008a, Footnote 15] that, for X, Y ⊂ V, if there is an m-connecting path in $M_{\overline{\mathbf{Y}},\underline{\mathbf{X}}}$, then there is a definite m-connecting path in $P_{\overline{\mathbf{Y}},\underline{\mathbf{X}}}$. In this work, we prove that conjecture to be true for the special class of manipulations required in the rules of the calculus. Finally, the next result establishes the necessity of the antecedents in Thm. 1 in order to apply the corresponding rule given every diagram in the equivalence class.
4All the proofs can be found in the full report [Jaber et al., 2022].
Theorem 2 (Atomic Completeness). The calculus in Theorem 1 is atomically complete; meaning, whenever a rule is not applicable given a PAG, the corresponding rule in Pearl's calculus is not applicable given some causal diagram in the Markov equivalence class.
For instance, considering the PAG P in Fig. 1c, we note that $(Z \not\perp\!\!\!\perp Y \mid X)$ in $P_{\overline{\mathbf{X}},\underline{\mathbf{Z}}}$, which means rule 2 is not applicable. Clearly, the diagram in Fig. 1a is in the equivalence class of P, and the corresponding rule of Pearl's calculus is not applicable due to the latent confounder between Z and Y.
4 Effect Identification: A Complete Algorithm
It is challenging to use the calculus rules in Thm. 1 to identify causal effects, since it is computationally hard to decide whether there exists (and, if so, to find) a sequence of derivations in the generalized calculus to identify an effect of interest. The goal of this section is to formulate an algorithm to identify conditional causal effects. The next definition formalizes the notion of identifiability from a PAG, generalizing the causal-diagram-specific notion introduced in [Pearl, 2000, Tian, 2004].
Definition 4 (Causal-Effect Identifiability).
Let X, Y, Z be disjoint sets of endogenous variables V. The causal effect of X on Y conditioned on Z is said to be identifiable from a PAG P if the quantity P_x(y|z) can be computed uniquely from the observational distribution P(V) given every causal diagram D in the Markov equivalence class represented by P.
The remainder of the section is organized as follows. Sec. 4.1 introduces a version of the IDP algorithm [Jaber et al., 2019a] to identify marginal causal effects. The attractiveness of this version is that it yields simpler expressions whenever the effect is identifiable, while preserving the same expressive power, i.e., completeness for marginal identification. Sec. 4.2 utilizes the new algorithm along with the calculus in Thm. 1 to formulate a complete algorithm for conditional identification.
4.1 Marginal Effect Identification
We introduce the notion of pc-component next, which generalizes the notion of c-component that is instrumental to solving identification problems in a causal diagram [Tian and Pearl, 2002].
Definition 5 (PC-Component). In a PAG, or any induced subgraph thereof, two nodes are in the same possible c-component (pc-component) if there is a path between them such that (1) all non-endpoint nodes along the path are colliders, and (2) none of the edges is visible.
Following Def. 5, e.g., W and Z in Fig. 1c are in the same pc-component due to W ◦→ X ←◦ Z. By contrast, X, Y are not in the same pc-component, since the direct edge between them is visible and Z along ⟨X, Z, Y⟩ is not a collider. Building on pc-components, we define the key notion of regions.
Definition 6 (Region $R^{\mathbf{C}}_{\mathbf{A}}$). Given a PAG P over V, and A ⊆ C ⊆ V. The region of A with respect to C, denoted $R^{\mathbf{C}}_{\mathbf{A}}$, is the union of the buckets that contain nodes in the pc-component of A in $P_{\mathbf{C}}$.
A region expands a pc-component and will prove to be useful in the identification algorithm. For example, the pc-component of X in Fig. 2 is {X, Z1, Z2}, and the region $R^{\mathbf{V}}_X$ = {X, Z1, Z2, Y}. Building further on these definitions and the new calculus, we derive a new identification criterion.
Proposition 2. Let P denote a PAG over V, let T be a union of a subset of the buckets in P, and let X ⊂ T be a bucket. Given $P_{\mathbf{v}\setminus\mathbf{t}}$ (i.e., an observational expression for Q[T]), Q[T \ X] is identifiable by the following expression if, in $P_{\mathbf{T}}$, $C_{\mathbf{X}} \cap PossDe(\mathbf{X}) \subseteq \mathbf{X}$, where $C_{\mathbf{X}}$ is the pc-component of X:
$$Q[\mathbf{T} \setminus \mathbf{X}] = \frac{P_{\mathbf{v}\setminus\mathbf{t}}}{P_{\mathbf{v}\setminus\mathbf{t}}(\mathbf{x} \mid \mathbf{t} \setminus PossDe(\mathbf{X}))} \quad (1)$$
Note that the interventions are over buckets, which may or may not be single nodes. Since there is little to no causal information inside a bucket, marginal effects of interventions over subsets of buckets are not identifiable. Also, the input distribution is possibly interventional, which licenses recursive applications of the criterion. The example following the listing of Alg. 1 illustrates the power of the new criterion.
Algorithm 1 IDP(P, x, y)
Input: PAG P and two disjoint sets X, Y ⊂ V
Output: Expression for P_x(y) or FAIL
1: Let D = PossAn(Y) in $P_{\mathbf{V}\setminus\mathbf{X}}$
2: return $\sum_{\mathbf{d}\setminus\mathbf{y}}$ IDENTIFY(D, V, P)
3: function IDENTIFY(C, T, Q = Q[T])
4:   if C = ∅ then return 1
5:   if C = T then return Q
     /* In $P_{\mathbf{T}}$, let B denote a bucket, and let $C_{\mathbf{B}}$ denote the pc-component of B */
6:   if ∃ B ⊂ T \ C such that $C_{\mathbf{B}} \cap PossDe(\mathbf{B})_{P_{\mathbf{T}}} \subseteq \mathbf{B}$ then
7:     Compute Q[T \ B] from Q ▷ via Prop. 2
8:     return IDENTIFY(C, T \ B, Q[T \ B])
9:   else if ∃ B ⊂ C such that $R_{\mathbf{B}} \neq \mathbf{C}$ then ▷ $R_{\mathbf{B}}$ is equivalent to $R^{\mathbf{C}}_{\mathbf{B}}$
10:    return IDENTIFY($R_{\mathbf{B}}$, T, Q) × IDENTIFY($R_{\mathbf{C}\setminus R_{\mathbf{B}}}$, T, Q) / IDENTIFY($R_{\mathbf{B}} \cap R_{\mathbf{C}\setminus R_{\mathbf{B}}}$, T, Q)
11:  else throw FAIL
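Since the criterion at line 6 of Alg. 1 rests on pc-components (Def. 5), here is a small runnable sketch of that graph primitive under an assumed edge encoding; the Fig. 1c fragment used to exercise it (W ◦→ X ←◦ Z, plus a visible edge X → Y) is our reading of the example given in the text above.

```python
# Sketch of Def. 5: compute the pc-component of a node.
# Assumed encoding: marks[(a, b)] = mark at b on edge a *-* b;
# `visible` holds directed edges (u, v) marked 'v' (visible) in the PAG.
from collections import deque

def pc_component(nodes, marks, visible, start):
    """Nodes connected to `start` by a path whose non-endpoints are all
    colliders and whose edges are all invisible (Def. 5)."""
    def invisible(u, v):
        return (u, v) not in visible and (v, u) not in visible

    comp, seen, queue = {start}, set(), deque()
    for n in nodes:                                    # direct neighbours
        if (start, n) in marks and invisible(start, n):
            comp.add(n)                                # no intermediate nodes
            state = (n, marks[(start, n)] == 'arrow')  # (node, edge into node?)
            seen.add(state); queue.append(state)
    while queue:
        u, into_u = queue.popleft()
        if not into_u:
            continue                                   # u cannot be a collider
        for s in nodes:
            # extend through u only if u is a collider on the extended path
            if (u, s) in marks and invisible(u, s) and marks[(s, u)] == 'arrow':
                comp.add(s)
                state = (s, marks[(u, s)] == 'arrow')
                if state not in seen:
                    seen.add(state); queue.append(state)
    return comp

marks = {}
def add_edge(a, b, ma, mb): marks[(b, a)], marks[(a, b)] = ma, mb
add_edge('W', 'X', 'circle', 'arrow')   # W o-> X
add_edge('Z', 'X', 'circle', 'arrow')   # Z o-> X
add_edge('X', 'Y', 'tail', 'arrow')     # X -> Y, visible
print(pc_component({'W','X','Y','Z'}, marks, {('X','Y')}, 'W'))  # {'W','X','Z'}
```

As in the text, Y stays outside W's pc-component because the only edge reaching it is visible, while Z joins through the collider path W ◦→ X ←◦ Z.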
Example 3. Consider the PAG P in Fig. 3a and the query P_{x1,x2,w}(y, z, a). Starting with the observational distribution P(V) as input, let T = V and X = {X1, W}. We have C_X = {X1, W, A, X2}, PossDe(X) = {X1, W, Z, Y}, and C_X ∩ PossDe(X) = X. Hence, the criterion in Prop. 2 is applicable, and we have P_x(y, z, a, x2) = P(v) / P(x1, w | a, x2) = P(a, x2) × P(y, z | a, w) after simplification. Next, we consider intervening on X2 given P_{x1,w}(y, z, a, x2). Notice that X2 is disconnected from the other nodes in $P_{\mathbf{V}\setminus\{X1,W\}}$, so it trivially satisfies the criterion in Prop. 2. Therefore, we get the expression P_{x1,x2,w}(y, z, a) = P_{x1,w}(y, z, a, x2) / P_{x1,w}(x2 | y, z, a) = P(a) × P(y, z | w, a) after simplification.
A more general criterion was introduced in [Jaber et al., 2018, Thm. 1] based on the possible children of the intervention bucket X instead of the possible descendants. However, the corresponding expression is convoluted and usually large, which could be intractable even if the effect is identifiable. Alg. 1 shows the proposed version of IDP, which builds on the new criterion (Prop. 2). Specifically, the key difference between this algorithm and the one proposed in [Jaber et al., 2019a] is in lines 6-7, where the criterion in Prop. 2 is used as opposed to that in [Jaber et al., 2018, Thm. 1]. Interestingly enough, the new criterion is "just right": it is also sufficient to obtain a complete algorithm for marginal effect identification, as shown in the next result.5
Theorem 3 (completeness). Alg. 1 is complete for identifying marginal effects P_x(y). Moreover, the calculus in Thm. 1, together with standard probability manipulations, is complete for the same task.
4.2 Conditional Effect Identification
We start by making a couple of observations, and then build on those observations to formulate an algorithm to identify conditional causal effects. The proposed algorithm leverages the calculus in Thm. 1 and the IDP algorithm in Alg. 1. Obs. 1 notes that a conditional effect P_x(y|z) can be rewritten as $P_x(y, z) / \sum_{y'} P_x(y', z)$, and hence it is identifiable if P_x(y, z) is identifiable by Alg. 1.
Observation 1 (Marginal Effect). Consider the PAG P1 in Fig. 4a, where the goal is to identify the causal effect P_b(a, c | d). We notice that the effect P_b(a, c, d) is identifiable using the IDP algorithm. Let E := P(a, d) × P(c | b, d) denote the expression for the marginal effect P_b(a, c, d), which can be obtained from IDP. Consequently, the target effect can be computed using the expression $E / \sum_{a',c'} E$.
Whenever the marginal effect P_x(y, z) is not identifiable using Alg. 1, Observations 2 and 3 propose techniques to identify the conditional effect using the calculus in Thm. 1, namely rule 2. Obs. 2 uses rule 2 of Thm. 1, when applicable, to move variables from the conditioning to the intervention set. The marginal effect of the resulting conditional query turns out to be identifiable, and consequently so is the conditional effect. We note that the work in [Shpitser and Pearl, 2006] uses the same trick to formulate an algorithm for conditional effect identification given a causal diagram.
5A more detailed comparison of the two algorithms along with illustrative examples is provided in the full report [Jaber et al., 2022].
Algorithm 2 CIDP(P, x, y, z)
Input: PAG P and three disjoint sets X, Y, Z ⊂ V
Output: Expression for P_x(y|z) or FAIL
1: D ← PossAn(Y ∪ Z) in $P_{\mathbf{V}\setminus\mathbf{X}}$
   /* Let B1, ..., Bm denote the buckets in P */
2: while ∃ B_i s.t. B_i ∩ D ≠ ∅ and B_i ⊄ D do
3:   X′ ← B_i ∩ X
4:   if $(X' \perp\!\!\!\perp \mathbf{Y} \mid (\mathbf{X}\setminus X') \cup \mathbf{Z})$ in $P_{\overline{\mathbf{X}\setminus X'},\underline{X'}}$ then
5:     x ← x \ x′; z ← z ∪ x′ ▷ Apply rule 2 of Thm. 1
6:     D ← PossAn(Y ∪ Z) in $P_{\mathbf{V}\setminus\mathbf{X}}$
7:   else throw FAIL
   /* Let Z1, ..., Zm partition Z such that Z_i := Z ∩ B_i */
8: while ∃ Z_i s.t. $(Z_i \perp\!\!\!\perp \mathbf{Y} \mid \mathbf{X} \cup (\mathbf{Z}\setminus Z_i))$ in $P_{\overline{\mathbf{X}},\underline{Z_i}}$ do
9:   x ← x ∪ z_i; z ← z \ z_i ▷ Apply rule 2 of Thm. 1
10: E ← IDP(P, x, y ∪ z)
11: return $E / \sum_{y'} E$
Observation 2 (Flip Observations to Interventions). Consider the PAG P1 in Fig. 4a and the causal query P_a(c | b, d). Unlike the case in Obs. 1, the marginal effect P_a(b, c, d) is not identifiable by the IDP algorithm. Using rule 2 of Thm. 1, we have $(B \perp\!\!\!\perp C \mid D)$ in $P_{\overline{A},\underline{B}}$, and we move B from conditioning to intervention, i.e., P_a(c | b, d) = P_{a,b}(c | d). The marginal effect P_{a,b}(c, d) is identifiable by IDP, and we get the expression E := P(d) × P(c | b, d). Hence, we have $P_a(c|b,d) = P_{a,b}(c|d) = E / \sum_{c'} E$.
Finally, Obs. 3 comes as a surprise, since it requires flipping interventions to observations, contrary to Obs. 2. A key graphical structure in the PAG that requires such a treatment is the presence of a proper possibly directed path from X to Y ∪ Z that starts with a circle edge.
Observation 3 (Flip Interventions to Observations). We revisit the query in Example 1. First, the marginal effect P_x(y, z1, z2) is not identifiable by the IDP algorithm. Also, we cannot use rule 2 in Thm. 1 to flip Z1 or Z2 into interventions, since they are both adjacent to Y with a circle edge (◦−◦). However, we can use rule 2 to flip X to the conditioning set, since there are no definite m-connecting paths between X and Y given {Z1, Z2} in the PAG. Hence, we obtain P_x(y|z1, z2) = P(y|z1, z2, x). Alternatively, consider the PAG P2 in Fig. 4b with the causal query P_x(y|z). We cannot use rule 2 to flip X to an observation, since ⟨X, W, Y⟩ is active given Z in $(P2)_{\underline{X}}$. In fact, the causal diagram G in Fig. 4c is in the equivalence class of P2 and is such that P_x(y|z) is provably not identifiable [Shpitser and Pearl, 2006, Corol. 2]. Hence, the effect is not identifiable given P2, according to Def. 4.
Putting these observations together, we formulate the CIDP algorithm (Alg. 2) for identifying conditional causal effects given a PAG. The algorithm is divided into three phases. In Phase I (lines 1-7), Obs. 3 is used to check for proper possibly directed paths from X to Y ∪ Z that start with a circle edge. This is checked algorithmically by computing D = PossAn(Y ∪ Z) in $P_{\mathbf{V}\setminus\mathbf{X}}$, iteratively, and checking whether some bucket B_i in P intersects with, but is not a subset of, D. If such a bucket exists, CIDP flips B_i ∩ X from interventions to observations using rule 2, when applicable; otherwise the algorithm throws FAIL and the effect is not computable. In Phase II (lines 8-9), Obs. 2 is used to flip the subset of observations in each bucket into interventions by applying rule 2 of Thm. 1, whenever applicable. Finally, in Phase III (line 10), the marginal effect P_x(y ∪ z) is computed from the modified sets X and Z using the IDP algorithm in Alg. 1. If the call is successful, an expression for the conditional effect is returned at line 11. The example below illustrates CIDP in action. An empirical evaluation of CIDP is provided in the full report [Jaber et al., 2022].6
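Before the worked example, here is a toy numeric illustration of the final normalization step (line 11 of Alg. 2, and likewise Obs. 1): once the joint effect P_x(y, z) has been identified, the conditional effect follows by renormalizing over Y. The table below is made-up placeholder data, not output of the actual algorithm.

```python
# Toy illustration of E / sum_{y'} E at line 11 of Alg. 2.
import numpy as np

# Hypothetical identified joint effect P_x(y, z) for binary Y (rows), Z (cols).
P_x_yz = np.array([[0.10, 0.35],
                   [0.30, 0.25]])

P_x_y_given_z = P_x_yz / P_x_yz.sum(axis=0, keepdims=True)
print(P_x_y_given_z)              # conditional effect P_x(y | z)
print(P_x_y_given_z.sum(axis=0))  # [1. 1.] -- properly normalized over y
```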
Example 4. Consider the PAG P in Fig. 5a and the conditional query P_x(y|z) := P_{a,f}(y|b, e). In Phase I, we have D = PossAn(Y ∪ Z) in $P_{\mathbf{V}\setminus\mathbf{X}}$ = {Y, B, E, C, G}, and the bucket {A, B} satisfies the conditions at line 2, since A ∉ D. In $P_{\overline{F},\underline{A}}$ (as shown in Fig. 5b), X′ = {A} is m-separated from Y given {B, E, F}, which satisfies the if-condition at line 4. Hence, we flip A to the conditioning set Z via rule 2 of Thm. 1 to obtain the updated query P_x(y|z) = P_f(y|a, b, e). In Phase II (lines 8-9), let Z1 = {E} and Z2 = {A, B}. In $P_{\overline{F},\underline{E}}$ (see Fig. 5c), we have E m-separated from Y given {F} ∪ Z2, which satisfies the if-condition at line 8. Hence, we flip E to the intervention set using rule 2 of Thm. 1, and we get the updated query P_x(y|z) = P_{e,f}(y|a, b). Next, we check whether Z = Z2 is m-separated from Y given X in $P_{\overline{\mathbf{X}},\underline{\mathbf{Z}}}$, which does not hold due to a bidirected edge between B and Y. Hence, rule 2 is not applicable, and Z remains in the conditioning set. Finally, we call IDP to compute the marginal effect P_{e,f}(y, a, b), if possible. The effect is identifiable with the simplified expression P(y|b, e, f) × P(a, b). Hence,
$$P_{a,f}(y|b,e) = P_{e,f}(y|a,b) = \frac{P(y|b,e,f) \times P(a,b)}{\sum_{y'} P(y'|b,e,f) \times P(a,b)} = P(y|b,e,f).$$
The soundness of Alg. 2 follows from that of Alg. 1 and Thm. 1. Next, we turn to its completeness. According to Def. 4, whenever CIDP fails, we need to establish one of two conditions for completeness: either there exist two causal diagrams in the equivalence class with different identifications, or the effect is not identifiable in some causal diagram according to the criterion in [Shpitser and Pearl, 2006, Corol. 2]. Thm. 4 establishes completeness by proving that the latter is always the case. This result, along with the completeness of the calculus rules for the identification of marginal effects (see Thm. 3), implies that the rules are complete for conditional effects as well.
Theorem 4 (completeness). Alg. 2 is complete for identifying conditional effects P_x(y|z). Also, the calculus in Thm. 1, together with standard probability manipulations, is complete for the same task.
5 Discussion
In this work, an oracle for conditional independences is assumed to be available, which leads to the true PAG. Assuming the presence of an oracle for conditional independence encapsulates the challenge of dealing with finite data and of testing for conditional independence thereof. Another challenge lies in the computational complexity of learning the PAG in the first place [Colombo et al., 2012], and of estimating the expression when the effect is identifiable [Pearl and Robins, 1995, Jung et al., 2021]. In light of this, it is important to make the distinction between the task of causal effect identification and that of causal effect estimation. This set of results is concerned with the first task (causal identification), which asks whether a target conditional effect is uniquely computable from P(V), the observational distribution, and given a PAG learnable from P(V). The objective of CIDP in Algorithm 2 is to decide whether the effect is identifiable and to provide an expression for it when the answer is yes, while being agnostic as to whether P(V) can be accurately estimated from the available samples. As for the second task, the estimation of the conditional causal effect using the identification formula provided by Algorithm 2 poses several challenges under finite data. The number of samples sufficient to identify a given effect would depend on the size of the expression, among other factors, and naive methods for estimation exacerbate this problem. Recent work such as [Jung et al., 2021] proposes a double machine learning estimator for marginal effects that are identifiable given a PAG.
6Code is available at https://github.com/CausalAILab/PAGId
An interesting direction of work is to generalize this approach to conditional causal effects that are identifiable by CIDP.
6 Conclusions
In this work, we investigate the problem of identifying conditional interventional distributions given a Markov equivalence class of causal diagrams represented by a PAG. We introduce a new generalization of the do-calculus for the identification of interventional distributions in PAGs (Thm. 1) and show it to be atomically complete (Thm. 2). Building on these results, we develop the CIDP algorithm (Alg. 2), which is both sound and complete, i.e., it identifies any conditional effect of the form P_x(y|z) that is identifiable (Thm. 4). Finally, we show that the new calculus rules, along with standard probability manipulations, are complete for the same task. These results close the problem of effect identification under Markov equivalence, in that they completely delineate the theoretical boundaries of what is, in principle, computable from a certain data collection. We expect the newly introduced machinery to help data scientists identify novel effects in real-world settings.
Acknowledgments and Disclosure of Funding
Bareinboim and Ribeiro's research was supported in part by the NSF, ONR, AFOSR, DoE, Amazon, JP Morgan, and The Alfred P. Sloan Foundation. Zhang's research was supported by the RGC of Hong Kong under GRF13602720.
1. What is the focus and contribution of the paper regarding conditional causal effects in partial ancestral graphs? 2. What are the strengths of the proposed algorithm, particularly in its efficiency and potential applications in causal discovery pipelines? 3. Do you have any concerns or questions regarding the paper's scope and suitability for a conference submission? 4. How does the paper address the issue of counterfactual inference in PAGs, and are there any potential connections to previous research on the causal hierarchy? 5. Are there any empirical results or supplementary materials available to evaluate the performance and usability of the CIDP algorithm? 6. Can methods such as inverse propensity weights or double machine learning be applied to estimate conditional causal effects in PAGs, and how do they relate to the CIDP algorithm? 7. What are the limitations of the proposed method, and how does it compare to other approaches in terms of computational efficiency and applicability?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The manuscript provides a sound and complete calculus and algorithm for identifying conditional causal effects of the form p(y | do(x), z) in partial ancestral graphs (PAGs), i.e., a Markov equivalence class of maximal ancestral graphs (MAGs). This generalizes prior work on causal effect identification in different kinds of graphs and equivalence classes thereof.
Strengths And Weaknesses
Completeness results for conditional effect identification in PAGs are notably lacking, despite strong progress in this area by Zhang (2007) and Jaber et al. (2019a, 2019b). Providing such a result – not just for an abstract set of rules but in an efficient algorithm – constitutes a major contribution. Since the FCI algorithm outputs a PAG, CIDP could plausibly be incorporated into a causal discovery pipeline for effect estimation over a learned Markov equivalence class. This could be of great use to practitioners in a variety of fields. The manuscript is well-researched and well-written, quite clear and easy to follow despite considerable technicalities. I commend the authors for their strong and original work. If anything, I wonder if a conference submission is really the right venue for this, given how many follow-up questions I have! I fully appreciate that space constraints prevent the authors from tackling all the problems that arise in causal reasoning with PAGs, but I strongly encourage them to consider a journal-length follow-up article, along the lines of Zhang (2007) or Shpitser & Pearl (2008). Short of that, some revisions to the present manuscript at least addressing these concerns would be welcome:
-The do-calculus is famously complete for effect identification across all three levels of the causal hierarchy (Shpitser & Pearl, 2008). How, if at all, could the methods proposed here be leveraged for counterfactual inference in PAGs? While a complete answer may lie beyond the scope of this manuscript, some discussion of the problem would be interesting, if only as a direction for future work.
-There is some discussion in the manuscript of an R implementation. Mentioning this without providing any empirical results or supplemental code is a bit frustrating, as it is unclear how reviewers are meant to evaluate the performance or usability of the CIDP algorithm. I would either include this material as part of the submission or cut the reference altogether.
-There has been considerable work in recent years on CATE estimation. Could methods like inverse propensity weights or double machine learning be used to estimate conditional causal effects in PAGs, at least under certain conditions? Again, perhaps beyond scope to go too deep into this, but a natural follow-up that will likely occur to many readers.
Questions
See above.
Limitations
I am not convinced that the brief reference to an independence oracle constitutes a thorough discussion of the method's limitations. Some elaboration would be welcome here, as well as further comments on if/how the method can be used for counterfactual effect identification, computational efficiency, etc.
NIPS
Title
Network-to-Network Regularization: Enforcing Occam's Razor to Improve Generalization
Abstract
What makes a classifier have the ability to generalize? There have been many important attempts to address this question, but a clear answer is still elusive. Proponents of complexity theory find that the complexity of the classifier's function space is key to deciding generalization, whereas other recent work reveals that classifiers which extract invariant feature representations are likely to generalize better. Recent theoretical and empirical studies, however, have shown that even within a classifier's function space, there can be significant differences in the ability to generalize. Specifically, empirical studies have shown that among functions which have a good training data fit, functions with lower Kolmogorov complexity (KC) are likely to generalize better, while the opposite is true for functions of higher KC. Motivated by these findings, we propose, in this work, a novel measure of complexity called Kolmogorov Growth (KG), which we use to derive new generalization error bounds that depend only on the final choice of the classification function. Guided by the bounds, we propose a novel way of regularizing neural networks by constraining the network trajectory to remain in the low-KG zone during training. Minimizing KG while learning is akin to applying Occam's razor to neural networks. The proposed approach, called network-to-network regularization, leads to clear improvements in the generalization ability of classifiers. We verify this for three popular image datasets (MNIST, CIFAR-10, CIFAR-100) across varying training data sizes. Empirical studies find that conventional training of neural networks, unlike network-to-network regularization, leads to networks of high KG and lower test accuracies. Furthermore, we present the benefits of N2N regularization in the scenario where the training data labels are noisy. Using N2N regularization, we achieve competitive performance on the MNIST, CIFAR-10 and CIFAR-100 datasets with corrupted training labels, significantly improving network performance compared to standard cross-entropy baselines in most cases. These findings illustrate the many benefits obtained from imposing a function complexity prior like Kolmogorov Growth during the training process.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
1 Introduction and Motivation
On the surface, the problem of learning to generalize well over unseen data seems an impossible task. Classification is inherently a problem in function estimation, and the finite information that the training data samples impart seems hardly enough to correctly guess the behaviour of the function over the unseen data samples outside the training set. However, the assumption of a structured ground truth label function (the unknown function which generates the ground truth label for any datapoint) leads to a more optimistic outlook on the problem. Without the assumption of structuredness in the ground truth, learning is not guaranteed, as was observed in the no free lunch theorem [1]. As shown in that work, there is no universal learning algorithm which can generalize well for all possible choices of ground truth label functions. Since one does not have any control over the true nature of the ground truth function in any classification problem, the other important parameter that decides the ability of a classifier to generalize is the complexity of its function space itself.
Over the past few decades, there have been multiple attempts at bounding the generalization error on the basis of various complexity measures [2, 3, 4] of the classifier's function space F. The overall results of these theoretical developments indicate that generalization is primarily governed by metrics that are proportional to the size of F. Examples of complexity metrics in this regard include Rademacher complexity, VC-dimension, and local forms of Rademacher complexity (for tighter bounds). However, as observed in [5], these metrics lead to relatively loose bounds on the generalization performance of deep neural networks. This is because deep neural networks have high-complexity spaces with the flexibility to learn any label assignment on any set of training datapoints. This was perhaps most clearly observed in [6], where even after random label assignments on the training data samples, the networks were still able to fit the training data labels with no errors. Thus, other methods have often been explored to establish the relationship between complexity measures and generalization performance [7, 8], such as causal relations. These studies lead to a natural question: other than the metrics which relate to the size of the function space of a classifier, what other factors contribute towards its ability to generalize? State-of-the-art deep neural networks, as observed above, yield very high-complexity spaces, which does not explain their remarkable ability to generalize well in complex high-dimensional supervised classification problems, such as in vision.
We note that metrics such as Rademacher complexity or VC dimension, which relate to whole function spaces F (or subspaces within F which fit the training data well), essentially assign the same generalization gap to all functions within F (or the subspace within F). However, there is a longstanding understanding that, among all functions which fit the training data well, simpler functions are usually expected to generalize better. This is an example of the Occam's razor principle, which states that among all hypotheses that explain a phenomenon, the simplest hypothesis is preferred [9]. In the context of neural networks, even for very high-complexity deep neural network function spaces, there will always be some weight configurations of a deep neural network which yield simpler functions. For example, all network weight configurations yielding input-output functions which can still be efficiently approximated by smaller, shallower networks could be considered simpler. An extreme example is assigning all weight values within a deep neural network to zero: the function that results from this weight configuration is essentially the constant function f(X) = 0. This observation points to the fact that even within the function space of a deep CNN, not all functions are equally complex, as some of them can still be approximated by shallower networks. To that end, a primary objective of this paper is to probe complexity measures which enable us to assign a level of complexity to individual functions within a function space. Subsequently, our objective is to steer the learning process towards network configurations which yield less complex input-output functions. An important work which explored similar directions is [10], where metrics from algorithmic complexity were developed to bias the learning process towards simpler functions.
In this regard, measures of descriptional complexity [5] of functions have been proposed over the years, which quantify the level of complexity of an individual function based on its shortest description. Although fundamental measures of descriptional complexity such as Kolmogorov complexity [11] or Solomonoff probability [12] are uncomputable, computable approximations to them have been developed [13]. In spite of this early work, there is a lack of concrete theoretical or empirical work continuing this line of investigation after [10]. Mainly, there is a lack of work that explores the relevance of such descriptional complexity measures to the generalization ability of functions. That changed recently, when interest in the descriptional complexity of a function was renewed as it was found that the outputs of random maps tend to be biased towards simpler functions [14]. This result indirectly hinted that, for classification tasks, the ground truth labelling function has a simple description with high probability. This was further investigated in [5] in the context of deep neural networks, where it was found that the Lempel-Ziv (LZ) complexity (a form of descriptional complexity) of most neural networks with random choices of weights is low (shorter description), in accordance with the result in [14]. Empirically, it was found that network weight configurations which lead to functions of smaller Lempel-Ziv complexity show better generalization performance. These results and observations show that descriptional complexity measures may help us understand empirically the generalization behaviour of deep neural networks. In this paper, we advance this line of investigation by proposing a new theoretical framework for relating descriptional complexity measures to generalization performance. Subsequently, we also provide a computable method which improves the generalization performance of neural networks by lowering their descriptional complexity.
2 Contributions
This paper makes the following contributions.
1. First, we undertake a brief theoretical analysis exploring the relevance of descriptional complexity measures of functions to their expected generalization error. We propose a novel measure of complexity called Kolmogorov Growth (KG). Error bounds are estimated which portray the dependence of the generalization error of a single function f on KG(f). The bounds show that functions of higher KG(f) will likely lead to a higher generalization gap. This result formalizes the Occam's razor principle for classifiers, and also concurs with the empirical findings in [5], where functions of higher LZ-complexity showed worse generalization performance.
2. Like Kolmogorov complexity, KG is also uncomputable. Therefore, we propose computable approximations of KG for neural networks, based on the concept of teacher-student approximation (similar to knowledge distillation [15]). Specifically, we show that neural network functions f which can be approximated well by smaller networks will have smaller empirical KG with high probability.
3. Next, using this idea, we develop a novel method for regularizing neural networks, called network-to-network (N2N) regularization. N2N regularization forces trained network configurations to be of low KG. We find that doing so not only improves the generalization performance across a range of training data sizes, but also helps in the case of label noise. For instance, in MNIST, we see test error decrease by 94%, reaching results competitive with benchmark methods.
4. Finally, we study the evolution of empirical KG as networks are trained and observe that, in the standard classification scenario, networks show a sharp decrease in their KG as training progresses. However, this trend completely reverses when the training data labels are noise-corrupted, and N2N regularization is able to stem the undesirable increase of KG during training.
3 Kolmogorov Growth: Relevance to Generalization
Assume that our training data consists of m d-dimensional i.i.d. samples and their labels S = [(z1, y1), (z2, y2), ..., (zm, ym)] drawn from some distribution P^m. We propose two growth measures for a single function f, namely Kolmogorov Growth KG_m(f) and empirical Kolmogorov Growth $\hat{KG}_S(f)$. Note that, when appropriate, we may drop the subscript m and refer to KG_m(f) as KG(f). These measures are primarily motivated by the well-known growth function in statistics. The growth function Π_m(F) is defined as
$$\Pi_m(\mathcal{F}) = \max_{z_1,\ldots,z_m \in \mathbb{R}^d} |\{(f(z_1), f(z_2), \ldots, f(z_m)) : f \in \mathcal{F}\}| \quad (1)$$
Note that the growth function is defined for a space of functions and captures the maximum number of label assignments a function space F can generate on any m points in R^d. To define Kolmogorov Growth measures for a single function f, we first need to generate a function space from f, based on a description D_f of f. This is done by noting all the parameters involved in the description D_f (other than the co-ordinates in R^d), and then assigning all possible values to those parameters to generate a space of functions F(D_f) from the description D_f. Next, we note that for a function f, we consider multiple descriptions $(D^1_f, D^2_f, D^3_f, \ldots)$, all of which faithfully generate the output of f over all points in R^d, with the additional constraint that each $\mathcal{F}(D^i_f)$ should fit any m datapoints sampled from P^m. Given these observations and definitions, we can then define the Kolmogorov Growth of a function f as follows:
$$KG_m(f) = \min_i \frac{\log \Pi_m(\mathcal{F}(D^i_f))}{m}. \quad (2)$$
Note that KG_m(f) requires knowledge of the distribution P^m of the training samples. For an instance of the datapoints in S, we define the empirical Kolmogorov Growth via the empirical growth function $\hat{\Pi}_S(\mathcal{F})$, which only computes the number of label assignments a function space can generate over the given m training data samples in S. Thus, $\hat{\Pi}_S(\mathcal{F}) = |\{(f(z_1), f(z_2), \ldots, f(z_m)) : f \in \mathcal{F}\}|$. This leads to the following definition of the empirical Kolmogorov Growth of a function f:
$$\hat{KG}_S(f) = \min_i \frac{\log \hat{\Pi}_S(\mathcal{F}(D^i_f))}{m}. \quad (3)$$
Remark: Kolmogorov Growth is indirectly motivated by Kolmogorov complexity itself. However, unlike Kolmogorov complexity, which is the length of the shortest program that generates f, Kolmogorov Growth is concerned with the smallest function space that f can belong to that can still fit the data well. Functions which have shorter descriptions usually require a smaller number of variables and are expected to have lower Kolmogorov Growth. Moreover, it turns out that Kolmogorov Growth allows us to directly comment on the error bounds for the function f (see Section 3.1). We believe that a possible direction of future work would be a deeper study of the relationship between Kolmogorov Growth and Kolmogorov complexity itself.
Remark: Note that, in the binary classification scenario, for a completely unstructured function f (i.e., f outputs random labels at every point X ∈ R^d), one expects KG(f) to be near its maximum value (i.e., log 2). A structured f would generate shorter descriptions with fewer parameters and therefore lead to smaller KG(f).
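To make Eq. (3) concrete, the following toy computation evaluates the empirical growth function, and the resulting log Π̂_S/m term, for one illustrative description space: one-parameter threshold classifiers on the line. This is an assumed stand-in for a single F(D^i_f), not a procedure from the paper; the true K̂G_S would take the minimum over all valid descriptions.

```python
# Toy empirical growth for 1-D threshold functions f_t(x) = sign(x - t).
import numpy as np

rng = np.random.default_rng(0)
m = 20
samples = np.sort(rng.uniform(-1.0, 1.0, size=m))

# One threshold per "gap" between samples (plus the two extremes) suffices:
# distinct thresholds inside the same gap induce the same labeling.
thresholds = np.concatenate(([-2.0], (samples[:-1] + samples[1:]) / 2, [2.0]))
labelings = {tuple(np.where(samples > t, 1, -1)) for t in thresholds}

growth = len(labelings)            # Pi_hat_S(F) = m + 1 for distinct samples
kg_term = np.log(growth) / m       # log Pi_hat / m: one candidate in Eq. (3)'s min
print(growth, round(kg_term, 4))   # 21, ~0.152 -- far below log(2) ~ 0.693
```

The gap to log 2 quantifies how much this one-parameter description space restricts the possible labelings relative to an unstructured labeler.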
3.1 Bounding Generalization Error using Kolmogorov Growth
Here we present error bounds that depend on KG(f), where f is the classification function, given the m training data samples and their labels in S. As before, the data samples and labels in S are drawn from some underlying distribution P^m. We now define a set of error functions for computing the training loss (0-1 loss) on S, denoted $\hat{err}_S(f)$, and the overall generalization error with respect to the distribution P, denoted $err_P(f)$. We define them as follows:
$$\hat{err}_S(f) = \sum_{i=1}^{m} \frac{1 - f(z_i)\,y_i}{2m} \quad (4)$$
$$err_P(f) = \mathbb{E}_{z,y \sim P}\left[\frac{1 - f(z)\,y}{2}\right]. \quad (5)$$
These definitions hold for any function f. Note that the error functions depend both on the function f and on the distribution P. With this, we have the following results. The proofs of all results are provided in the supplementary material.
Theorem 1. For 0 < δ < 1, with probability p ≥ 1 − δ over the draw of S, we have
$$err_P(f) \leq \hat{err}_S(f) + \sqrt{2\,KG_m(f)} + \sqrt{\frac{\log(1/\delta)}{2m}}. \quad (6)$$
The following corollary of the above theorem gives bounds that depend on the empirical Kolmogorov Growth $\hat{KG}_S(f)$.
Corollary 1.1. For 0 < δ < 1, with probability p ≥ 1 − δ over the draw of S, we have
$$err_P(f) \leq \hat{err}_S(f) + \sqrt{2\,\hat{KG}_S(f)} + 4\sqrt{\frac{2\log(4/\delta)}{m}}. \quad (7)$$
Remark: Theorem 1 and its corollary essentially state that for functions f of lower Kolmogorov Growth, we should expect a smaller generalization gap. In what follows, we outline ways to approximate the empirical Kolmogorov Growth $\hat{KG}_S(f)$.
4 Teacher-Student Approximation Bounds for Kolmogorov Growth
The fundamental idea for approximating the empirical Kolmogorov Growth of a function f which belongs to the function space F is to use a student classifier with a function space $\mathcal{F}^1_{small}$ (with a much smaller parametric count and complexity) to approximate the given function f (the teacher). We apply this idea recursively. That is, if the function $f^1_{small} \in \mathcal{F}^1_{small}$ approximates f best, we recursively estimate the empirical Kolmogorov Growth of $f^1_{small}$ by approximating it via another classifier with a smaller function space $\mathcal{F}^2_{small}$ (thus, $\Pi_m(\mathcal{F}^2_{small}) < \Pi_m(\mathcal{F}^1_{small})$), and so on. We use this recursive procedure to then obtain a final estimate for $\hat{KG}_S(f)$. We conjecture that, like Kolmogorov complexity itself, the true $\hat{KG}_S(f)$ is uncomputable, so the estimate that results from this recursive approximation process is essentially an upper bound on the true $\hat{KG}_S(f)$. The following theorem establishes an upper bound on the empirical KG approximation from a single smaller student classifier.
Theorem 2. Given the function f ∈ F : R^d → R^2, which outputs class logits for binary classification, we construct a function space $\mathcal{F}^1_{small}$ such that $\Pi_m(\mathcal{F}^1_{small}) < \Pi_m(\mathcal{F})$ and, for all $g \in \mathcal{F}^1_{small}$, there exists a description $D_g$ such that $\hat{\Pi}_S(\mathcal{F}(D_g)) \leq \hat{\Pi}_S(\mathcal{F}^1_{small})$. We approximate f via another function $f^1_{small} \in \mathcal{F}^1_{small} : \mathbb{R}^d \to \mathbb{R}^2$, and let $\epsilon_{\max}$ be such that
$$\frac{\sqrt{2}\,\epsilon_{\max}}{2} = \max_{X\in\mathbb{R}^d} \|f^1_{small}(X) - f(X)\|_2. \quad (8)$$
Denote the output probabilities generated from the corresponding logit outputs of f(X) using the softmax operator (temperature T = 1) as $P_0(f(X))$ (label 1 output) and $P_1(f(X))$ (label 2 output). Let 0 ≤ δ ≤ 1 be such that
$$\Pr\left(\left|\log\frac{P_0(f(X))}{P_1(f(X))}\right| \leq \epsilon_{\max}\right) \leq \delta \quad (9)$$
when X is drawn from S. Then we have
$$\hat{KG}_S(f) \leq \delta\,\log 2 + \frac{\log \hat{\Pi}_S(\mathcal{F}^1_{small})}{m}, \quad (10)$$
where m is the number of samples in S.
Remark: Theorem 2 demonstrates a way to bound the true empirical Kolmogorov Growth of the function f using a single student classifier function $f^1_{small} \in \mathcal{F}^1_{small}$. Note that, unlike in Theorem 1, there are no direct constraints on the expressivity of $\mathcal{F}^1_{small}$, but rather a joint constraint on $\mathcal{F}^1_{small}$ and δ combined. If $\mathcal{F}^1_{small}$ cannot fit all m points sampled from P^m, then the approximation error in δ will likely be higher, which will add to the estimate of $\hat{KG}_S(f)$. The proof of Theorem 2 and its extension to the recursive approximation case are given in the supplementary material.
5 Network-to-Network (N2N) Regularization
We denote the base network to be trained as N^base, and the function modelled by the network weights w^base as N^base(w^base, X), where X ∈ R^d is the input. Here, N^base(w^base, X) represents the output logits of the network N^base when presented with the input X. Thus, we have N^base(w^base, X) ∈ R^c, where c is the number of classes. For what follows, let us denote the available training data and their labels by S = {X_i, y_i}, i = 1, ..., m. The approach that follows is directly motivated by the result in Theorem 2. The main objective is to ensure that the KG of the network stays low during learning, using the teacher-student approximation error in Theorem 2. This is primarily achieved by ensuring that, during training, the base network function N^base(w^base, X) is always near some function within the smaller network's function space. Next, we outline the details of the proposed multi-level network-to-network (N2N) regularization approach.
5.1 Multi-Level N2N: Details
In multi-level N2N regularization, we have multiple smaller networks $n^{small}_1, n^{small}_2, \ldots, n^{small}_K$ of decreasing complexity, such that $\Pi_m(\mathcal{F}^1_{small}) > \Pi_m(\mathcal{F}^2_{small}) > \cdots > \Pi_m(\mathcal{F}^K_{small})$.
Algorithm 1 N2N Regularization (Multi-Level)
Input: Training data {X_i, y_i}, i = 1, ..., m; base network N^base and its weights w^base; K networks $n^{small}_1, \ldots, n^{small}_K$ with weights w_1, ..., w_K (s.t. $|n^{small}_1| > |n^{small}_2| > \ldots > |n^{small}_K|$ in size); number of epochs J; hyperparameters λ_0, λ_1, ..., λ_{K−1}, α_0, ..., α_K, e_base, e_small.
1: for j = 1, 2, ..., J do
2:   for iter = 1, 2, ..., e_base do
3:     $L_1 = \sum_{i=1}^m \left( L_{CE}(N^{base}(X_i), y_i) + \lambda_0 \|N^{base}(X_i) - n^{small}_1(X_i)\|^2 \right)$
4:     Weight update: $w^{base} \leftarrow w^{base} - \frac{\alpha_0}{m}\frac{\partial L_1}{\partial w^{base}}$
5:   for k = 1, 2, ..., K do
6:     for iter = 1, 2, ..., e_small do
7:       if k = 1 then
8:         $L_k = \sum_i \|N^{base}(X_i) - n^{small}_1(X_i)\|^2 + \lambda_1 \|n^{small}_2(X_i) - n^{small}_1(X_i)\|^2$
9:       else if k = K then
10:        $L_k = \sum_i \|n^{small}_k(X_i) - n^{small}_{k-1}(X_i)\|^2$
11:      else
12:        $L_k = \sum_i \|n^{small}_k(X_i) - n^{small}_{k-1}(X_i)\|^2 + \lambda_k \|n^{small}_k(X_i) - n^{small}_{k+1}(X_i)\|^2$
13:      Weight update: $w_k \leftarrow w_k - \frac{\alpha_k}{m}\frac{\partial L_k}{\partial w_k}$
The corresponding functions resulting from the network weights w_1, w_2, ..., w_K are denoted as $n^{small}_1(w_1, X), n^{small}_2(w_2, X), \ldots, n^{small}_K(w_K, X)$. Next, we outline the loss functions for all networks. For the larger, to-be-trained base network N^base, the loss objective is to minimize the cross-entropy loss on S while being close to $n^{small}_1(w_1, X)$ for some choice of weights w_1 (L_1 in Algorithm 1). For the smaller network $n^{small}_1$, the objective is two-fold: find the weight configuration w_1 that approximates the larger network function N^base, while also being close to $n^{small}_2(w_2, X)$ for some choice of w_2 (L_2 in Algorithm 1). Thus, we force the smaller network $n^{small}_1$ to be close to the base network and an even lower-complexity network $n^{small}_2$ at the same time. Similarly, we can define $L_3, L_4, \ldots, L_{K-1}$; the exception is $L_K$, which applies to the smallest network $n^{small}_K$ and whose objective is simply to keep $n^{small}_K(w_K, X)$ close to $n^{small}_{K-1}(w_{K-1}, X)$. Finally, we optimize the loss functions in an alternating manner in the order $L_1, L_2, \ldots, L_K$. Details are given in Algorithm 1. The choice of mean-squared-error-based loss functions here directly follows from the result in Theorem 2. Note that Algorithm 1 updates with the entire batch of training data points at each iteration, and can be extended to the case of minibatch stochastic gradient descent (SGD).
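For concreteness, here is a minimal single-level PyTorch sketch of the alternating updates in Algorithm 1 (the K = 1 case, minibatch form). The networks, optimizers, loader, and the hyperparameter values are placeholders, not the authors' released configuration.

```python
# Minimal single-level N2N sketch (K = 1), alternating L1 and LK updates.
import torch
import torch.nn.functional as F

def n2n_epoch(base, small, loader, opt_base, opt_small,
              lam0=0.5, e_base=3, e_small=1):
    for _ in range(e_base):                 # L1: fit labels, stay near student
        for x, y in loader:
            with torch.no_grad():
                target = small(x)           # student output is a constant here
            logits = base(x)
            loss = F.cross_entropy(logits, y) + \
                   lam0 * (logits - target).pow(2).sum(dim=1).mean()
            opt_base.zero_grad(); loss.backward(); opt_base.step()
    for _ in range(e_small):                # LK: pull student toward the base
        for x, _ in loader:
            with torch.no_grad():
                target = base(x)            # base output is a constant here
            loss = (small(x) - target).pow(2).sum(dim=1).mean()
            opt_small.zero_grad(); loss.backward(); opt_small.step()
```

The torch.no_grad() blocks mirror the alternating optimization in Algorithm 1: each loss updates only one network's weights, with the other network's outputs treated as constants during that step.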
5.2 Other Relevant Approaches in Literature
To the best of our knowledge, our proposed approach is novel, and we did not find much directly relevant work. Conceptually, we found the reverse knowledge distillation method [16] to be the most relevant to our proposed approach; it regularizes large teacher networks using smaller, trained versions of student networks of less depth. The output logits of the trained student networks are then essentially re-used for smoothing the outputs of the larger neural network. Here, we do not directly use trained student networks to supervise the teacher, but instead simply ensure that during training the teacher network is within reach of some student network (which may change throughout the training process), which is a more relaxed constraint. Also, the mean-squared-error-based approximation error between the student and teacher networks is motivated by Theorem 2, and differs from the KL-divergence-based measures used in knowledge distillation. Another point of difference is that N2N uses a multi-level approach for a recursive way of regularizing multiple networks of different levels of complexity.
6 Experiments
We test N2N on three datasets: MNIST [17], CIFAR-10 [18] and CIFAR-100 [19]. We also demonstrate that N2N regularization improves performance in the presence of label noise. Lastly, we analyse the Kolmogorov Growth of networks during training. Experiments were carried out on an RTX 2060, a Tesla V100, or an A100 GPU. As mentioned in Algorithm 1, an epoch refers to a total of e_base iterations of training the base network and e_small iterations of training the smaller networks on the whole dataset. Code will be made available at https://github.com/rghosh92/N2N.
6.1 Supervised Classification: MNIST, CIFAR-10, CIFAR-100
[Table caption: $\hat{KG}_S(f)$ of the trained networks.]
The primary objective of the experiments presented here is to see whether N2N regularization can drive the training process towards network configurations that generalize better. For each dataset, results are reported for various choices of training data size. Furthermore, to show that our regularization approach complements other commonly used regularization approaches, we show results when our approach is combined with Dropout and L2-norm regularization. For the ResNet networks (CIFAR-10/100), we combine N2N with L2-norm regularization. All networks were trained for a total of 200 iterations, and in each case the results reported are averaged over five networks. For all experiments we set e_base = 3, e_small = 1 in Algorithm 1. The values of the regularization parameters (λ_0, λ_1) are provided in the supplementary material. Note that, due to the additional iterations for training the smaller networks, the worst-case training time for the N2N approach is 1.5 times that of standard training. Across all three datasets, we generally find that for larger training data sizes, smaller regularization parameters yield the best performance, reinforcing the fact that N2N is indeed a form of regularization.
This is primarily because, for large training data, the distribution is dense enough for the network to learn from, and thus less emphasis can be placed on the N2N regularization term. Results are shown in Table 1, and the average approximation error δ for the trained networks is shown in Table 2. We note that the use of N2N regularization improves test accuracy. Mainly, we see that N2N regularization complements common regularization approaches such as dropout and L2-norm well. In all cases we find that combining these well-known regularization approaches with the proposed approach yields the best results. Furthermore, we also see that the improvement in performance persists when the training data size is increased. Lastly, in most cases, we see that 2-level N2N regularization (N2N-2, K = 2 in Algorithm 1) outperforms single-level N2N (N2N-1, K = 1 in Algorithm 1), with the exception of CIFAR-10 with the full training dataset. For the CIFAR-10 and CIFAR-100 datasets, we used the benchmark ResNet architectures ResNet-44 and ResNet-50, respectively. Our results with L2-norm regularization for the ResNet-44 and ResNet-50 architectures are slightly better than the results originally reported in [20]. For the MNIST dataset, we used a 5-layer CNN with 3 convolutional layers and 2 fully connected layers. Network architecture details are provided in the supplementary material. Note that although better results can be found in the literature, our objective was to demonstrate that using N2N regularization in conjunction with common regularization approaches can benefit both shallow CNN architectures (MNIST) and ResNets (CIFAR-10, CIFAR-100). Furthermore, as Table 2 shows, we find that N2N reduces the empirical KG of trained networks, and datasets on which test accuracies are lower yield higher KG for the trained networks. This supports the implications of Theorem 1, as high-KG functions are expected to have a larger generalization gap.
6.2 Learning with Noisy Labels
As our proposed regularization approach constrains the network function to be simpler by minimizing an approximation of Kolmogorov Growth, it naturally applies to the case of noisy training labels. Without regularization, label noise in the training data usually forces a network to emulate a more complex function, as it potentially makes the decision boundary more complex, a fact that we also observe empirically in Section 6.3. We stipulate that N2N regularization should help the network in achieving simpler functions to approximate the training data labels, favoring simpler decision boundaries over complex ones, and thus potentially shielding against the corrupted labels to a certain extent. We test whether enforcing a simpler function (large λ_0, ..., λ_{K−1}) at the cost of compromising training loss can help improve test accuracy when the training data is corrupted by label noise. We tested the cases where symmetric and asymmetric label noise of some probability p was applied (same as in [21]), and show our results for symmetric noise with p = 0.5 and p = 0.2. Results with asymmetric pair-flip noise of probability p = 0.45 are shown in the supplementary material. First, we show the results for symmetric label noise of probability p = 0.5 and p = 0.2 on MNIST, CIFAR-10 and CIFAR-100 in Table 3. For the F-correction [22], Decoupling [23], MentorNet [24] and Co-Teaching [25] methods, we report the accuracy over the last ten iterations of training as observed in [25], along with their standard cross-entropy results with corresponding network architectures for reference.
First, we show the results for symmetric label noise of probability p = 0.5 and p = 0.2 on MNIST, CIFAR-10 and CIFAR-100 in Table 3. For the F-correction [22], Decoupling [23], MentorNet [24] and Co-Teaching [25] methods, we report the accuracy over the last ten iterations of training, as observed in [25], along with their standard cross-entropy results with the corresponding network architectures for reference. We do the same for our implemented SCE and N2N methods on MNIST and CIFAR-10; for CIFAR-100, we report the accuracies using a 48k-2k training-validation split of the data for both methods, as we find this to yield the best performance (owing to difficult convergence). Note that we use the same network configurations for SCE. The values of λ0 and λ1 are provided in the supplementary material. We find that N2N regularization yields competitive performance in most cases. We also plot the test accuracy as a function of the regularization parameter λ0 in Figure 1. We find that for MNIST, a large λ0 helps achieve significantly higher test accuracy, whereas for CIFAR-10 and CIFAR-100 accuracy peaks around λ0 = 0.6 and λ0 = 0.5, respectively. Note that accuracies may differ from Table 3 because of the different training configurations used. 6.3 Comparing Kolmogorov Growth Trajectories during Training The improvements observed via the use of N2N regularization lead to the question of how the network trajectories differ when N2N regularization is used, compared to when it is not. We use the result in Theorem 2 to compute the bounded approximation to the empirical Kolmogorov growth of the network function, and plot this approximation of $\hat{KG}_S(f)$ for the function f represented by the neural network during the training process. Note that the variation of KG in all plots is owing only to changes in the approximation error term δ in Theorem 2, as $\mathcal{F}^1_{small}$ is fixed to a single-layer CNN (of a fixed configuration) for all results in Figure 2; $\hat{\Pi}_S(\mathcal{F}^1_{small})$ was estimated using a VC-dimension-based approximation shown in [7]. Results are shown in Figure 2. We find that in the case of no training label noise, the KG of a network typically has a high initial value, which decreases steeply within a few epochs of training and then stabilizes. Expectedly, when networks are trained with N2N regularization, their final KG values are lower than those of networks trained without N2N. In the case of label noise, we report some interesting observations. First, we see that, in contrast to before, the KG values increase with training and eventually stabilize at higher values, almost the opposite trend to the case of no label noise. This can be partly explained by the fact that as training progresses, the network slowly adapts its decision boundary to fit the erroneous labelling, eventually resulting in a decision boundary of high complexity. For the label-noise case, we find that N2N regularization significantly reduces the increase of KG during the training process. Furthermore, larger values of the λ parameters lead to networks which exhibit smaller KG values. This also helps explain the significant gains in test accuracy observed for MNIST earlier in Table 3 when using N2N regularization. A sketch of this per-epoch bound computation follows below.
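For concreteness, here is a minimal sketch (our illustration, written for the binary-classification form of Theorem 2) of the per-epoch bound plotted in Figure 2. The growth term `log_growth` is an assumed, precomputed estimate of $\log \hat{\Pi}_S(\mathcal{F}^1_{small})$, e.g. from a VC-dimension bound; it is constant because $\mathcal{F}^1_{small}$ is fixed, so the plotted trajectory is driven entirely by the δ term.

```python
import numpy as np

def kg_bound(base_logits, small_logits, log_growth, m):
    """Empirical KG upper bound from Theorem 2 (binary case), evaluated
    once per epoch on the current base (teacher) and fitted student."""
    # Eq. (8): 2^(eps_max / 2) equals the worst-case L2 logit gap
    gap = np.linalg.norm(small_logits - base_logits, axis=1).max()
    eps_max = 2.0 * np.log2(max(gap, 1e-12))
    # Eq. (9): delta = empirical fraction of samples whose log-odds
    # magnitude falls inside the ambiguity band eps_max
    log_odds = base_logits[:, 0] - base_logits[:, 1]
    delta = float(np.mean(np.abs(log_odds) <= eps_max))
    # Eq. (10): delta * log 2 + growth term
    return delta * np.log(2) + log_growth / m
```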
7 Discussion and Reflections The results in this paper further the recent work in [5], where it was shown that neural networks are inherently biased towards simpler functions of lower Kolmogorov complexity. In particular, we provide an actionable method for incorporating a function complexity prior while learning, using a novel measure called Kolmogorov Growth. Unlike Kolmogorov complexity, which is the length of the shortest program that generates some function f, Kolmogorov Growth is concerned with the smallest function space that f can belong to while still fitting the data well. Functions with shorter descriptions will typically need fewer variables and thus may have lower Kolmogorov Growth. Although smaller function spaces have less expressive power, as recent work in [6] shows, even shallower neural nets can fit random labels on the training data points. The observations in [5], however, put a new perspective on the results in [6]: any random choice of network weights on smaller networks is likely to yield a low-complexity function. Thus, even shallower networks can potentially exhibit a wide range of complexities; among them, functions of higher Kolmogorov complexity are likely required for a network to fit random labels (similar to the observations in [5]). In the case of label noise, N2N accounts for this fact by avoiding directly training the shallower networks to fit the noisy labels, which keeps their descriptional complexity low and in turn helps regularize the larger base network. Indeed, when using a pre-trained shallower network to regularize the base network (reverse-KD), we found that performance can suffer significantly in the case of label noise. Via N2N regularization we see that enforcing low KG for large networks can improve their ability to generalize. The proposed approach helps greatly in the scenario where the training data has noisy labels, attaining competitive performance on the three tested datasets when the training labels are corrupted with symmetric label noise. In the case of label noise, we see that networks trained without N2N regularization have larger Kolmogorov Growth (see Figure 2), which is reduced immediately once N2N regularization is applied. Furthermore, it is clear that by varying the emphasis on minimizing the regularization term via the λ parameters, the KG of subsequently trained networks can be controlled effectively. As λ0 increases, more emphasis is put on lowering KG, which improves generalization and yields better test accuracy. However, this holds only up to a threshold (see Figure 1), and test accuracy decreases as λ0 increases beyond it. In the case of training data with noisy labels, we find that the threshold is larger, because we can put less emphasis on fitting the noisy training labels and more emphasis on minimizing KG. Our theoretical results in Section 3.1 show that network configurations that can be approximated well by smaller networks of lower complexity will have low Kolmogorov Growth and, subsequently, lower generalization error. These results concur with the very recent theoretical findings in [26], which report analogous results for the Rademacher-complexity-based generalization-error framework in the context of knowledge distillation. Our main result in Theorem 1 outlines an Occam's-razor-like principle for generalization: it implies that among all functions which have zero training error, the function with the smallest $KG_m(f)$ is the most likely to show the least generalization error. Our empirical findings consistently show that driving networks towards simpler functions of lower Kolmogorov Growth leads to networks that generalize better. Multi-level N2N itself follows from a theoretical result shown in the supplementary material, where we bound the empirical KG of the base network function based on the set of recursive mean-squared error estimates. However, KG bounds resulting from recursive estimation are provably less tight than single-estimation KG bounds of the form in Theorem 2. We believe that the additional bounded loss terms could be one of the reasons behind 2-level N2N yielding better performance on average than single-level N2N.
In the case of label noise, we see that enforcing low $KG_m(f)$ on the classification function f, by increasing the regularization parameter values, can have a significant impact (Section 6.2). This also points to a current limitation of our approach: the hyperparameters (λ0, λ1, ...) have to be manually tuned. Automatic estimation of their optimal values is an avenue for future research. Another limitation of our work is that the growth-function term in the empirical approximation of KG (Theorem 2) can potentially render the bounds quite loose; achieving tighter bounds with KG-based metrics is thus also a possible extension of this work. In N2N regularization, we observe that the properties of the smaller networks can dictate the learning of the base network. If we choose smaller networks which are highly rotation invariant in their structure (e.g., by using a rotation-invariant CNN), we should expect the base network to adopt some of the rotation-invariance properties as well. We thus conducted an additional experiment on a custom MNIST [17] dataset, which contains images of digits translated randomly within the image. We added symmetric noise to the labels (p = 0.5), and tested our proposed N2N regularization approach with a student network which is highly translation invariant (large max-pooling windows). We found that N2N shows larger improvements in this setting, reducing test error by 27% compared to other baselines. This demonstrates the possibility of extending this work by analyzing the effect of invariance/equivariance choices in the smaller networks on the generalization behaviour of the larger network, similar to the observations on distillation methods transferring inductive biases in [27]. Finally, since our work provides a certain level of robustness against label noise, it supports activities such as crowdsourced data labelling, which potentially contains significant label noise. 8 Acknowledgements This research was supported by the National University of Singapore and by A*STAR, CISCO Systems (USA) Pte. Ltd and the National University of Singapore under its Cisco-NUS Accelerated Digital Economy Corporate Laboratory (Award I21001E0002). We would also like to acknowledge the helpful feedback provided by members of the Kent-Ridge AI research group at the National University of Singapore.
1. What is the focus of the paper regarding deep neural networks?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical soundness and novelty?
3. Are there any concerns or questions regarding the experimental evaluation and its thoroughness?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any suggestions or recommendations for future research extensions in this direction?
Summary Of The Paper Review
Summary Of The Paper This paper addresses the fundamental problem of the generalization ability of deep neural networks and aims to shed light on the theoretical aspects of their generalization ability by introducing an approach based on a new complexity measure called "Kolmogorov Growth" (KG). A practical neural network training approach called Network-to-Network (N2N) Regularization is also introduced with the aim of enforcing the low-KG condition so that the generalization gap is reduced. Theoretical results are accompanied by relevant experimental evaluations, including a noisy-label case investigation, which helps in validating the proposed KG-based N2N training of neural networks. Overall, the material introduced is seemingly novel, and although it addresses a well-established problem, the results presented are good guidance for research extensions in this direction. Review Merits: Overall, the paper is well-written and theoretically sound. The idea of extending the concept of the growth function to measure the descriptional complexity of a function space and connecting it with the well-known Rademacher-complexity-based generalization gap theorem from learning theory is new, and of interest to the research community. Although the theoretical results presented are not entirely original and are largely inspired by existing generalization-gap results, coming up with the KG measure and developing a measurable approximation based on the recursive student-classifier approach is still novel. The experimental evaluation is quite thorough, and attempts to validate most aspects of the theoretical results. The impact of the properties of the smaller networks (e.g. rotational invariance for CNNs) on the larger network is also briefly analyzed, which would be interesting for further research in this direction. However, I do have the following questions and comments. Questions/Comments: In the Algorithm 1 table, the first line reads "Training data $\{X_i, y_i\}_{i=1}^k$". What is k here? Do you mean m instead of k, where m is the number of training samples? In the Algorithm 1 table, what are $e_{base}$ and $e_{small}$? How is $e_{base}$ different from J? Later in Sec 6.1, it is mentioned that $e_{base} = 3$ and $e_{small} = 1$, but in Algorithm 1, $X_i$ and $y_i$ are used throughout, which does not make sense. The overall notation and usage in Algorithm 1 needs much more clarification. In Sec 6.1, it is said that "larger training data sizes need smaller regularization parameters". Could the authors provide any more insight into this observation, especially in the light of the introduced KG? In Sec 6.1, you say "2-level N2N regularization" and then mention m = 2, but this makes no sense. Do you mean K = 2, i.e., 2 smaller networks trained? Similarly with single-level N2N? In Sec 6.2, there is the observation that a larger λ0 gives a higher test accuracy. Could the authors provide any intuitive explanation for this trend, as to why KG decreases with higher λ0? In Fig. 2, what is K, i.e., how many student networks are used to learn the original teacher network?
NIPS
Title Network-to-Network Regularization: Enforcing Occam's Razor to Improve Generalization Abstract What makes a classifier have the ability to generalize? There have been many important attempts to address this question, but a clear answer is still elusive. Proponents of complexity theory find that the complexity of the classifier's function space is key to deciding generalization, whereas other recent work reveals that classifiers which extract invariant feature representations are likely to generalize better. Recent theoretical and empirical studies, however, have shown that even within a classifier's function space, there can be significant differences in the ability to generalize. Specifically, empirical studies have shown that among functions which fit the training data well, functions with lower Kolmogorov complexity (KC) are likely to generalize better, while the opposite is true for functions of higher KC. Motivated by these findings, we propose, in this work, a novel measure of complexity called Kolmogorov Growth (KG), which we use to derive new generalization error bounds that depend only on the final choice of the classification function. Guided by the bounds, we propose a novel way of regularizing neural networks by constraining the network trajectory to remain in the low-KG zone during training. Minimizing KG while learning is akin to applying Occam's razor to neural networks. The proposed approach, called network-to-network (N2N) regularization, leads to clear improvements in the generalization ability of classifiers. We verify this for three popular image datasets (MNIST, CIFAR-10, CIFAR-100) across varying training data sizes. Empirical studies find that conventional training of neural networks, unlike network-to-network regularization, leads to networks of high KG and lower test accuracies. Furthermore, we present the benefits of N2N regularization in the scenario where the training data labels are noisy. Using N2N regularization, we achieve competitive performance on the MNIST, CIFAR-10 and CIFAR-100 datasets with corrupted training labels, significantly improving network performance compared to standard cross-entropy baselines in most cases. These findings illustrate the many benefits obtained from imposing a function complexity prior like Kolmogorov Growth during the training process. 1 Introduction and Motivation On the surface, the problem of learning to generalize well over unseen data seems an impossible task. Classification is inherently a problem in function estimation, and the finite information that the training data samples impart seems hardly enough to correctly guess the behaviour of the function over the unseen data samples outside the training set. However, the assumption of a structured ground-truth label function (the unknown function which generates the ground-truth label for any datapoint) leads to a more optimistic outlook on the problem. Without the assumption of structuredness in the ground truth, learning is not guaranteed, as was observed in the no-free-lunch theorem [1]. As shown in that work, there is no universal learning algorithm which can generalize well for all possible choices of ground-truth label functions. Since one does not have any control over the true nature of the ground-truth function in any classification problem, the other important parameter that decides the ability of a classifier to generalize is the complexity of its function space itself.
Over the past few decades, there have been multiple attempts at bounding the generalization error on the basis of various complexity measures [2, 3, 4] of the classifier's function space $\mathcal{F}$. The overall results of these theoretical developments indicate that generalization is primarily governed by metrics that are proportional to the size of $\mathcal{F}$. Examples of complexity metrics in this regard include Rademacher complexity, VC dimension, and local forms of Rademacher complexity (for tighter bounds). However, as observed in [5], these metrics lead to relatively loose bounds on the generalization performance of deep neural networks. This is because deep neural networks have high-complexity function spaces with the flexibility to learn any label assignment on any set of training datapoints. This was perhaps most clearly observed in [6], where even after random label assignments on the training data samples, the networks were still able to fit the training data labels with no errors. Thus, other methods have often been explored to establish the relationship between complexity measures and generalization performance [7, 8], such as causal relations. These studies lead to a natural question: other than the metrics which relate to the size of the function space of a classifier, what other factors contribute towards its ability to generalize? State-of-the-art deep neural networks, as observed above, yield very high-complexity spaces, which does not explain their remarkable ability to generalize well in complex high-dimensional supervised classification problems, such as in vision. We note that metrics such as Rademacher complexity or VC dimension, which relate to whole function spaces $\mathcal{F}$ (or subspaces within $\mathcal{F}$ which fit the training data well), essentially assign the same generalization gap to all functions within $\mathcal{F}$ (or the subspace within $\mathcal{F}$). However, there has been a longstanding understanding that, among all functions which fit the training data well, simpler functions are usually expected to generalize better. This is an example of the Occam's razor principle, which states that among all hypotheses that explain a phenomenon, the simplest hypothesis is preferred [9]. In the context of neural networks, even for very high-complexity deep neural network function spaces, there will always be some weight configurations which yield simpler functions. For example, all network weight configurations yielding input-output functions which can be efficiently approximated by smaller, shallower networks could be considered simpler. An extreme example would be assigning all weight values within a deep neural network to zero: the resulting function is the constant function f(X) = 0. This observation points to the fact that even within the function space of a deep CNN, not all functions are equally complex, as some of them can still be approximated by shallower networks. To that end, a primary objective of this paper is to probe complexity measures which enable us to assign a level of complexity to individual functions within a function space. Subsequently, our objective is to steer the learning process towards network configurations which yield less complex input-output functions. An important work which explored similar directions is [10], where metrics from algorithmic complexity were developed to bias the learning process towards simpler functions.
In this regard, measures of the descriptional complexity [5] of functions have been proposed over the years, which quantify the level of complexity of an individual function based on its shortest description. Although fundamental measures of descriptional complexity such as Kolmogorov complexity [11] or Solomonoff probability [12] are uncomputable, computable approximations to them have been developed [13]. In spite of this early work, there is a lack of concrete theoretical or empirical work continuing this line of investigation after [10]. Mainly, there is a lack of work that explores the relevance of such descriptional complexity measures to the generalization ability of functions. That is, until recently, when interest in the descriptional complexity of a function was renewed as it was found that the outputs of random maps tend to be biased towards simpler functions [14]. This result indirectly hinted that for classification tasks, the ground-truth labelling function has a simple description with high probability. This was further investigated in [5] in the context of deep neural networks, where it was found that the Lempel-Ziv (LZ) complexity (a form of descriptional complexity) of most neural networks with random choices of weights is low (shorter description), in accordance with the result in [14]. Empirically, it was found that network weight configurations which lead to functions of smaller Lempel-Ziv complexity show better generalization performance. These results and observations show that descriptional complexity measures may help in understanding, empirically, the generalization behaviour of deep neural networks. In this paper, we advance this line of investigation by proposing a new theoretical framework for relating descriptional complexity measures to generalization performance. Subsequently, we also provide a computable method which improves the generalization performance of neural networks by lowering their descriptional complexity. 2 Contributions This paper makes the following contributions. 1. First, we undertake a brief theoretical analysis exploring the relevance of descriptional complexity measures of functions to their expected generalization error. We propose a novel measure of complexity called Kolmogorov Growth (KG). Error bounds are derived which portray the dependence of the generalization error of a single function f on KG(f). The bounds show that functions of higher KG(f) will likely lead to a higher generalization gap. This result formalizes the Occam's razor principle for classifiers, and also concurs with the empirical findings in [5], where functions of higher LZ complexity showed worse generalization performance. 2. Like Kolmogorov complexity, KG is also uncomputable. Therefore, we propose computable approximations of KG for neural networks, based on the concept of teacher-student approximation (similar to knowledge distillation [15]). Specifically, we show that neural network functions f which can be approximated well by smaller networks will have smaller empirical KG with high probability. 3. Next, using this idea, we develop a novel method for regularizing neural networks, called network-to-network (N2N) regularization. N2N regularization forces trained network configurations to be of low KG. We find that doing so not only improves generalization performance across a range of training data sizes, but also helps in the case of label noise. For instance, on MNIST, we see test error decrease by 94%, reaching results competitive with benchmark methods. 4.
Finally, we study the evolution of empirical KG as networks are trained and observe that, in the standard classification scenario, networks show a sharp decrease in their KG as training progresses. However, this trend completely reverses when the training data labels are noise-corrupted, and N2N regularization is able to stem the undesirable increase of KG during training. 3 Kolmogorov Growth: Relevance to Generalization Assume that our training data consists of m d-dimensional i.i.d. samples and their labels $S = \{(z_1, y_1), (z_2, y_2), \ldots, (z_m, y_m)\}$ drawn from some distribution $P^m$. We propose two growth measures for a single function f, namely Kolmogorov Growth $KG_m(f)$ and empirical Kolmogorov Growth $\hat{KG}_S(f)$. Note that, when appropriate, we may drop the subscript m and refer to $KG_m(f)$ as KG(f). These measures are primarily motivated by the well-known growth function in statistics. The growth function $\Pi_m(\mathcal{F})$ is defined as

$$\Pi_m(\mathcal{F}) = \max_{z_1, \ldots, z_m \in \mathbb{R}^d} \left|\{(f(z_1), f(z_2), \ldots, f(z_m)) : f \in \mathcal{F}\}\right| \quad (1)$$

Note that the growth function is defined for a space of functions and captures the maximum number of label assignments a function space $\mathcal{F}$ can generate on any m points in $\mathbb{R}^d$. To define Kolmogorov Growth measures for a single function f, we first need to generate a function space from f, based on a description $D_f$ of f. This is done by noting all the parameters involved in the description $D_f$ (other than the coordinates in $\mathbb{R}^d$), and then assigning all possible values to those parameters to generate a space of functions $\mathcal{F}(D_f)$ from the description $D_f$. Next, we note that for a function f, we consider multiple descriptions $(D_f^1, D_f^2, D_f^3, \ldots)$, all of which faithfully generate the output of f over all points in $\mathbb{R}^d$, with the additional constraint that each $\mathcal{F}(D_f^i)$ should fit any m datapoints sampled from $P^m$. Given these observations and definitions, we can then define the Kolmogorov Growth of a function f as follows:

$$KG_m(f) = \min_i \frac{\log \Pi_m\big(\mathcal{F}(D_f^i)\big)}{m}. \quad (2)$$

Note that $KG_m(f)$ requires knowledge of the distribution $P^m$ of the training samples. For an instance of the datapoints in S, we define the empirical Kolmogorov Growth via the empirical growth function $\hat{\Pi}_S(\mathcal{F})$, which only counts the number of label assignments a function space can generate over the given m training data samples in S. Thus, $\hat{\Pi}_S(\mathcal{F}) = |\{(f(z_1), f(z_2), \ldots, f(z_m)) : f \in \mathcal{F}\}|$. This leads to the following definition of the empirical Kolmogorov Growth of a function f:

$$\hat{KG}_S(f) = \min_i \frac{\log \hat{\Pi}_S\big(\mathcal{F}(D_f^i)\big)}{m}. \quad (3)$$

Remark: Kolmogorov Growth is indirectly motivated by Kolmogorov complexity itself. However, unlike Kolmogorov complexity, which is the length of the shortest program that generates f, Kolmogorov Growth is concerned with the smallest function space that f can belong to while still fitting the data well. Functions which have shorter descriptions usually require a smaller number of variables and are expected to have lower Kolmogorov Growth. Moreover, it turns out that Kolmogorov Growth allows us to directly comment on the error bounds for the function f (see Section 3.1). We believe that a possible direction of future work would be a deeper study of the relationship between Kolmogorov Growth and Kolmogorov complexity itself. Remark: Note that, in the binary classification scenario, for a completely unstructured function f (i.e., f outputs random labels at every point $X \in \mathbb{R}^d$), one expects KG(f) to be near its maximum value (i.e., log 2). A structured f, by contrast, generates shorter descriptions with fewer parameters and therefore leads to smaller KG(f); a toy computation of the empirical quantities follows below.
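As a toy illustration of these definitions (our construction, not an example from the paper), take f to be a 1-D threshold classifier. Its description has a single free parameter t, so the generated space $\mathcal{F}(D_f)$ is the family of all threshold classifiers, whose empirical growth can be enumerated exactly; using one description gives an upper estimate of $\hat{KG}_S(f)$, since the definition takes a minimum over descriptions.

```python
import numpy as np

rng = np.random.default_rng(0)
z = np.sort(rng.standard_normal(50))        # m = 50 one-dimensional samples
thresholds = np.linspace(-3.0, 3.0, 1001)   # grid over the one free parameter t

# F(D_f): the space generated by the description "f_t(z) = sign(z - t)"
labelings = {tuple(np.sign(z - t)) for t in thresholds}
growth = len(labelings)                     # empirical growth of F(D_f) on S
kg_hat = np.log(growth) / len(z)            # Eq. (3), using one description
print(f"empirical growth = {growth}, KG estimate = {kg_hat:.4f}")
# Threshold classifiers realize at most m + 1 labelings, so kg_hat is
# roughly log(51)/50 ≈ 0.079, far below log 2 ≈ 0.693 (unstructured f).
```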
3.1 Bounding Generalization Error using Kolmogorov Growth Here we present error bounds that depend on KG(f), where f is the classification function, given the m training data samples and their labels in S. As before, the data samples and labels in S are drawn from some underlying distribution $P^m$. We now define a set of error functions for computing the training loss (0-1 loss) on S, denoted $\hat{err}_S(f)$, and the overall generalization error with respect to the distribution P, denoted $err_P(f)$:

$$\hat{err}_S(f) = \sum_{i=1}^{m} \frac{(1 - f(z_i)\,y_i)}{2m} \quad (4)$$

$$err_P(f) = \mathbb{E}_{z, y \sim P}\left[\frac{(1 - f(z)\,y)}{2}\right]. \quad (5)$$

These definitions hold for any function f. Note that the error functions depend both on the function f and on the distribution P. With this, we have the following results; the proofs of all results are provided in the supplementary material.

Theorem 1 For 0 < δ < 1, with probability p ≥ 1 − δ over the draw of S, we have

$$err_P(f) \le \hat{err}_S(f) + \sqrt{2\,KG_m(f)} + \sqrt{\frac{\log(1/\delta)}{2m}}. \quad (6)$$

The following corollary of the above theorem gives bounds that depend on the empirical Kolmogorov Growth $\hat{KG}_S(f)$.

Corollary 1.1 For 0 < δ < 1, with probability p ≥ 1 − δ over the draw of S, we have

$$err_P(f) \le \hat{err}_S(f) + \sqrt{2\,\hat{KG}_S(f)} + 4\sqrt{\frac{2\log(4/\delta)}{m}}. \quad (7)$$

Remark: Theorem 1 and its corollary essentially state that for functions f of lower Kolmogorov Growth, we should expect a smaller generalization gap. In what follows, we outline ways to approximate the empirical Kolmogorov Growth $\hat{KG}_S(f)$. 4 Teacher-Student Approximation Bounds for Kolmogorov Growth The fundamental idea for approximating the empirical Kolmogorov Growth of a function f belonging to the function space $\mathcal{F}$ is to use a student classifier with a function space $\mathcal{F}^1_{small}$ (with a much smaller parametric count and complexity) to approximate the given function f (the teacher). We apply this idea recursively: if the function $f^1_{small} \in \mathcal{F}^1_{small}$ approximates f best, we recursively estimate the empirical Kolmogorov Growth of $f^1_{small}$ by approximating it via another classifier with a smaller function space $\mathcal{F}^2_{small}$ (thus, $\Pi_m(\mathcal{F}^2_{small}) < \Pi_m(\mathcal{F}^1_{small})$), and so on. We use this recursion to obtain a final estimate for $\hat{KG}_S(f)$. We conjecture that, like Kolmogorov complexity itself, the true $\hat{KG}_S(f)$ is uncomputable, so the estimate that results from this recursive approximation process is essentially an upper bound to the true $\hat{KG}_S(f)$. The following theorem establishes an upper bound to the empirical KG approximation from a single smaller student classifier.

Theorem 2 Consider a function $f \in \mathcal{F} : \mathbb{R}^d \to \mathbb{R}^2$ which outputs class logits for binary classification. We construct a function space $\mathcal{F}^1_{small}$ such that $\Pi_m(\mathcal{F}^1_{small}) < \Pi_m(\mathcal{F})$ and, for all $g \in \mathcal{F}^1_{small}$, there exists a description $D_g$ such that $\hat{\Pi}_S(\mathcal{F}(D_g)) \le \hat{\Pi}_S(\mathcal{F}^1_{small})$. We approximate f via another function $f^1_{small} \in \mathcal{F}^1_{small} : \mathbb{R}^d \to \mathbb{R}^2$ and let $\epsilon_{max}$ be such that

$$2^{\epsilon_{max}/2} = \max_{X \in \mathbb{R}^d} \left\|f^1_{small}(X) - f(X)\right\|_2. \quad (8)$$

Denote the output probabilities generated from the corresponding logit outputs of f(X) using the softmax operator (temperature T = 1) as $P_0(f(X))$ (label 1 output) and $P_1(f(X))$ (label 2 output). Let $0 \le \delta \le 1$ be such that

$$\Pr\left(\left|\log \frac{P_0(f(X))}{P_1(f(X))}\right| \le \epsilon_{max}\right) \le \delta, \quad (9)$$

when X is drawn from S. Then we have

$$\hat{KG}_S(f) \le \delta \log 2 + \frac{\log \hat{\Pi}_S\big(\mathcal{F}^1_{small}\big)}{m}, \quad (10)$$

where m is the number of samples in S.
Remark: Theorem 2 demonstrates a way to bound the true empirical Kolmogorov Growth of the function f using a single student classifier function $f^1_{small} \in \mathcal{F}^1_{small}$. Note that, unlike in Theorem 1, there are no direct constraints on the expressivity of $\mathcal{F}^1_{small}$, but rather a joint constraint on $\mathcal{F}^1_{small}$ and δ combined. If $\mathcal{F}^1_{small}$ cannot fit all m points sampled from $P^m$, then the approximation error in δ will likely be higher, which will add to the estimate of $\hat{KG}_S(f)$. The proof of Theorem 2 and its extension to the recursive approximation case are given in the supplementary material. 5 Network-to-Network (N2N) Regularization We denote the base network to be trained as $N^{base}$ and the function modelled by the network weights $w^{base}$ as $N^{base}(w^{base}, X)$, where $X \in \mathbb{R}^d$ is the input. Here, $N^{base}(w^{base}, X)$ represents the output logits of the network $N^{base}$ when presented with the input X; thus $N^{base}(w^{base}, X) \in \mathbb{R}^c$, where c is the number of classes. For what follows, let us denote the available training data and their labels by $S = \{X_i, y_i\}_{i=1}^m$. The approach that follows is directly motivated by the result in Theorem 2. The main objective is to ensure that the KG of the network stays low during learning, using the teacher-student approximation error in Theorem 2. This is primarily achieved by ensuring that, during training, the base network function $N^{base}(w^{base}, X)$ is always near some function within the smaller network's function space. Next, we outline the details of the proposed multi-level network-to-network (N2N) regularization approach. 5.1 Multi-Level N2N: Details In multi-level N2N regularization, we have multiple smaller networks $n^{small}_1, n^{small}_2, \ldots, n^{small}_K$ of decreasing complexity, such that $\Pi_m(\mathcal{F}^1_{small}) > \Pi_m(\mathcal{F}^2_{small}) > \cdots > \Pi_m(\mathcal{F}^K_{small})$.

Algorithm 1 N2N Regularization (Multi-Level)
Input: Training data $\{X_i, y_i\}_{i=1}^m$, base network $N^{base}$ with weights $w^{base}$, K networks $n^{small}_1, n^{small}_2, \ldots, n^{small}_K$ with weights $w_1, w_2, \ldots, w_K$ (s.t. $|n^{small}_1| > |n^{small}_2| > \cdots > |n^{small}_K|$ in size), number of epochs J, hyperparameters $\lambda_0, \lambda_1, \ldots, \lambda_{K-1}$, $\alpha_0, \ldots, \alpha_K$, $e_{base}$, $e_{small}$.
1: for j = 1, 2, ..., J do
2:   for iter = 1, 2, ..., $e_{base}$ do
3:     $\mathcal{L}_1 = \sum_{i=1}^m \big(L_{CE}(N^{base}(X_i), y_i) + \lambda_0 \|N^{base}(X_i) - n^{small}_1(X_i)\|^2\big)$
4:     Weight update: $w^{base} \leftarrow w^{base} - \frac{\alpha_0}{m} \frac{\partial \mathcal{L}_1}{\partial w^{base}}$
5:   for k = 1, 2, ..., K do
6:     for iter = 1, 2, ..., $e_{small}$ do
7:       if k = 1 then
8:         $\mathcal{L}_k = \sum_i \|N^{base}(X_i) - n^{small}_1(X_i)\|^2 + \lambda_1 \|n^{small}_2(X_i) - n^{small}_1(X_i)\|^2$
9:       else if k = K then
10:        $\mathcal{L}_k = \sum_i \|n^{small}_k(X_i) - n^{small}_{k-1}(X_i)\|^2$
11:      else
12:        $\mathcal{L}_k = \sum_i \|n^{small}_k(X_i) - n^{small}_{k-1}(X_i)\|^2 + \lambda_k \|n^{small}_k(X_i) - n^{small}_{k+1}(X_i)\|^2$
13:      Weight update: $w_k \leftarrow w_k - \frac{\alpha_k}{m} \frac{\partial \mathcal{L}_k}{\partial w_k}$

The corresponding functions resulting from the network weights $w_1, w_2, \ldots, w_K$ are denoted $n^{small}_1(w_1, X), n^{small}_2(w_2, X), \ldots, n^{small}_K(w_K, X)$. Next, we outline the loss functions for all networks. For the larger, to-be-trained base network $N^{base}$, the loss objective is to minimize the cross-entropy loss on S while being close to $n^{small}_1(w_1, X)$ for some choice of weights $w_1$ ($\mathcal{L}_1$ in Algorithm 1). For the smaller network $n^{small}_1$, the objective is two-fold: find the weight configuration $w_1$ that approximates the larger network function $N^{base}$, while also being close to $n^{small}_2(w_2, X)$ for some choice of $w_2$ ($\mathcal{L}_2$ in Algorithm 1). Thus, we force the smaller network $n^{small}_1$ to be close to the base network and to an even lower-complexity network $n^{small}_2$ at the same time. Similarly we can define $\mathcal{L}_3, \mathcal{L}_4, \ldots, \mathcal{L}_{K-1}$, except for $\mathcal{L}_K$, which applies to the smallest network $n^{small}_K$ and simply keeps $n^{small}_K(w_K, X)$ close to $n^{small}_{K-1}(w_{K-1}, X)$; a training-loop sketch for the single-level case is given below.
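As a concrete rendering of Algorithm 1, here is a minimal PyTorch-style sketch (our illustration, not the authors' released code) of the single-level case K = 1, where the student's only objective is to track the base network; the value lam0 = 0.5 is a placeholder, since the paper's λ values are in its supplementary material.

```python
import torch
import torch.nn.functional as F

def n2n_epoch(base, small, opt_base, opt_small, X, y,
              lam0=0.5, e_base=3, e_small=1):
    """One epoch of Algorithm 1 with K = 1. Full-batch updates, as in the
    paper; a minibatch SGD variant wraps the same steps in a loader loop."""
    for _ in range(e_base):          # L1 steps: train the base network
        opt_base.zero_grad()
        out_base = base(X)
        # cross-entropy plus the lam0-weighted MSE pull towards the student;
        # detach() mirrors taking the gradient w.r.t. w_base only
        loss1 = F.cross_entropy(out_base, y) + lam0 * (
            (out_base - small(X).detach()) ** 2).sum(dim=1).mean()
        loss1.backward()
        opt_base.step()
    for _ in range(e_small):         # L_K steps (K = 1): train the student
        opt_small.zero_grad()
        loss2 = ((small(X) - base(X).detach()) ** 2).sum(dim=1).mean()
        loss2.backward()
        opt_small.step()
    return loss1.item(), loss2.item()
```

For K = 2, the student $n^{small}_1$ would additionally carry a λ1-weighted pull towards a second, smaller student, exactly as in lines 8-12 of Algorithm 1.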
1. What is the main contribution of the paper regarding measuring the complexity of neural networks?
2. How does the proposed approach, Network-to-Network Regularization, relate to the concept of Kolmogorov Growth?
3. What are the strengths and weaknesses of the paper's experimental results?
4. Are there any concerns or limitations regarding the implicit assumption behind Network-to-Network Regularization?
5. Is there potential for a stronger connection between Kolmogorov Complexity and Kolmogorov Growth?
Summary Of The Paper Review
Summary Of The Paper This paper proposes Kolmogorov Growth (KG), a novel measure of the complexity of a neural network. The authors derive new generalization bounds using KG and take inspiration from these bounds to propose a new way of regularizing neural networks (network-to-network regularization, N2N). Finally, the paper verifies the regularization scheme on three standard image benchmarks. Review As reflected in its name, Kolmogorov Growth takes inspiration from Kolmogorov complexity and the growth function. Conceptually, KG measures the size of the simplest class of functions that a given function f belongs to. The main innovation seems to be the min_i operator, which restricts our attention to the simplest function class. KG by itself is obviously not computable, but the paper proposes an interesting way to approximate it. The generalization bound in section 3.1 is not particularly surprising; in a sense, the proof for this bound is embedded in the definition of KG itself. Still, I think this is an interesting approach to bounding the generalization error. Network-to-network regularization is motivated by the results in section 3.1, which show that functions that can be approximated well by small networks will have low KG. N2N trains a small student network and a large teacher network to output similar predictions to each other. This makes the student learn from the teacher, while the teacher is regularized by the simpler function of the student. The paper also considers a multi-level version where a sequence of progressively smaller networks is trained with the same loss. N2N hinges on the implicit assumption that smaller networks have smaller hypothesis spaces and thus learn simpler functions. While this is likely true in a strict sense, many results with neural networks show that small networks have surprisingly strong expressive power. For example, works on knowledge distillation show that minimal networks can achieve accuracy close to that of a large teacher, and [1] shows that a rather small network can fit random labels. I think this implicit assumption and its limitations should be discussed a bit more in the paper. Experiments show that N2N acts as an effective regularizer and that N2N regularization indeed reduces the KG bound. It would have been interesting to additionally include the generalization gap in Figure 2 so that the KG bound can be compared with it. There may be a more direct correspondence between Kolmogorov complexity and KG. In KG's definition (section 3), one essentially constructs a short two-part program for describing the function f, first describing the function class F and then specifying f within F. I wouldn't be surprised if the two concepts can be shown to have a stronger relation than KC simply acting as "indirect motivation" for KG (line 140). I think this is an exciting direction for future work. [1] Understanding deep learning requires rethinking generalization, Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
NIPS
Title Network-to-Network Regularization: Enforcing Occam's Razor to Improve Generalization Abstract What makes a classifier have the ability to generalize? There have been a lot of important attempts to address this question, but a clear answer is still elusive. Proponents of complexity theory find that the complexity of the classifier’s function space is key to deciding generalization, whereas other recent work reveals that classifiers which extract invariant feature representations are likely to generalize better. Recent theoretical and empirical studies, however, have shown that even within a classifier’s function space, there can be significant differences in the ability to generalize. Specifically, empirical studies have shown that among functions which have a good training data fit, functions with lower Kolmogorov complexity (KC) are likely to generalize better, while the opposite is true for functions of higher KC. Motivated by these findings, we propose, in this work, a novel measure of complexity called Kolmogorov Growth (KG), which we use to derive new generalization error bounds that only depend on the final choice of the classification function. Guided by the bounds, we propose a novel way of regularizing neural networks by constraining the network trajectory to remain in the low KG zone during training. Minimizing KG while learning is akin to applying the Occam’s razor to neural networks. The proposed approach, called network-to-network regularization, leads to clear improvements in the generalization ability of classifiers. We verify this for three popular image datasets (MNIST, CIFAR-10, CIFAR-100) across varying training data sizes. Empirical studies find that conventional training of neural networks, unlike network-to-network regularization, leads to networks of high KG and lower test accuracies. Furthermore, we present the benefits of N2N regularization in the scenario where the training data labels are noisy. Using N2N regularization, we achieve competitive performance on MNIST, CIFAR-10 and CIFAR-100 datasets with corrupted training labels, significantly improving network performance compared to standard cross-entropy baselines in most cases. These findings illustrate the many benefits obtained from imposing a function complexity prior like Kolmogorov Growth during the training process. 1 Introduction and Motivation On the surface, the problem of learning to generalize well over unseen data seems an impossible task. Classification is inherently a problem in function estimation, and the finite information that the training data samples impart seems hardly enough to be able to correctly guess the behaviour of the function over the unseen data samples outside the training set. However, assumption of a structured ground truth label function, which is the unknown function which generates the ground truth label for any datapoint, leads to more optimistic outlook on the problem. Without the assumption of 35th Conference on Neural Information Processing Systems (NeurIPS 2021). structuredness in the ground truth, learning is not guaranteed, as was observed in the no free lunch theorem [1]. As shown in that work, there is no universal learning algorithm which can generalize well for possible choices of ground truth label functions. Since one does not have any control over the true nature of the ground truth function in any classification problem, the other important parameter that decides the ability of a classifier to generalize, is the complexity of its function space itself. 
Over the past few decades, there have been multiple attempts at bounding the generalization error on the basis of various complexity measures [2, 3, 4] of the classifier’s function space F. The overall results of these theoretical developments indicate that generalization is primarily governed by metrics that are proportional to the size of F. Examples of complexity metrics in this regard include Rademacher complexity, VC-dimension, and local forms of Rademacher complexity (for tighter bounds). However, as observed in [5], these metrics lead to relatively loose bounds on the generalization performance of deep neural networks. This is because deep neural networks have high-complexity function spaces with the flexibility to learn any label assignment on any set of training datapoints. This was perhaps most clearly observed in [6], where even after random label assignments on the training data samples, the networks were still able to fit the training labels with no errors. Thus, other avenues, such as causal relations, have often been explored to establish the relationship between complexity measures and generalization performance [7, 8]. These studies lead to a natural question: other than metrics which relate to the size of the function space of a classifier, what other factors contribute towards its ability to generalize? State-of-the-art deep neural networks, as observed above, yield very high-complexity spaces, which does not explain their remarkable ability to generalize well in complex high-dimensional supervised classification problems, such as in vision. We note that metrics such as Rademacher complexity or VC dimension, which relate to whole function spaces F (or subspaces within F which fit the training data well), essentially assign the same generalization gap to all functions within F (or the subspace within F). However, it has long been understood that, among all functions which fit the training data well, simpler functions are usually expected to generalize better. This is an example of the Occam’s Razor principle, which states that among all hypotheses that explain a phenomenon, the simplest hypothesis is preferred [9]. In the context of neural networks, even for very high-complexity deep neural network function spaces, there will always be some weight configurations which yield simpler functions. For example, all network weight configurations yielding input-output functions which can still be efficiently approximated by smaller, shallower networks could be considered to be simpler. An extreme example of this is assigning all weight values to zero within a deep neural network: the function that results from this weight configuration is the constant function f(X) = 0. This observation points to the fact that even within the function space of a deep CNN, not all functions are equally complex, as some of them can still be approximated by shallower networks. To that end, a primary objective of this paper is to probe complexity measures which enable us to assign a level of complexity to individual functions within a function space. Subsequently, our objective is to steer the learning process towards network configurations which yield less complex input-output functions. An important work which explored similar directions is [10], where metrics from algorithmic complexity were developed to bias the learning process towards simpler functions. 
In this regard, measures of descriptional complexity [5] of functions have been proposed over the years, which quantify the level of complexity of an individual function based on its shortest description. Although fundamental measures of descriptional complexity such as Kolmogorov complexity [11] or Solomonoff probability [12] are uncomputable, computable approximations to them have been developed [13]. In spite of this early work, there is a lack of concrete theoretical or empirical work continuing this line of investigation after [10]. Mainly, there is a lack of work that explores the relevance of such descriptional complexity measures to the generalization ability of functions. That changed recently, when interest in the descriptional complexity of a function was renewed by the finding that outputs of random maps tend to be biased towards simpler functions [14]. This result indirectly hinted that for classification tasks, the ground truth labelling function has a simple description with high probability. This was further investigated in [5] in the context of deep neural networks, where it was found that the Lempel-Ziv (LZ) complexity (a form of descriptional complexity) of most neural networks with random choices of weights is low (a shorter description), in accordance with the result in [14]. Empirically, it was found that network weight configurations which lead to functions of smaller Lempel-Ziv complexity show better generalization performance. These results and observations show that descriptional complexity measures may help to understand empirically the generalization behaviour of deep neural networks. In this paper, we advance this line of investigation by proposing a new theoretical framework for relating descriptional complexity measures to generalization performance. Subsequently, we also provide a computable method which improves the generalization performance of neural networks by lowering their descriptional complexity. 2 Contributions This paper makes the following contributions. 1. First, we undertake a brief theoretical analysis exploring the relevance of descriptional complexity measures of functions to their expected generalization error. We propose a novel measure of complexity called Kolmogorov Growth (KG). Error bounds are derived which portray the dependence of the generalization error of a single function f on KG(f). The bounds show that functions of higher KG(f) will likely lead to a higher generalization gap. This result formalizes the Occam’s razor principle for classifiers, and also concurs with the empirical findings in [5], where functions of higher LZ complexity showed worse generalization performance. 2. Like Kolmogorov complexity, KG is also uncomputable. Therefore, we propose computable approximations of KG for neural networks, based on the concept of teacher-student approximation (similar to knowledge distillation [15]). Specifically, we show that neural network functions f which can be approximated well by smaller networks will have smaller empirical KG with high probability. 3. Next, using this idea, we develop a novel method for regularizing neural networks, called network-to-network (N2N) regularization. N2N regularization forces trained network configurations to be of low KG. We find that doing so not only improves the generalization performance across a range of training data sizes, but also helps in the case of label noise. For instance, on MNIST, we see test error decrease by 94%, reaching results competitive with benchmark methods. 
4. Finally, we study the evolution of empirical KG as networks are trained and observe that, in the standard classification scenario, networks show a sharp decrease in their KG as training progresses. However, this trend completely reverses when the training data labels are noise-corrupted, and N2N regularization is able to stem the undesirable increase of KG during training. 3 Kolmogorov Growth: Relevance to Generalization Assume that our training data consists of m d-dimensional i.i.d. samples and their labels $S = [(z_1, y_1), (z_2, y_2), \dots, (z_m, y_m)]$ drawn from some distribution $P^m$. We propose two growth measures for a single function f, namely Kolmogorov Growth $KG_m(f)$ and empirical Kolmogorov Growth $\widehat{KG}_S(f)$. Note that, when appropriate, we may drop the subscript m and refer to $KG_m(f)$ as $KG(f)$. These measures are primarily motivated by the well-known growth function in statistics. The growth function $\Pi_m(\mathcal{F})$ is defined as
$$\Pi_m(\mathcal{F}) = \max_{z_1, \dots, z_m \in \mathbb{R}^d} \left|\{(f(z_1), f(z_2), \dots, f(z_m)) : f \in \mathcal{F}\}\right| \quad (1)$$
Note that the growth function is defined for a space of functions and captures the maximum number of label assignments a function space $\mathcal{F}$ can generate on any m points in $\mathbb{R}^d$. To define Kolmogorov Growth measures for a single function f, we first need to generate a function space from f, based on a description $D_f$ of f. This is done by noting all the parameters involved in the description $D_f$ (other than the coordinates in $\mathbb{R}^d$), and then assigning all possible values to those parameters to generate a space of functions $\mathcal{F}(D_f)$ from the description $D_f$. Next, for a function f, we consider multiple descriptions $(D_f^1, D_f^2, D_f^3, \dots)$, all of which faithfully generate the output of f over all points in $\mathbb{R}^d$, with the additional constraint that each $\mathcal{F}(D_f^i)$ should fit any m datapoints sampled from $P^m$. Given these observations and definitions, we can then define the Kolmogorov Growth of a function f as follows:
$$KG_m(f) = \min_i \frac{\log \Pi_m\big(\mathcal{F}(D_f^i)\big)}{m}. \quad (2)$$
Note that $KG_m(f)$ requires knowledge of the distribution $P^m$ of the training samples. For an instance of the datapoints in S, we define the empirical Kolmogorov Growth via the empirical growth function $\hat{\Pi}_S(\mathcal{F})$, which only counts the label assignments a function space can generate over the given m training data samples in S. Thus, $\hat{\Pi}_S(\mathcal{F}) = |\{(f(z_1), f(z_2), \dots, f(z_m)) : f \in \mathcal{F}\}|$. This leads to the following definition of the empirical Kolmogorov Growth of a function f:
$$\widehat{KG}_S(f) = \min_i \frac{\log \hat{\Pi}_S\big(\mathcal{F}(D_f^i)\big)}{m}. \quad (3)$$
Remark: Kolmogorov Growth is indirectly motivated by Kolmogorov complexity itself. However, unlike Kolmogorov complexity, which is the length of the shortest program that generates f, Kolmogorov Growth is concerned with the smallest function space that f can belong to which can still fit the data well. Functions which have shorter descriptions usually require a smaller number of variables and are expected to have lower Kolmogorov Growth. Moreover, it turns out that Kolmogorov Growth allows us to directly comment on the error bounds for the function f (see Section 3.1). We believe that a possible direction of future work would be a deeper study of the relationship between Kolmogorov Growth and Kolmogorov complexity itself. Remark: Note that, in the binary classification scenario, for a completely unstructured function f (i.e., f outputs random labels at every point $X \in \mathbb{R}^d$), one expects $KG(f)$ to be near its maximum value (i.e., $\log 2$). A structured f would generate shorter descriptions with fewer parameters and would therefore have smaller $KG(f)$; a worked example is given below. 
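To make the definitions concrete, consider 1-D threshold classifiers $f(z) = \mathrm{sign}(z - t)$: taking the threshold t as the sole free parameter of the description $D_f$ generates a function space whose empirical growth on m distinct points is m + 1 (one labeling per gap between sorted points), so $\widehat{KG}_S(f) \le \log(m+1)/m$, far below the $\log 2$ ceiling of an unstructured function. The snippet below (our own minimal illustration, not from the paper's released code) enumerates the labelings directly:

```python
import numpy as np

def empirical_growth_thresholds(z):
    """Count the distinct labelings {sign(z - t) : t in R} induces on the sample z."""
    labelings = set()
    # Candidate thresholds: one below the minimum, and one at each sample point;
    # every other threshold reproduces one of these labelings.
    for t in np.concatenate(([z.min() - 1.0], z)):
        labelings.add(tuple(np.where(z > t, 1, -1)))
    return len(labelings)

rng = np.random.default_rng(0)
m = 20
z = np.sort(rng.uniform(-1, 1, size=m))
growth = empirical_growth_thresholds(z)   # m + 1 = 21 labelings
kg_hat = np.log(growth) / m               # ~0.15, versus log(2) ~ 0.69
print(growth, kg_hat)
```

As m grows, $\log(m+1)/m \to 0$, reflecting the intuition that a one-parameter description pins down the function ever more cheaply per sample.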
3.1 Bounding Generalization Error using Kolmogorov Growth Here we present error bounds that depend on $KG(f)$, where f is the classification function, given the m training data samples and their labels in S. As before, the data samples and labels in S are drawn from some underlying distribution $P^m$. We now define a set of error functions for computing the training loss (0-1 loss) on S, denoted $\widehat{\mathrm{err}}_S(f)$, and the overall generalization error with respect to the distribution P, denoted $\mathrm{err}_P(f)$. We define them as follows:
$$\widehat{\mathrm{err}}_S(f) = \sum_{i=1}^{m} \frac{1 - f(z_i)\,y_i}{2m} \quad (4)$$
$$\mathrm{err}_P(f) = \mathbb{E}_{z,y \sim P}\left[\frac{1 - f(z)\,y}{2}\right]. \quad (5)$$
These definitions hold for any function f. Note that the error functions depend both on the function f and on the distribution P. With this, we have the following results. The proofs of all results are provided in the supplementary material. Theorem 1 For $0 < \delta < 1$, with probability $p \ge 1 - \delta$ over the draw of S, we have
$$\mathrm{err}_P(f) \le \widehat{\mathrm{err}}_S(f) + \sqrt{2\,KG_m(f)} + \sqrt{\frac{\log(1/\delta)}{2m}}. \quad (6)$$
The following corollary of the above theorem gives bounds that depend on the empirical Kolmogorov Growth $\widehat{KG}_S(f)$. Corollary 1.1 For $0 < \delta < 1$, with probability $p \ge 1 - \delta$ over the draw of S, we have
$$\mathrm{err}_P(f) \le \widehat{\mathrm{err}}_S(f) + \sqrt{2\,\widehat{KG}_S(f)} + 4\sqrt{\frac{2\log(4/\delta)}{m}}. \quad (7)$$
Remark: Theorem 1 and its corollary essentially state that for functions f of lower Kolmogorov Growth, we should expect a smaller generalization gap. In what follows, we outline ways to approximate the empirical Kolmogorov Growth $\widehat{KG}_S(f)$. 4 Teacher-Student Approximation Bounds for Kolmogorov Growth The fundamental idea for approximating the empirical Kolmogorov Growth of a function f which belongs to the function space $\mathcal{F}$ is to use a student classifier with a function space $\mathcal{F}^1_{small}$ (with a much smaller parametric count and complexity) to approximate the given function f (the teacher). We apply this idea recursively. That is, if the function $f^1_{small} \in \mathcal{F}^1_{small}$ approximates f best, we recursively estimate the empirical Kolmogorov Growth of $f^1_{small}$ by approximating it via another classifier with a smaller function space $\mathcal{F}^2_{small}$ (thus, $\Pi_m(\mathcal{F}^2_{small}) < \Pi_m(\mathcal{F}^1_{small})$), and so on. We use this recursive procedure to obtain a final estimate for $\widehat{KG}_S(f)$. We conjecture that, like Kolmogorov complexity itself, the true $\widehat{KG}_S(f)$ is uncomputable, so the estimate that results from this recursive approximation process is essentially an upper bound to the true $\widehat{KG}_S(f)$. The following theorem establishes an upper bound on the empirical KG approximation from a single smaller student classifier. Theorem 2 Let $f \in \mathcal{F}: \mathbb{R}^d \to \mathbb{R}^2$ be a function which outputs class logits for binary classification. We construct a function space $\mathcal{F}^1_{small}$ such that $\Pi_m(\mathcal{F}^1_{small}) < \Pi_m(\mathcal{F})$ and, for all $g \in \mathcal{F}^1_{small}$, there exists a description $D_g$ such that $\hat{\Pi}_S(\mathcal{F}(D_g)) \le \hat{\Pi}_S(\mathcal{F}^1_{small})$. We approximate f via another function $f^1_{small} \in \mathcal{F}^1_{small}: \mathbb{R}^d \to \mathbb{R}^2$, and let $\epsilon_{\max}$ be such that
$$2^{\epsilon_{\max}/2} = \max_{X \in \mathbb{R}^d} \left\|f^1_{small}(X) - f(X)\right\|_2. \quad (8)$$
Denote the output probabilities generated from the corresponding logit outputs of f(X) using the softmax operator (temperature T = 1) as $P_0(f(X))$ (label 1 output) and $P_1(f(X))$ (label 2 output). Let $0 \le \delta \le 1$ be such that
$$\Pr\left(\left|\log\frac{P_0(f(X))}{P_1(f(X))}\right| \le \epsilon_{\max}\right) \le \delta \quad (9)$$
when X is drawn from S. Then we have
$$\widehat{KG}_S(f) \le \delta \log 2 + \frac{\log \hat{\Pi}_S\big(\mathcal{F}^1_{small}\big)}{m}, \quad (10)$$
where m is the number of samples in S. Remark: Theorem 2 demonstrates a way to bound the true empirical Kolmogorov Growth of the function f using a single student classifier function $f^1_{small} \in \mathcal{F}^1_{small}$. Note that, unlike in Theorem 1, there are no direct constraints on the expressivity of $\mathcal{F}^1_{small}$, but rather a joint constraint on $\mathcal{F}^1_{small}$ and $\delta$ combined. If $\mathcal{F}^1_{small}$ cannot fit all m points sampled from $P^m$, then the approximation error in $\delta$ will likely be higher, which adds to the estimate of $\widehat{KG}_S(f)$. The proof of Theorem 2 and its extension to the recursive approximation case are given in the supplementary material. 
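As a concrete recipe for the bound in (10), the sketch below (our illustration; all names and the growth surrogate are assumptions, not the authors' released code) estimates $\epsilon_{\max}$ from the worst-case logit gap as in (8), estimates $\delta$ as the empirical frequency of the event in (9), and stands in a Sauer-lemma surrogate $(em/d)^d$ for $\hat{\Pi}_S(\mathcal{F}^1_{small})$, with d a VC-dimension proxy for the student. The result can then be plugged into the right-hand side of Corollary 1.1:

```python
import numpy as np

def kg_upper_bound(teacher_logits, student_logits, vc_dim_proxy):
    """Theorem 2-style upper bound on empirical Kolmogorov Growth (a sketch).

    teacher_logits, student_logits: arrays of shape (m, 2), binary-class logits.
    vc_dim_proxy: an assumed VC-dimension surrogate for the student's space.
    """
    m = teacher_logits.shape[0]
    # eps_max from the worst-case L2 logit gap: 2**(eps_max / 2) = max_X ||f1 - f||_2
    gap = np.linalg.norm(student_logits - teacher_logits, axis=1).max()
    eps_max = 2.0 * np.log2(max(gap, 1e-12))
    # delta: empirical frequency of teacher log-odds falling within eps_max;
    # under softmax, log(P0/P1) is simply the difference of the two logits.
    log_odds = teacher_logits[:, 0] - teacher_logits[:, 1]
    delta = np.mean(np.abs(log_odds) <= eps_max)
    # Sauer-lemma surrogate for the student's empirical growth function (assumption).
    log_growth = vc_dim_proxy * np.log(np.e * m / vc_dim_proxy)
    return delta * np.log(2) + log_growth / m

def corollary_1_1_rhs(train_err, kg_hat, m, conf=0.05):
    """Right-hand side of Corollary 1.1; conf is the confidence parameter delta in (7)."""
    return train_err + np.sqrt(2 * kg_hat) + 4 * np.sqrt(2 * np.log(4 / conf) / m)
```

The 0-1 training error and the confidence term come directly from Corollary 1.1; the only modelling choice here is the growth surrogate standing in for the VC-based approximation deferred to [7].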
5 Network-to-Network (N2N) Regularization We denote the base network to be trained as $N^{base}$ and the function modelled by the network weights $w_{base}$ as $N^{base}(w_{base}, X)$, where $X \in \mathbb{R}^d$ is the input. Here, $N^{base}(w_{base}, X)$ represents the output logits of the network $N^{base}$ when presented with the input X. Thus, we have $N^{base}(w_{base}, X) \in \mathbb{R}^c$, where c is the number of classes. For what follows, let us denote the available training data and their labels by $S = \{X_i, y_i\}_{i=1}^m$. The approach that follows is directly motivated by the result in Theorem 2. The main objective is to ensure that the KG of the network stays low during learning, using the teacher-student approximation error in Theorem 2. This is primarily achieved by ensuring that, during training, the base network function $N^{base}(w_{base}, X)$ is always near some function within the smaller network's function space. Next, we outline the details of the proposed multi-level network-to-network (N2N) regularization approach. 5.1 Multi-Level N2N: Details In multi-level N2N regularization, we have multiple smaller networks $n^{small}_1, n^{small}_2, \dots, n^{small}_K$ of decreasing complexity, such that $\Pi_m(\mathcal{F}^1_{small}) > \Pi_m(\mathcal{F}^2_{small}) > \dots > \Pi_m(\mathcal{F}^K_{small})$.
Algorithm 1 N2N Regularization (Multi-Level)
Input: Training data $\{X_i, y_i\}_{i=1}^m$, base network $N^{base}$ and its weights $w_{base}$, K networks $n^{small}_1, n^{small}_2, \dots, n^{small}_K$ with weights $w_1, w_2, \dots, w_K$ (s.t. $|n^{small}_1| > |n^{small}_2| > \dots > |n^{small}_K|$ in size), number of epochs J, hyperparameters $\lambda_0, \lambda_1, \lambda_2, \dots, \lambda_{K-1}$, $\alpha_0, \dots, \alpha_K$, $e_{base}$, $e_{small}$.
1: for $j = 1, 2, \dots, J$ do
2:   for $iter = 1, 2, \dots, e_{base}$ do
3:     $L_1 = \sum_{i=1}^{m} \big( L_{CE}(N^{base}(X_i), y_i) + \lambda_0 \| N^{base}(X_i) - n^{small}_1(X_i) \|_2^2 \big)$
4:     Weight update: $w_{base} \leftarrow w_{base} - \frac{\alpha_0}{m} \frac{\partial L_1}{\partial w_{base}}$
5:   for $k = 1, 2, \dots, K$ do
6:     for $iter = 1, 2, \dots, e_{small}$ do
7:       if $k = 1$ then
8:         $L_k = \sum_i \big( \| N^{base}(X_i) - n^{small}_1(X_i) \|_2^2 + \lambda_1 \| n^{small}_2(X_i) - n^{small}_1(X_i) \|_2^2 \big)$
9:       else if $k = K$ then
10:        $L_k = \sum_i \| n^{small}_k(X_i) - n^{small}_{k-1}(X_i) \|_2^2$
11:       else
12:        $L_k = \sum_i \big( \| n^{small}_k(X_i) - n^{small}_{k-1}(X_i) \|_2^2 + \lambda_k \| n^{small}_k(X_i) - n^{small}_{k+1}(X_i) \|_2^2 \big)$
13:      Weight update: $w_k \leftarrow w_k - \frac{\alpha_k}{m} \frac{\partial L_k}{\partial w_k}$
The corresponding functions resulting from the network weights $w_1, w_2, \dots, w_K$ are denoted as $n^{small}_1(w_1, X), n^{small}_2(w_2, X), \dots, n^{small}_K(w_K, X)$. Next, we outline the loss functions for all networks. For the larger, to-be-trained base network $N^{base}$, the loss objective is to minimize the cross-entropy loss on S while staying close to $n^{small}_1(w_1, X)$ for some choice of weights $w_1$ ($L_1$ in Algorithm 1). For the smaller network $n^{small}_1$, the objective is two-fold: find the weight configuration $w_1$ that approximates the larger network function $N^{base}$, while also being close to $n^{small}_2(w_2, X)$ for some choice of $w_2$ ($L_2$ in Algorithm 1). Thus, we force the smaller network $n^{small}_1$ to be close to the base network and to an even lower-complexity network $n^{small}_2$ at the same time. Similarly, we can define $L_3, L_4, \dots, L_{K-1}$; the exception is $L_K$, which applies to the smallest network $n^{small}_K$, whose loss objective is simply to keep $n^{small}_K(w_K, X)$ close to $n^{small}_{K-1}(w_{K-1}, X)$. Finally, we optimize the loss functions in an alternating manner in the order $L_1, L_2, \dots, L_K$. Details are given in Algorithm 1. The choice of mean-squared-error-based loss functions here follows directly from the result in Theorem 2. Note that Algorithm 1 updates with the entire batch of training data points at each iteration, and can be extended to the case of minibatch stochastic gradient descent (SGD); a minimal sketch is given below. 
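To ground the alternating updates, here is a minimal single-level sketch (K = 1) of one N2N epoch in PyTorch; the network modules, optimizers, and the value of $\lambda_0$ are placeholders rather than the authors' exact configuration, and the 1/m factors of Algorithm 1 are folded into the losses by averaging:

```python
import torch
import torch.nn.functional as F

def n2n_epoch(base, small, X, y, opt_base, opt_small,
              lam0=0.5, e_base=3, e_small=1):
    """One epoch of single-level N2N regularization (Algorithm 1 with K = 1)."""
    for _ in range(e_base):  # L1: cross-entropy plus a pull toward the student
        opt_base.zero_grad()
        logits = base(X)
        target = small(X).detach()               # student held fixed in this phase
        loss1 = F.cross_entropy(logits, y) \
              + lam0 * ((logits - target) ** 2).sum(dim=1).mean()
        loss1.backward()
        opt_base.step()
    for _ in range(e_small):  # L_K (K = 1 case): pull the student toward the base
        opt_small.zero_grad()
        loss2 = ((base(X).detach() - small(X)) ** 2).sum(dim=1).mean()
        loss2.backward()
        opt_small.step()
```

In the multi-level case, each intermediate student additionally carries a $\lambda_k$-weighted term pulling it toward the next smaller network, exactly as in lines 8 and 12 of Algorithm 1.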
5.2 Other Relevant Approaches in Literature To the best of our knowledge, our proposed approach is novel, and we found little directly related prior work. Conceptually, we found the reverse knowledge distillation method [16] to be the most relevant to our proposed approach; it regularizes large teacher networks using smaller, trained student networks of lesser depth. The output logits of the trained student networks are then essentially re-used for smoothing the outputs of the larger neural network. Here, we do not directly use trained student networks to supervise the teacher, but instead simply ensure that during training, the teacher network is within reach of some student network (which may change throughout the training process), which is a more relaxed constraint. Also, the mean-squared-error-based approximation error between the student and teacher networks is motivated by Theorem 2, and differs from the KL-divergence-based measures used in knowledge distillation. Another point of difference is that N2N uses a multi-level approach for recursively regularizing multiple networks of different levels of complexity. 6 Experiments We test N2N on three datasets: MNIST [17], CIFAR-10 [18] and CIFAR-100 [19]. We also demonstrate that N2N regularization improves performance in the presence of label noise. Lastly, we analyse the Kolmogorov Growth of networks during training. Experiments were carried out on an RTX 2060 GPU or on a Tesla V100 or A100 GPU. As mentioned in Algorithm 1, an epoch refers to a total of $e_{base}$ iterations of training the base network and $e_{small}$ iterations of training the smaller networks on the whole dataset. Code will be made available at https://github.com/rghosh92/N2N. 6.1 Supervised Classification: MNIST, CIFAR-10, CIFAR-100 The primary objective of the experiments presented here is to see whether N2N regularization can drive the training process towards network configurations that generalize better. For each dataset, results are reported for various choices of training data size. Furthermore, to show that our regularization approach complements other commonly used regularization approaches, we show results when our approach is combined with Dropout and L2-norm regularization. For the ResNet networks (CIFAR-10/100), we combine N2N with L2-norm regularization. All networks were trained for a total of 200 iterations, and in each case the reported results are averaged over five networks. For all experiments we set $e_{base} = 3$, $e_{small} = 1$ in Algorithm 1. The values of the regularization parameters ($\lambda_0$, $\lambda_1$) are provided in the supplementary material. Note that due to the additional iterations for training the smaller networks, the worst-case training time for the N2N approach is 1.5 times that of standard training. Across all three datasets, we generally find that for larger training data sizes, smaller regularization parameters yield the best performance, reinforcing the fact that N2N is indeed a form of regularization. 
This is primarily because for large training data, the distribution is dense enough for the network to learn from, and thus less emphasis can be given to the N2N regularization term. Results are shown in Table 1, and the average approximation error $\delta$ for the trained networks is shown in Table 2. We note that the use of N2N regularization improves test accuracy. In particular, we see that N2N regularization complements common regularization approaches such as Dropout and the L2-norm well. In all cases we find that combining these well-known regularization approaches with the proposed approach yields the best results. Furthermore, the improvement in performance persists when the training data size is increased. Lastly, in most cases, we see that 2-level N2N regularization (N2N-2, K = 2 in Algorithm 1) outperforms single-level N2N (N2N-1, K = 1 in Algorithm 1), with the exception of CIFAR-10 with the full training dataset. For the CIFAR-10 and CIFAR-100 datasets, we used the benchmark ResNet architectures ResNet-44 and ResNet-50, respectively. Our results with L2-norm regularization for the ResNet-44 and ResNet-50 architectures are slightly better than the results originally reported in [20]. For the MNIST dataset, we used a 5-layer CNN with three convolutional layers and two fully connected layers. Network architecture details are provided in the supplementary material. Note that although better results can be found in the literature, our objective was to demonstrate that using N2N regularization in conjunction with common regularization approaches can benefit both shallow CNN architectures (MNIST) and ResNets (CIFAR-10, CIFAR-100). Furthermore, as Table 2 shows, we find that N2N reduces the empirical KG of trained networks, and that datasets on which test accuracies are lower yield higher KG of trained networks. This supports the implications of Theorem 1, as high-KG functions are expected to have a larger generalization gap. 6.2 Learning with Noisy Labels As our proposed regularization approach constrains the network function to be simpler by minimizing an approximation of Kolmogorov Growth, it naturally applies to the case of noisy training labels. Without regularization, label noise in the training data usually forces a network to emulate a more complex function, as it potentially makes the decision boundary more complex, a fact that we also observe empirically in Section 6.3. We stipulate that N2N regularization should help the network achieve simpler functions to approximate the training data labels, favoring simpler decision boundaries over complex ones, and thus potentially shielding against the corrupted labels to a certain extent. We test whether enforcing a simpler function (large $\lambda_0, \dots, \lambda_{K-1}$) at the cost of compromising training loss can help improve test accuracy when the training data is corrupted by label noise. We tested the cases where symmetric and asymmetric label noise of some probability p was applied (same as in [21]), and show our results for symmetric noise with p = 0.5 and p = 0.2; the noise model is sketched below. Results with asymmetric pair-flip noise of probability p = 0.45 are shown in the Supplementary Material. 
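For concreteness, symmetric label noise of probability p can be injected as in the sketch below (one common convention, matching our reading of [21]; some protocols instead allow the uniform redraw to include the true class, and the variable names are ours):

```python
import numpy as np

def corrupt_symmetric(labels, p, num_classes, seed=0):
    """Flip each label with probability p to a uniformly random *other* class."""
    rng = np.random.default_rng(seed)
    labels = np.array(labels, copy=True)
    flip = rng.random(len(labels)) < p
    # Draw an offset in {1, ..., C-1} so the new label always differs from the old one.
    offsets = rng.integers(1, num_classes, size=flip.sum())
    labels[flip] = (labels[flip] + offsets) % num_classes
    return labels
```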
First, we show the results for symmetric label noise of probability p = 0.5 and p = 0.2 on MNIST, CIFAR-10 and CIFAR-100 in Table 3. For the F-correction [22], Decoupling [23], MentorNet [24] and Co-Teaching [25] methods, we report the accuracy over the last ten iterations of training as observed in [25], along with their standard cross-entropy results with the corresponding network architectures for reference. We do the same for our implemented SCE and N2N methods on MNIST and CIFAR-10, but for CIFAR-100, we report the accuracies using a 48k-2k training-validation split of the data for both, as we find it to yield the best performance (owing to difficult convergence). Note that we use the same network configurations for SCE. The values of $\lambda_0$ and $\lambda_1$ are provided in the supplementary material. We find that N2N regularization yields competitive performance in most cases. We also plot the test accuracy as a function of the regularization parameter $\lambda_0$ in Figure 1. We find that for MNIST, large $\lambda_0$ helps achieve significantly higher test accuracy, whereas for CIFAR-10 and CIFAR-100 accuracy peaks around $\lambda_0 = 0.6$ and $\lambda_0 = 0.5$, respectively. Note that accuracies may differ from Table 3 because of the different training configurations used. 6.3 Comparing Kolmogorov Growth Trajectories during Training The improvements observed via the use of N2N regularization lead to the question of how network trajectories differ when N2N regularization is applied, compared to when it is not. We use the result in Theorem 2 to compute an upper-bound approximation to the empirical Kolmogorov Growth of the network function, and plot this approximation of $\widehat{KG}_S(f)$ for the function f represented by the neural network during the training process. Note that the variation of KG in all plots is owing only to changes in the approximation error term $\delta$ in Theorem 2, as $\mathcal{F}^1_{small}$ is fixed to a single-layer CNN (of a fixed configuration) for all results in Fig. 2; $\hat{\Pi}_S(\mathcal{F}^1_{small})$ was estimated using a VC-dimension-based approximation shown in [7]. Results are shown in Figure 2. We find that in the case of no training label noise, the KG of networks typically has a high initial value, reducing steeply within a few epochs of training, after which it stabilizes. As expected, networks trained with N2N regularization end with lower final KG than networks trained without N2N. In the case of label noise, we make some interesting observations. First, in contrast to the noise-free case, the KG values increase with training and eventually stabilize at higher values, an almost opposite trend to the case of no label noise. This can be partly explained by the fact that as training progresses, the network slowly adapts its decision boundary to fit the erroneous labelling, eventually resulting in a decision boundary of high complexity. For the label-noise case, we find that N2N regularization significantly reduces the increase of KG during the training process. Furthermore, larger values of the $\lambda$ parameters lead to networks which exhibit smaller KG values. This also helps explain the significant gains in test accuracy observed for MNIST earlier in Table 3 when using N2N regularization. 
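In outline, these KG trajectories can be reproduced by logging the Theorem 2 proxy once per epoch; the sketch below reuses the illustrative kg_upper_bound and n2n_epoch helpers from the earlier sketches (all names are ours, not the released code), with a fixed student mirroring the fixed $\mathcal{F}^1_{small}$:

```python
import torch

# base, small, X, y, opt_base, opt_small, num_epochs are placeholders; small is a
# fixed-architecture student, mirroring the fixed single-layer CNN used for Fig. 2.
kg_trace = []
for epoch in range(num_epochs):
    n2n_epoch(base, small, X, y, opt_base, opt_small)
    with torch.no_grad():
        t = base(X).cpu().numpy()    # teacher logits on the training sample
        s = small(X).cpu().numpy()   # student logits on the training sample
    # Binary-classification form of the Theorem 2 proxy; d = 50 is an arbitrary
    # VC-dimension stand-in for the student space.
    kg_trace.append(kg_upper_bound(t, s, vc_dim_proxy=50))
```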
7 Discussion and Reflections The results in this paper further the recent work in [5], where it was shown that neural networks are inherently biased towards simpler functions of lower Kolmogorov complexity. In particular, we provide an actionable method for incorporating a function complexity prior while learning, using a novel measure called Kolmogorov Growth. Unlike Kolmogorov complexity, which is the length of the shortest program that generates some function f, Kolmogorov Growth is concerned with the smallest function space that f can belong to which can still fit the data well. Functions with shorter descriptions will typically need fewer variables and thus may have lower Kolmogorov Growth. Although smaller function spaces have less expressive power, recent work in [6] shows that even shallower neural nets can fit random labels on the training data points. The observations in [5], however, put the results in [6] in a new perspective: a random choice of weights in a smaller network is likely to yield a low-complexity function. Thus, even shallower networks can potentially exhibit a wide range of complexities, and among them, the higher-Kolmogorov-complexity functions are likely the ones required for a network to fit random labels (similar to the observations in [5]). In the case of label noise, N2N accounts for this fact by not directly training the shallower networks to fit the noisy labels; this keeps their descriptional complexity low, which in turn helps in regularizing the larger base network. Indeed, when using a pre-trained shallower network to regularize the base network (reverse KD), we found that performance can suffer significantly in the case of label noise. Via N2N regularization we see that enforcing low KG for large networks can improve their ability to generalize. The proposed approach greatly helps in the scenario where the training data has noisy labels, attaining competitive performance on the three tested datasets when the training labels are corrupted with symmetric label noise. In the case of label noise, we see that networks trained without N2N regularization have larger Kolmogorov Growth (see Figure 2), which reduces immediately following the application of N2N regularization. Furthermore, it is clear that by varying the emphasis on minimizing the regularization term via tuning the $\lambda$ parameters, the KG of the trained networks can be controlled effectively. As $\lambda_0$ increases, more emphasis is put on lowering KG, which improves generalization and yields better test accuracy. However, this holds only up to a threshold (see Figure 1); test accuracy decreases as $\lambda_0$ increases beyond it. In the case of training data with noisy labels, we find that the threshold is larger, because we can put less emphasis on fitting the noisy training labels and more emphasis on minimizing KG. Our theoretical results in Section 3.1 show that network configurations that can be approximated well by smaller networks of lower complexity will have low Kolmogorov Growth and, subsequently, lower generalization error. These results concur with the very recent theoretical findings in [26], which establish analogous results for the Rademacher-complexity-based generalization error framework in the context of knowledge distillation. Our main result in Theorem 1 outlines an Occam's-razor-like principle for generalization: it implies that, among all functions which have zero training error, the function with the smallest $KG_m(f)$ is the most likely to show the least generalization error. Our empirical findings consistently show that driving the networks towards simpler functions of lower Kolmogorov Growth leads to networks that generalize better. Multi-level N2N follows from a theoretical result shown in the supplementary material, where we bound the empirical KG of the base network function based on the set of recursive mean-squared-error estimates. However, KG bounds resulting from recursive estimation are provably less tight than single-estimation KG bounds of the form in Theorem 2. We believe that the additional bounding loss terms could be one of the reasons why 2-level N2N yields better performance on average than single-level N2N. 
In the case of label noise, we see that enforcing low $KG_m(f)$ on the classification function f, by increasing the regularization parameter values, can have a significant impact (Section 6.2). This also points to a current limitation of our approach: the hyperparameters ($\lambda_0$, $\lambda_1$, ...) have to be manually tuned, and automatic estimation of their optimal values is an avenue for future research. Another limitation of our work is that the growth function term in the empirical approximation of KG (Theorem 2) can potentially render the bounds quite loose. Thus, achieving tighter bounds with KG-based metrics is also a possible extension of this work. In N2N regularization, we observe that the properties of the smaller networks can dictate the learning of the base network. If we choose smaller networks which are highly rotation-invariant in their structure (e.g., by using a rotation-invariant CNN), we should expect the base network to adopt some of the rotation-invariance properties as well. We thus conducted an additional experiment on a custom MNIST [17] dataset, which contains images of digits translated randomly within the image. We added symmetric noise to the labels (p = 0.5), and tested our proposed N2N regularization approach with a student network which is highly translation-invariant (large max-pooling windows). We found that N2N shows larger improvements in this setting, reducing test error by 27% compared to other baselines. This demonstrates the possibility of extending this work by analyzing the effect of invariance/equivariance choices in the smaller networks on the generalization behaviour of the larger network, similar to the observations on distillation methods transferring inductive biases in [27]. Finally, since our work provides a certain level of robustness against label noise, it supports activities such as crowdsourced data labelling, which potentially introduces significant label noise. 8 Acknowledgements This research was supported by the National University of Singapore and by A*STAR, CISCO Systems (USA) Pte. Ltd and the National University of Singapore under its Cisco-NUS Accelerated Digital Economy Corporate Laboratory (Award I21001E0002). We would also like to acknowledge the helpful feedback provided by members of the Kent-Ridge AI research group at the National University of Singapore.
1. Are the proposed generalization bounds useful in practice?
2. How does the proposed measure of function complexity compare to other existing measures?
3. Can the authors provide more empirical evaluations to support the usefulness of the metric?
4. How does the proposed method perform under different types of label noise?
5. Can the authors provide training time results and compare them to standard training methods?
6. How does the proposed method compare to other recent works in terms of accuracy and computational cost?
7. Can the authors improve the readability of the tables and remove grammar errors?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes a measure of function complexity that is based on a function itself rather than the function class it belongs to. It uses this complexity measure to establish generalization upper bounds for classifiers. In addition, an empirical method for approximating this measure, and optimizing it, is proposed and evaluated on three small benchmark datasets, exhibiting slightly improved test accuracy over existing baselines. The method is also shown to improve resilience to label noise.
Review
The problem addressed is significant, as indeed most function-class-based generalization bounds are not particularly useful for deep networks. The work seems to be original, and is fairly clear. The theoretical analysis seems reasonable, and the experiments show slight improvements over baselines; however, the amount of improvement is not substantial.
Questions:
- Are the KG-based generalization bounds (empirically approximated using the provided method) actually useful? It would be good to compute the right-hand side of Corollary 1.1 (with KG_S replaced by the bound in (10)) on the datasets and compare this to the test error.
- Do the same trends in Table 2 hold up when the label noise is not symmetric?
Areas for improvement:
- It would be interesting to compare the empirical KG of different model architectures. For example, do modern CV architectures (e.g., ResNets, DenseNets) have a lower empirical KG than older-style architectures (from simple fully-connected networks to VGG)? In my view, this would substantially strengthen the paper as an additional way to support the usefulness of the metric.
- The authors should discuss and cite "Fantastic Generalization Measures and Where to Find Them" (Jiang et al., 2019), which has a more exhaustive list of previously proposed complexity measures. In addition, this paper has strong empirical evaluations of these measures; adding in some of these evaluations as appropriate (and showing that empirical KG outperforms other existing complexity measures on these metrics) would also strengthen the paper.
- The authors should include training time results (e.g., wall-clock time). The proposed N2N method is more expensive than standard training as it involves training multiple networks. In terms of total parameter count, I am not sure whether the N2N approach yields better accuracy than a slightly larger network with a total number of parameters approximately equal to the sum of the number of parameters in the base and smaller networks in N2N (especially given the relatively small size of the improvements). These evaluations could cause me to raise my score.
Minor comments:
- The related work section 5.2 is somewhat lacking. Although the discussion of reverse knowledge distillation is good, more related work can be discussed. (However, a fair amount of related work is discussed in the introduction.)
- The tables can be tough to read. Increasing the line spacing slightly would be good.
- There are multiple grammar errors (such as extraneous commas); I recommend that the authors carefully copyedit before the final version.
NIPS
Title Network-to-Network Regularization: Enforcing Occam's Razor to Improve Generalization Abstract What makes a classifier have the ability to generalize? There have been a lot of important attempts to address this question, but a clear answer is still elusive. Proponents of complexity theory find that the complexity of the classifier’s function space is key to deciding generalization, whereas other recent work reveals that classifiers which extract invariant feature representations are likely to generalize better. Recent theoretical and empirical studies, however, have shown that even within a classifier’s function space, there can be significant differences in the ability to generalize. Specifically, empirical studies have shown that among functions which have a good training data fit, functions with lower Kolmogorov complexity (KC) are likely to generalize better, while the opposite is true for functions of higher KC. Motivated by these findings, we propose, in this work, a novel measure of complexity called Kolmogorov Growth (KG), which we use to derive new generalization error bounds that only depend on the final choice of the classification function. Guided by the bounds, we propose a novel way of regularizing neural networks by constraining the network trajectory to remain in the low KG zone during training. Minimizing KG while learning is akin to applying the Occam’s razor to neural networks. The proposed approach, called network-to-network regularization, leads to clear improvements in the generalization ability of classifiers. We verify this for three popular image datasets (MNIST, CIFAR-10, CIFAR-100) across varying training data sizes. Empirical studies find that conventional training of neural networks, unlike network-to-network regularization, leads to networks of high KG and lower test accuracies. Furthermore, we present the benefits of N2N regularization in the scenario where the training data labels are noisy. Using N2N regularization, we achieve competitive performance on MNIST, CIFAR-10 and CIFAR-100 datasets with corrupted training labels, significantly improving network performance compared to standard cross-entropy baselines in most cases. These findings illustrate the many benefits obtained from imposing a function complexity prior like Kolmogorov Growth during the training process. 1 Introduction and Motivation On the surface, the problem of learning to generalize well over unseen data seems an impossible task. Classification is inherently a problem in function estimation, and the finite information that the training data samples impart seems hardly enough to be able to correctly guess the behaviour of the function over the unseen data samples outside the training set. However, assumption of a structured ground truth label function, which is the unknown function which generates the ground truth label for any datapoint, leads to more optimistic outlook on the problem. Without the assumption of 35th Conference on Neural Information Processing Systems (NeurIPS 2021). structuredness in the ground truth, learning is not guaranteed, as was observed in the no free lunch theorem [1]. As shown in that work, there is no universal learning algorithm which can generalize well for possible choices of ground truth label functions. Since one does not have any control over the true nature of the ground truth function in any classification problem, the other important parameter that decides the ability of a classifier to generalize, is the complexity of its function space itself. 
Over the past few decades, there have been multiple attempts at bounding the generalization error on the basis of various complexity measures [2, 3, 4] of the classifier’s function space F . The overall results of these theoretical developments indicate that generalization is primarily governed by metrics that are proportional to the size of F . Examples of complexity metrics in this regard include Rademacher complexity, VC-Dimension, and Local forms of Rademacher Complexity (for more tight bounds). However, as observed in [5] these metrics lead to relatively loose bounds on the generalization performance for deep neural networks. This is because deep neural networks have high complexity spaces with flexibility to learn any label assignment on any set of training datapoints. This was perhaps most clearly observed in [6], where even after random label assignments on the training data samples, the networks still were able to fit the training data labels with no errors. Thus, often other methods have been explored to establish the relationship between complexity measures and generalization performance [7, 8], such as causal relations. These studies lead to a natural question, which is, other than the metrics which relate to the size of the function space of a classifier, what other factors contribute towards its ability to generalize? State-of-the-art deep neural networks, as observed above, will yield very high complexity spaces, which does not explain their remarkable ability to generalize well in complex high-dimensional supervised classification problems, such as in vision. We note, that metrics such as Rademacher complexity or VC dimension, which relate to whole function spaces F (or subspaces within F which fit the training data well), essentially assign the same generalization gap to all functions within F (or the subspace within F ). However, there has been a longstanding understanding of the fact that usually among all functions which fit the training data well, simpler functions are expected to generalize better. This is an example of the Occam’s Razor principle, which states that among all hypotheses that explain a phenomenon, the simplest hypothesis is preferred [9]. In the context of neural networks, even for very high complexity deep neural network function spaces, there will always be some weight configurations of a deep neural network, which yield simpler functions. For example, all network weight configurations yielding input-output functions which can still be efficiently approximated by smaller, shallower networks, could be considered to be simpler. An extreme example of this would be where we assign all weight values to zero within a deep neural network. The function that results from this weight configuration is essentially the constant function f(X) = 0. This observation points to the fact that even within a function space of a deep CNN, not all functions are equally complex, as some of them can still be approximated by shallower networks. To that end, a primary objective of this paper is to probe complexity measures which enable us to assign a level of complexity to individual functions within a function space. Subsequently, our objective would be to steer the learning process towards network configurations which yield less complex input-output functions. An important work which explored similar directions is [10], where metrics from algorithmic complexity were developed to bias the learning process towards simpler functions. 
In this regard, measures of descriptional complexity [5] of functions have been proposed over the years, which quantify the level of complexity of an individual function, based on its shortest description. Although fundamental measures of descriptional complexity such as Kolmogorov Complexity [11] or Solomonoff Probability [12] are uncomputable, computable approximations to them have been developed [13]. In spite of this early work, there is a lack of concrete theoretical or empirical work to continue this line of investigation after [10]. Mainly, there is a lack of work that explores the relevance of such descriptional complexity measures to the generalizational ability of functions. That is, until recently, when interest in the descriptional complexity of a function was renewed as it was found that outputs of random maps tend to be biased towards simpler functions [14]. This result indirectly hinted that for classification tasks, the ground truth labelling function has a simple description with high probability. This was further investigated in [5] in the context of deep neural networks, where it was found that the Lempel-Ziv (LZ) complexity (a form of descriptional complexity) of most neural networks with random choices of weights are low (shorter description), in accordance with the result in [14]. Empirically, it was found that network weight configurations which lead to functions of smaller Lempel-Ziv complexity show better generalization performance. These results and observations show that descriptional complexity measures may help to understand empirically the generalization behaviour of deep neural networks. In this paper, we advance this line of investigation by proposing a new theoretical framework for relating descriptional complexity measures to generalization performance. Subsequently, we also provide a computable method which improves generalization performance of neural networks by lowering their descriptional complexity. 2 Contributions This paper makes the following contributions. 1. First, we undertake a brief theoretical analysis for exploring the relevance of descriptional complexity measures of functions to their expected generalization error. We propose a novel measure of complexity called Kolmogorov Growth (KG). Error bounds are estimated which portray the dependence of generalization error of a single function f toKG(f). The bounds depict that functions of higher KG(f) will likely lead to a higher generalization gap. This result formalizes the Occam’s razor principle for classifiers, and also concurs with the empirical findings in [5], where functions of higher LZ-Complexity showed worse generalization performance. 2. Like Kolmogorov Complexity, KG is also uncomputable. Therefore, we propose computable approximations of KG for neural networks, based on the concept of teacher-student approximation (similar to knowledge distillation [15]). Specifically, we show that neural network functions f which can be approximated well by smaller networks will have smaller empirical KG with high probability. 3. Next, using this idea, we then develop a novel method for regularizing neural networks, called network-to-network (N2N) regularization. N2N regularization forces trained network configurations to be of low KG. We find that doing so not only improves the generalization performance across a range of training data sizes, but also helps in the case of label noise. For instance in MNIST, we see test error decrease by 94%, reaching results competitive with benchmark methods. 4. 
Finally, we study the evolution of empirical KG as networks are trained and observe that, in the standard classification scenario, networks show a sharp decrease in their KG as training progresses. However, this trend completely reverses when the training data labels are noise corrupted and N2N regularization is able to stem the undesirable increase of KG during training. 3 Kolmogorov Growth: Relevance to Generalization Assume that our training data consists of m d-dimensional i.i.d samples and their labels S = [(z1, y1), (z2, y2), .., (zm, ym)] drawn from some distribution Pm. We propose two growth measures for a single function f , namely Kolmogorov Growth KGm(f) and empirical Kolmogorov Growth K̂GS(f). Note that, when appropriate, we may drop the subscript m and refer to KGm(f) as KG(f). These measures are primarily motivated from the well-known growth function in statistics. The growth function Πm(F) is defined as Πm(F) = max z∈Rd |{(f(z1), f(z2), ..., f(zm)) : f ∈ F}| (1) Note that the growth function is defined for a space of functions and captures the maximum number of label assignments a function space F can generate on any m points in Rd. To define Kolmogorov Growth measures for a single function f , we first need to generate a function space from f , based on a description Df of f . This is done via noting all the parameters involved in the description Df (other than the co-ordinates in Rd), and then assigning all possible values to those parameters to generate a space of functions F(Df ) from the description Df . Next, we note that for a function f , we consider multiple descriptions (D1f , D 2 f , D 3 f , ...), all of which faithfully generate the output of f over all points in Rd, with the additional constraint that each F(Dif ) should fit any m datapoints sampled from Pm. Given these observations and denominations, we then can define the Kolmogorov Growth of a function f as follows: KGm(f) = min i log Πm ( F(Dif ) ) m . (2) Note that KGm(f) requires the knowledge of distribution Pm of the training samples. For an instance of the datapoints in S, we define the empirical Kolmogorov Growth via the empirical growth function Π̂S(F), which only computes the number of label assignments a function space can generate over the given m training data samples in S. Thus, Π̂S(F) = |{(f(z1), f(z2), ..., f(zm)) : f ∈ F}|. This leads to the following definition of the empirical Kolmogorov Growth of a function f : K̂GS(f) = min i log Π̂S ( F(Dif ) ) m . (3) Remark: Kolmorogov growth is indirectly motivated from Kolmogorov complexity itself. However, unlike Kolmogorov complexity, which is the length of the shortest program that generates f , Kolmogorov Growth is concerned with the smallest function space that f can belong to, that can still fit the data well. Functions which have shorter descriptions usually require a smaller number of variables and are expected to have lower Kolmogorov Growth. Moreover, it turns out that Kolmogorov Growth allows us to directly comment on the error bounds for the function f (see Section 3.1). We believe that a possible direction of future work would be to do a deeper study of the relationship between Kolmogorov Growth and Kolmogorov Complexity itself. Remark: Note that, in the binary classification scenario, for a completely unstructured function f (i.e., f outputs random labels at every point X ∈ Rd), one expects KG(f) to be near its maximum value (i.e., log 2). 
A structured f would generate shorter descriptions with fewer parameters and therefore lead to smaller KG(f). 3.1 Bounding Generalization Error using Kolmogorov Growth Here we present error bounds that depend on KG(f), where f is the classification function, given the m training data samples and their labels in S. As before, the data samples and labels in S are drawn from some underlying distribution Pm. We now define a set of error functions for computing training loss (0-1 loss) on S, denoted as êrrS(f), and the overall generalization error with respect to the distribution P , denoted by errP (f). We define them as follows: êrrS(f) = m∑ i=1 (1− f(zi)yi) 2m (4) errP (f) = E z,y∼P [ (1− f(z)y) 2 ] . (5) These definitions hold for any function f . Note that the error functions depend both on the function f and the distribution P . With this, we have the following results. The proofs of all results are provided in the supplementary material. Theorem 1 For 0 < δ < 1, with probability p ≥ 1− δ over the draw of S, we have errP (f) ≤êrrS(f) + √ 2KGm(f) + √ log (1/δ) 2m . (6) The following corollary of the above theorem gives bounds that depend on empirical Kolmogorov growth K̂GS(f). Corollary 1.1 For 0 < δ < 1, with probability p ≥ 1− δ over the draw of S, we have errP (f) ≤êrrS(f) + √ 2K̂GS(f) + 4 √ 2 log (4/δ) m . (7) Remark: Theorem 1 and its corollary essentially state that for functions f of lower Kolmogorov growth, we should expect a smaller generalization gap. In what follows, we outline ways to approximate the empirical Kolmorogov growth K̂GS(f). 4 Teacher-Student Approximation Bounds for Kolmogorov Growth The fundamental idea for approximating empirical Kolmogorov growth of a function f which belongs to the function space F is to use a student classifier with a function space F1small (with a much smaller parametric count and complexity) to approximate the given function f (the teacher). We apply this idea recursively. That is, if the function f1small ∈ Fsmall approximates f best, we recursively estimate the empirical Kolmogorov growth of f1small by approximating it via another classifier with a smaller function space F2small (thus, Πm(F 2 small) < Πm(F 1 small)), and so on. We use this recursive way to then obtain a final estimate for K̂GS(f). We conjecture that, like Kolmogorov complexity itself, the true K̂GS(f) is uncomputable, so the estimate that results from this recursive approximation process is essentially an upper bound to the true K̂GS(f). The following theorem establishes an upper bound to empirical KG approximation from a single smaller student classifier. Theorem 2 Given the function f ∈ F : Rd −→ R2 which outputs class logits for binary classification. We construct a function space F1small such that Πm(F 1 small) < Πm(F) and ∀g ∈ F 1 small, there exists a description Dg such that Π̂S (F(Dg)) ≤ Π̂S(F1small). We approximate f via another function f1small ∈ F 1 small : Rd −→ R2 and let max be such that 2max/2 = max X∈Rd ‖f1small(X)− f(X)‖2. (8) Denote the output probabilities generated from the corresponding logit outputs of f(X) using the softmax operator (temperature T = 1), as P0(f(X)) (label 1 output) and P1(f(X)) (label 2 output). Let 0 ≤ δ ≤ 1 be such that Pr (∣∣∣∣log(P0(f(X))P1(f(X) )∣∣∣∣ ≤ max) ≤ δ, (9) when X is drawn from S. Then we have, K̂GS(f) ≤ δ log 2 + log Π̂S ( F1small ) m , (10) where m is the number of samples in S. 
Remark: Theorem 2 demonstrates a way to bound the true empirical Kolmogorov growth of the function f , using a single student classifier function f1small ∈ F 1 small. Note that unlike in Theorem 1, there are no direct constraints on the expressivity of F1small, but rather a joint constraint on F 1 small and δ combined. If F1small cannot fit all m points sampled from Pm, then the approximation error in δ will likely be higher, which will add to the estimate of K̂GS(f). The proof of Theorem 2 and its extension to the recursive approximation case are given in the supplementary material. 5 Network-to-Network (N2N) Regularization We denote the base network to be trained as N base and the function modelled by the network weights wbase asN base(wbase, X), whereX ∈ Rd is the input. Here, N base(wbase, X) represents the output logits for the networkN base when presented with the inputX . Thus, we haveN base(wbase, X) ∈ Rc, where c is the number of classes. For what follows, let us denote the available training data and their labels by S = {Xi, yi}mi=1. The approach that follows is directly motivated from the result in Theorem 2. The main objective is to ensure that the KG of the network stays low during learning, using the teacher-student approximation error in Theorem 2. This is primarily achieved by ensuring that during training, the base network function N base(wbase, X) is always near to some function within the smaller network’s function space. Next, we outline the details of the proposed multi-level network-to-network (N2N) regularization approach. 5.1 Multi-Level N2N: Details In multi-level N2N regularization, we have multiple smaller networks nsmall1 , n small 2 , ..., n small K of decreasing complexity such that Πm(F1small) > Πm(F 2 small) > · · · > Πm(F K small). Algorithm 1 N2N Regularization (Multi-Level) Input: Training data {Xi, yi}mi=1, base network N base and its weights wbase, K networks nsmall1 , n small 2 , ..., n small K , with weights w1, w2, ..., wK (s.t. |nsmall1 | > |nsmall2 | > ..|nsmallK | in size), Number of epochs J , Hyperparameters λ0, λ1, λ2, ..., λK−1, α0, .., αK , ebase, esmall. 1: for j = 1, 2, . . . , J do 2: for iter = 1, 2, . . . , ebase do 3: L1 = ∑m i=1(LCE(N base(Xi), yi) + λ0‖N base(Xi)− nsmall1 (Xi)‖2) 4: Weight update: wbase ←− wbase − α0m ∂L1 ∂wbase 5: for k = 1, 2, . . . ,K do 6: for iter = 1, 2, . . . , esmall do 7: if k = 1 then 8: Lk = ∑ i‖N base(Xi)− nsmall1 (Xi)‖2 + λ1‖nsmall2 (Xi)− nsmall1 (Xi)‖2 9: else if k = K then 10: Lk = ∑ i‖nsmallk (Xi)− nsmallk−1 (Xi)‖2 11: else 12: Lk = ∑ i‖nsmallk (Xi)− nsmallk−1 (Xi)‖2 + λk‖nsmallk (Xi)− nsmallk+1 (Xi)‖2 13: Weight update: wk ←− wk − αkm ∂Lk ∂w1 The corresponding functions resulting from the network weights w1, w2, .., wK are denoted as nsmall1 (w1, X), n small 2 (w2, X), ..., n small K (wK , X). Next, we outline the loss functions for all networks. For the larger to-be-trained base networkN base, the loss objective is to minimize cross-entropy loss on S while being close to nsmall1 (w1, X) for some choice of weights w1 (L1 in Algorithm 1). For the smaller network nsmall1 , the objective is two-fold: find the weight configuration w1 that approximates the larger network function N base, while also being close to nsmall2 (w2, X) for some choice of w2 (L2 in Algorithm 1). Thus, we force the smaller network nsmall1 to be close to the base network and an even lower-complexity network nsmall2 at the same time. Similarly we can define L3,L3, ..,LK−1, except for LK which applies to the smallest network nsmallK . 
The choice of mean-squared-error-based loss functions here follows directly from the result in Theorem 2. Note that Algorithm 1 updates with the entire batch of training data points at each iteration; it extends straightforwardly to minibatch stochastic gradient descent (SGD).

5.2 Other Relevant Approaches in Literature

To the best of our knowledge, our proposed approach is novel, and we found little directly related work. Conceptually, the reverse knowledge distillation method [16] is the most relevant to our approach; it regularizes large teacher networks using smaller, trained versions of student networks of lesser depth. The output logits of the trained student networks are then essentially re-used for smoothing the outputs of the larger neural network. In contrast, we do not directly use trained student networks to supervise the teacher; instead, we merely ensure that during training the teacher network remains within reach of some student network (which may change throughout the training process), which is a more relaxed constraint. Also, the mean-squared approximation error between the student and teacher networks is motivated by Theorem 2 and differs from the KL-divergence-based measures used in knowledge distillation. Another point of difference is that N2N uses a multi-level approach for recursively regularizing multiple networks at different levels of complexity.

6 Experiments

We test N2N on three datasets: MNIST [17], CIFAR-10 [18] and CIFAR-100 [19]. We also demonstrate that N2N regularization improves performance in the presence of label noise. Lastly, we analyse the Kolmogorov growth of networks during training. Experiments were carried out on an RTX 2060, a Tesla V100, or an A100 GPU. As mentioned in Algorithm 1, an epoch refers to a total of $e_{\mathrm{base}}$ iterations of training the base network and $e_{\mathrm{small}}$ iterations of training the smaller networks on the whole dataset. Code will be made available at https://github.com/rghosh92/N2N.

6.1 Supervised Classification: MNIST, CIFAR-10, CIFAR-100

The primary objective of the experiments presented here is to see whether N2N regularization can drive the training process towards network configurations that generalize better. For each dataset, results are reported for various choices of training data size. Furthermore, to show that our regularization approach complements other commonly used regularization approaches, we report results when it is combined with Dropout and L2-norm regularization. For the ResNet networks (CIFAR-10/100), we combine N2N with L2-norm regularization. All networks were trained for a total of 200 iterations, and in each case the reported results are averaged over five networks. For all experiments we set $e_{\mathrm{base}} = 3$, $e_{\mathrm{small}} = 1$ in Algorithm 1. The values of the regularization parameters ($\lambda_0, \lambda_1$) are provided in the supplementary material. Note that, due to the additional iterations for training the smaller networks, the worst-case training time of the N2N approach is 1.5 times that of standard training. Across all three datasets, we generally find that larger training data sizes call for smaller regularization parameters for best performance, reinforcing the fact that N2N is indeed a form of regularization.
This is primarily because for large training data, the distribution is dense enough for the network to learn from directly, so less emphasis needs to be placed on the N2N regularization term. Results are shown in Table 1, and the average approximation error $\delta$ of the trained networks is shown in Table 2. We note that the use of N2N regularization improves test accuracy. In particular, N2N regularization complements common regularization approaches such as Dropout and the L2-norm well: in all cases, combining these well-known approaches with the proposed one yields the best results. Furthermore, the improvement in performance persists when the training data size is increased. Lastly, in most cases, 2-level N2N regularization (N2N-2, $K = 2$ in Algorithm 1) outperforms single-level N2N (N2N-1, $K = 1$ in Algorithm 1), with the exception of CIFAR-10 with the full training dataset. For the CIFAR-10 and CIFAR-100 datasets, we used the benchmark ResNet architectures ResNet-44 and ResNet-50, respectively. Our results with L2-norm regularization for the ResNet-44 and ResNet-50 architectures are slightly better than those originally reported in [20]. For the MNIST dataset, we used a 5-layer CNN with three convolutional layers and two fully connected layers. Network architecture details are provided in the supplementary material. Note that although better results can be found in the literature, our objective was to demonstrate that using N2N regularization in conjunction with common regularization approaches can benefit both shallow CNN architectures (MNIST) and ResNets (CIFAR-10, CIFAR-100). Furthermore, as Table 2 shows, N2N reduces the empirical KG of trained networks, and datasets on which test accuracies are lower yield higher KG for the trained networks. This supports the implications of Theorem 1, as high-KG functions are expected to have a larger generalization gap.

6.2 Learning with Noisy Labels

As our proposed regularization approach constrains the network function to be simpler by minimizing an approximation of Kolmogorov growth, it naturally applies to the case of noisy training labels. Without regularization, label noise in the training data usually forces a network to emulate a more complex function, as it potentially makes the decision boundary more complex, a fact that we also observe empirically in Section 6.3. We stipulate that N2N regularization should help the network arrive at simpler functions that approximate the training labels, favoring simpler decision boundaries over complex ones, and thus potentially shielding against corrupted labels to a certain extent. We test whether enforcing a simpler function (large $\lambda_0, \ldots, \lambda_{K-1}$) at the cost of compromising training loss can improve test accuracy when the training data is corrupted by label noise. We consider symmetric and asymmetric label noise applied with some probability $p$ (same as in [21]), and report results for symmetric noise with $p = 0.5$ and $p = 0.2$; results with asymmetric pair-flip noise of probability $p = 0.45$ are shown in the supplementary material. A sketch of the symmetric noise model is given below.
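For concreteness, the following NumPy sketch shows the standard way such symmetric label noise of probability $p$ is generated (each label is flipped, with probability $p$, to a uniformly drawn different class); this illustrates the noise model, and is not necessarily the exact corruption script used in our experiments.

```python
import numpy as np

def symmetric_noise(labels, p, num_classes, seed=0):
    """Flip each label with probability p to a uniformly drawn different class."""
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    flip = rng.random(len(labels)) < p
    # Offset in {1, ..., num_classes-1} guarantees the new label differs.
    offsets = rng.integers(1, num_classes, size=flip.sum())
    labels[flip] = (labels[flip] + offsets) % num_classes
    return labels

noisy = symmetric_noise(np.arange(10), p=0.5, num_classes=10)
```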
First, we show the results for symmetric label noise of probability $p = 0.5$ and $p = 0.2$ on MNIST, CIFAR-10 and CIFAR-100 in Table 3. For the F-correction [22], Decoupling [23], MentorNet [24] and Co-Teaching [25] methods, we report the accuracy over the last ten iterations of training, as observed in [25], along with their standard cross-entropy results with the corresponding network architectures for reference. We do the same for our implemented SCE and N2N methods on MNIST and CIFAR-10; for CIFAR-100, we report accuracies using a 48k-2k training-validation split of the data for both methods, as we found this to yield the best performance (convergence is otherwise difficult). Note that we use the same network configurations for SCE. The values of $\lambda_0$ and $\lambda_1$ are provided in the supplementary material. We find that N2N regularization yields competitive performance in most cases. We also plot the test accuracy as a function of the regularization parameter $\lambda_0$ in Figure 1. For MNIST, large $\lambda_0$ achieves significantly higher test accuracy, whereas for CIFAR-10 and CIFAR-100 accuracy peaks around $\lambda_0 = 0.6$ and $\lambda_0 = 0.5$, respectively. Note that the accuracies may differ from Table 3 because of the different training configurations used.

6.3 Comparing Kolmogorov Growth Trajectories during Training

The improvements observed via the use of N2N regularization raise the question of how network trajectories differ when N2N regularization is used, compared to when it is not. We use the result in Theorem 2 to compute the bounded approximation to the empirical Kolmogorov growth of the network function, and we plot this approximation of $\hat{KG}(f)$ for the function $f$ represented by the neural network during the training process. Note that the variation of KG in all plots is due solely to changes in the approximation error term $\delta$ in Theorem 2, as $\mathcal{F}^1_{\mathrm{small}}$ is fixed to a single-layer CNN (of a fixed configuration) for all results in Figure 2. $\hat{\Pi}_S(\mathcal{F}^1_{\mathrm{small}})$ was estimated using a VC-dimension-based approximation shown in [7]. A sketch of the resulting estimate is given at the end of this section.

Results are shown in Figure 2. We find that in the case of no training label noise, networks typically start with high KG values that decrease steeply within a few epochs of training, after which they stabilize. Expectedly, when trained with N2N regularization, the final KG of the networks is lower than the KG of networks trained without N2N. In the case of label noise, we report some interesting observations. First, differently from before, the KG values increase with training and eventually stabilize at higher values, almost the opposite trend to the case of no label noise. This can be partly explained by the fact that, as training progresses, the network slowly adapts its decision boundary to fit the erroneous labelling, eventually resulting in a decision boundary of high complexity. For the label noise case, we find that N2N regularization significantly reduces the increase of KG during the training process. Furthermore, larger values of the $\lambda$ parameters lead to networks exhibiting smaller KG values. This also helps explain the significant gains in test accuracy observed for MNIST in Table 3 when using N2N regularization.
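As a rough illustration of the tracked quantity, the sketch below combines a fixed growth-function term for $\mathcal{F}^1_{\mathrm{small}}$ (e.g., from the VC-dimension-based approximation of $\hat{\Pi}_S(\mathcal{F}^1_{\mathrm{small}})$) with the data-dependent teacher-student error $\delta$. The exact weighting of the two terms follows Theorem 2 and is not reproduced here; `base`, `small`, `loader`, and `log_growth_small` are placeholders.

```python
import torch

def kg_estimate(base, small, loader, log_growth_small):
    """Schematic empirical-KG proxy: a fixed (precomputed) growth term for the
    small function class plus the mean teacher-student approximation error."""
    delta, n = 0.0, 0
    with torch.no_grad():
        for x, _ in loader:
            delta += ((base(x) - small(x)) ** 2).sum().item()
            n += x.shape[0]
    return log_growth_small + delta / n   # combination per Theorem 2 (schematic)
```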
7 Discussion and Reflections

The results in this paper further the recent work of [5], which showed that neural networks are inherently biased towards simpler functions of lower Kolmogorov complexity. In particular, we provide an actionable method for incorporating a function-complexity prior during learning, using a novel measure called Kolmogorov growth. Unlike Kolmogorov complexity, which is the length of the shortest program that generates some function $f$, Kolmogorov growth is concerned with the smallest function space to which $f$ can belong while still fitting the data well. Functions with shorter descriptions will typically need fewer variables and thus may have lower Kolmogorov growth.

Although smaller function spaces have less expressive power, recent work [6] shows that even shallower neural nets can fit random labels on the training data points. The observations in [5], however, put the results in [6] in a new perspective: any random choice of network weights in smaller networks is likely to yield a low-complexity function. Thus, even shallower networks can exhibit a wide range of complexities, and among them, the higher-Kolmogorov-complexity functions are likely the ones required for a network to fit random labels (similar to observations in [5]). In the case of label noise, N2N takes this fact into account by avoiding directly training the shallower networks to fit the noisy labels, which keeps their description complexity low, which in turn helps regularize the larger base network. Accordingly, when using a pre-trained shallower network to regularize the base network (reverse-KD), we found that performance can suffer significantly in the case of label noise. Via N2N regularization we see that enforcing low KG on large networks can improve their ability to generalize. The proposed approach is particularly helpful when the training data has noisy labels, attaining competitive performance on the three tested datasets when the training labels are corrupted with symmetric label noise. In that setting, networks trained without N2N regularization have larger Kolmogorov growth (see Figure 2), which reduces immediately once N2N regularization is applied. Furthermore, by varying the emphasis on the regularization term via the $\lambda$ parameters, the KG of subsequently trained networks can effectively be controlled. As $\lambda_0$ increases, more emphasis is put on lowering KG, which improves generalization and yields better test accuracy. However, this holds only up to a threshold (see Figure 1); test accuracy decreases as $\lambda_0$ increases beyond it. For training data with noisy labels, we find that the threshold is larger, because less emphasis can safely be placed on fitting the noisy training labels and more on minimizing KG. Our theoretical results in Section 3.1 show that network configurations that can be approximated well by smaller networks of lower complexity have low Kolmogorov growth and, subsequently, lower generalization error. These results concur with very recent theoretical findings in [26], which derives analogous results in the Rademacher-complexity-based generalization framework in the context of knowledge distillation. Our main result in Theorem 1 outlines an Occam's-razor-like principle for generalization: among all functions with zero training error, the function with the smallest $KG_m(f)$ is the most likely to show the least generalization error. Our empirical findings consistently show that driving networks towards simpler functions of lower Kolmogorov growth leads to networks that generalize better. Multi-level N2N follows from a theoretical result shown in the supplementary material, where we bound the empirical KG of the base network function via a set of recursive mean-squared-error estimates. However, KG bounds resulting from recursive estimation are provably less tight than single-estimation KG bounds of the form in Theorem 2. We believe that the additional bounded loss terms could be one reason why 2-level N2N yields better performance on average than single-level N2N.
In the case of label noise, we see that enforcing low $KG_m(f)$ on the classification function $f$, by increasing the regularization parameter values, can have a significant impact (Section 6.2). This also points to a current limitation of our approach: the hyperparameters ($\lambda_0, \lambda_1, \ldots$) have to be tuned manually. Automatic estimation of their optimal values is an avenue for future research. Another limitation of our work is that the growth-function term in the empirical approximation of KG (Theorem 2) can potentially render the bounds quite loose; achieving tighter bounds with KG-based metrics is thus another possible extension of this work. In N2N regularization, we observe that the properties of the smaller networks can shape the learning of the base network. If we choose smaller networks that are highly rotation invariant in their structure (e.g., by using a rotation-invariant CNN), we should expect the base network to adopt some of the rotation-invariance properties as well. We therefore conducted an additional experiment on a custom MNIST [17] dataset containing images of digits translated randomly within the image. We added symmetric noise to the labels ($p = 0.5$) and tested our proposed N2N regularization approach with a student network that is highly translation invariant (large max-pooling windows). We found that N2N shows larger improvements in this setting, reducing test error by 27% compared to the other baselines. This suggests extending this work by analyzing the effect of invariance/equivariance choices in the smaller networks on the generalization behaviour of the larger network, similar to the observations on distillation methods transferring inductive biases in [27]. Finally, since our work provides a certain level of robustness against label noise, it supports activities such as crowdsourced data labelling, which potentially contains significant label noise.

8 Acknowledgements

This research was supported by the National University of Singapore and by A*STAR, CISCO Systems (USA) Pte. Ltd and the National University of Singapore under its Cisco-NUS Accelerated Digital Economy Corporate Laboratory (Award I21001E0002). We would also like to acknowledge the helpful feedback provided by members of the Kent-Ridge AI research group at the National University of Singapore.
1. What is the main contribution of the paper regarding regularization techniques for training deep neural networks? 2. What are the strengths of the proposed method, particularly in terms of its theoretical foundation and experimental results? 3. Do you have any questions or concerns about the proof of Theorem 2, specifically regarding the conditions described in the first two sentences of the proof? 4. How does the reviewer assess the novelty and effectiveness of the proposed method compared to other recent works in generalization for deep learning? 5. What are some potential strategies for choosing hyperparameters for the proposed method?
Summary Of The Paper Review
Summary Of The Paper The paper proposes a novel and theoretically inspired regularization technique for training deep neural networks. The technique regularizes the network by regressing the output of the model towards a simpler model, while the simpler model also regresses towards the output of the base model. The experimental results show that the proposed technique is effective at reducing the generalization gap and at improving performance in the presence of label noise. Review The paper first defines Kolmogorov Growth (KG), a variation of the growth function that additionally considers the description length of the model that can still fit the data. Since KG is based on a growth function, the generalization gap can easily be bounded via Massart's finite class lemma (Thm 1). The paper then proceeds to show that if a smaller model can fit the predictions of the base model, then the KG of the larger model can be bounded by the growth function of the smaller model (Thm 2). The proof of Theorem 1 is relatively straightforward, but I have some questions regarding Theorem 2, which I elaborate below. Regarding the proposed method: it seems fairly novel and is well motivated by the proposed theorems. The results show that it offers a small improvement on standard classification benchmarks and a significant improvement in the presence of label noise, which is pretty cool. Now I will discuss the concerns I have regarding the algorithm and the experimental results: The paper does not describe the motivation behind using multiple levels of distillation of increasingly small capacity. While the theorem shows that you can bound the KG by chaining inequalities, it does not explain why this is necessary. Empirically there seem to be some benefits, but it is unclear why we need multiple levels. After all, bounding with one level (the smallest) will yield tighter bounds. While this might be hard to justify rigorously, I want to see more discussion of the intuition and, if possible, theoretical results. Empirically, how does the method compare to reverse distillation? Have you verified the conditions of Theorem 2? To me it seems this condition is impossible to verify, so you don't really know whether you are bounding the KG. In other words, did you compute $\delta$ in addition to the growth function? For example, I can pick a really small network so that $\delta$ is large; now the KG is dominated by $\delta$ instead of the growth function. Following the previous point, does the smallest student model fit the labels perfectly? In my experience, models with 11k/16k parameters can have a hard time fitting CIFAR-10 / CIFAR-100 perfectly. If they don't, then the theory of KG breaks down. How did you compute the growth function of the neural networks for the RHS of Eq. 9, Thm 2? Even small networks should have very high growth functions, and as far as I can tell it is intractable to compute the exact growth function of an arbitrary neural network. If you relied on an approximation, which method did you use? The method has some hyperparameters. Could you discuss strategies for choosing them? Questions about the proof: I have some questions about the details of Thm 2's proof. It is not immediately obvious to me that the conditions described by the first two sentences of the proof are true. Could you add more details to that part of the proof, seeing that it is possibly the most important theoretical result of the paper? I will use $g$ to denote $f^{\mathrm{small}}_1$.
First, we can rewrite $\log\big(P_0(f(X))/P_1(f(X))\big) = f_0(X) - f_1(X)$ and $\epsilon_{\max} = \max_{X \in \mathbb{R}^d} \|g(X) - f(X)\|_2 = \max_{X \in \mathbb{R}^d} \sqrt{(g_0(X) - f_0(X))^2 + (g_1(X) - f_1(X))^2}$. The two sentences are essentially saying that $|f_0(X) - f_1(X)| \leq \epsilon_{\max} \implies \operatorname{argmax}_k f_k(X) \neq \operatorname{argmax}_k g_k(X)$. Could you elaborate on why this is true? Other comments: Equation 9 missing a hat? The related work section seems rather sparse, given that generalization for deep learning has attracted a lot of attention recently. I list a few works at the end of this review that you could consider discussing. Overall, I think this is a good paper with a nice empirical method motivated by principled theory. If the authors can address my concerns, I would be happy to raise my score to 7. References: [1] Uniform Convergence May Be Unable to Explain Generalization in Deep Learning. Nagarajan et al. [2] Towards Learning Convolutions from Scratch. Neyshabur et al. [3] Fantastic Generalization Measures and Where to Find Them. Jiang et al. [4] In Search of Robust Measures of Generalization. Dziugaite et al. [5] Transferring Inductive Biases through Knowledge Distillation. Abnar et al. I am not entirely up to date with the literature on distillation, so I could have missed some works there.
NIPS
Title Structuring Uncertainty for Fine-Grained Sampling in Stochastic Segmentation Networks Abstract In image segmentation, the classic approach of learning a deterministic segmentation neither accounts for noise and ambiguity in the data nor for expert disagreements about the correct segmentation. This has been addressed by architectures that predict heteroscedastic (input-dependent) segmentation uncertainty, which indicates regions of segmentations that should be treated with care. What is missing are structural insights into the uncertainty, which would be desirable for interpretability and systematic adjustments. In the context of state-of-the-art stochastic segmentation networks (SSNs), we solve this issue by dismantling the overall predicted uncertainty into smaller uncertainty components. We obtain them directly from the low-rank Gaussian distribution for the logits in the network head of SSNs, based on a previously unconsidered view of this distribution as a factor model. The rank subsequently encodes a number of latent variables, each of which controls an individual uncertainty component. Hence, we can use the latent variables (called factors) for fine-grained sample control, thereby solving an open problem from previous work. There is one caveat though: factors are only unique up to orthogonal rotations. Factor rotations allow us to structure the uncertainty in a way that endorses simplicity, non-redundancy, and separation among the individual uncertainty components. To make the overall and factor-specific uncertainties at play comprehensible, we introduce flow probabilities that quantify deviations from the mean prediction and can also be used for uncertainty visualization. We show on medical-imaging, earth-observation, and traffic-scene data that rotation criteria based on factor-specific flow probabilities consistently yield the best factors for fine-grained sampling. ∗both authors contributed equally 36th Conference on Neural Information Processing Systems (NeurIPS 2022).
1 Introduction

Semantic segmentation is the computer vision task of assigning a class to each pixel of an image. Popular applications include the segmentation of medical images [1, 2, 39], land-cover classification in earth observation [28, 37, 42], and the segmentation of imagery taken by autonomous vehicles [14, 25, 43]. Semantic segmentation tasks are affected by heteroscedastic (input-dependent) aleatoric uncertainty [26, 27], also called data uncertainty. Aleatoric uncertainty can emerge in the form of label noise in the training data, differing expert opinions about the true segmentation, or ambiguity already contained in the data, caused, for example, by technical restrictions like the image resolution [16, 20, 27]. To account for the prevailing aleatoric uncertainty, various probabilistic architectures have been proposed [3, 21, 24, 27, 28, 31]. The work [11] demonstrates that, in general, uncertainties predicted by probabilistic segmentation architectures correlate positively with estimation errors, including those obtained from ensemble methods [23, 29] and MC-dropout [15]. Nevertheless, deterministic segmentation architectures [7, 35] are still predominantly used [2, 44]. This may be because it is often not clear to practitioners how to take advantage of the predicted uncertainties. For instance, a limited number of sampled segmentations usually does not represent the predicted uncertainty well. As shown in Figure 1 (top), overview plots for the predicted uncertainty, such as entropy [11, 27], can be generated. They indicate areas of high uncertainty, where practitioners should take care. However, they do not convey pixel-wise correlations of uncertainty, that is, how changes in the segmentation of one region of an image affect changes in another. Moreover, overview plots cannot explain the overall uncertainty in terms of smaller uncertainty components, which would be desirable for an interpretation of the uncertainty following the principles of problem decomposition and divide-and-conquer [30]. In the context of the recently introduced stochastic segmentation networks (SSNs) [31], which we explain below, the lack of understanding of the overall uncertainty in terms of independent components connects to a problem that was also observed by the authors of [31], who pointed out in their accompanying demo that 'fine-grained sample control' is still missing. Indeed, the identification of independent, or at least reasonably distinguished, components of uncertainty is a natural solution to this problem. We seek such components for SSNs with the goal of manipulating them individually for generating and fine-adjusting segmentations. SSNs model uncertainty via a low-rank multivariate Gaussian distribution on the logits, that is, on the network output before the softmax is applied; see Section 2. Surprisingly, no use has so far been made of the fact that this uncertainty model itself offers a straightforward way of distinguishing between uncertainty components.
This may be because the low-rank model was originally proposed for reducing the number of parameters. Here, we center our approach around the semantics of the low-rank model as a factor model. Factor models structure the overall uncertainty into individual components of uncertainty, each of which is governed by a single latent variable (called a factor). Therefore, we solve the open problem from [31] by using the latent factor variables as control variables for a systematic exploration of the predicted heteroscedastic segmentation uncertainty. For the best result, however, some additional work is necessary: as is known from exploratory factor analysis [13, 40], the latent variables in factor models are only unique up to orthogonal rotations. Hence, they should be rotated for increased interpretability [6, 9, 40], which in our case amounts to generating more useful controls. Good controls encode uncertainty components that are simple, non-redundant, and separable in the sense that they affect distinguished image regions or classes. To evaluate these aspects, in Section 3 we introduce flow probabilities that quantify deviations from the mean prediction, which also enables uncertainty visualization, see Figure 1. Specifically, we compute factor-specific flow probabilities that quantify the impact of the uncertainty components encoded by individual factors. In Section 4, we fuse factor-specific flow probabilities with classic rotation criteria [6, 10] from exploratory factor analysis. In Section 5, we show that these fused criteria generally result in the best possible controls. Note that, as a by-product, computing full flow probabilities for the overall uncertainty also yields a new type of overview plot that does not aggregate class-specific information about the uncertainty, see Figure 1 (top). Before we summarize our main contributions, we would like to emphasize that we do not benchmark SSNs, as they have already been proven to produce state-of-the-art results w.r.t. various metrics, for instance, the generalized energy distance to the ground-truth distribution [24, 31]. Hence, it is safe for us to assume that after successful training, SSNs are capable of predicting the aleatoric uncertainty for a given input image reasonably well. Based on that, our main contributions are: (1) control variables for the contributions of individual, factor-specific uncertainty components for fine-grained sampling (given by the latent factor variables), (2) flow probabilities for quantifying and visualizing overall and factor-specific uncertainties, (3) rotations based on factor-specific flow probabilities, which structure the uncertainty components and thereby provide simpler, less redundant, and well-separated control variables. Please find an overview figure of our contributions in Section A of our supplement. Additionally, we have made the code for the proposed methods and experiments available at https://github.com/JakobCode/StructuringSSNs.

2 Factor modeling in stochastic segmentation networks

Stochastic segmentation networks (SSNs) [31] are characterized by modeling the pixel-wise logits in the network head as a low-rank multivariate Gaussian distribution, that is, $\eta \mid x \sim \mathcal{N}\big(\mu(x), \Gamma(x)\Gamma(x)^\top + \Psi(x)\big)$. Here, $\eta \in \mathbb{R}^n$ are the $n = hwc$ logits for an input image $x$ of size $h \times w$ and a classification problem with $c$ classes.
The parameters of the Gaussian distribution are the mean $\mu(x) \in \mathbb{R}^n$ and the covariance matrix, which decomposes into a matrix of rank bounded by $r \ll n$ with square root $\Gamma(x) \in \mathbb{R}^{n \times r}$ and a diagonal matrix $\Psi(x) \in \mathbb{R}^{n \times n}$ with positive diagonal elements. The parameters of the Gaussian distribution are the output of a backbone segmentation network with input $x$ [7, 35]. Originally, the low-rank parameterization was solely introduced as a means of reducing the number of parameters [31]. However, the low-rank covariance model has a deeper structural meaning as a factor model [13, 38, 40]. Factor models are characterized by a typically small number of latent variables, called factors, that explain all correlations among a larger number of observed variables. In our case, the joint distribution of the observed logits and the latent factor variables $z \in \mathbb{R}^r$ is given by
$$(\eta, z) \sim \mathcal{N}\left( \begin{pmatrix} \mu \\ 0 \end{pmatrix}, \begin{pmatrix} \Gamma\Gamma^\top + \Psi & \Gamma \\ \Gamma^\top & I_r \end{pmatrix} \right),$$
where $I_r$ is the $(r \times r)$ identity matrix, and for brevity, we omit the dependence on the input $x$ in the notation from now on. Here, the interactions of the latent variables with the observed logits are described by the matrix $\Gamma$ of factor loadings: each column contains the loadings of one latent factor variable on the observed logits, yielding structured uncertainty. The loading characteristic becomes clear in the following sampling procedure for the logits from the factor model:
$$\eta = \mu + \Gamma z + \Psi^{1/2} \varepsilon, \quad \text{where } z \sim \mathcal{N}(0, I_r),\ \varepsilon \sim \mathcal{N}(0, I_n). \quad (1)$$
This procedure results from sampling from the joint distribution $p(\eta, z) = p(z)\,p(\eta \mid z)$ as follows: first, the latent variables are sampled according to $z \sim \mathcal{N}(0, I_r)$; second, the logits are sampled from the conditional distribution $\eta \mid z \sim \mathcal{N}(\mu + \Gamma z, \Psi)$. Subsequently, only the logits are observed. Sampling the logits as in Equation (1) provides control over the contributions of the different latent factor variables. This invites an individual manipulation of the factors, enabling fine-grained sampling; see the sketch below. However, as pointed out in the introduction, factors should be rotated beforehand because they are only unique up to orthogonal rotations, see Lemma 1 in the supplement. In particular, orthogonal rotations of the latent factor variables do not change the marginal distribution $p(\eta)$ of the logits. Indeed, replacing $\Gamma z$ by $\Gamma O z$ for an orthogonal matrix $O \in \mathbb{R}^{r \times r}$ in Equation (1) yields an equivalent sampling procedure. Therefore, one way to understand orthogonal rotations is that they change the basis of the $r$-dimensional affine space $\{\mu + \Gamma z : z \in \mathbb{R}^r\}$ of the (noiseless) logits, where the basis elements are the columns of the factor loading matrix.
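For illustration, here is a minimal NumPy sketch of the sampling procedure in Equation (1), including the manipulation of a single factor; all shapes and values are toy placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 12, 3                          # n = h*w*c logits, rank r (toy sizes)
mu = rng.normal(size=n)               # mean logits mu(x)
Gamma = rng.normal(size=(n, r))       # factor loadings Gamma(x)
psi = rng.uniform(0.1, 1.0, size=n)   # diagonal of Psi(x)

z   = rng.normal(size=r)              # latent factors, z ~ N(0, I_r)
eps = rng.normal(size=n)              # pixel-wise noise, eps ~ N(0, I_n)
eta = mu + Gamma @ z + np.sqrt(psi) * eps   # Equation (1)

# Fine-grained control: suppress all factors except one and steer its value.
z_ctrl = np.zeros(r)
z_ctrl[0] = 2.0
eta_ctrl = mu + Gamma @ z_ctrl        # noiseless logits steered by factor 0
```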
3 Flow probabilities

In this section, we develop the notion of flow probabilities as our main tool for the analysis of factor models in SSNs. Flow probabilities can roughly be understood as probabilities of deviations from the mean prediction. We use flow probabilities for uncertainty quantification and visualization.

3.1 Factor-specific flow probabilities

Because we need to understand and assess individual factors, it is important to analyze and quantify the uncertainty encoded in them. A useful tool for that is to compute factor-wise distributions of class predictions, for which we vary an individual latent factor variable $z$ with associated factor loadings $\gamma$ (a column of $\Gamma$; later we use the notation $\Gamma_{:,j}$ for a specific column of $\Gamma$). We keep the influence of all other latent factor and noise variables fixed to zero. Consequently, for $z \sim \mathcal{N}(0, 1)$ we compute the following expected value:
$$P = P(\gamma) = \int E(\mu + \gamma z)\, p(z)\, dz \in [0, 1]^{(wh) \times c}. \quad (2)$$
Here, $E(\mu + \gamma z) \in \{0, 1\}^{(wh) \times c}$ is the matrix whose rows correspond to the pixel-wise one-hot encoded class predictions, obtained by reshaping the logits $\mu + \gamma z$ into shape $(wh, c)$ and then applying an argmax along the class dimension. Note that the class probabilities $P$ can be understood as a function of the factor loadings $\gamma$, since the mean logits $\mu$ are fixed. We solve Equation (2) analytically. For that, we consider pixels separately, since for a (flat) spatial index $i \in [wh] = \{1, \ldots, wh\}$, the probabilities $p_{ik}$ from the $i$-th row of $P$ only depend on its associated mean logits and factor loadings. Specifically, with the definition $g_{ik}(z) = \mu_{ik} + \gamma_{ik} z$ for $k \in [c]$, the predicted class for fixed $z$ is $\operatorname{argmax}_k g_{ik}(z)$. Hence, from Equation (2) we get that
$$p_{ik} = p_{ik}(\gamma) = \int \mathbb{1}[k = \operatorname{argmax}_{k'} g_{ik'}(z)]\, p(z)\, dz, \quad k \in [c] = \{1, \ldots, c\}, \quad (3)$$
where $\mathbb{1}$ is the indicator function. We solve Equation (3) for binary classification first ($k \in \{1, 2\}$). Assuming that only the logits for the class $k = 2$ are learned, we set $g_{i1}(z) = 0$ for consistency in Equation (3). Then, with $\mu = \mu_{i2}$ and assuming that $\gamma = \gamma_{i2} \neq 0$, the probability $p_{i2}$ evaluates as
$$p_{i2} = \int \mathbb{1}[\mu + \gamma z \geq 0]\, p(z)\, dz = \begin{cases} \psi(-\mu/\gamma), & \gamma < 0 \\ 1 - \psi(-\mu/\gamma), & \gamma > 0 \end{cases}.$$
Here, $\psi$ is the cumulative distribution function of a standard normal random variable. For the last equality, observe that $-\mu/\gamma$ is the intersection point of the straight line $g_{i2}(z) = \mu + \gamma z$ with the $z$-axis $g_{i1}$. If $\gamma = 0$, the argmax is not unique and probabilities can be split. For clarity of the technical exposition, we assume in the following that the argmax is unique. For general multi-class problems, the probabilities $p_{ik}$ in Equation (3) can be derived from the class-prediction function $z \mapsto \operatorname{argmax}_{k'} g_{ik'}(z)$. In this function, the class prediction can only change at intersection points $z$ of two non-parallel straight lines $g_{ik}$ and $g_{ik'}$, that is, $z = (\mu_{ik} - \mu_{ik'})/(\gamma_{ik'} - \gamma_{ik})$. Generally, if a class $k$ is predicted for some $z$, then all $z$ values for which the $k$-th class is predicted form a non-empty interval $(\underline{z}_{ik}, \overline{z}_{ik}) \subset \mathbb{R}$. The end points of this interval can either be $-\infty$, an intersection point of $g_{ik}$, or $\infty$. In practice, the intervals $(\underline{z}_{ik}, \overline{z}_{ik})$ can be computed by sorting all intersection points and checking the values of the class-prediction function on the resulting partition of the $z$-axis. If a class $k$ is never predicted, we set $\underline{z}_{ik} = \overline{z}_{ik} = -\infty$. Finally, the class probability is given by $p_{ik} = \psi(\overline{z}_{ik}) - \psi(\underline{z}_{ik})$, where we use the conventions that $\psi(-\infty) = 0$ and $\psi(\infty) = 1$. Observe that the formula for binary problems given above is a special case of the one given for $p_{ik}$ here. Overall, we obtain the following result:

Proposition 1. Define $\underline{Z} = (\underline{z}_{ik})$ and $\overline{Z} = (\overline{z}_{ik})$ with entries $i \in [wh]$ and $k \in [c]$. Then, the distribution of predicted classes under variation of the factor with associated loadings $\gamma$ is given by $P(\gamma) = \psi(\overline{Z}) - \psi(\underline{Z})$, where $\psi$ applies the cumulative distribution function of a standard normal variable element-wise.

Now, to highlight the difference to the prediction from the mean $\mu$, we compute factor-specific flow probabilities as $F(\gamma) = P(\gamma) - E(\mu) = \psi(\overline{Z}) - \psi(\underline{Z}) - E(\mu) \in [-1, 1]^{(wh) \times c}$. Positive entries in the $k$-th column $F(\gamma)_{:,k}$ indicate that the prediction for the corresponding pixels changes with positive probability from the mean prediction to class $k$. A sketch of the per-pixel computation is given below.
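The following NumPy/SciPy sketch implements the intersection-point construction for a single pixel $i$; it assumes a unique argmax and illustrates Proposition 1 rather than reproducing our optimized implementation.

```python
import numpy as np
from scipy.stats import norm

def pixel_class_probs(mu_i, gamma_i):
    """P(argmax_k (mu_ik + gamma_ik * z) = k) for z ~ N(0, 1), one pixel i,
    via the intersection-point partition used in Proposition 1."""
    c = len(mu_i)
    # Intersection points of all pairs of non-parallel lines g_k and g_l.
    pts = [(mu_i[k] - mu_i[l]) / (gamma_i[l] - gamma_i[k])
           for k in range(c) for l in range(k + 1, c)
           if gamma_i[k] != gamma_i[l]]
    grid = np.sort(np.array([-np.inf] + pts + [np.inf]))
    probs = np.zeros(c)
    for a, b in zip(grid[:-1], grid[1:]):
        # Pick a representative point inside the cell (a, b).
        if np.isinf(a) and np.isinf(b):
            z_rep = 0.0
        elif np.isinf(a):
            z_rep = b - 1.0
        elif np.isinf(b):
            z_rep = a + 1.0
        else:
            z_rep = 0.5 * (a + b)
        k = int(np.argmax(mu_i + gamma_i * z_rep))  # class predicted on this cell
        probs[k] += norm.cdf(b) - norm.cdf(a)       # Gaussian mass of the cell
    return probs

# Flow probabilities for pixel i: probs minus the one-hot mean prediction E(mu)_i:
# F_i = pixel_class_probs(mu_i, gamma_i) - np.eye(len(mu_i))[np.argmax(mu_i)]
```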
Factor-specific flow probabilities thus enable visualizations of the impact of individual factors, see Figure 1 (bottom rows). The visualizations are obtained by calculating a mixture of class-specific colors with weights given by the (factor-specific) flow probabilities; see the supplement for details. As factor-specific flow probabilities represent the real impact of a factor on output segmentations, they are also key to the quality assessment of the factors, see Section 4. For future reference, we denote by $F(\Gamma) \in [-1, 1]^{(whc) \times r}$ the matrix of all factor-specific flow probabilities, obtained by concatenating the factor-specific flow probabilities $F(\Gamma_{:,j})$ as columns after flattening, where $\Gamma_{:,j}$ is the $j$-th column of $\Gamma$. Finally, since we use the latent factor variables as control variables for fine-grained sampling, it is helpful to also compute one-sided flow probabilities that encode the uncertainty for positive and negative values of the latent factor variable, respectively.

Corollary 1. Using the notation from Proposition 1, the one-sided factor-specific flow probabilities for a factor with loadings $\gamma$ compute as
$$F^+(\gamma) = \int_{[0,\infty)} E(\mu + \gamma z)\, p(z)\, dz - E(\mu) = \psi(\max(0, \overline{Z})) - \psi(\max(0, \underline{Z})) - E(\mu),$$
$$F^-(\gamma) = \int_{(-\infty,0]} E(\mu + \gamma z)\, p(z)\, dz - E(\mu) = \psi(\min(0, \overline{Z})) - \psi(\min(0, \underline{Z})) - E(\mu).$$

3.2 Uncertainty quantification for the full factor model

The idea of computing factor-specific flow probabilities for uncertainty quantification and visualization extends to the full factor model. For that, analogous to Equation (2), we compute the distribution of class predictions. However, this time we take the expected value over the full distribution of the logits given in Equation (1):
$$P^{\mathrm{full}} = \int E(\eta)\, p(\eta)\, d\eta = \int E(\mu + \Gamma z + \Psi^{1/2}\varepsilon)\, p(z)\, p(\varepsilon)\, dz\, d\varepsilon \in [0, 1]^{(wh) \times c}. \quad (4)$$
The change from the mean prediction $E(\mu)$ is then given by the full flow probabilities, which we compute as $F^{\mathrm{full}} = P^{\mathrm{full}} - E(\mu) \in [-1, 1]^{(wh) \times c}$. Visualizing full flow probabilities as above, by weighted mixtures of class-specific colors, yields a new type of overview plot for the uncertainty, see Figure 1 (top row) for an example. Though only a by-product of our work, it has the advantage that it does not aggregate information about class-specific uncertainties, in contrast to overview plots like entropy (see also Figure 1, top row). In practice, the integral from Equation (4) is difficult to evaluate. This is because the argmax in $E(\eta)$ technically amounts to determining a maximum of multivariate linear functions. Hence, we approximate the integral using Monte-Carlo integration with $m$ i.i.d. samples $z^{(1)}, \ldots, z^{(m)} \in \mathbb{R}^r$ drawn from $\mathcal{N}(0, I_r)$ and i.i.d. samples $\varepsilon^{(1)}, \ldots, \varepsilon^{(m)} \in \mathbb{R}^{whc}$ drawn from $\mathcal{N}(0, I_{whc})$. The matrix $P^{\mathrm{full}}$ of class probabilities is thus approximated by
$$P^{\mathrm{full}} \approx \frac{1}{m} \sum_{j=1}^m E(\mu + \Gamma z^{(j)} + \Psi^{1/2}\varepsilon^{(j)}) \in [0, 1]^{(wh) \times c}.$$
The matrix $F^{\mathrm{full}}$ of flow probabilities can be approximated similarly; a sketch of this estimator is given below. In the supplement, we show empirically that the diagonal noise term has little impact on the flow probabilities. Hence, we can focus on the structural uncertainty induced by the latent factor variables.
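A NumPy sketch of the Monte-Carlo estimator, on flattened logits with toy shapes:

```python
import numpy as np

def full_flow_probs(mu, Gamma, psi, wh, c, m=100, seed=0):
    """Monte-Carlo estimate of P^full and F^full from Equation (4); mu and psi
    have length wh*c and Gamma has shape (wh*c, r)."""
    rng = np.random.default_rng(seed)
    r = Gamma.shape[1]
    counts = np.zeros((wh, c))
    for _ in range(m):
        eta = mu + Gamma @ rng.normal(size=r) \
                 + np.sqrt(psi) * rng.normal(size=wh * c)
        pred = eta.reshape(wh, c).argmax(axis=1)   # pixel-wise class prediction
        counts[np.arange(wh), pred] += 1.0
    P_full = counts / m
    E_mu = np.eye(c)[mu.reshape(wh, c).argmax(axis=1)]  # one-hot E(mu)
    return P_full, P_full - E_mu                        # (P^full, F^full)
```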
4 Factor rotations

As pointed out in Section 2, the latent variables/factors in factor models are only unique up to orthogonal rotations. Therefore, it is common practice in exploratory factor analysis to rotate them in order to maximize their interpretability [6, 22, 38]. The factor model in an SSN represents the predicted uncertainty for a given input image, where the factors themselves encode components of the overall uncertainty. We intend to use them as control variables for fine-grained sampling. From that we derive the following quality criteria: (1) the number of 'relevant' factors should be small, where relevant factors are characterized by having a 'significant' effect on output segmentations; (2) relevant factors should be separable from each other in the sense that they encode distinguished uncertainty components; (3) each area in the input image should be affected by only few factors. Here, the first criterion ensures that the number of impactful control variables is reduced to a necessary minimum, and the second criterion requires that the corresponding uncertainty components are distinct. Together, the first two criteria discourage factor redundancy. The last criterion reflects the general requirement of sparsity and simplicity that is also found among Thurstone's rules [41] for simple structure of a factor loading matrix, which is the primary goal in exploratory factor analysis [13]. However, in our case we rather require a simple structure on the matrix $F(\Gamma)$ of factor-specific flow probabilities (see Section 3.1), since they measure the actual impact of the factors on output segmentations. In Section 5, we evaluate different rotation criteria, which we present in the following. First, we consider classic rotation criteria. Here, for a factor loading matrix $\Gamma = (\gamma_{ij}) \in \mathbb{R}^{n \times r}$, Crawford and Ferguson [10] defined the CF family of rotation criteria:
$$q_\kappa(\Gamma) = (1 - \kappa) \sum_{i=1}^n \sum_{j=1}^r \gamma_{ij}^2 \sum_{l \neq j} \gamma_{il}^2 + \kappa \sum_{j=1}^r \sum_{i=1}^n \gamma_{ij}^2 \sum_{l \neq i} \gamma_{lj}^2, \quad \kappa \in [0, 1].$$
The CF family is a generalization of the widely used orthomax family [17], where the parameter $\kappa$ controls a trade-off between row complexity (first sum) and column complexity (second sum). We focus on popular choices: $\kappa = 1/n$ yields an equivalent version of the Varimax criterion [22], which is the most-used method; intuitively, it tries to maximize the variance of the squared factor loadings. Next, $\kappa = 0$ yields the Quartimax criterion, which minimizes the number of factors needed to explain a variable (in our case, the segmentation uncertainty of a pixel). Finally, $\kappa = r/(2n)$ yields the Equamax criterion, a combination of Varimax and Quartimax. Classic rotation criteria do not consider the actual impact of factors on predicted segmentations, because they take only the factor loadings $\Gamma$, and not the mean $\mu$, into account. Therefore, we incorporate factor-specific flow probabilities into rotation criteria by applying a base rotation criterion $q$ to the flow probabilities instead of the factor loadings. Hence, the objective function to be minimized becomes $O \mapsto q(F(\Gamma O))$ instead of $O \mapsto q(\Gamma O)$. We call the new family of rotation criteria the FP family. For instance, FP-Varimax applies the Varimax criterion to the flow probabilities. A sketch of the CF criterion and the FP objective is given below.
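The following Python sketch evaluates the CF criterion and the corresponding FP objective. The minimization over orthogonal $O$ itself (done with gradient projection algorithms adapted from [4]) is omitted, and `flow_probs_fn` is a placeholder mapping a loading matrix to its factor-specific flow probabilities (e.g., built from the per-pixel routine sketched in Section 3.1).

```python
import numpy as np

def cf_criterion(L, kappa):
    """Crawford-Ferguson criterion q_kappa for a loading-type matrix L (n x r);
    kappa = 1/n gives Varimax, kappa = 0 Quartimax, kappa = r/(2n) Equamax."""
    S = L ** 2
    row = (S * (S.sum(axis=1, keepdims=True) - S)).sum()  # row complexity
    col = (S * (S.sum(axis=0, keepdims=True) - S)).sum()  # column complexity
    return (1 - kappa) * row + kappa * col

def fp_objective(O, Gamma, kappa, flow_probs_fn):
    """FP-family objective: the base criterion applied to the factor-specific
    flow probabilities of the rotated loadings Gamma @ O."""
    return cf_criterion(flow_probs_fn(Gamma @ O), kappa)
```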
5 Experiments

The purpose of our experiments is to (1) evaluate rotation criteria based on the quality of the rotated factors, and (2) demonstrate the merits of fine-grained sample control based on reasonably rotated factors.

Data sets and training. First, we use the LIDC data set [1] in its pre-processed version from [28], which contains 2D slices of 3D thorax scans of size 128 × 128 pixels. Each slice has four ground-truth segmentations from different experts. Second, we use the multi-spectral Sentinel-2 data from the SEN12MS data set [36], with images of size 244 × 244 pixels and coarse labels for semantic segmentation of 10 types of land cover. Third, we use the CamVid data set [5], which contains images of road scenes at resolution 480 × 360, pixel-wise labeled into 11 different classes. Additional details and statistics about the data sets (including splits) can be found in the supplement, where we also detail all training procedures. We use $r = 10$ throughout our experiments, which accounts for the varying uncertainty in different images and has also been used in [31]. We would like to emphasize again that we do not benchmark SSNs, since they have already been shown to be state of the art [24, 31]. For examples of uncertainty predictions, see Figure 1, Figure 2, and the supplement.

Computational aspects. We used Python 3.7, particularly with the libraries PyTorch 1.11 [32], scikit-learn [33], NumPy [18], and einops [34]. On a single core of an Intel Xeon Platinum 8260, factor-specific and full flow probabilities can be computed in the sub-second range, without significant differences w.r.t. the used rotation; see the supplement for details. To obtain the optimal rotation matrices for the different rotation criteria, we adapted the gradient projection algorithms from [4] to our needs. In our current implementation, optimization for criteria based on flow probabilities can take up to a few minutes, see the supplement for details. In practice, we recommend pre-computing rotations whenever possible.

5.1 Evaluation of rotation criteria

We evaluate rotations according to the quality criteria from Section 4, that is, (1) the relevance of individual factors, (2) the separability of the relevant factors, and (3) the sparsity of the factors.

5.1.1 Factor relevance

Here, we measure the impact of individual factors on the segmentation. In this section, we use the notation $\tilde{\Gamma}$ to denote a matrix of factor loadings that can be either rotated or unrotated. A simple measure for the impact of the $j$-th factor with loadings $\tilde{\Gamma}_{:,j} \in \mathbb{R}^n$ is given by the $\ell_1$-norm $\|F(\tilde{\Gamma}_{:,j})\|_1$ of its factor-specific flow probabilities. In what follows, we consider relevance curves that show how many factors exceed a fraction of the overall uncertainty for varying thresholds $\tau \geq 0$. Specifically, we compute $n_\tau = |R_\tau|$, where $R_\tau = \{j : \|F(\Gamma_{:,j})\|_1 \geq \tau \|F^{\mathrm{full}}(\Gamma)\|_1\}$, and we measure the overall uncertainty by the $\ell_1$-norm of the full flow probabilities, approximated by 100 Monte-Carlo samples; see the sketch below.
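A compact NumPy sketch of the relevance count $n_\tau$, assuming the factor-specific flow probabilities are stacked as the columns of `F_factors` (shape $(whc, r)$) and `F_full` holds the (flattened) full flow probabilities:

```python
import numpy as np

def n_relevant(F_factors, F_full, tau):
    """|{j : ||F(Gamma_:,j)||_1 >= tau * ||F^full||_1}|."""
    factor_mass = np.abs(F_factors).sum(axis=0)   # l1-norm per factor column
    return int((factor_mass >= tau * np.abs(F_full).sum()).sum())
```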
Results and discussion. The results of averaging $n_\tau$ over the respective test images are shown in Figure 3 (top row). First, classic rotations barely reduce the number of relevant factors compared to the unrotated representation. This is no surprise, since they do not take the mean logits into account and only try to simplify the structure of the factor loadings $\tilde{\Gamma}$. Nevertheless, even classic rotations already seem to decrease redundancy. However, as intended by design, FP rotations reduce the number of relevant factors to a much greater extent. Especially for LIDC and SEN12MS, already small thresholds $\tau$ suffice to cut off most factors: Figure 3 (top) shows that all FP rotations behave similarly, with curves declining sharply for small $\tau$. Consequently, FP rotations tend to produce a huge gap between a small number of relevant factors and the remaining ones, see Figure 1 for a visual example. This is desirable, since it allows one to focus only on a few relevant and meaningful factors during the exploration of the predicted uncertainty. It may be harder to find such factors if the predicted uncertainty has less inherent structure. CamVid is an example in this regard, as its uncertainty predictions are often restricted to class borders, which means that they are less spatially correlated. However, even for CamVid, there is structured uncertainty, see Figure 13 (Section D.2.3) in the supplement. Figure 3 (top) shows that FP rotations significantly reduce the number of relevant factors for CamVid as well.

5.1.2 Separability of relevant factors

The second quality criterion from Section 4 concerns factor separation. Here, for a separation threshold $\rho \in [0, 1]$, we compute the largest possible fraction of pairwise separated relevant factors:
$$s_\tau(\rho) = n_\tau^{-1} \cdot \max\{|J| : J \subset R_\tau,\ \cos(F(\tilde{\Gamma}_{:,j}), F(\tilde{\Gamma}_{:,j'})) \leq \rho \text{ for all } j \neq j' \in J\} \in [0, 1].$$
If $n_\tau = 0$, we set $s_\tau(\rho) = 0$ for all $\rho$. The separation of two factors is measured by the cosine similarity of their factor-specific flow probabilities, which is always non-negative since corresponding entries cannot have opposing signs. For $s_\tau(\rho)$, a value of one is best, as it means that all relevant factors are also separated. For fixed relevance thresholds $\tau$, we also compute the area under the curve $\mathrm{AUC}(s_\tau)$ for the comparison of different rotation criteria.

Results and discussion. In Figure 3 (bottom row), we show the separation scores $\mathrm{AUC}(s_\tau)$ for different relevance thresholds $\tau$, respectively averaged over all test images. FP rotations consistently beat classic rotations by a factor of around two in terms of AUC. Classic rotation criteria are still better than the unrotated representations (which form the real baseline). The AUC separation scores drop for thresholds $\tau$ that fail to determine the number of relevant factors sensibly, because they are too small or too large. Notably, for classic rotation criteria, the separation scores $\mathrm{AUC}(s_\tau)$ respectively peak at a threshold $\tau$ for which the number of relevant factors nearly coincides with that of the FP rotations; compare the intersection of the curves in Figure 3 (top row). The peak of the separation scores is less pronounced for FP rotations, particularly for LIDC and SEN12MS, where the set of relevant factors is more stable across different thresholds $\tau$. For SEN12MS, the results also distinguish among the FP-rotation criteria, where FP-Quartimax seems to be slightly favored over the other FP rotations. This may be because Quartimax emphasizes row sparsity the most, which reduces cosine similarities. We investigate row sparsity further in the next section.

5.1.3 Factor sparsity

To evaluate to which degree different factors affect the same regions of the input image, we measure the row sparsity of the factor-specific flow probabilities $F(\tilde{\Gamma}) \in [-1, 1]^{(whc) \times r}$. For that, for a (row) vector $v \in \mathbb{R}^r$, let
$$h(v) = \frac{\sqrt{r} - \|v\|_1/\|v\|_2}{\sqrt{r} - 1} \in [0, 1]$$
be the Hoyer measure [19], where values close to one indicate a high degree of sparsity. For us, sparsity only matters in rows with actual uncertainty for the pixel/class; therefore, we additionally weigh each row proportionally to its $\ell_1$-norm. Hence, as a final measure, we compute the weighted Hoyer measure
$$H(\tilde{\Gamma}) = \|F(\tilde{\Gamma})\|_1^{-1} \cdot \sum_{i=1}^{whc} \|F(\tilde{\Gamma})_{i,:}\|_1 \cdot h(F(\tilde{\Gamma})_{i,:}),$$
a sketch of which is given below.
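A NumPy sketch of the weighted Hoyer measure, assuming `F` holds the factor-specific flow probabilities with shape $(whc, r)$, $r > 1$, and at least one nonzero row:

```python
import numpy as np

def weighted_hoyer(F):
    """Weighted Hoyer row-sparsity measure H for flow probabilities F (whc x r)."""
    r = F.shape[1]
    l1 = np.abs(F).sum(axis=1)
    l2 = np.linalg.norm(F, axis=1)
    mask = l2 > 0                                 # zero rows contribute nothing
    h = (np.sqrt(r) - l1[mask] / l2[mask]) / (np.sqrt(r) - 1)  # per-row Hoyer
    return (l1[mask] * h).sum() / l1.sum()        # l1-weighted average
```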
Results and discussion. FP rotations generally concentrate the uncertainty for single regions/classes in only a few components, see Table 1. This means that FP rotations yield the most disentangled uncertainty components, which also indicates strong separation. For LIDC and SEN12MS, the amount of predicted uncertainty varies greatly across test images, causing high standard deviations. However, in general, large correlating uncertainty components can be found, allowing high row sparsity. This is in contrast to CamVid, where uncertainty is typically predicted at class borders.

5.2 Fine-grained sampling

Monteiro et al. [31] already manipulated samples post hoc by simple linear inter- or extrapolation w.r.t. the mean. However, they noted that additional, more fine-grained sample control is necessary for a systematic exploration of the sample space: the interpolation approach lacks a solid foundation in the uncertainty model, and it relies on having a useful sample to start with. The meaningful control variables that we obtain by rotating factors provide all that has been missing. They enable users to systematically explore the sample space by fine-grained sampling: starting from the mean prediction, they can inspect alternatives, correct possible mistakes, and fine-adjust borders. In particular, they can manipulate the contribution of individual uncertainty components by manually setting the values of the corresponding factors. Pseudo-samples obtained in this way are shown in Figure 4. Alongside this paper, we provide an interface for fine-grained sampling. It allows the selection of a rotation criterion for a given input image (we recommend FP-Quartimax for a start), and control variables can be set conveniently using sliders. In the supplement, we provide some visuals; a minimal sketch of the underlying sampling step is given below.
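The sampling step behind the interface reduces to evaluating the noiseless part of Equation (1) with user-chosen factor values; in the sketch below, `O_opt` (the optimized rotation) and `r` are placeholders.

```python
import numpy as np

def pseudo_sample(mu, Gamma_rot, z_user, wh, c):
    """Segmentation obtained by manually setting the rotated factor values
    z_user (e.g., from the interface sliders); the diagonal noise is omitted."""
    eta = mu + Gamma_rot @ z_user          # noiseless logits, cf. Equation (1)
    return eta.reshape(wh, c).argmax(axis=1)

# Example: amplify the first uncertainty component only.
# z = np.zeros(r); z[0] = 2.0
# seg = pseudo_sample(mu, Gamma @ O_opt, z, wh, c)
```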
6 Discussion and conclusion

In this work, we interpreted the uncertainty model of stochastic segmentation networks (SSNs) as a factor model, which provides control variables for fine-grained sampling, as requested by the authors of [31]. By (re-)structuring the uncertainty using rotations, we improved the controls and obtained as few as possible, but as many as necessary, relevant uncertainty components. Here, it turned out that rotation criteria based on flow probabilities yield the most meaningful controls, where flow probabilities are a new quantification and visualization technique for the uncertainty in SSNs. Our controls allow one to systematically explore the predicted uncertainty and to fine-adjust samples. However, the exploration of the sample space is only useful if the overall predicted uncertainty makes sense; this can be ensured by proper training. Structuring and examining the uncertainty is especially useful if there is a significant amount of aleatoric uncertainty. One limitation caused by our current implementation is that the computation of flow-probability-based rotations may take too long for an interactive scenario. However, there is significant potential for improving the optimization (scheme and parallelization), and in any case, rotations should be precomputed whenever possible. Overall, we see a broader impact of our approach that extends beyond the scope of SSNs. For instance, we believe it could be used for large-scale image classification, where factor models have recently been employed for modeling class correlations [8]. Another promising application is to learn and inspect more structured latent spaces in (variational) autoencoders [12]. Structuring uncertainty as we do may be useful whenever a multivariate Gaussian forms part of a model. Next, flow probabilities can also be used for other probabilistic segmentation architectures that have a mean prediction as a reference point. Notably, flow-probability overview plots for the predicted uncertainty keep class-specific information, in contrast to other overview plots like entropy. To sum up, it is often easier to understand the whole in terms of smaller parts. In this light, we structured the predicted uncertainty of SSNs into meaningful smaller uncertainty components. Jointly, they enable fine-grained sample control, so for us, the sum of the parts is also greater than the whole.

Acknowledgements

We thank Prof. Dr.-Ing. Joachim Denzler for helpful comments and Ferdinand Rewicki for checking our work multiple times. We also thank all anonymous reviewers for their insightful feedback.
1. What is the main contribution of the paper regarding segmentation uncertainty? 2. How does the proposed approach view state-of-the-art SSNs as factor models? 3. What are the strengths and weaknesses of the proposed method in terms of its significance and originality? 4. What are some concerns regarding the quality and clarity of the paper, particularly in terms of documentation and experiment illustration? 5. What are some limitations of the method, especially regarding its ability to show uncertainty for attributed classes not only on borders? 6. Are there any confusions regarding the usage of loadings or rotated loadings in the separation criterion?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper tackles the issue of segmentation uncertainty. Using state-of-the-art SSNs, the authors view these as factor models. They derive flow probabilities on these factors to visualize and quantify the uncertainty associated with them, while looking for a "minimal rotation" of factors through orthogonal rotations. They show that this technique is suited to derive fine-grained maps for assessing uncertainty in segmentation results, as well as to tweak computed segmentations - which could prove useful for experts using such tools. Strengths And Weaknesses I find this piece of work sound and interesting. Flow probabilities (FP), after factors have been rotated in a meaningful way, make for a nice object to intuitively visualize uncertainty, and the authors show a convincing piece of code allowing users to update segmentations thanks to these FP. Significance The described method could prove very useful if it can indeed help domain experts perform fast segmentation tasks while providing them with intuitive uncertainty measures. However, it is yet unclear to me whether it can help discard entire segmented zones when the number of classes is higher than 2 or 3 (see Question 1). Originality This paper could be considered mildly original as it mostly combines existing results from SSNs, factor analysis, and flow probabilities. However, I think the idea to view the low-rank models underlying SSNs as factor models offers a nice perspective on these models, and that this aspect is more important than pure originality. Quality I find this piece of work to be well written and illustrated. I appreciate that the authors bundled a repo that I could use out of the box (I tested the notebook and read a great deal of the code). As is, the codebase is hard to grasp and use though, and I think it would greatly benefit from being documented more extensively (docstrings for all methods would be nice) and tested (tests make for a nice way to understand the overall structure of the codebase and the usage of each method). Please cite used packages in the main document (numpy, scipy, sklearn, torch, einops to name a few). Clarity Although this work calls for very visual and easily understood experiments, I found the article a bit difficult to dive into. I think it would benefit from having a figure describing the overall procedure, training steps, and connection between concepts (loadings, factors, latent variables, FPs, rotations and rotation criteria). It could be included in the supp. mat. I think the intuitions leading to Proposition 1 would also benefit from being illustrated in a figure. Questions Why do you think rotated factors (from FP-Quartimax for instance) seem to be so different across tested datasets? In particular, as you mention at lines 266-268, uncertainty seems to be significant only on class borders for CamVid (cf. Fig 11 in the supplemental material), which is the dataset with the highest number of classes. Is your method capable of showing uncertainty for attributed classes not only on borders, and if not, isn't this a major limitation of your model? Are you using loadings or rotated loadings (columns of Γ or Γ ⋅ O) in the separation criterion introduced in Section 5.1.2? It is confusing to me that you should use loadings here, when you use rotated loadings at line 222. Limitations The authors mention the computation time of flow-probability based rotation, but I think Question 1 could be an important limitation of this work.
NIPS
Title Structuring Uncertainty for Fine-Grained Sampling in Stochastic Segmentation Networks Abstract In image segmentation, the classic approach of learning a deterministic segmentation neither accounts for noise and ambiguity in the data nor for expert disagreements about the correct segmentation. This has been addressed by architectures that predict heteroscedastic (input-dependent) segmentation uncertainty, which indicates regions of segmentations that should be treated with care. What is missing are structural insights into the uncertainty, which would be desirable for interpretability and systematic adjustments. In the context of state-of-the-art stochastic segmentation networks (SSNs), we solve this issue by dismantling the overall predicted uncertainty into smaller uncertainty components. We obtain them directly from the low-rank Gaussian distribution for the logits in the network head of SSNs, based on a previously unconsidered view of this distribution as a factor model. The rank subsequently encodes a number of latent variables, each of which controls an individual uncertainty component. Hence, we can use the latent variables (called factors) for fine-grained sample control, thereby solving an open problem from previous work. There is one caveat though: factors are only unique up to orthogonal rotations. Factor rotations allow us to structure the uncertainty in a way that endorses simplicity, non-redundancy, and separation among the individual uncertainty components. To make the overall and factor-specific uncertainties at play comprehensible, we introduce flow probabilities that quantify deviations from the mean prediction and can also be used for uncertainty visualization. We show on medical-imaging, earth-observation, and traffic-scene data that rotation criteria based on factor-specific flow probabilities consistently yield the best factors for fine-grained sampling. ∗both authors contributed equally 36th Conference on Neural Information Processing Systems (NeurIPS 2022).
1 Introduction Semantic segmentation is the computer vision task of assigning a class to each pixel of an image. Examples of popular applications are the segmentation of medical images [1, 2, 39], land-cover classification in earth observation [28, 37, 42], and segmentation of imagery taken by autonomous vehicles [14, 25, 43]. Semantic segmentation tasks are affected by heteroscedastic (input-dependent) aleatoric uncertainty [26, 27] that is also called data uncertainty. Aleatoric uncertainty can emerge in the form of label noise in the training data, differing expert opinions about the true segmentation, or ambiguity already contained in the data, for example, caused by technical restrictions like the image resolution [16, 20, 27]. To account for the prevailing aleatoric uncertainty, various probabilistic architectures have been proposed [3, 21, 24, 27, 28, 31]. The work [11] demonstrates that, in general, uncertainties predicted by probabilistic segmentation architectures correlate positively with estimation errors, including those obtained from ensemble methods [23, 29] and MC-dropout [15]. Nevertheless, deterministic segmentation architectures [7, 35] are still predominantly used [2, 44]. This may be because for practitioners it is often not clear how to take advantage of the predicted uncertainties. For instance, a limited number of sampled segmentations usually does not represent the predicted uncertainty well. As shown in Figure 1 (top), overview plots for the predicted uncertainty like entropy [11, 27] can be generated. They indicate areas of high uncertainty, where practitioners should take care. However, they do not convey pixel-wise correlations of uncertainty, that is, how changes in the segmentation of one region of an image affect changes in another. Moreover, overview plots cannot explain the overall uncertainty in terms of smaller uncertainty components, which would be desirable towards an interpretation of the uncertainty following the principles of problem decomposition and divide-and-conquer [30]. In the context of the recently introduced stochastic segmentation networks (SSNs) [31] that we explain below, the lack of understanding the overall uncertainty in terms of independent components connects to a problem that was also observed by the authors of [31]. They pointed out in their accompanying demo that 'fine-grained sample control' is still missing. Indeed, the identification of independent or at least reasonably distinguished components of uncertainty represents a natural solution to this problem. We seek such components for SSNs with the goal of manipulating them individually for generating and fine-adjusting segmentations. SSNs model uncertainty via a low-rank multivariate Gaussian distribution on the logits, that is, on the network output before the softmax is applied, see Section 2. Surprisingly, no use has been made of the fact that this uncertainty model itself offers a straightforward way for distinguishing between uncertainty components.
This may be because the low-rank model was originally proposed for reducing the number of parameters. Here, we center our approach around the semantics of the low-rank model as a factor model. Factor models structure the overall uncertainty into individual components of uncertainty, each of which is governed by a single latent variable (called factor). Therefore, we solve the open problem from [31] by using the latent factor variables as control variables for a systematic exploration of the predicted heteroscedastic segmentation uncertainty. For the best result, however, some additional work is necessary: As is known from exploratory factor analysis [13, 40], the latent variables in factor models are only unique up to orthogonal rotations. Hence, they should be rotated for increased interpretability [6, 9, 40], which in our case amounts to generating more useful controls. Good controls encode uncertainty components that are simple, non-redundant, and separable in the sense that they affect distinguished image regions or classes. To evaluate these aspects, in Section 3 we introduce flow probabilities that quantify deviations from the mean prediction, which also enables uncertainty visualization, see Figure 1. Specifically, we compute factor-specific flow probabilities that quantify the impact of the uncertainty components encoded by individual factors. In Section 4, we fuse factor-specific flow probabilities with classic rotation criteria [6, 10] from exploratory factor analysis. In Section 5, we show that these fused criteria generally result in the best possible controls. Note that as a by-product, computing full flow probabilities for the overall uncertainty also yields a new type of overview plot that does not aggregate class-specific information about the uncertainty, see Figure 1 (top). Before we summarize our main contributions, we would like to emphasize that we do not benchmark SSNs as they have already been proven to produce state-of-the-art results w.r.t. various metrics, for instance, generalized energy distance to the ground truth distribution [24, 31]. Hence, it is safe for us to assume that after successful training, SSNs are capable of predicting the aleatoric uncertainty for a given input image reasonably well. Based on that, our main contributions are: (1) control variables for the contributions of individual, factor-specific uncertainty components for fine-grained sampling (given by the latent factor variables), (2) flow probabilities for quantifying and visualizing overall and factor-specific uncertainties, (3) rotations based on factor-specific flow probabilities, which structure the uncertainty components and thereby provide simpler, less redundant, and well-separated control variables. Please find an overview figure of our contributions in Section A of our supplement. Additionally, we made the code for the proposed methods and experiments available under https://github.com/JakobCode/StructuringSSNs. 2 Factor modeling in stochastic segmentation networks Stochastic segmentation networks (SSNs) [31] are characterized by modeling the pixel-wise logits in the network head as a low-rank multivariate Gaussian distribution, that is, $p(\eta \mid x) \sim \mathcal{N}\left(\mu(x),\, \Gamma(x)\Gamma(x)^\top + \Psi(x)\right)$. Here, $\eta \in \mathbb{R}^n$ are the $n = hwc$ logits for an input image $x$ of size $h \times w$ and a classification problem with $c$ classes.
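As a brief aside, the distribution just defined maps directly onto a standard library primitive, which may help connect the notation to code. The sketch below instantiates it with PyTorch's torch.distributions.LowRankMultivariateNormal; all shapes and parameter values are toy stand-ins for what the SSN backbone would actually predict.

```python
import torch
from torch.distributions import LowRankMultivariateNormal

h, w, c, r = 8, 8, 3, 10          # toy sizes; n = h*w*c
n = h * w * c
mu = torch.randn(n)               # stand-in for the mean logits mu(x)
Gamma = torch.randn(n, r)         # stand-in for the low-rank square root Gamma(x)
psi_diag = torch.rand(n) + 1e-3   # positive diagonal of Psi(x)

# Covariance is Gamma Gamma^T + diag(psi_diag), matching the SSN head.
dist = LowRankMultivariateNormal(loc=mu, cov_factor=Gamma, cov_diag=psi_diag)
eta = dist.sample()                     # one logit sample of shape (n,)
seg = eta.view(h, w, c).argmax(dim=-1)  # corresponding hard segmentation
```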
The parameters of the Gaussian distribution are the mean $\mu(x) \in \mathbb{R}^n$ and the covariance matrix, which decomposes into a matrix of rank bounded by $r \ll n$ with square root $\Gamma(x) \in \mathbb{R}^{n \times r}$ and a diagonal matrix $\Psi(x) \in \mathbb{R}^{n \times n}$ with positive diagonal elements. These parameters are the output of a backbone segmentation network with input $x$ [7, 35]. Originally, the low-rank parameterization was solely introduced as a means for reducing the number of parameters [31]. However, the low-rank covariance model has a deeper structural meaning as a factor model [13, 38, 40]. Factor models are characterized by a typically small number of latent variables, called factors, that explain all correlations among a larger number of observed variables. In our case, the joint distribution of the observed logits and the latent factor variables $z \in \mathbb{R}^r$ is given by $(\eta, z) \sim \mathcal{N}\left( \begin{pmatrix} \mu \\ 0 \end{pmatrix}, \begin{pmatrix} \Gamma\Gamma^\top + \Psi & \Gamma \\ \Gamma^\top & I_r \end{pmatrix} \right)$, where $I_r$ is the $(r \times r)$ identity matrix, and for brevity, we omit the dependence on the input $x$ in the notation from now on. Here, the interactions of the latent variables with the observed logits are described in the matrix $\Gamma$ of factor loadings: Each column contains the loadings of one latent factor variable on the observed logits, yielding structured uncertainty. The loading characteristic becomes clear in the following sampling procedure for the logits from the factor model: $\eta = \mu + \Gamma z + \Psi^{1/2}\varepsilon$, where $z \sim \mathcal{N}(0, I_r)$, $\varepsilon \sim \mathcal{N}(0, I_n)$. (1) This procedure results from sampling from the joint distribution $p(\eta, z) = p(z)\,p(\eta \mid z)$ as follows: First, the latent variables are sampled according to $z \sim \mathcal{N}(0, I_r)$. Second, the logits are sampled from the conditional distribution $\eta \mid z \sim \mathcal{N}(\mu + \Gamma z, \Psi)$. Subsequently, only the logits are observed. Sampling the logits as in Equation (1) provides control over the contributions of the different latent factor variables. This invites individual manipulation of the factors, enabling fine-grained sampling. However, as pointed out in the introduction, factors should be rotated beforehand because they are only unique up to orthogonal rotations, see Lemma 1 in the supplement. In particular, orthogonal rotations of the latent factor variables do not change the marginal distribution $p(\eta)$ of the logits. Indeed, replacing $\Gamma z$ by $\Gamma O z$ for an orthogonal matrix $O \in \mathbb{R}^{r \times r}$ in Equation (1) yields an equivalent sampling procedure. Therefore, one way to understand orthogonal rotations is that they change the basis of the $r$-dimensional affine space $\{\mu + \Gamma z : z \in \mathbb{R}^r\}$ of the (noiseless) logits, where the basis elements are the columns of the factor loading matrix. 3 Flow probabilities In this section, we develop the notion of flow probabilities as our main tool for the analysis of factor models in SSNs. Flow probabilities can roughly be understood as probabilities of deviations from the mean prediction. We use flow probabilities for uncertainty quantification and visualization. 3.1 Factor-specific flow probabilities Because we need to understand and assess individual factors, it is important to analyze and quantify the uncertainty that is encoded in them. A useful tool for that is to compute factor-wise distributions of class predictions, for which we vary an individual latent factor variable $z$ with associated factor loadings $\gamma$ (a column of $\Gamma$; later we use the notation $\Gamma_{:,j}$ for a specific column of $\Gamma$). We keep the influence of all other latent factor and noise variables fixed to zero.
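Before deriving this distribution in closed form below, the described single-factor variation is easy to prototype empirically. The following NumPy sketch (hypothetical names; mu and gamma would come from the model) estimates the distribution of class predictions when only one factor varies and all other latent and noise variables are fixed to zero.

```python
import numpy as np

def empirical_factor_distribution(mu, gamma, h, w, c, m=1000, seed=0):
    """Monte-Carlo estimate of the class distribution induced by a single
    factor with loadings gamma (one column of Gamma); all other latent
    and noise variables are held at zero."""
    rng = np.random.default_rng(seed)
    counts = np.zeros((h * w, c))
    for z in rng.standard_normal(m):                 # z ~ N(0, 1)
        logits = (mu + gamma * z).reshape(h * w, c)
        counts[np.arange(h * w), logits.argmax(axis=1)] += 1.0
    return counts / m   # empirical counterpart of P(gamma) in Eq. (2)
```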
Consequently, for $z \sim \mathcal{N}(0, 1)$ we compute the following expected value: $P = P(\gamma) = \int E(\mu + \gamma z)\,p(z)\,dz \in [0, 1]^{(wh) \times c}$. (2) Here, $E(\mu + \gamma z) \in \{0, 1\}^{(wh) \times c}$ is the matrix whose rows correspond to the pixel-wise one-hot encoded class predictions, which are obtained by reshaping the logits $\mu + \gamma z$ into shape $(wh, c)$ and then applying an argmax along the class dimension. Note that the class probabilities $P$ can be understood as a function of the factor loadings $\gamma$ since the mean logits $\mu$ are fixed. We solve Equation (2) analytically. For that, we consider pixels separately since for (flat) spatial index $i \in [wh] = \{1, \ldots, wh\}$, the probabilities $p_{ik}$ from the $i$-th row of $P$ only depend on its associated mean logits and factor loadings. Specifically, with the definition $g_{ik}(z) = \mu_{ik} + \gamma_{ik} z$ for $k \in [c]$, for fixed $z$ the predicted class is $\arg\max_k g_{ik}(z)$. Hence, from Equation (2) we get that $p_{ik} = p_{ik}(\gamma) = \int \mathbb{1}[k = \arg\max_{k'} g_{ik'}(z)]\,p(z)\,dz$, $k \in [c] = \{1, \ldots, c\}$, (3) where $\mathbb{1}$ is the indicator function. We solve Equation (3) for binary classification first ($k \in \{1, 2\}$). Assuming that only the logits for the class $k = 2$ are learned, we set $g_{i1}(z) = 0$ for consistency in Equation (3). Then, with $\mu = \mu_{i2}$ and assuming that $\gamma = \gamma_{i2} \neq 0$, the probability $p_{i2}$ evaluates as $p_{i2} = \int \mathbb{1}[\mu + \gamma z \geq 0]\,p(z)\,dz = \begin{cases} \psi(-\mu/\gamma), & \gamma < 0 \\ 1 - \psi(-\mu/\gamma), & \gamma > 0 \end{cases}$. Here, $\psi$ is the cumulative distribution function of a standard normal random variable. For the last equality, observe that $-\mu/\gamma$ is the intersection point of the straight line $g_{i2}(z) = \mu + \gamma z$ with the $z$-axis $g_{i1}$. If $\gamma = 0$, the argmax is not unique and probabilities can be split. For clarity of the technical exposition, we assume that the argmax is unique in the following. For general multi-class problems, the probabilities $p_{ik}$ in Equation (3) can be derived from the class-prediction function $z \mapsto \arg\max_{k'} g_{ik'}(z)$. In this function, the class prediction can only change at intersection points $z$ of two non-parallel straight lines $g_{ik}$ and $g_{ik'}$, that is, $z = (\mu_{ik} - \mu_{ik'})/(\gamma_{ik'} - \gamma_{ik})$. Generally, if a class $k$ is predicted for some $z$, then all $z$ values for which the $k$-th class is predicted form a non-empty interval $(\underline{z}_{ik}, \overline{z}_{ik}) \subset \mathbb{R}$. The end points of this interval can either be $-\infty$, an intersection point of $g_{ik}$, or $\infty$. In practice, the intervals $(\underline{z}_{ik}, \overline{z}_{ik})$ can be computed by sorting all intersection points and checking the values of the class-prediction function on the resulting partition of the $z$-axis. If a class $k$ is never predicted, we set $\underline{z}_{ik} = \overline{z}_{ik} = -\infty$. Finally, the class probability is given by $p_{ik} = \psi(\overline{z}_{ik}) - \psi(\underline{z}_{ik})$, where we use the conventions that $\psi(-\infty) = 0$ and $\psi(\infty) = 1$. Observe that the formula for binary problems given above is a special case of the one given for $p_{ik}$ here. Overall, we obtain the following result: Proposition 1. Define $\underline{Z} = (\underline{z}_{ik})$ and $\overline{Z} = (\overline{z}_{ik})$ with entries $i \in [wh]$ and $k \in [c]$. Then, the distribution of predicted classes under variation of the factor with associated loadings $\gamma$ is given by $P(\gamma) = \psi(\overline{Z}) - \psi(\underline{Z})$, where $\psi$ applies the cumulative distribution function of a standard normal variable element-wise. Now, to highlight the difference to the prediction from the mean $\mu$, we compute factor-specific flow probabilities as $F(\gamma) = P(\gamma) - E(\mu) = \psi(\overline{Z}) - \psi(\underline{Z}) - E(\mu) \in [-1, 1]^{(wh) \times c}$. Positive entries in the $k$-th column $F(\gamma)_{:,k}$ indicate that the prediction for the corresponding pixels changes with positive probability from the mean prediction to class $k$.
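Proposition 1 translates into a short per-pixel routine. The sketch below (illustrative names, not the authors' implementation) sorts the pairwise intersection points of the lines $g_{ik}$, identifies the predicted class on each resulting segment of the $z$-axis, and accumulates differences of the standard normal CDF; summing per segment also remains correct if a class were to win on several segments.

```python
import numpy as np
from scipy.stats import norm

def pixel_class_probabilities(mu_i, gamma_i):
    """Analytic per-pixel probabilities p_ik as in Proposition 1.
    mu_i, gamma_i: length-c arrays of mean logits and factor loadings."""
    c = len(mu_i)
    # Intersection points of all pairs of non-parallel lines g_ik(z).
    pts = [(mu_i[k] - mu_i[l]) / (gamma_i[l] - gamma_i[k])
           for k in range(c) for l in range(k + 1, c)
           if gamma_i[k] != gamma_i[l]]
    cuts = np.sort(np.asarray(pts))
    edges = np.concatenate(([-1e12], cuts, [1e12]))  # stand-ins for -inf, inf
    probs = np.zeros(c)
    for lo, hi in zip(edges[:-1], edges[1:]):
        z_mid = 0.5 * (lo + hi)                      # probe point of the segment
        k = np.argmax(mu_i + gamma_i * z_mid)        # class predicted there
        probs[k] += norm.cdf(hi) - norm.cdf(lo)
    return probs                                     # one row of P(gamma)
```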
Based on this fact, factor-specific flow probabilities enable visualizations of the impact of individual factors, see Figure 1 (bottom rows). The visualizations are obtained by calculating a mixture of class-specific colors with weights given by the (factor-specific) flow probabilities, see the supplement for details. As factor-specific flow probabilities represent the real impact of a factor on output segmentations, they will also be a key to quality assessment of the factors, see Section 4. For future reference, we denote by $F(\Gamma) \in [-1, 1]^{(whc) \times r}$ the matrix of all factor-specific flow probabilities that is obtained by concatenating the factor-specific flow probabilities $F(\Gamma_{:,j})$ as columns after flattening, where $\Gamma_{:,j}$ is the $j$-th column of $\Gamma$. Finally, since we use the latent factor variables as control variables for fine-grained sampling, it is helpful to also compute one-sided flow probabilities that encode the uncertainty for respectively positive and negative values of the latent factor variable. Corollary 1. Using the notation from Proposition 1, the one-sided factor-specific flow probabilities for a factor with loadings $\gamma$ compute as $F^+(\gamma) = \int_{[0,\infty)} E(\mu + \gamma z)\,p(z)\,dz - E(\mu) = \psi(\max(0, \overline{Z})) - \psi(\max(0, \underline{Z})) - E(\mu)$ and $F^-(\gamma) = \int_{(-\infty,0]} E(\mu + \gamma z)\,p(z)\,dz - E(\mu) = \psi(\min(0, \overline{Z})) - \psi(\min(0, \underline{Z})) - E(\mu)$. 3.2 Uncertainty quantification for the full factor model The idea of computing factor-specific flow probabilities for uncertainty quantification and visualization extends to the full factor model. For that, analogous to Equation (2), we compute the distribution of class predictions. However, this time we take the expected value over the full distribution of the logits given in Equation (1): $P^{\mathrm{full}} = \int E(\eta)\,p(\eta)\,d\eta = \int E(\mu + \Gamma z + \Psi^{1/2}\varepsilon)\,p(z)\,p(\varepsilon)\,dz\,d\varepsilon \in [0, 1]^{(wh) \times c}$. (4) The change from the mean prediction $E(\mu)$ is then given by the full flow probabilities, which we compute as $F^{\mathrm{full}} = P^{\mathrm{full}} - E(\mu) \in [-1, 1]^{(wh) \times c}$. Visualizing full flow probabilities as above by weighted mixtures of class-specific colors yields a new type of overview plot for the uncertainty, see Figure 1 (top row) for an example. Though for our work only a by-product, it has the advantage that it does not aggregate information about class-specific uncertainties, in contrast to overview plots like entropy (see also Figure 1, top row). In practice, the integral from Equation (4) is difficult to evaluate. This is because the argmax in $E(\eta)$ technically amounts to determining a maximum of multivariate linear functions. Hence, we approximate the integral using Monte-Carlo integration with $m$ i.i.d. samples $z^{(1)}, \ldots, z^{(m)} \in \mathbb{R}^r$ drawn from $\mathcal{N}(0, I_r)$ and i.i.d. samples $\varepsilon^{(1)}, \ldots, \varepsilon^{(m)} \in \mathbb{R}^{whc}$ drawn from $\mathcal{N}(0, I_{whc})$. The matrix $P^{\mathrm{full}}$ of class probabilities is thus approximated by $P^{\mathrm{full}} \approx \frac{1}{m} \sum_{j=1}^{m} E(\mu + \Gamma z^{(j)} + \Psi^{1/2}\varepsilon^{(j)}) \in [0, 1]^{(wh) \times c}$. The matrix $F^{\mathrm{full}}$ of flow probabilities can be approximated similarly. In the supplement, we show empirically that the diagonal noise term has little impact on the flow probabilities. Hence, we can focus on the structural uncertainty that is induced by the latent factor variables. 4 Factor rotations As pointed out in Section 2, the latent variables/factors in factor models are only unique up to orthogonal rotations. Therefore, it is common practice in exploratory factor analysis to rotate them in order to maximize their interpretability [6, 22, 38]. The factor model in an SSN represents the predicted uncertainty for a given input image, where the factors themselves encode components of the overall uncertainty.
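Returning briefly to Section 3.2 before rotations are discussed further: the Monte-Carlo approximation of $P^{\mathrm{full}}$ and $F^{\mathrm{full}}$ is short enough to sketch in full. Names are hypothetical; mu, Gamma, and psi_diag stand for the SSN head's outputs.

```python
import numpy as np

def full_flow_probabilities(mu, Gamma, psi_diag, h, w, c, m=100, seed=0):
    """Monte-Carlo approximation of P_full from Eq. (4) and of
    F_full = P_full - E(mu), using m joint samples of (z, eps)."""
    rng = np.random.default_rng(seed)
    n, r = Gamma.shape
    p_full = np.zeros((h * w, c))
    for _ in range(m):
        z = rng.standard_normal(r)
        eps = rng.standard_normal(n)
        logits = (mu + Gamma @ z + np.sqrt(psi_diag) * eps).reshape(h * w, c)
        p_full[np.arange(h * w), logits.argmax(axis=1)] += 1.0 / m
    mean_pred = mu.reshape(h * w, c).argmax(axis=1)
    e_mu = np.eye(c)[mean_pred]      # one-hot mean prediction E(mu)
    return p_full - e_mu             # F_full in [-1, 1]^{(wh) x c}
```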
We intend to use the factors as control variables for fine-grained sampling. From that we derive the following quality criteria: (1) The number of 'relevant' factors should be small, where relevant factors are characterized by having a 'significant' effect on output segmentations. (2) Relevant factors should be separable from each other in the sense that they encode distinguished uncertainty components. (3) Each area in the input image should be affected by only few factors. Here, the first criterion ensures that the number of impactful control variables is reduced to a necessary minimum, and the second criterion requires that the corresponding uncertainty components are distinct. Together, the first two criteria discourage factor redundancy. The last criterion reflects the general requirement of sparsity and simplicity that is also found among Thurstone's rules [41] for simple structure of a factor loading matrix, which is the primary goal in exploratory factor analysis [13]. However, in our case we rather require a simple structure on the matrix $F(\Gamma)$ of factor-specific flow probabilities (see Section 3.1) since they measure the actual impact of the factors on output segmentations. In Section 5, we evaluate different rotation criteria that we present in the following. First, we consider classic rotation criteria. Here, for a factor loading matrix $\Gamma = (\gamma_{ij}) \in \mathbb{R}^{n \times r}$, Crawford and Ferguson [10] defined the CF family of rotation criteria: $q_\kappa(\Gamma) = (1 - \kappa) \sum_{i=1}^{n} \sum_{j=1}^{r} \gamma_{ij}^2 \sum_{l=1,\, l \neq j}^{r} \gamma_{il}^2 + \kappa \sum_{j=1}^{r} \sum_{i=1}^{n} \gamma_{ij}^2 \sum_{l=1,\, l \neq i}^{n} \gamma_{lj}^2$, $\kappa \in [0, 1]$. The CF family is a generalization of the widely used orthomax family [17], where the parameter $\kappa$ controls a trade-off between row complexity (first sum) and column complexity (second sum). We focus on popular choices: $\kappa = 1/n$ yields an equivalent version of the Varimax criterion [22], which is the most used method. Intuitively, it tries to maximize the variance of the squared factor loadings. Next, $\kappa = 0$ yields the Quartimax criterion that minimizes the number of factors needed to explain a variable (in our case, the segmentation uncertainty of a pixel). Finally, $\kappa = r/(2n)$ yields the Equamax criterion that represents a combination of Varimax and Quartimax. Classic rotation criteria do not consider the actual impact of factors on predicted segmentations because they only take the factor loadings $\Gamma$ but not the mean $\mu$ into account. Therefore, we incorporate factor-specific flow probabilities into rotation criteria by applying a base rotation criterion $q$ on the flow probabilities instead of the factor loadings. Hence, the objective function to be minimized becomes $O \mapsto q(F(\Gamma O))$ instead of $O \mapsto q(\Gamma O)$. We call the new family of rotation criteria the FP family. For instance, FP-Varimax applies the Varimax criterion on the flow probabilities. 5 Experiments The purpose of our experiments is to (1) evaluate rotation criteria based on the quality of rotated factors, and (2) demonstrate the merits of fine-grained sample control based on reasonably-rotated factors. Data sets and training. First, we use the LIDC data set [1] in its pre-processed version from [28] that contains 2D slices of 3D thorax scans of size 128 × 128 pixels. Each slice has four ground truth segmentations from different experts. Second, we use the multi-spectral Sentinel-2 data from the SEN12MS data set [36] with images of size 244 × 244 pixels and coarse labels for semantic segmentation of 10 types of land cover.
Third, we use the CamVid data set [5], which contains images of road scenes in resolution 480 × 360 and is pixel-wise labeled into 11 different classes. Additional details and statistics about the data sets (including splits) can be found in the supplement, where we also detail all training procedures. We use r = 10 throughout our experiments, which accounts for the varying uncertainty in different images and has also been used in [31]. We would like to emphasize again that we do not benchmark SSNs since they have already been shown to be state of the art [24, 31]. For examples of uncertainty predictions, see Figure 1, Figure 2, and the supplement. Computational aspects. We used Python 3.7, particularly with the libraries PyTorch 1.11 [32], scikit-learn [33], NumPy [18], and einops [34]. On a single core of an Intel Xeon Platinum 8260, factor-specific and full flow probabilities can be computed in the sub-second range without significant differences w.r.t. the used rotation, see the supplement for details. To obtain the optimal rotation matrices for the different rotation criteria, we adapted gradient projection algorithms from [4] to our needs. In our current implementation, optimization for criteria based on flow probabilities can take up to a few minutes, see the supplement for details. In practice, we recommend pre-computing rotations whenever possible. 5.1 Evaluation of rotation criteria We evaluate rotations according to the quality criteria from Section 4, that is, (1) the relevance of individual factors, (2) the separability of the relevant factors, and (3) the sparsity of the factors. 5.1.1 Factor relevance Here, we measure the impact of individual factors on the segmentation. In this section, we use the notation $\tilde{\Gamma}$ to denote a matrix of factor loadings that can be either rotated or unrotated. A simple measure for the impact of the $j$-th factor with loadings $\tilde{\Gamma}_{:,j} \in \mathbb{R}^n$ is given by the $\ell_1$-norm $\|F(\tilde{\Gamma}_{:,j})\|_1$ of the factor-specific flow probabilities. In what follows, we consider relevance curves that show how many factors exceed a $\tau$-fraction of the overall uncertainty for varying thresholds $\tau \geq 0$. Specifically, we compute $n_\tau = |R_\tau|$, where $R_\tau = \{j : \|F(\tilde{\Gamma}_{:,j})\|_1 \geq \tau \|F^{\mathrm{full}}(\tilde{\Gamma})\|_1\}$, and we measure the overall uncertainty by the $\ell_1$-norm of the full flow probabilities, approximated by 100 Monte-Carlo samples. Results and discussion. The results of averaging $n_\tau$ over the respective test images are shown in Figure 3 (top row). First, classic rotations barely reduce the number of relevant factors compared to the unrotated representation. This is no surprise since they do not take the mean logits into account and only try to simplify the structure of the factor loadings $\tilde{\Gamma}$. Nevertheless, even classic rotations already seem to decrease redundancy. However, as intended by design, FP rotations reduce the number of relevant factors to a much greater extent. Especially for LIDC and SEN12MS, already small thresholds $\tau$ are sufficient to cut off most factors below the threshold: Figure 3 (top) shows that all FP rotations behave similarly, with curves declining sharply for small $\tau$. Consequently, FP rotations tend to produce a huge gap between a small number of relevant factors and the remaining ones, see Figure 1 for a visual example. This is desirable since it allows us to focus only on a few relevant and meaningful factors during the exploration of the predicted uncertainty. It may be harder to find such factors if the predicted uncertainty has less inherent structure.
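The relevance curves defined above reduce to a few lines of code. A minimal sketch follows; the names are illustrative, fp_factors stands for the (whc, r) matrix $F(\tilde{\Gamma})$ with flattened factor-specific flow probabilities as columns, and fp_full for a Monte-Carlo estimate of the full flow probabilities.

```python
import numpy as np

def relevance_curve(fp_factors, fp_full, taus):
    """n_tau: number of factors whose factor-specific flow probabilities
    reach a tau-fraction of the overall uncertainty (both in l1-norm)."""
    factor_mass = np.abs(fp_factors).sum(axis=0)  # per-factor l1-norm
    total_mass = np.abs(fp_full).sum()            # l1-norm of F_full
    return [int((factor_mass >= tau * total_mass).sum()) for tau in taus]
```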
CamVid is an example of such less structured uncertainty: its uncertainty predictions are often restricted to class borders, which means that they are less spatially correlated. However, even for CamVid, there is structured uncertainty, see Figure 13 (Section D.2.3) in the supplement. Figure 3 (top) shows that also for CamVid, FP rotations significantly reduce the number of relevant factors. 5.1.2 Separability of relevant factors The second quality criterion from Section 4 concerns factor separation. Here, for a separation threshold $\rho \in [0, 1]$, we compute the largest possible fraction of pairwise separated relevant factors: $s_\tau(\rho) = n_\tau^{-1} \cdot \max\{|J| : J \subset R_\tau,\ \cos(F(\tilde{\Gamma}_{:,j}), F(\tilde{\Gamma}_{:,j'})) \leq \rho \text{ for all } j \neq j' \in J\} \in [0, 1]$. If $n_\tau = 0$, we set $s_\tau(\rho) = 0$ for all $\rho$. The separation of two factors is measured by the cosine similarity of their factor-specific flow probabilities, which is always non-negative since corresponding entries cannot have opposing signs. For $s_\tau(\rho)$, a value of one is best since it means that all relevant factors are also separated. For fixed relevance thresholds $\tau$, we also compute the area under the curve $\mathrm{AUC}(s_\tau)$ for the comparison of different rotation criteria. Results and discussion. In Figure 3 (bottom row), we show the separation scores $\mathrm{AUC}(s_\tau)$ for different relevance thresholds $\tau$, respectively averaged over all test images. FP rotations consistently beat classic rotations by a factor of around two in terms of AUC. Classic rotation criteria are still better than the unrotated representations (which form the real baseline). The AUC separation scores drop for thresholds $\tau$ that fail to determine the number of relevant factors sensibly because they are too small or too large. Notably, for classic rotation criteria, the separation scores $\mathrm{AUC}(s_\tau)$ respectively peak at a threshold $\tau$ for which the number of relevant factors nearly coincides with the one from FP rotations, compare the intersection of the curves in Figure 3 (top row). The peak of the separation scores is less pronounced for FP rotations, particularly for LIDC and SEN12MS, where the set of relevant factors is more stable across different thresholds $\tau$. For SEN12MS, the results also distinguish among the FP-rotation criteria, where FP-Quartimax seems to be slightly favored over the other FP rotations. This may be because Quartimax emphasizes row sparsity the most, which reduces cosine similarities. We investigate row sparsity further in the next section. 5.1.3 Factor sparsity To evaluate to which degree different factors affect the same regions of the input image, we measure row sparsity of the factor-specific flow probabilities $F(\tilde{\Gamma}) \in [-1, 1]^{(whc) \times r}$. For that, for a (row) vector $v \in \mathbb{R}^r$, let $h(v) = \frac{\sqrt{r} - \|v\|_1 / \|v\|_2}{\sqrt{r} - 1} \in [0, 1]$ be the Hoyer measure [19], where values close to one indicate a high degree of sparsity. For us, sparsity only matters in rows with actual uncertainty for the pixel/class, therefore we additionally weigh each row proportionally to its $\ell_1$-norm. Hence, as a final measure, we compute the weighted Hoyer measure $H(\tilde{\Gamma}) = \|F(\tilde{\Gamma})\|_1^{-1} \cdot \sum_{i=1}^{whc} \|F(\tilde{\Gamma})_{i,:}\|_1 \cdot h(F(\tilde{\Gamma})_{i,:})$. Results and discussion. FP rotations generally concentrate the uncertainty for single regions/classes in only few components, see Table 1. This means that FP rotations yield the most disentangled uncertainty components, which also indicates strong separation. For LIDC and SEN12MS, the amount of predicted uncertainty varies greatly across test images, causing high standard deviations.
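For completeness, the weighted Hoyer measure from Section 5.1.3 can be computed as in the following sketch (hypothetical function name; rows without any flow probability are skipped, as their weight is zero anyway).

```python
import numpy as np

def weighted_hoyer(fp):
    """Weighted Hoyer measure H of a (whc, r) flow-probability matrix:
    row-wise sparsity h(v) = (sqrt(r) - ||v||_1/||v||_2) / (sqrt(r) - 1),
    averaged with weights proportional to the rows' l1-norms."""
    r = fp.shape[1]
    l1 = np.abs(fp).sum(axis=1)
    active = l1 > 0                     # rows carrying any uncertainty
    if not active.any():
        return 0.0
    l2 = np.linalg.norm(fp[active], axis=1)
    h = (np.sqrt(r) - l1[active] / l2) / (np.sqrt(r) - 1.0)
    return float((l1[active] * h).sum() / l1[active].sum())
```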
However, in general, large correlating uncertainty components can be found, allowing high row sparsity. This is in contrast to CamVid, where uncertainty is typically predicted for class borders. 5.2 Fine-grained sampling Monteiro et al. [31] already manipulated samples post-hoc by simple linear inter- or extrapolation w.r.t. the mean. However, they noted that additional, more fine-grained sample control is necessary for a systematic exploration of the sample space: the interpolation approach lacks a solid foundation in the uncertainty model, and it relies on having a useful sample to start with. The meaningful control variables that we obtain by rotating factors provide what has been missing. They enable users to systematically explore the sample space by fine-grained sampling: starting from the mean prediction, they can inspect alternatives, correct possible mistakes, and fine-adjust borders. In particular, they can manipulate the contribution of individual uncertainty components by manually setting the values of the corresponding factors. Pseudo-samples obtained in this way are shown in Figure 4. Alongside this paper, we provide an interface for fine-grained sampling. It allows the selection of a rotation criterion for a given input image (we recommend FP-Quartimax for a start), and control variables can be set conveniently using sliders. In the supplement, we provide some visuals. 6 Discussion and conclusion In this work, we interpreted the uncertainty model of stochastic segmentation networks (SSNs) as a factor model, which provides control variables for fine-grained sampling as requested by the authors of [31]. By (re-)structuring the uncertainty using rotations, we improved the controls and obtained as few relevant uncertainty components as possible, but as many as necessary. Here, it turned out that rotation criteria based on flow probabilities yield the most meaningful controls, where flow probabilities are a new quantification and visualization technique for the uncertainty in SSNs. Our controls allow users to systematically explore the predicted uncertainty and to fine-adjust samples. However, the exploration of the sample space is only useful if the overall predicted uncertainty makes sense, which can be ensured by proper training. Structuring and examining the uncertainty is especially useful if there is a significant amount of aleatoric uncertainty. One limitation of our current implementation is that the computation of flow-probability based rotations may take too long to be performed in an interactive scenario. However, there is significant potential for improving the optimization (scheme and parallelization). In any case, rotations should be precomputed whenever possible. Overall, we see a broader impact of our approach, which extends beyond the scope of SSNs. For instance, we believe that it could be used for large-scale image classification, where factor models have recently been employed for modeling class correlations [8]. Another promising application is to learn and inspect more structured latent spaces in (variational) autoencoders [12]. Structuring uncertainty as we do may be useful whenever a multivariate Gaussian forms part of a model. Furthermore, flow probabilities can also be used for other probabilistic segmentation architectures that have a mean prediction as a reference point. Notably, flow-probability overview plots for the predicted uncertainty keep class-specific information, in contrast to other overview plots like entropy.
To sum up, it is often easier to understand the whole in terms of smaller parts. In this light, we structured the predicted uncertainty of SSNs into meaningful smaller uncertainty components. Jointly, they enable fine-grained sample control, so for us, the sum of the parts is also greater than the whole. Acknowledgements We thank Prof. Dr.-Ing. Joachim Denzler for helpful comments and Ferdinand Rewicki for checking our work multiple times. We also thank all anonymous reviewers for their insightful feedback.
1. What is the focus and contribution of the paper on stochastic segmentation networks? 2. What are the strengths of the proposed approach, particularly in terms of factorization and sparsification of uncertainty? 3. What are the weaknesses of the paper regarding its ability to provide interpretable results? 4. Do you have any suggestions for intuitive examples that could demonstrate the usefulness of the proposed approach? 5. Can the authors provide timing information to illustrate the performance difference between the original SSN and the modified version?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The manuscript reinterprets stochastic segmentation networks (SSN 2020) as a factor model and thus adds latent factors governing the noise components within the single covariance of SSNs. Additionally, rotation of the factors with imposed sparsity leads to a parsimonious, and supposedly more interpretable, representation of the factors. The manuscript provides derivations of the reasoning behind the proposed representation and performs a rigorous empirical comparison of already available rotation approaches. The results in the main manuscript and the supplement, including the video, demonstrate that the approach works in providing uncertainty factors that can be individually manipulated. Strengths And Weaknesses Strengths A simple and pragmatic approach to extending SSN with an ability to control uncertainty components independently A needed take on sparsity and uncertainty to address interpretability of uncertainty representation. Weaknesses The major weakness, in my view, is that besides meeting the goals of factorization and sparsification of uncertainty, the manuscript is not convincing in that the created tool is interpretable and thus useful. The value of the method for interpretation of the model does not come through in any of the examples of the manuscript, the supplement, or the video. The most intuitively interpretable examples, from the CamVid dataset, do not add any information and look as if the uncertainty of all classes but, possibly, the cars is simply mixed around all the objects. Other demonstrations are also not helpful, and it is unclear how a user of the model would benefit from the new approach in either of the remaining examples. I do not look at satellite images every day, and that may be the reason the flagship example in the main manuscript and the supplied video does not convey much information, since the segmentation seems to be very poor (the DICE score would be really low). Questions Is there any way to show an intuitive example where the proposed way of displaying uncertainty can be useful, since everything shown seems no more helpful than rival methods such as entropy and others? Can you report timing information for vanilla SSN vs. the proposed modification to give a rough idea of how severe the slowdown is? Limitations The value of the proposed approach is not clear from the paper; this limits further impact of the paper since practitioners won't be able to appreciate the need for this method. Minor: computational complexity, as noted by the authors.
NIPS
Title Structuring Uncertainty for Fine-Grained Sampling in Stochastic Segmentation Networks Abstract In image segmentation, the classic approach of learning a deterministic segmentation neither accounts for noise and ambiguity in the data nor for expert disagreements about the correct segmentation. This has been addressed by architectures that predict heteroscedastic (input-dependent) segmentation uncertainty, which indicates regions of segmentations that should be treated with care. What is missing are structural insights into the uncertainty, which would be desirable for interpretability and systematic adjustments. In the context of state-of-the-art stochastic segmentation networks (SSNs), we solve this issue by dismantling the overall predicted uncertainty into smaller uncertainty components. We obtain them directly from the low-rank Gaussian distribution for the logits in the network head of SSNs, based on a previously unconsidered view of this distribution as a factor model. The rank subsequently encodes a number of latent variables, each of which controls an individual uncertainty component. Hence, we can use the latent variables (called factors) for fine-grained sample control, thereby solving an open problem from previous work. There is one caveat though: factors are only unique up to orthogonal rotations. Factor rotations allow us to structure the uncertainty in a way that endorses simplicity, non-redundancy, and separation among the individual uncertainty components. To make the overall and factor-specific uncertainties at play comprehensible, we introduce flow probabilities that quantify deviations from the mean prediction and can also be used for uncertainty visualization. We show on medical-imaging, earth-observation, and traffic-scene data that rotation criteria based on factor-specific flow probabilities consistently yield the best factors for fine-grained sampling. ∗both authors contributed equally 36th Conference on Neural Information Processing Systems (NeurIPS 2022).
1 Introduction Semantic segmentation is the computer vision task of assigning a class to each pixel of an image. Examples of popular applications are the segmentation of medical images [1, 2, 39], land-cover classification in earth observation [28, 37, 42], and segmentation of imagery taken by autonomous vehicles [14, 25, 43]. Semantic segmentation tasks are affected by heteroscedastic (input-dependent) aleatoric uncertainty [26, 27] that is also called data uncertainty. Aleatoric uncertainty can emerge in the form of label noise in the training data, differing expert opinions about the true segmentation, or ambiguity already contained in the data, for example, caused by technical restrictions like the image resolution [16, 20, 27]. To account for the prevailing aleatoric uncertainty, various probabilistic architectures have been proposed [3, 21, 24, 27, 28, 31]. The work [11] demonstrates that, in general, uncertainties predicted by probabilistic segmentation architectures correlate positively with estimation errors, including those obtained from ensemble methods [23, 29] and MC-dropout [15]. Nevertheless, deterministic segmentation architectures [7, 35] are still predominantly used [2, 44]. This may be because for practitioners it is often not clear how to take advantage of the predicted uncertainties. For instance, a limited number of sampled segmentations usually does not represent the predicted uncertainty well. As shown in Figure 1 (top), overview plots for the predicted uncertainty like entropy [11, 27] can be generated. They indicate areas of high uncertainty, where practitioners should take care. However, they do not convey pixel-wise correlations of uncertainty, that is, how changes in the segmentation of one region of an image affect changes in another. Moreover, overview plots cannot explain the overall uncertainty in terms of smaller uncertainty components, which would be desirable towards an interpretation of the uncertainty following the principles of problem decomposition and divide-and-conquer [30]. In the context of the recently introduced stochastic segmentation networks (SSNs) [31] that we explain below, the lack of understanding the overall uncertainty in terms of independent components connects to a problem that was also observed by the authors of [31]. They pointed out in their accompanying demo that 'fine-grained sample control' is still missing. Indeed, the identification of independent or at least reasonably distinguished components of uncertainty represents a natural solution to this problem. We seek such components for SSNs with the goal of manipulating them individually for generating and fine-adjusting segmentations. SSNs model uncertainty via a low-rank multivariate Gaussian distribution on the logits, that is, on the network output before the softmax is applied, see Section 2. Surprisingly, no use has been made of the fact that this uncertainty model itself offers a straightforward way for distinguishing between uncertainty components.
This may be because the low-rank model was originally proposed for reducing the number of parameters. Here, we center our approach around the semantics of the low-rank model as a factor model. Factor models structure the overall uncertainty into individual components of uncertainty, each of which is governed by a single latent variable (called factor). Therefore, we solve the open problem from [31] by using the latent factor variables as control variables for a systematic exploration of the predicted heteroscedastic segmentation uncertainty. For the best result, however, some additional work is necessary: As is known from exploratory factor analysis [13, 40], the latent variables in factor models are only unique up to orthogonal rotations. Hence, they should be rotated for increased interpretability [6, 9, 40], which in our case amounts to generating more useful controls. Good controls encode uncertainty components that are simple, non-redundant, and separable in the sense that they affect distinguished image regions or classes. To evaluate these aspects, in Section 3 we introduce flow probabilities that quantify deviations from the mean prediction, which also enables uncertainty visualization, see Figure 1. Specifically, we compute factor-specific flow probabilities that quantify the impact of the uncertainty components encoded by individual factors. In Section 4, we fuse factor-specific flow probabilities with classic rotation criteria [6, 10] from exploratory factor analysis. In Section 5, we show that these fused criteria generally result in the best possible controls. Note that as a by-product, computing full flow probabilities for the overall uncertainty also yields a new type of overview plot that does not aggregate class-specific information about the uncertainty, see Figure 1 (top). Before we summarize our main contributions, we would like to emphasize that we do not benchmark SSNs as they have already been proven to produce state-of-the-art results w.r.t. various metrics, for instance, generalized energy distance to the ground truth distribution [24, 31]. Hence, it is safe for us to assume that after successful training, SSNs are capable of predicting the aleatoric uncertainty for a given input image reasonably well. Based on that, our main contributions are: (1) control variables for the contributions of individual, factor-specific uncertainty components for fine-grained sampling (given by the latent factor variables), (2) flow probabilities for quantifying and visualizing overall and factor-specific uncertainties, (3) rotations based on factor-specific flow probabilities, which structure the uncertainty components and thereby provide simpler, less redundant, and well-separated control variables. Please find an overview figure of our contributions in Section A of our supplement. Additionally, we made the code for the proposed methods and experiments available under https://github.com/JakobCode/StructuringSSNs. 2 Factor modeling in stochastic segmentation networks Stochastic segmentation networks (SSNs) [31] are characterized by modeling the pixel-wise logits in the network head as a low-rank multivariate Gaussian distribution, that is, $p(\eta \mid x) \sim \mathcal{N}\left(\mu(x),\, \Gamma(x)\Gamma(x)^\top + \Psi(x)\right)$. Here, $\eta \in \mathbb{R}^n$ are the $n = hwc$ logits for an input image $x$ of size $h \times w$ and a classification problem with $c$ classes.
The parameters of the Gaussian distribution are the mean $\mu(x) \in \mathbb{R}^n$ and the covariance matrix, which decomposes into a matrix of rank bounded by $r \ll n$ with square root $\Gamma(x) \in \mathbb{R}^{n \times r}$ and a diagonal matrix $\Psi(x) \in \mathbb{R}^{n \times n}$ with positive diagonal elements. These parameters are the output of a backbone segmentation network with input $x$ [7, 35]. Originally, the low-rank parameterization was solely introduced as a means for reducing the number of parameters [31]. However, the low-rank covariance model has a deeper structural meaning as a factor model [13, 38, 40]. Factor models are characterized by a typically small number of latent variables, called factors, that explain all correlations among a larger number of observed variables. In our case, the joint distribution of the observed logits and the latent factor variables $z \in \mathbb{R}^r$ is given by $(\eta, z) \sim \mathcal{N}\left( \begin{pmatrix} \mu \\ 0 \end{pmatrix}, \begin{pmatrix} \Gamma\Gamma^\top + \Psi & \Gamma \\ \Gamma^\top & I_r \end{pmatrix} \right)$, where $I_r$ is the $(r \times r)$ identity matrix, and for brevity, we omit the dependence on the input $x$ in the notation from now on. Here, the interactions of the latent variables with the observed logits are described in the matrix $\Gamma$ of factor loadings: Each column contains the loadings of one latent factor variable on the observed logits, yielding structured uncertainty. The loading characteristic becomes clear in the following sampling procedure for the logits from the factor model: $\eta = \mu + \Gamma z + \Psi^{1/2}\varepsilon$, where $z \sim \mathcal{N}(0, I_r)$, $\varepsilon \sim \mathcal{N}(0, I_n)$. (1) This procedure results from sampling from the joint distribution $p(\eta, z) = p(z)\,p(\eta \mid z)$ as follows: First, the latent variables are sampled according to $z \sim \mathcal{N}(0, I_r)$. Second, the logits are sampled from the conditional distribution $\eta \mid z \sim \mathcal{N}(\mu + \Gamma z, \Psi)$. Subsequently, only the logits are observed. Sampling the logits as in Equation (1) provides control over the contributions of the different latent factor variables. This invites individual manipulation of the factors, enabling fine-grained sampling. However, as pointed out in the introduction, factors should be rotated beforehand because they are only unique up to orthogonal rotations, see Lemma 1 in the supplement. In particular, orthogonal rotations of the latent factor variables do not change the marginal distribution $p(\eta)$ of the logits. Indeed, replacing $\Gamma z$ by $\Gamma O z$ for an orthogonal matrix $O \in \mathbb{R}^{r \times r}$ in Equation (1) yields an equivalent sampling procedure. Therefore, one way to understand orthogonal rotations is that they change the basis of the $r$-dimensional affine space $\{\mu + \Gamma z : z \in \mathbb{R}^r\}$ of the (noiseless) logits, where the basis elements are the columns of the factor loading matrix. 3 Flow probabilities In this section, we develop the notion of flow probabilities as our main tool for the analysis of factor models in SSNs. Flow probabilities can roughly be understood as probabilities of deviations from the mean prediction. We use flow probabilities for uncertainty quantification and visualization. 3.1 Factor-specific flow probabilities Because we need to understand and assess individual factors, it is important to analyze and quantify the uncertainty that is encoded in them. A useful tool for that is to compute factor-wise distributions of class predictions, for which we vary an individual latent factor variable $z$ with associated factor loadings $\gamma$ (a column of $\Gamma$; later we use the notation $\Gamma_{:,j}$ for a specific column of $\Gamma$). We keep the influence of all other latent factor and noise variables fixed to zero.
Consequently, for $z \sim \mathcal{N}(0, 1)$ we compute the following expected value: $P = P(\gamma) = \int E(\mu + \gamma z)\,p(z)\,dz \in [0, 1]^{(wh) \times c}$. (2) Here, $E(\mu + \gamma z) \in \{0, 1\}^{(wh) \times c}$ is the matrix whose rows correspond to the pixel-wise one-hot encoded class predictions, which are obtained by reshaping the logits $\mu + \gamma z$ into shape $(wh, c)$ and then applying an argmax along the class dimension. Note that the class probabilities $P$ can be understood as a function of the factor loadings $\gamma$ since the mean logits $\mu$ are fixed. We solve Equation (2) analytically. For that, we consider pixels separately since for (flat) spatial index $i \in [wh] = \{1, \ldots, wh\}$, the probabilities $p_{ik}$ from the $i$-th row of $P$ only depend on its associated mean logits and factor loadings. Specifically, with the definition $g_{ik}(z) = \mu_{ik} + \gamma_{ik} z$ for $k \in [c]$, for fixed $z$ the predicted class is $\arg\max_k g_{ik}(z)$. Hence, from Equation (2) we get that $p_{ik} = p_{ik}(\gamma) = \int \mathbb{1}[k = \arg\max_{k'} g_{ik'}(z)]\,p(z)\,dz$, $k \in [c] = \{1, \ldots, c\}$, (3) where $\mathbb{1}$ is the indicator function. We solve Equation (3) for binary classification first ($k \in \{1, 2\}$). Assuming that only the logits for the class $k = 2$ are learned, we set $g_{i1}(z) = 0$ for consistency in Equation (3). Then, with $\mu = \mu_{i2}$ and assuming that $\gamma = \gamma_{i2} \neq 0$, the probability $p_{i2}$ evaluates as $p_{i2} = \int \mathbb{1}[\mu + \gamma z \geq 0]\,p(z)\,dz = \begin{cases} \psi(-\mu/\gamma), & \gamma < 0 \\ 1 - \psi(-\mu/\gamma), & \gamma > 0 \end{cases}$. Here, $\psi$ is the cumulative distribution function of a standard normal random variable. For the last equality, observe that $-\mu/\gamma$ is the intersection point of the straight line $g_{i2}(z) = \mu + \gamma z$ with the $z$-axis $g_{i1}$. If $\gamma = 0$, the argmax is not unique and probabilities can be split. For clarity of the technical exposition, we assume that the argmax is unique in the following. For general multi-class problems, the probabilities $p_{ik}$ in Equation (3) can be derived from the class-prediction function $z \mapsto \arg\max_{k'} g_{ik'}(z)$. In this function, the class prediction can only change at intersection points $z$ of two non-parallel straight lines $g_{ik}$ and $g_{ik'}$, that is, $z = (\mu_{ik} - \mu_{ik'})/(\gamma_{ik'} - \gamma_{ik})$. Generally, if a class $k$ is predicted for some $z$, then all $z$ values for which the $k$-th class is predicted form a non-empty interval $(\underline{z}_{ik}, \overline{z}_{ik}) \subset \mathbb{R}$. The end points of this interval can either be $-\infty$, an intersection point of $g_{ik}$, or $\infty$. In practice, the intervals $(\underline{z}_{ik}, \overline{z}_{ik})$ can be computed by sorting all intersection points and checking the values of the class-prediction function on the resulting partition of the $z$-axis. If a class $k$ is never predicted, we set $\underline{z}_{ik} = \overline{z}_{ik} = -\infty$. Finally, the class probability is given by $p_{ik} = \psi(\overline{z}_{ik}) - \psi(\underline{z}_{ik})$, where we use the conventions that $\psi(-\infty) = 0$ and $\psi(\infty) = 1$. Observe that the formula for binary problems given above is a special case of the one given for $p_{ik}$ here. Overall, we obtain the following result: Proposition 1. Define $\underline{Z} = (\underline{z}_{ik})$ and $\overline{Z} = (\overline{z}_{ik})$ with entries $i \in [wh]$ and $k \in [c]$. Then, the distribution of predicted classes under variation of the factor with associated loadings $\gamma$ is given by $P(\gamma) = \psi(\overline{Z}) - \psi(\underline{Z})$, where $\psi$ applies the cumulative distribution function of a standard normal variable element-wise. Now, to highlight the difference to the prediction from the mean $\mu$, we compute factor-specific flow probabilities as $F(\gamma) = P(\gamma) - E(\mu) = \psi(\overline{Z}) - \psi(\underline{Z}) - E(\mu) \in [-1, 1]^{(wh) \times c}$. Positive entries in the $k$-th column $F(\gamma)_{:,k}$ indicate that the prediction for the corresponding pixels changes with positive probability from the mean prediction to class $k$.
Based on this fact, factor-specific flow probabilities enable visualizations of the impact of individual factors, see Figure 1 (bottom rows). The visualizations are obtained by calculating a mixture of class-specific colors with weights given by the (factor-specific) flow probabilities, see the supplement for details. As factor-specific flow probabilities represent the real impact of a factor on output segmentations, they will also be key to the quality assessment of the factors, see Section 4. For future reference, we denote by F(Γ) ∈ [−1, 1]^{(whc)×r} the matrix of all factor-specific flow probabilities that is obtained by concatenating the factor-specific flow probabilities F(Γ_{:,j}) as columns after flattening, where Γ_{:,j} is the j-th column of Γ.

Finally, since we use the latent factor variables as control variables for fine-grained sampling, it is helpful to also compute one-sided flow probabilities that encode the uncertainty for positive and negative values of the latent factor variable, respectively.

Corollary 1. Using the notation from Proposition 1, the one-sided factor-specific flow probabilities for a factor with loadings γ compute as

$$F^{+}(\gamma) = \int_{[0,\infty)} E(\mu + \gamma z)\, p(z)\, dz - E(\mu) = \psi(\max(0, \overline{Z})) - \psi(\max(0, \underline{Z})) - E(\mu),$$
$$F^{-}(\gamma) = \int_{(-\infty,0]} E(\mu + \gamma z)\, p(z)\, dz - E(\mu) = \psi(\min(0, \overline{Z})) - \psi(\min(0, \underline{Z})) - E(\mu).$$

3.2 Uncertainty quantification for the full factor model

The idea of computing factor-specific flow probabilities for uncertainty quantification and visualization extends to the full factor model. For that, analogous to Equation (2), we compute the distribution of class predictions. However, this time we take the expected value over the full distribution of the logits given in Equation (1):

$$P^{\mathrm{full}} = \int E(\eta)\, p(\eta)\, d\eta = \int E(\mu + \Gamma z + \Psi^{1/2}\varepsilon)\, p(z)\, p(\varepsilon)\, dz\, d\varepsilon \in [0, 1]^{(wh) \times c}. \quad (4)$$

The change from the mean prediction E(µ) is then given by the full flow probabilities, which we compute as F^{full} = P^{full} − E(µ) ∈ [−1, 1]^{(wh)×c}. Visualizing full flow probabilities as above by weighted mixtures of class-specific colors yields a new type of overview plot for the uncertainty, see Figure 1 (top row) for an example. Though only a by-product of our work, it has the advantage that it does not aggregate away information about class-specific uncertainties, in contrast to overview plots like entropy (see also Figure 1, top row).

In practice, the integral from Equation (4) is difficult to evaluate. This is because the argmax in E(η) technically amounts to determining a maximum of multivariate linear functions. Hence, we approximate the integral using Monte-Carlo integration with m i.i.d. samples $z^{(1)}, \ldots, z^{(m)} \in \mathbb{R}^r$ drawn from N(0, I_r) and i.i.d. samples $\varepsilon^{(1)}, \ldots, \varepsilon^{(m)} \in \mathbb{R}^{whc}$ drawn from N(0, I_{whc}). The matrix P^{full} of class probabilities is thus approximated by

$$P^{\mathrm{full}} \approx \frac{1}{m} \sum_{j=1}^{m} E(\mu + \Gamma z^{(j)} + \Psi^{1/2}\varepsilon^{(j)}) \in [0, 1]^{(wh) \times c}.$$

The matrix F^{full} of flow probabilities can be approximated similarly. In the supplement, we show empirically that the diagonal noise term has little impact on the flow probabilities. Hence, we can focus on the structural uncertainty that is induced by the latent factor variables.
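A hedged sketch of the Monte-Carlo approximation of P^full and F^full follows; names and shapes are illustrative assumptions.

```python
import numpy as np

def full_flow_probabilities(mu, Gamma, Psi_diag, wh, c, m=100, rng=None):
    """Monte-Carlo approximation of P_full (Equation (4)) and of
    F_full = P_full - E(mu). mu: (wh*c,), Gamma: (wh*c, r), Psi_diag: (wh*c,)."""
    rng = rng or np.random.default_rng()
    n, r = Gamma.shape

    def one_hot_prediction(eta):
        # E(eta): reshape to (wh, c), argmax over classes, one-hot encode
        pred = eta.reshape(wh, c).argmax(axis=1)
        return np.eye(c)[pred]                               # (wh, c)

    P_full = np.zeros((wh, c))
    for _ in range(m):
        z = rng.standard_normal(r)
        eps = rng.standard_normal(n)
        P_full += one_hot_prediction(mu + Gamma @ z + np.sqrt(Psi_diag) * eps)
    P_full /= m
    return P_full, P_full - one_hot_prediction(mu)           # (P_full, F_full)
```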
4 Factor rotations

As pointed out in Section 2, the latent variables/factors in factor models are only unique up to orthogonal rotations. Therefore, it is common practice in exploratory factor analysis to rotate them in order to maximize their interpretability [6, 22, 38]. The factor model in a SSN represents the predicted uncertainty for a given input image, where the factors themselves encode components of the overall uncertainty. We intend to use them as control variables for fine-grained sampling. From that we derive the following quality criteria: (1) The number of 'relevant' factors should be small, where relevant factors are characterized by having a 'significant' effect on output segmentations. (2) Relevant factors should be separable from each other in the sense that they encode distinct uncertainty components. (3) Each area in the input image should be affected by only a few factors.

Here, the first criterion ensures that the number of impactful control variables is reduced to a necessary minimum, and the second criterion requires that the corresponding uncertainty components are distinct. Together, the first two criteria discourage factor redundancy. The last criterion reflects the general requirement of sparsity and simplicity that is also found among Thurstone's rules [41] for simple structure of a factor loading matrix, which is the primary goal in exploratory factor analysis [13]. However, in our case we rather require a simple structure on the matrix F(Γ) of factor-specific flow probabilities (see Section 3.1) since they measure the actual impact of the factors on output segmentations. In Section 5, we evaluate the different rotation criteria that we present in the following.

First, we consider classic rotation criteria. Here, for a factor loading matrix Γ = (γ_{ij}) ∈ R^{n×r}, Crawford and Ferguson [10] defined the CF family of rotation criteria:

$$q_{\kappa}(\Gamma) = (1 - \kappa) \sum_{i=1}^{n} \sum_{j=1}^{r} \gamma_{ij}^2 \sum_{\substack{l=1 \\ l \neq j}}^{r} \gamma_{il}^2 \;+\; \kappa \sum_{j=1}^{r} \sum_{i=1}^{n} \gamma_{ij}^2 \sum_{\substack{l=1 \\ l \neq i}}^{n} \gamma_{lj}^2, \quad \kappa \in [0, 1].$$

The CF family is a generalization of the widely used orthomax family [17], where the parameter κ controls a trade-off between row complexity (first sum) and column complexity (second sum). We focus on popular choices: κ = 1/n yields an equivalent version of the Varimax criterion [22], which is the most used method. Intuitively, it tries to maximize the variance of the squared factor loadings. Next, κ = 0 yields the Quartimax criterion that minimizes the number of factors needed to explain a variable (in our case, the segmentation uncertainty of a pixel). Finally, κ = r/(2n) yields the Equamax criterion that represents a combination of Varimax and Quartimax.

Classic rotation criteria do not consider the actual impact of factors on predicted segmentations because they only take the factor loadings Γ but not the mean µ into account. Therefore, we incorporate factor-specific flow probabilities into rotation criteria by applying a base rotation criterion q to the flow probabilities instead of the factor loadings. Hence, the objective function to be minimized becomes O ↦ q(F(ΓO)) instead of O ↦ q(ΓO). We call the new family of rotation criteria the FP family. For instance, FP-Varimax applies the Varimax criterion to the flow probabilities.
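For reference, the CF criterion can be transcribed directly. The following sketch (names are assumptions, and it is not optimized) evaluates q_κ(Γ); an FP-family criterion is obtained by applying the same function to the flow-probability matrix F(ΓO) instead of the rotated loadings ΓO.

```python
import numpy as np

def cf_criterion(Gamma, kappa):
    """Crawford-Ferguson rotation criterion q_kappa(Gamma) from Section 4.
    kappa = 1/n corresponds to (an equivalent of) Varimax, kappa = 0 to
    Quartimax, and kappa = r/(2n) to Equamax."""
    G2 = Gamma ** 2                                     # squared loadings
    # row complexity: sum_i sum_j g_ij^2 * sum_{l != j} g_il^2
    row = np.sum(G2 * (G2.sum(axis=1, keepdims=True) - G2))
    # column complexity: sum_j sum_i g_ij^2 * sum_{l != i} g_lj^2
    col = np.sum(G2 * (G2.sum(axis=0, keepdims=True) - G2))
    return (1 - kappa) * row + kappa * col
```

Minimizing q over orthogonal matrices O is then done with gradient projection algorithms, as mentioned in Section 5.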
5 Experiments

The purpose of our experiments is to (1) evaluate rotation criteria based on the quality of rotated factors, and (2) demonstrate the merits of fine-grained sample control based on reasonably-rotated factors.

Data sets and training. First, we use the LIDC data set [1] in its pre-processed version from [28] that contains 2D slices of 3D thorax scans of size 128 × 128 pixels. Each slice respectively has four ground truth segmentations from different experts. Second, we use the multi-spectral Sentinel-2 data from the SEN12MS data set [36] with images of size 244 × 244 pixels and coarse labels for semantic segmentation of 10 types of land cover. Third, we use the CamVid data set [5], which contains images of road scenes in resolution 480 × 360 and is pixel-wise labeled into 11 different classes. Additional details and statistics about the data sets (including splits) can be found in the supplement, where we also detail all training procedures. We use r = 10 throughout our experiments, which accounts for the varying uncertainty in different images and has also been used in [31]. We would like to emphasize again that we do not benchmark SSNs since they have already been shown to be state of the art [24, 31]. For examples of uncertainty predictions, see Figure 1, Figure 2, and the supplement.

Computational aspects. We used Python 3.7, particularly with the libraries PyTorch 1.11 [32], scikit-learn [33], NumPy [18], and einops [34]. On a single core of an Intel Xeon Platinum 8260, factor-specific and full flow probabilities can be computed in the sub-second range without significant differences w.r.t. the used rotation, see the supplement for details. To obtain the optimal rotation matrices for the different rotation criteria, we adapted gradient projection algorithms from [4] to our needs. In our current implementation, optimization for criteria based on flow probabilities can take up to a few minutes, see the supplement for details. In practice, we recommend pre-computing rotations whenever possible.

5.1 Evaluation of rotation criteria

We evaluate rotations according to the quality criteria from Section 4, that is, (1) the relevance of individual factors, (2) the separability of the relevant factors, and (3) the sparsity of the factors.

5.1.1 Factor relevance

Here, we measure the impact of individual factors on the segmentation. In this section, we use the notation Γ̃ to denote a matrix of factor loadings that can be either rotated or unrotated. A simple measure for the impact of the j-th factor with loadings Γ̃_{:,j} ∈ R^n is given by the ℓ1-norm ‖F(Γ̃_{:,j})‖₁ of the factor-specific flow probabilities. In what follows, we consider relevance curves that show how many factors exceed the overall uncertainty for varying thresholds τ ≥ 0. Specifically, we compute $n_\tau = |R_\tau|$, where

$$R_\tau = \left\{ j : \| F(\tilde{\Gamma}_{:,j}) \|_1 \geq \tau\, \| F^{\mathrm{full}}(\Gamma) \|_1 \right\},$$

and we measure the overall uncertainty by the ℓ1-norm of the full flow probabilities, approximated by 100 Monte-Carlo samples.
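A minimal sketch of the relevance-curve computation, under assumed inputs:

```python
import numpy as np

def relevance_curve(F_factors, F_full, taus):
    """Number of relevant factors n_tau for a grid of thresholds (Sec. 5.1.1).
    F_factors: (whc, r) matrix of factor-specific flow probabilities F(Gamma),
    F_full:    (whc,)  flattened full flow probabilities.
    A factor j is relevant if ||F(Gamma_{:,j})||_1 >= tau * ||F_full||_1."""
    factor_mass = np.abs(F_factors).sum(axis=0)   # per-factor l1 norms
    total_mass = np.abs(F_full).sum()             # overall uncertainty
    return np.array([(factor_mass >= tau * total_mass).sum() for tau in taus])
```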
Results and discussion. The results of averaging nτ over the respective test images are shown in Figure 3 (top row). First, classic rotations barely reduce the number of relevant factors compared to the unrotated representation. This is no surprise since they do not take the mean logits into account and only try to simplify the structure of the factor loadings Γ̃. Nevertheless, even classic rotations already seem to decrease redundancy. However, as intended by design, FP rotations reduce the number of relevant factors to a much greater extent. Especially for LIDC and SEN12MS, already small thresholds τ are sufficient to cut off most factors below the threshold: Figure 3 (top) shows that all FP rotations behave similarly, with curves declining sharply for small τ. Consequently, FP rotations tend to produce a huge gap between a small number of relevant factors and the remaining ones, see Figure 1 for a visual example. This is desirable since it allows focusing only on a few relevant and meaningful factors during the exploration of the predicted uncertainty. It may be harder to find such factors if the predicted uncertainty has less inherent structure. CamVid is an example in this regard, as uncertainty predictions are often restricted to class borders, which means that they are less spatially correlated. However, even for CamVid, there is structured uncertainty, see Figure 13 (Section D.2.3) in the supplement. Figure 3 (top) shows that also for CamVid, FP rotations significantly reduce the number of relevant factors.

5.1.2 Separability of relevant factors

The second quality criterion from Section 4 concerns factor separation. Here, for a separation threshold ρ ∈ [0, 1], we compute the largest possible fraction of pairwise separated relevant factors:

$$s_\tau(\rho) = n_\tau^{-1} \cdot \max\left\{ |J| : J \subset R_\tau,\ \cos\!\left(F(\tilde{\Gamma}_{:,j}), F(\tilde{\Gamma}_{:,j'})\right) \leq \rho \text{ for all } j \neq j' \in J \right\} \in [0, 1].$$

If nτ = 0, we set sτ(ρ) = 0 for all ρ. The separation of two factors is measured by the cosine similarity of their factor-specific flow probabilities, which is always non-negative since corresponding entries cannot have opposing signs. For sτ(ρ), a value of one is best since it means that all relevant factors are also separated. For fixed relevance thresholds τ, we also compute the area under the curve AUC(sτ) for the comparison of different rotation criteria.

Results and discussion. In Figure 3 (bottom row), we show the separation scores AUC(sτ) for different relevance thresholds τ, respectively averaged over all test images. FP rotations consistently beat classic rotations by a factor of around two in terms of AUC. Classic rotation criteria are still better than the unrotated representations (which form the real baseline). The AUC separation scores drop for thresholds τ that are too small or too large to determine the number of relevant factors sensibly. Notably, for classic rotation criteria, the separation scores AUC(sτ) respectively peak at a threshold τ for which the number of relevant factors nearly coincides with the one from FP rotations, compare the intersection of the curves in Figure 3 (top row). The peak of the separation scores is less pronounced for FP rotations, particularly for LIDC and SEN12MS, where the set of relevant factors is more stable across different thresholds τ. For SEN12MS, the results also distinguish among the FP-rotation criteria, where FP-Quartimax seems to be slightly favored over the other FP rotations. This may be because Quartimax emphasizes row sparsity the most, which reduces cosine similarities. We investigate row sparsity further in the next section.

5.1.3 Factor sparsity

To evaluate to which degree different factors affect the same regions of the input image, we measure the row sparsity of the factor-specific flow probabilities F(Γ̃) ∈ [−1, 1]^{(whc)×r}. For that, for a (row) vector v ∈ R^r, let

$$h(v) = \frac{\sqrt{r} - \|v\|_1 / \|v\|_2}{\sqrt{r} - 1} \in [0, 1]$$

be the Hoyer measure [19], where values close to one indicate a high degree of sparsity. For us, sparsity only matters in rows with actual uncertainty for the pixel/class; therefore, we additionally weigh each row proportionally to its ℓ1-norm. Hence, as a final measure, we compute the weighted Hoyer measure

$$H(\tilde{\Gamma}) = \| F(\tilde{\Gamma}) \|_1^{-1} \cdot \sum_{i=1}^{whc} \| F(\tilde{\Gamma})_{i,:} \|_1 \cdot h\!\left(F(\tilde{\Gamma})_{i,:}\right).$$
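The weighted Hoyer measure can be sketched as follows (assuming r > 1 and at least one row with nonzero uncertainty; names are illustrative):

```python
import numpy as np

def weighted_hoyer(F):
    """Weighted Hoyer measure H of row sparsity for the flow-probability
    matrix F of shape (whc, r), following Section 5.1.3. Rows are weighted
    proportionally to their l1 norm so that only rows with actual
    uncertainty contribute."""
    r = F.shape[1]
    l1 = np.abs(F).sum(axis=1)                    # row-wise l1 norms
    l2 = np.linalg.norm(F, axis=1)                # row-wise l2 norms
    mask = l1 > 0                                 # skip all-zero rows
    h = (np.sqrt(r) - l1[mask] / l2[mask]) / (np.sqrt(r) - 1)
    return float((l1[mask] * h).sum() / l1.sum())
```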
Results and discussion. FP rotations generally concentrate the uncertainty for single regions/classes in only a few components, see Table 1. This means that FP rotations yield the most disentangled uncertainty components, which also indicates strong separation. For LIDC and SEN12MS, the amount of predicted uncertainty varies greatly across test images, causing high standard deviations. However, in general, large correlating uncertainty components can be found, allowing high row sparsity. This is in contrast to CamVid, where uncertainty is typically predicted for class borders.

5.2 Fine-grained sampling

Monteiro et al. [31] already manipulated samples post-hoc by simple linear inter- or extrapolation w.r.t. the mean. However, they noted that additional, more fine-grained sample control is necessary for a systematic exploration of the sample space: The interpolation approach lacks a solid foundation in the uncertainty model, and it relies on having a useful sample to start with. The meaningful control variables that we obtain by rotating factors provide all that has been missing. They enable users to systematically explore the sample space by fine-grained sampling: Starting from the mean prediction, they can inspect alternatives, correct possible mistakes, and fine-adjust borders. In particular, they can manipulate the contribution of individual uncertainty components by manually setting the values of the corresponding factors. Pseudo-samples obtained in this way are shown in Figure 4. Alongside this paper, we provide an interface for fine-grained sampling. It allows the selection of a rotation criterion for a given input image (we recommend FP-Quartimax for a start), and control variables can be set conveniently using sliders. In the supplement, we provide some visuals.
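A minimal sketch of how a pseudo-sample arises from manually chosen factor values, e.g., slider positions in the interface; the names and the omission of the noise term are simplifying assumptions.

```python
import numpy as np

def pseudo_sample(mu, Gamma_rot, factor_values, wh, c):
    """Fine-grained sampling (Section 5.2): generate a pseudo-sample by
    manually setting the rotated latent factor variables. The diagonal noise
    is omitted, so the result is the 'noiseless' logit configuration
    mu + Gamma_rot @ z, turned into a segmentation map."""
    z = np.asarray(factor_values)                 # user-chosen factor values
    eta = mu + Gamma_rot @ z                      # (wh*c,) logits
    return eta.reshape(wh, c).argmax(axis=1)      # segmentation map, (wh,)
```

Setting all factor values to zero recovers the mean prediction, and moving a single slider changes exactly the regions highlighted by that factor's flow probabilities.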
6 Discussion and conclusion

In this work, we interpreted the uncertainty model of stochastic segmentation networks (SSNs) as a factor model, which provides control variables for fine-grained sampling as requested by the authors of [31]. By (re-)structuring the uncertainty using rotations, we improved the controls and obtained as few as possible, but as many as necessary, relevant uncertainty components. Here, it turned out that rotation criteria based on flow probabilities yield the most meaningful controls, where flow probabilities are a new quantification and visualization technique for the uncertainty in SSNs. Our controls allow us to systematically explore the predicted uncertainty and to fine-adjust samples. However, the exploration of the sample space is only useful if the overall predicted uncertainty makes sense. This can be ensured by proper training. Structuring and examining the uncertainty is especially useful if there is a significant amount of aleatoric uncertainty. We note that one limitation caused by our current implementation is that the computation of flow-probability based rotations may take too long to perform in an interactive scenario. However, there is significant potential for improving the optimization (scheme and parallelization). In any case, rotations should be precomputed whenever possible.

Overall, we see a broader impact of our approach, which extends beyond the scope of SSNs. For instance, we believe that it could be used for large-scale image classification, where factor models have recently been employed for modeling class correlations [8]. Another promising application is to learn and inspect more structured latent spaces in (variational) autoencoders [12]. Structuring uncertainty as we do may be useful whenever a multivariate Gaussian forms part of a model. Next, flow probabilities can also be used for other probabilistic segmentation architectures that have a mean prediction as a reference point. Notably, flow-probability overview plots for the predicted uncertainty keep class-specific information, in contrast to other overview plots like entropy.

To sum up, it is often easier to understand the whole in terms of smaller parts. In this light, we structured the predicted uncertainty of SSNs into meaningful smaller uncertainty components. Jointly, they enable fine-grained sample control, so for us, the sum of the parts is also greater than the whole.

Acknowledgements

We thank Prof. Dr.-Ing. Joachim Denzler for helpful comments and Ferdinand Rewicki for checking our work multiple times. We also thank all anonymous reviewers for their insightful feedback.
1. What is the focus and contribution of the paper on structuring uncertainty in stochastic segmentation networks?
2. What are the strengths of the proposed method, particularly in its novel combination of techniques?
3. What are the weaknesses of the paper, especially regarding explanations and computational cost?
4. How does the reviewer suggest determining the number of factors used in the method?
5. How can the authors better address the limitation of heavy computation in their approach?
Summary Of The Paper

The paper proposes a novel method for structuring uncertainty in the context of stochastic segmentation networks (SSNs). The authors use a low-rank multivariate Gaussian distribution to model the uncertainty of SSNs. They also develop a tool for the analysis of factor models in SSNs and apply rotation criteria to provide simple and well-separated control variables. In the experimental part, the proposed method outperforms the current state of the art on several datasets.

Strengths And Weaknesses

The idea of this proposed method appears to be a novel combination with stochastic segmentation networks. The authors try to address segmentation uncertainty by using a small number of latent factor variables, building on the recent work on SSNs. The model proposed by the paper is clearly explained, and the paper includes sufficient and detailed experiments in the supplement. One possible weakness is the somewhat missing explanation of the so-called 'significant' effect on output segmentations; another is that a more detailed description of the rotation criteria would help. It would also be great if the authors could explain more about the interface for fine-grained sampling. According to the supplement, the proposed method seems to have a heavy computation cost; I expect the authors to better address this issue if possible.

Questions

How do we determine how many factors to use from Figure 1? How is the 'significant' effect by which the relevant factors are characterized determined?

Limitations

One limitation of this work is the heavy computation of the flow-probability rotations.
NIPS
Title
A Unified Model for Multi-class Anomaly Detection

Abstract
Despite the rapid advance of unsupervised anomaly detection, existing methods require training separate models for different objects. In this work, we present UniAD, which accomplishes anomaly detection for multiple classes with a unified framework. Under such a challenging setting, popular reconstruction networks may fall into an "identical shortcut", where both normal and anomalous samples can be well recovered, and hence fail to spot outliers. To tackle this obstacle, we make three improvements. First, we revisit the formulations of the fully-connected layer, convolutional layer, as well as attention layer, and confirm the important role of the query embedding (i.e., within the attention layer) in preventing the network from learning the shortcut. We therefore come up with a layer-wise query decoder to help model the multi-class distribution. Second, we employ a neighbor masked attention module to further avoid the information leak from the input feature to the reconstructed output feature. Third, we propose a feature jittering strategy that urges the model to recover the correct message even with noisy inputs. We evaluate our algorithm on the MVTec-AD and CIFAR-10 datasets, where we surpass the state-of-the-art alternatives by a sufficiently large margin. For example, when learning a unified model for 15 categories in MVTec-AD, we surpass the second-best competitor on the tasks of both anomaly detection (from 88.1% to 96.5%) and anomaly localization (from 89.5% to 96.8%). Code is available at https://github.com/zhiyuanyou/UniAD.

1 Introduction

Anomaly detection has found increasingly wide utilization in manufacturing defect detection [4], medical image analysis [17], and video surveillance [46]. Considering the highly diverse anomaly types, a common solution is to model the distribution of normal samples and then identify anomalous ones via finding outliers. It is therefore crucial to learn a compact boundary for normal data, as shown in Fig. 1a. For this purpose, existing methods [6, 11, 25, 27, 48, 49, 52] propose to train separate models for different classes of objects, as in Fig. 1c. However, such a one-class-one-model scheme can be memory-consuming, especially as the number of classes increases, and is also uncongenial to scenarios where the normal samples manifest a large intra-class diversity (i.e., one object consists of various types).

In this work, we target a more practical task, which is to detect anomalies from different object classes with a unified framework. The task setting is illustrated in Fig. 1d, where the training data covers normal samples from a range of categories, and the learned model is asked to accomplish anomaly detection for all these categories without any fine-tuning. It is noteworthy that the categorical information (i.e., class label) is inaccessible at both the training and the inference stages, considerably easing the difficulty of data preparation. Nonetheless, solving such a task is fairly challenging. Recall that the rationale behind unsupervised anomaly detection is to model the distribution of normal data and find a compact decision boundary as in Fig. 1a. When it comes to the multi-class case, we expect the model to capture the distribution of all classes simultaneously such that they can share the same boundary as in Fig. 1b.
But if we focus on a particular category, say the green one in Fig. 1b, all the samples from other categories should be considered as anomalies, no matter whether they are normal (i.e., blue circles) or anomalous (i.e., blue triangles) themselves. From this perspective, how to accurately model the multi-class distribution becomes vital.

A widely used approach to learning the normal data distribution draws support from image (or feature) reconstruction [2, 5, 26, 39, 51], which assumes that a well-trained model always produces normal samples regardless of the defects within the inputs. In this way, there will be large reconstruction errors for anomalous samples, making them distinguishable from the normal ones. However, we find that popular reconstruction networks show unsatisfying performance on the challenging task studied in this work. They typically fall into an "identical shortcut", which appears as returning a direct copy of the input disregarding its content.¹ As a result, even anomalous samples can be well recovered with the learned model and hence become hard to detect. Moreover, under the unified case, where the distribution of normal data is more complex, the "identical shortcut" problem is magnified. Intuitively, to learn a unified model that can reconstruct all kinds of objects, the model has to work extremely hard to learn the joint distribution. From this perspective, learning an "identical shortcut" appears as a far easier solution.

To address this issue, we carefully tailor a feature reconstruction framework that prevents the model from learning the shortcut. First, we revisit the formulations of the fully-connected layer, convolutional layer, as well as attention layer used in neural networks, and observe that both the fully-connected layer and the convolutional layer face the risk of learning a trivial solution. This drawback is further amplified under the multi-class setting in that the normal data distribution becomes far more complex. Instead, the attention layer is sheltered from such a risk, benefiting from a learnable query embedding (see Sec. 3.1). Accordingly, we propose a layer-wise query decoder to intensify the use of the query embedding. Second, we argue that the full attention (i.e., every feature point relates to each other) also contributes to the shortcut issue, because it offers the chance of directly copying the input to the output. To avoid the information leak, we employ a neighbor masked attention module, where a feature point relates to neither itself nor its neighbors. Third, inspired by Bengio et al. [3], we propose a feature jittering strategy, which requires the model to recover the source message even with noisy inputs. All these designs help the model escape from the "identical shortcut", as shown in Fig. 2b.

Extensive experiments on MVTec-AD [4] and CIFAR-10 [23] demonstrate the sufficient superiority of our approach, which we call UniAD, over existing alternatives under the unified task setting. For instance, when learning a single model for 15 categories in MVTec-AD, we achieve state-of-the-art performance on the tasks of both anomaly detection and anomaly localization, boosting the AUROC from 88.1% to 96.5% and from 89.5% to 96.8%, respectively.

¹ A detailed analysis can be found in Sec. 3.1 and Fig. 2.

2 Related work

Anomaly detection. 1) Classical approaches extend classical machine learning methods for one-class classification, such as the one-class support vector machine (OC-SVM) [38] and support vector data description (SVDD) [35, 41].
Patch-level embedding [48], geometric transformation [18], and elastic weight consolidation [33] are incorporated for improvement. 2) Pseudo-anomaly methods convert anomaly detection to supervised learning, including classification [25, 32, 45], image denoising [52], and hypersphere segmentation [27]. However, these methods partly rely on how well the proxy anomalies match the real anomalies, which are not known [13]. 3) Modeling-then-comparison methods assume that a pre-trained network is capable of extracting discriminative features for anomaly detection [11, 34]. PaDiM [11] and MDND [34] extract pre-trained features to model the normal distribution, then utilize a distance metric to measure the anomalies. Nevertheless, these methods need to memorize and model all normal features and are thus computationally expensive. 4) Knowledge distillation proposes that a student distilled by a teacher on normal samples can only extract normal features [6, 13, 37, 44, 45]. Recent works mainly focus on model ensembles [6], feature pyramids [37, 44], and reverse distillation [13].

Reconstruction-based anomaly detection. These methods rely on the hypothesis that reconstruction models trained on normal samples only succeed in normal regions, but fail in anomalous regions [5, 8, 26, 36, 49]. Early attempts include the Auto-Encoder (AE) [5, 9], Variational Auto-Encoder (VAE) [22, 26], and Generative Adversarial Net (GAN) [2, 30, 36, 51]. However, these methods face the problem that the model may learn tricks such that the anomalies are also restored well. Accordingly, researchers adopt different strategies to tackle this issue, such as adding instructional information (i.e., structural [53] or semantic [39, 46]), a memory mechanism [19, 20, 29], an iteration mechanism [12], an image masking strategy [47], and pseudo-anomalies [9, 32]. Recently, DRAEM [52] first recovers pseudo-anomaly-disturbed normal images for representation, then utilizes a discriminative net to distinguish the anomalies, achieving excellent performance. However, DRAEM [52] ceases to be effective under the unified case. Moreover, there is still an important aspect that has not been well studied, i.e., which architecture is the best reconstruction model? In this paper, we first compare and analyze three popular architectures, including MLP, CNN, and transformer. Then, accordingly, we build on the transformer and further design three improvements, which compose our UniAD.

Transformer in anomaly detection. The transformer [42] with attention mechanism, first proposed in natural language processing, has been successfully used in computer vision [7, 16]. Some attempts try to utilize the transformer for anomaly detection. InTra [31] adopts a transformer to recover the image by recovering all masked patches one by one. VT-ADL [28] and AnoViT [50] both apply a transformer encoder to reconstruct images. However, these methods directly utilize the vanilla transformer and do not figure out why the transformer brings improvement. In contrast, we confirm the efficacy of the query embedding in preventing the shortcut, and accordingly design a layer-wise query decoder. Also, to avoid the information leak of the full attention, we employ a neighbor masked attention module.

3 Method

3.1 Revisiting feature reconstruction for anomaly detection

In Fig. 2, following the feature reconstruction paradigm [39, 49], we build an MLP, a CNN, and a transformer (with query embedding) to reconstruct the features extracted by a pre-trained backbone. The reconstruction errors represent the anomaly possibility.
The architectures of the three networks are given in the Appendix. The metric is evaluated every 10 epochs. Note that such periodic evaluation is impractical in real deployments since anomalies are not available during training. As shown in Fig. 2a, after a period of training, the performances of the three networks decrease severely while the losses become extremely small. We attribute this to the problem of the "identical shortcut", where both normal and anomalous regions can be well recovered, thus failing to spot anomalies. This speculation is verified by the visualization results in Fig. 2b (more results in the Appendix). However, compared with the MLP and CNN, the transformer suffers from a much smaller performance drop, indicating a slighter shortcut problem. This encourages us to analyze as follows.

We denote the features of a normal image as $x^+ \in \mathbb{R}^{K \times C}$, where K is the feature number and C is the channel dimension. The batch dimension is omitted for simplicity. Similarly, the features of an anomalous image are denoted as $x^- \in \mathbb{R}^{K \times C}$. The reconstruction loss is chosen as the MSE loss. We provide a rough analysis using a simple 1-layer network as the reconstruction net, which is trained with $x^+$ and tested to detect anomalous regions in $x^-$.

Fully-connected layer in MLP. Denoting the weights and bias in this layer as $w \in \mathbb{R}^{C \times C}$ and $b \in \mathbb{R}^{C}$, respectively, this layer can be represented as

$$y = x^+ w + b \in \mathbb{R}^{K \times C}. \quad (1)$$

With the MSE loss pushing y to $x^+$, the model may take a shortcut by regressing w → I (the identity matrix) and b → 0. Ultimately, this model could also reconstruct $x^-$ well, failing in anomaly detection.

Convolutional layer in CNN. A convolutional layer with a 1×1 kernel is equivalent to a fully-connected layer. Besides, an n × n (n > 1) kernel has more parameters and a larger capacity, and can accomplish whatever a 1×1 kernel can. Thus, this layer also has the chance to learn a shortcut.

Transformer with query embedding. In such a model, there is an attention layer with a learnable query embedding, $q \in \mathbb{R}^{K \times C}$. When using this layer as the reconstruction model, it is denoted as

$$y = \mathrm{softmax}\!\left(q\, (x^+)^\top / \sqrt{C}\right) x^+ \in \mathbb{R}^{K \times C}. \quad (2)$$

To push y to $x^+$, the attention map, $\mathrm{softmax}(q (x^+)^\top / \sqrt{C})$, should approximate I (the identity matrix), so q must be highly related to $x^+$. Considering that q in the trained model is relevant to normal samples, the model could not reconstruct $x^-$ well. The ablation study in Sec. 4.6 shows that without the query embedding, the performance of the transformer drops dramatically by 18.1% and 13.4% in anomaly detection and localization, respectively. Thus the query embedding is of vital significance for modeling the normal distribution.

However, the transformer still suffers from the shortcut problem, which inspires our three improvements. 1) Given that the query embedding can prevent reconstructing anomalies, we design a Layer-wise Query Decoder (LQD) by adding a query embedding in each decoder layer rather than only the first layer as in the vanilla transformer. 2) We suspect that the full attention increases the possibility of the shortcut. Since one token can see itself and its neighbor regions, it is easy to reconstruct by simply copying. Thus we mask the neighbor tokens when calculating the attention map, which we call Neighbor Masked Attention (NMA). 3) We employ a Feature Jittering (FJ) strategy to disturb the input features, leading the model to learn the normal distribution from denoising. Benefiting from these designs, our UniAD achieves satisfying performance, as illustrated in Fig. 2.
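To make Equation (2) concrete, here is a minimal PyTorch sketch of the attention layer with a learnable query embedding; the module name and shapes are illustrative assumptions, not the paper's full architecture.

```python
import torch
import torch.nn.functional as F

class QueryAttentionReconstruction(torch.nn.Module):
    """Sketch of Equation (2): y = softmax(q x^T / sqrt(C)) x, with a
    learnable query q in R^{K x C}. Since q is learned on normal features
    only, the attention map cannot trivially become the identity for
    anomalous inputs, unlike a plain fully-connected reconstruction."""

    def __init__(self, K, C):
        super().__init__()
        self.query = torch.nn.Parameter(torch.randn(K, C))   # q in R^{K x C}

    def forward(self, x):
        # x: (B, K, C) feature tokens from the pre-trained backbone
        scores = self.query @ x.transpose(1, 2) / x.shape[-1] ** 0.5  # (B, K, K)
        return F.softmax(scores, dim=-1) @ x                          # (B, K, C)
```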
Relation between the "identical shortcut" problem and the unified case. In Fig. 2a, we aim to visualize the "identical shortcut" problem, where the loss becomes smaller yet the performance drops. We conduct the same experiment under the separate case on the MLP. As shown in Fig. 4, the accuracy (green for detection and red for localization) keeps growing along with the loss (blue) getting smaller. This helps reveal the relation between the "identical shortcut" problem and the unified case: the unified case is more challenging and hence magnifies the "identical shortcut" problem. Therefore, since our approach is specially designed to solve the "identical shortcut" problem, our method can be effective in the unified case.

3.2 Improving feature reconstruction for unified anomaly detection

Overview. As shown in Fig. 3, our UniAD is composed of a Neighbor Masked Encoder (NME) and a Layer-wise Query Decoder (LQD). First, the feature tokens extracted by a fixed pre-trained backbone are further integrated by the NME to derive the encoder embeddings. Then, in each layer of the LQD, a learnable query embedding is successively fused with the encoder embeddings and the outputs of the previous layer (self-fusion for the first layer). The feature fusion is completed by the Neighbor Masked Attention (NMA). The final outputs of the LQD are viewed as the reconstructed features. Also, we propose a Feature Jittering (FJ) strategy to add perturbations to the input features, leading the model to learn the normal distribution from the denoising task. Finally, the results of anomaly localization and detection are obtained through the reconstruction differences.

Neighbor masked attention. We suspect that the full attention in the vanilla transformer [42] contributes to the "identical shortcut". In full attention, one token is permitted to see itself, so it will be easy to reconstruct by simply copying. Moreover, considering that the feature tokens are extracted by a CNN backbone, the neighbor tokens must share many similarities. Therefore, we propose to mask the neighbor tokens when calculating the attention map, which we call Neighbor Masked Attention (NMA). Note that the neighbor region is defined in the 2D space, as shown in Fig. 5.

Neighbor masked encoder. The encoder follows the standard architecture of the vanilla transformer. Each layer consists of an attention module and a Feed-Forward Network (FFN). However, the full attention is replaced by our proposed NMA to prevent the information leak.

Layer-wise query decoder. As analyzed in Sec. 3.1, the query embedding helps prevent anomalies from being reconstructed well. However, there is only one query embedding in the vanilla transformer. Therefore, we design a Layer-wise Query Decoder (LQD) to intensify the use of the query embedding, as shown in Fig. 3. Specifically, in each layer of the LQD, a learnable query embedding is first fused with the encoder embeddings, then integrated with the outputs of the previous layer (self-integration for the first layer). The feature fusion is implemented by NMA. Following the vanilla transformer, a 2-layer FFN is applied to handle these fused tokens, and residual connections are utilized to facilitate the training. The final outputs of the LQD serve as the reconstructed features.
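A sketch of how the NMA mask could be constructed in 2D token space follows; the exact masking details of the paper's implementation may differ.

```python
import torch

def neighbor_mask(h, w, neighbor_size=7):
    """Boolean attention mask for Neighbor Masked Attention (NMA): token i
    may attend to neither itself nor tokens within its 2D neighborhood.
    Returns a (h*w, h*w) tensor with True = masked out."""
    half = neighbor_size // 2
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    ys, xs = ys.flatten(), xs.flatten()              # (h*w,) 2D coordinates
    dy = (ys[:, None] - ys[None, :]).abs()
    dx = (xs[:, None] - xs[None, :]).abs()
    return (dy <= half) & (dx <= half)               # True inside the box

# Usage sketch: before the softmax over attention scores of shape
# (B, h*w, h*w), apply
#   scores.masked_fill_(neighbor_mask(14, 14), float("-inf"))
```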
Feature jittering. Inspired by the Denoising Auto-Encoder (DAE) [3, 43], we add perturbations to the feature tokens, guiding the model to learn knowledge of normal samples through the denoising task. Specifically, for a feature token $f_{tok} \in \mathbb{R}^{C}$, we sample the disturbance D from a Gaussian distribution,

$$D \sim \mathcal{N}\!\left(\mu = 0,\ \sigma^2 = \left(\alpha\, \frac{\|f_{tok}\|_2}{C}\right)^{2}\right), \quad (3)$$

where α is the jittering scale that controls the noise level. Also, the sampled disturbance is added to $f_{tok}$ with a fixed jittering probability, p.

3.3 Implementation details

Feature extraction. We adopt a fixed EfficientNet-b4 [40] pre-trained on ImageNet [14] as the feature extractor. The features from stage-1 to stage-4 are selected. Here, a stage means the combination of blocks that have the same size of feature maps. These features are then resized to the same size and concatenated along the channel dimension to form a feature map, $f_{org} \in \mathbb{R}^{C_{org} \times H \times W}$.

Feature reconstruction. The feature map $f_{org}$ is first tokenized into H × W feature tokens, followed by a linear projection to reduce $C_{org}$ to a smaller channel dimension, C. Then these tokens are processed by the NME and LQD. Learnable position embeddings [15, 16] are added in the attention modules to convey the spatial information. Afterward, another linear projection is used to recover the channel dimension from C to $C_{org}$. After reshaping, the reconstructed feature map, $f_{rec} \in \mathbb{R}^{C_{org} \times H \times W}$, is finally obtained.

Objective function. Our model is trained with the MSE loss:

$$\mathcal{L} = \frac{1}{H \times W} \| f_{org} - f_{rec} \|_2^2. \quad (4)$$

Inference for anomaly localization. The result of anomaly localization is an anomaly score map, which assigns an anomaly score to each pixel. Specifically, the anomaly score map, s, is calculated as the L2 norm of the reconstruction differences:

$$s = \| f_{org} - f_{rec} \|_2 \in \mathbb{R}^{H \times W}. \quad (5)$$

Then s is up-sampled to the image size with bi-linear interpolation to obtain the localization results.

Inference for anomaly detection. Anomaly detection aims to detect whether an image contains anomalous regions. We transform the anomaly score map, s, into the anomaly score of the image by taking the maximum value of the averagely pooled s.
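Before moving to the experiments, here is a hedged PyTorch sketch of feature jittering (Equation (3)) and of the inference-time scoring (Equation (5)); the average-pooling kernel size is an illustrative assumption, as the paper does not specify it.

```python
import torch
import torch.nn.functional as F

def feature_jittering(f_tok, alpha=20.0, p=1.0):
    """Feature Jittering, cf. Equation (3): add Gaussian noise with standard
    deviation alpha * ||f_tok||_2 / C per token, with probability p per token.
    f_tok: (B, K, C) feature tokens."""
    C = f_tok.shape[-1]
    sigma = alpha * f_tok.norm(dim=-1, keepdim=True) / C      # per-token std
    noise = torch.randn_like(f_tok) * sigma
    keep = (torch.rand(f_tok.shape[:-1], device=f_tok.device) < p).unsqueeze(-1)
    return f_tok + noise * keep

def anomaly_scores(f_org, f_rec):
    """Per-pixel anomaly score map (Equation (5)) and the image-level score:
    L2 norm of reconstruction differences, then the max of the average-pooled
    map. f_org, f_rec: (B, C, H, W). Pool kernel 3 is an assumption."""
    s = (f_org - f_rec).norm(dim=1)                           # (B, H, W)
    pooled = F.avg_pool2d(s.unsqueeze(1), 3, stride=1, padding=1)
    return s, pooled.flatten(1).max(dim=1).values             # map, score
```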
4 Experiment

4.1 Datasets and metrics

MVTec-AD [4] is a comprehensive, multi-object, multi-defect industrial anomaly detection dataset with 15 classes. For each anomalous sample in the test set, the ground truth includes both the image label and the anomaly segmentation. In the existing literature, only the separate case is researched. In this paper, we introduce the unified case, where only one model is used to handle all categories.

CIFAR-10 [23] is a classical image classification dataset with 10 categories. Existing methods [6, 24, 37] evaluate CIFAR-10 mainly in the one-versus-many setting, where one class is viewed as normal samples, and the others serve as anomalies. Semantic AD [1, 10] proposes a many-versus-one setting, treating one class as anomalous and the remaining classes as normal. Different from both, we propose a unified case (many-versus-many setting), which is detailed in Sec. 4.4.

Metrics. Following prior works [4, 6, 52], the Area Under the Receiver Operating Curve (AUROC) is used as the evaluation metric for anomaly detection.

4.2 Anomaly detection on MVTec-AD

Setup. Anomaly detection aims to detect whether an image contains anomalous regions. The anomaly detection performance is evaluated on MVTec-AD [4]. The image size is selected as 224 × 224, and the size for resizing feature maps is set as 14 × 14. The feature maps from stage-1 to stage-4 of EfficientNet-b4 [40] are resized and concatenated together to form a 272-channel feature map. The reduced channel dimension is set as 256. The AdamW optimizer [21] with weight decay 1 × 10⁻⁴ is used. Our model is trained for 1000 epochs on 8 GPUs (NVIDIA Tesla V100 16GB) with batch size 64. The learning rate is 1 × 10⁻⁴ initially and decayed by a factor of 0.1 after 800 epochs. The layer numbers of the encoder and decoder are both 4. The neighbor size, jittering scale, and jittering probability are set as 7×7, 20, and 1, respectively. The evaluation is run with 5 random seeds. In both the separate case and the unified case, the reconstruction models are trained from scratch.

Baselines. Our approach is compared with baselines including US [6], PSVDD [48], PaDiM [11], CutPaste [25], MKD [37], and DRAEM [52]. Under the separate case, the baselines' metrics are taken from their papers, except the metric of US, which is borrowed from [52]. Under the unified case, US, PSVDD, PaDiM, CutPaste, MKD, and DRAEM are run with their publicly available implementations.

Quantitative results of anomaly detection on MVTec-AD [4] are shown in Tab. 1. Though all baselines achieve excellent performance under the separate case, their performance drops dramatically under the unified case. The previous SOTA, DRAEM, a reconstruction-based method trained with pseudo-anomalies, suffers from a drop of nearly 10%. For another strong baseline, CutPaste, a pseudo-anomaly approach, the drop is as large as 18.6%. In contrast, our UniAD has almost no performance drop from the separate case (96.6%) to the unified case (96.5%). Moreover, we beat the best competitor, DRAEM, by a dramatically large margin (8.4%), demonstrating our superiority.

4.3 Anomaly localization on MVTec-AD

Setup and baselines. Anomaly localization aims to localize anomalous regions in an anomalous image. MVTec-AD [4] is chosen as the benchmark dataset. The setup is the same as that in Sec. 4.2. Besides the competitors in Sec. 4.2, FCDD [27] is included, whose metric under the separate case is reported in its paper. Under the unified case, we run FCDD with its publicly available implementation.

Quantitative results of anomaly localization on MVTec-AD [4] are reported in Tab. 2. Similar to Sec. 4.2, when switching from the separate case to the unified case, the performance of all competitors drops significantly. For example, the performance of US, an important distillation-based baseline, decreases by 12.1%. FCDD, a pseudo-anomaly approach, suffers from a dramatic drop of 28.7%, reflecting that pseudo-anomalies are not suitable for the unified case. However, our UniAD even gains a slight improvement from the separate case (96.6%) to the unified case (96.8%), proving the suitability of our UniAD for the unified case. Moreover, we significantly surpass the strongest baseline, PaDiM, by 7.3%. This significant improvement reflects the effectiveness of our model.

Qualitative results for anomaly localization on MVTec-AD [4] are illustrated in Fig. 6. For both global (Fig. 6a) and local (Fig. 6b) structural anomalies, and for both scattered texture perturbations (Fig. 6c) and multiple texture scratches (Fig. 6d), our method successfully reconstructs anomalies to their corresponding normal samples, then accurately localizes anomalous regions through the reconstruction differences. More qualitative results are given in the Appendix.

4.4 Anomaly detection on CIFAR-10

Setup. To further verify the effectiveness of our UniAD, we extend CIFAR-10 [23] to the unified case, which consists of four combinations. For each combination, five categories together serve as normal samples, while the other categories are viewed as anomalies. The class indices of the four combinations are {01234}, {56789}, {02468}, and {13579}.
Here, {01234} means the normal samples include images from classes 0, 1, 2, 3, and 4, and similarly for the others. Note that the class index is obtained by sorting the class names of the 10 classes. The setup of the model is detailed in the Appendix.

Baselines. US [6], FCDD [27], FCDD+OE [27], PANDA [33], and MKD [37] serve as competitors. US, FCDD, FCDD+OE, PANDA, and MKD are run with their publicly available implementations.

Quantitative results of anomaly detection on CIFAR-10 [23] are shown in Tab. 3. When five classes together serve as normal samples, two recent baselines, US and FCDD, almost lose their ability to detect anomalies. When utilizing 10000 images sampled from CIFAR-100 [23] as auxiliary Outlier Exposure (OE), FCDD+OE improves the performance by a large margin. We still stably outperform FCDD+OE by 8.3% without the help of OE, indicating the efficacy of our UniAD.

4.5 Comparison with transformer-based competitors

As described in Sec. 2, some attempts [31, 28, 50] also try to utilize the transformer for anomaly detection. Here we compare our UniAD with existing transformer-based competitors on MVTec-AD [4]. Recall that we choose the transformer as the reconstruction model considering its great potential in preventing the model from learning the "identical shortcut" (refer to Sec. 3.1). Concretely, we find that the learnable query embedding is essential for avoiding such a shortcut but is seldom explored in existing transformer-based approaches. As shown in Tab. 4, after introducing even only one query embedding, our baseline already outperforms existing alternatives by a sufficiently large margin in the unified setting. Our proposed three components further improve this strong baseline. Recall that all three components are proposed to prevent the model from directly outputting its inputs.

4.6 Ablation studies

To verify the effectiveness of the proposed modules and the selection of hyperparameters, we conduct extensive ablation studies on MVTec-AD [4] under the unified case.

Layer-wise query. Tab. 5a verifies our assertion that the query embedding is of vital significance. 1) Without the query embedding, meaning the encoder embeddings are directly input to the decoder, the performance is the worst. 2) Adding only one query embedding to the first decoder layer (i.e., the vanilla transformer [42]) promotes the performance dramatically, by 18.1% and 13.4% in anomaly detection and localization, respectively. 3) With a layer-wise query embedding in each decoder layer, the image-level and pixel-level AUROC are further improved by 7.4% and 3.7%, respectively.

Layer number. We conduct experiments to investigate the influence of the layer number, as shown in Tab. 5b. 1) No matter with which combination, our model outperforms the vanilla transformer by a large margin, reflecting the effectiveness of our design. 2) The best performance is achieved with a moderate layer number: 4Enc+4Dec. A larger layer number like 6Enc+6Dec does not bring further promotion, which may be because more layers are harder to train.

Neighbor masked attention. 1) The effectiveness of NMA is proven in Tab. 5a. Under the case of one query embedding, adding NMA brings a promotion of 8.5% for detection and 3.5% for localization. 2) The neighbor size of NMA is selected in Tab. 5c. A 1×1 neighbor size is the worst, because 1×1 is too small to prevent the information leak; the recovery can then be completed by copying neighbor regions. A larger neighbor size (≥ 5×5) is obviously much better, and the best one is selected as 7×7.
3) We also study where to add NMA in Tab. 5d. Only adding NMA in the encoder (Enc) is not enough. The performance is stably improved when further adding NMA in the first or second attention in the decoder (Enc+Dec1, Enc+Dec2) or both (All). This reflects that the full attention of the decoder also contributes to the information leak.

Feature jittering. 1) Tab. 5a confirms the efficacy of FJ. With one query embedding as the baseline, introducing FJ brings an increase of 7.4% for detection and 3.0% for localization, respectively. 2) According to Tab. 5e, the jittering scale, α, is chosen as 20. A larger α (i.e., 30) disturbs the features too much, degrading the results. 3) In Tab. 5f, the jittering probability, p, is studied. In essence, the task is a denoising task with feature jittering and a plain reconstruction task without it. The results show that the full denoising task (i.e., p = 1) is the best.

5 Conclusion

In this work, we propose UniAD, which unifies anomaly detection for multiple classes. For such a challenging task, we guard the model against learning an "identical shortcut" with three improvements. First, we confirm the effectiveness of the learnable query embedding and carefully tailor a layer-wise query decoder to help model the complex distribution of multi-class data. Second, we come up with a neighbor masked attention module to avoid the information leak from the input to the output. Third, we propose feature jittering, which makes the model less sensitive to input perturbations. Under the unified task setting, our method achieves state-of-the-art performance on the MVTec-AD and CIFAR-10 datasets, significantly outperforming existing alternatives.

Discussion. In this work, different kinds of objects are handled without being distinguished. We have not used the category labels, which may help the model better fit multi-class data. How to incorporate the unified model with category labels should be further studied. In practical uses, normal samples are not as consistent as those in MVTec-AD and often manifest some diversity. Our UniAD can handle all 15 categories in MVTec-AD and hence would be more suitable for real scenes. However, anomaly detection may be used for video surveillance, which may infringe on personal privacy.

Acknowledgments and Disclosure of Funding

This work is sponsored by the National Key Research and Development Program of China (2021YFB1716000) and the National Natural Science Foundation of China (62176152).
1. What is the focus and contribution of the paper regarding anomaly detection?
2. What are the strengths of the proposed approach, particularly in its ability to handle multi-class datasets without labels?
3. What are the weaknesses of the paper, especially regarding its lack of analysis on the effectiveness of the proposed method?
4. Do you have any questions regarding the training process and performance of the proposed method and other approaches in both unified and separate cases?
5. What are the limitations mentioned in Section 5 of the review?
Summary Of The Paper

The paper tackles anomaly detection for multiple classes without class labels. In other words, the proposed method learns the normality of multiple classes at once without the need for class label information. The paper analyzes how reconstruction-based anomaly detectors learn an 'identical shortcut' and introduces techniques to prevent this phenomenon. To this end, the paper proposes three techniques: the use of query embeddings in multiple layers of a transformer, neighbor masked attention, and feature jittering. The experiments are conducted on the MVTec-AD and CIFAR-10 datasets.

Strengths And Weaknesses

Strengths
- Problem setup: Anomaly detection on a multi-mode dataset (a multi-class dataset without class information) is a relatively underexplored area. Most out-of-distribution detection papers assume class information is given. In addition, this paper targets anomaly localization tasks.
- Analysis and extensive ablation studies support the idea: Section 3.1 shows the performance deprecation over the training epochs, showing that reconstruction-based models' performance is unstable during training (the phenomenon of identical shortcuts). Section 4.5 includes extensive ablation studies supporting the design of the method and the sensitivity of each component and hyperparameter. Most hyperparameters are insensitive with respect to performance, showing less than a ~1% gap.
- Strong performance: The proposed method shows a notable performance gap over competing methods in the unified scenario on MVTec-AD and CIFAR-10.

Weaknesses
- Lack of analysis: The design of the method is not targeted at the unified case but is effective. Why? The idea of the proposed method uses general ML techniques not tailored for unified (multi-class data without label information) cases, yet is effective. What would be the main reason for this? How is the problem of learning the identical shortcut relevant to the performance in unified anomaly detection scenarios? Why does the proposed method not gain performance when label information is added (the separate case)? In Table 1 and Table 2, the proposed method does not improve much by adding label information.

Questions

How are the proposed method and competing methods trained for the unified and separate cases? In the separate case, are they trained on the whole dataset and fine-tuned for each class-wise dataset? It is unclear how the models are trained in each scenario.

Limitations

Limitations are addressed in Section 5.
NIPS
Title A Unified Model for Multi-class Anomaly Detection Abstract Despite the rapid advance of unsupervised anomaly detection, existing methods require to train separate models for different objects. In this work, we present UniAD that accomplishes anomaly detection for multiple classes with a unified framework. Under such a challenging setting, popular reconstruction networks may fall into an “identical shortcut”, where both normal and anomalous samples can be well recovered, and hence fail to spot outliers. To tackle this obstacle, we make three improvements. First, we revisit the formulations of fully-connected layer, convolutional layer, as well as attention layer, and confirm the important role of query embedding (i.e., within attention layer) in preventing the network from learning the shortcut. We therefore come up with a layer-wise query decoder to help model the multi-class distribution. Second, we employ a neighbor masked attention module to further avoid the information leak from the input feature to the reconstructed output feature. Third, we propose a feature jittering strategy that urges the model to recover the correct message even with noisy inputs. We evaluate our algorithm on MVTec-AD and CIFAR-10 datasets, where we surpass the state-of-the-art alternatives by a sufficiently large margin. For example, when learning a unified model for 15 categories in MVTec-AD, we surpass the second competitor on the tasks of both anomaly detection (from 88.1% to 96.5%) and anomaly localization (from 89.5% to 96.8%). Code is available at https:// github.com/zhiyuanyou/UniAD. 1 Introduction Anomaly detection has found an increasingly wide utilization in manufacturing defect detection [4], medical image analysis [17], and video surveillance [46]. Considering the highly diverse anomaly types, a common solution is to model the distribution of normal samples and then identify anomalous ones via finding outliers. It is therefore crucial to learn a compact boundary for normal data, as shown in Fig. 1a. For this purpose, existing methods [6, 11, 25, 27, 48, 49, 52] propose to train separate models for different classes of objects, like in Fig. 1c. However, such a one-class-one-model scheme could be memory-consuming especially along with the number of classes increasing, and also uncongenial to the scenarios where the normal samples manifest themselves in a large intra-class diversity (i.e., one object consists of various types). In this work, we target a more practical task, which is to detect anomalies from different object classes with a unified framework. The task setting is illustrated in Fig. 1d, where the training data covers normal samples from a range of categories, and the learned model is asked to accomplish anomaly detection for all these categories without any fine-tuning. It is noteworthy that the categorical information (i.e., class label) is inaccessible at both the training and the inference stages, considerably ∗ Contribute Equally. † Corresponding Author. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). easing the difficulty of data preparation. Nonetheless, solving such a task is fairly challenging. Recall that the rationale behind unsupervised anomaly detection is to model the distribution of normal data and find a compact decision boundary as in Fig. 1a. When it comes to the multi-class case, we expect the model to capture the distribution of all classes simultaneously such that they can share the same boundary as in Fig. 1b. 
But if we focus on a particular category, say the green one in Fig. 1b, all the samples from other categories should be considered as anomalies, no matter whether they are normal (i.e., blue circles) or anomalous (i.e., blue triangles) themselves. From this perspective, how to accurately model the multi-class distribution becomes vital.

A widely used approach to learning the normal data distribution draws support from image (or feature) reconstruction [2, 5, 26, 39, 51], which assumes that a well-trained model always produces normal samples regardless of the defects within the inputs. In this way, there will be large reconstruction errors for anomalous samples, making them distinguishable from the normal ones. However, we find that popular reconstruction networks deliver unsatisfying performance on the challenging task studied in this work. They typically fall into an "identical shortcut", which appears as returning a direct copy of the input regardless of its content.¹ As a result, even anomalous samples can be well recovered with the learned model and hence become hard to detect. Moreover, under the unified case, where the distribution of normal data is more complex, the "identical shortcut" problem is magnified. Intuitively, learning a unified model that can reconstruct all kinds of objects requires the model to work extremely hard to learn the joint distribution. From this perspective, learning an "identical shortcut" appears as a far easier solution.

To address this issue, we carefully tailor a feature reconstruction framework that prevents the model from learning the shortcut. First, we revisit the formulations of the fully-connected layer, the convolutional layer, and the attention layer used in neural networks, and observe that both the fully-connected layer and the convolutional layer face the risk of learning a trivial solution. This drawback is further amplified under the multi-class setting, in that the normal data distribution becomes far more complex. Instead, the attention layer is sheltered from such a risk, benefiting from a learnable query embedding (see Sec. 3.1). Accordingly, we propose a layer-wise query decoder to intensify the use of the query embedding. Second, we argue that the full attention (i.e., every feature point relates to each other) also contributes to the shortcut issue, because it offers the chance of directly copying the input to the output. To avoid the information leak, we employ a neighbor masked attention module, where a feature point relates to neither itself nor its neighbors. Third, inspired by Bengio et al. [3], we propose a feature jittering strategy, which requires the model to recover the source message even with noisy inputs. All these designs help the model escape from the "identical shortcut", as shown in Fig. 2b.

Extensive experiments on MVTec-AD [4] and CIFAR-10 [23] demonstrate the superiority of our approach, which we call UniAD, over existing alternatives under the unified task setting. For instance, when learning a single model for 15 categories in MVTec-AD, we achieve state-of-the-art performance on the tasks of both anomaly detection and anomaly localization, boosting the AUROC from 88.1% to 96.5% and from 89.5% to 96.8%, respectively.

¹A detailed analysis can be found in Sec. 3.1 and Fig. 2.

2 Related work
Anomaly detection. 1) Classical approaches extend classical machine learning methods for one-class classification, such as the one-class support vector machine (OC-SVM) [38] and support vector data description (SVDD) [35, 41].
Patch-level embedding [48], geometric transformation [18], and elastic weight consolidation [33] are incorporated for improvement. 2) Pseudo-anomaly approaches convert anomaly detection to supervised learning, including classification [25, 32, 45], image denoising [52], and hypersphere segmentation [27]. However, these methods partly rely on how well the proxy anomalies match the real anomalies, which are not known [13]. 3) Modeling-then-comparison approaches assume that a pre-trained network is capable of extracting discriminative features for anomaly detection [11, 34]. PaDiM [11] and MDND [34] extract pre-trained features to model the normal distribution, then utilize a distance metric to measure the anomalies. Nevertheless, these methods need to memorize and model all normal features, and thus are computationally expensive. 4) Knowledge distillation proposes that a student distilled by a teacher on normal samples can only extract normal features [6, 13, 37, 44, 45]. Recent works mainly focus on model ensembles [6], feature pyramids [37, 44], and reverse distillation [13].

Reconstruction-based anomaly detection. These methods rely on the hypothesis that reconstruction models trained on normal samples only succeed in normal regions, but fail in anomalous regions [5, 8, 26, 36, 49]. Early attempts include the Auto-Encoder (AE) [5, 9], the Variational Auto-Encoder (VAE) [22, 26], and the Generative Adversarial Net (GAN) [2, 30, 36, 51]. However, these methods face the problem that the model could learn tricks such that the anomalies are also restored well. Accordingly, researchers adopt different strategies to tackle this issue, such as adding instructional information (i.e., structural [53] or semantic [39, 46]), memory mechanisms [19, 20, 29], iteration mechanisms [12], image masking strategies [47], and pseudo-anomalies [9, 32]. Recently, DRAEM [52] first recovers pseudo-anomaly-disturbed normal images for representation, then utilizes a discriminative net to distinguish the anomalies, achieving excellent performance. However, DRAEM [52] ceases to be effective under the unified case. Moreover, there is still an important aspect that has not been well studied, i.e., what architecture is the best reconstruction model? In this paper, we first compare and analyze three popular architectures: MLP, CNN, and transformer. Then, accordingly, we build on the transformer and further design three improvements, which compose our UniAD.

Transformer in anomaly detection. The transformer [42] with its attention mechanism, first proposed in natural language processing, has been successfully used in computer vision [7, 16]. Some attempts try to utilize the transformer for anomaly detection. InTra [31] adopts a transformer to recover the image by recovering all masked patches one by one. VT-ADL [28] and AnoViT [50] both apply a transformer encoder to reconstruct images. However, these methods directly utilize the vanilla transformer and do not figure out why the transformer brings improvement. In contrast, we confirm the efficacy of the query embedding in preventing the shortcut, and accordingly design a layer-wise query decoder. Also, to avoid the information leak of the full attention, we employ a neighbor masked attention module.

3 Method
3.1 Revisiting feature reconstruction for anomaly detection
In Fig. 2, following the feature reconstruction paradigm [39, 49], we build an MLP, a CNN, and a transformer (with query embedding) to reconstruct the features extracted by a pre-trained backbone. The reconstruction errors represent the anomaly possibility.
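The feature-reconstruction paradigm just described can be summarized in a few lines. The sketch below is ours, not the released code: `backbone` and `recon_net` are placeholder modules assumed to return (B, C, H, W) feature maps, and only the scoring logic follows the description in the text.

```python
import torch

def anomaly_map(backbone, recon_net, image):
    """Feature-reconstruction scoring: reconstruct the features of a
    frozen pre-trained backbone and use the per-location reconstruction
    error as the anomaly score."""
    with torch.no_grad():
        feats = backbone(image)          # frozen pre-trained features
    recon = recon_net(feats)             # MLP / CNN / transformer reconstruction
    # L2 error over the channel dimension -> one score per spatial location
    return (feats - recon).norm(p=2, dim=1)   # (B, H, W)
```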
The architectures of the three networks are given in the Appendix. The metric is evaluated every 10 epochs. Note that such periodic evaluation is impractical in deployment, since anomalies are not available during training. As shown in Fig. 2a, after a period of training, the performances of the three networks decrease severely while the losses become extremely small. We attribute this to the problem of the "identical shortcut", where both normal and anomalous regions can be well recovered, thus failing to spot anomalies. This speculation is verified by the visualization results in Fig. 2b (more results in the Appendix). However, compared with the MLP and the CNN, the transformer suffers from a much smaller performance drop, indicating a slighter shortcut problem. This encourages the following analysis.

We denote the features of a normal image as $x^{+} \in \mathbb{R}^{K \times C}$, where $K$ is the number of features and $C$ is the channel dimension. The batch dimension is omitted for simplicity. Similarly, the features of an anomalous image are denoted as $x^{-} \in \mathbb{R}^{K \times C}$. The reconstruction loss is chosen as the MSE loss. We provide a rough analysis using a simple 1-layer network as the reconstruction net, which is trained with $x^{+}$ and tested to detect anomalous regions in $x^{-}$.

Fully-connected layer in MLP. Denoting the weights and bias of this layer as $w \in \mathbb{R}^{C \times C}$ and $b \in \mathbb{R}^{C}$, respectively, this layer can be represented as
$$y = x^{+} w + b \in \mathbb{R}^{K \times C}. \tag{1}$$
With the MSE loss pushing $y$ to $x^{+}$, the model may take a shortcut and regress $w \to I$ (the identity matrix) and $b \to 0$. Ultimately, this model would also reconstruct $x^{-}$ well, failing at anomaly detection.

Convolutional layer in CNN. A convolutional layer with a 1×1 kernel is equivalent to a fully-connected layer. Moreover, an $n \times n$ ($n > 1$) kernel has more parameters and larger capacity, and can represent whatever a 1×1 kernel can. Thus, this layer also has the chance to learn a shortcut.

Transformer with query embedding. In such a model, there is an attention layer with a learnable query embedding, $q \in \mathbb{R}^{K \times C}$. When this layer is used as the reconstruction model, it is denoted as
$$y = \operatorname{softmax}\!\left(q \, (x^{+})^{T} / \sqrt{C}\right) x^{+} \in \mathbb{R}^{K \times C}. \tag{2}$$
To push $y$ to $x^{+}$, the attention map, $\operatorname{softmax}(q \, (x^{+})^{T} / \sqrt{C})$, should approximate $I$ (the identity matrix), so $q$ must be highly related to $x^{+}$. Considering that $q$ in the trained model is relevant to normal samples, the model cannot reconstruct $x^{-}$ well. The ablation study in Sec. 4.6 shows that without the query embedding, the performance of the transformer drops dramatically by 18.1% and 13.4% in anomaly detection and localization, respectively. Thus the query embedding is of vital significance for modeling the normal distribution.

However, the transformer still suffers from the shortcut problem, which inspires our three improvements. 1) Since the query embedding can prevent reconstructing anomalies, we design a Layer-wise Query Decoder (LQD) by adding a query embedding in each decoder layer, rather than only in the first layer as in the vanilla transformer. 2) We suspect that the full attention increases the possibility of the shortcut. Since one token can see itself and its neighbor regions, it is easy to reconstruct by simply copying. Thus we mask the neighbor tokens when calculating the attention map, which we call Neighbor Masked Attention (NMA). 3) We employ a Feature Jittering (FJ) strategy to disturb the input features, leading the model to learn the normal distribution through denoising. Benefiting from these designs, our UniAD achieves satisfying performance, as illustrated in Fig. 2.
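A small numerical experiment makes the contrast between Eq. (1) and Eq. (2) concrete. This is an illustrative sketch of ours with arbitrary sizes, not code from the paper: it shows that the fully-connected layer admits the exact identity solution, while the attention layer can only output combinations of the input rows weighted by a query-dependent map.

```python
import torch

K, C = 196, 64
x = torch.randn(K, C)  # stand-in for the features x+ of a normal image

# Eq. (1): fully-connected layer. The MSE objective is solved exactly by
# w = I, b = 0 -- the "identical shortcut" -- and this solution copies
# anomalous features x- just as perfectly.
w, b = torch.eye(C), torch.zeros(C)
assert torch.allclose(x @ w + b, x)

# Eq. (2): attention with a learnable query embedding q. The output is a
# convex combination of the rows of x, weighted by softmax(q x^T / sqrt(C)).
# Copying the input requires the attention map to approximate I, which ties
# q to the (normal) features themselves.
q = torch.randn(K, C)
attn = torch.softmax(q @ x.T / C ** 0.5, dim=-1)  # (K, K) attention map
y = attn @ x                                      # reconstruction
```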
Relation between the "identical shortcut" problem and the unified case. In Fig. 2a, we visualize the "identical shortcut" problem, where the loss becomes smaller yet the performance drops. We conduct the same experiment under the separate case on the MLP. As shown in Fig. 4, the accuracy (green for detection and red for localization) keeps growing as the loss (blue) gets smaller. This helps reveal the relation between the "identical shortcut" problem and the unified case: the unified case is more challenging and hence magnifies the "identical shortcut" problem. Therefore, since our approach is specially designed to solve the "identical shortcut" problem, our method can be effective in the unified case.

3.2 Improving feature reconstruction for unified anomaly detection
Overview. As shown in Fig. 3, our UniAD is composed of a Neighbor Masked Encoder (NME) and a Layer-wise Query Decoder (LQD). First, the feature tokens extracted by a fixed pre-trained backbone are further integrated by the NME to derive the encoder embeddings. Then, in each layer of the LQD, a learnable query embedding is successively fused with the encoder embeddings and the outputs of the previous layer (self-fusion for the first layer). The feature fusion is completed by the Neighbor Masked Attention (NMA). The final outputs of the LQD are viewed as the reconstructed features. Also, we propose a Feature Jittering (FJ) strategy that adds perturbations to the input features, leading the model to learn the normal distribution through a denoising task. Finally, the results of anomaly localization and detection are obtained from the reconstruction differences.

Neighbor masked attention. We suspect that the full attention in the vanilla transformer [42] contributes to the "identical shortcut". In full attention, one token is permitted to see itself, so it is easy to reconstruct by simply copying. Moreover, considering that the feature tokens are extracted by a CNN backbone, neighboring tokens must share many similarities. Therefore, we propose to mask the neighbor tokens when calculating the attention map, which we call Neighbor Masked Attention (NMA). Note that the neighbor region is defined in the 2D space, as shown in Fig. 5.

Neighbor masked encoder. The encoder follows the standard architecture of the vanilla transformer. Each layer consists of an attention module and a Feed-Forward Network (FFN). However, the full attention is replaced by our proposed NMA to prevent the information leak.
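As a concrete illustration, the neighbor mask could be built as follows. This is a minimal sketch under our own assumptions (tokens laid out on an H×W grid, a square window, masking applied to the attention logits before the softmax); the function name and exact windowing convention are ours, not the released code.

```python
import torch

def neighbor_mask(h, w, neighbor_size=7):
    """Boolean (h*w, h*w) mask, True where attention must be blocked:
    each token may attend to neither itself nor any token inside a
    neighbor_size x neighbor_size window around it (defined in 2D)."""
    half = neighbor_size // 2
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    ys, xs = ys.flatten(), xs.flatten()
    dy = (ys[:, None] - ys[None, :]).abs()
    dx = (xs[:, None] - xs[None, :]).abs()
    return (dy <= half) & (dx <= half)

# Usage: block the masked positions before the softmax.
h = w = 14
logits = torch.randn(h * w, h * w)                  # q @ k.T / sqrt(C)
logits = logits.masked_fill(neighbor_mask(h, w), float("-inf"))
attn = torch.softmax(logits, dim=-1)
```

Note that a 1×1 window recovers the special case where only self-attention is blocked, which matches the ablation finding that a 1×1 neighbor size is too weak to prevent copying from adjacent tokens.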
Layer-wise query decoder. As analyzed in Sec. 3.1, the query embedding can help prevent anomalies from being reconstructed well. However, there is only one query embedding in the vanilla transformer. Therefore, we design a Layer-wise Query Decoder (LQD) to intensify the use of the query embedding, as shown in Fig. 3. Specifically, in each layer of the LQD, a learnable query embedding is first fused with the encoder embeddings, then integrated with the outputs of the previous layer (self-integration for the first layer). The feature fusion is implemented by NMA. Following the vanilla transformer, a 2-layer FFN is applied to handle these fused tokens, and residual connections are utilized to facilitate the training. The final outputs of the LQD serve as the reconstructed features.

Feature jittering. Inspired by the Denoising Auto-Encoder (DAE) [3, 43], we add perturbations to the feature tokens, guiding the model to learn knowledge of normal samples through the denoising task. Specifically, for a feature token $f_{tok} \in \mathbb{R}^{C}$, we sample the disturbance $D$ from a Gaussian distribution,
$$D \sim \mathcal{N}\!\left(\mu = 0,\; \sigma^{2} = \Big(\alpha \, \frac{\|f_{tok}\|_{2}}{C}\Big)^{2}\right), \tag{3}$$
where $\alpha$ is the jittering scale controlling the noise level. The sampled disturbance is added to $f_{tok}$ with a fixed jittering probability, $p$.

3.3 Implementation details
Feature extraction. We adopt a fixed EfficientNet-b4 [40] pre-trained on ImageNet [14] as the feature extractor. The features from stage-1 to stage-4 are selected. Here, a stage means the combination of blocks that have the same size of feature maps. These features are then resized to the same size and concatenated along the channel dimension to form a feature map, $f_{org} \in \mathbb{R}^{C_{org} \times H \times W}$.

Feature reconstruction. The feature map $f_{org}$ is first tokenized into $H \times W$ feature tokens, followed by a linear projection to reduce $C_{org}$ to a smaller channel dimension, $C$. These tokens are then processed by the NME and the LQD. Learnable position embeddings [15, 16] are added in the attention modules to provide spatial information. Afterward, another linear projection is used to recover the channel dimension from $C$ to $C_{org}$. After reshaping, the reconstructed feature map, $f_{rec} \in \mathbb{R}^{C_{org} \times H \times W}$, is finally obtained.

Objective function. Our model is trained with the MSE loss,
$$\mathcal{L} = \frac{1}{H \times W} \left\| f_{org} - f_{rec} \right\|_{2}^{2}. \tag{4}$$

Inference for anomaly localization. The result of anomaly localization is an anomaly score map, which assigns an anomaly score to each pixel. Specifically, the anomaly score map, $s$, is calculated as the L2 norm of the reconstruction differences,
$$s = \left\| f_{org} - f_{rec} \right\|_{2} \in \mathbb{R}^{H \times W}. \tag{5}$$
Then $s$ is up-sampled to the image size with bi-linear interpolation to obtain the localization results.

Inference for anomaly detection. Anomaly detection aims to detect whether an image contains anomalous regions. We transform the anomaly score map, $s$, into the anomaly score of the image by taking the maximum value of the average-pooled $s$.
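Putting Eqs. (3)-(5) together, a training/inference step could look like the sketch below. This is our paraphrase under stated assumptions: the jittering probability is applied per token, and the pooling window used before taking the image-level maximum is not specified in the text, so `pool_size` is a placeholder.

```python
import torch
import torch.nn.functional as F

def feature_jitter(tokens, alpha=20.0, p=1.0):
    """Eq. (3): add Gaussian noise with std alpha * ||f_tok||_2 / C to each
    token, independently with probability p (p = 1 -> a pure denoising task)."""
    B, K, C = tokens.shape
    sigma = alpha * tokens.norm(p=2, dim=-1, keepdim=True) / C   # (B, K, 1)
    noise = torch.randn_like(tokens) * sigma
    keep = (torch.rand(B, K, 1, device=tokens.device) < p).float()
    return tokens + keep * noise

def loss_and_scores(f_org, f_rec, pool_size=3):
    """Eq. (4) training loss and Eq. (5) anomaly scores for (B, C, H, W) maps."""
    loss = F.mse_loss(f_rec, f_org)             # MSE, up to a constant factor
    s = (f_org - f_rec).norm(p=2, dim=1)        # (B, H, W) pixel score map
    # Image-level score: maximum of the average-pooled score map.
    pooled = F.avg_pool2d(s.unsqueeze(1), pool_size, stride=1)
    return loss, s, pooled.amax(dim=(1, 2, 3))  # (B,) image scores
```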
4 Experiment
4.1 Datasets and metrics
MVTec-AD [4] is a comprehensive, multi-object, multi-defect industrial anomaly detection dataset with 15 classes. For each anomalous sample in the test set, the ground truth includes both the image label and the anomaly segmentation. In the existing literature, only the separate case has been researched. In this paper, we introduce the unified case, where only one model is used to handle all categories.

CIFAR-10 [23] is a classical image classification dataset with 10 categories. Existing methods [6, 24, 37] evaluate CIFAR-10 mainly in the one-versus-many setting, where one class is viewed as normal samples and the others serve as anomalies. Semantic AD [1, 10] proposes a many-versus-one setting, treating one class as anomalous and the remaining classes as normal. Different from both, we propose a unified case (a many-versus-many setting), which is detailed in Sec. 4.4.

Metrics. Following prior works [4, 6, 52], the Area Under the Receiver Operating Characteristic curve (AUROC) is used as the evaluation metric for anomaly detection.

4.2 Anomaly detection on MVTec-AD
Setup. Anomaly detection aims to detect whether an image contains anomalous regions. The anomaly detection performance is evaluated on MVTec-AD [4]. The image size is set to 224×224, and the size for resizing feature maps is set to 14×14. The feature maps from stage-1 to stage-4 of EfficientNet-b4 [40] are resized and concatenated to form a 272-channel feature map. The reduced channel dimension is set to 256. The AdamW optimizer [21] with weight decay 1×10⁻⁴ is used. Our model is trained for 1000 epochs on 8 GPUs (NVIDIA Tesla V100 16GB) with batch size 64. The learning rate is 1×10⁻⁴ initially, and is dropped by a factor of 0.1 after 800 epochs. The layer numbers of the encoder and decoder are both 4. The neighbor size, jittering scale, and jittering probability are set to 7×7, 20, and 1, respectively. The evaluation is run with 5 random seeds. In both the separate case and the unified case, the reconstruction models are trained from scratch.

Baselines. Our approach is compared with the following baselines: US [6], PSVDD [48], PaDiM [11], CutPaste [25], MKD [37], and DRAEM [52]. Under the separate case, the baselines' metrics are reported in their papers, except for US, whose metric is borrowed from [52]. Under the unified case, US, PSVDD, PaDiM, CutPaste, MKD, and DRAEM are run with their publicly available implementations.

Quantitative results of anomaly detection on MVTec-AD [4] are shown in Tab. 1. Though all baselines achieve excellent performance under the separate case, their performance drops dramatically under the unified case. The previous SOTA, DRAEM, a reconstruction-based method trained with pseudo-anomalies, suffers from a drop of nearly 10%. For another strong baseline, CutPaste, a pseudo-anomaly approach, the drop is as large as 18.6%. In contrast, our UniAD shows almost no performance drop from the separate case (96.6%) to the unified case (96.5%). Moreover, we beat the best competitor, DRAEM, by a dramatically large margin (8.4%), demonstrating our superiority.

4.3 Anomaly localization on MVTec-AD
Setup and baselines. Anomaly localization aims to localize anomalous regions in an anomalous image. MVTec-AD [4] is chosen as the benchmark dataset. The setup is the same as in Sec. 4.2. Besides the competitors in Sec. 4.2, FCDD [27] is included, whose metric under the separate case is reported in its paper. Under the unified case, we run FCDD with its publicly available implementation.

Quantitative results of anomaly localization on MVTec-AD [4] are reported in Tab. 2. Similar to Sec. 4.2, when switching from the separate case to the unified case, the performance of all competitors drops significantly. For example, the performance of US, an important distillation-based baseline, decreases by 12.1%. FCDD, a pseudo-anomaly approach, suffers from a dramatic drop of 28.7%, reflecting that pseudo-anomaly approaches are not suitable for the unified case. In contrast, our UniAD even gains a slight improvement from the separate case (96.6%) to the unified case (96.8%), proving the suitability of our UniAD for the unified case. Moreover, we significantly surpass the strongest baseline, PaDiM, by 7.3%. This significant improvement reflects the effectiveness of our model.

Qualitative results for anomaly localization on MVTec-AD [4] are illustrated in Fig. 6. For both global (Fig. 6a) and local (Fig. 6b) structural anomalies, as well as scattered texture perturbations (Fig. 6c) and multiple texture scratches (Fig. 6d), our method successfully reconstructs anomalies to their corresponding normal appearance, then accurately localizes anomalous regions through the reconstruction differences. More qualitative results are given in the Appendix.

4.4 Anomaly detection on CIFAR-10
Setup. To further verify the effectiveness of our UniAD, we extend CIFAR-10 [23] to the unified case, which consists of four combinations. For each combination, five categories together serve as normal samples, while the other categories are viewed as anomalies. The class indices of the four combinations are {01234}, {56789}, {02468}, and {13579}. Here, {01234} means that the normal samples include images from classes 0, 1, 2, 3, and 4, and similarly for the others. Note that the class indices are obtained by sorting the class names of the 10 classes. The setup of the model is detailed in the Appendix. A sketch of this split construction is given below.
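For concreteness, the many-versus-many split could be built as follows. This is our illustrative sketch (the dataset root, function name, and return format are placeholders), not the authors' data pipeline.

```python
import torchvision

def unified_cifar10_split(combo=(0, 1, 2, 3, 4), train=True, root="./data"):
    """Many-versus-many split: classes in `combo` are normal; at test time
    every other class serves as an anomaly. Training uses normals only."""
    ds = torchvision.datasets.CIFAR10(root=root, train=train, download=True)
    normal = [img for img, y in ds if y in combo]
    anomalous = [img for img, y in ds if y not in combo]
    return normal, anomalous

# Example: training normals for combination {01234}, and the test split.
train_normal, _ = unified_cifar10_split(train=True)
test_normal, test_anomalous = unified_cifar10_split(train=False)
```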
Baselines. US [6], FCDD [27], FCDD+OE [27], PANDA [33], and MKD [37] serve as competitors. US, FCDD, FCDD+OE, PANDA, and MKD are run with their publicly available implementations.

Quantitative results of anomaly detection on CIFAR-10 [23] are shown in Tab. 3. When five classes together serve as normal samples, two recent baselines, US and FCDD, almost lose their ability to detect anomalies. When utilizing 10000 images sampled from CIFAR-100 [23] as auxiliary Outlier Exposure (OE), FCDD+OE improves the performance by a large margin. We still stably outperform FCDD+OE by 8.3% without the help of OE, indicating the efficacy of our UniAD.

4.5 Comparison with transformer-based competitors
As described in Sec. 2, some attempts [31, 28, 50] also try to utilize the transformer for anomaly detection. Here we compare our UniAD with existing transformer-based competitors on MVTec-AD [4]. Recall that we choose the transformer as the reconstruction model considering its great potential in preventing the model from learning the "identical shortcut" (refer to Sec. 3.1). Concretely, we find that the learnable query embedding is essential for avoiding such a shortcut but is seldom explored in existing transformer-based approaches. As shown in Tab. 4, after introducing even only one query embedding, our baseline already outperforms existing alternatives by a sufficiently large margin in the unified setting. Our proposed three components further improve this strong baseline. Recall that all three components are proposed to prevent the model from directly outputting its inputs.

4.6 Ablation studies
To verify the effectiveness of the proposed modules and the selection of hyperparameters, we conduct extensive ablation studies on MVTec-AD [4] under the unified case.

Layer-wise query. Tab. 5a verifies our assertion that the query embedding is of vital significance. 1) Without the query embedding, meaning the encoder embeddings are directly input to the decoder, the performance is the worst. 2) Adding only one query embedding to the first decoder layer (i.e., the vanilla transformer [42]) improves the performance dramatically, by 18.1% and 13.4% in anomaly detection and localization, respectively. 3) With a layer-wise query embedding in each decoder layer, the image-level and pixel-level AUROC are further improved by 7.4% and 3.7%, respectively.

Layer number. We conduct experiments to investigate the influence of the layer number, as shown in Tab. 5b. 1) With any combination, our model outperforms the vanilla transformer by a large margin, reflecting the effectiveness of our design. 2) The best performance is achieved with a moderate layer number: 4Enc+4Dec. A larger layer number like 6Enc+6Dec does not bring further improvement, which may be because more layers are harder to train.

Neighbor masked attention. 1) The effectiveness of NMA is proven in Tab. 5a. Under the case of one query embedding, adding NMA brings an improvement of 8.5% for detection and 3.5% for localization. 2) The neighbor size of NMA is selected in Tab. 5c. A 1×1 neighbor size is the worst, because 1×1 is too small to prevent the information leak; the recovery can still be completed by copying neighboring regions. A larger neighbor size (≥ 5×5) is obviously much better, and the best one is selected as 7×7.
3) We also study where to add NMA in Tab. 5d. Only adding NMA in the encoder (Enc) is not enough. The performance is stably improved when further adding NMA to the first or second attention in the decoder (Enc+Dec1, Enc+Dec2) or to both (All). This reflects that the full attention of the decoder also contributes to the information leak.

Feature jittering. 1) Tab. 5a confirms the efficacy of FJ. With one query embedding as the baseline, introducing FJ brings an increase of 7.4% for detection and 3.0% for localization, respectively. 2) According to Tab. 5e, the jittering scale, α, is chosen as 20. A larger α (i.e., 30) disturbs the features too much, degrading the results. 3) In Tab. 5f, the jittering probability, p, is studied. In essence, with feature jittering the task becomes a denoising task, and without it a plain reconstruction task. The results show that the full denoising task (i.e., p = 1) is the best.

5 Conclusion
In this work, we propose UniAD, which unifies anomaly detection across multiple classes. For such a challenging task, we guard the model against learning an "identical shortcut" with three improvements. First, we confirm the effectiveness of the learnable query embedding and carefully tailor a layer-wise query decoder to help model the complex distribution of multi-class data. Second, we come up with a neighbor masked attention module to avoid the information leak from the input to the output. Third, we propose feature jittering, which makes the model less sensitive to input perturbations. Under the unified task setting, our method achieves state-of-the-art performance on the MVTec-AD and CIFAR-10 datasets, significantly outperforming existing alternatives.

Discussion. In this work, different kinds of objects are handled without being distinguished. We have not used the category labels that might help the model better fit multi-class data. How to incorporate category labels into the unified model should be further studied. In practical use, normal samples are not as consistent as those in MVTec-AD and often manifest some diversity. Our UniAD can handle all 15 categories in MVTec-AD, and hence would be more suitable for real scenes. However, anomaly detection may be used for video surveillance, which may infringe on personal privacy.

Acknowledgments and Disclosure of Funding
Acknowledgement. This work is sponsored by the National Key Research and Development Program of China (2021YFB1716000) and the National Natural Science Foundation of China (62176152).
1. What is the focus and contribution of the paper on multi-class anomaly detection?
2. What are the strengths of the proposed approach, particularly in addressing the "identical shortcut" phenomenon?
3. What are the weaknesses of the paper regarding its experimental design and lack of novelty in some components?
4. Do you have any concerns about the effectiveness of the proposed method in certain scenarios or its interpretability?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This paper aims to learn a unified framework for detecting multi-class anomalies. Anomaly detection is trained only on normal data, so the so-called "identical shortcut" phenomenon may occur. To solve this problem, the paper proposes the following strategies: 1) a layer-wise query decoder, 2) neighbor masked attention, and 3) feature jittering. The experimental results on the MVTec-AD and CIFAR-10 datasets show that the proposed method can alleviate the "identical shortcut" phenomenon.

Strengths And Weaknesses
Strengths
This paper proposes a novel neighbor masked attention.
This paper is well organized, easy to understand, and clearly written.
Good results are achieved on the MVTec-AD and CIFAR-10 datasets.

Weaknesses
An introduction to the evaluation metric (AUROC) is missing.
The ablation experiments in Table 4 are incomplete. For example, under "1 q.", the results for the case of only NMA and FJ are missing; under "Layer-wise q.", the results for the case of only NMA are missing, as are the results for the case of only FJ.
Feature jittering is a common practice and lacks novelty.

Questions
On lines 125-126, the authors mention that the model may learn a trivial solution, causing anomaly detection to fail. But in an MLP, this should not be the case due to the presence of nonlinear activations. Can the authors explain this further through experiments or visualizations?
In Table 2, for the categories Capsule and Transistor, the results of DRAEM (50.5 and 64.5) are significantly lower than those of the method in this paper (98.5 and 97.9), but for the category Zipper, the result of DRAEM (98.3) is higher than that of this paper (96.8). What are the reasons for this phenomenon? Please analyze it.
Please analyze the complexity of the method in this paper compared with the one-class-one-model methods.

Limitations
The authors have adequately addressed the limitations and potential negative societal impact of their work.
NIPS
Title A Unified Model for Multi-class Anomaly Detection Abstract Despite the rapid advance of unsupervised anomaly detection, existing methods require to train separate models for different objects. In this work, we present UniAD that accomplishes anomaly detection for multiple classes with a unified framework. Under such a challenging setting, popular reconstruction networks may fall into an “identical shortcut”, where both normal and anomalous samples can be well recovered, and hence fail to spot outliers. To tackle this obstacle, we make three improvements. First, we revisit the formulations of fully-connected layer, convolutional layer, as well as attention layer, and confirm the important role of query embedding (i.e., within attention layer) in preventing the network from learning the shortcut. We therefore come up with a layer-wise query decoder to help model the multi-class distribution. Second, we employ a neighbor masked attention module to further avoid the information leak from the input feature to the reconstructed output feature. Third, we propose a feature jittering strategy that urges the model to recover the correct message even with noisy inputs. We evaluate our algorithm on MVTec-AD and CIFAR-10 datasets, where we surpass the state-of-the-art alternatives by a sufficiently large margin. For example, when learning a unified model for 15 categories in MVTec-AD, we surpass the second competitor on the tasks of both anomaly detection (from 88.1% to 96.5%) and anomaly localization (from 89.5% to 96.8%). Code is available at https:// github.com/zhiyuanyou/UniAD. 1 Introduction Anomaly detection has found an increasingly wide utilization in manufacturing defect detection [4], medical image analysis [17], and video surveillance [46]. Considering the highly diverse anomaly types, a common solution is to model the distribution of normal samples and then identify anomalous ones via finding outliers. It is therefore crucial to learn a compact boundary for normal data, as shown in Fig. 1a. For this purpose, existing methods [6, 11, 25, 27, 48, 49, 52] propose to train separate models for different classes of objects, like in Fig. 1c. However, such a one-class-one-model scheme could be memory-consuming especially along with the number of classes increasing, and also uncongenial to the scenarios where the normal samples manifest themselves in a large intra-class diversity (i.e., one object consists of various types). In this work, we target a more practical task, which is to detect anomalies from different object classes with a unified framework. The task setting is illustrated in Fig. 1d, where the training data covers normal samples from a range of categories, and the learned model is asked to accomplish anomaly detection for all these categories without any fine-tuning. It is noteworthy that the categorical information (i.e., class label) is inaccessible at both the training and the inference stages, considerably ∗ Contribute Equally. † Corresponding Author. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). easing the difficulty of data preparation. Nonetheless, solving such a task is fairly challenging. Recall that the rationale behind unsupervised anomaly detection is to model the distribution of normal data and find a compact decision boundary as in Fig. 1a. When it comes to the multi-class case, we expect the model to capture the distribution of all classes simultaneously such that they can share the same boundary as in Fig. 1b. 
But if we focus on a particular category, say the green one in Fig. 1b, all the samples from other categories should be considered as anomalies no matter whether they are normal (i.e., blue circles) or anomalous (i.e., blue triangles) themselves. From this perspective, how to accurately model the multi-class distribution becomes vital. A widely used approach to learning the normal data distribution draws support from image (or feature) reconstruction [2, 5, 26, 39, 51], which assumes that a well-trained model always produces normal samples regardless of the defects within the inputs. In this way, there will be large reconstruction errors for anomalous samples, making them distinguishable from the normal ones. However, we find that popular reconstruction networks suggest unsatisfying performance on the challenging task studied in this work. They typically fall into an “identity shortcut”, which appears as returning a direct copy of the input disregarding its content.1 As a result, even anomalous samples can be well recovered with the learned model and hence become hard to detect. Moreover, under the unified case, where the distribution of normal data is more complex, the “identical shortcut” problem is magnified. Intuitively, to learn a unified model that can reconstruct all kinds of objects, it requires the model to work extremely hard to learn the joint distribution. From this perspective, learning an “identical shortcut” appears as a far easier solution. To address this issue, we carefully tailor a feature reconstruction framework that prevents the model from learning the shortcut. First, we revisit the formulations of fully-connected layer, convolutional layer, as well as attention layer used in neural networks, and observe that both fully-connected layer and convolutional layer face the risk of learning a trivial solution. This drawback is further amplified under the multi-class setting in that the normal data distribution becomes far more complex. Instead, the attention layer is sheltered from such a risk, benefiting from a learnable query embedding (see Sec. 3.1). Accordingly, we propose a layer-wise query decoder to intensify the use of query embedding. Second, we argue that the full attention (i.e., every feature point relates to each other) also contributes to the shortcut issue, because it offers the chance of directly copying the input to the output. To avoid the information leak, we employ a neighbor masked attention module, where a feature point relates to neither itself nor its neighbors. Third, inspired by Bengio et al. [3], we propose a feature jittering strategy, which requires the model to recover the source message even with noisy inputs. All these designs help the model escape from the “identity shortcut”, as shown in Fig. 2b. Extensive experiments on MVTec-AD [4] and CIFAR-10 [23] demonstrate the sufficient superiority of our approach, which we call UniAD, over existing alternatives under the unified task setting. For instance, when learning a single model for 15 categories in MVTec-AD, we achieve state-of-the-art performance on the tasks of both anomaly detection and anomaly localization, boosting the AUROC from 88.1% to 96.5% and from 89.5% to 96.8%, respectively. 1A detailed analysis can be found in Sec. 3.1 and Fig. 2. 2 Related work Anomaly detection. 1) Classical approaches extend classical machine learning methods for one-class classification, such as one-class support vector machine (OC-SVM) [38] and support vector data description (SVDD) [35, 41]. 
Patch-level embedding [48], geometric transformation [18], and elastic weight consolidation [33] are incorporated for improvement. 2) Pseudo-anomaly converts anomaly detection to supervised learning, including classification [25, 32, 45], image denoising [52], and hypersphere segmentation [27]. However, these methods partly rely on how well proxy anomalies match real anomalies that are not known [13]. 3) Modeling then comparison assumes that the pre-trained network is capable of extracting discriminative features for anomaly detection [11, 34]. PaDiM [11] and MDND [34] extract pre-trained features to model normal distribution, then utilize a distance metric to measure the anomalies. Nevertheless, these methods need to memorize and model all normal features, thus are computationally expensive. 4) Knowledge distillation proposes that the student distilled by a teacher on normal samples could only extract normal features [6, 13, 37, 44, 45]. Recent works mainly focus on model ensemble [6], feature pyramid [37, 44], and reverse distillation [13]. Reconstruction-based anomaly detection. These methods rely on the hypothesis that reconstruction models trained on normal samples only succeed in normal regions, but fail in anomalous regions [5, 8, 26, 36, 49]. Early attempts include Auto-Encoder (AE) [5, 9], Variational Auto-Encoder (VAE) [22, 26], and Generative Adversarial Net (GAN) [2, 30, 36, 51]. However, these methods face the problem that the model could learn tricks that the anomalies are also restored well. Accordingly, researchers adopt different strategies to tackle this issue, such as adding instructional information (i.e., structural [53] or semantic [39, 46]), memory mechanism [19, 20, 29], iteration mechanism [12], image masking strategy [47], and pseudo-anomaly [9, 32]. Recently, DRAEM [52] first recovers the pseudo-anomaly disturbed normal images for representation, then utilizes a discriminative net to distinguish the anomalies, achieving excellent performance. However, DRAEM [52] ceases to be effective under the unified case. Moreover, there is still an important aspect that has not been well studied, i.e., what architecture is the best reconstruction model? In this paper, we first compare and analyze three popular architectures including MLP, CNN, and transformer. Then, accordingly, we base on the transformer and further design three improvements, which compose our UniAD. Transformer in anomaly detection. Transformer [42] with attention mechanism, first proposed in natural language processing, has been successfully used in computer vision [7, 16]. Some attempts try to utilize transformer for anomaly detection. InTra [31] adopts transformer to recover the image by recovering all masked patches one by one. VT-ADL [28] and AnoVit [50] both apply transformer encoder to reconstruct images. However, these methods directly utilize vanilla transformer, and do not figure out why transformer brings improvement. In contrast, we confirm the efficacy of the query embedding to prevent the shortcut, and accordingly design a layer-wise query decoder. Also, to avoid the information leak of the full attention, we employ a neighbor masked attention module. 3 Method 3.1 Revisiting feature reconstruction for anomaly detection In Fig. 2, following the feature reconstruction paradigm [39, 49], we build an MLP, a CNN, and a transformer (with query embedding) to reconstruct the features extracted by a pre-trained backbone. The reconstruction errors represent the anomaly possibility. 
The architectures of the three networks are given in Appendix. The metric is evaluated every 10 epochs. Note that the periodic evaluation is impractical since anomalies are not available during training. As shown in Fig. 2a, after a period of training, the performances of the three networks decrease severely with the losses going extremely small. We attribute this to the problem of “identical shortcut”, where both normal and anomalous regions can be well recovered, thus failing to spot anomalies. This speculation is verified by the visualization results in Fig. 2b (more results in Appendix). However, compared with MLP and CNN, the transformer suffers from a much smaller performance drop, indicating a slighter shortcut problem. This encourages us to analyze as follows. We denote the features in a normal image as x+ ∈ RK×C , where K is the feature number, C is the channel dimension. The batch dimension is omitted for simplicity. Similarly, the features in an anomalous image are denoted as x− ∈ RK×C . The reconstruction loss is chosen as the MSE loss. We provide a rough analysis using a simple 1-layer network as the reconstruction net, which is trained with x+ and tested to detect anomalous regions in x−. Fully-connected layer in MLP. Denote the weights and bias in this layer as w ∈ RC×C , b ∈ RC , respectively, this layer can be represented as, y = x+w + b ∈ RK×C . (1) With the MSE loss pushing y to x+, the model may take shortcut to regress w → I (identity matrix), b → 0. Ultimately, this model could also reconstruct x− well, failing in anomaly detection. Convolutional layer in CNN. A convolutional layer with 1×1 kernel is equivalent to a fullyconnected layer. Besides, An n× n (n > 1) kernel has more parameters and larger capacity, and can complete whatever 1×1 kernel can. Thus, this layer also has the chance to learn a shortcut. Transformer with query embedding. In such a model, there is an attention layer with a learnable query embedding, q ∈ RK×C . When using this layer as the reconstruction model, it is denoted as, y = softmax(q(x+)T / √ C)x+ ∈ RK×C . (2) To push y to x+, the attention map, softmax(q(x+)T / √ C), should approximate I (identity matrix), so q must be highly related to x+. Considering that q in the trained model is relevant to normal samples, the model could not reconstruct x− well. The ablation study in Sec. 4.6 shows that without the query embedding, the performance of transformer drops dramatically by 18.1% and 13.4% in anomaly detection and localization, respectively. Thus the query embedding is of vital significance to model the normal distribution. However, transformer still suffers from the shortcut problem, which inspires our three improvements. 1) According to that the query embedding can prevent reconstructing anomalies, we design a Layerwise Query Decoder (LQD) by adding the query embedding in each decoder layer rather than only the first layer in vanilla transformer. 2) We suspect that the full attention increases the possibility of the shortcut. Since one token could see itself and its neighbor regions, it is easy to reconstruct by simply copying. Thus we mask the neighbor tokens when calculating the attention map, called Neighbor Masked Attention (NMA). 3) We employ a Feature Jittering (FJ) strategy to disturb the input features, leading the model to learn normal distribution from denoising. Benefiting from these designs, our UniAD achieves satisfying performance, as illustrated in Fig. 2. 
Relation between the “identical shortcut” problem and the unified case. In Fig. 2a, we aim to visualize the “identical shortcut” problem, where the loss becomes smaller yet the performance drops. We conduct the same experiment under the separate case on MLP. As shown in Fig. 4, the accuracy (green for detection and red for localization) keeps growing up along with the loss (blue) getting smaller. This helps reveal the relation between the “identical shortcut” problem and the unified case, which is that the unified case is more challenging and hence magnifies the “identical shortcut” problem. Therefore, since our approach is specially designed to solve the “identical shortcut” problem, our method can be effective in the unified case. 3.2 Improving feature reconstruction for unified anomaly detection Overview. As shown in Fig. 3, our UniAD is composed of a Neighbor Masked Encoder (NME) and a Layer-wise Query Decoder (LQD). Firstly, the feature tokens extracted by a fixed pre-trained backbone are further integrated by NME to derive the encoder embeddings. Then, in each layer of LQD, a learnable query embedding is successively fused with the encoder embeddings and the outputs of the previous layer (self-fusion for the first layer). The feature fusion is completed by the Neighbor Masked Attention (NMA). The final outputs of LQD are viewed as the reconstructed features. Also, we propose a Feature Jittering (FJ) strategy to add perturbations to the input features, leading the model to learn normal distribution from the denoising task. Finally, the results of anomaly localization and detection are obtained through the reconstruction differences. Neighbor masked attention. We suspect that the full attention in vanilla transformer [42] contributes to the “identical shortcut”. In full attention, one token is permitted to see itself, so it will be easy to reconstruct by simply copying. Moreover, considering that the feature tokens are extracted by a CNN backbone, the neighbor tokens must share lots of similarities. Therefore, we propose to mask the neighbor tokens when calculating the attention map, called Neighbor Masked Attention (NMA). Note that the neighbor region is defined in the 2D space, as shown in Fig. 5. Neighbor masked encoder. The encoder follows the standard architecture in vanilla transformer. Each layer consists of an attention module and a Feed-Forward Network (FFN). However, the full attention is replaced by our proposed NMA to prevent the information leak. Layer-wise query decoder. It is analyzed in Sec. 3.1 that the query embedding could help prevent reconstructing anomalies well. However, there is only one query embedding in the vanilla transformer. Therefore, we design a Layer-wise Query Decoder (LQD) to intensify the use of query embedding, as shown in Fig. 3. Specifically, in each layer of LQD, a learnable query embedding is first fused with the encoder embeddings, then integrated with the outputs of the previous layer (self-integration for the first layer). The feature fusion is implemented by NMA. Following the vanilla transformer, a 2-layer FFN is applied to handle these fused tokens, and the residual connection is utilized to facilitate the training. The final outputs of LQD serve as the reconstructed features. Feature jittering. Inspired by Denoising Auto-Encoder (DAE) [3, 43], we add perturbations to feature tokens, guiding the model to learn knowledge of normal samples by the denoising task. 
Specifically, for a feature token, ftok ∈ RC , we sample the disturbance D from a Gaussian distribution, D ∼ N(µ = 0, σ2 = (α ||ftok||2 C )2), (3) where α is the jittering scale to control the noisy degree. Also, the sampled disturbance is added to ftok with a fixed jittering probability, p. 3.3 Implementation details Feature extraction. We adopt a fixed EfficientNet-b4 [40] pre-trained on ImageNet [14] as the feature extractor. The features from stage-1 to stage-4 are selected. Here the stage means the combination of blocks that have the same size of feature maps. Then these features are resized to the same size, and concatenated along channel dimension to form a feature map, forg ∈ RCorg×H×W . Feature reconstruction. The feature map, forg , is first tokenized to H ×W feature tokens, followed by a linear projection to reduce Corg to a smaller channel, C. Then these tokens are processed by NME and LQD. The learnable position embeddings [15, 16] are added in attention modules to inform the spatial information. Afterward, another linear projection is used to recover the channel from C to Corg. After reshape, the reconstructed feature map, frec ∈ RCorg×H×W , is finally obtained. Objective function. Our model is trained with the MSE loss as, L = 1 H ×W ||forg − frec||22. (4) Inference for anomaly localization. The result of anomaly localization is an anomaly score map, which assigns an anomaly score for each pixel. Specifically, the anomaly score map, s, is calculated as the L2 norm of the reconstruction differences as, s = ||forg − frec||2 ∈ RH×W . (5) Then s is up-sampled to the image size with bi-linear interpolation to obtain the localization results. Inference for anomaly detection. Anomaly detection aims to detect whether an image contains anomalous regions. We transform the anomaly score map, s, to the anomaly score of the image by taking the maximum value of the averagely pooled s. 4 Experiment 4.1 Datasets and metrics MVTec-AD [4] is a comprehensive, multi-object, multi-defect industrial anomaly detection dataset with 15 classes. For each anomalous sample in the test set, the ground-truth includes both image label and anomaly segmentation. In the existing literature, only the separate case is researched. In this paper, we introduce the unified case, where only one model is used to handle all categories. CIFAR-10 [23] is a classical image classification dataset with 10 categories. Existing methods [6, 24, 37] evaluate CIFAR-10 mainly in the one-versus-many setting, where one class is viewed as normal samples, and others serve as anomalies. Semantic AD [1, 10] proposes a many-versus-one setting, treating one class as anomalous and the remaining classes as normal. Different from both, we propose a unified case (many-versus-many setting), which is detailed in Sec. 4.4. Metrics. Following prior works [4, 6, 52], the Area Under the Receiver Operating Curve (AUROC) is used as the evaluation metric for anomaly detection. 4.2 Anomaly detection on MVTec-AD Setup. Anomaly detection aims to detect whether an image contains anomalous regions. The anomaly detection performance is evaluated on MVTec-AD [4]. The image size is selected as 224× 224, and the size for resizing feature maps is set as 14 × 14. The feature maps from stage-1 to stage-4 of EfficientNet-b4 [40] are resized and concatenated together to form a 272-channel feature map. The reduced channel dimension is set as 256. AdamW optimizer [21] with weight decay 1× 10−4 is used. 
Our model is trained for 1000 epochs on 8 GPUs (NVIDIA Tesla V100 16GB) with batch size 64. The learning rate is 1× 10−4 initially, and dropped by 0.1 after 800 epochs. The layer numbers of the encoder and decoder are both 4. The neighbor size, jittering scale, and jittering probability are set as 7×7, 20, and 1, respectively. The evaluation is run with 5 random seeds. In both the separate case and the unified case, the reconstruction models are trained from the scratch. Baselines. Our approach is compared with baselines including: US [6], PSVDD [48], PaDiM [11], CutPaste [25], MKD [37], and DRAEM [52]. Under the separate case, the baselines’ metric is reported in their papers except the metric of US borrowed from [52]. Under the unified case, US, PSVDD, PaDiM, CutPaste, MKD, and DRAEM are run with the publicly available implementations. Quantitative results of anomaly detection on MVTec-AD [4] are shown in Tab. 1. Though all baselines achieve excellent performances under the separate case, their performances drop dramatically under the unified case. The previous SOTA, DRAEM, a reconstruction-based method trained by pseudo-anomaly, suffers from a drop of near 10%. For another strong baseline, CutPaste, a pseudo-anomaly approach, the drop is as large as 18.6%. However, our UniAD has almost no performance drop from the separate case (96.6%) to the unified case (96.5%). Moreover, we beat the best competitor, DRAEM, by a dramatically large margin (8.4%), demonstrating our superiority. 4.3 Anomaly localization on MVTec-AD Setup and baselines. Anomaly localization aims to localize anomalous regions in an anomalous image. MVTec-AD [4] is chosen as the benchmark dataset. The setup is the same as that in Sec. 4.2. Besides the competitors in Sec. 4.2, FCDD [27] is included, whose metric under the separate case is reported in its paper. Under the unified case, we run FCDD with the implementation: FCDD. Quantitative results of anomaly localization on MVTec-AD [4] are reported in Tab. 2. Similar to Sec. 4.2, switching from the separate case to the unified case, the performance of all competitors drops significantly. For example, the performance of US, an important distillation-based baseline, decreases by 12.1%. FCDD, a pseudo-anomaly approach, suffers from a dramatic drop of 28.7%, reflecting the pseudo-anomaly is not suitable for the unified case. However, our UniAD even gains a slight improvement from the separate case (96.6%) to the unified case (96.8%), proving the suitability of our UniAD for the unified case. Moreover, we significantly surpass the strongest baseline, PaDiM, by 7.3%. This significant improvement reflects the effectiveness of our model. Qualitative results for anomaly localization on MVTec-AD [4] are illustrated in Fig. 6. For both global (Fig. 6a) and local (Fig. 6b) structural anomalies, both scattered texture perturbations (Fig. 6c) and multiple texture scratches (Fig. 6d), our method could successfully reconstruct anomalies to their corresponding normal samples, then accurately localize anomalous regions through reconstruction differences. More qualitative results are given in Appendix. 4.4 Anomaly detection on CIFAR-10 Setup. To further verify the effectiveness of our UniAD, we extend CIFAR-10 [23] to the unified case, which consists of four combinations. For each combination, five categories together serve as normal samples, while other categories are viewed as anomalies. The class indices of the four combinations are {01234}, {56789}, {02468}, {13579}. 
Here, {01234} means the normal samples include images from class 0, 1, 2, 3, 4, and similar for others. Note that the class index is obtained by sorting the class names of 10 classes. The setup of the model is detailed in Appendix. Baselines. US [6], FCDD [27], FCDD+OE [27], PANDA [33], and MKD [37] serve as competitors. US, FCDD, FCDD+OE, PANDA, and MKD are run with the publicly available implementations. Quantitative results of anomaly detection on CIFAR-10 [23] are shown in Tab. 3. When five classes together serve as normal samples, two recent baselines, US and FCDD, almost lose their ability to detect anomalies. When utilizing 10000 images sampled from CIFAR-100 [23] as auxiliary Outlier Exposure (OE), FCDD+OE improves the performance by a large margin. We still stably outperform FCDD+OE by 8.3% without the help of OE, indicating the efficacy of our UniAD. 4.5 Comparison with transformer-based competitors As described in Sec. 2, some attempts [31, 28, 50] also try to utilize transformer for anomaly detection. Here we compare our UniAD with existing transformer-based competitors on MVTec-AD [4]. Recall that, we choose transformer as the reconstruction model considering its great potential in preventing the model from learning the “identical shortcut” (refer to Sec. 3.1). Concretely, we find that the learnable query embedding is essential for avoiding such a shortcut but is seldom explored in existing transformer-based approaches. As shown in Tab. 4, after introducing even only one query embedding, our baseline already outperforms existing alternatives by a sufficiently large margin in the unified setting. Our proposed three components further improve our strong baseline. Recall that all three components are proposed to avoid the model from directly outputting the inputs. 4.6 Ablation studies To verify the effectiveness of the proposed modules and the selection of hyperparameters, we implement extensive ablation studies on MVTec-AD [4] under the unified case. Layer-wise query. Tab. 5a verifies our assertion that the query embedding is of vital significance. 1) Without query embedding, meaning the encoder embeddings are directly input to the decoder, the performance is the worst. 2) Adding only one query embedding to the first decoder layer (i.e., vanilla transformer [42]) promotes the performance dramatically by 18.1% and 13.4% in anomaly detection and localization, respectively. 3) With layer-wise query embedding in each decoder layer, image-level and pixel-level AUROC is further improved by 7.4% and 3.7%, respectively. Layer number. We conduct experiments to investigate the influence of layer number, as shown in Tab. 5b. 1) No matter with which combination, our model outperforms vanilla transformer by a large margin, reflecting the effectiveness of our design. 2) The best performance is achieved with a (b) Layer Number of Encoder & Decoder moderate layer number: 4Enc+4Dec. A larger layer number like 6Enc+6Dec does not bring further promotion, which may be because more layers are harder to train. Neighbor masked attention. 1) The effectiveness of NMA is proven in Tab. 5a. Under the case of one query embedding, adding NMA brings promotion by 8.5% for detection and 3.5% for localization. 2) The neighbor size of NMA is selected in Tab. 5c. 1×1 neighbor size is the worst, because 1×1 is too small to prevent the information leak, thus the recovery could be completed by copying neighbor regions. A larger neighbor size (≥ 5×5) is obviously much better, and the best one is selected as 7×7. 
3) We also study where to apply NMA in Tab. 5d. Adding NMA only in the encoder (Enc) is not enough. The performance is stably improved when NMA is further added to the first or second attention layer of the decoder (Enc+Dec1, Enc+Dec2) or to both (All). This reflects that the full attention of the decoder also contributes to the information leak.

Feature jittering. 1) Tab. 5a confirms the efficacy of FJ. With one query embedding as the baseline, introducing FJ brings an increase of 7.4% for detection and 3.0% for localization. 2) According to Tab. 5e, the jittering scale, α, is chosen as 20. A larger α (e.g., 30) disturbs the features too much, degrading the results. 3) In Tab. 5f, the jittering probability, p, is studied. In essence, with feature jittering the task becomes a denoising task, whereas without it the task is plain reconstruction. The results show that the fully denoising setting (p = 1) is the best (an illustrative sketch of both NMA masking and feature jittering is given below).

5 Conclusion

In this work, we propose UniAD, which unifies anomaly detection over multiple classes. For this challenging task, we guard the model against learning an "identical shortcut" with three improvements. First, we confirm the effectiveness of the learnable query embedding and carefully tailor a layer-wise query decoder to help model the complex distribution of multi-class data. Second, we devise a neighbor masked attention module to avoid the information leak from the input to the output. Third, we propose feature jittering, which makes the model less sensitive to input perturbations. Under the unified task setting, our method achieves state-of-the-art performance on the MVTec-AD and CIFAR-10 datasets, significantly outperforming existing alternatives.

Discussion. In this work, different kinds of objects are handled without being distinguished. We have not used the category labels that might help the model better fit multi-class data; how to incorporate category labels into the unified model should be further studied. In practical use, normal samples are not as consistent as those in MVTec-AD and often exhibit some diversity. Since our UniAD can handle all 15 categories of MVTec-AD with a single model, it should be better suited to such real scenes. However, anomaly detection may also be used for video surveillance, which could infringe on personal privacy.

Acknowledgments and Disclosure of Funding

Acknowledgement. This work is sponsored by the National Key Research and Development Program of China (2021YFB1716000) and the National Natural Science Foundation of China (62176152).
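The sketch referenced above — a minimal NumPy illustration of the neighbor mask used by NMA and of feature jittering, as we read them from the text. The naming and the exact noise normalization are our own assumptions, not the authors' implementation.

    import numpy as np

    def neighbor_mask(h, w, k=7):
        # Boolean (h*w, h*w) matrix that is True where attention is FORBIDDEN:
        # each flattened feature token may not attend within its k x k neighborhood.
        ys, xs = np.divmod(np.arange(h * w), w)
        return (np.abs(ys[:, None] - ys[None, :]) <= k // 2) & \
               (np.abs(xs[:, None] - xs[None, :]) <= k // 2)

    def feature_jitter(feats, alpha=20.0, p=1.0, rng=np.random):
        # With probability p, add Gaussian noise to the feature tokens; the
        # noise grows with the jittering scale alpha (the normalization by the
        # token norm and channel dimension is our assumption).
        if rng.rand() < p:
            scale = alpha * np.linalg.norm(feats, axis=-1, keepdims=True) / feats.shape[-1]
            feats = feats + rng.randn(*feats.shape) * scale
        return feats

In an attention layer, the True entries of neighbor_mask would be set to a large negative value in the attention logits before the softmax, so that tokens cannot simply copy their spatial neighbors.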
1. What is the main contribution of the paper regarding anomaly detection?
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to other recent works?
3. How does the reviewer assess the clarity and novelty of the paper's content?
4. What are some suggested improvements for the paper, including ablations and comparisons with transformer-based models?
5. Are there any potential societal impacts or limitations of the proposed approach that should be considered?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper

The authors propose the learning of multi-class decision boundaries for the task of anomaly detection (AD) over multiple object classes. For this, they employ reconstruction-based scores obtained from a transformer network, modified with a couple of simple tricks, such as masking neighboring points in the attention map and increasing the capacity of the decoder. Results on MVTec show this is a promising direction for AD over multiple object classes.

Strengths And Weaknesses

Strengths

To enable their transformer-based model to work with the task of adopting a complex normal distribution, the authors come up with some modifications, in particular neighbor-masked attention, which they insert directly into the architecture to replace the default attention layer. While the authors employ other approaches such as "feature jittering", these correspond to a simple addition of Gaussian noise at the input stage, and given its simplicity I would suggest the authors remove this from the abstract etc., as it doesn't present any significant novelty. The experimental results are the strong point of this work, with outstanding performance on MVTec-AD and CIFAR10 (which is a much less relevant benchmark though, in particular as more challenging ones have recently been used, e.g. CIFAR100/STL10, cf. the works listed below). Moreover, the paper is well-written and easy to follow.

Weaknesses

The proposed modifications are relatively straightforward; in particular, the ablations in Table 4 indicate that a vanilla transformer would end up outperforming the existing state of the art on MVTec-AD. Given there are recent works on using transformers for AD, e.g. AnoVIT (as pointed out by the authors on page 3, lines 99-101), this somewhat limits the novelty of the proposed method. Surprisingly, from the ablation in Table 4 it appears as if feature jittering (FJ) boosts performance nearly as much as neighborhood-masked attention (NMA). I would suggest including pairwise couplings (e.g. NMA+FJ) to shed light on which of these are required in unison, or whether they obtain similar outcomes. Moreover, there have been various works in the recent past that attempt to perform anomaly detection over classes that contain more than a single object class (with no labels assumed present to distinguish the objects). A discussion of these recent works, and of how they relate to or differ from the way in which multiple classes are treated here, is missing, e.g.:

"Deep Semi-Supervised Anomaly Detection", Ruff et al., ICLR 2020
"Detecting Semantic Anomalies", Ahmed & Courville, AAAI 2020
"Transfer-Based Semantic Anomaly Detection", Deecke et al., ICML 2021

These manuscripts investigate the presence of multiple classes in the normal distribution. While their focus is different (latent classes), they should be compared against in this work.

Update

The authors have addressed points raised in my original review by adding ablations and contrasting their proposed setting to existing ones. Some concerns remain around the novelty of the proposed approach. The score has been increased to reflect the newly incorporated changes.

Questions

This paper presents strong experimental results; however, it appears to target a somewhat loose definition of multi-class AD. I would suggest the authors work to improve the clarity of their manuscript. In particular:

- clarify how the proposed multi-object AD task of "unified AD" fits in or differs from existing works that assume multiple objects in the normal distribution (e.g. "semantic AD");
- clarify the novelty of the proposed transformer components, which appear highly incremental; improvements could consist in more direct comparisons (and ablations) against transformer-based competitor models.

Limitations

Limitations are discussed in Section 5; potential societal impacts (say, anomaly detection for tasks such as video surveillance) have not been discussed.
NIPS
Title
Tensor decompositions of higher-order correlations by nonlinear Hebbian plasticity

Abstract
Biological synaptic plasticity exhibits nonlinearities that are not accounted for by classic Hebbian learning rules. Here, we introduce a simple family of generalized nonlinear Hebbian learning rules. We study the computations implemented by their dynamics in the simple setting of a neuron receiving feedforward inputs. These nonlinear Hebbian rules allow a neuron to learn tensor decompositions of its higher-order input correlations. The particular input correlation decomposed and the form of the decomposition depend on the location of nonlinearities in the plasticity rule. For simple, biologically motivated parameters, the neuron learns eigenvectors of higher-order input correlation tensors. We prove that tensor eigenvectors are attractors and determine their basins of attraction. We calculate the volume of those basins, showing that the dominant eigenvector has the largest basin of attraction. We then study arbitrary learning rules and find that any learning rule that admits a finite Taylor expansion in the neural input and output also has stable equilibria at generalized eigenvectors of higher-order input correlation tensors. Nonlinearities in synaptic plasticity thus allow a neuron to encode higher-order input correlations in a simple fashion.

1 Introduction

In Hebbian learning, potentiation of the net synaptic weight between two neurons is driven by the correlation between pre- and postsynaptic activity [1]. That postulate is a cornerstone of the theory of synaptic plasticity and learning [2, 3]. In its basic form, the Hebbian model leads to runaway potentiation or depression of synapses, since the pre-post correlation increases with increasing synaptic weight [4]. That runaway potentiation can be stabilized by supplemental homeostatic plasticity dynamics [5], by weight dependence in the learning rule [6, 7], or by synaptic scaling regulating a neuron's total synaptic weight [8, 9]. In 1982, Erkki Oja observed that a linear neuron with Hebbian plasticity and synaptic scaling learns the first principal component of its inputs [10]. In 1985, Oja and Karhunen proved that this is a global attractor of the Hebbian dynamics [11]. This led to a fountain of research on unsupervised feature learning in neural networks [12, 13].

Principal component analysis (PCA) describes second-order features of a random variable. Both naturalistic stimuli and neural activity can, however, exhibit higher-order correlations [14, 15]. Canonical models of retinal and thalamic processing whiten inputs, removing pairwise features [16–22]. Beyond-pairwise features, encoded in tensors, can provide a powerful substrate for learning from data [23–27]. The basic Hebbian postulate does not take into account fundamental nonlinear aspects of biological synaptic plasticity in cortical pyramidal neurons. First, synaptic plasticity depends on beyond-pairwise activity correlations [28–33]. Second, spatially clustered and temporally coactive synapses exhibit correlated and cooperative plasticity [34–41]. There is a rich literature on computationally motivated forms of nonlinear Hebbian learning (section 3.2). Here, we will prove that these biologically motivated nonlinearities allow a neuron to learn higher-order features of its inputs. We study the dynamics of a simple family of generalized Hebbian learning rules, combined with synaptic scaling (eq. 1).
Equilibria of these learning rules are invariants of higher-order input correlation tensors. The order of input correlation (pair, triplet, etc.) depends on the pre- and postsynaptic nonlinearities of the learning rule. When the only nonlinearity in the plasticity rule is postsynaptic, the steady states are eigenvectors of higher-order input correlation tensors [42, 43]. We prove that these eigenvectors are attractors of the generalized Hebbian plasticity dynamics and characterize their basins of attraction. Then, we study further generalizations of these learning rules. We show that any plasticity model (with a finite Taylor expansion in the synaptic input, neural output, and synaptic weight) has steady states that generalize those tensor decompositions to multiple input correlations, including generalized tensor eigenvectors. We show that these generalized tensor eigenvectors are stable equilibria of the learning dynamics. Due to the complexity of the arbitrary learning rules, we are unable to fully determine their basins of attraction. We do find that these generalized tensor eigenvectors are in an attracting set for the dynamics, and characterize its basin of attraction. Finally, we conclude by discussing extensions of these results to spiking models and weight-dependent plasticity.

2 Results

Take a neuron receiving K inputs x_i(t), i ∈ [K], each filtered through a connection with synaptic weight J_i(t) to produce activity n(t). We consider synaptic plasticity where the evolution of J_i can depend nonlinearly on the postsynaptic activity n(t), the local input x_i(t), and the current synaptic weight J_i(t). We model these dependencies in a learning rule f:

J_i(t+dt) = (J_i(t) + (dt/τ) f_i(n(t), x_i(t), J_i(t))) / ||J(t) + (dt/τ) f(n(t), x(t), J(t))||,  where  f_i(t) = n^a(t) x_i^b(t) J_i^c(t).  (1)

The parameter a sets the output-dependent nonlinearity of the learning rule, b sets the input-dependent nonlinearity, and c sets its dependence on the current synaptic weight. Eq. 1 assumes a simple form for these nonlinearities; we discuss arbitrary nonlinear learning rules in section 2.3. We assume that a and b are positive integers, as in higher-order voltage or spike timing–dependent plasticity (STDP) models [44–46]. The scaling by the norm of the synaptic weight vector, ||J||, models homeostatic synaptic scaling [8–10]. Bold type indicates a vector, matrix, or tensor (depending on the variable) and regular font with lower indices indicates elements thereof. Roman type denotes a random variable (x). We assume that x(t) is drawn from a stationary distribution with finite moments of order a + b. Combined with a linear neuron, n = J^T x, and a slow learning rate, τ ≫ dt, this implies the following dynamics for J (appendix A.1):

τ J̇_i = J_i^c Σ_α µ_{i,α} (J^{⊗a})_α − J_i Σ_{j,α} J_j^{c+1} µ_{j,α} (J^{⊗a})_α.  (2)

In eq. 2, J̇_i = dJ_i/dt, α = (j_1, …, j_a) is a multi-index, and ⊗ is the vector outer product; J^{⊗a} is the a-fold outer product of the synaptic weight vector J. µ is a higher-order moment (correlation) tensor of the inputs:

µ_{i,α} = ⟨x_i^b (x^{⊗a})_α⟩_x  (3)

where ⟨·⟩_x denotes the expectation with respect to the distribution of the inputs. µ is an (a+1)-order tensor containing an (a+b)-order joint moment of x. The order of the tensor refers to its number of indices, so a vector is a first-order tensor and a matrix a second-order tensor. µ is cubical; each mode of µ has the same dimension K. In the first term of eq. 2, for example, Σ_α µ_{i,α}(J^{⊗a})_α takes the dot product of J along modes 2 through a+1 of µ.
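To make eqs. 1–3 concrete, the following minimal NumPy sketch estimates the correlation tensor µ for (a, b) = (2, 1) and integrates the averaged dynamics of eq. 2 with c = 0. This is our own illustration, not the paper's code; variable names are ours, and skewed inputs are used so that the third moments do not vanish.

    import numpy as np

    rng = np.random.default_rng(0)
    K, T = 10, 5000
    # Skewed, zero-mean inputs: independent channels with nonzero third moments.
    X = rng.exponential(1.0, size=(T, K)) - 1.0

    # mu[i, j, k] = < x_i x_j x_k >: eq. 3 with a = 2, b = 1.
    # For independent skewed inputs, mu is approximately diagonal (odeco).
    mu = np.einsum('ti,tj,tk->ijk', X, X, X) / T

    def rhs(J):
        # Eq. 2 with c = 0: tau * dJ_i/dt = (mu applied to J, J)_i - J_i * sum_j J_j (...)_j
        m = np.einsum('ijk,j,k->i', mu, J, J)   # multilinear map of mu applied to J
        return m - J * (J @ m)

    J = rng.standard_normal(K)
    J /= np.linalg.norm(J)
    dt = 1e-2                                    # time measured in units of tau
    for _ in range(20000):
        J = J + dt * rhs(J)
    # J now fluctuates near a tensor eigenvector of mu (see section 2.1) --
    # here, approximately a coordinate axis.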
2.1 Steady states of nonlinear Hebbian learning

If we take a = b = 1, then µ is the second-order correlation of x and α is just the index j. With c = 0 also, eq. 2 reduces to Oja's rule and J is guaranteed to converge to the dominant eigenvector of µ [10, 11]. We next investigate the steady states of eq. 2 for arbitrary (a, b) ∈ ℤ₊², c ∈ ℝ. J_i = 0 is a trivial steady state. At steady states of eq. 2 where J_i ≠ 0,

Σ_α µ_{i,α} (J^{⊗a})_α = λ J_i^{1−c},  where  λ(µ, J) = Σ_{j,α} J_j^{c+1} µ_{j,α} (J^{⊗a})_α,  (4)

so that J is invariant under the multilinear map of µ except for a scaling by λ and an element-wise exponentiation by 1 − c. For two parameter families (a, b, c), eq. 4 reduces to different types of tensor eigenequation [47, 42, 43]. We next briefly describe these and some of their properties.

First, if a + c = 1, we have the tensor eigenequation Σ_α µ_{i,α}(J^{⊗a})_α = λ J_i^a. Qi called (λ, J) the tensor eigenpair [42] and Lim called it the ℓ^a-norm eigenpair [43]. There are K a^{K−1} such eigenpairs [42]. If µ ≥ 0 element-wise, then it has a unique largest eigenvalue with a real, non-negative eigenvector J, analogous to the Perron–Frobenius theorem for matrices [43, 48]. If µ is weakly irreducible, that eigenvector is strictly positive [49]. In contrast to matrix eigenvectors, however, for a > 1 these tensor eigenvectors are not necessarily invariant under orthogonal transformations [42].

If c = 0, we have another variant of the tensor eigenvalue/vector equation:

Σ_α µ_{i,α} (J^{⊗a})_α = λ J_i.  (5)

Qi called such a (λ, J) an E-eigenpair [42] and Lim called it an ℓ²-eigenpair [43]. In general, a tensor may have infinitely many such eigenpairs. If the spectrum of a K-dimensional tensor of order a+1 is finite, however, there are (a^K − 1)/(a − 1) eigenvalues counted with multiplicity, and the spectrum of a symmetric tensor is finite [50, 51]. (If b = 1, µ is symmetric.) Unlike the steady states when a + c = 1, these eigenpairs are invariant under orthogonal transformations [42]. For non-negative µ, there exists a positive eigenpair [52]. It may not be unique, however, unlike the largest eigenpair for a + c = 1 (an anti-Perron–Frobenius result) [51]. In the remainder of this paper we will usually focus on parameter sets with c = 0 and use "tensor eigenvector" to refer to those of eq. 5.

2.2 Dynamics of nonlinear Hebbian learning

For the linear Hebbian rule, (a, b, c) = (1, 1, 0), Oja and Karhunen proved that the first principal component of the inputs is a global attractor of eq. 2 [11]. We thus asked whether the first tensor eigenvector is a global attractor of eq. 2 when c = 0 but (a, b) ≠ (1, 1).

We first simulated the nonlinear Hebbian dynamics. For the inputs x, we whitened 35×35 pixel image patches sampled from the Berkeley segmentation dataset (fig. 1a; [53]). For b ≠ 1, the correlation of these image patches was not symmetric (fig. 1b). The mean squared error of the canonical polyadic (CP) approximation of these tensors was higher for b = 1 than b = 2 (fig. 1d). It decreased slowly past rank ∼10, and the rank of the input correlation tensors was at least 30 (fig. 1d). The nonlinear Hebbian learning dynamics converged to an equilibrium from random initial conditions (e.g., fig. 1e, f), around which the weights fluctuated due to the finite learning timescale τ. Any equilibrium is guaranteed to be some eigenvector of the input correlation tensor µ (section 2.1).
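Assessing which eigenvector the weights reach requires computing the eigenvectors of µ; one batch tool is tensor power iteration, the offline analogue of these plasticity dynamics. A minimal sketch for a symmetric order-3 tensor follows (our own naming; for an odeco tensor this converges to a robust eigenvector, while convergence is not guaranteed in general — the paper's own comparisons use Tucker decomposition components):

    import numpy as np

    def tensor_power_iteration(mu, n_iter=500, seed=0):
        # Repeatedly apply the multilinear map of mu and renormalize;
        # fixed points satisfy eq. 5 up to the scaling by lambda.
        rng = np.random.default_rng(seed)
        J = rng.standard_normal(mu.shape[0])
        J /= np.linalg.norm(J)
        for _ in range(n_iter):
            J = np.einsum('ijk,j,k->i', mu, J, J)
            J /= np.linalg.norm(J)
        lam = np.einsum('ijk,i,j,k->', mu, J, J, J)   # the associated eigenvalue
        return lam, J

Restarting from many random initializations recovers different robust eigenvectors, mirroring the finite basins of attraction characterized next.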
For individual realizations of the weight dynamics, we computed the overlap between the final synaptic weight vector and each of the first 10 eigenvectors (components of the Tucker decomposition) of the corresponding input correlation µ [47, 54]. The dynamics most frequently converged to the first eigenvector. For a non-negligible fraction of initial conditions, however, the nonlinear Hebbian rule converged to subdominant eigenvectors (fig. 1g, h). The input correlations µ did have a unique dominant eigenvector (fig. 3a, blue), but the dynamics of eq. 2 did not always converge to it. This finding stands in contrast to the standard Hebbian rule, which must converge to the first eigenvector if it is unique [11]. While the top eigenvector of a matrix can be computed efficiently, computing the top eigenvector of a tensor is, in general, NP-hard [55].

To understand the learning dynamics further, we examined them analytically. Our main finding is that with (b, c) = (1, 0) in the generalized Hebbian rule, eigenvectors of µ are attractors of eq. 2. Contrary to the case when a = 1 (Oja's rule), the dynamics are thus not guaranteed to converge to the first eigenvector of the input correlation tensor when a > 1. The first eigenvector of µ does, however, have the largest basin of attraction.

Theorem 1. In eq. 2, take (b, c) = (1, 0). Let µ be a cubical, symmetric tensor of order a+1 that is odeco with R components:

µ = Σ_{r=1}^{R} λ_r (U_r)^{⊗(a+1)}  (6)

where U is a matrix of unit-norm orthogonal eigenvectors: U^T U = I. Let λ_i > 0 for each i ∈ [R] and λ_i ≠ λ_j for all (i, j) ∈ [R]×[R] with i ≠ j. Then for each k ∈ [R]:

1. For any odd a > 1, J = ±U_k are attracting fixed points of eq. 2 and their basin of attraction is ⋂_{i∈[R]\{k}} {J : |U_i^T J / U_k^T J| < (λ_k/λ_i)^{1/(a−1)}}. Within that region, the separatrix of +U_k and −U_k is the hyperplane orthogonal to U_k: {J : U_k^T J = 0}.

2. For any even positive a, J = U_k is an attracting fixed point of eq. 2 and its basin of attraction is {J : U_k^T J > 0} ∩ ⋂_{i∈[R]\{k}} {J : |U_i^T J / U_k^T J| < (λ_k/λ_i)^{1/(a−1)}}.

3. For any even positive a, J = 0 is a neutrally stable fixed point of eq. 2 with basin of attraction {J : Σ_{j=1}^{R} (U_j^T J)² < 1 ∧ U_k^T J < 0 ∀ k ∈ [R]}.

Remark. Each component of the orthogonal decomposition of an odeco tensor µ (eq. 5) is an eigenvector of µ. If R < (a^K − 1)/(a − 1), there are additional eigenvectors. The components of the orthogonal decomposition are the robust eigenvectors of µ: the attractors of its multilinear map [26, 56]. The non-robust eigenvectors of an odeco tensor are fixed by its robust eigenvectors and their eigenvalues [57].

The proof of theorem 1 is given in appendix A.2. To prove theorem 1, we project J onto the eigenvectors of µ and study the dynamics of the loadings v = U^T J. This leads to the discovery of a collection of unstable manifolds: each pair of axes (i, k) has an associated unstable hyperplane v_i = v_k (λ_k/λ_i)^{1/(a−1)} (and if a is odd, also the corresponding hyperplane with negative slope). These partition the phase space into the basins of attraction of the eigenvectors of µ.

For example, consider a fourth-order input correlation (corresponding to a = 3 in eq. 1) with two eigenvectors. The phase portrait of the loadings is shown in fig. 2a, with the nullclines in black and the attracting and unstable manifolds in blue. The attracting manifold is the unit sphere. There are two unstable hyperplanes that partition the phase space into the basins of attraction of (0, ±1) and (±1, 0), where the synaptic weights J are an eigenvector of µ.
For even a, only the unstable hyperplanes with positive slope survive (fig. 2b, blue line). The unit sphere (fig. 2b, blue) is attracting from any region where at least one loading is positive. Its vertices on the axes are equilibria; (v_1, v_2) = (1, 0) and (0, 1) are attractors, and the unstable hyperplane separates their basins of attraction. For the region with all loadings negative, which is the basin of attraction of the origin, noise will drive the system away from zero towards one of the eigenvector equilibria.

[Figure 2 caption, partial: a) Odd a; the unstable sets are v_2 = ±v_1 (λ_1/λ_2)^{1/(a−1)}. b) Even a (a = 2); the unstable set is v_2 = v_1 (λ_1/λ_2)^{1/(a−1)} (solid blue line). In (a, b), (λ_1, λ_2) = (3, 1). c) Phase portrait for a two-term learning rule (eq. 9); all parameters of the input correlation tensors (λ_mr, A_i) equal one. Dashed blue curve: L = {v : Σ_{m,j} λ_{mj} v_j^{a_m+1} = 0}.]

By partitioning the phase space of J into basins of attraction for eigenvectors of µ, theorem 1 also allows us to determine the volumes of those basins of attraction. The basins of attraction are open sections of ℝ^K, so we measure their volume relative to that of a large hypercube.

Corollary 1.1. Let V_k be the relative volume of the basin of attraction for J = U_k. For odd a > 1,

V_k = R^{−1} ∏_{i=1}^{R} (λ_k/λ_i)^{1/(a−1)}.  (7)

Corollary 1.2. Let V_k be the relative volume of the basin of attraction for J = U_k. For even positive a,

V_k = 2^{1−R} ( R^{−1} ∏_{i} (λ_k/λ_i)^{1/(a−1)} + (R−1)^{−1} Σ_{j≠k} ∏_{i≠j} (λ_k/λ_i)^{1/(a−1)} + (R−2)^{−1} Σ_{j,l≠k} ∏_{i≠j,l} (λ_k/λ_i)^{1/(a−1)} + … + 1 ).  (8)

The calculations for corollaries 1.1 and 1.2 are given in appendix A.2. We see that the volumes of the basins of attraction depend on the spectrum of µ, its rank R, and its order a. The result for odd a also provides a lower bound on the volume for even a. The result is simpler for odd a, so we focus our discussion here on that case. While eigenvectors of µ with small eigenvalues contribute little to the values of the input correlation µ, they can have a large impact on the basins of attraction. The volume of the basin of attraction of eigenvector k is proportional to λ_k^{R/(a−1)}. An eigenvector with eigenvalue ε scales the basins of attraction of the other eigenvectors by ε^{−1/(a−1)}. The relative volume of two eigenvectors' basins of attraction is, however, unaffected by the other eigenvalues, whatever their amplitude. With a odd, the ratio of the volumes of the basins of attraction of eigenvectors k and j is V_k/V_j = (λ_k/λ_j)^{R/(a−1)}.

We see in theorem 1 that attractors of eq. 2 are points on the unit hypersphere, S. For odd a, S is an attracting set for eq. 2. For even a, the section of S with at least one positive coordinate is an attracting set (fig. 2a, b, blue circle; see the proof of theorem 1 in appendix A.2). We thus next computed the surface area A_k of the section of S in the basin of attraction for eigenvector k (corollary 1.3 in appendix A.2). The result requires knowledge of all non-negligible eigenvalues of µ, and the ratio A_k/A_j does not exhibit the cancellation that V_k/V_j does for odd a. In simulations with natural image patch inputs and initial conditions for J chosen uniformly at random on S, we saw that the basin of attraction for U_1 was ∼3× larger than that for U_2, and that the higher eigenvectors had negligible basins of attraction (fig. 1h). In Oja's model, (a, b, c) = (1, 1, 0) in eq. 1, if the largest eigenvalue has multiplicity d > 1, then the d-sphere spanned by those codominant eigenvectors is a globally attracting equilibrium manifold for the synaptic weights.
The corresponding result for a > 1 is (for the formal statement and proof, see corollary 1.4 in appendix A.2):

Corollary 1.4 (informal). If any d robust eigenvalues of µ are equal, the d-sphere spanned by their robust eigenvectors is an attracting equilibrium manifold, and its basin of attraction is defined by each of those d eigenvectors' basin-of-attraction boundaries with the other R − d robust eigenvectors.

2.3 Arbitrary learning rules

So far we have studied phenomenological plasticity rules in the particular form of eq. 1. The neural output n, input x_i, and synaptic weight J_i were each raised to a power and then multiplied together. Changes in the strength of actual synapses are governed by complex biochemical, transcriptional, and regulatory pathways [58]. We view these as specifying some unknown function of the neural output, input, and synaptic weight, f(n, x, J). That function might not have the form of eq. 1. So, we next investigate the dynamics induced by arbitrary learning rules f. We will see that under a mild condition, any equilibrium of the plasticity dynamics has a similar form to the steady states of eq. 2. If f does not depend on J except through n, equilibria will be generalized eigenvectors of higher-order input correlations.

The Taylor expansion of f around zero is:

f_i(n(t), x_i(t), J_i(t)) = Σ_{m=1}^{∞} A_m n^{a_m}(t) x_i^{b_m}(t) J_i^{c_m}(t)  (9)

where the coefficients A_m are partial derivatives of f. We assume that there exists a finite integer N such that derivatives of order N+1 and higher are negligible compared to the lower-order derivatives. We then approximate f, truncating its expansion after those N terms. With a linear neuron, synaptic scaling, and slow learning, this implies the plasticity dynamics

τ J̇_i = Σ_m J_i^{c_m} Σ_{α_m} mµ_{i,α_m} (J^{⊗a_m})_{α_m} − J_i Σ_{j,m} Σ_{α_m} J_j^{c_m+1} mµ_{j,α_m} (J^{⊗a_m})_{α_m}  (10)

where mµ_{i,α_m} = A_m ⟨x_i^{b_m} (x^{⊗a_m})_{α_m}⟩_x. At steady states where J_i ≠ 0,

Σ_m J_i^{c_m} Σ_{α_m} mµ_{i,α_m} (J^{⊗a_m})_{α_m} = λ J_i,  where  λ(J, {mµ}) = Σ_m Σ_{j,α_m} J_j^{c_m+1} mµ_{j,α_m} (J^{⊗a_m})_{α_m}.  (11)

If each c_m = 0, this is a kind of generalized tensor eigenequation:

Σ_{m,α_m} mµ_{i,α_m} (J^{⊗a_m})_{α_m} = λ J_i  (12)

so that J is invariant under the combined action of the multilinear maps mµ (which are potentially of different orders). If a_1 = a_2 = ⋯ = a_N, then this can be simplified to a tensor eigenvector equation by summing the input correlations mµ. If different terms of the expansion of f generate different-order input correlations, however, the steady states are no longer necessarily equivalent to tensor eigenvectors. If there exists a synaptic weight vector J that is an eigenvector of each of those input correlation tensors, Σ_{α_m} mµ_{i,α_m}(J^{⊗a_m})_{α_m} = λ_m J_i for each m, then that configuration J is a steady state of the plasticity dynamics with each c_m = 0.

We next investigated whether these steady states were attractors in simulations of a learning rule with contributions from two-point and three-point correlations (a = (1, 2), b = 1, c = 0, A = (1, 1/2) in eq. 9). As before, we used whitened natural image patches for the inputs x (fig. 1a). The two- and three-point correlations of those image patches had similar first eigenvalues, but the spectrum of the two-point correlation decreased more quickly than that of the three-point correlation (fig. 3a). The first three eigenvectors of the different correlations overlapped strongly (fig. 3b). This was not due to a trivial constant offset, since the inputs were whitened. With this parameter set, the synaptic weights usually converged to the (shared) first eigenvector of the input correlations
(fig. 3c, d, e, blue). We next asked how the weights of the different input correlations in the learning rule (the parameters A_1, A_2) affected the plasticity dynamics. When the learning rule weighted the inputs' three-point correlation more strongly than the two-point correlation (A = (1/2, 1)), the dynamics converged almost always to the first eigenvector of the three-point correlation (fig. 3e, blue vs orange). Without loss of generality, we then fixed A_2 = 1 and varied the amplitude of A_1. As A_1 increased, the learning dynamics converged to equilibria increasingly aligned with the top eigenvectors of the input correlations (fig. 3f). For sufficiently negative A_1, the dynamics converged to steady states that were neither eigenvectors of the two-point input correlation nor any of the top 20 eigenvectors of the three-point input correlation (fig. 3f).

Earlier, we saw that in single-term learning rules the only attractors are robust eigenvectors of the input correlation (theorem 1). The dynamics of eq. 2 usually converged to the first eigenvector because it had the largest basin of attraction (corollaries 1.1, 1.2). Here, we saw that at least for some parameter sets, the dynamics of a multi-term generalized Hebbian rule may not converge to an input eigenvector. This suggested the existence of other attractors for the dynamics of eq. 10.

We next investigated the steady states of multi-term nonlinear Hebbian rules analytically. We focused on the case when the different input correlations generated by the learning rule all have a shared set of eigenvectors. In this case, those shared eigenvectors are all stable equilibria of eq. 10. They are not, however, the only stable equilibria. We see in a simple example that there can be equilibria that are linear combinations of those eigenvectors with all-negative weights (in fig. 2c, the fixed point in the lower left quadrant on the unit circle). In fact, any stable equilibrium that is not a shared eigenvector must be such a negative combination. Our results are summarized in the following theorem:

Theorem 2. In eq. 10, take b = 1, c = 0, a ∈ ℤ₊^N, and consider N cubical, symmetric tensors mµ, each of order a_m + 1 for m ∈ [N], that are mutually odeco into R components:

mµ = Σ_{r=1}^{R} λ_{mr} U_r ⊗ U_r ⊗ ⋯ ⊗ U_r  (13)

with U^T U = I. Let λ_{mr} ≥ 0 and Σ_m λ_{mr} > 0 for each (m, r) ∈ [N]×[R]. Let

S(J) = Σ_{i=1}^{R} (U_i^T J)²  and  L(J) = Σ_{m=1}^{N} Σ_{i=1}^{R} λ_{mi} (U_i^T J)^{a_m+1}.  (14)

Then:

1. S* = {J : S(J) = 1 ∧ L(J) > 0} is an attracting set for eq. 10, and its basin of attraction includes {J : L(J) > 0}.

2. For each k ∈ [R], J = U_k is a stable equilibrium of eq. 10.

3. For each k ∈ [R], J = −U_k is a stable equilibrium of eq. 10 if Σ_m λ_{mk} (−1)^{a_m} < 0 (and unstable if Σ_m λ_{mk} (−1)^{a_m} > 0).

4. Any other stable equilibrium must have U_k^T J ≤ 0 for each k ∈ [R].

The claims of theorem 2 are proven in appendix A.2. Similar to theorem 1, we see that the robust eigenvectors of each input correlation generated by the learning rule are stable equilibria of the learning dynamics. The complexity of eq. 10 has kept us from determining their basins of attraction. We can, however, make several guarantees. First, in a large region, the unit sphere is an attracting set for the dynamics of eq. 10. Second, the only stable fixed points are either the eigenvectors ±U_k or combinations of the eigenvectors with only nonpositive weights. This is in contrast to the situation where the learning rule has only one term; there, theorem 1 guarantees that the only attractors are eigenvectors.
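A minimal sketch of these multi-term dynamics on synthetic, mutually odeco correlations — mirroring the setting of theorem 2 rather than the image-patch experiments (our own illustration, with our own variable names):

    import numpy as np

    rng = np.random.default_rng(1)
    K, R = 6, 4
    U = np.linalg.qr(rng.standard_normal((K, K)))[0][:, :R]  # R shared orthonormal eigenvectors
    lam1, lam2 = rng.uniform(0.5, 2.0, (2, R))               # eigenvalues of the two terms

    mu2 = np.einsum('r,ir,jr->ij', lam1, U, U)               # order-2 correlation (a_1 = 1)
    mu3 = np.einsum('r,ir,jr,kr->ijk', lam2, U, U, U)        # order-3 correlation (a_2 = 2)

    def rhs(J):
        # Eq. 10 with b = 1, c = 0: sum of the two multilinear maps,
        # stabilized by the synaptic-scaling term.
        m = mu2 @ J + np.einsum('ijk,j,k->i', mu3, J, J)
        return m - J * (J @ m)

    J = rng.standard_normal(K)
    J /= np.linalg.norm(J)
    for _ in range(50000):
        J = J + 1e-3 * rhs(J)
    print(np.round(np.abs(U.T @ J), 3))  # typically ~1 on one shared eigenvector

From a random initial condition this typically converges to one of the shared eigenvectors ±U_k; initializations with all loadings negative can instead probe the nonpositive-combination equilibria allowed by item 4, when they exist.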
3 Discussion

We have analyzed biologically motivated plasticity dynamics that generalize the Oja rule. One class of these computes tensor eigenvectors. We proved that without a multiplicative weight-dependence in the plasticity, those eigenvectors are attractors of the dynamics (theorem 1, figs. 1, 2a, b). Contrary to Oja's rule, the first eigenvector of higher-order input correlations is not a unique attractor. Rather, each eigenvector k has a finite basin of attraction, the size of which is proportional to λ_k^{R/(a−1)}. If there are d codominant eigenvectors (λ_1 = λ_2 = ⋯ = λ_d), the d-sphere they span is an attracting equilibrium manifold (corollary 1.4 in appendix A.2). Furthermore, steady states of any plasticity model with a finite Taylor polynomial in the neural output and inputs are generalized eigenvectors of multiple input correlations. These steady states are stable and attracting (theorem 2, figs. 2c, 3).

3.1 Spiking neurons and weight-dependence

While biological synaptic plasticity is certainly more complex than the simple generalized Hebbian rule of eq. 1, neural activity is also more complex than the linear model n = J^T x. We examined the simple linear-nonlinear-Poisson spiking model and a generalized spike timing–dependent plasticity (STDP) rule ([45]; appendix A.3). Similar to eqs. 2 and 10, we can write the dynamical equation for J as a function of joint cumulant tensors of the input (eq. 56 in appendix A.3). These dynamics have a different structure than eqs. 2 and 10.

We focused here mainly on learning rules with no direct dependence on the synaptic weight (c = 0 in eq. 1, c_m = 0 in eq. 9). When c ≠ 0, the learning dynamics cannot be simply analyzed in terms of the loadings onto the input correlations' eigenvectors. We studied the learning dynamics with weight-dependence for two simple families of input correlations: diagonal µ and piecewise-constant rank-one µ (appendix A.4). In both cases, we found that eigenvectors of those simple input correlations were also attractors of the plasticity rule. With diagonal input correlations, sparse steady states with one nonzero synapse are always stable and attracting when a + c > 0, but if a + c ≤ 0 the synaptic weights converge to solutions where all weights have the same magnitude (fig. A.4.1). With rank-one input correlations, multiplicative weight-dependence can interfere with synaptic scaling and lead to an instability in the neuron's total synaptic amplitude (fig. A.4.2).

3.2 Related work and applications

There is a rich literature on generalized or nonlinear forms of Hebbian learning. We briefly discuss the most closely related results, to our knowledge. The family of Bienenstock, Cooper, and Munro (BCM) learning rules supplements the classic Hebbian model with a stabilizing sliding threshold for potentiation rather than synaptic scaling [59]. BCM rules balance terms driven by third- and fourth-order joint moments of the pre- and postsynaptic activity [60]. A triplet STDP model with rate-dependent depression and uncorrelated Poisson spiking has BCM dynamics [45] and can develop selective (sparse) connectivity in response to rate- or correlation-based input patterns [61]. If the input is drawn from a mixture model, then under a BCM rule the synaptic weights are guaranteed to converge to the class means of the mixture [62]. Learning rules with suitable postsynaptic nonlinearities can allow a neuron to perform independent component analysis (ICA) [63, 64]. These learning rules optimize the kurtosis of the neural response.
In contrast, we show that a simple nonlinear Hebbian model learns tensor eigenvectors of higher-order input correlations. Those higher-order input correlations can determine which features are learned by gradient-based ICA algorithms [65]. Taylor and Coombes showed that a generalization of the Oja rule to higher-order neurons can also learn higher-order correlations [66], which can allow learning of independent components [67]. In that model, however, the synaptic weights J are a higher-order tensor.

Computing the robust eigenvectors of an odeco tensor µ by power iteration has O(K^{a+1}) space complexity: it requires first computing µ. The discrete-time dynamics of eq. 1 correspond to streaming power iteration, with O(K) space complexity [68–70]. Eq. 2 defines limiting continuous-time dynamics for tensor power iteration, exposing the basins of attraction. Oja's rule inspired a generation of neural algorithms for PCA and subspace learning [12, 13]. Local learning rules for approximating higher-order correlation tensors may also prove useful, for example in neuromorphic devices [71–73].

Code availability

The code associated with figures one and three is available at https://github.com/gocker/TensorHebb.

Acknowledgments

We thank the Allen Institute founder, Paul G. Allen, for his vision, encouragement and support.
1. What is the focus of the paper in terms of learning rules and their application?
2. What are the strengths of the proposed approach, particularly in its ability to learn tensor decompositions?
3. How does the paper extend prior works on local linear learning rules, and what are the implications of this extension?
4. Are there any concerns or limitations regarding the proposed method, especially in terms of its biological plausibility?
5. Can the reviewer provide additional context or references to help understand the significance of the paper's contributions?
Summary Of The Paper Review
Summary Of The Paper

The authors study generalized, nonlinear Hebbian learning rules, and find that neurons with such plasticity and feedforward inputs can learn tensor decompositions of higher-order correlations. They further demonstrate that for simple, biologically plausible scenarios, the tensor eigenvectors are attractors. After determining the basins of attraction, the authors show that the basin of the dominant eigenvector has the largest volume. Finally, they show that any plasticity rule with a finite Taylor expansion in synaptic input, neural output, and weight variables has equilibrium states that are similar to those already found and describe higher-order input correlations.

Review

The authors have extended a large body of prior work on local linear learning rules. These Hebbian (and generalized Hebbian) rules have been shown to allow neurons to model input statistics; for example, Oja's rule has been shown to be stable and to instantiate an online algorithm for PCA. The thorough analysis encapsulated in this manuscript represents a welcome advance in this classical unsupervised learning framework.
NIPS
Title Tensor decompositions of higher-order correlations by nonlinear Hebbian plasticity Abstract Biological synaptic plasticity exhibits nonlinearities that are not accounted for by classic Hebbian learning rules. Here, we introduce a simple family of generalized nonlinear Hebbian learning rules. We study the computations implemented by their dynamics in the simple setting of a neuron receiving feedforward inputs. These nonlinear Hebbian rules allow a neuron to learn tensor decompositions of its higherorder input correlations. The particular input correlation decomposed and the form of the decomposition depend on the location of nonlinearities in the plasticity rule. For simple, biologically motivated parameters, the neuron learns eigenvectors of higher-order input correlation tensors. We prove that tensor eigenvectors are attractors and determine their basins of attraction. We calculate the volume of those basins, showing that the dominant eigenvector has the largest basin of attraction. We then study arbitrary learning rules and find that any learning rule that admits a finite Taylor expansion into the neural input and output also has stable equilibria at generalized eigenvectors of higher-order input correlation tensors. Nonlinearities in synaptic plasticity thus allow a neuron to encode higher-order input correlations in a simple fashion. 1 Introduction In Hebbian learning, potentiation of the net synaptic weight between two neurons is driven by the correlation between pre- and postsynaptic activity [1]. That postulate is a cornerstone of the theory of synaptic plasticity and learning [2, 3]. In its basic form, the Hebbian model leads to runaway potentiation or depression of synapses, since the pre-post correlation increases with increasing synaptic weight [4]. That runaway potentiation can be stabilized by supplemental homeostatic plasticity dynamics [5], by weight dependence in the learning rule [6, 7], or by synaptic scaling regulating a neuron’s total synaptic weight [8, 9]. In 1982, Erkki Oja observed that a linear neuron with Hebbian plasticity and synaptic scaling learns the first principal component of its inputs [10]. In 1985, Oja and Karhunen proved that this is a global attractor of the Hebbian dynamics [11]. This led to a fountain of research on unsupervised feature learning in neural networks [12, 13]. Principal component analysis (PCA) describes second-order features of a random variable. Both naturalistic stimuli and neural activity can, however, exhibit higher-order correlations [14, 15]. Canonical models of retinal and thalamic processing whiten inputs, removing pairwise features [16–22]. Beyond-pairwise features, encoded in tensors, can provide a powerful substrate for learning from data [23–27]. The basic Hebbian postulate does not take into account fundamental nonlinear aspects of biological synaptic plasticity in cortical pyramidal neurons. First, synaptic plasticity depends on beyond-pairwise activity correlations [28–33]. Second, spatially clustered and temporally coactive synapses exhibit 35th Conference on Neural Information Processing Systems (NeurIPS 2021). correlated and cooperative plasticity [34–41]. There is a rich literature on computationally motivated forms of nonlinear Hebbian learning (section 3.2). Here, we will prove that these biologically motivated nonlinearities allow a neuron to learn higher-order features of its inputs. We study the dynamics of a simple family of generalized Hebbian learning rules, combined with synaptic scaling (eq. 1). 
Equilibria of these learning rules are invariants of higher-order input correlation tensors. The order of input correlation (pair, triplet, etc.) depends on the pre- and postsynaptic nonlinearities of the learning rule. When the only nonlinearity in the plasticity rule is postsynaptic, the steady states are eigenvectors of higher-order input correlation tensors [42, 43]. We prove that these eigenvectors are attractors of the generalized Hebbian plasticity dynamics and characterize their basins of attraction. Then, we study further generalizations of these learning rules. We show that any plasticity model (with a finite Taylor expansion in the synaptic input, neural output, and synaptic weight) has steady states that generalize those tensor decompositions to multiple input correlations, including generalized tensor eigenvectors. We show that these generalized tensor eigenvectors are stable equilibria of the learning dynamics. Due to the complexity of the arbitrary learning rules, we are unable to fully determine their basins of attraction. We do find that these generalized tensor eigenvectors are in an attracting set for the dynamics, and characterize its basin of attraction. Finally, we conclude by discussing extensions of these results to spiking models and weight-dependent plasticity. 2 Results Take a neuron receiving K inputs xi(t), i 2 [K], each filtered through a connection with synaptic weight Ji(t) to produce activity n(t). We consider synaptic plasticity where the evolution of Ji can depend nonlinearly on the postsynaptic activity n(t), the local input xi(t), and the current synaptic weight Ji(t). We model these dependencies in a learning rule f : Ji(t+ dt) = Ji(t) + dt/⌧ fi(n(t), xi(t), Ji(t)) J(t) + dt/⌧f n(t),x(t),J(t) , where fi(t) = n a(t) xbi (t) J c i (t). (1) The parameter a sets the output-dependent nonlinearity of the learning rule, b sets the input-dependent nonlinearity, and c sets its dependence on the current synaptic weight. Eq. 1 assumes a simple form for these nonlinearities; we discuss arbitrary nonlinear learning rules in section 2.3. We assume that a and b are positive integers, as in higher-order voltage or spike timing–dependent plasticity (STDP) models [44–46]. The scaling by the norm of the synaptic weight vector, ||J ||, models homeostatic synaptic scaling [8–10]. Bold type indicates a vector, matrix, or tensor (depending on the variable) and regular font with lower indices indicates elements thereof. Roman type denotes a random variable (x). We assume that x(t) is drawn from a stationary distribution with finite moments of order a + b. Combined with a linear neuron, n = JTx, and a slow learning rate, ⌧ dt, this implies the following dynamics for J (appendix A.1): ⌧ J̇i = J c i X ↵ µi,↵(J ⌦a)↵ Ji X j,↵ Jc+1j µj,↵(J ⌦a)↵. (2) In eq. 2, J̇i = dJi/dt, ↵ = (j1, . . . , ja) is a multi-index, and ⌦ is the vector outer product; J⌦a is the a-fold outer product of the synaptic weight vector J . µ is a higher-order moment (correlation) tensor of the inputs: µi,↵ = hxbi (x ⌦a)↵ix (3) where hix denotes the expectation with respect to the distribution of the inputs. µ is an (a+ 1)-order tensor containing an (a+ b)-order joint moment of x. The order of the tensor refers to its number of indices, so a vector is a first-order tensor and a matrix a second-order tensor. µ is cubical; each mode of µ has the same dimension K. In the first term of eq. 2, for example, P ↵ µi,↵(J ⌦a)↵ takes the dot product of J along modes 2 through a+ 1 of µ. 
2.1 Steady states of nonlinear Hebbian learning If we take a = b = 1, then µ is the second-order correlation of x and ↵ is just the index j. With c = 0 also, eq. 2 reduces to Oja’s rule and J is guaranteed to converge to the dominant eigenvector of µ [10, 11]. We next investigate the steady states of eq. 2 for arbitrary (a, b) 2 Z2+, c 2 R. Ji = 0 is a trivial steady state. At steady states of eq. 2 where Ji 6= 0, X ↵ µi,↵(J ⌦a)↵ = J 1 c i , where (µ,J) = X j,↵ Jc+1j µj,↵(J ⌦a)↵, (4) so that J is invariant under the multilinear map of µ except for a scaling by and element-wise exponentiation by 1 c. For two parameter families (a, b, c), eq. 4 reduces to different types of tensor eigenequation [47, 42, 43]. We next briefly describe these and some of their properties. First, if a+c = 1, we have the tensor eigenequation P ↵ µi,↵(J ⌦a)↵ = Jai . Qi called ,J the tensor eigenpair [42] and Lim called them the `a-norm eigenpair [43]. There are KaK 1 such eigenpairs [42]. If µ 0 element-wise, then it has a unique largest eigenvalue with a real, non-negative eigenvector J , analogous to the Perron-Frobenius theorem for matrices [43, 48]. If µ is weakly irreducible, that eigenvector is strictly positive [49]. In contrast to matrix eigenvectors, however, for a > 1 these tensor eigenvectors are not necessarily invariant under orthogonal transformations [42]. If c = 0, we have another variant of tensor eigenvalue/vector equation: X ↵ µi,↵(J ⌦a)↵ = Ji (5) Qi called these ,J an E-eigenpair [42] and Lim called them the `2-eigenpair [43]. In general, a tensor may have infinitely many such eigenpairs. If the spectrum of a K-dimensional tensor of order a+ 1 is finite, however, there are (aK 1)/(a 1) eigenvalues counted with multiplicity, and the spectrum of a symmetric tensor is finite [50, 51]. (If b = 1, µ is symmetric.) Unlike the steady states when a + c = 1, these eigenpairs are invariant under orthogonal transformations [42]. For non-negative µ, there exists a positive eigenpair [52]. It may not be unique, however, unlike the largest eigenpair for a+ c = 1 (an anti-Perron-Frobenius result) [51]. In the remainder of this paper we will usually focus on parameter sets with c = 0 and use “tensor eigenvector” to refer to those of eq. 5. 2.2 Dynamics of nonlinear Hebbian learning For the linear Hebbian rule, (a, b, c) = (1, 1, 0), Oja and Karhunen proved that the first principal component of the inputs is a global attractor of eq. 2 [11]. We thus asked whether the first tensor eigenvector is a global attractor of eq. 2 when c = 0 but (a, b) 6= (1, 1). We first simulated the nonlinear Hebbian dynamics. For the inputs x, we whitened 35⇥35 pixel image patches sampled from the Berkeley segmentation dataset (fig. 1a; [53]). For b 6= 1, the correlation of these image patches was not symmetric (fig. 1b). The mean squared error of the canonical polyadic (CP) approximation of these tensors was higher for b = 1 than b = 2 (fig. 1d). It decreased slowly past rank ⇠ 10, and the rank of the input correlation tensors was at least 30 (fig. 1d). The nonlinear Hebbian learning dynamics converged to an equilibrium from random initial conditions (e.g., fig. 1e, f), around which the weights fluctuated due to the finite learning timescale ⌧ . Any equilibrium is guaranteed to be some eigenvector of the input correlation tensor µ (section 2.1). 
For individual realizations of the weight dynamics, we computed the overlap between the final synaptic weight vector and each of the first 10 eigenvectors (components of the Tucker decomposition) of the corresponding input correlation µ [47, 54]. The dynamics most frequently converged to the first eigenvector. For a non-negligible fraction of initial conditions, however, the nonlinear Hebbian rule converged to subdominant eigenvectors (fig. 1g,h). The input correlations µ did have a unique dominant eigenvector (fig. 3a, blue), but the dynamics of eq. 2 did not always converge to it. This finding stands in contrast to the standard Hebbian rule, which must converge to the first eigenvector if it is unique [11]. While the top eigenvector of a matrix can be computed efficiently, computing the top eigenvector of a tensor is, in general, NP-hard [55]. To understand the learning dynamics further, we examined them analytically. Our main finding is that with (b, c) = (1, 0) in the generalized Hebbian rule, eigenvectors of µ are attractors of eq. 2. Contrary to the case when a = 1 (Oja’s rule), the dynamics are thus not guaranteed to converge to the first eigenvector of the input correlation tensor when a > 1. The first eigenvector of µ does, however, have the largest basin of attraction. Theorem 1. In eq. 2, take (b, c) = (1, 0). Let µ be a cubical, symmetric tensor of order a+ 1 and odeco with R components: µ = RX r=1 r (Ur) ⌦a+1 (6) where U is a matrix of unit-norm orthogonal eigenvectors: UTU = I . Let i > 0 for each i 2 [R] and i 6= j 8 (i, j) 2 [R]⇥ [R] with i 6= j. Then for each k 2 [R]: 1. With any odd a > 1, J = ±Uk are attracting fixed points of eq. 2 and their basin of attrac- tion is T i2[R]\k n J : UTi J/UTk J < ( k/ i) 1/(a 1) o . Within that region, the separatrix of +Uk and Uk is the hyperplane orthogonal to UTk : {J : U T k J = 0}. 2. With any even positive a, J = Uk is an attracting fixed point of eq. 2 and its basin of attraction is J : UTk J > 0 T i2[R]\k n J : UTi J/UTk J < ( k/ i) 1/(a 1) o . 3. With any even positive a, J = 0 is a neutrally stable fixed point of eq. 2 with basin of attraction n J : PR j=1(U T j J) 2 < 1 ^UTk J < 0 8 k 2 [R] o . Remark. Each component of the orthogonal decomposition of an odeco tensor µ (eq. 5) is an eigenvector of µ. If R < (aK 1)/(a 1), there are additional eigenvectors. The components of the orthogonal decomposition are the robust eigenvectors of µ: the attractors of its multilinear map [26, 56]. The non-robust eigenvectors of an odeco tensor are fixed by its robust eigenvectors and their eigenvalues [57]. The proof of theorem 1 is given in appendix A.2. To prove theorem 1, we project J onto the eigenvectors of µ, and study the dynamics of the loadings v = UTJ . This leads to the discovery of a collection of unstable manifolds: each pair of axes (i, k) has an associated unstable hyperplane vi = vk ( k/ i) 1/(a 1) (and if a is odd, also the corresponding hyperplane with negative slope). These partition the phase space into the basins of attraction of the eigenvectors of µ. For example, consider a fourth-order input correlation (corresponding to a = 3 in eq. 1) with two eigenvectors. The phase portrait of the loadings is in fig. 2a, with the nullclines in black and attracting and unstable manifolds in blue. The attracting manifold is the unit sphere. There are two unstable hyperplanes that partition the phase space into the basins of attraction of (0,±1) and (±1, 0), where the synaptic weights J are an eigenvector of µ. 
For even a, only the unstable hyperplanes with positive slope survive (fig. 2b, blue line). The unit sphere (fig. 2b, blue) is attracting from any region where at least one loading is positive. Its vertices [1]⇥[1] are equilibria; (v1, v2) = (1, 0) and (0, 1) are attractors and the unstable hyperplane separates their basins of attraction. For the region with all loadings negative that is the basin of attraction of the origin, noise will drive the system away from zero towards one of the eigenvector equilibria. are v2 = ±v1 ( 1/ 2) 1/a 1. b) Even a (a = 2). The unstable set is v2 = v1 ( 1/ 2) 1/a 1 (solid blue line). In (a,b), ( 1, 2) = (3, 1). c) Phase portrait for a two-term learning rule (eq. 9). All parameters of the input correlation tensors ( mr, Ai) are equal to one. Dashed blue curve: L = {v : P m,j mjv am+1 j = 0}. By partitioning the phase space of J into basins of attraction for eigenvectors of µ, theorem 1 also allows us to determine the volumes of those basins of attraction. The basins of attraction are open sections of RK so we measure their volume relative to that of a large hypercube. Corollary 1.1. Let Vk be the relative volume of the basin of attraction for J = Uk. For odd a > 1, Vk = R 1 RY i=1 ✓ k i ◆1/(a 1) (7) Corollary 1.2. Let Vk be the relative volume of the basin of attraction for J = Uk. For even positive a, Vk =2 1 R R 1 Y i ✓ k i ◆1/(a 1) + (R 1) 1 X j 6=k Y i 6=j ✓ k i ◆1/(a 1) + (R 2) 1 X j,l 6=k Y i 6=j,l ✓ k i ◆1/(a 1) + . . .+ 1 ! (8) The calculations for corollaries 1.1, 1.2 are given in appendix A.2. We see that the volumes of the basins of attraction depend on the spectrum of µ, its rank R, and its order a. The result for odd a also provides a lower bound on the volume for even a. The result is simpler for odd a so we focus our discussion here on that case. While eigenvectors of µ with small eigenvalues contribute little to values of the input correlation µ, they can have a large impact on the basins of attraction. The volume of the basin of attraction of eigenvector k is proportional to R/(a 1)k . An eigenvector with eigenvalue ✏ scales the basins of attraction for the other eigenvectors by ✏ 1/a 1. The relative volume of two eigenvectors’ basins of attraction is, however, unaffected by the other eigenvalues whatever their amplitude. With a odd, the ratio of the volumes of the basins of attraction of eigenvectors k and j is Vk/Vj = ( k/ j) R/a 1. We see in theorem 1 that attractors of eq. 2 are points on the unit hypersphere, S. For odd a, S is an attracting set for eq. 2. For even a, the section of S with at least one positive coordinate is an attracting set (fig. 2a, b, blue circle; see proof of theorem 1 in appendix A.2). We thus next computed the surface area of the section of S in the basin of attraction for eigenvector k, Ak (corollary 1.3 in appendix A.2). The result requires knowledge of all non-negligible eigenvalues of µ, and the ratio Ak/Aj does not exhibit the cancellation that Vk/Vj does for odd a. We saw in simulations with natural image patch inputs and initial conditions for J chosen uniformly at random on S, the basin of attraction for UT1 was ⇠ 3⇥ larger than that for UT2 and the higher eigenvectors had negligible basins of attraction (fig. 1h). In Oja’s model, (a, b, c) = (1, 1, 0) in eq. 1, if the largest eigenvalue has multiplicity d > 1 then the d-sphere spanned by those codominant eigenvectors is a globally attracting equilibrium manifold for the synaptic weights. 
The corresponding result for a > 1 is (for the formal statement and proof, see corollary 1.4 in appendix A.2): Corollary 1.3. (Informal) If any d robust eigenvalues of µ are equal, the d-sphere spanned by their robust eigenvectors is an attracting equilibrium manifold and its basin of attraction is defined by each of those d eigenvectors’ basins of attraction boundaries with the other R d robust eigenvectors. 2.3 Arbitrary learning rules So far we have studied phenomenological plasticity rules in the particular form of eq. 1. The neural output n, input xi, and synaptic weight Ji were each raised to a power and then multiplied together. Changes in the strength of actual synapses are governed by complex biochemical, transcriptional, and regulatory pathways [58]. We view these as specifying some unknown function of the neural output, input, and synaptic weight, f(n,x,J). That function might not have the form of eq. 1. So, we next investigate the dynamics induced by arbitrary learning rules f . We see here that under a mild condition, any equilibrium of the plasticity dynamics will have a similar form as the steady states of eq. 2. If f does not depend on J except through n, equilibria will be generalized eigenvectors of higher-order input correlations. The Taylor expansion of f around zero is: fi (n(t), xi(t), Ji(t)) = 1X m=1 Am n am(t) xbmi (t) J cm i (t) (9) where the coefficients Am are partial derivates of f . We assume that there exists a finite integer N such that derivatives of order N +1 and higher are negligible compared to the lower-order derivatives. We then approximate f , truncating its expansion after those N terms. With a linear neuron, synaptic scaling, and slow learning, this implies the plasticity dynamics ⌧ J̇i = X m Jcmi X ↵m mµi,↵m(J ⌦am)↵m Ji X j,m X ↵m Jcm+1j mµj,↵m(J ⌦am)↵m (10) where mµi,↵m = Amhx bm i (x ⌦am)↵mix. At steady states where Ji 6= 0, X m Jcmi X ↵m mµi,↵m(J ⌦am)↵m = Ji, where (J , {mµ}) = X m X j,↵m Jcm+1j mµj,↵(J ⌦am)↵. (11) If each cm = 0, this is a kind of generalized tensor eigenequation: X m,↵m mµi,↵(J ⌦am)↵ = Ji (12) so that J is invariant under the combined action of the multilinear maps mµ (which are potentially of different orders). If a1 = a2 = · · · = am, then this can be simplified to a tensor eigenvector equation by summing the input correlations mµ. If different terms of the expansion of f generate different-order input correlations, however, the steady states are no longer necessarily equivalent to tensor eigenvectors. If there exists a synaptic weight vector J that is an eigenvector of each of those input correlation tensors, P ↵ mµi,↵m(J ⌦am)↵m = Ji for each m, then that configuration J is a steady state of the plasticity dynamics with each cm = 0. We next investigated whether these steady states were attractors in simulations of a learning rule with a contribution from two-point and three-point correlations (a = (1, 2), b = 1, c = 0, A = (1, 1/2) in eq. 9). As before, we used whitened natural image patches for the inputs x (fig. 1a). The two- and three-point correlations of those image patches had similar first eigenvalues, but the spectrum of the two-point correlation decreased more quickly than the three-point correlation (fig. 3a). The first three eigenvectors of the different correlations overlapped strongly (fig. 3b). This was not due to a trivial constant offset since the inputs were whitened. With this parameter set, the synaptic weights usually converged to the (shared) first eigenvector of the input correlations (fig. 
We next asked how the weights of the different input correlations in the learning rule (the parameters $A_1$, $A_2$) affected the plasticity dynamics. When the learning rule weighted the inputs' three-point correlation more strongly than the two-point correlation ($A = (1/2, 1)$), the dynamics almost always converged to the first eigenvector of the three-point correlation (fig. 3e, blue vs orange). Without loss of generality, we then fixed $A_2 = 1$ and varied the amplitude of $A_1$. As $A_1$ increased, the learning dynamics converged to equilibria increasingly aligned with the top eigenvectors of the input correlations (fig. 3f). For sufficiently negative $A_1$, the dynamics converged to steady states that were neither eigenvectors of the two-point input correlation nor any of the top 20 eigenvectors of the three-point input correlation (fig. 3f).

Earlier, we saw that in single-term learning rules, the only attractors were robust eigenvectors of the input correlation (theorem 1). The dynamics of eq. 2 usually converged to the first eigenvector because it had the largest basin of attraction (corollaries 1.1, 1.2). Here, we saw that, at least for some parameter sets, the dynamics of a multi-term generalized Hebbian rule may not converge to an input eigenvector. This suggested the existence of other attractors for the dynamics of eq. 10.

We next investigated the steady states of multi-term nonlinear Hebbian rules analytically. We focused on the case where the different input correlations generated by the learning rule all have a shared set of eigenvectors. In this case, those shared eigenvectors are all stable equilibria of eq. 10. They are not, however, the only stable equilibria. We see in a simple example that there can be equilibria that are linear combinations of those eigenvectors with all-negative weights (in fig. 2c, the fixed point in the lower-left quadrant of the unit circle). In fact, any stable equilibrium that is not a shared eigenvector must be such a negative combination. Our results are summarized in the following theorem:

Theorem 2. In eq. 10, take $b = 1$, $c = 0$, $a \in \mathbb{Z}_+^N$, and consider $N$ cubical, symmetric tensors ${}^{m}\mu$, each of order $a_m + 1$ for $m \in [N]$, that are mutually odeco into $R$ components:
$$ {}^{m}\mu = \sum_{r=1}^{R} \lambda_{mr}\, U_r \otimes U_r \otimes \cdots \otimes U_r \qquad (13)$$
with $U^T U = I$. Let $\lambda_{mr} \geq 0$ and $\sum_m \lambda_{mr} > 0$ for each $(m, r) \in [N] \times [R]$. Let
$$S(J) = \sum_{i=1}^{R} (U_i^T J)^2 \quad \text{and} \quad L(J) = \sum_{m=1}^{N} \sum_{i=1}^{R_m} \lambda_{mi} (U_i^T J)^{a_m+1}. \qquad (14)$$
Then:
1. $S^* = \{J : S(J) = 1 \wedge L(J) > 0\}$ is an attracting set for eq. 10, and its basin of attraction includes $\{J : L(J) > 0\}$.
2. For each $k \in [R]$, $J = U_k$ is a stable equilibrium of eq. 10.
3. For each $k \in [R]$, $J = -U_k$ is a stable equilibrium of eq. 10 if $\sum_m \lambda_{mk} (-1)^{a_m} < 0$ (and unstable if $\sum_m \lambda_{mk} (-1)^{a_m} > 0$).
4. Any other stable equilibrium must have $U_k^T J \leq 0$ for each $k \in [R]$.

The claims of theorem 2 are proven in appendix A.2. As in theorem 1, we see that the robust eigenvectors of each input correlation generated by the learning rule are stable equilibria of the learning dynamics. The complexity of eq. 10 has kept us from determining their basins of attraction. We can, however, make several guarantees. First, in a large region, the unit sphere is an attracting set for the dynamics of eq. 10. Second, the only stable fixed points are either the robust eigenvectors of the input correlations (up to sign) or combinations of those eigenvectors with only nonpositive weights. This is in contrast to the situation where the learning rule has only one term; there, theorem 1 guarantees that the only attractors are eigenvectors.
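To make theorem 2 concrete, one can integrate the dynamics of eq. 10 projected onto the shared eigenvectors. Substituting the odeco form of eq. 13 into eq. 10 (with $b = 1$, $c = 0$) gives loading dynamics $\tau \dot{v}_k = \sum_m \lambda_{mk} v_k^{a_m} - v_k \sum_{m,r} \lambda_{mr} v_r^{a_m+1}$. The sketch below is a toy example matching fig. 2c (all eigenvalues equal to one, forward-Euler integration, NumPy assumed; not the paper's released code); it illustrates claims 2 and 4: positive initial loadings flow to a shared eigenvector, while an all-negative start flows to a mixed equilibrium with nonpositive loadings.

```python
# Loading-space dynamics for the two-term rule of fig. 2c (theorem 2):
# tau * dv_k/dt = sum_m lam[m,k] v_k^{a_m} - v_k sum_{m,r} lam[m,r] v_r^{a_m+1},
# obtained by projecting eq. 10 onto the shared eigenvectors. Toy example with
# all eigenvalues one, tau = 1, forward-Euler integration, NumPy assumed.
import numpy as np

a = np.array([1, 2])   # orders of the two terms (two- and three-point correlations)
lam = np.ones((2, 2))  # lam[m, r]: r-th eigenvalue of the m-th correlation
dt, T = 1e-3, 40.0

def vdot(v):
    drive = (lam * v[None, :] ** a[:, None]).sum(axis=0)  # sum_m lam[m,k] v_k^{a_m}
    decay = (lam * v[None, :] ** (a[:, None] + 1)).sum()  # sum_{m,r} lam[m,r] v_r^{a_m+1}
    return drive - v * decay

for v0 in [np.array([0.6, 0.1]), np.array([0.1, 0.6]), np.array([-0.4, -0.5])]:
    v = v0.copy()
    for _ in range(int(T / dt)):
        v = v + dt * vdot(v)
    print(v0, "->", np.round(v, 3))
# Expected (per fig. 2c): the first two runs approach the shared eigenvectors
# (1, 0) and (0, 1); the all-negative start approaches the mixed equilibrium
# (-1/sqrt(2), -1/sqrt(2)), which has only nonpositive loadings (claim 4).
```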
3 Discussion

We have analyzed biologically motivated plasticity dynamics that generalize the Oja rule. One class of these computes tensor eigenvectors. We proved that without a multiplicative weight-dependence in the plasticity, those eigenvectors are attractors of the dynamics (theorem 1, figs. 1, 2a, b). Contrary to Oja's rule, the first eigenvector of higher-order input correlations is not a unique attractor. Rather, each eigenvector $k$ has a finite basin of attraction, the size of which is proportional to $\lambda_k^{R/(a-1)}$. If there are $d$ codominant eigenvectors ($\lambda_1 = \lambda_2 = \ldots = \lambda_d$), the $d$-sphere they span is an attracting equilibrium manifold (corollary 1.4 in appendix A.2). Furthermore, steady states of any plasticity model with a finite Taylor polynomial in the neural output and inputs are generalized eigenvectors of multiple input correlations. These steady states are stable and attracting (theorem 2, figs. 2c, 3).

3.1 Spiking neurons and weight-dependence

While biological synaptic plasticity is certainly more complex than the simple generalized Hebbian rule of eq. 1, neural activity is also more complex than the linear model $n = J^T x$. We examined the simple linear-nonlinear-Poisson spiking model and a generalized spike timing–dependent plasticity (STDP) rule ([45]; appendix A.3). Similar to eqs. 2 and 10, we can write the dynamical equation for $J$ as a function of joint cumulant tensors of the input (eq. 56 in appendix A.3). These dynamics have a different structure than eqs. 2 and 10.

We focused here mainly on learning rules with no direct dependence on the synaptic weight ($c = 0$ in eq. 1, $c_m = 0$ in eq. 9). When $c \neq 0$, the learning dynamics cannot be simply analyzed in terms of the loading onto the input correlations' eigenvectors. We studied the learning dynamics with weight-dependence for two simple families of input correlations: diagonal $\mu$ and piecewise-constant rank-one $\mu$ (appendix A.4). In both cases, we found that eigenvectors of those simple input correlations were also attractors of the plasticity rule. With diagonal input correlations, sparse steady states with one nonzero synapse are always stable and attracting when $a + c > 0$, but if $a + c \leq 0$, synaptic weights converge to solutions where all weights have the same magnitude (fig. A.4.1). With rank-one input correlations, multiplicative weight-dependence can interfere with synaptic scaling and lead to an instability in the neuron's total synaptic amplitude (fig. A.4.2).

3.2 Related work and applications

There is a rich literature on generalized or nonlinear forms of Hebbian learning. We briefly discuss the most closely related results, to our knowledge. The family of Bienenstock, Cooper, and Munro (BCM) learning rules supplements the classic Hebbian model with a stabilizing sliding threshold for potentiation rather than synaptic scaling [59]. BCM rules balance terms driven by third- and fourth-order joint moments of the pre- and postsynaptic activity [60]. A triplet STDP model with rate-dependent depression and uncorrelated Poisson spiking has BCM dynamics [45] and can develop selective (sparse) connectivity in response to rate- or correlation-based input patterns [61]. If the input is drawn from a mixture model, then under a BCM rule the synaptic weights are guaranteed to converge to the class means of the mixture [62]. Learning rules with suitable postsynaptic nonlinearities can allow a neuron to perform independent component analysis (ICA) [63, 64]. These learning rules optimize the kurtosis of the neural response.
In contrast, we show that a simple nonlinear Hebbian model learns tensor eigenvectors of higher-order input correlations. Those higher-order input correlations can determine which features are learned by gradient-based ICA algorithms [65]. Taylor and Coombes showed that a generalization of the Oja rule to higher-order neurons can also learn higher-order correlations [66], which can allow learning independent components [67]. In that model, however, the synaptic weights $J$ are a higher-order tensor.

Computing the robust eigenvectors of an odeco tensor $\mu$ by power iteration has $O(K^{a+1})$ space complexity: it requires first computing $\mu$. The discrete-time dynamics of eq. 1 correspond to streaming power iteration, with $O(K)$ space complexity [68–70]. Eq. 2 defines limiting continuous-time dynamics for tensor power iteration, exposing the basins of attraction. Oja's rule inspired a generation of neural algorithms for PCA and subspace learning [12, 13]. Local learning rules for approximating higher-order correlation tensors may also prove useful, for example in neuromorphic devices [71–73].

Code availability

The code associated with figures one and three is available at https://github.com/gocker/TensorHebb.

Acknowledgments

We thank the Allen Institute founder, Paul G. Allen, for his vision, encouragement and support.
1. What is the main contribution of the paper regarding nonlinear Hebbian learning rules? 2. What are the strengths and weaknesses of the proposed approach compared to prior works? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any unanswered questions or open issues related to the paper's findings? 5. How might the results be generalized or extended to other cases, and what would be the potential impact?
Summary Of The Paper Review
Summary Of The Paper

The authors analyze a broad class of nonlinear Hebbian learning rules. These rules generalize the well-studied Oja's learning rule to higher-order input correlations, which can be represented in symmetric higher-order tensors of synaptic input statistics. There is a broad modeling literature on synaptic plasticity, and it is not my primary area of research. Nonetheless, this generalization of Oja's rule appears novel to me, and the extensions outlined in this manuscript feel very natural and interesting.

Review

The authors analyze a broad class of nonlinear Hebbian learning rules. These rules generalize the well-studied Oja's learning rule to higher-order input correlations, which can be represented in symmetric higher-order tensors of synaptic input statistics. There is a broad modeling literature on synaptic plasticity, and it is not my primary area of research. Nonetheless, this generalization of Oja's rule appears novel to me, and the extensions outlined in this manuscript feel very natural and interesting. The authors point out that "any plasticity model with a finite Taylor polynomial" can be seen as a special case of their framework. However, there are few insights into this general case, and most of their analysis is confined to simpler cases that are analytically tractable. These special cases are still interesting. In the classic case of Oja's rule, the largest eigenvector is the unique/global attractor of the system. However, this is not the case for the generalized model considered here. Thus, the authors' results are somewhat pessimistic because the learning dynamics will reach different steady states depending on the initialization. This makes it more difficult to interpret the functional purpose or outcome of the learning rule, unless one is able to make additional assumptions about the initial state of the system. On the other hand, it appears that the dynamics do converge, in practice, to the top few eigenvectors when initialized randomly (see fig. 1h). This at least holds in the special cases the authors examined empirically. One of the more useful results in the paper is a proof of the relative sizes of the basins of attraction, which provides good intuition. One unanswered question I have is whether there are reasons to think the top eigenvectors of higher-order input statistics would be useful features to learn. In Oja's rule, the synaptic weights converge to the top principal component, and thus the neuron becomes tuned to detect salient (i.e. high-variance) features. The interpretation of feature detectors for higher-order statistics is less intuitive to me, though I know there has been some work done on this -- e.g., https://doi.org/10.1167/13.9.974. Overall, I think these results are very thorough and interesting, if still a bit abstract and general. If the authors could provide readers with more intuition about how higher-order features may be useful for downstream neural computation, I think it would broaden the appeal and impact of the paper. Either way, I hope to see this paper accepted.

Other comments: I attempted to verify all derivations and statements in the supplemental material and I could not find any errors. Do the dynamics in fig. 1e actually converge? The trajectories look very volatile. The fact that the learning rule does not always converge to the largest eigenvector is not surprising, since recovering the top eigenvector is NP-hard even for symmetric tensors (Hillar & Lim, 2013).
Notation is somewhat confusing and inconsistent in places. There are three different typefaces for the variable "x" in equations 1 and 2. The variable \mu_{i, \alpha} is boldfaced in equation 3, but is not boldfaced in equation 2. It would be helpful if the explanation that \alpha is a multi-index could be moved a few sentences earlier, right after equation 2. On lines 90-91, it is claimed that the first k E-eigenvectors define the best rank-k approximation of \mu in terms of a least-squares criterion. Typically a rank-k approximation is understood to mean a CP decomposition of the tensor (see Kolda & Bader, 2009), but it seems like here it is meant to be the best rank-k Tucker decomposition to approximate the tensor? This is the sense I got from reading the citation to De Lathauwer (2000), which distinguishes between a rank-R and a rank-(R_1, R_2, ..., R_n) approximation to a tensor. Can the authors clarify these points more explicitly? In the caption of Figure 2, a small typo appears: a "v2" should be "v_2". On line 553 of the supplement, a reference / citation is broken.
NIPS
1. What is the main contribution of the paper regarding nonlinear Hebbian rules? 2. How does the proposed approach enable learning tensor eigenvectors and characterize their basins of attractions? 3. Do you have any concerns about the notations and complexity of the mathematical expressions used in the paper? 4. Can you clarify the inconsistency between Equation 1 in the main text and Equation 2 in the Appendix? 5. Could you explain how to derive Equation 3 from Equation 2 in the Appendix? 6. What is the difference between J_i and the bold J_i on the right-hand side of the definition of \dot{J}_j(t)? 7. Would NeurIPS be an appropriate forum for this work, or would Neural Computation or Plos Computational Neuroscience be better suited? 8. How could the authors provide more explanations and concrete examples to illustrate the results and help readers understand what higher-order correlations the neurons learn? 9. Is the presentation of the paper clear and effective, or could it be improved? 10. Are there any minor issues in the first few steps of the paper that need to be addressed?
Summary Of The Paper Review
Summary Of The Paper

This paper shows that nonlinear Hebbian rules allow a neuron to learn tensor decompositions of the higher-order input correlation. There have been theoretical works on nonlinear Hebbian learning rules as well as triplet STDP and BCM learning rules, but I believe this might be the first that systematically and mathematically showed that such rules enable the learning of tensor eigenvectors and characterized their basins of attraction.

Review

While I understand the higher-order bit, I found the paper too dense and hard to follow. I found the math notation in the main text complex, perhaps unnecessarily complex, and hard to follow. I looked into the Appendix for clarification but found Eq. 1 in the main text and Eq. 2 in the Appendix to be inconsistent -- was one of them wrong? I also could not derive Eq. 3 from Eq. 2 in the Appendix. My puzzle is that even when expanding the norm of J to first order in dt, the first-order expansion still depends on the norm of the vector J. In contrast, Eq. 3 in the Appendix doesn't depend on the norm of the vector J, but only depends on each element J_j. The definition of \dot{J}_j(t) right after Eq. 4 in the Appendix is also unclear -- what is the difference between J_i and the bold J_i on the right-hand side of the definition? While they might be minor issues, these issues in the first few steps made it sufficiently frustrating that I did not press on to evaluate the details. In any case, it is unreasonable for the reviewers to plow through the 20 pages of math in the Appendix.

While I believe the work is probably solid and the contribution fundamental, I wonder if NeurIPS is the appropriate forum. Maybe Neural Computation or Plos Computational Neuroscience would be a better forum. It would be helpful to provide more explanations (e.g., explaining that \mu in Eq. 3 is the input correlation tensor) and perhaps focus on one or two simple concrete examples to illustrate the results and show what higher-order correlations the neurons actually learn in those examples. Perhaps something like the long contours in Lawlor and Zucker's mixture model, or perhaps corners or curvatures as in Olshausen's overcomplete sparse coding? Perhaps they are trying to do this with Figure 1c? But more explanations are necessary. I don't enjoy the presentation of the paper, but I am willing to give it a high score because I do think the contribution is potentially important and am willing to give it the benefit of the doubt for now.
NIPS
Title Tensor decompositions of higher-order correlations by nonlinear Hebbian plasticity Abstract Biological synaptic plasticity exhibits nonlinearities that are not accounted for by classic Hebbian learning rules. Here, we introduce a simple family of generalized nonlinear Hebbian learning rules. We study the computations implemented by their dynamics in the simple setting of a neuron receiving feedforward inputs. These nonlinear Hebbian rules allow a neuron to learn tensor decompositions of its higherorder input correlations. The particular input correlation decomposed and the form of the decomposition depend on the location of nonlinearities in the plasticity rule. For simple, biologically motivated parameters, the neuron learns eigenvectors of higher-order input correlation tensors. We prove that tensor eigenvectors are attractors and determine their basins of attraction. We calculate the volume of those basins, showing that the dominant eigenvector has the largest basin of attraction. We then study arbitrary learning rules and find that any learning rule that admits a finite Taylor expansion into the neural input and output also has stable equilibria at generalized eigenvectors of higher-order input correlation tensors. Nonlinearities in synaptic plasticity thus allow a neuron to encode higher-order input correlations in a simple fashion. 1 Introduction In Hebbian learning, potentiation of the net synaptic weight between two neurons is driven by the correlation between pre- and postsynaptic activity [1]. That postulate is a cornerstone of the theory of synaptic plasticity and learning [2, 3]. In its basic form, the Hebbian model leads to runaway potentiation or depression of synapses, since the pre-post correlation increases with increasing synaptic weight [4]. That runaway potentiation can be stabilized by supplemental homeostatic plasticity dynamics [5], by weight dependence in the learning rule [6, 7], or by synaptic scaling regulating a neuron’s total synaptic weight [8, 9]. In 1982, Erkki Oja observed that a linear neuron with Hebbian plasticity and synaptic scaling learns the first principal component of its inputs [10]. In 1985, Oja and Karhunen proved that this is a global attractor of the Hebbian dynamics [11]. This led to a fountain of research on unsupervised feature learning in neural networks [12, 13]. Principal component analysis (PCA) describes second-order features of a random variable. Both naturalistic stimuli and neural activity can, however, exhibit higher-order correlations [14, 15]. Canonical models of retinal and thalamic processing whiten inputs, removing pairwise features [16–22]. Beyond-pairwise features, encoded in tensors, can provide a powerful substrate for learning from data [23–27]. The basic Hebbian postulate does not take into account fundamental nonlinear aspects of biological synaptic plasticity in cortical pyramidal neurons. First, synaptic plasticity depends on beyond-pairwise activity correlations [28–33]. Second, spatially clustered and temporally coactive synapses exhibit 35th Conference on Neural Information Processing Systems (NeurIPS 2021). correlated and cooperative plasticity [34–41]. There is a rich literature on computationally motivated forms of nonlinear Hebbian learning (section 3.2). Here, we will prove that these biologically motivated nonlinearities allow a neuron to learn higher-order features of its inputs. We study the dynamics of a simple family of generalized Hebbian learning rules, combined with synaptic scaling (eq. 1). 
Equilibria of these learning rules are invariants of higher-order input correlation tensors. The order of input correlation (pair, triplet, etc.) depends on the pre- and postsynaptic nonlinearities of the learning rule. When the only nonlinearity in the plasticity rule is postsynaptic, the steady states are eigenvectors of higher-order input correlation tensors [42, 43]. We prove that these eigenvectors are attractors of the generalized Hebbian plasticity dynamics and characterize their basins of attraction. Then, we study further generalizations of these learning rules. We show that any plasticity model (with a finite Taylor expansion in the synaptic input, neural output, and synaptic weight) has steady states that generalize those tensor decompositions to multiple input correlations, including generalized tensor eigenvectors. We show that these generalized tensor eigenvectors are stable equilibria of the learning dynamics. Due to the complexity of the arbitrary learning rules, we are unable to fully determine their basins of attraction. We do find that these generalized tensor eigenvectors are in an attracting set for the dynamics, and characterize its basin of attraction. Finally, we conclude by discussing extensions of these results to spiking models and weight-dependent plasticity. 2 Results Take a neuron receiving K inputs xi(t), i 2 [K], each filtered through a connection with synaptic weight Ji(t) to produce activity n(t). We consider synaptic plasticity where the evolution of Ji can depend nonlinearly on the postsynaptic activity n(t), the local input xi(t), and the current synaptic weight Ji(t). We model these dependencies in a learning rule f : Ji(t+ dt) = Ji(t) + dt/⌧ fi(n(t), xi(t), Ji(t)) J(t) + dt/⌧f n(t),x(t),J(t) , where fi(t) = n a(t) xbi (t) J c i (t). (1) The parameter a sets the output-dependent nonlinearity of the learning rule, b sets the input-dependent nonlinearity, and c sets its dependence on the current synaptic weight. Eq. 1 assumes a simple form for these nonlinearities; we discuss arbitrary nonlinear learning rules in section 2.3. We assume that a and b are positive integers, as in higher-order voltage or spike timing–dependent plasticity (STDP) models [44–46]. The scaling by the norm of the synaptic weight vector, ||J ||, models homeostatic synaptic scaling [8–10]. Bold type indicates a vector, matrix, or tensor (depending on the variable) and regular font with lower indices indicates elements thereof. Roman type denotes a random variable (x). We assume that x(t) is drawn from a stationary distribution with finite moments of order a + b. Combined with a linear neuron, n = JTx, and a slow learning rate, ⌧ dt, this implies the following dynamics for J (appendix A.1): ⌧ J̇i = J c i X ↵ µi,↵(J ⌦a)↵ Ji X j,↵ Jc+1j µj,↵(J ⌦a)↵. (2) In eq. 2, J̇i = dJi/dt, ↵ = (j1, . . . , ja) is a multi-index, and ⌦ is the vector outer product; J⌦a is the a-fold outer product of the synaptic weight vector J . µ is a higher-order moment (correlation) tensor of the inputs: µi,↵ = hxbi (x ⌦a)↵ix (3) where hix denotes the expectation with respect to the distribution of the inputs. µ is an (a+ 1)-order tensor containing an (a+ b)-order joint moment of x. The order of the tensor refers to its number of indices, so a vector is a first-order tensor and a matrix a second-order tensor. µ is cubical; each mode of µ has the same dimension K. In the first term of eq. 2, for example, P ↵ µi,↵(J ⌦a)↵ takes the dot product of J along modes 2 through a+ 1 of µ. 
2.1 Steady states of nonlinear Hebbian learning If we take a = b = 1, then µ is the second-order correlation of x and ↵ is just the index j. With c = 0 also, eq. 2 reduces to Oja’s rule and J is guaranteed to converge to the dominant eigenvector of µ [10, 11]. We next investigate the steady states of eq. 2 for arbitrary (a, b) 2 Z2+, c 2 R. Ji = 0 is a trivial steady state. At steady states of eq. 2 where Ji 6= 0, X ↵ µi,↵(J ⌦a)↵ = J 1 c i , where (µ,J) = X j,↵ Jc+1j µj,↵(J ⌦a)↵, (4) so that J is invariant under the multilinear map of µ except for a scaling by and element-wise exponentiation by 1 c. For two parameter families (a, b, c), eq. 4 reduces to different types of tensor eigenequation [47, 42, 43]. We next briefly describe these and some of their properties. First, if a+c = 1, we have the tensor eigenequation P ↵ µi,↵(J ⌦a)↵ = Jai . Qi called ,J the tensor eigenpair [42] and Lim called them the `a-norm eigenpair [43]. There are KaK 1 such eigenpairs [42]. If µ 0 element-wise, then it has a unique largest eigenvalue with a real, non-negative eigenvector J , analogous to the Perron-Frobenius theorem for matrices [43, 48]. If µ is weakly irreducible, that eigenvector is strictly positive [49]. In contrast to matrix eigenvectors, however, for a > 1 these tensor eigenvectors are not necessarily invariant under orthogonal transformations [42]. If c = 0, we have another variant of tensor eigenvalue/vector equation: X ↵ µi,↵(J ⌦a)↵ = Ji (5) Qi called these ,J an E-eigenpair [42] and Lim called them the `2-eigenpair [43]. In general, a tensor may have infinitely many such eigenpairs. If the spectrum of a K-dimensional tensor of order a+ 1 is finite, however, there are (aK 1)/(a 1) eigenvalues counted with multiplicity, and the spectrum of a symmetric tensor is finite [50, 51]. (If b = 1, µ is symmetric.) Unlike the steady states when a + c = 1, these eigenpairs are invariant under orthogonal transformations [42]. For non-negative µ, there exists a positive eigenpair [52]. It may not be unique, however, unlike the largest eigenpair for a+ c = 1 (an anti-Perron-Frobenius result) [51]. In the remainder of this paper we will usually focus on parameter sets with c = 0 and use “tensor eigenvector” to refer to those of eq. 5. 2.2 Dynamics of nonlinear Hebbian learning For the linear Hebbian rule, (a, b, c) = (1, 1, 0), Oja and Karhunen proved that the first principal component of the inputs is a global attractor of eq. 2 [11]. We thus asked whether the first tensor eigenvector is a global attractor of eq. 2 when c = 0 but (a, b) 6= (1, 1). We first simulated the nonlinear Hebbian dynamics. For the inputs x, we whitened 35⇥35 pixel image patches sampled from the Berkeley segmentation dataset (fig. 1a; [53]). For b 6= 1, the correlation of these image patches was not symmetric (fig. 1b). The mean squared error of the canonical polyadic (CP) approximation of these tensors was higher for b = 1 than b = 2 (fig. 1d). It decreased slowly past rank ⇠ 10, and the rank of the input correlation tensors was at least 30 (fig. 1d). The nonlinear Hebbian learning dynamics converged to an equilibrium from random initial conditions (e.g., fig. 1e, f), around which the weights fluctuated due to the finite learning timescale ⌧ . Any equilibrium is guaranteed to be some eigenvector of the input correlation tensor µ (section 2.1). 
2.2 Dynamics of nonlinear Hebbian learning

For the linear Hebbian rule, (a, b, c) = (1, 1, 0), Oja and Karhunen proved that the first principal component of the inputs is a global attractor of eq. 2 [11]. We thus asked whether the first tensor eigenvector is a global attractor of eq. 2 when c = 0 but (a, b) ≠ (1, 1). We first simulated the nonlinear Hebbian dynamics. For the inputs x, we whitened 35×35 pixel image patches sampled from the Berkeley segmentation dataset (fig. 1a; [53]). For b ≠ 1, the correlation of these image patches was not symmetric (fig. 1b). The mean squared error of the canonical polyadic (CP) approximation of these tensors was higher for b = 1 than b = 2 (fig. 1d). It decreased slowly past rank ∼10, and the rank of the input correlation tensors was at least 30 (fig. 1d). The nonlinear Hebbian learning dynamics converged to an equilibrium from random initial conditions (e.g., fig. 1e, f), around which the weights fluctuated due to the finite learning timescale τ. Any equilibrium is guaranteed to be some eigenvector of the input correlation tensor μ (section 2.1).

For individual realizations of the weight dynamics, we computed the overlap between the final synaptic weight vector and each of the first 10 eigenvectors (components of the Tucker decomposition) of the corresponding input correlation μ [47, 54]. The dynamics most frequently converged to the first eigenvector. For a non-negligible fraction of initial conditions, however, the nonlinear Hebbian rule converged to subdominant eigenvectors (fig. 1g, h). The input correlations μ did have a unique dominant eigenvector (fig. 3a, blue), but the dynamics of eq. 2 did not always converge to it. This finding stands in contrast to the standard Hebbian rule, which must converge to the first eigenvector if it is unique [11]. While the top eigenvector of a matrix can be computed efficiently, computing the top eigenvector of a tensor is, in general, NP-hard [55].

To understand the learning dynamics further, we examined them analytically. Our main finding is that with (b, c) = (1, 0) in the generalized Hebbian rule, eigenvectors of μ are attractors of eq. 2. Contrary to the case when a = 1 (Oja's rule), the dynamics are thus not guaranteed to converge to the first eigenvector of the input correlation tensor when a > 1. The first eigenvector of μ does, however, have the largest basin of attraction.

Theorem 1. In eq. 2, take (b, c) = (1, 0). Let μ be a cubical, symmetric tensor of order a + 1 that is odeco with R components:

μ = Σ_{r=1}^R λ_r (U_r)^{⊗(a+1)}, (6)

where U is a matrix of unit-norm orthogonal eigenvectors: U^T U = I. Let λ_i > 0 for each i ∈ [R] and λ_i ≠ λ_j ∀ (i, j) ∈ [R] × [R] with i ≠ j. Then for each k ∈ [R]:

1. With any odd a > 1, J = ±U_k are attracting fixed points of eq. 2 and their basin of attraction is ∩_{i ∈ [R]\k} { J : |U_i^T J / U_k^T J| < (λ_k/λ_i)^{1/(a−1)} }. Within that region, the separatrix of +U_k and −U_k is the hyperplane orthogonal to U_k: { J : U_k^T J = 0 }.

2. With any even positive a, J = U_k is an attracting fixed point of eq. 2 and its basin of attraction is { J : U_k^T J > 0 } ∩ ∩_{i ∈ [R]\k} { J : U_i^T J / U_k^T J < (λ_k/λ_i)^{1/(a−1)} }.

3. With any even positive a, J = 0 is a neutrally stable fixed point of eq. 2 with basin of attraction { J : Σ_{j=1}^R (U_j^T J)^2 < 1 ∧ U_k^T J < 0 ∀ k ∈ [R] }.

Remark. Each component of the orthogonal decomposition of an odeco tensor μ (eq. 6) is an eigenvector of μ. If R < (a^K − 1)/(a − 1), there are additional eigenvectors. The components of the orthogonal decomposition are the robust eigenvectors of μ: the attractors of its multilinear map [26, 56]. The non-robust eigenvectors of an odeco tensor are fixed by its robust eigenvectors and their eigenvalues [57].

The proof of theorem 1 is given in appendix A.2. To prove theorem 1, we project J onto the eigenvectors of μ and study the dynamics of the loadings v = U^T J. This leads to the discovery of a collection of unstable manifolds: each pair of axes (i, k) has an associated unstable hyperplane v_i = v_k (λ_k/λ_i)^{1/(a−1)} (and, if a is odd, also the corresponding hyperplane with negative slope). These partition the phase space into the basins of attraction of the eigenvectors of μ.
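The setting of theorem 1 is easy to reproduce numerically. The sketch below builds an odeco fourth-order tensor (a = 3), integrates eq. 2 with Euler steps, and compares the eigenvector reached against the basin predicted from the initial loadings. The dimensions, spectrum, step size, and integration length are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
K, a = 4, 3
lam = np.array([3.0, 2.0, 1.0, 0.5])              # robust eigenvalues
U, _ = np.linalg.qr(rng.standard_normal((K, K)))  # orthonormal eigenvectors
mu = sum(l * np.einsum('i,j,k,l->ijkl', u, u, u, u)
         for l, u in zip(lam, U.T))               # odeco tensor (eq. 6)

def Jdot(J):
    m = np.einsum('ijkl,j,k,l->i', mu, J, J, J)   # multilinear map of mu
    return m - J * (J @ m)                        # eq. 2 with c = 0

J = rng.standard_normal(K)
J /= np.linalg.norm(J)
v0 = U.T @ J                                      # initial loadings
for _ in range(20_000):
    J = J + 1e-3 * Jdot(J)

k = int(np.argmax(np.abs(U.T @ J)))
in_basin = all(abs(v0[i] / v0[k]) < (lam[k] / lam[i]) ** (1 / (a - 1))
               for i in range(K) if i != k)
print("converged to eigenvector", k, "| initial condition in its basin:", in_basin)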
For example, consider a fourth-order input correlation (corresponding to a = 3 in eq. 1) with two eigenvectors. The phase portrait of the loadings is in fig. 2a, with the nullclines in black and the attracting and unstable manifolds in blue. The attracting manifold is the unit sphere. There are two unstable hyperplanes that partition the phase space into the basins of attraction of (0, ±1) and (±1, 0), where the synaptic weights J are an eigenvector of μ.

For even a, only the unstable hyperplanes with positive slope survive (fig. 2b, blue line). The unit sphere (fig. 2b, blue) is attracting from any region where at least one loading is positive. Its vertices (v_1, v_2) ∈ {(±1, 0), (0, ±1)} are equilibria; (v_1, v_2) = (1, 0) and (0, 1) are attractors and the unstable hyperplane separates their basins of attraction. For the region with all loadings negative, which is the basin of attraction of the origin, noise will drive the system away from zero towards one of the eigenvector equilibria.

Figure 2: Phase portraits of the loading dynamics. a) Odd a (a = 3); the unstable sets are v_2 = ±v_1 (λ_1/λ_2)^{1/(a−1)}. b) Even a (a = 2); the unstable set is v_2 = v_1 (λ_1/λ_2)^{1/(a−1)} (solid blue line). In (a, b), (λ_1, λ_2) = (3, 1). c) Phase portrait for a two-term learning rule (eq. 9). All parameters of the input correlation tensors (λ_{mr}, A_i) are equal to one. Dashed blue curve: L = {v : Σ_{m,j} λ_{mj} v_j^{a_m+1} = 0}.

By partitioning the phase space of J into basins of attraction for eigenvectors of μ, theorem 1 also allows us to determine the volumes of those basins of attraction. The basins of attraction are open sections of R^K, so we measure their volume relative to that of a large hypercube.

Corollary 1.1. Let V_k be the relative volume of the basin of attraction for J = U_k. For odd a > 1,

V_k = R^{−1} ∏_{i=1}^R (λ_k/λ_i)^{1/(a−1)}. (7)

Corollary 1.2. Let V_k be the relative volume of the basin of attraction for J = U_k. For even positive a,

V_k = 2^{1−R} ( R^{−1} ∏_i (λ_k/λ_i)^{1/(a−1)} + (R−1)^{−1} Σ_{j≠k} ∏_{i≠j} (λ_k/λ_i)^{1/(a−1)} + (R−2)^{−1} Σ_{j,l≠k} ∏_{i≠j,l} (λ_k/λ_i)^{1/(a−1)} + . . . + 1 ). (8)

The calculations for corollaries 1.1 and 1.2 are given in appendix A.2. We see that the volumes of the basins of attraction depend on the spectrum of μ, its rank R, and its order a. The result for odd a also provides a lower bound on the volume for even a. The result is simpler for odd a, so we focus our discussion here on that case. While eigenvectors of μ with small eigenvalues contribute little to the values of the input correlation μ, they can have a large impact on the basins of attraction. The volume of the basin of attraction of eigenvector k is proportional to λ_k^{R/(a−1)}. An eigenvector with eigenvalue ε scales the basins of attraction for the other eigenvectors by ε^{−1/(a−1)}. The relative volume of two eigenvectors' basins of attraction is, however, unaffected by the other eigenvalues, whatever their amplitude. With a odd, the ratio of the volumes of the basins of attraction of eigenvectors k and j is V_k/V_j = (λ_k/λ_j)^{R/(a−1)}.

We see in theorem 1 that the attractors of eq. 2 are points on the unit hypersphere, S. For odd a, S is an attracting set for eq. 2. For even a, the section of S with at least one positive coordinate is an attracting set (fig. 2a, b, blue circle; see the proof of theorem 1 in appendix A.2). We thus next computed the surface area of the section of S in the basin of attraction for eigenvector k, A_k (corollary 1.3 in appendix A.2). The result requires knowledge of all non-negligible eigenvalues of μ, and the ratio A_k/A_j does not exhibit the cancellation that V_k/V_j does for odd a. We saw in simulations with natural image patch inputs and initial conditions for J chosen uniformly at random on S that the basin of attraction for U_1 was ∼3× larger than that for U_2, and the higher eigenvectors had negligible basins of attraction (fig. 1h).
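Corollary 1.1 is simple to evaluate numerically. A short numpy sketch with an assumed example spectrum:

import numpy as np

lam = np.array([3.0, 2.0, 1.0, 0.5])   # robust eigenvalues of mu (assumed)
R, a = len(lam), 3                      # odd a
V = np.array([np.prod((lk / lam) ** (1 / (a - 1))) / R for lk in lam])
print(V / V.sum())                      # relative basin volumes, normalized
print(V[0] / V[1], (lam[0] / lam[1]) ** (R / (a - 1)))  # equal, per the text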
In Oja's model, (a, b, c) = (1, 1, 0) in eq. 1, if the largest eigenvalue has multiplicity d > 1, then the d-sphere spanned by those codominant eigenvectors is a globally attracting equilibrium manifold for the synaptic weights. The corresponding result for a > 1 is (for the formal statement and proof, see corollary 1.4 in appendix A.2):

Corollary 1.4. (Informal) If any d robust eigenvalues of μ are equal, the d-sphere spanned by their robust eigenvectors is an attracting equilibrium manifold, and its basin of attraction is defined by each of those d eigenvectors' basin-of-attraction boundaries with the other R − d robust eigenvectors.

2.3 Arbitrary learning rules

So far we have studied phenomenological plasticity rules in the particular form of eq. 1. The neural output n, input x_i, and synaptic weight J_i were each raised to a power and then multiplied together. Changes in the strength of actual synapses are governed by complex biochemical, transcriptional, and regulatory pathways [58]. We view these as specifying some unknown function of the neural output, input, and synaptic weight, f(n, x, J). That function might not have the form of eq. 1. So, we next investigate the dynamics induced by arbitrary learning rules f. We see here that, under a mild condition, any equilibrium of the plasticity dynamics will have a similar form to the steady states of eq. 2. If f does not depend on J except through n, equilibria will be generalized eigenvectors of higher-order input correlations.

The Taylor expansion of f around zero is:

f_i(n(t), x_i(t), J_i(t)) = Σ_{m=1}^∞ A_m n^{a_m}(t) x_i^{b_m}(t) J_i^{c_m}(t), (9)

where the coefficients A_m are partial derivatives of f. We assume that there exists a finite integer N such that derivatives of order N + 1 and higher are negligible compared to the lower-order derivatives. We then approximate f, truncating its expansion after those N terms. With a linear neuron, synaptic scaling, and slow learning, this implies the plasticity dynamics

τ J̇_i = Σ_m J_i^{c_m} Σ_{α_m} ^mμ_{i,α_m} (J^{⊗a_m})_{α_m} − J_i Σ_{j,m} Σ_{α_m} J_j^{c_m+1} ^mμ_{j,α_m} (J^{⊗a_m})_{α_m}, (10)

where ^mμ_{i,α_m} = A_m ⟨ x_i^{b_m} (x^{⊗a_m})_{α_m} ⟩_x. At steady states where J_i ≠ 0,

Σ_m J_i^{c_m} Σ_{α_m} ^mμ_{i,α_m} (J^{⊗a_m})_{α_m} = λ J_i, where λ(J, {^mμ}) = Σ_m Σ_{j,α_m} J_j^{c_m+1} ^mμ_{j,α_m} (J^{⊗a_m})_{α_m}. (11)

If each c_m = 0, this is a kind of generalized tensor eigenequation:

Σ_{m,α_m} ^mμ_{i,α_m} (J^{⊗a_m})_{α_m} = λ J_i, (12)

so that J is invariant under the combined action of the multilinear maps ^mμ (which are potentially of different orders). If a_1 = a_2 = · · · = a_N, then this can be simplified to a tensor eigenvector equation by summing the input correlations ^mμ. If different terms of the expansion of f generate different-order input correlations, however, the steady states are no longer necessarily equivalent to tensor eigenvectors. If there exists a synaptic weight vector J that is an eigenvector of each of those input correlation tensors, Σ_{α_m} ^mμ_{i,α_m} (J^{⊗a_m})_{α_m} = λ_m J_i for each m, then that configuration J is a steady state of the plasticity dynamics with each c_m = 0.
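Before turning to the image-patch simulations described next, the following numpy sketch runs a two-term rule of this kind (a = (1, 2), b = 1, c = 0) online, as in eq. 1. Centered exponential inputs stand in for the whitened image patches so that the three-point correlation is nonzero; the input distribution and all parameter values are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(3)
K, A1, A2 = 10, 1.0, 0.5                # term weights A = (1, 1/2)
tau, dt = 200.0, 1.0
J = rng.standard_normal(K)
J /= np.linalg.norm(J)
for _ in range(200_000):
    x = rng.exponential(1.0, K) - 1.0   # mean zero, skewed: <x_i^3> = 2
    n = J @ x
    f = A1 * n * x + A2 * n**2 * x      # two terms: a_1 = 1, a_2 = 2
    J = J + (dt / tau) * f
    J /= np.linalg.norm(J)              # synaptic scaling
# for these iid inputs the two-point term is isotropic, while the diagonal
# three-point correlation favors sparse, axis-aligned weight vectors
print(np.round(J, 2))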
We next investigated whether these steady states were attractors in simulations of a learning rule with a contribution from two-point and three-point correlations (a = (1, 2), b = 1, c = 0, A = (1, 1/2) in eq. 9). As before, we used whitened natural image patches for the inputs x (fig. 1a). The two- and three-point correlations of those image patches had similar first eigenvalues, but the spectrum of the two-point correlation decreased more quickly than that of the three-point correlation (fig. 3a). The first three eigenvectors of the different correlations overlapped strongly (fig. 3b). This was not due to a trivial constant offset, since the inputs were whitened. With this parameter set, the synaptic weights usually converged to the (shared) first eigenvector of the input correlations (fig. 3c, d, e (blue)). We next asked how the weights of the different input correlations in the learning rule (the parameters A_1, A_2) affected the plasticity dynamics. When the learning rule weighted the inputs' three-point correlation more strongly than the two-point correlation (A = (1/2, 1)), the dynamics converged almost always to the first eigenvector of the three-point correlation (fig. 3e, blue vs orange). Without loss of generality, we then fixed A_2 = 1 and varied the amplitude of A_1. As A_1 increased, the learning dynamics converged to equilibria increasingly aligned with the top eigenvectors of the input correlations (fig. 3f). For sufficiently negative A_1, the dynamics converged to steady states that were neither eigenvectors of the two-point input correlation nor any of the top 20 eigenvectors of the three-point input correlation (fig. 3f).

Earlier, we saw that in single-term learning rules, the only attractors were robust eigenvectors of the input correlation (theorem 1). The dynamics of eq. 2 usually converged to the first eigenvector because it had the largest basin of attraction (corollaries 1.1, 1.2). Here, we saw that, at least for some parameter sets, the dynamics of a multi-term generalized Hebbian rule may not converge to an input eigenvector. This suggested the existence of other attractors for the dynamics of eq. 10.

We next investigated the steady states of multi-term nonlinear Hebbian rules analytically. We focused on the case when the different input correlations generated by the learning rule all have a shared set of eigenvectors. In this case, those shared eigenvectors are all stable equilibria of eq. 10. They are not, however, the only stable equilibria. We see in a simple example that there can be equilibria that are linear combinations of those eigenvectors with all negative weights (in fig. 2c, the fixed point in the lower left quadrant on the unit circle). In fact, any stable equilibrium that is not a shared eigenvector must be such a negative combination. Our results are summarized in the following theorem:

Theorem 2. In eq. 10, take b = 1, c = 0, a ∈ Z_+^N, and consider N cubical, symmetric tensors ^mμ, each of order a_m + 1 for m ∈ [N], that are mutually odeco into R components:

^mμ = Σ_{r=1}^R λ_{mr} U_r ⊗ U_r ⊗ · · · ⊗ U_r, (13)

with U^T U = I. Let λ_{mr} ≥ 0 and Σ_m λ_{mr} > 0 for each (m, r) ∈ [N] × [R]. Let

S(J) = Σ_{i=1}^R (U_i^T J)^2 and L(J) = Σ_{m=1}^N Σ_{i=1}^{R_m} λ_{mi} (U_i^T J)^{a_m+1}. (14)

Then:

1. S* = { J : S(J) = 1 ∧ L(J) > 0 } is an attracting set for eq. 10, and its basin of attraction includes { J : L(J) > 0 }.

2. For each k ∈ [R], J = U_k is a stable equilibrium of eq. 10.

3. For each k ∈ [R], J = −U_k is a stable equilibrium of eq. 10 if Σ_m λ_{mk} (−1)^{a_m} < 0 (and unstable if Σ_m λ_{mk} (−1)^{a_m} > 0).

4. Any other stable equilibrium must have U_k^T J ≤ 0 for each k ∈ [R].

The claims of theorem 2 are proven in appendix A.2. Similar to theorem 1, we see that the robust eigenvectors of each input correlation generated by the learning rule are stable equilibria of the learning dynamics. The complexity of eq. 10 has kept us from determining their basins of attraction. We can, however, make several guarantees. First, in a large region, the unit sphere is an attracting set for the dynamics of eq. 10. Second, the only stable fixed points are either eigenvectors of μ (up to sign) or combinations of the eigenvectors of μ with only nonpositive weights. This is in contrast to the situation where the learning rule has only one term; there, theorem 1 guarantees that the only attractors are eigenvectors.
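The sign test in claim 3 of theorem 2 is easy to apply. A small sketch with assumed eigenvalues for a two-term rule with orders a = (1, 2):

import numpy as np

a = np.array([1, 2])                    # orders of the two terms
lam = np.array([[3.0, 1.0],             # lam[m, r]: term m, component r
                [1.0, 2.0]])
for k in range(lam.shape[1]):
    s = np.sum(lam[:, k] * (-1.0) ** a)
    print(f"J = -U_{k + 1}: stable = {s < 0} (sum = {s:+.1f})")

Here the first component gives a stable −U_1 (sum = −2.0), while the second gives an unstable −U_2 (sum = +1.0).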
3 Discussion

We have analyzed biologically motivated plasticity dynamics that generalize the Oja rule. One class of these computes tensor eigenvectors. We proved that, without a multiplicative weight-dependence in the plasticity, those eigenvectors are attractors of the dynamics (theorem 1, figs. 1, 2a, b). Contrary to Oja's rule, the first eigenvector of higher-order input correlations is not a unique attractor. Rather, each eigenvector k has a finite basin of attraction, the size of which is proportional to λ_k^{R/(a−1)}. If there are d codominant eigenvectors (λ_1 = λ_2 = . . . = λ_d), the d-sphere they span is an attracting equilibrium manifold (corollary 1.4 in appendix A.2). Furthermore, the steady states of any plasticity model with a finite Taylor polynomial in the neural output and inputs are generalized eigenvectors of multiple input correlations. These steady states are stable and attracting (theorem 2, figs. 2c, 3).

3.1 Spiking neurons and weight-dependence

While biological synaptic plasticity is certainly more complex than the simple generalized Hebbian rule of eq. 1, neural activity is also more complex than the linear model n = J^T x. We examined the simple linear-nonlinear-Poisson spiking model and a generalized spike timing–dependent plasticity (STDP) rule ([45]; appendix A.3). Similar to eqs. 2 and 10, we can write the dynamical equation for J as a function of joint cumulant tensors of the input (eq. 56 in appendix A.3). These dynamics have a different structure than eqs. 2 and 10.

We focused here mainly on learning rules with no direct dependence on the synaptic weight (c = 0 in eq. 1, c = 0 in eq. 9). When c ≠ 0, the learning dynamics cannot be simply analyzed in terms of the loading onto the input correlations' eigenvectors. We studied the learning dynamics with weight-dependence for two simple families of input correlations: diagonal μ and piecewise-constant rank-one μ (appendix A.4). In both cases, we found that eigenvectors of those simple input correlations were also attractors of the plasticity rule. With diagonal input correlations, sparse steady states with one nonzero synapse are always stable and attracting when a + c > 0, but if a + c ≤ 0, the synaptic weights converge to solutions where all weights have the same magnitude (fig. A.4.1). With rank-one input correlations, multiplicative weight-dependence can interfere with synaptic scaling and lead to an instability of the neuron's total synaptic amplitude (fig. A.4.2).

3.2 Related work and applications

There is a rich literature on generalized or nonlinear forms of Hebbian learning. We briefly discuss the most closely related results, to our knowledge. The family of Bienenstock, Cooper, and Munro (BCM) learning rules supplements the classic Hebbian model with a stabilizing sliding threshold for potentiation rather than synaptic scaling [59]. BCM rules balance terms driven by third- and fourth-order joint moments of the pre- and postsynaptic activity [60]. A triplet STDP model with rate-dependent depression and uncorrelated Poisson spiking has BCM dynamics [45] and can develop selective (sparse) connectivity in response to rate- or correlation-based input patterns [61]. If the input is drawn from a mixture model, then under a BCM rule the synaptic weights are guaranteed to converge to the class means of the mixture [62]. Learning rules with suitable postsynaptic nonlinearities can allow a neuron to perform independent component analysis (ICA) [63, 64]. These learning rules optimize the kurtosis of the neural response.
In contrast, we show that a simple nonlinear Hebbian model learns tensor eigenvectors of higher-order input correlations. Those higher-order input correlations can determine which features are learned by gradient-based ICA algorithms [65]. Taylor and Coombes showed that a generalization of the Oja rule to higher-order neurons can also learn higher-order correlations [66], which can allow learning independent components [67]. In that model, however, the synaptic weights J are a higher-order tensor.

Computing the robust eigenvectors of an odeco tensor μ by power iteration has O(K^{a+1}) space complexity: it requires first computing μ. The discrete-time dynamics of eq. 1 correspond to streaming power iteration, with O(K) space complexity [68–70]. Eq. 2 defines limiting continuous-time dynamics for tensor power iteration, exposing the basins of attraction. Oja's rule inspired a generation of neural algorithms for PCA and subspace learning [12, 13]. Local learning rules for approximating higher-order correlation tensors may also prove useful, for example in neuromorphic devices [71–73].

Code availability

The code associated with figures one and three is available at https://github.com/gocker/TensorHebb.

Acknowledgments

We thank the Allen Institute founder, Paul G. Allen, for his vision, encouragement and support.
1. What is the main contribution of the paper regarding Hebbian rules and their stability analysis?
2. How does the proposed approach differ from existing Hebbian rules, and what are its limitations?
3. What is the significance of analyzing the solutions of generalized Hebbian rules in terms of biological plausibility and practical applications?
4. How does the paper relate to previous work in tensor decomposition and independent component analysis?
5. What are the strengths and weaknesses of the experimental results presented in the paper?
6. Can you provide examples or references to support your criticism of the paper's organization, clarity, and relevance to neuroscientific facts?
Summary Of The Paper Review
Summary Of The Paper

This work proposes to determine the solutions of "generalized" Hebbian rules, i.e., Hebbian rules applied to higher-order correlation tensors, and to analyze the stability of these solutions. The model starts by postulating a biologically sensible form of nonlinear Hebbian rule as polynomials of the local input activity, postsynaptic activity, and current synaptic weight. From this generalized form of Hebbian rule, the authors determine the nature of the steady states of the dynamics and relate them to the spectrum and eigenspace of the higher-order covariance tensor. Furthermore, the size of the basins of attraction is determined. The model is then used in toy experiments demonstrating the convergence of the learning rules for specifically chosen parameters of the learning rule.

Review

First, we address the organization and clarity of the paper. The paper is well written, and results and theorems are nicely explained using informal explanations, leaving the full proofs to the appendix. Unfortunately, the paper starts with a "Results" section without introducing the necessary background and existing rules and without contextualizing the problem. Related work can certainly appear at the end, but I believe various things should have been introduced early on. For example: What are the limitations of existing Hebbian rules, and why do we need higher-order powers? And what is the difference with nonlinear inputs/outputs? The point above would have naturally led to the introduction of higher-order covariance matrices, which are naturally formulated as tensors. As a result, one can regret the scant treatment of prevalent tasks such as tensor PCA and independent component analysis (only very briefly mentioned by the authors). For example, the work of Cichocki, most recently in [i], presents examples where tensor decomposition is essential. The author of [i] is one of many interested in biological implementations of third- and fourth-order tensors, and this line of work has not been cited. For example, the work of [ii] proposes similar learning rules in the case of ICA and discusses a possible biological implementation using higher-order neurons. Tensor decomposition is also a very active field of research in the machine learning community, and one can regret that "gradient-based" methods (of which Oja's rule is one) have not been discussed. The experiments follow one another chronologically, tracing the authors' discoveries as they happened rather than presenting an overarching methodology. This can be seen in the use of "We next investigated" and "We next asked", followed by another "We next investigated", in section 2.3. A separate methodology section could help the reader understand what the paper aims to achieve. It is still important to point out that the paragraphs, taken separately, are nicely written and clear. Next, we address the originality of the paper. The main originality of the paper is the analysis of the solutions of the ad-hoc learning rule. It cannot be said that such learning rules have never been proposed or analyzed in other contexts; cf. the references given above. Hebbian/Oja's rules have long been known to be gradient-based methods on "extended covariance" matrices, so this result is not very surprising, although it is important that it be proved theoretically. Unfortunately, it is also true that how these rules can be biologically implemented is heavily debated; they have been shown to require recurrent connections or to operate on various timescales when considering more than one output neuron.
These facts are largely overlooked, which greatly weakens the paper, as its relevance can be questioned from the start if the main assumptions are not properly cleared up. The authors claim to address phenomenological rules, but no results seem to validate or invalidate known plasticity rules. One would have preferred fewer but clearer examples relating this work to neuroscientific facts, rather than speculation about what is and is not biologically plausible. In the case of the spiking neuron (Sec 3.1), this work should have related to the paper [iv], which would have strengthened it, as [iv] offers a nice theoretical framework for this setting. Finally, we address the significance of the paper. The paper proposes an interesting analysis of the solutions of an ad-hoc learning rule. Unfortunately, this work does not make predictions on how (implementation-wise) and where (structurally) these rules could occur in the brain. There have also been various higher-order neuron models used in concrete tasks, cf. [ii], whereas the experiments here are not easily relatable and do not add much to the paper. One could have expected examples where tensor decomposition has been used, either in neuroscience or in machine learning.

[i] Cichocki, Andrzej. "Tensor decompositions: a new concept in brain data analysis?" arXiv preprint arXiv:1305.0395 (2013).
[ii] Ziegaus, Ch, and Elmar Wolfgang Lang. "A neural implementation of the JADE algorithm (nJADE) using higher-order neurons." Neurocomputing 56 (2004): 79-100.
[iii] Ge, Rong, et al. "Escaping from saddle points—online stochastic gradient for tensor decomposition." Conference on Learning Theory. PMLR, 2015.
[iv] Gjorgjieva, Julijana, et al. "A triplet spike-timing–dependent plasticity model generalizes the Bienenstock–Cooper–Munro rule to higher-order spatiotemporal correlations." Proceedings of the National Academy of Sciences 108.48 (2011): 19383-19388.
NIPS
Title AutoST: Towards the Universal Modeling of Spatio-temporal Sequences

Abstract The analysis of spatio-temporal sequences plays an important role in many real-world applications, demanding a high model capacity to capture the interdependence among spatial and temporal dimensions. Previous studies provided separate network designs in three categories: spatial first, temporal first, and spatio-temporal synchronous. However, the manually designed heterogeneous models can hardly meet the spatio-temporal dependency capturing priorities of various tasks. To address this, we propose a universal modeling framework with three distinctive characteristics: (i) an attention-based network backbone, including the S2T Layer (spatial first), T2S Layer (temporal first), and STS Layer (spatio-temporal synchronous); (ii) a universal modeling framework, named UniST, with a unified architecture that enables flexible modeling priorities with the three proposed modules; (iii) an automatic search strategy, named AutoST, which automatically searches for the optimal spatio-temporal modeling priority by network architecture search. Extensive experiments on five real-world datasets demonstrate that UniST with any single type of our three proposed modules can achieve state-of-the-art performance. Furthermore, AutoST achieves overwhelming performance with UniST.

1 Introduction

Modeling and predicting the future of spatio-temporal (ST) sequences based on past observations has been extensively studied and successfully applied in many fields, such as road traffic [13], medical diagnosis [29], and meteorological research [24]. Traditional statistical methods typically require the input sequence to satisfy certain assumptions, which limits their ability to capture complex spatio-temporal dependencies. Recurrent neural network (RNN) methods [15] leverage the universal approximation property to build separate network branches to model dependencies, and make predictions with fusion gate blocks over the stacked branches. The intrinsic gradient flow in the back-propagation training process [7] may bring the ST dependency out of correspondence with the network branches' configuration, especially for deeper networks [11].

*Jainxin Li is the corresponding author. †BDBC: Beijing Advanced Innovation Center for Big Data and Brain Computing. ‡HKUST(GZ): Hong Kong University of Science and Technology (Guangzhou). §HKUST FYTRI: Guangzhou HKUST Fok Ying Tung Research Institute. 36th Conference on Neural Information Processing Systems (NeurIPS 2022).

Figure 2: Modeling orders of ST data, plotted on spatial and temporal axes: (a) ideal, (b) spatial-first, (c) temporal-first, (d) spatio-temporal synchronous, and (e) ours. The stars are tasks with different spatial-temporal dependencies. The red lines are the spatio-temporal modeling procedure with anisotropic tendency. The red lines' color, going darker or lighter, reflects the modeling ability of the model along the current modeling tendency.
Recently, Transformer-based models have shown larger modeling capacity in both spatial and temporal modeling [21, 32, 31], which motivates us to find a way to capture ST dependencies in a single universal framework. As shown in Fig.1, ST dependencies exist individually in sequences: spatial correspondences, temporal correspondences, and spatio-temporal correspondences. Taking road traffic forecasting as an example, previous research falls into three typical paradigms. (a) Spatial-first modeling [1, 27, 13]: predicting the traffic at the next road junction, which has strong connections to previous intersections, stops, and surrounding traffic. (b) Temporal-first modeling [23, 5]: predicting the traffic of a high school on Friday afternoon, which shows strong periodic relationships with the school's timetable. (c) Spatio-temporal synchronous modeling [19, 12, 28, 6]: analyzing city-wide traffic, which is tightly entangled in both space and time.

During sequence modeling, the main problem is to align the network design with the natural spatio-temporal distribution. However, the distribution of ST dependencies varies and depends on the forecasting task and the corresponding datasets. They are mixed in a compound way when modeling ST sequences, and the three tasks in Fig.1(b) are representative ones. What makes it worse is that the prevalent modeling methods show an anisotropic tendency in capturing ST dependencies. If we use spatial-first models on the three tasks in Fig.2(b), task 1's states are highly influenced by the surrounding information, with the periodic pattern as an underlying factor, which makes the spatial-first model fit it properly. Comparing the model ability (red lines) with the ideal one in Fig.2(a), this kind of model will be insufficient for task 2 and task 3. Similarly, suppose we use temporal-first models on the three tasks in Fig.2(c). In that case, the model ability only matches task 3, where the periodic pattern decides the states rather than the spatial information. The same analysis also applies to the spatio-temporal synchronous situation, where the states are mainly influenced by complex associations across space and time, like semantic relationships.

In this paper, we aim to propose a universal model that alleviates the modeling gap on different tasks. The contributions are: 1) We are the first to raise and address the modeling order problem in spatio-temporal forecasting tasks, by proposing a universal modeling framework, UniST, and an automatic structure search strategy, AutoST. 2) We propose three replaceable and unified attention-based modeling units, named S2T, T2S, and STS, which model spatio-temporal sequences with three different priorities: spatial first, temporal first, and spatio-temporal synchronous. 3) Extensive experiments on 5 datasets and 3 sequence forecasting tasks demonstrate that using any one of our three modeling units (S2T, T2S, and STS) outperforms the baseline methods, and our framework together with AutoST achieves new state-of-the-art performance.

2 Related Work

Existing spatio-temporal forecasting methods can be roughly grouped into three categories: spatial-first, temporal-first, and spatio-temporal synchronous methods. Spatial-first: STG2Seq [1] uses stacked GCN layers to capture the entire input sequence, where each GCN layer operates on a limited historical time window, and the final results are concatenated to make the forecast.
In the view of this paper, it belongs to spatial-first modeling. STGCN [27] proposes blocks that contain two temporal gated convolution layers with one spatial graph convolution layer in the middle, starting from a convolution-based temporal layer. DCRNN [13] forecasts traffic flow using diffusion convolution and recurrent units to capture spatial and temporal information successively. Temporal-first: Graph WaveNet [23] builds its basic modeling layers with two gated temporal convolution modules at the beginning, followed by a graph convolution module, modeling from temporal to spatial. GSTNet [5] builds several layers of spatial-temporal blocks to produce the forecast, each consisting of a multi-resolution temporal module followed by a globally correlated spatial module. Spatio-temporal synchronous: STSGCN [19] constructs a spatio-temporal synchronous extraction module composed of graph convolutional networks. STFGNN [12] models spatio-temporal correlations simultaneously by fusing a dilated convolutional neural network with a gating mechanism and a spatio-temporal fusion graph module. ST-ResNet [28] uses convolution on a sequence of image-like 2D matrices to model space and time at the same time. ASTGCN [6] proposes a spatial-temporal convolution that simultaneously captures spatial patterns and temporal features.

3 Preliminary

3.1 Spatio-temporal Sequence Forecasting

Spatio-temporal sequence forecasting (STSF) is to predict the future sequence of spatio-temporal inputs based on historical observations. Specifically, we are given a graph G = (V, E, A), where V and E are the node set and edge set, N is the number of nodes, and A ∈ R^{N×N} is the adjacency matrix of G. If v_i, v_j ∈ V and (v_i, v_j) ∈ E, then A_ij = 1; otherwise A_ij = 0. X = {X_1, X_2, . . . , X_T} is an ST sequence of T time steps, where X ∈ R^{T×N×C}. The snapshot at time step t is denoted x_t ∈ R^{N×C}, where C is the feature dimension of a node. The ST sequence forecasting problem can then be defined as: given S time steps of historical observations on the input graph G, the goal is to predict the future sequence of the features on each node with a learning function f: [X_{(t−S):t}, G] →_f X_{(t+1):(t+P)}, where X_{(t−S):t} and X_{(t+1):(t+P)} are ST sequences of length S and P, respectively.

3.2 Network Architecture Search

Network (neural) architecture search (NAS) comprises automated methods for generating and optimizing neural networks. A representative gradient-based approach is DARTS [16], which is the foundation of our proposed training framework. DARTS aims to search for optimal directed edge connections on a directed acyclic graph with predefined computing cells as nodes. The resulting connection of node j is denoted x^{(j)} = Σ_{i<j} o^{(i,j)}(x^{(i)}), where o^{(i,j)} is an operator, e.g., a layer in a model, represented by a directed edge from node i to node j. DARTS proposes a method to relax the discrete search space to be continuous, and uses bi-level optimization to learn a differentiable objective for the joint optimization of both the network architecture and the model weights. The objective function is: min_α L_val(w*(α), α) s.t. w*(α) = argmin_w L_train(w, α), where α is the architecture and w is the model weights. In Section 4.4, we improve the design of the directed acyclic graph of the search architecture and the two-stage optimization of the architecture parameters.
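In practice, DARTS's bi-level objective is usually approximated by alternating first-order updates. The PyTorch sketch below shows one such search step; the model, loss, optimizers, and batch format are generic placeholders (assumptions), not the concrete DARTS or AutoST implementation.

import torch

def search_step(model, loss_fn, opt_alpha, opt_w, train_batch, val_batch):
    # opt_alpha holds the architecture parameters alpha,
    # opt_w holds the ordinary model weights w
    xs, ys = val_batch                     # 1) architecture step on L_val
    opt_alpha.zero_grad()
    loss_fn(model(xs), ys).backward()
    opt_alpha.step()

    xs, ys = train_batch                   # 2) weight step on L_train
    opt_w.zero_grad()
    loss_fn(model(xs), ys).backward()
    opt_w.step()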
4 Methods

In this section, we first introduce two basic modeling units: the time-series linear self-attention and the high-order mix graph convolution. Then we propose three layers as different network backbones, and we build a universal modeling framework based on the three "atomic" layers. Next, we propose an automatic searching strategy for spatio-temporal information fusion, aimed at finding the optimal order of spatio-temporal modeling on various downstream tasks.

4.1 Spatial / Temporal Modeling Unit

4.1.1 Time Series Linear Self-Attention

The self-attention mechanism [21] has been widely used in natural language processing, computer vision, and time series forecasting. It is defined as: Attention(Q, K, V) = V′ = Softmax(QK^T/√d)V, where Q = XW_Q, K = XW_K, V = XW_V, and the projection matrices W_Q, W_K, W_V ∈ R^{C×D}. However, the original self-attention suffers from high computational and memory costs, because the dot-product computation of Q and K leads to O(N²) time and space complexity. [9] proposed linear self-attention, which represents the similarity function of Q and K in the self-attention by a kernel function:

V′_i = ( φ(Q_i)^T Σ_{j=1}^N φ(K_j) V_j^T ) / ( φ(Q_i)^T Σ_{j=1}^N φ(K_j) ).

For each query Q_i, the two terms Σ_{j=1}^N φ(K_j) V_j^T and Σ_{j=1}^N φ(K_j) are the same and can be reused for efficient computation. Following [31], we use the technique of linear self-attention to represent time-series features.

4.1.2 High-order Mix Graph Convolution

To acquire a better spatial information representation, we propose a high-order mix graph convolutional operation for spatial information mixing and feature extraction of the original inputs. It is defined as:

HighOrder(X, A, order) := H^{order} = { X, if order = 0; MixGC(X, A), if order = 1; MixGC(H^{(order−1)}, A), if order > 1 }, (1)

where order denotes the total order of the graph convolution operations, i.e., an order-hop neighbor relationship of each node is considered. In this paper, we define the 1st-order mix convolutional operation by combining the 1st-order ChebNet [10] and the Adaptive Diffusion Convolution [23]:

MixGC(X, A) = ChebNet(X, A) + AdapDC(X, A) = ÂXW_g + P_f XW_f + P_b XW_b + Â_adp XW_adp,

where Â = D̃^{−1/2} Ã D̃^{−1/2} is a normalized adjacency matrix with self-loops. ChebNet focuses on 1st-order neighbor information, while AdapDC focuses on multi-hop information. Ã is defined as Ã = A + I, where D̃_ii = Σ_j Ã_ij and I is an identity matrix. P_f = A / rowsum(A) and P_b = A^T / rowsum(A^T) are the forward and backward state transition matrices, respectively. Â_adp is an adaptive matrix for complementary spatial state information, which is calculated from two learnable node embedding matrices E_1, E_2 ∈ R^{N×C} [20] as Â_adp = Softmax(ReLU(E_1 E_2^T)).
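The two primitives can be sketched compactly in PyTorch. The feature map φ(x) = elu(x) + 1 for the linear attention and the small stability constants are common choices and are assumptions here, as is the single-head, single-slice formulation.

import torch
import torch.nn.functional as F

def linear_attention(Q, K, V, eps=1e-6):
    # Q, K, V: (batch, length, d); O(length) time via the kernel trick
    Qp, Kp = F.elu(Q) + 1, F.elu(K) + 1
    KV = torch.einsum('bld,ble->bde', Kp, V)        # sum_j phi(K_j) V_j^T
    Z = 1.0 / (torch.einsum('bld,bd->bl', Qp, Kp.sum(1)) + eps)
    return torch.einsum('bld,bde,bl->ble', Qp, KV, Z)

def mix_gc(X, A, Wg, Wf, Wb, Wadp, E1, E2):
    # 1st-order MixGC on one time slice: X (N, C), A (N, N)
    A_tilde = A + torch.eye(A.size(0))
    d = A_tilde.sum(1)
    A_hat = A_tilde / torch.sqrt(d[:, None] * d[None, :])  # D^-1/2 A~ D^-1/2
    Pf = A / A.sum(1, keepdim=True).clamp(min=1e-8)        # forward diffusion
    Pb = A.T / A.T.sum(1, keepdim=True).clamp(min=1e-8)    # backward diffusion
    A_adp = torch.softmax(F.relu(E1 @ E2.T), dim=1)        # adaptive matrix
    return A_hat @ X @ Wg + Pf @ X @ Wf + Pb @ X @ Wb + A_adp @ X @ Wadp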
4.2 Unified Spatio-temporal Modeling Backbone

In order to solve the problem of spatio-temporal dependency distribution differences in the modeling procedure, we first propose three novel modules, the S2T Layer, T2S Layer, and STS Layer, which are suitable for the three typical spatio-temporal dependencies: spatial-first, temporal-first, and spatio-temporal synchronous, respectively. We design all three modeling modules to have the same dimensions of inputs and outputs. This provides a solid foundation for our later flexible and universal modeling.

4.2.1 Spatial-first Modeling Layer

The spatial-first sequence modeling method, the S2T Layer, models from spatial to temporal. The spatial information between the nodes on the graph is first characterized on a single slice. After that, node information at different times is exchanged along the time dimension, once its spatial information has been shared with its neighbors. As shown in Fig.3(a), the S2T Layer first uses two high-order mix graph convolutions defined in Eq.(1) to process the input spatio-temporal sequence X^{L−1}, obtaining two sequence representations with mixed spatial information. The key K and value V of the subsequent self-attention are then obtained by transformations with the parameter matrices W_K and W_V, respectively, while the query Q is obtained by transforming the original input ST sequence with its own parameter matrix, as follows:

Q = X^{L−1} W_Q, K = HighOrder_1(X^{L−1}, A, order) W_K, V = HighOrder_2(X^{L−1}, A, order) W_V.

Then the original ST sequence and the new sequences with mixed spatial information are processed with a multi-head linear self-attention, which learns temporal dependencies and exchanges information across time slices to obtain a further representation of the ST sequence:

Z = Attention(Q, K, V). (2)

The output is concatenated with the initial input once for residuals and processed with a layer normalization, followed by a two-layer fully connected network for further ST representation learning. This network is applied separately and identically to each point-in-time position in the ST sequence, thus maintaining the continuous transfer of position-encoded information. Finally, the resulting ST sequence representation is again connected to the initial input with one residual and layer normalization to obtain the output X^L = Norm(max(0, Norm(Z + X^{L−1}) W_1 + b_1) W_2 + b_2).

4.2.2 Temporal-first Modeling Layer

This module, named the T2S Layer, is designed to model the ST sequence from temporal to spatial. Different from spatial-first modeling, the original inputs are projected into Q, K, V at the beginning by three weight matrices, as Q = X^{L−1} W_Q, K = X^{L−1} W_K, and V = X^{L−1} W_V. The projection results are first used to calculate temporal representations, using the time-series linear self-attention in Eq.(2). Then the temporal representation of each node is sent to the high-order mix graph convolution, together with the adjacency matrix, to fuse the temporal information from all neighbors: Z′ = HighOrder(Z, A, order). Finally, the output representations are processed with the feed-forward and layer normalization operations in the same way as in the S2T Layer.

4.2.3 Spatial-temporal Synchronous Layer

This module, named the STS Layer, aims to model the spatial and temporal information simultaneously. Different from the former two modules, the inputs are directly used to calculate spatial and temporal representations at the same time. The temporal part still projects the original inputs into Q, K, V and executes a linear self-attention operation to obtain a temporal representation. The spatial part accepts the original inputs and the spatial information and uses the high-order mix graph convolution to construct a spatial representation:

Temporal: Z_1 = Attention(Q, K, V), Spatial: Z_2 = HighOrder(X^{L−1}, A, order).

The outputs are concatenated together as Z′ = concat[Z_1, Z_2]. Then it is processed with the same following operations as the former two modules and output as X^L.
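Using linear_attention and mix_gc from the sketch above, the three backbones differ only in where the graph convolution sits relative to the temporal attention. A simplified sketch (single head; the residual and feed-forward wrapper is omitted; the weight containers are assumptions):

import torch

def attend_time(X, WQ, WK, WV, Kin=None, Vin=None):
    # attention along time, independently per node: X (T, N, d) -> (T, N, d)
    q = X.permute(1, 0, 2) @ WQ
    k = (Kin if Kin is not None else X).permute(1, 0, 2) @ WK
    v = (Vin if Vin is not None else X).permute(1, 0, 2) @ WV
    return linear_attention(q, k, v).permute(1, 0, 2)

def per_slice_gc(X, A, gc):
    # apply mix_gc to each time slice; gc packs (Wg, Wf, Wb, Wadp, E1, E2)
    return torch.stack([mix_gc(X[t], A, *gc) for t in range(X.size(0))])

def s2t(X, A, gc1, gc2, WQ, WK, WV):   # spatial first: K, V carry graph info
    return attend_time(X, WQ, WK, WV,
                       Kin=per_slice_gc(X, A, gc1),
                       Vin=per_slice_gc(X, A, gc2))

def t2s(X, A, gc, WQ, WK, WV):         # temporal first: attend, then graph
    return per_slice_gc(attend_time(X, WQ, WK, WV), A, gc)

def sts(X, A, gc, WQ, WK, WV):         # synchronous: run both, concatenate
    return torch.cat([attend_time(X, WQ, WK, WV), per_slice_gc(X, A, gc)], -1)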
4.3 Universal Modeling Framework

Targeting the ST sequence forecasting task, we propose a unified ST sequence modeling framework (UniST) with the proposed unified modeling backbones in Fig.4, which follows the encoder-decoder architecture. It uses a unified architecture with interchangeable and replaceable model units.

4.3.1 Spatio-temporal Embedding Layer

Since the Transformer model relies solely on self-attention for global alignments, positional embeddings [21] and extra embeddings [32] are needed to capture spatio-temporal dependencies. We introduce four types of embeddings, E_P, E_V, E_S, E_T, in Appendix A. Fusion embedding: the four embeddings are summed together as the final embedding added to the inputs: E_F = E_P + E_V + E_S + E_T. Note that the shape of the token embedding is E_V ∈ R^{T×N×d}, while the other embeddings' shapes are E_P ∈ R^{1×1×d}, E_S ∈ R^{1×N×d}, and E_T ∈ R^{T×1×d}. When calculating the summation, they are replicated and expanded by broadcasting on the respective missing dimensions.
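The fusion embedding is a plain broadcast sum; a tiny PyTorch sketch with assumed sizes:

import torch

T, N, d = 12, 207, 64
E_P = torch.randn(1, 1, d)     # positional
E_V = torch.randn(T, N, d)     # token
E_S = torch.randn(1, N, d)     # spatial
E_T = torch.randn(T, 1, d)     # temporal
E_F = E_P + E_V + E_S + E_T    # broadcasting fills the missing dimensions
print(E_F.shape)               # torch.Size([12, 207, 64])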
4.3.2 Encoder

The encoder of UniST consists of multiple Spatio-Temporal Extractors (STE(·)), each of which can be chosen arbitrarily from {T2S Layer, S2T Layer, STS Layer}. All extractors are connected end to end, i.e., the output of the previous one is the input of the next one. To acquire a more diverse representation, the outputs of each extractor are added to form the final output of the encoder. Let the output of the embedding layer be X^0; the encoder is computed as X_en = Σ_{i=1}^L STE^i(X^0), where L refers to the number of spatio-temporal extractors.

4.3.3 Decoder

The decoder accepts the output of the encoder, i.e., the L outputs from the L spatio-temporal extractors. They are first added into a unified spatio-temporal representation. The result then passes through two rounds of ReLU activation and Linear projection to produce the final sequence forecast. Denoting X^ℓ as the output of extractor ℓ, the decoder computes Ȳ = X_de = Linear(ReLU(Linear(ReLU(Σ_ℓ X^ℓ)))).

4.4 Automated Search for UniST

With the proposed unified ST sequence modeling framework UniST, the potential wrong-network-configuration problem remains if we build an arbitrary modeling order with the replaceable model units {T2S Layer, S2T Layer, STS Layer}. Considering the various downstream tasks, how can we build a universal model with an optimal configuration? Here we propose the Automated Spatio-Temporal modeling approach (AutoST), which learns the optimal combinatorial order that suits the spatio-temporal dependency of the current task. We designed two schemes for layer combination. In this section, we first define the basic searching unit of AutoST, then we introduce two designs of AutoST with different searching schemes.

4.4.1 AutoST Cell

The basic searching unit in AutoST is the network cell. Here we define its structure and computing process.

Definition 1. AutoST Cell. Let G = (V, E) be a directed acyclic graph (DAG), where V denotes the node set; each node refers to a representation coming from the outputs of a computation layer. The representation on node i is defined as H^i ∈ R^{T×|V|×d}, where T is the length of the spatio-temporal sequence and d is the feature dimension. The input of each AutoST Cell is denoted H^0, and the output of each cell is the summation over all nodes, i.e., over all internal representations: H_out = Σ_{i}^{|V|} H^i. On graph G, the directed edge (i, j) from node i to node j stands for a mixture of all candidate modeling modules O = {T2S Layer, S2T Layer, STS Layer} and is represented as o^{(i,j)}, so that the representation of node j in terms of the earlier nodes can be written as H^j = Σ_{i<j} o^{(i,j)}(H^i).

On each directed edge there exists a set of weight parameters α^{(i,j)} = {α_o^{(i,j)} | o ∈ O}, which indicate the probability that the corresponding modeling module should be retained. The softmax-weighted mixture over the candidate modules is calculated as:

H^j = Σ_{i<j} Σ_{o∈O} [ exp(α_o^{(i,j)}) / Σ_{o′∈O} exp(α_{o′}^{(i,j)}) ] o(H^i).

4.4.2 Sequential Stacking Search

Based on the proposed UniST, we propose two searching schemes to find a better combination of the modeling layers. The proposed searching schemes are lossless replacements for the encoder of UniST: we simply replace the encoder's spatio-temporal extractor with the AutoST Cell. The first one is the multi-layer sequential stacking searching scheme, with which the whole model is named AutoST1. As illustrated in Fig.5(a), it has a simple structure within each AutoST Cell, while having a more complicated stacking structure between cells. Each cell holds a DAG with three nodes: one input node H^0, one output node H_out, and an intermediate node H^1. Two directed edges are pre-defined between the former two nodes' outputs and the output node: (H^0, H_out) and (H^1, H_out). The searching space of this scheme is shown by the directed red dotted lines. The red dashed box is the search candidate set, including {T2S Layer, S2T Layer, STS Layer}. We use two gradient-based network architecture search methods in the experiments, i.e., DARTS [16] and PAS [22]. After searching, the cell essentially becomes one of the three model units. This scheme allows multiple stacking of cells; therefore, the new encoder of the whole model becomes a sequential stack of different modeling layers.

4.4.3 Hybrid Assembling Search

The other searching scheme is called hybrid assembling search. The structure is similar to the sequential stacking search; however, in this scheme, the encoder consists of only a single AutoST Cell. The cells are not stacked layer by layer; instead, the search is conducted on a more complicated DAG within a single AutoST Cell. In this searching scheme, the DAG of the AutoST Cell is shown in Fig.5(b). There are multiple nodes in the cell; generally, it is set to 4-7 nodes in the experiments. The pre-defined connections are from each node's output to the output node. The multiple directed red dotted lines show the searching space of this scheme. The candidate set is expanded with two operations: Identity and Zero. Identity means no modeling module on this edge, i.e., a direct connection with no operation. Zero means setting all data passing through it to zero, i.e., no connection is made. This scheme makes the entire DAG form a complex and deep network structure. Through the complex structure design inside the cell, multiple spatio-temporal modeling modules are combined to fit the spatio-temporal dependency distribution in a target task.
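The softmax-weighted edge at the heart of both schemes can be sketched as a DARTS-style mixed operation; the candidate modules and their call signature are placeholders (assumptions):

import torch

class MixedEdge(torch.nn.Module):
    def __init__(self, candidates):       # e.g., S2T, T2S, STS (+ Identity, Zero)
        super().__init__()
        self.ops = torch.nn.ModuleList(candidates)
        self.alpha = torch.nn.Parameter(torch.zeros(len(candidates)))

    def forward(self, H):
        w = torch.softmax(self.alpha, dim=0)  # retention probabilities
        return sum(wi * op(H) for wi, op in zip(w, self.ops))

# A node aggregates mixed edges from all earlier nodes,
#   H_j = sum_{i<j} MixedEdge_ij(H_i),
# and the cell output sums over all nodes, as in Definition 1.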
5 Experiments

This section empirically evaluates the effectiveness of the UniST and AutoST models on short-term, medium-term, and long-term ST sequence forecasting tasks over five real-world datasets. Platform: Intel(R) Xeon(R) CPU 2.40GHz × 2 + NVIDIA Tesla V100 GPU (32 GB) × 4. The code is available at https://github.com/shuaibuaa/autost2022.

5.1 Datasets

In order to study the effect of various ST sequence forecasting methods under complex spatio-temporal distributions, five real-world datasets with different tasks and data states are selected. The statistics of the five datasets are listed in Table 3.

METR-LA [8]: This traffic speed dataset contains 4 months of data, from March 1, 2012 to June 30, 2012, recorded by sensors at 207 different locations on highways in Los Angeles County, USA. The data granularity is 5 minutes per point, and the spatial information provided by the dataset includes the coordinates of each sensor and the distances between sensors.

PEMS-BAY [14]: This traffic speed dataset comes from the California Transportation Agencies (CalTrans) Performance Measurement System (PeMS). The dataset contains data recorded by 325 sensors in the Bay Area over a total of 6 months, from January 1, 2017 to May 31, 2017, with a granularity of 5 minutes per point. The spatial information provided by the dataset includes the coordinates of each sensor and the distances between sensors.

PEMS-03/04/08 [3]: These three traffic datasets are also from the PeMS system of the California Transportation Agencies; each dataset contains data recorded by sensors in a certain area of California. PEMS-03 contains data recorded by 358 sensors over 3 months, from September 1, 2018 to November 30, 2018. PEMS-04 contains 2 months of data recorded by 307 sensors, from January 1, 2018 to February 28, 2018. PEMS-08 contains 2 months of data recorded by 170 sensors, from July 1, 2016 to August 31, 2016. The granularity of these three datasets is 5 minutes per point, and the spatial information provided only includes the connectivity between sensors.

5.2 Main Results

Table 1 summarizes the ST sequence forecasting results. UniST_S, UniST_T, and UniST_ST stand for our UniST framework with three identical stacked layers of the S2T Layer, T2S Layer, and STS Layer, respectively. The results in bold font in Table 1 show that our proposed UniST outperforms all baseline methods and achieves state-of-the-art results on all 9 tasks over the 5 datasets, with each of the three proposed layers. This demonstrates that our proposed unified spatio-temporal modeling layers and the unified forecasting framework are more expressive than traditional methods. Specifically, compared with GMAN, which is also based on the self-attention mechanism, our methods achieve at most 18.41%, 15.31%, and 13.82% MAE decreases on the short-term, medium-term, and long-term forecasting tasks, respectively. Compared with the most advanced method, STFGNN, our methods gain 12.06%, 10.61%, and 9.43% MAE decreases on the short-, medium-, and long-term forecasting tasks.

From the last two columns of Table 1, we can see that both AutoST1 and AutoST2 beat all other methods on every metric of every task, including our proposed UniST. Recall that the difference between AutoST and UniST lies in the encoder: UniST uses identical layers from {S2T Layer, T2S Layer, STS Layer}, while AutoST searches for a better combination and connection of the three types of layers. This demonstrates that combining and integrating modules with different spatio-temporal modeling abilities can better deal with uncertain spatio-temporal dependencies and model spatio-temporal sequences more comprehensively. At the same time, AutoST2 beats AutoST1 on every task, which shows that a single-layer but internally complex connection search is more effective for these tasks. Moreover, we can see that the best method among all baselines is STFGNN, the only baseline in the spatio-temporal synchronous modeling category.
Comparing GMAN and Graph WaveNet, the two representative methods of spatial-first and temporal-first modeling, respectively, we can see that although they beat the other baselines, our proposed Uni-series models show better performance. From this perspective, the proposed method matches the design.

5.3 Ablation Study

In AutoST, we use gradient-based network architecture search methods to optimize the connections of the modeling modules, choosing DARTS and PAS as the NAS methods. We also conduct experiments to compare against random search and AutoSTG [17], which also uses network architecture search for ST sequence prediction. The results are shown in Table 2.

Table 2: The results of network searching methods.

Method     | METR-LA (60 min)   | PEMS-BAY (60 min)
           | RMSE  MAE   MAPE   | RMSE  MAE   MAPE
AutoSTG    | 7.27  3.47  /      | 4.38  1.92  /
AutoST1-R  | 6.12  2.97  8.32   | 3.69  1.62  3.65
AutoST2-R  | 6.08  2.92  8.26   | 3.68  1.62  3.66
AutoST1-D  | 6.09  2.85  7.96   | 3.74  1.68  3.72
AutoST2-D  | 6.16  3.25  8.48   | 3.64  1.60  3.61
AutoST1-P  | 6.03  2.85  7.88   | 3.60  1.57  3.52
AutoST2-P  | 6.00  2.78  7.79   | 3.58  1.52  3.48

We can conclude that our proposed AutoST outperforms AutoSTG with all searching methods. That is because AutoSTG uses more fine-grained temporal convolution and graph convolution structures as candidates, while AutoST's candidate layers are designed to model the three different spatio-temporal dependencies, which makes better use of the modeling abilities of the three layers on different temporal and spatial relations. It reduces the search space and the search time simultaneously, so that AutoST achieves better efficiency and accuracy.

5.4 Result Visualization

Forecasting Visualization: We randomly selected one day from the PEMS-BAY dataset and visually compared the forecasting results of our methods and the baseline methods. Typical results are shown in Fig.8; the selected time span is from 13:00 to 19:00, which represents the most typical scene of the rush hour from afternoon to evening. From the forecasting lines compared to the ground truth, we can see that our proposed methods mostly outperform the two typical baseline methods when forecasting both smooth traffic and traffic jams. The bars at the bottom of the figures show the forecasting error of each method. We can find that all methods encountered an accuracy drop around 14:00 and 17:30, because the state of the road changed dramatically at those times. However, our methods are more stable under these changes and can quickly adapt to the new road state. For the complete data visualization of the day, please refer to Fig.9 in the appendix.

Learned Architecture Visualization: The learned architectures of AutoST1 and AutoST2 on the PEMS-08 dataset are shown in Fig.7 ((a) the search results of AutoST1; (b) the search results of AutoST2).
We can find that although UniST_T performs better when modeling with a single type of module, the search results of AutoST1 show that the STS Layer occupies a larger share, and the resulting model achieves better performance than UniST_T. This demonstrates that reasonably combining and stacking multiple spatio-temporal dependency modeling methods can better fit the real spatio-temporal dependencies. In addition, we can find that AutoST2 obtains complex connections between the four computation nodes in a single cell, and this learned architecture helps AutoST2 achieve state-of-the-art results on this task. Although we cannot yet explain why stacking the modules leads to better results, we can see a potentially broad range of applications [25] for unified architecture searching in this way.

6 Conclusion

In this work, we illustrated the existence of the modeling gap problem, especially the modeling order, in spatio-temporal analysis. Moreover, we built three different layers, namely S2T, T2S, and STS, as new network modeling backbones. Then, an automatic searching strategy was proposed to find the optimal modeling priority automatically. Extensive experiments on five real-world datasets show the overwhelming performance over SOTA baselines.

Acknowledgments

This work was supported by grants from the Natural Science Foundation of China (U20B2053, 62202029) and Foshan HKUST Projects (FSUST21-FYTRI01A, FSUST21-FYTRI02A). Thanks for computing infrastructure provided by Beijing Advanced Innovation Center for Big Data and Brain Computing. This work was also sponsored by CAAI-Huawei MindSpore Open Fund. We gratefully acknowledge the support of MindSpore, CANN (Compute Architecture for Neural Networks) and Ascend AI Processor used for this research.
1. What is the focus and contribution of the paper regarding spatio-temporal forecasting tasks? 2. What are the strengths of the proposed approach, particularly in its ability to unify different types of spatio-temporal modeling layers? 3. What are the weaknesses of the paper, especially regarding its technical novelty and contribution compared to prior works? 4. Do you have any concerns about the necessity of using automated architecture search algorithms to select modeling layers? 5. What are some limitations of the paper that should be addressed in future research?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper focuses on architecture design for spatio-temporal modeling for forecasting tasks. The paper first proposes three types of layers (i.e., S2T, T2S, STS) which emphasize modeling spatial, temporal and spatio-temporal information, respectively. The three layers are then applied in a unified encoder-decoder framework for spatio-temporal forecasting tasks. An automated architecture search algorithm is further adopted to better decide the connection configuration of different modeling layers. The final model, AutoST, achieves better results than baseline methods as well as the proposed UniST, which only involves one of the three modeling layers (S2T / T2S / STS). Strengths And Weaknesses Strengths The idea of analyzing and unifying different types of spatio-temporal modeling (i.e., spatial first, temporal first, spatio-temporal synchronous) is well-motivated and potentially valuable to the research community. The paper provides clear ablation studies (in Table 1) that show (1) the performance difference of the three modeling types on different datasets / temporal setups; (2) the effectiveness of introducing automated architecture search to fuse the three types of modeling layers. Weaknesses Although I like the idea of unifying different types of spatio-temporal modeling, the technical novelty and contribution of this work are not significant enough. Most of the techniques in the paper are adopted from prior works with minor modifications, for example, linear self-attention, mix graph convolution (with a combination of ChebNet and AdapDC), and DARTS / PAS for automated architecture search. Even from the framework design perspective, the ideas of spatial-first / temporal-first and joint spatio-temporal modeling are widely explored in previous work mentioned as baselines in Table 1, as well as the decoupled spatio-temporal modeling design [1,2] for video action recognition. The encoder-decoder framework adopted in UniST is also widely used for both NLP and CV tasks in recent years. In addition, although the paper provides a comparison between UniST and other baseline methods in Table 1, the comparison is not totally fair due to different model complexity, computation cost and even optimization details. This also makes the contribution of the proposed UniST insufficient. The major contribution of this work to me is using architecture search algorithms to unify three types of modeling layers, i.e., the final AutoST model. While the paper compares the results of using different NAS algorithms, an important baseline is missing. Instead of only using one of the three types of modeling layers (UniST), it is intuitive to have a model that includes all of the three layers at each STE and fuses all three outputs, optionally augmented with a gating module to introduce a weighted fusion mechanism. With this baseline result provided, the necessity of using NAS to select modeling layers would be more convincingly demonstrated. [1] Tran, Du, et al. "A closer look at spatiotemporal convolutions for action recognition." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018. [2] Qiu, Zhaofan, Ting Yao, and Tao Mei. "Learning spatio-temporal representation with pseudo-3d residual networks." Proceedings of the IEEE International Conference on Computer Vision. 2017. Questions Minor: A comma is missing in the last equation of Eqn. 1. The equation in L204 is inconsistent with the text "the outputs of each extractor are added to form the final output".
Limitations Not discussed in the paper.
NIPS
Title AutoST: Towards the Universal Modeling of Spatio-temporal Sequences

Abstract The analysis of spatio-temporal sequences plays an important role in many real-world applications, demanding a high model capacity to capture the interdependence among spatial and temporal dimensions. Previous studies provided separate network designs in three categories: spatial first, temporal first, and spatio-temporal synchronous. However, the manually designed heterogeneous models can hardly meet the spatio-temporal dependency capturing priorities of various tasks. To address this, we propose a universal modeling framework with three distinctive characteristics: (i) an attention-based network backbone, including the S2T Layer (spatial first), T2S Layer (temporal first), and STS Layer (spatio-temporal synchronous); (ii) a universal modeling framework, named UniST, with a unified architecture that enables flexible modeling priorities with the three proposed modules; (iii) an automatic search strategy, named AutoST, that automatically searches for the optimal spatio-temporal modeling priority via network architecture search. Extensive experiments on five real-world datasets demonstrate that UniST with any single type of our three proposed modules can achieve state-of-the-art performance, and that AutoST achieves overwhelming performance on top of UniST.

1 Introduction

Modeling and predicting the future of spatio-temporal (ST) sequences based on past observations has been extensively studied and successfully applied in many fields, such as road traffic [13], medical diagnosis [29], and meteorological research [24]. Traditional statistical methods typically require the input sequence to satisfy certain assumptions, which limits their ability to capture complex spatial-temporal dependencies. Recurrent neural network (RNN) methods [15] leverage the universal approximation property to build separate network branches to model the dependencies and make predictions with fusion gate blocks over the stacked branches. The intrinsic gradient flow in the back-propagation training process [7] may bring the ST dependency out of correspondence with the network branches' configuration, especially for deeper networks [11].

*Jianxin Li is the corresponding author. †BDBC: Beijing Advanced Innovation Center for Big Data and Brain Computing. ‡HKUST(GZ): Hong Kong University of Science and Technology (Guangzhou). §HKUST FYTRI: Guangzhou HKUST Fok Ying Tung Research Institute. 36th Conference on Neural Information Processing Systems (NeurIPS 2022).

Figure 2: Modeling orders of ST data (ideal, spatial-first, temporal-first, spatio-temporal synchronous, and ours). The stars are tasks with different spatial-temporal dependencies. The red lines are the spatio-temporal modeling procedures with anisotropic tendencies; a red line's darkness/lightness refers to the modeling ability of the model along the current modeling tendency.
Recently, Transformer-based models have shown larger modeling capacity in both spatial and temporal modeling [21, 32, 31], which motivates us to find a way to capture the ST dependency simultaneously within a universal framework. As shown in Fig.1, the ST dependency exists in sequences in three forms: spatial correspondences, temporal correspondences, and spatio-temporal correspondences. Taking road traffic forecasting as an example, previous research falls into three typical paradigms. (a) Spatial-first modeling [1, 27, 13]: predicting the traffic at the next road junction, which has strong connections to previous intersections, stops, and surrounding traffic. (b) Temporal-first modeling [23, 5]: predicting the traffic of a high school on Friday afternoon, which shows strong periodic relationships with the school's time schedule. (c) Spatio-temporal synchronous modeling [19, 12, 28, 6]: analyzing city-wide traffic, which is tightly entangled in both the spatial and temporal dimensions. In sequence modeling, the main problem is to align the network design with the natural spatio-temporal distribution. However, the distribution of ST dependencies varies and depends on the forecasting task and the corresponding datasets. The dependencies are mixed in a compound way when modeling ST sequences, and the three tasks in Fig.1(b) are representative ones. What makes it worse is that the prevalent modeling methods show an anisotropic tendency in capturing the ST dependency. If we use a spatial-first model on the three tasks in Fig.2(b), task 1's states are highly influenced by the surrounding information, with the periodic pattern only an underlying factor, so the spatial-first model fits it properly. Comparing the model's ability (red lines) with the ideal one in Fig.2(a), this kind of model will be insufficient for tasks 2 and 3. Similarly, suppose we use a temporal-first model on the three tasks in Fig.2(c). In that case, the model's ability only matches task 3, where the periodic pattern, rather than the spatial information, decides the states. The same analysis applies to the spatio-temporal synchronous situation, where the states are mainly influenced by complex associations across space and time, like semantic relationships. In this paper, we aim to propose a universal model that alleviates the modeling gap across different tasks. The contributions are: 1) We are the first to raise and address the modeling order problem in spatio-temporal forecasting tasks, by proposing a universal modeling framework, UniST, and an automatic structure search strategy, AutoST. 2) We propose three replaceable and unified attention-based modeling units, named S2T, T2S, and STS, which model spatio-temporal sequences with three different priorities: spatial first, temporal first, and spatio-temporal synchronous. 3) Extensive experiments on 5 datasets and 3 sequence forecasting tasks demonstrate that using any one of our three modeling units (S2T, T2S, and STS) alone outperforms the baseline methods, and that our framework together with AutoST achieves new state-of-the-art performance.

2 Related Work

Existing spatio-temporal forecasting methods can be roughly grouped into three categories: spatial-first, temporal-first, and spatio-temporal synchronous methods. Spatial-first: STG2Seq [1] uses stacked GCN layers to capture the entire input sequence, where each GCN layer operates on a limited historical time window, and the final results are concatenated together to make the forecast.
In the view of this paper, it belongs to spatial-first modeling. STGCN [27] proposes blocks that contain two temporal gated convolution layers with one spatial graph convolution layer in the middle, starting from a convolution-based temporal layer. DCRNN [13] forecasts traffic flow using diffusion convolution and recurrent units to capture spatial and temporal information successively. Temporal-first: Graph WaveNet [23] builds its basic modeling layers with two gated temporal convolution modules at the beginning, followed by a graph convolution module, which models from temporal to spatial. GSTNet [5] builds several layers of spatial-temporal blocks to produce the forecast, each consisting of a multi-resolution temporal module followed by a globally correlated spatial module. Spatio-temporal synchronous: STSGCN [19] constructs a spatio-temporal synchronous extraction module composed of graph convolutional networks. STFGNN [12] models spatio-temporal correlations simultaneously by fusing a dilated convolutional neural network with a gating mechanism and a spatio-temporal fusion graph module. ST-ResNet [28] applies convolution to a sequence of image-like 2D matrices to model space and time at the same time. ASTGCN [6] proposes a spatial-temporal convolution that simultaneously captures spatial patterns and temporal features.

3 Preliminary

3.1 Spatio-temporal Sequence Forecasting

Spatio-temporal sequence forecasting (STSF) is the task of predicting the future sequence of spatio-temporal inputs based on historical observations. Specifically, we are given a graph $G = (V, E, A)$, where $V$ and $E$ are the node set and edge set, $N$ is the number of nodes, and $A \in \mathbb{R}^{N \times N}$ is the adjacency matrix of $G$. If $v_i, v_j \in V$ and $(v_i, v_j) \in E$, then $A_{ij} = 1$; otherwise $A_{ij} = 0$. $\mathcal{X} = \{X_1, X_2, \ldots, X_T\}$ is an ST sequence of $T$ time steps, where $\mathcal{X} \in \mathbb{R}^{T \times N \times C}$. The snapshot at time step $t$ is denoted $x_t \in \mathbb{R}^{N \times C}$, where $C$ is the feature dimension of a node. The ST sequence forecasting problem can then be defined as: given $S$ time steps of historical observations on the input graph $G$, predict the future sequence of features on each node with a learning function $f$:

$$[X_{(t-S):t}, G] \xrightarrow{f} X_{(t+1):(t+P)},$$

where $X_{(t-S):t}$ and $X_{(t+1):(t+P)}$ are ST sequences of length $S$ and $P$, respectively.

3.2 Network Architecture Search

Network (neural) architecture search (NAS) refers to automated methods for generating and optimizing neural networks. A representative gradient-based approach is DARTS [16], which is the foundation of our proposed training framework. DARTS searches for optimal directed edge connections on a directed acyclic graph with predefined computing cells as nodes. The resulting connection of node $j$ is denoted as $x^{(j)} = \sum_{i<j} o^{(i,j)}(x^{(i)})$, where $o^{(i,j)}$ is an operator, e.g. a layer in a model, represented by a directed edge from node $i$ to node $j$. DARTS relaxes the discrete search space to be continuous, and uses bi-level optimization to learn a differentiable objective for the joint optimization of both the network architecture and the model weights. The objective function is:

$$\min_{\alpha} \mathcal{L}_{val}(w^*(\alpha), \alpha) \quad \text{s.t.} \quad w^*(\alpha) = \arg\min_{w} \mathcal{L}_{train}(w, \alpha),$$

where $\alpha$ is the architecture and $w$ is the model weights. In Section 4.4, we improve the design of the directed acyclic graph of the search architecture and the two-stage optimization of the architecture parameters.
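To make the bi-level objective concrete, the following is a minimal sketch of the alternating update used by first-order DARTS-style search. The names (`search_step`, the two optimizers, the batch layout) are our own illustrative assumptions, not the actual DARTS or AutoST implementation; the original DARTS additionally supports a second-order update via a virtual weight step.

```python
import torch

def search_step(model, train_batch, val_batch, w_opt, a_opt, loss_fn):
    """One first-order bi-level search step: update the architecture
    parameters alpha on validation data, then the weights w on
    training data. w_opt is built over the ordinary weights and
    a_opt over the architecture parameters (the alpha tensors)."""
    # 1) Update architecture parameters alpha on a validation batch.
    a_opt.zero_grad()
    x_val, y_val = val_batch
    loss_fn(model(x_val), y_val).backward()
    a_opt.step()

    # 2) Update model weights w on a training batch.
    w_opt.zero_grad()
    x_tr, y_tr = train_batch
    loss_fn(model(x_tr), y_tr).backward()
    w_opt.step()
```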
4 Methods

In this section, we first introduce two basic modeling units: the time series linear self-attention and the high-order mix graph convolution. We then propose three layers as different network backbones and build a universal modeling framework based on these three "atomic" layers. Finally, we propose an automatic searching strategy for spatio-temporal information fusion, which aims to find the optimal order of spatio-temporal modeling for various downstream tasks.

4.1 Spatial / Temporal Modeling Unit

4.1.1 Time Series Linear Self-Attention

The self-attention mechanism [21] has been widely used in natural language processing, computer vision, and time series forecasting. It is defined as:

$$\mathrm{Attention}(Q, K, V) = V' = \mathrm{Softmax}(QK^\top / \sqrt{d})\,V,$$

where $Q = XW^Q$, $K = XW^K$, $V = XW^V$, and the projection matrices $W^Q, W^K, W^V \in \mathbb{R}^{C \times D}$. However, the original self-attention suffers from high computational and memory cost, because the dot product of $Q$ and $K$ leads to $O(N^2)$ time and space complexity. [9] proposed linear self-attention, which represents the similarity function of $Q$ and $K$ by a kernel feature map $\phi(\cdot)$:

$$V'_i = \frac{\phi(Q_i)^\top \sum_{j=1}^{N} \phi(K_j) V_j^\top}{\phi(Q_i)^\top \sum_{j=1}^{N} \phi(K_j)}.$$

For each query $Q_i$, the two terms $\sum_{j=1}^{N} \phi(K_j) V_j^\top$ and $\sum_{j=1}^{N} \phi(K_j)$ are identical across queries and can be reused for efficient computation. Following [31], we use linear self-attention to represent time series features.

4.1.2 High-order Mix Graph Convolution

To acquire a better spatial information representation, we propose a high-order mix graph convolution for spatial information mixing and feature extraction of the original inputs. It is defined as:

$$\mathrm{HighOrder}(X, A, order) \stackrel{\mathrm{def}}{=} H^{order} = \begin{cases} X & \text{if } order = 0 \\ \mathrm{MixGC}(X, A) & \text{if } order = 1 \\ \mathrm{MixGC}(H^{(order-1)}, A) & \text{if } order > 1, \end{cases} \tag{1}$$

where $order$ denotes the total order of the graph convolution operations, i.e., how many hops of neighbor relationships to consider for each node. In this paper, we define the 1st-order mix convolution by combining the 1st-order ChebNet [10] and the Adaptive Diffusion Convolution [23]:

$$\mathrm{MixGC}(X, A) = \mathrm{ChebNet}(X, A) + \mathrm{AdapDC}(X, A) = \hat{A}XW_g + P_f X W_f + P_b X W_b + \hat{A}_{adp} X W_{adp},$$

where $\hat{A} = \tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}$ is the normalized adjacency matrix with self-loops. ChebNet focuses on 1st-order neighbor information, while AdapDC focuses on multi-hop information. $\tilde{A}$ is defined as $\tilde{A} = A + I$, where $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$ and $I$ is the identity matrix. $P_f = A / \mathrm{rowsum}(A)$ and $P_b = A^\top / \mathrm{rowsum}(A^\top)$ are the forward and backward state transition matrices, respectively. $\hat{A}_{adp}$ is an adaptive matrix for complementary spatial state information, calculated from two learnable node embedding matrices $E_1, E_2 \in \mathbb{R}^{N \times C}$ [20] as $\hat{A}_{adp} = \mathrm{Softmax}(\mathrm{ReLU}(E_1 E_2^\top))$.
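As a concrete illustration of the kernel trick above, here is a minimal sketch of linear self-attention with the commonly used feature map $\phi(x) = \mathrm{elu}(x) + 1$ from [9]. The tensor layout and function name are our own assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """Linear self-attention sketch.

    q, k: (batch, seq, dim); v: (batch, seq, dim_v).
    Cost is O(seq * dim * dim_v) rather than O(seq^2), because the
    key/value summary is computed once and reused for every query.
    """
    phi_q = F.elu(q) + 1.0                         # feature map, positive
    phi_k = F.elu(k) + 1.0
    kv = torch.einsum('bnd,bne->bde', phi_k, v)    # sum_j phi(K_j) V_j^T
    z = phi_k.sum(dim=1)                           # sum_j phi(K_j)
    num = torch.einsum('bnd,bde->bne', phi_q, kv)  # numerator per query
    den = torch.einsum('bnd,bd->bn', phi_q, z).unsqueeze(-1) + eps
    return num / den
```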
4.2 Unified Spatio-temporal Modeling Backbone

To address the differences in spatio-temporal dependency distributions during modeling, we first propose three novel modules, the S2T Layer, T2S Layer, and STS Layer, suited to the three typical spatio-temporal dependencies: spatial-first, temporal-first, and spatio-temporal synchronous, respectively. We design all three modeling modules to have the same input and output dimensions, which provides a solid foundation for the flexible and universal modeling that follows.

4.2.1 Spatial-first Modeling Layer

The spatial-first sequence modeling method, the S2T Layer, models from spatial to temporal. The spatial information between the nodes on the graph is first characterized within each single time slice. After that, node information at different times, whose spatial information has already been shared with its neighbors, is exchanged along the time dimension. As shown in Fig.3(a), the S2T Layer first applies two high-order mix graph convolutions, defined in Eq.(1), to the input spatio-temporal sequence $X^{L-1}$, obtaining two sequence representations with mixed spatial information. The key $K$ and value $V$ of the subsequent self-attention are then obtained by transformations with the parameter matrices $W^K$ and $W^V$, respectively, while the query $Q$ is obtained by transforming the original input ST sequence:

$$Q = X^{L-1}W^Q, \quad K = \mathrm{HighOrder}_1(X^{L-1}, A, order)\,W^K, \quad V = \mathrm{HighOrder}_2(X^{L-1}, A, order)\,W^V.$$

The original ST sequence and the new sequences with mixed spatial information are then processed by a multi-head linear self-attention, which learns temporal dependencies and exchanges information across time slices to obtain a further representation of the ST sequence:

$$Z = \mathrm{Attention}(Q, K, V). \tag{2}$$

The output is combined with the initial input through a residual connection and layer normalization, followed by a two-layer fully connected network for further ST representation learning. This network is applied separately and identically to each time position in the ST sequence, thus maintaining the continuous transfer of position-encoded information. Finally, the resulting ST sequence representation is again connected to the initial input with one more residual connection and layer normalization to obtain the output

$$X^L = \mathrm{Norm}(\max(0, \mathrm{Norm}(Z + X^{L-1})W_1 + b_1)W_2 + b_2).$$

4.2.2 Temporal-first Modeling Layer

This module, named the T2S Layer, models the ST sequence from temporal to spatial. Unlike spatial-first modeling, the original inputs are first projected into $Q, K, V$ by three weight matrices: $Q = X^{L-1}W^Q$, $K = X^{L-1}W^K$, and $V = X^{L-1}W^V$. The projections are first used to compute temporal representations with the time series linear self-attention in Eq.(2). The temporal representation at each node is then sent to the high-order mix graph convolution, together with the adjacency matrix, to fuse the temporal information from all neighbors: $Z' = \mathrm{HighOrder}(Z, A, order)$. Finally, the output representations pass through the same feed-forward and layer-normalization operations as in the S2T Layer.

4.2.3 Spatial-temporal Synchronous Layer

This module, named the STS Layer, aims to model the spatial and temporal information simultaneously. Unlike the former two modules, the inputs are directly used to compute spatial and temporal representations at the same time. For the temporal part, the original inputs are still projected into $Q, K, V$, and a linear self-attention produces the temporal representation. For the spatial part, the module takes the original inputs and the spatial information and applies the high-order mix graph convolution to construct the spatial representation:

$$\text{Temporal: } Z_1 = \mathrm{Attention}(Q, K, V), \qquad \text{Spatial: } Z_2 = \mathrm{HighOrder}(X^{L-1}, A, order).$$

The outputs are concatenated as $Z' = \mathrm{concat}[Z_1, Z_2]$, then passed through the same subsequent operations as the former two modules to produce the output $X^L$.
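To summarize the three modeling orders side by side, here is a schematic sketch; `attention` and `high_order_mix_gc` are passed-in callables standing for the linear self-attention and Eq.(1), the residual/normalization/feed-forward stages are omitted, and all signatures are illustrative assumptions rather than the authors' code.

```python
def s2t_forward(x, adj, order, Wq, Wk, Wv, attention, high_order_mix_gc):
    """Spatial first: graph-mix the keys/values, then attend over time."""
    q = x @ Wq
    k = high_order_mix_gc(x, adj, order) @ Wk
    v = high_order_mix_gc(x, adj, order) @ Wv
    return attention(q, k, v)

def t2s_forward(x, adj, order, Wq, Wk, Wv, attention, high_order_mix_gc):
    """Temporal first: attend over time, then graph-mix the result."""
    z = attention(x @ Wq, x @ Wk, x @ Wv)
    return high_order_mix_gc(z, adj, order)

def sts_forward(x, adj, order, Wq, Wk, Wv, attention, high_order_mix_gc,
                concat):
    """Synchronous: temporal and spatial branches in parallel,
    concatenated along the feature dimension."""
    z1 = attention(x @ Wq, x @ Wk, x @ Wv)
    z2 = high_order_mix_gc(x, adj, order)
    return concat(z1, z2)
```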
4.3 Universal Modeling Framework

Targeting the ST sequence forecasting task, we propose a unified ST sequence modeling framework (UniST) built on the proposed unified modeling backbones, as shown in Fig.4. It follows the encoder-decoder architecture and uses a unified structure with interchangeable, replaceable model units.

4.3.1 Spatio-temporal Embedding Layer

Since the Transformer model relies solely on self-attention for global alignment, positional embeddings [21] and extra embeddings [32] are needed to capture the spatio-temporal dependency. We introduce four types of embeddings, $E_P$, $E_V$, $E_S$, $E_T$, in Appendix A.

Fusion embedding. The four embeddings are summed into the final embedding added to the inputs: $E_F = E_P + E_V + E_S + E_T$. Note that the token embedding has shape $E_V \in \mathbb{R}^{T \times N \times d}$, while the other embeddings have shapes $E_P \in \mathbb{R}^{1 \times 1 \times d}$, $E_S \in \mathbb{R}^{1 \times N \times d}$, and $E_T \in \mathbb{R}^{T \times 1 \times d}$. When computing the sum, they are replicated and expanded by broadcasting over their missing dimensions.

4.3.2 Encoder

The encoder of UniST consists of multiple Spatio-Temporal Extractors ($\mathrm{STE}(\cdot)$), each of which can be chosen arbitrarily from {T2S Layer, S2T Layer, STS Layer}. All extractors are connected end to end, i.e., the output of one is the input of the next. To acquire a more diverse representation, the outputs of all extractors are added to form the final output of the encoder. Letting the output of the embedding layer be $X^0$, the encoder computes $X_{en} = \sum_{i=1}^{L} \mathrm{STE}^i(X^0)$, where $L$ is the number of spatio-temporal extractors.

4.3.3 Decoder

The decoder accepts the outputs of the encoder, i.e., the $L$ outputs of the $L$ spatio-temporal extractors. They are first added into a unified spatio-temporal representation, which then passes through two rounds of ReLU activation and linear projection to produce the final sequence forecast. Denoting by $X^\ell$ the output of extractor $\ell$, the decoder computes

$$\bar{Y} = X_{de} = \mathrm{Linear}(\mathrm{ReLU}(\mathrm{Linear}(\mathrm{ReLU}(\textstyle\sum_\ell X^\ell)))).$$
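The encoder/decoder wiring above is straightforward to express in code. The sketch below is a minimal PyTorch rendering under our own naming assumptions; the authors' actual implementation is in the linked repository.

```python
import torch
import torch.nn as nn

class UniSTEncoderDecoder(nn.Module):
    """Minimal UniST skeleton: L chained extractors whose outputs are
    summed, followed by a two-stage ReLU/Linear decoder head."""

    def __init__(self, extractors, d_model, out_dim):
        super().__init__()
        self.extractors = nn.ModuleList(extractors)  # S2T/T2S/STS layers
        self.fc1 = nn.Linear(d_model, d_model)
        self.fc2 = nn.Linear(d_model, out_dim)

    def forward(self, x0):
        outputs, h = [], x0
        for ste in self.extractors:          # end-to-end chaining
            h = ste(h)
            outputs.append(h)                # keep every extractor output
        x_en = torch.stack(outputs).sum(0)   # X_en = sum_i STE^i(X^0)
        return self.fc2(torch.relu(self.fc1(torch.relu(x_en))))
```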
4.4 Automated Search for UniST

Even with the unified ST sequence modeling framework UniST, an arbitrary modeling order built from the replaceable model units {T2S Layer, S2T Layer, STS Layer} may still yield a wrong network configuration. Considering the variety of downstream tasks, how can we build a universal model with an optimal configuration? We propose the Automated Spatio-Temporal modeling approach (AutoST), which learns the optimal combinatorial order suited to the spatio-temporal dependency of the task at hand. We design two schemes for layer combination. In this section, we first define the basic searching unit of AutoST and then introduce the two designs of AutoST with different searching schemes.

4.4.1 AutoST Cell

The basic searching unit in AutoST is the network cell. Here we define its structure and computing process.

Definition 1 (AutoST Cell). Let $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ be a directed acyclic graph (DAG), where $\mathcal{V}$ denotes the node set and each node is a representation coming from the output of a computation layer. The representation at node $i$ is defined as $H^i \in \mathbb{R}^{T \times |V| \times d}$, where $T$ is the length of the spatio-temporal sequence and $d$ is the feature dimension. The input of each AutoST Cell is denoted $H^0$, and the output of each cell is the summation of all nodes, i.e., all intermediate representations: $H^{out} = \sum_i^{|\mathcal{V}|} H^i$. On graph $\mathcal{G}$, the directed edge $(i,j)$ from node $i$ to node $j$ stands for a mixture of all candidate modeling modules $\mathcal{O} = \{\text{T2S Layer}, \text{S2T Layer}, \text{STS Layer}\}$, represented as $o^{(i,j)}$, so the representation at node $j$ can be written as $H^j = \sum_{i<j} o^{(i,j)}(H^i)$.

On each directed edge there is a set of weight parameters $\alpha^{(i,j)} = \{\alpha_o^{(i,j)} \mid o \in \mathcal{O}\}$, indicating the probability that the corresponding modeling module should be retained. The candidate modules are weighted by a softmax over these parameters:

$$H^j = \sum_{i<j} \sum_{o \in \mathcal{O}} \frac{\exp(\alpha_o^{(i,j)})}{\sum_{o' \in \mathcal{O}} \exp(\alpha_{o'}^{(i,j)})}\, o(H^i).$$

4.4.2 Sequential Stacking Search

[Figure 5: (a) The sequential stacking search scheme: each AutoST Cell contains nodes H0, H1, Hout with the candidate set {S2T Layer, T2S Layer, STS Layer}. (b) The hybrid assembling search scheme: a single cell with multiple nodes and the expanded candidate set {S2T Layer, T2S Layer, STS Layer, Identity, Zero}.]

Based on the proposed UniST, we propose two searching schemes to find better combinations of the modeling layers. The searching schemes are lossless replacements for the encoder of UniST: we simply replace the encoder's spatio-temporal extractors with AutoST Cells. The first scheme is the multi-layer sequential stacking searching scheme; the whole model using it is named AutoST1. As illustrated in Fig.5(a), it has a simple structure within each AutoST Cell but a more complicated stacking structure between cells. Each cell holds a DAG with three nodes: one input node $H^0$, one output node $H^{out}$, and an intermediate node $H^1$. Two directed edges are pre-defined from the former two nodes to the output node: $(H^0, H^{out})$ and $(H^1, H^{out})$. The searching space of this scheme is shown as the directed red dotted line, and the red dashed box is the search candidate set, comprising {T2S Layer, S2T Layer, STS Layer}. We apply two gradient-based network architecture search methods in the experiments: DARTS [16] and PAS [22]. After searching, each cell essentially becomes one of the three model units. This scheme allows cells to be stacked multiple times; therefore, the new encoder of the whole model becomes a sequential stack of different modeling layers.

4.4.3 Hybrid Assembling Search

The other searching scheme is called hybrid assembling searching. Its structure is similar to sequential stacking searching; however, in this scheme the encoder consists of a single AutoST Cell. Instead of stacking cells layer by layer, the search is conducted on a more complicated DAG within that single cell, shown in Fig.5(b). There are multiple nodes in the cell, generally 4-7 in the experiments. The pre-defined connections go from each node's output to the output node. The multiple directed red dotted lines show the searching space of this scheme. The candidate set is expanded with two operations: Identity and Zero. Identity means no modeling module on the edge, i.e., a direct connection with no operation. Zero means setting all the data passing through the edge to zero, i.e., no connection is made. This scheme lets the entire DAG form a complex and deep network structure. Through the complex structure design inside the cell, multiple spatio-temporal modeling modules are combined to fit the spatio-temporal dependency distribution of a target task.
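The softmax-weighted mixture on each edge can be sketched as follows. This is a generic DARTS-style mixed operation under our own naming, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MixedEdge(nn.Module):
    """One searchable edge: a softmax-weighted sum over candidate
    modules (here the S2T/T2S/STS layers, optionally plus Identity
    and Zero for the hybrid assembling scheme)."""

    def __init__(self, candidates):
        super().__init__()
        self.ops = nn.ModuleList(candidates)
        # One architecture weight alpha_o per candidate operation.
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(candidates)))

    def forward(self, h):
        weights = torch.softmax(self.alpha, dim=0)
        return sum(w * op(h) for w, op in zip(weights, self.ops))

# After search, an edge is discretized by keeping the candidate with
# the largest architecture weight:
#   best_op = edge.ops[edge.alpha.argmax()]
```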
5 Experiments

This section empirically evaluates the effectiveness of the UniST and AutoST models on short-term, medium-term, and long-term ST sequence forecasting tasks over five real-world datasets. Platform: Intel(R) Xeon(R) CPU 2.40GHz × 2 + NVIDIA Tesla V100 GPU (32 GB) × 4. The code is available at https://github.com/shuaibuaa/autost2022.

5.1 Datasets

To study the effect of various ST sequence forecasting methods under complex spatio-temporal distributions, five real-world datasets with different tasks and data states are selected. The statistics of the five datasets are listed in Table 3. METR-LA [8]: This traffic speed dataset contains 4 months of data, from March 1, 2012 to June 30, 2012, recorded by sensors at 207 locations on highways in Los Angeles County, USA. The data granularity is 5 minutes per point, and the spatial information provided includes the coordinates of each sensor and the distances between sensors. PEMS-BAY [14]: This traffic speed dataset comes from the California Transportation Agencies (CalTrans) Performance Measurement System (PeMS). It contains data recorded by 325 sensors in the Bay Area over 6 months, from January 1, 2017 to May 31, 2017, at a granularity of 5 minutes per point. The spatial information provided includes the coordinates of each sensor and the distances between sensors. PEMS-03/04/08 [3]: These three traffic datasets also come from the PeMS system, each recorded by sensors in a certain area of California. PEMS-03 contains data recorded by 358 sensors over 3 months, from September 1, 2018 to November 30, 2018. PEMS-04 contains 2 months of data recorded by 307 sensors, from January 1, 2018 to February 28, 2018. PEMS-08 contains 2 months of data recorded by 170 sensors, from July 1, 2016 to August 31, 2016. The granularity of all three datasets is 5 minutes per point, and the spatial information provided includes only the connectivity between sensors.

5.2 Main Results

Table 1 summarizes the ST sequence forecasting results. UniST_S, UniST_T, and UniST_ST stand for our UniST framework stacking three identical S2T Layers, T2S Layers, and STS Layers, respectively. The bold results in Table 1 show that our proposed UniST outperforms all baseline methods and achieves state-of-the-art results on all 9 tasks across the 5 datasets, with each of the three proposed layers. This demonstrates that our unified spatio-temporal modeling layers and the unified forecasting framework are more expressive than traditional methods. Specifically, compared with GMAN, which is also based on the self-attention mechanism, our methods achieve up to 18.41%, 15.31%, and 13.82% MAE decreases on the short-term, medium-term, and long-term forecasting tasks, respectively. Compared with the most advanced baseline, STFGNN, our methods gain 12.06%, 10.61%, and 9.43% MAE decreases on the short-, medium-, and long-term forecasting tasks. From the last two columns of Table 1, we can see that both AutoST1 and AutoST2 beat all other methods on every metric of every task, including our proposed UniST. Recall that the difference between AutoST and UniST lies in the encoder: UniST stacks identical layers chosen from {S2T Layer, T2S Layer, STS Layer}, while AutoST searches for a better combination and connection of the three layer types. This demonstrates that combining and integrating modules with different spatio-temporal modeling abilities can better handle uncertain spatio-temporal dependencies and model spatio-temporal sequences more comprehensively. At the same time, AutoST2 beats AutoST1 on every task, showing that a single-layer but internally complex connection search is more effective for these tasks. Moreover, the best method among all baselines is STFGNN, the only baseline in the spatio-temporal synchronous modeling category.
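For reference, the three reported metrics can be computed as in the sketch below; this is our own minimal implementation, with a small epsilon guard for MAPE that the paper may handle differently (traffic benchmarks often mask zero targets instead).

```python
import numpy as np

def evaluate(pred, target, eps=1e-8):
    """MAE, RMSE and MAPE (%) averaged over all nodes and horizons."""
    err = pred - target
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    mape = (np.abs(err) / np.maximum(np.abs(target), eps)).mean() * 100.0
    return mae, rmse, mape
```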
1. What is the focus and contribution of the paper on spatio-temporal forecasting? 2. What are the strengths of the proposed approach, particularly in terms of its universal framework and combination of spatial, temporal, and spatio-temporal synchronous approaches? 3. What are the weaknesses of the paper, especially regarding the experiment section? 4. Do you have any concerns or suggestions regarding the automatic search strategy used in the paper? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper proposes a novel universal framework for spatio-temporal forecasting. The framework combines spatial-first, temporal-first and spatio-temporal synchronous approaches in one network. These are implemented as the S2T Layer, T2S Layer and STS Layer, respectively. This is termed UniST. The paper further proposes a novel automatic search strategy via network architecture search. Experimental results on 5 datasets across 9 tasks outperform existing methods significantly. Strengths And Weaknesses Strengths Paper is well-written and easy to understand. Method is presented clearly with detailed explanations and good use of diagrams. Experiments are extensive and strongly support the main claims of the paper. Weakness Minor. The labeling of Table 1 is too brief. It should contain more information to explain the table content. Questions No question. Limitations Not applicable.
NIPS
Title AutoST: Towards the Universal Modeling of Spatio-temporal Sequences Abstract The analysis of spatio-temporal sequences plays an important role in many realworld applications, demanding a high model capacity to capture the interdependence among spatial and temporal dimensions. Previous studies provided separated network design in three categories: spatial first, temporal first, and spatio-temporal synchronous. However, the manually-designed heterogeneous models can hardly meet the spatio-temporal dependency capturing priority for various tasks. To address this, we proposed a universal modeling framework with three distinctive characteristics: (i) Attention-based network backbone, including S2T Layer (spatial first), T2S Layer (temporal first), and STS Layer (spatio-temporal synchronous). (ii) The universal modeling framework, named UniST, with a unified architecture that enables flexible modeling priorities with the proposed three different modules. (iii) An automatic search strategy, named AutoST, automatically searches the optimal spatio-temporal modeling priority by network architecture search. Extensive experiments on five real-world datasets demonstrate that UniST with any single type of our three proposed modules can achieve state-of-the-art performance. Furthermore, AutoST can achieve overwhelming performance with UniST. 1 Introduction Modeling and predicting the future of spatio-temporal (ST) sequences based on past observations has been extensively studied and has been successfully applied in many fields, such as road traffic [13], medical diagnosis [29], and meteorological research [24]. Traditional statistical methods typically require input sequence satisfying certain assumptions, which limits its ability in capturing the complex spatial-temporal dependency. Then, recurrent neural network (RNN) methods [15] leverage the universal approximation property to build separated network branches to model dependency and make predictions with fusion gate blocks from the stacking branches. The intrinsic gradient flow in the back-propagation training process [7] may bring the ST dependency into incorrespondence with the ⇤Jainxin Li is the corresponding author. †BDBC: Beijing Advanced Innovation Center for Big Data and Brain Computing. ‡HKUST(GZ): Hong Kong University of Science and Technology (Guangzhou). §HKUST FYTRI: Guangzhou HKUST Fok Ying Tung Research Institute. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). Sp at ia l Temporal Spatial-first task 1 task 2 task 3 Temporal Temporal-first task 1 task 2 task 3 Temporal Spatio-temporal sync. task 1 task 3 task 2 Temporal Ours task 1 task 2 task 3 Sp at ia l Temporal Ideal task 1 task 2 task 3 (d)(a) (b) (c) (e) Sp at ia l Temporal Spatial-first task 1 task 2 task 3 Sp at ia l Temporal Temporal-first task 1 task 2 task 3 Sp at ia l Temporal Spatio-temporal synchronous task 1 task 3 Sp at ia l Temporal Ours task 1 task 2 task 3 task 2 Sp at ia l Temporal Ideal task 1 task 2 task 3 Figure 2: Modeling orders of ST data. The stars are tasks with different spatial-temporal dependencies. The red lines are spatio-temporal modeling procedure with anisotropic tendency. The red lines’ color going darkness/lightness refers to the modeling ability of the model along with the current modeling tendency. network branches’ configuration, especially for the deeper network [11]. 
Recently, the Transformer-based models show larger modeling capacity in both spatial and temporal modeling [21, 32, 31], which motivates us to find a universal way to capture the ST dependency in a universal framework simultaneously. As shown in Fig.1, the ST dependency individually exists in sequences: spatial correspondences, temporal correspondences, and spatiotemporal correspondences. Take the road traffic forecasting as an example, previous research fall into three typical paradigms. (a) Spatial-first modeling [1, 27, 13]: Predicting the traffic for next road junction, which has strong connec- tions to previous intersections, stops, and surrounding traffic. (b) Temporal-first modeling [23, 5]: Predicting the traffic of a high school on Friday afternoon, which shows strong periodic relationships with the school’s times schedule. (c) Spatio-temporal synchronous modeling [19, 12, 28, 6]: Analyzing the city-wide traffic, which is tightly entangled in both spatial and temporal. During the sequences modeling, the main problem is to align the network design with the natural spatio-temporal distribution. However, the distribution of ST dependencies varies and depends on the forecasting task and corresponding datasets. They are mixed in a compound way when modeling ST sequences, and the three tasks in Fig.1(b) are the representative ones. What makes it worse is that, the prevalent modeling methods show anisotropic tendency to capture the ST dependency. If we use the spatial-first models on the three tasks in Fig.2(b), the task 1’s states are highly influenced by the surrounding information, and the periodic pattern is the underlying factors, which makes the spatial-first model fits it properly. We can compare the model ability (red lines) with the ideal one in Fig.2(a), this kind of model will be insufficient for task 2 and task 3. Similarly, suppose we use the temporal-first models on the three tasks in Fig.2(c). In that case, the model ability only matches task 3, where the periodic pattern decides the states other than the spatial information. The previous analysis also applies to the spatio-temporal synchronous situation, where the states are mainly influenced by the complex associations across the spatial and temporal, like semantic relationships. In this paper, we aim to propose a universal model that alleviates the the modeling gap on different tasks. The contributions are: 1) The first to raise and address the modeling order proplem in spatio-temperal forecasting tasks by proposing a universal modeling framework UniST and an automatic structure search strategy AutoST. 2) Proposing 3 replacable and unified attention-based modeling units named S2T, T2S and STS, which model spatio-time sequence with three different priorities: spatial first, temperal first and spatio-temperal synchronous. 3) Extensive experiments on 5 datasets and 3 sequence forcasting tasks demonstrate that only using our three modeling units (S2T, T2S, and STS) outperforms the baseline methods, and our framework together with AutoST achieves the new state-of-the-art performance. 2 Related Work Existing spatio-temporal forecasting methods can be roughly grouped into three categories: spatialfirst, temporal-first, and spatio-temporal synchronous methods. Spatial-first: STG2Seq [1] uses stacking GCN layers to capture the entire inputs sequence, where each GCN layer operates on a limited historical time window, and the final results are concatenated together to make forecasting. 
In the view of this paper, it belongs to spatial-first modeling. STGCN [27] propose the blocks that contains two temporal gated convolution layers with one spatial graph convolution layer in the middle, which starts from a convolution-based temporal layer. DCRNN [13] is proposed to forecast traffic flow using diffusion convolution and recurrent units to capture spatial and temporal information successively. Temporal-first: Graph WaveNet [23] built the basic modeling layers with two gated temporal convolution modules at the beginning and followed by a graph convolution module, which models from temporal to spatial. GSTNet [5] builds several layers of spatial-temporal blocks to produce the forecasting, which is consists of a multi-resolution temporal module followed by a global correlated spatial module. Spatio-temporal synchronous: STSGCN [19] construct a spatiotemporal synchronous extraction module composed of graph convolutional networks. STFGNN [12] modeling spatio-temporal correlations simultaneously by fusing a dilated convolutional neural network with a gating mechanism and a spatio-temporal fusion graph module. ST-ResNet [28] using convolution on a sequence of image-like 2D matrices to model spatio-temporal at the same time. ASTGCN [6] proposed a spatial-temporal convolution that simultaneously captures the spatial patterns and temporal features. 3 Preliminary 3.1 Spatio-temporal Sequence Forecasting Spatio-temporal sequence forecasting (STSF) is to predict the future sequence of spatio-temporal inputs based on the historical observations. Specifically, given a graph G = (V,E,A), where V and E are the node set and edge set, and N is the number of nodes, A 2 RN⇥N is the adjacency matrix of G. If vi, vj 2 V and (vi, vj) 2 E, Aij = 1, otherwise Aij = 0. X = {X1,X2, . . . ,XT } is a ST sequence of T time steps, where X 2 RT⇥N⇥C . The snapshot at time step t is denoted as xt 2 RN⇥C , where C is the feature dimension of a node. Then the ST sequence forecasting problem can be defined as: given S time steps historical observations of input graph G, the goal is to predict the future sequence of the features on each node with a learning function f : ⇥ X(t S):t, G] f! X(t+1):(t+P ), where X(t S):t and X(t+1):(t+P ) are the ST sequence with length S and P respectively. 3.2 Network Architecture Search Network (neural) architecture search (NAS) are automated methods for generating and optimizing neural networks. A representative gradient-based approach is DARTS [16], which is the foundation of our proposed training framework. DARTS aims to search optimal directed edge connections on a directed acyclic graph with predefined computing cells as nodes. The result connections of node j is denoted as x(j) = P i<j o (i,j) x (i) , where o(i,j) is an operator, e.g. layers in a model, represented by a directed edge from node i to node j. DARTS proposes a method to relax the discrete searching space to be continuous, and uses bi-level optimization to learn a differentiable objective on the joint optimization problem of both network architecture and model weights. The objective function is: min↵ Lval (w⇤(↵),↵) s.t. w⇤(↵) = argminw Ltrain(w,↵), where ↵ is the architecture, and w is the model weights. In Section 4.4, we improve the design of the directed acyclic graph of the search architecture, and the two-stage optimization of the architecture parameters. 4 Methods In this section, we firstly introduce two basic modeling units: the time series linear self-attention, and the high order mix graph convolution. 
Then we proposed three layers as different network backbones, and we build a universal modeling framework based on the tree “atomic” layers. Next, we propose an automatic searching strategy for spatio-temporal information fusion, which aimed for the optimal order of spatio-temporal modeling on various downstream tasks. 4.1 Spatial / Temporal Modeling Unit 4.1.1 Time Series Linear Self-Attention Self-attention mechanism [21] has been widely used in nature language processing, computer vision, and time series forecasting, which is defined as: Attention(Q,K,V) = V0 = Softmax(QK>/ p d)V, where Q = XWQ,K = XWK ,V = XWV , and the projection matrix WQ 2 RC⇥D,WK 2 RC⇥D,WV 2 RC⇥D. However, the original self-attention suffers from high computational and memory cost. Because the dot product computation of Q and K leads to O(N2) time and space complexity. [9] proposed linear self-attention, which represents the similarity function of Q and K in the self-attention by a kernel function: V0i = (Qi) T PN j=1 (Kj)V T j / (Qi) T PN j=1 (Kj). Such that for each query Qi, the two terms PN j=1 (Kj)Vj and PN j=1 (Kj) are the same and reused for efficient computing. Following [31], we use the technique of linear self-attention in representing time series features. 4.1.2 High-order Mix Graph Convolution To acquire better spatial information representation, we propose a high-order mix graph convolutional operation for spatial information mixing and feature extraction of the original inputs, it is defined as: HighOrder(X,A, order) def = Horder = 8 < : X if order = 0 MixGC(X,A) if order = 1 MixGC(H(order 1)A) if order > 1, (1) where order denotes the total order of the graph convolution operations, i.e., to consider orderhop neighbor relationship of each node. In this paper, we define the 1st-order mix convolutional operation by combining the 1st-order ChebNet [10] and the Adaptive Diffusion Convolution [23]: MixGC(X,A) = ChebNet(X,A) + AdapDC(X,A) = ÂXWg + PfXWf + PbXWb + ÂadpXWadp, where  = D 1/2ÃD 1/2 is a normalized adjacency matrix with self-loop. ChebNet focuses on 1st-order neighbor information, while AdapDC focuses on multi-hop information. à is defined as à = A + I, where Dii = P j Ãij , I is an identity matrix. Pf = A rowsum(A) , Pb = A> rowsum(A>) refers to a forward and backward state transition matrix, respectively. Âadp is an adaptive matrix for complementary spatial state information, which is calculated by two learnable node embedding matrices E1,E2 2 RN⇥C [20] as Âadp = Softmax(ReLU(E1E>2 )). 4.2 Unified Spatio-temporal Modeling Backbone In order to solve the problem of spatio-temporal dependency distribution differences in the modeling procedure, we first propose three novel modules: S2T Layer, T2S Layer, STS Layer, that are suitable for three typical spatio-temporal dependencies: spatial-first, temporal-first, spatio-temporal synchronous, respectively. We design all these three modeling module to have the same dimension of inputs and outputs. This provides a solid foundation for our later flexible and universal modeling. 4.2.1 Spatial-first Modeling Layer The spatial-first sequence modeling method, S2T Layer, models from spatial to temporal. The spatial information between the nodes on the graph is first characterized on a single slice. After that, node information at different times is exchanged along the time dimension, whose spatial information has been shared with its neighbors. 
As shown in Fig.3(a), S2T Layer first uses two high-order mix graph convolution defined in Eq.(1) to process the input spatio-temporal sequences XL 1 to obtain two sequence representations with mixed spatial information. Then the key K and value V of the input of the subsequent self-attention are obtained by a transformation using the parameter matrix WK ,WV , respectively, while the query Q is obtained by transforming the original input ST sequence using the parameter matrix as follows: Q = XL 1WQ,K = HighOrder1(XL 1,A, order)WK ,V = HighOrder2(XL 1,A, order)WV . Then the original ST sequence and the new sequence with mixed spatial information are processed using a multi-head linear self-attention, from which it learns temporal dependencies and exchanges information at different time slices to obtain further representations of the ST sequence: Z = Attention(Q,K,V) . (2) The output is concatenated with the initial input once for residuals and processed a layer normalization, followed by a two-layer fully connected network for further ST representation learning. This network is applied separately and identically to each point-in-time position in the ST sequence, thus maintaining the continuous transfer of position-encoded information. Finally, the resulting ST sequence representation is again connected to the initial input with one residual and layer normalization to obtain the output XL = Norm(max(0,Norm(Z+XL 1)W1 + b1)W2 + b2). 4.2.2 Temporal-first Modeling Layer This module is designed to model the ST sequence from temporal to spatial, named T2S Layer. Different from spatial-first modeling, at the beginning, the original inputs are projected into Q,K,V by three weight matrices as Q = XL 1WQ, K = XL 1WK , and V = XL 1WV . The projection results are used to calculate temporal representations at first, using the time series linear self-attention in Eq.(2). Then the temporal representation on each node are send to the high-order mix graph convolution, together with the adjacency matrix, to fusion the temporal information from every neighbors. Z0 = HighOrder(Z,A, order). Finally, the output representations are executed with the feed forward and layer normalization operations the same way as the S2T Layer. 4.2.3 Spatial-temporal Synchronous Layer This module named STS Layer, which aims to model the spatial and temporal information simultaneously. Different from the former two modules, the inputs are directly used to calculate spatial and temporal representations at the same time. For temporal modeling part, it still project the original inputs into Q,K,V, and execute a linear self-attention operation for a temporal representation. For the spatial part, it accepts the original inputs and the spatial information and uses high-order mix graph convolution operation to construct spatial representation. Temporal Z1 = Attention(Q,K,V), Spatial Z2 = HighOrder(XL 1,A, order). The outputs are concatenated together as: Z0 = concat[Z1,Z2]. Then it is executed with the following operations and output as XL similar with the former two modules. 4.3 Universal Modeling Framework Embedding Layer Targeting to the ST sequence forecasting task, we propose a unified ST sequence modeling framework (UniST) with the proposed unified modeling backbones in Fig.4, which follows the encoderdecoder architecture. It uses a unified architecture with interchangeable and replaceable mode units. 
4.3.1 Spatio-temporal Embedding Layer Since the Transformer model solely relies on the self-attention for global alignments, the positional embedding [21] and extra embeddings [32] are needed to capture spatio-temporal dependency. Then we introduce four types of embeddings EP , EV , ES , ET in Appendix A. Fusion embedding. The four embeddings are summed together as the final embedding added to the inputs: EF = EP +EV +ES +ET , note that the shape of token embedding EV 2 RT⇥N⇥d, while other embeddings’ shape are EP 2 R1⇥1⇥d, ES 2 R1⇥N⇥d, ET 2 RT⇥1⇥d. When calculating the summation, they will be replicated and expanded with broadcast on the respective missing dimensions. 4.3.2 Encoder The encoder of UniST consists of multiple Spatio-Temporal Extractors (STE(·)), which can be arbitrarily chosen from {T2S Layer, S2T Layer, STS Layer}. All extractors are connected end to end, i.e., the output of the previous one is the input of the next one. To acquire a more diversity representation, the outputs of each extractor are added to form the final output of the encoder. Let the outputs of the embedding layer be X0, the encoder is computed as: Xen = PL i=1 STE i(X0), where L refers to the number of spatio-temporal extractors. 4.3.3 Decoder The decoder accepts the output of encoder, i.e., L outputs from L spatio-temporal extractors. They are firstly added as a unified spatio-temporal representation. Then the results are through two times of ReLU activation and Linear projection, and produce the final sequence forecasting result. Denote X` as the output of extractor `, we have the calculation of decoder as: Ȳ = Xde = Linear(ReLU(Linear(ReLU( P ` X`)))). 4.4 Automated Search for UniST With the proposed unified ST sequence modeling framework UniST, it still suffers from the potential wrong network configuration problem, where we build an arbitrary modeling order with the replaceable model units {T2S Layer, S2T Layer, STS Layer}. Considering the various downstream tasks, how can we build a universal model with an optimal configuration? Here we propose the Automated Spatio-Temporal modeling approach (AutoST), which learns the optimal combinatorial order that suits the spatio-temporal dependency of the current task. We designed two schemes for layer combination. In this section, we first define the basic searching unit of AutoST, then we introduce two designs of AutoST with different searching schemes. 4.4.1 AutoST Cell The basic searching unit in AutoST is the network cell. Here we define its structure and computing process. Definition 1. AutoST Cell. Let G = (V, E) be a direct acyclic graph (DAG), V denotes the node set, each node refers to a representation comes from the outputs of a computation layer. The representation on node i is defined as H i 2 RT⇥|V|⇥d, where T is the length of spatio-temporal seqence, d is the Hi Hj Hout S2T Layer T2S Layer STS Layer Hi Hj Hout STS Layer Hi Hj Hout S2T Layer T2S Layer STS Layer Hi Hj Hout STS Layer feature dimension. The input of each AutoST Cell is denoted as H 0 , and the output of each cell is the summation of all nodes, i.e., all interval representations: Hout = P|V| i H i . On graph G, the directed edge (i, j) from node i to node j stands for a mixture of all candidate modeling modules O = {T2S Layer, S2T Layer, STS Layer}, and it is represented as o(i,j). So that the representation between node j and other nodes can be written as: Hj = P i<j o (i,j) H i . 
On each directed edge there exists a set of weight parameters α^{(i,j)} = {α_o^{(i,j)} | o ∈ O}, which indicate the probability that the corresponding modeling module should be retained. The mixed output on an edge is the softmax-weighted combination of the candidate modules: H^j = Σ_{i<j} Σ_{o∈O} [ exp(α_o^{(i,j)}) / Σ_{o'∈O} exp(α_{o'}^{(i,j)}) ] · o(H^i).
4.4.2 Sequential Stacking Search
Based on the proposed UniST, we propose two searching schemes to find better combinations of the modeling layers. Both are lossless replacements for the encoder of UniST: we simply replace the encoder's spatio-temporal extractors with AutoST Cells. The first is the multi-layer sequential stacking searching scheme; the whole model under this scheme is named AutoST1. As illustrated in Fig.5(a), it has a simple structure within each AutoST Cell but a more complicated stacking structure between cells. Each cell holds a DAG with three nodes: one input node H^0, one output node H_out, and an intermediate node H^1. Two directed edges are pre-defined from the former two nodes to the output node: (H^0, H_out) and (H^1, H_out). The searching space of this scheme is shown as the directed red dotted lines in the figure, and the red dashed box is the search candidate set {T2S Layer, S2T Layer, STS Layer}. We adopt two gradient-based network architecture search methods in the experiments, i.e., DARTS [16] and PAS [22]. After searching, each cell essentially becomes one of the three model units. This scheme allows multiple cells to be stacked, so the new encoder of the whole model becomes a sequential stack of different modeling layers.
4.4.3 Hybrid Assembling Search
The other searching scheme is called hybrid assembling searching. Its structure is similar to sequential stacking searching; however, in this scheme the encoder consists of a single AutoST Cell. Cells are not stacked layer by layer; instead, searching is conducted on a more complicated DAG inside the single cell, shown in Fig.5(b). The cell contains multiple nodes, generally 4–7 in the experiments, and the pre-defined connections run from each node's output to the output node. The multiple directed red dotted lines show the searching space. The candidate set is expanded with two operations: Identity and Zero. Identity places no modeling module on the edge, i.e., a direct connection with no operation; Zero sets all data passing through the edge to zero, i.e., no connection is made. This scheme makes the entire DAG form a complex and deep network structure. Through this structural design inside the cell, multiple spatio-temporal modeling modules are combined to fit the spatio-temporal dependency distribution of the target task.
5 Experiments
This section empirically evaluates the effectiveness of the UniST and AutoST models on short-term, medium-term, and long-term ST sequence forecasting tasks over five real-world datasets. Platform: Intel(R) Xeon(R) CPU 2.40GHz × 2 + NVIDIA Tesla V100 GPU (32 GB) × 4. The code is available at https://github.com/shuaibuaa/autost2022.
5.1 Datasets
To study the behavior of various ST sequence forecasting methods under complex spatio-temporal distributions, five real-world datasets with different tasks and data states are selected.
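Before turning to the datasets, here is a minimal sketch of the DARTS-style edge computation from Section 4.4.1: each edge keeps one architecture weight per candidate module and outputs their softmax-weighted mixture. The linear stand-ins for the candidates are illustrative assumptions; in AutoST they would be the S2T/T2S/STS layers (plus Identity and Zero in the hybrid scheme).

```python
import torch
import torch.nn as nn

class Zero(nn.Module):
    """The 'Zero' candidate: severs the edge by zeroing everything."""
    def forward(self, h):
        return torch.zeros_like(h)

class MixedOp(nn.Module):
    """Softmax-weighted mixture over the candidate modules on one edge (i, j).
    After search, the candidate with the largest alpha is retained."""
    def __init__(self, candidates):
        super().__init__()
        self.ops = nn.ModuleList(candidates)
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(candidates)))  # architecture weights

    def forward(self, h):
        w = torch.softmax(self.alpha, dim=0)
        return sum(wi * op(h) for wi, op in zip(w, self.ops))

d = 64
edge = MixedOp([nn.Linear(d, d), nn.Linear(d, d), nn.Linear(d, d),  # stand-ins for T2S/S2T/STS
                nn.Identity(), Zero()])
h = torch.randn(4, d)
print(edge(h).shape)        # torch.Size([4, 64])
```

In bilevel search (as in DARTS), the alphas are updated on validation loss while the module weights are updated on training loss; discretizing via argmax over alpha yields the final architecture.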
The statistics of the five datasets are listed in Table 3. METR-LA [8]: This traffic speed dataset contains 4 months of data, from March 1, 2012 to June 30, 2012, recorded by sensors at 207 different locations on highways in Los Angeles County, USA. The data granularity is 5 minutes per point, and the spatial information provided by the dataset includes the coordinates of each sensor and the distances between sensors. PEMS-BAY [14]: This traffic speed dataset comes from the California Transportation Agencies (CalTrans) Performance Measurement System (PeMS). It contains data recorded by 325 sensors in the Bay Area over 6 months, from January 1, 2017 to May 31, 2017, at a granularity of 5 minutes per point. The spatial information provided includes the coordinates of each sensor and the distances between sensors. PEMS-03/04/08 [3]: These three traffic datasets also come from the PeMS system, each recorded by sensors in a certain area of California. PEMS-03 contains data recorded by 358 sensors over 3 months, from September 1, 2018 to November 30, 2018. PEMS-04 contains 2 months of data recorded by 307 sensors, from January 1, 2018 to February 28, 2018. PEMS-08 contains 2 months of data recorded by 170 sensors, from July 1, 2016 to August 31, 2016. The granularity of all three datasets is 5 minutes per point, and the spatial information provided includes only the connectivity between sensors.
5.2 Main Results
Table 1 summarizes the ST sequence forecasting results. UniST_S, UniST_T, and UniST_ST stand for our UniST framework with three identical stacked layers of the S2T Layer, T2S Layer, and STS Layer, respectively. The bold results in Table 1 show that our proposed UniST outperforms all baseline methods and achieves state-of-the-art results on all 9 tasks over the 5 datasets, with all three proposed layers. This demonstrates that our proposed unified spatio-temporal modeling layers and the unified forecasting framework are more expressive than traditional methods. Specifically, compared with GMAN, which is also based on the self-attention mechanism, our methods achieve up to 18.41%, 15.31%, and 13.82% MAE reductions on the short-term, medium-term, and long-term forecasting tasks, respectively. Compared with the most advanced method, STFGNN, our methods gain 12.06%, 10.61%, and 9.43% MAE reductions on the short-, medium-, and long-term forecasting tasks. From the last two columns of Table 1, both AutoST1 and AutoST2 beat all other methods on every metric of every task, including our proposed UniST. Recall that the difference between AutoST and UniST lies in the encoder: UniST stacks identical layers from {S2T Layer, T2S Layer, STS Layer}, while AutoST searches for a better combination and connection of the three layer types. This demonstrates that combining and integrating modules with different spatio-temporal modeling abilities can better handle uncertain spatio-temporal dependencies and model spatio-temporal sequences more comprehensively. At the same time, AutoST2 beats AutoST1 on every task, showing that a single-layer scheme with complex internal connection searching is more effective for these tasks. Moreover, the best method among all baselines is STFGNN, the only baseline performing spatio-temporal synchronous modeling.
Comparing GMAN and Graph WaveNet, the representative methods in spatial-first and temporal-first modeling, respectively, we see that although they beat the other baselines, our proposed Uni-series models show better performance. From this perspective, the results match the design.
5.3 Ablation Study
In AutoST, we use gradient-based network architecture search methods to optimize the connections of the modeling modules, choosing DARTS and PAS as the NAS methods. We also run experiments comparing random search and AutoSTG [17], which likewise applies network architecture search to ST sequence prediction. The results are shown in Table 2. We conclude that our proposed AutoST outperforms AutoSTG under all searching methods. This is because AutoSTG uses finer-grained temporal convolution and graph convolution structures as candidates, whereas AutoST's candidate layers are designed to model the three different kinds of spatio-temporal dependency, making better use of the modeling ability of the three layers across different temporal and spatial relations. This reduces the searching space and the search time simultaneously, so AutoST achieves better efficiency and accuracy.

Table 2: The results of network searching methods.
Method     | METR-LA (60 min)      | PEMS-BAY (60 min)
           | RMSE   MAE   MAPE     | RMSE   MAE   MAPE
AutoSTG    | 7.27   3.47  /        | 4.38   1.92  /
AutoST1-R  | 6.12   2.97  8.32     | 3.69   1.62  3.65
AutoST2-R  | 6.08   2.92  8.26     | 3.68   1.62  3.66
AutoST1-D  | 6.09   2.85  7.96     | 3.74   1.68  3.72
AutoST2-D  | 6.16   3.25  8.48     | 3.64   1.60  3.61
AutoST1-P  | 6.03   2.85  7.88     | 3.60   1.57  3.52
AutoST2-P  | 6.00   2.78  7.79     | 3.58   1.52  3.48

5.4 Result Visualization
Forecasting Visualization: We randomly selected one day from the PEMS-BAY dataset and visually compared the forecasting results of our methods and the baseline methods. Typical results are shown in Fig.8; the selected time span, 13:00 to 19:00, represents the most typical rush-hour scene from afternoon to evening. Comparing the forecast curves against the ground truth, our proposed methods mostly outperform the two typical baseline methods when forecasting both smooth traffic and traffic jams. The bars at the bottom of the figures show the forecasting error of each method. All methods encounter an accuracy drop around 14:00 and 17:30, when the road state changes dramatically; however, our methods are more stable under these changes and adapt quickly to the new road state. For the complete visualization of the day, please refer to Fig.9 in the appendix. Learned Architecture Visualization: The learned architectures of AutoST1 and AutoST2 on the PEMS-08 dataset are shown in Fig.7 [Fig.7: (a) the search result of AutoST1, a sequential stack of searched cells (S2T, T2S, S2T, STS, STS); (b) the search result of AutoST2, a single cell with mixed T2S/S2T/STS connections among nodes H0–H3 and Hout].
We find that although UniST_T performs better when modeling with a single type of module, the search result of AutoST1 shows that the STS Layer occupies a larger share, and the resulting model achieves better performance than UniST_T. This demonstrates that reasonably combining and stacking multiple spatio-temporal dependency modeling methods can better fit the real spatio-temporal dependencies. In addition, AutoST2 learns complex connections among the four computation nodes within a single cell, and this learned architecture helps AutoST2 achieve state-of-the-art results on this task. Although we cannot yet explain why stacking the modules leads to better results, we see a potentially broad range of applications [25] for unified architecture searching of this kind.
6 Conclusion
In this work, we illustrated the modeling gap problem, especially the modeling order, in spatio-temporal analysis. We built three different layers, namely S2T, T2S, and STS, as new network modeling backbones, and proposed an automatic searching strategy to find the optimal modeling priority. Extensive experiments on five real-world datasets show clearly superior performance over SOTA baselines.
Acknowledgments
This work was supported by grants from the Natural Science Foundation of China (U20B2053, 62202029) and Foshan HKUST Projects (FSUST21-FYTRI01A, FSUST21-FYTRI02A). We thank the Beijing Advanced Innovation Center for Big Data and Brain Computing for providing computing infrastructure. This work was also sponsored by the CAAI-Huawei MindSpore Open Fund. We gratefully acknowledge the support of MindSpore, CANN (Compute Architecture for Neural Networks) and the Ascend AI Processor used for this research.
1. What is the focus and contribution of the paper regarding network architecture search? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its application to specific tasks? 3. Do you have any questions regarding the motivation behind certain design choices in the paper? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any minor errors or typos in the paper that the reviewer noticed?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper Two network architecture search algorithms are proposed, called the Automated Spatio-Temporal modeling approach: AutoST1 and AutoST2. They focus on automatically crafting spatio-temporal forecasting systems. Both search algorithms operate by selecting from a layer candidate set formed by three types of layer: the T2S Layer (Temporal-first Modeling Layer), the S2T Layer (Spatial-first Modeling Layer), and the STS Layer (Spatial-temporal Synchronous Layer). All three layers have exactly the same inputs and outputs, and operate by assembling, in different fashions, a Time Series Linear Self-Attention Unit and a High-order Mix Graph Convolution Unit. AutoST2 is the best-performing one; it creates a serial architecture of hidden layers, each of which can be a T2S, S2T, or STS layer. Results on five traffic forecasting datasets support the search algorithms. Strengths And Weaknesses Strengths: The idea of learning an architecture that understands whether spatial or temporal relations are more important to forecast a given signal is intriguing. Figure 2 has a very effective visualization, which makes the intuition of the authors crystal clear. Weaknesses: -- The experiments focus on very specific datasets, all of them representing the same problem (traffic forecasting). It would have been definitely better to have more heterogeneous tasks, for example pose forecasting, where the spatial part is the body skeleton (expressed as a graph) and the temporal part lets the body joints move according to anatomical constraints. An interesting comparison would have been: Theodoros Sofianos, Alessio Sampieri, Luca Franco and Fabio Galasso, Space-Time-Separable Graph Convolutional Network for Pose Forecasting, in Proc. International Conference on Computer Vision (ICCV), Virtual, October 2021. They use a Space-Time Separable GCN (STS-GCN, which is not the same as the STSGCN cited in the proposed paper as [19]). -- The complexity of AutoST1, and especially AutoST2, is not detailed. If complexity cannot be given, timings become very important. In particular, since AutoST2 has to look for multiple optimal layers at each cell, what is the time spent overall, and for completing each layer of Fig.5.b? -- The definition of Adaptive Diffusion Convolution is clear (ChebNet + ADC), but its motivation is not. A sentence on the rationale for this choice would be beneficial. -- In Section 3.2, it is not clear in the result connection formula what o^{(i,j)} is, and why i<j — with respect to which ordering? -- The rationale for why T2S, S2T, and STS have been built as described in 4.2.1, 4.2.2, and 4.2.3, with those precise architectures, is not given. MINOR: --Fig:3 S2T Layer: firstly model temporal... -->T2S Layer: firstly model temporal... --pag.3: is automated methods --> are automated methods --pag.3 our proposed trainning framework --> our proposed training framework --In the objective function of section 3.2, w and alpha are undefined. Questions See my question on the time complexity. Having five traffic forecasting datasets, most of them with very similar structure and content, is definitely a con. It would have been definitely better to take into account a dramatically different scenario/task (e.g., pose forecasting). Limitations No limitations or potential negative impacts have been taken into account by the authors. I think they should be, since managing traffic data could have a significant impact on privacy.
NIPS
Title The Physical Systems Behind Optimization Algorithms Abstract We use differential equations based approaches to provide some physics insights into analyzing the dynamics of popular optimization algorithms in machine learning. In particular, we study gradient descent, proximal gradient descent, coordinate gradient descent, proximal coordinate gradient, and Newton’s methods as well as their Nesterov’s accelerated variants in a unified framework motivated by a natural connection of optimization algorithms to physical systems. Our analysis is applicable to more general algorithms and optimization problems beyond convexity and strong convexity, e.g. Polyak-Łojasiewicz and error bound conditions (possibly nonconvex). 1 Introduction Many machine learning problems can be cast into an optimization problem of the following form: x* = argmin_{x∈X} f(x), (1.1) where X ⊆ R^d and f : X → R is a continuously differentiable function. For simplicity, we assume that f is convex or approximately convex (more on this later). Perhaps the earliest algorithm for solving (1.1) is the vanilla gradient descent (VGD) algorithm, which dates back to Euler and Lagrange. VGD is simple, intuitive, and easy to implement in practice. For large-scale problems, it is usually more scalable than more sophisticated algorithms (e.g. Newton). Existing state-of-the-art analysis shows that VGD achieves an O(1/k) convergence rate for smooth convex functions and a linear convergence rate for strongly convex functions, where k is the number of iterations [11]. Recently, a class of Nesterov’s accelerated gradient (NAG) algorithms have gained popularity in statistical signal processing and machine learning communities. These algorithms combine the vanilla gradient descent algorithm with an additional momentum term at each iteration. Such a modification, though simple, has a profound impact: the NAG algorithms attain faster convergence than VGD. Specifically, NAG achieves O(1/k²) convergence for smooth convex functions, and linear convergence with a better constant term for strongly convex functions [11]. Another closely related class of algorithms is randomized coordinate gradient descent (RCGD) algorithms. These algorithms conduct a gradient descent-type step in each iteration, but only with respect to a single coordinate. [∗ Work was done while the author was at Johns Hopkins University. This work is partially supported by the National Science Foundation under grant numbers 1546482, 1447639, 1650041 and 1652257, the ONR Award N00014-18-1-2364, the Israel Science Foundation grant #897/13, a Minerva Foundation grant, and by DARPA award W911NF1820267. † Corresponding author. 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.] RCGD has similar convergence rates to VGD, but has a smaller overall computational complexity, since the computational cost per iteration of RCGD is much smaller than that of VGD [10, 7]. More recently, [5, 2] applied Nesterov’s acceleration to RCGD, and proposed accelerated randomized coordinate gradient (ARCG) algorithms. Accordingly, they established similar accelerated convergence rates for ARCG. Another line of research focuses on relaxing the convexity and strong convexity conditions for alternative regularity conditions, including restricted secant inequality, error bound, Polyak-Łojasiewicz, and quadratic growth conditions. These conditions have been shown to hold for many optimization problems in machine learning, and faster convergence rates have been established (e.g.
[8, 6, 9, 20, 3, 4]). Although various theoretical results have been established, the algorithmic proofs of convergence and regularity conditions in these analyses rely heavily on algebraic tricks that are sometimes arguably mysterious to understand. To this end, a popular recent trend in the analysis of optimization algorithms has been to study gradient descent as a discretization of gradient flow; these approaches often provide a clear interpretation for the continuous approximation of the algorithmic systems [16, 17]. In [16], the authors propose a framework for studying discrete algorithmic systems under the limit of infinitesimal time step. They show that Nesterov’s accelerated gradient (NAG) algorithm can be described by an ordinary differential equation (ODE) under the limit that the time step tends to zero. In [17], the authors study a more general family of ODEs that essentially correspond to accelerated gradient algorithms. All these analyses, however, lack a natural interpretation in terms of physical systems behind the optimization algorithms. Therefore, they do not clearly explain why momentum leads to acceleration. Meanwhile, these analyses only consider general convex conditions and gradient descent-type algorithms, and are NOT applicable to either the aforementioned relaxed conditions or coordinate-gradient-type algorithms (due to the randomized coordinate selection). Our Contribution (I): We provide novel physics-based insights into the differential equation approaches for optimization. In particular, we connect the optimization algorithms to natural physical systems through differential equations. This allows us to establish a unified theory for understanding optimization algorithms. Specifically, we consider the VGD, NAG, RCGD, and ARCG algorithms. All of these algorithms are associated with damped oscillator systems with different particle masses and damping coefficients. For example, VGD corresponds to a massless particle system while NAG corresponds to a massive particle system. A damped oscillator system has a natural dissipation of its mechanical energy, and the decay rate of the mechanical energy in the system is connected to the convergence rate of the algorithm. Our results match the convergence rates of all algorithms considered here to those known in the existing literature. We show that for a massless system, the convergence rate depends only on the gradient (force field) and the smoothness of the function, whereas a massive particle system has an energy decay rate governed by the ratio between the mass and the damping coefficient. We further show that optimal algorithms such as NAG correspond to an oscillator system near critical damping; it is known in the physics literature that a critically damped system undergoes the fastest energy dissipation. We believe that this view can potentially help us design novel optimization algorithms in a more intuitive manner. As pointed out by the anonymous reviewers, some of the intuitions we provide are also presented in [13]; however, we give a more detailed analysis in this paper. Our Contribution (II): We provide new analysis for more general optimization problems beyond general convexity and strong convexity, as well as more general algorithms.
Specifically, we provide several concrete examples: (1) VGD achieves linear convergence under the Polyak-Łojasiewicz (PL) condition (possibly nonconvex), which matches the state-of-the-art result in [4]; (2) NAG achieves accelerated linear convergence (with a better constant term) under both general convexity and quadratic growth conditions, which matches the state-of-the-art result in [19]; (3) Coordinate-gradient-type algorithms share the same ODE approximation as gradient-type algorithms, and our analysis involves a more refined infinitesimal analysis; (4) Newton’s algorithm achieves linear convergence under the strongly convex and self-concordance conditions. See Table 1 for a summary. Due to space limitations, we present the extension to the nonsmooth composite optimization problem in the Appendix.

Table 1: Our contribution compared with [16, 17] (entries read [15]/[16]/Ours; R marks a result).
                   VGD       NAG       RCGD      ARCG      Newton
General Convex     --/--/R   R/R/R     --/--/R   --/--/R   --/R/--
Strongly Convex    --/--/R   --/--/R   --/--/R   --/--/R   --/--/R
Proximal Variants  --/--/R   R/--/R    --/--/R   --/--/R   --/--/R
PL Condition       --/--/R   --/--/R   --/--/R   --/--/R   --/--/--
Physical Systems   --/--/R   --/--/R   --/--/R   --/--/R   --/--/R

Recently, an independent work considered a framework similar to ours for analyzing first-order optimization algorithms [18]; while the focus there is on bridging the gap between discrete algorithmic analysis and continuous approximation, we focus on understanding the physical systems behind the optimization algorithms. Both perspectives are essential and complementary to each other. Before we proceed, we first introduce assumptions on the objective f. Assumption 1.1 (L-smooth). There exists a constant L > 0 such that for any x, y ∈ R^d, we have ‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖. Assumption 1.2 (µ-strongly convex). There exists a constant µ such that for any x, y ∈ R^d, we have f(x) ≥ f(y) + ⟨∇f(y), x − y⟩ + (µ/2)‖x − y‖². Assumption 1.3 (L_max-coordinate-smooth). There exists a constant L_max such that for any x, y ∈ R^d, we have |∇_j f(x) − ∇_j f(x_{\j}, y_j)| ≤ L_max|x_j − y_j| for all j = 1, ..., d. The L_max-coordinate-smooth condition has been shown to be satisfied by many machine learning problems such as ridge regression and logistic regression. For convenience, we define κ = L/µ and κ_max = L_max/µ. Note that we also have L_max ≤ L ≤ d·L_max and κ_max ≤ κ ≤ d·κ_max. 2 From Optimization Algorithms to ODE We develop a unified representation for the continuous approximations of the aforementioned optimization algorithms. Our analysis is inspired by [16], where the NAG algorithm for general convex functions is approximated by an ordinary differential equation under the limit of infinitesimal time step. We start with VGD and NAG, and later show that RCGD and ARCG can also be approximated by the same ODE. For self-containedness, we present a brief review of popular optimization algorithms in Appendix A (VGD, NAG, RCGD, ARCG, and Newton). 2.1 A Unified Framework for Continuous Approximation Analysis By considering an infinitesimal step size, we rewrite VGD and NAG in the following generic form: x^{(k)} = y^{(k−1)} − η∇f(y^{(k−1)}) and y^{(k)} = x^{(k)} + α(x^{(k)} − x^{(k−1)}). (2.1) For VGD, α = 0; for NAG, α = (√(1/(µη)) − 1)/(√(1/(µη)) + 1) when f is strongly convex, and α = (k − 1)/(k + 2) when f is general convex. We then rewrite (2.1) as (x^{(k+1)} − x^{(k)}) − α(x^{(k)} − x^{(k−1)}) + η∇f(x^{(k)} + α(x^{(k)} − x^{(k−1)})) = 0. (2.2) When considering the continuous-time limit of the above equation, it is not immediately clear how the continuous time is related to the iteration index k.
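A direct transcription of the unified update (2.1) makes the role of the momentum parameter α tangible; the quadratic test function and the parameter values are illustrative assumptions.

```python
import numpy as np

def generic_step(x, x_prev, grad, eta, alpha):
    """One step of (2.1): y = x + alpha*(x - x_prev), then x+ = y - eta*grad(y).
    alpha = 0 recovers VGD; the NAG choices of alpha add momentum."""
    y = x + alpha * (x - x_prev)
    return y - eta * grad(y), x

grad = lambda x: 2.0 * x                 # gradient of f(x) = ||x||^2
x, x_prev = np.ones(2), np.ones(2)
for _ in range(100):
    x, x_prev = generic_step(x, x_prev, grad, eta=0.1, alpha=0.5)
print("f(x) after 100 steps:", float(x @ x))
```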
We thus let h denote the time scaling factor and study the possible choices of h later on. With this, we define a continuous time variable t = kh with X(t) = x^{(⌈t/h⌉)} = x^{(k)}, (2.3) where k is the iteration index, and X(t) from t = 0 to t = ∞ is a trajectory characterizing the dynamics of the algorithm. Throughout the paper, we may omit (t) if it is clear from the context. Note that our definition in (2.3) is very different from [16], where t is defined as t = k√η, i.e., fixing h = √η. There are several advantages to the new definition: (1) It leads to a unified analysis for both VGD and NAG. Specifically, if we followed the same notion as [16], we would need to redefine t = kη for VGD, which differs from t = k√η for NAG; (2) It is more flexible, and leads to a unified analysis for both gradient-type (VGD and NAG) and coordinate-gradient-type algorithms (RCGD and ARCG), regardless of their different step sizes, e.g., η = 1/L for VGD and NAG, and η = 1/L_max for RCGD and ARCG; (3) It is equivalent to [16] only when h = √η. We will show later, however, that h = Θ(√η) is a natural requirement of a massive particle system rather than an artificial choice of h. We then proceed to derive the differential equation for (2.2). By the Taylor expansions
(x^{(k+1)} − x^{(k)}) = Ẋ(t)h + (1/2)Ẍ(t)h² + o(h),
(x^{(k)} − x^{(k−1)}) = Ẋ(t)h − (1/2)Ẍ(t)h² + o(h), and
η∇f[x^{(k)} + α(x^{(k)} − x^{(k−1)})] = η∇f(X(t)) + O(ηh),
where Ẋ(t) = dX(t)/dt and Ẍ(t) = d²X/dt², we can rewrite (2.2) as
((1 + α)h²/(2η)) Ẍ(t) + ((1 − α)h/η) Ẋ(t) + ∇f(X(t)) + O(h) = 0. (2.4)
Taking the limit h → 0, we rewrite (2.4) in a more convenient form,
m Ẍ(t) + c Ẋ(t) + ∇f(X(t)) = 0. (2.5)
Here (2.5) describes exactly a damped oscillator system in d dimensions with m := ((1 + α)/2)·(h²/η) as the particle mass, c := (1 − α)h/η as the damping coefficient, and f(x) as the potential field. Let us now consider how to choose h for different settings. The basic principle is that both m and c must remain finite under the limit h, η → 0; in other words, the physical system must be valid. Take VGD as an example, for which α = 0. In this case, the only valid setting is h = Θ(η), under which m → 0 and c → c₀ for some constant c₀. We call such a particle system massless. For NAG, it can likewise be verified that only h = Θ(√η) results in a valid physical system, and it is massive (0 < m < ∞, 0 ≤ c < ∞). Therefore, we provide a unified framework for choosing the correct time scaling factor h. 2.2 A Physical System: Damped Harmonic Oscillator In classical mechanics, the harmonic oscillator is one of the first mechanical systems that admits an exact solution. This system consists of a massive particle and a restoring force; a typical example is a massive particle connected to a massless spring. The spring always tends to stay at the equilibrium position: when it is stretched or compressed, a force acts on the object that stretches or compresses it, always pointing toward the equilibrium position. The energy stored in the spring is V(X) := (1/2)KX², where X denotes the displacement of the spring and K is the Hooke’s constant of the spring. Here V(x) is called the potential energy in the physics literature. [Figure: masses on springs displaced by x₁ and x₂, with restoring forces F₁ = kx₁ and F₂ = kx₂ about the equilibrium position, and damping coefficient c.] One natural way to stop the particle at the equilibrium is to add damping to the system, which dissipates the mechanical energy, just like real-world mechanics.
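A numerical sketch of the damped oscillator (2.5) for a one-dimensional quadratic potential; the semi-implicit Euler integrator and all parameter values are illustrative assumptions. Without damping the mechanical energy is (approximately) conserved; with damping it dissipates.

```python
def final_energy(m, c, K, x0=1.0, v0=0.0, dt=1e-3, T=10.0):
    """Semi-implicit Euler for m*x'' + c*x' + K*x = 0; returns the
    mechanical energy E(T) = 0.5*m*v^2 + 0.5*K*x^2 at the final time."""
    x, v = x0, v0
    for _ in range(int(T / dt)):
        v += dt * (-(c * v + K * x) / m)
        x += dt * v
    return 0.5 * m * v**2 + 0.5 * K * x**2

print("undamped  E(10):", final_energy(m=1.0, c=0.0, K=1.0))   # ~ conserved
print("damped    E(10):", final_energy(m=1.0, c=1.0, K=1.0))   # dissipated
```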
A simple damping is a force proportional to the negative velocity of the particle (e.g., submerging the system in a viscous fluid), defined as F_f = −cẊ, where c is the viscous damping coefficient. Suppose the potential energy of the system is f(x); then the differential equation of the system is
m Ẍ + c Ẋ + ∇f(X) = 0. (2.6)
For the quadratic potential, i.e., f(x) = (K/2)‖x − x*‖², the energy exhibits exponential decay, i.e., E(t) ∝ exp(−ct/(2m)) for an under damped or nearly critically damped system (e.g., c² ≲ 4mK). For an over damped system (i.e., c² > 4mK), the energy decay is
E(t) ∝ exp( −(1/2)[ c/m − √(c²/m² − 4K/m) ] t ).
For extremely over damped cases, i.e., c² ≫ 4mK, we have c/m − √(c²/m² − 4K/m) → 2K/c. This decay does not depend on the particle mass: the system behaves as if the particle had no mass. In the language of optimization, the corresponding algorithm has linear convergence. Note that the convergence rate depends only on the ratio c/m and does not depend on K when the system is under damped or critically damped. The fastest convergence rate is obtained when the system is critically damped, c² = 4mK. 2.3 Sufficient Conditions for Convergence For notational simplicity, we assume that x* = 0 is a global minimum of f with f(x*) = 0. The potential energy of the particle system is simply defined as V(t) := V(X(t)) := f(X(t)). For an algorithm to converge to the optimum, a sufficient condition is that the corresponding potential energy V decreases over time; the rate of decrease determines the convergence rate of the corresponding algorithm. Theorem 2.1. Let γ(t) > 0 be a nondecreasing function of t and Γ(t) ≥ 0 be a nonnegative function. Suppose that γ(t) and Γ(t) satisfy
d(γ(t)(V(t) + Γ(t)))/dt ≤ 0 and lim_{t→0+} γ(t)(V(t) + Γ(t)) < ∞.
Then the convergence rate of the algorithm is characterized by 1/γ(t). Proof. By d(γ(t)(V(t) + Γ(t)))/dt ≤ 0, we have γ(t)(V(t) + Γ(t)) ≤ γ(0+)(f(X(0+)) + Γ(0+)). This further implies f(X) ≤ V(t) + Γ(t) ≤ γ(0+)(f(X(0+)) + Γ(0+))/γ(t). In words, γ(t)[V(t) + Γ(t)] serves as a Lyapunov function of the system. We say that an algorithm is (1/γ)-convergent if the potential energy decay rate is O(1/γ). For example, γ(t) = e^{at} corresponds to linear convergence, and γ(t) = at corresponds to sublinear convergence, where a is a constant independent of t. In the following section, we apply Theorem 2.1 to different problems by choosing different γ’s and Γ’s. 3 Convergence Rate in Continuous Time We derive the convergence rates of different algorithms for different families of objective functions. Given our proposed framework, we only need to find γ and Γ to characterize the energy decay. 3.1 Convergence Analysis of VGD We study the convergence of VGD for two classes of functions: (1) general convex functions — [11] has shown that VGD achieves O(L/k) convergence for general convex functions; (2) a class of functions satisfying the Polyak-Łojasiewicz (PŁ) condition, defined as follows [14, 4]. Assumption 3.1. We say that f satisfies the µ-PŁ condition if there exists a constant µ such that for any x ∈ R^d, we have 0 < f(x)/‖∇f(x)‖² ≤ 1/(2µ). [4] has shown that the PŁ condition is the weakest among the following conditions: strong convexity (SC), essential strong convexity (ESC), weak strong convexity (WSC), restricted secant inequality (RSI) and error bound (EB). Thus, the convergence analysis for the PŁ condition naturally extends to all the above conditions.
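A quick numerical check of the damping regimes discussed in Section 2.2: sweeping the damping coefficient c for fixed m and K shows the fastest energy decay at critical damping c² = 4mK. The integrator and parameter values are illustrative assumptions.

```python
def final_energy(c, m=1.0, K=1.0, x0=1.0, dt=1e-4, T=10.0):
    # integrate m*x'' + c*x' + K*x = 0 (semi-implicit Euler), report E(T)
    x, v = x0, 0.0
    for _ in range(int(T / dt)):
        v += dt * (-(c * v + K * x) / m)
        x += dt * v
    return 0.5 * m * v**2 + 0.5 * K * x**2

for label, c in [("under-damped (c^2 < 4mK)", 0.5),
                 ("critical     (c^2 = 4mK)", 2.0),
                 ("over-damped  (c^2 > 4mK)", 8.0)]:
    print(label, "->", final_energy(c))   # the critical case decays fastest
```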
Please refer to [4] for more detailed definitions and analyses as well as various examples satisfying this condition in machine learning. 3.1.1 Sublinear Convergence for General Convex Functions By choosing Γ(t) = c‖X‖²/(2t) and γ(t) = t, we have
d(γ(t)(V(t) + Γ(t)))/dt = f(X(t)) + t⟨∇f(X(t)), Ẋ(t)⟩ + ⟨X(t), cẊ(t)⟩ = f(X(t)) − ⟨∇f(X(t)), X(t)⟩ − (t/c)‖∇f(X(t))‖² ≤ 0,
where the last inequality follows from the convexity of f. Thus, Theorem 2.1 implies
f(X(t)) ≤ c‖x₀‖²/(2t). (3.1)
Plugging t = kh and c = h/η into (3.1) and setting η = 1/L, we match the convergence rate in [11]:
f(x^{(k)}) ≤ c‖x₀‖²/(2kh) = L‖x₀‖²/(2k). (3.2)
3.1.2 Linear Convergence Under the Polyak-Łojasiewicz Condition Equation (2.5) implies Ẋ = −(1/c)∇f(X(t)). By choosing Γ(t) = 0 and γ(t) = exp(2µt/c), we obtain
d(γ(t)(V(t) + Γ(t)))/dt = γ(t)( (2µ/c) f(X(t)) + ⟨∇f(X(t)), Ẋ(t)⟩ ) = γ(t)( (2µ/c) f(X(t)) − (1/c)‖∇f(X(t))‖² ).
By the µ-PŁ condition, 0 < f(X(t))/‖∇f(X(t))‖² ≤ 1/(2µ) for some constant µ and any t, so d(γ(t)(V(t) + Γ(t)))/dt ≤ 0. By Theorem 2.1, for some constant C′ depending on x₀, we obtain
f(X(t)) ≤ C′ exp(−2µt/c), (3.3)
which matches the behavior of an extremely over damped harmonic oscillator. Plugging t = kh and c = h/η into (3.3) and setting η = 1/L, we match the convergence rate in [4]:
f(x_k) ≤ C exp(−(2µ/L) k) (3.4)
for some constant C depending on x^{(0)}. 3.2 Convergence Analysis of NAG We study the convergence of NAG for a class of convex functions satisfying the Polyak-Łojasiewicz (PŁ) condition. The convergence of NAG for general convex functions has been studied in [16] and is therefore omitted. [11] has shown that NAG achieves linear convergence for strongly convex functions. Our analysis shows that the strong convexity can be relaxed, as it is for VGD; in contrast to VGD, however, NAG requires f to be convex. For an L-smooth convex function satisfying the µ-PŁ condition, the particle mass and damping coefficient are m = h²/η and c = 2√µ·h/√η = 2√(mµ). By [4], under convexity, PŁ is equivalent to quadratic growth (QG). Formally, we assume that f satisfies the following condition. Assumption 3.2. We say that f satisfies the µ-QG condition if there exists a constant µ such that for any x ∈ R^d, we have f(x) − f(x*) ≥ (µ/2)‖x − x*‖². We then proceed with the proof for NAG. We first define two parameters, λ and σ. Let γ(t) = exp(λct) and Γ(t) = (m/2)‖Ẋ + σcX‖². Given properly chosen λ and σ, we show that the required condition in Theorem 2.1 is satisfied. Recall that our proposed physical system has kinetic energy (m/2)‖Ẋ(t)‖²; in contrast to an undamped system, NAG takes an effective velocity Ẋ + σcX in the viscous fluid. By simple manipulation,
d(V(t) + Γ(t))/dt = ⟨∇f(X), Ẋ⟩ + m⟨Ẋ + σcX, Ẍ + σcẊ⟩.
We then observe
exp(−λct) · d(γ(t)(V(t) + Γ(t)))/dt = λc f(X) + (λcm/2)‖Ẋ + σcX‖² + d(V(t) + Γ(t))/dt
≤ λc(1 + mσ²c²/µ) f(X) + ⟨Ẋ, (λcm/2 + mσc)Ẋ + ∇f(X) + mẌ⟩ + ⟨X, (λσmc² + mσ²c²)Ẋ + mσcẌ⟩.
Since c² = 4mµ, we argue that if positive σ and λ satisfy
m(λ + σ) = 1 and λ(1 + mσ²c²/µ) ≤ σ, (3.5)
then d(γ(t)(V(t) + Γ(t)))/dt ≤ 0 is guaranteed. Indeed, we obtain
⟨Ẋ, (λcm/2 + mσc)Ẋ + ∇f(X) + mẌ⟩ = −(λmc/2)‖Ẋ‖² ≤ 0 and ⟨X, (λσmc² + mσ²c²)Ẋ + mσcẌ⟩ = −σc⟨X, ∇f(X)⟩.
By the convexity of f, we have λc(1 + mσ²c²/µ) f(X) − σc⟨X, ∇f(X)⟩ ≤ σc f(X) − σc⟨X, ∇f(X)⟩ ≤ 0. To make (3.5) hold, it suffices to set σ = 4/(5m) and λ = 1/(5m). By Theorem 2.1, we obtain
f(X(t)) ≤ C″ exp(−ct/(5m)) (3.6)
for some constant C″ depending on x(0).
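The discrete counterpart of the rate in (3.4) can be checked numerically. The quadratic objective below is an illustrative assumption; it satisfies the µ-PŁ condition with µ equal to its smallest eigenvalue, and for this quadratic the constant C = f(x₀) suffices.

```python
import numpy as np

H = np.diag([50.0, 1.0])            # f(x) = 0.5 x^T H x; L = 50, PL constant mu = 1
f = lambda x: 0.5 * x @ H @ x
L, mu = 50.0, 1.0
eta = 1.0 / L
x0 = np.array([1.0, 1.0])
x, f0 = x0.copy(), 0.5 * x0 @ H @ x0
for k in range(1, 201):
    x = x - eta * (H @ x)           # vanilla gradient descent
    if k % 50 == 0:
        bound = f0 * np.exp(-2.0 * mu / L * k)   # the rate in (3.4) with C = f(x0)
        print(k, f"f = {f(x):.3e}  bound = {bound:.3e}  holds = {f(x) <= bound}")
```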
Plugging t = hk, m = h²/η, c = 2√(mµ), and η = 1/L into (3.6), we have
f(x_k) ≤ C″ exp(−(2/5)√(µ/L) k). (3.7)
Compared with VGD, NAG improves the constant term in the convergence rate for convex functions satisfying the PŁ condition from L/µ to √(L/µ). This matches the algorithmic proof of [11] for strongly convex functions, and of [19] for convex functions satisfying the QG condition. 3.3 Convergence Analysis of RCGD and ARCG Our proposed framework also justifies the convergence analysis of the RCGD and ARCG algorithms. We show that the trajectory of the RCGD algorithm converges weakly to that of the VGD algorithm, and thus our analysis for VGD applies directly. Conditioning on x^{(k)}, the updating formula for RCGD is
x_i^{(k)} = x_i^{(k−1)} − η∇_i f(x^{(k−1)}) and x_{\i}^{(k)} = x_{\i}^{(k−1)}, (3.8)
where η is the step size and i is selected uniformly at random from {1, 2, ..., d}. Fixing a coordinate i, we compute its expectation and variance as
E( x_i^{(k)} − x_i^{(k−1)} | x^{(k)} ) = −(η/d)∇_i f(x^{(k−1)}) and Var( x_i^{(k)} − x_i^{(k−1)} | x^{(k)} ) = (η²(d−1)/d²)‖∇_i f(x^{(k−1)})‖².
We define the infinitesimal time scaling factor h ≤ η as in Section 2.1 and denote X̃^h(t) := x^{(⌊t/h⌋)}. We prove that for each i ∈ [d], X̃_i^h(t) converges weakly to a deterministic function X_i(t) as η → 0. Specifically, we rewrite (3.8) as
X̃^h(t + h) − X̃^h(t) = −η∇_i f(X̃^h(t)). (3.9)
Taking the limit η → 0 at a fixed time t, we have |X_i(t + h) − X_i(t)| = O(η) and
(1/η) E( X̃^h(t + h) − X̃^h(t) | X̃^h(t) ) = −(1/d)∇f(X̃^h(t)) + O(h).
Since ‖∇f(X̃^h(t))‖² is bounded at time t, we have (1/η) Var( X̃^h(t + h) − X̃^h(t) | X̃^h(t) ) = O(h). Using an infinitesimal generator argument from [1], we conclude that X̃^h(t) converges weakly to X(t) as h → 0, where X(t) satisfies Ẋ(t) + (1/d)∇f(X(t)) = 0 and X(0) = x^{(0)}. Since η ≤ 1/L_max, by (3.4) we have
f(x_k) ≤ C₁ exp(−(2µ/(d·L_max)) k)
for some constant C₁ depending on x^{(0)}. The analysis for general convex functions follows similarly; one easily matches the convergence rate as in (3.2):
f(x^{(k)}) ≤ c‖x₀‖²/(2kh) = d·L_max‖x₀‖²/(2k).
Repeating the above argument for ARCG, we obtain that the trajectory X̃^h(t) converges weakly to X(t), where X(t) satisfies m Ẍ(t) + c Ẋ(t) + ∇f(X(t)) = 0. For general convex functions, m = h²/η′ and c = 3m/t, where η′ = η/d. By the analysis of [16], we have f(x_k) ≤ C₂ d/k² for some constant C₂ depending on x^{(0)} and L_max. For convex functions satisfying the µ-QG condition, m = h²/η′ and c = 2√(mµ)/d; by (3.7), we obtain f(x_k) ≤ C₃ exp(−(2/(5d))√(µ/L_max) k) for some constant C₃ depending on x^{(0)}. 3.4 Convergence Analysis for Newton Newton’s algorithm is a second-order algorithm. Although it differs from both VGD and NAG, we can fit it into our proposed framework by choosing η = 1/L and taking the gradient to be L[∇²f(X)]^{−1}∇f(X). We consider only the case where f is µ-strongly convex, L-smooth, and ν-self-concordant. By (2.5), if h/η does not vanish in the limit h → 0, we obtain a similar equation, CẊ + ∇f(X) = 0, where C = h∇²f(X) is the viscosity tensor of the system. In such a system, the function f determines not only the gradient field but also a viscosity tensor field: the particle moves as if submerged in an anisotropic fluid that exhibits different viscosity along different directions. We release the particle at a point x₀ sufficiently close to the minimizer 0, i.e., ‖x₀ − 0‖ ≤ ζ for some parameter ζ determined by ν, µ, and L. Now we consider the decay of the potential energy V(X) := f(X).
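A minimal sketch of the RCGD update (3.8); the diagonal quadratic and the iteration budget are illustrative assumptions. In expectation, each step moves along −(η/d)∇f, matching the rescaled gradient flow above.

```python
import numpy as np

rng = np.random.default_rng(0)
H = np.diag([10.0, 2.0, 1.0])            # f(x) = 0.5 x^T H x; L_max = 10, mu = 1
f = lambda x: 0.5 * x @ H @ x
d, eta = 3, 1.0 / 10.0                   # eta = 1/L_max
x = np.ones(d)
for k in range(3000):
    i = rng.integers(d)                  # pick one coordinate uniformly at random
    x[i] -= eta * (H @ x)[i]             # coordinate gradient step
print("f after RCGD:", f(x))             # decays roughly like exp(-2*mu*k/(d*L_max))
```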
By Theorem 2.1 with γ(t) = exp(t/(2h)) and Γ(t) = 0, we have
d(γ(t)f(X))/dt = exp(t/(2h)) · [ (1/(2h)) f(X) − (1/h)⟨∇f(X), (∇²f(X))^{−1}∇f(X)⟩ ].
By simple calculus, we have ∇f(X) = ∫₀¹ ∇²f((1 − t)X) dt · X. By the self-concordance condition, we have
(1 − νt‖X‖_X)² ∇²f(X) ⪯ ∇²f((1 − t)X) ⪯ (1/(1 − νt‖X‖_X)²) ∇²f(X),
where ‖v‖_X² = vᵀ∇²f(X)v ∈ [µ‖v‖², L‖v‖²]. Let β = νζL ≤ 1/2. By integration and the convexity of f, we have
(1 − β)∇²f(X) ⪯ ∫₀¹ ∇²f((1 − t)X) dt ⪯ (1/(1 − β))∇²f(X) and (1/2) f(X) − ⟨∇f(X), (∇²f(X))^{−1}∇f(X)⟩ ≤ (1/2) f(X) − (1/2)⟨∇f(X), X⟩ ≤ 0.
Note that our proposed ODE framework only proves local linear convergence for Newton’s method under the strongly convex, smooth, and self-concordant conditions. The convergence rate contains an absolute constant that does not depend on µ and L. This partially justifies the superior local convergence of Newton’s algorithm for ill-conditioned problems with very small µ and very large L. The existing literature, however, has proved local quadratic convergence of Newton’s algorithm, which is better than our ODE-type analysis. This is mainly because discrete algorithmic analysis takes advantage of “large” step sizes, while the ODE only characterizes “small” step sizes and therefore fails to capture quadratic convergence. 4 Numerical Simulations We present an illustration of our theoretical analysis in Figure 2. We consider the strongly convex quadratic program f(x) = (1/2) xᵀHx, where H = [[300, 1], [1, 50]]. Clearly, f(x) is strongly convex and x* = [0, 0]ᵀ is the minimizer. We choose η = 10⁻⁴ for VGD and NAG, and η = 2 × 10⁻⁴ for RCGD and ARCG. The trajectories of VGD and NAG are obtained by MATLAB’s default ODE solver. 5 Discussions We now give a more detailed interpretation of our proposed system from a physics perspective: Consequence of Particle Mass — As shown in Section 2, a massless particle system (mass m = 0) describes the simple gradient descent algorithm. By Newton’s law, a zero-mass particle can achieve infinite acceleration and has an infinitesimal response time to any force acting on it. Thus, the particle is “locked” onto the force field (the gradient field) of the potential f: the velocity of the particle is always proportional to the restoring force acting on it. The convergence rate of the algorithm is determined only by the function f and the damping coefficient, and the mechanical energy is stored in the force field (the potential energy) rather than in kinetic energy. For a massive particle system, by contrast, the mechanical energy is also partially stored in the kinetic energy of the particle; therefore, even when the force field is not strong enough, the particle keeps a high speed. Damping and Convergence Rate — For a quadratic potential V(x) = (µ/2)‖x‖², the system has exponential energy decay, where the exponent depends on the mass m, the damping coefficient c, and the property of the function (e.g., the PŁ coefficient). As discussed in Section 2, the decay rate is fastest when the system is critically damped, i.e., c² = 4mµ; for either an under or an over damped system, the decay is slower. For a potential function f satisfying convexity and the µ-PŁ condition, NAG corresponds to a nearly critically damped system, whereas VGD corresponds to an extremely over damped system, i.e., c² ≫ 4mµ. Moreover, we can achieve different acceleration rates by choosing different m/c ratios for NAG, i.e., α = ((1/(µη))^s − 1)/((1/(µη))^s + 1) for some absolute constant s > 0.
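The setup of Section 4 can be reproduced in a few lines; the iteration count and initial point are illustrative assumptions, and α follows the strongly convex NAG rule from Section 2.1.

```python
import numpy as np

H = np.array([[300.0, 1.0], [1.0, 50.0]])
f = lambda x: 0.5 * x @ H @ x
grad = lambda x: H @ x
mu, L = np.linalg.eigvalsh(H)                       # smallest, largest eigenvalue
eta = 1e-4                                          # step size used in Section 4
alpha = (np.sqrt(1/(mu*eta)) - 1) / (np.sqrt(1/(mu*eta)) + 1)

x_vgd = np.array([1.0, 1.0])
x = np.array([1.0, 1.0]); y = x.copy()
for _ in range(1000):
    x_vgd = x_vgd - eta * grad(x_vgd)               # VGD: massless, over-damped
    x_new = y - eta * grad(y)                       # NAG: massive, near critical damping
    y = x_new + alpha * (x_new - x)
    x = x_new
print(f"f(VGD) = {f(x_vgd):.3e}   f(NAG) = {f(x):.3e}")   # NAG decays far faster
```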
However, s = 1/2 achieves the fastest convergence rate, since it corresponds exactly to critical damping: c² = 4mµ. Connecting the PŁ Condition to Hooke’s Law — The µ-PŁ and convexity conditions together naturally mimic the properties of a quadratic potential V, i.e., of a damped harmonic oscillator. Specifically, the µ-PŁ condition, written as ‖∇V(x)‖²/(2µ) ≥ V(x), guarantees that the force field is strong enough, since the left-hand side of this inequality is exactly the potential energy of a spring with Hooke’s constant µ under the force ∇V(x), by Hooke’s law. [Figure: spring analogy — Hooke’s constant µ, displacement, and the spring potential energy compared with V(x).] Moreover, the convexity condition V(x) ≤ ⟨∇V(x), x⟩ guarantees that the force field has a large component pointing at the equilibrium point (acting as a restoring force). As indicated in [4], PŁ is a much weaker condition than strong convexity: some functions that satisfy a local PŁ condition are not even convex, e.g., matrix factorization. The connection between the PŁ condition and Hooke’s law indicates that strong convexity is not the fundamental characterization of linear convergence: if another condition employs a form of Hooke’s law, it should yield linear convergence as well.
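A quick numerical illustration of the critical-damping claim: sweeping s in the momentum rule α = ((1/(µη))^s − 1)/((1/(µη))^s + 1) on a simple quadratic. The test problem and iteration budget are illustrative assumptions; s = 1/2 should reach the smallest objective value.

```python
import numpy as np

H = np.diag([100.0, 1.0])          # mu = 1, L = 100
f = lambda x: 0.5 * x @ H @ x
grad = lambda x: H @ x
mu, eta = 1.0, 1e-2                # eta = 1/L
for s in [0.25, 0.5, 0.75]:
    a = ((1/(mu*eta))**s - 1) / ((1/(mu*eta))**s + 1)
    x = np.array([1.0, 1.0]); y = x.copy()
    for _ in range(300):
        x_new = y - eta * grad(y)
        y = x_new + a * (x_new - x)    # over-damped for s < 1/2, under-damped for s > 1/2
        x = x_new
    print(f"s = {s}: f = {f(x):.3e}")
```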
1. What is the main contribution of the paper regarding optimization algorithms? 2. How does the reviewer assess the quality and clarity of the mathematical analysis and presentation? 3. What are the strengths and weaknesses of the proposed ODE interpretation and its application to nonsmooth composite optimization? 4. How does the reviewer evaluate the originality and significance of the paper's content? 5. Are there any specific aspects or claims in the paper that the reviewer questions or suggests for improvement?
Review
Review The paper presents a continuous-time ODE interpretation of four popular optimization algorithms: gradient descent, proximal gradient descent, coordinate gradient descent, and Newton's method. The four algorithms are all interpreted as damped oscillators with different masses and damping coefficients. It is shown that this ODE formulation can be used to derive the (known) convergence rates in a fairly straightforward manner. Further, the ODE formulation allows convergence analysis in the nonconvex case under the PL condition. An extension to nonsmooth composite optimization is also discussed. Quality The mathematical analysis seems largely correct. Some aspects of the extension to nonsmooth optimization are unclear to me. The function $G(X,\dot{X})$ need not be (Lipschitz) continuous. Analyzing the convergence of an ODE with a nonsmooth force field requires specific care (e.g., existence & uniqueness of solutions). It seems much more natural to me to analyze nonsmooth functions in a discrete-time setting rather than in a continuous-time setting. The implications of (potentially) having a nonsmooth vector field should be carefully analyzed here. Clarity The paper is clearly written, with a straightforward line of argumentation. I would appreciate a brief notation section introducing all special symbols. Originality The unified interpretation of the different optimization algorithms in one ODE formulation is nice to see. The differentiation from [15] by choosing a different step size is interesting. I would appreciate further discussion of the impact of the different choices; this could, for example, be investigated in the simulation section. Would the ODE representation of [15] approximate the discrete steps better or worse? I very much like the connection of the PL condition to Hooke's law, although the discussion of it is very brief in the paper. Significance The paper gives some nice insights. However, it seems that the practical relevance remains limited. It is mentioned that the method can potentially help to develop new algorithms, but this step has not been taken. After the authors' feedback: My comments were properly addressed. I have no concerns and recommend accepting the paper.
NIPS
Title The Physical Systems Behind Optimization Algorithms Abstract We use differential equations based approaches to provide some physics insights into analyzing the dynamics of popular optimization algorithms in machine learning. In particular, we study gradient descent, proximal gradient descent, coordinate gradient descent, proximal coordinate gradient, and Newton’s methods as well as their Nesterov’s accelerated variants in a unified framework motivated by a natural connection of optimization algorithms to physical systems. Our analysis is applicable to more general algorithms and optimization problems beyond convexity and strong convexity, e.g. Polyak-Łojasiewicz and error bound conditions (possibly nonconvex). 1 Introduction Many machine learning problems can be cast into an optimization problem of the following form: x∗ = argmin x∈X f(x), (1.1) where X ⊆ Rd and f : X → R is a continuously differentiable function. For simplicity, we assume that f is convex or approximately convex (more on this later). Perhaps, the earliest algorithm for solving (1.1) is the vanilla gradient descent (VGD) algorithm, which dates back to Euler and Lagrange. VGD is simple, intuitive, and easy to implement in practice. For large-scale problems, it is usually more scalable than more sophisticated algorithms (e.g. Newton). Existing state-of-the-art analysis shows that VGD achieves an O(1/k) convergence rate for smooth convex functions and a linear convergence rate for strongly convex functions, where k is the number of iterations [11]. Recently, a class of Nesterov’s accelerated gradient (NAG) algorithms have gained popularity in statistical signal processing and machine learning communities. These algorithms combine the vanilla gradient descent algorithm with an additional momentum term at each iteration. Such a modification, though simple, has a profound impact: the NAG algorithms attain faster convergence than VGD. Specifically, NAG achievesO(1/k2) convergence for smooth convex functions, and linear convergence with a better constant term for strongly convex functions [11]. Another closely related class of algorithms is randomized coordinate gradient descent (RCGD) algorithms. These algorithms conduct a gradient descent-type step in each iteration, but only with ∗Work was done while the author was at Johns Hopkins University. This work is partially supported by the National Science Foundation under grant numbers 1546482, 1447639, 1650041 and 1652257, the ONR Award N00014-18-1-2364, the Israel Science Foundation grant #897/13, a Minerva Foundation grant, and by DARPA award W911NF1820267. †Corresponding author. 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. respect to a single coordinate. RCGD has similar convergence rates to VGD, but has a smaller overall computational complexity, since its computational cost per iteration of RCGD is much smaller than VGD [10, 7]. More recently, [5, 2] applied Nesterov’s acceleration to RCGD, and proposed accelerated randomized coordinate gradient (ARCG) algorithms. Accordingly, they established similar accelerated convergence rates for ARCG. Another line of research focuses on relaxing the convexity and strong convexity conditions for alternative regularity conditions, including restricted secant inequality, error bound, Polyak-Łojasiewicz, and quadratic growth conditions. These conditions have been shown to hold for many optimization problems in machine learning, and faster convergence rates have been established (e.g. 
[8, 6, 9, 20, 3, 4]). Although various theoretical results have been established, the algorithmic proof of convergence and regularity conditions in these analyses rely heavily on algebraic tricks that are sometimes arguably mysterious to understand. To this end, a popular recent trend in the analysis of optimization algorithms has been to study gradient descent as a discretization of gradient flow; these approaches often provide a clear interpretation for the continuous approximation of the algorithmic systems [16, 17]. In [16], authors propose a framework for studying discrete algorithmic systems under the limit of infinitesimal time step. They show that Nesterov’s accelerated gradient (NAG) algorithm can be described by an ordinary differential equation (ODE) under the limit that time step tends to zero. In [17], authors study a more general family of ODE’s that essentially correspond to accelerated gradient algorithms. All these analyses, however, lack a natural interpretation in terms of physical systems behind the optimization algorithms. Therefore, they do not clearly explain why the momentum leads to acceleration. Meanwhile, these analyses only consider general convex conditions and gradient descent-type algorithms, and are NOT applicable to either the aforementioned relaxed conditions or coordinate-gradient-type algorithms (due to the randomized coordinate selection). Our Contribution (I): We provide novel physics-based insights into the differential equation approaches for optimization. In particular, we connect the optimization algorithms to natural physical systems through differential equations. This allows us to establish a unified theory for understanding optimization algorithms. Specifically, we consider the VGD, NAG, RCGD, and ARCG algorithms. All of these algorithms are associated with damped oscillator systems with different particle mass and damping coefficients. For example, VGD corresponds to a massless particle system while NAG corresponds to a massive particle system. A damped oscillator system has a natural dissipation of its mechanical energy. The decay rate of the mechanical energy in the system is connected to the convergence rate of the algorithm. Our results match the convergence rates of all algorithms considered here to those known in existing literature. We show that for a massless system, the convergence rate only depends on the gradient (force field) and smoothness of the function, whereas a massive particle system has an energy decay rate proportional to the ratio between the mass and damping coefficient. We further show that optimal algorithms such as NAG correspond to an oscillator system near critical damping. Such a phenomenon is known in the physical literature that the critically damped system undergoes the fastest energy dissipation. We believe that this view can potentially help us design novel optimization algorithms in a more intuitive manner. As pointed out by the anonymous reviewers, some of the intuitions we provide are also presented in [13]; however, we give a more detailed analysis in this paper. Our Contribution (II): We provide new analysis for more general optimization problems beyond general convexity and strong convexity, as well as more general algorithms. 
Specifically, we provide several concrete examples: (1) VGD achieves linear convergence under the Polyak-Łojasiewicz (PL) condition (possibly nonconvex), which matches the state-of-art result in [4]; (2) NAG achieves accelerated linear convergence (with a better constant term) under both general convex and quadratic growth conditions, which matches the state-of-art result in [19]; (3) Coordinate-gradient-type algorithms share the same ODE approximation with gradient-type algorithms, and our analysis involves a more refined infinitesimal analysis; (4) Newton’s algorithm achieves linear convergence under the strongly convex and self-concordance conditions. See Table 1 for a summary. Due to space limitations, we present the extension to the nonsmooth composite optimization problem in Appendix. Table 1: Our contribution compared with [16, 17]. [15]/[16]/Ours VGD NAG RCGD ARCG Newton General Convex --/--/R R/R/R --/--/R --/--/R --/R/-Strongly Convex --/--/R --/--/R --/--/R --/--/R --/--/R Proximal Variants --/--/R R/--/R --/--/R --/--/R --/--/R PL Condition --/--/R --/--/R --/--/R --/--/R --/--/-- Physical Systems --/--/R --/--/R --/--/R --/--/R --/--/R Recently, an independent work considered a framework similar to ours for analyzing the first-order optimization algorithms [18]; while the focus there is on bridging the gap between discrete algorithmic analysis and continuous approximation, we focus on understanding the physical systems behind the optimization algorithms. Both perspectives are essential and complementary to each other. Before we proceed, we first introduce assumptions on the objective f . Assumption 1.1 (L-smooth). There exists a constant L > 0 such that for any x, y ∈ Rd, we have ‖∇f(x)−∇f(y)‖ ≤ L‖x− y‖. Assumption 1.2 (µ-strongly convex). There exists a constant µ such that for any x, y ∈ Rd, we have f(x) ≥ f(y) + 〈∇f(y), x− y〉+ µ2 ‖x− y‖ 2. Assumption 1.3 . (Lmax-coordinate-smooth) There exists a constant Lmax such that for any x, y ∈ Rd, we have |∇jf(x)−∇jf(x\j , yj)| ≤ Lmax(xj − yj)2 for all j = 1, ..., d. The Lmax-coordinate-smooth condition has been shown to be satisfied by many machine learning problems such as Ridge Regression and Logistic Regression. For convenience, we define κ = L/µ and κmax = Lmax/µ. Note that we also have Lmax ≤ L ≤ dLmax and κmax ≤ κ ≤ dκmax. 2 From Optimization Algorithms to ODE We develop a unified representation for the continuous approximations of the aforementioned optimization algorithms. Our analysis is inspired by [16], where the NAG algorithm for general convex function is approximated by an ordinary differential equation under the limit of infinitesimal time step. We start with VGD and NAG, and later show that RCGD and ARCG can also be approximated by the same ODE. For self-containedness, we present a brief review for popular optimization algorithms in Appendix A (VGD, NAG, RCGD, ARCG, and Newton). 2.1 A Unified Framework for Continuous Approximation Analysis By considering an infinitesimal step size, we rewrite VGD and NAG in the following generic form: x(k) = y(k−1) − η∇f(y(k−1)) and y(k) = x(k) + α(x(k) − x(k−1)). (2.1) For VGD, α = 0; For NAG, α = √ 1/(µη)−1√ 1/(µη)+1 when f is strongly convex, and α = k−1k+2 when f is general convex. We then rewrite (2.1) as( x(k+1) − x(k) ) − α ( x(k) − x(k−1) ) + η∇f ( x(k) + α(x(k) − x(k−1)) ) = 0. (2.2) When considering the continuous-time limit of the above equation, it is not immediately clear how the continuous-time is related to the step size k. 
When considering the continuous-time limit of (2.2), it is not immediately clear how the continuous time t should be related to the iteration index k and the step size η. We thus let h denote the time scaling factor and study the possible choices of h later on. With this, we define a continuous time variable
$$t = kh \quad\text{with}\quad X(t) = x^{(\lceil t/h \rceil)} = x^{(k)}, \tag{2.3}$$
where k is the iteration index, and X(t) for t from 0 to ∞ is a trajectory characterizing the dynamics of the algorithm. Throughout the paper, we may omit (t) when it is clear from the context. Note that our definition in (2.3) is very different from [16], where t is defined as $t = k\sqrt{\eta}$, i.e., fixing $h = \sqrt{\eta}$. There are several advantages to our new definition: (1) it leads to a unified analysis for both VGD and NAG; specifically, if we follow the same notion as [16], we need to redefine $t = k\eta$ for VGD, which is different from $t = k\sqrt{\eta}$ for NAG; (2) it is more flexible, and leads to a unified analysis for both gradient-type (VGD and NAG) and coordinate-gradient-type (RCGD and ARCG) algorithms, regardless of their different step sizes, e.g., $\eta = 1/L$ for VGD and NAG, and $\eta = 1/L_{\max}$ for RCGD and ARCG; (3) it is equivalent to [16] only when $h = \sqrt{\eta}$; we will show later, however, that $h = \Theta(\sqrt{\eta})$ is a natural requirement of a massive particle system rather than an artificial choice of h.

We then proceed to derive the differential equation for (2.2). By the Taylor expansions
$$x^{(k+1)} - x^{(k)} = \dot{X}(t)h + \tfrac{1}{2}\ddot{X}(t)h^2 + o(h^2), \qquad x^{(k)} - x^{(k-1)} = \dot{X}(t)h - \tfrac{1}{2}\ddot{X}(t)h^2 + o(h^2),$$
and
$$\eta \nabla f\big[x^{(k)} + \alpha\big(x^{(k)} - x^{(k-1)}\big)\big] = \eta \nabla f(X(t)) + O(\eta h),$$
where $\dot{X}(t) = \frac{dX(t)}{dt}$ and $\ddot{X}(t) = \frac{d^2X(t)}{dt^2}$, we can rewrite (2.2) as
$$\frac{(1+\alpha)h^2}{2\eta}\ddot{X}(t) + \frac{(1-\alpha)h}{\eta}\dot{X}(t) + \nabla f(X(t)) + O(h) = 0. \tag{2.4}$$
Taking the limit $h \to 0$, we rewrite (2.4) in a more convenient form,
$$m\ddot{X}(t) + c\dot{X}(t) + \nabla f(X(t)) = 0. \tag{2.5}$$
Here (2.5) describes exactly a damped oscillator system in d dimensions, with $m := \frac{(1+\alpha)h^2}{2\eta}$ as the particle mass, $c := \frac{(1-\alpha)h}{\eta}$ as the damping coefficient, and f(x) as the potential field.

Let us now consider how to choose h for the different settings. The basic principle is that both m and c must remain finite in the limit $h, \eta \to 0$; in other words, the physical system must be valid. Take VGD as an example, for which $\alpha = 0$. In this case, the only valid setting is $h = \Theta(\eta)$, under which $m \to 0$ and $c \to c_0$ for some constant $c_0$; we call such a particle system massless. For NAG, it can likewise be verified that only $h = \Theta(\sqrt{\eta})$ results in a valid physical system, and the resulting system is massive ($0 < m < \infty$, $0 \le c < \infty$). Our framework therefore provides a unified way of choosing the correct time scaling factor h.
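As a quick numerical consistency check (a sketch of ours, under the assumptions stated in the comments), the NAG iterates on a one-dimensional quadratic can be compared against a fine integration of the oscillator ODE (2.5), with m and c read off from (2.4) and the identification t = kh, h = √η; the gap shrinks as η tends to zero.

import numpy as np

# NAG on f(x) = 0.5*mu*x^2 versus m*X'' + c*X' + f'(X) = 0,
# with m = (1+alpha)*h^2/(2*eta) and c = (1-alpha)*h/eta.
mu, L = 1.0, 100.0
eta = 1.0 / L
h = np.sqrt(eta)
r = np.sqrt(1.0 / (mu * eta))
alpha = (r - 1.0) / (r + 1.0)
m = (1.0 + alpha) * h**2 / (2.0 * eta)
c = (1.0 - alpha) * h / eta

# Discrete NAG iterates, starting at rest.
x_prev = x = 1.0
nag = [x]
for _ in range(60):
    y = x + alpha * (x - x_prev)
    x_prev, x = x, y - eta * mu * y
    nag.append(x)

# ODE solution via a fine explicit Euler scheme on (X, V = X').
X, V, sub = 1.0, 0.0, 200
dt = h / sub
ode = [X]
for _ in range(60):
    for _ in range(sub):               # integrate over one interval of length h
        X, V = X + dt * V, V - dt * (c * V + mu * X) / m
    ode.append(X)

print(max(abs(a - b) for a, b in zip(nag, ode)))  # small, on the order of h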
2.2 A Physical System: Damped Harmonic Oscillator

In classical mechanics, the harmonic oscillator is one of the first mechanical systems that admit an exact solution. The system consists of a massive particle and a restoring force; a typical example is a massive particle connected to a massless spring. The spring always tends to return to its equilibrium position: when it is stretched or compressed, a force acts on the object that stretches or compresses it, always pointing toward the equilibrium position. The energy stored in the spring is
$$V(X) := \tfrac{1}{2}KX^2,$$
where X denotes the displacement of the spring and K is the Hooke's constant of the spring. In the physics literature, V(x) is called the potential energy.

[Figure 1: a damped harmonic oscillator. A particle of mass m attached to a spring is shown at the equilibrium position B and at displaced positions A and C, with displacements $x_1$ and $x_2$, restoring forces $F_1 = Kx_1$ and $F_2 = Kx_2$, and damping coefficient c.]

One natural way to stop the particle at the equilibrium is to add damping to the system, which dissipates the mechanical energy, just as in real-world mechanics. A simple form of damping is a force proportional to the negative velocity of the particle (e.g., obtained by submerging the system in a viscous fluid), defined as $F_f = -c\dot{X}$, where c is the viscous damping coefficient. If the potential energy of the system is f(x), then the differential equation of the system is
$$m\ddot{X} + c\dot{X} + \nabla f(X) = 0. \tag{2.6}$$
For the quadratic potential, i.e., $f(x) = \frac{K}{2}\|x - x^*\|^2$, the energy exhibits exponential decay: $E(t) \propto \exp(-ct/(2m))$ for an under-damped or nearly critically damped system (i.e., $c^2 \lesssim 4mK$). For an over-damped system (i.e., $c^2 > 4mK$), the energy decay is
$$E(t) \propto \exp\bigg(-\frac{1}{2}\bigg[\frac{c}{m} - \sqrt{\frac{c^2}{m^2} - \frac{4K}{m}}\,\bigg]t\bigg).$$
In extremely over-damped cases, i.e., $c^2 \gg 4mK$, we have $\frac{c}{m} - \sqrt{\frac{c^2}{m^2} - \frac{4K}{m}} \to \frac{2K}{c}$. This decay does not depend on the particle mass: the system behaves as if the particle had no mass. In the language of optimization, the corresponding algorithm has linear convergence. Note that when the system is under-damped or critically damped, the convergence rate depends only on the ratio c/m and not on K. The fastest convergence rate is obtained when the system is critically damped, $c^2 = 4mK$.
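The three damping regimes can be illustrated numerically. The following sketch (ours, with assumed values of m, K, and the horizon T) integrates the oscillator for an under-damped, a critically damped, and an over-damped choice of c and reports the remaining mechanical energy; critical damping dissipates fastest.

import numpy as np

# Simulate m*x'' + c*x' + K*x = 0 for f(x) = 0.5*K*x^2 and report the
# mechanical energy E = 0.5*m*v^2 + 0.5*K*x^2 at time T.
m, K = 1.0, 1.0
c_crit = 2.0 * np.sqrt(m * K)          # critical damping: c^2 = 4*m*K

def energy_at(c, T=20.0, dt=1e-3):
    x, v = 1.0, 0.0
    for _ in range(int(T / dt)):       # explicit Euler integration
        x, v = x + dt * v, v - dt * (c * v + K * x) / m
    return 0.5 * m * v**2 + 0.5 * K * x**2

for label, c in [("under-damped", 0.5 * c_crit),
                 ("critical", c_crit),
                 ("over-damped", 4.0 * c_crit)]:
    print(label, energy_at(c))         # the critically damped energy is smallest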
2.3 Sufficient Conditions for Convergence

For notational simplicity, we assume that $x^* = 0$ is a global minimum of f with $f(x^*) = 0$. The potential energy of the particle system is simply defined as $V(t) := V(X(t)) := f(X(t))$. A sufficient condition for an algorithm to converge to the optimum is that the corresponding potential energy V decreases over time; the rate of decrease determines the convergence rate of the corresponding algorithm.

Theorem 2.1. Let $\gamma(t) > 0$ be a nondecreasing function of t and let $\Gamma(t) \ge 0$ be a nonnegative function. Suppose that $\gamma(t)$ and $\Gamma(t)$ satisfy
$$\frac{d\big(\gamma(t)(V(t) + \Gamma(t))\big)}{dt} \le 0 \quad\text{and}\quad \lim_{t \to 0^+} \gamma(t)\big(V(t) + \Gamma(t)\big) < \infty.$$
Then the convergence rate of the algorithm is characterized by $1/\gamma(t)$.

Proof. From $\frac{d(\gamma(t)(V(t) + \Gamma(t)))}{dt} \le 0$, we have $\gamma(t)(V(t) + \Gamma(t)) \le \gamma(0^+)\big(f(X(0^+)) + \Gamma(0^+)\big)$, where $0^+$ denotes the one-sided limit $t \to 0$ from above. This further implies
$$f(X(t)) \le V(t) + \Gamma(t) \le \frac{\gamma(0^+)\big(f(X(0^+)) + \Gamma(0^+)\big)}{\gamma(t)}.$$

In words, $\gamma(t)[V(t) + \Gamma(t)]$ serves as a Lyapunov function of the system. We say that an algorithm is $(1/\gamma)$-convergent if the potential energy decay rate is $O(1/\gamma)$. For example, $\gamma(t) = e^{at}$ corresponds to linear convergence, and $\gamma(t) = at$ corresponds to sublinear convergence, where a is a constant independent of t. In the following section, we apply Theorem 2.1 to different problems by choosing different γ's and Γ's.

3 Convergence Rate in Continuous Time

We derive the convergence rates of different algorithms for different families of objective functions. Given our proposed framework, we only need to find γ and Γ that characterize the energy decay.

3.1 Convergence Analysis of VGD

We study the convergence of VGD for two classes of functions: (1) general convex functions, for which [11] has shown that VGD achieves O(L/k) convergence; and (2) a class of functions satisfying the Polyak-Łojasiewicz (PŁ) condition, defined as follows [14, 4].

Assumption 3.1 (µ-PŁ). We say that f satisfies the µ-PŁ condition if there exists a constant µ > 0 such that for any $x \in \mathbb{R}^d$, we have $0 < \frac{f(x)}{\|\nabla f(x)\|^2} \le \frac{1}{2\mu}$.

[4] has shown that the PŁ condition is the weakest among the following conditions: strong convexity (SC), essential strong convexity (ESC), weak strong convexity (WSC), the restricted secant inequality (RSI), and the error bound (EB). Thus, the convergence analysis under the PŁ condition naturally extends to all of the above conditions. Please refer to [4] for more detailed definitions and analyses, as well as various examples from machine learning satisfying this condition.

3.1.1 Sublinear Convergence for General Convex Functions

By choosing $\Gamma(t) = \frac{c\|X\|^2}{2t}$ and $\gamma(t) = t$, we have
$$\frac{d\big(\gamma(t)(V(t) + \Gamma(t))\big)}{dt} = f(X(t)) + t\big\langle \nabla f(X(t)), \dot{X}(t) \big\rangle + \big\langle X(t), c\dot{X}(t) \big\rangle = f(X(t)) - \langle \nabla f(X(t)), X(t) \rangle - \frac{t}{c}\|\nabla f(X(t))\|^2 \le 0,$$
where the last inequality follows from the convexity of f. Since $\gamma(t)(V(t) + \Gamma(t)) = tf(X(t)) + \frac{c}{2}\|X(t)\|^2 \to \frac{c}{2}\|x_0\|^2$ as $t \to 0^+$, Theorem 2.1 implies
$$f(X(t)) \le \frac{c\|x_0\|^2}{2t}. \tag{3.1}$$
Plugging $t = kh$ and $c = h/\eta$ into (3.1) and setting $\eta = 1/L$, we match the convergence rate in [11]:
$$f(x^{(k)}) \le \frac{c\|x_0\|^2}{2kh} = \frac{L\|x_0\|^2}{2k}. \tag{3.2}$$

3.1.2 Linear Convergence Under the Polyak-Łojasiewicz Condition

For the massless system, equation (2.5) implies $\dot{X} = -\frac{1}{c}\nabla f(X(t))$. By choosing $\Gamma(t) = 0$ and $\gamma(t) = \exp\big(\frac{2\mu t}{c}\big)$, we obtain
$$\frac{d\big(\gamma(t)(V(t) + \Gamma(t))\big)}{dt} = \gamma(t)\Big(\frac{2\mu}{c}f(X(t)) + \big\langle \nabla f(X(t)), \dot{X}(t) \big\rangle\Big) = \gamma(t)\Big(\frac{2\mu}{c}f(X(t)) - \frac{1}{c}\|\nabla f(X(t))\|^2\Big).$$
By the µ-PŁ condition, $0 < \frac{f(X(t))}{\|\nabla f(X(t))\|^2} \le \frac{1}{2\mu}$ for any t, and hence $\frac{d(\gamma(t)(V(t) + \Gamma(t)))}{dt} \le 0$. By Theorem 2.1, for some constant C′ depending on $x_0$, we obtain
$$f(X(t)) \le C' \exp\Big(-\frac{2\mu t}{c}\Big), \tag{3.3}$$
which matches the behavior of an extremely over-damped harmonic oscillator. Plugging $t = kh$ and $c = h/\eta$ into (3.3) and setting $\eta = 1/L$, we match the convergence rate in [4]:
$$f(x^{(k)}) \le C \exp\Big(-\frac{2\mu}{L}k\Big) \tag{3.4}$$
for some constant C depending on $x^{(0)}$.
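As an illustration of this linear rate (a sketch of ours, not an experiment from the paper), one can run VGD on the nonconvex PŁ example f(x) = x² + 3 sin²(x) discussed in [4]; the constants µ = 1/32 and L = 8 below are our assumed values for this example, following [4]. The discrete guarantee of [4] is f(x_{k+1}) ≤ (1 − µ/L) f(x_k), which agrees with the continuous-time rate exp(−2µk/L) in (3.4) to first order in µ/L.

import numpy as np

f = lambda x: x**2 + 3.0 * np.sin(x)**2
grad = lambda x: 2.0 * x + 3.0 * np.sin(2.0 * x)
L, mu = 8.0, 1.0 / 32.0        # assumed smoothness and PL constants for this example
eta = 1.0 / L

x, vals = 3.0, [f(3.0)]
while vals[-1] > 1e-12 and len(vals) < 300:
    x -= eta * grad(x)         # vanilla gradient descent step
    vals.append(f(x))

ratios = np.array(vals[1:]) / np.array(vals[:-1])
print(ratios.max(), 1.0 - mu / L)  # every observed ratio stays below 1 - mu/L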
3.2 Convergence Analysis of NAG

We study the convergence of NAG for a class of convex functions satisfying the Polyak-Łojasiewicz (PŁ) condition. The convergence of NAG for general convex functions has been studied in [16] and is therefore omitted. [11] has shown that NAG achieves linear convergence for strongly convex functions. Our analysis shows that the strong convexity can be relaxed, as it can for VGD; in contrast to VGD, however, NAG requires f to be convex. For an L-smooth convex function satisfying the µ-PŁ condition, the particle mass and damping coefficient are
$$m = \frac{h^2}{\eta} \quad\text{and}\quad c = \frac{2\sqrt{\mu}\,h}{\sqrt{\eta}} = 2\sqrt{m\mu}.$$
By [4], under convexity the PŁ condition is equivalent to quadratic growth (QG). Formally, we assume that f satisfies the following condition.

Assumption 3.2 (µ-QG). We say that f satisfies the µ-QG condition if there exists a constant µ > 0 such that for any $x \in \mathbb{R}^d$, we have $f(x) - f(x^*) \ge \frac{\mu}{2}\|x - x^*\|^2$.

We then proceed with the proof for NAG. We first define two parameters, λ and σ, and let $\gamma(t) = \exp(\lambda c t)$ and $\Gamma(t) = \frac{m}{2}\|\dot{X} + \sigma c X\|^2$. For properly chosen λ and σ, we show that the condition required by Theorem 2.1 is satisfied. Recall that our proposed physical system has kinetic energy $\frac{m}{2}\|\dot{X}(t)\|^2$; in contrast to an undamped system, NAG takes an effective velocity $\dot{X} + \sigma c X$ in the viscous fluid. By simple manipulation,
$$\frac{d(V(t) + \Gamma(t))}{dt} = \langle \nabla f(X), \dot{X} \rangle + m\langle \dot{X} + \sigma c X, \ddot{X} + \sigma c \dot{X} \rangle.$$
We then observe
$$e^{-\lambda c t}\,\frac{d\big(\gamma(t)(V(t) + \Gamma(t))\big)}{dt} = \lambda c f(X) + \frac{\lambda c m}{2}\|\dot{X} + \sigma c X\|^2 + \frac{d(V(t) + \Gamma(t))}{dt} \le \lambda c\Big(1 + \frac{m\sigma^2 c^2}{\mu}\Big)f(X) + \Big\langle \dot{X}, \Big(\frac{\lambda c m}{2} + m\sigma c\Big)\dot{X} + \nabla f(X) + m\ddot{X} \Big\rangle + \big\langle X, (\lambda\sigma m c^2 + m\sigma^2 c^2)\dot{X} + m\sigma c\ddot{X} \big\rangle.$$
Since $c^2 = 4m\mu$, we argue that if positive σ and λ satisfy
$$m(\lambda + \sigma) = 1 \quad\text{and}\quad \lambda\Big(1 + \frac{m\sigma^2 c^2}{\mu}\Big) \le \sigma, \tag{3.5}$$
then $\frac{d(\gamma(t)(V(t) + \Gamma(t)))}{dt} \le 0$ is guaranteed. Indeed, using (2.5) and $m(\lambda + \sigma) = 1$, we obtain
$$\Big\langle \dot{X}, \Big(\frac{\lambda c m}{2} + m\sigma c\Big)\dot{X} + \nabla f(X) + m\ddot{X} \Big\rangle = -\frac{\lambda m c}{2}\|\dot{X}\|^2 \le 0 \quad\text{and}\quad \big\langle X, (\lambda\sigma m c^2 + m\sigma^2 c^2)\dot{X} + m\sigma c\ddot{X} \big\rangle = -\sigma c\langle X, \nabla f(X) \rangle.$$
By the convexity of f, we have
$$\lambda c\Big(1 + \frac{m\sigma^2 c^2}{\mu}\Big)f(X) - \sigma c\langle X, \nabla f(X) \rangle \le \sigma c f(X) - \sigma c\langle X, \nabla f(X) \rangle \le 0.$$
To make (3.5) hold, it is sufficient to set $\sigma = \frac{4}{5m}$ and $\lambda = \frac{1}{5m}$. By Theorem 2.1, we obtain
$$f(X(t)) \le C'' \exp\Big(-\frac{ct}{5m}\Big) \tag{3.6}$$
for some constant C″ depending on $x^{(0)}$. Plugging $t = kh$, $m = \frac{h^2}{\eta}$, $c = 2\sqrt{m\mu}$, and $\eta = 1/L$ into (3.6), we have
$$f(x^{(k)}) \le C'' \exp\Big(-\frac{2}{5}\sqrt{\frac{\mu}{L}}\,k\Big). \tag{3.7}$$
Compared with VGD, NAG improves the factor governing the convergence rate for convex functions satisfying the PŁ condition from $L/\mu$ to $\sqrt{L/\mu}$. This matches the algorithmic proof of [11] for strongly convex functions, and that of [19] for convex functions satisfying the QG condition.

3.3 Convergence Analysis of RCGD and ARCG

Our proposed framework also justifies the convergence analysis of the RCGD and ARCG algorithms. We will show that the trajectory of the RCGD algorithm converges weakly to that of the VGD algorithm, so our analysis for VGD applies directly. Conditioned on $x^{(k-1)}$, the update of RCGD is
$$x_i^{(k)} = x_i^{(k-1)} - \eta \nabla_i f(x^{(k-1)}) \quad\text{and}\quad x_{\setminus i}^{(k)} = x_{\setminus i}^{(k-1)}, \tag{3.8}$$
where η is the step size and i is selected uniformly at random from $\{1, 2, \dots, d\}$. Fixing a coordinate i, we compute the conditional expectation and variance of its increment:
$$\mathbb{E}\big(x_i^{(k)} - x_i^{(k-1)} \,\big|\, x^{(k-1)}\big) = -\frac{\eta}{d}\nabla_i f(x^{(k-1)}) \quad\text{and}\quad \mathrm{Var}\big(x_i^{(k)} - x_i^{(k-1)} \,\big|\, x^{(k-1)}\big) = \frac{\eta^2(d-1)}{d^2}\big|\nabla_i f(x^{(k-1)})\big|^2.$$
We define the infinitesimal time scaling factor $h \le \eta$ as in Section 2.1 and denote $\tilde{X}^h(t) := x^{(\lfloor t/h \rfloor)}$. We prove that for each $i \in [d]$, $\tilde{X}_i^h(t)$ converges weakly to a deterministic function $X_i(t)$ as $\eta \to 0$. Specifically, we rewrite (3.8) as
$$\tilde{X}^h(t + h) - \tilde{X}^h(t) = -\eta \nabla_i f(\tilde{X}^h(t))\, e_i, \tag{3.9}$$
where $e_i$ is the i-th standard basis vector. Taking the limit $\eta \to 0$ at a fixed time t, we have $|X_i(t+h) - X_i(t)| = O(\eta)$ and
$$\frac{1}{\eta}\mathbb{E}\big(\tilde{X}^h(t+h) - \tilde{X}^h(t) \,\big|\, \tilde{X}^h(t)\big) = -\frac{1}{d}\nabla f(\tilde{X}^h(t)) + O(h).$$
Since $\|\nabla f(\tilde{X}^h(t))\|^2$ is bounded at time t, we also have $\frac{1}{\eta}\mathrm{Var}\big(\tilde{X}^h(t+h) - \tilde{X}^h(t) \,\big|\, \tilde{X}^h(t)\big) = O(h)$. Using an infinitesimal generator argument from [1], we conclude that $\tilde{X}^h(t)$ converges weakly to $X(t)$ as $h \to 0$, where $X(t)$ satisfies
$$\dot{X}(t) + \frac{1}{d}\nabla f(X(t)) = 0 \quad\text{and}\quad X(0) = x^{(0)}.$$
Since $\eta \le 1/L_{\max}$, by (3.4) we have
$$f(x^{(k)}) \le C_1 \exp\Big(-\frac{2\mu}{dL_{\max}}k\Big)$$
for some constant $C_1$ depending on $x^{(0)}$. The analysis for general convex functions follows similarly; matching the convergence rate as in (3.2), one obtains
$$f(x^{(k)}) \le \frac{c\|x_0\|^2}{2kh} = \frac{dL_{\max}\|x_0\|^2}{2k}.$$
Repeating the above argument for ARCG, we obtain that the trajectory $\tilde{X}^h(t)$ converges weakly to $X(t)$, where $X(t)$ satisfies $m\ddot{X}(t) + c\dot{X}(t) + \nabla f(X(t)) = 0$. For general convex functions, we have $m = \frac{h^2}{\eta'}$ and $c = \frac{3m}{t}$, where $\eta' = \frac{\eta}{d}$. By the analysis of [16], we have $f(x^{(k)}) \le \frac{C_2 d}{k^2}$ for some constant $C_2$ depending on $x^{(0)}$ and $L_{\max}$. For convex functions satisfying the µ-QG condition, $m = \frac{h^2}{\eta'}$ and $c = 2\sqrt{\frac{m\mu}{d}}$. By (3.7), we obtain
$$f(x^{(k)}) \le C_3 \exp\Big(-\frac{2}{5d}\sqrt{\frac{\mu}{L_{\max}}}\,k\Big)$$
for some constant $C_3$ depending on $x^{(0)}$.
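The weak convergence of RCGD to the rescaled gradient flow can be checked numerically. In the following sketch (ours, on an assumed diagonal quadratic with our own choices of d, η, and trial counts), the RCGD trajectory averaged over independent runs is compared at time t = kη against the exact solution of Ẋ = −(1/d)∇f(X).

import numpy as np

rng = np.random.default_rng(0)
d, eta, steps, trials = 5, 1e-3, 2000, 200
diag = np.arange(1.0, d + 1)            # f(x) = 0.5 * sum_j diag[j] * x_j^2
x0 = np.ones(d)

avg = np.zeros(d)
for _ in range(trials):
    x = x0.copy()
    for _ in range(steps):
        i = rng.integers(d)             # uniformly random coordinate
        x[i] -= eta * diag[i] * x[i]    # coordinate gradient step (3.8)
    avg += x / trials

t = steps * eta
ode = x0 * np.exp(-diag * t / d)        # exact solution of the rescaled gradient flow
print(np.max(np.abs(avg - ode)))        # small: the averaged trajectory tracks the ODE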
3.4 Convergence Analysis of Newton's Method

Newton's algorithm is a second-order algorithm. Although it differs from both VGD and NAG, we can fit it into our proposed framework by choosing $\eta = 1/L$ and replacing the gradient with $L\big[\nabla^2 f(X)\big]^{-1}\nabla f(X)$. We consider only the case where f is µ-strongly convex, L-smooth, and ν-self-concordant. By (2.5), if $h/\eta$ does not vanish in the limit $h \to 0$, we arrive at a similar equation,
$$C\dot{X} + \nabla f(X) = 0, \quad\text{where}\quad C = h\nabla^2 f(X)$$
is the viscosity tensor of the system. In such a system, the function f determines not only the gradient field but also a viscosity tensor field: the particle behaves as if submerged in an anisotropic fluid that exhibits different viscosities along different directions. We release the particle at a point $x_0$ that is sufficiently close to the minimizer 0, i.e., $\|x_0 - 0\| \le \zeta$ for some parameter ζ determined by ν, µ, and L, and consider the decay of the potential energy $V(X) := f(X)$. By Theorem 2.1 with $\gamma(t) = \exp\big(\frac{t}{2h}\big)$ and $\Gamma(t) = 0$, we have
$$\frac{d(\gamma(t)f(X))}{dt} = \exp\Big(\frac{t}{2h}\Big)\cdot\Big[\frac{1}{2h}f(X) - \frac{1}{h}\big\langle \nabla f(X), (\nabla^2 f(X))^{-1}\nabla f(X) \big\rangle\Big].$$
By simple calculus, we have $\nabla f(X) = \int_0^1 \nabla^2 f((1-t)X)\,dt \cdot X$. By the self-concordance condition, we have
$$\big(1 - \nu t\|X\|_X\big)^2 \nabla^2 f(X) \preceq \nabla^2 f((1-t)X) \preceq \frac{1}{\big(1 - \nu t\|X\|_X\big)^2}\nabla^2 f(X),$$
where $\|v\|_X := \big(v^\top \nabla^2 f(X) v\big)^{1/2}$ satisfies $\|v\|_X^2 \in [\mu\|v\|^2, L\|v\|^2]$. Let $\beta = \nu\zeta\sqrt{L} \le 1/2$. By integration and the convexity of f, we have
$$(1 - \beta)\nabla^2 f(X) \preceq \int_0^1 \nabla^2 f((1-t)X)\,dt \preceq \frac{1}{1-\beta}\nabla^2 f(X)$$
and
$$\frac{1}{2}f(X) - \big\langle \nabla f(X), (\nabla^2 f(X))^{-1}\nabla f(X) \big\rangle \le \frac{1}{2}f(X) - \frac{1}{2}\langle \nabla f(X), X \rangle \le 0.$$
Note that our proposed ODE framework only proves local linear convergence for Newton's method under the strong convexity, smoothness, and self-concordance conditions. The convergence rate contains an absolute constant that does not depend on µ and L. This partially justifies the superior local convergence of Newton's algorithm for ill-conditioned problems with very small µ and very large L. The existing literature, however, has established local quadratic convergence of Newton's algorithm, which is better than our ODE-type analysis. This is mainly because the discrete algorithmic analysis takes advantage of "large" step sizes, whereas the ODE only characterizes "small" step sizes and therefore fails to capture quadratic convergence.

4 Numerical Simulations

We present an illustration of our theoretical analysis in Figure 2. We consider the strongly convex quadratic program
$$f(x) = \frac{1}{2}x^\top H x, \quad\text{where}\quad H = \begin{bmatrix} 300 & 1 \\ 1 & 50 \end{bmatrix}.$$
Clearly, f(x) is strongly convex and $x^* = [0, 0]^\top$ is the minimizer. We choose $\eta = 10^{-4}$ for VGD and NAG, and $\eta = 2 \times 10^{-4}$ for RCGD and ARCG. The continuous-time trajectories for VGD and NAG are obtained with MATLAB's default ODE solver.
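For reference, a minimal Python version of this experiment might look as follows (a sketch of ours: it reproduces the stated H and step sizes, but the iteration counts and random seed are our own choices, and ARCG is omitted for brevity).

import numpy as np

H = np.array([[300.0, 1.0], [1.0, 50.0]])
f = lambda x: 0.5 * x @ H @ x
grad = lambda x: H @ x
x0 = np.array([1.0, 1.0])
mu = np.linalg.eigvalsh(H)[0]           # strong-convexity parameter of f

def vgd(eta, iters):
    x = x0.copy()
    for _ in range(iters):
        x = x - eta * grad(x)
    return f(x)

def nag(eta, iters):
    r = np.sqrt(1.0 / (mu * eta))
    alpha = (r - 1.0) / (r + 1.0)       # NAG momentum, strongly convex case
    x_prev = x = x0.copy()
    for _ in range(iters):
        y = x + alpha * (x - x_prev)
        x_prev, x = x, y - eta * grad(y)
    return f(x)

def rcgd(eta, iters, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(iters):
        i = rng.integers(2)             # random coordinate
        x[i] -= eta * grad(x)[i]
    return f(x)

print(vgd(1e-4, 2000), nag(1e-4, 2000), rcgd(2e-4, 2000))  # NAG decays fastest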
5 Discussions

We now give a more detailed interpretation of our proposed system from the perspective of physics.

Consequence of particle mass. As shown in Section 2, a massless particle system (mass m = 0) describes the simple gradient descent algorithm. By Newton's law, a zero-mass particle can achieve infinite acceleration and has an infinitesimal response time to any force acting on it. Thus, the particle is "locked" onto the force field (the gradient field) of the potential f: the velocity of the particle is always proportional to the restoring force acting on it. The convergence rate of the algorithm is determined only by the function f and the damping coefficient, and the mechanical energy is stored entirely in the force field (as potential energy) rather than in kinetic energy. For a massive particle system, in contrast, the mechanical energy is also partially stored in the kinetic energy of the particle. Therefore, even when the force field is not strong, the particle maintains a high speed.

Damping and convergence rate. For a quadratic potential $V(x) = \frac{\mu}{2}\|x\|^2$, the system exhibits exponential energy decay, with an exponent that depends on the mass m, the damping coefficient c, and the properties of the function (e.g., the PŁ coefficient). As discussed in Section 2, the decay rate is fastest when the system is critically damped, i.e., $c^2 = 4m\mu$; for either an under- or over-damped system, the decay rate is slower. For a potential function f satisfying convexity and the µ-PŁ condition, NAG corresponds to a nearly critically damped system, whereas VGD corresponds to an extremely over-damped system, i.e., $c^2 \gg 4m\mu$. Moreover, we can achieve different acceleration rates for NAG by choosing different m/c ratios, i.e., $\alpha = \frac{(1/(\mu\eta))^s - 1}{(1/(\mu\eta))^s + 1}$ for some absolute constant $s > 0$. Among these, $s = 1/2$ achieves the fastest convergence rate, since it corresponds exactly to critical damping: $c^2 = 4m\mu$.

Connecting the PŁ condition to Hooke's law. The µ-PŁ and convexity conditions together naturally mimic the properties of a quadratic potential V, i.e., of a damped harmonic oscillator. Specifically, the µ-PŁ condition can be written as
$$\underbrace{\frac{1}{2\mu}\,\|\nabla V(x)\|^2}_{\text{potential energy of a spring}} \ge V(x),$$
which guarantees that the force field is strong enough: the left-hand side has exactly the form $\frac{1}{2}K(\text{displacement})^2$ of the potential energy of a spring under Hooke's law, with Hooke's constant $K = 1/\mu$ and the force-field magnitude $\|\nabla V(x)\|$ in the role of the displacement. Moreover, the convexity condition $V(x) \le \langle \nabla V(x), x \rangle$ guarantees that the force field has a large component pointing toward the equilibrium point (acting as a restoring force). As indicated in [4], PŁ is a much weaker condition than strong convexity; some functions that satisfy the PŁ condition locally are not even convex, e.g., matrix factorization. The connection between the PŁ condition and Hooke's law indicates that strong convexity is not the fundamental characterization of linear convergence: any condition that captures a form of Hooke's law should yield linear convergence as well.
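As a quick consistency check of this correspondence (ours, not an argument from the paper), the ideal spring potential itself saturates the PŁ inequality:

$$V(x) = \frac{K}{2}x^2 \;\Longrightarrow\; \frac{1}{2K}\|\nabla V(x)\|^2 = \frac{(Kx)^2}{2K} = \frac{K}{2}x^2 = V(x),$$

so the spring potential satisfies the µ-PŁ inequality $V(x) \le \frac{1}{2\mu}\|\nabla V(x)\|^2$ with equality when µ = K; the quadratic potential is thus the extremal case of the PŁ condition.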
1. What is the focus and contribution of the paper regarding optimization algorithms and physical dynamics?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of clarity and organization?
3. Do you have any concerns or questions about the connection between optimization algorithms and physical systems?
4. How does the paper contribute to the field of optimization, and what insights does it provide?
5. Are there any specific areas where the paper could improve in terms of explanations, examples, or proof derivations?
Review
Review

1. Summary
This paper described the connection between various optimization algorithms for convex functions and similar variants and differential equations that describe physical systems. This work builds on previous work using differential equations to analyze optimization algorithms.

2. High level paper
I think the paper is a bit hard to read from Section 3 onward. Section 3 is packed with proofs of convergence for various types of functions, and new notation is frequently introduced. The contribution of the paper seems novel. And I think this work would be interesting to the optimization community.

3. High level technical
Overall I think the biggest things that could be improved are organization of the paper, and the clarity of the writing.

Organization: I would spend more time explaining the proof steps in Section 3 and the general connection between optimization algorithms and physical dynamics in Section 5. I think the most interesting contribution of this paper is this optimization-physics connection. This paper will have a lot more impact if you take the time to deeply explain this connection, rather than go through the proofs of each convergence rate. For instance, it would be great to answer the questions: Why must the potential energy decrease for an algorithm to converge? Why is connection between the PL condition and Hooke's law important, or what insight does it give us? Why is the PL condition useful (I know that you cite [4] for examples, but I think you should include some to help with motivation). To have more space for all this I would move Section 4 to the appendix. And possibly move some proofs in Section 3 as well.

Clarity: Right now Section 5 assumes a lot of background knowledge in physics. With the additional space I would describe these concepts in more detail, similar to Section 2. A few more specific comments that could improve clarity:
- Lines 37-38: Could you be more specific when you say 'analyses rely heavily on algebraic tricks that are sometimes arguably mysterious to be understood'? This would better motivate the method.
- Equations above 2.4 (Taylor expansion): Could you derive this quickly?
- Line 115: Is it reasonable to require c -> c_0? Doesn't this mean that \eta = O(h) always? Is this reasonable? It would be good to discuss this.
- Proof of Theorem 2.1: What does 0^+ mean?
- Equation (3.1): How does Theorem 2.1 imply the bound? Do we assume that \gamma(0^+)=1 and f(X(0^+))=0, and do we ignore \gamma(t)? If so, then I see how Theorem 2.1 implies eq. (3.1), but otherwise I don't...
- Line 177: It only implies this if m=0 or X^{..}=0, is this true?

4. Low level technical
- Line 1: 'differential equations based' -> 'differential equations-based'
- Line 17: 'that achieves' -> 'that it achieves'
- Line 39: 'recently attract' -> 'recently have attracted'
- Line 45: 'are lack of link' -> 'lack a link'
- Line 90: 'We starts with' -> 'We start with'
- Line 124: I would remove the word 'mechanic' because this quantity is really the potential energy, as you describe below, and I think 'mechanic' creates confusion.
- Line 156: 'independent on' -> 'independent of'
- Line 167: 'strongly' -> 'strong'
- Line 202: 'suffice' -> 'sufficient'
- Line 270: 'As have been' -> 'As has been'

5. 1/2 sentence summary
Overall, while the paper is somewhat unclear, I think the novelty of the contribution, and its impact, justifies this paper being accepted.
Post-Rebuttal
----------------------
The authors do a very thorough job of addressing my review, but are slightly sloppy answering one of my questions: the response on lines 30-32 is a bit sloppy because (a) they define t=kh in the paper, but in the response they say k=th; (b) in the response they say (x^{(k-1)} - x^{(k)}) = X(t + h) - X(t), but it must actually be equal to X(t - h) - X(t); furthermore, the initial equation (x^{(k-1)} - x^{(k)}) appears in none of the lines directly above equation (2.4) in the paper.

For me, I think this paper is worth accepting, but it is so dense that I think the authors need to move some of the proofs out of the main paper and into the supplementary to properly explain all of the concepts. Ultimately, because of their detailed response in the rebuttal, I still believe the paper should be accepted.
NIPS
1. What is the main contribution of the paper in linking optimization algorithms and continuous time ODEs?
2. What are the strengths of the paper, particularly in its presentation and the unifying link to a natural system?
3. What are the weaknesses of the paper regarding its density and lack of clarity in certain areas?
4. How does the reviewer assess the novelty and significance of the paper's content compared to prior works?
5. Are there any questions or concerns regarding the paper's analysis and connections to previous research?
Review
Review

Summary: This paper contributes to a growing body of work that links optimization algorithms and associated gradient flows to certain continuous time ODEs. In particular, this paper reformulates vanilla and accelerated gradient descent, Newton's method and other popular methods in terms of damped oscillator systems with different masses and damping coefficients. The potential energy of the system serves as a Lyapunov function whose decrease rate can then be related to convergence rates of different iteration schemes.

Evaluation: Damped oscillator systems are very well studied; the notion of potential energy is clearly related to Lyapunov methods which have been previously used for analyzing optimization algorithms. Despite the fact that closely related analyses have appeared recently, the paper is overall well presented and the unifying link to the dynamics of a natural system is insightful.
- The paper is a bit dense and hard to read at places.
- The connection to "A Lyapunov Analysis of Momentum Methods in Optimization" is not fully explained.
- The role of ODE integrator that connects the continuous time dynamics to the actual algorithm is not made clear.
NIPS
Title
Efficient Aggregated Kernel Tests using Incomplete $U$-statistics

Abstract
We propose a series of computationally efficient nonparametric tests for the two-sample, independence, and goodness-of-fit problems, using the Maximum Mean Discrepancy (MMD), Hilbert Schmidt Independence Criterion (HSIC), and Kernel Stein Discrepancy (KSD), respectively. Our test statistics are incomplete U-statistics, with a computational cost that interpolates between linear time in the number of samples and quadratic time, as associated with classical U-statistic tests. The three proposed tests aggregate over several kernel bandwidths to detect departures from the null on various scales: we call the resulting tests MMDAggInc, HSICAggInc and KSDAggInc. This procedure provides a solution to the fundamental kernel selection problem, as we can aggregate a large number of kernels with several bandwidths without incurring a significant loss of test power. For the test thresholds, we derive a quantile bound for wild bootstrapped incomplete U-statistics, which is of independent interest. We derive non-asymptotic uniform separation rates for MMDAggInc and HSICAggInc, and quantify exactly the trade-off between computational efficiency and the attainable rates: this result is novel for tests based on incomplete U-statistics, to our knowledge. We further show that in the quadratic-time case, the wild bootstrap incurs no penalty to test power over the more widespread permutation-based approach, since both attain the same minimax optimal rates (which in turn match the rates that use oracle quantiles). We support our claims with numerical experiments on the trade-off between computational efficiency and test power. In all three testing frameworks, the linear-time versions of our proposed tests perform at least as well as the current linear-time state-of-the-art tests.

1 Introduction
Nonparametric hypothesis testing is a fundamental field of statistics and is widely used by the machine learning community and by practitioners in numerous other fields, due to the increasing availability of huge amounts of data. When dealing with large-scale datasets, computational cost can quickly emerge as a major issue which might prevent expensive tests from being used in practice; constructing efficient tests is therefore crucial for their real-world applications. In this paper, we construct kernel-based aggregated tests using incomplete U-statistics (Blom, 1976) for the two-sample, independence and goodness-of-fit problems (which we detail in Section 2). The quadratic-time aggregation procedure has been shown to result in powerful tests (Fromont et al., 2012; Fromont et al., 2013; Albert et al., 2022; Schrab et al., 2021, 2022); we propose efficient variants of these well-studied tests, with computational cost interpolating from the classical quadratic-time regime to the linear-time one.

Related work: aggregated tests. Kernel selection (or kernel bandwidth selection) is a fundamental problem in nonparametric hypothesis testing, as this choice has a major influence on test power. Motivated by this problem, non-asymptotic aggregated tests, which combine tests with different kernel bandwidths, have been proposed for the two-sample (Fromont et al., 2012, 2013; Kim et al., 2022; Schrab et al., 2021), independence (Albert et al., 2022; Kim et al., 2022), and goodness-of-fit (Schrab et al., 2022) testing frameworks. Li and Yuan (2019) and Balasubramanian et al.
(2021) construct similar aggregated tests for these three problems, with the difference that they work in the asymptotic regime. All the mentioned works study aggregated tests in terms of uniform separation rates (Baraud, 2002). Those rates depend on the sample size and satisfy the following property: if the L2-norm difference between the densities is greater than the uniform separation rate, then the test is guaranteed to have high power. All aggregated kernel-based tests in the existing literature have been studied using U -statistic estimators (Hoeffding, 1992) with tests running in quadratic time. Related work: efficient kernel tests. Several linear-time kernel tests have been proposed for those three testing frameworks. Those include tests using classical linear-time estimators with median bandwidth (Gretton et al., 2012a; Liu et al., 2016) or selecting an optimal bandwidth on held-out data to maximize power (Gretton et al., 2012b), tests using eigenspectrum approximation (Gretton et al., 2009), tests using post-selection inference for adaptive kernel selection with incomplete U -statistics (Yamada et al., 2018, 2019; Lim et al., 2019, 2020; Kübler et al., 2020; Freidling et al., 2021), tests which use a Nyström approximation of the asymptotic null distribution (Zhang et al., 2018; Cherfaoui et al., 2022), random Fourier features tests (Zhang et al., 2018; Zhao and Meng, 2015; Chwialkowski et al., 2015), tests based on random feature Stein discrepancies (Huggins and Mackey, 2018), the adaptive tests which use features selected on held-out data to maximize power (Jitkrittum et al., 2016, 2017a,b), as well as tests using neural networks to learn a discrepancy (Grathwohl et al., 2020). We also point out the very relevant works of Kübler et al. (2022) on a quadratic-time test, and of Ho and Shieh (2006), Zaremba et al. (2013) and Zhang et al. (2018) on the use of block U -statistics with complexity O(N1.5) for block size p N where N is the sample size. Contributions and outline. In Section 2, we present the three testing problems with their associated well-known quadratic-time kernel-based estimators (MMD, HSIC, KSD) which are U -statistics. We introduce three associated incomplete U -statistics estimators, which can be computed efficiently, in Section 3. We then provide quantile and variance bounds for generic incomplete U -statistics using a wild bootstrap, in Section 4. We study the level and power guarantees at every finite sample sizes for our efficient tests using incomplete U -statistics for a fixed kernel bandwidth, in Section 5. In particular, we obtain non-asymptotic uniform separation rates for the two-sample and independence tests over a Sobolev ball, and show that these rates are minimax optimal up to the cost incurred for efficiency of the test. In Section 6, we propose our efficient aggregated tests which combine tests with multiple kernel bandwidths. We prove that the proposed tests are adaptive over Sobolev balls and achieve the same uniform separation rate (up to an iterated logarithmic term) as the tests with optimal bandwidths. As a result of our analysis, we have shown minimax optimality over Sobolev balls of the quadratictime tests using quantiles estimated with a wild bootstrap. Whether this optimality result also holds for tests using the more general permutation-based procedure to approximate HSIC quantiles, was an open problem formulated by Kim et al. (2022), we prove that it indeed holds in Section 7. 
As observed in Section 8, the linear-time versions of MMDAggInc, HSICAggInc and KSDAggInc retain high power, and either outperform or match the power of other state-of-the-art linear-time kernel tests. Our implementation of the tests and code for reproducibility of the experiments are available online under the MIT license: https://github.com/antoninschrab/agginc-paper.

2 Background

In this section, we briefly describe our main problems of interest, comprising the two-sample, independence and goodness-of-fit problems. We approach these problems from a nonparametric point of view using the kernel-based statistics: MMD, HSIC, and KSD. We briefly introduce the original forms of these statistics, which can be computed in quadratic time, and also discuss ways of calibrating the tests proposed in the literature. The three quadratic-time expressions are presented in Appendix B.

Two-sample testing. In this problem, we are given independent samples $\mathbb{X}_m := (X_i)_{1\le i\le m}$ and $\mathbb{Y}_n := (Y_j)_{1\le j\le n}$, consisting of i.i.d. random variables with respective probability density functions¹ $p$ and $q$ on $\mathbb{R}^d$. We assume we work with balanced sample sizes, that is,² $\max(m,n) \lesssim \min(m,n)$. We are interested in testing the null hypothesis $H_0 : p = q$ against the alternative $H_1 : p \neq q$; that is, we want to know if the samples come from the same distribution. Gretton et al. (2012a) propose a nonparametric kernel test based on the Maximum Mean Discrepancy (MMD), a measure between probability distributions which uses a characteristic kernel $k$ (Fukumizu et al., 2008; Sriperumbudur et al., 2011). It can be estimated using a quadratic-time estimator (Gretton et al., 2012a, Lemma 6) which, as noted by Kim et al. (2022), can be expressed as a two-sample $U$-statistic (both of second order) (Hoeffding, 1992),
$$\widehat{\mathrm{MMD}}^2_k(\mathbb{X}_m,\mathbb{Y}_n) = \frac{1}{|\mathbf{i}^m_2|\,|\mathbf{i}^n_2|} \sum_{(i,i')\in\mathbf{i}^m_2}\ \sum_{(j,j')\in\mathbf{i}^n_2} h^{\mathrm{MMD}}_k(X_i, X_{i'}; Y_j, Y_{j'}), \qquad (1)$$
where $\mathbf{i}^b_a$ with $a \le b$ denotes the set of all $a$-tuples drawn without replacement from $\{1,\dots,b\}$, so that $|\mathbf{i}^b_a| = b(b-1)\cdots(b-a+1)$, and where, for $x_1, x_2, y_1, y_2 \in \mathbb{R}^d$, we let
$$h^{\mathrm{MMD}}_k(x_1,x_2;y_1,y_2) := k(x_1,x_2) - k(x_1,y_2) - k(x_2,y_1) + k(y_1,y_2). \qquad (2)$$

Independence testing. In this problem, we have access to i.i.d. pairs of samples $\mathbb{Z}_N := (Z_i)_{1\le i\le N} = ((X_i,Y_i))_{1\le i\le N}$ with joint probability density $p_{xy}$ on $\mathbb{R}^{d_x}\times\mathbb{R}^{d_y}$ and marginals $p_x$ on $\mathbb{R}^{d_x}$ and $p_y$ on $\mathbb{R}^{d_y}$. We are interested in testing $H_0 : p_{xy} = p_x \otimes p_y$ against $H_1 : p_{xy} \neq p_x \otimes p_y$; that is, we want to know if the two components of the pairs of samples are independent or dependent. Gretton et al. (2005, 2008) propose a nonparametric kernel test based on the Hilbert Schmidt Independence Criterion (HSIC). It can be estimated using the quadratic-time estimator proposed by Song et al. (2012, Equation 5), which is a fourth-order one-sample $U$-statistic
$$\widehat{\mathrm{HSIC}}_{k,\ell}(\mathbb{Z}_N) = \frac{1}{|\mathbf{i}^N_4|} \sum_{(i,j,r,s)\in\mathbf{i}^N_4} h^{\mathrm{HSIC}}_{k,\ell}(Z_i,Z_j,Z_r,Z_s) \qquad (3)$$
for characteristic kernels $k$ on $\mathbb{R}^{d_x}$ and $\ell$ on $\mathbb{R}^{d_y}$ (Gretton, 2015), and where for $z_a = (x_a,y_a) \in \mathbb{R}^{d_x}\times\mathbb{R}^{d_y}$, $a = 1,\dots,4$, we let
$$h^{\mathrm{HSIC}}_{k,\ell}(z_1,z_2,z_3,z_4) := \frac{1}{4}\, h^{\mathrm{MMD}}_k(x_1,x_2;x_3,x_4)\, h^{\mathrm{MMD}}_\ell(y_1,y_2;y_3,y_4). \qquad (4)$$

Goodness-of-fit testing. For this problem, we are given a model density $p$ on $\mathbb{R}^d$ and i.i.d. samples $\mathbb{Z}_N := (Z_i)_{1\le i\le N}$ drawn from a density $q$ on $\mathbb{R}^d$. The aim is again to test $H_0 : p = q$ against $H_1 : p \neq q$; that is, we want to know if the samples have been drawn from the model. Chwialkowski et al. (2016) and Liu et al. (2016) both construct a nonparametric goodness-of-fit test using the Kernel Stein Discrepancy (KSD). A quadratic-time KSD estimator can be computed as the second-order one-sample $U$-statistic
$$\widehat{\mathrm{KSD}}^2_{p,k}(\mathbb{Z}_N) := \frac{1}{|\mathbf{i}^N_2|} \sum_{(i,j)\in\mathbf{i}^N_2} h^{\mathrm{KSD}}_{k,p}(Z_i,Z_j), \qquad (5)$$
where the Stein kernel $h^{\mathrm{KSD}}_{k,p} : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ is defined as
$$h^{\mathrm{KSD}}_{k,p}(x,y) := \nabla\log p(x)^\top \nabla\log p(y)\, k(x,y) + \nabla\log p(y)^\top \nabla_x k(x,y) + \nabla\log p(x)^\top \nabla_y k(x,y) + \sum_{i=1}^d \frac{\partial^2}{\partial x_i\,\partial y_i} k(x,y). \qquad (6)$$
In order to guarantee consistency of the Stein goodness-of-fit test (Chwialkowski et al., 2016, Theorem 2.2), we assume that the kernel $k$ is $C_0$-universal (Carmeli et al., 2010, Definition 4.1) and that
$$\mathbb{E}_q\big[h^{\mathrm{KSD}}_{k,p}(Z,Z)\big] < \infty \quad\text{and}\quad \mathbb{E}_q\Big[\big\|\nabla\log\big(p(Z)/q(Z)\big)\big\|_2^2\Big] < \infty. \qquad (7)$$

¹All probability density functions in this paper are with respect to the Lebesgue measure.
²We use the notation $a \lesssim b$ when there exists a constant $C > 0$ such that $a \le Cb$. We similarly use the notation $\gtrsim$. We write $a \asymp b$ if $a \lesssim b$ and $a \gtrsim b$. We also use the convention that all constants are generically denoted by $C$, even though they might be different.

Quantile estimation. Multiple strategies have been proposed to estimate the quantiles of the test statistics under the null for these three tests. We primarily focus on the wild bootstrap approach (Chwialkowski et al., 2014), though our results also hold using a parametric bootstrap for the goodness-of-fit setting (Schrab et al., 2022). In Section 7, we show that the same uniform separation rates can be derived for HSIC quadratic-time tests using permutations instead of a wild bootstrap. More details on MMD, HSIC, KSD, and on quantile estimation are provided in Appendix B.

3 Incomplete U-statistics for MMD, HSIC and KSD

As presented above, the quadratic-time statistics for the two-sample (MMD), independence (HSIC) and goodness-of-fit (KSD) problems can be rewritten as $U$-statistics with kernels $h^{\mathrm{MMD}}_k$, $h^{\mathrm{HSIC}}_{k,\ell}$ and $h^{\mathrm{KSD}}_{k,p}$, respectively. The computational cost of tests based on these $U$-statistics grows quadratically with the sample size. When working with very large sample sizes, as is often the case in real-world uses of those tests, this quadratic cost can become very problematic, and faster alternative tests are better adapted to this 'big data' setting. Multiple linear-time kernel tests have been proposed in the three testing frameworks (see Section 1 for details). We construct computationally efficient variants of the aggregated kernel tests proposed by Fromont et al. (2013), Albert et al. (2022), Kim et al. (2022), and Schrab et al. (2021, 2022) for the three settings, with the aim of retaining the significant power advantages of the aggregation procedure observed for quadratic-time tests. To this end, we propose to replace the quadratic-time $U$-statistics presented in Equations (1), (3) and (5) with second-order incomplete $U$-statistics (Blom, 1976; Janson, 1984; Lee, 1990),
$$\overline{\mathrm{MMD}}^2_k(\mathbb{X}_m,\mathbb{Y}_n;\mathcal{D}_N) := \frac{1}{|\mathcal{D}_N|} \sum_{(i,j)\in\mathcal{D}_N} h^{\mathrm{MMD}}_k(X_i,X_j;Y_i,Y_j), \qquad (8)$$
$$\overline{\mathrm{HSIC}}_{k,\ell}(\mathbb{Z}_N;\mathcal{D}_{\lfloor N/2\rfloor}) := \frac{1}{|\mathcal{D}_{\lfloor N/2\rfloor}|} \sum_{(i,j)\in\mathcal{D}_{\lfloor N/2\rfloor}} h^{\mathrm{HSIC}}_{k,\ell}\big(Z_i, Z_j, Z_{i+\lfloor N/2\rfloor}, Z_{j+\lfloor N/2\rfloor}\big), \qquad (9)$$
$$\overline{\mathrm{KSD}}^2_{p,k}(\mathbb{Z}_N;\mathcal{D}_N) := \frac{1}{|\mathcal{D}_N|} \sum_{(i,j)\in\mathcal{D}_N} h^{\mathrm{KSD}}_{k,p}(Z_i,Z_j), \qquad (10)$$
where for the two-sample problem we let $N := \min(m,n)$, and where the design $\mathcal{D}_b$ is a subset of $\mathbf{i}^b_2$ (the set of all 2-tuples drawn without replacement from $\{1,\dots,b\}$). Note that $\mathcal{D}_{\lfloor N/2\rfloor} \subseteq \mathbf{i}^{\lfloor N/2\rfloor}_2 \subset \mathbf{i}^N_2$. The design can be deterministic. For example, for the two-sample problem with equal even sample sizes $m = n = N$, the deterministic design $\mathcal{D}_N = \{(2a-1, 2a) : a = 1,\dots,N/2\}$ corresponds to the MMD linear-time estimator proposed by Gretton et al. (2012a, Lemma 14).
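To make this construction concrete, the following minimal Python sketch evaluates the incomplete MMD estimator of Equation (8) with the deterministic pairwise design just described. The Gaussian kernel and all function names are illustrative assumptions of ours, not the paper's implementation (which is available in the repository linked above).

```python
import numpy as np

def h_mmd(k, x1, x2, y1, y2):
    # MMD core function of Equation (2): k(x1,x2) - k(x1,y2) - k(x2,y1) + k(y1,y2).
    return k(x1, x2) - k(x1, y2) - k(x2, y1) + k(y1, y2)

def incomplete_mmd2(X, Y, design, k):
    # Incomplete U-statistic of Equation (8): average h_mmd over the design pairs,
    # so the cost is O(|design|) rather than quadratic in the sample size.
    return float(np.mean([h_mmd(k, X[i], X[j], Y[i], Y[j]) for (i, j) in design]))

def gaussian_kernel(lam):
    # Gaussian kernel with bandwidth lam; an illustrative choice, the paper
    # works with general product kernels as in Equation (13).
    return lambda x, y: np.exp(-np.sum((x - y) ** 2) / (2.0 * lam ** 2))

rng = np.random.default_rng(0)
N, d = 200, 2
X = rng.normal(0.0, 1.0, (N, d))
Y = rng.normal(0.5, 1.0, (N, d))

# Deterministic linear-time design {(2a-1, 2a)} (0-indexed: (0,1), (2,3), ...),
# matching the MMD linear-time estimator of Gretton et al. (2012a, Lemma 14).
design = [(2 * a, 2 * a + 1) for a in range(N // 2)]
print(incomplete_mmd2(X, Y, design, gaussian_kernel(1.0)))
```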
For fixed design size, the elements of the design can also be chosen at random without replacement, in which case the estimators in Equations (8) to (10) become random quantities given the data. For generality purposes, the results presented in this paper hold for both deterministic and random (without replacement) design choices, while we focus on the deterministic design in our experiments. By fixing the design sizes in Equations (8) to (10) to be, for example,
$$|\mathcal{D}_N| = |\mathcal{D}_{\lfloor N/2\rfloor}| = cN \qquad (11)$$
for some small constant $c \in \mathbb{N}\setminus\{0\}$, we obtain incomplete $U$-statistics which can be computed in linear time. Note that by pairing the samples $Z_i := (X_i, Y_i)$, $i = 1,\dots,N$ for the MMD case, and $\widetilde Z_i := (Z_i, Z_{i+\lfloor N/2\rfloor})$, $i = 1,\dots,\lfloor N/2\rfloor$ for the HSIC case, we observe that all three incomplete $U$-statistics of second order have the same form, with only the kernel functions and the design differing. The motivation for defining the estimators in Equations (8) and (9) as incomplete $U$-statistics of order 2 (rather than of higher order) derives from the reasoning of Kim et al. (2022, Section 6) for permuted complete $U$-statistics for the two-sample and independence problems (see Appendix E.1).

4 Quantile and variance bounds for incomplete U-statistics

In this section, we derive upper quantile and variance bounds for a second-order incomplete degenerate $U$-statistic with a generic degenerate kernel $h$, for some design $\mathcal{D} \subseteq \mathbf{i}^N_2$, defined as
$$\overline U(\mathbb{Z}_N;\mathcal{D}) := \frac{1}{|\mathcal{D}|} \sum_{(i,j)\in\mathcal{D}} h(Z_i,Z_j).$$
We will use these results to bound the quantiles and variances of our three test statistics for our hypothesis tests in Section 5. The derived bounds are of independent interest. In the following lemma, building on the results of Lee (1990), we directly derive an upper bound on the variance of the incomplete $U$-statistic in terms of the sample size $N$ and of the design size $|\mathcal{D}|$.

Lemma 1. The variance of the incomplete $U$-statistic can be upper bounded in terms of the quantities $\sigma_1^2 := \mathrm{var}\big(\mathbb{E}[h(Z,Z') \mid Z']\big)$ and $\sigma_2^2 := \mathrm{var}(h(Z,Z'))$, with different bounds depending on the design choice. For deterministic (LHS) or random (RHS) design $\mathcal{D}$ and sample size $N$, we have
$$\mathrm{var}\big(\overline U\big) \lesssim \frac{N}{|\mathcal{D}|}\,\sigma_1^2 + \frac{1}{|\mathcal{D}|}\,\sigma_2^2 \qquad\text{and}\qquad \mathrm{var}\big(\overline U\big) \lesssim \frac{1}{N}\,\sigma_1^2 + \frac{1}{|\mathcal{D}|}\,\sigma_2^2.$$

The proof of Lemma 1 is deferred to Appendix F.2. We emphasize the fact that this variance bound also holds for random design with replacement, as considered by Blom (1976) and Lee (1990). For random design, we observe that if $|\mathcal{D}| \asymp N^2$ then the bound is $\sigma_1^2/N + \sigma_2^2/N^2$, which is the variance bound of the complete $U$-statistic (Albert et al., 2022, Lemma 10). If $N \lesssim |\mathcal{D}| \lesssim N^2$, the variance bound is $\sigma_1^2/N + \sigma_2^2/|\mathcal{D}|$, and if $|\mathcal{D}| \lesssim N$ it is $\sigma_2^2/|\mathcal{D}|$, since $\sigma_1^2 \le \sigma_2^2/2$ (Blom, 1976, Equation 2.1). Kim et al. (2022) develop exponential concentration bounds for permuted complete $U$-statistics, and Clémençon et al. (2013) study the uniform approximation of $U$-statistics by incomplete $U$-statistics. To the best of our knowledge, no quantile bounds have yet been obtained for incomplete $U$-statistics in the literature. While permutations are well-suited for complete $U$-statistics (Kim et al., 2022), using them with incomplete $U$-statistics results in having to compute new kernel values, which comes at an additional computational cost we would like to avoid. Restricting the set of permutations to those for which the kernel values have already been computed for the original incomplete $U$-statistic corresponds exactly to using a wild bootstrap (Schrab et al., 2021, Appendix B). Hence, we consider the wild bootstrapped second-order incomplete $U$-statistic
$$\overline U^{\,\epsilon}(\mathbb{Z}_N;\mathcal{D}) := \frac{1}{|\mathcal{D}|} \sum_{(i,j)\in\mathcal{D}} \epsilon_i \epsilon_j\, h(Z_i,Z_j) \qquad (12)$$
for i.i.d. Rademacher random variables $\epsilon_1,\dots,\epsilon_N$ with values in $\{-1,1\}$, for which we derive an exponential concentration bound (quantile bound). We note the in-depth work of Chwialkowski et al. (2014) on the wild bootstrap procedure for kernel tests, with applications to quadratic-time MMD and HSIC tests. We now provide exponential tail bounds for wild bootstrapped incomplete $U$-statistics.

Lemma 2. There exists some constant $C > 0$ such that, for every $t \ge 0$, we have
$$\mathbb{P}_\epsilon\Big(\overline U^{\,\epsilon} \ge t \;\Big|\; \mathbb{Z}_N, \mathcal{D}\Big) \le 2\exp\Big(-\frac{Ct}{A_{\mathrm{inc}}}\Big) \le 2\exp\Big(-\frac{Ct}{A}\Big),$$
where $A^2_{\mathrm{inc}} := |\mathcal{D}|^{-2} \sum_{(i,j)\in\mathcal{D}} h(Z_i,Z_j)^2$ and $A^2 := |\mathcal{D}|^{-2} \sum_{(i,j)\in\mathbf{i}^N_2} h(Z_i,Z_j)^2$.

Lemma 2 is proved in Appendix F.3. While the second bound in Lemma 2 is less tight, it has the benefit of not depending on the choice of design $\mathcal{D}$ but only on its size $|\mathcal{D}|$, which is usually fixed.

5 Efficient kernel tests using incomplete U-statistics

We now formally define the hypothesis tests obtained using the incomplete $U$-statistics with a wild bootstrap. This is done for fixed kernel bandwidths $\lambda \in (0,\infty)^{d_x}$, $\mu \in (0,\infty)^{d_y}$, for the kernels³
$$k_\lambda(x,y) := \prod_{i=1}^{d_x} \frac{1}{\lambda_i}\, K_i\Big(\frac{x_i - y_i}{\lambda_i}\Big), \qquad \ell_\mu(x,y) := \prod_{i=1}^{d_y} \frac{1}{\mu_i}\, L_i\Big(\frac{x_i - y_i}{\mu_i}\Big), \qquad (13)$$
for characteristic kernels $(x,y) \mapsto K_i(x-y)$, $(x,y) \mapsto L_i(x-y)$ on $\mathbb{R}\times\mathbb{R}$, for functions $K_i, L_i \in L^1(\mathbb{R}) \cap L^2(\mathbb{R})$ integrating to 1. We unify the notation for the three testing frameworks. For the two-sample and goodness-of-fit problems, we work only with $k_\lambda$ and have $d = d_x$. For the independence problem, we work with the two kernels $k_\lambda$ and $\ell_\mu$, and for ease of notation we let $d := d_x + d_y$ and $\lambda_{d_x+i} := \mu_i$ for $i = 1,\dots,d_y$. We also simply write $p := p_{xy}$ and $q := p_x \otimes p_y$. We let $\overline U_\lambda$ and $h_\lambda$ denote either $\overline{\mathrm{MMD}}^2_{k_\lambda}$ and $h^{\mathrm{MMD}}_{k_\lambda}$, or $\overline{\mathrm{HSIC}}_{k_\lambda,\ell_\mu}$ and $h^{\mathrm{HSIC}}_{k_\lambda,\ell_\mu}$, or $\overline{\mathrm{KSD}}^2_{p,k_\lambda}$ and $h^{\mathrm{KSD}}_{k_\lambda,p}$, respectively. We denote the design size of the incomplete $U$-statistics in Equations (8) to (10) by $L := |\mathcal{D}_N| = |\mathcal{D}_{\lfloor N/2\rfloor}|$.

³Our results are presented for bandwidth selection, but they hold in the more general setting of kernel selection, as considered by Schrab et al. (2022). The goodness-of-fit results hold for a wider range of kernels, including the IMQ (inverse multiquadric) kernel (Gorham and Mackey, 2017), as in Schrab et al. (2022).

For the three testing frameworks, we estimate the quantiles of the test statistics by simulating the null hypothesis using a wild bootstrap, as done in the case of complete $U$-statistics by Fromont et al. (2012) and Schrab et al. (2021) for the two-sample problem, and by Schrab et al. (2022) for the goodness-of-fit problem. This is done by considering the original test statistic $\overline U^{\,B_1+1}_\lambda := \overline U_\lambda$ together with $B_1$ wild bootstrapped incomplete $U$-statistics $\overline U^{\,1}_\lambda, \dots, \overline U^{\,B_1}_\lambda$ computed as in Equation (12), and estimating the $(1-\alpha)$-quantile with a Monte Carlo approximation
$$\hat q^{\,\lambda}_{1-\alpha} := \inf\Big\{ t \in \mathbb{R} : 1-\alpha \le \frac{1}{B_1+1} \sum_{b=1}^{B_1+1} \mathbf{1}\big(\overline U^{\,b}_\lambda \le t\big) \Big\} = \overline U^{\,\bullet\lceil B_1(1-\alpha)\rceil}_\lambda, \qquad (14)$$
where $\overline U^{\,\bullet 1}_\lambda \le \dots \le \overline U^{\,\bullet(B_1+1)}_\lambda$ are the sorted elements $\overline U^{\,1}_\lambda, \dots, \overline U^{\,B_1+1}_\lambda$. The test $\Delta^\lambda_\alpha$ is defined as rejecting the null if the original test statistic $\overline U_\lambda$ is greater than the estimated $(1-\alpha)$-quantile, that is, $\Delta^\lambda_\alpha(\mathbb{Z}_N) := \mathbf{1}\big(\overline U_\lambda(\mathbb{Z}_N) > \hat q^{\,\lambda}_{1-\alpha}\big)$. The resulting test has time complexity $O(B_1 L)$, where $L$ is the design size ($1 \le L \le N(N-1)$). We show in Proposition 1 that the test $\Delta^\lambda_\alpha$ has well-calibrated asymptotic level for goodness-of-fit testing, and well-calibrated non-asymptotic level for two-sample and independence testing. The proof of the latter non-asymptotic guarantee is based on the exchangeability of $\overline U^{\,1}_\lambda, \dots, \overline U^{\,B_1+1}_\lambda$ under the null hypothesis, along with the result of Romano and Wolf (2005, Lemma 1). A similar proof strategy can be found in Fromont et al. (2012, Proposition 2), Albert et al. (2022, Proposition 1), and Schrab et al. (2021, Proposition 1). The exchangeability of wild bootstrapped incomplete $U$-statistics for independence testing does not follow directly from the mentioned works. We show this through the interesting connection between $h^{\mathrm{HSIC}}_{k,\ell}$ and $\{h^{\mathrm{MMD}}_k, h^{\mathrm{MMD}}_\ell\}$; the proof is deferred to Appendix F.1.

Proposition 1. The test $\Delta^\lambda_\alpha$ has level $\alpha \in (0,1)$, i.e. $\mathbb{P}_{H_0}\big(\Delta^\lambda_\alpha(\mathbb{Z}_N) = 1\big) \le \alpha$. This holds non-asymptotically for the two-sample and independence cases, and asymptotically for goodness-of-fit.⁴

Having established the validity of the test $\Delta^\lambda_\alpha$, we now study power guarantees for it in terms of the $L^2$-norm of the difference in densities $\|p-q\|_2$. In Theorem 1, we show for the three tests that, if $\|p-q\|_2$ exceeds some threshold, we can guarantee high test power. For the two-sample and independence problems, we derive uniform separation rates (Baraud, 2002) over Sobolev balls
$$\mathcal{S}^s_d(R) := \Big\{ f \in L^1(\mathbb{R}^d) \cap L^2(\mathbb{R}^d) : \int_{\mathbb{R}^d} \|\xi\|_2^{2s}\, |\hat f(\xi)|^2 \,\mathrm{d}\xi \le (2\pi)^d R^2 \Big\}, \qquad (15)$$
with radius $R > 0$ and smoothness parameter $s > 0$, where $\hat f$ denotes the Fourier transform of $f$. The uniform separation rate over $\mathcal{S}^s_d(R)$ is the smallest value of $t$ such that, for any alternative with $\|p-q\|_2 > t$ and⁵ $p - q \in \mathcal{S}^s_d(R)$, the probability of type II error of $\Delta^\lambda_\alpha$ can be controlled by $\beta \in (0,1)$. Before presenting Theorem 1, we introduce further notation unified over the three testing frameworks; we define the integral transform $T_\lambda$ as
$$(T_\lambda f)(x) := \int_{\mathbb{R}^d} f(y)\, K_\lambda(x,y) \,\mathrm{d}y \qquad (16)$$
for $f \in L^2(\mathbb{R}^d)$, $x \in \mathbb{R}^d$, where $K_\lambda := k_\lambda$ for the two-sample problem, $K_\lambda := k_\lambda \otimes \ell_\mu$ for the independence problem, and $K_\lambda := h^{\mathrm{KSD}}_{k_\lambda,p}$ for the goodness-of-fit problem. Note that, for the two-sample and independence testing frameworks, since $K_\lambda$ is translation-invariant, the integral transform corresponds to a convolution. However, this is not true for the goodness-of-fit setting, as $h^{\mathrm{KSD}}_{k_\lambda,p}$ is not translation-invariant. We are now in a position to present our main contribution in Theorem 1: we derive power guarantee conditions for our tests using incomplete $U$-statistics, and uniform separation rates over Sobolev balls for the two-sample and independence settings.

⁴Level is non-asymptotic for the goodness-of-fit case using a parametric bootstrap (Schrab et al., 2022). For the goodness-of-fit setting, we also recall that the further assumptions in Equation (7) need to be satisfied.
⁵We stress that we only assume $p - q \in \mathcal{S}^s_d(R)$, and not $p, q \in \mathcal{S}^s_d(R)$ as considered by Li and Yuan (2019). Viewing $q$ as a perturbed version of $p$, we only require that the perturbation is smooth (i.e. lies in a Sobolev ball).

Theorem 1. Suppose that the assumptions in Appendix A.1 hold, and consider $\lambda \in (0,\infty)^d$.
(i) For sample size $N$ and design size $L$, if there exists some $C > 0$ such that
$$\|p-q\|_2^2 \ge \big\|(p-q) - T_\lambda(p-q)\big\|_2^2 + C\, \frac{N}{L}\, \ln(1/\alpha)\, \psi_{\lambda,2},$$
then $\mathbb{P}_{H_1}\big(\Delta^\lambda_\alpha(\mathbb{Z}_N) = 0\big) \le \beta$ (type II error), where $\psi_{\lambda,2} \lesssim 1/\sqrt{\lambda_1 \cdots \lambda_d}$ for MMD and HSIC.
(ii) Fix $R > 0$ and $s > 0$, and consider the bandwidths $\lambda^*_i := (N/L)^{2/(4s+d)}$ for $i = 1,\dots,d$. For MMD and HSIC, the uniform separation rate of $\Delta^{\lambda^*}_\alpha$ over the Sobolev ball $\mathcal{S}^s_d(R)$ is (up to a constant) $(L/N)^{-2s/(4s+d)}$.

The proof of Theorem 1 relies on the variance and quantile bounds presented in Lemmas 1 and 2, and also uses results of Albert et al. (2022) and Schrab et al. (2021, 2022) on complete $U$-statistics. The details can be found in Appendix F.4.
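To illustrate how Equations (12) and (14) fit together in the single-bandwidth test, here is a minimal Python sketch. It assumes the kernel values $h(Z_i, Z_j)$ have already been computed over the design pairs; the function name, interface, and the placeholder data in the demo are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def wild_bootstrap_test(h_vals, pairs, N, B1=500, alpha=0.05, seed=1):
    """Single-bandwidth test of Section 5: reject when the incomplete U-statistic
    exceeds the Monte Carlo (1 - alpha)-quantile of its wild bootstrap replicates.

    h_vals: array of kernel evaluations h(Z_i, Z_j), one per design pair.
    pairs:  integer array of shape (len(h_vals), 2) holding the design indices (i, j).
    """
    rng = np.random.default_rng(seed)
    U = h_vals.mean()               # original statistic, Equations (8)-(10)
    stats = [U]                     # U plays the role of the (B1 + 1)-th statistic
    for _ in range(B1):
        eps = rng.choice([-1.0, 1.0], size=N)      # i.i.d. Rademacher variables
        # Wild bootstrapped statistic of Equation (12): mean of eps_i eps_j h(Z_i, Z_j).
        stats.append(np.mean(eps[pairs[:, 0]] * eps[pairs[:, 1]] * h_vals))
    stats = np.sort(stats)
    # Monte Carlo (1 - alpha)-quantile over the exchangeable statistics, Equation (14).
    q = stats[int(np.ceil(B1 * (1 - alpha))) - 1]
    return int(U > q)               # 1 = reject the null, 0 = do not reject

# Demo with placeholder kernel evaluations (stand-ins for actual h(Z_i, Z_j) values).
rng = np.random.default_rng(0)
N = 100
pairs = np.array([(2 * a, 2 * a + 1) for a in range(N // 2)])
h_vals = rng.normal(size=len(pairs))
print(wild_bootstrap_test(h_vals, pairs, N))
```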
The power condition in Theorem 1 (i) corresponds to a variance-bias decomposition; for large bandwidths the bias term (first term) dominates, while for small bandwidths the variance term (second term, which also controls the quantile) dominates. While the power guarantees of Theorem 1 hold for any design (either deterministic or uniformly random without replacement) of fixed size $L$, the choice of design still influences the performance of the test in practice. The variance (but not its upper bound) depends on the choice of design; certain choices lead to minimum variance of the incomplete $U$-statistic (Lee, 1990, Section 4.3.2). The minimax (i.e. optimal) rate over the Sobolev ball $\mathcal{S}^s_d(R)$ is $N^{-2s/(4s+d)}$ for the two-sample (Li and Yuan, 2019, Theorem 5 (ii)) and independence (Albert et al., 2022, Theorem 4; Berrett et al., 2021, Corollary 5) problems. The rate for our incomplete $U$-statistic test with time complexity $O(B_1 L)$ has the same dependence in the exponent as the minimax rate; $(L/N)^{-2s/(4s+d)} = N^{-2s/(4s+d)}\,(N^2/L)^{2s/(4s+d)}$, where $L \lesssim N^2$, with $L$ the design size and $N$ the sample size.
• If $L \asymp N^2$ then the test runs in quadratic time and we recover exactly the minimax rate.
• If $N \lesssim L \lesssim N^2$ then the rate still converges to 0; there is a trade-off between the cost $\lesssim (N^2/L)^{2s/(4s+d)}$ incurred in the minimax rate and the computational efficiency $O(B_1 L)$.
• If $L \lesssim N$ then there is no guarantee that the rate converges to 0.
To summarize, the tests we propose have computational cost $O(B_1 L)$, which can be specified by the user through the choice of the number of wild bootstraps $B_1$ and of the design size $L$ (as a function of the sample size $N$). There is a trade-off between test power and computational cost. We provide theoretical rates in terms of $L$ and $N$, working up to a constant. The rate is minimax optimal in the case where $L$ grows quadratically with $N$. We quantify exactly how, as the computational cost decreases from quadratic to linear in the sample size, the rate deteriorates gradually from being minimax optimal to not being guaranteed to converge to zero. In our experiments, we use a design size which grows linearly with the sample size in order to compare our tests against other linear-time tests in the literature. The assumption guaranteeing that the rate converges to 0 is not satisfied in this setting; however, it would be satisfied for any faster growth of the design size (e.g. $L \asymp N \log\log N$).

6 Efficient aggregated kernel tests using incomplete U-statistics

We now introduce our aggregated tests that combine single tests with different bandwidths. Our aggregation scheme is similar to those of Fromont et al. (2013), Albert et al. (2022) and Schrab et al. (2021, 2022), and can yield a test which is adaptive to the unknown smoothness parameter $s$ of the Sobolev ball $\mathcal{S}^s_d(R)$, at a relatively low price. Let $\Lambda$ be a finite collection of bandwidths, $(w_\lambda)_{\lambda\in\Lambda}$ be associated weights satisfying $\sum_{\lambda\in\Lambda} w_\lambda \le 1$, and $u_\alpha$ be some correction term defined shortly in Equation (17). Then, using the incomplete $U$-statistic $\overline U_\lambda$, we define our aggregated test $\Delta^\Lambda_\alpha$ as
$$\Delta^\Lambda_\alpha(\mathbb{Z}_N) := \mathbf{1}\Big(\overline U_\lambda(\mathbb{Z}_N) > \hat q^{\,\lambda}_{1-u_\alpha w_\lambda} \text{ for some } \lambda \in \Lambda\Big).$$
The levels of the single tests are weighted and adjusted with a correction term
$$u_\alpha := \sup_{B_3}\bigg\{ u \in \Big(0, \min_{\lambda\in\Lambda} w_\lambda^{-1}\Big) : \frac{1}{B_2} \sum_{b=1}^{B_2} \mathbf{1}\Big( \max_{\lambda\in\Lambda}\Big( \widetilde U^{\,b}_\lambda - \overline U^{\,\bullet\lceil B_1(1-u w_\lambda)\rceil}_\lambda \Big) > 0 \Big) \le \alpha \bigg\}, \qquad (17)$$
where the wild bootstrapped incomplete $U$-statistics $\widetilde U^{\,1}_\lambda, \dots, \widetilde U^{\,B_2}_\lambda$ computed as in Equation (12) are used to perform a Monte Carlo approximation of the probability under the null, and where the supremum is estimated using $B_3$ steps of the bisection method. Proposition 1, along with the reasoning of Schrab et al. (2021, Proposition 8), ensures that $\Delta^\Lambda_\alpha$ has non-asymptotic level $\alpha$ for the two-sample and independence cases, and asymptotic level $\alpha$ for the goodness-of-fit case. We refer to the three aggregated tests constructed using incomplete $U$-statistics as MMDAggInc, HSICAggInc and KSDAggInc. The computational complexity of those tests is $O(|\Lambda|(B_1+B_2)L)$, which means, for example, that if $L \asymp N$ as in Equation (11), the tests run efficiently in linear time in the sample size. We formally record error guarantees of $\Delta^\Lambda_\alpha$ and derive uniform separation rates over Sobolev balls.

Theorem 2. Suppose that the assumptions in Appendix A.2 hold, and consider a collection $\Lambda$.
(i) For sample size $N$ and design size $L$, if there exists some $C > 0$ such that
$$\|p-q\|_2^2 \ge \min_{\lambda\in\Lambda}\Big( \big\|(p-q) - T_\lambda(p-q)\big\|_2^2 + C\, \frac{N}{L}\, \ln\big(1/(\alpha w_\lambda)\big)\, \psi_{\lambda,2} \Big),$$
then $\mathbb{P}_{H_1}\big(\Delta^\Lambda_\alpha(\mathbb{Z}_N) = 0\big) \le \beta$ (type II error), where $\psi_{\lambda,2} \lesssim 1/\sqrt{\lambda_1\cdots\lambda_d}$ for MMD and HSIC.
(ii) Assume $L > N$ so that $\ln(\ln(L/N))$ is well-defined. Consider the collections of bandwidths and weights (independent of the parameters $s$ and $R$ of the Sobolev ball $\mathcal{S}^s_d(R)$)
$$\Lambda := \Big\{ (2^{-\ell},\dots,2^{-\ell}) \in (0,\infty)^d : \ell \in \Big\{1,\dots,\Big\lceil \tfrac{2}{d}\log_2\Big(\tfrac{L/N}{\ln(\ln(L/N))}\Big)\Big\rceil\Big\}\Big\}, \qquad w_\lambda := \frac{6}{\pi^2 \ell^2}.$$
For the two-sample and independence problems, the uniform separation rate of $\Delta^\Lambda_\alpha$ over the Sobolev balls $\{\mathcal{S}^s_d(R) : R > 0, s > 0\}$ is (up to a constant)
$$\Big(\frac{L/N}{\ln(\ln(L/N))}\Big)^{-2s/(4s+d)}.$$

The extension from Theorem 1 to Theorem 2 has been proved for complete $U$-statistics in the two-sample (Fromont et al., 2013; Schrab et al., 2021), independence (Albert et al., 2022) and goodness-of-fit (Schrab et al., 2022) testing frameworks. The proof of Theorem 2 follows with the same reasoning by simply replacing $N$ with $L/N$ as we work with incomplete $U$-statistics; this 'replacement' is theoretically justified by Theorem 1. Theorem 2 shows that the aggregated test $\Delta^\Lambda_\alpha$ is adaptive over the Sobolev balls $\{\mathcal{S}^s_d(R) : R > 0, s > 0\}$: the test $\Delta^\Lambda_\alpha$ does not depend on the unknown smoothness parameter $s$ (unlike $\Delta^{\lambda^*}_\alpha$ in Theorem 1), and achieves the minimax rate, up to an iterated logarithmic factor, and up to the cost incurred for the efficiency of the test (i.e. $L/N$ instead of $N$).

7 Minimax optimal permuted quadratic-time aggregated independence test

Considering Theorem 2 with our incomplete $U$-statistic with full design $\mathcal{D} = \mathbf{i}^N_2$, for which $L \asymp N^2$, we have proved that the quadratic-time two-sample and independence aggregated tests using a wild bootstrap achieve the rate $(N/\ln(\ln N))^{-2s/(4s+d)}$ over the Sobolev balls $\{\mathcal{S}^s_d(R) : R > 0, s > 0\}$. This is the minimax rate (Li and Yuan, 2019; Albert et al., 2022), up to an iterated logarithmic term. For the two-sample problem, Kim et al. (2022) and Schrab et al. (2021) show that this optimality result also holds when using complete $U$-statistics with permutations. Whether the equivalent statement for the independence test with permutations holds has not yet been addressed; the rate can be proved using theoretical (unknown) quantiles with a Gaussian kernel (Albert et al., 2022), but has not yet been proved using permutations. Kim et al. (2022, Proposition 8.7) consider this problem, again using a Gaussian kernel, but they do not obtain the correct dependence on $\alpha$ (i.e. they obtain $\alpha^{-1/2}$ rather than $\ln(1/\alpha)$), hence they cannot recover the desired rate. As pointed out by Kim et al. (2022, Section 8): 'It remains an open question as to whether [the power guarantee] continues to hold when $\alpha^{-1/2}$ is replaced by $\ln(1/\alpha)$.' We now prove that we can improve the $\alpha$-dependence to $\ln(1/\alpha)^{3/2}$ for any bounded kernel of the form of Equation (13), and that this allows us to obtain the desired rate over the Sobolev balls $\{\mathcal{S}^s_d(R) : R > 0, s > d/4\}$. The assumption $s > d/4$ imposes a stronger smoothness restriction on $p - q \in \mathcal{S}^s_d(R)$, which is similarly considered by Li and Yuan (2019).

Theorem 3. Consider the quadratic-time independence test using the complete $U$-statistic HSIC estimator with a quantile estimated using permutations, as done by Kim et al. (2022, Proposition 8.7), with kernels as in Equation (13) for bounded functions $K_i$ and $L_j$ for $i = 1,\dots,d_x$, $j = 1,\dots,d_y$.
(i) Suppose that the assumptions in Appendix A.1 hold. For fixed $R > 0$, $s > d/4$, and bandwidths $\lambda^*_i := N^{-2/(4s+d)}$ for $i = 1,\dots,d$, the probability of type II error of the test is controlled by $\beta$ when
$$\|p-q\|_2^2 \ge \big\|(p-q) - T_{\lambda^*}(p-q)\big\|_2^2 + C\, \frac{1}{N}\, \frac{\ln(1/\alpha)^{3/2}}{\sqrt{\lambda^*_1 \cdots \lambda^*_d}}$$
for some constant $C > 0$. The uniform separation rate over the Sobolev ball $\mathcal{S}^s_d(R)$ is, up to a constant, $N^{-2s/(4s+d)}$.
(ii) Suppose that the assumptions in Appendix A.2 hold. The uniform separation rate over the Sobolev balls $\{\mathcal{S}^s_d(R) : R > 0, s > d/4\}$ is $(N/\ln(\ln N))^{-2s/(4s+d)}$, up to a constant, with the collections
$$\Lambda := \Big\{ (2^{-\ell},\dots,2^{-\ell}) \in (0,\infty)^d : \ell \in \Big\{1,\dots,\Big\lceil \tfrac{2}{d}\log_2\Big(\tfrac{N}{\ln(\ln N)}\Big)\Big\rceil\Big\}\Big\}, \qquad w_\lambda := \frac{6}{\pi^2 \ell^2}.$$

The proof of Theorem 3, in Appendix F.5, uses the exponential concentration bound of Kim et al. (2022, Theorem 6.3) for permuted complete $U$-statistics. Another possible approach to obtain the correct dependency on $\alpha$ is to employ the sample-splitting method proposed by Kim et al. (2022, Section 8.3) in order to transform the independence problem into a two-sample problem. While this indirect approach leads to a logarithmic factor in $\alpha$, the practical power would be suboptimal due to an inefficient use of the data from sample splitting. Theorem 3 (i) shows that a $\ln(1/\alpha)^{3/2}$ dependence is achieved by the more practical permutation-based HSIC test. Theorem 3 (ii) demonstrates that this leads to a minimax optimal rate for the aggregated HSIC test, up to the $\ln(\ln N)$ cost for adaptivity.

8 Experiments

For the two-sample problem, we consider testing samples drawn from a uniform density on $[0,1]^d$ against samples drawn from a perturbed uniform density. For the independence problem, the joint density is a perturbed uniform density on $[0,1]^{d_x+d_y}$; the marginals are then simply uniform densities. Those perturbed uniform densities can be shown to lie in Sobolev balls (Li and Yuan, 2019; Albert et al., 2022), to which our tests are adaptive. For the goodness-of-fit problem, we use a Gaussian-Bernoulli Restricted Boltzmann Machine, as first considered by Liu et al. (2016) in this testing framework. We use collections of 21 bandwidths for MMD and KSD, and of 25 bandwidth pairs for HSIC; more details on the experiments (e.g. model and test parameters) are presented in Appendix C. We consider our incomplete aggregated tests MMDAggInc, HSICAggInc and KSDAggInc, with a parameter $R \in \{1,\dots,N-1\}$ which fixes the deterministic design to consist of the first $R$ subdiagonals of the $N \times N$ matrix, i.e. $\mathcal{D} := \{(i, i+r) : i = 1,\dots,N-r, \text{ for } r = 1,\dots,R\}$, with size $|\mathcal{D}| = RN - R(R+1)/2$. We run our incomplete tests with $R \in \{1, 100, 200\}$, and also the complete test using the full design $\mathcal{D} = \mathbf{i}^N_2$; a sketch of this design construction is given below.
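For concreteness, the experimental design above can be generated as follows. This is a minimal sketch; the helper name is ours, not from the paper's codebase, and indices are 0-based rather than the paper's 1-based convention.

```python
def subdiagonal_design(N, R):
    # First R subdiagonals of the N x N matrix:
    # D = {(i, i + r) : i = 1, ..., N - r, for r = 1, ..., R} (0-indexed here),
    # of size R*N - R*(R + 1)/2; taking R = N - 1 yields all pairs i < j,
    # which corresponds to the full design for a symmetric kernel h.
    return [(i, i + r) for r in range(1, R + 1) for i in range(N - r)]

D = subdiagonal_design(N=6, R=2)
print(D)        # [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 2), (1, 3), (2, 4), (3, 5)]
print(len(D))   # 2*6 - 2*3/2 = 9
```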
We compare their performances with current linear-time state-of-the-art tests: ME, SCF, FSIC and FSSD (Jitkrittum et al., 2016, 2017a,b), which evaluate the witness functions at a finite set of locations chosen to maximize the power; Cauchy RFF (random Fourier feature) and L1 IMQ (Huggins and Mackey, 2018), which are random feature Stein discrepancies; LSD (Grathwohl et al., 2020), which uses a neural network to learn the Stein discrepancy; and OST PSI (Kübler et al., 2020), which performs kernel selection using post-selection inference. Similar trends are observed across all our experiments in Figure 1, for the three testing frameworks, when varying the sample size, the dimension, and the difficulty of the problem (scale of perturbations or noise level). The linear-time tests AggInc R = 200 almost match the power obtained by the quadratic-time tests AggCom in all settings (except in Figure 1 (i), where the difference is larger), while being computationally much more efficient, as can be seen in Figure 1 (d, h, l). The incomplete tests with R = 100 have power only slightly below that of the tests using R = 200, and run roughly twice as fast (Figure 1 (d, h, l)). In all experiments, those three tests (AggInc R = 100, 200 and AggCom) have significantly higher power than the linear-time tests which optimize test locations (ME, SCF, FSIC and FSSD); in the two-sample case the aggregated tests run faster for small sample sizes but slower for large sample sizes, in the independence case the aggregated tests run much faster, and in the goodness-of-fit case FSSD runs faster. While both types of tests are linear, we note that the runtimes of the tests of Jitkrittum et al. (2016, 2017a,b) increase more slowly with the sample size than those of our aggregated tests with R = 100, 200, but a fixed computational cost is incurred for their optimization step, even for small sample sizes. In the goodness-of-fit framework, L1 IMQ performs similarly to FSSD, which is in line with the results presented by Huggins and Mackey (2018, Figure 4d), who consider the same experiment. All other goodness-of-fit tests (except KSDAggInc R = 1) achieve much higher test power. Cauchy RFF and KSDAggInc R = 200 obtain similar power in almost all the experiments. While KSDAggInc R = 200 runs much faster in the experiments presented,⁶ it seems that the KSDAggInc runtimes increase more steeply with the sample size than the Cauchy RFF and L1 IMQ runtimes (see Appendix D.5 for details). LSD matches the power of KSDAggInc R = 100 when varying the noise level in Figure 1 (k) (KSDAggInc R = 200 has higher power), and when varying the hidden dimension in Figure 1 (j), where dx = 100. When varying the sample size in Figure 1 (i), both KSDAggInc tests with R = 100, 200 achieve much higher power than LSD. Unsurprisingly, AggInc R = 1, which runs much faster than all the aforementioned tests, has low power in every experiment. For the two-sample problem, it obtains slightly higher power than OST PSI, which runs even faster. We include more experiments in Appendix D: we present experiments on the MNIST dataset (the same trends are observed), we use different collections of bandwidths, we verify that all tests have well-calibrated levels, and we illustrate the benefits of the aggregation procedure.

9 Acknowledgements

Antonin Schrab acknowledges support from U.K. Research and Innovation (EP/S021566/1). Ilmun Kim acknowledges support from the Yonsei University Research Fund of 2021-22-0332, and from the Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education (2022R1A4A1033384). Benjamin Guedj acknowledges partial support by the U.S. Army Research Laboratory and the U.S. Army Research Office, and by the U.K. Ministry of Defence and the U.K. Engineering and Physical Sciences Research Council (EP/R013616/1), and by the French National Agency for Research (ANR-18-CE40-0016-01 & ANR-18-CE23-0015-02). Arthur Gretton acknowledges support from the Gatsby Charitable Foundation.

⁶The runtimes in Figure 1 (d, h, l) can also vary due to the different implementations of the respective authors.
1. What is the main contribution of the paper regarding nonparametric two-sample, independence, and goodness-of-fit tests?
2. What are the strengths of the paper, particularly in its theoretical analysis and adaptive estimator?
3. What are the weaknesses of the paper, such as notation issues and lack of clarity in certain sections?
4. Do you have any questions or concerns regarding the paper's distinctions from prior works?
5. Could the authors provide more explanation or references regarding the motivation for defining the estimators of order 2?
6. Can the authors clarify the improvement provided by Theorem 3 compared to existing results for independence testing?
7. Are there any further limitations or suggestions for future work that could be discussed in the paper?
Summary Of The Paper
This paper studies a family of nonparametric two-sample, independence, and goodness-of-fit tests based on incomplete kernel-based U-statistics, for which it proves validity (Proposition 1) as well as guarantees on power, assuming the true densities lie in a Sobolev space and are sufficiently well separated in $L_2$ distance. The power guarantees are initially proven for a statistic depending on the true smoothness of the Sobolev space (Theorem 1), but Theorem 2 extends this to an estimate that is adaptive to unknown smoothness. Theorem 3 shows that, compared to existing results for independence testing, a tighter bound, with better dependence on the type I error probability, can be obtained. Finally, some experiments demonstrate how the performance of the proposed estimators varies with hyperparameters and how this compares with some other linear-time nonparametric tests.
Strengths And Weaknesses
Strengths: This paper provides much more compelling theoretical results (minimax optimality, especially for an adaptive estimator (Theorem 2)) than most related work on two-sample testing, which usually only shows consistency. It's also nice that a unified but rigorous discussion is given for the closely related problems of two-sample, independence, and goodness-of-fit testing.
Weaknesses: The paper has a lot of notation that is very similar or overloaded, but not explained or disambiguated near where it is used. For example, $L$ is defined just before Line 177 as the design size of the incomplete U-statistic, but is also used as a kernel (in Eq. (12)) and for $L^p$ spaces (in Eq. (17)). This made it a bit hard for me to follow the paper's notation. I think it would help if the paper was a bit more explicit (even redundant) with explaining its notation near where it is used (e.g., reiterating "where L is the design size" after Theorem 1). The paper also is not particularly clear about its distinctions from prior work (see questions below, although I was able to piece this together from various parts of the paper and by reading some of the references).
Questions
Lemma 1: The variance bound for random design includes a $\frac{1}{|\mathcal{D}_r|} + \frac{1}{N^2}$ term. Since $|\mathcal{D}_r| \le N^2$, isn't the second term redundant (i.e., can't it be absorbed into the constant $C$)?
Lines 144-146, "The motivation for defining the estimators... of order 2 (rather than of higher order) derives from the reasoning of Kim et al. (2022, Section 6)...": I didn't quite understand this sentence. I skimmed Section 6 of Kim et al. (2022), and, while they do indeed study U-statistics of order 2, the motivation for order 2 (rather than of higher order) wasn't obvious to me. Could the authors clarify?
I found the motivation for Theorem 3 (lines 241-256) a bit hard to understand. Am I understanding correctly that the improved (logarithmic rather than polynomial) dependence on $1/\alpha$ has been previously shown for two-sample testing but not for independence testing? Later on (Lines 262-263), the paper says "As discussed by Kim et al. (2022, Section 8.3), their proposed sample-splitting method can also be used to obtain the correct dependency on $\alpha$." So what exactly is the new contribution of Theorem 3?
Figure 1: The first row of plots includes a green curve that isn't included in the legend. What is this? Also, the paper discusses results for some methods (e.g., OST PSI) for which I didn't see any results in Figure 1. Where are these results reported?
Could the authors elaborate on the advantages of the proposed tests over previous tests that have been shown to be minimax optimal (e.g., the Gaussian-kernel-based tests of Li and Yuan (2019))?
Limitations
The paper would definitely benefit from further discussion of the limitations of its present results and suggestions for future work. However, given space limitations, I don't think further discussion of this is strictly necessary for acceptance.
NIPS
Title Efficient Aggregated Kernel Tests using Incomplete $U$-statistics Abstract We propose a series of computationally efficient nonparametric tests for the twosample, independence, and goodness-of-fit problems, using the Maximum Mean Discrepancy (MMD), Hilbert Schmidt Independence Criterion (HSIC), and Kernel Stein Discrepancy (KSD), respectively. Our test statistics are incomplete U -statistics, with a computational cost that interpolates between linear time in the number of samples, and quadratic time, as associated with classical U -statistic tests. The three proposed tests aggregate over several kernel bandwidths to detect departures from the null on various scales: we call the resulting tests MMDAggInc, HSICAggInc and KSDAggInc. This procedure provides a solution to the fundamental kernel selection problem as we can aggregate a large number of kernels with several bandwidths without incurring a significant loss of test power. For the test thresholds, we derive a quantile bound for wild bootstrapped incomplete U -statistics, which is of independent interest. We derive non-asymptotic uniform separation rates for MMDAggInc and HSICAggInc, and quantify exactly the tradeoff between computational efficiency and the attainable rates: this result is novel for tests based on incomplete U -statistics, to our knowledge. We further show that in the quadratic-time case, the wild bootstrap incurs no penalty to test power over the more widespread permutation-based approach, since both attain the same minimax optimal rates (which in turn match the rates that use oracle quantiles). We support our claims with numerical experiments on the trade-off between computational efficiency and test power. In all three testing frameworks, the linear-time versions of our proposed tests perform at least as well as the current linear-time state-of-the-art tests. 1 Introduction Nonparametric hypothesis testing is a fundamental field of statistics, and is widely used by the machine learning community and practitioners in numerous other fields, due to the increasing availability of huge amounts of data. When dealing with large-scale datasets, computational cost can quickly emerge as a major issue which might prevent from using expensive tests in practice; constructing efficient tests is therefore crucial for their real-world applications. In this paper, we construct kernel-based aggregated tests using incomplete U -statistics (Blom, 1976) for the two-sample, independence and 36th Conference on Neural Information Processing Systems (NeurIPS 2022). goodness-of-fit problems (which we detail in Section 2). The quadratic-time aggregation procedure has been shown to result in powerful tests (Fromont et al., 2012; Fromont et al., 2013; Albert et al., 2022; Schrab et al., 2021, 2022), we propose efficient variants of these well-studied tests, with computational cost interpolating from the classical quadratic-time regime to the linear-time one. Related work: aggregated tests. Kernel selection (or kernel bandwidth selection) is a fundamental problem in nonparametric hypothesis testing as this choice has a major influence on test power. Motivated by this problem, non-asymptotic aggregated tests, which combine tests with different kernel bandwidths, have been proposed for the two-sample (Fromont et al., 2012, 2013; Kim et al., 2022; Schrab et al., 2021), independence (Albert et al., 2022; Kim et al., 2022), and goodness-of-fit (Schrab et al., 2022) testing frameworks. Li and Yuan (2019) and Balasubramanian et al. 
(2021) construct similar aggregated tests for these three problems, with the difference that they work in the asymptotic regime. All the mentioned works study aggregated tests in terms of uniform separation rates (Baraud, 2002). Those rates depend on the sample size and satisfy the following property: if the L2-norm difference between the densities is greater than the uniform separation rate, then the test is guaranteed to have high power. All aggregated kernel-based tests in the existing literature have been studied using U -statistic estimators (Hoeffding, 1992) with tests running in quadratic time. Related work: efficient kernel tests. Several linear-time kernel tests have been proposed for those three testing frameworks. Those include tests using classical linear-time estimators with median bandwidth (Gretton et al., 2012a; Liu et al., 2016) or selecting an optimal bandwidth on held-out data to maximize power (Gretton et al., 2012b), tests using eigenspectrum approximation (Gretton et al., 2009), tests using post-selection inference for adaptive kernel selection with incomplete U -statistics (Yamada et al., 2018, 2019; Lim et al., 2019, 2020; Kübler et al., 2020; Freidling et al., 2021), tests which use a Nyström approximation of the asymptotic null distribution (Zhang et al., 2018; Cherfaoui et al., 2022), random Fourier features tests (Zhang et al., 2018; Zhao and Meng, 2015; Chwialkowski et al., 2015), tests based on random feature Stein discrepancies (Huggins and Mackey, 2018), the adaptive tests which use features selected on held-out data to maximize power (Jitkrittum et al., 2016, 2017a,b), as well as tests using neural networks to learn a discrepancy (Grathwohl et al., 2020). We also point out the very relevant works of Kübler et al. (2022) on a quadratic-time test, and of Ho and Shieh (2006), Zaremba et al. (2013) and Zhang et al. (2018) on the use of block U -statistics with complexity O(N1.5) for block size p N where N is the sample size. Contributions and outline. In Section 2, we present the three testing problems with their associated well-known quadratic-time kernel-based estimators (MMD, HSIC, KSD) which are U -statistics. We introduce three associated incomplete U -statistics estimators, which can be computed efficiently, in Section 3. We then provide quantile and variance bounds for generic incomplete U -statistics using a wild bootstrap, in Section 4. We study the level and power guarantees at every finite sample sizes for our efficient tests using incomplete U -statistics for a fixed kernel bandwidth, in Section 5. In particular, we obtain non-asymptotic uniform separation rates for the two-sample and independence tests over a Sobolev ball, and show that these rates are minimax optimal up to the cost incurred for efficiency of the test. In Section 6, we propose our efficient aggregated tests which combine tests with multiple kernel bandwidths. We prove that the proposed tests are adaptive over Sobolev balls and achieve the same uniform separation rate (up to an iterated logarithmic term) as the tests with optimal bandwidths. As a result of our analysis, we have shown minimax optimality over Sobolev balls of the quadratictime tests using quantiles estimated with a wild bootstrap. Whether this optimality result also holds for tests using the more general permutation-based procedure to approximate HSIC quantiles, was an open problem formulated by Kim et al. (2022), we prove that it indeed holds in Section 7. 
As observed in Section 8, the linear-time versions of MMDAggInc, HSICAggInc and KSDAggInc retain high power, and either outperform or match the power of other state-of-the-art linear-time kernel tests. Our implementation of the tests and code for reproducibility of the experiments are available online under the MIT license: https://github.com/antoninschrab/agginc-paper. 2 Background In this section, we briefly describe our main problems of interest, comprising the two-sample, independence and goodness-of-fit problems. We approach these problems from a nonparametric point of view using the kernel-based statistics: MMD, HSIC, and KSD. We briefly introduce original forms of these statistics, which can be computed in quadratic time, and also discuss ways of calibrating tests proposed in the literature. The three quadratic-time expressions are presented in Appendix B. Two-sample testing. In this problem, we are given independent samples Xm := (Xi)1im and Yn = (Yj)1jn, consisting of i.i.d. random variables with respective probability density functions1 p and q on Rd. We assume we work with balanced sample sizes, that is2 max(m,n) . min(m,n). We are interested in testing the null hypothesis H0 : p = q against the alternative H1 : p 6= q; that is, we want to know if the samples come from the same distribution. Gretton et al. (2012a) propose a nonparametric kernel test based on the Maximum Mean Discrepancy (MMD), a measure between probability distributions which uses a characteristic kernel k (Fukumizu et al., 2008; Sriperumbudur et al., 2011). It can be estimated using a quadratic-time estimator (Gretton et al., 2012a, Lemma 6) which, as noted by Kim et al. (2022), can be expressed as a two-sample U -statistic (both of second order) (Hoeffding, 1992), \MMD 2 k(Xm,Yn) = 1 im 2 in 2 X (i,i0)2im2 X (j,j0)2in2 hMMDk (Xi, Xi0 ;Yj , Yj0), (1) where iba with a b denotes the set of all a-tuples drawn without replacement from {1, . . . , b} so that iba = b · · · (b a+ 1), and where, for x1, x2, y1, y2 2 Rd, we let hMMDk (x1, x2; y1, y2) := k(x1, x2) k(x1, y2) k(x2, y1) + k(y1, y2). (2) Independence testing. In this problem, we have access to i.i.d. pairs of samples ZN := Zi 1iN = (Xi, Yi) 1iN with joint probability density pxy on Rdx⇥Rdy and marginals px on Rdx and py on Rdy . We are interested in testing H0 : pxy = px⌦py against H1 : pxy 6= px⌦py; that is, we want to know if two components of the pairs of samples are independent or dependent. Gretton et al. (2005, 2008) propose a nonparametric kernel test based on the Hilbert Schmidt Independence Criterion (HSIC). It can be estimated using the quadratic-time estimator proposed by Song et al. (2012, Equation 5) which is a fourth-order one-sample U -statistic \HSICk,`(ZN ) = 1 iN 4 X (i,j,r,s)2iN4 hHSICk,` (Zi, Zj , Zr, Zs) (3) for characteristic kernels k on Rdx and ` on Rdy (Gretton, 2015), and where for za = (xa, ya) 2 Rdx ⇥ Rdy , a = 1, . . . , 4, we let hHSICk,` (z1, z2, z3, z4) := 1 4 hMMDk (x1, x2;x3, x4)h MMD ` (y1, y2; y3, y4). (4) Goodness-of-fit testing. For this problem, we are given a model density p on Rd and i.i.d. samples ZN := (Zi)1iN drawn from a density q on Rd. The aim is again to test H0 : p = q against H1 : p 6= q; that is, we want to know if the samples have been drawn from the model. Chwialkowski et al. (2016) and Liu et al. (2016) both construct a nonparametric goodness-of-fit test using the Kernel Stein Discrepancy (KSD). 
A quadratic-time KSD estimator can be computed as the second-order one-sample U -statistic, [KSD 2 p,k(ZN ) := 1 iN 2 X (i,j)2iN2 hKSDk,p (Zi, Zj), (5) where the Stein kernel hKSDk,p : Rd ⇥ Rd ! R is defined as hKSDk,p (x, y) := r log p(x)>r log p(y) k(x, y) +r log p(y)>rxk(x, y) +r log p(x)>ryk(x, y) + dX i=1 @ @xi@yi k(x, y). (6) In order to guarantee consistency of the Stein goodness-of-fit test (Chwialkowski et al., 2016, Theorem 2.2), we assume that the kernel k is C0-universal (Carmeli et al., 2010, Definition 4.1) and that Eq h hKSDk,p (z, z) i < 1 and Eq " r log ✓ p(z) q(z) ◆ 2 2 # < 1. (7) 1All probability density functions in this paper are with respect to the Lebesgue measure. 2We use the notation a . b when there exists a constant C > 0 such that a Cb. We similarly use the notation &. We write a ⇣ b if a . b and a & b. We also use the convention that all constants are generically denoted by C, even though they might be different. Quantile estimation. Multiple strategies have been proposed to estimate the quantiles of test statistics under the null for these three tests. We primarily focus on the wild bootstrap approach (Chwialkowski et al., 2014), though our results also hold using a parametric bootstrap for the goodness-of-fit setting (Schrab et al., 2022). In Section 7, we show that the same uniform separation rates can be derived for HSIC quadratic-time tests using permutations instead of a wild bootstrap. More details on MMD, HSIC, KSD, and on quantile estimation are provided in Appendix B. 3 Incomplete U -statistics for MMD, HSIC and KSD As presented above, the quadratic-time statistics for the two-sample (MMD), independence (HSIC) and goodness-of-fit (KSD) problems can be rewritten as U -statistics with kernels hMMDk , h HSIC k,` and hKSDk,p , respectively. The computational cost of tests based on these U -statistics grows quadratically with the sample size. When working with very large sample sizes, as it is often the case in real-world uses of those tests, this quadratic cost can become very problematic, and faster alternative tests are better adapted to this ‘big data’ setting. Multiple linear-time kernel tests have been proposed in the three testing frameworks (see Section 1 for details). We construct computationally efficient variants of the aggregated kernel tests proposed by Fromont et al. (2013), Albert et al. (2022), Kim et al. (2022), and Schrab et al. (2021, 2022) for the three settings, with the aim of retaining the significant power advantages of the aggregation procedure observed for quadratic-time tests. To this end, we propose to replace the quadratic-time U -statistics presented in Equations (1), (3) and (5) with second-order incomplete U -statistics (Blom, 1976; Janson, 1984; Lee, 1990), MMD 2 k Xm,Yn;DN := 1 DN X (i,j)2DN hMMDk (Xi, Xj ;Yi, Yj), (8) HSICk,` ZN ;DbN/2c := 1 DbN/2c X (i,j)2DbN/2c hHSICk,` Zi, Zj , Zi+bN/2c, Zj+bN/2c , (9) KSD 2 p,k ZN ;DN := 1 DN X (i,j)2DN hKSDk,p (Zi, Zj), (10) where for the two-sample problem we let N := min(m,n), and where the design Db is a subset of ib2 (the set of all 2-tuples drawn without replacement from {1, . . . , b}). Note that DbN/2c ✓ i N/2 2 ⇢ iN 2 . The design can be deterministic. For example, for the two-sample problem with equal even sample sizes m = n = N , the deterministic design DN = {(2a 1, 2a) : a = 1, . . . , N/2} corresponds to the MMD linear-time estimator proposed by Gretton et al. (2012a, Lemma 14). 
For fixed design size, the elements of the design can also be chosen at random without replacement, in which case the estimators in Equations (8) to (10) become random quantities given the data. For generality purposes, the results presented in this paper hold for both deterministic and random (without replacement) design choices while we focus on the deterministic design in our experiments. By fixing the design sizes in Equations (8) to (10) to be, for example, DN = DbN/2c = cN (11) for some small constant c 2 N \ {0}, we obtain incomplete U -statistics which can be computed in linear time. Note that by pairing the samples Zi := (Xi, Yi), i = 1, . . . , N for the MMD case and eZi := Zi, Zi+bN/2c , i = 1, . . . , bN/2c for the HSIC case, we observe that all three incomplete U - statistics of second order have the same form, with only the kernel functions and the design differing. The motivation for defining the estimators in Equations (8) and (9) as incomplete U -statistics of order 2 (rather than of higher order) derives from the reasoning of Kim et al. (2022, Section 6) for permuted complete U -statistics for the two-sample and independence problems (see Appendix E.1). 4 Quantile and variance bounds for incomplete U -statistics In this section, we derive upper quantile and variance bounds for a second-order incomplete degenerate U -statistic with a generic degenerate kernel h, for some design D ✓ iN 2 , defined as U ZN ;D := 1 |D| X (i,j)2D h(Zi, Zj). We will use these results to bound the quantiles and variances of our three test statistics for our hypothesis tests in Section 5. The derived bounds are of independent interest. In the following lemma, building on the results of Lee (1990), we directly derive an upper bound on the variance of the incomplete U -statistic in terms of the sample size N and of the design size |D|. Lemma 1. The variance of the incomplete U -statistic can be upper bounded in terms of the quantities 2 1 := var E ⇥ h(Z,Z 0) Z 0 ⇤ and 2 2 := var(h(Z,Z 0)) with different bounds depending on the design choice. For deterministic (LHS) or random (RHS) design D and sample size N , we have var U . N|D| 2 1 + 1 |D| 2 2 and var U . 1 N 2 1 + 1 |D| 2 2 . The proof of Lemma 1 is deferred to Appendix F.2. We emphasize the fact that this variance bound also holds for random design with replacement, as considered by Blom (1976) and Lee (1990). For random design, we observe that if |D| ⇣ N2 then the bound is 2 1 /N + 2 2 /N2 which is the variance bound of the complete U -statistic (Albert et al., 2022, Lemma 10). If N . |D| . N2, the variance bound is 2 1 /N + 2 2 /|D|, and if |D| . N it is 2 2 /|D| since 2 1 2 2 /2 (Blom, 1976, Equation 2.1). Kim et al. (2022) develop exponential concentration bounds for permuted complete U -statistics, and Clémençon et al. (2013) study the uniform approximation of U -statistics by incomplete U -statistics. To the best of our knowledge, no quantile bounds have yet been obtained for incomplete U -statistics in the literature. While permutations are well-suited for complete U -statistics (Kim et al., 2022), using them with incomplete U -statistics results in having to compute new kernel values, which comes at an additional computational cost we would like to avoid. Restricting the set of permutations to those for which the kernel values have already been computed for the original incomplete U -statistic corresponds exactly to using a wild bootstrap (Schrab et al., 2021, Appendix B). 
Hence, restricting permutations in this way, we consider the wild bootstrapped second-order incomplete U-statistic

\[ \overline{U}^{\,\epsilon}(\mathbb{Z}_N;\mathcal{D}) := \frac{1}{|\mathcal{D}|} \sum_{(i,j)\in\mathcal{D}} \epsilon_i \epsilon_j\, h(Z_i,Z_j) \tag{12} \]

for i.i.d. Rademacher random variables $\epsilon_1,\dots,\epsilon_N$ with values in $\{-1,1\}$, for which we derive an exponential concentration bound (quantile bound). We note the in-depth work of Chwialkowski et al. (2014) on the wild bootstrap procedure for kernel tests with applications to quadratic-time MMD and HSIC tests. We now provide exponential tail bounds for wild bootstrapped incomplete U-statistics.

Lemma 2. There exists some constant $C > 0$ such that, for every $t \geq 0$, we have

\[ \mathbb{P}_\epsilon\Big(\overline{U}^{\,\epsilon} \geq t \,\Big|\, \mathbb{Z}_N,\mathcal{D}\Big) \leq 2\exp\Big(-C\,\frac{t}{A_{\mathrm{inc}}}\Big) \leq 2\exp\Big(-C\,\frac{t}{A}\Big), \]

where $A_{\mathrm{inc}}^2 := |\mathcal{D}|^{-2} \sum_{(i,j)\in\mathcal{D}} h(Z_i,Z_j)^2$ and $A^2 := |\mathcal{D}|^{-2} \sum_{(i,j)\in\mathbf{i}^N_2} h(Z_i,Z_j)^2$.

Lemma 2 is proved in Appendix F.3. While the second bound in Lemma 2 is less tight, it has the benefit of not depending on the choice of design $\mathcal{D}$ but only on its size $|\mathcal{D}|$, which is usually fixed.

5 Efficient kernel tests using incomplete U-statistics

We now formally define the hypothesis tests obtained using the incomplete U-statistics with a wild bootstrap. This is done for fixed kernel bandwidths $\lambda \in (0,\infty)^{d_x}$ and $\mu \in (0,\infty)^{d_y}$, for the kernels³

\[ k_\lambda(x,y) := \prod_{i=1}^{d_x} \frac{1}{\lambda_i} K_i\Big(\frac{x_i-y_i}{\lambda_i}\Big), \qquad \ell_\mu(x,y) := \prod_{i=1}^{d_y} \frac{1}{\mu_i} L_i\Big(\frac{x_i-y_i}{\mu_i}\Big), \tag{13} \]

for characteristic kernels $(x,y) \mapsto K_i(x-y)$ and $(x,y) \mapsto L_i(x-y)$ on $\mathbb{R}\times\mathbb{R}$, for functions $K_i, L_i \in L^1(\mathbb{R}) \cap L^2(\mathbb{R})$ integrating to 1. We unify the notation for the three testing frameworks. For the two-sample and goodness-of-fit problems, we work only with $k_\lambda$ and have $d = d_x$. For the independence problem, we work with the two kernels $k_\lambda$ and $\ell_\mu$, and for ease of notation we let $d := d_x + d_y$ and $\lambda_{d_x+i} := \mu_i$ for $i = 1,\dots,d_y$. We also simply write $p := p_{xy}$ and $q := p_x \otimes p_y$. We let $\overline{U}_\lambda$ and $h_\lambda$ denote either $\overline{\mathrm{MMD}}{}^2_{k_\lambda}$ and $h^{\mathrm{MMD}}_{k_\lambda}$, or $\overline{\mathrm{HSIC}}_{k_\lambda,\ell_\mu}$ and $h^{\mathrm{HSIC}}_{k_\lambda,\ell_\mu}$, or $\overline{\mathrm{KSD}}{}^2_{p,k_\lambda}$ and $h^{\mathrm{KSD}}_{k_\lambda,p}$, respectively. We denote the design size of the incomplete U-statistics in Equations (8) to (10) by $L := |\mathcal{D}_N| = |\mathcal{D}_{\lfloor N/2\rfloor}|$.

³Our results are presented for bandwidth selection, but they hold in the more general setting of kernel selection, as considered by Schrab et al. (2022). The goodness-of-fit results hold for a wider range of kernels including the IMQ (inverse multiquadric) kernel (Gorham and Mackey, 2017), as in Schrab et al. (2022).

For the three testing frameworks, we estimate the quantiles of the test statistics by simulating the null hypothesis using a wild bootstrap, as done in the case of complete U-statistics by Fromont et al. (2012) and Schrab et al. (2021) for the two-sample problem, and by Schrab et al. (2022) for the goodness-of-fit problem. This is done by considering the original test statistic $\overline{U}^{\,B_1+1}_\lambda := \overline{U}_\lambda$ together with $B_1$ wild bootstrapped incomplete U-statistics $\overline{U}^{\,1}_\lambda,\dots,\overline{U}^{\,B_1}_\lambda$ computed as in Equation (12), and estimating the $(1-\alpha)$-quantile with a Monte Carlo approximation

\[ \hat q^{\,\lambda}_{1-\alpha} := \inf\bigg\{ t \in \mathbb{R} : 1-\alpha \leq \frac{1}{B_1+1} \sum_{b=1}^{B_1+1} \mathbf{1}\big(\overline{U}^{\,b}_\lambda \leq t\big) \bigg\} = \overline{U}^{\,\bullet\lceil (B_1+1)(1-\alpha)\rceil}_\lambda, \tag{14} \]

where $\overline{U}^{\,\bullet 1}_\lambda \leq \cdots \leq \overline{U}^{\,\bullet (B_1+1)}_\lambda$ are the sorted elements $\overline{U}^{\,1}_\lambda,\dots,\overline{U}^{\,B_1+1}_\lambda$. The test $\Delta^\lambda_\alpha$ is defined as rejecting the null if the original test statistic is greater than the estimated $(1-\alpha)$-quantile, that is, $\Delta^\lambda_\alpha(\mathbb{Z}_N) := \mathbf{1}\big(\overline{U}_\lambda(\mathbb{Z}_N) > \hat q^{\,\lambda}_{1-\alpha}\big)$. The resulting test has time complexity $\mathcal{O}(B_1 L)$, where $L$ is the design size ($1 \leq L \leq N(N-1)$).

We show in Proposition 1 that the test $\Delta^\lambda_\alpha$ has well-calibrated asymptotic level for goodness-of-fit testing, and well-calibrated non-asymptotic level for two-sample and independence testing. The proof of the latter non-asymptotic guarantee is based on the exchangeability of $\overline{U}^{\,1}_\lambda,\dots,\overline{U}^{\,B_1+1}_\lambda$ under the null hypothesis, along with the result of Romano and Wolf (2005, Lemma 1). A similar proof strategy can be found in Fromont et al. (2012, Proposition 2), Albert et al. (2022, Proposition 1), and Schrab et al. (2021, Proposition 1). The exchangeability of wild bootstrapped incomplete U-statistics for independence testing does not follow directly from the mentioned works. We show this through the interesting connection between $h^{\mathrm{HSIC}}_{k,\ell}$ and $\{h^{\mathrm{MMD}}_k, h^{\mathrm{MMD}}_\ell\}$; the proof is deferred to Appendix F.1.

Proposition 1. The test $\Delta^\lambda_\alpha$ has level $\alpha \in (0,1)$, i.e. $\mathbb{P}_{H_0}\big(\Delta^\lambda_\alpha(\mathbb{Z}_N) = 1\big) \leq \alpha$. This holds non-asymptotically for the two-sample and independence cases, and asymptotically for goodness-of-fit.⁴

Having established the validity of the test $\Delta^\lambda_\alpha$, we now study power guarantees for it in terms of the $L^2$-norm of the difference in densities, $\|p - q\|_2$. In Theorem 1, we show for the three tests that, if $\|p - q\|_2$ exceeds some threshold, we can guarantee high test power. For the two-sample and independence problems, we derive uniform separation rates (Baraud, 2002) over Sobolev balls

\[ \mathcal{S}^s_d(R) := \bigg\{ f \in L^1\big(\mathbb{R}^d\big) \cap L^2\big(\mathbb{R}^d\big) : \int_{\mathbb{R}^d} \|\xi\|_2^{2s}\, \big|\hat f(\xi)\big|^2\, d\xi \leq (2\pi)^d R^2 \bigg\}, \tag{15} \]

with radius $R > 0$ and smoothness parameter $s > 0$, where $\hat f$ denotes the Fourier transform of $f$. The uniform separation rate over $\mathcal{S}^s_d(R)$ is the smallest value of $t$ such that, for any alternative with $\|p - q\|_2 > t$ and⁵ $p - q \in \mathcal{S}^s_d(R)$, the probability of type II error of $\Delta^\lambda_\alpha$ can be controlled by $\beta \in (0,1)$.

Before presenting Theorem 1, we introduce further notation unified over the three testing frameworks; we define the integral transform $T_\lambda$ as

\[ (T_\lambda f)(x) := \int_{\mathbb{R}^d} f(y)\, K_\lambda(x,y)\, dy \tag{16} \]

for $f \in L^2(\mathbb{R}^d)$ and $x \in \mathbb{R}^d$, where $K_\lambda := k_\lambda$ for the two-sample problem, $K_\lambda := k_\lambda \otimes \ell_\mu$ for the independence problem, and $K_\lambda := h^{\mathrm{KSD}}_{k_\lambda,p}$ for the goodness-of-fit problem. Note that, for the two-sample and independence testing frameworks, since $K_\lambda$ is translation-invariant, the integral transform corresponds to a convolution. However, this is not true for the goodness-of-fit setting, as $h^{\mathrm{KSD}}_{k_\lambda,p}$ is not translation-invariant. We are now in a position to present our main contribution in Theorem 1: we derive power guarantee conditions for our tests using incomplete U-statistics, and uniform separation rates over Sobolev balls for the two-sample and independence settings.

⁴Level is non-asymptotic for the goodness-of-fit case using a parametric bootstrap (Schrab et al., 2022). For the goodness-of-fit setting, we also recall that the further assumptions in Equation (7) need to be satisfied.
⁵We stress that we only assume $p - q \in \mathcal{S}^s_d(R)$ and not $p, q \in \mathcal{S}^s_d(R)$ as considered by Li and Yuan (2019). Viewing $q$ as a perturbed version of $p$, we only require that the perturbation is smooth (i.e. lies in a Sobolev ball).

Theorem 1. Suppose that the assumptions in Appendix A.1 hold, and consider $\lambda \in (0,\infty)^d$.

(i) For sample size $N$ and design size $L$, if there exists some $C > 0$ such that

\[ \|p - q\|_2^2 \geq \big\|(p-q) - T_\lambda(p-q)\big\|_2^2 + C\, \frac{N}{L}\, \ln(1/\alpha)\, \psi_{2,\lambda}, \]

then $\mathbb{P}_{H_1}\big(\Delta^\lambda_\alpha(\mathbb{Z}_N) = 0\big) \leq \beta$ (type II error), where $\psi_{2,\lambda} \lesssim 1/\sqrt{\lambda_1 \cdots \lambda_d}$ for MMD and HSIC.

(ii) Fix $R > 0$ and $s > 0$, and consider the bandwidths $\lambda^*_i := (N/L)^{2/(4s+d)}$ for $i = 1,\dots,d$. For MMD and HSIC, the uniform separation rate of $\Delta^{\lambda^*}_\alpha$ over the Sobolev ball $\mathcal{S}^s_d(R)$ is (up to a constant) $\big(L/N\big)^{-2s/(4s+d)}$.

The proof of Theorem 1 relies on the variance and quantile bounds presented in Lemmas 1 and 2, and also uses results of Albert et al. (2022) and Schrab et al. (2021, 2022) on complete U-statistics. The details can be found in Appendix F.4.
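In code, the single test $\Delta^\lambda_\alpha$ reduces to comparing the incomplete U-statistic with the order statistic of Equation (14) computed from wild bootstrapped copies as in Equation (12). The following sketch assumes a precomputed matrix of kernel evaluations $h(Z_i, Z_j)$; it is a simplified illustration of ours rather than the released implementation.

```python
import numpy as np

def wild_bootstrap_test(h_matrix, design, B1=500, alpha=0.05, seed=0):
    # h_matrix[i, j] holds h(Z_i, Z_j); design is a list of index pairs (i, j).
    rng = np.random.default_rng(seed)
    n = h_matrix.shape[0]
    stat = np.mean([h_matrix[i, j] for (i, j) in design])  # original statistic
    boot = np.empty(B1 + 1)
    for b in range(B1):
        eps = rng.choice([-1.0, 1.0], size=n)  # i.i.d. Rademacher variables
        # Wild bootstrapped incomplete U-statistic of Equation (12).
        boot[b] = np.mean([eps[i] * eps[j] * h_matrix[i, j]
                           for (i, j) in design])
    boot[B1] = stat  # the original statistic plays the role of U^{B1+1}
    # Monte Carlo (1 - alpha)-quantile of Equation (14): the
    # ceil((B1 + 1) * (1 - alpha))-th smallest value (1-indexed).
    q = np.sort(boot)[int(np.ceil((B1 + 1) * (1 - alpha))) - 1]
    return int(stat > q)  # 1 means the null hypothesis is rejected
```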
The power condition in Theorem 1 (i) corresponds to a variance-bias decomposition; for large bandwidths the bias term (first term) dominates, while for small bandwidths the variance term (second term, which also controls the quantile) dominates. While the power guarantees of Theorem 1 hold for any design (either deterministic or uniformly random without replacement) of fixed size $L$, the choice of design still influences the performance of the test in practice. The variance (but not its upper bound) depends on the choice of design; certain choices lead to minimum variance of the incomplete U-statistic (Lee, 1990, Section 4.3.2).

The minimax (i.e. optimal) rate over the Sobolev ball $\mathcal{S}^s_d(R)$ is $N^{-2s/(4s+d)}$ for the two-sample (Li and Yuan, 2019, Theorem 5 (ii)) and independence (Albert et al., 2022, Theorem 4; Berrett et al., 2021, Corollary 5) problems. The rate for our incomplete U-statistic test with time complexity $\mathcal{O}(B_1 L)$ has the same dependence in the exponent as the minimax rate; $(L/N)^{-2s/(4s+d)} = N^{-2s/(4s+d)}\,\big(N^2/L\big)^{2s/(4s+d)}$, where $L \lesssim N^2$ with $L$ the design size and $N$ the sample size.

• If $L \asymp N^2$ then the test runs in quadratic time and we recover exactly the minimax rate.
• If $N \lesssim L \lesssim N^2$ then the rate still converges to 0; there is a trade-off between the cost $\lesssim (N^2/L)^{2s/(4s+d)}$ incurred in the minimax rate and the computational efficiency $\mathcal{O}(B_1 L)$.
• If $L \lesssim N$ then there is no guarantee that the rate converges to 0.

To summarize, the tests we propose have computational cost $\mathcal{O}(B_1 L)$, which can be specified by the user with the choice of the number of wild bootstraps $B_1$ and of the design size $L$ (as a function of the sample size $N$). There is a trade-off between test power and computational cost. We provide theoretical rates in terms of $L$ and $N$, working up to a constant. The rate is minimax optimal in the case where $L$ grows quadratically with $N$. We quantify exactly how, as the computational cost decreases from quadratic to linear in the sample size, the rate deteriorates gradually from being minimax optimal to not being guaranteed to converge to zero. In our experiments, we use a design size which grows linearly with the sample size in order to compare our tests against other linear-time tests in the literature. The assumption guaranteeing that the rate converges to 0 is not satisfied in this setting; however, it would be satisfied for any faster growth of the design size (e.g. $L \asymp N \log\log N$).

6 Efficient aggregated kernel tests using incomplete U-statistics

We now introduce our aggregated tests that combine single tests with different bandwidths. Our aggregation scheme is similar to those of Fromont et al. (2013), Albert et al. (2022) and Schrab et al. (2021, 2022), and can yield a test which is adaptive to the unknown smoothness parameter $s$ of the Sobolev ball $\mathcal{S}^s_d(R)$, at relatively low cost. Let $\Lambda$ be a finite collection of bandwidths, $(w_\lambda)_{\lambda\in\Lambda}$ be associated weights satisfying $\sum_{\lambda\in\Lambda} w_\lambda \leq 1$, and $u_\alpha$ be some correction term defined shortly in Equation (17). Then, using the incomplete U-statistics $\overline{U}_\lambda$, we define our aggregated test $\Delta^\Lambda_\alpha$ as

\[ \Delta^\Lambda_\alpha(\mathbb{Z}_N) := \mathbf{1}\Big( \overline{U}_\lambda(\mathbb{Z}_N) > \hat q^{\,\lambda}_{1 - u_\alpha w_\lambda} \ \text{for some } \lambda \in \Lambda \Big). \]

The levels of the single tests are weighted and adjusted with a correction term

\[ u_\alpha := \sup_{B_3} \bigg\{ u \in \Big(0, \min_{\lambda\in\Lambda} w_\lambda^{-1}\Big) : \frac{1}{B_2} \sum_{b=1}^{B_2} \mathbf{1}\bigg( \max_{\lambda\in\Lambda} \Big( \widetilde{U}^{\,b}_\lambda - \overline{U}^{\,\bullet\lceil (B_1+1)(1 - u w_\lambda)\rceil}_\lambda \Big) > 0 \bigg) \leq \alpha \bigg\}, \tag{17} \]

where the wild bootstrapped incomplete U-statistics $\widetilde{U}^{\,1}_\lambda,\dots,\widetilde{U}^{\,B_2}_\lambda$ computed as in Equation (12) are used to perform a Monte Carlo approximation of the probability under the null, and where the supremum is estimated using $B_3$ steps of the bisection method. Proposition 1, along with the reasoning of Schrab et al. (2021, Proposition 8), ensures that $\Delta^\Lambda_\alpha$ has non-asymptotic level $\alpha$ for the two-sample and independence cases, and asymptotic level $\alpha$ for the goodness-of-fit case. We refer to the three aggregated tests constructed using incomplete U-statistics as MMDAggInc, HSICAggInc and KSDAggInc. The computational complexity of those tests is $\mathcal{O}(|\Lambda|(B_1 + B_2)L)$, which means, for example, that if $L \asymp N$ as in Equation (11), the tests run efficiently in linear time in the sample size. We formally record error guarantees of $\Delta^\Lambda_\alpha$ and derive uniform separation rates over Sobolev balls.

Theorem 2. Suppose that the assumptions in Appendix A.2 hold, and consider a collection $\Lambda$.

(i) For sample size $N$ and design size $L$, if there exists some $C > 0$ such that

\[ \|p - q\|_2^2 \geq \min_{\lambda\in\Lambda} \Big( \big\|(p-q) - T_\lambda(p-q)\big\|_2^2 + C\, \frac{N}{L}\, \ln\big(1/(\alpha w_\lambda)\big)\, \psi_{2,\lambda} \Big), \]

then $\mathbb{P}_{H_1}\big(\Delta^\Lambda_\alpha(\mathbb{Z}_N) = 0\big) \leq \beta$ (type II error), where $\psi_{2,\lambda} \lesssim 1/\sqrt{\lambda_1 \cdots \lambda_d}$ for MMD and HSIC.

(ii) Assume $L > N$ so that $\ln(\ln(L/N))$ is well-defined. Consider the collections of bandwidths and weights (independent of the parameters $s$ and $R$ of the Sobolev ball $\mathcal{S}^s_d(R)$)

\[ \Lambda := \Big\{ \big(2^{-\ell},\dots,2^{-\ell}\big) \in (0,\infty)^d : \ell \in \Big\{ 1,\dots, \Big\lceil \tfrac{2}{d} \log_2\Big( \tfrac{L/N}{\ln(\ln(L/N))} \Big) \Big\rceil \Big\} \Big\}, \qquad w_\lambda := \frac{6}{\pi^2 \ell^2}. \]

For the two-sample and independence problems, the uniform separation rate of $\Delta^\Lambda_\alpha$ over the Sobolev balls $\big\{\mathcal{S}^s_d(R) : R > 0,\, s > 0\big\}$ is (up to a constant)

\[ \bigg( \frac{L/N}{\ln(\ln(L/N))} \bigg)^{-2s/(4s+d)}. \]

The extension from Theorem 1 to Theorem 2 has been proved for complete U-statistics in the two-sample (Fromont et al., 2013; Schrab et al., 2021), independence (Albert et al., 2022) and goodness-of-fit (Schrab et al., 2022) testing frameworks. The proof of Theorem 2 follows with the same reasoning by simply replacing $N$ with $L/N$, as we work with incomplete U-statistics; this ‘replacement’ is theoretically justified by Theorem 1. Theorem 2 shows that the aggregated test $\Delta^\Lambda_\alpha$ is adaptive over the Sobolev balls $\big\{\mathcal{S}^s_d(R) : R > 0,\, s > 0\big\}$: the test $\Delta^\Lambda_\alpha$ does not depend on the unknown smoothness parameter $s$ (unlike $\Delta^{\lambda^*}_\alpha$ in Theorem 1) and achieves the minimax rate, up to an iterated logarithmic factor, and up to the cost incurred for the efficiency of the test (i.e. $L/N$ instead of $N$).

7 Minimax optimal permuted quadratic-time aggregated independence test

Considering Theorem 2 with our incomplete U-statistic with full design $\mathcal{D} = \mathbf{i}^N_2$, for which $L \asymp N^2$, we have proved that the quadratic-time two-sample and independence aggregated tests using a wild bootstrap achieve the rate $\big(N/\ln(\ln N)\big)^{-2s/(4s+d)}$ over the Sobolev balls $\big\{\mathcal{S}^s_d(R) : R > 0,\, s > 0\big\}$. This is the minimax rate (Li and Yuan, 2019; Albert et al., 2022), up to some iterated logarithmic term. For the two-sample problem, Kim et al. (2022) and Schrab et al. (2021) show that this optimality result also holds when using complete U-statistics with permutations. Whether the equivalent statement for the independence test with permutations holds has not yet been addressed; the rate can be proved using theoretical (unknown) quantiles with a Gaussian kernel (Albert et al., 2022), but has not yet been proved using permutations. Kim et al. (2022, Proposition 8.7) consider this problem, again using a Gaussian kernel, but they do not obtain the correct dependence on $\alpha$ (i.e. they obtain $\alpha^{-1/2}$ rather than $\ln(1/\alpha)$), hence they cannot recover the desired rate. As pointed out by Kim et al.
(2022, Section 8): ‘It remains an open question as to whether [the power guarantee] continues to hold when $\alpha^{-1/2}$ is replaced by $\ln(1/\alpha)$.’ We now prove that we can improve the $\alpha$-dependence to $\ln(1/\alpha)^{3/2}$ for any bounded kernel of the form of Equation (13), and that this allows us to obtain the desired rate over the Sobolev balls $\big\{\mathcal{S}^s_d(R) : R > 0,\, s > d/4\big\}$. The assumption $s > d/4$ imposes a stronger smoothness restriction on $p - q \in \mathcal{S}^s_d(R)$, which is similarly also considered by Li and Yuan (2019).

Theorem 3. Consider the quadratic-time independence test using the complete U-statistic HSIC estimator with a quantile estimated using permutations as done by Kim et al. (2022, Proposition 8.7), with kernels as in Equation (13) for bounded functions $K_i$ and $L_j$ for $i = 1,\dots,d_x$, $j = 1,\dots,d_y$.

(i) Suppose that the assumptions in Appendix A.1 hold. For fixed $R > 0$, $s > d/4$, and bandwidths $\lambda^*_i := N^{-2/(4s+d)}$ for $i = 1,\dots,d$, the probability of type II error of the test is controlled by $\beta$ when

\[ \|p - q\|_2^2 \geq \big\|(p-q) - T_{\lambda^*}(p-q)\big\|_2^2 + C\, \frac{1}{N}\, \frac{\ln(1/\alpha)^{3/2}}{\sqrt{\lambda^*_1 \cdots \lambda^*_d}} \]

for some constant $C > 0$. The uniform separation rate over the Sobolev ball $\mathcal{S}^s_d(R)$ is, up to a constant, $N^{-2s/(4s+d)}$.

(ii) Suppose that the assumptions in Appendix A.2 hold. The uniform separation rate over the Sobolev balls $\big\{\mathcal{S}^s_d(R) : R > 0,\, s > d/4\big\}$ is $\big(N/\ln(\ln N)\big)^{-2s/(4s+d)}$, up to a constant, with the collections

\[ \Lambda := \Big\{ \big(2^{-\ell},\dots,2^{-\ell}\big) \in (0,\infty)^d : \ell \in \Big\{ 1,\dots, \Big\lceil \tfrac{2}{d} \log_2\Big( \tfrac{N}{\ln(\ln N)} \Big) \Big\rceil \Big\} \Big\}, \qquad w_\lambda := \frac{6}{\pi^2 \ell^2}. \]

The proof of Theorem 3, in Appendix F.5, uses the exponential concentration bound of Kim et al. (2022, Theorem 6.3) for permuted complete U-statistics. Another possible approach to obtain the correct dependency on $\alpha$ is to employ the sample-splitting method proposed by Kim et al. (2022, Section 8.3) in order to transform the independence problem into a two-sample problem. While this indirect approach leads to a logarithmic factor in $\alpha$, the practical power would be suboptimal due to an inefficient use of the data from sample splitting. Theorem 3 (i) shows that a $\ln(1/\alpha)^{3/2}$ dependence is achieved by the more practical permutation-based HSIC test. Theorem 3 (ii) demonstrates that this leads to a minimax optimal rate for the aggregated HSIC test, up to the $\ln(\ln N)$ cost for adaptivity.

8 Experiments

For the two-sample problem, we consider testing samples drawn from a uniform density on $[0,1]^d$ against samples drawn from a perturbed uniform density. For the independence problem, the joint density is a perturbed uniform density on $[0,1]^{d_x+d_y}$; the marginals are then simply uniform densities. Those perturbed uniform densities can be shown to lie in Sobolev balls (Li and Yuan, 2019; Albert et al., 2022), to which our tests are adaptive. For the goodness-of-fit problem, we use a Gaussian-Bernoulli Restricted Boltzmann Machine, as first considered by Liu et al. (2016) in this testing framework. We use collections of 21 bandwidths for MMD and KSD, and of 25 bandwidth pairs for HSIC; more details on the experiments (e.g. model and test parameters) are presented in Appendix C.

We consider our incomplete aggregated tests MMDAggInc, HSICAggInc and KSDAggInc, with parameter $R \in \{1,\dots,N-1\}$ which fixes the deterministic design to consist of the first $R$ subdiagonals of the $N \times N$ matrix, i.e. $\mathcal{D} := \{(i, i+r) : i = 1,\dots,N-r \text{ for } r = 1,\dots,R\}$ with size $|\mathcal{D}| = RN - R(R+1)/2$. We run our incomplete tests with $R \in \{1, 100, 200\}$ and also the complete test using the full design $\mathcal{D} = \mathbf{i}^N_2$.
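For completeness, the $R$-subdiagonal design described above can be generated as follows (a 0-indexed sketch of our own); the assertion checks the pair count $RN - R(R+1)/2$.

```python
def subdiagonal_design(N, R):
    # First R subdiagonals of the N x N index matrix (0-indexed):
    # pairs (i, i + r) for r = 1, ..., R and i = 0, ..., N - r - 1.
    return [(i, i + r) for r in range(1, R + 1) for i in range(N - r)]

design = subdiagonal_design(N=6, R=2)
assert len(design) == 2 * 6 - 2 * (2 + 1) // 2  # R*N - R*(R+1)/2 = 9
```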
We compare their performances with current state-of-the-art linear-time tests: ME, SCF, FSIC and FSSD (Jitkrittum et al., 2016, 2017a,b), which evaluate the witness functions at a finite set of locations chosen to maximize the power; Cauchy RFF (random Fourier feature) and L1 IMQ (Huggins and Mackey, 2018), which are random feature Stein discrepancies; LSD (Grathwohl et al., 2020), which uses a neural network to learn the Stein discrepancy; and OST PSI (Kübler et al., 2020), which performs kernel selection using post-selection inference.

Similar trends are observed across all our experiments in Figure 1, for the three testing frameworks, when varying the sample size, the dimension, and the difficulty of the problem (scale of perturbations or noise level). The linear-time tests AggInc with R = 200 almost match the power obtained by the quadratic-time tests AggCom in all settings (except in Figure 1 (i), where the difference is larger) while being computationally much more efficient, as can be seen in Figure 1 (d, h, l). The incomplete tests with R = 100 have power only slightly below the ones using R = 200, and run roughly twice as fast (Figure 1 (d, h, l)). In all experiments, those three tests (AggInc R = 100, 200 and AggCom) have significantly higher power than the linear-time tests which optimize test locations (ME, SCF, FSIC and FSSD); in the two-sample case the aggregated tests run faster for small sample sizes but slower for large sample sizes, in the independence case the aggregated tests run much faster, and in the goodness-of-fit case FSSD runs faster. While both types of tests are linear, we note that the runtimes of the tests of Jitkrittum et al. (2016, 2017a,b) increase more slowly with the sample size than those of our aggregated tests with R = 100, 200, but a fixed computational cost is incurred for their optimization step, even for small sample sizes.

In the goodness-of-fit framework, L1 IMQ performs similarly to FSSD, which is in line with the results presented by Huggins and Mackey (2018, Figure 4d), who consider the same experiment. All other goodness-of-fit tests (except KSDAggInc R = 1) achieve much higher test power. Cauchy RFF and KSDAggInc R = 200 obtain similar power in almost all the experiments. While KSDAggInc R = 200 runs much faster in the experiments presented,⁶ it seems that the KSDAggInc runtimes increase more steeply with the sample size than the Cauchy RFF and L1 IMQ runtimes (see Appendix D.5 for details). LSD matches the power of KSDAggInc R = 100 when varying the noise level in Figure 1 (k) (KSDAggInc R = 200 has higher power), and when varying the hidden dimension in Figure 1 (j) where $d_x = 100$. When varying the sample size in Figure 1 (i), both KSDAggInc tests with R = 100, 200 achieve much higher power than LSD. Unsurprisingly, AggInc R = 1, which runs much faster than all the aforementioned tests, has low power in every experiment. For the two-sample problem, it obtains slightly higher power than OST PSI, which runs even faster.

We include more experiments in Appendix D: we present experiments on the MNIST dataset (the same trends are observed), we use different collections of bandwidths, we verify that all tests have well-calibrated levels, and we illustrate the benefits of the aggregation procedure.

9 Acknowledgements

Antonin Schrab acknowledges support from U.K. Research and Innovation (EP/S021566/1).
Ilmun Kim acknowledges support from the Yonsei University Research Fund of 2021-22-0332, and from the Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education (2022R1A4A1033384). Benjamin Guedj acknowledges partial support by the U.S. Army Research Laboratory and the U.S. Army Research Office, and by the U.K. Ministry of Defence and the U.K. Engineering and Physical Sciences Research Council (EP/R013616/1), and by the French National Agency for Research (ANR-18-CE40-0016-01 & ANR-18-CE23-0015-02). Arthur Gretton acknowledges support from the Gatsby Charitable Foundation.

⁶The runtimes in Figure 1 (d, h, l) can also vary due to the different implementations of the respective authors.
1. What is the focus and contribution of the paper regarding faster-than-quadratic tests for various problems?
2. What are the strengths of the proposed methods, particularly in terms of their adaptive nature and power achievement?
3. Do you have any concerns or questions regarding the linear-time baseline tests, specifically those in Huggins and Mackey?
4. How does the reviewer assess the tradeoff in Theorem 1, and do they think it could be tighter for smaller values of L?
5. Are there any limitations or requirements for the choice of design D in the experiments, and can it be adaptive?
6. Minor comment: Would it be helpful to state assumptions on densities in a separate environment for easier processing of results?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper

POST REBUTTAL: Score increased to 7.

In this work, the authors propose faster-than-quadratic tests for the two-sample, independence, and goodness-of-fit problems, using the Maximum Mean Discrepancy (MMD), Hilbert Schmidt Independence Criterion (HSIC), and Kernel Stein Discrepancy (KSD), respectively. They are based on incomplete U-statistics that can interpolate between linear-time and quadratic-time costs (the latter cost is incurred by typical tests, which are complete U-statistics).

Strengths And Weaknesses

The authors provide a tradeoff between the computational cost used and the power achieved in Theorem 1, while achieving the minimax rates over Sobolev balls when using the quadratic-runtime variant. The authors then use this result to also achieve appropriate power results for kernel selection (up to logarithmic inflation in the number of kernels). Notably, this result is adaptive and does not require knowledge of the smoothness parameter of the difference between the null and the alternative density. The authors also provide several experiments which demonstrate the advantages of the proposed methods. The writing of the paper is pretty clear and worth appreciating!

Questions

The tests in Huggins and Mackey (e.g., the L1 IMQ and Cauchy RFF random feature Stein discrepancies) were all linear-time, not quadratic-time as stated by the authors in line 53. Given the focus on linear-time tests in this work, I believe that these tests should be treated as a useful baseline for the goodness-of-fit comparison experiments. In particular, Huggins and Mackey's experiments showed that their tests typically outperformed the FSSD test (which is one of the key linear-time baselines in the current work).

In the separation rate, the dependence on alpha is logarithmic but that on beta is polynomial (1/beta); is the latter unavoidable? I can see from the proof that it is because of the nature of the concentration inequalities used in the two contexts (namely Rademacher chaos concentration, and Markov's inequality), but are the arguments known to be tight? Does there exist a setting where such dependence is necessarily needed?

Is the tradeoff in Theorem 1 tight? I can see it's tight when L = N^2, but is it tight for smaller values of L? Some discussion on this would be very useful.

Do you not need any requirements on p for Proposition 1? And does only the difference p - q need to lie in the Sobolev ball for Theorem 1 (ii)?

Does the choice of design D not matter? Does it have to be an i.i.d. subsample? Can it be adaptive? (My guess is that all the arguments go through relying on the i.i.d.-ness of data points in D.)

Minor comment: It would be easier to process the results if the assumptions on densities were stated in an assumption environment, and then referenced in the theorem results.

Limitations

See questions.
NIPS
Title
Efficient Aggregated Kernel Tests using Incomplete $U$-statistics

Abstract
We propose a series of computationally efficient nonparametric tests for the two-sample, independence, and goodness-of-fit problems, using the Maximum Mean Discrepancy (MMD), Hilbert Schmidt Independence Criterion (HSIC), and Kernel Stein Discrepancy (KSD), respectively. Our test statistics are incomplete U-statistics, with a computational cost that interpolates between linear time in the number of samples, and quadratic time, as associated with classical U-statistic tests. The three proposed tests aggregate over several kernel bandwidths to detect departures from the null on various scales: we call the resulting tests MMDAggInc, HSICAggInc and KSDAggInc. This procedure provides a solution to the fundamental kernel selection problem, as we can aggregate a large number of kernels with several bandwidths without incurring a significant loss of test power. For the test thresholds, we derive a quantile bound for wild bootstrapped incomplete U-statistics, which is of independent interest. We derive non-asymptotic uniform separation rates for MMDAggInc and HSICAggInc, and quantify exactly the trade-off between computational efficiency and the attainable rates: this result is novel for tests based on incomplete U-statistics, to our knowledge. We further show that in the quadratic-time case, the wild bootstrap incurs no penalty to test power over the more widespread permutation-based approach, since both attain the same minimax optimal rates (which in turn match the rates that use oracle quantiles). We support our claims with numerical experiments on the trade-off between computational efficiency and test power. In all three testing frameworks, the linear-time versions of our proposed tests perform at least as well as the current linear-time state-of-the-art tests.

1 Introduction

Nonparametric hypothesis testing is a fundamental field of statistics, and is widely used by the machine learning community and practitioners in numerous other fields, due to the increasing availability of huge amounts of data. When dealing with large-scale datasets, computational cost can quickly emerge as a major issue which might prevent the use of expensive tests in practice; constructing efficient tests is therefore crucial for their real-world applications. In this paper, we construct kernel-based aggregated tests using incomplete U-statistics (Blom, 1976) for the two-sample, independence and goodness-of-fit problems (which we detail in Section 2). The quadratic-time aggregation procedure has been shown to result in powerful tests (Fromont et al., 2012; Fromont et al., 2013; Albert et al., 2022; Schrab et al., 2021, 2022); we propose efficient variants of these well-studied tests, with computational cost interpolating from the classical quadratic-time regime to the linear-time one.

Related work: aggregated tests. Kernel selection (or kernel bandwidth selection) is a fundamental problem in nonparametric hypothesis testing, as this choice has a major influence on test power. Motivated by this problem, non-asymptotic aggregated tests, which combine tests with different kernel bandwidths, have been proposed for the two-sample (Fromont et al., 2012, 2013; Kim et al., 2022; Schrab et al., 2021), independence (Albert et al., 2022; Kim et al., 2022), and goodness-of-fit (Schrab et al., 2022) testing frameworks. Li and Yuan (2019) and Balasubramanian et al.
(2021) construct similar aggregated tests for these three problems, with the difference that they work in the asymptotic regime. All the mentioned works study aggregated tests in terms of uniform separation rates (Baraud, 2002). Those rates depend on the sample size and satisfy the following property: if the $L^2$-norm difference between the densities is greater than the uniform separation rate, then the test is guaranteed to have high power. All aggregated kernel-based tests in the existing literature have been studied using U-statistic estimators (Hoeffding, 1992), with tests running in quadratic time.

Related work: efficient kernel tests. Several linear-time kernel tests have been proposed for those three testing frameworks. Those include tests using classical linear-time estimators with median bandwidth (Gretton et al., 2012a; Liu et al., 2016) or selecting an optimal bandwidth on held-out data to maximize power (Gretton et al., 2012b), tests using eigenspectrum approximation (Gretton et al., 2009), tests using post-selection inference for adaptive kernel selection with incomplete U-statistics (Yamada et al., 2018, 2019; Lim et al., 2019, 2020; Kübler et al., 2020; Freidling et al., 2021), tests which use a Nyström approximation of the asymptotic null distribution (Zhang et al., 2018; Cherfaoui et al., 2022), random Fourier features tests (Zhang et al., 2018; Zhao and Meng, 2015; Chwialkowski et al., 2015), tests based on random feature Stein discrepancies (Huggins and Mackey, 2018), the adaptive tests which use features selected on held-out data to maximize power (Jitkrittum et al., 2016, 2017a,b), as well as tests using neural networks to learn a discrepancy (Grathwohl et al., 2020). We also point out the very relevant works of Kübler et al. (2022) on a quadratic-time test, and of Ho and Shieh (2006), Zaremba et al. (2013) and Zhang et al. (2018) on the use of block U-statistics with complexity $\mathcal{O}(N^{1.5})$ for block size $\sqrt{N}$, where $N$ is the sample size.

Contributions and outline. In Section 2, we present the three testing problems with their associated well-known quadratic-time kernel-based estimators (MMD, HSIC, KSD), which are U-statistics. We introduce three associated incomplete U-statistic estimators, which can be computed efficiently, in Section 3. We then provide quantile and variance bounds for generic incomplete U-statistics using a wild bootstrap, in Section 4. We study the level and power guarantees at every finite sample size for our efficient tests using incomplete U-statistics with a fixed kernel bandwidth, in Section 5. In particular, we obtain non-asymptotic uniform separation rates for the two-sample and independence tests over a Sobolev ball, and show that these rates are minimax optimal up to the cost incurred for the efficiency of the test. In Section 6, we propose our efficient aggregated tests which combine tests with multiple kernel bandwidths. We prove that the proposed tests are adaptive over Sobolev balls and achieve the same uniform separation rate (up to an iterated logarithmic term) as the tests with optimal bandwidths. As a result of our analysis, we have shown minimax optimality over Sobolev balls of the quadratic-time tests using quantiles estimated with a wild bootstrap. Whether this optimality result also holds for tests using the more general permutation-based procedure to approximate HSIC quantiles was an open problem formulated by Kim et al. (2022); we prove that it indeed holds in Section 7.
As observed in Section 8, the linear-time versions of MMDAggInc, HSICAggInc and KSDAggInc retain high power, and either outperform or match the power of other state-of-the-art linear-time kernel tests. Our implementation of the tests and code for reproducibility of the experiments are available online under the MIT license: https://github.com/antoninschrab/agginc-paper.

2 Background

In this section, we briefly describe our main problems of interest, comprising the two-sample, independence and goodness-of-fit problems. We approach these problems from a nonparametric point of view using the kernel-based statistics: MMD, HSIC, and KSD. We briefly introduce the original forms of these statistics, which can be computed in quadratic time, and also discuss ways of calibrating tests proposed in the literature. The three quadratic-time expressions are presented in Appendix B.

Two-sample testing. In this problem, we are given independent samples $\mathbb{X}_m := (X_i)_{1\leq i\leq m}$ and $\mathbb{Y}_n := (Y_j)_{1\leq j\leq n}$, consisting of i.i.d. random variables with respective probability density functions¹ $p$ and $q$ on $\mathbb{R}^d$. We assume we work with balanced sample sizes, that is,² $\max(m,n) \lesssim \min(m,n)$. We are interested in testing the null hypothesis $H_0 : p = q$ against the alternative $H_1 : p \neq q$; that is, we want to know if the samples come from the same distribution. Gretton et al. (2012a) propose a nonparametric kernel test based on the Maximum Mean Discrepancy (MMD), a measure between probability distributions which uses a characteristic kernel $k$ (Fukumizu et al., 2008; Sriperumbudur et al., 2011). It can be estimated using a quadratic-time estimator (Gretton et al., 2012a, Lemma 6) which, as noted by Kim et al. (2022), can be expressed as a two-sample U-statistic (both of second order) (Hoeffding, 1992),

\[ \widehat{\mathrm{MMD}}^2_k(\mathbb{X}_m,\mathbb{Y}_n) = \frac{1}{|\mathbf{i}^m_2|\,|\mathbf{i}^n_2|} \sum_{(i,i')\in\mathbf{i}^m_2} \sum_{(j,j')\in\mathbf{i}^n_2} h^{\mathrm{MMD}}_k(X_i, X_{i'}; Y_j, Y_{j'}), \tag{1} \]

where $\mathbf{i}^b_a$ with $a \leq b$ denotes the set of all $a$-tuples drawn without replacement from $\{1,\dots,b\}$, so that $|\mathbf{i}^b_a| = b \cdots (b-a+1)$, and where, for $x_1, x_2, y_1, y_2 \in \mathbb{R}^d$, we let

\[ h^{\mathrm{MMD}}_k(x_1,x_2;y_1,y_2) := k(x_1,x_2) - k(x_1,y_2) - k(x_2,y_1) + k(y_1,y_2). \tag{2} \]

Independence testing. In this problem, we have access to i.i.d. pairs of samples $\mathbb{Z}_N := (Z_i)_{1\leq i\leq N} = \big((X_i,Y_i)\big)_{1\leq i\leq N}$ with joint probability density $p_{xy}$ on $\mathbb{R}^{d_x}\times\mathbb{R}^{d_y}$ and marginals $p_x$ on $\mathbb{R}^{d_x}$ and $p_y$ on $\mathbb{R}^{d_y}$. We are interested in testing $H_0 : p_{xy} = p_x \otimes p_y$ against $H_1 : p_{xy} \neq p_x \otimes p_y$; that is, we want to know if the two components of the pairs of samples are independent or dependent. Gretton et al. (2005, 2008) propose a nonparametric kernel test based on the Hilbert Schmidt Independence Criterion (HSIC). It can be estimated using the quadratic-time estimator proposed by Song et al. (2012, Equation 5), which is a fourth-order one-sample U-statistic

\[ \widehat{\mathrm{HSIC}}_{k,\ell}(\mathbb{Z}_N) = \frac{1}{|\mathbf{i}^N_4|} \sum_{(i,j,r,s)\in\mathbf{i}^N_4} h^{\mathrm{HSIC}}_{k,\ell}(Z_i,Z_j,Z_r,Z_s) \tag{3} \]

for characteristic kernels $k$ on $\mathbb{R}^{d_x}$ and $\ell$ on $\mathbb{R}^{d_y}$ (Gretton, 2015), and where, for $z_a = (x_a,y_a) \in \mathbb{R}^{d_x}\times\mathbb{R}^{d_y}$, $a = 1,\dots,4$, we let

\[ h^{\mathrm{HSIC}}_{k,\ell}(z_1,z_2,z_3,z_4) := \frac{1}{4}\, h^{\mathrm{MMD}}_k(x_1,x_2;x_3,x_4)\, h^{\mathrm{MMD}}_\ell(y_1,y_2;y_3,y_4). \tag{4} \]

Goodness-of-fit testing. For this problem, we are given a model density $p$ on $\mathbb{R}^d$ and i.i.d. samples $\mathbb{Z}_N := (Z_i)_{1\leq i\leq N}$ drawn from a density $q$ on $\mathbb{R}^d$. The aim is again to test $H_0 : p = q$ against $H_1 : p \neq q$; that is, we want to know if the samples have been drawn from the model. Chwialkowski et al. (2016) and Liu et al. (2016) both construct a nonparametric goodness-of-fit test using the Kernel Stein Discrepancy (KSD).
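For concreteness, the quadratic-time baseline of Equation (1) can be computed from Gram matrices as in the following numpy sketch (our own illustrative code with a Gaussian kernel, not the authors' implementation): averaging $h^{\mathrm{MMD}}_k$ of Equation (2) over all distinct index pairs reduces to the familiar three-term form.

```python
import numpy as np

def gaussian_gram(A, B, bandwidth=1.0):
    # Pairwise Gaussian kernel matrix between the rows of A and B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def mmd_u_statistic(X, Y, kernel):
    # Complete two-sample U-statistic of Equation (1): averaging h^MMD
    # (Equation (2)) over distinct pairs (i, i') and (j, j') reduces to
    # off-diagonal means of K_XX and K_YY minus twice the mean of K_XY.
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = kernel(X, X), kernel(Y, Y), kernel(X, Y)
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_xx + term_yy - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
X, Y = rng.normal(0, 1, (100, 2)), rng.normal(0.3, 1, (100, 2))
print(mmd_u_statistic(X, Y, gaussian_gram))
```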
A quadratic-time KSD estimator can be computed as the second-order one-sample U -statistic, [KSD 2 p,k(ZN ) := 1 iN 2 X (i,j)2iN2 hKSDk,p (Zi, Zj), (5) where the Stein kernel hKSDk,p : Rd ⇥ Rd ! R is defined as hKSDk,p (x, y) := r log p(x)>r log p(y) k(x, y) +r log p(y)>rxk(x, y) +r log p(x)>ryk(x, y) + dX i=1 @ @xi@yi k(x, y). (6) In order to guarantee consistency of the Stein goodness-of-fit test (Chwialkowski et al., 2016, Theorem 2.2), we assume that the kernel k is C0-universal (Carmeli et al., 2010, Definition 4.1) and that Eq h hKSDk,p (z, z) i < 1 and Eq " r log ✓ p(z) q(z) ◆ 2 2 # < 1. (7) 1All probability density functions in this paper are with respect to the Lebesgue measure. 2We use the notation a . b when there exists a constant C > 0 such that a Cb. We similarly use the notation &. We write a ⇣ b if a . b and a & b. We also use the convention that all constants are generically denoted by C, even though they might be different. Quantile estimation. Multiple strategies have been proposed to estimate the quantiles of test statistics under the null for these three tests. We primarily focus on the wild bootstrap approach (Chwialkowski et al., 2014), though our results also hold using a parametric bootstrap for the goodness-of-fit setting (Schrab et al., 2022). In Section 7, we show that the same uniform separation rates can be derived for HSIC quadratic-time tests using permutations instead of a wild bootstrap. More details on MMD, HSIC, KSD, and on quantile estimation are provided in Appendix B. 3 Incomplete U -statistics for MMD, HSIC and KSD As presented above, the quadratic-time statistics for the two-sample (MMD), independence (HSIC) and goodness-of-fit (KSD) problems can be rewritten as U -statistics with kernels hMMDk , h HSIC k,` and hKSDk,p , respectively. The computational cost of tests based on these U -statistics grows quadratically with the sample size. When working with very large sample sizes, as it is often the case in real-world uses of those tests, this quadratic cost can become very problematic, and faster alternative tests are better adapted to this ‘big data’ setting. Multiple linear-time kernel tests have been proposed in the three testing frameworks (see Section 1 for details). We construct computationally efficient variants of the aggregated kernel tests proposed by Fromont et al. (2013), Albert et al. (2022), Kim et al. (2022), and Schrab et al. (2021, 2022) for the three settings, with the aim of retaining the significant power advantages of the aggregation procedure observed for quadratic-time tests. To this end, we propose to replace the quadratic-time U -statistics presented in Equations (1), (3) and (5) with second-order incomplete U -statistics (Blom, 1976; Janson, 1984; Lee, 1990), MMD 2 k Xm,Yn;DN := 1 DN X (i,j)2DN hMMDk (Xi, Xj ;Yi, Yj), (8) HSICk,` ZN ;DbN/2c := 1 DbN/2c X (i,j)2DbN/2c hHSICk,` Zi, Zj , Zi+bN/2c, Zj+bN/2c , (9) KSD 2 p,k ZN ;DN := 1 DN X (i,j)2DN hKSDk,p (Zi, Zj), (10) where for the two-sample problem we let N := min(m,n), and where the design Db is a subset of ib2 (the set of all 2-tuples drawn without replacement from {1, . . . , b}). Note that DbN/2c ✓ i N/2 2 ⇢ iN 2 . The design can be deterministic. For example, for the two-sample problem with equal even sample sizes m = n = N , the deterministic design DN = {(2a 1, 2a) : a = 1, . . . , N/2} corresponds to the MMD linear-time estimator proposed by Gretton et al. (2012a, Lemma 14). 
For fixed design size, the elements of the design can also be chosen at random without replacement, in which case the estimators in Equations (8) to (10) become random quantities given the data. For generality purposes, the results presented in this paper hold for both deterministic and random (without replacement) design choices while we focus on the deterministic design in our experiments. By fixing the design sizes in Equations (8) to (10) to be, for example, DN = DbN/2c = cN (11) for some small constant c 2 N \ {0}, we obtain incomplete U -statistics which can be computed in linear time. Note that by pairing the samples Zi := (Xi, Yi), i = 1, . . . , N for the MMD case and eZi := Zi, Zi+bN/2c , i = 1, . . . , bN/2c for the HSIC case, we observe that all three incomplete U - statistics of second order have the same form, with only the kernel functions and the design differing. The motivation for defining the estimators in Equations (8) and (9) as incomplete U -statistics of order 2 (rather than of higher order) derives from the reasoning of Kim et al. (2022, Section 6) for permuted complete U -statistics for the two-sample and independence problems (see Appendix E.1). 4 Quantile and variance bounds for incomplete U -statistics In this section, we derive upper quantile and variance bounds for a second-order incomplete degenerate U -statistic with a generic degenerate kernel h, for some design D ✓ iN 2 , defined as U ZN ;D := 1 |D| X (i,j)2D h(Zi, Zj). We will use these results to bound the quantiles and variances of our three test statistics for our hypothesis tests in Section 5. The derived bounds are of independent interest. In the following lemma, building on the results of Lee (1990), we directly derive an upper bound on the variance of the incomplete U -statistic in terms of the sample size N and of the design size |D|. Lemma 1. The variance of the incomplete U -statistic can be upper bounded in terms of the quantities 2 1 := var E ⇥ h(Z,Z 0) Z 0 ⇤ and 2 2 := var(h(Z,Z 0)) with different bounds depending on the design choice. For deterministic (LHS) or random (RHS) design D and sample size N , we have var U . N|D| 2 1 + 1 |D| 2 2 and var U . 1 N 2 1 + 1 |D| 2 2 . The proof of Lemma 1 is deferred to Appendix F.2. We emphasize the fact that this variance bound also holds for random design with replacement, as considered by Blom (1976) and Lee (1990). For random design, we observe that if |D| ⇣ N2 then the bound is 2 1 /N + 2 2 /N2 which is the variance bound of the complete U -statistic (Albert et al., 2022, Lemma 10). If N . |D| . N2, the variance bound is 2 1 /N + 2 2 /|D|, and if |D| . N it is 2 2 /|D| since 2 1 2 2 /2 (Blom, 1976, Equation 2.1). Kim et al. (2022) develop exponential concentration bounds for permuted complete U -statistics, and Clémençon et al. (2013) study the uniform approximation of U -statistics by incomplete U -statistics. To the best of our knowledge, no quantile bounds have yet been obtained for incomplete U -statistics in the literature. While permutations are well-suited for complete U -statistics (Kim et al., 2022), using them with incomplete U -statistics results in having to compute new kernel values, which comes at an additional computational cost we would like to avoid. Restricting the set of permutations to those for which the kernel values have already been computed for the original incomplete U -statistic corresponds exactly to using a wild bootstrap (Schrab et al., 2021, Appendix B). 
Hence, we consider the wild bootstrapped second-order incomplete U -statistic U ✏ ZN ;D := 1 |D| X (i,j)2D ✏i✏jh(Zi, Zj) (12) for i.i.d. Rademacher random variables ✏1, . . . , ✏N with values in { 1, 1}, for which we derive an exponential concentration bound (quantile bound). We note the in-depth work of Chwialkowski et al. (2014) on the wild bootstrap procedure for kernel tests with applications to quadratic-time MMD and HSIC tests. We now provide exponential tail bounds for wild bootstrapped incomplete U -statistics. Lemma 2. There exists some constant C > 0 such that, for every t 0, we have P✏ ⇣ U ✏ t ZN ,D ⌘ 2 exp ✓ C t Ainc ◆ 2 exp ✓ C t A ◆ where A2 inc := |D| 2 P (i,j)2D h(Zi, Zj) 2 and A2 := |D| 2 P (i,j)2iN2 h(Zi, Zj)2. Lemma 2 is proved in Appendix F.3. While the second bound in Lemma 2 is less tight, it has the benefit of not depending on the choice of design D but only on its size |D| which is usually fixed. 5 Efficient kernel tests using incomplete U -statistics We now formally define the hypothesis tests obtained using the incomplete U -statistics with a wild bootstrap. This is done for fixed kernel bandwidths 2 (0,1)dx , µ 2 (0,1)dy , for the kernels3 k (x, y) := dxY i=1 1 i Ki ✓ xi yi i ◆ , `µ(x, y) := dyY i=1 1 µi Li ✓ xi yi µi ◆ , (13) for characteristic kernels (x, y) 7! Ki(x y), (x, y) 7! Li(x y) on R⇥R for functions Ki, Li 2 L1(R) \ L2(R) integrating to 1. We unify the notation for the three testing frameworks. For the twosample and goodness-of-fit problems, we work only with k and have d = dx. For the independence 3Our results are presented for bandwidth selection, but they hold in the more general setting of kernel selection, as considered by Schrab et al. (2022). The goodness-of-fit results hold for a wider range of kernels including the IMQ (inverse multiquadric) kernel (Gorham and Mackey, 2017), as in Schrab et al. (2022). problem, we work with the two kernels k and `µ, and for ease of notation we let d := dx + dy and dx+i := µi for i = 1, . . . , dy. We also simply write p := pxy and q := px ⌦ py. We let U and h denote either MMD 2 k and h MMD k , or HSICk ,`µ and hHSICk ,`µ , or KSD 2 p,k and h KSD k ,p , respectively. We denote the design size of the incomplete U -statistics in Equations (8) to (10) by L := DN = DbN/2c . For the three testing frameworks, we estimate the quantiles of the test statistics by simulating the null hypothesis using a wild bootstrap, as done in the case of complete U -statistics by Fromont et al. (2012) and Schrab et al. (2021) for the two-sample problem, and by Schrab et al. (2022) for the goodness-of-fit problem. This is done by considering the original test statistic UB1+1 := U together with B1 wild bootstrapped incomplete U -statistics U1 , . . . , U B1 computed as in Equation (12), and estimating the (1 ↵)-quantile with a Monte Carlo approximation bq 1 ↵ := inf ⇢ t 2 R : 1 ↵ 1 B1 + 1 B1+1X b=1 1 U b t = U•dB1(1 ↵)e , (14) where U•1 · · · U •B1+1 are the sorted elements U 1 , . . . , U B1+1 . The test ↵ is defined as rejecting the null if the original test statistic U is greater than the estimated (1 ↵)-quantile, that is, ↵(ZN ) := 1 U (ZN ) > bq 1 ↵ . The resulting test has time complexity O(B1L) where L is the design size (1 L N(N 1)). We show in Proposition 1 that the test ↵ has well-calibrated asymptotic level for goodness-of-fit testing, and well-calibrated non-asymptotic level for two-sample and independence testing. The proof of the latter non-asymptotic guarantee is based on the exchangeability of U1 , . . . 
, U B1+1 under the null hypothesis along with the result of Romano and Wolf (2005, Lemma 1). A similar proof strategy can be found in Fromont et al. (2012, Proposition 2), Albert et al. (2022, Proposition 1), and Schrab et al. (2021, Proposition 1). The exchangeability of wild bootstrapped incomplete U -statistics for independence testing does not follow directly from the mentioned works. We show this through the interesting connection between hHSICk,` and {hMMDk , hMMD` }, the proof is deferred to Appendix F.1. Proposition 1. The test ↵ has level ↵ 2 (0, 1), i.e. PH0 ↵(ZN ) = 1 ↵. This holds nonasymptotically for the two-sample and independence cases, and asymptotically for goodness-of-fit. 4 Having established the validity of the test ↵, we now study power guarantees for it in terms of the L2-norm of the difference in densities kp qk2. In Theorem 1, we show for the three tests that, if kp qk2 exceeds some threshold, we can guarantee high test power. For the two-sample and independence problems, we derive uniform separation rates (Baraud, 2002) over Sobolev balls Ssd(R) := n f 2 L1 Rd \ L2 Rd : Z Rd k⇠k2s 2 | bf(⇠)|2d⇠ (2⇡)dR2 o , (15) with radius R > 0 and smoothness parameter s > 0, where bf denotes the Fourier transform of f . The uniform separation rate over Ssd(R) is the smallest value of t such that, for any alternative with kp qk2 > t and5 p q 2 Ssd(R), the probability of type II error of ↵ can be controlled by 2 (0, 1). Before presenting Theorem 1, we introduce further notation unified over the three testing frameworks; we define the integral transform T as (T f)(x) := Z Rd f(x)K (x, y) dy (16) for f 2 L2(Rd), x 2 Rd, where K := k for the two-sample problem, K := k ⌦ `µ for the independence problem, and K := hKSDk ,p for the goodness-of-fit problem. Note that, for the twosample and independence testing frameworks, since K is translation-invariant, the integral transform corresponds to a convolution. However, this is not true for the goodness-of-fit setting as hKSDk ,p is not translation-invariant. We are now in a position to present our main contribution in Theorem 1: we derive power guarantee conditions for our tests using incomplete U -statistics, and uniform separation rates over Sobolev balls for the two-sample and independence settings. 4Level is non-asymptotic for the goodness-of-fit case using a parametric bootstrap (Schrab et al., 2022). For the goodness-of-fit setting, we also recall that the further assumptions in Equation (7) need to be satisfied. 5We stress that we only assume p q 2 Ssd(R) and not p, q 2 Ssd(R) as considered by Li and Yuan (2019). Viewing q as a perturbed version of p, we only require that the perturbation is smooth (i.e. lies in a Sobolev ball). Theorem 1. Suppose that the assumptions in Appendix A.1 hold, and consider 2 (0,1)d. (i) For sample size N and design size L, if there exists some C > 0 such that kp qk2 2 k(p q) T (p q)k22 + C N L ln(1/↵) 2, , then PH1 ↵(ZN ) = 0 (type II error), where 2, . 1/ p 1 · · · d for MMD and HSIC. (ii) Fix R > 0 and s > 0, and consider the bandwidths ⇤i := (N/L) 2/(4s+d) for i = 1, . . . , d. For MMD and HSIC, the uniform separation rate of ⇤ ↵ over the Sobolev ball Ssd(R) is (up to a constant) L/N 2s/(4s+d) . The proof of Theorem 1 relies on the variance and quantile bounds presented in Lemmas 1 and 2, and also uses results of Albert et al. (2022) and Schrab et al. (2021, 2022) on complete U -statistics. The details can be found in Appendix F.4. 
The power condition in Theorem 1 (i) corresponds to a variance-bias decomposition; for large bandwidths the bias term (first term) dominates, while for small bandwidths the variance term (second term which also controls the quantile) dominates. While the power guarantees of Theorem 1 hold for any design (either deterministic or uniformly random without replacement) of fixed size L, the choice of design still influences the performance of the test in practice. The variance (but not its upper bound) depends on the choice of design; certain choices lead to minimum variance of the incomplete U -statistic (Lee, 1990, Section 4.3.2). The minimax (i.e. optimal) rate over the Sobolev ball Ssd(R) is N 2s/(4s+d) for the two-sample (Li and Yuan, 2019, Theorem 5 (ii)) and independence (Albert et al., 2022, Theorem 4; Berrett et al., 2021, Corollary 5) problems. The rate for our incomplete U -statistic test with time complexity O(B1L) has the same dependence in the exponent as the minimax rate; (L/N) 2s/(4s+d) = N 2s/(4s+d) N2/L 2s/(4s+d) where L . N2 with L the design size and N the sample size. • If L ⇣ N2 then the test runs in quadratic time and we recover exactly the minimax rate. • If N . L . N2 then the rate still converges to 0; there is a trade-off between the cost . (N2/L)2s/(4s+d) incurred in the minimax rate and the computational efficiency O(B1L). • If L . N then there is no guarantee that the rate converges to 0. To summarize, the tests we propose have computational cost O(B1L) which can be specified by the user with the choice of the number of wild bootstraps B1, and of the design size L (as a function of the sample size N ). There is a trade-off between test power and computational cost. We provide theoretical rates in terms of L and N , working up to a constant. The rate is minimax optimal in the case where L grows quadratically with N . We quantify exactly how, as the computational cost decreases from quadratic to linear in the sample size, the rate deteriorates gradually from being minimax optimal to not being guaranteed to convergence to zero. In our experiments, we use a design size which grows linearly with the sample size in order to compare our tests against other linear-time tests in the literature. The assumption guaranteeing that the rate converges to 0 is not satisfied in this setting, however, it would be satisfied for any faster growth of the design size (e.g. L ⇣ N log logN ). 6 Efficient aggregated kernel tests using incomplete U -statistics We now introduce our aggregated tests that combine single tests with different bandwidths. Our aggregation scheme is similar to those of Fromont et al. (2013), Albert et al. (2022) and Schrab et al. (2021, 2022), and can yield a test which is adaptive to the unknown smoothness parameter s of the Sobolev ball Ssd(R), with relatively low price. Let ⇤ be a finite collection of bandwidths, (w ) 2⇤ be associated weights satisfying P 2⇤ w 1, and u↵ be some correction term defined shortly in Equation (17). Then, using the incomplete U -statistic U , we define our aggregated test ⇤↵ as ⇤ ↵(ZN ) := 1 ⇣ U (ZN ) > bq 1 u↵w for some 2 ⇤ ⌘ . The levels of the single tests are weighted and adjusted with a correction term u↵ := supB3 ( u 2 ✓ 0, min 2⇤ w 1 ◆ : 1 B2 B2X b=1 1 ✓ max 2⇤ ⇣ eU b U •dB1(1 uw )e ⌘ > 0 ◆ ↵ ) , (17) where the wild bootstrapped incomplete U -statistics eU1 , . . . 
, eU B2 computed as in Equation (12) are used to perform a Monte Carlo approximation of the probability under the null, and where the supremum is estimated using B3 steps of bisection method. Proposition 1, along with the reasoning of Schrab et al. (2021, Proposition 8), ensures that ⇤↵ has non-asymptotic level ↵ for the twosample and independence cases, and asymptotic level ↵ for the goodness-of-fit case. We refer to the three aggregated tests constructed using incomplete U -statistics as MMDAggInc, HSICAggInc and KSDAggInc. The computational complexity of those tests is O(|⇤|(B1 +B2)L), which means, for example, that if L ⇣ N as in Equation (11), the tests run efficiently in linear time in the sample size. We formally record error guarantees of ⇤↵ and derive uniform separation rates over Sobolev balls. Theorem 2. Suppose that the assumptions in Appendix A.2 hold, and consider a collection ⇤. (i) For sample size N and design size L, if there exists some C > 0 such that kp qk2 2 min 2⇤ k(p q) T (p q)k22 + C N L ln 1/(↵w ) 2, ! , then PH1 ⇤ ↵(ZN ) = 0 (type II error), where 2, . 1/ p 1 · · · d for MMD and HSIC. (ii) Assume L > N so that ln(ln(L/N)) is well-defined. Consider the collections of bandwidths and weights (independent of the parameters s and R of the Sobolev ball Ssd(R)) ⇤ := n 2 `, . . . , 2 ` 2 (0,1)d : ` 2 n 1, . . . , l 2 d log 2 ⇣ L/N ln(ln(L/N)) ⌘moo , w := 6 ⇡2`2 . For the two-sample and independence problems, the uniform separation rate of ⇤ ↵ over the Sobolev balls Ssd(R) : R > 0, s > 0 is (up to a constant) ✓ L/N ln(ln(L/N)) ◆ 2s/(4s+d) . The extension from Theorem 1 to Theorem 2 has been proved for complete U -statistics in the two-sample (Fromont et al., 2013; Schrab et al., 2021), independence (Albert et al., 2022) and goodness-of-fit (Schrab et al., 2022) testing frameworks. The proof of Theorem 2 follows with the same reasoning by simply replacing N with L/N as we work with incomplete U -statistics; this ‘replacement’ is theoretically justified by Theorem 1. Theorem 2 shows that the aggregated test ⇤↵ is adaptive over Sobolev balls Ssd(R) : R > 0, s > 0 : the test ⇤↵ does not depend on the unknown smoothness parameter s (unlike ⇤ ↵ in Theorem 1) and achieves the minimax rate, up to an iterated logarithmic factor, and up to the cost incurred for efficiency of the test (i.e. L/N instead of N ). 7 Minimax optimal permuted quadratic-time aggregated independence test Considering Theorem 2 with our incomplete U -statistic with full design D = iN 2 for which L ⇣ N2, we have proved that the quadratic-time two-sample and independence aggregated tests using a wild bootstrap achieve the rate (N/ln(ln(N))) 2s/(4s+d) over the Sobolev balls Ssd(R) : R > 0, s > 0 . This is the minimax rate (Li and Yuan, 2019; Albert et al., 2022), up to some iterated logarithmic term. For the two-sample problem, Kim et al. (2022) and Schrab et al. (2021) show that this optimality result also holds when using complete U -statistics with permutations. Whether the equivalent statement for the independence test with permutations holds has not yet been addressed; the rate can be proved using theoretical (unknown) quantiles with a Gaussian kernel (Albert et al., 2022), but has not yet been proved using permutations. Kim et al. (2022, Proposition 8.7) consider this problem, again using a Gaussian kernel, but they do not obtain the correct dependence on ↵ (i.e. they obtain ↵ 1/2 rather than ln(1/↵)), hence they cannot recover the desired rate. As pointed out by Kim et al. 
(2022, Section 8): ‘It remains an open question as to whether [the power guarantee] continues to hold when ↵ 1/2 is replaced by ln(1/↵)’. We now prove that we can improve the ↵-dependence to ln(1/↵)3/2 for any bounded kernel of the form of Equation (13), and that this allows us to obtain the desired rate over Sobolev balls Ssd(R) : R > 0, s > d/4 . The assumption s > d/4 imposes a stronger smoothness restriction on p q 2 Ssd(R), which is similarly also considered by Li and Yuan (2019). Theorem 3. Consider the quadratic-time independence test using the complete U -statistic HSIC estimator with a quantile estimated using permutations as done by Kim et al. (2022, Proposition 8.7), with kernels as in Equation (13) for bounded functions Ki and Lj for i = 1, . . . , dx, j = 1, . . . , dy . (i) Suppose that the assumptions in Appendix A.1 hold. For fixed R > 0, s > d/4, and bandwidths ⇤i := N 2/(4s+d) for i = 1, . . . , d, the probability of type II error of the test is controlled by when kp qk2 2 k(p q) T ⇤(p q)k22 + C 1 N ln(1/↵)3/2 p ⇤ 1 · · · ⇤d for some constant C > 0. The uniform separation rate over the Sobolev ball Ssd(R) is, up to a constant, N 2s/(4s+d). (ii) Suppose that the assumptions in Appendix A.2 hold. The uniform separation rate over the Sobolev balls Ssd(R) : R > 0, s > d/4 is N/ ln(ln(N)) 2s/(4s+d) , up to a constant, with the collections ⇤ := n 2 `, . . . , 2 ` 2 (0,1)d : ` 2 n 1, . . . , l 2 d log 2 ⇣ N ln(ln(N)) ⌘moo , w := 6 ⇡2`2 . The proof of Theorem 3, in Appendix F.5, uses the exponential concentration bound of Kim et al. (2022, Theorem 6.3) for permuted complete U -statistics. Another possible approach to obtain the correct dependency on ↵ is to employ the sample-splitting method proposed by Kim et al. (2022, Section 8.3) in order to transform the independence problem into a two-sample problem. While this indirect approach leads to a logarithmic factor in ↵, the practical power would be suboptimal due to an inefficient use of the data from sample splitting. Theorem 3 (i) shows that a ln(1/↵)3/2 dependence is achieved by the more practical permutation-based HSIC test. Theorem 3 (ii) demonstrates that this leads to a minimax optimal rate for the aggregated HSIC test, up to the ln(ln(N)) cost for adaptivity. 8 Experiments For the two-sample problem, we consider testing samples drawn from a uniform density on [0, 1]d against samples drawn from a perturbed uniform density. For the independence problem, the joint density is a perturbed uniform density on [0, 1]dx+dy , the marginals are then simply uniform densities. Those perturbed uniform densities can be shown to lie in Sobolev balls (Li and Yuan, 2019; Albert et al., 2022), to which our tests are adaptive. For the goodness-of-fit problem, we use a GaussianBernoulli Restricted Boltzmann Machine as first considered by Liu et al. (2016) in this testing framework. We use collections of 21 bandwidths for MMD and HSIC and of 25 bandwidth pairs for HSIC; more details on the experiments (e.g. model and test parameters) are presented in Appendix C. We consider our incomplete aggregated tests MMDAggInc, HSICAggInc and KSDAggInc, with parameter R 2 {1, . . . , N 1} which fixes the deterministic design to consist of the first R subdiagonals of the N ⇥N matrix, i.e. D := {(i, i+ r) : i = 1, . . . , N r for r = 1, . . . , R} with size |D| = RN R(R 1)/2. We run our incomplete tests with R 2 {1, 100, 200} and also the complete test using the full design D = iN 2 . 
We compare their performances with current linear-time state-of-the-art tests: ME, SCF, FSIC and FSSD (Jitkrittum et al., 2016, 2017a,b), which evaluate the witness functions at a finite set of locations chosen to maximize the power; Cauchy RFF (random Fourier feature) and L1 IMQ (Huggins and Mackey, 2018), which are random feature Stein discrepancies; LSD (Grathwohl et al., 2020), which uses a neural network to learn the Stein discrepancy; and OST PSI (Kübler et al., 2020), which performs kernel selection using post selection inference. Similar trends are observed across all our experiments in Figure 1, for the three testing frameworks, when varying the sample size, the dimension, and the difficulty of the problem (scale of perturbations or noise level). The linear-time tests AggInc R = 200 almost match the power obtained by the quadratic-time tests AggCom in all settings (except in Figure 1 (i) where the difference is larger) while being computationally much more efficient, as can be seen in Figure 1 (d, h, l). The incomplete tests with R = 100 have power only slightly below those using R = 200, and run roughly twice as fast (Figure 1 (d, h, l)). In all experiments, those three tests (AggInc R = 100, 200 and AggCom) have significantly higher power than the linear-time tests which optimize test locations (ME, SCF, FSIC and FSSD); in the two-sample case the aggregated tests run faster for small sample sizes but slower for large sample sizes, in the independence case the aggregated tests run much faster, and in the goodness-of-fit case FSSD runs faster. While both types of tests are linear, we note that the runtimes of the tests of Jitkrittum et al. (2016, 2017a,b) increase more slowly with the sample size than those of our aggregated tests with R = 100, 200, but a fixed computational cost is incurred for their optimization step, even for small sample sizes. In the goodness-of-fit framework, L1 IMQ performs similarly to FSSD, which is in line with the results presented by Huggins and Mackey (2018, Figure 4d), who consider the same experiment. All other goodness-of-fit tests (except KSDAggInc R = 1) achieve much higher test power. Cauchy RFF and KSDAggInc R = 200 obtain similar power in almost all the experiments. While KSDAggInc R = 200 runs much faster in the experiments presented⁶, it seems that the KSDAggInc runtimes increase more steeply with the sample size than the Cauchy RFF and L1 IMQ runtimes (see Appendix D.5 for details). LSD matches the power of KSDAggInc R = 100 when varying the noise level in Figure 1 (k) (KSDAggInc R = 200 has higher power), and when varying the hidden dimension in Figure 1 (j) where $d_x = 100$. When varying the sample size in Figure 1 (i), both KSDAggInc tests with R = 100, 200 achieve much higher power than LSD. Unsurprisingly, AggInc R = 1, which runs much faster than all the aforementioned tests, has low power in every experiment. For the two-sample problem, it obtains slightly higher power than OST PSI, which runs even faster. We include more experiments in Appendix D: we present experiments on the MNIST dataset (the same trends are observed), we use different collections of bandwidths, we verify that all tests have well-calibrated levels, and we illustrate the benefits of the aggregation procedure.

9 Acknowledgements

Antonin Schrab acknowledges support from U.K. Research and Innovation (EP/S021566/1).
Ilmun Kim acknowledges support from the Yonsei University Research Fund of 2021-22-0332, and from the Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education (2022R1A4A1033384). Benjamin Guedj acknowledges partial support by the U.S. Army Research Laboratory and the U.S. Army Research Office, and by the U.K. Ministry of Defence and the U.K. Engineering and Physical Sciences Research Council (EP/R013616/1), and by the French National Agency for Research (ANR-18-CE40-0016-01 & ANR-18-CE23-0015-02). Arthur Gretton acknowledges support from the Gatsby Charitable Foundation.

⁶ The runtimes in Figure 1 (d, h, l) can also vary due to the different implementations of the respective authors.
1. What is the main contribution of the paper regarding kernel-based hypothesis testing?
2. What are the strengths and weaknesses of the proposed approach, particularly in its practical relevance and theoretical analysis?
3. How does the reviewer assess the trade-off between computational resources and statistical significance in the provided non-asymptotic tests?
4. What are the limitations of the paper regarding the choice of kernels and their aggregation procedure?
5. How does the reviewer evaluate the presentation and clarity of the initial statistics and their scaling properties?
6. Do you have any concerns about the claim made in the paper regarding the aggregation procedure leading to state-of-the-art powerful tests?
7. What are your opinions on the comparison with prior works, such as Sutherland (ICLR 2017) and Liu (ICML 2020), regarding continuous optimization of kernels?
8. Are there any questions or comments regarding the experimental results and their illustration of the findings?
9. How should practitioners choose the number of bandwidths in practice, and what are the potential consequences of including too many bandwidths?
10. Does the paper have any direct negative societal impact, and how does the reviewer perceive the overall value of the work?
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
Summary Of The Paper

Post rebuttal: after the authors' response and reading the other reviews, I updated my score to 6 (from initially 4); see also my comment below.

-------- original review ------

The paper considers a general framework for kernel-based hypothesis testing, where the test statistic is given by a U-statistic. The framework covers two-sample testing, independence testing, and goodness-of-fit testing. The paper shows that using an incomplete U-statistic estimate allows one to trade off computational resources for statistical significance, which follows from the general theory of incomplete U-statistics. Furthermore, it adopts recent advances to aggregate such tests over multiple kernels and provides insights into the minimax separation rates over Sobolev balls. Lastly, the paper provides simple experiments on toy data, illustrating the findings and comparing to some other approaches to tackle the respective testing problems.

Strengths And Weaknesses

Strengths: the work applies to three different testing scenarios and illustrates their close connections. Although the use of incomplete U-statistics is not completely new for kernel-based tests (see e.g. Yamada et al., ICLR 2019), the provided non-asymptotic tests are relevant and nicely illustrate the trade-off between computational resources and statistical significance. The tests provably control type-I error also at finite data, while some of the existing methods, like OST, do not. The theoretical insights are concisely stated and the prior results properly attributed. Arguably, though, the provided results are rather simple consequences and combinations of prior results. The provided code is clean and it is easy to reproduce the experiments. The code was submitted after the deadline, which might be a violation of the rules!

Weaknesses: Overall, I think the practical relevance is quite limited:
a) The experiments are limited to very simple toy data sets. While they illustrate the effect of using incomplete U-statistics, the effect of the aggregation procedure is not illustrated at all.
b) Only very few (4!) kernels are aggregated over. IMO this does not suffice to illustrate the benefits of the aggregation procedure.
c) Overall there should be more guidance for practitioners. How many kernels should one choose in practice?
The authors consider 'linear-time' tests (eq. 10), which I think is misleading. By their theoretical result Theorem 1 ii), and considering L = cN, the uniform separation rate is not guaranteed to converge to zero. I thus also think that the gray box on page 7 is actually wrong. IMO it should be changed to the following: let L = N^{1+a}. Then for a = 1 one recovers the minimax rate. For a > 0 the rate still converges to zero, but slower. For a ≤ 0 there is no guarantee (see the short computation at the end of this review). Overall the provided theory does not provide guarantees for linear-time tests.
For the two-sample problem, the provided tests cannot handle imbalanced samples in a meaningful way. The provided approach simply truncates data (N = min(m, n) in line 131).
I think the presentation of the initial statistics (1) and (3) is suboptimal. For inexperienced readers it seems that these statistics scale like N^4. So I think it would be better to directly introduce the statistics such that they correspond to the complete U-statistics of (7) and (8). Alternatively, it should be explained why these statistics scale quadratically (no need to tell me in the rebuttal).
The claim that the "aggregation procedure is known to lead to state-of-the-art powerful tests" (l. 26f) seems a bit biased. Prior work (Sutherland (ICLR 2017), Liu (ICML 2020)) showed that continuously optimizing a kernel is quite advantageous and harnesses the benefits of gradient-based optimization. The present work only allows one to combine finitely many (prespecified) kernels. The aggregation scheme is a direct adaptation from prior work (the authors are transparent about this).

Questions

Are the tests really 'linear-time'? (see comment in limitations)
l. 152-156: why do you discuss the random design when in the end you are using the deterministic one?
How should one choose the number of bandwidths (ℓ) in practice?
Minor comments: l. 147: define what a degenerate kernel is. Typo in the equation before line 500: should be $x_{i_1}, \ldots$

Limitations

The work is theoretical and no direct negative societal impact is to be expected. The theoretical results discuss minimax optimal rates, which leaves the impression that nothing can go wrong. But in practice there remain some parameters that users have to choose, for example how many bandwidths to include in the aggregation. For the experiments the authors only use 4 bandwidths, which in my opinion hardly suffices to illustrate the benefits of this aggregation. On the other hand, it is not clear what happens if too many bandwidths are included.
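For concreteness, the computation behind the L = N^{1+a} point above can be sketched from the separation rate stated in the paper's Theorem 2 (constants are suppressed, and the iterated-logarithm factor only contributes a vanishing correction):

```latex
% Sketch: behavior of the Theorem 2 separation rate under L = N^{1+a}.
\[
  \left(\frac{L/N}{\ln\ln(L/N)}\right)^{-2s/(4s+d)}
  \;\asymp\; N^{-2as/(4s+d)}
  \qquad \text{for } L = N^{1+a},\; a > 0,
\]
\[
  a = 1:\; N^{-2s/(4s+d)} \text{ (minimax)}, \qquad
  0 < a < 1:\; \text{slower decay}, \qquad
  a \le 0:\; L/N \text{ bounded, no decay guarantee.}
\]
```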
NIPS
Title Rethinking pooling in graph neural networks

Abstract

Graph pooling is a central component of a myriad of graph neural network (GNN) architectures. As an inheritance from traditional CNNs, most approaches formulate graph pooling as a cluster assignment problem, extending the idea of local patches in regular grids to graphs. Despite the wide adherence to this design choice, no work has rigorously evaluated its influence on the success of GNNs. In this paper, we build upon representative GNNs and introduce variants that challenge the need for locality-preserving representations, either using randomization or clustering on the complement graph. Strikingly, our experiments demonstrate that using these variants does not result in any decrease in performance. To understand this phenomenon, we study the interplay between convolutional layers and the subsequent pooling ones. We show that the convolutions play a leading role in the learned representations. In contrast to the common belief, local pooling is not responsible for the success of GNNs on relevant and widely-used benchmarks.

1 Introduction

The success of graph neural networks (GNNs) [3, 36] in many domains [9, 15, 25, 41, 46, 50] is due to their ability to extract meaningful representations from graph-structured data. Similarly to convolutional neural networks (CNNs), a typical GNN sequentially combines local filtering, nonlinearity, and (possibly) pooling operations to obtain refined graph representations at each layer. Whereas the convolutional filters capture local regularities in the input graph, the interleaved pooling operations reduce the graph representation while ideally preserving important structural information.

Although strategies for graph pooling come in many flavors [26, 30, 47, 49], most GNNs follow a hierarchical scheme in which the pooling regions correspond to graph clusters that, in turn, are combined to produce a coarser graph [4, 7, 13, 21, 47, 48]. Intuitively, these clusters generalize the notion of local neighborhood exploited in traditional CNNs and allow for pooling graphs of varying sizes. The cluster assignments can be obtained via deterministic clustering algorithms [4, 7] or be learned in an end-to-end fashion [21, 47]. Also, one can leverage node embeddings [21], graph topology [8], or both [47, 48], to pool graphs. We refer to these approaches as local pooling. Together with attention-based mechanisms [24, 26], the notion that clustering is a must-have property of graph pooling has been tremendously influential, resulting in an ever-increasing number of pooling schemes [14, 18, 21, 27, 48]. Implicit in any pooling approach is the belief that the quality of the cluster assignments is crucial for GNN performance. Nonetheless, to the best of our knowledge, this belief has not been rigorously evaluated. Misconceptions not only hinder new advances but also may lead to unnecessary complexity and obfuscate interpretability. This is particularly critical in graph representation learning, as we have seen a clear trend towards simplified GNNs [5, 6, 11, 31, 43].

In this paper, we study the extent to which local pooling plays a role in GNNs. In particular, we choose representative models that are popular or claim to achieve state-of-the-art performances and simplify their pooling operators by eliminating any clustering-enforcing component.
We either apply randomized cluster assignments or operate on complementary graphs. Surprisingly, the empirical results show that the non-local GNN variants exhibit comparable, if not superior, performance to the original methods in all experiments. To understand our findings, we design new experiments to evaluate the interplay between convolutional layers and pooling, and analyze the learned embeddings. We show that graph coarsening in both the original methods and our simplifications leads to homogeneous embeddings. This is because successful GNNs usually learn low-pass filters at early convolutional stages. Consequently, the specific way in which we combine nodes for pooling becomes less relevant.

In a nutshell, the contributions of this paper are: i) we show that popular and modern representative GNNs do not perform better than simple baselines built upon randomization and non-local pooling; ii) we explain why the simplified GNNs work and analyze the conditions for this to happen; and iii) we discuss the overall impact of pooling in the design of efficient GNNs. Aware of common misleading evaluation protocols [10, 11], we use benchmarks on which GNNs have proven to beat structure-agnostic baselines. We believe this work presents a sanity-check for local pooling, suggesting that novel pooling schemes should rely on more ablation studies to validate their effectiveness.

Notation. We represent a graph G, with n > 0 nodes, as an ordered pair (A, X) comprising a symmetric adjacency matrix A ∈ {0, 1}^{n×n} and a matrix of node features X ∈ R^{n×d}. The matrix A defines the graph structure: two nodes i, j are connected if and only if A_ij = 1. We denote by D the diagonal degree matrix of G, i.e., D_ii = Σ_j A_ij. We denote the complement of G by Ḡ = (Ā, X), where Ā has zeros on its diagonal and Ā_ij = 1 − A_ij for i ≠ j.

2 Exposing local pooling

2.1 Experimental setup

Models. To investigate the relevance of local pooling, we study three representative models. We first consider GRACLUS [8], an efficient graph clustering algorithm that has been adopted as a pooling layer in modern GNNs [7, 34]. We combine GRACLUS with a sum-based convolutional operator [28]. Our second choice is the popular differential pooling model (DIFFPOOL) [47]. DIFFPOOL is the pioneering approach to learned pooling schemes and has served as inspiration to many methods [14]. Last, we look into the graph memory network (GMN) [21], a recently proposed model that reports state-of-the-art results on graph-level prediction tasks. Here, we focus on local pooling mechanisms and expect the results to be relevant for a large class of models whose principle is rooted in CNNs.

Tasks and datasets. We use four graph-level prediction tasks as running examples: predicting the constrained solubility of molecules (ZINC, [20]); classifying chemical compounds regarding their activity against lung cancer (NCI1, [40]); categorizing ego-networks of actors w.r.t. the genre of the movies in which they collaborated (IMDB-B, [45]); and classifying handwritten digits (Superpixels MNIST, [1, 10]). The datasets cover graphs with no node features as well as with discrete and continuous ones. For completeness, we also report results on five other broadly used datasets in Section 3.1. Statistics of the datasets are available in the supplementary material (Section A.1).

Evaluation. We split each dataset into train (80%), validation (10%) and test (10%) data. For the regression task, we use the mean absolute error (MAE) as the performance metric.
We report statistics of the performance metrics over 20 runs with different seeds. Similarly to the evaluation protocol in [10], we train all models with Adam [22] and apply learning rate decay, ranging from an initial 10⁻³ down to 10⁻⁵, with a decay ratio of 0.5 and patience of 10 epochs. Also, we use early stopping based on the validation accuracy. For further details, we refer to Appendix A in the supplementary material. Notably, we do not aim to benchmark the performance of the GNN models. Rather, we want to isolate local pooling effects. Therefore, for model selection, we follow guidelines provided by the original authors or in benchmarking papers and simply modify the pooling mechanism, keeping the remaining model structure untouched. All methods were implemented in PyTorch [12, 33] and our code is available at https://github.com/AaltoPML/Rethinking-pooling-in-GNNs.

Case 1: Pooling with off-the-shelf graph clustering

We first consider a network design that resembles standard CNNs. Following architectures used in [7, 12, 13], we alternate graph convolutions [28] and pooling layers based on graph clustering [8]. At each layer, a neighborhood aggregation step combines each node feature vector with the features of its neighbors in the graph. The features are linearly transformed before running through a component-wise non-linear function (e.g., ReLU). In matrix form, the convolution is
$$Z^{(l)} = \mathrm{ReLU}\big(X^{(l-1)} W_1^{(l)} + A^{(l-1)} X^{(l-1)} W_2^{(l)}\big) \quad \text{with } (A^{(0)}, X^{(0)}) = (A, X), \qquad (1)$$
where $W_1^{(l)}, W_2^{(l)} \in \mathbb{R}^{d_{l-1} \times d_l}$ are model parameters, and $d_l$ is the embedding dimension at layer $l$. The next step consists of applying the GRACLUS algorithm [8] to obtain a cluster assignment matrix $S^{(l)} \in \{0,1\}^{n_{l-1} \times n_l}$ mapping each node to its cluster index in $\{1, \ldots, n_l\}$, with $n_l < n_{l-1}$ clusters. We then coarsen the features by max-pooling the nodes in the same cluster:
$$X^{(l)}_{kj} = \max_{i : S^{(l)}_{ik} = 1} Z^{(l)}_{ij}, \qquad k = 1, \ldots, n_l, \qquad (2)$$
and coarsen the adjacency matrix such that $A^{(l)}_{ij} = 1$ iff clusters $i$ and $j$ contain neighboring nodes in $G^{(l-1)}$:
$$A^{(l)} = S^{(l)\top} A^{(l-1)} S^{(l)} \quad \text{with } A^{(l)}_{kk} = 0, \qquad k = 1, \ldots, n_l. \qquad (3)$$

Clustering the complement graph. The clustering step holds the idea that good pooling regions, equivalently to their CNN counterparts, should group nearby nodes. To challenge this intuition, we follow an opposite idea and set pooling regions by grouping nodes that are not connected in the graph. In particular, we compute the assignments $S^{(l)}$ by applying GRACLUS to the complement graph $\bar{G}^{(l-1)}$ of $G^{(l-1)}$. Note that we only employ the complement graph to compute the cluster assignments $S^{(l)}$. With the assignments in hand, we apply the pooling operation (Equations 2 and 3) using the original graph structure. Henceforth, we refer to this approach as COMPLEMENT; a code sketch of this pipeline follows below.

Results. Figure 1 shows the distribution of the performance of the standard approach (GRACLUS) and its variant that operates on the complement graph (COMPLEMENT). In all tasks, both models perform almost on par and the distributions have a similar shape. Despite their simplicity, GRACLUS and COMPLEMENT are strong baselines (see Table 1). For instance, Errica et al. [11] report GIN [44] as the best performing model for the NCI1 dataset, achieving 80.0 ± 1.4 accuracy. This is indistinguishable from COMPLEMENT's performance (80.1 ± 1.6). We observe the same trend on IMDB-B: GraphSAGE [16] obtains 69.9 ± 4.6 and COMPLEMENT scores 70.6 ± 5.1 — a less than 0.6% difference.
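The following minimal NumPy sketch illustrates the Case 1 pooling operations (Equations 2–3) and the complement-graph trick; GRACLUS itself is an external algorithm, so the cluster assignments S are taken as given here, and all function names are our own (an illustration, not the authors' implementation; it also assumes every cluster is non-empty).

```python
import numpy as np

def complement_adjacency(A):
    # A-bar: zeros on the diagonal, A-bar_ij = 1 - A_ij for i != j.
    A_bar = 1 - A
    np.fill_diagonal(A_bar, 0)
    return A_bar

def pool(Z, A, S):
    # Equations 2-3: max-pool features within clusters, then coarsen the
    # adjacency and zero its diagonal.
    # Z: (n, d) features, A: (n, n) adjacency, S: (n, k) one-hot assignments.
    k = S.shape[1]
    X_new = np.stack([Z[S[:, c] == 1].max(axis=0) for c in range(k)])
    A_new = (S.T @ A @ S > 0).astype(int)
    np.fill_diagonal(A_new, 0)
    return X_new, A_new

# Toy usage: a 4-node path graph with clusters {0, 1} and {2, 3}.
# For COMPLEMENT, S would instead be computed by clustering
# complement_adjacency(A), while pool() still uses the original A.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
Z = np.arange(8.0).reshape(4, 2)
S = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])
X_new, A_new = pool(Z, A, S)
print(X_new)  # [[2. 3.] [6. 7.]]
print(A_new)  # [[0 1] [1 0]] -- the two clusters are adjacent
```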
Using the same data splits as [10], COMPLEMENT and GIN perform within a margin of 1% in accuracy (SMNIST) and 0.01 in absolute error (ZINC).

Case 2: Differential pooling

DIFFPOOL [47] uses a GNN to learn cluster assignments for graph pooling. At each layer $l$, the soft cluster assignment matrix $S^{(l)} \in \mathbb{R}^{n_{l-1} \times n_l}$ is
$$S^{(l)} = \mathrm{softmax}\big(\mathrm{GNN}_1^{(l)}(A^{(l-1)}, X^{(l-1)})\big) \quad \text{with } (A^{(0)}, X^{(0)}) = (A, X). \qquad (4)$$
The next step applies $S^{(l)}$ and a second GNN to compute the graph representation at layer $l$:
$$X^{(l)} = S^{(l)\top}\, \mathrm{GNN}_2^{(l)}(A^{(l-1)}, X^{(l-1)}) \quad \text{and} \quad A^{(l)} = S^{(l)\top} A^{(l-1)} S^{(l)}. \qquad (5)$$
During training, DIFFPOOL employs a sum of three loss functions: i) a supervised loss; ii) the Frobenius norm between $A^{(l)}$ and the Gramian of the cluster assignments, at each layer, i.e., $\sum_l \|A^{(l)} - S^{(l)} S^{(l)\top}\|_F$; iii) the entropy of the cluster assignments at each layer. The second loss is referred to as the link prediction loss and enforces nearby nodes to be pooled together. The third loss penalizes the entropy, encouraging sharp cluster assignments.

Random assignments. To confront the influence of the learned cluster assignments, we replace $S^{(l)}$ in Equation 4 with a normalized random matrix $\mathrm{softmax}(\tilde{S}^{(l)})$. We consider three distributions:
$$\text{(Uniform) } \tilde{S}^{(l)}_{ij} \sim \mathcal{U}(a, b), \qquad \text{(Normal) } \tilde{S}^{(l)}_{ij} \sim \mathcal{N}(\mu, \sigma^2), \qquad \text{(Bernoulli) } \tilde{S}^{(l)}_{ij} \sim \mathcal{B}(\alpha). \qquad (6)$$
We sample the assignment matrix before training starts and do not propagate gradients through it.

Results. Figure 2 compares DIFFPOOL against the randomized variants. In all tasks, the highest average accuracy is due to a randomized approach. Nonetheless, there is no clear winner among all methods. Notably, the variances obtained with the random pooling schemes are not significantly higher than those from DIFFPOOL.

Remark 1. Permutation invariance is an important property of most GNNs that assures consistency across different graph representations. However, the randomized variants break the invariance of DIFFPOOL. A simple fix consists of taking $\tilde{S}^{(l)} = X^{(l-1)} \tilde{S}'$, where $\tilde{S}' \in \mathbb{R}^{d_{l-1} \times n_l}$ is a random matrix. Figure 3 compares the randomized variants with and without this fix w.r.t. the validation error on artificially permuted graphs during training on the ZINC dataset. Results suggest that the variants are approximately invariant.

Case 3: Graph memory networks

Graph memory networks (GMNs) [21] consist of a sequence of memory layers stacked on top of a GNN, also known as the initial query network. We denote the output of the initial query network by $Q^{(0)}$. The first step in a memory layer computes kernel matrices between input queries $Q^{(l-1)} = [q_1^{(l-1)}, \ldots, q_{n_{l-1}}^{(l-1)}]^\top$ and multi-head keys $K_h^{(l)} = [k_{1h}^{(l)}, \ldots, k_{n_l h}^{(l)}]^\top$:
$$S^{(l)}_{ijh} \propto \Big(1 + \|q_i^{(l-1)} - k_{jh}^{(l)}\|^2 / \tau\Big)^{-\frac{\tau+1}{2}}, \qquad \forall h = 1, \ldots, H, \qquad (7)$$
where $H$ is the number of heads and $\tau$ is the degrees of freedom of the Student's t-kernel. We then aggregate the multi-head assignments $S^{(l)}_h$ into a single matrix $S^{(l)}$ using a 1×1 convolution followed by row-wise softmax normalization. Finally, we pool the node embeddings $Q^{(l-1)}$ according to their soft assignments $S^{(l)}$ and apply a single-layer neural net:
$$Q^{(l)} = \mathrm{ReLU}\big(S^{(l)\top} Q^{(l-1)} W^{(l)}\big). \qquad (8)$$
In this notation, queries $Q^{(l)}$ correspond to node embeddings $X^{(l)}$ for $l > 0$. Also, note that the memory layer does not leverage graph structure information, as it is fully condensed into $Q^{(0)}$. Following [21], we use a GNN as the query network. In particular, we employ a two-layer network with the same convolutional operator as in Equation 1.
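As an illustration of Equations 7–8, here is a minimal single-head NumPy sketch of the memory layer (shapes, initialization and function names are our own assumptions; the multi-head aggregation with a 1×1 convolution is omitted).

```python
import numpy as np

def memory_layer(Q_prev, keys, W, tau=1.0):
    # Equation 7 (single head): soft-assign each query to the keys with a
    # Student's t-kernel, normalized row-wise.
    d2 = ((Q_prev[:, None, :] - keys[None, :, :]) ** 2).sum(-1)
    S = (1.0 + d2 / tau) ** (-(tau + 1.0) / 2.0)
    S = S / S.sum(axis=1, keepdims=True)
    # Equation 8: pool the queries with S and apply a linear map + ReLU.
    return np.maximum(S.T @ Q_prev @ W, 0.0)

# Usage: pool 50 node embeddings of dimension 16 down to 8 memory nodes.
rng = np.random.default_rng(0)
Q0 = rng.normal(size=(50, 16))     # output of the initial query network
K = rng.normal(size=(8, 16))       # learned keys (centroids)
W = rng.normal(size=(16, 16))
Q1 = memory_layer(Q0, K, W)
print(Q1.shape)                    # (8, 16)
```

The DISTANCE and RANDOM variants introduced below modify exactly this assignment step.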
The loss function employed to learn GMNs consists of a convex combination of: i) a supervised loss; and ii) the Kullback-Leibler divergence between the learned assignments and their self-normalized squares. The latter aims to enforce sharp soft-assignments, similarly to the entropy loss in DIFFPOOL.

Remark 2. Curiously, the intuition behind the loss that allegedly improves cluster purity might be misleading. For instance, uniform cluster assignments, the ones the loss was engineered to avoid, are a perfect minimizer for it. We provide more details in Appendix D.

Simplifying GMN. The principle behind GMNs consists of grouping nodes based on their similarities to learned keys (centroids). To scrutinize this principle, we propose two variants. In the first, we replace the kernel in Equation 7 by the Euclidean distance taken from fixed keys drawn from a uniform distribution. Opposite to vanilla GMNs, the resulting assignments group nodes that are farthest from a key. The second variant replaces the multi-head assignment matrices with a fixed matrix whose entries are independently sampled from a uniform distribution. We refer to these approaches, respectively, as DISTANCE and RANDOM.

Results. Figure 4 compares GMN with its simplified variants. For all datasets, DISTANCE and RANDOM perform on par with GMN, with slightly better MAE for the ZINC dataset. Also, the variants present no noticeable increase in variance. It is worth mentioning that the simplified GMNs are naturally faster to train as they have significantly fewer learned parameters. In the case of DISTANCE, keys are taken as constants once sampled. Additionally, RANDOM bypasses the computation of the pairwise distances in Equation 7, which dominates the time of the forward pass in GMNs. On the largest dataset (SMNIST), DISTANCE takes up to half the time of GMN (per epoch), whereas RANDOM is up to ten times faster than GMN.

3 Analysis

The results in the previous section are counter-intuitive. We now analyze the factors that led to these results. We show that the convolutions preceding the pooling layers learn representations which are approximately invariant to how nodes are grouped. In particular, GNNs learn smooth node representations at the early layers. To experimentally show this, we remove the initial convolutions that perform this early smoothing. As a result, all networks experience a significant drop in performance. Finally, we show that local pooling offers no benefit in terms of accuracy to the evaluated models.

Pooling learns approximately homogeneous node representations. The results with the random pooling methods suggest that any convex combination of the node features enables us to extract good graph representations. Intuitively, this is possible if the nodes display similar activation patterns before pooling. If we interpret convolutions as filters defined over node features/signals, this phenomenon can be explained if the initial convolutional layers extract low-frequency information across the graph input channels/embedding components. To evaluate this intuition, we compute the activations before the first pooling layer and after each of the subsequent pooling layers. Figure 5 shows activations for the random pooling variants of DIFFPOOL and GMN, and the COMPLEMENT approach on ZINC. The plots in Figure 5 validate that the first convolutional layers produce node features which are relatively homogeneous within the same graph, especially for the randomized variants. The networks learn features that resemble vertical bars.
As expected, the pooling layers accentuate this phenomenon, extracting even more similar representations. We report embeddings for other datasets in Appendix E. Even methods based on local pooling tend to learn homogeneous representations. As one can notice from Figure 6, DIFFPOOL and GMN show smoothed patterns in the outputs of their initial pooling layer. This phenomenon explains why the performance of the randomized approaches matches that of their original counterparts. The results suggest that the loss terms designed to enforce local clustering are either not beneficial to the learning task or are obfuscated by the supervised loss. This observation does not apply to GRACLUS, as it employs a deterministic clustering algorithm, separating the possibly competing goals of learning hierarchical structures and minimizing a supervised loss. To further gauge the impact of the unsupervised loss on the performance of these GNNs, we compare two DIFFPOOL models trained with the link prediction loss multiplied by the weighting factors λ = 10⁰ and λ = 10³. Figure 7 shows the validation curves of the supervised loss for ZINC and SMNIST. We observe that the supervised losses for models with both of the λ values converge to a similar point, at a similar rate. This validates that the unsupervised loss (link prediction) has little to no effect on the predictive performance of DIFFPOOL. We have observed a similar behavior for GMNs, which we report in the supplementary material.

Discouraging smoothness. In Figure 6, we observe homogeneous node representations even before the first pooling layer. This naturally poses a challenge for the upcoming pooling layers to learn meaningful local structures. These homogeneous embeddings correspond to low-frequency signals defined on the graph nodes. In the general case, achieving such patterns is only possible with more than a single convolution. This can be explained from a spectral perspective. Since each convolutional layer corresponds to filters that act linearly in the spectral domain, a single convolution cannot filter out specific frequency bands. These ideas have already been exploited to develop simplified GNNs [31, 43] that compute fixed polynomial filters in a normalized spectrum. Remarkably, using multiple convolutions to obtain initial embeddings is common practice [18, 26]. To evaluate its impact, we apply a single convolution before the first pooling layer in all networks. We then compare these networks against the original implementations. Table 2 displays the results. All models report a significant drop in performance with a single convolutional layer. On NCI1, the methods obtain accuracies about 4% lower, on average. Likewise, GMN and GRACLUS report a 4% performance drop on SMNIST. With a single initial GraphSAGE convolution, the performances of DIFFPOOL and its uniform variant drop to as low as 66.4% on SMNIST. The only dataset on which this dramatic drop is not observed is IMDB-B, which has constant node features and therefore may not benefit from the additional convolutions. Note that under reduced expressiveness, pooling far-away nodes seems to impose a negative inductive bias, as COMPLEMENT consistently fails to rival the performance of GRACLUS. Intuitively, the number of convolutions needed to achieve smooth representations depends on the richness of the initial node features. Many popular datasets rely on one-hot features and might require a small number of initial convolutions. Disentangling these effects is outside the scope of this work.
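The smoothing effect can be illustrated numerically. Below is a small self-contained sketch (our own illustration: it uses a generic row-normalized propagation matrix as a stand-in for learned convolutions, not the exact operator of Equation 1) showing that repeatedly propagating features over a random graph makes them increasingly homogeneous across nodes.

```python
import numpy as np

# Repeated (linear) propagation acts as a low-pass filter: the spread of the
# node features across the graph shrinks with every application.
rng = np.random.default_rng(0)
n = 100
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.triu(A, 1)
A = A + A.T                          # symmetric adjacency, no self-loops
deg = A.sum(axis=1)
deg[deg == 0] = 1.0                  # guard isolated nodes
P = A / deg[:, None]                 # row-normalized propagation matrix
X = rng.normal(size=(n, 8))          # random initial node features
for k in range(5):
    print(k, X.std(axis=0).mean())   # across-node spread, shrinks with k
    X = P @ X
```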
Is pooling overrated? Our experiments suggest that local pooling does not play an important role in the performance of GNNs. A natural next question is whether local pooling presents any advantage over global pooling strategies. We run a GNN with global mean pooling on top of three convolutional layers; the results are: 79.65 ± 2.07 (NCI1), 71.05 ± 4.58 (IMDB-B), 95.20 ± 0.18 (SMNIST), and 0.443 ± 0.03 (ZINC). These performances are on par with our previous results obtained using GRACLUS, DIFFPOOL and GMN. Regardless of how surprising this may sound, our findings are consistent with results reported in the literature. For instance, GraphSAGE networks outperform DIFFPOOL in a rigorous evaluation protocol [10, 11], although GraphSAGE is the convolutional component of DIFFPOOL. Likewise, but in the context of attention, Knyazev et al. [24] argue that, except under certain circumstances, general attention mechanisms are negligible or even harmful. Similar findings also hold for CNNs [35, 39]. Permutation invariance is a relevant property for GNNs as it guarantees that the network's predictions do not vary with the graph representation. For completeness, we state in Appendix C the permutation invariance of the simplified GNNs discussed in Section 2.

3.1 Additional results

More datasets. We consider four additional datasets commonly used to assess GNNs, three of which are part of the TU datasets [29]: PROTEINS, NCI109, and DD; and one from the recently proposed Open Graph Benchmark (OGB) framework [17]: MOLHIV. Since Ivanov et al. [19] showed that the IMDB-B dataset is affected by a serious isomorphism bias, we also report results on the "cleaned" version of the IMDB-B dataset, hereafter referred to as IMDB-C. Table 3 shows the results for the additional datasets. Similarly to what we have observed in the previous experiments, non-local pooling performs on par with local pooling. The largest performance gap occurs for the PROTEINS and MOLHIV datasets, on which COMPLEMENT achieves accuracy around 3% higher than GRACLUS, on average. The performance of all methods on IMDB-C does not differ significantly from their performance on IMDB-B. However, removing the isomorphisms in IMDB-B reduces the dataset size, which clearly increases variance.
Overall, we find that the choice of hyperparameters does not significantly increase the performance gap between GRACLUS and COMPLEMENT.

4 Related works

Graph pooling usually falls into global and hierarchical approaches. Global methods aggregate the node representations either via simple flattening procedures such as summing (or averaging) the node embeddings [44] or via more sophisticated set aggregation schemes [32, 49]. On the other hand, hierarchical methods [7, 14, 18, 21, 26, 27, 47] sequentially coarsen graph representations over the network's layers. Notably, Knyazev et al. [24] provide a unified view of local pooling and node attention mechanisms, and study the ability of pooling methods to generalize to larger and noisy graphs. Also, they show that the effect of attention is not crucial and sometimes can be harmful, and propose a weakly-supervised approach.

Simple GNNs. Over recent years, we have seen a surge of simplified GNNs. Wu et al. [43] show that removing the nonlinearity in GCNs [23] does not negatively impact their performance on node classification tasks. The resulting feature extractor consists of a low-pass-type filter. In [31], the authors take this idea further, evaluate the resilience to feature noise, and provide insights on GCN-based designs. For graph classification, Chen et al. [6] report that linear convolutional filters followed by nonlinear set functions achieve competitive performances against modern GNNs. Cai and Wang [5] propose a strong baseline based on local node statistics for non-attributed graph classification tasks.

Benchmarking GNNs. Errica et al. [11] demonstrate how the lack of rigorous evaluation protocols affects reproducibility and hinders new advances in the field. They found that structure-agnostic baselines outperform popular GNNs on at least three commonly used chemical datasets. Coming to the rescue of GNNs, Dwivedi et al. [10] argue that this lack of proper evaluation comes mainly from using small datasets. To tackle this, they introduce a new benchmarking framework with datasets that are large enough for researchers to identify key design principles and assess statistically relevant differences between GNN architectures. Similar issues related to the use of small datasets are reported in [37].

Understanding pooling and attention in regular domains. In the context of CNNs for object recognition tasks, Ruderman et al. [35] evaluate the role of pooling in CNNs w.r.t. the ability to handle deformation stability. The results show that pooling is not necessary for appropriate deformation stability. The explanation lies in the network's ability to learn smooth filters across the layers. Sheng et al. [38] and Zhao et al. [51] propose random pooling as a fast alternative to conventional CNNs. Regarding attention-based models, Wiegreffe and Pinter [42] show that attention weights usually do not provide meaningful explanations for predictions. These works demonstrate the importance of proper assessment of core assumptions in deep learning.

5 Conclusion

In contrast to the ever-increasing influx of GNN architectures, very few works rigorously assess which design choices are crucial for efficient learning. Consequently, misconceptions can be widely spread, influencing the development of models built on flawed intuitions. In this paper, we study the role of local pooling and its impact on the performance of GNNs. We show that most GNN architectures employ convolutions that can quickly lead to smooth node representations.
As a result, the pooling layers become approximately invariant to specific cluster assignments. We also found that clustering-enforcing regularization is usually innocuous. In a series of experiments on accredited benchmarks, we show that extracting local information is not a necessary principle for efficient pooling. By shedding new light onto the role of pooling, we hope to contribute to the community in at least two ways: i) providing a simple sanity-check for novel pooling strategies; ii) deconstructing misconceptions and wrong intuitions related to the benefits of graph pooling.

Acknowledgments and Disclosure of Funding

This work was funded by the Academy of Finland (Flagship programme: Finnish Center for Artificial Intelligence, FCAI, and grants 294238, 292334 and 319264). We acknowledge the computational resources provided by the Aalto Science-IT Project.

Broader impact

Graph neural networks (GNNs) have become the de facto learning tools in many valuable domains such as social network analysis, drug discovery, recommender systems, and natural language processing. Nonetheless, the fundamental design principles behind the success of GNNs are only partially understood. This work takes a step further in understanding local pooling, one of the core design choices in many GNN architectures. We believe this work will help researchers and practitioners better choose in which directions to employ their time and resources to build more accurate GNNs.
1. What is the focus of the paper in terms of graph neural networks?
2. What are the strengths of the paper regarding its research question and contribution?
3. What are the weaknesses of the paper regarding its experimental design and claims?
4. How does the reviewer assess the clarity and quality of the paper's content?
Summary and Contributions
Strengths
Weaknesses
Summary and Contributions

The paper deals with supervised graph classification and regression. Specifically, it investigates the impact of pooling for learning graph-level vectorial representations based on graph neural networks. The paper argues that current (cluster-based) pooling methods do not provide benefits over simple baselines (such as random cluster assignments). The authors empirically investigate well-known cluster-based GNN pooling layers on four well-known datasets. By running a set of carefully crafted experiments, they conclude that current cluster-based pooling layers do not provide benefits over random cluster assignments, and argue that this is because common GNN layers suffer from "over-smoothing", limiting the effectiveness of such pooling methods.

Strengths

- Clearly written
- Research question raised is meaningful (has not been studied before, as far as I know)
- Raises awareness of sloppy evaluation protocols inherent to most GNN papers
- Provides some (but limited) insights on the failure of common pooling layers

Weaknesses

- only evaluated on *four* (mostly small) datasets with small graphs stemming from chemo- and bioinformatics
- a lot of the arguments are of a handwavy nature
- strong, bold claims that are only partially backed up with experiments
NIPS
Title Rethinking pooling in graph neural networks Abstract Graph pooling is a central component of a myriad of graph neural network (GNN) architectures. As an inheritance from traditional CNNs, most approaches formulate graph pooling as a cluster assignment problem, extending the idea of local patches in regular grids to graphs. Despite the wide adherence to this design choice, no work has rigorously evaluated its influence on the success of GNNs. In this paper, we build upon representative GNNs and introduce variants that challenge the need for locality-preserving representations, either using randomization or clustering on the complement graph. Strikingly, our experiments demonstrate that using these variants does not result in any decrease in performance. To understand this phenomenon, we study the interplay between convolutional layers and the subsequent pooling ones. We show that the convolutions play a leading role in the learned representations. In contrast to the common belief, local pooling is not responsible for the success of GNNs on relevant and widely-used benchmarks. 1 Introduction The success of graph neural networks (GNNs) [3, 36] in many domains [9, 15, 25, 41, 46, 50] is due to their ability to extract meaningful representations from graph-structured data. Similarly to convolutional neural networks (CNNs), a typical GNN sequentially combines local filtering, nonlinearity, and (possibly) pooling operations to obtain refined graph representations at each layer. Whereas the convolutional filters capture local regularities in the input graph, the interleaved pooling operations reduce the graph representation while ideally preserving important structural information. Although strategies for graph pooling come in many flavors [26, 30, 47, 49], most GNNs follow a hierarchical scheme in which the pooling regions correspond to graph clusters that, in turn, are combined to produce a coarser graph [4, 7, 13, 21, 47, 48]. Intuitively, these clusters generalize the notion of local neighborhood exploited in traditional CNNs and allow for pooling graphs of varying sizes. The cluster assignments can be obtained via deterministic clustering algorithms [4, 7] or be learned in an end-to-end fashion [21, 47]. Also, one can leverage node embeddings [21], graph topology [8], or both [47, 48], to pool graphs. We refer to these approaches as local pooling. Together with attention-based mechanisms [24, 26], the notion that clustering is a must-have property of graph pooling has been tremendously influential, resulting in an ever-increasing number of pooling schemes [14, 18, 21, 27, 48]. Implicit in any pooling approach is the belief that the quality of the cluster assignments is crucial for GNNs performance. Nonetheless, to the best of our knowledge, this belief has not been rigorously evaluated. Misconceptions not only hinder new advances but also may lead to unnecessary complexity and obfuscate interpretability. This is particularly critical in graph representation learning, as we have seen a clear trend towards simplified GNNs [5, 6, 11, 31, 43]. ∗Equal contribution. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. In this paper, we study the extent to which local pooling plays a role in GNNs. In particular, we choose representative models that are popular or claim to achieve state-of-the-art performances and simplify their pooling operators by eliminating any clustering-enforcing component. 
We either apply randomized cluster assignments or operate on complementary graphs. Surprisingly, the empirical results show that the non-local GNN variants exhibit comparable, if not superior, performance to the original methods in all experiments. To understand our findings, we design new experiments to evaluate the interplay between convolutional layers and pooling; and analyze the learned embeddings. We show that graph coarsening in both the original methods and our simplifications lead to homogeneous embeddings. This is because successful GNNs usually learn low-pass filters at early convolutional stages. Consequently, the specific way in which we combine nodes for pooling becomes less relevant. In a nutshell, the contributions of this paper are: i) we show that popular and modern representative GNNs do not perform better than simple baselines built upon randomization and non-local pooling; ii) we explain why the simplified GNNs work and analyze the conditions for this to happen; and iii) we discuss the overall impact of pooling in the design of efficient GNNs. Aware of common misleading evaluation protocols [10, 11], we use benchmarks on which GNNs have proven to beat structureagnostic baselines. We believe this work presents a sanity-check for local pooling, suggesting that novel pooling schemes should count on more ablation studies to validate their effectiveness. Notation. We represent a graph G, with n > 0 nodes, as an ordered pair (A,X) comprising a symmetric adjacency matrix A ∈ {0, 1}n×n and a matrix of node features X ∈ Rn×d. The matrix A defines the graph structure: two nodes i, j are connected if and only if Aij = 1. We denote by D the diagonal degree matrix of G, i.e., Dii = ∑ j Aij . We denote the complement of G by Ḡ = (Ā,X), where Ā has zeros in its diagonal and Āij = 1−Aij for i 6= j. 2 Exposing local pooling 2.1 Experimental setup Models. To investigate the relevance of local pooling, we study three representative models. We first consider GRACLUS [8], an efficient graph clustering algorithm that has been adopted as a pooling layer in modern GNNs [7, 34]. We combine GRACLUS with a sum-based convolutional operator [28]. Our second choice is the popular differential pooling model (DIFFPOOL) [47]. DIFFPOOL is the pioneering approach to learned pooling schemes and has served as inspiration to many methods [14]. Last, we look into the graph memory network (GMN) [21], a recently proposed model that reports state-of-the-art results on graph-level prediction tasks. Here, we focus on local pooling mechanisms and expect the results to be relevant for a large class of models whose principle is rooted in CNNs. Tasks and datasets. We use four graph-level prediction tasks as running examples: predicting the constrained solubility of molecules (ZINC, [20]), classifying chemical compounds regarding their activity against lung cancer (NCI1, [40]); categorizing ego-networks of actors w.r.t. the genre of the movies in which they collaborated (IMDB-B, [45]); and classifying handwritten digits (Superpixels MNIST, [1, 10]). The datasets cover graphs with none, discrete, and continuous node features. For completeness, we also report results on five other broadly used datasets in Section 3.1. Statistics of the datasets are available in the supplementary material (Section A.1). Evaluation. We split each dataset into train (80%), validation (10%) and test (10%) data. For the regression task, we use the mean absolute error (MAE) as performance metric. 
We report statistics of the performance metrics over 20 runs with different seeds. Similarly to the evaluation protocol in [10], we train all models with Adam [22] and apply learning rate decay, ranging from initial 10−3 down to 10−5, with decay ratio of 0.5 and patience of 10 epochs. Also, we use early stopping based on the validation accuracy. For further details, we refer to Appendix A in the supplementary material. Notably, we do not aim to benchmark the performance of the GNN models. Rather, we want to isolate local pooling effects. Therefore, for model selection, we follow guidelines provided by the original authors or in benchmarking papers and simply modify the pooling mechanism, keeping the remaining model structure untouched. All methods were implemented in PyTorch [12, 33] and our code is available at https://github.com/AaltoPML/Rethinking-pooling-in-GNNs. Case 1: Pooling with off-the-shelf graph clustering We first consider a network design that resembles standard CNNs. Following architectures used in [7, 12, 13], we alternate graph convolutions [28] and pooling layers based on graph clustering [8]. At each layer, a neighborhood aggregation step combines each node feature vector with the features of its neighbors in the graph. The features are linearly transformed before running through a component-wise non-linear function (e.g., ReLU). In matrix form, the convolution is Z(l) = ReLU ( X(l−1)W (l) 1 + A (l−1)X(l−1)W (l) 2 ) with (A(0),X(0)) = (A,X), (1) where W (l)2 ,W (l) 1 ∈ Rdl−1×dl are model parameters, and dl is the embedding dimension at layer l. The next step consists of applying the GRACLUS algorithm [8] to obtain a cluster assignment matrix S(l) ∈ {0, 1}nl−1×nl mapping each node to its cluster index in {1, . . . , nl}, with nl < nl−1 clusters. We then coarsen the features by max-pooling the nodes in the same cluster: X (l) kj = max i:S (l) ik =1 Z (l) ij k = 1, . . . , nl (2) and coarsen the adjacency matrix such that A(l)ij = 1 iff clusters i and j have neighbors in G(l−1): A(l) = S(l) ᵀ A(l−1)S(l) with A(l)kk = 0 k = 1, . . . , nl. (3) Clustering the complement graph. The clustering step holds the idea that good pooling regions, equivalently to their CNN counterparts, should group nearby nodes. To challenge this intuition, we follow an opposite idea and set pooling regions by grouping nodes that are not connected in the graph. In particular, we compute the assignments S(l) by applying GRACLUS to the complement graph Ḡ(l−1) of G(l−1). Note that we only employ the complement graph to compute cluster assignments S(l). With the assignments in hand, we apply the pooling operation (Equations 2 and 3) using the original graph structure. Henceforth, we refer to this approach as COMPLEMENT. Results. Figure 1 shows the distribution of the performance of the standard approach (GRACLUS) and its variant that operates on the complement graph (COMPLEMENT). In all tasks, both models perform almost on par and the distributions have a similar shape. Despite their simplicity, GRACLUS and COMPLEMENT are strong baselines (see Table 1). For instance, Errica et al. [11] report GIN [44] as the best performing model for the NCI1 dataset, achieving 80.0±1.4 accuracy. This is indistinguishable from COMPLEMENT’s performance (80.1± 1.6). We observe the same trend on IMDB-B, GraphSAGE [16] obtains 69.9 ± 4.6 and COMPLEMENT scores 70.6 ± 5.1 — a less than 0.6% difference. 
Using the same data splits as [10], COMPLEMENT and GIN perform within a margin of 1% in accuracy (SMNIST) and 0.01 in absolute error (ZINC). Case 2: Differential pooling DIFFPOOL [47] uses a GNN to learn cluster assignments for graph pooling. At each layer l, the soft cluster assignment matrix S(l) ∈ Rnl−1×nl is S(l) = softmax ( GNN (l) 1 (A (l−1),X(l−1)) ) with (A(0),X(0)) = (A,X). (4) The next step applies S(l) and a second GNN to compute the graph representation at layer l: X(l) = S(l) ᵀ GNN (l) 2 (A (l−1),X(l−1)) and A(l) = S(l) ᵀ A(l−1)S(l). (5) During training, DIFFPOOL employs a sum of three loss functions: i) a supervised loss; ii) the Frobenius norm between A(l) and the Gramian of the cluster assignments, at each layer, i.e.,∑ l ‖A(l) − S(l)S(l) ᵀ‖; iii) the entropy of the cluster assignments at each layer. The second loss is referred to as the link prediction loss and enforces nearby nodes to be pooled together. The third loss penalizes the entropy, encouraging sharp cluster assignments. Random assignments. To confront the influence of the learned cluster assignments, we replace S(l) in Equation 4 with a normalized random matrix softmax(S̃(l)). We consider three distributions: (Uniform) S̃(l)ij ∼ U(a, b) (Normal) S̃ (l) ij ∼ N (µ, σ2) (Bernoulli) S̃ (l) ij ∼ B(α) (6) We sample the assignment matrix before training starts and do not propagate gradients through it. Results. Figure 2 compares DIFFPOOL against the randomized variants. In all tasks, the highest average accuracy is due to a randomized approach. Nonetheless, there is no clear winner among all methods. Notably, the variances obtained with the random pooling schemes are not significantly higher than those from DIFFPOOL. Remark 1. Permutation invariance is an important property of most GNNs that assures consistency across different graph representations. However, the randomized variants break the invariance of DIFF- POOL. A simple fix consists of taking S̃(l) = X(l−1)S̃′, where S̃′ ∈ Rdl−1×nl is a random matrix. Figure 3 compares the randomized variants with and without this fix w.r.t. the validation error on artificially permuted graphs during training on the ZINC dataset. Results suggest that the variants are approximately invariant. Case 3: Graph memory networks Graph memory networks (GMNs) [21] consist of a sequence of memory layers stacked on top of a GNN, also known as the initial query network. We denote the output of the initial query network by Q(0). The first step in a memory layer computes kernel matrices between input queries Q(l−1) = [q (l−1) 1 , . . . , q (l−1) nl−1 ] ᵀ and multi-head keys K(l)h = [k (l) 1h , . . . ,k (l) nlh ]ᵀ: S (l) h : S (l) ijh ∝ ( 1 + ‖q(l−1)i − k (l) jh‖2/τ )− τ+12 ∀h = 1 . . . H, (7) where H is the number of heads and τ is the degrees of freedom of the Student’s t-kernel. We then aggregate the multi-head assignments S(l)h into a single matrix S (l) using a 1×1 convolution followed by row-wise softmax normalization. Finally, we pool the node embeddings Q(l−1) according to their soft assignments S(l) and apply a single-layer neural net: Q(l) = ReLU ( S(l) ᵀ Q(l−1)W (l) ) . (8) In this notation, queries Q(l) correspond to node embeddings X(l) for l > 0. Also, note that the memory layer does not leverage graph structure information as it is fully condensed into Q(0). Following [21], we use a GNN as query network. In particular, we employ a two layer network with the same convolutional operator as in Equation 1. 
The loss function employed to learn GMNs consists of a convex combination of: i) a supervised loss; and ii) the Kullback-Leibler divergence between the learned assignments and their self-normalized squares. The latter aim to enforce sharp soft-assignments, similarly to the entropy loss in DIFFPOOL. Remark 2. Curiously, the intuition behind the loss that allegedly improves cluster purity might be misleading. For instance, uniform cluster assignments, the ones the loss was engineered to avoid, are a perfect minimizer for it. We provide more details in Appendix D. Simplifying GMN. The principle behind GMNs consists of grouping nodes based on their similarities to learned keys (centroids). To scrutinize this principle, we propose two variants. In the first, we replace the kernel in Equation 7 by the euclidean distance taken from fixed keys drawn from a uniform distribution. Opposite to vanilla GMNs, the resulting assignments group nodes that are farthest from a key. The second variant substitutes multi-head assignment matrices for a fixed matrix whose entries are independently sampled from a uniform distribution. We refer to these approaches, respectively, as DISTANCE and RANDOM. Results. Figure 4 compares GMN with its simplified variants. For all datasets, DISTANCE and RANDOM perform on par with GMN, with slightly better MAE for the ZINC dataset. Also, the variants present no noticeable increase in variance. It is worth mentioning that the simplified GMNs are naturally faster to train as they have significantly less learned parameters. In the case of DISTANCE, keys are taken as constants once sampled. Additionally, RANDOM bypasses the computation of the pairwise distances in Equation 7, which dominates the time of the forward pass in GMNs. On the largest dataset (SMNIST), DISTANCE takes up to half the time of GMN (per epoch), whereas RANDOM is up to ten times faster than GMN. 3 Analysis The results in the previous section are counter-intuitive. We now analyze the factors that led to these results. We show that the convolutions preceding the pooling layers learn representations which are approximately invariant to how nodes are grouped. In particular, GNNs learn smooth node representations at the early layers. To experimentally show this, we remove the initial convolutions that perform this early smoothing. As a result, all networks experience a significant drop in performance. Finally, we show that local pooling offers no benefit in terms of accuracy to the evaluated models. Pooling learns approximately homogeneous node representations. The results with the random pooling methods suggest that any convex combination of the node features enables us to extract good graph representation. Intuitively, this is possible if the nodes display similar activation patterns before pooling. If we interpret convolutions as filters defined over node features/signals, this phenomenon can be explained if the initial convolutional layers extract low-frequency information across the graph input channels/embedding components. To evaluate this intuition, we compute the activations before the first pooling layer and after each of the subsequent pooling layers. Figure 5 shows activations for the random pooling variants of DIFFPOOL and GMN, and the COMPLEMENT approach on ZINC. The plots in Figure 5 validate that the first convolutional layers produce node features which are relatively homogeneous within the same graph, specially for the randomized variants. The networks learn features that resemble vertical bars. 
As expected, the pooling layers accentuate this phenomenon, extracting even more similar representations. We report embeddings for other datasets in Appendix E. Even methods based on local pooling tend to learn homogeneous representations. As one can notice from Figure 6, DIFFPOOL and GMN show smoothed patterns in the outputs of their initial pooling layer. This phenomenon explains why the performance of the randomized approaches matches that of their original counterparts. The results suggest that the loss terms designed to enforce local clustering are either not beneficial to the learning task or are obfuscated by the supervised loss. This observation does not apply to GRACLUS, as it employs a deterministic clustering algorithm, separating the possibly competing goals of learning hierarchical structures and minimizing a supervised loss. To further gauge the impact of the unsupervised loss on the performance of these GNNs, we compare two DIFFPOOL models trained with the link prediction loss multiplied by the weighting factors λ = 10⁰ and λ = 10³. Figure 7 shows the validation curves of the supervised loss for ZINC and SMNIST. We observe that the supervised losses for both λ values converge to a similar point, at a similar rate. This validates that the unsupervised loss (link prediction) has little to no effect on the predictive performance of DIFFPOOL. We have observed a similar behavior for GMNs, which we report in the supplementary material. Discouraging smoothness. In Figure 6, we observe homogeneous node representations even before the first pooling layer. This naturally poses a challenge for the upcoming pooling layers to learn meaningful local structures. These homogeneous embeddings correspond to low-frequency signals defined on the graph nodes. In the general case, achieving such patterns is only possible with more than a single convolution. This can be explained from a spectral perspective. Since each convolutional layer corresponds to a filter that acts linearly in the spectral domain, a single convolution cannot filter out specific frequency bands. These ideas have already been exploited to develop simplified GNNs [31, 43] that compute fixed polynomial filters in a normalized spectrum. Remarkably, using multiple convolutions to obtain initial embeddings is common practice [18, 26]. To evaluate its impact, we apply a single convolution before the first pooling layer in all networks. We then compare these networks against the original implementations. Table 2 displays the results. All models report a significant drop in performance with a single convolutional layer. On NCI1, the methods obtain accuracies about 4% lower, on average. Likewise, GMN and GRACLUS report a 4% performance drop on SMNIST. With a single initial GraphSAGE convolution, the performance of DIFFPOOL and its uniform variant drops to as low as 66.4% on SMNIST. IMDB-B is the only dataset on which this dramatic drop is not observed; it has constant node features and therefore may not benefit from the additional convolutions. Note that, under reduced expressiveness, pooling far-away nodes seems to impose a negative inductive bias, as COMPLEMENT consistently fails to rival the performance of GRACLUS. Intuitively, the number of convolutions needed to achieve smooth representations depends on the richness of the initial node features. Many popular datasets rely on one-hot features and might require only a small number of initial convolutions. Disentangling these effects is outside the scope of this work.
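The spectral argument can be checked numerically. The toy snippet below, a sketch on a 4-node path graph of our choosing, verifies that K propagation steps with the symmetrically normalized adjacency act on the spectrum exactly as the polynomial λᴷ, so a single step (K = 1) can only rescale each frequency linearly.

```python
import torch

# Toy check of the spectral claim on a 4-node path graph: K propagation
# steps with the symmetrically normalized adjacency act on the spectrum
# exactly as the polynomial lambda**K, so one step is only a linear filter.
A = torch.tensor([[0., 1., 0., 0.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.],
                  [0., 0., 1., 0.]])
deg = A.sum(dim=1)
A_norm = A / torch.sqrt(deg.outer(deg))   # D^(-1/2) A D^(-1/2)
evals, U = torch.linalg.eigh(A_norm)

x = torch.randn(4)                        # a random node signal
for K in (1, 3):
    y = x.clone()
    for _ in range(K):
        y = A_norm @ y                    # K propagation steps
    y_spectral = U @ (evals**K * (U.T @ x))
    print(K, torch.allclose(y, y_spectral, atol=1e-5))
```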
Is pooling overrated? Our experiments suggest that local pooling does not play an important role in the performance of GNNs. A natural next question is whether local pooling presents any advantage over global pooling strategies. We run a GNN with global mean pooling on top of three convolutional layers; the results are: 79.65 ± 2.07 (NCI1), 71.05 ± 4.58 (IMDB-B), 95.20 ± 0.18 (SMNIST), and 0.443 ± 0.03 (ZINC). These performances are on par with our previous results obtained using GRACLUS, DIFFPOOL and GMN. However surprising this may sound, our findings are consistent with results reported in the literature. For instance, GraphSAGE networks outperform DIFFPOOL in a rigorous evaluation protocol [10, 11], although GraphSAGE is the convolutional component of DIFFPOOL. Likewise, but in the context of attention, Knyazev et al. [24] argue that, except under certain circumstances, general attention mechanisms are negligible or even harmful. Similar findings also hold for CNNs [35, 39]. Permutation invariance is a relevant property for GNNs, as it guarantees that the network's predictions do not vary with the graph representation. For completeness, we state in Appendix C the permutation invariance of the simplified GNNs discussed in Section 2. 3.1 Additional results More datasets. We consider four additional datasets commonly used to assess GNNs, three of which are part of the TU datasets [29]: PROTEINS, NCI109, and DD; and one from the recently proposed Open Graph Benchmark (OGB) framework [17]: MOLHIV. Since Ivanov et al. [19] showed that the IMDB-B dataset is affected by a serious isomorphism bias, we also report results on the "cleaned" version of the IMDB-B dataset, hereafter referred to as IMDB-C. Table 3 shows the results for the additional datasets. Similarly to what we have observed in the previous experiments, non-local pooling performs on par with local pooling. The largest performance gap occurs for the PROTEINS and MOLHIV datasets, on which COMPLEMENT achieves accuracy around 3% higher than GRACLUS, on average. The performance of all methods on IMDB-C does not differ significantly from their performance on IMDB-B. However, removing the isomorphisms in IMDB-B reduces the dataset size, which clearly increases variance. Another pooling method. We also provide results for MINCUTPOOL [2], a recently proposed pooling scheme based on spectral clustering. This scheme integrates an unsupervised loss which stems from a relaxation of a MINCUT objective and learns to assign clusters in a spectrum-free way. We compare MINCUTPOOL with a random variant on all datasets employed in this work. Again, we find that a random pooling mechanism achieves comparable results to its local counterpart. Details are given in Appendix B. Sensitivity to hyperparameters. As mentioned in Section 2.1, this paper does not intend to benchmark the predictive performance of models equipped with different pooling strategies. Consequently, we did not exhaustively optimize model hyperparameters. One may wonder whether our results hold for a broader set of hyperparameter choices. Figure 8 depicts the performance gap between GRACLUS and COMPLEMENT for a varying number of pooling layers and embedding dimensionalities over a single run. Even the greatest performance gap in favor of COMPLEMENT (e.g., 5 layers and 32 dimensions on ZINC) does not amount to an improvement greater than 0.05 in MAE.
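For reference, the snippet below sketches this global-pooling baseline: three convolutions of the form of Equation 1 followed by a mean over all nodes and a linear readout. The class name, widths, and layer count are illustrative assumptions, not the exact configuration behind the numbers above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalMeanGNN(nn.Module):
    """Sketch of the global-pooling baseline: stacked convolutions of the
    form of Equation 1, then a mean over all nodes and a linear readout."""
    def __init__(self, in_dim, hidden, out_dim, n_layers=3):
        super().__init__()
        dims = [in_dim] + [hidden] * n_layers
        self.w1 = nn.ModuleList(nn.Linear(i, o) for i, o in zip(dims, dims[1:]))
        self.w2 = nn.ModuleList(nn.Linear(i, o, bias=False) for i, o in zip(dims, dims[1:]))
        self.readout = nn.Linear(hidden, out_dim)

    def forward(self, A, X):
        for w1, w2 in zip(self.w1, self.w2):
            X = F.relu(w1(X) + w2(A @ X))    # Equation 1
        return self.readout(X.mean(dim=0))   # global mean pooling
```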
Overall, we find that the choice of hyperparameters does not significantly increase the performance gap between GRACLUS and COMPLEMENT. 4 Related works Graph pooling approaches usually fall into global and hierarchical ones. Global methods aggregate the node representations either via simple flattening procedures, such as summing (or averaging) the node embeddings [44], or via more sophisticated set aggregation schemes [32, 49]. On the other hand, hierarchical methods [7, 14, 18, 21, 26, 27, 47] sequentially coarsen graph representations over the network's layers. Notably, Knyazev et al. [24] provide a unified view of local pooling and node attention mechanisms, and study the ability of pooling methods to generalize to larger and noisy graphs. They also show that the effect of attention is not crucial and can sometimes be harmful, and they propose a weakly-supervised approach. Simple GNNs. Over the last few years, we have seen a surge of simplified GNNs. Wu et al. [43] show that removing the nonlinearity in GCNs [23] does not negatively impact their performance on node classification tasks. The resulting feature extractor consists of a low-pass-type filter. In [31], the authors take this idea further, evaluating the resilience to feature noise and providing insights on GCN-based designs. For graph classification, Chen et al. [6] report that linear convolutional filters followed by nonlinear set functions achieve performance competitive with modern GNNs. Cai and Wang [5] propose a strong baseline based on local node statistics for non-attributed graph classification tasks. Benchmarking GNNs. Errica et al. [11] demonstrate how the lack of rigorous evaluation protocols affects reproducibility and hinders new advances in the field. They found that structure-agnostic baselines outperform popular GNNs on at least three commonly used chemical datasets. In defense of GNNs, Dwivedi et al. [10] argue that this lack of proper evaluation comes mainly from using small datasets. To tackle this, they introduce a new benchmarking framework with datasets that are large enough for researchers to identify key design principles and assess statistically significant differences between GNN architectures. Similar issues related to the use of small datasets are reported in [37]. Understanding pooling and attention in regular domains. In the context of CNNs for object recognition tasks, Ruderman et al. [35] evaluate the role of pooling in CNNs w.r.t. deformation stability. The results show that pooling is not necessary for appropriate deformation stability. The explanation lies in the network's ability to learn smooth filters across the layers. Sheng et al. [38] and Zhao et al. [51] propose random pooling as a fast alternative to conventional pooling in CNNs. Regarding attention-based models, Wiegreffe and Pinter [42] show that attention weights usually do not provide meaningful explanations for predictions. These works demonstrate the importance of properly assessing core assumptions in deep learning. 5 Conclusion In contrast to the ever-increasing influx of GNN architectures, very few works rigorously assess which design choices are crucial for efficient learning. Consequently, misconceptions can spread widely, influencing the development of models built on flawed intuitions. In this paper, we study the role of local pooling and its impact on the performance of GNNs. We show that most GNN architectures employ convolutions that can quickly lead to smooth node representations.
As a result, the pooling layers become approximately invariant to specific cluster assignments. We also find that clustering-enforcing regularization is usually innocuous. In a series of experiments on established benchmarks, we show that extracting local information is not a necessary principle for efficient pooling. By shedding new light on the role of pooling, we hope to contribute to the community in at least two ways: i) providing a simple sanity check for novel pooling strategies; and ii) deconstructing misconceptions and wrong intuitions related to the benefits of graph pooling. Acknowledgments and Disclosure of Funding This work was funded by the Academy of Finland (Flagship programme: Finnish Center for Artificial Intelligence, FCAI, and grants 294238, 292334 and 319264). We acknowledge the computational resources provided by the Aalto Science-IT Project. Broader impact Graph neural networks (GNNs) have become the de facto learning tools in many valuable domains such as social network analysis, drug discovery, recommender systems, and natural language processing. Nonetheless, the fundamental design principles behind the success of GNNs are only partially understood. This work takes a step further in understanding local pooling, one of the core design choices in many GNN architectures. We believe this work will help researchers and practitioners better decide where to invest their time and resources to build more accurate GNNs.
1. What is the focus of the paper regarding graph neural networks? 2. What are the strengths of the proposed approach, particularly in terms of its contribution to the field? 3. What are the weaknesses of the paper, especially regarding its experimental design and scope? 4. Do you have any concerns about the generalizability of the conclusions drawn from the study? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This work studies pooling operations in graph neural networks and concludes that existing pooling techniques do not improve performance. Strengths It is an interesting, rigorous study of graph pooling operations. Weaknesses One concern is that the evaluation of local pooling techniques is done only in the context of graph-level tasks, i.e., graph regression (e.g., ZINC) and graph classification (e.g., MNIST). Local pooling techniques should also have been evaluated in the context of node/edge-level tasks such as node classification or link prediction. Graph pooling techniques may provide no improvement, or even decrease performance, on small graphs, such as the graphs with 20–60 nodes used in this work. However, the question remains open for large graphs with 100K–1M nodes. Compare this with ImageNet images of size 256×256 pixels (a grid graph with roughly 65k nodes), where pooling or strided convolutions do improve performance. It would be interesting to run some experiments on the recent large OGB datasets to confirm that these results are not caused by the small graph sizes.
NIPS
Title Rethinking pooling in graph neural networks Abstract Graph pooling is a central component of a myriad of graph neural network (GNN) architectures. As an inheritance from traditional CNNs, most approaches formulate graph pooling as a cluster assignment problem, extending the idea of local patches in regular grids to graphs. Despite the wide adherence to this design choice, no work has rigorously evaluated its influence on the success of GNNs. In this paper, we build upon representative GNNs and introduce variants that challenge the need for locality-preserving representations, either using randomization or clustering on the complement graph. Strikingly, our experiments demonstrate that using these variants does not result in any decrease in performance. To understand this phenomenon, we study the interplay between convolutional layers and the subsequent pooling ones. We show that the convolutions play a leading role in the learned representations. In contrast to the common belief, local pooling is not responsible for the success of GNNs on relevant and widely-used benchmarks. 1 Introduction The success of graph neural networks (GNNs) [3, 36] in many domains [9, 15, 25, 41, 46, 50] is due to their ability to extract meaningful representations from graph-structured data. Similarly to convolutional neural networks (CNNs), a typical GNN sequentially combines local filtering, nonlinearity, and (possibly) pooling operations to obtain refined graph representations at each layer. Whereas the convolutional filters capture local regularities in the input graph, the interleaved pooling operations reduce the graph representation while ideally preserving important structural information. Although strategies for graph pooling come in many flavors [26, 30, 47, 49], most GNNs follow a hierarchical scheme in which the pooling regions correspond to graph clusters that, in turn, are combined to produce a coarser graph [4, 7, 13, 21, 47, 48]. Intuitively, these clusters generalize the notion of local neighborhood exploited in traditional CNNs and allow for pooling graphs of varying sizes. The cluster assignments can be obtained via deterministic clustering algorithms [4, 7] or be learned in an end-to-end fashion [21, 47]. Also, one can leverage node embeddings [21], graph topology [8], or both [47, 48], to pool graphs. We refer to these approaches as local pooling. Together with attention-based mechanisms [24, 26], the notion that clustering is a must-have property of graph pooling has been tremendously influential, resulting in an ever-increasing number of pooling schemes [14, 18, 21, 27, 48]. Implicit in any pooling approach is the belief that the quality of the cluster assignments is crucial for GNN performance. Nonetheless, to the best of our knowledge, this belief has not been rigorously evaluated. Misconceptions not only hinder new advances but also may lead to unnecessary complexity and obfuscate interpretability. This is particularly critical in graph representation learning, as we have seen a clear trend towards simplified GNNs [5, 6, 11, 31, 43]. ∗Equal contribution. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. In this paper, we study the extent to which local pooling plays a role in GNNs. In particular, we choose representative models that are popular or claim to achieve state-of-the-art performance, and simplify their pooling operators by eliminating any clustering-enforcing component.
We either apply randomized cluster assignments or operate on complement graphs. Surprisingly, the empirical results show that the non-local GNN variants exhibit comparable, if not superior, performance to the original methods in all experiments. To understand our findings, we design new experiments to evaluate the interplay between convolutional layers and pooling, and analyze the learned embeddings. We show that graph coarsening in both the original methods and our simplifications leads to homogeneous embeddings. This is because successful GNNs usually learn low-pass filters at early convolutional stages. Consequently, the specific way in which we combine nodes for pooling becomes less relevant. In a nutshell, the contributions of this paper are: i) we show that popular and modern representative GNNs do not perform better than simple baselines built upon randomization and non-local pooling; ii) we explain why the simplified GNNs work and analyze the conditions for this to happen; and iii) we discuss the overall impact of pooling in the design of efficient GNNs. Aware of common misleading evaluation protocols [10, 11], we use benchmarks on which GNNs have proven to beat structure-agnostic baselines. We believe this work presents a sanity check for local pooling, suggesting that novel pooling schemes should include more ablation studies to validate their effectiveness. Notation. We represent a graph G, with n > 0 nodes, as an ordered pair (A, X) comprising a symmetric adjacency matrix A ∈ {0, 1}^{n×n} and a matrix of node features X ∈ R^{n×d}. The matrix A defines the graph structure: two nodes i, j are connected if and only if A_{ij} = 1. We denote by D the diagonal degree matrix of G, i.e., D_{ii} = Σ_j A_{ij}. We denote the complement of G by Ḡ = (Ā, X), where Ā has zeros on its diagonal and Ā_{ij} = 1 − A_{ij} for i ≠ j. 2 Exposing local pooling 2.1 Experimental setup Models. To investigate the relevance of local pooling, we study three representative models. We first consider GRACLUS [8], an efficient graph clustering algorithm that has been adopted as a pooling layer in modern GNNs [7, 34]. We combine GRACLUS with a sum-based convolutional operator [28]. Our second choice is the popular differential pooling model (DIFFPOOL) [47]. DIFFPOOL is the pioneering approach to learned pooling schemes and has served as inspiration to many methods [14]. Last, we look into the graph memory network (GMN) [21], a recently proposed model that reports state-of-the-art results on graph-level prediction tasks. Here, we focus on local pooling mechanisms and expect the results to be relevant for a large class of models whose principle is rooted in CNNs. Tasks and datasets. We use four graph-level prediction tasks as running examples: predicting the constrained solubility of molecules (ZINC [20]); classifying chemical compounds regarding their activity against lung cancer (NCI1 [40]); categorizing ego-networks of actors w.r.t. the genre of the movies in which they collaborated (IMDB-B [45]); and classifying handwritten digits (Superpixels MNIST [1, 10]). The datasets cover graphs without node features as well as graphs with discrete and continuous node features. For completeness, we also report results on five other broadly used datasets in Section 3.1. Statistics of the datasets are available in the supplementary material (Section A.1). Evaluation. We split each dataset into train (80%), validation (10%), and test (10%) data. For the regression task, we use the mean absolute error (MAE) as the performance metric.
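The complement graph used throughout the paper is easy to construct explicitly. Below is a minimal sketch for a dense adjacency matrix; the helper name is ours.

```python
import torch

def complement_adjacency(A: torch.Tensor) -> torch.Tensor:
    """Dense adjacency of the complement graph: edges flipped off the
    diagonal, zeros on the diagonal (the definition in the Notation)."""
    A_bar = 1.0 - A
    A_bar.fill_diagonal_(0.0)
    return A_bar
```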
We report statistics of the performance metrics over 20 runs with different seeds. Similarly to the evaluation protocol in [10], we train all models with Adam [22] and apply learning rate decay, ranging from an initial 10⁻³ down to 10⁻⁵, with a decay ratio of 0.5 and a patience of 10 epochs. Also, we use early stopping based on the validation accuracy. For further details, we refer to Appendix A in the supplementary material. Notably, we do not aim to benchmark the performance of the GNN models. Rather, we want to isolate local pooling effects. Therefore, for model selection, we follow guidelines provided by the original authors or in benchmarking papers and simply modify the pooling mechanism, keeping the remaining model structure untouched. All methods were implemented in PyTorch [12, 33] and our code is available at https://github.com/AaltoPML/Rethinking-pooling-in-GNNs. Case 1: Pooling with off-the-shelf graph clustering We first consider a network design that resembles standard CNNs. Following architectures used in [7, 12, 13], we alternate graph convolutions [28] and pooling layers based on graph clustering [8]. At each layer, a neighborhood aggregation step combines each node feature vector with the features of its neighbors in the graph. The features are linearly transformed before running through a component-wise nonlinear function (e.g., ReLU). In matrix form, the convolution is
Z^{(l)} = ReLU( X^{(l−1)} W_1^{(l)} + A^{(l−1)} X^{(l−1)} W_2^{(l)} ) with (A^{(0)}, X^{(0)}) = (A, X), (1)
where W_1^{(l)}, W_2^{(l)} ∈ R^{d_{l−1}×d_l} are model parameters, and d_l is the embedding dimension at layer l. The next step consists of applying the GRACLUS algorithm [8] to obtain a cluster assignment matrix S^{(l)} ∈ {0, 1}^{n_{l−1}×n_l} mapping each node to its cluster index in {1, …, n_l}, with n_l < n_{l−1} clusters. We then coarsen the features by max-pooling the nodes in the same cluster:
X^{(l)}_{kj} = max_{i : S^{(l)}_{ik} = 1} Z^{(l)}_{ij}, k = 1, …, n_l, (2)
and coarsen the adjacency matrix such that A^{(l)}_{ij} = 1 iff clusters i and j contain neighboring nodes in G^{(l−1)}:
A^{(l)} = S^{(l)ᵀ} A^{(l−1)} S^{(l)} with A^{(l)}_{kk} = 0, k = 1, …, n_l. (3)
Clustering the complement graph. The clustering step embodies the idea that good pooling regions, like their CNN counterparts, should group nearby nodes. To challenge this intuition, we follow the opposite idea and set pooling regions by grouping nodes that are not connected in the graph. In particular, we compute the assignments S^{(l)} by applying GRACLUS to the complement graph Ḡ^{(l−1)} of G^{(l−1)}. Note that we only employ the complement graph to compute the cluster assignments S^{(l)}. With the assignments in hand, we apply the pooling operation (Equations 2 and 3) using the original graph structure. Henceforth, we refer to this approach as COMPLEMENT. Results. Figure 1 shows the distribution of the performance of the standard approach (GRACLUS) and its variant that operates on the complement graph (COMPLEMENT). In all tasks, both models perform almost on par, and the distributions have a similar shape. Despite their simplicity, GRACLUS and COMPLEMENT are strong baselines (see Table 1). For instance, Errica et al. [11] report GIN [44] as the best-performing model for the NCI1 dataset, achieving 80.0 ± 1.4 accuracy. This is indistinguishable from COMPLEMENT's performance (80.1 ± 1.6). We observe the same trend on IMDB-B: GraphSAGE [16] obtains 69.9 ± 4.6 and COMPLEMENT scores 70.6 ± 5.1, a difference of less than one accuracy point.
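A compact sketch of the coarsening step (Equations 2 and 3) for dense inputs is given below. The helper name and the dense representation are our choices, and cluster indices are assumed to be consecutive integers.

```python
import torch

def coarsen(Z, A, cluster):
    """Coarsening step of Equations 2-3 for dense inputs.

    Z: (n, d) convolved features; A: (n, n) adjacency; cluster: (n,) integer
    cluster index per node, assumed consecutive (0, ..., n_clusters - 1).
    """
    n_clusters = int(cluster.max()) + 1
    # One-hot assignment matrix S with S[i, k] = 1 iff node i is in cluster k.
    S = torch.zeros(Z.shape[0], n_clusters)
    S[torch.arange(Z.shape[0]), cluster] = 1.0
    # Equation 2: feature-wise max over the nodes of each cluster.
    X_new = torch.stack([Z[cluster == k].max(dim=0).values
                         for k in range(n_clusters)])
    # Equation 3: clusters become adjacent iff they contain neighboring nodes.
    A_new = (S.T @ A @ S > 0).float()
    A_new.fill_diagonal_(0.0)
    return X_new, A_new
```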
Using the same data splits as [10], COMPLEMENT and GIN perform within a margin of 1% in accuracy (SMNIST) and 0.01 in absolute error (ZINC). Case 2: Differential pooling DIFFPOOL [47] uses a GNN to learn cluster assignments for graph pooling. At each layer l, the soft cluster assignment matrix S^{(l)} ∈ R^{n_{l−1}×n_l} is
S^{(l)} = softmax( GNN_1^{(l)}(A^{(l−1)}, X^{(l−1)}) ) with (A^{(0)}, X^{(0)}) = (A, X). (4)
The next step applies S^{(l)} and a second GNN to compute the graph representation at layer l:
X^{(l)} = S^{(l)ᵀ} GNN_2^{(l)}(A^{(l−1)}, X^{(l−1)}) and A^{(l)} = S^{(l)ᵀ} A^{(l−1)} S^{(l)}. (5)
During training, DIFFPOOL minimizes the sum of three loss functions: i) a supervised loss; ii) the Frobenius norm between A^{(l)} and the Gramian of the cluster assignments at each layer, i.e., Σ_l ‖A^{(l)} − S^{(l)} S^{(l)ᵀ}‖_F; iii) the entropy of the cluster assignments at each layer. The second loss is referred to as the link prediction loss and enforces nearby nodes to be pooled together. The third loss penalizes the entropy, encouraging sharp cluster assignments. Random assignments. To probe the influence of the learned cluster assignments, we replace S^{(l)} in Equation 4 with a normalized random matrix softmax(S̃^{(l)}). We consider three distributions:
(Uniform) S̃^{(l)}_{ij} ∼ U(a, b); (Normal) S̃^{(l)}_{ij} ∼ N(µ, σ²); (Bernoulli) S̃^{(l)}_{ij} ∼ B(α). (6)
We sample the assignment matrix before training starts and do not propagate gradients through it. Results. Figure 2 compares DIFFPOOL against the randomized variants. In all tasks, the highest average accuracy is achieved by a randomized approach. Nonetheless, there is no clear winner among all methods. Notably, the variances obtained with the random pooling schemes are not significantly higher than those of DIFFPOOL. Remark 1. Permutation invariance is an important property of most GNNs, as it assures consistency across different graph representations. However, the randomized variants break the invariance of DIFFPOOL. A simple fix consists of taking S̃^{(l)} = X^{(l−1)} S̃′, where S̃′ ∈ R^{d_{l−1}×n_l} is a random matrix. Figure 3 compares the randomized variants with and without this fix w.r.t. the validation error on artificially permuted graphs during training on the ZINC dataset. The results suggest that the variants are approximately invariant. Case 3: Graph memory networks Graph memory networks (GMNs) [21] consist of a sequence of memory layers stacked on top of a GNN, also known as the initial query network. We denote the output of the initial query network by Q^{(0)}. The first step in a memory layer computes kernel matrices between input queries Q^{(l−1)} = [q_1^{(l−1)}, …, q_{n_{l−1}}^{(l−1)}]ᵀ and multi-head keys K_h^{(l)} = [k_{1h}^{(l)}, …, k_{n_l h}^{(l)}]ᵀ:
S^{(l)}_{ijh} ∝ ( 1 + ‖q_i^{(l−1)} − k_{jh}^{(l)}‖² / τ )^{−(τ+1)/2} ∀h = 1, …, H, (7)
where H is the number of heads and τ is the degrees of freedom of the Student's t-kernel. We then aggregate the multi-head assignments S_h^{(l)} into a single matrix S^{(l)} using a 1×1 convolution followed by row-wise softmax normalization. Finally, we pool the node embeddings Q^{(l−1)} according to their soft assignments S^{(l)} and apply a single-layer neural net:
Q^{(l)} = ReLU( S^{(l)ᵀ} Q^{(l−1)} W^{(l)} ). (8)
In this notation, queries Q^{(l)} correspond to node embeddings X^{(l)} for l > 0. Also, note that the memory layer does not leverage graph structure information, as it is fully condensed into Q^{(0)}. Following [21], we use a GNN as the query network. In particular, we employ a two-layer network with the same convolutional operator as in Equation 1.
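The random-assignment replacement of Equation 6 amounts to a few lines. Below is a minimal sketch; the function name and the distribution parameters are illustrative.

```python
import torch
import torch.nn.functional as F

def random_assignments(n_prev, n_next, dist="uniform"):
    """Fixed soft-assignment matrix softmax(S_tilde) as in Equation 6,
    sampled once before training; distribution parameters are illustrative."""
    if dist == "uniform":
        S_tilde = torch.empty(n_prev, n_next).uniform_(0.0, 1.0)
    elif dist == "normal":
        S_tilde = torch.randn(n_prev, n_next)                  # mu=0, sigma=1
    else:                                                      # Bernoulli
        S_tilde = torch.bernoulli(torch.full((n_prev, n_next), 0.5))
    # No gradients are propagated through the sampled assignments.
    return F.softmax(S_tilde, dim=-1).detach()
```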
The loss function employed to learn GMNs consists of a convex combination of: i) a supervised loss; and ii) the Kullback-Leibler divergence between the learned assignments and their self-normalized squares. The latter aims to enforce sharp soft-assignments, similarly to the entropy loss in DIFFPOOL. Remark 2. Curiously, the intuition behind the loss that allegedly improves cluster purity might be misleading. For instance, uniform cluster assignments, precisely the ones the loss was engineered to avoid, are a perfect minimizer of it. We provide more details in Appendix D. Simplifying GMN. The principle behind GMNs consists of grouping nodes based on their similarities to learned keys (centroids). To scrutinize this principle, we propose two variants. In the first, we replace the kernel in Equation 7 with the Euclidean distance to fixed keys drawn from a uniform distribution. In contrast to vanilla GMNs, the resulting assignments group nodes that are farthest from a key. The second variant replaces the multi-head assignment matrices with a fixed matrix whose entries are independently sampled from a uniform distribution. We refer to these approaches, respectively, as DISTANCE and RANDOM. Results. Figure 4 compares GMN with its simplified variants. For all datasets, DISTANCE and RANDOM perform on par with GMN, with slightly better MAE on the ZINC dataset. Also, the variants present no noticeable increase in variance. It is worth mentioning that the simplified GMNs are naturally faster to train, as they have significantly fewer learned parameters. In the case of DISTANCE, keys are taken as constants once sampled. Additionally, RANDOM bypasses the computation of the pairwise distances in Equation 7, which dominates the time of the forward pass in GMNs. On the largest dataset (SMNIST), DISTANCE takes up to half the time of GMN (per epoch), whereas RANDOM is up to ten times faster than GMN. 3 Analysis The results in the previous section are counter-intuitive. We now analyze the factors that led to these results. We show that the convolutions preceding the pooling layers learn representations which are approximately invariant to how nodes are grouped. In particular, GNNs learn smooth node representations at the early layers. To show this experimentally, we remove the initial convolutions that perform this early smoothing. As a result, all networks experience a significant drop in performance. Finally, we show that local pooling offers no benefit in terms of accuracy to the evaluated models. Pooling learns approximately homogeneous node representations. The results with the random pooling methods suggest that any convex combination of the node features enables us to extract good graph representations. Intuitively, this is possible if the nodes display similar activation patterns before pooling. If we interpret convolutions as filters defined over node features/signals, this phenomenon can be explained if the initial convolutional layers extract low-frequency information across the graph input channels/embedding components. To evaluate this intuition, we compute the activations before the first pooling layer and after each of the subsequent pooling layers. Figure 5 shows activations for the random pooling variants of DIFFPOOL and GMN, and for the COMPLEMENT approach on ZINC. The plots in Figure 5 validate that the first convolutional layers produce node features which are relatively homogeneous within the same graph, especially for the randomized variants. The networks learn features that resemble vertical bars.
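The RANDOM variant is similarly lightweight. The sketch below (class and attribute names ours, with a fixed input size assumed for simplicity) freezes one uniform assignment matrix as a buffer, so no pairwise distances are computed and nothing is learned except the linear map of Equation 8.

```python
import torch
import torch.nn.functional as F

class RandomMemoryLayer(torch.nn.Module):
    """Sketch of the RANDOM variant: the kernel assignments of Equation 7
    are replaced by one fixed uniform matrix."""
    def __init__(self, n_prev, n_next, d_in, d_out):
        super().__init__()
        S = F.softmax(torch.rand(n_prev, n_next), dim=-1)
        self.register_buffer("S", S)  # fixed: a buffer, not a parameter
        self.lin = torch.nn.Linear(d_in, d_out, bias=False)

    def forward(self, Q_prev):        # Q_prev: (n_prev, d_in)
        return F.relu(self.lin(self.S.t() @ Q_prev))
```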
As expected, the pooling layers accentuate this phenomenon, extracting even more similar representations. We report embeddings for other datasets in Appendix E. Even methods based on local pooling tend to learn homogeneous representations. As one can notice from Figure 6, DIFFPOOL and GMN show smoothed patterns in the outputs of their initial pooling layer. This phenomenon explains why the performance of the randomized approaches matches that of their original counterparts. The results suggest that the loss terms designed to enforce local clustering are either not beneficial to the learning task or are obfuscated by the supervised loss. This observation does not apply to GRACLUS, as it employs a deterministic clustering algorithm, separating the possibly competing goals of learning hierarchical structures and minimizing a supervised loss. To further gauge the impact of the unsupervised loss on the performance of these GNNs, we compare two DIFFPOOL models trained with the link prediction loss multiplied by the weighting factors λ = 10⁰ and λ = 10³. Figure 7 shows the validation curves of the supervised loss for ZINC and SMNIST. We observe that the supervised losses for both λ values converge to a similar point, at a similar rate. This validates that the unsupervised loss (link prediction) has little to no effect on the predictive performance of DIFFPOOL. We have observed a similar behavior for GMNs, which we report in the supplementary material. Discouraging smoothness. In Figure 6, we observe homogeneous node representations even before the first pooling layer. This naturally poses a challenge for the upcoming pooling layers to learn meaningful local structures. These homogeneous embeddings correspond to low-frequency signals defined on the graph nodes. In the general case, achieving such patterns is only possible with more than a single convolution. This can be explained from a spectral perspective. Since each convolutional layer corresponds to a filter that acts linearly in the spectral domain, a single convolution cannot filter out specific frequency bands. These ideas have already been exploited to develop simplified GNNs [31, 43] that compute fixed polynomial filters in a normalized spectrum. Remarkably, using multiple convolutions to obtain initial embeddings is common practice [18, 26]. To evaluate its impact, we apply a single convolution before the first pooling layer in all networks. We then compare these networks against the original implementations. Table 2 displays the results. All models report a significant drop in performance with a single convolutional layer. On NCI1, the methods obtain accuracies about 4% lower, on average. Likewise, GMN and GRACLUS report a 4% performance drop on SMNIST. With a single initial GraphSAGE convolution, the performance of DIFFPOOL and its uniform variant drops to as low as 66.4% on SMNIST. IMDB-B is the only dataset on which this dramatic drop is not observed; it has constant node features and therefore may not benefit from the additional convolutions. Note that, under reduced expressiveness, pooling far-away nodes seems to impose a negative inductive bias, as COMPLEMENT consistently fails to rival the performance of GRACLUS. Intuitively, the number of convolutions needed to achieve smooth representations depends on the richness of the initial node features. Many popular datasets rely on one-hot features and might require only a small number of initial convolutions. Disentangling these effects is outside the scope of this work.
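The weighting experiment boils down to scaling the link-prediction term before adding it to the supervised loss. A sketch follows, with placeholder arguments of our naming; the entropy term is omitted for brevity.

```python
import torch

def diffpool_objective(y_pred, y_true, As, Ss, supervised_fn, lam):
    """Supervised loss plus lam times the link-prediction term of Section 2.
    As and Ss hold the per-layer A^(l) and S^(l); lam was 10**0 vs. 10**3
    in the experiment above."""
    link_pred = sum(torch.norm(A_l - S_l @ S_l.t(), p="fro")
                    for A_l, S_l in zip(As, Ss))
    return supervised_fn(y_pred, y_true) + lam * link_pred
```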
Is pooling overrated? Our experiments suggest that local pooling does not play an important role in the performance of GNNs. A natural next question is whether local pooling presents any advantage over global pooling strategies. We run a GNN with global mean pooling on top of three convolutional layers; the results are: 79.65 ± 2.07 (NCI1), 71.05 ± 4.58 (IMDB-B), 95.20 ± 0.18 (SMNIST), and 0.443 ± 0.03 (ZINC). These performances are on par with our previous results obtained using GRACLUS, DIFFPOOL and GMN. However surprising this may sound, our findings are consistent with results reported in the literature. For instance, GraphSAGE networks outperform DIFFPOOL in a rigorous evaluation protocol [10, 11], although GraphSAGE is the convolutional component of DIFFPOOL. Likewise, but in the context of attention, Knyazev et al. [24] argue that, except under certain circumstances, general attention mechanisms are negligible or even harmful. Similar findings also hold for CNNs [35, 39]. Permutation invariance is a relevant property for GNNs, as it guarantees that the network's predictions do not vary with the graph representation. For completeness, we state in Appendix C the permutation invariance of the simplified GNNs discussed in Section 2. 3.1 Additional results More datasets. We consider four additional datasets commonly used to assess GNNs, three of which are part of the TU datasets [29]: PROTEINS, NCI109, and DD; and one from the recently proposed Open Graph Benchmark (OGB) framework [17]: MOLHIV. Since Ivanov et al. [19] showed that the IMDB-B dataset is affected by a serious isomorphism bias, we also report results on the "cleaned" version of the IMDB-B dataset, hereafter referred to as IMDB-C. Table 3 shows the results for the additional datasets. Similarly to what we have observed in the previous experiments, non-local pooling performs on par with local pooling. The largest performance gap occurs for the PROTEINS and MOLHIV datasets, on which COMPLEMENT achieves accuracy around 3% higher than GRACLUS, on average. The performance of all methods on IMDB-C does not differ significantly from their performance on IMDB-B. However, removing the isomorphisms in IMDB-B reduces the dataset size, which clearly increases variance. Another pooling method. We also provide results for MINCUTPOOL [2], a recently proposed pooling scheme based on spectral clustering. This scheme integrates an unsupervised loss which stems from a relaxation of a MINCUT objective and learns to assign clusters in a spectrum-free way. We compare MINCUTPOOL with a random variant on all datasets employed in this work. Again, we find that a random pooling mechanism achieves comparable results to its local counterpart. Details are given in Appendix B. Sensitivity to hyperparameters. As mentioned in Section 2.1, this paper does not intend to benchmark the predictive performance of models equipped with different pooling strategies. Consequently, we did not exhaustively optimize model hyperparameters. One may wonder whether our results hold for a broader set of hyperparameter choices. Figure 8 depicts the performance gap between GRACLUS and COMPLEMENT for a varying number of pooling layers and embedding dimensionalities over a single run. Even the greatest performance gap in favor of COMPLEMENT (e.g., 5 layers and 32 dimensions on ZINC) does not amount to an improvement greater than 0.05 in MAE.
Overall, we find that the choice of hyperparameters does not significantly increase the performance gap between GRACLUS and COMPLEMENT. 4 Related works Graph pooling approaches usually fall into global and hierarchical ones. Global methods aggregate the node representations either via simple flattening procedures, such as summing (or averaging) the node embeddings [44], or via more sophisticated set aggregation schemes [32, 49]. On the other hand, hierarchical methods [7, 14, 18, 21, 26, 27, 47] sequentially coarsen graph representations over the network's layers. Notably, Knyazev et al. [24] provide a unified view of local pooling and node attention mechanisms, and study the ability of pooling methods to generalize to larger and noisy graphs. They also show that the effect of attention is not crucial and can sometimes be harmful, and they propose a weakly-supervised approach. Simple GNNs. Over the last few years, we have seen a surge of simplified GNNs. Wu et al. [43] show that removing the nonlinearity in GCNs [23] does not negatively impact their performance on node classification tasks. The resulting feature extractor consists of a low-pass-type filter. In [31], the authors take this idea further, evaluating the resilience to feature noise and providing insights on GCN-based designs. For graph classification, Chen et al. [6] report that linear convolutional filters followed by nonlinear set functions achieve performance competitive with modern GNNs. Cai and Wang [5] propose a strong baseline based on local node statistics for non-attributed graph classification tasks. Benchmarking GNNs. Errica et al. [11] demonstrate how the lack of rigorous evaluation protocols affects reproducibility and hinders new advances in the field. They found that structure-agnostic baselines outperform popular GNNs on at least three commonly used chemical datasets. In defense of GNNs, Dwivedi et al. [10] argue that this lack of proper evaluation comes mainly from using small datasets. To tackle this, they introduce a new benchmarking framework with datasets that are large enough for researchers to identify key design principles and assess statistically significant differences between GNN architectures. Similar issues related to the use of small datasets are reported in [37]. Understanding pooling and attention in regular domains. In the context of CNNs for object recognition tasks, Ruderman et al. [35] evaluate the role of pooling in CNNs w.r.t. deformation stability. The results show that pooling is not necessary for appropriate deformation stability. The explanation lies in the network's ability to learn smooth filters across the layers. Sheng et al. [38] and Zhao et al. [51] propose random pooling as a fast alternative to conventional pooling in CNNs. Regarding attention-based models, Wiegreffe and Pinter [42] show that attention weights usually do not provide meaningful explanations for predictions. These works demonstrate the importance of properly assessing core assumptions in deep learning. 5 Conclusion In contrast to the ever-increasing influx of GNN architectures, very few works rigorously assess which design choices are crucial for efficient learning. Consequently, misconceptions can spread widely, influencing the development of models built on flawed intuitions. In this paper, we study the role of local pooling and its impact on the performance of GNNs. We show that most GNN architectures employ convolutions that can quickly lead to smooth node representations.
As a result, the pooling layers become approximately invariant to specific cluster assignments. We also find that clustering-enforcing regularization is usually innocuous. In a series of experiments on established benchmarks, we show that extracting local information is not a necessary principle for efficient pooling. By shedding new light on the role of pooling, we hope to contribute to the community in at least two ways: i) providing a simple sanity check for novel pooling strategies; and ii) deconstructing misconceptions and wrong intuitions related to the benefits of graph pooling. Acknowledgments and Disclosure of Funding This work was funded by the Academy of Finland (Flagship programme: Finnish Center for Artificial Intelligence, FCAI, and grants 294238, 292334 and 319264). We acknowledge the computational resources provided by the Aalto Science-IT Project. Broader impact Graph neural networks (GNNs) have become the de facto learning tools in many valuable domains such as social network analysis, drug discovery, recommender systems, and natural language processing. Nonetheless, the fundamental design principles behind the success of GNNs are only partially understood. This work takes a step further in understanding local pooling, one of the core design choices in many GNN architectures. We believe this work will help researchers and practitioners better decide where to invest their time and resources to build more accurate GNNs.
1. What is the focus of the paper in terms of graph neural networks? 2. What are the contributions of the paper regarding local pooling layers? 3. What are the limitations of the analysis presented in the paper? 4. How do the findings of the paper impact our understanding of graph representation learning?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper analyzes the effectiveness of local pooling layers in existing graph neural networks for the task of graph representation learning. By analyzing three existing popular local pooling methods, the authors show that the pooling layers are not responsible for the success of GNNs, whose success is mainly due to the graph convolutional layers. Strengths + This paper studies an important problem, the effectiveness of local pooling layers in existing graph neural networks + The conclusions are instructive and shed new light on the role of pooling. Weaknesses - The analysis is limited to three local pooling methods
NIPS
Title Rethinking pooling in graph neural networks Abstract Graph pooling is a central component of a myriad of graph neural network (GNN) architectures. As an inheritance from traditional CNNs, most approaches formulate graph pooling as a cluster assignment problem, extending the idea of local patches in regular grids to graphs. Despite the wide adherence to this design choice, no work has rigorously evaluated its influence on the success of GNNs. In this paper, we build upon representative GNNs and introduce variants that challenge the need for locality-preserving representations, either using randomization or clustering on the complement graph. Strikingly, our experiments demonstrate that using these variants does not result in any decrease in performance. To understand this phenomenon, we study the interplay between convolutional layers and the subsequent pooling ones. We show that the convolutions play a leading role in the learned representations. In contrast to the common belief, local pooling is not responsible for the success of GNNs on relevant and widely-used benchmarks. 1 Introduction The success of graph neural networks (GNNs) [3, 36] in many domains [9, 15, 25, 41, 46, 50] is due to their ability to extract meaningful representations from graph-structured data. Similarly to convolutional neural networks (CNNs), a typical GNN sequentially combines local filtering, nonlinearity, and (possibly) pooling operations to obtain refined graph representations at each layer. Whereas the convolutional filters capture local regularities in the input graph, the interleaved pooling operations reduce the graph representation while ideally preserving important structural information. Although strategies for graph pooling come in many flavors [26, 30, 47, 49], most GNNs follow a hierarchical scheme in which the pooling regions correspond to graph clusters that, in turn, are combined to produce a coarser graph [4, 7, 13, 21, 47, 48]. Intuitively, these clusters generalize the notion of local neighborhood exploited in traditional CNNs and allow for pooling graphs of varying sizes. The cluster assignments can be obtained via deterministic clustering algorithms [4, 7] or be learned in an end-to-end fashion [21, 47]. Also, one can leverage node embeddings [21], graph topology [8], or both [47, 48], to pool graphs. We refer to these approaches as local pooling. Together with attention-based mechanisms [24, 26], the notion that clustering is a must-have property of graph pooling has been tremendously influential, resulting in an ever-increasing number of pooling schemes [14, 18, 21, 27, 48]. Implicit in any pooling approach is the belief that the quality of the cluster assignments is crucial for GNN performance. Nonetheless, to the best of our knowledge, this belief has not been rigorously evaluated. Misconceptions not only hinder new advances but also may lead to unnecessary complexity and obfuscate interpretability. This is particularly critical in graph representation learning, as we have seen a clear trend towards simplified GNNs [5, 6, 11, 31, 43]. ∗Equal contribution. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. In this paper, we study the extent to which local pooling plays a role in GNNs. In particular, we choose representative models that are popular or claim to achieve state-of-the-art performance, and simplify their pooling operators by eliminating any clustering-enforcing component.
We either apply randomized cluster assignments or operate on complement graphs. Surprisingly, the empirical results show that the non-local GNN variants exhibit comparable, if not superior, performance to the original methods in all experiments. To understand our findings, we design new experiments to evaluate the interplay between convolutional layers and pooling, and analyze the learned embeddings. We show that graph coarsening in both the original methods and our simplifications leads to homogeneous embeddings. This is because successful GNNs usually learn low-pass filters at early convolutional stages. Consequently, the specific way in which we combine nodes for pooling becomes less relevant. In a nutshell, the contributions of this paper are: i) we show that popular and modern representative GNNs do not perform better than simple baselines built upon randomization and non-local pooling; ii) we explain why the simplified GNNs work and analyze the conditions for this to happen; and iii) we discuss the overall impact of pooling in the design of efficient GNNs. Aware of common misleading evaluation protocols [10, 11], we use benchmarks on which GNNs have proven to beat structure-agnostic baselines. We believe this work presents a sanity check for local pooling, suggesting that novel pooling schemes should include more ablation studies to validate their effectiveness. Notation. We represent a graph G, with n > 0 nodes, as an ordered pair (A, X) comprising a symmetric adjacency matrix A ∈ {0, 1}^{n×n} and a matrix of node features X ∈ R^{n×d}. The matrix A defines the graph structure: two nodes i, j are connected if and only if A_{ij} = 1. We denote by D the diagonal degree matrix of G, i.e., D_{ii} = Σ_j A_{ij}. We denote the complement of G by Ḡ = (Ā, X), where Ā has zeros on its diagonal and Ā_{ij} = 1 − A_{ij} for i ≠ j. 2 Exposing local pooling 2.1 Experimental setup Models. To investigate the relevance of local pooling, we study three representative models. We first consider GRACLUS [8], an efficient graph clustering algorithm that has been adopted as a pooling layer in modern GNNs [7, 34]. We combine GRACLUS with a sum-based convolutional operator [28]. Our second choice is the popular differential pooling model (DIFFPOOL) [47]. DIFFPOOL is the pioneering approach to learned pooling schemes and has served as inspiration to many methods [14]. Last, we look into the graph memory network (GMN) [21], a recently proposed model that reports state-of-the-art results on graph-level prediction tasks. Here, we focus on local pooling mechanisms and expect the results to be relevant for a large class of models whose principle is rooted in CNNs. Tasks and datasets. We use four graph-level prediction tasks as running examples: predicting the constrained solubility of molecules (ZINC [20]); classifying chemical compounds regarding their activity against lung cancer (NCI1 [40]); categorizing ego-networks of actors w.r.t. the genre of the movies in which they collaborated (IMDB-B [45]); and classifying handwritten digits (Superpixels MNIST [1, 10]). The datasets cover graphs without node features as well as graphs with discrete and continuous node features. For completeness, we also report results on five other broadly used datasets in Section 3.1. Statistics of the datasets are available in the supplementary material (Section A.1). Evaluation. We split each dataset into train (80%), validation (10%), and test (10%) data. For the regression task, we use the mean absolute error (MAE) as the performance metric.
We report statistics of the performance metrics over 20 runs with different seeds. Similarly to the evaluation protocol in [10], we train all models with Adam [22] and apply learning rate decay, ranging from an initial 10⁻³ down to 10⁻⁵, with a decay ratio of 0.5 and a patience of 10 epochs. Also, we use early stopping based on the validation accuracy. For further details, we refer to Appendix A in the supplementary material. Notably, we do not aim to benchmark the performance of the GNN models. Rather, we want to isolate local pooling effects. Therefore, for model selection, we follow guidelines provided by the original authors or in benchmarking papers and simply modify the pooling mechanism, keeping the remaining model structure untouched. All methods were implemented in PyTorch [12, 33] and our code is available at https://github.com/AaltoPML/Rethinking-pooling-in-GNNs. Case 1: Pooling with off-the-shelf graph clustering We first consider a network design that resembles standard CNNs. Following architectures used in [7, 12, 13], we alternate graph convolutions [28] and pooling layers based on graph clustering [8]. At each layer, a neighborhood aggregation step combines each node feature vector with the features of its neighbors in the graph. The features are linearly transformed before running through a component-wise nonlinear function (e.g., ReLU). In matrix form, the convolution is
Z^{(l)} = ReLU( X^{(l−1)} W_1^{(l)} + A^{(l−1)} X^{(l−1)} W_2^{(l)} ) with (A^{(0)}, X^{(0)}) = (A, X), (1)
where W_1^{(l)}, W_2^{(l)} ∈ R^{d_{l−1}×d_l} are model parameters, and d_l is the embedding dimension at layer l. The next step consists of applying the GRACLUS algorithm [8] to obtain a cluster assignment matrix S^{(l)} ∈ {0, 1}^{n_{l−1}×n_l} mapping each node to its cluster index in {1, …, n_l}, with n_l < n_{l−1} clusters. We then coarsen the features by max-pooling the nodes in the same cluster:
X^{(l)}_{kj} = max_{i : S^{(l)}_{ik} = 1} Z^{(l)}_{ij}, k = 1, …, n_l, (2)
and coarsen the adjacency matrix such that A^{(l)}_{ij} = 1 iff clusters i and j contain neighboring nodes in G^{(l−1)}:
A^{(l)} = S^{(l)ᵀ} A^{(l−1)} S^{(l)} with A^{(l)}_{kk} = 0, k = 1, …, n_l. (3)
Clustering the complement graph. The clustering step embodies the idea that good pooling regions, like their CNN counterparts, should group nearby nodes. To challenge this intuition, we follow the opposite idea and set pooling regions by grouping nodes that are not connected in the graph. In particular, we compute the assignments S^{(l)} by applying GRACLUS to the complement graph Ḡ^{(l−1)} of G^{(l−1)}. Note that we only employ the complement graph to compute the cluster assignments S^{(l)}. With the assignments in hand, we apply the pooling operation (Equations 2 and 3) using the original graph structure. Henceforth, we refer to this approach as COMPLEMENT. Results. Figure 1 shows the distribution of the performance of the standard approach (GRACLUS) and its variant that operates on the complement graph (COMPLEMENT). In all tasks, both models perform almost on par, and the distributions have a similar shape. Despite their simplicity, GRACLUS and COMPLEMENT are strong baselines (see Table 1). For instance, Errica et al. [11] report GIN [44] as the best-performing model for the NCI1 dataset, achieving 80.0 ± 1.4 accuracy. This is indistinguishable from COMPLEMENT's performance (80.1 ± 1.6). We observe the same trend on IMDB-B: GraphSAGE [16] obtains 69.9 ± 4.6 and COMPLEMENT scores 70.6 ± 5.1, a difference of less than one accuracy point.
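In practice, this pooling step can be assembled from off-the-shelf utilities. The sketch below assumes PyTorch Geometric's graclus and max_pool helpers and a Data object carrying x, edge_index, and a batch vector; it illustrates the pipeline, not the paper's exact code.

```python
from torch_geometric.nn import graclus, max_pool

def graclus_pool(data):
    """One GRACLUS pooling step: cluster the current graph, then max-pool
    node features and coarsen edges within each cluster (Equations 2-3)."""
    cluster = graclus(data.edge_index, num_nodes=data.x.size(0))
    return max_pool(cluster, data)  # returns a coarsened graph object
```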
Using the same data splits as [10], COMPLEMENT and GIN perform within a margin of 1% in accuracy (SMNIST) and 0.01 in absolute error (ZINC). Case 2: Differential pooling DIFFPOOL [47] uses a GNN to learn cluster assignments for graph pooling. At each layer l, the soft cluster assignment matrix S^{(l)} ∈ R^{n_{l−1}×n_l} is
S^{(l)} = softmax( GNN_1^{(l)}(A^{(l−1)}, X^{(l−1)}) ) with (A^{(0)}, X^{(0)}) = (A, X). (4)
The next step applies S^{(l)} and a second GNN to compute the graph representation at layer l:
X^{(l)} = S^{(l)ᵀ} GNN_2^{(l)}(A^{(l−1)}, X^{(l−1)}) and A^{(l)} = S^{(l)ᵀ} A^{(l−1)} S^{(l)}. (5)
During training, DIFFPOOL minimizes the sum of three loss functions: i) a supervised loss; ii) the Frobenius norm between A^{(l)} and the Gramian of the cluster assignments at each layer, i.e., Σ_l ‖A^{(l)} − S^{(l)} S^{(l)ᵀ}‖_F; iii) the entropy of the cluster assignments at each layer. The second loss is referred to as the link prediction loss and enforces nearby nodes to be pooled together. The third loss penalizes the entropy, encouraging sharp cluster assignments. Random assignments. To probe the influence of the learned cluster assignments, we replace S^{(l)} in Equation 4 with a normalized random matrix softmax(S̃^{(l)}). We consider three distributions:
(Uniform) S̃^{(l)}_{ij} ∼ U(a, b); (Normal) S̃^{(l)}_{ij} ∼ N(µ, σ²); (Bernoulli) S̃^{(l)}_{ij} ∼ B(α). (6)
We sample the assignment matrix before training starts and do not propagate gradients through it. Results. Figure 2 compares DIFFPOOL against the randomized variants. In all tasks, the highest average accuracy is achieved by a randomized approach. Nonetheless, there is no clear winner among all methods. Notably, the variances obtained with the random pooling schemes are not significantly higher than those of DIFFPOOL. Remark 1. Permutation invariance is an important property of most GNNs, as it assures consistency across different graph representations. However, the randomized variants break the invariance of DIFFPOOL. A simple fix consists of taking S̃^{(l)} = X^{(l−1)} S̃′, where S̃′ ∈ R^{d_{l−1}×n_l} is a random matrix. Figure 3 compares the randomized variants with and without this fix w.r.t. the validation error on artificially permuted graphs during training on the ZINC dataset. The results suggest that the variants are approximately invariant. Case 3: Graph memory networks Graph memory networks (GMNs) [21] consist of a sequence of memory layers stacked on top of a GNN, also known as the initial query network. We denote the output of the initial query network by Q^{(0)}. The first step in a memory layer computes kernel matrices between input queries Q^{(l−1)} = [q_1^{(l−1)}, …, q_{n_{l−1}}^{(l−1)}]ᵀ and multi-head keys K_h^{(l)} = [k_{1h}^{(l)}, …, k_{n_l h}^{(l)}]ᵀ:
S^{(l)}_{ijh} ∝ ( 1 + ‖q_i^{(l−1)} − k_{jh}^{(l)}‖² / τ )^{−(τ+1)/2} ∀h = 1, …, H, (7)
where H is the number of heads and τ is the degrees of freedom of the Student's t-kernel. We then aggregate the multi-head assignments S_h^{(l)} into a single matrix S^{(l)} using a 1×1 convolution followed by row-wise softmax normalization. Finally, we pool the node embeddings Q^{(l−1)} according to their soft assignments S^{(l)} and apply a single-layer neural net:
Q^{(l)} = ReLU( S^{(l)ᵀ} Q^{(l−1)} W^{(l)} ). (8)
In this notation, queries Q^{(l)} correspond to node embeddings X^{(l)} for l > 0. Also, note that the memory layer does not leverage graph structure information, as it is fully condensed into Q^{(0)}. Following [21], we use a GNN as the query network. In particular, we employ a two-layer network with the same convolutional operator as in Equation 1.
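Remark 1's invariance fix is a one-liner once a random projection has been fixed. A sketch follows; the names are ours, and S̃′ is sampled once and kept constant.

```python
import torch
import torch.nn.functional as F

# S_prime is sampled once, before training, and kept fixed thereafter.
S_prime = torch.rand(64, 16)  # (d_{l-1}, n_l); the sizes are illustrative

def invariant_assignments(X_prev):
    """Remark 1: with S_tilde = X^(l-1) @ S_prime, permuting the nodes of
    X_prev permutes the rows of the soft assignments in the same way."""
    return F.softmax(X_prev @ S_prime, dim=-1)
```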
Case 3: Graph memory networks

Graph memory networks (GMNs) [21] consist of a sequence of memory layers stacked on top of a GNN known as the initial query network, whose output we denote by $Q^{(0)}$. The first step in a memory layer computes kernel matrices between the input queries $Q^{(l-1)} = [q^{(l-1)}_1, \dots, q^{(l-1)}_{n_{l-1}}]^\top$ and multi-head keys $K^{(l)}_h = [k^{(l)}_{1h}, \dots, k^{(l)}_{n_l h}]^\top$:

$$S^{(l)}_{ijh} \propto \big( 1 + \| q^{(l-1)}_i - k^{(l)}_{jh} \|^2 / \tau \big)^{-\frac{\tau+1}{2}}, \quad \forall h = 1, \dots, H, \qquad (7)$$

where $H$ is the number of heads and $\tau$ is the degrees of freedom of the Student's t-kernel. We then aggregate the multi-head assignments $S^{(l)}_h$ into a single matrix $S^{(l)}$ using a 1×1 convolution followed by row-wise softmax normalization. Finally, we pool the node embeddings $Q^{(l-1)}$ according to their soft assignments $S^{(l)}$ and apply a single-layer neural net:

$$Q^{(l)} = \mathrm{ReLU}\big( S^{(l)\top} Q^{(l-1)} W^{(l)} \big). \qquad (8)$$

In this notation, the queries $Q^{(l)}$ correspond to the node embeddings $X^{(l)}$ for $l > 0$. Also note that the memory layer does not leverage graph structure information, as that information is fully condensed into $Q^{(0)}$. Following [21], we use a GNN as the query network; in particular, a two-layer network with the same convolutional operator as in Equation 1.

The loss function employed to learn GMNs is a convex combination of: i) a supervised loss; and ii) the Kullback-Leibler divergence between the learned assignments and their self-normalized squares. The latter aims to enforce sharp soft assignments, similarly to the entropy loss in DIFFPOOL.

Remark 2. Curiously, the intuition behind the loss that allegedly improves cluster purity may be misleading. For instance, uniform cluster assignments — the very assignments the loss was engineered to avoid — are a perfect minimizer of it. We provide more details in Appendix D.

Simplifying GMN. The principle behind GMNs is to group nodes based on their similarities to learned keys (centroids). To scrutinize this principle, we propose two variants. In the first, we replace the kernel in Equation 7 with the Euclidean distance to fixed keys drawn from a uniform distribution; in contrast to vanilla GMNs, the resulting assignments group nodes that are farthest from a key. The second variant substitutes for the multi-head assignment matrices a fixed matrix whose entries are independently sampled from a uniform distribution. We refer to these approaches as DISTANCE and RANDOM, respectively.

Results. Figure 4 compares GMN with its simplified variants. On all datasets, DISTANCE and RANDOM perform on par with GMN, with slightly better MAE on the ZINC dataset, and the variants show no noticeable increase in variance. It is worth mentioning that the simplified GMNs are naturally faster to train, as they have significantly fewer learned parameters. In the case of DISTANCE, the keys are treated as constants once sampled. Additionally, RANDOM bypasses the computation of the pairwise distances in Equation 7, which dominates the forward-pass time in GMNs. On the largest dataset (SMNIST), DISTANCE takes up to half the time of GMN (per epoch), whereas RANDOM is up to ten times faster than GMN. A sketch of the memory layer and its RANDOM variant follows.
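A minimal single-head sketch of the memory layer (Equations 7 and 8) and the RANDOM shortcut; the multi-head aggregation via a 1×1 convolution is omitted for brevity, and the fixed assignment S_fixed would be pre-sampled as in the DIFFPOOL variants.

```python
import torch

def memory_layer(Q, K, W, tau=1.0):
    # Eq. (7): Student's t-kernel between queries and (one head of) keys.
    d2 = torch.cdist(Q, K) ** 2                  # pairwise squared distances
    S = (1.0 + d2 / tau) ** (-(tau + 1.0) / 2.0)
    S = torch.softmax(S, dim=-1)                 # row-wise normalization
    # Eq. (8): pool the queries by their soft assignments, then transform.
    return torch.relu(S.T @ Q @ W)

def memory_layer_random(Q, S_fixed, W):
    # RANDOM variant: a fixed pre-sampled assignment replaces the kernel
    # computation, skipping the pairwise distances entirely.
    return torch.relu(S_fixed.T @ Q @ W)
```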
3 Analysis

The results in the previous section are counter-intuitive. We now analyze the factors that led to them. We show that the convolutions preceding the pooling layers learn representations that are approximately invariant to how nodes are grouped; in particular, GNNs learn smooth node representations in the early layers. To show this experimentally, we remove the initial convolutions that perform this early smoothing, and as a result all networks experience a significant drop in performance. Finally, we show that local pooling offers no accuracy benefit to the evaluated models.

Pooling learns approximately homogeneous node representations. The results with the random pooling methods suggest that any convex combination of the node features allows us to extract a good graph representation. Intuitively, this is possible if the nodes display similar activation patterns before pooling. If we interpret convolutions as filters defined over node features/signals, this phenomenon can be explained if the initial convolutional layers extract low-frequency information across the graph input channels/embedding components. To evaluate this intuition, we compute the activations before the first pooling layer and after each of the subsequent pooling layers. Figure 5 shows activations for the random pooling variants of DIFFPOOL and GMN, and for the COMPLEMENT approach, on ZINC. The plots in Figure 5 confirm that the first convolutional layers produce node features that are relatively homogeneous within the same graph, especially for the randomized variants; the learned features resemble vertical bars. As expected, the pooling layers accentuate this phenomenon, extracting even more similar representations. We report embeddings for other datasets in Appendix E.

Even methods based on local pooling tend to learn homogeneous representations. As one can notice from Figure 6, DIFFPOOL and GMN show smoothed patterns in the outputs of their initial pooling layer. This phenomenon explains why the performance of the randomized approaches matches that of their original counterparts. The results suggest that the loss terms designed to enforce local clustering are either not beneficial to the learning task or are obfuscated by the supervised loss. This observation does not apply to GRACLUS, which employs a deterministic clustering algorithm and thus separates the possibly competing goals of learning hierarchical structures and minimizing a supervised loss.

To further gauge the impact of the unsupervised loss on the performance of these GNNs, we compare two DIFFPOOL models trained with the link prediction loss multiplied by the weighting factors $\lambda = 10^0$ and $\lambda = 10^3$. Figure 7 shows the validation curves of the supervised loss for ZINC and SMNIST. We observe that the supervised losses for models with both $\lambda$ values converge to a similar point, at a similar rate. This confirms that the unsupervised (link prediction) loss has little to no effect on the predictive performance of DIFFPOOL. We have observed similar behavior for GMNs, which we report in the supplementary material.

Discouraging smoothness. In Figure 6, we observe homogeneous node representations even before the first pooling layer, which naturally makes it hard for the subsequent pooling layers to learn meaningful local structures. These homogeneous embeddings correspond to low-frequency signals defined on the graph nodes. In the general case, achieving such patterns is only possible with more than a single convolution. This can be explained from a spectral perspective: since each convolutional layer corresponds to filters that act linearly in the spectral domain, a single convolution cannot filter out specific frequency bands. These ideas have already been exploited to develop simplified GNNs [31, 43] that compute fixed polynomial filters in a normalized spectrum. Remarkably, using multiple convolutions to obtain initial embeddings is common practice [18, 26]. To evaluate its impact, we apply a single convolution before the first pooling layer in all networks and compare the resulting networks against the original implementations. Table 2 displays the results. All models report a significant drop in performance with a single convolutional layer. On NCI1, the methods obtain accuracies about 4% lower, on average. Likewise, GMN and GRACLUS report a 4% performance drop on SMNIST. With a single initial GraphSAGE convolution, the performances of DIFFPOOL and its uniform variant drop to as low as 66.4% on SMNIST. The only exception is IMDB-B, which has constant node features and therefore may not benefit from the additional convolutions. Note that, under this reduced expressiveness, pooling far-away nodes seems to impose a negative inductive bias, as COMPLEMENT consistently fails to rival the performance of GRACLUS. Intuitively, the number of convolutions needed to achieve smooth representations depends on the richness of the initial node features. Many popular datasets rely on one-hot features and might require only a small number of initial convolutions. Disentangling these effects is outside the scope of this work. A simple probe for quantifying smoothness is sketched below.
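One way to quantify the homogeneity discussed above is the mean squared deviation of node embeddings from the graph-wide mean; values near zero correspond to the near-constant, "vertical bar" activation patterns. This probe is our own illustration, not part of the paper's protocol.

```python
import torch

def homogeneity(Z):
    # Z: (n_nodes x d) activations at some layer. Returns the mean squared
    # deviation from the graph-wide mean embedding; a value near 0 means the
    # nodes are nearly indistinguishable, so any convex combination of them
    # pools to roughly the same representation.
    return ((Z - Z.mean(dim=0, keepdim=True)) ** 2).mean().item()
```

Comparing this statistic before the first pooling layer for one versus several initial convolutions would mirror the experiment behind Table 2.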
Is pooling overrated? Our experiments suggest that local pooling does not play an important role in the performance of GNNs. A natural next question is whether local pooling offers any advantage over global pooling strategies. We therefore run a GNN with global mean pooling on top of three convolutional layers (sketched at the end of this subsection); the results are: 79.65 ± 2.07 (NCI1), 71.05 ± 4.58 (IMDB-B), 95.20 ± 0.18 (SMNIST), and 0.443 ± 0.03 (ZINC). These performances are on par with our previous results obtained using GRACLUS, DIFFPOOL, and GMN. However surprising this may sound, our findings are consistent with results reported in the literature. For instance, GraphSAGE networks outperform DIFFPOOL under a rigorous evaluation protocol [10, 11], even though GraphSAGE is the convolutional component of DIFFPOOL. Likewise, in the context of attention, Knyazev et al. [24] argue that, except under certain circumstances, general attention mechanisms are negligible or even harmful. Similar findings also hold for CNNs [35, 39].

Permutation invariance is a relevant property for GNNs, as it guarantees that the network's predictions do not vary with the graph representation. For completeness, we state in Appendix C the permutation invariance of the simplified GNNs discussed in Section 2.

3.1 Additional results

More datasets. We consider four additional datasets commonly used to assess GNNs: three from the TU datasets [29] — PROTEINS, NCI109, and DD — and one from the recently proposed Open Graph Benchmark (OGB) framework [17], MOLHIV. Since Ivanov et al. [19] showed that the IMDB-B dataset is affected by a serious isomorphism bias, we also report results on the "cleaned" version of IMDB-B, hereafter referred to as IMDB-C. Table 3 shows the results for the additional datasets. Similarly to what we observed in the previous experiments, non-local pooling performs on par with local pooling. The largest performance gap occurs on the PROTEINS and MOLHIV datasets, on which COMPLEMENT achieves accuracy around 3% higher than GRACLUS, on average. The performance of all methods on IMDB-C does not differ significantly from their performance on IMDB-B; however, removing the isomorphisms in IMDB-B reduces the dataset size, which clearly increases variance.

Another pooling method. We also provide results for MINCUTPOOL [2], a recently proposed pooling scheme based on spectral clustering. This scheme integrates an unsupervised loss, derived from a relaxation of a MINCUT objective, and learns to assign clusters in a spectrum-free way. We compare MINCUTPOOL with a random variant on all datasets employed in this work. Again, we find that the random pooling mechanism achieves results comparable to its local counterpart. Details are given in Appendix B.

Sensitivity to hyperparameters. As mentioned in Section 2.1, this paper does not intend to benchmark the predictive performance of models equipped with different pooling strategies; consequently, we did not exhaustively optimize model hyperparameters. One may wonder whether our results hold for a broader set of hyperparameter choices. Figure 8 depicts the performance gap between GRACLUS and COMPLEMENT for a varying number of pooling layers and embedding dimensionalities over a single run. The greatest gap in favor of COMPLEMENT (e.g., 5 layers and 32 dimensions on ZINC) does not amount to an improvement greater than 0.05 in MAE. Overall, we find that the choice of hyperparameters does not significantly widen the performance gap between GRACLUS and COMPLEMENT.
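A sketch of the global mean pooling baseline mentioned above, assuming the convolution of Equation 1 and a single dense graph per forward pass; the hidden width, initialization, and readout are placeholders rather than the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

class GlobalMeanGNN(nn.Module):
    """Three convolutions (Eq. 1) followed by a global mean -- no local pooling."""
    def __init__(self, d_in, d_hidden, d_out):
        super().__init__()
        dims = [d_in, d_hidden, d_hidden, d_hidden]
        self.w1 = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(a, b)) for a, b in zip(dims, dims[1:])])
        self.w2 = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(a, b)) for a, b in zip(dims, dims[1:])])
        self.readout = nn.Linear(d_hidden, d_out)

    def forward(self, X, A):
        for w1, w2 in zip(self.w1, self.w2):
            X = torch.relu(X @ w1 + A @ X @ w2)   # Eq. (1)
        return self.readout(X.mean(dim=0))        # global mean over all nodes
```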
4 Related works

Graph pooling methods usually fall into global and hierarchical approaches. Global methods aggregate the node representations either via simple flattening procedures, such as summing (or averaging) the node embeddings [44], or via more sophisticated set-aggregation schemes [32, 49]. Hierarchical methods [7, 14, 18, 21, 26, 27, 47], on the other hand, sequentially coarsen graph representations over the network's layers. Notably, Knyazev et al. [24] provide a unified view of local pooling and node attention mechanisms and study the ability of pooling methods to generalize to larger and noisy graphs. They also show that the effect of attention is not crucial — and can sometimes be harmful — and propose a weakly-supervised approach.

Simple GNNs. In recent years, we have seen a surge of simplified GNNs. Wu et al. [43] show that removing the nonlinearity in GCNs [23] does not negatively impact their performance on node classification tasks; the resulting feature extractor is a low-pass-type filter. In [31], the authors take this idea further, evaluate the resilience to feature noise, and provide insights on GCN-based designs. For graph classification, Chen et al. [6] report that linear convolutional filters followed by nonlinear set functions achieve competitive performance against modern GNNs. Cai and Wang [5] propose a strong baseline based on local node statistics for non-attributed graph classification tasks.

Benchmarking GNNs. Errica et al. [11] demonstrate how the lack of rigorous evaluation protocols affects reproducibility and hinders new advances in the field. They found that structure-agnostic baselines outperform popular GNNs on at least three commonly used chemical datasets. In defense of GNNs, Dwivedi et al. [10] argue that this lack of proper evaluation stems mainly from the use of small datasets. To tackle this, they introduce a new benchmarking framework with datasets that are large enough for researchers to identify key design principles and assess statistically relevant differences between GNN architectures. Similar issues related to the use of small datasets are reported in [37].

Understanding pooling and attention in regular domains. In the context of CNNs for object recognition, Ruderman et al. [35] evaluate the role of pooling in CNNs with respect to deformation stability. Their results show that pooling is not necessary for appropriate deformation stability; the explanation lies in the network's ability to learn smooth filters across the layers. Sheng et al. [38] and Zhao et al. [51] propose random pooling as a fast alternative to conventional CNN pooling. Regarding attention-based models, Wiegreffe and Pinter [42] show that attention weights usually do not provide meaningful explanations for predictions. These works demonstrate the importance of properly assessing core assumptions in deep learning.

5 Conclusion

In contrast to the ever-increasing influx of GNN architectures, very few works rigorously assess which design choices are crucial for efficient learning. Consequently, misconceptions can spread widely, influencing the development of models built on flawed intuitions. In this paper, we study the role of local pooling and its impact on the performance of GNNs. We show that most GNN architectures employ convolutions that can quickly lead to smooth node representations.
As a result, the pooling layers become approximately invariant to specific cluster assignments. We also find that clustering-enforcing regularization is usually innocuous. In a series of experiments on accredited benchmarks, we show that extracting local information is not a necessary principle for effective pooling. By shedding new light on the role of pooling, we hope to contribute to the community in at least two ways: i) providing a simple sanity check for novel pooling strategies; and ii) deconstructing misconceptions and wrong intuitions related to the benefits of graph pooling.

Acknowledgments and Disclosure of Funding

This work was funded by the Academy of Finland (Flagship programme: Finnish Center for Artificial Intelligence, FCAI, and grants 294238, 292334 and 319264). We acknowledge the computational resources provided by the Aalto Science-IT Project.

Broader impact

Graph neural networks (GNNs) have become the de facto learning tools in many valuable domains such as social network analysis, drug discovery, recommender systems, and natural language processing. Nonetheless, the fundamental design principles behind the success of GNNs are only partially understood. This work takes a step further in understanding local pooling, one of the core design choices in many GNN architectures. We believe this work will help researchers and practitioners better choose where to invest their time and resources to build more accurate GNNs.
1. What is the main contribution of the paper? 2. What are the strengths of the paper, particularly in providing evidence against the common assumption regarding local pooling in Graph Neural Networks? 3. What are the weaknesses of the paper, especially regarding its lack of proposing new solutions and its limited scope in addressing the issue of GRACLUS? 4. Do you have any concerns or questions regarding the paper's methodology or conclusions?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper challenges the common assumption that local pooling of nodes in Graph Neural Networks is beneficial for performance. In the experiments, the authors show that:
- using pooling layers does not lead to an increase in performance with respect to simple baselines such as a) pooling disconnected nodes or b) pooling random node subsets;
- pooling learns approximately homogeneous features across nodes (which corroborates the first finding);
- using unsupervised losses to force nodes to create pure clusters has little to no impact on predictive performance;
- local node pooling performs similarly to global node pooling.
------ Post feedback update: In light of the feedback received from the authors, which in my opinion responds to all of my concerns in a satisfactory way, I have decided to raise this paper's score to 7, and I recommend acceptance.
Strengths This paper provides evidence that pooling in GNNs might not be of such influence in achieving good results (even if there are other reasons to consider pooling beyond performance evaluation). Even though it is a harsh claim, it seems supported by empirical evidence. I believe this paper might be of use to the graph learning community, as it establishes novel standards to satisfy, and useful baselines to outperform, in order to assess the validity and usefulness of novel pooling methods.
Weaknesses 1) The work is incremental; nothing really new is proposed (especially in terms of solutions). 2) I am not fully convinced that the argument of the paper holds for GRACLUS, for two reasons:
- It is well known that some problems on a graph have a dual problem on the complement graph (e.g., clique vs. independent set). Thus, the model learning something about the dual problem on the complement graph might be sufficient to explain the similar performances.
- The authors do not provide strikingly convincing evidence that pooling in GRACLUS learns homogeneous representations (as opposed to the other methods, where the evidence is clear).
However, I am willing to raise my judgement if the authors are able to convince me in the rebuttal phase.
NIPS
Title: Bandit Samplers for Training Graph Neural Networks

Abstract: Several sampling algorithms with variance reduction have been proposed for accelerating the training of Graph Convolution Networks (GCNs). However, due to the intractable computation of the optimal sampling distribution, these sampling algorithms are suboptimal for GCNs and are not applicable to more general graph neural networks (GNNs) where the message aggregator contains learned weights rather than fixed weights, such as Graph Attention Networks (GAT). The fundamental reason is that the embeddings of the neighbors or the learned weights involved in the optimal sampling distribution change during training and are not known a priori, but are only partially observed when sampled, making the derivation of optimal variance-reduced samplers non-trivial. In this paper, we formulate the optimization of the sampling variance as an adversarial bandit problem, where the rewards are related to the node embeddings and learned weights, and can vary constantly. Thus a good sampler needs to acquire variance information about more neighbors (exploration) while at the same time optimizing the immediate sampling variance (exploitation). We theoretically show that our algorithm asymptotically approaches the optimal variance within a factor of 3. We show the efficiency and effectiveness of our approach on multiple datasets.

1 Introduction

Graph neural networks [13, 11] have emerged as a powerful tool for representation learning on graph data in irregular or non-Euclidean domains [3, 21]. For instance, graph neural networks have demonstrated state-of-the-art performance on learning tasks such as node classification and link and graph property prediction, with applications ranging from drug design [8], social networks [11], transaction networks [14], and gene expression networks [9], to knowledge graphs [17]. One major challenge in training GNNs comes from the heavy floating point operations and large memory footprints required by the recursive expansions over the neighborhoods. For a minibatch with a single vertex $v_i$, computing its embedding $h^{(L)}_i$ at the $L$-th layer requires expanding its neighborhood from the $(L-1)$-th layer down to the 0-th layer, i.e., over $L$-hop neighbors. That quickly covers a large portion of the graph, particularly if the graph is dense. One basic idea for alleviating this "neighbor explosion" problem is to sample neighbors in a top-down manner, i.e., to sample neighbors in the $l$-th layer given the nodes in the $(l+1)$-th layer, recursively. Several layer sampling approaches [11, 6, 12, 23] have been proposed to alleviate the "neighbor explosion" problem and improve the convergence of training GCNs, e.g., with importance sampling. However, the optimal sampler [12],

$$q^\star_{ij} = \frac{\alpha_{ij}\,\|h^{(l)}_j\|^2}{\sum_{k \in \mathcal{N}_i} \alpha_{ik}\,\|h^{(l)}_k\|^2}$$

for vertex $v_i$, which minimizes the variance of the estimator $\hat{h}^{(l+1)}_i$, involves all of its neighbors' hidden embeddings, i.e., $\{\hat{h}^{(l)}_j \mid v_j \in \mathcal{N}_i\}$; this is infeasible to compute because we can only observe them partially while sampling. Existing approaches [6, 12, 23] typically compromise the optimal sampling distribution via approximations, which may impede convergence. Moreover, such approaches are not applicable to more general cases where the weights or kernels $\alpha_{ij}$ are not known a priori but are learned weights parameterized by attention functions [20].
That is, both the hidden embeddings and the learned weights involved in the optimal sampler vary constantly during training, and only part of the unnormalized attention values or hidden embeddings can be observed while sampling.

Present work. We derive novel variance-reduced samplers for training GCNs and attentive GNNs from a fundamentally different perspective. That is, unlike existing approaches that need to compute the immediate sampling distribution, we maintain nonparametric estimates of the sampler instead, and update the sampler towards optimal variance after we acquire partial knowledge about the neighbors being sampled, as the algorithm iterates. To fulfill this purpose, we formulate the optimization of the samplers as a bandit problem, where the regret is the gap between the expected loss (negative reward) under the current policy (sampler) and the expected loss under the optimal policy. We define the reward with respect to each action, i.e., the choice of a set of neighbors with sample size k, as the derivative of the sampling variance, and show that the variance of our samplers asymptotically approaches the optimal variance within a factor of 3. Under this problem formulation, we propose two bandit algorithms. The first algorithm, based on the multi-armed bandit (MAB), chooses k < K arms (neighbors) repeatedly. Our second algorithm, based on the MAB with multiple plays, chooses a combinatorial set of neighbors of size k at once. To summarize: (1) We recast the sampler for GNNs as a bandit problem from a fundamentally different perspective; it works for GCNs and attentive GNNs, while existing approaches apply only to GCNs. (2) We theoretically show that the regret with respect to the variance of our estimators asymptotically approximates the optimal sampler within a factor of 3, while no existing approach optimizes the sampler. (3) We empirically show that our approaches are highly competitive in terms of convergence and sampling variance, compared with state-of-the-art approaches on multiple public datasets.

2 Problem Setting

Let $G = (V, E)$ denote the graph with $N$ nodes $v_i \in V$ and edges $(v_i, v_j) \in E$. Let $A \in \mathbb{R}^{N \times N}$ denote the adjacency matrix, and let $H^{(0)} \in \mathbb{R}^{N \times D^{(0)}}$ be the feature matrix, with $h^{(0)}_i$ denoting the $D^{(0)}$-dimensional feature of node $v_i$. We focus on the following simple but general form of GNNs:

$$h^{(l+1)}_i = \sigma\Big( \sum_{j=1}^{N} \alpha(v_i, v_j)\, h^{(l)}_j W^{(l)} \Big), \quad l = 0, \dots, L-1, \qquad (1)$$

where $h^{(l)}_i$ is the hidden embedding of node $v_i$ at the $l$-th layer, $\alpha = (\alpha(v_i, v_j)) \in \mathbb{R}^{N \times N}$ is a kernel or weight matrix, $W^{(l)} \in \mathbb{R}^{D^{(l)} \times D^{(l+1)}}$ is the transform parameter at the $l$-th layer, and $\sigma(\cdot)$ is the activation function. The weight $\alpha(v_i, v_j)$, or $\alpha_{ij}$ for simplicity, is non-zero only if $v_j$ is in the 1-hop neighborhood $\mathcal{N}_i$ of $v_i$. It varies with the aggregation function [3, 21]. For example: (1) GCNs [8, 13] define fixed weights as $\alpha = \tilde{D}^{-1} \tilde{A}$ or $\alpha = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$, respectively, where $\tilde{A} = A + I$ and $\tilde{D}$ is the diagonal node degree matrix of $\tilde{A}$. (2) Attentive GNNs [20, 15] define a learned weight $\alpha(v_i, v_j)$ via attention functions: $\alpha(v_i, v_j) = \frac{\tilde{\alpha}(v_i, v_j; \theta)}{\sum_{v_k \in \mathcal{N}_i} \tilde{\alpha}(v_i, v_k; \theta)}$, where the unnormalized attention values $\tilde{\alpha}(v_i, v_j; \theta) = \exp(\mathrm{ReLU}(a^\top [W h_i \| W h_j]))$ are parameterized by $\theta = \{a, W\}$. Different from GCNs, the learned weights $\alpha_{ij} \propto \tilde{\alpha}_{ij}$ can be evaluated only given all the unnormalized weights in the neighborhood.

The basic idea of layer sampling approaches [11, 6, 12, 23] is to recast the evaluation of Eq. (1) as

$$\hat{h}^{(l+1)}_i = \sigma\Big( N(i)\, \mathbb{E}_{p_{ij}}\big[ \hat{h}^{(l)}_j \big] W^{(l)} \Big), \qquad (2)$$

where $p_{ij} \propto \alpha_{ij}$ and $N(i) = \sum_j \alpha_{ij}$.
Hence we can evaluate each node $v_i$ at the $(l+1)$-th layer using a Monte Carlo estimator with sampled neighbors at the $l$-th layer. Without loss of generality, we assume $p_{ij} = \alpha_{ij}$ and $N(i) = 1$, which matches the setting of attentive GNNs, in the rest of this paper. To further reduce the variance, consider the following importance sampling:

$$\hat{h}^{(l+1)}_i = \sigma_{W^{(l)}}\big( \hat{\mu}^{(l)}_i \big) = \sigma_{W^{(l)}}\Big( \mathbb{E}_{q_{ij}}\Big[ \frac{\alpha_{ij}}{q_{ij}} \hat{h}^{(l)}_j \Big] \Big), \qquad (3)$$

where we use $\sigma_{W^{(l)}}(\cdot)$ to fold the transform parameter $W^{(l)}$ into the function $\sigma(\cdot)$ for conciseness. As such, one can find an alternative sampling distribution $q_i = (q_{ij_1}, \dots, q_{ij_{|\mathcal{N}_i|}})$ to reduce the variance of an estimator, e.g., the Monte Carlo estimator $\hat{\mu}^{(l)}_i = \frac{1}{k} \sum_{s=1}^{k} \frac{\alpha_{ij_s}}{q_{ij_s}} \hat{h}^{(l)}_{j_s}$, where $j_s \sim q_i$. Taking the expectation over $q_i$, we define the variance of $\hat{\mu}^{(l)}_i = \frac{\alpha_{ij_s}}{q_{ij_s}} \hat{h}^{(l)}_{j_s}$ at step $t$ and layer $l+1$ as

$$\mathbb{V}_t(q_i) = \mathbb{E}\Big[ \big\| \hat{\mu}^{(l)}_i(t) - \mu^{(l)}_i(t) \big\|^2 \Big] = \mathbb{E}\Big[ \Big\| \frac{\alpha_{ij_s}(t)}{q_{ij_s}} h^{(l)}_{j_s}(t) - \sum_{j \in \mathcal{N}_i} \alpha_{ij}(t) h^{(l)}_j(t) \Big\|^2 \Big]. \qquad (4)$$

Note that $\alpha_{ij}$ and $h_j$, which are inferred during training, may vary over the steps $t$. We explicitly include the step $t$ and layer $l$ only when necessary. By expanding Eq. (4), one can write $\mathbb{V}(q_i)$ as the difference of two terms. The first is a function of $q_i$, which we refer to as the effective variance:

$$\mathbb{V}_e(q_i) = \sum_{j \in \mathcal{N}_i} \frac{1}{q_{ij}} \alpha^2_{ij} \|h_j\|^2, \qquad (5)$$

while the second does not depend on $q_i$, and we denote it by $\mathbb{V}_c = \big\| \sum_{j \in \mathcal{N}_i} \alpha_{ij} h_j \big\|^2$. The optimal sampling distribution [6, 12] at the $(l+1)$-th layer for vertex $i$ that minimizes the variance is

$$q^\star_{ij} = \frac{\alpha_{ij} \|h^{(l)}_j\|^2}{\sum_{k \in \mathcal{N}_i} \alpha_{ik} \|h^{(l)}_k\|^2}. \qquad (6)$$

However, evaluating this sampling distribution is infeasible, because we cannot know all the neighbors' embeddings in the denominator of Eq. (6). Moreover, the $\alpha_{ij}$ in attentive GNNs also vary during training. Existing layer sampling approaches based on importance sampling simply ignore the effect of the embedding norms and assume the $\alpha_{ij}$ are fixed during training. As a result, their sampling distributions are suboptimal and only applicable to GCNs, where the weights are fixed. Note that our derivation above follows the setting of node-wise sampling approaches [11], but the claim also holds for layer-wise sampling approaches [6, 12, 23]. A sketch of this estimator and of the (intractable) optimal distribution is given below.
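As an illustration only (not the authors' code), the following NumPy sketch contrasts the Monte Carlo importance-sampling estimator of Eq. (3) with the oracle distribution of Eq. (6); here alpha holds the weights over one vertex's neighbors and H their stacked embeddings.

```python
import numpy as np

def optimal_q(alpha, H):
    # Eq. (6): the oracle sampler -- it needs *all* neighbor embeddings,
    # which is exactly what is unavailable while sampling.
    scores = alpha * (np.linalg.norm(H, axis=1) ** 2)
    return scores / scores.sum()

def is_estimate(alpha, H, q, k, rng=np.random.default_rng()):
    # Monte Carlo importance-sampling estimate of mu_i = sum_j alpha_ij h_j,
    # drawing k neighbors (with replacement) from the alternative q.
    idx = rng.choice(len(q), size=k, p=q)
    return np.mean([(alpha[j] / q[j]) * H[j] for j in idx], axis=0)
```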
3 Related Works

We summarize three types of works for training graph neural networks. First, several "layer sampling" approaches [11, 6, 12, 23] have been proposed to alleviate the "neighbor explosion" problem. Given a minibatch of labeled vertices at each iteration, such approaches sample neighbors layer by layer in a top-down manner. In particular, node-wise samplers [11] randomly sample neighbors in the lower layer given each node in the upper layer, while layer-wise samplers [6, 12, 23] leverage importance sampling to sample neighbors in the lower layer given all the nodes in the upper layer, with the sample sizes of each layer independent of each other. Empirically, layer-wise samplers can work even worse [5] than node-wise samplers, and one can set an appropriate sample size for each layer to alleviate the growth issue of node-wise samplers. In this paper, we focus on optimizing the variance in the vein of layer sampling approaches. Though the derivation of our bandit samplers follows the node-wise samplers, it can be extended to the layer-wise setting; we leave this extension as future work. Second, Chen et al. [5] proposed a variance-reduced estimator that maintains historical embeddings of each vertex, based on the assumption that the embedding of a single vertex stays close to its history. This estimator uses a simple random sampler and works efficiently in practice, at the expense of extra storage that is linear in the number of nodes. Third, two "graph sampling" approaches [7, 22] first cut the graph into partitions [7] or sample subgraphs [22], and then train models on those partitions or subgraphs in a batch mode [13]. They show that the training time per epoch can be much shorter than that of "layer sampling" approaches. Their drawbacks are as follows. First, the partitioning of the original graph can be sensitive to the training problem. Second, these approaches assume that all vertices in the graph have labels; in practice, however, only some vertices may have labels [14].

GNN architectures. For works related to the architecture of GNNs, we refer readers to the comprehensive survey [21]. Existing sampling approaches work only on GCNs, not on more advanced architectures like GAT [20].

4 Variance-Reduced Samplers as Bandit Problems

We formulate the optimization of the sampling variance as a bandit problem. Basically, achieving the optimal variance requires knowledge of all the neighbors' embeddings, which is computationally infeasible; our chance is to exploit the sampled good neighbors. Our basic idea is that, instead of explicitly calculating the intractable optimal sampling distribution in Eq. (6) at each iteration, we aim to optimize a sampler or policy $Q^t_i$ for each vertex $i$ over the horizon $1 \le t \le T$, and make the variance of the estimator following this sampler asymptotically approach the optimum $Q^\star_i = \arg\min_{Q_i} \sum_{t=1}^{T} \mathbb{V}^t_e(Q_i)$, such that $\sum_{t=1}^{T} \mathbb{V}^t_e(Q^t_i) \le c \sum_{t=1}^{T} \mathbb{V}^t_e(Q^\star_i)$ for some constant $c > 1$. Each action of policy $Q^t_i$ is a choice of a $k$-element set of sampled neighbors $S_i \subset \mathcal{N}_i$, where $S_i \sim Q^t_i$. We denote by $Q_{i,S_i}(t)$ the probability of the action that $v_i$ chooses $S_i$ at step $t$. The gap to be minimized between the effective variance and the oracle is

$$\mathbb{V}^t_e(Q^t_i) - \mathbb{V}^t_e(Q^\star_i) \le \langle Q^t_i - Q^\star_i, \nabla_{Q^t_i} \mathbb{V}^t_e(Q^t_i) \rangle. \qquad (7)$$

Note that the function $\mathbb{V}^t_e(Q^t_i)$ is convex w.r.t. $Q^t_i$; hence for $Q^t_i$ and $Q^\star_i$ we have the upper bound on the right-hand side of Eq. (7). We define this upper bound as the regret at step $t$: the expected loss (negative reward) under policy $Q^t_i$ minus the expected loss under the optimal policy $Q^\star_i$. Hence the reward w.r.t. choosing $S_i$ at $t$ is the negative derivative of the effective variance, $r_{i,S_i}(t) = -\nabla_{Q_{i,S_i}(t)} \mathbb{V}^t_e(Q^t_i)$. In the following, we cast this problem in the adversarial bandit setting [1], because the rewards vary as training proceeds and do not follow an a priori fixed distribution [4]. We leave the study of other bandit settings as future work. We show in Section 6 that, with this regret, the variances of our estimators asymptotically approach the optimal variance within a factor of 3. Following importance sampling, both of our samplers maintain the alternative sampling distribution $q^t_i = (q_{ij_1}(t), \dots, q_{ij_{|\mathcal{N}_i|}}(t))$ for each vertex $v_i$ over the steps $t$. We instantiate the above framework under two bandit settings. (1) In the adversarial MAB setting [1], we define the sampler $Q^t_i$ as $q^t_i$, which samples exactly one arm (neighbor) $v_{j_s} \in \mathcal{N}_i$ from $q^t_i$; in this case the set $S_i$ is the singleton $\{v_{j_s}\}$. To obtain a sample of k neighbors, we repeat this process k times. After collecting the k rewards $r_{ij_s}(t) = -\nabla_{q_{i,j_s}(t)} \mathbb{V}^t_e(q^t_i)$, we update $q^t_i$ with EXP3 [1].
(2) In the adversarial MAB with multiple plays setting [19], we use an efficient k-combination sampler (DepRound [10]) to sample a k-element subset $S \subset \{1, 2, \dots, K\}$ satisfying $\sum_{S : j \in S} Q_S = q_j$ for all $j \in \{1, 2, \dots, K\}$, where $q_j$ is the alternative probability of sampling $j$. This allows us to select a set of $k$ distinct arms (neighbors), out of the $\binom{K}{k}$ possible subsets of the $K$ arms, at once; the selection can be done in $O(K)$. After collecting the reward $-\nabla_{Q_{i,S_i}(t)} \mathbb{V}^t_e(Q^t_i)$, we update $q^t_i$ with EXP3.M [19].

Discussion. We have to select a sample of k neighbors in GNNs. Note that in the MAB setting, exactly one neighbor should be selected, followed by a policy update; hence, strictly speaking, applying the MAB to our problem is not rigorous. Applying the MAB with multiple plays is rigorous, because it allows us to sample k neighbors at once and update the rewards together.

5 Algorithms

The framework of our algorithm is: (1) pick k arms with a sampler based on the alternative sampling distribution $q^t_i$ for each vertex $v_i$; (2) build the unbiased estimator; (3) run the feedforward and backpropagation passes; and finally (4) compute the rewards and update the sampler with a proper bandit algorithm. We show this framework in Algorithm 1. Note that the variance w.r.t. $q_i$ in Eq. (4) is defined only at the $(l+1)$-th layer, so in principle we should maintain multiple $q_i$'s, one per layer. In practice, we find that maintaining a single $q_i$ and updating it using only the rewards from the 1st layer works well enough. The time complexity of our algorithm is the same as that of any node-wise approach [11]. In addition, it requires $O(|E|)$ storage to maintain the alternative sampling distribution, where $|E|$ is the number of edges used for message passing in the GNN. Beyond that, no further storage is required; this holds even for more sophisticated architectures where messages are passed between neighbors beyond one hop. It remains to instantiate the estimators, variances, and rewards for our two bandit settings. We name the first algorithm, under the adversarial MAB setting, GNN-BS, and the second, under the adversarial MAB with multiple plays setting, GNN-BS.M. We first assume the weights $\alpha_{ij}$ are fixed, and then extend to attentive GNNs where the $\alpha_{ij}(t)$ change.

Algorithm 1: Bandit Samplers for Training GNNs.
Require: steps T, sample size k, number of layers L, node features $H^{(0)}$, adjacency matrix A.
1: Initialize: $q_{ij}(1) = 1/|\mathcal{N}_i|$ if $j \in \mathcal{N}_i$ else 0; $w_{ij}(1) = 1$ if $j \in \mathcal{N}_i$ else 0.
2: for t = 1 to T do
3:   Read a minibatch of labeled vertices at layer L.
4:   Use the sampler $q^t_i$ or DepRound(k, $q^t_i$) to sample neighbors top-down with sample size k.
5:   Run the GNN forward pass via the estimators defined in Eq. (8) or Proposition 1.
6:   Backpropagate and update the GNN model.
7:   for each $v_i$ in the 1st layer do
8:     Collect $v_i$'s k sampled neighbors $v_j \in S^t_i$ and rewards $r^t_i = \{r_{ij}(t) : v_j \in S^t_i\}$.
9:     Update $q^{t+1}_i$ and $w^{t+1}_i$ by EXP3($q^t_i$, $w^t_i$, $r^t_i$, $S^t_i$) or EXP3.M($q^t_i$, $w^t_i$, $r^t_i$, $S^t_i$).
10:   end for
11: end for
12: return the GNN model.

5.1 GNN-BS: Graph Neural Networks with Bandit Sampler

In this setting, we choose 1 arm and repeat k times. We have the following Monte Carlo estimator:

$$\hat{\mu}_i = \frac{1}{k} \sum_{s=1}^{k} \frac{\alpha_{ij_s}}{q_{ij_s}} \hat{h}_{j_s}, \quad j_s \sim q_i. \qquad (8)$$

This yields the variance $\mathbb{V}(q_i) = \frac{1}{k} \mathbb{E}_{q_i}\big[ \big\| \frac{\alpha_{ij_s}}{q_{ij_s}} h_{j_s} - \sum_{j \in \mathcal{N}_i} \alpha_{ij} h_j \big\|^2 \big]$. Following Eq. (5) and Eq. (7), the reward of $v_i$ picking neighbor $v_j$ at step $t$ is

$$r_{ij}(t) = -\nabla_{q_{ij}(t)} \mathbb{V}^t_e(q^t_i) = \frac{\alpha^2_{ij}}{k \cdot q_{ij}(t)^2} \|h_j(t)\|^2. \qquad (9)$$

A sketch of this reward computation and the EXP3-style update is given below.
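The following is a simplified illustration of the per-neighbor reward (Eq. 9) and an EXP3-style exponential-weight update; the exact mixing constants, reward scaling, and the capping step of EXP3.M follow [1, 19] and are omitted here, so this is a sketch rather than the authors' exact update rule.

```python
import numpy as np

def reward(alpha_ij, q_ij, h_j, k):
    # Eq. (9): negative derivative of the effective variance w.r.t. q_ij.
    return (alpha_ij ** 2) * np.dot(h_j, h_j) / (k * q_ij ** 2)

def exp3_update(w, q, rewards, eta=0.4):
    # w: weight vector over v_i's neighbors; `rewards` maps each sampled
    # neighbor index j to its reward r_ij(t). Unsampled arms keep their
    # weights; the sampling distribution mixes in uniform exploration.
    for j, r in rewards.items():
        w[j] *= np.exp(eta * r / q[j])          # importance-weighted reward
    q_new = (1 - eta) * w / w.sum() + eta / len(w)
    return w, q_new
```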
5.2 GNN-BS.M: Graph Neural Networks with Multiple-Plays Bandit Sampler

Given a vertex $v_i$, an important property of DepRound is that it satisfies $\sum_{S_i : j \in S_i} Q_{i,S_i} = q_{ij}$ for all $v_j \in \mathcal{N}_i$, where $S_i \subset \mathcal{N}_i$ is any subset of size k. We have the following unbiased estimator.

Proposition 1. $\hat{\mu}_i = \sum_{j_s \in S_i} \frac{\alpha_{ij_s}}{q_{ij_s}} h_{j_s}$ is an unbiased estimator of $\mu_i = \sum_{j \in \mathcal{N}_i} \alpha_{ij} h_j$, given that $S_i$, the selected k-subset of neighbors of vertex $i$, is sampled from $Q_i$ with DepRound.

The effective variance of this estimator is $\mathbb{V}_e(Q_i) = \sum_{S_i \subset \mathcal{N}_i} Q_{i,S_i} \big\| \sum_{j_s \in S_i} \frac{\alpha_{ij_s}}{q_{ij_s}} h_{j_s} \big\|^2$. Since the derivative of this effective variance w.r.t. $Q_{i,S_i}$ does not factorize, we instead use the following approximation obtained via Jensen's inequality.

Proposition 2. The effective variance can be approximated by the upper bound $\mathbb{V}_e(Q_i) \le \sum_{j_s \in \mathcal{N}_i} \frac{\alpha_{ij_s}}{q_{ij_s}} \|h_{j_s}\|^2$.

Proposition 3. The negative derivative of the approximated effective variance $\sum_{j_s \in \mathcal{N}_i} \frac{\alpha_{ij_s}}{q_{ij_s}} \|h_{j_s}\|^2$ w.r.t. $Q_{i,S_i}$, i.e., the reward of $v_i$ choosing $S_i$ at step $t$, is $r_{i,S_i}(t) = \sum_{j_s \in S_i} \frac{\alpha_{ij_s}}{q_{ij_s}(t)^2} \|h_{j_s}(t)\|^2$.

Following EXP3.M, we use the per-arm reward $r_{ij}(t) = \frac{\alpha_{ij}}{q_{ij}(t)^2} \|h_j(t)\|^2$ for all $j \in S_i$. Our proofs rely on the property of DepRound introduced above.

5.3 Extension to Attentive GNNs

In this section, we extend our algorithms to attentive GNNs. The remaining issue is that the attention value $\alpha_{ij}$ cannot be evaluated from only the sampled neighborhood; we can only compute the unnormalized attentions $\tilde{\alpha}_{ij}$. We define the adjusted feedback attention values as

$$\alpha'_{ij} = \sum_{j \in S_i} q_{ij} \cdot \frac{\tilde{\alpha}_{ij}}{\sum_{j \in S_i} \tilde{\alpha}_{ij}}, \qquad (10)$$

where the unnormalized attention values $\tilde{\alpha}_{ij}$ can obviously be evaluated once we have sampled $(v_i, v_j)$. We use $\sum_{j \in S_i} q_{ij}$ as a surrogate for $\frac{\sum_{j \in S_i} \tilde{\alpha}_{ij}}{\sum_{j \in \mathcal{N}_i} \tilde{\alpha}_{ij}}$, so that the adjusted attention values $\alpha'_{ij}$ approximate the true attention values $\alpha_{ij}$.

6 Regret Analysis

As described in Section 4, the regret is defined as $\langle Q^t_i - Q^\star_i, \nabla_{Q^t_i} \mathbb{V}^t_e(Q^t_i) \rangle$. By choosing the reward as the negative derivative of the effective variance, we obtain the following theorem: our bandit sampling algorithms asymptotically approximate the optimal variance within a factor of 3.

Theorem 1. Using Algorithm 1 with $\eta = 0.4$ and $\delta = \sqrt{\frac{(1-\eta)\eta^4 k^5 \ln(n/k)}{T n^4}}$ to minimize the effective variance with respect to $\{Q^t_i\}_{1 \le t \le T}$, we have

$$\sum_{t=1}^{T} \mathbb{V}^t_e(Q^t_i) \le 3 \sum_{t=1}^{T} \mathbb{V}^t_e(Q^\star_i) + 10 \sqrt{\frac{T n^4 \ln(n/k)}{k^3}}, \qquad (11)$$

where $T \ge \ln(n/k)\, n^2 (1-\eta)/(k \eta^2)$ and $n = |\mathcal{N}_i|$.

Our proof follows [16] by upper- and lower-bounding a potential function; the bounds are functions of the alternative sampling probability $q_{ij}(t)$ and the reward $r_{ij}(t)$, respectively. Multiplying the bounds by the optimal sampling probability $q^\star_i$ and using the reward definition in Eq. (9) yields the upper bound on the effective variance. The growth of this regret is sublinear in T, and the regret decreases polynomially as the sample size k grows. Note that the number of neighbors n is well bounded in practical graphs and can be considered a moderate constant. Compared with existing layer sampling approaches, this is the first work that optimizes the sampling variance of GNNs towards the optimum. We show the empirical sampling variances in the experiments. A sketch of DepRound, the k-subset sampler used by GNN-BS.M, is given below.
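A readable sketch of dependent rounding (DepRound [10]): given marginals p with sum(p) = k and 0 ≤ p_j ≤ 1, it returns a k-subset S with P(j ∈ S) = p_j. For clarity this version rescans the fractional entries each iteration (O(K²) overall); the linear-time bookkeeping of the original algorithm is omitted.

```python
import numpy as np

def depround(p, rng=np.random.default_rng()):
    p = np.asarray(p, dtype=float).copy()
    frac = [j for j in range(len(p)) if 0 < p[j] < 1]
    while len(frac) >= 2:
        i, j = frac[0], frac[1]
        a, b = min(1 - p[i], p[j]), min(p[i], 1 - p[j])
        # Shift probability mass between i and j; the branch probabilities
        # keep E[p] unchanged, and each step makes at least one entry 0 or 1.
        if rng.random() < b / (a + b):
            p[i], p[j] = p[i] + a, p[j] - a
        else:
            p[i], p[j] = p[i] - b, p[j] + b
        frac = [x for x in frac if 0 < p[x] < 1]
    return [j for j in range(len(p)) if p[j] > 0.5]  # entries rounded to {0,1}
```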
7 Experiments

In this section, we conduct extensive experiments against state-of-the-art approaches to show the advantage of our training approaches. We use the following rule to name our approaches: GNN architecture plus bandit sampler. For example, GCN-BS, GAT-BS, and GP-BS denote the training approaches for GCN, GAT [20], and GeniePath [15], respectively. Our implementations are available at https://github.com/xavierzw/gnn-bs. We run all experiments on one machine with an Intel Xeon E5-2682 and 512GB of RAM. Since the major purpose of this paper is to compare the effects of our samplers with those of existing training algorithms, we compare them by training the same GNN architecture. We use the following architectures unless otherwise stated. We fix the number of layers to 2, as in [13], for all comparison algorithms. We set the dimension of the hidden embeddings to 16 for Cora and Pubmed, and to 256 for PPI, Reddit, and Flickr. For a fair comparison, we do not use the normalization layer [2] particularly used in some works [5, 22]. For attentive GNNs, we use the attention layer proposed in GAT and set the number of attention heads to 1 for simplicity. We report results on 5 benchmark datasets: Cora [18], Pubmed [18], PPI [11], Reddit [11], and Flickr [22]. We follow the standard data splits and summarize the statistics in Table 1.

We summarize the comparison algorithms as follows. (1) GraphSAGE [11] is a node-wise layer sampling approach with a random sampler. (2) FastGCN [6], LADIES [23], and AS-GCN [12] are layer sampling approaches based on importance sampling. (3) S-GCN [5] can be viewed as an optimization solver for training GCNs based on a simple random sampler. (4) ClusterGCN [7] and GraphSAINT [22] are "graph sampling" techniques that first partition or sample the graph into small subgraphs and then train each subgraph using the batch algorithm [13]. (5) The open-source algorithms that support training attentive GNNs are AS-GCN and GraphSAINT; we denote these variants as AS-GAT and GraphSAINT-GAT.

We do a grid search over the following hyperparameters for each algorithm: the learning rate {0.01, 0.001}, the penalty weight on the ℓ2-norm regularizers {0, 0.0001, 0.0005, 0.001}, and the dropout rate {0, 0.1, 0.2, 0.3}. Following the existing implementations, we save the model based on the best results on validation and restore it to report results on the testing data in Section 7.1. For the sample size in GraphSAGE, S-GCN, and our algorithms, we use 1 for Cora and Pubmed, 5 for Flickr, and 10 for PPI and Reddit. We set the first- and second-layer sample sizes for FastGCN/LADIES and AS-GCN/AS-GAT to 256 and 256 for Cora and Pubmed, 1,900 and 3,800 for PPI, 780 and 1,560 for Flickr, and 2,350 and 4,700 for Reddit. We set the batch size of all the layer sampling approaches and S-GCN to 256 for all datasets. For ClusterGCN, we set the partitions according to the suggestions in [7] for PPI and Reddit, and choose the number of partitions for Cora and Pubmed (10) and for Flickr (200) by grid search. We set the architecture of GraphSAINT to "0-1-1", i.e., an MLP layer followed by two graph convolution layers, use the "rw" (random walk) sampling strategy reported as the best in the original paper, and set the number of roots and the walk length as the paper suggests.

7.1 Results on Benchmark Data

We report the testing results for the GCN and attentive GNN architectures in Table 2 and Table 3, respectively. We run each algorithm 3 times and report the mean and standard deviation. The results on the two-layer GCN architecture show that GCN-BS performs best on most datasets.
Compared with the layer sampling approaches, GCN-BS performs significantly better on relatively dense graphs such as PPI and Reddit, which shows the efficiency of our sampler in selecting neighbors. The results on the two-layer attentive GNN architecture show the superiority of our algorithms in training more complex GNN architectures. GraphSAINT and AS-GAT do not compute the softmax of the learned weights but simply use the unnormalized weights to perform the aggregation; as a result, most of the results of AS-GAT and GraphSAINT-GAT in Table 3 are worse than their results in Table 2. Thanks to the power of attentive structures in GNNs, our algorithms achieve the best results on PPI and Flickr.

7.2 Convergence

In this section, we analyze the convergence of the comparison algorithms on the two-layer GCN and attentive GNN architectures in Figure 1, in terms of epochs. We run all algorithms 3 times and show the mean and standard deviation. Our approaches converge much faster, with lower variance, on most datasets. The GNN-BS algorithms perform very similarly to GNN-BS.M, even though, strictly speaking, GNN-BS does not follow the rigorous MAB setting. The convergence on validation in terms of wall-clock time (seconds), compared with the layer sampling approaches in Figure 2, shows similar results.

7.3 Sample Size Analysis

We analyze the sampling variances and accuracy as the sample size varies, using the PPI data. Note that existing layer sampling approaches do not optimize the variance once the samplers are specified; as a result, their variances are simply fixed [23], while our approaches asymptotically approach the optimum. For comparison, we train our models until convergence and then compute the average sampling variances. We show the results in Figure 3 (left and middle), grouped into two categories: results for GCNs and for attentive GNNs. Within each group, our approaches attain smaller sampling variances, which explains their performance in Micro-F1 scores. Note that the overall sampling variances of node-wise approaches are much better than those of layer-wise approaches.

To further study convergence on graphs with different degrees while keeping the sample size fixed, we set up the following simulation (a data-generation sketch is given below). We randomly sample 100 labeled nodes {1, ..., i, ..., 100}, with each μi uniformly sampled from [−10, 10]. For each labeled node i we generate k neighbors, whose features are 1-dimensional real-valued scalars sampled from Uniform(μi − σ, μi + σ) with σ = 5. Each node i's label is generated by simply averaging its neighbors' 1-dimensional features. We use a GCN architecture with mean aggregators. We compare the convergence (mean squared error loss) against a random sampler while increasing k from 50 to 100 and 200 in Figure 3 (right). All samplers use a fixed sample size of 10. The results show that our bandit sampler works much better than a uniform sampler on graphs with different degrees.
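A short sketch of the synthetic data generation described above, transcribed directly from the text; the seed and function name are our own placeholders.

```python
import numpy as np

def make_synthetic_graph(n_labeled=100, k=50, sigma=5.0,
                         rng=np.random.default_rng(0)):
    # Section 7.3 setup: each labeled node i has k neighbors whose scalar
    # features are drawn around a node-specific mean mu_i, and the label is
    # the mean of its neighbors' features.
    mu = rng.uniform(-10, 10, size=n_labeled)
    feats = rng.uniform(mu[:, None] - sigma, mu[:, None] + sigma,
                        size=(n_labeled, k))
    labels = feats.mean(axis=1)
    return feats, labels
```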
8 Conclusions

In this paper, we show that optimal layer samplers based on importance sampling for training general graph neural networks are computationally intractable, since they require all the neighbors' hidden embeddings or learned weights. Instead, we reformulate the sampling problem as a bandit problem that requires only partial knowledge of the neighbors being sampled. We propose two algorithms, based on the multi-armed bandit and on the MAB with multiple plays, and show that the variance of our bandit samplers asymptotically approaches the optimum within a factor of 3. We empirically show that our algorithms achieve much better convergence, with much lower variance, than state-of-the-art approaches.

Broader Impact

This paper presents an approach for fast training of graph neural networks with theoretical guarantees. It may have an impact on training approaches for any model based on message passing. Graph neural networks may have positive impacts on recommender systems, protein analyses, fraud detection, and so on. This work does not present any foreseeable negative societal consequence.

Acknowledgments and Disclosure of Funding

This work is supported by Ant Group.
1. What is the focus and contribution of the paper regarding graph neural networks? 2. What are the strengths of the proposed approach, particularly in terms of theoretical analysis? 3. What are the weaknesses of the paper, especially regarding experimentation? 4. Do you have any concerns about the choice of baselines or the presentation of experimental results? 5. Are there any questions regarding the hyperparameter settings or computational costs?
Summary and Contributions Strengths Weaknesses
Summary and Contributions In this paper, the authors study sampling and its variance for GCNs and GNNs. They propose to formulate the optimization of the sampling variance as an adversarial bandit problem. Experimental results on several benchmark datasets demonstrate the effectiveness of the proposed method. ========================= post-rebuttal edit ========================= Thanks to the authors for their response; it addressed part of my concerns. I'd like to keep my score. My main concerns are still related to the experiments; I encourage the authors to try more baselines and datasets to further verify the advantages of the proposed method. Strengths 1. The research problem of studying how to reduce the sampling variance of graph-structured data is interesting. 2. The paper seems theoretically solid, with rich and detailed analysis. 3. Experimental results on benchmark datasets indicate the effectiveness of the proposed method. Weaknesses 1. Although the problem studied in this paper is interesting, the paper is not easy to follow. 2. More baselines (sampling approaches designed for GNNs) are needed. In Table 2, S-GCN [5] is a simple sampler, and ClusterGCN and GraphSAINT are designed for sampling (sub)graphs; the same holds for Table 3. 3. To be honest, I am somewhat confused by Table 3. It would be better if the authors provided more analysis for Table 3, and more joint analysis of Tables 2 and 3. 4. How did the authors determine the hyperparameter settings? 5. I am interested in seeing more experimental comparisons on datasets with a large number of nodes. 6. How does the method compare in terms of computational cost / running time?
NIPS
Title Bandit Samplers for Training Graph Neural Networks Abstract Several sampling algorithms with variance reduction have been proposed for accelerating the training of Graph Convolution Networks (GCNs). However, due to the intractable computation of optimal sampling distribution, these sampling algorithms are suboptimal for GCNs and are not applicable to more general graph neural networks (GNNs) where the message aggregator contains learned weights rather than fixed weights, such as Graph Attention Networks (GAT). The fundamental reason is that the embeddings of the neighbors or learned weights involved in the optimal sampling distribution are changing during the training and not known a priori, but only partially observed when sampled, thus making the derivation of an optimal variance reduced samplers non-trivial. In this paper, we formulate the optimization of the sampling variance as an adversary bandit problem, where the rewards are related to the node embeddings and learned weights, and can vary constantly. Thus a good sampler needs to acquire variance information about more neighbors (exploration) while at the same time optimizing the immediate sampling variance (exploit). We theoretically show that our algorithm asymptotically approaches the optimal variance within a factor of 3. We show the efficiency and effectiveness of our approach on multiple datasets. 1 Introduction Graph neural networks [13, 11] have emerged as a powerful tool for representation learning of graph data in irregular or non-euclidean domains [3, 21]. For instance, graph neural networks have demonstrated state-of-the-art performance on learning tasks such as node classification, link and graph property prediction, with applications ranging from drug design [8], social networks [11], transaction networks [14], gene expression networks [9], and knowledge graphs [17]. One major challenge of training GNNs comes from the requirements of heavy floating point operations and large memory footprints, due to the recursive expansions over the neighborhoods. For a minibatch with a single vertex vi, to compute its embedding h (L) i at the L-th layer, we have to expand its ∗Equal contribution. †Corresponding author. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. neighborhood from the (L− 1)-th layer to the 0-th layer, i.e. L-hops neighbors. That will soon cover a large portion of the graph if particularly the graph is dense. One basic idea of alleviating such “neighbor explosion” problem was to sample neighbors in a top-down manner, i.e. sample neighbors in the l-th layer given the nodes in the (l + 1)-th layer recursively. Several layer sampling approaches [11, 6, 12, 23] have been proposed to alleviate above “neighbor explosion” problem and improve the convergence of training GCNs, e.g. with importance sampling. However, the optimal sampler [12], q?ij = αij‖h(l)j ‖ 2∑ k∈Ni αik‖h(l)k ‖2 for vertex vi, to minimize the variance of the estimator ĥ(l+1)i involves all its neighbors’ hidden embeddings, i.e. {ĥ (l) j |vj ∈ Ni}, which is infeasible to be computed because we can only observe them partially while doing sampling. Existing approaches [6, 12, 23] typically compromise the optimal sampling distribution via approximations, which may impede the convergence. Moreover, such approaches are not applicable to more general cases where the weights or kernels αij’s are not known a priori, but are learned weights parameterized by attention functions [20]. 
That is, both the hidden embeddings and learned weights involved in the optimal sampler constantly vary during the training process, and only part of the unnormalized attention values or hidden embeddings can be observed while do sampling. Present work. We derive novel variance reduced samplers for training of GCNs and attentive GNNs with a fundamentally different perspective. That is, different with existing approaches that need to compute the immediate sampling distribution, we maintain nonparametric estimates of the sampler instead, and update the sampler towards optimal variance after we acquire partial knowledges about neighbors being sampled, as the algorithm iterates. To fulfil this purpose, we formulate the optimization of the samplers as a bandit problem, where the regret is the gap between expected loss (negative reward) under current policy (sampler) and expected loss with optimal policy. We define the reward with respect to each action, i.e. the choice of a set of neighbors with sample size k, as the derivatives of the sampling variance, and show the variance of our samplers asymptotically approaches the optimal variance within a factor of 3. Under this problem formulation, we propose two bandit algorithms. The first algorithm based on multi-armed bandit (MAB) chooses k < K arms (neighbors) repeatedly. Our second algorithm based on MAB with multiple plays chooses a combinatorial set of neighbors with size k only once. To summarize, (1) We recast the sampler for GNNs as a bandit problem from a fundamentally different perspective. It works for GCNs and attentive GNNs while existing approaches apply only to GCNs. (2) We theoretically show that the regret with respect to the variance of our estimators asymptotically approximates the optimal sampler within a factor of 3 while no existing approaches optimize the sampler. (3) We empirically show that our approachs are way competitive in terms of convergence and sample variance, compared with state-of-the-art approaches on multiple public datasets. 2 Problem Setting Let G = (V, E) denote the graph with N nodes vi ∈ V , and edges (vi, vj) ∈ E . Let the adjacency matrix denote as A ∈ RN×N . Assuming the feature matrix H(0) ∈ RN×D(0) with h(0)i denoting the D(0)-dimensional feature of node vi. We focus on the following simple but general form of GNNs: h (l+1) i = σ ( N∑ j=1 α(vi, vj)h (l) j W (l) ) , l = 0, . . . , L− 1 (1) where h(l)i is the hidden embedding of node vi at the l-th layer, α = (α(vi, vj)) ∈ RN×N is a kernel or weight matrix, W (l) ∈ RD(l)×D(l+1) is the transform parameter on the l-th layer, and σ(·) is the activation function. The weight α(vi, vj), or αij for simplicity, is non-zero only if vj is in the 1-hop neighborhood Ni of vi. It varies with the aggregation functions [3, 21]. For example, (1) GCNs [8, 13] define fixed weights asα = D̃−1à orα = D̃− 1 2 ÃD̃− 1 2 respectively, where à = A+I , and D̃ is the diagonal node degree matrix of Ã. (2) The attentive GNNs [20, 15] define a learned weight α(vi, vj) by attention functions: α(vi, vj) = α̃(vi,vj ;θ)∑ vk∈Ni α̃(vi,vk;θ) , where the unnormalized attentions α̃(vi, vj ; θ) = exp(ReLU(aT [Whi‖Whj ])), are parameterized by θ = {a,W}. Different from GCNs, the learned weights αij ∝ α̃ij can be evaluated only given all the unnormalized weights in the neighborhood. The basic idea of layer sampling approaches [11, 6, 12, 23] was to recast the evaluation of Eq. (1) as ĥ (l+1) i = σ ( N(i)Epij [ ĥ (l) j ] W (l) ) , (2) where pij ∝ αij , and N(i) = ∑ j αij . 
Hence we can evaluate each node vi at the (l + 1)-th layer, using a Monte Carlo estimator with sampled neighbors at the l-th layer. Without loss of generality, we assume pij = αij and N(i) = 1 that meet the setting of attentive GNNs in the rest of this paper. To further reduce the variance, let us consider the following importance sampling ĥ (l+1) i = σW (l) ( µ̂ (l) i ) = σW (l) ( Eqij [ αij qij ĥ (l) j ]) , (3) where we use σW (l)(·) to include transform parameterW (l) into the function σ(·) for conciseness. As such, one can find an alternative sampling distribution qi = (qij1 , ..., qij|Ni|) to reduce the variance of an estimator, e.g. a Monte Carlo estimator µ̂(l)i = 1 k ∑k s=1 αijs qijs ĥ (l) js , where js ∼ qi. Take expectation over qi, we define the variance of µ̂ (l) i = αijs qijs ĥ (l) js at step t and (l+1)-th layer to be: Vt(qi) = E [∥∥∥µ̂(l)i (t)− µ(l)i (t)∥∥∥2] = E[∥∥∥αijs(t)qijs h(l)js (t)− ∑ j∈Ni αij(t)h (l) j (t) ∥∥∥2]. (4) Note that αij and h(vj) that are inferred during training may vary over steps t’s. We will explicitly include step t and layer l only when it is necessary. By expanding Eq. (4) one can write V(qi) as the difference of two terms. The first is a function of qi, which we refer to as the effective variance: Ve(qi) = ∑ j∈Ni 1 qij α2ij ‖hj‖ 2 , (5) while the second does not depend on qi, and we denote it by Vc = ∥∥∥∑j∈Ni αijhj∥∥∥2. The optimal sampling distribution [6, 12] at (l + 1)-th layer for vertex i that minimizes the variance is: q?ij = αij‖h(l)j ‖2∑ k∈Ni αik‖h (l) k ‖2 . (6) However, evaluating this sampling distribution is infeasible because we cannot have all the knowledges of neighbors’ embeddings in the denominator of Eq. (6). Moreover, the αij’s in attentive GNNs could also vary during the training procedure. Existing layer sampling approaches based on importance sampling just ignore the effects of norm of embeddings and assume the αij’s are fixed during training. As a result, the sampling distribution is suboptimal and only applicable to GCNs where the weights are fixed. Note that our derivation above follows the setting of node-wise sampling approaches [11], but the claim remains to hold for layer-wise sampling approaches [6, 12, 23]. 3 Related Works We summarize three types of works for training graph neural networks. First, several “layer sampling” approaches [11, 6, 12, 23] have been proposed to alleviate the “neighbor explosion” problems. Given a minibatch of labeled vertices at each iteration, such approaches sample neighbors layer by layer in a top-down manner. Particularly, node-wise samplers [11] randomly sample neighbors in the lower layer given each node in the upper layer, while layer-wise samplers [6, 12, 23] leverage importance sampling to sample neighbors in the lower layer given all the nodes in upper layer with sample sizes of each layer be independent of each other. Empirically, the layer-wise samplers work even worse [5] compared with node-wise samplers, and one can set an appropriate sample size for each layer to alleviate the growth issue of node-wise samplers. In this paper, we focus on optimizing the variance in the vein of layer sampling approaches. Though the derivation of our bandit samplers follows the node-wise samplers, it can be extended to layer-wise. We leave this extension as a future work. Second, Chen et al. [5] proposed a variance reduced estimator by maintaining historical embeddings of each vertices, based on the assumption that the embeddings of a single vertex would be close to its history. 
3 Related Works

We summarize three types of works for training graph neural networks. First, several "layer sampling" approaches [11, 6, 12, 23] have been proposed to alleviate the "neighbor explosion" problem. Given a minibatch of labeled vertices at each iteration, such approaches sample neighbors layer by layer in a top-down manner. In particular, node-wise samplers [11] randomly sample neighbors in the lower layer given each node in the upper layer, while layer-wise samplers [6, 12, 23] leverage importance sampling to sample neighbors in the lower layer given all the nodes in the upper layer, with the sample size of each layer independent of the others. Empirically, the layer-wise samplers can work worse [5] than node-wise samplers, and one can set an appropriate sample size for each layer to alleviate the growth issue of node-wise samplers. In this paper, we focus on optimizing the variance in the vein of layer sampling approaches. Though the derivation of our bandit samplers follows the node-wise samplers, it can be extended to the layer-wise setting; we leave this extension as future work. Second, Chen et al. [5] proposed a variance-reduced estimator that maintains historical embeddings of each vertex, based on the assumption that the embeddings of a single vertex would be close to its history. This estimator uses a simple random sampler and works efficiently in practice, at the expense of extra storage that is linear in the number of nodes. Third, two "graph sampling" approaches [7, 22] first cut the graph into partitions [7] or sample subgraphs [22], and then train models on those partitions or subgraphs in batch mode [13]. They show that the training time per epoch may be much shorter compared with "layer sampling" approaches. We summarize the drawbacks as follows. First, the partitioning of the original graph could be sensitive to the training problem. Second, these approaches assume that all the vertices in the graph have labels; in practice, however, only some vertices may have labels [14].

GNN Architectures. For readers interested in works related to the architecture of GNNs, please refer to the comprehensive survey [21]. Existing sampling approaches work only on GCNs, not on more advanced architectures such as GAT [20].

4 Variance Reduced Samplers as Bandit Problems

We formulate the optimization of the sampling variance as a bandit problem. Achieving the optimal variance requires knowledge of all the neighbors' embeddings, which is computationally infeasible; our opportunity is to exploit the good neighbors that have already been sampled. Our basic idea is that, instead of explicitly calculating the intractable optimal sampling distribution in Eq. (6) at each iteration, we aim to optimize a sampler or policy $Q_i^t$ for each vertex $i$ over the horizon $1 \le t \le T$, and make the variance of the estimator following this sampler asymptotically approach the optimum $Q_i^{\star} = \arg\min_{Q_i} \sum_{t=1}^{T} V_e^t(Q_i)$, such that $\sum_{t=1}^{T} V_e^t(Q_i^t) \le c\, \sum_{t=1}^{T} V_e^t(Q_i^{\star})$ for some constant $c > 1$. Each action of policy $Q_i^t$ is a choice of a $k$-element set of sampled neighbors $S_i \subset N_i$, where $S_i \sim Q_i^t$. We denote by $Q_{i,S_i}(t)$ the probability of the action that $v_i$ chooses $S_i$ at step $t$. The gap to be minimized between the effective variance and the oracle is
$$V_e^t(Q_i^t) - V_e^t(Q_i^{\star}) \le \big\langle Q_i^t - Q_i^{\star},\, \nabla_{Q_i^t} V_e^t(Q_i^t) \big\rangle. \qquad (7)$$
Note that the function $V_e^t(Q_i^t)$ is convex w.r.t. $Q_i^t$; hence, for $Q_i^t$ and $Q_i^{\star}$, the upper bound on the right-hand side of Eq. (7) holds. We define this upper bound as the regret at step $t$: the expected loss (negative reward) under policy $Q_i^t$ minus the expected loss under the optimal policy $Q_i^{\star}$. Hence the reward for choosing $S_i$ at $t$ is the negative derivative of the effective variance, $r_{i,S_i}(t) = -\nabla_{Q_{i,S_i}(t)} V_e^t(Q_i^t)$. In the following, we place this bandit problem in the adversarial bandit setting [1], because the rewards vary as training proceeds and do not follow an a priori fixed distribution [4]. We leave the study of other bandit settings as future work. We show in Section 6 that, with this regret, the variances of our estimators asymptotically approach the optimal variance within a factor of 3. Following importance sampling, both of our samplers maintain the alternative sampling distribution $q_i^t = (q_{ij_1}(t), \dots, q_{ij_{|N_i|}}(t))$ for each vertex $v_i$ over the steps $t$. We instantiate the above framework under two bandit settings. (1) In the adversarial MAB setting [1], we define the sampler $Q_i^t$ as $q_i^t$, which samples exactly one arm (neighbor) $v_{j_s} \in N_i$ from $q_i^t$; in this case the set $S_i$ is the single element $v_{j_s}$. To obtain a sample size of $k$ neighbors, we repeat this process $k$ times. After collecting the $k$ rewards $r_{ij_s}(t) = -\nabla_{q_{i,j_s}(t)} V_e^t(q_i^t)$, we update $q_i^t$ via EXP3 [1].
(2) In the adversarial MAB with multiple plays setting [19], we use an efficient $k$-combination sampler (DepRound [10]) $Q$ to sample a $k$-element subset $S \subset \{1, 2, \dots, K\}$ satisfying $\sum_{S: j \in S} Q_S = q_j$ for all $j \in \{1, 2, \dots, K\}$, where $q_j$ is the alternative probability of sampling $j$. This allows us to select a set of $k$ distinct arms (neighbors), i.e. one of the $\binom{K}{k}$ possible subsets of the $K$ arms, at once; the selection can be done in $O(K)$. After collecting the reward $-\nabla_{Q_{i,S_i}(t)} V_e^t(Q_i^t)$, we update $q_i^t$ via EXP3.M [19].

Discussion. We have to select a sample of $k$ neighbors in GNNs. Note that in the MAB setting, exactly one neighbor should be selected before the policy is updated; hence, strictly speaking, applying MAB to our problem is not rigorous. Applying MAB with multiple plays is rigorous, because it allows us to sample $k$ neighbors at once and update the rewards together.

5 Algorithms

The framework of our algorithm is: (1) pick $k$ arms with a sampler based on the alternative sampling distribution $q_i^t$ for each vertex $v_i$, (2) form the unbiased estimator, (3) run the feedforward and backpropagation passes, and finally (4) calculate the rewards and update the sampler with the appropriate bandit algorithm. We show this framework in Algorithm 1. Note that the variance w.r.t. $q_i$ in Eq. (4) is defined only at the $(l+1)$-th layer, so in principle we should maintain multiple $q_i$'s, one per layer. In practice, we find that maintaining a single $q_i$ and updating it using only rewards from the 1st layer works well enough. The time complexity of our algorithm is the same as that of node-wise approaches [11]. In addition, it requires $O(|E|)$ storage to maintain the alternative sampling distributions, where $|E|$ is the number of edges used in the message passing operations of the GNN. Beyond that, no further storage is required; this is true even for more sophisticated architectures where messages are passed between neighbors beyond one hop. It remains to instantiate the estimators, variances and rewards for our two bandit settings. We name our first algorithm, under the adversarial MAB setting, GNN-BS, and the second, under the adversarial MAB with multiple plays setting, GNN-BS.M. We first assume the weights $\alpha_{ij}$ are fixed, and then extend to attentive GNNs where the $\alpha_{ij}(t)$'s change.

Algorithm 1 Bandit Samplers for Training GNNs.
Require: steps $T$, sample size $k$, number of layers $L$, node features $H^{(0)}$, adjacency matrix $A$.
1: Initialize: $q_{ij}(1) = 1/|N_i|$ if $j \in N_i$ else $0$; $w_{ij}(1) = 1$ if $j \in N_i$ else $0$.
2: for $t = 1$ to $T$ do
3:   Read a minibatch of labeled vertices at layer $L$.
4:   Use the sampler $q_i^t$ or DepRound($k$, $q_i^t$) to sample neighbors top-down with sample size $k$.
5:   Run the GNN model forward via the estimators defined in Eq. (8) or Proposition 1.
6:   Backpropagate and update the GNN model.
7:   for each $v_i$ in the 1st layer do
8:     Collect $v_i$'s $k$ sampled neighbors $v_j \in S_i^t$ and rewards $r_i^t = \{r_{ij}(t) : v_j \in S_i^t\}$.
9:     Update $q_i^{t+1}$ and $w_i^{t+1}$ by EXP3($q_i^t$, $w_i^t$, $r_i^t$, $S_i^t$) or EXP3.M($q_i^t$, $w_i^t$, $r_i^t$, $S_i^t$).
10:   end for
11: end for
12: return GNN model.

5.1 GNN-BS: Graph Neural Networks with Bandit Sampler

In this setting, we choose 1 arm and repeat $k$ times. We have the following Monte Carlo estimator:
$$\hat{\mu}_i = \frac{1}{k} \sum_{s=1}^{k} \frac{\alpha_{ij_s}}{q_{ij_s}}\, \hat{h}_{j_s}, \quad j_s \sim q_i. \qquad (8)$$
This yields the variance $V(q_i) = \frac{1}{k}\, \mathbb{E}_{q_i}\big[\big\|\frac{\alpha_{ij_s}}{q_{ij_s}} h_{j_s} - \sum_{j \in N_i} \alpha_{ij} h_j\big\|^2\big]$. Following Eq. (5) and Eq. (7), the reward of $v_i$ picking neighbor $v_j$ at step $t$ is
$$r_{ij}(t) = -\nabla_{q_{ij}(t)} V_e^t(q_i^t) = \frac{\alpha_{ij}^2}{k\, q_{ij}(t)^2}\, \|h_j(t)\|^2. \qquad (9)$$
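Below is a minimal per-vertex EXP3 sketch for GNN-BS (setting (1), Algorithm 1 lines 7-9) with the reward of Eq. (9). It is an illustration under stated assumptions, not the reference implementation: the class name, the exploration constant `gamma`, and the clipping of rewards to [0, 1] (EXP3 expects bounded rewards) are ours; the paper's exact parameterization follows [1] and Theorem 1.

```python
import numpy as np

class Exp3NeighborSampler:
    """Per-vertex EXP3 sampler: a sketch of GNN-BS's setting (1)."""

    def __init__(self, n_neighbors, gamma=0.1, seed=0):
        self.n = n_neighbors
        self.gamma = gamma                    # uniform-exploration rate
        self.w = np.ones(n_neighbors)         # EXP3 weights w_ij
        self.rng = np.random.default_rng(seed)

    def probs(self):
        # Alternative distribution q_i: exponential weights + exploration.
        return (1 - self.gamma) * self.w / self.w.sum() + self.gamma / self.n

    def sample(self, k):
        # Setting (1): draw one arm at a time, repeated k times.
        return self.rng.choice(self.n, size=k, p=self.probs())

    def update(self, arms, alpha_i, h_norm_sq, k):
        # Reward of Eq. (9): r_ij = alpha_ij^2 * ||h_j||^2 / (k * q_ij^2),
        # clipped to [0, 1] here; EXP3 then applies the importance-weighted
        # exponential update w_ij *= exp(gamma * (r / q_ij) / n).
        q = self.probs()
        for j in arms:
            r = min(1.0, alpha_i[j] ** 2 * h_norm_sq[j] / (k * q[j] ** 2))
            self.w[j] *= np.exp(self.gamma * (r / q[j]) / self.n)
```

In Algorithm 1, each first-layer vertex $v_i$ would hold one such sampler; `alpha_i` and `h_norm_sq` are read off the forward pass over the sampled neighbors. GNN-BS.M (Section 5.2) instead draws the $k$-subset in one shot with DepRound [10] and updates via EXP3.M. A sketch of the dependent-rounding step, again under our own naming and with the $O(K)$ bookkeeping simplified to a plain list rebuild, is:

```python
import numpy as np

def depround(marginals, rng):
    # DepRound [10]: turn marginals p (0 <= p_j <= 1, sum p = k) into a
    # random k-subset S with P(j in S) = p_j exactly. Each step resolves
    # at least one fractional coordinate to 0 or 1, preserving the sum
    # and the per-coordinate expectations.
    p = np.array(marginals, dtype=float)
    frac = [j for j in range(len(p)) if 0.0 < p[j] < 1.0]
    while len(frac) >= 2:
        i, j = frac[-1], frac[-2]
        a, b = min(1 - p[i], p[j]), min(p[i], 1 - p[j])
        if rng.random() < b / (a + b):
            p[i] += a; p[j] -= a
        else:
            p[i] -= b; p[j] += b
        frac = [x for x in frac if 0.0 < p[x] < 1.0]
    return [j for j in range(len(p)) if p[j] > 1 - 1e-9]
```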
5.2 GNN-BS.M: Graph Neural Networks with Multiple Plays Bandit Sampler

Given a vertex $v_i$, an important property of DepRound is that it satisfies $\sum_{S_i: j \in S_i} Q_{i,S_i} = q_{ij}$ for all $v_j \in N_i$, where $S_i \subset N_i$ is any subset of size $k$. We have the following unbiased estimator.

Proposition 1. $\hat{\mu}_i = \sum_{j_s \in S_i} \frac{\alpha_{ij_s}}{q_{ij_s}} h_{j_s}$ is an unbiased estimator of $\mu_i = \sum_{j \in N_i} \alpha_{ij} h_j$, given that $S_i$, the selected $k$-subset of neighbors of vertex $i$, is sampled from $Q_i$ with DepRound.

The effective variance of this estimator is $V_e(Q_i) = \sum_{S_i \subset N_i} Q_{i,S_i} \big\|\sum_{j_s \in S_i} \frac{\alpha_{ij_s}}{q_{ij_s}} h_{j_s}\big\|^2$. Since the derivative of this effective variance w.r.t. $Q_{i,S_i}$ does not factorize, we instead use the following approximate effective variance, obtained via Jensen's inequality.

Proposition 2. The effective variance can be approximated by the upper bound $V_e(Q_i) \le \sum_{j_s \in N_i} \frac{\alpha_{ij_s}}{q_{ij_s}} \|h_{j_s}\|^2$.

Proposition 3. The negative derivative of the approximate effective variance $\sum_{j_s \in N_i} \frac{\alpha_{ij_s}}{q_{ij_s}} \|h_{j_s}\|^2$ w.r.t. $Q_{i,S_i}$, i.e. the reward of $v_i$ choosing $S_i$ at $t$, is $r_{i,S_i}(t) = \sum_{j_s \in S_i} \frac{\alpha_{ij_s}}{q_{ij_s}(t)^2} \|h_{j_s}(t)\|^2$.

Following EXP3.M, we use the per-arm reward $r_{ij}(t) = \frac{\alpha_{ij}}{q_{ij}(t)^2} \|h_j(t)\|^2$ for all $j \in S_i$. Our proofs rely on the property of DepRound introduced above.

5.3 Extension to Attentive GNNs

In this section, we extend our algorithms to attentive GNNs. The remaining issue is that the attention value $\alpha_{ij}$ cannot be evaluated from the sampled neighborhood alone; we can only compute the unnormalized attentions $\tilde{\alpha}_{ij}$. We define the adjusted feedback attention values as follows:
$$\alpha'_{ij} = \Big(\sum_{j' \in S_i} q_{ij'}\Big) \cdot \frac{\tilde{\alpha}_{ij}}{\sum_{j' \in S_i} \tilde{\alpha}_{ij'}}, \qquad (10)$$
where the unnormalized attention values $\tilde{\alpha}_{ij}$ can be evaluated once $(v_i, v_j)$ has been sampled. We use $\sum_{j' \in S_i} q_{ij'}$ as a surrogate for $\frac{\sum_{j' \in S_i} \tilde{\alpha}_{ij'}}{\sum_{j' \in N_i} \tilde{\alpha}_{ij'}}$, so that the adjusted attention values $\alpha'_{ij}$ approximate the true attention values $\alpha_{ij}$.

6 Regret Analysis

As described in Section 4, the regret is defined as $\langle Q_i^t - Q_i^{\star}, \nabla_{Q_i^t} V_e^t(Q_i^t)\rangle$. By choosing the reward as the negative derivative of the effective variance, we have the following theorem: our bandit sampling algorithms asymptotically approximate the optimal variance within a factor of 3.

Theorem 1. Using Algorithm 1 with $\eta = 0.4$ and $\delta = \sqrt{\frac{(1-\eta)\,\eta^4 k^5 \ln(n/k)}{T n^4}}$ to minimize the effective variance with respect to $\{Q_i^t\}_{1 \le t \le T}$, we have
$$\sum_{t=1}^{T} V_e^t(Q_i^t) \le 3 \sum_{t=1}^{T} V_e^t(Q_i^{\star}) + 10\sqrt{\frac{T n^4 \ln(n/k)}{k^3}}, \qquad (11)$$
where $T \ge \ln(n/k)\, n^2 (1-\eta)/(k\eta^2)$ and $n = |N_i|$.

Our proof follows [16] by upper- and lower-bounding the potential function. The upper and lower bounds are functions of the alternative sampling probability $q_{ij}(t)$ and the reward $r_{ij}(t)$, respectively. By multiplying the upper and lower bounds by the optimal sampling probability $q_i^{\star}$ and using the reward definition in Eq. (9), we obtain the upper bound on the effective variance. The growth of this regret is sublinear in $T$, and the regret decreases polynomially as the sample size $k$ grows. Note that the number of neighbors $n$ is always well bounded in practical graphs and can be regarded as a moderate constant. Compared with existing layer sampling approaches, this is the first work to optimize the sampling variance of GNNs towards the optimum. We show the sampling variances empirically in the experiments.

7 Experiments

In this section, we conduct extensive experiments against state-of-the-art approaches to show the advantage of our training approaches. We use the following rule to name our approaches: GNN architecture plus bandit sampler.
For example, GCN-BS, GAT-BS and GP-BS denote the training approaches for GCN, GAT [20] and GeniePath [15], respectively. Our implementations are available at https://github.com/xavierzw/gnn-bs. We run all the experiments on a single machine with an Intel Xeon E5-2682 and 512GB of RAM. The major purpose of this paper is to compare the effects of our samplers with those of existing training algorithms, so we compare them by training the same GNN architecture. We use the following architectures unless otherwise stated. We fix the number of layers at 2, as in [13], for all comparison algorithms. We set the dimension of the hidden embeddings to 16 for Cora and Pubmed, and to 256 for PPI, Reddit and Flickr. For a fair comparison, we do not use the normalization layer [2] used in some works [5, 22]. For attentive GNNs, we use the attention layer proposed in GAT, and set the number of attention heads to 1 for simplicity. We report results on 5 benchmark datasets: Cora [18], Pubmed [18], PPI [11], Reddit [11], and Flickr [22]. We follow the standard data splits and summarize the statistics in Table 1.

We summarize the comparison algorithms as follows. (1) GraphSAGE [11] is a node-wise layer sampling approach with a random sampler. (2) FastGCN [6], LADIES [23], and AS-GCN [12] are layer sampling approaches based on importance sampling. (3) S-GCN [5] can be viewed as an optimization solver for training GCN based on a simple random sampler. (4) ClusterGCN [7] and GraphSAINT [22] are "graph sampling" techniques that first partition or sample the graph into small subgraphs and then train each subgraph using the batch algorithm [13]. (5) The open-source algorithms that support the training of attentive GNNs are AS-GCN and GraphSAINT; we denote these variants AS-GAT and GraphSAINT-GAT.

We do a grid search over the following hyperparameters for each algorithm: the learning rate {0.01, 0.001}, the penalty weight on the $\ell_2$-norm regularizers {0, 0.0001, 0.0005, 0.001}, and the dropout rate {0, 0.1, 0.2, 0.3}. Following the existing implementations, we save the model with the best validation results and restore it to report results on the testing data in Section 7.1. For the sample size in GraphSAGE, S-GCN and our algorithms, we use 1 for Cora and Pubmed, 5 for Flickr, and 10 for PPI and Reddit. We set the sample sizes of the first and second layers for FastGCN/LADIES and AS-GCN/AS-GAT to 256 and 256 for Cora and Pubmed, 1,900 and 3,800 for PPI, 780 and 1,560 for Flickr, and 2,350 and 4,700 for Reddit. We set the batch size of all layer sampling approaches and S-GCN to 256 for all datasets. For ClusterGCN, we set the partitions according to the suggestions in [7] for PPI and Reddit, and set the number of partitions to 10 for Cora and Pubmed and to 200 for Flickr by grid search. We set the architecture of GraphSAINT to "0-1-1", i.e. an MLP layer followed by two graph convolution layers. We use the "rw" sampling strategy, reported as the best in the original paper, to perform the graph sampling procedure, and set the number of roots and the walk length as the paper suggests.

7.1 Results on Benchmark Data

We report the testing results for the GCN and attentive GNN architectures in Table 2 and Table 3, respectively. We run each algorithm 3 times and report the mean and standard deviation. The results on the two-layer GCN architecture show that our GCN-BS performs best on most datasets.
Compared with layer sampling approaches, GCN-BS performs significantly better on relatively dense graphs, such as PPI and Reddit. This shows the efficiency of our sampler in selecting neighbors. The results on the two-layer attentive GNN architecture show the superiority of our algorithms in training more complex GNN architectures. GraphSAINT-GAT and AS-GAT do not compute the softmax of the learned weights but simply use the unnormalized weights to perform the aggregation; as a result, most of the results of AS-GAT and GraphSAINT-GAT in Table 3 are worse than their results in Table 2. Thanks to the power of attentive structures in GNNs, our algorithms achieve the best results on PPI and Flickr.

7.2 Convergence

In this section, we analyze the convergence of the comparison algorithms on the two-layer GCN and attentive GNN architectures in Figure 1, in terms of epochs. We run all algorithms 3 times and show the mean and standard deviation. Our approaches converge much faster, with lower variances, on most datasets. The GNN-BS algorithms perform very similarly to GNN-BS.M, even though, strictly speaking, GNN-BS does not follow the rigorous MAB setting. The validation convergence in terms of wall-clock time (seconds), compared with layer sampling approaches in Fig. 2, shows similar results.

7.3 Sample Size Analysis

We analyze the sampling variances and accuracy as the sample size varies, using the PPI data. Note that existing layer sampling approaches do not optimize the variances once the samplers are specified; as a result, their variances are simply fixed [23], while our approaches asymptotically approach the optimum. For comparison, we train our models until convergence and then compute the average sampling variances. We show the results in Figure 3 (left and middle). The results are grouped into two categories: results for GCNs and for attentive GNNs, respectively. Our approaches' sampling variances are smaller within each group, which explains the Micro-F1 performance of our approaches. Note that the overall sampling variances of node-wise approaches are substantially lower than those of layer-wise approaches.

To further examine convergence when we simulate graphs with different degrees while fixing the sample size of each algorithm, we set up the following experiment. We randomly sample 100 labeled nodes $\{1, \dots, i, \dots, 100\}$, with each $\mu_i$ uniformly sampled from $[-10, 10]$. For each labeled node $i$ we generate $k$ neighbors, whose features are 1-dimensional real scalars sampled from $\mathrm{Uniform}(\mu_i - \sigma, \mu_i + \sigma)$ with $\sigma = 5$. Each node $i$'s label is generated by simply averaging its neighbors' scalar features. We use a GCN architecture with mean aggregators. We compare the convergence (mean squared error loss) against a random sampler while increasing $k$ from 50 to 100 and 200 in Figure 3 (right); all samplers use a fixed sample size of 10. The comparison shows that our bandit sampler works much better than a uniform sampler on graphs with different degrees. A small sketch of this synthetic setup follows.
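For concreteness, here is a minimal sketch of the synthetic data generation in Section 7.3. The function name and the seed handling are our assumptions; the constants follow the text.

```python
import numpy as np

def make_synthetic_graph(num_labeled=100, k=50, sigma=5.0, seed=0):
    # Each labeled node i has a center mu_i ~ Uniform(-10, 10); its k
    # neighbors carry 1-d features from Uniform(mu_i - sigma, mu_i + sigma);
    # the node's regression label is the mean of its neighbors' features.
    rng = np.random.default_rng(seed)
    mu = rng.uniform(-10.0, 10.0, size=num_labeled)
    feats = rng.uniform(mu[:, None] - sigma, mu[:, None] + sigma,
                        size=(num_labeled, k))
    labels = feats.mean(axis=1)
    return feats, labels

# As in Figure 3 (right): vary the degree k while every sampler is
# restricted to a fixed sample size of 10 neighbors per node.
for k in (50, 100, 200):
    feats, labels = make_synthetic_graph(k=k)
```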
8 Conclusions

In this paper, we show that the optimal layer samplers based on importance sampling for training general graph neural networks are computationally intractable, since they require all the neighbors' hidden embeddings or learned weights. Instead, we reformulate the sampling problem as a bandit problem that requires only partial knowledge from the sampled neighbors. We propose two algorithms, based on the multi-armed bandit and on MAB with multiple plays, and show that the variance of our bandit samplers asymptotically approaches the optimum within a factor of 3. We empirically show that our algorithms achieve much better convergence, with much lower variances, compared with state-of-the-art approaches.

Broader Impact

This paper presents an approach for fast training of graph neural networks with theoretical guarantees. It may have impact on training approaches for any model based on message passing. Graph neural networks may have positive impacts on recommender systems, protein analysis, fraud detection, and so on. This work does not present any foreseeable negative societal consequences.

Acknowledgments and Disclosure of Funding

This work is supported by Ant Group.
1. What is the main contribution of the paper in the field of GNN embeddings?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical grounding and empirical results?
3. Do you have any concerns or suggestions regarding the method, such as using combinatorial bandits instead?
4. How does the reviewer assess the novelty and relevance of the paper's content?
Summary and Contributions
The authors propose to use a bandit approach to optimally sample the neighbors in GNN embeddings. Previous approaches include random and importance sampling; the proposed approach scales even to GNNs with attention, whose weights can change across iterations. A nice theoretical bound also shows a multiplicative factor of 3 over the optimal variance.

Strengths
(A) Good theoretical grounding in bandits: multi-play MAB is proposed for neighbor selection, and a variance bound is shown.
(B) Strong empirical results, shown on a wide variety of datasets.
(C) Casting node selection as a bandit problem seems novel to me and could lead to other bandit extensions as well as applications to other graph settings.
(D) Highly relevant, since training GNNs is expensive and improvements are very welcome.

Weaknesses
(A) How about using combinatorial bandits (CMAB) for selecting the neighbors?
(B) I might have missed it, but further insight into why this approach works could have been interesting. Are you able to sample good neighbors in fewer rounds, while random/importance sampling explore unnecessarily?
1. What is the focus and contribution of the paper regarding graph neural networks?
2. What are the strengths of the proposed approach, particularly in terms of novelty and relevance?
3. What are the weaknesses of the paper, especially regarding the explanation of the reward function and experimental validation?
4. Do you have any questions or suggestions regarding the use of adversarial bandit methods for learning optimal sampling weights?
5. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
Summary and Contributions
This paper studies the problem of accelerating the training of graph neural networks (GNNs) using sampling. The embedding of a node is computed by aggregating the embeddings of its neighboring nodes. Instead of sampling all neighbors, which can be expensive, we can sample a subset of the nodes, and the resulting estimator's variance can be reduced using importance sampling. For GNNs, it is difficult to determine the optimal importance sampling weights, since they depend on unknown quantities. This paper proposes the use of adversarial bandit methods to learn the optimal sampling weights.

Strengths
The paper is relevant to the NeurIPS community. The paper is well written, and the authors have compared their sampling scheme to existing sampling schemes. They also provide a regret guarantee for their algorithm. The contribution is novel to the best of my knowledge.

Weaknesses
I do not follow why the gradient of the variance can be set as the reward for the bandit (the write-up after Equation 7). The objective is to choose a good Q_i^t, and the quantity on the LHS of Equation 7 is to be minimized; this is upper bounded by the inner product on the RHS. In the reward function, however, only the gradient is present. Could you elaborate on this?

In the experiments, I would have liked to see two plots. First, a plot comparing the empirical regret with the upper bound in Theorem 1; this would help validate the expression in Theorem 1 and show whether it predicts the dependence on various quantities correctly. Second, a plot measuring the gains on simulated toy graphs. For instance, how does the convergence change if we consider graphs where all nodes have a fixed degree? I would imagine that as the degree increases, the convergence would be slower (for a fixed sample of the neighborhood). I would also encourage releasing the code for reproducibility.
NIPS
Title Bandit Samplers for Training Graph Neural Networks Abstract Several sampling algorithms with variance reduction have been proposed for accelerating the training of Graph Convolution Networks (GCNs). However, due to the intractable computation of optimal sampling distribution, these sampling algorithms are suboptimal for GCNs and are not applicable to more general graph neural networks (GNNs) where the message aggregator contains learned weights rather than fixed weights, such as Graph Attention Networks (GAT). The fundamental reason is that the embeddings of the neighbors or learned weights involved in the optimal sampling distribution are changing during the training and not known a priori, but only partially observed when sampled, thus making the derivation of an optimal variance reduced samplers non-trivial. In this paper, we formulate the optimization of the sampling variance as an adversary bandit problem, where the rewards are related to the node embeddings and learned weights, and can vary constantly. Thus a good sampler needs to acquire variance information about more neighbors (exploration) while at the same time optimizing the immediate sampling variance (exploit). We theoretically show that our algorithm asymptotically approaches the optimal variance within a factor of 3. We show the efficiency and effectiveness of our approach on multiple datasets. 1 Introduction Graph neural networks [13, 11] have emerged as a powerful tool for representation learning of graph data in irregular or non-euclidean domains [3, 21]. For instance, graph neural networks have demonstrated state-of-the-art performance on learning tasks such as node classification, link and graph property prediction, with applications ranging from drug design [8], social networks [11], transaction networks [14], gene expression networks [9], and knowledge graphs [17]. One major challenge of training GNNs comes from the requirements of heavy floating point operations and large memory footprints, due to the recursive expansions over the neighborhoods. For a minibatch with a single vertex vi, to compute its embedding h (L) i at the L-th layer, we have to expand its ∗Equal contribution. †Corresponding author. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. neighborhood from the (L− 1)-th layer to the 0-th layer, i.e. L-hops neighbors. That will soon cover a large portion of the graph if particularly the graph is dense. One basic idea of alleviating such “neighbor explosion” problem was to sample neighbors in a top-down manner, i.e. sample neighbors in the l-th layer given the nodes in the (l + 1)-th layer recursively. Several layer sampling approaches [11, 6, 12, 23] have been proposed to alleviate above “neighbor explosion” problem and improve the convergence of training GCNs, e.g. with importance sampling. However, the optimal sampler [12], q?ij = αij‖h(l)j ‖ 2∑ k∈Ni αik‖h(l)k ‖2 for vertex vi, to minimize the variance of the estimator ĥ(l+1)i involves all its neighbors’ hidden embeddings, i.e. {ĥ (l) j |vj ∈ Ni}, which is infeasible to be computed because we can only observe them partially while doing sampling. Existing approaches [6, 12, 23] typically compromise the optimal sampling distribution via approximations, which may impede the convergence. Moreover, such approaches are not applicable to more general cases where the weights or kernels αij’s are not known a priori, but are learned weights parameterized by attention functions [20]. 
That is, both the hidden embeddings and learned weights involved in the optimal sampler constantly vary during the training process, and only part of the unnormalized attention values or hidden embeddings can be observed while do sampling. Present work. We derive novel variance reduced samplers for training of GCNs and attentive GNNs with a fundamentally different perspective. That is, different with existing approaches that need to compute the immediate sampling distribution, we maintain nonparametric estimates of the sampler instead, and update the sampler towards optimal variance after we acquire partial knowledges about neighbors being sampled, as the algorithm iterates. To fulfil this purpose, we formulate the optimization of the samplers as a bandit problem, where the regret is the gap between expected loss (negative reward) under current policy (sampler) and expected loss with optimal policy. We define the reward with respect to each action, i.e. the choice of a set of neighbors with sample size k, as the derivatives of the sampling variance, and show the variance of our samplers asymptotically approaches the optimal variance within a factor of 3. Under this problem formulation, we propose two bandit algorithms. The first algorithm based on multi-armed bandit (MAB) chooses k < K arms (neighbors) repeatedly. Our second algorithm based on MAB with multiple plays chooses a combinatorial set of neighbors with size k only once. To summarize, (1) We recast the sampler for GNNs as a bandit problem from a fundamentally different perspective. It works for GCNs and attentive GNNs while existing approaches apply only to GCNs. (2) We theoretically show that the regret with respect to the variance of our estimators asymptotically approximates the optimal sampler within a factor of 3 while no existing approaches optimize the sampler. (3) We empirically show that our approachs are way competitive in terms of convergence and sample variance, compared with state-of-the-art approaches on multiple public datasets. 2 Problem Setting Let G = (V, E) denote the graph with N nodes vi ∈ V , and edges (vi, vj) ∈ E . Let the adjacency matrix denote as A ∈ RN×N . Assuming the feature matrix H(0) ∈ RN×D(0) with h(0)i denoting the D(0)-dimensional feature of node vi. We focus on the following simple but general form of GNNs: h (l+1) i = σ ( N∑ j=1 α(vi, vj)h (l) j W (l) ) , l = 0, . . . , L− 1 (1) where h(l)i is the hidden embedding of node vi at the l-th layer, α = (α(vi, vj)) ∈ RN×N is a kernel or weight matrix, W (l) ∈ RD(l)×D(l+1) is the transform parameter on the l-th layer, and σ(·) is the activation function. The weight α(vi, vj), or αij for simplicity, is non-zero only if vj is in the 1-hop neighborhood Ni of vi. It varies with the aggregation functions [3, 21]. For example, (1) GCNs [8, 13] define fixed weights asα = D̃−1à orα = D̃− 1 2 ÃD̃− 1 2 respectively, where à = A+I , and D̃ is the diagonal node degree matrix of Ã. (2) The attentive GNNs [20, 15] define a learned weight α(vi, vj) by attention functions: α(vi, vj) = α̃(vi,vj ;θ)∑ vk∈Ni α̃(vi,vk;θ) , where the unnormalized attentions α̃(vi, vj ; θ) = exp(ReLU(aT [Whi‖Whj ])), are parameterized by θ = {a,W}. Different from GCNs, the learned weights αij ∝ α̃ij can be evaluated only given all the unnormalized weights in the neighborhood. The basic idea of layer sampling approaches [11, 6, 12, 23] was to recast the evaluation of Eq. (1) as ĥ (l+1) i = σ ( N(i)Epij [ ĥ (l) j ] W (l) ) , (2) where pij ∝ αij , and N(i) = ∑ j αij . 
To further reduce the variance, consider the following importance sampling:
$$\hat h^{(l+1)}_i = \sigma_{W^{(l)}}\big(\hat\mu^{(l)}_i\big) = \sigma_{W^{(l)}}\Big(\mathbb{E}_{q_{ij}}\Big[\frac{\alpha_{ij}}{q_{ij}}\hat h^{(l)}_j\Big]\Big), \qquad (3)$$
where we write $\sigma_{W^{(l)}}(\cdot)$ to absorb the transform parameter $W^{(l)}$ into the function $\sigma(\cdot)$ for conciseness. As such, one can choose an alternative sampling distribution $q_i=(q_{ij_1},\dots,q_{ij_{|\mathcal{N}_i|}})$ to reduce the variance of an estimator, e.g., the Monte Carlo estimator $\hat\mu^{(l)}_i = \frac{1}{k}\sum_{s=1}^{k}\frac{\alpha_{ij_s}}{q_{ij_s}}\hat h^{(l)}_{j_s}$ with $j_s\sim q_i$. Taking the expectation over $q_i$, we define the variance of $\hat\mu^{(l)}_i = \frac{\alpha_{ij_s}}{q_{ij_s}}\hat h^{(l)}_{j_s}$ at step $t$ and layer $(l+1)$ to be
$$\mathbb{V}_t(q_i) = \mathbb{E}\Big[\big\|\hat\mu^{(l)}_i(t)-\mu^{(l)}_i(t)\big\|^2\Big] = \mathbb{E}\Big[\Big\|\frac{\alpha_{ij_s}(t)}{q_{ij_s}}h^{(l)}_{j_s}(t)-\sum_{j\in\mathcal{N}_i}\alpha_{ij}(t)\,h^{(l)}_j(t)\Big\|^2\Big]. \qquad (4)$$
Note that the $\alpha_{ij}$ and $h_j$ inferred during training may vary over steps $t$; we make the step $t$ and layer $l$ explicit only when necessary. By expanding Eq. (4), one can write $\mathbb{V}(q_i)$ as the difference of two terms. The first is a function of $q_i$, which we refer to as the effective variance,
$$\mathbb{V}_e(q_i) = \sum_{j\in\mathcal{N}_i}\frac{1}{q_{ij}}\,\alpha_{ij}^2\,\|h_j\|^2, \qquad (5)$$
while the second does not depend on $q_i$; we denote it by $\mathbb{V}_c = \big\|\sum_{j\in\mathcal{N}_i}\alpha_{ij}h_j\big\|^2$. The optimal sampling distribution [6, 12] at layer $(l+1)$ for vertex $i$ that minimizes the variance is
$$q^\star_{ij} = \frac{\alpha_{ij}\,\|h^{(l)}_j\|_2}{\sum_{k\in\mathcal{N}_i}\alpha_{ik}\,\|h^{(l)}_k\|_2}. \qquad (6)$$
However, evaluating this distribution is infeasible because the embeddings of all neighbors, which appear in the denominator of Eq. (6), are not available. Moreover, the $\alpha_{ij}$ in attentive GNNs also vary during training. Existing layer sampling approaches based on importance sampling simply ignore the effect of the embedding norms and assume the $\alpha_{ij}$ are fixed during training. As a result, their sampling distributions are suboptimal and applicable only to GCNs, where the weights are fixed. Note that our derivation above follows the setting of node-wise sampling approaches [11], but the claim continues to hold for layer-wise sampling approaches [6, 12, 23].
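As a sanity check on Eqs. (5) and (6), here is a small NumPy sketch (function names are ours) that computes the effective variance of a candidate sampler and the oracle optimal distribution; on random data, the optimal distribution never yields a larger effective variance than the uniform one.

```python
import numpy as np

def effective_variance(q, alpha_i, H_nbrs):
    # V_e(q_i) = sum_j alpha_ij^2 ||h_j||^2 / q_ij   (Eq. (5))
    return float(np.sum(alpha_i**2 * np.sum(H_nbrs**2, axis=1) / q))

def optimal_q(alpha_i, H_nbrs):
    # q*_ij proportional to alpha_ij ||h_j||_2   (Eq. (6)); requires *all*
    # neighbor embeddings, which is exactly what is unavailable when sampling.
    w = alpha_i * np.linalg.norm(H_nbrs, axis=1)
    return w / w.sum()

rng = np.random.default_rng(0)
alpha_i = rng.random(8); alpha_i /= alpha_i.sum()
H_nbrs = rng.normal(size=(8, 16))
uniform = np.full(8, 1 / 8)
print(effective_variance(uniform, alpha_i, H_nbrs),
      effective_variance(optimal_q(alpha_i, H_nbrs), alpha_i, H_nbrs))
```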
3 Related Works

We summarize three lines of work on training graph neural networks. First, several "layer sampling" approaches [11, 6, 12, 23] have been proposed to alleviate the "neighbor explosion" problem. Given a minibatch of labeled vertices at each iteration, these approaches sample neighbors layer by layer in a top-down manner. In particular, node-wise samplers [11] randomly sample neighbors in the lower layer given each node in the upper layer, while layer-wise samplers [6, 12, 23] leverage importance sampling to sample neighbors in the lower layer given all the nodes in the upper layer, with the sample sizes of the layers independent of each other. Empirically, layer-wise samplers can work even worse than node-wise samplers [5], and one can set an appropriate sample size for each layer to alleviate the growth issue of node-wise samplers. In this paper, we focus on optimizing the variance in the vein of layer sampling approaches. Though the derivation of our bandit samplers follows node-wise samplers, it can be extended to the layer-wise setting; we leave this extension as future work. Second, Chen et al. [5] proposed a variance-reduced estimator that maintains historical embeddings of each vertex, based on the assumption that the embedding of a single vertex stays close to its history. This estimator uses a simple random sampler and works efficiently in practice, at the expense of extra storage that is linear in the number of nodes. Third, two "graph sampling" approaches [7, 22] first cut the graph into partitions [7] or sample subgraphs [22], and then train models on those partitions or subgraphs in batch mode [13]. They show that the training time per epoch may be much faster than that of layer sampling approaches. Their drawbacks are as follows: first, the partitioning of the original graph can be sensitive to the training problem; second, these approaches assume that all vertices in the graph are labeled, whereas in practice only some vertices may have labels [14].

GNN architectures. Readers interested in work on GNN architectures are referred to the comprehensive survey [21]. Existing sampling approaches work only on GCNs, not on more advanced architectures such as GAT [20].

4 Variance Reduced Samplers as Bandit Problems

We formulate the optimization of the sampling variance as a bandit problem. The optimal variance requires knowledge of all the neighbors' embeddings, which is computationally infeasible; our chance is to exploit the good neighbors that have been sampled. Our basic idea is that, instead of explicitly calculating the intractable optimal sampling distribution of Eq. (6) at each iteration, we optimize a sampler or policy $Q_i^t$ for each vertex $i$ over the horizon $1\le t\le T$, so that the variance of the estimator following this sampler asymptotically approaches the optimum $Q_i^\star = \arg\min_{Q_i}\sum_{t=1}^{T}\mathbb{V}^t_e(Q_i)$, in the sense that $\sum_{t=1}^{T}\mathbb{V}^t_e(Q_i^t)\le c\sum_{t=1}^{T}\mathbb{V}^t_e(Q_i^\star)$ for some constant $c>1$. Each action of the policy $Q_i^t$ is a choice of a $k$-element set of sampled neighbors $S_i\subset\mathcal{N}_i$, where $S_i\sim Q_i^t$. We denote by $Q_{i,S_i}(t)$ the probability that $v_i$ chooses $S_i$ at step $t$. The gap to be minimized between the effective variance and the oracle is
$$\mathbb{V}^t_e(Q_i^t)-\mathbb{V}^t_e(Q_i^\star) \le \big\langle Q_i^t-Q_i^\star,\ \nabla_{Q_i^t}\mathbb{V}^t_e(Q_i^t)\big\rangle. \qquad (7)$$
Note that $\mathbb{V}^t_e(Q_i^t)$ is convex w.r.t. $Q_i^t$; hence for $Q_i^t$ and $Q_i^\star$ we have the upper bound on the right-hand side of Eq. (7). We define this upper bound as the regret at step $t$: the expected loss (negative reward) under policy $Q_i^t$ minus the expected loss under the optimal policy $Q_i^\star$. Hence the reward for choosing $S_i$ at step $t$ is the negative derivative of the effective variance, $r_{i,S_i}(t)=-\nabla_{Q_{i,S_i}(t)}\mathbb{V}^t_e(Q_i^t)$. In the following, we treat this bandit problem in the adversarial bandit setting [1], because the rewards vary as training proceeds and do not follow an a priori fixed distribution [4]; we leave the study of other bandit settings as future work. We show in Section 6 that, with this regret, the variances of our estimators asymptotically approach the optimal variance within a factor of 3. Following importance sampling, both of our samplers maintain the alternative sampling distribution $q_i^t=(q_{ij_1}(t),\dots,q_{ij_{|\mathcal{N}_i|}}(t))$ for each vertex $v_i$ over steps $t$. We instantiate this framework under two bandit settings.
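Before instantiating the two settings, note that because $\mathbb{V}_e$ in Eq. (5) is a sum of terms $\alpha_{ij}^2\|h_j\|^2/q_{ij}$, the per-arm reward (its negative partial derivative) has a simple closed form. The sketch below is our own helper, with the $1/k$ factor of the $k$-repeat estimator of Section 5.1 included:

```python
import numpy as np

def arm_reward(q_ij, alpha_ij, h_j, k=1):
    # r_ij(t) = -dV_e/dq_ij = alpha_ij^2 ||h_j||^2 / (k * q_ij^2)  (cf. Eq. (9))
    return alpha_ij**2 * float(np.dot(h_j, h_j)) / (k * q_ij**2)
```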
(1) In the adversarial MAB setting [1], we define the sampler $Q_i^t$ as $q_i^t$ itself, which samples exactly one arm (neighbor) $v_{j_s}\in\mathcal{N}_i$ from $q_i^t$; in this case the set $S_i$ is the singleton $\{v_{j_s}\}$. To obtain a sample of $k$ neighbors, we repeat this process $k$ times. After collecting the $k$ rewards $r_{ij_s}(t)=-\nabla_{q_{i,j_s}(t)}\mathbb{V}^t_e(q_i^t)$, we update $q_i^t$ by EXP3 [1]. (2) In the adversarial MAB with multiple plays setting [19], we use an efficient $k$-combination sampler, DepRound [10], as $Q$; it samples a $k$-element subset $S\subset\{1,2,\dots,K\}$ satisfying $\sum_{S:\, j\in S}Q_S = q_j$ for all $j\in\{1,2,\dots,K\}$, where $q_j$ is the alternative probability of sampling $j$. This allows us to select a set of $k$ distinct arms (neighbors) out of the $\binom{K}{k}$ possible subsets at once, and the selection can be done in $O(K)$. After collecting the reward $-\nabla_{Q_{i,S_i}(t)}\mathbb{V}^t_e(Q_i^t)$, we update $q_i^t$ by EXP3.M [19].

Discussion. We have to select a sample of $k$ neighbors in GNNs. In the MAB setting, exactly one neighbor should be selected before the policy is updated; hence, strictly speaking, applying the MAB to our problem is not rigorous. Applying the MAB with multiple plays is rigorous, because it samples $k$ neighbors at once and updates with the rewards together.

5 Algorithms

The framework of our algorithm is: (1) pick $k$ arms with a sampler based on the alternative sampling distribution $q_i^t$ for each vertex $v_i$; (2) form the unbiased estimator; (3) perform the forward and backward passes; and (4) calculate the rewards and update the sampler with a proper bandit algorithm. This framework is shown in Algorithm 1. Note that the variance w.r.t. $q_i$ in Eq. (4) is defined only at the $(l+1)$-th layer, so in principle we should maintain a separate $q_i$ for each layer. In practice, we find that maintaining a single $q_i$ and updating it only with rewards from the 1st layer works well enough. The time complexity of our algorithm is the same as that of node-wise approaches [11]. In addition, it requires $O(|\mathcal{E}|)$ storage to maintain the alternative sampling distributions, where $|\mathcal{E}|$ is the number of edges used in the message passing operations of the GNN; beyond that, no further storage is required. This holds even for more sophisticated architectures in which messages are passed between neighbors beyond one hop. It remains to instantiate the estimators, variances, and rewards for our two bandit settings. We name the first algorithm GNN-BS (adversarial MAB setting) and the second GNN-BS.M (adversarial MAB with multiple plays). We first assume the weights $\alpha_{ij}$ are fixed, and then extend to attentive GNNs, where the $\alpha_{ij}(t)$ change.

Algorithm 1 Bandit Samplers for Training GNNs.
Require: steps $T$, sample size $k$, number of layers $L$, node features $H^{(0)}$, adjacency matrix $A$.
1: Initialize: $q_{ij}(1)=1/|\mathcal{N}_i|$ if $j\in\mathcal{N}_i$ else $0$; $w_{ij}(1)=1$ if $j\in\mathcal{N}_i$ else $0$.
2: for $t=1$ to $T$ do
3:   Read a minibatch of labeled vertices at layer $L$.
4:   Use the sampler $q_i^t$ or DepRound$(k, q_i^t)$ to sample neighbors top-down with sample size $k$.
5:   Run the GNN forward pass via the estimators defined in Eq. (8) or Proposition 1.
6:   Backpropagate and update the GNN model.
7:   for each $v_i$ in the 1st layer do
8:     Collect $v_i$'s $k$ sampled neighbors $v_j\in S_i^t$ and rewards $r_i^t=\{r_{ij}(t): v_j\in S_i^t\}$.
9:     Update $q_i^{t+1}$ and $w_i^{t+1}$ by EXP3$(q_i^t, w_i^t, r_i^t, S_i^t)$ or EXP3.M$(q_i^t, w_i^t, r_i^t, S_i^t)$.
10:   end for
11: end for
12: return the GNN model.

5.1 GNN-BS: Graph Neural Networks with Bandit Sampler

In this setting, we choose one arm and repeat $k$ times, giving the Monte Carlo estimator
$$\hat\mu_i = \frac{1}{k}\sum_{s=1}^{k}\frac{\alpha_{ij_s}}{q_{ij_s}}\hat h_{j_s},\qquad j_s\sim q_i. \qquad (8)$$
This yields the variance $\mathbb{V}(q_i)=\frac{1}{k}\,\mathbb{E}_{q_i}\big[\|\frac{\alpha_{ij_s}}{q_{ij_s}}h_{j_s}-\sum_{j\in\mathcal{N}_i}\alpha_{ij}h_j\|^2\big]$. Following Eq. (5) and Eq. (7), the reward of $v_i$ for picking neighbor $v_j$ at step $t$ is
$$r_{ij}(t) = -\nabla_{q_{ij}(t)}\mathbb{V}^t_e(q_i^t) = \frac{\alpha_{ij}^2}{k\,q_{ij}(t)^2}\,\|h_j(t)\|^2. \qquad (9)$$
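For concreteness, below is a sketch of the two sampling/update primitives that Algorithm 1 relies on: a dependent-rounding subset sampler in the spirit of DepRound [10], and a bare-bones exponential-weights update in the spirit of EXP3 [1]. Both are schematic reconstructions from the cited papers' descriptions, not the authors' code; EXP3 and EXP3.M additionally mix in uniform exploration and cap large probabilities, which we omit here.

```python
import numpy as np

def depround(p, rng, eps=1e-12):
    """Sample a k-subset whose inclusion probabilities match the marginals p
    (p in [0,1]^K with sum(p) = k). Each step makes at least one of two
    fractional entries integral while preserving all expectations, so the
    loop terminates after O(K) steps."""
    p = np.asarray(p, dtype=float).copy()
    while True:
        frac = np.flatnonzero((p > eps) & (p < 1 - eps))
        if len(frac) < 2:
            break
        i, j = frac[0], frac[1]
        a, b = min(1 - p[i], p[j]), min(p[i], 1 - p[j])
        if rng.random() < b / (a + b):
            p[i] += a; p[j] -= a
        else:
            p[i] -= b; p[j] += b
    return np.flatnonzero(p > 1 - eps)          # the indices rounded up to 1

def exp3_update(w, q, chosen, rewards, delta):
    """Exponential-weights update on the chosen arms, importance-weighted."""
    for j, r in zip(chosen, rewards):
        w[j] *= np.exp(delta * r / q[j])        # unbiased reward estimate r/q_j
    return w / w.sum()
```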
5.2 GNN-BS.M: Graph Neural Networks with Multiple Plays Bandit Sampler

Given a vertex $v_i$, an important property of DepRound is that it satisfies $\sum_{S_i:\, j\in S_i} Q_{i,S_i} = q_{ij}$ for all $v_j\in\mathcal{N}_i$, where $S_i\subset\mathcal{N}_i$ is any subset of size $k$. We have the following unbiased estimator.

Proposition 1. $\hat\mu_i = \sum_{j_s\in S_i}\frac{\alpha_{ij_s}}{q_{ij_s}}h_{j_s}$ is an unbiased estimator of $\mu_i=\sum_{j\in\mathcal{N}_i}\alpha_{ij}h_j$, given that the selected $k$-subset of neighbors $S_i$ of vertex $i$ is sampled from $Q_i$ with DepRound.

The effective variance of this estimator is $\mathbb{V}_e(Q_i)=\sum_{S_i\subset\mathcal{N}_i}Q_{i,S_i}\big\|\sum_{j_s\in S_i}\frac{\alpha_{ij_s}}{q_{ij_s}}h_{j_s}\big\|^2$. Since the derivative of this effective variance w.r.t. $Q_{i,S_i}$ does not factorize, we instead use the following approximation, obtained via Jensen's inequality.

Proposition 2. The effective variance can be approximated by $\mathbb{V}_e(Q_i)\le\sum_{j_s\in\mathcal{N}_i}\frac{\alpha_{ij_s}}{q_{ij_s}}\|h_{j_s}\|^2$.

Proposition 3. The negative derivative of the approximated effective variance $\sum_{j_s\in\mathcal{N}_i}\frac{\alpha_{ij_s}}{q_{ij_s}}\|h_{j_s}\|^2$ w.r.t. $Q_{i,S_i}$, i.e., the reward of $v_i$ for choosing $S_i$ at step $t$, is $r_{i,S_i}(t)=\sum_{j_s\in S_i}\frac{\alpha_{ij_s}}{q_{ij_s}(t)^2}\|h_{j_s}(t)\|^2$.

Following EXP3.M, we use the per-arm reward $r_{ij}(t)=\frac{\alpha_{ij}}{q_{ij}(t)^2}\|h_j(t)\|^2$ for all $j\in S_i$. Our proofs rely on the property of DepRound introduced above.

5.3 Extension to Attentive GNNs

In this section, we extend our algorithms to attentive GNNs. The remaining issue is that the attention value $\alpha_{ij}$ cannot be evaluated from the sampled neighborhood alone; we can only compute the unnormalized attentions $\tilde\alpha_{ij}$. We define the adjusted feedback attention values as
$$\alpha'_{ij} = \sum_{j\in S_i} q_{ij}\cdot\frac{\tilde\alpha_{ij}}{\sum_{j\in S_i}\tilde\alpha_{ij}}, \qquad (10)$$
where the unnormalized attention values $\tilde\alpha_{ij}$ can readily be evaluated once $(v_i,v_j)$ has been sampled. We use $\sum_{j\in S_i}q_{ij}$ as a surrogate for $\frac{\sum_{j\in S_i}\tilde\alpha_{ij}}{\sum_{j\in\mathcal{N}_i}\tilde\alpha_{ij}}$, so that the adjusted attention values $\alpha'_{ij}$ approximate the true attention values $\alpha_{ij}$.

6 Regret Analysis

As described in Section 4, the regret is defined as $\langle Q_i^t-Q_i^\star,\nabla_{Q_i^t}\mathbb{V}^t_e(Q_i^t)\rangle$. Choosing the reward as the negative derivative of the effective variance, we have the following theorem, which states that our bandit sampling algorithms asymptotically approximate the optimal variance within a factor of 3.

Theorem 1. Using Algorithm 1 with $\eta=0.4$ and $\delta=\sqrt{\frac{(1-\eta)\,\eta^4 k^5\ln(n/k)}{T n^4}}$ to minimize the effective variance with respect to $\{Q_i^t\}_{1\le t\le T}$, we have
$$\sum_{t=1}^{T}\mathbb{V}^t_e(Q_i^t)\ \le\ 3\sum_{t=1}^{T}\mathbb{V}^t_e(Q_i^\star) + 10\sqrt{\frac{T n^4\ln(n/k)}{k^3}}, \qquad (11)$$
where $T\ge \ln(n/k)\,n^2(1-\eta)/(k\eta^2)$ and $n=|\mathcal{N}_i|$.

Our proof follows [16] by upper- and lower-bounding the potential function. The upper and lower bounds are functions of the alternative sampling probability $q_{ij}(t)$ and the reward $r_{ij}(t)$, respectively. Multiplying the two bounds by the optimal sampling probability $q_i^\star$ and using the reward definition in Eq. (9) yields the upper bound on the effective variance. The growth of this regret is sublinear in $T$, and the regret decreases polynomially as the sample size $k$ grows. Note that the number of neighbors $n$ is well bounded in practical graphs and can be treated as a moderate constant. Compared with existing layer sampling approaches, this is the first work to optimize the sampling variance of GNNs towards the optimum; we report the empirical sampling variances in the experiments.
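As a companion to Theorem 1, one can empirically track how close the learned sampler's effective variance is to the oracle's. A tiny helper of our own, reusing `effective_variance` and `optimal_q` from the Section 2 snippet:

```python
def variance_ratio(q_learned, alpha_i, H_nbrs):
    """V_e(q_t) / V_e(q*); Theorem 1 bounds the cumulative version of this
    quantity by a factor of 3 plus a sublinear term."""
    v_t = effective_variance(q_learned, alpha_i, H_nbrs)
    v_star = effective_variance(optimal_q(alpha_i, H_nbrs), alpha_i, H_nbrs)
    return v_t / v_star
```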
7 Experiments

In this section, we conduct extensive experiments against state-of-the-art approaches to show the advantage of our training approaches. We name our approaches by the GNN architecture plus the bandit sampler: for example, GCN-BS, GAT-BS, and GP-BS denote the training approaches for GCN, GAT [20], and GeniePath [15], respectively. Our implementations are available at https://github.com/xavierzw/gnn-bs. We run all experiments on one machine with an Intel Xeon E5-2682 and 512GB RAM. Since the major purpose of this paper is to compare the effect of our samplers with that of existing training algorithms, we compare them by training the same GNN architecture. Unless otherwise stated, we use the following architectures: we fix the number of layers to 2, as in [13], for all comparison algorithms, and set the dimension of the hidden embeddings to 16 for Cora and Pubmed and 256 for PPI, Reddit, and Flickr. For a fair comparison, we do not use the normalization layer [2] used in some works [5, 22]. For attentive GNNs, we use the attention layer proposed in GAT, with the number of attention heads set to 1 for simplicity.

We report results on five benchmark datasets: Cora [18], Pubmed [18], PPI [11], Reddit [11], and Flickr [22]. We follow the standard data splits and summarize the statistics in Table 1. The comparison algorithms are as follows. (1) GraphSAGE [11] is a node-wise layer sampling approach with a random sampler. (2) FastGCN [6], LADIES [23], and AS-GCN [12] are layer sampling approaches based on importance sampling. (3) S-GCN [5] can be viewed as an optimization solver for training GCNs based on a simple random sampler. (4) ClusterGCN [7] and GraphSAINT [22] are "graph sampling" techniques that first partition or sample the graph into small subgraphs and then train each subgraph with the batch algorithm [13]. (5) The open-source algorithms that support training attentive GNNs are AS-GCN and GraphSAINT; we denote these variants by AS-GAT and GraphSAINT-GAT.

We perform a grid search over the following hyperparameters for each algorithm: the learning rate {0.01, 0.001}, the penalty weight on the $\ell_2$-norm regularizers {0, 0.0001, 0.0005, 0.001}, and the dropout rate {0, 0.1, 0.2, 0.3}. Following the existing implementations, we save the model that achieves the best validation results and restore it to report results on the testing data in Section 7.1. For the sample size in GraphSAGE, S-GCN, and our algorithms, we use 1 for Cora and Pubmed, 5 for Flickr, and 10 for PPI and Reddit. For FastGCN/LADIES and AS-GCN/AS-GAT, we set the sample sizes of the first and second layers to 256 and 256 for Cora and Pubmed, 1,900 and 3,800 for PPI, 780 and 1,560 for Flickr, and 2,350 and 4,700 for Reddit. We set the batch size of all the layer sampling approaches and S-GCN to 256 for all datasets. For ClusterGCN, we set the partitions according to the suggestions in [7] for PPI and Reddit, and by grid search to 10 for Cora and Pubmed and 200 for Flickr. We set the architecture of GraphSAINT to "0-1-1" (an MLP layer followed by two graph convolution layers) and use the "rw" (random walk) sampling strategy, reported as best in the original paper, with the number of roots and the walk length set as that paper suggests.

7.1 Results on Benchmark Data

We report the testing results for the GCN and attentive GNN architectures in Table 2 and Table 3, respectively. We run each algorithm 3 times and report the mean and standard deviation. The results for the two-layer GCN architecture show that our GCN-BS performs best on most of the datasets.
Compared with layer sampling approaches, GCN-BS performs significantly better on relatively dense graphs such as PPI and Reddit, which shows the efficiency of our sampler in selecting neighbors. The results for the two-layer attentive GNN architecture show the superiority of our algorithms in training more complex GNN architectures. Neither GraphSAINT-GAT nor AS-GAT computes the softmax of the learned weights; they simply use the unnormalized weights to perform the aggregation. As a result, most of the AS-GAT and GraphSAINT-GAT results in Table 3 are worse than their results in Table 2. Thanks to the power of attentive structures in GNNs, our algorithms achieve the best results on PPI and Flickr.

7.2 Convergence

In this section, we analyze the convergence of the comparison algorithms on the two-layer GCN and attentive GNN architectures, in terms of epochs, in Figure 1. We run all algorithms 3 times and show the mean and standard deviation. Our approaches converge much faster, with lower variance, on most datasets. The GNN-BS algorithms perform very similarly to GNN-BS.M, even though, strictly speaking, GNN-BS does not follow the rigorous MAB setting. The convergence on validation data in terms of wall-clock time (seconds), compared with the layer sampling approaches, shows similar results (Fig. 2).

7.3 Sample Size Analysis

We analyze the sampling variances and accuracy as the sample size varies, using the PPI data. Note that existing layer sampling approaches do not optimize the variance once the sampler is specified; as a result, their variances are simply fixed [23], while our approaches asymptotically approach the optimum. For comparison, we train our models until convergence and then compute the average sampling variances. We show the results in Figure 3 (left and middle), grouped into two categories: results for GCNs and for attentive GNNs. Within each group, our approaches attain smaller sampling variances, which explains the Micro F1 performance of our approaches. Note that the overall sampling variances of node-wise approaches are substantially lower than those of layer-wise approaches.

To further examine convergence when we simulate graphs with different degrees and fix the sample size across algorithms, we set up the following experiment. We randomly generate 100 labeled nodes $\{1,\dots,i,\dots,100\}$, with each $\mu_i$ sampled uniformly from $[-10, 10]$. For each labeled node $i$ we generate $k$ neighbors whose features are 1-dimensional real scalars sampled from $\mathrm{Uniform}(\mu_i-\sigma,\,\mu_i+\sigma)$ with $\sigma=5$. Each node's label is generated by simply averaging its neighbors' scalar features. We use a GCN architecture with mean aggregators and compare the convergence (mean squared error loss) with that of a random sampler as $k$ increases from 50 to 100 and 200, in Figure 3 (right). All samplers use a fixed sample size of 10. The results show that our bandit sampler works much better than a uniform sampler on graphs of varying degree.

8 Conclusions

In this paper, we show that the optimal layer samplers based on importance sampling for training general graph neural networks are computationally intractable, since they require all the neighbors' hidden embeddings or learned weights. Instead, we reformulate the sampling problem as a bandit problem that requires only partial knowledge of the neighbors being sampled. We propose two algorithms, based on the multi-armed bandit and on the MAB with multiple plays.
We show that the variance of our bandit samplers asymptotically approaches the optimum within a factor of 3. We empirically show that our algorithms achieve much better convergence, with much lower variance, compared with state-of-the-art approaches.

Broader Impact

This paper presents an approach for fast training of graph neural networks with theoretical guarantees. It may influence training approaches for any model based on message passing. Graph neural networks may have positive impacts on recommender systems, protein analysis, fraud detection, and so on. This work does not present any foreseeable negative societal consequence.

Acknowledgments and Disclosure of Funding

This work is supported by Ant Group.
1. What is the focus and contribution of the paper regarding variance reduced samplers for training GCNs and attentive GNNs?
2. What are the strengths of the proposed approach, particularly in its application to GAT and GeniePath?
3. What are the weaknesses of the paper, especially regarding experimentation and scalability?
4. Do you have any concerns about the notation used in the paper, such as in Equation 4?
5. How does the reviewer assess the clarity and quality of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions: The paper describes variance-reduced samplers for training GCNs and attentive GNNs, formulating the optimisation of the samplers as a bandit problem and proposing two multi-armed bandit algorithms.

Strengths: The paper is well written. The idea of using a multi-armed bandit to sample the neighbours when training GCNs and attentive GNNs is interesting. The technical details and equations appear to be correct. The authors applied their multi-armed bandit approach to GAT and GeniePath and showed its effectiveness over layer sampling approaches.

Weaknesses: The authors conduct experiments with a 2-layer architecture. However, some problems and datasets may require a more complex architecture, and the authors could clarify how the proposed algorithm performs in terms of scalability and computational cost. The notation used in the paper can at times be confusing; for example, in Equation 4, alpha_ij is a value, not a function, so alpha_ij(t) could be written explicitly as a function of t. It is mentioned that the rewards vary as training proceeds, and it would have been interesting to explore how simple bandit algorithms perform in these experiments, or how to apply them here; the authors could try to adapt a simple eps-greedy method to this problem. In the algorithm based on the MAB with multiple plays, a combinatorial set of neighbors of size k needs to be chosen; the size parameter k is a model hyperparameter that needs to be tuned on a validation set, and the authors could clarify how to select k.
NIPS
Title
Hyperbolic Procrustes Analysis Using Riemannian Geometry

Abstract
Label-free alignment between datasets collected at different times, locations, or by different instruments is a fundamental scientific task. Hyperbolic spaces have recently provided a fruitful foundation for the development of informative representations of hierarchical data. Here, we take a purely geometric approach to label-free alignment of hierarchical datasets and introduce hyperbolic Procrustes analysis (HPA). HPA consists of new implementations of the three prototypical Procrustes analysis components, translation, scaling, and rotation, based on the Riemannian geometry of the Lorentz model of hyperbolic space. We analyze the proposed components, highlighting their useful properties for alignment. The efficacy of HPA, its theoretical properties, stability, and computational efficiency are demonstrated in simulations. In addition, we showcase its performance on three batch correction tasks involving gene expression and mass cytometry data. Specifically, we demonstrate high-quality unsupervised batch effect removal from data acquired at different sites and with different technologies that outperforms recent methods for label-free alignment in hyperbolic spaces.

1 Introduction

A key scientific task in modern data analysis is the alignment of data. The need for alignment often arises because data are acquired in multiple domains, under different environmental conditions, using various acquisition equipment, and at different sites. This paper focuses on the problem of label-free alignment of data embedded in hyperbolic spaces. Hyperbolic spaces have recently gained prominence in geometric representation learning. These non-Euclidean spaces have become popular because they provide a natural embedding of hierarchical data, thanks to the exponential growth of the lengths of their geodesic paths [41, 42, 30, 15, 14, 6, 32]. The problem of aligning data embedded in hyperbolic spaces has been studied extensively, e.g., in the context of natural language processing [49], ontology matching [10], matching two data modalities [40], and improving the embedding in hyperbolic spaces [2]. A few of these studies are based on optimal transport (OT) [2, 22], a classical problem in mathematics [38] that has recently reemerged in modern data analysis, e.g., for domain adaptation [7]. Despite its increasing usage, OT for unsupervised alignment is fundamentally limited [54], since OT (like any density matching approach) cannot recover volume-preserving maps [3, 4, 36].

In this paper, we resort to Procrustes analysis (PA) [17, 18], which is based on purely geometric considerations. PA has been widely used for aligning datasets by eliminating shift, scaling, and rotational factors. Over the years, it has been successfully applied in various applications, e.g., image registration [34], manifold alignment [52], shape matching [35], domain adaptation [47], and manifold learning [27], to name but a few. Here, we address the problem of label-free matching of hierarchical data embedded in hyperbolic spaces. We present hyperbolic Procrustes analysis (HPA), a new PA method in the Lorentz model of hyperbolic geometry. The main novelty lies in the introduction of new implementations of the three prototypical PA components based on Riemannian geometry. Specifically, translation is viewed as Riemannian mean alignment, implemented using parallel transport (PT).
Scaling is determined with respect to geodesic paths, and rotation is treated as moment alignment after mapping the tangent space of the manifold to a Euclidean vector space. Our analysis provides new derivations in the Riemannian geometry of the Lorentz model and specifies the commuting properties of the HPA components. We show that HPA, compared to existing baselines and OT-based methods, achieves improved alignment in a purely unsupervised setting. In addition, it has a natural and stable out-of-sample extension, it supports both small and big data, and it is computationally efficient. We show an application to batch correction in bioinformatics. We present results on both gene expression and mass cytometry (CyTOF) data, exemplifying the generality and broad scope of our method. In contrast to recent works [28, 50], our method does not require landmark correspondence, which is often unavailable or hard to obtain. Specifically, we show that batch effects caused by acquisition with different technologies, at different sites, and at different times can be accurately removed, while the intrinsic structure of the data is preserved.

Our main contributions are as follows. (i) We present a new implementation of PA using the Riemannian geometry of the Lorentz model for unsupervised, label-free alignment of hierarchical data. (ii) We provide theoretical analysis and justification of our alignment method based on new derivations of Riemannian geometry operations in the Lorentz model; these derivations have their own merit, as they could be used in other contexts. (iii) We show experimental results of accurate batch effect removal from several hierarchical bioinformatics datasets without landmark correspondence.

2 Background on hyperbolic geometry

Hyperbolic space is a non-Euclidean space with negative constant sectional curvature and an underlying geometry that describes tree-like graphs with small distortions [46]. There are four commonly used models of hyperbolic space: the Poincaré disk model, the Lorentz (hyperboloid) model, the Poincaré half-plane model, and the Beltrami-Klein model. The four models are equivalent, and there exist transformations between them. Here, we consider the Lorentz model, and specifically the upper sheet of the hyperboloid, because its basic Riemannian operations have simple closed-form expressions and the computation of geodesic distances is stable [42, 30]. Formally, the upper sheet of the hyperboloid model of a $d$-dimensional hyperbolic space is defined by $\mathbb{L}^d := \{x\in\mathbb{R}^{d+1} \mid \langle x,x\rangle_{\mathbb{L}} = -1,\ x(1)>0\}$, where $\langle x,y\rangle_{\mathbb{L}} = x^\top H y$ is the Lorentzian inner product and $H\in\mathbb{R}^{(d+1)\times(d+1)}$ is defined by $H = [-1,\,\mathbf{0}^\top;\ \mathbf{0},\, I_d]$. The Lorentzian norm of a vector $x$ is denoted by $\|x\|_{\mathbb{L}} = \sqrt{\langle x,x\rangle_{\mathbb{L}}}$, and the origin is $\mu^0 = [1,\mathbf{0}^\top]^\top\in\mathbb{L}^d$. Let $T_x\mathbb{L}^d$ be the tangent space at $x\in\mathbb{L}^d$, defined by $T_x\mathbb{L}^d := \{v \mid \langle x,v\rangle_{\mathbb{L}} = 0\}$. For $x\in\mathbb{L}^d$ and $v\in T_x\mathbb{L}^d$, the geodesic path $\gamma:\mathbb{R}^+_0\to\mathbb{L}^d$ is defined by $\gamma(t) = \cosh(\|v\|_{\mathbb{L}}t)\,x + \sinh(\|v\|_{\mathbb{L}}t)\,\frac{v}{\|v\|_{\mathbb{L}}}$, with $\gamma(0)=x$ and initial velocity $\gamma'(0)=v$, where $\gamma'(t):=\frac{d}{dt}\gamma(t)$. The associated geodesic distance is $d_{\mathbb{L}^d}(x,\gamma_v(t)) = \cosh^{-1}(-\langle x,\gamma_v(t)\rangle_{\mathbb{L}})$. The Exponential map, projecting a point $v\in T_x\mathbb{L}^d$ to the manifold $\mathbb{L}^d$, is given by $\mathrm{Exp}_x(v) = \gamma(1) = \cosh(\|v\|_{\mathbb{L}})\,x + \sinh(\|v\|_{\mathbb{L}})\,\frac{v}{\|v\|_{\mathbb{L}}}$. The Logarithmic map, projecting a point $y\in\mathbb{L}^d$ to the tangent space $T_x\mathbb{L}^d$ at $x$, is defined by $\mathrm{Log}_x(y) = \frac{\cosh^{-1}(\alpha)}{\sqrt{\alpha^2-1}}\,(y-\alpha x)$, where $\alpha = -\langle x,y\rangle_{\mathbb{L}}$.
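These closed-form operations translate directly into code. Below is a minimal NumPy sketch of the Lorentz-model primitives (the function names are ours, and a clamp guards `arccosh` against round-off):

```python
import numpy as np

def inner_L(x, y):
    # Lorentzian inner product <x, y>_L = -x1*y1 + x2*y2 + ... + x_{d+1}*y_{d+1}
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def dist_L(x, y):
    # geodesic distance d(x, y) = arccosh(-<x, y>_L)
    return np.arccosh(np.clip(-inner_L(x, y), 1.0, None))

def exp_map(x, v):
    # Exp_x(v): project the tangent vector v at x onto the hyperboloid
    n = np.sqrt(max(inner_L(v, v), 0.0))        # Lorentzian norm of v
    return x if n < 1e-12 else np.cosh(n) * x + np.sinh(n) * v / n

def log_map(x, y):
    # Log_x(y): inverse of Exp_x, lands in the tangent space T_x L^d
    a = np.clip(-inner_L(x, y), 1.0, None)
    c = dist_L(x, y) / max(np.sqrt(a * a - 1.0), 1e-12)
    return c * (y - a * x)
```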
The PT of a vector $v\in T_x\mathbb{L}^d$ along the geodesic path from $x\in\mathbb{L}^d$ to $y\in\mathbb{L}^d$ is defined by $\mathrm{PT}_{x\to y}(v) = v + \frac{\langle y-\alpha x,\, v\rangle_{\mathbb{L}}}{\alpha+1}(x+y)$, where $\alpha = -\langle x,y\rangle_{\mathbb{L}}$; it keeps the metric tensor unchanged. The Riemannian mean $\bar x_{\mathcal{X}}$ and the corresponding dispersion $d_{\mathcal{X}}$ of a set $\mathcal{X}=\{x_i \mid x_i\in\mathbb{L}^d\}_{i=1}^n$ are defined using the Fréchet mean [13, 33] by
$$\bar x_{\mathcal{X}} := m(\mathcal{X}) = \arg\min_{x\in\mathbb{L}^d}\sum_{i=1}^{n} d^2_{\mathbb{L}^d}(x,x_i) \quad\text{and}\quad d_{\mathcal{X}} := r(\mathcal{X}) = \min_{x\in\mathbb{L}^d}\frac{1}{n}\sum_{i=1}^{n} d^2_{\mathbb{L}^d}(x,x_i), \qquad (1)$$
where $m:\mathcal{X}\to\mathbb{L}^d$ and $r:\mathcal{X}\to\mathbb{R}^+$. Note that the Fréchet mean of samples on connected and compact Riemannian manifolds of non-positive curvature, such as hyperbolic spaces, is guaranteed to exist and is unique [24, 44, 1]. The Fréchet mean is commonly computed by the Karcher flow [24, 20], which is computationally demanding; importantly, in the hyperbolic space considered here, it can be obtained efficiently using an accurate gradient formulation [33].

Given a vector $x\in\mathbb{L}^d$ and a symmetric positive-definite (SPD) matrix $\Sigma\in\mathbb{R}^{d\times d}$, the wrapped normal distribution $\mathcal{G}(x,\Sigma)$ provides a generative model of hyperbolic samples as follows [39, 11]. First, a vector $v'$ is sampled from $\mathcal{N}(\mathbf{0},\Sigma)$. Then, a $0$ is prepended to the vector $v'$ such that $v=[0,\,v'^\top]^\top\in T_{\mu^0}\mathbb{L}^d$. Finally, PT from the origin $\mu^0=[1,\mathbf{0}^\top]^\top$ to $x$ is applied to $v$, and the resulting point is mapped to the manifold using the Exponential map at $x$. The probability density function of this model is given by $\log\mathcal{G}(y\,|\,x,\Sigma) = \log\mathcal{N}(\mathbf{0},\Sigma) - (n-1)\log\big(\frac{\sinh\|v'\|_2}{\|v'\|_2}\big)$.
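A sketch of PT and of sampling from the wrapped normal $\mathcal{G}(x,\Sigma)$ follows, reusing `inner_L` and `exp_map` from the previous snippet (again, the names and the random-generator plumbing are our own):

```python
def pt(x, y, v):
    # PT_{x->y}(v) = v + <y - a*x, v>_L / (a + 1) * (x + y),  a = -<x, y>_L
    a = -inner_L(x, y)
    return v + inner_L(y - a * x, v) / (a + 1.0) * (x + y)

def sample_wrapped_normal(x, Sigma, n, rng):
    d = Sigma.shape[0]
    mu0 = np.eye(d + 1)[0]                       # origin [1, 0, ..., 0]
    V = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    out = np.empty((n, d + 1))
    for i in range(n):
        v = np.concatenate(([0.0], V[i]))        # tangent vector at the origin
        out[i] = exp_map(x, pt(mu0, x, v))       # transport to x, then project
    return out
```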
3 Hyperbolic Procrustes analysis

Existing methods for data alignment typically seek a function that minimizes a certain cost. A large body of work attempts to match the empirical densities of two datasets, e.g., by minimizing the maximum mean discrepancy (MMD) [48, 29] or by solving OT problems [45, 2, 22]. Finding an effective cost function without labels or landmarks is challenging, and minimizing such costs directly often leads to poor alignment in practice (see the illustration in Fig. 1). A different, well-established approach that applies indirect alignment based on geometric considerations is PA. While preparing this manuscript, another method of PA in hyperbolic spaces (PAH) was presented for matching two sets, assuming that they consist of the same number of points and that there exists a point-wise isometric map between them [51]. We remark that the analysis we present here applies to broader settings and makes no such assumptions. See Appendix E for details on classical PA, as well as for comparisons to [51] and to the application of Euclidean PA in the tangent space.

We consider two sets of points $\mathcal{H}^{(1)}=\{h^{(1)}_i\}_{i=1}^{N_1}$ and $\mathcal{H}^{(2)}=\{h^{(2)}_i\}_{i=1}^{N_2}$ in $\mathbb{L}^d$. We aim to find a function $\zeta:\mathbb{L}^d\to\mathbb{L}^d$, consisting of three components (translation, scaling, and rotation), that aligns $\mathcal{H}^{(2)}$ with $\mathcal{H}^{(1)}$ in an unsupervised, label-free manner, as depicted in Fig. 1. Finding such a function can be viewed as an extension of classical PA from the Euclidean space $\mathbb{R}^{d+1}$ to the Lorentz model $\mathbb{L}^d$. A natural extension to multiple sets is described in Section 3.5. The statements below are phrased in the context of the problem at hand; in Appendix A, we restate them more generally and present their proofs.

3.1 Riemannian translation

Let $\bar h^{(1)}$ and $\bar h^{(2)}$ denote the Riemannian means of the sets $\mathcal{H}^{(1)}$ and $\mathcal{H}^{(2)}$, respectively. In the translation component, we find a map $\Gamma_{\bar h^{(2)}\to\bar h^{(1)}}:\mathbb{L}^d\to\mathbb{L}^d$ that aligns the Riemannian means of the sets. In the spirit of [5, 53], we construct $\Gamma_{\bar h^{(2)}\to\bar h^{(1)}}(h^{(2)}_i)$ as the composition of three Riemannian operations in $\mathbb{L}^d$: the Logarithmic map applied to $h^{(2)}_i$ at $\bar h^{(2)}$, PT from $\bar h^{(2)}$ to $\bar h^{(1)}$ along the geodesic path, and the Exponential map applied to the transported point at $\bar h^{(1)}$:
$$\Gamma_{\bar h^{(2)}\to\bar h^{(1)}}(h^{(2)}_i) := \mathrm{Exp}_{\bar h^{(1)}}\big(\mathrm{PT}_{\bar h^{(2)}\to\bar h^{(1)}}\big(\mathrm{Log}_{\bar h^{(2)}}(h^{(2)}_i)\big)\big). \qquad (2)$$
See Fig. B.1 in Appendix B for an illustration. Since the geodesic path between any two points in $\mathbb{L}^d$ is unique [46], $\Gamma_{\bar h^{(2)}\to\bar h^{(1)}}$ is well-defined. The rationale behind combining these three Riemannian operations is twofold. First, PT aligns the means of the sets while preserving their internal structure. Second, the Logarithmic and Exponential maps compose a map whose domain and range are the Lorentz model $\mathbb{L}^d$ rather than the tangent space, as desired. We make these claims formal in the following results.

Proposition 1. The map $\Gamma_{\bar h^{(2)}\to\bar h^{(1)}}$ defined in Eq. (2) aligns the means of the sets, i.e., it satisfies
$$\bar h^{(1)} = m\big(\{\Gamma_{\bar h^{(2)}\to\bar h^{(1)}}(h^{(2)}_i)\}_{i=1}^{N_2}\big), \qquad (3)$$
where $m$ is the function defined in Eq. (1).

Proposition 2. The map $\Gamma_{\bar h^{(2)}\to\bar h^{(1)}}(h^{(2)}_i)$ can be recast, for all $h^{(2)}_i\in\mathcal{H}^{(2)}$, as
$$\Gamma_{\bar h^{(2)}\to\bar h^{(1)}}(h^{(2)}_i) = h^{(2)}_i - \beta(h^{(2)}_i\,|\,\bar h^{(1)},\bar h^{(2)})\,\bar h^{(2)} + \delta(h^{(2)}_i\,|\,\bar h^{(1)},\bar h^{(2)})\,\bar h^{(1)}, \qquad (4)$$
where the functions $\beta$ and $\delta$ are positive, defined by $0<\beta(h^{(2)}_i\,|\,\bar h^{(1)},\bar h^{(2)}) = \big\langle -\frac{\bar h^{(1)}+\bar h^{(2)}}{\alpha+1},\, h^{(2)}_i\big\rangle_{\mathbb{L}}$ and $0<\delta(h^{(2)}_i\,|\,\bar h^{(1)},\bar h^{(2)}) = \big\langle \frac{\bar h^{(1)}-(2\alpha+1)\,\bar h^{(2)}}{\alpha+1},\, h^{(2)}_i\big\rangle_{\mathbb{L}}$, respectively, with $0<\alpha = -\langle \bar h^{(1)},\bar h^{(2)}\rangle_{\mathbb{L}}$.

In addition to providing a compact closed-form expression, Prop. 2 lends the proposed Riemannian translation the interpretation of standard mean alignment in linear vector spaces: the alignment is nothing but subtracting the mean of the source set $\bar h^{(2)}$ from each vector in $\mathcal{H}^{(2)}$ and adding the mean of the target set $\bar h^{(1)}$ (with the appropriate scales).

Proposition 3. The map $\Gamma_{\bar h^{(2)}\to\bar h^{(1)}}$ preserves distances (i.e., it is an isometry):
$$d_{\mathbb{L}^d}(h^{(2)}_i,h^{(2)}_j) = d_{\mathbb{L}^d}\big(\Gamma_{\bar h^{(2)}\to\bar h^{(1)}}(h^{(2)}_i),\ \Gamma_{\bar h^{(2)}\to\bar h^{(1)}}(h^{(2)}_j)\big) \qquad (5)$$
for any two points $h^{(2)}_i,h^{(2)}_j\in\mathcal{H}^{(2)}$.

Let $\gamma(t)$ be the unique geodesic path from $\bar h^{(2)}$ to $\bar h^{(1)}$ such that $\gamma(0)=\bar h^{(2)}$ and $\gamma(1)=\bar h^{(1)}$, and let $\gamma'(0)\in T_{\bar h^{(2)}}\mathbb{L}^d$ and $\gamma'(1)\in T_{\bar h^{(1)}}\mathbb{L}^d$ be the corresponding velocities.

Proposition 4. The map $\Gamma_{\bar h^{(2)}\to\bar h^{(1)}}$ aligns geodesic velocities: given the mappings of the geodesic velocities to the manifold, $\mathbb{L}^d\ni v_0 = \mathrm{Exp}_{\bar h^{(2)}}(\gamma'(0)) = \bar h^{(1)}$ and $\mathbb{L}^d\ni v_1 = \mathrm{Exp}_{\bar h^{(1)}}(\gamma'(1))$, we have
$$\Gamma_{\bar h^{(2)}\to\bar h^{(1)}}(v_0) = v_1. \qquad (6)$$
Isometries are determined only up to rotation, which can be problematic for alignment; for example, any H-unitary matrix [16] induces an isometry of $\mathbb{L}^d$. When landmarks are given, they can be used to resolve this redundancy; in the purely unsupervised setting considered here, other data-driven cues are required. Prop. 4 implies that the proposed PT-based translation fixes some of these rotational degrees of freedom by aligning the geodesic velocities. We revisit this issue in Section 3.3. Now, with a slight abuse of notation, let $\tilde{\mathcal{H}}^{(2)} = \Gamma_{\bar h^{(2)}\to\bar h^{(1)}}(\mathcal{H}^{(2)})$.

Proposition 5. Consider two subsets $A,B\subset\mathcal{H}^{(2)}$ and their translations $\tilde A = \Gamma_{\bar h^{(2)}\to\bar h^{(1)}}(A)$ and $\tilde B = \Gamma_{\bar h^{(2)}\to\bar h^{(1)}}(B)\subset\tilde{\mathcal{H}}^{(2)}$. Let $\bar a = m(A)$, $\bar b = m(B)$, $\bar{\tilde a} = m(\tilde A)$, and $\bar{\tilde b} = m(\tilde B)$ be the Riemannian means of the subsets. Then,
$$\Gamma_{\bar h^{(2)}\to\bar h^{(1)}}\circ\Gamma_{\bar a\to\bar b} = \Gamma_{\bar{\tilde a}\to\bar{\tilde b}}\circ\Gamma_{\bar h^{(2)}\to\bar h^{(1)}}. \qquad (7)$$
In the context of the alignment problem, the importance of Prop. 5 is the following. Suppose the two sets correspond to data measured at two labs (denoted with and without a tilde), and suppose each set was acquired with two types of equipment (denoted by $A$ and $B$). Prop. 5 implies that aligning data from the different labs and then aligning data acquired using the different equipment is equivalent to first aligning the different equipment and then the different labs; i.e., either order of the two alignments produces the same result. This may seem natural in Euclidean spaces, but on a Riemannian manifold it is not a trivial result, and it holds for transport along the geodesic path (see Appendix A for counter-examples).
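The translation of Eq. (2) is a short composition of the primitives sketched in Section 2. Below is an illustrative version, together with a naive Karcher-flow Fréchet mean; the paper uses the faster solver of [33], so `frechet_mean` here is our simplified stand-in:

```python
def frechet_mean(X, iters=100):
    # naive Karcher flow: average the Log maps, step along the mean direction
    m = X[0]
    for _ in range(iters):
        g = np.mean([log_map(m, x) for x in X], axis=0)
        m = exp_map(m, g)
    return m

def translate(H2, m_src, m_tgt):
    # Gamma_{m_src -> m_tgt}: Log at the source mean, PT along the geodesic,
    # Exp at the target mean (Eq. (2))
    return np.array([exp_map(m_tgt, pt(m_src, m_tgt, log_map(m_src, h)))
                     for h in H2])
```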
3.2 Riemannian scaling

Let $d^{(1)}$ and $d^{(2)}$ denote the Riemannian dispersions of $\mathcal{H}^{(1)}$ and $\mathcal{H}^{(2)}$. By Propositions 1 and 3, $\bar h^{(1)}$ and $d^{(2)}$ are the mean and dispersion of $\tilde{\mathcal{H}}^{(2)}$. Here, our goal is to align the Riemannian dispersions of $\mathcal{H}^{(1)}$ and $\tilde{\mathcal{H}}^{(2)}$. For this purpose, we propose the scaling function $\Upsilon^s_{\bar h^{(1)}}:\mathbb{L}^d\to\mathbb{L}^d$, given by
$$\Upsilon^s_{\bar h^{(1)}}(\tilde h^{(2)}_i) = \gamma_i(s), \qquad (8)$$
where $s=\sqrt{d^{(1)}/d^{(2)}}$ is the scaling factor and $\gamma_i(t)$ is the geodesic path from $\bar h^{(1)}$ to $\tilde h^{(2)}_i$ such that $\gamma_i(0)=\bar h^{(1)}$ and $\gamma_i(1)=\tilde h^{(2)}_i$. See Fig. B.2 in Appendix B for an illustration.

Proposition 6. The dispersion of the rescaled set $\hat{\mathcal{H}}^{(2)} = \Upsilon^s_{\bar h^{(1)}}(\tilde{\mathcal{H}}^{(2)})$ is $d^{(1)}$.

3.3 Riemannian wrapped rotation

The purpose of this component is to align the orientations of the distributions of the two sets after translation and scaling, namely, after aligning their first and second moments. The proposed rotation function $\Theta_{\bar h^{(1)}}:\mathbb{L}^d\to\mathbb{L}^d$ consists of (i) mapping the points from the manifold $\mathbb{L}^d$ to the tangent space $T_{\bar h^{(1)}}\mathbb{L}^d$, (ii) mapping to $\mathbb{R}^d$, (iii) rotating in $\mathbb{R}^d$, and (iv) mapping back to the tangent space and then to the manifold. We perform the rotation in $\mathbb{R}^d$, which we term wrapped rotation, rather than rotating directly on the manifold $\mathbb{L}^d$ or on the tangent space $T_{\bar h^{(1)}}\mathbb{L}^d$, for the following reasons. First, the frequently used rotation map in $\mathbb{L}^d$ [51] does not necessarily preserve the Riemannian mean, and in our context it might undo the mean alignment. Second, a rotation applied directly to the tangent space $T_{\bar h^{(1)}}\mathbb{L}^d$ does not guarantee that the rotated points remain in the same tangent space. Third, applying the rotation in the Euclidean vector space that is isometric to the tangent space is less efficient and less stable, and it yields slightly worse empirical results (see Appendix D.4 for details). Last, applying the rotation in $\mathbb{R}^d$ allows us to use the standard Euclidean rotation via SVD. In Section 4, we empirically demonstrate the advantage of the proposed rotation over these alternatives.

Definition 1. Let the mapping function $P_{\bar h^{(1)}}: T_{\bar h^{(1)}}\mathbb{L}^d\to\mathbb{R}^d$ on the tangent space at $\bar h^{(1)}$ and its inverse be the functions defined by
$$P_{\bar h^{(1)}}(v) := \big[v(2),\dots,v(d+1)\big]^\top\in\mathbb{R}^d \quad\text{and}\quad P^{-1}_{\bar h^{(1)}}(s) := \Big[\tfrac{\langle s,\, P_{\bar h^{(1)}}(\bar h^{(1)})\rangle}{\bar h^{(1)}(1)},\ s^\top\Big]^\top\in T_{\bar h^{(1)}}\mathbb{L}^d, \qquad (9)$$
where $s\in\mathbb{R}^d$ and $\langle\cdot,\cdot\rangle$ is the standard Euclidean inner product. Note that removing the first element of $v$ is valid due to the constraint imposed on the vector elements in the tangent space by definition; no information is lost, and the mapping is invertible.

The first step of our rotation component maps the points of $\mathcal{H}^{(1)}$ and $\hat{\mathcal{H}}^{(2)}$ to the tangent space at $\bar h^{(1)}$: $v^{(1)}_i = \mathrm{Log}_{\bar h^{(1)}}(h^{(1)}_i)$ for $i=1,\dots,N_1$, and $v^{(2)}_i = \mathrm{Log}_{\bar h^{(1)}}(\hat h^{(2)}_i)$ for $i=1,\dots,N_2$.
In the second step, we map the points by the function of Definition 1 and re-center them: $s^{(1)}_i = P_{\bar h^{(1)}}(v^{(1)}_i) - \bar s^{(1)}$ for $i=1,\dots,N_1$, and $s^{(2)}_i = P_{\bar h^{(1)}}(v^{(2)}_i) - \bar s^{(2)}$ for $i=1,\dots,N_2$, where $\bar s^{(k)} = \frac{1}{N_k}\sum_{i=1}^{N_k}P_{\bar h^{(1)}}(v^{(k)}_i)$, $k=1,2$, is the mean vector of the projections. Then, the mapped and centered points (in $\mathbb{R}^d$) are collected into matrices
$$S^{(k)} = \big[s^{(k)}_1, s^{(k)}_2, \dots, s^{(k)}_{N_k}\big]\in\mathbb{R}^{d\times N_k}. \qquad (10)$$
In the third step, for each set $k=1,2$, we compute the rotation matrix $U^{(k)}\in\mathbb{R}^{d\times d}$ by applying the SVD $S^{(k)} = U^{(k)}\Lambda^{(k)}(E^{(k)})^\top$. Since the left-singular vectors are determined only up to sign, we align the signs by $u^{(2)}_i \leftarrow \mathrm{sign}(\langle u^{(2)}_i, u^{(1)}_i\rangle)\,u^{(2)}_i$, where $u^{(1)}_i$ and $u^{(2)}_i$ are the $i$-th left-singular vectors of the two sets, resulting in modified rotation matrices $\bar U^{(k)}$. Finally, we apply the rotation to $\hat{\mathcal{H}}^{(2)}$ by
$$\Theta^{U}_{\bar h^{(1)}}(\hat h^{(2)}_i) = \mathrm{Exp}_{\bar h^{(1)}}\Big(P^{-1}_{\bar h^{(1)}}\Big(U^\top\big(P_{\bar h^{(1)}}\big(\mathrm{Log}_{\bar h^{(1)}}(\hat h^{(2)}_i)\big) - \bar s^{(2)}\big) + \bar s^{(2)}\Big)\Big), \qquad (11)$$
where $U = \bar U^{(1)}(\bar U^{(2)})^\top$.

Proposition 7. The wrapped rotation is bijective, and the inverse is given by
$$\big(\Theta^{U}_{\bar h^{(1)}}\big)^{-1} = \Theta^{U^\top}_{\bar h^{(1)}}. \qquad (12)$$

3.4 Analysis

Putting the three components together, the proposed HPA that aligns $\mathcal{H}^{(2)}$ with $\mathcal{H}^{(1)}$ culminates in the composition of translation, scaling, and rotation:
$$\Theta^{U}_{\bar h^{(1)}}\circ\Upsilon^{s}_{\bar h^{(1)}}\circ\Gamma_{\bar h^{(2)}\to\bar h^{(1)}}. \qquad (13)$$
As in most PA schemes, the order of the three components matters; yet the proposed components allow a certain degree of freedom, as the following results indicate.

Proposition 8. The Riemannian translation and the Riemannian scaling commute w.r.t. the Riemannian means $\bar h^{(1)}$ and $\bar h^{(2)}$:
$$\Upsilon^{s}_{\bar h^{(1)}}\circ\Gamma_{\bar h^{(2)}\to\bar h^{(1)}} = \Gamma_{\bar h^{(2)}\to\bar h^{(1)}}\circ\Upsilon^{s}_{\bar h^{(2)}}. \qquad (14)$$
Note that $\Upsilon^{s}_{\bar h^{(1)}}$ and $\Gamma_{\bar h^{(2)}\to\bar h^{(1)}}$ do not necessarily commute as-is: $\Upsilon^{s}_{\bar h^{(1)}}\circ\Gamma_{\bar h^{(2)}\to\bar h^{(1)}} \neq \Gamma_{\bar h^{(2)}\to\bar h^{(1)}}\circ\Upsilon^{s}_{\bar h^{(1)}}$.

Proposition 9. The Riemannian scaling and the wrapped rotation commute:
$$\Upsilon^{s}_{\bar h^{(1)}}\circ\Theta^{U}_{\bar h^{(1)}} = \Theta^{U}_{\bar h^{(1)}}\circ\Upsilon^{s}_{\bar h^{(1)}}. \qquad (15)$$
We note that the rotation does not commute with the translation, because PT only preserves the local covariant derivative on the tangent space and may cause rotation and distortion along the transport. Therefore, the rotation must be the last component of HPA.

So far, we have neither presented a model for the discrepancy between the two sets nor presented HPA as optimal with respect to some criterion. The following result shows that if the discrepancy between the sets can be expressed as a composition of translation, scaling, and rotation, then the two sets can be perfectly aligned by HPA.

Proposition 10. Let $\eta:\mathbb{L}^d\to\mathbb{L}^d$ be a map given by $\eta = \Theta^{U}_{\bar h^{(1)}}\circ\Upsilon^{s}_{\bar h^{(1)}}\circ\Gamma_{\bar h^{(2)}\to\bar h^{(1)}}$. If $\mathcal{H}^{(1)} = \{h^{(1)}_i = \eta(h^{(2)}_i)\}_{i=1}^{N_2}$, then
$$h^{(2)}_i = \big(\Theta^{U'}_{\bar h^{(2)}}\circ\Upsilon^{1/s}_{\bar h^{(2)}}\circ\Gamma_{\bar h^{(1)}\to\bar h^{(2)}}\big)(h^{(1)}_i), \quad i=1,\dots,N_2, \qquad (16)$$
where $U'\in O(d)$.

Note that HPA is a sequence of Riemannian translation, Riemannian scaling, and wrapped rotation, and the domain and range of each component is the manifold $\mathbb{L}^d$. Yet the first and last operations of each component are the Logarithmic and Exponential maps, which project a point from the manifold to the tangent space and back. This allows for an efficient implementation of the sequence without the back-and-forth projections, as described in Appendix C.
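For illustration, the entire pipeline of Eq. (13) can be sketched on top of the earlier snippets. The names and the naive Fréchet mean are ours; the rotation follows the steps of Section 3.3, and the row-vector convention for applying $U^\top$ is our reading of Eq. (11), so treat this as a sketch under those assumptions rather than the authors' implementation.

```python
def scale(H, m, s):
    # Upsilon^s_m: rescale the geodesic from m to each point by s (Eq. (8))
    return np.array([exp_map(m, s * log_map(m, h)) for h in H])

def wrapped_rotation(H1, H2, m):
    # Log at m, drop the first coordinate (Definition 1), center, SVD-align
    S1 = np.array([log_map(m, h)[1:] for h in H1]); c1 = S1.mean(0)
    S2 = np.array([log_map(m, h)[1:] for h in H2]); c2 = S2.mean(0)
    U1, _, _ = np.linalg.svd((S1 - c1).T, full_matrices=False)
    U2, _, _ = np.linalg.svd((S2 - c2).T, full_matrices=False)
    U2 = U2 * np.sign(np.sum(U1 * U2, axis=0))   # align singular-vector signs
    U = U1 @ U2.T
    out = []
    for p in (S2 - c2) @ U + c2:                 # U^T applied to each point
        v = np.concatenate(([np.dot(p, m[1:]) / m[0]], p))  # P^{-1}, Eq. (9)
        out.append(exp_map(m, v))                # back to the manifold
    return np.array(out)

def hpa(H1, H2):
    # Eq. (13): wrapped rotation o scaling o translation
    m1, m2 = frechet_mean(H1), frechet_mean(H2)
    H2t = translate(H2, m2, m1)
    d1 = np.mean([dist_L(m1, h) ** 2 for h in H1])
    d2 = np.mean([dist_L(m1, h) ** 2 for h in H2t])
    return wrapped_rotation(H1, scale(H2t, m1, np.sqrt(d1 / d2)), m1)
```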
3.5 Extension to multiple sets

The setting naturally scales up to support the alignment of $K>2$ sets, denoted $\mathcal{H}^{(k)}=\{h^{(k)}_i\}_{i=1}^{N_k}$ for $k\in\{1,2,\dots,K\}$. Let $\bar h^{(k)}$ and $d^{(k)}$ be the Riemannian mean and dispersion of the $k$-th dataset, respectively, and let $\bar h$ be the global Riemannian mean of $\{\bar h^{(k)}\}_{k=1}^{K}$. We transport the points of the $k$-th set using $\Gamma_{\bar h^{(k)}\to\bar h}$. Next, the Riemannian dispersion of each set is set to 1 by applying $\Upsilon^{s}_{\bar h}$ with $s=1/\sqrt{d^{(k)}}$. Finally, the wrapped rotation is applied to all the datasets on the mapping of the tangent space $T_{\bar h}\mathbb{L}^d$ and then mapped back to the manifold $\mathbb{L}^d$; the first set is designated as the reference set, and all other rotation matrices $U^{(k)}$ are updated according to $u^{(k)}_i \leftarrow \mathrm{sign}(\langle u^{(k)}_i, u^{(1)}_i\rangle)\,u^{(k)}_i$. The proposed HPA for multiple sets is summarized in Algorithm 1, and some implementation remarks appear in Appendix C.

Algorithm 1 Hyperbolic Procrustes analysis
Input: $K$ sets of hyperbolic points $\mathcal{H}^{(1)}=\{h^{(1)}_i\}_{i=1}^{N_1},\dots,\mathcal{H}^{(K)}=\{h^{(K)}_i\}_{i=1}^{N_K}$
Output: $K$ aligned sets of hyperbolic points $\breve{\mathcal{H}}^{(1)}=\{\breve h^{(1)}_i\}_{i=1}^{N_1},\dots,\breve{\mathcal{H}}^{(K)}=\{\breve h^{(K)}_i\}_{i=1}^{N_K}$
1: for each set $\mathcal{H}^{(k)}$ do
2:   compute the Riemannian mean $\bar h^{(k)}$ and dispersion $d^{(k)}$
3: end for
4: compute $\bar h$, the global Riemannian mean of $\{\bar h^{(k)}\}_{k=1}^{K}$
5: for each set $\mathcal{H}^{(k)}$ do
6:   apply the Riemannian translation $\tilde h^{(k)}_i = \Gamma_{\bar h^{(k)}\to\bar h}(h^{(k)}_i)$  // Eq. (2)
7:   apply the Riemannian scaling $\hat h^{(k)}_i = \Upsilon^{s}_{\bar h}(\tilde h^{(k)}_i)$ with $s=1/\sqrt{d^{(k)}}$  // Eq. (8)
8:   apply the wrapped rotation $\breve h^{(k)}_i = \Theta^{U}_{\bar h}(\hat h^{(k)}_i)$ with $U = \bar U^{(1)}(\bar U^{(k)})^\top$  // Eq. (11)
9: end for

4 Experimental results

We apply HPA to simulations and to three biomedical datasets (our code is available at https://github.com/RonenTalmonLab/HyperbolicProcrustesAnalysis). In addition, we test HPA on the MNIST [31] and USPS [23] datasets, which arguably do not have a distinct hierarchical structure; nonetheless, we demonstrate in Appendix D that HPA is highly effective in aligning these two datasets as well. All experiments are label-free. We compare the obtained results with the following alignment methods: (i) PAH [51], applied only to the simulated data since it requires a one-to-one correspondence between the sets; (ii) the Riemannian translation (RT) alone; (iii) OT in hyperbolic space with the weighted Fréchet mean (HOT-F), extended to the unsupervised setting following [54]; (iv) OT with the W-linear map (HOT-L) [22]; and (v) hyperbolic mapping estimation (HOT-ME) [22]. As a baseline, we report the results obtained before alignment (Baseline). For more details on the experimental setting, see Appendix C.

4.1 Simulations

The synthetic data in $\mathbb{L}^d$ are generated using the sampling scheme described in Section 2, based on [39]. Given an arbitrary point $\mu\in\mathbb{L}^d$ and an arbitrary SPD matrix $\Sigma\in\mathbb{R}^{d\times d}$, we generate a set of $N$ points $\mathcal{Q}^{(1)}=\{q^{(1)}_i\}_{i=1}^{N}$ centered at $\mu$ by $\mathbb{L}^d\ni q^{(1)}_i = \mathrm{Exp}_{\mu}(\mathrm{PT}_{\mu^0\to\mu}(v^{(1)}_i))$, where $\mu^0=[1,\mathbf{0}^\top]^\top$ is the origin, $v^{(1)}_i = [0,\, \tilde v^{(1)\top}_i]^\top$, and $\tilde v^{(1)}_i\sim\mathcal{N}(\mathbf{0},\Sigma)$. Next, we generate three noisy and distorted versions of $\mathcal{Q}^{(1)}$. The first noisy set $\mathcal{Q}^{(2)}=\{q^{(2)}_i\}_{i=1}^{N}$ is generated as proposed in [51] by $q^{(2)}_i = L\,T_{\epsilon_i}q^{(1)}_i$, where $T_{\epsilon_i}$ is the hyperbolic translation defined by $T_{\epsilon_i} = \big[\sqrt{1+\epsilon_i^\top\epsilon_i},\ \epsilon_i^\top;\ \epsilon_i,\ (I+\epsilon_i\epsilon_i^\top)^{1/2}\big]$, $\epsilon_i$ is sampled from $\mathcal{N}(\mathbf{0},\sigma^2 I)$ with variance $\sigma^2$, and $L$ is a random H-unitary matrix [16]. The second noisy set, denoted $\mathcal{Q}^{(3)}=\{q^{(3)}_i\}_{i=1}^{N}$, is generated by $q^{(3)}_i = L\big(\mathrm{Exp}_{\mu}(\mathrm{PT}_{\mu^0\to\mu}(u^{(1)}_i))\big)$, where $u^{(1)}_i = [0,\,(\tilde v^{(1)}_i+\epsilon_i)^\top]^\top$; here the noise is added in the tangent space at $\mu^0$. Finally, $\mathcal{Q}^{(4)}=\{q^{(4)}_i\}_{i=1}^{N}$ is a distorted set given by $q^{(4)}_i = f_{\mu'}(q^{(3)}_i)$, where $f_{\mu'}(x) = \cosh(\|u\|_{\mathbb{L}}t)\,\mu' + \sinh(\|u\|_{\mathbb{L}}t)\,\frac{u}{\|u\|_{\mathbb{L}}}$ with $u=\mathrm{Log}_{\mu'}(x)$, for an arbitrary (fixed) $\mu'\in\mathbb{L}^d$ and $t>0$.
We apply Algorithm 1 to align the three pairs of sets $\{\mathcal{Q}^{(1)},\mathcal{Q}^{(2)}\}$, $\{\mathcal{Q}^{(1)},\mathcal{Q}^{(3)}\}$, and $\{\mathcal{Q}^{(1)},\mathcal{Q}^{(4)}\}$, setting $N=100$, $\sigma=1$, and $d\in\{3,5,10,20,\dots,40\}$. Each experiment is repeated 10 times with different values of $\mu$, $\Sigma$, $\mu'$, and $t$. To evaluate the alignment, we use the pairwise discrepancy based on the hidden one-to-one correspondence, given by $\varepsilon(\mathcal{Q}^{(1)},\mathcal{Q}^{(j)}) = \frac{1}{N}\sum_{i=1}^{N} d^2_{\mathbb{L}^d}(q^{(1)}_i, q^{(j)}_i)$, where $j\in\{2,3,4\}$. The discrepancy as a function of the dimension $d$ is shown in Fig. 2. We observe that the proposed HPA attains lower discrepancy than the other label-free methods; in particular, it outperforms the OT-based methods, which are designed to match the densities. Furthermore, HPA is stable, in contrast to HOT-L, which is highly sensitive to the noise and distortion introduced in $\mathcal{Q}^{(3)}$ and $\mathcal{Q}^{(4)}$. Interestingly, the discrepancies of RT and PAH are very close, showing empirically that RT alone is comparable to PAH; in addition, HPA is permutation-invariant and, unlike PAH, does not require a one-to-one correspondence. We report the running times in Appendix D and demonstrate that HPA is more efficient than HOT-F and HOT-ME.

4.2 Batch effect removal

We consider bioinformatics datasets consisting of gene expression data and CyTOF. Representing such data in hyperbolic spaces has been shown to be informative and useful [25], implying that these data have an inherent hierarchical structure. Batch effects [43] arise from experimental variations that can be attributed to the measurement device or to other environmental factors, and batch correction is typically a critical precursor to any subsequent analysis and processing. We examine three batch effect removal tasks. The first task involves breast cancer (BC) gene expression data.
We consider two publicly available datasets, METABRIC [8] and TCGA [26], consisting of samples from five breast cancer subtypes; the batch effect stems from the different profiling technologies, gene expression microarray and RNA sequencing. In the second task, three cohorts of lung cancer (LC) gene expression data [21] are considered, consisting of samples from three lung cancer subtypes. The data were collected using gene expression microarrays at three different sites (a likely source of batch effects): Stanford University (ST), the University of Michigan (UM), and the Dana-Farber Cancer Institute (D-F). The last task involves CyTOF data [48] consisting of peripheral blood mononuclear cells (PBMCs) collected from two multiple sclerosis patients on four days: two days before treatment (BT) and two days after treatment (AT). These $8 = 2\times2\times2$ batches were collected with or without PMA/ionomycin-stimulated PBMCs. We aim to remove the batch effects between two different days under the same condition (BT/AT) and for the same patient.

In each batch removal task, we first learn an embedding of the data from all batches into the Lorentz model $\mathbb{L}^d$ [42], and then apply HPA to the embedded points. Fig. 3 shows a visualization of the embedding of the two breast cancer datasets before and after HPA; for visualization, the points in $\mathbb{L}^3$ are projected to the 3D Poincaré ball. Before the alignment, the dominant factor separating the patients' samples (points) is the batch. In contrast, after the alignment, the batch effect is substantially suppressed (visually), and the separating factors are dominated by the cancer subtype.

We evaluate the quality of the alignment in two respects using objective measures: (i) $k$-NN classification with leave-one-batch-out cross-validation assesses the alignment of the intrinsic structure, and (ii) MMD [19] assesses the quality of the distribution alignment. For the classification, we view the five subtypes of BC, the three subtypes of LC, and the presence of stimulated cells in CyTOF as the labels of the respective tasks. In addition to the results of the different alignment methods, we report $k$-NN classification based on a single batch (S-Baseline), which indicates the adequacy of the hyperbolic-space representation for the task at hand. Table 1 reports the $k$-NN classification obtained for the best $k$ per method, and Table 2 reports the MMD. In each task, we set the dimension $d$ of the Lorentz model to the dimension at which the best empirical single-task performance (S-Baseline) is obtained; similar results and trends are obtained for other dimensions. Additional results for various values of $k$, along with an ablation study showing that the combination of all three components yields the best classification results, are reported in Appendix D. Although the OT-based methods obtain the best matching between the distributions of the batches, HPA outperforms them in all three tasks in terms of classification (see Table 1). In the two gene expression tasks, where the data have multiple labels and we align multiple batches, the advantage of HPA over the other methods is particularly significant. In Appendix D, we demonstrate HPA's out-of-sample capabilities on the CyTOF data by learning the batch correction map between the different days for one patient and applying it to the data of the other patient.
4.3 Discussion

Alignment methods based on density matching, such as OT-based methods, often overlook an important aspect of purely unsupervised settings. Although the sample density is the main data property that can and should be aligned, preserving the intrinsic structure and geometry of the sets is also important, as it may be tightly related to the hidden labels. Indeed, in our experiments, the OT-based methods provide good density alignment (reducing the inter-set variability), as indicated by small MMD values (see Table 2). However, the intrinsic structure of the sets (the intra-set variability) is not preserved, as evidenced by the resulting poor matching of the (hidden) labels, conveyed by the $k$-NN classification performance (see Table 1). This is also illustrated in the right panel of Fig. 1: the three OT-based methods provide good global alignment of the sets, yet the intrinsic structure is not kept, as implied by the poor color matching. In contrast to OT-based methods, HPA does not explicitly aim to match densities, and thus obtains slightly worse MMD performance. However, HPA matches the first two moments of the density and includes the rotation component, whose absence is one of the fundamental limitations of OT-based methods for alignment, since OT cannot recover volume-preserving maps [4, 36]. As seen in the simulation and experimental results, and as illustrated in the right panel of Fig. 1, HPA still obtains good global alignment while simultaneously preserving the intrinsic structure, allowing for high classification performance. We remark that in the synthetic examples there is a (hidden) one-to-one correspondence between the sets, so a one-to-one discrepancy can be computed (instead of, or in addition to, MMD); even when such a correspondence exists, OT still cannot recover volume-preserving maps, whereas HPA can mitigate noise and distortions.

5 Conclusion

We introduced HPA for label-free alignment of data in the Lorentz model. Based on Riemannian geometry, we presented new translation and scaling operations that align the first and second Riemannian moments, as well as a wrapped rotation that aligns the orientation in the hyperboloid model. Our theoretical analysis provides further insight and highlights properties that may be useful for practitioners. We showed empirically in simulations that HPA is stable under noise and distortions, and demonstrated purely unsupervised batch correction of multiple bioinformatics datasets with multiple labels. Beyond alignment and batch effect removal, our method can be viewed as a form of domain adaptation, or as a precursor to transfer learning, that relies on purely geometric considerations, exploiting the geometric structure of the data as well as the geometric properties of the data space. It can also be utilized for multimodal data fusion and for geometric registration of shapes with hierarchical structure.

Acknowledgments and Disclosure of Funding

We thank the reviewers for their important comments and suggestions, and we thank Thomas Dagès for the helpful discussion. The work of YEL and RT was supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 802735-ERC-DIFFOP. The work of YK was supported by the National Institutes of Health grants R01GM131642, UM1PA05141, P50CA121974, and U01DA053628.
1. What is the focus and contribution of the paper regarding density alignment on hyperbolic space? 2. What are the strengths of the proposed method in terms of efficiency and computational time requirement? 3. What are the weaknesses of the paper, particularly regarding the existence and uniqueness of FM and the limitation of Algorithm 1 to hyperbolic spaces? 4. How can the authors improve the experimental results by providing more comprehensive comparisons and addressing the subpar comparison? 5. What are some suggestions for improving the readability and clarity of the paper, such as presenting the steps for Riemannian wrapped rotation as bulleted lists or algorithms?
Summary Of The Paper Review
Summary Of The Paper
The authors presented an algorithm based on Riemannian geometric tools to align densities on hyperbolic space. Experimental results have been demonstrated on synthetic and breast cancer data. The results show the efficiency of the proposed method in terms of lower discrepancy and reduced computational time.

Review
Post rebuttal and discussion, I want to recommend acceptance.

The overall paper is an easy read, although my main concerns are: (a) The existence and uniqueness of the FM should be addressed to ensure the validity of Algorithm 1. (b) Most of the propositions are true due to the choice of metric and the properties of PT. So why is Algorithm 1 not valid for other Riemannian manifolds? The authors should point out what is so special about hyperbolic spaces. (c) The experimental results are limited to a synthetic dataset and the batch effect removal on the breast cancer dataset. Also, the comparison is subpar.

Detailed Comments: (a) Line 83: the authors should cite many other older papers, like [Groisser, Riemannian Newton..., 2003] and [Chakraborty et al., Exact PGA..., 2014]. (b) As mentioned earlier, most of the propositions, like Propositions 5 and 8, follow directly from the properties of PT. (c) The steps for the Riemannian wrapped rotation should be presented as a bulleted list or an algorithm for better clarity. (d) The main comparison with [18] is not enough; a few obvious baselines, including aligning in the ambient space and using the Log-Euclidean framework, should be presented.
NIPS
Title Hyperbolic Procrustes Analysis Using Riemannian Geometry

Abstract
Label-free alignment between datasets collected at different times, locations, or by different instruments is a fundamental scientific task. Hyperbolic spaces have recently provided a fruitful foundation for the development of informative representations of hierarchical data. Here, we take a purely geometric approach for label-free alignment of hierarchical datasets and introduce hyperbolic Procrustes analysis (HPA). HPA consists of new implementations of the three prototypical Procrustes analysis components: translation, scaling, and rotation, based on the Riemannian geometry of the Lorentz model of hyperbolic space. We analyze the proposed components, highlighting their useful properties for alignment. The efficacy of HPA, its theoretical properties, stability, and computational efficiency are demonstrated in simulations. In addition, we showcase its performance on three batch correction tasks involving gene expression and mass cytometry data. Specifically, we demonstrate high-quality unsupervised batch effect removal from data acquired at different sites and with different technologies that outperforms recent methods for label-free alignment in hyperbolic spaces.

1 Introduction

A key scientific task in modern data analysis is the alignment of data. The need for alignment often arises since data are acquired in multiple domains, under different environmental conditions, using various acquisition equipment, and at different sites. This paper focuses on the problem of label-free alignment of data embedded in hyperbolic spaces. Recently, hyperbolic spaces have gained prominence in geometric representation learning. These non-Euclidean spaces have become popular since they provide a natural embedding of hierarchical data thanks to the exponential growth of the lengths of their geodesic paths [41, 42, 30, 15, 14, 6, 32]. The problem of alignment of data embedded in hyperbolic spaces has been extensively studied, e.g., in the context of natural language processing [49], ontology matching [10], matching two data modalities [40], and improving the embedding in hyperbolic spaces [2]. A few of these studies are based on optimal transport (OT) [2, 22], a classical problem in mathematics [38] that has recently reemerged in modern data analysis, e.g., for domain adaptation [7]. Despite its increasing usage, OT for unsupervised alignment is fundamentally limited [54], since OT (as any density matching approach) cannot recover volume-preserving maps [3, 4, 36]. In this paper, we resort to Procrustes analysis (PA) [17, 18], which is based on purely geometric considerations. PA has been widely used for aligning datasets by eliminating the shift, scaling, and rotational factors. Over the years, it has been successfully applied to various applications, e.g., image registration [34], manifold alignment [52], shape matching [35], domain adaptation [47], and manifold learning [27], to name but a few. Here, we address the problem of label-free matching of hierarchical data embedded in hyperbolic spaces. We present hyperbolic Procrustes analysis (HPA), a new PA method in the Lorentz model of hyperbolic geometry. The main novelty lies in the introduction of new implementations of the three prototypical PA components based on Riemannian geometry. Specifically, translation is viewed as a Riemannian mean alignment, implemented using parallel transport (PT).
Scaling is determined with respect to geodesic paths. Rotation is considered as moment alignment on a mapping of the tangent space of the manifold to a Euclidean vector space. Our analysis provides new derivations in the Riemannian geometry of the Lorentz model and specifies the commuting properties of the HPA components. We show that HPA, compared to existing baselines and OT-based methods, achieves improved alignment in a purely unsupervised setting. In addition, it has a natural and stable out-of-sample extension, it supports both small and big data, and it is computationally efficient. We show application to batch correction in bioinformatics tasks. We present results on both gene expression and mass cytometry (CyTOF) data, exemplifying the generality and broad scope of our method. In contrast to recent works [28, 50], our method does not require landmark correspondence, which is often unavailable in many datasets or hard to obtain. Specifically, we show that batch effects caused by acquisition using different technologies, at different sites, and at different times can be accurately removed, while preserving the intrinsic structure of the data. Our main contributions are as follows. (i) We present a new implementation of PA using the Riemannian geometry of the Lorentz model for unsupervised label-free hierarchical data alignment. (ii) We provide theoretical analysis and justification of our alignment method based on new derivations of Riemannian geometry operations in the Lorentz model. These derivations have their own merit as they could be used in other contexts. (iii) We show experimental results of accurate batch effect removal from several hierarchical bioinformatics datasets without landmark correspondence.

2 Background on hyperbolic geometry

Hyperbolic space is a non-Euclidean space with a negative constant sectional curvature and an underlying geometry that describes tree-like graphs with small distortions [46]. There exist four commonly-used models for hyperbolic spaces: the Poincaré disk model, the Lorentz model (hyperboloid model), the Poincaré half-plane model, and the Beltrami-Klein model. These four models are equivalent and there exist transformations between them. Here, we consider the Lorentz model, and specifically, the upper sheet of the hyperboloid model, because its basic Riemannian operations have simple closed-form expressions and the computation of the geodesic distances is stable [42, 30]. Formally, the upper sheet of the hyperboloid model in a $d$-dimensional hyperbolic space is defined by $\mathbb{L}^d := \{x \in \mathbb{R}^{d+1} \mid \langle x, x \rangle_{\mathcal{L}} = -1,\ x(1) > 0\}$, where $\langle x, y \rangle_{\mathcal{L}} = x^\top H y$ is the Lorentzian inner product and $H \in \mathbb{R}^{(d+1) \times (d+1)}$ is defined by $H = [-1, 0^\top; 0, I_d]$. The Lorentzian norm of a hyperbolic vector $x \in \mathbb{L}^d$ is denoted by $\|x\|_{\mathcal{L}} = \sqrt{\langle x, x \rangle_{\mathcal{L}}}$, with the origin $\mu^0 = [1, 0^\top]^\top \in \mathbb{L}^d$. Let $T_x\mathbb{L}^d$ be the tangent space at $x \in \mathbb{L}^d$, defined by $T_x\mathbb{L}^d := \{v \mid \langle x, v \rangle_{\mathcal{L}} = 0\}$. Consider $x \in \mathbb{L}^d$ and $v \in T_x\mathbb{L}^d$; the geodesic path $\gamma : \mathbb{R}^+_0 \to \mathbb{L}^d$ is defined by $\gamma(t) = \cosh(\|v\|_{\mathcal{L}} t)\, x + \sinh(\|v\|_{\mathcal{L}} t)\, \frac{v}{\|v\|_{\mathcal{L}}}$ with $\gamma(0) = x$ and initial velocity $\gamma'(0) = v$, where $\gamma'(t) := \frac{d}{dt}\gamma(t)$. In addition, the associated geodesic distance is $d_{\mathbb{L}^d}(x, \gamma_v(t)) = \cosh^{-1}(-\langle x, \gamma_v(t) \rangle_{\mathcal{L}})$. The Exponential map, projecting a point $v \in T_x\mathbb{L}^d$ to the manifold $\mathbb{L}^d$, is given by $\mathrm{Exp}_x(v) = \gamma(1) = \cosh(\|v\|_{\mathcal{L}})\, x + \sinh(\|v\|_{\mathcal{L}})\, \frac{v}{\|v\|_{\mathcal{L}}}$. The Logarithmic map, projecting a point $y \in \mathbb{L}^d$ to the tangent space $T_x\mathbb{L}^d$ at $x$, is defined by $\mathrm{Log}_x(y) = \frac{\cosh^{-1}(\alpha)}{\sqrt{\alpha^2 - 1}}\,(y - \alpha x)$, where $\alpha = -\langle x, y \rangle_{\mathcal{L}}$.
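These maps translate directly into a few lines of numpy. The following is a minimal sketch (ours, not the paper's repository; the small clamping constants are numerical-safety assumptions):

```python
import numpy as np

def lorentz_inner(x, y):
    """Lorentzian inner product <x, y>_L = -x(1)y(1) + sum over the rest."""
    return -x[0] * y[0] + x[1:] @ y[1:]

def exp_map(x, v):
    """Exp_x(v): map a tangent vector v at x onto the hyperboloid L^d."""
    n = np.sqrt(max(lorentz_inner(v, v), 1e-12))   # Lorentzian norm of v
    return np.cosh(n) * x + np.sinh(n) * v / n

def log_map(x, y):
    """Log_x(y): map a point y in L^d to the tangent space at x."""
    alpha = -lorentz_inner(x, y)                   # alpha >= 1 on L^d
    alpha = np.clip(alpha, 1.0 + 1e-12, None)      # clamp for numerical safety
    return np.arccosh(alpha) / np.sqrt(alpha**2 - 1.0) * (y - alpha * x)
```

For `v = 0` and `y = x`, both maps reduce (up to the clamps) to the identities `exp_map(x, 0) = x` and `log_map(x, x) = 0`, as expected.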
The PT of a vector $v \in T_x\mathbb{L}^d$ along the geodesic path from $x \in \mathbb{L}^d$ to $y \in \mathbb{L}^d$ is defined by $\mathrm{PT}_{x \to y}(v) = v + \frac{\langle y - \alpha x, v \rangle_{\mathcal{L}}}{\alpha + 1}\,(x + y)$, where $\alpha = -\langle x, y \rangle_{\mathcal{L}}$, keeping the metric tensor unchanged. The Riemannian mean $\bar{x}_{\mathcal{X}}$ and the corresponding dispersion $d_{\mathcal{X}}$ of a set $\mathcal{X} = \{x_i \mid x_i \in \mathbb{L}^d\}_{i=1}^n$ are defined using the Fréchet mean [13, 33] by
$$\bar{x}_{\mathcal{X}} := m(\mathcal{X}) = \arg\min_{x \in \mathbb{L}^d} \sum_{i=1}^n d^2_{\mathbb{L}^d}(x, x_i) \quad \text{and} \quad d_{\mathcal{X}} := r(\mathcal{X}) = \min_{x \in \mathbb{L}^d} \frac{1}{n} \sum_{i=1}^n d^2_{\mathbb{L}^d}(x, x_i), \quad (1)$$
where $m : \mathcal{X} \to \mathbb{L}^d$ and $r : \mathcal{X} \to \mathbb{R}^+$. Note that the Fréchet mean of samples on complete, simply connected Riemannian manifolds of non-positive curvature, such as hyperbolic spaces, is guaranteed to exist, and it is unique [24, 44, 1]. The Fréchet mean is commonly computed by the Karcher Flow [24, 20], which is computationally demanding. Importantly, in the considered hyperbolic space, the Fréchet mean can be efficiently obtained using the accurate gradient formulation [33]. Given a vector $x \in \mathbb{L}^d$ and a symmetric and positive-definite (SPD) matrix $\Sigma \in \mathbb{R}^{d \times d}$, the wrapped normal distribution $G(x, \Sigma)$ provides a generative model of hyperbolic samples as follows [39, 11]. First, a vector $v'$ is sampled from $\mathcal{N}(0, \Sigma)$. Then, $0$ is concatenated to the vector $v'$ such that $v = [0, v'^\top]^\top \in T_{\mu^0}\mathbb{L}^d$. Finally, PT from the origin $\mu^0 = [1, 0^\top]^\top$ to $x$ is applied to $v$, and the resulting point is mapped to the manifold using the Exponential map at $x$. The probability density function of this model is given by $\log G(y \mid x, \Sigma) = \log \mathcal{N}(v' \mid 0, \Sigma) - (n - 1) \log\big(\frac{\sinh \|v'\|_2}{\|v'\|_2}\big)$.

3 Hyperbolic Procrustes analysis

Existing methods for data alignment typically seek a function that minimizes a certain cost. A large body of work attempts to match the empirical densities of two datasets, e.g., by minimizing the maximum mean discrepancy (MMD) [48, 29] or solving OT problems [45, 2, 22]. Finding an effective cost function without labels or landmarks is challenging, and minimizing such costs directly often leads to poor alignment in practice (see illustration in Fig. 1). A different well-established approach that applies indirect alignment based on geometric considerations is PA. While preparing this manuscript, another method of PA in hyperbolic spaces (PAH) was presented for matching two sets, assuming that they consist of the same number of points and that there exists a point-wise isometric map between them [51]. We remark that the analysis we present here applies to broader settings and makes no such assumptions. See Appendix E for details on classical PA as well as for comparisons to [51] and to the application of Euclidean PA in the tangent space. We consider two sets of points $\mathcal{H}^{(1)} = \{h^{(1)}_i\}_{i=1}^{N_1}$ and $\mathcal{H}^{(2)} = \{h^{(2)}_i\}_{i=1}^{N_2}$ in $\mathbb{L}^d$. Here, we aim to find a function $\zeta : \mathbb{L}^d \to \mathbb{L}^d$, consisting of three components: translation, scaling, and rotation, that aligns $\mathcal{H}^{(2)}$ with $\mathcal{H}^{(1)}$ in an unsupervised label-free manner, as depicted in Fig. 1. Finding such a function can be viewed as an extension of classical PA from the Euclidean space $\mathbb{R}^{d+1}$ to the Lorentz model $\mathbb{L}^d$. A natural extension to multiple sets is described in Section 3.5. We remark that the statements are written in the context of the problem at hand. In Appendix A, we restate them more generally and present their proofs.

3.1 Riemannian translation

Let $\bar{h}^{(1)}$ and $\bar{h}^{(2)}$ denote the Riemannian means of the sets $\mathcal{H}^{(1)}$ and $\mathcal{H}^{(2)}$, respectively. In this translation component, we find a map $\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}} : \mathbb{L}^d \to \mathbb{L}^d$ that aligns the Riemannian means of the sets; a numerical sketch of computing such means appears below.
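As a small illustration of computing the Riemannian means used by this component, here is a plain Karcher-flow sketch (ours; this is not the accurate gradient formulation of [33], and it assumes the `exp_map`/`log_map` helpers from the earlier sketch):

```python
import numpy as np

def frechet_mean(points, n_iter=50):
    """Karcher flow: average the Log-mapped points in the tangent space at
    the current estimate, then step back to the manifold with the Exp map."""
    m = points[0]                      # initialize at an arbitrary point
    for _ in range(n_iter):
        g = np.mean([log_map(m, p) for p in points], axis=0)
        m = exp_map(m, g)              # converged when g is (near) zero
    return m
```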
In the spirit of [5, 53], we propose to construct $\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}(h^{(2)}_i)$ as the composition of three Riemannian operations in $\mathbb{L}^d$: the Logarithmic map applied to $h^{(2)}_i$ at $\bar{h}^{(2)}$, PT from $\bar{h}^{(2)}$ to $\bar{h}^{(1)}$ along the geodesic path, and the Exponential map applied to the transported point at $\bar{h}^{(1)}$:
$$\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}(h^{(2)}_i) := \mathrm{Exp}_{\bar{h}^{(1)}}\big(\mathrm{PT}_{\bar{h}^{(2)} \to \bar{h}^{(1)}}\big(\mathrm{Log}_{\bar{h}^{(2)}}(h^{(2)}_i)\big)\big). \quad (2)$$
See Fig. B.1 in Appendix B for illustration. Since the geodesic path between any two points in $\mathbb{L}^d$ is unique [46], $\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}$ is well-defined. The rationale behind the combination of these three Riemannian operations is twofold. First, PT is a map that aligns the means of the sets, while preserving their internal structure. Second, the Logarithmic and Exponential maps compose a map whose domain and range are the Lorentz model $\mathbb{L}^d$ rather than the tangent space, as desired. We make these claims formal in the following results.

Proposition 1. The map $\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}$ defined in Eq. (2) aligns the means of the sets, i.e., it satisfies $\bar{h}^{(1)} = m\big(\{\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}(h^{(2)}_i)\}_{i=1}^{N_2}\big)$, (3) where $m$ is the function defined in Eq. (1).

Proposition 2. The map $\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}(h^{(2)}_i)$ for all $h^{(2)}_i \in \mathcal{H}^{(2)}$ can be recast as:
$$\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}(h^{(2)}_i) = h^{(2)}_i - \beta(h^{(2)}_i \mid \bar{h}^{(1)}, \bar{h}^{(2)})\, \bar{h}^{(2)} + \gamma(h^{(2)}_i \mid \bar{h}^{(1)}, \bar{h}^{(2)})\, \bar{h}^{(1)}, \quad (4)$$
where the functions $\beta$ and $\gamma$ are positive, defined by $0 < \beta(h^{(2)}_i \mid \bar{h}^{(1)}, \bar{h}^{(2)}) = -\big\langle \frac{\bar{h}^{(1)} + \bar{h}^{(2)}}{\alpha + 1},\, h^{(2)}_i \big\rangle_{\mathcal{L}}$ and $0 < \gamma(h^{(2)}_i \mid \bar{h}^{(1)}, \bar{h}^{(2)}) = \big\langle \frac{\bar{h}^{(1)} - (2\alpha + 1)\bar{h}^{(2)}}{\alpha + 1},\, h^{(2)}_i \big\rangle_{\mathcal{L}}$, respectively, and $0 < \alpha = -\langle \bar{h}^{(1)}, \bar{h}^{(2)} \rangle_{\mathcal{L}}$.

In addition to providing a compact closed-form expression, Prop. 2 gives the proposed translation based on Riemannian geometry an interpretation of standard mean alignment in linear vector spaces. It implies that the alignment is nothing but subtracting the mean of the source set $\bar{h}^{(2)}$ from each vector in $\mathcal{H}^{(2)}$, and adding the mean of the target set $\bar{h}^{(1)}$ (with the appropriate scales).

Proposition 3. The map $\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}$ preserves distances (i.e., it is an isometry): $d_{\mathbb{L}^d}(h^{(2)}_i, h^{(2)}_j) = d_{\mathbb{L}^d}\big(\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}(h^{(2)}_i), \Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}(h^{(2)}_j)\big)$, (5) for any two points $h^{(2)}_i, h^{(2)}_j \in \mathcal{H}^{(2)}$.

Let $\gamma(t)$ be the unique geodesic path from $\bar{h}^{(2)}$ to $\bar{h}^{(1)}$ such that $\gamma(0) = \bar{h}^{(2)}$ and $\gamma(1) = \bar{h}^{(1)}$, and let $\gamma'(0) \in T_{\bar{h}^{(2)}}\mathbb{L}^d$ and $\gamma'(1) \in T_{\bar{h}^{(1)}}\mathbb{L}^d$ be the corresponding velocities, respectively.

Proposition 4. The map $\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}$ aligns geodesic velocities, i.e., given the mapping of the geodesic velocities to the manifold $\mathbb{L}^d \ni v_0 = \mathrm{Exp}_{\bar{h}^{(2)}}(\gamma'(0)) = \bar{h}^{(1)}$ and $\mathbb{L}^d \ni v_1 = \mathrm{Exp}_{\bar{h}^{(1)}}(\gamma'(1))$, we have $\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}(v_0) = v_1$. (6)

Isometry is determined up to rotation, a fact that can be problematic for alignment. For example, any H-unitary matrix [16] can be an isometric function in $\mathbb{L}^d$. When landmarks are given, they can be used to alleviate this redundancy. However, in the purely unsupervised setting we consider, other data-driven cues are required. Prop. 4 implies that the proposed translation based on PT fixes some of these rotational degrees of freedom by aligning the geodesic velocities. In Section 3.3 we revisit this issue. Now, with a slight abuse of notation, let $\tilde{\mathcal{H}}^{(2)} = \Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}(\mathcal{H}^{(2)})$.

Proposition 5. Consider two subsets $A, B \subset \mathcal{H}^{(2)}$ and their translations $\tilde{A} = \Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}(A)$, $\tilde{B} = \Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}(B) \subset \tilde{\mathcal{H}}^{(2)}$. Let $\bar{a} = m(A)$, $\bar{b} = m(B)$, $\bar{\tilde{a}} = m(\tilde{A})$, and $\bar{\tilde{b}} = m(\tilde{B})$ be the Riemannian means of the subsets. Then, $\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}} \circ \Gamma_{\bar{a} \to \bar{b}} = \Gamma_{\bar{\tilde{a}} \to \bar{\tilde{b}}} \circ \Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}$. (7)

In the context of the alignment problem, the importance of Prop. 5 is the following.
Suppose the two sets correspond to data measured at two labs (denoted with and without a tilde), and suppose each set was acquired by two types of equipment (denoted by $A$ and $B$). Prop. 5 implies that aligning data from the different labs and then aligning data acquired using the different equipment is equivalent to first aligning the different equipment and then the different labs, i.e., any order of the two alignments generates the same result. Seemingly, this is a natural property in Euclidean spaces. However, on a Riemannian manifold, it is not a trivial result, and it holds for the transport along the geodesic path. See Appendix A for counter-examples.

3.2 Riemannian scaling

Let $d^{(1)}$ and $d^{(2)}$ denote the Riemannian dispersions of $\mathcal{H}^{(1)}$ and $\mathcal{H}^{(2)}$. By Propositions 1 and 3, $\bar{h}^{(1)}$ and $d^{(2)}$ are the mean and dispersion of $\tilde{\mathcal{H}}^{(2)}$. Here, our goal is to align the Riemannian dispersions of $\mathcal{H}^{(1)}$ and $\tilde{\mathcal{H}}^{(2)}$. For this purpose, we propose the scaling function $\Upsilon^s_{\bar{h}^{(1)}} : \mathbb{L}^d \to \mathbb{L}^d$, given by $\Upsilon^s_{\bar{h}^{(1)}}(\tilde{h}^{(2)}_i) = \gamma_i(s)$, (8) where $s = \sqrt{d^{(1)}/d^{(2)}}$ is the scaling factor and $\gamma_i(t)$ is the geodesic path from $\bar{h}^{(1)}$ to $\tilde{h}^{(2)}_i$ such that $\gamma_i(0) = \bar{h}^{(1)}$ and $\gamma_i(1) = \tilde{h}^{(2)}_i$. See Fig. B.2 in Appendix B for illustration.

Proposition 6. The dispersion of the rescaled set $\hat{\mathcal{H}}^{(2)} = \Upsilon^s_{\bar{h}^{(1)}}(\tilde{\mathcal{H}}^{(2)})$ is $d^{(1)}$.

3.3 Riemannian wrapped rotation

The purpose of this component is to align the orientation of the distributions of the two sets after translation and scaling, namely, after aligning their first and second moments. The proposed rotation function $\Theta_{\bar{h}^{(1)}} : \mathbb{L}^d \to \mathbb{L}^d$ consists of (i) mapping the points from the manifold $\mathbb{L}^d$ to the tangent space $T_{\bar{h}^{(1)}}\mathbb{L}^d$, (ii) mapping to $\mathbb{R}^d$, (iii) rotating in $\mathbb{R}^d$, and (iv) mapping back to the tangent space and then to the manifold. We perform the rotation in $\mathbb{R}^d$, which we term wrapped rotation, rather than a direct rotation on the manifold $\mathbb{L}^d$ or on the tangent space $T_{\bar{h}^{(1)}}\mathbb{L}^d$, for the following reasons. First, the frequently used rotation map in $\mathbb{L}^d$ [51] does not necessarily preserve the Riemannian mean, and in our context, it might reverse the mean alignment. Second, rotation applied directly to the tangent space $T_{\bar{h}^{(1)}}\mathbb{L}^d$ does not guarantee that the rotated points remain on the same tangent space. Third, applying rotation in the Euclidean vector space that is isometric to the tangent space is less efficient and stable, and it obtains slightly worse empirical results (see Appendix D.4 for details). Last, applying the rotation in $\mathbb{R}^d$ allows us to use the standard Euclidean rotation using SVD. In Section 4, we empirically demonstrate the advantage of the proposed rotation compared to the alternatives.

Definition 1. Let the mapping function $P_{\bar{h}^{(1)}} : T_{\bar{h}^{(1)}}\mathbb{L}^d \to \mathbb{R}^d$ defined on the tangent space at $\bar{h}^{(1)}$ and its inverse map be the following functions defined by $P_{\bar{h}^{(1)}}(v) := [v(2), \ldots, v(d+1)]^\top \in \mathbb{R}^d$ and $P^{-1}_{\bar{h}^{(1)}}(s) := \big[\frac{\langle s,\, P_{\bar{h}^{(1)}}(\bar{h}^{(1)}) \rangle}{\bar{h}^{(1)}(1)},\, s^\top\big]^\top \in T_{\bar{h}^{(1)}}\mathbb{L}^d$, (9) where $s \in \mathbb{R}^d$ and $\langle \cdot, \cdot \rangle$ is the standard Euclidean inner product.

Note that removing the first element of $v$ is valid due to the constraint imposed on the vector elements in the tangent space by definition. Indeed, no information is lost and the mapping is invertible. The first step in our rotation component is to map the points in $\mathcal{H}^{(1)}$ and in $\hat{\mathcal{H}}^{(2)}$ to the tangent space at $\bar{h}^{(1)}$: $v^{(1)}_i = \mathrm{Log}_{\bar{h}^{(1)}}(h^{(1)}_i)$ for $i = 1, \ldots, N_1$, and $v^{(2)}_i = \mathrm{Log}_{\bar{h}^{(1)}}(\hat{h}^{(2)}_i)$ for $i = 1, \ldots, N_2$. In the second step, we map the points by the mapping function in Definition 1 and re-center them: $s^{(k)}_i = P_{\bar{h}^{(1)}}(v^{(k)}_i) - \bar{s}^{(k)}$ for $i = 1, \ldots, N_k$, where $\bar{s}^{(k)} = \frac{1}{N_k}\sum_{i=1}^{N_k} P_{\bar{h}^{(1)}}(v^{(k)}_i)$ for $k = 1, 2$ is the mean vector of the projections. Then, the mapped and centered points (in $\mathbb{R}^d$) are collected into matrices: $S^{(k)} = [s^{(k)}_1, s^{(k)}_2, \ldots, s^{(k)}_{N_k}] \in \mathbb{R}^{d \times N_k}$. (10) In the third step, for each set $k = 1, 2$, we compute the rotation matrix $U^{(k)} \in \mathbb{R}^{d \times d}$ by applying SVD to the matrix $S^{(k)} = U^{(k)} \Lambda^{(k)} (E^{(k)})^\top$. Since the left-singular vectors are determined up to a sign, we propose to align their signs as follows: $u^{(2)}_i \leftarrow \mathrm{sign}(\langle u^{(2)}_i, u^{(1)}_i \rangle)\, u^{(2)}_i$, where $u^{(1)}_i$ and $u^{(2)}_i$ are the $i$-th left-singular vectors of the two sets, resulting in modified rotation matrices $U^{(k)}$. Finally, we apply the rotation to $\hat{\mathcal{H}}^{(2)}$ by $\Theta^U_{\bar{h}^{(1)}}(\hat{h}^{(2)}_i) = \mathrm{Exp}_{\bar{h}^{(1)}}\Big(P^{-1}_{\bar{h}^{(1)}}\Big(U^\top\big(P_{\bar{h}^{(1)}}(\mathrm{Log}_{\bar{h}^{(1)}}(\hat{h}^{(2)}_i)) - \bar{s}^{(2)}\big) + \bar{s}^{(2)}\Big)\Big)$, (11) where $U = U^{(1)} (U^{(2)})^\top$.

Proposition 7. The wrapped rotation is bijective, and the inverse is given by $(\Theta^U_{\bar{h}^{(1)}})^{-1} = \Theta^{U^\top}_{\bar{h}^{(1)}}$. (12)

3.4 Analysis

Putting all three components together, the proposed HPA that aligns $\mathcal{H}^{(2)}$ with $\mathcal{H}^{(1)}$ culminates in the composition of translation, scaling, and rotation: $\Theta^U_{\bar{h}^{(1)}} \circ \Upsilon^s_{\bar{h}^{(1)}} \circ \Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}$. (13) As in most PA schemes, the order of the three components is important. Yet, the proposed components allow us a certain degree of freedom, as indicated in the following results.

Proposition 8. The Riemannian translation and the Riemannian scaling commute w.r.t. the Riemannian means $\bar{h}^{(1)}$ and $\bar{h}^{(2)}$: $\Upsilon^s_{\bar{h}^{(1)}} \circ \Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}} = \Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}} \circ \Upsilon^s_{\bar{h}^{(2)}}$. (14) Note that $\Upsilon^s_{\bar{h}^{(1)}}$ and $\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}$ do not necessarily commute: $\Upsilon^s_{\bar{h}^{(1)}} \circ \Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}} \neq \Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}} \circ \Upsilon^s_{\bar{h}^{(1)}}$.

Proposition 9. The Riemannian scaling and the wrapped rotation commute: $\Upsilon^s_{\bar{h}^{(1)}} \circ \Theta^U_{\bar{h}^{(1)}} = \Theta^U_{\bar{h}^{(1)}} \circ \Upsilon^s_{\bar{h}^{(1)}}$. (15)

We note that the rotation does not commute with the translation, because PT only preserves the local covariant derivative on the tangent space but might cause rotation and distortion along the transportation. Therefore, the rotation is required to be the last component of our HPA. Thus far, we did not present a model for the discrepancy between the two sets, nor did we present the proposed HPA as optimal with respect to some criterion. In the following result, we show that if the discrepancy between the sets can be expressed as a composition of translation, scaling, and rotation, then the two sets can be perfectly aligned using HPA.

Proposition 10. Let $\eta : \mathbb{L}^d \to \mathbb{L}^d$ be a map, given by $\eta = \Theta^U_{\bar{h}^{(1)}} \circ \Upsilon^s_{\bar{h}^{(1)}} \circ \Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}$. If $\mathcal{H}^{(1)} = \{h^{(1)}_i = \eta(h^{(2)}_i)\}_{i=1}^{N_2}$, then $h^{(2)}_i = (\Theta^{U'}_{\bar{h}^{(2)}} \circ \Upsilon^{1/s}_{\bar{h}^{(2)}} \circ \Gamma_{\bar{h}^{(1)} \to \bar{h}^{(2)}})(h^{(1)}_i)$, $i = 1, \ldots, N_2$, (16) where $U' \in O(d)$.

Note that HPA consists of the sequence of Riemannian translation, Riemannian scaling, and wrapped rotation. The domain and range of each component is the manifold $\mathbb{L}^d$. Yet, the first and last operations of each component are the Logarithmic and Exponential maps that project a point from the manifold to the tangent space, and vice versa, respectively. This allows us to propose an efficient implementation of the sequence without the back and forth projections, as described in Appendix C.

3.5 Extension to multiple sets

We can naturally scale up the setting to support the alignment of $K > 2$ sets, denoted by $\mathcal{H}^{(k)} = \{h^{(k)}_i\}_{i=1}^{N_k}$, where $k \in \{1, 2, \ldots, K\}$. Let $\bar{h}^{(k)}$ and $d^{(k)}$ be the Riemannian mean and dispersion of the $k$-th dataset, respectively. In addition, let $\bar{h}$ be the global Riemannian mean of $\{\bar{h}^{(k)}\}_{k=1}^K$. We propose to transport the points of the $k$-th set using $\Gamma_{\bar{h}^{(k)} \to \bar{h}}$. Next, the Riemannian dispersion of each set is set to $1$ by applying $\Upsilon^s_{\bar{h}}$ with $s = 1/\sqrt{d^{(k)}}$. Finally, the wrapped rotation is applied to all the datasets on the mapping of the tangent space $T_{\bar{h}}\mathbb{L}^d$ and then mapped back to the manifold $\mathbb{L}^d$. The first set is designated as the reference set, and all other rotation matrices $U^{(k)}$ are updated according to $u^{(k)}_i \leftarrow \mathrm{sign}(\langle u^{(k)}_i, u^{(1)}_i \rangle)\, u^{(k)}_i$. The proposed HPA for multiple sets is summarized in Algorithm 1, and some implementation remarks appear in Appendix C.

Algorithm 1 Hyperbolic Procrustes analysis
Input: $K$ sets of hyperbolic points $\mathcal{H}^{(1)} = \{h^{(1)}_i\}_{i=1}^{N_1}, \ldots, \mathcal{H}^{(K)} = \{h^{(K)}_i\}_{i=1}^{N_K}$
Output: $K$ aligned sets of hyperbolic points $\breve{\mathcal{H}}^{(1)} = \{\breve{h}^{(1)}_i\}_{i=1}^{N_1}, \ldots, \breve{\mathcal{H}}^{(K)} = \{\breve{h}^{(K)}_i\}_{i=1}^{N_K}$
1: for each set $\mathcal{H}^{(k)}$ do
2:   compute the Riemannian mean $\bar{h}^{(k)}$ and dispersion $d^{(k)}$
3: end for
4: compute $\bar{h}$, the global Riemannian mean of $\{\bar{h}^{(k)}\}_{k=1}^K$
5: for each set $\mathcal{H}^{(k)}$ do
6:   apply the Riemannian translation $\tilde{h}^{(k)}_i = \Gamma_{\bar{h}^{(k)} \to \bar{h}}(h^{(k)}_i)$ // Eq. (2)
7:   apply the Riemannian scaling $\hat{h}^{(k)}_i = \Upsilon^s_{\bar{h}}(\tilde{h}^{(k)}_i)$ with $s = 1/\sqrt{d^{(k)}}$ // Eq. (8)
8:   apply the wrapped rotation $\breve{h}^{(k)}_i = \Theta^U_{\bar{h}}(\hat{h}^{(k)}_i)$ with $U = U^{(1)} (U^{(k)})^\top$ // Eq. (11)
9: end for

4 Experimental results

We apply HPA to simulations and to three biomedical datasets.¹ In addition, we test HPA on the MNIST [31] and USPS [23] datasets, which arguably do not have a distinct hierarchical structure. Nonetheless, we demonstrate in Appendix D that our HPA is highly effective in aligning these two datasets. All the experiments are label-free. We compare the obtained results to the following alignment methods: (i) PAH [51], which is applied only to the simulated data since it requires the existence of a one-to-one correspondence between the sets, (ii) only the Riemannian translation (RT), (iii) OT in hyperbolic space with the weighted Fréchet mean (HOT-F), extended to an unsupervised setting according to [54], (iv) OT with W-linear map (HOT-L) [22], and (v) hyperbolic mapping estimation (HOT-ME) [22]. As a baseline, we present the results obtained before the alignment (Baseline). For more details on the experimental setting, see Appendix C.

4.1 Simulations

The synthetic data in $\mathbb{L}^d$ are generated using the sampling scheme described in Section 2 based on [39] (a code sketch of this pipeline follows below). Given an arbitrary point $\mu \in \mathbb{L}^d$ and an arbitrary SPD matrix $\Sigma \in \mathbb{R}^{d \times d}$, we generate a set of $N$ points $Q^{(1)} = \{q^{(1)}_i\}_{i=1}^N$ centered at $\mu$ by $\mathbb{L}^d \ni q^{(1)}_i = \mathrm{Exp}_\mu(\mathrm{PT}_{\mu^0 \to \mu}(v^{(1)}_i))$, where $\mu^0 = [1, 0^\top]^\top$ is the origin, $v^{(1)}_i = [0, \tilde{v}^{(1)\top}_i]^\top$, and $\tilde{v}^{(1)}_i \sim \mathcal{N}(0, \Sigma)$. Next, we generate three noisy and distorted versions of $Q^{(1)}$. The first noisy set $Q^{(2)} = \{q^{(2)}_i\}_{i=1}^N$ is generated as proposed in [51] by $q^{(2)}_i = L T_{\epsilon_i} q^{(1)}_i$, where $T_{\epsilon_i}$ is a hyperbolic translation defined by $T_{\epsilon_i} = [\sqrt{1 + \epsilon_i^\top \epsilon_i},\ \epsilon_i^\top;\ \epsilon_i,\ (I + \epsilon_i \epsilon_i^\top)^{1/2}]$, $\epsilon_i$ is sampled from $\mathcal{N}(0, \sigma^2 I)$ with variance $\sigma^2$, and $L$ is a random H-unitary matrix [16]. Another noisy set, denoted as $Q^{(3)} = \{q^{(3)}_i\}_{i=1}^N$, is generated by $q^{(3)}_i = L(\mathrm{Exp}_\mu(\mathrm{PT}_{\mu^0 \to \mu}(u^{(1)}_i)))$, where $u^{(1)}_i = [0, (\tilde{v}^{(1)}_i + \epsilon_i)^\top]^\top$. Here, the noise is added to the tangent space at $\mu^0$. Finally, let $Q^{(4)} = \{q^{(4)}_i\}_{i=1}^N$ be a distorted set, given by $q^{(4)}_i = f_{\mu'}(q^{(3)}_i)$, where $f_{\mu'}(x) = \cosh(\|u\|_{\mathcal{L}} t)\, \mu' + \sinh(\|u\|_{\mathcal{L}} t)\, \frac{u}{\|u\|_{\mathcal{L}}}$ and $u = \mathrm{Log}_{\mu'}(x)$, for an arbitrary (fixed) $\mu' \in \mathbb{L}^d$ and $t > 0$.

¹Our code is available at https://github.com/RonenTalmonLab/HyperbolicProcrustesAnalysis.
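A minimal sketch of this wrapped-normal generation pipeline (our own illustration, not the repository above; it assumes the `exp_map` helper from the earlier sketch):

```python
import numpy as np

def parallel_transport(x, y, v):
    """PT_{x->y}(v) along the geodesic, per the closed form in Section 2."""
    inner = lambda a, b: -a[0] * b[0] + a[1:] @ b[1:]   # Lorentzian inner product
    alpha = -inner(x, y)
    return v + inner(y - alpha * x, v) / (alpha + 1.0) * (x + y)

def sample_set(mu, Sigma, n, rng=np.random.default_rng(0)):
    """Generate Q^(1): sample in R^d, lift to T_{mu0} L^d, transport to mu, Exp."""
    d = Sigma.shape[0]
    mu0 = np.zeros(d + 1); mu0[0] = 1.0                 # origin of L^d
    pts = []
    for _ in range(n):
        v = np.concatenate(([0.0], rng.multivariate_normal(np.zeros(d), Sigma)))
        pts.append(exp_map(mu, parallel_transport(mu0, mu, v)))
    return np.stack(pts)
```

The noisy variants $Q^{(2)}$, $Q^{(3)}$, and $Q^{(4)}$ would then be obtained by perturbing the tangent vectors or post-multiplying by the random hyperbolic isometries described above.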
We apply Algorithm 1 to align the three pairs of sets $\{Q^{(1)}, Q^{(2)}\}$, $\{Q^{(1)}, Q^{(3)}\}$, and $\{Q^{(1)}, Q^{(4)}\}$, setting $N = 100$, $\sigma = 1$, and $d \in \{3, 5, 10, 20, \ldots, 40\}$. Each experiment is repeated 10 times with different values of $\mu$, $\Sigma$, $\mu'$ and $t$. To evaluate the alignment, we use the pairwise discrepancy based on the hidden one-to-one correspondence, given by $\varepsilon(Q^{(1)}, Q^{(j)}) = \frac{1}{N} \sum_{i=1}^N d^2_{\mathbb{L}^d}(q^{(1)}_i, q^{(j)}_i)$, where $j \in \{2, 3, 4\}$. The discrepancy as a function of the dimension $d$ is shown in Fig. 2. We observe that the proposed HPA has lower discrepancy relative to the other label-free methods. Specifically, it outperforms OT-based methods that are designed to match the densities. Furthermore, the proposed HPA is stable, in contrast to HOT-L, which is highly sensitive to the noise and distortion introduced in $Q^{(3)}$ and $Q^{(4)}$. Interestingly, we remark that the discrepancies of RT and PAH are very close, empirically showing that RT alone is comparable to PAH. In addition, note that HPA is permutation-invariant and does not require one-to-one correspondence as PAH does. We report the running time in Appendix D and demonstrate that HPA is more efficient than HOT-F and HOT-ME.

4.2 Batch effect removal

We consider bioinformatics datasets consisting of gene expression data and CyTOF. Representing such data in hyperbolic spaces was shown to be informative and useful [25], implying that such data have an underlying inherent hierarchical structure. Batch effects [43] arise from experimental variations that can be attributed to the measurement device or other environmental factors. Batch correction is typically a critical precursor to any subsequent analysis and processing. Three batch effect removal tasks are examined. The first task involves breast cancer (BC) gene expression data.
We consider two publicly available datasets: METABRIC [8] and TCGA [26], consisting of samples from five breast cancer subtypes. The batch effect stems from different profiling techniques: gene expression microarray and RNA sequencing. In the second task, three cohorts of lung cancer (LC) gene expression data [21] are considered, consisting of samples from three lung cancer subtypes. The data were collected using gene expression microarrays at three different sites (a likely source of batch effects): Stanford University (ST), University of Michigan (UM), and Dana-Farber Cancer Institute (D-F). The last task involves CyTOF data [48] consisting of peripheral blood mononuclear cells (PBMCs) collected from two multiple sclerosis patients over four days: two days before treatment (BT) and two days after treatment (AT). These $8 = 2 \times 2 \times 2$ batches were collected with or without PMA/ionomycin-stimulated PBMCs. We aim to remove the batch effects between the two different days from the same condition (BT/AT) and from the same patient. In each batch removal task, we first learn an embedding of the data from all the batches into the Lorentz model $\mathbb{L}^d$ [42]. Then, HPA is applied to the embedded points in $\mathbb{L}^d$. Fig. 3 shows a visualization of the embedding of the two breast cancer datasets before and after HPA. For visualization, we project the points in $\mathbb{L}^3$ to the 3D Poincaré ball. Before the alignment, the dominant factor separating the patients' samples (points) is the batch. In contrast, after the alignment, the batch effect is substantially suppressed (visually) and the factors separating the points are dominated by the cancer subtype. We evaluate the quality of the alignment in two aspects using objective measures: (i) k-NN classification, with leave-one-batch-out cross-validation, is utilized for assessing the alignment of the intrinsic structure, and (ii) MMD [19] is used for assessing the distribution alignment quality. For the classification, we view the five subtypes of BC, the three subtypes of LC, and the presence of stimulated cells in CyTOF as the labels in the respective tasks. In addition to the results of the different alignment methods, we report the k-NN classification based only on a single batch (S-Baseline), which indicates the adequacy of the representation in hyperbolic space to the task at hand. Table 1 depicts the k-NN classification accuracy obtained for the best k per method, and Table 2 shows the MMD. In each task, we set the dimension of the Lorentz model $d$ to the dimension in which the best empirical single-batch performance is obtained (S-Baseline). We note that similar results and trends are obtained for various dimensions. Additional results for various k values and an ablation study, showing that the combination of all three components yields the best classification results, are reported in Appendix D. Although the OT-based methods obtain the best matching between the distributions of the batches, HPA outperforms all of them in all three tasks in terms of classification (see Table 1). In the two gene expression tasks, where the data have multiple labels and we align multiple batches, the advantage of HPA compared to the other methods is particularly significant. In Appendix D, we demonstrate HPA's out-of-sample capabilities on the CyTOF data by learning the batch correction map between the different days from one patient and applying it to the data of the other patient.
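For the first evaluation measure, here is a minimal sketch of the leave-one-batch-out k-NN protocol described above (our own illustration, not the authors' implementation; the function names and the geodesic-distance callback `dist` are assumptions):

```python
import numpy as np
from collections import Counter

def leave_one_batch_out_knn(batches, labels, dist, k=5):
    """k-NN accuracy per held-out batch: each batch is classified using all
    remaining batches as the reference set. `dist(x, X)` returns the
    geodesic distances from point x to the rows of X."""
    accs = []
    for b in range(len(batches)):
        ref_X = np.concatenate([batches[i] for i in range(len(batches)) if i != b])
        ref_y = np.concatenate([labels[i] for i in range(len(batches)) if i != b])
        hits = 0
        for x, y in zip(batches[b], labels[b]):
            nn = np.argsort(dist(x, ref_X))[:k]              # k nearest neighbors
            pred = Counter(ref_y[nn]).most_common(1)[0][0]   # majority vote
            hits += int(pred == y)
        accs.append(hits / len(batches[b]))
    return accs
```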
4.3 Discussion

Alignment methods based on density matching, such as OT-based methods, often overlook an important aspect of purely unsupervised settings. Although sample density is the main data property that can be, and needs to be, aligned, preserving the intrinsic structure/geometry of the sets is important as well, as it might be tightly related to the hidden labels. Indeed, we see in our experiments that OT-based methods provide a good density alignment (reducing the inter-set variability), as demonstrated by small MMD values (see Table 2). However, the intrinsic structure of the sets (the intra-set variability) is not preserved, as evidenced by the resulting poor (hidden) label matching, conveyed by the k-NN classification performance (see Table 1). This is also illustrated in the right panel of Fig. 1, where the three OT-based methods provide good global alignment of the sets, yet the intrinsic structure is not kept, as implied by the poor color matching. In contrast to OT-based methods, HPA does not explicitly aim to match densities, and thus, it obtains slightly worse MMD performance compared to OT-based methods. However, HPA matches the first two moments of the density and includes the rotation component, which addresses one of the fundamental limitations of OT-based alignment, namely that OT cannot recover volume-preserving maps [4, 36]. As seen in the simulation and experimental results and illustrated in the right panel of Fig. 1, we still obtain a good global alignment and simultaneously preserve the intrinsic structure, allowing for high classification performance. We remark that in the synthetic examples, there is a (hidden) one-to-one correspondence between the sets, and therefore a one-to-one discrepancy can be computed (instead of, or in addition to, MMD). When there is such a correspondence, OT still cannot recover volume-preserving maps, while HPA can mitigate noise and distortions.

5 Conclusion

We introduced HPA for label-free alignment of data in the Lorentz model. Based on Riemannian geometry, we presented new translation and scaling operations that align the first and second Riemannian moments, as well as a wrapped rotation that aligns the orientation in the hyperboloid model. Our theoretical analysis provides further insight and highlights properties that may be useful for practitioners. We empirically showed in simulations that HPA is stable under noise and distortions. Experimental results involving purely unsupervised batch correction of multiple bioinformatics datasets with multiple labels were demonstrated. Beyond alignment and batch effect removal, our method can be viewed as a type of domain adaptation or a precursor of transfer learning that relies on purely geometric considerations, exploiting the geometric structure of the data as well as the geometric properties of the space of the data. In addition, it can be utilized for multimodal data fusion and geometric registration of shapes with hierarchical structure.

Acknowledgments and Disclosure of Funding

We thank the reviewers for their important comments and suggestions, and we thank Thomas Dagès for the helpful discussion. The work of YEL and RT was supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 802735-ERC-DIFFOP. The work of YK was supported by the National Institutes of Health grants R01GM131642, UM1PA05141, P50CA121974, and U01DA053628.
1. What is the main contribution of the paper regarding unsupervised alignment in hyperbolic spaces? 2. What are the strengths of the proposed hyperbolic procrustes analysis (HPA) method? 3. What are the concerns raised in the review regarding the usage of wrapped rotations instead of Euclidean rotations in the tangent space? 4. How does the reviewer suggest improving the rotation method in HPA? 5. What is the advantage of the suggested rotation method according to the reviewer?
Summary Of The Paper Review
Summary Of The Paper
This paper studies the unsupervised alignment of hierarchical datasets embedded in hyperbolic spaces, by proposing a hyperbolic Procrustes analysis (HPA) method. HPA uses new hyperbolic translation, rotation and scaling methods to align datasets. Experiments on synthetic and real datasets show that HPA is able to remove noise from data collection to align datasets.

Review
Strengths: The paper is well-written. The theorems and propositions in this paper are precisely formulated and clearly written. The authors clearly explained why each theorem is important to the paper. Experiments are convincing, and the downstream tasks in 4.2 are good experiments to show the quality and utility of alignment.

Suggestions/Comments: Line 167 ("Second, rotation applied directly to the tangent space [...] does not guarantee that the rotated points remain on the same tangent space"). I don't understand this sentence. The tangent space of L^d at \bar{h}^{(1)}, just like the tangent space of any Riemannian manifold at any point, is isometric to the Euclidean space R^d. Thus, we can identify it with R^d using an isometry and then perform Euclidean rotations there (then identify things back to the tangent space). If rotations in Euclidean space do not throw points out of the space, why would that happen for rotations in the tangent space T_{\bar{h}^{(1)}} L^d?

If, instead of using the wrapped rotations, we (1) map points to the tangent space T_{\bar{h}^{(1)}} L^d using the log map, (2) perform Euclidean rotations on this tangent space around the center \bar{h}^{(1)}, and (3) map the points back to L^d using the exponential map, then the resulting map has the advantage of being an isometry of L^d: in fact, it will just be a rotation of L^d around the point \bar{h}^{(1)}. In particular, it preserves the mean \bar{h}^{(1)}. Note that the authors' comment in line 165 (about rotation maps in L^d not necessarily preserving the mean) does not really apply here: rotations in L^d can totally preserve the mean, as long as we rotate around the mean itself, as suggested above. While the rotation used in [46] indeed might not preserve the mean \bar{h}, that is only because that rotation is based at the "bottom" of the hyperboloid and not the mean \bar{h}. I don't see why we have to stick with that particular rotation, given that the hyperbolic space is homogeneous, so there is nothing intrinsically special about the "bottom" point.

To summarize: I feel that the rotation of L^d around \bar{h} has a few advantages over the wrapped rotations suggested in 3.3: it is (1) an isometry, (2) quite natural, and (3) intrinsic to the geometry of the hyperbolic space (i.e. not dependent on any specific model or coordinate system). Meanwhile, it still preserves the mean \bar{h} and has a closed-form expression, as the authors desire. Therefore, I do not understand why the remarks in lines 165-168 are valid reasons to discard this rotation. Perhaps the authors have tried this and found that it performed worse empirically? If so, it would be nice to see that analysis.

PS. Note that if we replace the wrapped operations by the usual rotation above, then the whole alignment method is essentially just (1) map points to tangent spaces using logarithm maps, (2) translate them to the same tangent space using parallel transport, (3) perform Euclidean alignment on the tangent space, and (4) map points back to L^d using the exponential map.
This is a simple baseline that can be efficiently computed and can leverage whatever Euclidean alignment techniques we want.
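For concreteness, a minimal sketch of the baseline the reviewer describes (our own illustration, reusing the `exp_map`/`log_map`/`parallel_transport`/`frechet_mean` helpers from the earlier sketches; `euclidean_align` is a placeholder for any Euclidean alignment, e.g. classical Procrustes):

```python
import numpy as np

def tangent_space_baseline(H1, H2, euclidean_align):
    """Log both sets, transport set 2 to the tangent space at the mean of
    set 1, run a Euclidean alignment there, and Exp back to L^d."""
    m1, m2 = frechet_mean(list(H1)), frechet_mean(list(H2))
    V1 = np.stack([log_map(m1, h) for h in H1])
    V2 = np.stack([parallel_transport(m2, m1, log_map(m2, h)) for h in H2])
    V2_aligned = euclidean_align(V1, V2)   # e.g., rotation/scaling fit in R^{d+1}
    return np.stack([exp_map(m1, v) for v in V2_aligned])
```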
NIPS
Title Hyperbolic Procrustes Analysis Using Riemannian Geometry

Abstract
Label-free alignment between datasets collected at different times, locations, or by different instruments is a fundamental scientific task. Hyperbolic spaces have recently provided a fruitful foundation for the development of informative representations of hierarchical data. Here, we take a purely geometric approach for label-free alignment of hierarchical datasets and introduce hyperbolic Procrustes analysis (HPA). HPA consists of new implementations of the three prototypical Procrustes analysis components: translation, scaling, and rotation, based on the Riemannian geometry of the Lorentz model of hyperbolic space. We analyze the proposed components, highlighting their useful properties for alignment. The efficacy of HPA, its theoretical properties, stability, and computational efficiency are demonstrated in simulations. In addition, we showcase its performance on three batch correction tasks involving gene expression and mass cytometry data. Specifically, we demonstrate high-quality unsupervised batch effect removal from data acquired at different sites and with different technologies that outperforms recent methods for label-free alignment in hyperbolic spaces.

1 Introduction

A key scientific task in modern data analysis is the alignment of data. The need for alignment often arises since data are acquired in multiple domains, under different environmental conditions, using various acquisition equipment, and at different sites. This paper focuses on the problem of label-free alignment of data embedded in hyperbolic spaces. Recently, hyperbolic spaces have gained prominence in geometric representation learning. These non-Euclidean spaces have become popular since they provide a natural embedding of hierarchical data thanks to the exponential growth of the lengths of their geodesic paths [41, 42, 30, 15, 14, 6, 32]. The problem of alignment of data embedded in hyperbolic spaces has been extensively studied, e.g., in the context of natural language processing [49], ontology matching [10], matching two data modalities [40], and improving the embedding in hyperbolic spaces [2]. A few of these studies are based on optimal transport (OT) [2, 22], a classical problem in mathematics [38] that has recently reemerged in modern data analysis, e.g., for domain adaptation [7]. Despite its increasing usage, OT for unsupervised alignment is fundamentally limited [54], since OT (as any density matching approach) cannot recover volume-preserving maps [3, 4, 36]. In this paper, we resort to Procrustes analysis (PA) [17, 18], which is based on purely geometric considerations. PA has been widely used for aligning datasets by eliminating the shift, scaling, and rotational factors. Over the years, it has been successfully applied to various applications, e.g., image registration [34], manifold alignment [52], shape matching [35], domain adaptation [47], and manifold learning [27], to name but a few. Here, we address the problem of label-free matching of hierarchical data embedded in hyperbolic spaces. We present hyperbolic Procrustes analysis (HPA), a new PA method in the Lorentz model of hyperbolic geometry. The main novelty lies in the introduction of new implementations of the three prototypical PA components based on Riemannian geometry. Specifically, translation is viewed as a Riemannian mean alignment, implemented using parallel transport (PT).
Scaling is determined with respect to geodesic paths. Rotation is considered as moment alignment on a mapping of the tangent space of the manifold to a Euclidean vector space. Our analysis provides new derivations in the Riemannian geometry of the Lorentz model and specifies the commuting properties of the HPA components. We show that HPA, compared to existing baselines and OT-based methods, achieves improved alignment in a purely unsupervised setting. In addition, it has a natural and stable out-of-sample extension, it supports both small and big data, and it is computationally efficient. We show application to batch correction in bioinformatics tasks. We present results on both gene expression and mass cytometry (CyTOF) data, exemplifying the generality and broad scope of our method. In contrast to recent works [28, 50], our method does not require landmark correspondence, which is often unavailable in many datasets or hard to obtain. Specifically, we show that batch effects caused by acquisition using different technologies, at different sites, and at different times can be accurately removed, while preserving the intrinsic structure of the data. Our main contributions are as follows. (i) We present a new implementation of PA using the Riemannian geometry of the Lorentz model for unsupervised label-free hierarchical data alignment. (ii) We provide theoretical analysis and justification of our alignment method based on new derivations of Riemannian geometry operations in the Lorentz model. These derivations have their own merit as they could be used in other contexts. (iii) We show experimental results of accurate batch effect removal from several hierarchical bioinformatics datasets without landmark correspondence.

2 Background on hyperbolic geometry

Hyperbolic space is a non-Euclidean space with a negative constant sectional curvature and an underlying geometry that describes tree-like graphs with small distortions [46]. There exist four commonly-used models for hyperbolic spaces: the Poincaré disk model, the Lorentz model (hyperboloid model), the Poincaré half-plane model, and the Beltrami-Klein model. These four models are equivalent and there exist transformations between them. Here, we consider the Lorentz model, and specifically, the upper sheet of the hyperboloid model, because its basic Riemannian operations have simple closed-form expressions and the computation of the geodesic distances is stable [42, 30]. Formally, the upper sheet of the hyperboloid model in a $d$-dimensional hyperbolic space is defined by $\mathbb{L}^d := \{x \in \mathbb{R}^{d+1} \mid \langle x, x \rangle_{\mathcal{L}} = -1,\ x(1) > 0\}$, where $\langle x, y \rangle_{\mathcal{L}} = x^\top H y$ is the Lorentzian inner product and $H \in \mathbb{R}^{(d+1) \times (d+1)}$ is defined by $H = [-1, 0^\top; 0, I_d]$. The Lorentzian norm of a hyperbolic vector $x \in \mathbb{L}^d$ is denoted by $\|x\|_{\mathcal{L}} = \sqrt{\langle x, x \rangle_{\mathcal{L}}}$, with the origin $\mu^0 = [1, 0^\top]^\top \in \mathbb{L}^d$. Let $T_x\mathbb{L}^d$ be the tangent space at $x \in \mathbb{L}^d$, defined by $T_x\mathbb{L}^d := \{v \mid \langle x, v \rangle_{\mathcal{L}} = 0\}$. Consider $x \in \mathbb{L}^d$ and $v \in T_x\mathbb{L}^d$; the geodesic path $\gamma : \mathbb{R}^+_0 \to \mathbb{L}^d$ is defined by $\gamma(t) = \cosh(\|v\|_{\mathcal{L}} t)\, x + \sinh(\|v\|_{\mathcal{L}} t)\, \frac{v}{\|v\|_{\mathcal{L}}}$ with $\gamma(0) = x$ and initial velocity $\gamma'(0) = v$, where $\gamma'(t) := \frac{d}{dt}\gamma(t)$. In addition, the associated geodesic distance is $d_{\mathbb{L}^d}(x, \gamma_v(t)) = \cosh^{-1}(-\langle x, \gamma_v(t) \rangle_{\mathcal{L}})$. The Exponential map, projecting a point $v \in T_x\mathbb{L}^d$ to the manifold $\mathbb{L}^d$, is given by $\mathrm{Exp}_x(v) = \gamma(1) = \cosh(\|v\|_{\mathcal{L}})\, x + \sinh(\|v\|_{\mathcal{L}})\, \frac{v}{\|v\|_{\mathcal{L}}}$. The Logarithmic map, projecting a point $y \in \mathbb{L}^d$ to the tangent space $T_x\mathbb{L}^d$ at $x$, is defined by $\mathrm{Log}_x(y) = \frac{\cosh^{-1}(\alpha)}{\sqrt{\alpha^2 - 1}}\,(y - \alpha x)$, where $\alpha = -\langle x, y \rangle_{\mathcal{L}}$.
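The geodesic distance above has a one-line numerical counterpart; the following is a minimal numpy sketch (ours, not the paper's repository; the clamping is a numerical-safety assumption):

```python
import numpy as np

def lorentz_dist(x, y):
    """Geodesic distance on L^d: d(x, y) = arccosh(-<x, y>_L)."""
    inner = -x[0] * y[0] + x[1:] @ y[1:]            # Lorentzian inner product
    return np.arccosh(np.clip(-inner, 1.0, None))   # clamp for numerical safety
```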
The PT of a vector $v \in T_x\mathbb{L}^d$ along the geodesic path from $x \in \mathbb{L}^d$ to $y \in \mathbb{L}^d$ is defined by $\mathrm{PT}_{x \to y}(v) = v + \frac{\langle y - \alpha x, v \rangle_{\mathcal{L}}}{\alpha + 1}\,(x + y)$, where $\alpha = -\langle x, y \rangle_{\mathcal{L}}$, keeping the metric tensor unchanged. The Riemannian mean $\bar{x}_{\mathcal{X}}$ and the corresponding dispersion $d_{\mathcal{X}}$ of a set $\mathcal{X} = \{x_i \mid x_i \in \mathbb{L}^d\}_{i=1}^n$ are defined using the Fréchet mean [13, 33] by
$$\bar{x}_{\mathcal{X}} := m(\mathcal{X}) = \arg\min_{x \in \mathbb{L}^d} \sum_{i=1}^n d^2_{\mathbb{L}^d}(x, x_i) \quad \text{and} \quad d_{\mathcal{X}} := r(\mathcal{X}) = \min_{x \in \mathbb{L}^d} \frac{1}{n} \sum_{i=1}^n d^2_{\mathbb{L}^d}(x, x_i), \quad (1)$$
where $m : \mathcal{X} \to \mathbb{L}^d$ and $r : \mathcal{X} \to \mathbb{R}^+$. Note that the Fréchet mean of samples on complete, simply connected Riemannian manifolds of non-positive curvature, such as hyperbolic spaces, is guaranteed to exist, and it is unique [24, 44, 1]. The Fréchet mean is commonly computed by the Karcher Flow [24, 20], which is computationally demanding. Importantly, in the considered hyperbolic space, the Fréchet mean can be efficiently obtained using the accurate gradient formulation [33]. Given a vector $x \in \mathbb{L}^d$ and a symmetric and positive-definite (SPD) matrix $\Sigma \in \mathbb{R}^{d \times d}$, the wrapped normal distribution $G(x, \Sigma)$ provides a generative model of hyperbolic samples as follows [39, 11]. First, a vector $v'$ is sampled from $\mathcal{N}(0, \Sigma)$. Then, $0$ is concatenated to the vector $v'$ such that $v = [0, v'^\top]^\top \in T_{\mu^0}\mathbb{L}^d$. Finally, PT from the origin $\mu^0 = [1, 0^\top]^\top$ to $x$ is applied to $v$, and the resulting point is mapped to the manifold using the Exponential map at $x$. The probability density function of this model is given by $\log G(y \mid x, \Sigma) = \log \mathcal{N}(v' \mid 0, \Sigma) - (n - 1) \log\big(\frac{\sinh \|v'\|_2}{\|v'\|_2}\big)$.

3 Hyperbolic Procrustes analysis

Existing methods for data alignment typically seek a function that minimizes a certain cost. A large body of work attempts to match the empirical densities of two datasets, e.g., by minimizing the maximum mean discrepancy (MMD) [48, 29] or solving OT problems [45, 2, 22]. Finding an effective cost function without labels or landmarks is challenging, and minimizing such costs directly often leads to poor alignment in practice (see illustration in Fig. 1). A different well-established approach that applies indirect alignment based on geometric considerations is PA. While preparing this manuscript, another method of PA in hyperbolic spaces (PAH) was presented for matching two sets, assuming that they consist of the same number of points and that there exists a point-wise isometric map between them [51]. We remark that the analysis we present here applies to broader settings and makes no such assumptions. See Appendix E for details on classical PA as well as for comparisons to [51] and to the application of Euclidean PA in the tangent space. We consider two sets of points $\mathcal{H}^{(1)} = \{h^{(1)}_i\}_{i=1}^{N_1}$ and $\mathcal{H}^{(2)} = \{h^{(2)}_i\}_{i=1}^{N_2}$ in $\mathbb{L}^d$. Here, we aim to find a function $\zeta : \mathbb{L}^d \to \mathbb{L}^d$, consisting of three components: translation, scaling, and rotation, that aligns $\mathcal{H}^{(2)}$ with $\mathcal{H}^{(1)}$ in an unsupervised label-free manner, as depicted in Fig. 1. Finding such a function can be viewed as an extension of classical PA from the Euclidean space $\mathbb{R}^{d+1}$ to the Lorentz model $\mathbb{L}^d$. A natural extension to multiple sets is described in Section 3.5. We remark that the statements are written in the context of the problem at hand. In Appendix A, we restate them more generally and present their proofs.

3.1 Riemannian translation

Let $\bar{h}^{(1)}$ and $\bar{h}^{(2)}$ denote the Riemannian means of the sets $\mathcal{H}^{(1)}$ and $\mathcal{H}^{(2)}$, respectively. In this translation component, we find a map $\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}} : \mathbb{L}^d \to \mathbb{L}^d$ that aligns the Riemannian means of the sets; a sketch of the dispersion in Eq. (1) appears below.
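As a small illustration of the quantities in Eq. (1), here is a sketch (ours) of the Riemannian dispersion given a precomputed Fréchet mean; it reuses the `lorentz_dist` helper from the earlier sketch:

```python
import numpy as np

def dispersion(points, mean):
    """Riemannian dispersion r(X) of Eq. (1): mean squared geodesic
    distance from the points to their Fréchet mean."""
    return float(np.mean([lorentz_dist(mean, p) ** 2 for p in points]))
```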
In the spirit of [5, 53], we propose to construct $\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}(h^{(2)}_i)$ as the composition of three Riemannian operations in $\mathbb{L}^d$: the Logarithmic map applied to $h^{(2)}_i$ at $\bar{h}^{(2)}$, PT from $\bar{h}^{(2)}$ to $\bar{h}^{(1)}$ along the geodesic path, and the Exponential map applied to the transported point at $\bar{h}^{(1)}$:
$$\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}(h^{(2)}_i) := \mathrm{Exp}_{\bar{h}^{(1)}}\big(\mathrm{PT}_{\bar{h}^{(2)} \to \bar{h}^{(1)}}\big(\mathrm{Log}_{\bar{h}^{(2)}}(h^{(2)}_i)\big)\big). \quad (2)$$
See Fig. B.1 in Appendix B for illustration. Since the geodesic path between any two points in $\mathbb{L}^d$ is unique [46], $\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}$ is well-defined. The rationale behind the combination of these three Riemannian operations is twofold. First, PT is a map that aligns the means of the sets, while preserving their internal structure. Second, the Logarithmic and Exponential maps compose a map whose domain and range are the Lorentz model $\mathbb{L}^d$ rather than the tangent space, as desired. We make these claims formal in the following results.

Proposition 1. The map $\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}$ defined in Eq. (2) aligns the means of the sets, i.e., it satisfies $\bar{h}^{(1)} = m\big(\{\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}(h^{(2)}_i)\}_{i=1}^{N_2}\big)$, (3) where $m$ is the function defined in Eq. (1).

Proposition 2. The map $\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}(h^{(2)}_i)$ for all $h^{(2)}_i \in \mathcal{H}^{(2)}$ can be recast as:
$$\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}(h^{(2)}_i) = h^{(2)}_i - \beta(h^{(2)}_i \mid \bar{h}^{(1)}, \bar{h}^{(2)})\, \bar{h}^{(2)} + \gamma(h^{(2)}_i \mid \bar{h}^{(1)}, \bar{h}^{(2)})\, \bar{h}^{(1)}, \quad (4)$$
where the functions $\beta$ and $\gamma$ are positive, defined by $0 < \beta(h^{(2)}_i \mid \bar{h}^{(1)}, \bar{h}^{(2)}) = -\big\langle \frac{\bar{h}^{(1)} + \bar{h}^{(2)}}{\alpha + 1},\, h^{(2)}_i \big\rangle_{\mathcal{L}}$ and $0 < \gamma(h^{(2)}_i \mid \bar{h}^{(1)}, \bar{h}^{(2)}) = \big\langle \frac{\bar{h}^{(1)} - (2\alpha + 1)\bar{h}^{(2)}}{\alpha + 1},\, h^{(2)}_i \big\rangle_{\mathcal{L}}$, respectively, and $0 < \alpha = -\langle \bar{h}^{(1)}, \bar{h}^{(2)} \rangle_{\mathcal{L}}$.

In addition to providing a compact closed-form expression, Prop. 2 gives the proposed translation based on Riemannian geometry an interpretation of standard mean alignment in linear vector spaces. It implies that the alignment is nothing but subtracting the mean of the source set $\bar{h}^{(2)}$ from each vector in $\mathcal{H}^{(2)}$, and adding the mean of the target set $\bar{h}^{(1)}$ (with the appropriate scales).

Proposition 3. The map $\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}$ preserves distances (i.e., it is an isometry): $d_{\mathbb{L}^d}(h^{(2)}_i, h^{(2)}_j) = d_{\mathbb{L}^d}\big(\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}(h^{(2)}_i), \Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}(h^{(2)}_j)\big)$, (5) for any two points $h^{(2)}_i, h^{(2)}_j \in \mathcal{H}^{(2)}$.

Let $\gamma(t)$ be the unique geodesic path from $\bar{h}^{(2)}$ to $\bar{h}^{(1)}$ such that $\gamma(0) = \bar{h}^{(2)}$ and $\gamma(1) = \bar{h}^{(1)}$, and let $\gamma'(0) \in T_{\bar{h}^{(2)}}\mathbb{L}^d$ and $\gamma'(1) \in T_{\bar{h}^{(1)}}\mathbb{L}^d$ be the corresponding velocities, respectively.

Proposition 4. The map $\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}$ aligns geodesic velocities, i.e., given the mapping of the geodesic velocities to the manifold $\mathbb{L}^d \ni v_0 = \mathrm{Exp}_{\bar{h}^{(2)}}(\gamma'(0)) = \bar{h}^{(1)}$ and $\mathbb{L}^d \ni v_1 = \mathrm{Exp}_{\bar{h}^{(1)}}(\gamma'(1))$, we have $\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}(v_0) = v_1$. (6)

Isometry is determined up to rotation, a fact that can be problematic for alignment. For example, any H-unitary matrix [16] can be an isometric function in $\mathbb{L}^d$. When landmarks are given, they can be used to alleviate this redundancy. However, in the purely unsupervised setting we consider, other data-driven cues are required. Prop. 4 implies that the proposed translation based on PT fixes some of these rotational degrees of freedom by aligning the geodesic velocities. In Section 3.3 we revisit this issue. Now, with a slight abuse of notation, let $\tilde{\mathcal{H}}^{(2)} = \Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}(\mathcal{H}^{(2)})$.

Proposition 5. Consider two subsets $A, B \subset \mathcal{H}^{(2)}$ and their translations $\tilde{A} = \Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}(A)$, $\tilde{B} = \Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}(B) \subset \tilde{\mathcal{H}}^{(2)}$. Let $\bar{a} = m(A)$, $\bar{b} = m(B)$, $\bar{\tilde{a}} = m(\tilde{A})$, and $\bar{\tilde{b}} = m(\tilde{B})$ be the Riemannian means of the subsets. Then, $\Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}} \circ \Gamma_{\bar{a} \to \bar{b}} = \Gamma_{\bar{\tilde{a}} \to \bar{\tilde{b}}} \circ \Gamma_{\bar{h}^{(2)} \to \bar{h}^{(1)}}$. (7)

In the context of the alignment problem, the importance of Prop. 5 is the following.
Suppose the two sets correspond to data measured at two labs (denoted with and without a tilde), and suppose each set was acquired by two types of equipment (denoted by A and B). Prop. 5 implies that aligning data from the different labs and then aligning data acquired using the different equipment is equivalent to first aligning the different equipment and then the different labs, i.e., any order of the two alignments generates the same result. Seemingly, this is a natural property in Euclidean spaces. However, in a Riemannian manifold, it is not a trivial result, and it holds for the transport along the geodesic path. See Appendix A for counter-examples. 3.2 Riemannian scaling Let d(1) and d(2) denote the Riemannian dispersions of H(1) and H(2). By propositions 1 and 3, h (1) and d(2) are the mean and dispersion of eH(2). Here, our goal is to align the Riemannian dispersions of H(1) and eH(2). For this purpose, we propose the scaling function ⌥s h (1) : L d ! L d, given by ⌥s h (1)(eh(2)i ) = i(s), (8) where s = p d(1)/d(2) is the scaling factor and i(t) is the geodesic path from h (1) to eh(2)i such that i(0) = h (1) and i(1) = eh(2)i . See Fig. B.2 in Appendix B for illustration. Proposition 6. The dispersion of the rescaled set bH(2) = ⌥s h (1)( eH(2)) is d(1). 3.3 Riemannian wrapped rotation The purpose of this component is to align the orientation of the distributions of the two sets after translation and scaling, namely, after aligning their first and second moments. The proposed rotation function ⇥ h (1) : Ld ! Ld consists of (i) mapping the points from the manifold Ld to the tangent space T h (1)L d, (ii) mapping to Rd, (iii) rotating in Rd, and (iv) mapping back to the tangent space and then to the manifold. We perform the rotation in Rd, which we term wrapped rotation, rather than a direct rotation on the manifold Ld or on the tangent space T h (1)L d for the following reasons. First, the frequently used rotation map in Ld [51] does not necessarily preserve the Riemannian mean, and in our context, it might reverse the mean alignment. Second, rotation applied directly to the tangent space T h (1)L d does not guarantee that the rotated points remain on the same tangent space. Third, applying rotation in the Euclidean vector space that is isometric to the tangent space is less efficient and stable, and it obtains slightly worse empirical results (see Appendix D.4 for details). Last, applying the rotation in Rd allows us to use the standard Euclidean rotation using SVD. In Section 4, we empirically demonstrate the advantage of the proposed rotation compared to the alternatives. Definition 1. Let the mapping function P h (1) : T h (1)L d ! R d defined on the tangent space at h (1) and its inverse map be the following functions defined by P h (1)(v) := ⇥ v(2), . . . ,v(d + 1) ⇤> 2 R d and P 1 h (1)(s) := h hs, P h (1)(h (1) )i h (1) (1) , s> i> 2 T h (1)L d, (9) where s 2 Rd and h·, ·i is the standard Euclidean inner product. Note that removing the first element of v is valid due to the constraint imposed on the vector elements in the tangent space by definition. Indeed, no information is lost and the mapping is invertible. The first step in our rotation component is to map the points in H(1) and in bH(2) to the tangent space at h (1) : v(1)i = Logh(1)(h (1) i ) for i = 1, . . . , N1, and v (2) i = Logh(1)( bh(2)i ) for i = 1, . . . , N2. In the second step, we map the points by the mapping function in Definition 1 and re-center them: s (1) i = Ph(1)(v (1) i ) s (1), i = 1, . . . 
3.3 Riemannian wrapped rotation

The purpose of this component is to align the orientation of the distributions of the two sets after translation and scaling, namely, after aligning their first and second moments. The proposed rotation function $\Theta_{\bar{h}^{(1)}} : \mathbb{L}^d \to \mathbb{L}^d$ consists of (i) mapping the points from the manifold $\mathbb{L}^d$ to the tangent space $T_{\bar{h}^{(1)}}\mathbb{L}^d$, (ii) mapping to $\mathbb{R}^d$, (iii) rotating in $\mathbb{R}^d$, and (iv) mapping back to the tangent space and then to the manifold. We perform the rotation in $\mathbb{R}^d$, which we term wrapped rotation, rather than a direct rotation on the manifold $\mathbb{L}^d$ or on the tangent space $T_{\bar{h}^{(1)}}\mathbb{L}^d$, for the following reasons. First, the frequently used rotation map in $\mathbb{L}^d$ [51] does not necessarily preserve the Riemannian mean, and in our context, it might reverse the mean alignment. Second, a rotation applied directly to the tangent space $T_{\bar{h}^{(1)}}\mathbb{L}^d$ does not guarantee that the rotated points remain on the same tangent space. Third, applying the rotation in the Euclidean vector space that is isometric to the tangent space is less efficient and stable, and it obtains slightly worse empirical results (see Appendix D.4 for details). Last, applying the rotation in $\mathbb{R}^d$ allows us to use the standard Euclidean rotation via SVD. In Section 4, we empirically demonstrate the advantage of the proposed rotation compared to the alternatives.

Definition 1. Let the mapping function $P_{\bar{h}^{(1)}} : T_{\bar{h}^{(1)}}\mathbb{L}^d \to \mathbb{R}^d$, defined on the tangent space at $\bar{h}^{(1)}$, and its inverse map be the following functions:
$$P_{\bar{h}^{(1)}}(v) := \big[v(2), \ldots, v(d+1)\big]^\top \in \mathbb{R}^d \quad\text{and}\quad P^{-1}_{\bar{h}^{(1)}}(s) := \Big[\tfrac{\langle s,\, P_{\bar{h}^{(1)}}(\bar{h}^{(1)})\rangle}{\bar{h}^{(1)}(1)},\; s^\top\Big]^\top \in T_{\bar{h}^{(1)}}\mathbb{L}^d, \tag{9}$$
where $s \in \mathbb{R}^d$ and $\langle\cdot,\cdot\rangle$ is the standard Euclidean inner product. Note that removing the first element of $v$ is valid due to the constraint imposed on the vector elements in the tangent space by definition. Indeed, no information is lost and the mapping is invertible.

The first step in our rotation component is to map the points in $H^{(1)}$ and in $\widehat{H}^{(2)}$ to the tangent space at $\bar{h}^{(1)}$: $v_i^{(1)} = \mathrm{Log}_{\bar{h}^{(1)}}(h_i^{(1)})$ for $i = 1, \ldots, N_1$, and $v_i^{(2)} = \mathrm{Log}_{\bar{h}^{(1)}}(\hat{h}_i^{(2)})$ for $i = 1, \ldots, N_2$. In the second step, we map the points by the mapping function in Definition 1 and re-center them: $s_i^{(k)} = P_{\bar{h}^{(1)}}(v_i^{(k)}) - \bar{s}^{(k)}$ for $i = 1, \ldots, N_k$, where $\bar{s}^{(k)} = \frac{1}{N_k}\sum_{i=1}^{N_k} P_{\bar{h}^{(1)}}(v_i^{(k)})$ for $k = 1, 2$ is the mean vector of the projections. Then, the mapped and centered points (in $\mathbb{R}^d$) are collected into matrices:
$$S^{(k)} = \big[s_1^{(k)}, s_2^{(k)}, \ldots, s_{N_k}^{(k)}\big] \in \mathbb{R}^{d\times N_k}. \tag{10}$$
In the third step, for each set $k = 1, 2$, we compute the rotation matrix $U^{(k)} \in \mathbb{R}^{d\times d}$ by applying SVD to the matrix $S^{(k)} = U^{(k)}\Lambda^{(k)}(E^{(k)})^\top$. Since the left-singular vectors are determined up to a sign, we propose to align their signs as follows: $u_i^{(2)} \leftarrow \mathrm{sign}(\langle u_i^{(2)}, u_i^{(1)}\rangle)\, u_i^{(2)}$, where $u_i^{(1)}$ and $u_i^{(2)}$ are the $i$-th left-singular vectors of the two sets, resulting in modified rotation matrices $U^{(k)}$. Finally, we apply the rotation to $\widehat{H}^{(2)}$ by
$$\Theta^{U}_{\bar{h}^{(1)}}(\hat{h}_i^{(2)}) = \mathrm{Exp}_{\bar{h}^{(1)}}\Big(P^{-1}_{\bar{h}^{(1)}}\Big(U^\top\big(P_{\bar{h}^{(1)}}\big(\mathrm{Log}_{\bar{h}^{(1)}}(\hat{h}_i^{(2)})\big) - \bar{s}^{(2)}\big) + \bar{s}^{(2)}\Big)\Big), \tag{11}$$
where $U = U^{(1)}(U^{(2)})^\top$.

Proposition 7. The wrapped rotation is bijective, and the inverse is given by
$$\big(\Theta^{U}_{\bar{h}^{(1)}}\big)^{-1} = \Theta^{U^\top}_{\bar{h}^{(1)}}. \tag{12}$$
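A sketch of the wrapped rotation of Eq. (11), reusing the helpers above: $P$ and its inverse follow Definition 1, and the SVD sign alignment follows the text. This reflects our reading of the formula (it assumes $N_k \geq d$ so each $U^{(k)}$ is a full $d\times d$ matrix); function names are ours and the code is illustrative, not the reference implementation.

```python
def project_tangent(v):
    # P of Def. 1: drop the first coordinate of a tangent vector at h_bar.
    return v[1:]

def lift_tangent(s, h_bar):
    # P^{-1} of Def. 1: recover the first coordinate from <h_bar, v>_L = 0.
    return np.concatenate(([np.dot(s, h_bar[1:]) / h_bar[0]], s))

def wrapped_rotation(H1, H2_hat, h_bar):
    # Eq. (11): rotate the centered Euclidean projections of H2_hat with
    # U = U^(1) (U^(2))^T built from per-set SVDs with sign-aligned columns.
    P1 = np.stack([project_tangent(log_map(h_bar, h)) for h in H1])      # (N1, d)
    P2 = np.stack([project_tangent(log_map(h_bar, h)) for h in H2_hat])  # (N2, d)
    m1, m2 = P1.mean(axis=0), P2.mean(axis=0)
    U1 = np.linalg.svd((P1 - m1).T, full_matrices=False)[0]
    U2 = np.linalg.svd((P2 - m2).T, full_matrices=False)[0]
    U2 = U2 * np.sign((U1 * U2).sum(axis=0))  # align signs of the singular vectors
    U = U1 @ U2.T
    return [exp_map(h_bar, lift_tangent(U.T @ (p - m2) + m2, h_bar)) for p in P2]
```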
3.4 Analysis

Putting all three components together, the proposed HPA that aligns $H^{(2)}$ with $H^{(1)}$ culminates in the composition of translation, scaling, and rotation:
$$\Theta^{U}_{\bar{h}^{(1)}} \circ \Upsilon^{s}_{\bar{h}^{(1)}} \circ \Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}}. \tag{13}$$
As in most PA schemes, the order of the three components is important. Yet, the proposed components allow a certain degree of freedom, as indicated in the following results.

Proposition 8. The Riemannian translation and the Riemannian scaling commute w.r.t. the Riemannian means $\bar{h}^{(1)}$ and $\bar{h}^{(2)}$:
$$\Upsilon^{s}_{\bar{h}^{(1)}} \circ \Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}} = \Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}} \circ \Upsilon^{s}_{\bar{h}^{(2)}}. \tag{14}$$
Note that $\Upsilon^{s}_{\bar{h}^{(1)}}$ and $\Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}}$ do not necessarily commute: $\Upsilon^{s}_{\bar{h}^{(1)}} \circ \Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}} \neq \Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}} \circ \Upsilon^{s}_{\bar{h}^{(1)}}$.

Proposition 9. The Riemannian scaling and the wrapped rotation commute:
$$\Upsilon^{s}_{\bar{h}^{(1)}} \circ \Theta^{U}_{\bar{h}^{(1)}} = \Theta^{U}_{\bar{h}^{(1)}} \circ \Upsilon^{s}_{\bar{h}^{(1)}}. \tag{15}$$

We note that the rotation does not commute with the translation, because PT only preserves the local covariant derivative on the tangent space but might cause rotation and distortion along the transportation. Therefore, the rotation is required to be the last component of our HPA.

Thus far, we have not presented a model for the discrepancy between the two sets, nor have we presented the proposed HPA as optimal with respect to some criterion. In the following result, we show that if the discrepancy between the sets can be expressed as a composition of translation, scaling, and rotation, then the two sets can be perfectly aligned using HPA.

Proposition 10. Let $\eta : \mathbb{L}^d \to \mathbb{L}^d$ be a map given by $\eta = \Theta^{U}_{\bar{h}^{(1)}} \circ \Upsilon^{s}_{\bar{h}^{(1)}} \circ \Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}}$. If $H^{(1)} = \{h_i^{(1)} = \eta(h_i^{(2)})\}_{i=1}^{N_2}$, then
$$h_i^{(2)} = \big(\Theta^{U'}_{\bar{h}^{(2)}} \circ \Upsilon^{1/s}_{\bar{h}^{(2)}} \circ \Gamma_{\bar{h}^{(1)}\to\bar{h}^{(2)}}\big)(h_i^{(1)}), \quad i = 1, \ldots, N_2, \tag{16}$$
where $U' \in O(d)$.

Note that HPA consists of the sequence of Riemannian translation, Riemannian scaling, and wrapped rotation. The domain and range of each component is the manifold $\mathbb{L}^d$. Yet, the first and last operations of each component are the Logarithmic and Exponential maps, which project a point from the manifold to the tangent space and vice versa, respectively. This allows us to propose an efficient implementation of the sequence without the back-and-forth projections, as described in Appendix C.

3.5 Extension to multiple sets

We can naturally scale up the setting to support the alignment of $K > 2$ sets, denoted by $H^{(k)} = \{h_i^{(k)}\}_{i=1}^{N_k}$, where $k \in \{1, 2, \ldots, K\}$. Let $\bar{h}^{(k)}$ and $d^{(k)}$ be the Riemannian mean and dispersion of the $k$-th dataset, respectively. In addition, let $\bar{h}$ be the global Riemannian mean of $\{\bar{h}^{(k)}\}_{k=1}^{K}$. We propose to transport the points of the $k$-th set using $\Gamma_{\bar{h}^{(k)}\to\bar{h}}$. Next, the Riemannian dispersion of each set is set to 1 by applying $\Upsilon^{s}_{\bar{h}}$ with $s = 1/\sqrt{d^{(k)}}$. Finally, the wrapped rotation is applied to all the datasets on the mapping of the tangent space $T_{\bar{h}}\mathbb{L}^d$ and then mapped back to the manifold $\mathbb{L}^d$. The first set is designated as the reference set, and all other rotation matrices $U^{(k)}$ are updated according to $u_i^{(k)} \leftarrow \mathrm{sign}(\langle u_i^{(k)}, u_i^{(1)}\rangle)\, u_i^{(k)}$. The proposed HPA for multiple sets is summarized in Algorithm 1, and some implementation remarks appear in Appendix C.

Algorithm 1 Hyperbolic Procrustes analysis
Input: $K$ sets of hyperbolic points $H^{(1)} = \{h_i^{(1)}\}_{i=1}^{N_1}, \ldots, H^{(K)} = \{h_i^{(K)}\}_{i=1}^{N_K}$
Output: $K$ aligned sets of hyperbolic points $\breve{H}^{(1)} = \{\breve{h}_i^{(1)}\}_{i=1}^{N_1}, \ldots, \breve{H}^{(K)} = \{\breve{h}_i^{(K)}\}_{i=1}^{N_K}$
1: for each set $H^{(k)}$ do
2:   compute the Riemannian mean $\bar{h}^{(k)}$ and dispersion $d^{(k)}$
3: end for
4: compute $\bar{h}$, the global Riemannian mean of $\{\bar{h}^{(k)}\}_{k=1}^{K}$
5: for each set $H^{(k)}$ do
6:   apply the Riemannian translation $\tilde{h}_i^{(k)} = \Gamma_{\bar{h}^{(k)}\to\bar{h}}(h_i^{(k)})$  // Eq. (2)
7:   apply the Riemannian scaling $\hat{h}_i^{(k)} = \Upsilon^{s}_{\bar{h}}(\tilde{h}_i^{(k)})$ with $s = 1/\sqrt{d^{(k)}}$  // Eq. (8)
8:   apply the wrapped rotation $\breve{h}_i^{(k)} = \Theta^{U}_{\bar{h}}(\hat{h}_i^{(k)})$ with $U = U^{(1)}(U^{(k)})^\top$  // Eq. (11)
9: end for

4 Experimental results

We apply HPA to simulations and to three biomedical datasets. Our code is available at https://github.com/RonenTalmonLab/HyperbolicProcrustesAnalysis. In addition, we test HPA on the MNIST [31] and USPS [23] datasets, which arguably do not have a distinct hierarchical structure. Nonetheless, we demonstrate in Appendix D that our HPA is highly effective in aligning these two datasets. All the experiments are label-free. We compare the obtained results to the following alignment methods: (i) PAH [51], which is applied only to the simulated data since it requires the existence of a one-to-one correspondence between the sets, (ii) only the Riemannian translation (RT), (iii) OT in hyperbolic space with the weighted Fréchet mean (HOT-F), extended to an unsupervised setting according to [54], (iv) OT with W-linear map (HOT-L) [22], and (v) hyperbolic mapping estimation (HOT-ME) [22]. As a baseline, we present the results obtained before the alignment (Baseline). For more details on the experimental setting, see Appendix C.

4.1 Simulations

The synthetic data in $\mathbb{L}^d$ are generated using the sampling scheme described in Section 2, based on [39]. Given an arbitrary point $\mu \in \mathbb{L}^d$ and an arbitrary SPD matrix $\Sigma \in \mathbb{R}^{d\times d}$, we generate a set of $N$ points $Q^{(1)} = \{q_i^{(1)}\}_{i=1}^{N}$ centered at $\mu$ by $\mathbb{L}^d \ni q_i^{(1)} = \mathrm{Exp}_{\mu}\big(\mathrm{PT}_{\mu_0\to\mu}(v_i^{(1)})\big)$, where $\mu_0 = [1, \mathbf{0}^\top]^\top$ is the origin, $v_i^{(1)} = [0, \tilde{v}_i^{(1)\top}]^\top$, and $\tilde{v}_i^{(1)} \sim \mathcal{N}(\mathbf{0}, \Sigma)$. Next, we generate three noisy and distorted versions of $Q^{(1)}$. The first noisy set $Q^{(2)} = \{q_i^{(2)}\}_{i=1}^{N}$ is generated as proposed in [51] by $q_i^{(2)} = L\,T_{\epsilon_i} q_i^{(1)}$, where $T_{\epsilon_i}$ is a hyperbolic translation defined by $T_{\epsilon_i} = \big[\sqrt{1+\epsilon_i^\top\epsilon_i},\ \epsilon_i^\top;\ \epsilon_i,\ (I+\epsilon_i\epsilon_i^\top)^{1/2}\big]$, $\epsilon_i$ is sampled from $\mathcal{N}(\mathbf{0}, \sigma^2 I)$ with variance $\sigma^2$, and $L$ is a random H-unitary matrix [16]. Another noisy set, denoted $Q^{(3)} = \{q_i^{(3)}\}_{i=1}^{N}$, is generated by $q_i^{(3)} = L\big(\mathrm{Exp}_{\mu}(\mathrm{PT}_{\mu_0\to\mu}(u_i^{(1)}))\big)$, where $u_i^{(1)} = [0, (\tilde{v}_i^{(1)}+\epsilon_i)^\top]^\top$. Here, the noise is added in the tangent space at $\mu_0$. Finally, let $Q^{(4)} = \{q_i^{(4)}\}_{i=1}^{N}$ be a distorted set, given by $q_i^{(4)} = f_{\mu'}(q_i^{(3)})$, where $f_{\mu'}(x) = \cosh(\|u\|_{\mathbb{L}}\,t)\,\mu' + \sinh(\|u\|_{\mathbb{L}}\,t)\,\tfrac{u}{\|u\|_{\mathbb{L}}}$ and $u = \mathrm{Log}_{\mu'}(x)$, for an arbitrary (fixed) $\mu' \in \mathbb{L}^d$ and $t > 0$.
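Before turning to the results, here is an illustrative end-to-end sketch of Algorithm 1 built from the helpers above. The Fréchet mean loop is a simple fixed-point stand-in; the paper computes it with the accurate gradient formulation of [33].

```python
def frechet_mean(points, iters=100, lr=0.25):
    # Riemannian mean via simple gradient steps (stand-in for the method of [33]).
    m = points[0]
    for _ in range(iters):
        m = exp_map(m, lr * np.mean([log_map(m, h) for h in points], axis=0))
    return m

def hpa_align(sets):
    # Algorithm 1: translate each set to the global mean, normalize dispersions,
    # then wrapped-rotate every set toward the first (reference) set.
    means = [frechet_mean(H) for H in sets]
    h_bar = frechet_mean(means)  # global Riemannian mean (line 4)
    out = []
    for H, m in zip(sets, means):
        H = [riemannian_translation(h, m, h_bar) for h in H]   # line 6, Eq. (2)
        s = 1.0 / np.sqrt(dispersion(H, h_bar))
        H = [riemannian_scaling(h, h_bar, s) for h in H]       # line 7, Eq. (8)
        out.append(H)
    return [out[0]] + [wrapped_rotation(out[0], H, h_bar) for H in out[1:]]  # line 8
```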
We apply Algorithm 1 to align the three pairs of sets $\{Q^{(1)}, Q^{(2)}\}$, $\{Q^{(1)}, Q^{(3)}\}$, and $\{Q^{(1)}, Q^{(4)}\}$, setting $N = 100$, $\sigma = 1$, and $d \in \{3, 5, 10, 20, \ldots, 40\}$. Each experiment is repeated 10 times with different values of $\mu$, $\Sigma$, $\mu'$, and $t$. To evaluate the alignment, we use the pairwise discrepancy based on the hidden one-to-one correspondence, given by $\varepsilon(Q^{(1)}, Q^{(j)}) = \frac{1}{N}\sum_{i=1}^{N} d^2_{\mathbb{L}^d}(q_i^{(1)}, q_i^{(j)})$, where $j \in \{2, 3, 4\}$. The discrepancy as a function of the dimension $d$ is shown in Fig. 2. We observe that the proposed HPA has lower discrepancy relative to the other label-free methods. Specifically, it outperforms the OT-based methods that are designed to match the densities. Furthermore, the proposed HPA is stable, in contrast to HOT-L, which is highly sensitive to the noise and distortion introduced in $Q^{(3)}$ and $Q^{(4)}$. Interestingly, we remark that the discrepancies of RT and PAH are very close, empirically showing that RT alone is comparable to PAH. In addition, note that HPA is permutation-invariant and does not require a one-to-one correspondence as PAH does. We report the running time in Appendix D and demonstrate that HPA is more efficient than HOT-F and HOT-ME.
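For completeness, the evaluation metric used above is a one-liner on top of the same helpers (illustrative only):

```python
def pairwise_discrepancy(Q1, Qj):
    # eps(Q1, Qj): mean squared geodesic distance over the hidden correspondence.
    return float(np.mean([np.arccosh(max(-lorentz_inner(a, b), 1.0))**2
                          for a, b in zip(Q1, Qj)]))
```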
4.2 Batch effect removal

We consider bioinformatics datasets consisting of gene expression data and CyTOF. Representing such data in hyperbolic spaces was shown to be informative and useful [25], implying that such data have an underlying inherent hierarchical structure. Batch effects [43] arise from experimental variations that can be attributed to the measurement device or other environmental factors. Batch correction is typically a critical precursor to any subsequent analysis and processing. Three batch effect removal tasks are examined.

The first task involves breast cancer (BC) gene expression data. We consider two publicly available datasets: METABRIC [8] and TCGA [26], consisting of samples from five breast cancer subtypes. The batch effect stems from different profiling techniques: gene expression microarray and RNA sequencing. In the second task, three cohorts of lung cancer (LC) gene expression data [21] are considered, consisting of samples from three lung cancer subtypes. The data were collected using gene expression microarrays at three different sites (a likely source of batch effects): Stanford University (ST), University of Michigan (UM), and Dana-Farber Cancer Institute (D-F). The last task involves CyTOF data [48] consisting of peripheral blood mononuclear cells (PBMCs) collected from two multiple sclerosis patients during four days: two days before treatment (BT) and two days after treatment (AT). These 8 = 2 × 2 × 2 batches were collected with or without PMA/ionomycin-stimulated PBMCs. We aim to remove the batch effects between two different days from the same condition (BT/AT) and from the same patient.

In each batch removal task, we first learn an embedding of the data from all the batches into the Lorentz model $\mathbb{L}^d$ [42]. Then, HPA is applied to the embedded points in $\mathbb{L}^d$. Fig. 3 shows a visualization of the embedding of the two breast cancer datasets before and after HPA. For visualization, we project the points in $\mathbb{L}^3$ to the 3D Poincaré ball. Before the alignment, the dominant factor separating the patients' samples (points) is the batch. In contrast, after the alignment, the batch effect is substantially suppressed (visually), and the factors separating the points are dominated by the cancer subtype.

We evaluate the quality of the alignment in two aspects using objective measures: (i) k-NN classification, with leave-one-batch-out cross-validation, is utilized for assessing the alignment of the intrinsic structure, and (ii) MMD [19] is used for assessing the distribution alignment quality. For the classification, we view the five subtypes of BC, the three subtypes of LC, and the presence of stimulated cells in CyTOF as the labels in the respective tasks. In addition to the results of the different alignment methods, we report the k-NN classification based only on a single batch (S-Baseline), which indicates the adequacy of the representation in hyperbolic space for the task at hand. Table 1 reports the k-NN classification obtained with the best k per method, and Table 2 shows the MMD. In each task, we set the dimension d of the Lorentz model to the dimension in which the best empirical single-task performance (S-Baseline) is obtained. We note that similar results and trends are obtained for various dimensions. Additional results for various k values and an ablation study, showing that the combination of all three components yields the best classification results, are reported in Appendix D.

Although the OT-based methods obtain the best matching between the distributions of the batches, HPA outperforms them in all three tasks in terms of classification (see Table 1). In the two gene expression tasks, where the data have multiple labels and we align multiple batches, the advantage of HPA compared to the other methods is particularly significant. In Appendix D, we demonstrate HPA's out-of-sample capabilities on the CyTOF data by learning the batch correction map between the different days from one patient and applying it to the data of the other patient.
4.3 Discussion

Alignment methods based on density matching, such as OT-based methods, often overlook an important aspect in purely unsupervised settings. Although the sample density is the main data property that can be, and needs to be, aligned, preserving the intrinsic structure/geometry of the sets is important, as it might be tightly related to the hidden labels. Indeed, we see in our experiments that OT-based methods provide a good density alignment (reducing the inter-set variability), as demonstrated by small MMD values (see Table 2). However, the intrinsic structure of the sets (the intra-set variability) is not preserved, as evidenced by the resulting poor (hidden) label matching, conveyed by the k-NN classification performance (see Table 1). This is also illustrated in the right panel of Fig. 1. There, it is visible that the three OT-based methods provide a good global alignment of the sets, yet the intrinsic structure is not kept, as implied by the poor color matching. In contrast to OT-based methods, HPA does not explicitly aim to match densities, and thus, it obtains slightly worse MMD performance compared to OT-based methods. However, HPA matches the first two moments of the density and includes the rotation component, addressing one of the fundamental limitations of OT-based methods for alignment, namely that OT cannot recover volume-preserving maps [4, 36]. As seen in the simulation and experimental results and illustrated in the right panel of Fig. 1, we still obtain a good global alignment and simultaneously preserve the intrinsic structure, allowing for high classification performance. We remark that in the synthetic examples, there is a (hidden) one-to-one correspondence between the sets, and therefore a one-to-one discrepancy can be computed (instead of, or in addition to, MMD). When there is such a correspondence, OT still cannot recover volume-preserving maps, while HPA can mitigate noise and distortions.

5 Conclusion

We introduced HPA for label-free alignment of data in the Lorentz model. Based on Riemannian geometry, we presented new translation and scaling operations that align the first and second Riemannian moments, as well as a wrapped rotation that aligns the orientation in the hyperboloid model. Our theoretical analysis provides further insight and highlights properties that may be useful for practitioners. We empirically showed in simulations that HPA is stable under noise and distortions, and we demonstrated purely unsupervised batch correction of multiple bioinformatics datasets with multiple labels. Beyond alignment and batch effect removal, our method can be viewed as a type of domain adaptation or a precursor of transfer learning that relies on purely geometric considerations, exploiting the geometric structure of the data as well as the geometric properties of the space of the data. In addition, it can be utilized for multimodal data fusion and geometric registration of shapes with hierarchical structure.

Acknowledgments and Disclosure of Funding

We thank the reviewers for their important comments and suggestions, and we thank Thomas Dagès for the helpful discussion. The work of YEL and RT was supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 802735-ERC-DIFFOP. The work of YK was supported by the National Institutes of Health R01GM131642, UM1PA05141, P50CA121974, and U01DA053628.
1. What is the focus and contribution of the paper on Hyperbolic Procrustes Analysis (HPA)?
2. What are the strengths of the proposed algorithm, particularly in its ability to align point sets in hyperbolic spaces?
3. What are the weaknesses of the paper, especially regarding the choice of alignment strategies and the assumption made in Prop. 10?
4. Do you have any concerns about the applicability of HPA compared to optimal transport-based alignment?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes Hyperbolic Procrustes Analysis (HPA) for aligning point sets that live in hyperbolic spaces. The proposed algorithm uses tools from Riemannian geometry to correctly and efficiently generalize (generalized) Procrustes alignment to Riemannian manifolds. The paper deploys this algorithm to perform unsupervised, label-free, hierarchical data alignment. The experiments demonstrate that HPA can yield higher accuracy when compared to optimal transport based alignment.

Review
PROs
- I think generalizing Procrustes alignment to Riemannian manifolds carries fundamental importance. The approach presented in this paper is intuitive, principled, and makes sense.
- The paper is clearly written, explaining the different stages of the algorithm. It is easy to follow and understand for readers with a certain background.
- The paper conducts an appropriate evaluation on hierarchical datasets and problems where such an alignment is superior to the existing optimal transport based ones.

CONs
- Procrustes alignment only considers translation, scale, and rotation to align two datasets. For a high-dimensional problem, I seem to lose the motivation why (and in which cases) this would be superior to, say, a hierarchical OT. Can we have a general recipe for how to choose which algorithm to use? In other words, intuitively, why would this perform better than OT for a given dataset? It would be good to clarify this point.
- It seems like a large portion of the material addresses a general Riemannian manifold. However, when we dig into the specifics of the proofs, the Lorentz model comes along. Is it possible to consider a similar alignment procedure not for hyperbolic spaces but for general manifolds?
- A lot of ablation studies seem to be missing here. First, as mentioned in Sec. 3.3, there could be multiple ways to solve the rotation problem. The paper argues for the wrapped rotation. I would agree, but this choice is not experimentally validated. Can we see a study of different rotational alignment strategies?
- Prop. 10 seems to assume that the dataset is generated by an inverse Procrustes. Why should this be the case?
- For the alignment of multiple point clouds (the generalized problem): would it not make sense to fix one of the point clouds and align the rest? Yes or no, it would be good to discuss why.
- Can we see visualizations from other methods as well? This might help in grasping certain problems with the state of the art.
NIPS
Title Hyperbolic Procrustes Analysis Using Riemannian Geometry Abstract Label-free alignment between datasets collected at different times, locations, or by different instruments is a fundamental scientific task. Hyperbolic spaces have recently provided a fruitful foundation for the development of informative representations of hierarchical data. Here, we take a purely geometric approach for label-free alignment of hierarchical datasets and introduce hyperbolic Procrustes analysis (HPA). HPA consists of new implementations of the three prototypical Procrustes analysis components: translation, scaling, and rotation, based on the Riemannian geometry of the Lorentz model of hyperbolic space. We analyze the proposed components, highlighting their useful properties for alignment. The efficacy of HPA, its theoretical properties, stability, and computational efficiency are demonstrated in simulations. In addition, we showcase its performance on three batch correction tasks involving gene expression and mass cytometry data. Specifically, we demonstrate high-quality unsupervised batch effect removal from data acquired at different sites and with different technologies that outperforms recent methods for label-free alignment in hyperbolic spaces. 1 Introduction A key scientific task in modern data analysis is the alignment of data. The need for alignment often arises since data are acquired in multiple domains, under different environmental conditions, using various acquisition equipment, and at different sites. This paper focuses on the problem of label-free alignment of data embedded in hyperbolic spaces. Recently, hyperbolic spaces have risen to prominence in geometric representation learning. These non-Euclidean spaces have become popular since they provide a natural embedding of hierarchical data thanks to the exponential growth of the lengths of their geodesic paths [41, 42, 30, 15, 14, 6, 32]. The problem of alignment of data embedded in hyperbolic spaces has been extensively studied, e.g., in the context of natural language processing [49], ontology matching [10], matching two data modalities [40], and improving the embedding in hyperbolic spaces [2]. A few of these studies are based on optimal transport (OT) [2, 22], a classical problem in mathematics [38] that has recently reemerged in modern data analysis, e.g., for domain adaptation [7]. Despite its increasing usage, OT for unsupervised alignment is fundamentally limited [54], since OT (as any density matching approach) cannot recover volume-preserving maps [3, 4, 36]. In this paper, we resort to Procrustes analysis (PA) [17, 18], which is based on purely geometric considerations. PA has been widely used for aligning datasets by eliminating the shift, scaling, and rotational factors. Over the years, it has been successfully applied to various applications, e.g., image registration [34], manifold alignment [52], shape matching [35], domain adaptation [47], and manifold learning [27], to name but a few. Here, we address the problem of label-free matching of hierarchical data embedded in hyperbolic spaces. We present hyperbolic Procrustes analysis (HPA), a new PA method in the Lorentz model of hyperbolic geometry. The main novelty lies in the introduction of new implementations of the three prototypical PA components based on Riemannian geometry. Specifically, translation is viewed as a Riemannian mean alignment, implemented using parallel transport (PT).
Scaling is determined with respect to geodesic paths. Rotation is considered as moment alignment on a mapping of the tangent space of the manifold to a Euclidean vector space. Our analysis provides new derivations in the Riemannian geometry of the Lorentz model and specifies the commuting properties of the HPA components. We show that HPA, compared to existing baselines and OT-based methods, achieves improved alignment in a purely unsupervised setting. In addition, it has a natural and stable out-of-sample extension, it supports both small and big data, and it is computationally efficient. We show application to batch correction in bioinformatics tasks. We present results on both gene expression and mass cytometry (CyTOF) data, exemplifying the generality and broad scope of our method. In contrast to recent works [28, 50], our method does not require landmark correspondence, which is often unavailable in many datasets or hard to obtain. Specifically, we show that batch effects caused by acquisition using different technologies, at different sites, and at different times can be accurately removed, while preserving the intrinsic structure of the data. Our main contributions are as follows. (i) We present a new implementation of PA using the Riemannian geometry of the Lorentz model for unsupervised label-free hierarchical data alignment. (ii) We provide theoretical analysis and justification of our alignment method based on new derivations of Riemannian geometry operations in the Lorentz model. These derivations have their own merit as they could be used in other contexts. (iii) We show experimental results of accurate batch effect removal from several hierarchical bioinformatics datasets without landmark correspondence.

2 Background on hyperbolic geometry

Hyperbolic space is a non-Euclidean space with a negative constant sectional curvature and an underlying geometry that describes tree-like graphs with small distortions [46]. There exist four commonly used models for hyperbolic spaces: the Poincaré disk model, the Lorentz model (hyperboloid model), the Poincaré half-plane model, and the Beltrami-Klein model. These four models are equivalent, and there exist transformations between them. Here, we consider the Lorentz model, and specifically, the upper sheet of the hyperboloid model, because its basic Riemannian operations have simple closed-form expressions and the computation of the geodesic distances is stable [42, 30]. Formally, the upper sheet of the hyperboloid model in a $d$-dimensional hyperbolic space is defined by $\mathbb{L}^d := \{x \in \mathbb{R}^{d+1} \,|\, \langle x, x\rangle_{\mathbb{L}} = -1,\ x(1) > 0\}$, where $\langle x, y\rangle_{\mathbb{L}} = x^\top H y$ is the Lorentzian inner product and $H \in \mathbb{R}^{(d+1)\times(d+1)}$ is defined by $H = [-1, \mathbf{0}^\top; \mathbf{0}, I_d]$. The Lorentzian norm of a vector $x$ is denoted by $\|x\|_{\mathbb{L}} = \sqrt{\langle x, x\rangle_{\mathbb{L}}}$, and the origin is $\mu_0 = [1, \mathbf{0}^\top]^\top \in \mathbb{L}^d$. Let $T_x\mathbb{L}^d$ be the tangent space at $x \in \mathbb{L}^d$, defined by $T_x\mathbb{L}^d := \{v \,|\, \langle x, v\rangle_{\mathbb{L}} = 0\}$. Consider $x \in \mathbb{L}^d$ and $v \in T_x\mathbb{L}^d$; the geodesic path $\gamma_v : \mathbb{R}_0^+ \to \mathbb{L}^d$ is defined by $\gamma_v(t) = \cosh(\|v\|_{\mathbb{L}}\,t)\,x + \sinh(\|v\|_{\mathbb{L}}\,t)\,\tfrac{v}{\|v\|_{\mathbb{L}}}$ with $\gamma_v(0) = x$ and initial velocity $\gamma_v'(0) = v$, where $\gamma'(t) := \tfrac{d}{dt}\gamma(t)$. In addition, the associated geodesic distance is $d_{\mathbb{L}^d}(x, \gamma_v(t)) = \cosh^{-1}(-\langle x, \gamma_v(t)\rangle_{\mathbb{L}})$. The Exponential map, projecting a tangent vector $v \in T_x\mathbb{L}^d$ to the manifold $\mathbb{L}^d$, is given by $\mathrm{Exp}_x(v) = \gamma_v(1) = \cosh(\|v\|_{\mathbb{L}})\,x + \sinh(\|v\|_{\mathbb{L}})\,\tfrac{v}{\|v\|_{\mathbb{L}}}$. The Logarithmic map, projecting a point $y \in \mathbb{L}^d$ to the tangent space $T_x\mathbb{L}^d$ at $x$, is defined by $\mathrm{Log}_x(y) = \tfrac{\cosh^{-1}(\alpha)}{\sqrt{\alpha^2-1}}\,(y - \alpha x)$, where $\alpha = -\langle x, y\rangle_{\mathbb{L}}$.
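As a quick numerical sanity check of these closed-form maps, the snippet below (reusing the helper functions from the earlier sketches) verifies that Log inverts Exp and that the distance formula matches the tangent norm; it is illustrative only.

```python
rng = np.random.default_rng(0)
d = 4
mu0 = np.zeros(d + 1); mu0[0] = 1.0               # origin [1, 0, ..., 0]
v = np.concatenate(([0.0], rng.normal(size=d)))   # tangent vector at mu0
y = exp_map(mu0, v)
assert abs(lorentz_inner(y, y) + 1.0) < 1e-8       # y lies on L^d: <y, y>_L = -1
assert np.allclose(log_map(mu0, y), v, atol=1e-8)  # Log_x(Exp_x(v)) = v
# the geodesic distance to y equals the Lorentzian norm of v
assert abs(np.arccosh(-lorentz_inner(mu0, y)) - np.sqrt(lorentz_inner(v, v))) < 1e-8
```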
The PT of a vector $v \in T_x\mathbb{L}^d$ along the geodesic path from $x \in \mathbb{L}^d$ to $y \in \mathbb{L}^d$ is defined by $\mathrm{PT}_{x\to y}(v) = v + \tfrac{\langle y - x, v\rangle_{\mathbb{L}}}{\alpha + 1}(x + y)$, where $\alpha = -\langle x, y\rangle_{\mathbb{L}}$, keeping the metric tensor unchanged. The Riemannian mean $\bar{x}_X$ and the corresponding dispersion $d_X$ of a set $X = \{x_i \,|\, x_i \in \mathbb{L}^d\}_{i=1}^n$ are defined using the Fréchet mean [13, 33] by
$$\bar{x}_X := m(X) = \arg\min_{x\in\mathbb{L}^d}\sum_{i=1}^n d^2_{\mathbb{L}^d}(x, x_i) \quad\text{and}\quad d_X := r(X) = \min_{x\in\mathbb{L}^d}\frac{1}{n}\sum_{i=1}^n d^2_{\mathbb{L}^d}(x, x_i), \tag{1}$$
where $m : X \to \mathbb{L}^d$ and $r : X \to \mathbb{R}^+$. Note that the Fréchet mean of samples on connected and compact Riemannian manifolds of non-positive curvature, such as hyperbolic spaces, is guaranteed to exist, and it is unique [24, 44, 1]. The Fréchet mean is commonly computed by the Karcher flow [24, 20], which is computationally demanding. Importantly, in the considered hyperbolic space, the Fréchet mean can be efficiently obtained using the accurate gradient formulation [33].

Given a vector $x \in \mathbb{L}^d$ and a symmetric positive-definite (SPD) matrix $\Sigma \in \mathbb{R}^{d\times d}$, the wrapped normal distribution $G(x, \Sigma)$ provides a generative model of hyperbolic samples as follows [39, 11]. First, a vector $v'$ is sampled from $\mathcal{N}(\mathbf{0}, \Sigma)$. Then, $0$ is concatenated to the vector $v'$ such that $v = [0, v'^\top]^\top \in T_{\mu_0}\mathbb{L}^d$. Finally, PT from the origin $\mu_0 = [1, \mathbf{0}^\top]^\top$ to $x$ is applied to $v$, and the resulting point is mapped to the manifold using the Exponential map at $x$. The probability density function of this model is given by $\log G(y\,|\,x,\Sigma) = \log \mathcal{N}(v'\,|\,\mathbf{0},\Sigma) - (d-1)\log\big(\tfrac{\sinh \|v'\|_2}{\|v'\|_2}\big)$.
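The wrapped normal sampling scheme just described maps directly to code; a minimal sketch (reusing the earlier helpers, with our own naming) might read:

```python
def sample_wrapped_normal(x, cov, rng):
    # G(x, cov): sample v' ~ N(0, cov), lift to T_{mu0} L^d as v = [0, v'],
    # parallel-transport v from mu0 to x, and push to the manifold with Exp_x.
    d = cov.shape[0]
    v_prime = rng.multivariate_normal(np.zeros(d), cov)
    mu0 = np.zeros(d + 1); mu0[0] = 1.0
    v = np.concatenate(([0.0], v_prime))
    return exp_map(x, parallel_transport(mu0, x, v))
```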
3 Hyperbolic Procrustes analysis

Existing methods for data alignment typically seek a function that minimizes a certain cost. A large body of work attempts to match the empirical densities of two datasets, e.g., by minimizing the maximum mean discrepancy (MMD) [48, 29] or solving OT problems [45, 2, 22]. Finding an effective cost function without labels or landmarks is challenging, and minimizing such costs directly often leads to poor alignment in practice (see the illustration in Fig. 1). A different, well-established approach that applies indirect alignment based on geometric considerations is PA. While preparing this manuscript, another method of PA in hyperbolic spaces (PAH) was presented for matching two sets, assuming that they consist of the same number of points and that there exists a point-wise isometric map between them [51]. We remark that the analysis we present here applies to broader settings and makes no such assumptions. See Appendix E for details on classical PA as well as for comparisons to [51] and to the application of Euclidean PA in the tangent space.

We consider two sets of points $H^{(1)} = \{h_i^{(1)}\}_{i=1}^{N_1}$ and $H^{(2)} = \{h_i^{(2)}\}_{i=1}^{N_2}$ in $\mathbb{L}^d$. Here, we aim to find a function $\zeta : \mathbb{L}^d \to \mathbb{L}^d$, consisting of three components (translation, scaling, and rotation), that aligns $H^{(2)}$ with $H^{(1)}$ in an unsupervised, label-free manner, as depicted in Fig. 1. Finding such a function can be viewed as an extension of classical PA from the Euclidean space $\mathbb{R}^{d+1}$ to the Lorentz model $\mathbb{L}^d$. A natural extension to multiple sets is described in Section 3.5. We remark that the statements are written in the context of the problem at hand. In Appendix A, we restate them more generally and present their proofs.

3.1 Riemannian translation

Let $\bar{h}^{(1)}$ and $\bar{h}^{(2)}$ denote the Riemannian means of the sets $H^{(1)}$ and $H^{(2)}$, respectively. In this translation component, we find a map $\Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}} : \mathbb{L}^d \to \mathbb{L}^d$ that aligns the Riemannian means of the sets. In the spirit of [5, 53], we propose to construct $\Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}}(h_i^{(2)})$ as the composition of three Riemannian operations in $\mathbb{L}^d$: the Logarithmic map applied to $h_i^{(2)}$ at $\bar{h}^{(2)}$, PT from $\bar{h}^{(2)}$ to $\bar{h}^{(1)}$ along the geodesic path, and the Exponential map applied to the transported point at $\bar{h}^{(1)}$:
$$\Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}}(h_i^{(2)}) := \mathrm{Exp}_{\bar{h}^{(1)}}\big(\mathrm{PT}_{\bar{h}^{(2)}\to\bar{h}^{(1)}}\big(\mathrm{Log}_{\bar{h}^{(2)}}(h_i^{(2)})\big)\big). \tag{2}$$
See Fig. B.1 in Appendix B for an illustration. Since the geodesic path between any two points in $\mathbb{L}^d$ is unique [46], $\Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}}$ is well-defined. The rationale behind the combination of these three Riemannian operations is twofold. First, PT is a map that aligns the means of the sets while preserving their internal structure. Second, the Logarithmic and Exponential maps compose a map whose domain and range are the Lorentz model $\mathbb{L}^d$ rather than the tangent space, as desired. We make these claims formal in the following results.

Proposition 1. The map $\Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}}$ defined in Eq. (2) aligns the means of the sets, i.e., it satisfies
$$\bar{h}^{(1)} = m\big(\{\Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}}(h_i^{(2)})\}_{i=1}^{N_2}\big), \tag{3}$$
where $m$ is the function defined in Eq. (1).

Proposition 2. The map $\Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}}(h_i^{(2)})$ for all $h_i^{(2)} \in H^{(2)}$ can be recast as:
$$\Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}}(h_i^{(2)}) = h_i^{(2)} - \beta(h_i^{(2)}\,|\,\bar{h}^{(1)},\bar{h}^{(2)})\,\bar{h}^{(2)} + \rho(h_i^{(2)}\,|\,\bar{h}^{(1)},\bar{h}^{(2)})\,\bar{h}^{(1)}, \tag{4}$$
where the functions $\beta$ and $\rho$ are positive, defined by $0 < \beta(h_i^{(2)}\,|\,\bar{h}^{(1)},\bar{h}^{(2)}) = -\big\langle \tfrac{\bar{h}^{(1)}+\bar{h}^{(2)}}{\alpha+1},\, h_i^{(2)} \big\rangle_{\mathbb{L}}$ and $0 < \rho(h_i^{(2)}\,|\,\bar{h}^{(1)},\bar{h}^{(2)}) = \big\langle \tfrac{\bar{h}^{(1)}-(2\alpha+1)\bar{h}^{(2)}}{\alpha+1},\, h_i^{(2)} \big\rangle_{\mathbb{L}}$, respectively, and $0 < \alpha = -\langle \bar{h}^{(1)}, \bar{h}^{(2)} \rangle_{\mathbb{L}}$.

In addition to providing a compact closed-form expression, Prop. 2 gives the proposed translation based on Riemannian geometry an interpretation of standard mean alignment in linear vector spaces. It implies that the alignment is nothing but subtracting the mean of the source set $\bar{h}^{(2)}$ from each vector in $H^{(2)}$ and adding the mean of the target set $\bar{h}^{(1)}$ (with the appropriate scales).

Proposition 3. The map $\Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}}$ preserves distances (i.e., it is an isometry):
$$d_{\mathbb{L}^d}(h_i^{(2)}, h_j^{(2)}) = d_{\mathbb{L}^d}\big(\Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}}(h_i^{(2)}),\, \Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}}(h_j^{(2)})\big), \tag{5}$$
for any two points $h_i^{(2)}, h_j^{(2)} \in H^{(2)}$.

Let $\gamma(t)$ be the unique geodesic path from $\bar{h}^{(2)}$ to $\bar{h}^{(1)}$ such that $\gamma(0) = \bar{h}^{(2)}$ and $\gamma(1) = \bar{h}^{(1)}$, and let $\gamma'(0) \in T_{\bar{h}^{(2)}}\mathbb{L}^d$ and $\gamma'(1) \in T_{\bar{h}^{(1)}}\mathbb{L}^d$ be the corresponding velocities, respectively.

Proposition 4. The map $\Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}}$ aligns geodesic velocities, i.e., given the mapping of the geodesic velocities to the manifold, $\mathbb{L}^d \ni v_0 = \mathrm{Exp}_{\bar{h}^{(2)}}(\gamma'(0)) = \bar{h}^{(1)}$ and $\mathbb{L}^d \ni v_1 = \mathrm{Exp}_{\bar{h}^{(1)}}(\gamma'(1))$, we have
$$\Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}}(v_0) = v_1. \tag{6}$$

An isometry is determined only up to rotation, a fact that can be problematic for alignment. For example, any H-unitary matrix [16] induces an isometric map on $\mathbb{L}^d$. When landmarks are given, they can be used to alleviate this redundancy. However, in the purely unsupervised setting we consider, other data-driven cues are required. Prop. 4 implies that the proposed translation based on PT fixes some of these rotational degrees of freedom by aligning the geodesic velocities. In Section 3.3 we revisit this issue. Now, with a slight abuse of notation, let $\widetilde{H}^{(2)} = \Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}}(H^{(2)})$.
Proposition 5. Consider two subsets $A, B \subset H^{(2)}$ and their translations $\widetilde{A} = \Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}}(A)$ and $\widetilde{B} = \Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}}(B) \subset \widetilde{H}^{(2)}$. Let $\bar{a} = m(A)$, $\bar{b} = m(B)$, $\bar{\tilde{a}} = m(\widetilde{A})$, and $\bar{\tilde{b}} = m(\widetilde{B})$ be the Riemannian means of the subsets. Then,
$$\Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}} \circ \Gamma_{\bar{a}\to\bar{b}} = \Gamma_{\bar{\tilde{a}}\to\bar{\tilde{b}}} \circ \Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}}. \tag{7}$$

In the context of the alignment problem, the importance of Prop. 5 is the following. Suppose the two sets correspond to data measured at two labs (denoted with and without a tilde), and suppose each set was acquired by two types of equipment (denoted by A and B). Prop. 5 implies that aligning data from the different labs and then aligning data acquired using the different equipment is equivalent to first aligning the different equipment and then the different labs, i.e., any order of the two alignments generates the same result. Seemingly, this is a natural property in Euclidean spaces. However, on a Riemannian manifold it is not a trivial result, and it holds for the transport along the geodesic path. See Appendix A for counter-examples.

3.2 Riemannian scaling

Let $d^{(1)}$ and $d^{(2)}$ denote the Riemannian dispersions of $H^{(1)}$ and $H^{(2)}$, respectively. By Propositions 1 and 3, $\bar{h}^{(1)}$ and $d^{(2)}$ are the mean and dispersion of $\widetilde{H}^{(2)}$. Here, our goal is to align the Riemannian dispersions of $H^{(1)}$ and $\widetilde{H}^{(2)}$. For this purpose, we propose the scaling function $\Upsilon^{s}_{\bar{h}^{(1)}} : \mathbb{L}^d \to \mathbb{L}^d$, given by
$$\Upsilon^{s}_{\bar{h}^{(1)}}(\tilde{h}_i^{(2)}) = \gamma_i(s), \tag{8}$$
where $s = \sqrt{d^{(1)}/d^{(2)}}$ is the scaling factor and $\gamma_i(t)$ is the geodesic path from $\bar{h}^{(1)}$ to $\tilde{h}_i^{(2)}$ such that $\gamma_i(0) = \bar{h}^{(1)}$ and $\gamma_i(1) = \tilde{h}_i^{(2)}$. See Fig. B.2 in Appendix B for an illustration.

Proposition 6. The dispersion of the rescaled set $\widehat{H}^{(2)} = \Upsilon^{s}_{\bar{h}^{(1)}}(\widetilde{H}^{(2)})$ is $d^{(1)}$.
3.3 Riemannian wrapped rotation

The purpose of this component is to align the orientation of the distributions of the two sets after translation and scaling, namely, after aligning their first and second moments. The proposed rotation function $\Theta_{\bar{h}^{(1)}} : \mathbb{L}^d \to \mathbb{L}^d$ consists of (i) mapping the points from the manifold $\mathbb{L}^d$ to the tangent space $T_{\bar{h}^{(1)}}\mathbb{L}^d$, (ii) mapping to $\mathbb{R}^d$, (iii) rotating in $\mathbb{R}^d$, and (iv) mapping back to the tangent space and then to the manifold. We perform the rotation in $\mathbb{R}^d$, which we term wrapped rotation, rather than a direct rotation on the manifold $\mathbb{L}^d$ or on the tangent space $T_{\bar{h}^{(1)}}\mathbb{L}^d$, for the following reasons. First, the frequently used rotation map in $\mathbb{L}^d$ [51] does not necessarily preserve the Riemannian mean, and in our context, it might reverse the mean alignment. Second, a rotation applied directly to the tangent space $T_{\bar{h}^{(1)}}\mathbb{L}^d$ does not guarantee that the rotated points remain on the same tangent space. Third, applying the rotation in the Euclidean vector space that is isometric to the tangent space is less efficient and stable, and it obtains slightly worse empirical results (see Appendix D.4 for details). Last, applying the rotation in $\mathbb{R}^d$ allows us to use the standard Euclidean rotation via SVD. In Section 4, we empirically demonstrate the advantage of the proposed rotation compared to the alternatives.

Definition 1. Let the mapping function $P_{\bar{h}^{(1)}} : T_{\bar{h}^{(1)}}\mathbb{L}^d \to \mathbb{R}^d$, defined on the tangent space at $\bar{h}^{(1)}$, and its inverse map be the following functions:
$$P_{\bar{h}^{(1)}}(v) := \big[v(2), \ldots, v(d+1)\big]^\top \in \mathbb{R}^d \quad\text{and}\quad P^{-1}_{\bar{h}^{(1)}}(s) := \Big[\tfrac{\langle s,\, P_{\bar{h}^{(1)}}(\bar{h}^{(1)})\rangle}{\bar{h}^{(1)}(1)},\; s^\top\Big]^\top \in T_{\bar{h}^{(1)}}\mathbb{L}^d, \tag{9}$$
where $s \in \mathbb{R}^d$ and $\langle\cdot,\cdot\rangle$ is the standard Euclidean inner product. Note that removing the first element of $v$ is valid due to the constraint imposed on the vector elements in the tangent space by definition. Indeed, no information is lost and the mapping is invertible.

The first step in our rotation component is to map the points in $H^{(1)}$ and in $\widehat{H}^{(2)}$ to the tangent space at $\bar{h}^{(1)}$: $v_i^{(1)} = \mathrm{Log}_{\bar{h}^{(1)}}(h_i^{(1)})$ for $i = 1, \ldots, N_1$, and $v_i^{(2)} = \mathrm{Log}_{\bar{h}^{(1)}}(\hat{h}_i^{(2)})$ for $i = 1, \ldots, N_2$. In the second step, we map the points by the mapping function in Definition 1 and re-center them: $s_i^{(k)} = P_{\bar{h}^{(1)}}(v_i^{(k)}) - \bar{s}^{(k)}$ for $i = 1, \ldots, N_k$, where $\bar{s}^{(k)} = \frac{1}{N_k}\sum_{i=1}^{N_k} P_{\bar{h}^{(1)}}(v_i^{(k)})$ for $k = 1, 2$ is the mean vector of the projections. Then, the mapped and centered points (in $\mathbb{R}^d$) are collected into matrices:
$$S^{(k)} = \big[s_1^{(k)}, s_2^{(k)}, \ldots, s_{N_k}^{(k)}\big] \in \mathbb{R}^{d\times N_k}. \tag{10}$$
In the third step, for each set $k = 1, 2$, we compute the rotation matrix $U^{(k)} \in \mathbb{R}^{d\times d}$ by applying SVD to the matrix $S^{(k)} = U^{(k)}\Lambda^{(k)}(E^{(k)})^\top$. Since the left-singular vectors are determined up to a sign, we propose to align their signs as follows: $u_i^{(2)} \leftarrow \mathrm{sign}(\langle u_i^{(2)}, u_i^{(1)}\rangle)\, u_i^{(2)}$, where $u_i^{(1)}$ and $u_i^{(2)}$ are the $i$-th left-singular vectors of the two sets, resulting in modified rotation matrices $U^{(k)}$. Finally, we apply the rotation to $\widehat{H}^{(2)}$ by
$$\Theta^{U}_{\bar{h}^{(1)}}(\hat{h}_i^{(2)}) = \mathrm{Exp}_{\bar{h}^{(1)}}\Big(P^{-1}_{\bar{h}^{(1)}}\Big(U^\top\big(P_{\bar{h}^{(1)}}\big(\mathrm{Log}_{\bar{h}^{(1)}}(\hat{h}_i^{(2)})\big) - \bar{s}^{(2)}\big) + \bar{s}^{(2)}\Big)\Big), \tag{11}$$
where $U = U^{(1)}(U^{(2)})^\top$.

Proposition 7. The wrapped rotation is bijective, and the inverse is given by
$$\big(\Theta^{U}_{\bar{h}^{(1)}}\big)^{-1} = \Theta^{U^\top}_{\bar{h}^{(1)}}. \tag{12}$$
3.4 Analysis

Putting all three components together, the proposed HPA that aligns $H^{(2)}$ with $H^{(1)}$ culminates in the composition of translation, scaling, and rotation:
$$\Theta^{U}_{\bar{h}^{(1)}} \circ \Upsilon^{s}_{\bar{h}^{(1)}} \circ \Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}}. \tag{13}$$
As in most PA schemes, the order of the three components is important. Yet, the proposed components allow a certain degree of freedom, as indicated in the following results.

Proposition 8. The Riemannian translation and the Riemannian scaling commute w.r.t. the Riemannian means $\bar{h}^{(1)}$ and $\bar{h}^{(2)}$:
$$\Upsilon^{s}_{\bar{h}^{(1)}} \circ \Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}} = \Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}} \circ \Upsilon^{s}_{\bar{h}^{(2)}}. \tag{14}$$
Note that $\Upsilon^{s}_{\bar{h}^{(1)}}$ and $\Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}}$ do not necessarily commute: $\Upsilon^{s}_{\bar{h}^{(1)}} \circ \Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}} \neq \Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}} \circ \Upsilon^{s}_{\bar{h}^{(1)}}$.

Proposition 9. The Riemannian scaling and the wrapped rotation commute:
$$\Upsilon^{s}_{\bar{h}^{(1)}} \circ \Theta^{U}_{\bar{h}^{(1)}} = \Theta^{U}_{\bar{h}^{(1)}} \circ \Upsilon^{s}_{\bar{h}^{(1)}}. \tag{15}$$

We note that the rotation does not commute with the translation, because PT only preserves the local covariant derivative on the tangent space but might cause rotation and distortion along the transportation. Therefore, the rotation is required to be the last component of our HPA.

Thus far, we have not presented a model for the discrepancy between the two sets, nor have we presented the proposed HPA as optimal with respect to some criterion. In the following result, we show that if the discrepancy between the sets can be expressed as a composition of translation, scaling, and rotation, then the two sets can be perfectly aligned using HPA.

Proposition 10. Let $\eta : \mathbb{L}^d \to \mathbb{L}^d$ be a map given by $\eta = \Theta^{U}_{\bar{h}^{(1)}} \circ \Upsilon^{s}_{\bar{h}^{(1)}} \circ \Gamma_{\bar{h}^{(2)}\to\bar{h}^{(1)}}$. If $H^{(1)} = \{h_i^{(1)} = \eta(h_i^{(2)})\}_{i=1}^{N_2}$, then
$$h_i^{(2)} = \big(\Theta^{U'}_{\bar{h}^{(2)}} \circ \Upsilon^{1/s}_{\bar{h}^{(2)}} \circ \Gamma_{\bar{h}^{(1)}\to\bar{h}^{(2)}}\big)(h_i^{(1)}), \quad i = 1, \ldots, N_2, \tag{16}$$
where $U' \in O(d)$.

Note that HPA consists of the sequence of Riemannian translation, Riemannian scaling, and wrapped rotation. The domain and range of each component is the manifold $\mathbb{L}^d$. Yet, the first and last operations of each component are the Logarithmic and Exponential maps, which project a point from the manifold to the tangent space and vice versa, respectively. This allows us to propose an efficient implementation of the sequence without the back-and-forth projections, as described in Appendix C.

3.5 Extension to multiple sets

We can naturally scale up the setting to support the alignment of $K > 2$ sets, denoted by $H^{(k)} = \{h_i^{(k)}\}_{i=1}^{N_k}$, where $k \in \{1, 2, \ldots, K\}$. Let $\bar{h}^{(k)}$ and $d^{(k)}$ be the Riemannian mean and dispersion of the $k$-th dataset, respectively. In addition, let $\bar{h}$ be the global Riemannian mean of $\{\bar{h}^{(k)}\}_{k=1}^{K}$. We propose to transport the points of the $k$-th set using $\Gamma_{\bar{h}^{(k)}\to\bar{h}}$. Next, the Riemannian dispersion of each set is set to 1 by applying $\Upsilon^{s}_{\bar{h}}$ with $s = 1/\sqrt{d^{(k)}}$. Finally, the wrapped rotation is applied to all the datasets on the mapping of the tangent space $T_{\bar{h}}\mathbb{L}^d$ and then mapped back to the manifold $\mathbb{L}^d$. The first set is designated as the reference set, and all other rotation matrices $U^{(k)}$ are updated according to $u_i^{(k)} \leftarrow \mathrm{sign}(\langle u_i^{(k)}, u_i^{(1)}\rangle)\, u_i^{(k)}$. The proposed HPA for multiple sets is summarized in Algorithm 1, and some implementation remarks appear in Appendix C.

Algorithm 1 Hyperbolic Procrustes analysis
Input: $K$ sets of hyperbolic points $H^{(1)} = \{h_i^{(1)}\}_{i=1}^{N_1}, \ldots, H^{(K)} = \{h_i^{(K)}\}_{i=1}^{N_K}$
Output: $K$ aligned sets of hyperbolic points $\breve{H}^{(1)} = \{\breve{h}_i^{(1)}\}_{i=1}^{N_1}, \ldots, \breve{H}^{(K)} = \{\breve{h}_i^{(K)}\}_{i=1}^{N_K}$
1: for each set $H^{(k)}$ do
2:   compute the Riemannian mean $\bar{h}^{(k)}$ and dispersion $d^{(k)}$
3: end for
4: compute $\bar{h}$, the global Riemannian mean of $\{\bar{h}^{(k)}\}_{k=1}^{K}$
5: for each set $H^{(k)}$ do
6:   apply the Riemannian translation $\tilde{h}_i^{(k)} = \Gamma_{\bar{h}^{(k)}\to\bar{h}}(h_i^{(k)})$  // Eq. (2)
7:   apply the Riemannian scaling $\hat{h}_i^{(k)} = \Upsilon^{s}_{\bar{h}}(\tilde{h}_i^{(k)})$ with $s = 1/\sqrt{d^{(k)}}$  // Eq. (8)
8:   apply the wrapped rotation $\breve{h}_i^{(k)} = \Theta^{U}_{\bar{h}}(\hat{h}_i^{(k)})$ with $U = U^{(1)}(U^{(k)})^\top$  // Eq. (11)
9: end for

4 Experimental results

We apply HPA to simulations and to three biomedical datasets. Our code is available at https://github.com/RonenTalmonLab/HyperbolicProcrustesAnalysis. In addition, we test HPA on the MNIST [31] and USPS [23] datasets, which arguably do not have a distinct hierarchical structure. Nonetheless, we demonstrate in Appendix D that our HPA is highly effective in aligning these two datasets. All the experiments are label-free. We compare the obtained results to the following alignment methods: (i) PAH [51], which is applied only to the simulated data since it requires the existence of a one-to-one correspondence between the sets, (ii) only the Riemannian translation (RT), (iii) OT in hyperbolic space with the weighted Fréchet mean (HOT-F), extended to an unsupervised setting according to [54], (iv) OT with W-linear map (HOT-L) [22], and (v) hyperbolic mapping estimation (HOT-ME) [22]. As a baseline, we present the results obtained before the alignment (Baseline). For more details on the experimental setting, see Appendix C.

4.1 Simulations

The synthetic data in $\mathbb{L}^d$ are generated using the sampling scheme described in Section 2, based on [39]. Given an arbitrary point $\mu \in \mathbb{L}^d$ and an arbitrary SPD matrix $\Sigma \in \mathbb{R}^{d\times d}$, we generate a set of $N$ points $Q^{(1)} = \{q_i^{(1)}\}_{i=1}^{N}$ centered at $\mu$ by $\mathbb{L}^d \ni q_i^{(1)} = \mathrm{Exp}_{\mu}\big(\mathrm{PT}_{\mu_0\to\mu}(v_i^{(1)})\big)$, where $\mu_0 = [1, \mathbf{0}^\top]^\top$ is the origin, $v_i^{(1)} = [0, \tilde{v}_i^{(1)\top}]^\top$, and $\tilde{v}_i^{(1)} \sim \mathcal{N}(\mathbf{0}, \Sigma)$. Next, we generate three noisy and distorted versions of $Q^{(1)}$. The first noisy set $Q^{(2)} = \{q_i^{(2)}\}_{i=1}^{N}$ is generated as proposed in [51] by $q_i^{(2)} = L\,T_{\epsilon_i} q_i^{(1)}$, where $T_{\epsilon_i}$ is a hyperbolic translation defined by $T_{\epsilon_i} = \big[\sqrt{1+\epsilon_i^\top\epsilon_i},\ \epsilon_i^\top;\ \epsilon_i,\ (I+\epsilon_i\epsilon_i^\top)^{1/2}\big]$, $\epsilon_i$ is sampled from $\mathcal{N}(\mathbf{0}, \sigma^2 I)$ with variance $\sigma^2$, and $L$ is a random H-unitary matrix [16]. Another noisy set, denoted $Q^{(3)} = \{q_i^{(3)}\}_{i=1}^{N}$, is generated by $q_i^{(3)} = L\big(\mathrm{Exp}_{\mu}(\mathrm{PT}_{\mu_0\to\mu}(u_i^{(1)}))\big)$, where $u_i^{(1)} = [0, (\tilde{v}_i^{(1)}+\epsilon_i)^\top]^\top$. Here, the noise is added in the tangent space at $\mu_0$. Finally, let $Q^{(4)} = \{q_i^{(4)}\}_{i=1}^{N}$ be a distorted set, given by $q_i^{(4)} = f_{\mu'}(q_i^{(3)})$, where $f_{\mu'}(x) = \cosh(\|u\|_{\mathbb{L}}\,t)\,\mu' + \sinh(\|u\|_{\mathbb{L}}\,t)\,\tfrac{u}{\|u\|_{\mathbb{L}}}$ and $u = \mathrm{Log}_{\mu'}(x)$, for an arbitrary (fixed) $\mu' \in \mathbb{L}^d$ and $t > 0$.
We apply Algorithm 1 to align the three pairs of sets $\{Q^{(1)}, Q^{(2)}\}$, $\{Q^{(1)}, Q^{(3)}\}$, and $\{Q^{(1)}, Q^{(4)}\}$, setting $N = 100$, $\sigma = 1$, and $d \in \{3, 5, 10, 20, \ldots, 40\}$. Each experiment is repeated 10 times with different values of $\mu$, $\Sigma$, $\mu'$, and $t$. To evaluate the alignment, we use the pairwise discrepancy based on the hidden one-to-one correspondence, given by $\varepsilon(Q^{(1)}, Q^{(j)}) = \frac{1}{N}\sum_{i=1}^{N} d^2_{\mathbb{L}^d}(q_i^{(1)}, q_i^{(j)})$, where $j \in \{2, 3, 4\}$. The discrepancy as a function of the dimension $d$ is shown in Fig. 2. We observe that the proposed HPA has lower discrepancy relative to the other label-free methods. Specifically, it outperforms the OT-based methods that are designed to match the densities. Furthermore, the proposed HPA is stable, in contrast to HOT-L, which is highly sensitive to the noise and distortion introduced in $Q^{(3)}$ and $Q^{(4)}$. Interestingly, we remark that the discrepancies of RT and PAH are very close, empirically showing that RT alone is comparable to PAH. In addition, note that HPA is permutation-invariant and does not require a one-to-one correspondence as PAH does. We report the running time in Appendix D and demonstrate that HPA is more efficient than HOT-F and HOT-ME.
4.2 Batch effect removal

We consider bioinformatics datasets consisting of gene expression data and CyTOF. Representing such data in hyperbolic spaces was shown to be informative and useful [25], implying that such data have an underlying inherent hierarchical structure. Batch effects [43] arise from experimental variations that can be attributed to the measurement device or other environmental factors. Batch correction is typically a critical precursor to any subsequent analysis and processing. Three batch effect removal tasks are examined.

The first task involves breast cancer (BC) gene expression data. We consider two publicly available datasets: METABRIC [8] and TCGA [26], consisting of samples from five breast cancer subtypes. The batch effect stems from different profiling techniques: gene expression microarray and RNA sequencing. In the second task, three cohorts of lung cancer (LC) gene expression data [21] are considered, consisting of samples from three lung cancer subtypes. The data were collected using gene expression microarrays at three different sites (a likely source of batch effects): Stanford University (ST), University of Michigan (UM), and Dana-Farber Cancer Institute (D-F). The last task involves CyTOF data [48] consisting of peripheral blood mononuclear cells (PBMCs) collected from two multiple sclerosis patients during four days: two days before treatment (BT) and two days after treatment (AT). These 8 = 2 × 2 × 2 batches were collected with or without PMA/ionomycin-stimulated PBMCs. We aim to remove the batch effects between two different days from the same condition (BT/AT) and from the same patient.

In each batch removal task, we first learn an embedding of the data from all the batches into the Lorentz model $\mathbb{L}^d$ [42]. Then, HPA is applied to the embedded points in $\mathbb{L}^d$. Fig. 3 shows a visualization of the embedding of the two breast cancer datasets before and after HPA. For visualization, we project the points in $\mathbb{L}^3$ to the 3D Poincaré ball. Before the alignment, the dominant factor separating the patients' samples (points) is the batch. In contrast, after the alignment, the batch effect is substantially suppressed (visually), and the factors separating the points are dominated by the cancer subtype.

We evaluate the quality of the alignment in two aspects using objective measures: (i) k-NN classification, with leave-one-batch-out cross-validation, is utilized for assessing the alignment of the intrinsic structure, and (ii) MMD [19] is used for assessing the distribution alignment quality. For the classification, we view the five subtypes of BC, the three subtypes of LC, and the presence of stimulated cells in CyTOF as the labels in the respective tasks. In addition to the results of the different alignment methods, we report the k-NN classification based only on a single batch (S-Baseline), which indicates the adequacy of the representation in hyperbolic space for the task at hand. Table 1 reports the k-NN classification obtained with the best k per method, and Table 2 shows the MMD. In each task, we set the dimension d of the Lorentz model to the dimension in which the best empirical single-task performance (S-Baseline) is obtained. We note that similar results and trends are obtained for various dimensions. Additional results for various k values and an ablation study, showing that the combination of all three components yields the best classification results, are reported in Appendix D.

Although the OT-based methods obtain the best matching between the distributions of the batches, HPA outperforms them in all three tasks in terms of classification (see Table 1). In the two gene expression tasks, where the data have multiple labels and we align multiple batches, the advantage of HPA compared to the other methods is particularly significant. In Appendix D, we demonstrate HPA's out-of-sample capabilities on the CyTOF data by learning the batch correction map between the different days from one patient and applying it to the data of the other patient.
4.3 Discussion

Alignment methods based on density matching, such as OT-based methods, often overlook an important aspect in purely unsupervised settings. Although the sample density is the main data property that can be, and needs to be, aligned, preserving the intrinsic structure/geometry of the sets is important, as it might be tightly related to the hidden labels. Indeed, we see in our experiments that OT-based methods provide a good density alignment (reducing the inter-set variability), as demonstrated by small MMD values (see Table 2). However, the intrinsic structure of the sets (the intra-set variability) is not preserved, as evidenced by the resulting poor (hidden) label matching, conveyed by the k-NN classification performance (see Table 1). This is also illustrated in the right panel of Fig. 1. There, it is visible that the three OT-based methods provide a good global alignment of the sets, yet the intrinsic structure is not kept, as implied by the poor color matching. In contrast to OT-based methods, HPA does not explicitly aim to match densities, and thus, it obtains slightly worse MMD performance compared to OT-based methods. However, HPA matches the first two moments of the density and includes the rotation component, addressing one of the fundamental limitations of OT-based methods for alignment, namely that OT cannot recover volume-preserving maps [4, 36]. As seen in the simulation and experimental results and illustrated in the right panel of Fig. 1, we still obtain a good global alignment and simultaneously preserve the intrinsic structure, allowing for high classification performance. We remark that in the synthetic examples, there is a (hidden) one-to-one correspondence between the sets, and therefore a one-to-one discrepancy can be computed (instead of, or in addition to, MMD). When there is such a correspondence, OT still cannot recover volume-preserving maps, while HPA can mitigate noise and distortions.

5 Conclusion

We introduced HPA for label-free alignment of data in the Lorentz model. Based on Riemannian geometry, we presented new translation and scaling operations that align the first and second Riemannian moments, as well as a wrapped rotation that aligns the orientation in the hyperboloid model. Our theoretical analysis provides further insight and highlights properties that may be useful for practitioners. We empirically showed in simulations that HPA is stable under noise and distortions, and we demonstrated purely unsupervised batch correction of multiple bioinformatics datasets with multiple labels. Beyond alignment and batch effect removal, our method can be viewed as a type of domain adaptation or a precursor of transfer learning that relies on purely geometric considerations, exploiting the geometric structure of the data as well as the geometric properties of the space of the data. In addition, it can be utilized for multimodal data fusion and geometric registration of shapes with hierarchical structure.

Acknowledgments and Disclosure of Funding

We thank the reviewers for their important comments and suggestions, and we thank Thomas Dagès for the helpful discussion. The work of YEL and RT was supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 802735-ERC-DIFFOP. The work of YK was supported by the National Institutes of Health R01GM131642, UM1PA05141, P50CA121974, and U01DA053628.
1. What is the main contribution of the paper in terms of data alignment?
2. What is the significance of the Lorentz model presented in the paper?
3. What are the strengths of the paper's theoretical analysis and experimental results?
4. How does the reviewer assess the novelty and effectiveness of the proposed alignment method?
5. Are there any limitations or potential improvements suggested by the reviewer for future research?
Summary Of The Paper Review
Summary Of The Paper
The need for alignment often arises since data are acquired in multiple domains, under different environmental conditions, using various acquisition equipment, and at different sites. This paper focuses on the problem of unsupervised alignment of data embedded in hyperbolic spaces.

Review
This paper presents a Procrustes-style method in the Lorentz model for unsupervised, label-free hierarchical data alignment. The authors also provide theoretical analysis and justification of the alignment method; the analysis offers further insight and highlights properties that may be useful. Their experiments also indicate that HPA is stable under noise and distortions. Most interestingly, the authors additionally experiment on two datasets that arguably do not have a distinct hierarchical structure, on which the method is nonetheless highly effective.
NIPS
Title Learnable Polyphase Sampling for Shift Invariant and Equivariant Convolutional Networks Abstract We propose learnable polyphase sampling (LPS), a pair of learnable down/upsampling layers that enable truly shift-invariant and equivariant convolutional networks. LPS can be trained end-to-end from data and generalizes existing handcrafted downsampling layers. It is widely applicable, as it can be integrated into any convolutional network by replacing its down/upsampling layers. We evaluate LPS on image classification and semantic segmentation. Experiments show that LPS is on-par with or outperforms existing methods in both performance and shift consistency. For the first time, we achieve true shift-equivariance on semantic segmentation (PASCAL VOC), i.e., 100% shift consistency, outperforming baselines by an absolute 3.3%. Our project page and code are available at https://raymondyeh07.github.io/learnable_polyphase_sampling/ 1 Introduction For tasks like image classification, shifts of an object do not change the corresponding object label, i.e., the task is shift-invariant. This shift-invariance property has been incorporated into deep-nets, yielding convolutional neural nets (CNNs). Seminal works on CNNs [15, 24] directly attribute the model design to shift-invariance. For example, Fukushima [15] states that “the network has an ability of position-invariant pattern recognition,” and LeCun et al. [24] motivate CNNs by stating that they “ensure some degree of shift invariance.” CNNs have evolved since their conception. Modern deep-nets contain more layers and use different non-linearities and pooling layers. Re-examining these modern architectures, Zhang [56] surprisingly finds that modern deep-nets are not shift-invariant. To address this, Zhang [56] and Zou et al. [57] propose to perform anti-aliasing before each downsampling layer, and found it to improve the degree of invariance. More recently, Chaman and Dokmanic [5] show that deep-nets can be “truly shift-invariant,” i.e., a model's output is identical for given shifted inputs. For this, they replace all downsampling layers with their adaptive polyphase sampling (APS) layer. While APS achieves true shift-invariance by selecting the max-norm polyphase component (a handcrafted downsampling scheme), an important question arises: are there more effective downsampling schemes that can achieve true shift-invariance? Consider an extreme case: a handcrafted deep-net that always outputs zeros is truly shift-invariant, but does not accomplish any task. This motivates studying how truly shift-invariant downsampling schemes can be learned from data. For this, we propose Learnable Polyphase Sampling (LPS), a pair of down/upsampling layers that yield truly shift-invariant/equivariant deep-nets and can be trained in an end-to-end manner. For downsampling, LPS can be easily integrated into existing deep-net architectures by swapping out the pooling/striding layers. Theoretically, LPS generalizes APS to downsampling schemes that cannot be represented by APS. Hence, LPS's ideal performance is never worse than that of APS. For upsampling, LPS guarantees architectures that are truly shift-equivariant, i.e., the output shifts accordingly when the input shifts. This is desirable for tasks like semantic image segmentation.
To validate the proposed LPS, we conduct extensive experiments: (a) image classification on CIFAR10 [21] and ImageNet [12]; (b) semantic segmentation on PASCAL VOC [13]. We observe that the proposed approach outperforms APS and further improves anti-aliasing methods in both model performance and shift consistency. Our contributions are as follows:

• We propose learnable polyphase sampling (LPS), a pair of novel down/upsampling layers, and prove that they yield truly shift-invariant/-equivariant deep-nets. Different from prior works, our sampling scheme is trained end-to-end rather than handcrafted.
• We theoretically prove that LPS (downsampling) is a generalization of APS; hence, in theory, LPS improves upon APS.
• We conduct extensive experiments demonstrating the effectiveness of LPS on image classification and segmentation over three datasets, comparing to APS and anti-aliasing approaches.

2 Related Work

In this section, we briefly discuss related work, including shift-invariant/equivariant deep-nets and pooling layers. Additional necessary concepts are reviewed in Sec. 3.

Shift invariant/equivariant convolutional networks. Modern convolutional networks use striding or pooling to reduce the amount of memory and computation in the model [17, 22, 40, 45]. As pointed out by Azulay and Weiss [1] and Zhang [56], these pooling/striding layers break the shift-invariance property of deep-nets. To address this issue, Zhang [56] proposed to perform anti-aliasing, i.e., lowpass filtering (LPF) before each downsampling, a canonical signal-processing technique for multi-rate systems [47]. We illustrate this approach in Fig. 1 (left). Zou et al. [57] further improved the LPF technique by using adaptive filters, which better preserve edge information. While anti-aliasing filters are effective, Chaman and Dokmanic [5] show that true shift-invariance, i.e., 100% shift consistency, can be achieved without anti-aliasing. Specifically, they propose Adaptive Polyphase Sampling (APS), which selects the downsampling indices, i.e., polyphase components, based on the $\ell_p$-norm of the polyphase components; a handcrafted rule, as illustrated in Fig. 1 (right). In a follow-up technical report [4], APS is extended to upsampling using unpooling layers [2, 55], where the downsampling indices are saved to place values back at their corresponding spatial locations during upsampling. Our work presents a novel pair of shift-invariant/equivariant down/upsampling layers which are trainable, in contrast to APS's handcrafted selection rule. We note that generalizations of equivariance beyond shifts have also been studied [3, 7, 37, 39, 43, 46, 48, 52] and applied to various domains, e.g., sets [16, 31, 35, 38, 50, 53], graphs [10, 11, 19, 27, 28, 30, 32, 44, 51], spherical images [8, 9, 20], volumetric data [49], etc. In this work, we focus solely on shift-equivariance for images with CNNs.

Pooling layers. Many designs for better downsampling or pooling layers have been proposed. Popular choices are Average-Pooling [23] and Max-Pooling [36]. Other generalizations also exist, e.g., LP-Pooling [42], which generalizes pooling to use different norms. The effectiveness of different pooling layers has also been studied by Scherer et al. [41]. More similar to our work are Stochastic-Pooling [54] and Mixed Max-Average Pooling [25]. Stochastic-Pooling constructs a probability distribution by normalizing activations within a window and samples from it during training. In our work, we present a novel design which learns the sampling distribution.
Mixed Max-Average Pooling learns a single scalar to permit a soft choice between Max- and Average-Pooling. In contrast, our LPS has shift-equivariance guarantees while being end-to-end trainable.

3 Preliminaries

We provide a brief review of equivariant and invariant functions to establish the notation. For readability, we use one-dimensional data to illustrate these ideas. In practice, these concepts are generalized to multiple channels and two-dimensional data.

Shift invariance and equivariance. The concept of equivariance, a generalization of invariance, describes how a function's output is transformed given that the input is transformed in a predefined way. For example, shift-equivariance describes how the output is shifted given that the input is also shifted: think of image segmentation; if an object in the image is shifted, then its corresponding mask is also shifted. A function $f: \mathbb{R}^N \mapsto \mathbb{R}^M$ is $T_N, \{T_M, I\}$-equivariant (shift-equivariant) if and only if (iff)

$\exists\, T \in \{T_M, I\} \ \text{s.t.}\ f(T_N x) = T f(x) \quad \forall x \in \mathbb{R}^N$,   (1)

where $T_N x[n] \triangleq x[(n+1) \bmod N]\ \forall n \in \mathbb{Z}$ denotes a circular shift, $[\cdot]$ denotes the indexing operator, and $I$ denotes the identity function. This definition of equivariance handles the ambiguity that arises when shifting by one and downsampling by two. Ideally, a shift by one at the input should result in a 0.5 shift in the downsampled signal, which is not achievable on the integer grid. Hence, this definition considers either a shift by one or no shift at the output as equivariant.

Following the equivariance definition, invariance can be viewed as a special case where the transformation at the output is the identity function $I$. Concretely, a function $f: \mathbb{R}^N \mapsto \mathbb{R}^M$ is $T_N, \{I\}$-equivariant (shift-invariant) iff

$f(T_N x) = f(x) \quad \forall x \in \mathbb{R}^N$.   (2)

To obtain shift-invariance from shift-equivariant functions, it is common to use global pooling. Observe that

$\sum_m f(Tx)[m] = \sum_m (T f(x))[m]$   (3)

is shift-invariant if $f$ is shift-equivariant, as summation is an orderless operation. Note that the composition of shift-equivariant functions maintains shift-equivariance. Hence, $f$ can be a stack of equivariant layers, e.g., a composition of convolution layers. While existing deep-nets [17, 26, 40] do use global spatial pooling, these architectures are not shift-invariant. This is due to pooling and downsampling layers, which are not shift-equivariant, as we review next.

Downsampling and pooling layers. A downsampling-by-two layer $D: \mathbb{R}^N \mapsto \mathbb{R}^{\lfloor N/2 \rfloor}$ is defined as

$D(x)[n] = x[2n] \quad \forall n \in \mathbb{Z}$,   (4)

which returns the even indices of the input $x$. As a shift operator makes the odd indices even, a downsampling layer is not shift-equivariant/invariant. Commonly used average or max pooling can be viewed as an average or max filter followed by downsampling; hence pooling is also not shift-equivariant/invariant. To address this issue, Chaman and Dokmanic [5] propose adaptive polyphase sampling (APS), an input-dependent (adaptive) selection of the odd/even indices.

Adaptive polyphase sampling. Proposed by Chaman and Dokmanic [5], adaptive polyphase sampling (APS) returns either the odd or the even indices, i.e., the polyphase components, based on their norms. Formally, $\mathrm{APS}: \mathbb{R}^N \mapsto \mathbb{R}^{\lfloor N/2 \rfloor}$ is defined as

$\mathrm{APS}(x) = \begin{cases} \mathrm{Poly}(x)_0 & \text{if } \|\mathrm{Poly}(x)_0\| > \|\mathrm{Poly}(x)_1\| \\ \mathrm{Poly}(x)_1 & \text{otherwise} \end{cases}$,   (5)

where $x \in \mathbb{R}^N$ is the input and $\mathrm{Poly}(x)_i$ denotes the polyphase components, i.e.,

$\mathrm{Poly}(x)_0[n] = x[2n]$ and $\mathrm{Poly}(x)_1[n] = x[2n+1]$.   (6)
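To make these definitions concrete, here is a minimal 1-D sketch of the circular shift, the polyphase decomposition of Eq. (6), and the APS rule of Eq. (5). This is our own illustration under stated assumptions, not the authors' released code; all function names are ours.

```python
import torch

def circular_shift(x: torch.Tensor) -> torch.Tensor:
    """T_N x[n] = x[(n + 1) mod N] along the last (spatial) dimension."""
    return torch.roll(x, shifts=-1, dims=-1)

def polyphase(x: torch.Tensor):
    """Split a length-N signal into its two polyphase components (Eq. 6)."""
    return x[..., 0::2], x[..., 1::2]  # Poly(x)_0, Poly(x)_1

def aps(x: torch.Tensor) -> torch.Tensor:
    """Adaptive polyphase sampling (Eq. 5): keep the max-norm component."""
    p0, p1 = polyphase(x)
    return p0 if p0.norm() > p1.norm() else p1

x = torch.randn(8)
y, y_shift = aps(x), aps(circular_shift(x))
# Shift-equivariance (Eq. 1): the two outputs agree up to a shift on the
# coarse (downsampled) grid.
assert torch.allclose(y, y_shift) or torch.allclose(circular_shift(y), y_shift)
```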
While this handcrafted selection rule achieves a consistent selection of the polyphase components, it is not the only way to achieve it; e.g., returning the polyphase component with the smaller norm also works. In this work, we study a family of shift-equivariant sampling layers and propose how to learn them in a data-driven manner.

4 Approach

Our goal is to design a learnable down/upsampling layer that is shift-invariant/equivariant. We formulate down/upsampling by modeling the conditional probability of selecting each polyphase component given an input. For this we use a small neural network. This enables the sampling scheme to be trained end-to-end from data, hence the name learnable polyphase sampling (LPS). In Sec. 4.1, we introduce learnable polyphase downsampling (LPD), discuss how to train it end-to-end, and show that it generalizes APS. In Sec. 4.2, we propose a practical layer design for LPD. Lastly, in Sec. 4.3, we discuss how to perform LPS for upsampling, namely, learnable polyphase upsampling (LPU). For readability, we present the approach using one-dimensional data, i.e., a row in an image.

4.1 Learnable Polyphase Downsampling

We propose learnable polyphase downsampling (LPD) to learn a shift-equivariant downsampling layer. Given an input feature map $x \in \mathbb{R}^{C \times N}$, LPD spatially downsamples the input to produce an output in $\mathbb{R}^{C \times \lfloor N/2 \rfloor}$ via

$\mathrm{LPD}(x)[c, n] = x[c, 2n + k^\star] \triangleq \mathrm{Poly}(x)_{k^\star}$,   (7)

where $k^\star = \arg\max_{k \in \{0,1\}} p_\theta(\mathsf{k} = k \mid x)$ and $\mathrm{Poly}(x)_{k^\star}$ denotes the $k^\star$-th polyphase component. We model a conditional probability $p_\theta(\mathsf{k} \mid x)$ for selecting polyphase components, i.e., $\mathsf{k}$ denotes the random variable of the polyphase indices. For 1D data, there are only two polyphase components.

Critically, not all $p_\theta$ lead to an equivariant downsampling layer. For example, $p_\theta(\mathsf{k} = 0 \mid x) = 1$ results in standard downsampling, which always returns values at even indices for 1D signals. We next examine which family of $p_\theta$ achieves a shift-equivariant downsampling layer.

Shift-permutation equivariance of $p_\theta$. Consider the example in Fig. 2. We can see that a circular shift in the spatial domain induces a permutation of the polyphase components. Observe that the top row of the polyphase component containing the blue circle and orange square is permuted to the second row when the input is circularly shifted. We now state this formally.

Lemma 1 (Polyphase shift-permutation property).

$\mathrm{Poly}(T_N x)_k = \begin{cases} \mathrm{Poly}(x)_1 & \text{if } k = 0 \\ T_M \mathrm{Poly}(x)_0 & \text{if } k = 1 \end{cases}$.   (8)

Proof. By definition,

$\mathrm{Poly}(T_N x)_k[n] = T_N x[(2n + k) \bmod N] = x[(2n + k + 1) \bmod N]$   (9)

$= \begin{cases} x[(2n+1) \bmod N] = \mathrm{Poly}(x)_1 & \text{if } k = 0 \\ x[(2(n+1)) \bmod N] = T_M \mathrm{Poly}(x)_0 & \text{if } k = 1 \end{cases}$   (10)

From Lemma 1, we observe that to achieve an equivariant downsampling layer, a spatially shifted input should lead to a permutation of the selection probability (Claim 1). We say that $p_\theta$ is shift-permutation-equivariant if

$p_\theta(\mathsf{k} = \pi(k) \mid T_N x) = p_\theta(\mathsf{k} = k \mid x)$,   (11)

where $\pi$ denotes a permutation of the polyphase indices; here a "swap" of indices, i.e., $\pi(0) = 1$ and $\pi(1) = 0$.

Claim 1. If $p_\theta$ is shift-permutation-equivariant, as defined in Eq. (11), then LPD defined in Eq. (7) is a shift-equivariant downsampling layer.

Proof. Let $\hat{x} \triangleq T_N x$ be a shifted version of $x \in \mathbb{R}^N$. Recall that $\mathrm{LPD}(x)$ and $\mathrm{LPD}(\hat{x})$ are defined as

$\mathrm{LPD}(x) \triangleq \mathrm{Poly}(x)_{k^\star}$, where $k^\star = \arg\max_{k \in \{0,1\}} p_\theta(\mathsf{k} = k \mid x)$,   (12)

$\mathrm{LPD}(\hat{x}) \triangleq \mathrm{Poly}(\hat{x})_{\hat{k}^\star}$, where $\hat{k}^\star = \arg\max_{k \in \{0,1\}} p_\theta(\mathsf{k} = k \mid \hat{x})$.   (13)

From Lemma 1, $\mathrm{LPD}(T_N x)$ can be expressed as

$\mathrm{LPD}(T_N x) = \begin{cases} \mathrm{Poly}(x)_1 & \text{if } \hat{k}^\star = 0 \\ T_M \mathrm{Poly}(x)_0 & \text{if } \hat{k}^\star = 1 \end{cases}$.   (14)

As $p_\theta$ is shift-permutation-equivariant,

$\hat{k}^\star = \pi(k^\star) = 1 - k^\star$.   (15)

Finally, combining Eq. (14) and Eq. (15),

$\mathrm{LPD}(T_N x) = \begin{cases} \mathrm{Poly}(x)_1 & \text{if } k^\star = 1 \\ T_M \mathrm{Poly}(x)_0 & \text{if } k^\star = 0 \end{cases} = \big((1 - k^\star) T_M + k^\star I\big) \cdot \mathrm{LPD}(x)$,   (16)

showing that LPD satisfies the shift-equivariance definition reviewed in Eq. (1).
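The proof above can be checked numerically. The sketch below is our own illustration with hypothetical names: it verifies Lemma 1, then instantiates Eq. (7) with an arbitrary shift-invariant score standing in for $p_\theta$; by Claim 1, the resulting downsampler is shift-equivariant.

```python
import torch

# Lemma 1: a circular shift permutes the polyphase components.
x = torch.randn(6)
Tx = torch.roll(x, -1)                            # T_N x
p0, p1 = x[0::2], x[1::2]                         # Poly(x)_0, Poly(x)_1
assert torch.equal(Tx[0::2], p1)                  # Poly(T_N x)_0 = Poly(x)_1
assert torch.equal(Tx[1::2], torch.roll(p0, -1))  # Poly(T_N x)_1 = T_M Poly(x)_0

# Claim 1: any shift-invariant score f yields a shift-equivariant LPD (Eq. 7).
def lpd_select(x, f):
    comps = [x[0::2], x[1::2]]
    return comps[int(torch.stack([f(p) for p in comps]).argmax())]

f = lambda p: p.abs().mean()   # an illustrative shift-invariant score
x = torch.randn(10)
y, y_s = lpd_select(x, f), lpd_select(torch.roll(x, -1), f)
assert torch.allclose(y, y_s) or torch.allclose(torch.roll(y, -1), y_s)
```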
Here, we parameterize $p_\theta$ with a small neural network. The exact construction of a shift-permutation-equivariant deep-net architecture is deferred to Sec. 4.2. We next discuss how to train the distribution parameters $\theta$ in LPD.

End-to-end training of LPD. At training time, to incorporate stochasticity and compute gradients, we parameterize $p_\theta$ using the Gumbel-Softmax [18, 29]. To backpropagate gradients to $\theta$, we relax the selection of polyphase components into a convex combination, i.e.,

$y = \sum_k z_k \cdot \mathrm{Poly}(x)_k$, $\quad z \sim p_\theta(\mathsf{k} \mid x)$,   (17)

where $z$ corresponds to a selection variable, i.e., $\sum_k z_k = 1$ and $z_k \in [0, 1]$. Note the slight abuse of notation, as $p_\theta(\mathsf{k} \mid x)$ denotes a probability over polyphase indices represented in a one-hot format. We further encourage the Gumbel-Softmax to behave more like an argmax by decaying its temperature $\tau$ during training, as recommended by Jang et al. [18].

LPD generalizes APS. A key advantage of LPS over APS is that it can learn from data, potentially leading to a better sampling scheme than a handcrafted one. Here, we show that APS is a special case of LPD. Therefore, LPD should perform at least as well as APS if its parameters are trained well.

Claim 2. APS is a special case of LPD, i.e., LPD can represent APS's selection rule.

Proof. Consider a parametrization of $p_\theta$ as follows:

$p_\theta(\mathsf{k} = k \mid x) = \frac{\exp(\|\mathrm{Poly}(x)_k\|)}{\sum_j \exp(\|\mathrm{Poly}(x)_j\|)}$.   (18)

As the exponential is a strictly increasing function, we have

$\arg\max_k p_\theta(\mathsf{k} = k \mid x) = \arg\max_k \|\mathrm{Poly}(x)_k\|$.   (19)

Eq. (18) is a softmax with input $\|\mathrm{Poly}(x)_k\|$; as such a function exists, LPD generalizes APS.

4.2 Practical LPD Design

We aim for a conditional distribution $p_\theta$ that is shift-permutation-equivariant, in order to obtain a shift-equivariant pooling layer. Let the conditional probability be modeled as

$p_\theta(\mathsf{k} = k \mid x) \triangleq \frac{\exp[f_\theta(\mathrm{Poly}(x)_k)]}{\sum_j \exp[f_\theta(\mathrm{Poly}(x)_j)]}$,   (20)

where $f_\theta: \mathbb{R}^{C \times H' \times W'} \mapsto \mathbb{R}$ is a small network that extracts features from the polyphase component $\mathrm{Poly}(x)_k$. We first show that $p_\theta$ is shift-permutation-equivariant if $f_\theta$ is shift-invariant.

Claim 3. In Eq. (20), if $f_\theta$ is shift-invariant, then $p_\theta$ is shift-permutation-equivariant (Eq. (11)).

Proof. Denote a feature map $x$ and its shifted version $\hat{x} \triangleq T_N x$. By definition,

$p_\theta(\mathsf{k} = \pi(k) \mid T_N x) = \frac{\exp(f_\theta(\mathrm{Poly}(T_N x)_{\pi(k)}))}{\sum_j \exp(f_\theta(\mathrm{Poly}(T_N x)_j))}$.   (21)

With a shift-invariant $f_\theta$ and using Lemma 1,

$f_\theta(\mathrm{Poly}(T_N x)_{\pi(k)}) = f_\theta(T_M \mathrm{Poly}(x)_k) = f_\theta(\mathrm{Poly}(x)_k)$   (22)

$\therefore\ p_\theta(\mathsf{k} = \pi(k) \mid T_N x) = \frac{\exp(f_\theta(\mathrm{Poly}(x)_k))}{\sum_j \exp(f_\theta(\mathrm{Poly}(x)_j))} = p_\theta(\mathsf{k} = k \mid x)$.

Based on the result in Claim 3, we now present a convolution-based meta-architecture that satisfies the shift-permutation property. The general design principle: share parameters across polyphase indices, just as convolution achieves shift-equivariance by sharing parameters, plus averaging over the spatial domain. An illustration of the proposed meta-architecture is shown in Fig. 3.

Fully convolutional model. Logits are extracted from the polyphase components via fully-convolutional operations followed by averaging along the channel and spatial domains. Following this, $f_\theta^{\mathrm{conv}}$ is denoted as

$f_\theta^{\mathrm{conv}}(\mathrm{Poly}(x)_k) \triangleq \frac{1}{CM} \sum_{c,n} \tilde{f}_\theta^{\mathrm{conv}}(\mathrm{Poly}(x)_k)[c, n]$,   (23)

where $\tilde{f}_\theta^{\mathrm{conv}}: \mathbb{R}^{C \times M} \mapsto \mathbb{R}^{C \times M}$ is a CNN model (without pooling layers) and $M = \lfloor N/2 \rfloor$. The shift-equivariance property of $\tilde{f}_\theta^{\mathrm{conv}}$ guarantees that $f_\theta^{\mathrm{conv}}$ is shift-invariant, due to the global pooling.
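A trainable 1-D LPD layer following Eqs. (17), (20), and (23) might look as follows in PyTorch. This is a sketch under our own assumptions: the layer name, the tiny circular-padded conv scorer, and the temperature are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LPD1d(nn.Module):
    def __init__(self, channels: int, tau: float = 1.0):
        super().__init__()
        # f_theta: a small fully-convolutional scorer shared across the two
        # polyphase components. Circular padding keeps it shift-equivariant;
        # with the global average below, the score is shift-invariant (Claim 3).
        self.score = nn.Conv1d(channels, channels, kernel_size=3,
                               padding=1, padding_mode='circular')
        self.tau = tau

    def forward(self, x):                                      # x: (B, C, N)
        comps = torch.stack([x[..., 0::2], x[..., 1::2]], dim=1)  # (B, 2, C, M)
        B, K, C, M = comps.shape
        # Eq. (23): logits = global average of conv features per component.
        logits = self.score(comps.reshape(B * K, C, M)).mean(dim=(1, 2)).view(B, K)
        if self.training:
            # Eq. (17): relaxed convex combination via Gumbel-Softmax.
            z = F.gumbel_softmax(logits, tau=self.tau, hard=False)
            return (z[:, :, None, None] * comps).sum(dim=1)
        k = logits.argmax(dim=1)                               # Eq. (7) at test time
        return comps[torch.arange(B), k]

lpd = LPD1d(channels=4).eval()
y = lpd(torch.randn(2, 4, 16))   # output shape: (2, 4, 8)
```

In the 2-D case there are four polyphase components per 2x downsampling step, but the structure of the layer is identical.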
4.3 Learnable Polyphase Upsampling (LPU)

Beyond shift-invariant models, we extend the theory from downsampling to upsampling, which makes it possible to design shift-equivariant models. The main idea is to place the features obtained after downsampling back at their original spatial locations. Given a feature map $y \in \mathbb{R}^{C \times \lfloor N/2 \rfloor}$ downsampled via LPD from $x$, the upsampling layer's output $u \in \mathbb{R}^{C \times N}$ is defined as follows:

$\mathrm{Poly}(u)_k = \begin{cases} y & \text{if } k = k^\star = \arg\max_{k \in \{0,1\}} p_\theta(\mathsf{k} = k \mid x) \\ 0 & \text{otherwise} \end{cases}$.   (24)

We name this layer learnable polyphase upsampling (LPU), i.e., $\mathrm{LPU}(y, p_\theta) \triangleq u$. We now show that LPU and LPD together achieve shift-equivariance.

Claim 4. If $p_\theta$ is shift-permutation-equivariant, as defined in Eq. (11), then $\mathrm{LPU} \circ \mathrm{LPD}$ is shift-equivariant.

Proof. We prove this claim following the definitions of LPU and LPD and Lemma 1. The complete proof is deferred to Appendix Sec. A1.

End-to-end training of LPU. As in downsampling, we also incorporate stochasticity via the Gumbel-Softmax. To backpropagate gradients to $p_\theta$, we relax the hard selection into a convex combination, i.e.,

$\mathrm{Poly}(u)_k = z_k \cdot y$, $\quad z \sim p_\theta(\mathsf{k} \mid x)$.   (25)

Anti-aliasing for upsampling. While LPU provides a shift-equivariant upsampling scheme, it introduces zeros in the output, which results in high-frequency components. This is known as aliasing in a multirate system [47]. To resolve this, following the classical solution, we apply a low-pass filter scaled by the upsampling factor after each LPU.
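A matching sketch of LPU (Eq. (24)); the names are again our own. In a paired implementation, k_star would be taken from the preceding LPD layer, and a low-pass filter scaled by the upsampling factor would follow, per the anti-aliasing note above.

```python
import torch

def lpu(y: torch.Tensor, k_star: int) -> torch.Tensor:
    """Scatter y onto the k*-th polyphase grid of a length-2M output (Eq. 24)."""
    u = y.new_zeros(*y.shape[:-1], 2 * y.shape[-1])
    u[..., k_star::2] = y
    return u

# Round trip: downsample with a selection rule, then upsample (LPU after LPD).
x = torch.randn(8)
k_star = 0 if x[0::2].norm() > x[1::2].norm() else 1  # e.g., an APS-style choice
u = lpu(x[k_star::2], k_star)  # equals x on the chosen grid, zeros elsewhere
```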
5 Experiments

We conduct experiments on image classification following prior works, using the same architectures and training setups. We report both the circular-shift setting of APS [5] and the standard-shift setting of LPF [56]. We also evaluate on semantic segmentation, considering a circular-shift setting inspired by APS and the standard-shift setting of DDAC [57]. For circular-shift settings, the theory exactly matches the experiment, hence true equivariance is achieved. To our knowledge, this is the first truly shift-equivariant model reported on PASCAL VOC.

5.1 Image Classification (Circular Shift)

Experiment & implementation details. Following APS, all the evaluated pooling and anti-aliasing models use the ResNet-18 [17] architecture with circular padding on CIFAR10 [21] and ImageNet [12]. Anti-alias filters are applied after each downsampling layer following LPF [56] and DDAC [57]. We also replace downsampling layers with APS [5] and our proposed LPS layer. We provide more experimental details in Appendix Sec. A4.

Evaluation metrics. We report classification accuracy to quantify model performance on the original dataset without any shifts. To evaluate shift-invariance, following APS, we report circular consistency (C-Cons.), which computes the average percentage of predicted labels that are equal under two different circular shifts, i.e.,

$\hat{y}(\mathrm{CircShift}_{h_1,w_1}(I)) = \hat{y}(\mathrm{CircShift}_{h_2,w_2}(I))$,   (26)

where $\hat{y}(I)$ denotes the predicted label for an input image $I$, and $h_1, w_1, h_2, w_2$ are uniformly sampled from 0 to 32. We report the average over five random seeds.

CIFAR10 results. Tab. 1 shows the classification accuracy and circular consistency on CIFAR10. We report the mean and standard deviation over five runs with different random initializations of the ResNet-18 model. We observe that the proposed LPS improves classification accuracy over all baselines while achieving 100% circular consistency. In addition to attaining perfect shift consistency, we observe that combining anti-aliasing with LPS further improves performance.

ImageNet results. We conduct experiments on ImageNet with circular shifts using ResNet-18. In Tab. 2, we compare with APS's best model using a box filter (Rectangle-2), as reported by Chaman and Dokmanic [5]. While both APS and LPS achieve 100% circular consistency, our proposed LPS improves classification accuracy in all scenarios, highlighting its advantages.

5.2 Image Classification (Standard Shift)

Experiment & implementation details. To directly compare with results from LPF and DDAC, we conduct experiments on ImageNet using the ResNet-50 and ResNet-101 architectures following their setting, i.e., training with standard shift augmentation and using convolution layers with zero-padding.

Evaluation metrics. Shift consistency (S-Cons.) computes the average percentage of

$\hat{y}(\mathrm{Shift}_{h_1,w_1}(I)) = \hat{y}(\mathrm{Shift}_{h_2,w_2}(I))$,   (27)

where $h_1, w_1, h_2, w_2$ are uniformly sampled from the set $\{0, \ldots, 32\}$. To avoid padding at the boundary, following LPF [56], we shift an image and then crop its center $224 \times 224$ region. We note that, due to the change in content at the boundary, perfect shift consistency is not guaranteed.

ImageNet results. In Tab. 3, we compare to the best anti-aliasing results as reported in LPF and DDAC, and to DDAC∗, which is trained from the authors' released code using the hyperparameters specified in the repository. Note that in the standard-shift setting LPS no longer achieves true shift-invariance, due to padding at the boundaries. Despite this gap from the theory, LPS achieves improvements in both performance and shift consistency over the baselines. When compared to LPF, both the ResNet-50 and ResNet-101 architectures achieve improved classification accuracy and shift consistency. When compared to DDAC, LPS achieves comparable accuracy with higher shift consistency.

5.3 Trainable Parameters and Inference Time

While LPD is a data-driven downsampling layer, we show that the additional trainable parameters it introduces are marginal with respect to the classification architecture. Tab. 4 shows the number of trainable parameters required by the ResNet-101 models. For each method, we report the absolute number of trainable parameters, which includes both classifier and learnable pooling weights. We also include the relative number of trainable parameters, which considers only the learnable pooling weights and the percentage they represent with respect to the default ResNet-101 architecture weights. For comparison purposes, we also include the inference time required by each model, to evaluate their computational overhead. The mean and standard deviation of the inference time are computed for each method on 100 batches of size 32. Following ImageNet default settings, the image dimensions correspond to $224 \times 224 \times 3$.

Results show that our proposed LPD method introduces approximately 1% additional trainable parameters on the ResNet-101 architecture and increases the inference time by roughly 14.89 ms over the LPF anti-aliasing method (the least computationally expensive of the evaluated techniques). On the other hand, most of the overhead comes from DDAC, which increases the number of trainable parameters by approximately 4% and the inference time by approximately 55.97 ms. Overall, our comparison shows that, by equipping a classifier with LPD layers, the computational overhead is almost negligible. Despite increasing the number of trainable parameters, we empirically show that our LPD approach outperforms classifiers with significantly more parameters; please refer to Sec. A4.3 for additional experiments comparing the performance of our ResNet-101 + LPD model against the much larger ResNet-152 classifier.
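The timing protocol above is straightforward to reproduce. A minimal sketch under our own assumptions (CPU timing via perf_counter; on a GPU one would additionally call torch.cuda.synchronize() around each measurement):

```python
import time
import torch

@torch.no_grad()
def mean_inference_time_ms(model, n_batches=100, batch_size=32):
    """Mean/std inference time over n_batches random ImageNet-sized batches."""
    model.eval()
    x = torch.randn(batch_size, 3, 224, 224)
    times = []
    for _ in range(n_batches):
        t0 = time.perf_counter()
        model(x)
        times.append((time.perf_counter() - t0) * 1e3)  # milliseconds
    t = torch.tensor(times)
    return t.mean().item(), t.std().item()
```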
LPD learns sampling schemes different from APS. To further analyze LPD, we replace all the LPD layers with APS in a ResNet-101 model trained on ImageNet. We observe a critical drop in top-1 classification accuracy, from 78.8% to 0.1%, indicating that LPD did not learn a downsampling scheme equivalent to APS. We also counted how many times (across all layers) LPD selects the max-norm component. On the ImageNet validation set, LPD selects the max $\ell_2$-norm polyphase component only 20.57% of the time. These results show that LPD learned a selection rule that differs from the handcrafted APS.

Qualitative study on LPD. In Fig. 4 we show the selected activations, at the fourth layer, of a ResNet-50 model with LPD. Each column describes the first 8 channels of the four possible polyphase components $k \in \{0, \ldots, 3\}$. The component selected by LPD, denoted $k^\star$, is boxed in blue. For comparison purposes, we also box the component that maximizes the $\ell_2$-norm in red. We observe that LPD is distinct from APS, as they select different sets of polyphase components. However, we did not observe a specific pattern that explains LPD's selection rule.

5.4 Semantic Segmentation (Circular Shift)

Experiment & implementation details. We evaluate LPS's down/upsampling layers on semantic segmentation. As in DDAC [57], we evaluate on the PASCAL VOC [13] dataset. Following DDAC, we use DeepLabV3+ [6] as our baseline model. We use the ResNet-18 backbone pre-trained on ImageNet (circular shift) reported in Sec. 5.1. We experiment with using only the LPD backbone and with the full LPS, i.e., both LPD and LPU. We also evaluate the performance of APS, which corresponds to a handcrafted downsampling scheme, in combination with the default bilinear interpolation strategy of DeepLabV3+. Note that, while our LPS approach consists of both shift-equivariant down- and upsampling schemes (LPD and LPU, respectively), APS only operates on the downsampling process; thus, the latter does not guarantee a circularly shift-equivariant segmentation.

Evaluation metric. We report mean intersection over union (mIoU) to evaluate segmentation performance. To quantify circular-equivariance, we report mean Average Segmentation Circular Consistency (mASCC), which computes the average percentage of predicted (per-pixel) labels that remain the same under two different circular shifts. I.e., a shifted image is passed to the model to make a segmentation prediction; this prediction is then "unshifted" for comparison. We use five random shift pairs for each image.

Results. We report the results for PASCAL VOC in Tab. 5. Overall, we observe that LPD-only and LPS achieve results comparable to DDAC and APS in mIoU. Notably, LPS achieves 100% mASCC, matching the theory. This confirms that both the proposed LPD and LPU layers are necessary, and that they are able to learn effective down/upsampling schemes for semantic segmentation.

5.5 Semantic Segmentation (Standard Shift)

Experiment & implementation details. For the standard-shift setting, we directly follow the experimental setup of DDAC. We use DeepLabV3+ with a ResNet-101 backbone pre-trained on ImageNet, as reported in Sec. 5.2.

Evaluation metric. To quantify shift-equivariance, following DDAC, we report the mean Average Semantic Segmentation Consistency (mASSC), which is a linear-shift version of the mASCC described in Sec. 5.4, except that boundary pixels are ignored.
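The mASCC metric can be summarized by the following sketch (our own, hypothetical names): predict under two circular shifts, un-shift both predictions, and compare per pixel.

```python
import torch

@torch.no_grad()
def mascc(model, images, n_pairs=5, max_shift=32):
    """Mean per-pixel label agreement under pairs of random circular shifts."""
    scores = []
    for img in images:                                   # img: (C, H, W)
        for _ in range(n_pairs):
            (h1, w1), (h2, w2) = torch.randint(0, max_shift + 1, (2, 2)).tolist()
            def pred(h, w):
                shifted = torch.roll(img, (h, w), dims=(1, 2))[None]
                seg = model(shifted).argmax(1)           # (1, H, W) label map
                return torch.roll(seg, (-h, -w), dims=(1, 2))  # un-shift
            scores.append((pred(h1, w1) == pred(h2, w2)).float().mean())
    return torch.stack(scores).mean().item()
```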
Results. In Tab. 6, we compare the mIoU and mASSC of LPS to various baselines. We note that DDAC [57] did not release their code for mASSC; for a fair comparison, we report the performance of their released checkpoint using our implementation of mASSC, indicated by DDAC∗. We observe that LPS achieves improvements in mIoU and consistency when compared to DDAC∗. Despite the gap between theory and practice due to non-circular padding at the boundary, our experiments show that LPS remains an effective approach for improving both shift consistency and model performance.

6 Conclusion

We propose learnable polyphase sampling (LPS), a pair of shift-equivariant down/upsampling layers. LPS's design theoretically guarantees circular shift-invariance and equivariance while being end-to-end trainable. Additionally, LPS retains superior consistency under standard shifts, where the theoretical assumptions are broken at image boundaries. Finally, LPS captures a richer family of shift-invariant/equivariant functions than APS. Through extensive experiments on image classification and semantic segmentation, we demonstrate that LPS is on par with or exceeds APS, LPF, and DDAC in terms of model performance and consistency.

Acknowledgments: We thank Greg Shakhnarovich & PALS at TTI-Chicago for the thoughtful discussions and computation resources. This work is supported in part by NSF under Grants 1718221, 2008387, 2045586, 2106825, MRI 1725729, NIFA award 2020-67021-32799, and funding by PPG Industries, Inc. We thank NVIDIA for providing a GPU.
1. What is the focus and contribution of the paper on polyphase resampling?
2. What are the strengths of the proposed approach, particularly in terms of its application in classification and segmentation?
3. What are the weaknesses of the paper, especially regarding the idea of polyphase resampling and its relation to signal processing?
4. Do you have any concerns or questions about the comparison between LPU and APS in terms of shift invariance and computational complexity?
5. Are there any limitations or drawbacks of the proposed method that should be considered?
Summary Of The Paper
This paper generalizes polyphase resampling by learning the selection of polyphase components instead of basing it on signal magnitudes. It achieves better accuracy and shift consistency on classification and segmentation.

Strengths And Weaknesses
Strengths: The paper is written with professionalism in its notation and arrangement. The authors experiment on both high-level classification and low-level segmentation.
Weaknesses: My personal feeling is that the idea of polyphase resampling has been elaborated to the point of perhaps becoming too complicated, losing its original meaning in signal processing, and turning into a fancy form of 2x2 pooling. Even APS can actually be used with backpropagation; it is just another non-smooth operator added to existing non-smooth operators such as ReLU and max-pooling, so it causes no big trouble. I am therefore conservative about the motivation for parameterizing polyphase resampling. The computation added by this layer cannot be neglected, considering that the shifting doubles the workload of whatever layer follows it.

Questions
Maybe I missed something, but since APS is a special case of LPU, why is LPU more shift-invariant than APS theoretically, and on segmentation empirically? As LPU is data-dependent, could it approximate APS on some bad test cases? In the Fig. 4 caption, "This indicates LPS did not learn to reproduce APS": I can infer this from the algorithm, but is this good from a visual-meaning perspective? It is difficult to see. A running time and computational complexity comparison with APS and the other baselines should be included.

Limitations
yes
1. What is the focus and contribution of the paper on shift-invariant and equivariant convolutional neural networks?
2. What are the strengths of the proposed approach, particularly in terms of its learnability and theoretical analysis?
3. What are the weaknesses of the paper, especially regarding its limitations and comparisons with other works?
4. Do you have any concerns about the effectiveness of the proposed method in real-case scenarios?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This paper presents a new framework that enables the design of shift-invariant and equivariant convolutional neural networks. The key idea is a learnable polyphase sampling scheme that can be used in both down- and upsampling layers. Compared to existing methods, the proposed LPS is trained from data instead of being designed manually. LPS is introduced with a detailed analysis, and its implementation is shown for both down- and upsampling layers. Experiments on image classification and semantic segmentation validate its effectiveness in practice.

Strengths And Weaknesses
Pros: The main contribution of this paper is extending the manually designed adaptive polyphase sampling layers to learnable polyphase sampling layers, with theoretical proofs. In particular, the introduced conditional probability is learned directly from the data while LPS keeps the properties of APS. Beyond LPS itself, both the downsampling and upsampling layers, LPD and LPU, are presented with implementations and detailed analysis.
Cons: The main idea of this paper is to generalize the existing work APS in a learnable manner; compared to the significance of APS, the contribution is limited. It is also unclear how LPU and LPD work in real cases. For example, in the semantic segmentation experiments, LPD alone seems to work better than LPD and LPU combined, while it is not clear how LPU works and what the reason behind that is.

Questions
Q1. In general, LPS has performance similar to APS in the ResNet-18 setting. Is the result consistent for ResNet-50 or ResNet-101?
Q2. Table 4 shows that LPD alone works better than LPS, while there is no ablation showing the importance of LPD and LPU in the network as LPD/LPU layers are progressively substituted into the downsampling and upsampling layers. Would it be possible that LPD plus a few LPU layers achieves even better performance?

Limitations
The limitation of this paper is the marginal performance improvement over APS, which biases the justification of the paper's motivation. Because APS is a special case of LPS, the contribution of this paper can only be validated through theoretical analysis.