Dataset schema (one record per paper; value statistics as reported by the dataset viewer):
- title: string, 15–153 chars
- url: string, 97 chars
- authors: string, 6–328 chars
- detail_url: string, 97 chars
- tags: string, 1 class
- Bibtex: string, 54 chars
- Paper: string, 93 chars
- Reviews And Public Comment »: string, 63–65 chars
- Supplemental: string, 100 chars
- abstract: string, 310–2.42k chars
- Supplemental Errata: string, 1 class
Causal Bandits with Unknown Graph Structure
https://papers.nips.cc/paper_files/paper/2021/hash/d010396ca8abf6ead8cacc2c2f2f26c7-Abstract.html
Yangyi Lu, Amirhossein Meisami, Ambuj Tewari
https://papers.nips.cc/paper_files/paper/2021/hash/d010396ca8abf6ead8cacc2c2f2f26c7-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13524-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d010396ca8abf6ead8cacc2c2f2f26c7-Paper.pdf
https://openreview.net/forum?id=9-XhLobA4z
https://papers.nips.cc/paper_files/paper/2021/file/d010396ca8abf6ead8cacc2c2f2f26c7-Supplemental.pdf
In causal bandit problems, the action set consists of interventions on variables of a causal graph. Several researchers have recently studied such bandit problems and pointed out their practical applications. However, all existing works rely on a restrictive and impractical assumption that the learner is given full knowledge of the causal graph structure upfront. In this paper, we develop novel causal bandit algorithms without knowing the causal graph. Our algorithms work well for causal trees, causal forests and a general class of causal graphs. The regret guarantees of our algorithms greatly improve upon those of standard multi-armed bandit (MAB) algorithms under mild conditions. Lastly, we prove our mild conditions are necessary: without them one cannot do better than standard MAB algorithms.
null
Piper: Multidimensional Planner for DNN Parallelization
https://papers.nips.cc/paper_files/paper/2021/hash/d01eeca8b24321cd2fe89dd85b9beb51-Abstract.html
Jakub M. Tarnawski, Deepak Narayanan, Amar Phanishayee
https://papers.nips.cc/paper_files/paper/2021/hash/d01eeca8b24321cd2fe89dd85b9beb51-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13525-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d01eeca8b24321cd2fe89dd85b9beb51-Paper.pdf
https://openreview.net/forum?id=-U9I0f2S7W
https://papers.nips.cc/paper_files/paper/2021/file/d01eeca8b24321cd2fe89dd85b9beb51-Supplemental.pdf
The rapid increase in sizes of state-of-the-art DNN models, and consequently the increase in the compute and memory requirements of model training, has led to the development of many execution schemes such as data parallelism, pipeline model parallelism, tensor (intra-layer) model parallelism, and various memory-saving optimizations. However, no prior work has tackled the highly complex problem of optimally partitioning the DNN computation graph across many accelerators while combining all these parallelism modes and optimizations. In this work, we introduce Piper, an efficient optimization algorithm for this problem that is based on a two-level dynamic programming approach. Our two-level approach is driven by the insight that being given tensor-parallelization techniques for individual layers (e.g., Megatron-LM's splits for transformer layers) significantly reduces the search space and makes the global problem tractable, compared to considering tensor-parallel configurations for the entire DNN operator graph.
null
Causal Effect Inference for Structured Treatments
https://papers.nips.cc/paper_files/paper/2021/hash/d02e9bdc27a894e882fa0c9055c99722-Abstract.html
Jean Kaddour, Yuchen Zhu, Qi Liu, Matt J. Kusner, Ricardo Silva
https://papers.nips.cc/paper_files/paper/2021/hash/d02e9bdc27a894e882fa0c9055c99722-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13526-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d02e9bdc27a894e882fa0c9055c99722-Paper.pdf
https://openreview.net/forum?id=0v9EPJGc10
https://papers.nips.cc/paper_files/paper/2021/file/d02e9bdc27a894e882fa0c9055c99722-Supplemental.pdf
We address the estimation of conditional average treatment effects (CATEs) for structured treatments (e.g., graphs, images, texts). Given a weak condition on the effect, we propose the generalized Robinson decomposition, which (i) isolates the causal estimand (reducing regularization bias), (ii) allows one to plug in arbitrary models for learning, and (iii) possesses a quasi-oracle convergence guarantee under mild assumptions. In experiments with small-world and molecular graphs we demonstrate that our approach outperforms prior work in CATE estimation.
null
Efficient hierarchical Bayesian inference for spatio-temporal regression models in neuroimaging
https://papers.nips.cc/paper_files/paper/2021/hash/d03a857a23b5285736c4d55e0bb067c8-Abstract.html
Ali Hashemi, Yijing Gao, Chang Cai, Sanjay Ghosh, Klaus-Robert Müller, Srikantan Nagarajan, Stefan Haufe
https://papers.nips.cc/paper_files/paper/2021/hash/d03a857a23b5285736c4d55e0bb067c8-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13527-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d03a857a23b5285736c4d55e0bb067c8-Paper.pdf
https://openreview.net/forum?id=kO3l8oz8EVP
https://papers.nips.cc/paper_files/paper/2021/file/d03a857a23b5285736c4d55e0bb067c8-Supplemental.pdf
Several problems in neuroimaging and beyond require inference on the parameters of multi-task sparse hierarchical regression models. Examples include M/EEG inverse problems, neural encoding models for task-based fMRI analyses, and climate science. In these domains, both the model parameters to be inferred and the measurement noise may exhibit a complex spatio-temporal structure. Existing work either neglects the temporal structure or leads to computationally demanding inference schemes. Overcoming these limitations, we devise a novel flexible hierarchical Bayesian framework within which the spatio-temporal dynamics of model parameters and noise are modeled to have Kronecker product covariance structure. Inference in our framework is based on majorization-minimization optimization and has guaranteed convergence properties. Our highly efficient algorithms exploit the intrinsic Riemannian geometry of temporal autocovariance matrices. For stationary dynamics described by Toeplitz matrices, the theory of circulant embeddings is employed. We prove convex bounding properties and derive update rules of the resulting algorithms. On both synthetic and real neural data from M/EEG, we demonstrate that our methods lead to improved performance.
null
Topological Attention for Time Series Forecasting
https://papers.nips.cc/paper_files/paper/2021/hash/d062f3e278a1fbba2303ff5a22e8c75e-Abstract.html
Sebastian Zeng, Florian Graf, Christoph Hofer, Roland Kwitt
https://papers.nips.cc/paper_files/paper/2021/hash/d062f3e278a1fbba2303ff5a22e8c75e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13528-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d062f3e278a1fbba2303ff5a22e8c75e-Paper.pdf
https://openreview.net/forum?id=Xl1Z1L9DBIJ
https://papers.nips.cc/paper_files/paper/2021/file/d062f3e278a1fbba2303ff5a22e8c75e-Supplemental.pdf
The problem of (point) forecasting univariate time series is considered. Most approaches, ranging from traditional statistical methods to recent learning-based techniques with neural networks, directly operate on raw time series observations. As an extension, we study whether local topological properties, as captured via persistent homology, can serve as a reliable signal that provides complementary information for learning to forecast. To this end, we propose topological attention, which allows attending to local topological features within a time horizon of historical data. Our approach easily integrates into existing end-to-end trainable forecasting models, such as N-BEATS, and, in combination with the latter, exhibits state-of-the-art performance on the large-scale M4 benchmark dataset of 100,000 diverse time series from different domains. Ablation experiments, as well as a comparison to recent techniques in a setting where only a single time series is available for training, corroborate the beneficial nature of including local topological information through an attention mechanism.
null
Local Signal Adaptivity: Provable Feature Learning in Neural Networks Beyond Kernels
https://papers.nips.cc/paper_files/paper/2021/hash/d064bf1ad039ff366564f352226e7640-Abstract.html
Stefani Karp, Ezra Winston, Yuanzhi Li, Aarti Singh
https://papers.nips.cc/paper_files/paper/2021/hash/d064bf1ad039ff366564f352226e7640-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13529-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d064bf1ad039ff366564f352226e7640-Paper.pdf
https://openreview.net/forum?id=oAjn5-AgSd
https://papers.nips.cc/paper_files/paper/2021/file/d064bf1ad039ff366564f352226e7640-Supplemental.pdf
Neural networks have been shown to outperform kernel methods in practice (including neural tangent kernels). Most theoretical explanations of this performance gap focus on learning a complex hypothesis class; in some cases, it is unclear whether this hypothesis class captures realistic data. In this work, we propose a related, but alternative, explanation for this performance gap in the image classification setting, based on finding a sparse signal in the presence of noise. Specifically, we prove that, for a simple data distribution with sparse signal amidst high-variance noise, a simple convolutional neural network trained using stochastic gradient descent learns to threshold out the noise and find the signal. On the other hand, the corresponding neural tangent kernel, with a fixed set of predetermined features, is unable to adapt to the signal in this manner. We supplement our theoretical results by demonstrating this phenomenon empirically: in CIFAR-10 and MNIST images with various backgrounds, as the background noise increases in intensity, a CNN's performance stays relatively robust, whereas its corresponding neural tangent kernel sees a notable drop in performance. We therefore propose the "local signal adaptivity" (LSA) phenomenon as one explanation for the superiority of neural networks over kernel methods.
null
IA-RED$^2$: Interpretability-Aware Redundancy Reduction for Vision Transformers
https://papers.nips.cc/paper_files/paper/2021/hash/d072677d210ac4c03ba046120f0802ec-Abstract.html
Bowen Pan, Rameswar Panda, Yifan Jiang, Zhangyang Wang, Rogerio Feris, Aude Oliva
https://papers.nips.cc/paper_files/paper/2021/hash/d072677d210ac4c03ba046120f0802ec-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13530-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d072677d210ac4c03ba046120f0802ec-Paper.pdf
https://openreview.net/forum?id=7X_sBjIwtm9
https://papers.nips.cc/paper_files/paper/2021/file/d072677d210ac4c03ba046120f0802ec-Supplemental.pdf
The self-attention-based model, the transformer, has recently become the leading backbone in the field of computer vision. In spite of the impressive success achieved by transformers in a variety of vision tasks, they still suffer from heavy computation and intensive memory costs. To address this limitation, this paper presents an Interpretability-Aware REDundancy REDuction framework (IA-RED$^2$). We start by observing a large amount of redundant computation, mainly spent on uncorrelated input patches, and then introduce an interpretable module to dynamically and gracefully drop these redundant patches. This novel framework is then extended to a hierarchical structure, where uncorrelated tokens at different stages are gradually removed, resulting in a considerable shrinkage of computational cost. We include extensive experiments on both image and video tasks, where our method could deliver up to 1.4x speed-up for state-of-the-art models like DeiT and TimeSformer, by only sacrificing less than 0.7% accuracy. More importantly, contrary to other acceleration approaches, our method is inherently interpretable with substantial visual evidence, making the vision transformer closer to a more human-understandable architecture while being lighter. We demonstrate that the interpretability that naturally emerged in our framework can outperform the raw attention learned by the original visual transformer, as well as those generated by off-the-shelf interpretation methods, with both qualitative and quantitative results. Project Page: http://people.csail.mit.edu/bpan/ia-red/.
null
Symbolic Regression via Deep Reinforcement Learning Enhanced Genetic Programming Seeding
https://papers.nips.cc/paper_files/paper/2021/hash/d073bb8d0c47f317dd39de9c9f004e9d-Abstract.html
Terrell Mundhenk, Mikel Landajuela, Ruben Glatt, Claudio P Santiago, Daniel Faissol, Brenden K Petersen
https://papers.nips.cc/paper_files/paper/2021/hash/d073bb8d0c47f317dd39de9c9f004e9d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13531-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d073bb8d0c47f317dd39de9c9f004e9d-Paper.pdf
https://openreview.net/forum?id=tjwQaOI9tdy
https://papers.nips.cc/paper_files/paper/2021/file/d073bb8d0c47f317dd39de9c9f004e9d-Supplemental.pdf
Symbolic regression is the process of identifying mathematical expressions that fit observed output from a black-box process. It is a discrete optimization problem generally believed to be NP-hard. Prior approaches to solving the problem include neural-guided search (e.g. using reinforcement learning) and genetic programming. In this work, we introduce a hybrid neural-guided/genetic programming approach to symbolic regression and other combinatorial optimization problems. We propose a neural-guided component used to seed the starting population of a random restart genetic programming component, gradually learning better starting populations. On a number of common benchmark tasks to recover underlying expressions from a dataset, our method recovers 65% more expressions than a recently published top-performing model using the same experimental setup. We demonstrate that running many genetic programming generations without interdependence on the neural-guided component performs better for symbolic regression than alternative formulations where the two are more strongly coupled. Finally, we introduce a new set of 22 symbolic regression benchmark problems with increased difficulty over existing benchmarks. Source code is provided at www.github.com/brendenpetersen/deep-symbolic-optimization.
null
Choose a Transformer: Fourier or Galerkin
https://papers.nips.cc/paper_files/paper/2021/hash/d0921d442ee91b896ad95059d13df618-Abstract.html
Shuhao Cao
https://papers.nips.cc/paper_files/paper/2021/hash/d0921d442ee91b896ad95059d13df618-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13532-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d0921d442ee91b896ad95059d13df618-Paper.pdf
https://openreview.net/forum?id=ssohLcmn4-r
https://papers.nips.cc/paper_files/paper/2021/file/d0921d442ee91b896ad95059d13df618-Supplemental.pdf
In this paper, we apply the self-attention from the state-of-the-art Transformer in Attention Is All You Need for the first time to a data-driven operator learning problem related to partial differential equations. An effort is made to explain the heuristics of, and to improve the efficacy of, the attention mechanism. By employing the operator approximation theory in Hilbert spaces, it is demonstrated for the first time that the softmax normalization in the scaled dot-product attention is sufficient but not necessary. Without softmax, the approximation capacity of a linearized Transformer variant can be proved to be comparable to a Petrov-Galerkin projection layer-wise, and the estimate is independent of the sequence length. A new layer normalization scheme mimicking the Petrov-Galerkin projection is proposed to allow a scaling to propagate through attention layers, which helps the model achieve remarkable accuracy in operator learning tasks with unnormalized data. Finally, we present three operator learning experiments, including the viscid Burgers' equation, an interface Darcy flow, and an inverse interface coefficient identification problem. The newly proposed simple attention-based operator learner, Galerkin Transformer, shows significant improvements in both training cost and evaluation accuracy over its softmax-normalized counterparts.
null
A Causal Lens for Controllable Text Generation
https://papers.nips.cc/paper_files/paper/2021/hash/d0f5edad9ac19abed9e235c0fe0aa59f-Abstract.html
Zhiting Hu, Li Erran Li
https://papers.nips.cc/paper_files/paper/2021/hash/d0f5edad9ac19abed9e235c0fe0aa59f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13533-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d0f5edad9ac19abed9e235c0fe0aa59f-Paper.pdf
https://openreview.net/forum?id=kAm9By0R5ME
https://papers.nips.cc/paper_files/paper/2021/file/d0f5edad9ac19abed9e235c0fe0aa59f-Supplemental.zip
Controllable text generation concerns two fundamental tasks of wide applications, namely generating text of given attributes (i.e., attribute-conditional generation), and minimally editing existing text to possess desired attributes (i.e., text attribute transfer). Extensive prior work has largely studied the two problems separately, and developed different conditional models which, however, are prone to producing biased text (e.g., various gender stereotypes). This paper proposes to formulate controllable text generation from a principled causal perspective which models the two tasks with a unified framework. A direct advantage of the causal formulation is the use of rich causality tools to mitigate generation biases and improve control. We treat the two tasks as interventional and counterfactual causal inference based on a structural causal model, respectively. We then apply the framework to the challenging practical setting where confounding factors (that induce spurious correlations) are observable only on a small fraction of data. Experiments show significant superiority of the causal approach over previous conditional models for improved control accuracy and reduced bias.
null
Differentially Private Multi-Armed Bandits in the Shuffle Model
https://papers.nips.cc/paper_files/paper/2021/hash/d14388bb836687ff2b16b7bee6bab182-Abstract.html
Jay Tenenbaum, Haim Kaplan, Yishay Mansour, Uri Stemmer
https://papers.nips.cc/paper_files/paper/2021/hash/d14388bb836687ff2b16b7bee6bab182-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13534-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d14388bb836687ff2b16b7bee6bab182-Paper.pdf
https://openreview.net/forum?id=P0AeY-efPEx
https://papers.nips.cc/paper_files/paper/2021/file/d14388bb836687ff2b16b7bee6bab182-Supplemental.pdf
We give an $(\varepsilon,\delta)$-differentially private algorithm for the Multi-Armed Bandit (MAB) problem in the shuffle model with a distribution-dependent regret of $O\left(\left(\sum_{a:\Delta_a>0}\frac{\log T}{\Delta_a}\right)+\frac{k\sqrt{\log\frac{1}{\delta}}\log T}{\varepsilon}\right)$, and a distribution-independent regret of $O\left(\sqrt{kT\log T}+\frac{k\sqrt{\log\frac{1}{\delta}}\log T}{\varepsilon}\right)$, where $T$ is the number of rounds, $\Delta_a$ is the suboptimality gap of the action $a$, and $k$ is the total number of actions. Our upper bound almost matches the regret of the best known algorithms for the centralized model, and significantly outperforms the best known algorithm in the local model.
null
Dual Adaptivity: A Universal Algorithm for Minimizing the Adaptive Regret of Convex Functions
https://papers.nips.cc/paper_files/paper/2021/hash/d1588e685562af341ff2448de4b674d1-Abstract.html
Lijun Zhang, Guanghui Wang, Wei-Wei Tu, Wei Jiang, Zhi-Hua Zhou
https://papers.nips.cc/paper_files/paper/2021/hash/d1588e685562af341ff2448de4b674d1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13535-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d1588e685562af341ff2448de4b674d1-Paper.pdf
https://openreview.net/forum?id=nnQpieSBwJ
https://papers.nips.cc/paper_files/paper/2021/file/d1588e685562af341ff2448de4b674d1-Supplemental.pdf
To deal with changing environments, a new performance measure, adaptive regret, defined as the maximum static regret over any interval, was proposed in online learning. Under the setting of online convex optimization, several algorithms have been successfully developed to minimize the adaptive regret. However, existing algorithms lack universality in the sense that they can only handle one type of convex functions and need a priori knowledge of parameters. By contrast, there exist universal algorithms, such as MetaGrad, that attain optimal static regret for multiple types of convex functions simultaneously. Along this line of research, this paper presents the first universal algorithm for minimizing the adaptive regret of convex functions. Specifically, we borrow the idea of maintaining multiple learning rates in MetaGrad to handle the uncertainty of functions, and utilize the technique of sleeping experts to capture changing environments. In this way, our algorithm automatically adapts to the property of functions (convex, exponentially concave, or strongly convex), as well as the nature of environments (stationary or changing). As a by-product, it also allows the type of functions to switch between rounds.
null
Learning Hard Optimization Problems: A Data Generation Perspective
https://papers.nips.cc/paper_files/paper/2021/hash/d1942a3ab01eb59220e2b3a46e7ef09d-Abstract.html
James Kotary, Ferdinando Fioretto, Pascal Van Hentenryck
https://papers.nips.cc/paper_files/paper/2021/hash/d1942a3ab01eb59220e2b3a46e7ef09d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13536-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d1942a3ab01eb59220e2b3a46e7ef09d-Paper.pdf
https://openreview.net/forum?id=2zO2lb7ykMD
https://papers.nips.cc/paper_files/paper/2021/file/d1942a3ab01eb59220e2b3a46e7ef09d-Supplemental.pdf
Optimization problems are ubiquitous in our societies and are present in almost every segment of the economy. Most of these optimization problems are NP-hard and computationally demanding, often requiring approximate solutions for large-scale instances. Machine learning frameworks that learn to approximate solutions to such hard optimization problems are a potentially promising avenue to address these difficulties, particularly when many closely related problem instances must be solved repeatedly. Supervised learning frameworks can train a model using the outputs of pre-solved instances. However, when the outputs are themselves approximations, when the optimization problem has symmetric solutions, and/or when the solver uses randomization, solutions to closely related instances may exhibit large differences and the learning task can become inherently more difficult. This paper demonstrates this critical challenge, connects the volatility of the training data to the ability of a model to approximate it, and proposes a method for producing (exact or approximate) solutions to optimization problems that are more amenable to supervised learning tasks. The effectiveness of the method is tested on hard non-linear nonconvex and discrete combinatorial problems.
null
Canonical Capsules: Self-Supervised Capsules in Canonical Pose
https://papers.nips.cc/paper_files/paper/2021/hash/d1ee59e20ad01cedc15f5118a7626099-Abstract.html
Weiwei Sun, Andrea Tagliasacchi, Boyang Deng, Sara Sabour, Soroosh Yazdani, Geoffrey E. Hinton, Kwang Moo Yi
https://papers.nips.cc/paper_files/paper/2021/hash/d1ee59e20ad01cedc15f5118a7626099-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13537-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d1ee59e20ad01cedc15f5118a7626099-Paper.pdf
https://openreview.net/forum?id=jnkE5c5f9m
https://papers.nips.cc/paper_files/paper/2021/file/d1ee59e20ad01cedc15f5118a7626099-Supplemental.zip
We propose a self-supervised capsule architecture for 3D point clouds. We compute capsule decompositions of objects through permutation-equivariant attention, and self-supervise the process by training with pairs of randomly rotated objects. Our key idea is to aggregate the attention masks into semantic keypoints, and use these to supervise a decomposition that satisfies the capsule invariance/equivariance properties. This not only enables the training of a semantically consistent decomposition, but also allows us to learn a canonicalization operation that enables object-centric reasoning. To train our neural network we require neither classification labels nor manually-aligned training datasets. Yet, by learning an object-centric representation in a self-supervised manner, our method outperforms the state-of-the-art on 3D point cloud reconstruction, canonicalization, and unsupervised classification.
null
Characterizing Generalization under Out-Of-Distribution Shifts in Deep Metric Learning
https://papers.nips.cc/paper_files/paper/2021/hash/d1f255a373a3cef72e03aa9d980c7eca-Abstract.html
Timo Milbich, Karsten Roth, Samarth Sinha, Ludwig Schmidt, Marzyeh Ghassemi, Bjorn Ommer
https://papers.nips.cc/paper_files/paper/2021/hash/d1f255a373a3cef72e03aa9d980c7eca-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13538-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d1f255a373a3cef72e03aa9d980c7eca-Paper.pdf
https://openreview.net/forum?id=_KqWSCu566
https://papers.nips.cc/paper_files/paper/2021/file/d1f255a373a3cef72e03aa9d980c7eca-Supplemental.pdf
Deep Metric Learning (DML) aims to find representations suitable for zero-shot transfer to a priori unknown test distributions. However, common evaluation protocols only test a single, fixed data split in which train and test classes are assigned randomly. More realistic evaluations should consider a broad spectrum of distribution shifts with potentially varying degree and difficulty. In this work, we systematically construct train-test splits of increasing difficulty and present the ooDML benchmark to characterize generalization under out-of-distribution shifts in DML. ooDML is designed to probe the generalization performance on much more challenging, diverse train-to-test distribution shifts. Based on our new benchmark, we conduct a thorough empirical analysis of state-of-the-art DML methods. We find that while generalization tends to consistently degrade with difficulty, some methods are better at retaining performance as the distribution shift increases. Finally, we propose few-shot DML as an efficient way to consistently improve generalization in response to unknown test shifts presented in ooDML.
null
Dynamics-regulated kinematic policy for egocentric pose estimation
https://papers.nips.cc/paper_files/paper/2021/hash/d1fe173d08e959397adf34b1d77e88d7-Abstract.html
Zhengyi Luo, Ryo Hachiuma, Ye Yuan, Kris Kitani
https://papers.nips.cc/paper_files/paper/2021/hash/d1fe173d08e959397adf34b1d77e88d7-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13539-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d1fe173d08e959397adf34b1d77e88d7-Paper.pdf
https://openreview.net/forum?id=I1GHll1Z7E
https://papers.nips.cc/paper_files/paper/2021/file/d1fe173d08e959397adf34b1d77e88d7-Supplemental.zip
We propose a method for object-aware 3D egocentric pose estimation that tightly integrates kinematics modeling, dynamics modeling, and scene object information. Unlike prior kinematics or dynamics-based approaches where the two components are used disjointly, we synergize the two approaches via dynamics-regulated training. At each timestep, a kinematic model is used to provide a target pose using video evidence and simulation state. Then, a prelearned dynamics model attempts to mimic the kinematic pose in a physics simulator. By comparing the pose instructed by the kinematic model against the pose generated by the dynamics model, we can use their misalignment to further improve the kinematic model. By factoring in the 6DoF pose of objects (e.g., chairs, boxes) in the scene, we demonstrate for the first time, the ability to estimate physically-plausible 3D human-object interactions using a single wearable camera. We evaluate our egocentric pose estimation method in both controlled laboratory settings and real-world scenarios.
null
Never Go Full Batch (in Stochastic Convex Optimization)
https://papers.nips.cc/paper_files/paper/2021/hash/d27b95cac4c27feb850aaa4070cc4675-Abstract.html
Idan Amir, Yair Carmon, Tomer Koren, Roi Livni
https://papers.nips.cc/paper_files/paper/2021/hash/d27b95cac4c27feb850aaa4070cc4675-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13540-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d27b95cac4c27feb850aaa4070cc4675-Paper.pdf
https://openreview.net/forum?id=4VAp_PL9yKs
null
We study the generalization performance of $\text{\emph{full-batch}}$ optimization algorithms for stochastic convex optimization: these are first-order methods that only access the exact gradient of the empirical risk (rather than gradients with respect to individual data points), and include a wide range of algorithms such as gradient descent, mirror descent, and their regularized and/or accelerated variants. We provide a new separation result showing that, while algorithms such as stochastic gradient descent can generalize and optimize the population risk to within $\epsilon$ after $O(1/\epsilon^2)$ iterations, full-batch methods either need at least $\Omega(1/\epsilon^4)$ iterations or exhibit a dimension-dependent sample complexity.
null
Collaborative Learning in the Jungle (Decentralized, Byzantine, Heterogeneous, Asynchronous and Nonconvex Learning)
https://papers.nips.cc/paper_files/paper/2021/hash/d2cd33e9c0236a8c2d8bd3fa91ad3acf-Abstract.html
El Mahdi El-Mhamdi, Sadegh Farhadkhani, Rachid Guerraoui, Arsany Guirguis, Lê-Nguyên Hoang, Sébastien Rouault
https://papers.nips.cc/paper_files/paper/2021/hash/d2cd33e9c0236a8c2d8bd3fa91ad3acf-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13541-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d2cd33e9c0236a8c2d8bd3fa91ad3acf-Paper.pdf
https://openreview.net/forum?id=O8wI1avs4WF
https://papers.nips.cc/paper_files/paper/2021/file/d2cd33e9c0236a8c2d8bd3fa91ad3acf-Supplemental.pdf
We study \emph{Byzantine collaborative learning}, where $n$ nodes seek to collectively learn from each others' local data. The data distribution may vary from one node to another. No node is trusted, and $f < n$ nodes can behave arbitrarily. We prove that collaborative learning is equivalent to a new form of agreement, which we call \emph{averaging agreement}. In this problem, nodes each start with an initial vector and seek to approximately agree on a common vector, which is close to the average of honest nodes' initial vectors. We present two asynchronous solutions to averaging agreement, each of which we prove optimal along some dimension. The first, based on minimum-diameter averaging, requires $n \geq 6f+1$, but achieves asymptotically the best-possible averaging constant up to a multiplicative constant. The second, based on reliable broadcast and coordinate-wise trimmed mean, achieves optimal Byzantine resilience, i.e., $n \geq 3f+1$. Each of these algorithms induces an optimal Byzantine collaborative learning protocol. In particular, our equivalence yields new impossibility theorems on what any collaborative learning algorithm can achieve in adversarial and heterogeneous environments.
null
Not All Low-Pass Filters are Robust in Graph Convolutional Networks
https://papers.nips.cc/paper_files/paper/2021/hash/d30960ce77e83d896503d43ba249caf7-Abstract.html
Heng Chang, Yu Rong, Tingyang Xu, Yatao Bian, Shiji Zhou, Xin Wang, Junzhou Huang, Wenwu Zhu
https://papers.nips.cc/paper_files/paper/2021/hash/d30960ce77e83d896503d43ba249caf7-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13542-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d30960ce77e83d896503d43ba249caf7-Paper.pdf
https://openreview.net/forum?id=bDdfxLQITtu
https://papers.nips.cc/paper_files/paper/2021/file/d30960ce77e83d896503d43ba249caf7-Supplemental.pdf
Graph Convolutional Networks (GCNs) are promising deep learning approaches in learning representations for graph-structured data. Despite the proliferation of such methods, it is well known that they are vulnerable to carefully crafted adversarial attacks on the graph structure. In this paper, we first conduct an adversarial vulnerability analysis based on matrix perturbation theory. We prove that the low-frequency components of the symmetric normalized Laplacian, which is usually used as the convolutional filter in GCNs, could be more robust against structural perturbations when their eigenvalues fall into a certain robust interval. Our results indicate that not all low-frequency components are robust to adversarial attacks and provide a deeper understanding of the relationship between graph spectrum and robustness of GCNs. Motivated by the theory, we present GCN-LFR, a general robust co-training paradigm for GCN-based models that encourages transferring the robustness of low-frequency components with an auxiliary neural network. To this end, GCN-LFR could enhance the robustness of various kinds of GCN-based models against poisoning structural attacks in a plug-and-play manner. Extensive experiments across five benchmark datasets and five GCN-based models also confirm that GCN-LFR is resistant to adversarial attacks without compromising performance in the benign setting.
null
Counterfactual Maximum Likelihood Estimation for Training Deep Networks
https://papers.nips.cc/paper_files/paper/2021/hash/d30d0f522a86b3665d8e3a9a91472e28-Abstract.html
Xinyi Wang, Wenhu Chen, Michael Saxon, William Yang Wang
https://papers.nips.cc/paper_files/paper/2021/hash/d30d0f522a86b3665d8e3a9a91472e28-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13543-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d30d0f522a86b3665d8e3a9a91472e28-Paper.pdf
https://openreview.net/forum?id=o6s1b_-nDOE
https://papers.nips.cc/paper_files/paper/2021/file/d30d0f522a86b3665d8e3a9a91472e28-Supplemental.pdf
Although deep learning models have driven state-of-the-art performance on a wide array of tasks, they are prone to spurious correlations that should not be learned as predictive clues. To mitigate this problem, we propose a causality-based training framework to reduce the spurious correlations caused by observed confounders. We give theoretical analysis on the underlying general Structural Causal Model (SCM) and propose to perform Maximum Likelihood Estimation (MLE) on the interventional distribution instead of the observational distribution, namely Counterfactual Maximum Likelihood Estimation (CMLE). As the interventional distribution, in general, is hidden from the observational data, we then derive two different upper bounds of the expected negative log-likelihood and propose two general algorithms, Implicit CMLE and Explicit CMLE, for causal predictions of deep learning models using observational data. We conduct experiments on both simulated data and two real-world tasks: Natural Language Inference (NLI) and Image Captioning. The results show that CMLE methods outperform the regular MLE method in terms of out-of-domain generalization performance and reducing spurious correlations, while maintaining comparable performance on the regular evaluations.
null
Robust Optimization for Multilingual Translation with Imbalanced Data
https://papers.nips.cc/paper_files/paper/2021/hash/d324a0cc02881779dcda44a675fdcaaa-Abstract.html
Xian Li, Hongyu Gong
https://papers.nips.cc/paper_files/paper/2021/hash/d324a0cc02881779dcda44a675fdcaaa-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13544-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d324a0cc02881779dcda44a675fdcaaa-Paper.pdf
https://openreview.net/forum?id=Rv3vp-JDUSJ
https://papers.nips.cc/paper_files/paper/2021/file/d324a0cc02881779dcda44a675fdcaaa-Supplemental.pdf
Multilingual models are parameter-efficient and especially effective in improving low-resource languages by leveraging crosslingual transfer. Despite recent advances in massive multilingual translation with ever-growing models and data, how to effectively train multilingual models has not been well understood. In this paper, we show that a common situation in multilingual training, data imbalance among languages, creates optimization tension between high-resource and low-resource languages, where the multilingual solution found is often sub-optimal for low-resource languages. We show that the common training practice of upsampling low-resource languages cannot robustly optimize population loss, risking either underfitting high-resource languages or overfitting low-resource ones. Drawing on recent findings on the geometry of the loss landscape and its effect on generalization, we propose a principled optimization algorithm, Curvature Aware Task Scaling (CATS), which adaptively rescales gradients from different tasks with a meta objective of guiding multilingual training to low-curvature neighborhoods with uniformly low loss for all languages. We ran experiments on common benchmarks (TED, WMT and OPUS-100) with varying degrees of data imbalance. CATS effectively improved multilingual optimization and as a result demonstrated consistent gains on low-resource languages ($+0.8$ to $+2.2$ BLEU) without hurting high-resource ones. In addition, CATS is robust to overparameterization and large batch size training, making it a promising training method for massive multilingual models that truly improve low-resource languages.
null
A/B/n Testing with Control in the Presence of Subpopulations
https://papers.nips.cc/paper_files/paper/2021/hash/d35a29602005cb55aa57a5f683c8e0c2-Abstract.html
Yoan Russac, Christina Katsimerou, Dennis Bohle, Olivier Cappé, Aurélien Garivier, Wouter M. Koolen
https://papers.nips.cc/paper_files/paper/2021/hash/d35a29602005cb55aa57a5f683c8e0c2-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13545-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d35a29602005cb55aa57a5f683c8e0c2-Paper.pdf
https://openreview.net/forum?id=DDoDN0BLLhb
https://papers.nips.cc/paper_files/paper/2021/file/d35a29602005cb55aa57a5f683c8e0c2-Supplemental.pdf
Motivated by A/B/n testing applications, we consider a finite set of distributions (called \emph{arms}), one of which is treated as a \emph{control}. We assume that the population is stratified into homogeneous subpopulations. At every time step, a subpopulation is sampled and an arm is chosen: the resulting observation is an independent draw from the arm conditioned on the subpopulation. The quality of each arm is assessed through a weighted combination of its subpopulation means. We propose a strategy for sequentially choosing one arm per time step so as to discover as fast as possible which arms, if any, have higher weighted expectation than the control. This strategy is shown to be asymptotically optimal in the following sense: if $\tau_\delta$ is the first time when the strategy ensures that it is able to output the correct answer with probability at least $1-\delta$, then $\mathbb{E}[\tau_\delta]$ grows linearly with $\log(1/\delta)$ at the exact optimal rate. This rate is identified in the paper in three different settings: (1) when the experimenter does not observe the subpopulation information, (2) when the subpopulation of each sample is observed but not chosen, and (3) when the experimenter can select the subpopulation from which each response is sampled. We illustrate the efficiency of the proposed strategy with numerical simulations on synthetic and real data collected from an A/B/n experiment.
null
Using Random Effects to Account for High-Cardinality Categorical Features and Repeated Measures in Deep Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/d35b05a832e2bb91f110d54e34e2da79-Abstract.html
Giora Simchoni, Saharon Rosset
https://papers.nips.cc/paper_files/paper/2021/hash/d35b05a832e2bb91f110d54e34e2da79-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13546-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d35b05a832e2bb91f110d54e34e2da79-Paper.pdf
https://openreview.net/forum?id=8hLuUnBwfM2
https://papers.nips.cc/paper_files/paper/2021/file/d35b05a832e2bb91f110d54e34e2da79-Supplemental.pdf
High-cardinality categorical features are a major challenge for machine learning methods in general and for deep learning in particular. Existing solutions such as one-hot encoding and entity embeddings can be hard to scale when the cardinality is very high, require much space, are hard to interpret or may overfit the data. A special scenario of interest is that of repeated measures, where the categorical feature is the identity of the individual or object, and each object is measured several times, possibly under different conditions (values of the other features). We propose accounting for high-cardinality categorical features as random effects variables in a regression setting, and consequently adopt the corresponding negative log likelihood loss from the linear mixed models (LMM) statistical literature and integrate it in a deep learning framework. We test our model, which we call LMMNN, on simulated as well as real datasets with a single categorical feature with high cardinality, using various baseline neural network architectures such as convolutional networks and LSTM, and various applications in e-commerce, healthcare and computer vision. Our results show that treating high-cardinality categorical features as random effects leads to a significant improvement in prediction performance compared to state-of-the-art alternatives. Potential extensions such as accounting for multiple categorical features and classification settings are discussed. Our code and simulations are available at https://github.com/gsimchoni/lmmnn.
null
Learning Debiased Representation via Disentangled Feature Augmentation
https://papers.nips.cc/paper_files/paper/2021/hash/d360a502598a4b64b936683b44a5523a-Abstract.html
Jungsoo Lee, Eungyeup Kim, Juyoung Lee, Jihyeon Lee, Jaegul Choo
https://papers.nips.cc/paper_files/paper/2021/hash/d360a502598a4b64b936683b44a5523a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13547-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d360a502598a4b64b936683b44a5523a-Paper.pdf
https://openreview.net/forum?id=-oUhJJILWHb
https://papers.nips.cc/paper_files/paper/2021/file/d360a502598a4b64b936683b44a5523a-Supplemental.pdf
Image classification models tend to make decisions based on peripheral attributes of data items that have strong correlation with a target variable (i.e., dataset bias). These biased models suffer from poor generalization when evaluated on unbiased datasets. Existing approaches for debiasing often identify and emphasize those samples with no such correlation (i.e., bias-conflicting) without defining the bias type in advance. However, such bias-conflicting samples are significantly scarce in biased datasets, limiting the debiasing capability of these approaches. This paper first presents an empirical analysis revealing that training with "diverse" bias-conflicting samples beyond a given training set is crucial for debiasing as well as for generalization. Based on this observation, we propose a novel feature-level data augmentation technique in order to synthesize diverse bias-conflicting samples. To this end, our method learns the disentangled representation of (1) the intrinsic attributes (i.e., those inherently defining a certain class) and (2) bias attributes (i.e., peripheral attributes causing the bias), from a large number of bias-aligned samples, the bias attributes of which have strong correlation with the target variable. Using the disentangled representation, we synthesize bias-conflicting samples that contain the diverse intrinsic attributes of bias-aligned samples by swapping their latent features. By utilizing these diversified bias-conflicting features during the training, our approach achieves superior classification accuracy and debiasing results against the existing baselines on both synthetic and real-world datasets.
null
Scallop: From Probabilistic Deductive Databases to Scalable Differentiable Reasoning
https://papers.nips.cc/paper_files/paper/2021/hash/d367eef13f90793bd8121e2f675f0dc2-Abstract.html
Jiani Huang, Ziyang Li, Binghong Chen, Karan Samel, Mayur Naik, Le Song, Xujie Si
https://papers.nips.cc/paper_files/paper/2021/hash/d367eef13f90793bd8121e2f675f0dc2-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13548-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d367eef13f90793bd8121e2f675f0dc2-Paper.pdf
https://openreview.net/forum?id=ngdcA1tlDvj
https://papers.nips.cc/paper_files/paper/2021/file/d367eef13f90793bd8121e2f675f0dc2-Supplemental.zip
Deep learning and symbolic reasoning are complementary techniques for an intelligent system. However, principled combinations of these techniques have limited scalability, rendering them ill-suited for real-world applications. We propose Scallop, a system that builds upon probabilistic deductive databases, to bridge this gap. The key insight underlying Scallop is a provenance framework that introduces a tunable parameter to specify the level of reasoning granularity. Scallop thereby i) generalizes exact probabilistic reasoning, ii) asymptotically reduces computational cost, and iii) provides relative accuracy guarantees. On a suite of tasks that involve mathematical and logical reasoning, Scallop scales significantly better than DeepProbLog, a principled neural logic programming approach, without sacrificing accuracy. We also create and evaluate on a real-world Visual Question Answering (VQA) benchmark that requires multi-hop reasoning. Scallop outperforms two VQA-tailored models, a Neural Module Networks-based model and a transformer-based model, by 12.42% and 21.66%, respectively.
null
Learning to Synthesize Programs as Interpretable and Generalizable Policies
https://papers.nips.cc/paper_files/paper/2021/hash/d37124c4c79f357cb02c655671a432fa-Abstract.html
Dweep Trivedi, Jesse Zhang, Shao-Hua Sun, Joseph J. Lim
https://papers.nips.cc/paper_files/paper/2021/hash/d37124c4c79f357cb02c655671a432fa-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13549-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d37124c4c79f357cb02c655671a432fa-Paper.pdf
https://openreview.net/forum?id=wP9twkexC3V
https://papers.nips.cc/paper_files/paper/2021/file/d37124c4c79f357cb02c655671a432fa-Supplemental.pdf
Recently, deep reinforcement learning (DRL) methods have achieved impressive performance on tasks in a variety of domains. However, neural network policies produced with DRL methods are not human-interpretable and often have difficulty generalizing to novel scenarios. To address these issues, prior works explore learning programmatic policies that are more interpretable and structured for generalization. Yet, these works either employ limited policy representations (e.g. decision trees, state machines, or predefined program templates) or require stronger supervision (e.g. input/output state pairs or expert demonstrations). We present a framework that instead learns to synthesize a program, which details the procedure to solve a task in a flexible and expressive manner, solely from reward signals. To alleviate the difficulty of learning to compose programs to induce the desired agent behavior from scratch, we propose to first learn a program embedding space that continuously parameterizes diverse behaviors in an unsupervised manner and then search over the learned program embedding space to yield a program that maximizes the return for a given task. Experimental results demonstrate that the proposed framework not only learns to reliably synthesize task-solving programs but also outperforms DRL and program synthesis baselines while producing interpretable and more generalizable policies. We also justify the necessity of the proposed two-stage learning scheme as well as analyze various methods for learning the program embedding. Website at https://clvrai.com/leaps.
null
The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning
https://papers.nips.cc/paper_files/paper/2021/hash/d384dec9f5f7a64a36b5c8f03b8a6d92-Abstract.html
Shahab Bakhtiari, Patrick Mineault, Timothy Lillicrap, Christopher Pack, Blake Richards
https://papers.nips.cc/paper_files/paper/2021/hash/d384dec9f5f7a64a36b5c8f03b8a6d92-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13550-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d384dec9f5f7a64a36b5c8f03b8a6d92-Paper.pdf
https://openreview.net/forum?id=t1czgrQOrwW
https://papers.nips.cc/paper_files/paper/2021/file/d384dec9f5f7a64a36b5c8f03b8a6d92-Supplemental.pdf
The visual system of mammals comprises parallel, hierarchical, specialized pathways. Different pathways are specialized insofar as they use representations that are more suitable for supporting specific downstream behaviours. In particular, the clearest example is the specialization of the ventral ("what") and dorsal ("where") pathways of the visual cortex. These two pathways support behaviours related to visual recognition and movement, respectively. To date, deep neural networks have mostly been used as models of the ventral, recognition pathway. However, it is unknown whether both pathways can be modelled with a single deep ANN. Here, we ask whether a single model with a single loss function can capture the properties of both the ventral and the dorsal pathways. We explore this question using data from mice, which, like other mammals, have specialized pathways that appear to support recognition and movement behaviours. We show that when we train a deep neural network architecture with two parallel pathways using a self-supervised predictive loss function, we can outperform other models in fitting mouse visual cortex. Moreover, we can model both the dorsal and ventral pathways. These results demonstrate that a self-supervised predictive learning approach applied to parallel pathway architectures can account for some of the functional specialization seen in mammalian visual systems.
null
Adversarial Training Helps Transfer Learning via Better Representations
https://papers.nips.cc/paper_files/paper/2021/hash/d3aeec875c479e55d1cdeea161842ec6-Abstract.html
Zhun Deng, Linjun Zhang, Kailas Vodrahalli, Kenji Kawaguchi, James Y. Zou
https://papers.nips.cc/paper_files/paper/2021/hash/d3aeec875c479e55d1cdeea161842ec6-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13551-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d3aeec875c479e55d1cdeea161842ec6-Paper.pdf
https://openreview.net/forum?id=f8Dqhg0w-7i
https://papers.nips.cc/paper_files/paper/2021/file/d3aeec875c479e55d1cdeea161842ec6-Supplemental.pdf
Transfer learning aims to leverage models pre-trained on source data to efficiently adapt to a target setting, where only limited data are available for model fine-tuning. Recent works empirically demonstrate that adversarial training in the source data can improve the ability of models to transfer to new domains. However, why this happens is not known. In this paper, we provide a theoretical model to rigorously analyze how adversarial training helps transfer learning. We show that adversarial training in the source data generates provably better representations, so fine-tuning on top of this representation leads to a more accurate predictor of the target data. We further demonstrate both theoretically and empirically that semi-supervised learning in the source data can also improve transfer learning by similarly improving the representation. Moreover, performing adversarial training on top of semi-supervised learning can further improve transferability, suggesting that the two approaches have complementary benefits on representations. We support our theories with experiments on popular data sets and deep learning architectures.
null
Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning
https://papers.nips.cc/paper_files/paper/2021/hash/d3e2e8f631bd9336ed25b8162aef8782-Abstract.html
Maxwell Nye, Michael Tessler, Josh Tenenbaum, Brenden M. Lake
https://papers.nips.cc/paper_files/paper/2021/hash/d3e2e8f631bd9336ed25b8162aef8782-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13552-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d3e2e8f631bd9336ed25b8162aef8782-Paper.pdf
https://openreview.net/forum?id=P7GUAXxS3ym
null
Human reasoning can be understood as an interplay between two systems: the intuitive and associative ("System 1") and the deliberative and logical ("System 2"). Neural sequence models---which have been increasingly successful at performing complex, structured tasks---exhibit the advantages and failure modes of System 1: they are fast and learn patterns from data, but are often inconsistent and incoherent. In this work, we seek a lightweight, training-free means of improving existing System 1-like sequence models by adding System 2-inspired logical reasoning. We explore several variations on this theme in which candidate generations from a neural sequence model are examined for logical consistency by a symbolic reasoning module, which can either accept or reject the generations. Our approach uses neural inference to mediate between the neural System 1 and the logical System 2. Results in robust story generation and grounded instruction-following show that this approach can increase the coherence and accuracy of neurally-based generations.
null
Learning the optimal Tikhonov regularizer for inverse problems
https://papers.nips.cc/paper_files/paper/2021/hash/d3e6cd9f66f2c1d3840ade4161cf7406-Abstract.html
Giovanni S. Alberti, Ernesto De Vito, Matti Lassas, Luca Ratti, Matteo Santacesaria
https://papers.nips.cc/paper_files/paper/2021/hash/d3e6cd9f66f2c1d3840ade4161cf7406-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13553-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d3e6cd9f66f2c1d3840ade4161cf7406-Paper.pdf
https://openreview.net/forum?id=zMZPDwm3H3
https://papers.nips.cc/paper_files/paper/2021/file/d3e6cd9f66f2c1d3840ade4161cf7406-Supplemental.pdf
In this work, we consider the linear inverse problem $y=Ax+\varepsilon$, where $A\colon X\to Y$ is a known linear operator between the separable Hilbert spaces $X$ and $Y$, $x$ is a random variable in $X$ and $\varepsilon$ is a zero-mean random process in $Y$. This setting covers several inverse problems in imaging including denoising, deblurring, and X-ray tomography. Within the classical framework of regularization, we focus on the case where the regularization functional is not given a priori, but learned from data. Our first result is a characterization of the optimal generalized Tikhonov regularizer, with respect to the mean squared error. We find that it is completely independent of the forward operator $A$ and depends only on the mean and covariance of $x$. Then, we consider the problem of learning the regularizer from a finite training set in two different frameworks: one supervised, based on samples of both $x$ and $y$, and one unsupervised, based only on samples of $x$. In both cases, we prove generalization bounds, under some weak assumptions on the distribution of $x$ and $\varepsilon$, including the case of sub-Gaussian variables. Our bounds hold in infinite-dimensional spaces, thereby showing that finer and finer discretizations do not make this learning problem harder. The results are validated through numerical simulations.
null
NovelD: A Simple yet Effective Exploration Criterion
https://papers.nips.cc/paper_files/paper/2021/hash/d428d070622e0f4363fceae11f4a3576-Abstract.html
Tianjun Zhang, Huazhe Xu, Xiaolong Wang, Yi Wu, Kurt Keutzer, Joseph E. Gonzalez, Yuandong Tian
https://papers.nips.cc/paper_files/paper/2021/hash/d428d070622e0f4363fceae11f4a3576-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13554-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d428d070622e0f4363fceae11f4a3576-Paper.pdf
https://openreview.net/forum?id=CYUzpnOkFJp
https://papers.nips.cc/paper_files/paper/2021/file/d428d070622e0f4363fceae11f4a3576-Supplemental.pdf
Efficient exploration under sparse rewards remains a key challenge in deep reinforcement learning. Previous exploration methods (e.g., RND) have achieved strong results in multiple hard tasks. However, if there are multiple novel areas to explore, these methods often focus quickly on one without sufficiently trying others (in a depth-first-search manner). In some scenarios (e.g., the four-corridor environment in Sec 4.2), we observe that they explore one corridor for a long time and fail to cover all the states. On the other hand, in theoretical RL, an agent with optimistic initialization and the inverse square root of the visitation count as a bonus does not suffer from this and explores different novel regions in alternation (in a breadth-first-search manner). In this paper, inspired by this, we propose a simple but effective criterion, NovelD, which weights every novel area approximately equally. Our algorithm is very simple, yet it matches or even outperforms multiple SOTA exploration methods in many hard exploration tasks. Specifically, NovelD solves all the static procedurally-generated tasks in MiniGrid with just 120M environment steps, without any curriculum learning. In comparison, the previous SOTA only solves 50% of them. NovelD also achieves SOTA on multiple tasks in NetHack, a rogue-like game that contains more challenging procedurally-generated environments. In multiple Atari games (e.g., Montezuma's Revenge, Venture, Gravitar), NovelD outperforms RND. We analyze NovelD thoroughly in MiniGrid and find that, empirically, it helps the agent explore the environment more uniformly, with a focus on exploring beyond the boundary.
null
On Margin-Based Cluster Recovery with Oracle Queries
https://papers.nips.cc/paper_files/paper/2021/hash/d46e1fcf4c07ce4a69ee07e4134bcef1-Abstract.html
Marco Bressan, Nicolò Cesa-Bianchi, Silvio Lattanzi, Andrea Paudice
https://papers.nips.cc/paper_files/paper/2021/hash/d46e1fcf4c07ce4a69ee07e4134bcef1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13555-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d46e1fcf4c07ce4a69ee07e4134bcef1-Paper.pdf
https://openreview.net/forum?id=yLyXqdsYho
https://papers.nips.cc/paper_files/paper/2021/file/d46e1fcf4c07ce4a69ee07e4134bcef1-Supplemental.pdf
We study an active cluster recovery problem where, given a set of $n$ points and an oracle answering queries like ``are these two points in the same cluster?'', the task is to recover exactly all clusters using as few queries as possible. We begin by introducing a simple but general notion of margin between clusters that captures, as special cases, the margins used in previous works, the classic SVM margin, and standard notions of stability for center-based clusterings. Under our margin assumptions we design algorithms that, in a variety of settings, recover all clusters exactly using only $O(\log n)$ queries. For $\mathbb{R}^m$, we give an algorithm that recovers \emph{arbitrary} convex clusters, in polynomial time, and with a number of queries that is lower than the best existing algorithm by $\Theta(m^m)$ factors. For general pseudometric spaces, where clusters might not be convex or might not have any notion of shape, we give an algorithm that achieves the $O(\log n)$ query bound, and is provably near-optimal as a function of the packing number of the space. Finally, for clusterings realized by binary concept classes, we give a combinatorial characterization of recoverability with $O(\log n)$ queries, and we show that, for many concept classes in $\mathbb{R}^m$, this characterization is equivalent to our margin condition. Our results show a deep connection between cluster margins and active cluster recoverability.
null
Multi-Scale Representation Learning on Proteins
https://papers.nips.cc/paper_files/paper/2021/hash/d494020ff8ec181ef98ed97ac3f25453-Abstract.html
Vignesh Ram Somnath, Charlotte Bunne, Andreas Krause
https://papers.nips.cc/paper_files/paper/2021/hash/d494020ff8ec181ef98ed97ac3f25453-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13556-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d494020ff8ec181ef98ed97ac3f25453-Paper.pdf
https://openreview.net/forum?id=-xEk43f_EO6
https://papers.nips.cc/paper_files/paper/2021/file/d494020ff8ec181ef98ed97ac3f25453-Supplemental.pdf
Proteins are fundamental biological entities mediating key roles in cellular function and disease. This paper introduces a multi-scale graph construction of a protein, HoloProt, connecting surface to structure and sequence. The surface captures coarser details of the protein, while the sequence, as the primary component, and the structure, comprising the secondary and tertiary components, capture finer details. Our graph encoder then learns a multi-scale representation by allowing each level to integrate the encoding from the level(s) below with the graph at that level. We test the learned representation on different tasks: (i) ligand binding affinity (regression) and (ii) protein function prediction (classification). On the regression task, contrary to previous methods, our model performs consistently and reliably across different dataset splits, outperforming all baselines on most splits. On the classification task, it achieves a performance close to the top-performing model while using 10x fewer parameters. To improve the memory efficiency of our construction, we segment the multiplex protein surface manifold into molecular superpixels and substitute the surface with these superpixels at little to no performance loss.
null
Sparse Quadratic Optimisation over the Stiefel Manifold with Application to Permutation Synchronisation
https://papers.nips.cc/paper_files/paper/2021/hash/d4bad256c73a6b25b86cc9c1a77255b1-Abstract.html
Florian Bernard, Daniel Cremers, Johan Thunberg
https://papers.nips.cc/paper_files/paper/2021/hash/d4bad256c73a6b25b86cc9c1a77255b1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13557-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d4bad256c73a6b25b86cc9c1a77255b1-Paper.pdf
https://openreview.net/forum?id=sl_0rQmHxQk
https://papers.nips.cc/paper_files/paper/2021/file/d4bad256c73a6b25b86cc9c1a77255b1-Supplemental.pdf
We address the non-convex optimisation problem of finding a sparse matrix on the Stiefel manifold (matrices with mutually orthogonal columns of unit length) that maximises (or minimises) a quadratic objective function. Optimisation problems on the Stiefel manifold occur for example in spectral relaxations of various combinatorial problems, such as graph matching, clustering, or permutation synchronisation. Although sparsity is a desirable property in such settings, it is mostly neglected in spectral formulations since existing solvers, e.g. based on eigenvalue decomposition, are unable to account for sparsity while at the same time maintaining global optimality guarantees. We fill this gap and propose a simple yet effective sparsity-promoting modification of the Orthogonal Iteration algorithm for finding the dominant eigenspace of a matrix. By doing so, we can guarantee that our method finds a Stiefel matrix that is globally optimal with respect to the quadratic objective function, while in addition being sparse. As a motivating application we consider the task of permutation synchronisation, which can be understood as a constrained clustering problem that has particular relevance for matching multiple images or 3D shapes in computer vision, computer graphics, and beyond. We demonstrate that the proposed approach outperforms previous methods in this domain.
null
Second-Order Neural ODE Optimizer
https://papers.nips.cc/paper_files/paper/2021/hash/d4c2e4a3297fe25a71d030b67eb83bfc-Abstract.html
Guan-Horng Liu, Tianrong Chen, Evangelos Theodorou
https://papers.nips.cc/paper_files/paper/2021/hash/d4c2e4a3297fe25a71d030b67eb83bfc-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13558-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d4c2e4a3297fe25a71d030b67eb83bfc-Paper.pdf
https://openreview.net/forum?id=XwetFe0U63c
https://papers.nips.cc/paper_files/paper/2021/file/d4c2e4a3297fe25a71d030b67eb83bfc-Supplemental.pdf
We propose a novel second-order optimization framework for training the emerging deep continuous-time models, specifically the Neural Ordinary Differential Equations (Neural ODEs). Since their training already involves expensive gradient computation by solving a backward ODE, deriving efficient second-order methods becomes highly nontrivial. Nevertheless, inspired by the recent Optimal Control (OC) interpretation of training deep networks, we show that a specific continuous-time OC methodology, called Differential Programming, can be adopted to derive backward ODEs for higher-order derivatives at the same O(1) memory cost. We further explore a low-rank representation of the second-order derivatives and show that it leads to efficient preconditioned updates with the aid of Kronecker-based factorization. The resulting method – named SNOpt – converges much faster than first-order baselines in wall-clock time, and the improvement remains consistent across various applications, e.g. image classification, generative flow, and time-series prediction. Our framework also enables direct architecture optimization, such as the integration time of Neural ODEs, with second-order feedback policies, strengthening the OC perspective as a principled tool of analyzing optimization in deep learning. Our code is available at https://github.com/ghliu/snopt.
null
Graph Neural Networks with Local Graph Parameters
https://papers.nips.cc/paper_files/paper/2021/hash/d4d8d1ac7e00e9105775a6b660dd3cbb-Abstract.html
Pablo Barceló, Floris Geerts, Juan Reutter, Maksimilian Ryschkov
https://papers.nips.cc/paper_files/paper/2021/hash/d4d8d1ac7e00e9105775a6b660dd3cbb-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13559-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d4d8d1ac7e00e9105775a6b660dd3cbb-Paper.pdf
https://openreview.net/forum?id=yGKklt8wyV
https://papers.nips.cc/paper_files/paper/2021/file/d4d8d1ac7e00e9105775a6b660dd3cbb-Supplemental.pdf
Various recent proposals increase the distinguishing power of Graph Neural Networks (GNNs) by propagating features between k-tuples of vertices. The distinguishing power of these “higher-order” GNNs is known to be bounded by the k-dimensional Weisfeiler-Leman (WL) test, yet their O(n^k) memory requirements limit their applicability. Other proposals infuse GNNs with local higher-order graph structural information from the start, thereby inheriting the desirable O(n) memory requirement from GNNs at the cost of a one-time, possibly non-linear, preprocessing step. We propose local graph parameter enabled GNNs as a framework for studying the latter kind of approaches and precisely characterize their distinguishing power, in terms of a variant of the WL test, and in terms of the graph structural properties that they can take into account. Local graph parameters can be added to any GNN architecture, and are cheap to compute. In terms of expressive power, our proposal lies in the middle of GNNs and their higher-order counterparts. Further, we propose several techniques to aid in choosing the right local graph parameters. Our results connect GNNs with deep results in finite model theory and finite variable logics. Our experimental evaluation shows that adding local graph parameters often has a positive effect for a variety of GNNs, datasets and graph learning tasks.
null
Closing the Gap: Tighter Analysis of Alternating Stochastic Gradient Methods for Bilevel Problems
https://papers.nips.cc/paper_files/paper/2021/hash/d4dd111a4fd973394238aca5c05bebe3-Abstract.html
Tianyi Chen, Yuejiao Sun, Wotao Yin
https://papers.nips.cc/paper_files/paper/2021/hash/d4dd111a4fd973394238aca5c05bebe3-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13560-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d4dd111a4fd973394238aca5c05bebe3-Paper.pdf
https://openreview.net/forum?id=OItvP2-i9j
https://papers.nips.cc/paper_files/paper/2021/file/d4dd111a4fd973394238aca5c05bebe3-Supplemental.pdf
Stochastic nested optimization, including stochastic compositional, min-max, and bilevel optimization, is gaining popularity in many machine learning applications. While the three problems share a nested structure, existing works often treat them separately, thus developing problem-specific algorithms and analyses. Among various exciting developments, simple SGD-type updates (potentially on multiple variables) are still prevalent in solving this class of nested problems, but they are believed to have a slower convergence rate than non-nested problems. This paper unifies several SGD-type updates for stochastic nested problems into a single SGD approach that we term ALternating Stochastic gradient dEscenT (ALSET) method. By leveraging the hidden smoothness of the problem, this paper presents a tighter analysis of ALSET for stochastic nested problems. Under the new analysis, to achieve an $\epsilon$-stationary point of the nested problem, it requires ${\cal O}(\epsilon^{-2})$ samples in total. Under certain regularity conditions, applying our results to stochastic compositional, min-max, and reinforcement learning problems either improves or matches the best-known sample complexity in the respective cases. Our results explain why simple SGD-type algorithms in stochastic nested problems all work very well in practice without the need for further modifications.
null
Dense Unsupervised Learning for Video Segmentation
https://papers.nips.cc/paper_files/paper/2021/hash/d516b13671a4179d9b7b458a6ebdeb92-Abstract.html
Nikita Araslanov, Simone Schaub-Meyer, Stefan Roth
https://papers.nips.cc/paper_files/paper/2021/hash/d516b13671a4179d9b7b458a6ebdeb92-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13561-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d516b13671a4179d9b7b458a6ebdeb92-Paper.pdf
https://openreview.net/forum?id=i8kfkuiCJCI
https://papers.nips.cc/paper_files/paper/2021/file/d516b13671a4179d9b7b458a6ebdeb92-Supplemental.zip
We present a novel approach to unsupervised learning for video object segmentation (VOS). Unlike previous work, our formulation allows us to learn dense feature representations directly in a fully convolutional regime. We rely on uniform grid sampling to extract a set of anchors and train our model to disambiguate between them on both inter- and intra-video levels. However, a naive scheme to train such a model results in a degenerate solution. We propose to prevent this with a simple regularisation scheme, accommodating the equivariance property of the segmentation task to similarity transformations. Our training objective admits efficient implementation and exhibits fast training convergence. On established VOS benchmarks, our approach exceeds the segmentation accuracy of previous work despite using significantly less training data and compute power.
null
Charting and Navigating the Space of Solutions for Recurrent Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/d530d454337fb09964237fecb4bea6ce-Abstract.html
Elia Turner, Kabir V Dabholkar, Omri Barak
https://papers.nips.cc/paper_files/paper/2021/hash/d530d454337fb09964237fecb4bea6ce-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13562-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d530d454337fb09964237fecb4bea6ce-Paper.pdf
https://openreview.net/forum?id=SQm_poGrlj
https://papers.nips.cc/paper_files/paper/2021/file/d530d454337fb09964237fecb4bea6ce-Supplemental.pdf
In recent years, Recurrent Neural Networks (RNNs) have been successfully used to model the way neural activity drives task-related behavior in animals, operating under the implicit assumption that the obtained solutions are universal. Observations in both neuroscience and machine learning challenge this assumption. Animals can approach a given task with a variety of strategies, and training machine learning algorithms introduces the phenomenon of underspecification. These observations imply that every task is associated with a space of solutions. To date, the structure of this space is not understood, limiting the approach of comparing RNNs with neural data. Here, we characterize the space of solutions associated with various tasks. We first study a simple two-neuron network on a task that leads to multiple solutions. We trace the nature of the final solution back to the network’s initial connectivity and identify discrete dynamical regimes that underlie this diversity. We then examine three neuroscience-inspired tasks: Delayed discrimination, Interval discrimination, and Time reproduction. For each task, we find a rich set of solutions. One layer of variability can be found directly in the neural activity of the networks. An additional layer is uncovered by testing the trained networks' ability to extrapolate, as a perturbation to a system often reveals hidden structure. Furthermore, we relate extrapolation patterns to specific dynamical objects and effective algorithms found by the networks. We introduce a tool to derive the reduced dynamics of networks by generating a compact directed graph describing the essence of the dynamics with regard to behavioral inputs and outputs. Using this representation, we can partition the solutions to each task into a handful of types and show that neural features can partially predict them. Taken together, our results shed light on the concept of the space of solutions and its uses both in machine learning and in neuroscience.
null
Fast Training Method for Stochastic Compositional Optimization Problems
https://papers.nips.cc/paper_files/paper/2021/hash/d5397f1497b5cdaad7253fdc92db610b-Abstract.html
Hongchang Gao, Heng Huang
https://papers.nips.cc/paper_files/paper/2021/hash/d5397f1497b5cdaad7253fdc92db610b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13563-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d5397f1497b5cdaad7253fdc92db610b-Paper.pdf
https://openreview.net/forum?id=Mobm1AGs64v
https://papers.nips.cc/paper_files/paper/2021/file/d5397f1497b5cdaad7253fdc92db610b-Supplemental.pdf
The stochastic compositional optimization problem covers a wide range of machine learning models, such as sparse additive models and model-agnostic meta-learning. Thus, it is necessary to develop efficient methods for its optimization. Existing methods for the stochastic compositional optimization problem only focus on the single machine scenario, which is far from satisfactory when data are distributed on different devices. To address this problem, we propose novel decentralized stochastic compositional gradient descent methods to efficiently train the large-scale stochastic compositional optimization problem. To the best of our knowledge, our work is the first to facilitate decentralized training for this kind of problem. Furthermore, we provide the convergence analysis for our methods, which shows that the convergence rate of our methods can achieve linear speedup with respect to the number of devices. Finally, we apply our decentralized training methods to the model-agnostic meta-learning problem, and the experimental results confirm the superior performance of our methods.
null
Dual-stream Network for Visual Recognition
https://papers.nips.cc/paper_files/paper/2021/hash/d56b9fc4b0f1be8871f5e1c40c0067e7-Abstract.html
Mingyuan Mao, Peng Gao, Renrui Zhang, Honghui Zheng, Teli Ma, Yan Peng, Errui Ding, Baochang Zhang, Shumin Han
https://papers.nips.cc/paper_files/paper/2021/hash/d56b9fc4b0f1be8871f5e1c40c0067e7-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13564-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d56b9fc4b0f1be8871f5e1c40c0067e7-Paper.pdf
https://openreview.net/forum?id=AjfD1JjeVKN
null
Transformers with remarkable global representation capacities achieve competitive results for visual tasks, but fail to consider high-level local pattern information in input images. In this paper, we present a generic Dual-stream Network (DS-Net) to fully explore the representation capacity of local and global pattern features for image classification. Our DS-Net can simultaneously calculate fine-grained and integrated features and efficiently fuse them. Specifically, we propose an Intra-scale Propagation module to process two different resolutions in each block and an Inter-Scale Alignment module to perform information interaction across features at dual scales. Besides, we also design a Dual-stream FPN (DS-FPN) to further enhance contextual information for downstream dense predictions. Without bells and whistles, the proposed DS-Net outperforms DeiT-Small by 2.4\% in terms of top-1 accuracy on ImageNet-1k and achieves state-of-the-art performance over other Vision Transformers and ResNets. For object detection and instance segmentation, DS-Net-Small respectively outperforms ResNet-50 by 6.4\% and 5.5\% in terms of mAP on MSCOCO 2017, and surpasses the previous state-of-the-art scheme, which significantly demonstrates its potential to be a general backbone in vision tasks. The code will be released soon.
null
Estimating High Order Gradients of the Data Distribution by Denoising
https://papers.nips.cc/paper_files/paper/2021/hash/d582ac40970f9885836a61d7b2c662e4-Abstract.html
Chenlin Meng, Yang Song, Wenzhe Li, Stefano Ermon
https://papers.nips.cc/paper_files/paper/2021/hash/d582ac40970f9885836a61d7b2c662e4-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13565-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d582ac40970f9885836a61d7b2c662e4-Paper.pdf
https://openreview.net/forum?id=YTkQQrqSyE1
https://papers.nips.cc/paper_files/paper/2021/file/d582ac40970f9885836a61d7b2c662e4-Supplemental.pdf
The first order derivative of a data density can be estimated efficiently by denoising score matching, and has become an important component in many applications, such as image generation and audio synthesis. Higher order derivatives provide additional local information about the data distribution and enable new applications. Although they can be estimated via automatic differentiation of a learned density model, this can amplify estimation errors and is expensive in high dimensional settings. To overcome these limitations, we propose a method to directly estimate high order derivatives (scores) of a data density from samples. We first show that denoising score matching can be interpreted as a particular case of Tweedie’s formula. By leveraging Tweedie’s formula on higher order moments, we generalize denoising score matching to estimate higher order derivatives. We demonstrate empirically that models trained with the proposed method can approximate second order derivatives more efficiently and accurately than via automatic differentiation. We show that our models can be used to quantify uncertainty in denoising and to improve the mixing speed of Langevin dynamics via Ozaki discretization for sampling synthetic data and natural images.
null
Machine versus Human Attention in Deep Reinforcement Learning Tasks
https://papers.nips.cc/paper_files/paper/2021/hash/d58e2f077670f4de9cd7963c857f2534-Abstract.html
Suna (Sihang) Guo, Ruohan Zhang, Bo Liu, Yifeng Zhu, Dana Ballard, Mary Hayhoe, Peter Stone
https://papers.nips.cc/paper_files/paper/2021/hash/d58e2f077670f4de9cd7963c857f2534-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13566-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d58e2f077670f4de9cd7963c857f2534-Paper.pdf
https://openreview.net/forum?id=8fztRILSxL
https://papers.nips.cc/paper_files/paper/2021/file/d58e2f077670f4de9cd7963c857f2534-Supplemental.zip
Deep reinforcement learning (RL) algorithms are powerful tools for solving visuomotor decision tasks. However, the trained models are often difficult to interpret, because they are represented as end-to-end deep neural networks. In this paper, we shed light on the inner workings of such trained models by analyzing the pixels that they attend to during task execution, and comparing them with the pixels attended to by humans executing the same tasks. To this end, we investigate the following two questions that, to the best of our knowledge, have not been previously studied. 1) How similar are the visual representations learned by RL agents and humans when performing the same task? and, 2) How do similarities and differences in these learned representations explain RL agents' performance on these tasks? Specifically, we compare the saliency maps of RL agents against visual attention models of human experts when learning to play Atari games. Further, we analyze how hyperparameters of the deep RL algorithm affect the learned representations and saliency maps of the trained agents. The insights provided have the potential to inform novel algorithms for closing the performance gap between human experts and RL agents.
null
Reusing Combinatorial Structure: Faster Iterative Projections over Submodular Base Polytopes
https://papers.nips.cc/paper_files/paper/2021/hash/d58f36f7679f85784d8b010ff248f898-Abstract.html
Jai Moondra, Hassan Mortagy, Swati Gupta
https://papers.nips.cc/paper_files/paper/2021/hash/d58f36f7679f85784d8b010ff248f898-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13567-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d58f36f7679f85784d8b010ff248f898-Paper.pdf
https://openreview.net/forum?id=OWwm6hzMDsU
null
Optimization algorithms such as projected Newton's method, FISTA, mirror descent and its variants enjoy near-optimal regret bounds and convergence rates, but suffer from a computational bottleneck of computing ``projections'' in potentially each iteration (e.g., $O(T^{1/2})$ regret of online mirror descent). On the other hand, conditional gradient variants solve a linear optimization in each iteration, but result in suboptimal rates (e.g., $O(T^{3/4})$ regret of online Frank-Wolfe). Motivated by this trade-off in runtime vs. convergence rates, we consider iterative projections of close-by points over widely-prevalent submodular base polytopes $B(f)$. We develop a toolkit to speed up the computation of projections using both discrete and continuous perspectives. We subsequently adapt the away-step Frank-Wolfe algorithm to use this information and enable early termination. For the special case of cardinality-based submodular polytopes, we improve the runtime of computing certain Bregman projections by a factor of $\Omega(n/\log(n))$. Our theoretical results show orders of magnitude reduction in runtime in preliminary computational experiments.
null
Constrained Optimization to Train Neural Networks on Critical and Under-Represented Classes
https://papers.nips.cc/paper_files/paper/2021/hash/d5ade38a2c9f6f073d69e1bc6b6e64c1-Abstract.html
Sara Sangalli, Ertunc Erdil, Andreas Hötker, Olivio Donati, Ender Konukoglu
https://papers.nips.cc/paper_files/paper/2021/hash/d5ade38a2c9f6f073d69e1bc6b6e64c1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13568-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d5ade38a2c9f6f073d69e1bc6b6e64c1-Paper.pdf
https://openreview.net/forum?id=vERYhbX_6Y
https://papers.nips.cc/paper_files/paper/2021/file/d5ade38a2c9f6f073d69e1bc6b6e64c1-Supplemental.pdf
Deep neural networks (DNNs) are notorious for making more mistakes for the classes that have substantially fewer samples than the others during training. Such class imbalance is ubiquitous in clinical applications and very crucial to handle because the classes with fewer samples most often correspond to critical cases (e.g., cancer) where misclassifications can have severe consequences. Not to miss such cases, binary classifiers need to be operated at high True Positive Rates (TPRs) by setting a higher threshold, but this comes at the cost of very high False Positive Rates (FPRs) for problems with class imbalance. Existing methods for learning under class imbalance most often do not take this into account. We argue that prediction accuracy should be improved by emphasizing the reduction of FPRs at high TPRs for problems where misclassification of the positive, i.e. critical, class samples is associated with higher cost. To this end, we pose the training of a DNN for binary classification as a constrained optimization problem and introduce a novel constraint that can be used with existing loss functions to enforce maximal area under the ROC curve (AUC) through prioritizing FPR reduction at high TPR. We solve the resulting constrained optimization problem using an Augmented Lagrangian method (ALM). Going beyond binary classification, we also propose two possible extensions of the proposed constraint for multi-class classification problems. We present experimental results for image-based binary and multi-class classification applications using an in-house medical imaging dataset, CIFAR10, and CIFAR100. Our results demonstrate that the proposed method improves the baselines in the majority of cases by attaining higher accuracy on critical classes while reducing the misclassification rate for the non-critical class samples.
null
Collapsed Variational Bounds for Bayesian Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/d5b03d3acb580879f82271ab4885ee5e-Abstract.html
Marcin Tomczak, Siddharth Swaroop, Andrew Foong, Richard Turner
https://papers.nips.cc/paper_files/paper/2021/hash/d5b03d3acb580879f82271ab4885ee5e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13569-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d5b03d3acb580879f82271ab4885ee5e-Paper.pdf
https://openreview.net/forum?id=ykN3tbJ0qmX
https://papers.nips.cc/paper_files/paper/2021/file/d5b03d3acb580879f82271ab4885ee5e-Supplemental.pdf
Recent interest in learning large variational Bayesian Neural Networks (BNNs) has been partly hampered by poor predictive performance caused by underfitting, and their performance is known to be very sensitive to the prior over weights. Current practice often fixes the prior parameters to standard values or tunes them using heuristics or cross-validation. In this paper, we treat prior parameters in a distributional way by extending the model and collapsing the variational bound with respect to their posteriors. This leads to novel and tighter Evidence Lower Bounds (ELBOs) for performing variational inference (VI) in BNNs. Our experiments show that the new bounds significantly improve the performance of Gaussian mean-field VI applied to BNNs on a variety of data sets, demonstrating that mean-field VI works well even in deep models. We also find that the tighter ELBOs can be good optimization targets for learning the hyperparameters of hierarchical priors.
null
Consistent Estimation for PCA and Sparse Regression with Oblivious Outliers
https://papers.nips.cc/paper_files/paper/2021/hash/d5b3d8dadd770c460b1cde910a711987-Abstract.html
Tommaso d'Orsi, Chih-Hung Liu, Rajai Nasser, Gleb Novikov, David Steurer, Stefan Tiegel
https://papers.nips.cc/paper_files/paper/2021/hash/d5b3d8dadd770c460b1cde910a711987-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13570-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d5b3d8dadd770c460b1cde910a711987-Paper.pdf
https://openreview.net/forum?id=BaHth99Sp45
https://papers.nips.cc/paper_files/paper/2021/file/d5b3d8dadd770c460b1cde910a711987-Supplemental.pdf
We develop machinery to design efficiently computable and \emph{consistent} estimators, achieving estimation error approaching zero as the number of observations grows, when facing an oblivious adversary that may corrupt responses in all but an $\alpha$ fraction of the samples. As concrete examples, we investigate two problems: sparse regression and principal component analysis (PCA). For sparse regression, we achieve consistency for optimal sample size $n\gtrsim (k\log d)/\alpha^2$ and optimal error rate $O(\sqrt{(k\log d)/(n\cdot \alpha^2)})$, where $n$ is the number of observations, $d$ is the number of dimensions and $k$ is the sparsity of the parameter vector, allowing the fraction of inliers to be inverse-polynomial in the number of samples. Prior to this work, no estimator was known to be consistent when the fraction of inliers $\alpha$ is $o(1/\log \log n)$, even for (non-spherical) Gaussian design matrices. Results holding under weak design assumptions and in the presence of such general noise have only been shown in the dense setting (i.e., general linear regression) very recently by d'Orsi et al.~\cite{ICML-linear-regression}. In the context of PCA, we attain optimal error guarantees under broad spikiness assumptions on the parameter matrix (usually used in matrix completion). Previous works could obtain non-trivial guarantees only under the assumption that the measurement noise corresponding to the inliers is polynomially small in $n$ (e.g., Gaussian with variance $1/n^2$). To devise our estimators, we equip the Huber loss with non-smooth regularizers such as the $\ell_1$ norm or the nuclear norm, and extend d'Orsi et al.'s approach~\cite{ICML-linear-regression} in a novel way to analyze the loss function. Our machinery appears to be easily applicable to a wide range of estimation problems. We complement these algorithmic results with statistical lower bounds showing that the fraction of inliers that our PCA estimator can deal with is optimal up to a constant factor.
null
Offline Constrained Multi-Objective Reinforcement Learning via Pessimistic Dual Value Iteration
https://papers.nips.cc/paper_files/paper/2021/hash/d5c8e1ab6fc0bfeb5f29aafa999cdb29-Abstract.html
Runzhe Wu, Yufeng Zhang, Zhuoran Yang, Zhaoran Wang
https://papers.nips.cc/paper_files/paper/2021/hash/d5c8e1ab6fc0bfeb5f29aafa999cdb29-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13571-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d5c8e1ab6fc0bfeb5f29aafa999cdb29-Paper.pdf
https://openreview.net/forum?id=9-sCrvMbL9
https://papers.nips.cc/paper_files/paper/2021/file/d5c8e1ab6fc0bfeb5f29aafa999cdb29-Supplemental.zip
In constrained multi-objective RL, the goal is to learn a policy that achieves the best performance specified by a multi-objective preference function under a constraint. We focus on the offline setting where the RL agent aims to learn the optimal policy from a given dataset. This scenario is common in real-world applications where interactions with the environment are expensive and the constraint violation is dangerous. For such a setting, we transform the original constrained problem into a primal-dual formulation, which is solved via dual gradient ascent. Moreover, we propose to combine such an approach with pessimism to overcome the uncertainty in offline data, which leads to our Pessimistic Dual Iteration (PEDI). We establish upper bounds on both the suboptimality and constraint violation for the policy learned by PEDI based on an arbitrary dataset, which proves that PEDI is provably sample efficient. We also specialize PEDI to the setting with linear function approximation. To the best of our knowledge, we propose the first provably efficient constrained multi-objective RL algorithm with offline data without any assumption on the coverage of the dataset.
null
Absolute Neighbour Difference based Correlation Test for Detecting Heteroscedastic Relationships
https://papers.nips.cc/paper_files/paper/2021/hash/d5cfead94f5350c12c322b5b664544c1-Abstract.html
Lifeng Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/d5cfead94f5350c12c322b5b664544c1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13572-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d5cfead94f5350c12c322b5b664544c1-Paper.pdf
https://openreview.net/forum?id=BpFtRaPGRTZ
https://papers.nips.cc/paper_files/paper/2021/file/d5cfead94f5350c12c322b5b664544c1-Supplemental.pdf
It is a challenge to detect complicated data relationships thoroughly. Here, we propose a new statistical measure, named the absolute neighbour difference based neighbour correlation coefficient, to detect the associations between variables by examining the heteroscedasticity of the unpredictable variation of dependent variables. Different from previous studies, the new method concentrates on measuring nonfunctional relationships rather than functional or mixed associations. Either used alone or in combination with other measures, it enables not only a convenient test of heteroscedasticity, but also measuring functional and nonfunctional relationships separately, which leads to a deeper insight into the data associations. The method is concise and easy to implement; it does not rely on explicitly estimating the regression residuals or the dependencies between variables, and is therefore not restricted to any kind of model assumption. The mechanisms of the correlation test are proved in theory and demonstrated with numerical analyses.
null
Batch Multi-Fidelity Bayesian Optimization with Deep Auto-Regressive Networks
https://papers.nips.cc/paper_files/paper/2021/hash/d5e2c0adad503c91f91df240d0cd4e49-Abstract.html
Shibo Li, Robert Kirby, Shandian Zhe
https://papers.nips.cc/paper_files/paper/2021/hash/d5e2c0adad503c91f91df240d0cd4e49-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13573-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d5e2c0adad503c91f91df240d0cd4e49-Paper.pdf
https://openreview.net/forum?id=wF-llA3k32
https://papers.nips.cc/paper_files/paper/2021/file/d5e2c0adad503c91f91df240d0cd4e49-Supplemental.pdf
Bayesian optimization (BO) is a powerful approach for optimizing black-box, expensive-to-evaluate functions. To enable a flexible trade-off between the cost and accuracy, many applications allow the function to be evaluated at different fidelities. In order to reduce the optimization cost while maximizing the benefit-cost ratio, in this paper we propose Batch Multi-fidelity Bayesian Optimization with Deep Auto-Regressive Networks (BMBO-DARN). We use a set of Bayesian neural networks to construct a fully auto-regressive model, which is expressive enough to capture strong yet complex relationships across all the fidelities, so as to improve the surrogate learning and optimization performance. Furthermore, to enhance the quality and diversity of queries, we develop a simple yet efficient batch querying method, without any combinatorial search over the fidelities. We propose a batch acquisition function based on Max-value Entropy Search (MES) principle, which penalizes highly correlated queries and encourages diversity. We use posterior samples and moment matching to fulfill efficient computation of the acquisition function, and conduct alternating optimization over every fidelity-input pair, which guarantees an improvement at each step. We demonstrate the advantage of our approach on four real-world hyperparameter optimization applications.
null
Mastering Atari Games with Limited Data
https://papers.nips.cc/paper_files/paper/2021/hash/d5eca8dc3820cad9fe56a3bafda65ca1-Abstract.html
Weirui Ye, Shaohuai Liu, Thanard Kurutach, Pieter Abbeel, Yang Gao
https://papers.nips.cc/paper_files/paper/2021/hash/d5eca8dc3820cad9fe56a3bafda65ca1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13574-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d5eca8dc3820cad9fe56a3bafda65ca1-Paper.pdf
https://openreview.net/forum?id=OKrNPg3xR3T
https://papers.nips.cc/paper_files/paper/2021/file/d5eca8dc3820cad9fe56a3bafda65ca1-Supplemental.pdf
Reinforcement learning has achieved great success in many applications. However, sample efficiency remains a key challenge, with prominent methods requiring millions (or even billions) of environment steps to train. Recently, there has been significant progress in sample efficient image-based RL algorithms; however, consistent human-level performance on the Atari game benchmark remains an elusive goal. We propose a sample efficient model-based visual RL algorithm built on MuZero, which we name EfficientZero. Our method achieves 194.3% mean human performance and 109.0% median performance on the Atari 100k benchmark with only two hours of real-time game experience and outperforms state-based SAC in some tasks on the DMControl 100k benchmark. This is the first time an algorithm achieves super-human performance on Atari games with so little data. EfficientZero's performance is also close to DQN's performance at 200 million frames while consuming 500 times less data. EfficientZero's low sample complexity and high performance can bring RL closer to real-world applicability. We implement our algorithm in an easy-to-understand manner and it is available at https://github.com/YeWR/EfficientZero. We hope it will accelerate research on MCTS-based RL algorithms in the wider community.
null
Dealing With Misspecification In Fixed-Confidence Linear Top-m Identification
https://papers.nips.cc/paper_files/paper/2021/hash/d5fcc35c94879a4afad61cacca56192c-Abstract.html
Clémence Réda, Andrea Tirinzoni, Rémy Degenne
https://papers.nips.cc/paper_files/paper/2021/hash/d5fcc35c94879a4afad61cacca56192c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13575-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d5fcc35c94879a4afad61cacca56192c-Paper.pdf
https://openreview.net/forum?id=1fnkvjzVdu9
https://papers.nips.cc/paper_files/paper/2021/file/d5fcc35c94879a4afad61cacca56192c-Supplemental.pdf
We study the problem of identifying the $m$ arms with the largest means under a fixed error rate $\delta$ (fixed-confidence Top-m identification), for misspecified linear bandit models. This problem is motivated by practical applications, especially in medicine and recommendation systems, where linear models are popular due to their simplicity and the existence of efficient algorithms, but in which data inevitably deviates from linearity. In this work, we first derive a tractable lower bound on the sample complexity of any $\delta$-correct algorithm for the general Top-m identification problem. We show that knowing the scale of the deviation from linearity is necessary to exploit the structure of the problem. We then describe the first algorithm for this setting, which is both practical and adapts to the amount of misspecification. We derive an upper bound on its sample complexity which confirms this adaptivity and matches the lower bound when $\delta \rightarrow 0$. Finally, we evaluate our algorithm on both synthetic and real-world data, showing competitive performance with respect to existing baselines.
null
Why Generalization in RL is Difficult: Epistemic POMDPs and Implicit Partial Observability
https://papers.nips.cc/paper_files/paper/2021/hash/d5ff135377d39f1de7372c95c74dd962-Abstract.html
Dibya Ghosh, Jad Rahme, Aviral Kumar, Amy Zhang, Ryan P. Adams, Sergey Levine
https://papers.nips.cc/paper_files/paper/2021/hash/d5ff135377d39f1de7372c95c74dd962-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13576-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d5ff135377d39f1de7372c95c74dd962-Paper.pdf
https://openreview.net/forum?id=QWIvzSQaX5
https://papers.nips.cc/paper_files/paper/2021/file/d5ff135377d39f1de7372c95c74dd962-Supplemental.pdf
Generalization is a central challenge for the deployment of reinforcement learning (RL) systems in the real world. In this paper, we show that the sequential structure of the RL problem necessitates new approaches to generalization beyond the well-studied techniques used in supervised learning. While supervised learning methods can generalize effectively without explicitly accounting for epistemic uncertainty, we describe why appropriate uncertainty handling can actually be essential in RL. We show that generalization to unseen test conditions from a limited number of training conditions induces a kind of implicit partial observability, effectively turning even fully-observed MDPs into POMDPs. Informed by this observation, we recast the problem of generalization in RL as solving the induced partially observed Markov decision process, which we call the epistemic POMDP. We demonstrate the failure modes of algorithms that do not appropriately handle this partial observability, and suggest a simple ensemble-based technique for approximately solving the partially observed problem. Empirically, we demonstrate that our simple algorithm derived from the epistemic POMDP achieves significant gains in generalization over current methods on the Procgen benchmark suite.
null
Set Prediction in the Latent Space
https://papers.nips.cc/paper_files/paper/2021/hash/d61e9e58ae1058322bc169943b39f1d8-Abstract.html
Konpat Preechakul, Chawan Piansaddhayanon, Burin Naowarat, Tirasan Khandhawit, Sira Sriswasdi, Ekapol Chuangsuwanich
https://papers.nips.cc/paper_files/paper/2021/hash/d61e9e58ae1058322bc169943b39f1d8-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13577-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d61e9e58ae1058322bc169943b39f1d8-Paper.pdf
https://openreview.net/forum?id=1ANcwXQuijU
https://papers.nips.cc/paper_files/paper/2021/file/d61e9e58ae1058322bc169943b39f1d8-Supplemental.pdf
Set prediction tasks require matching between the predicted set and the ground-truth set in order to propagate the gradient signal. Recent works have performed this matching in the original feature space, thus requiring predefined distance functions. We propose a method for learning the distance function by performing the matching in the latent space learned from encoding networks. This method enables the use of teacher forcing, which was not possible previously since matching in the feature space must be computed after the entire output sequence is generated. Nonetheless, a naive implementation of latent set prediction might not converge due to permutation instability. To address this problem, we provide sufficient conditions for permutation stability, which begets an algorithm to improve the overall model convergence. Experiments on several set prediction tasks, including image captioning and object detection, demonstrate the effectiveness of our method.
null
Best of Both Worlds: Practical and Theoretically Optimal Submodular Maximization in Parallel
https://papers.nips.cc/paper_files/paper/2021/hash/d63fbf8c3173730f82b150c5ef38b8ff-Abstract.html
Yixin Chen, Tonmoy Dey, Alan Kuhnle
https://papers.nips.cc/paper_files/paper/2021/hash/d63fbf8c3173730f82b150c5ef38b8ff-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13578-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d63fbf8c3173730f82b150c5ef38b8ff-Paper.pdf
https://openreview.net/forum?id=Ri2G086_3v
https://papers.nips.cc/paper_files/paper/2021/file/d63fbf8c3173730f82b150c5ef38b8ff-Supplemental.pdf
For the problem of maximizing a monotone, submodular function with respect to a cardinality constraint $k$ on a ground set of size $n$, we provide an algorithm that achieves the state-of-the-art in both its empirical performance and its theoretical properties, in terms of adaptive complexity, query complexity, and approximation ratio; that is, it obtains, with high probability, query complexity of $O(n)$ in expectation, adaptivity of $O(\log(n))$, and approximation ratio of nearly $1-1/e$. The main algorithm is assembled from two components which may be of independent interest. The first component of our algorithm, LINEARSEQ, is useful as a preprocessing algorithm to improve the query complexity of many algorithms. Moreover, a variant of LINEARSEQ is shown to have adaptive complexity of $O( \log (n / k) )$ which is smaller than that of any previous algorithm in the literature. The second component is a parallelizable thresholding procedure THRESHOLDSEQ for adding elements with gain above a constant threshold. Finally, we demonstrate that our main algorithm empirically outperforms, in terms of runtime, adaptive rounds, total queries, and objective values, the previous state-of-the-art algorithm FAST in a comprehensive evaluation with six submodular objective functions.
null
Fine-grained Generalization Analysis of Inductive Matrix Completion
https://papers.nips.cc/paper_files/paper/2021/hash/d6428eecbe0f7dff83fc607c5044b2b9-Abstract.html
Antoine Ledent, Rodrigo Alves, Yunwen Lei, Marius Kloft
https://papers.nips.cc/paper_files/paper/2021/hash/d6428eecbe0f7dff83fc607c5044b2b9-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13579-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d6428eecbe0f7dff83fc607c5044b2b9-Paper.pdf
https://openreview.net/forum?id=lDzLzhUIwBq
https://papers.nips.cc/paper_files/paper/2021/file/d6428eecbe0f7dff83fc607c5044b2b9-Supplemental.pdf
In this paper, we bridge the gap between the state-of-the-art theoretical results for matrix completion with the nuclear norm and their equivalent in \textit{inductive matrix completion}: (1) In the distribution-free setting, we prove bounds improving the previously best scaling of $O(rd^2)$ to $\widetilde{O}(d^{3/2}\sqrt{r})$, where $d$ is the dimension of the side information and $r$ is the rank. (2) We introduce the (smoothed) \textit{adjusted trace-norm minimization} strategy, an inductive analogue of the weighted trace norm, for which we show guarantees of the order $\widetilde{O}(dr)$ under arbitrary sampling. In the inductive case, a similar rate was previously achieved only under uniform sampling and for exact recovery. Both our results align with the state of the art in the particular case of standard (non-inductive) matrix completion, where they are known to be tight up to log terms. Experiments further confirm that our strategy outperforms standard inductive matrix completion on various synthetic datasets and real problems, justifying its place as an important tool in the arsenal of methods for matrix completion using side information.
null
Learning Frequency Domain Approximation for Binary Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/d645920e395fedad7bbbed0eca3fe2e0-Abstract.html
Yixing Xu, Kai Han, Chang Xu, Yehui Tang, Chunjing XU, Yunhe Wang
https://papers.nips.cc/paper_files/paper/2021/hash/d645920e395fedad7bbbed0eca3fe2e0-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13580-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d645920e395fedad7bbbed0eca3fe2e0-Paper.pdf
https://openreview.net/forum?id=5JvnsAdf6Vz
null
Binary neural networks (BNNs) represent the original full-precision weights and activations as 1-bit values using the sign function. Since the gradient of the conventional sign function is almost zero everywhere and hence cannot be used for back-propagation, several approaches have been proposed to alleviate the optimization difficulty by using approximate gradients. However, those approximations corrupt the main direction of the true gradient. To this end, we propose to estimate the gradient of the sign function in the Fourier frequency domain using a combination of sine functions for training BNNs, namely frequency domain approximation (FDA). The proposed approach does not affect the low-frequency information of the original sign function, which occupies most of the overall energy, and high-frequency coefficients are ignored to avoid the huge computational overhead. In addition, we embed a noise adaptation module into the training phase to compensate for the approximation error. The experiments on several benchmark datasets and neural architectures illustrate that the binary network learned using our method achieves state-of-the-art accuracy. Code will be available at https://gitee.com/mindspore/models/tree/master/research/cv/FDA-BNN.
null
Reformulating Zero-shot Action Recognition for Multi-label Actions
https://papers.nips.cc/paper_files/paper/2021/hash/d6539d3b57159babf6a72e106beb45bd-Abstract.html
Alec Kerrigan, Kevin Duarte, Yogesh Rawat, Mubarak Shah
https://papers.nips.cc/paper_files/paper/2021/hash/d6539d3b57159babf6a72e106beb45bd-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13581-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d6539d3b57159babf6a72e106beb45bd-Paper.pdf
https://openreview.net/forum?id=mHHU6KWQ1ci
https://papers.nips.cc/paper_files/paper/2021/file/d6539d3b57159babf6a72e106beb45bd-Supplemental.pdf
The goal of zero-shot action recognition (ZSAR) is to classify action classes which were not previously seen during training. Traditionally, this is achieved by training a network to map, or regress, visual inputs to a semantic space where a nearest neighbor classifier is used to select the closest target class. We argue that this approach is sub-optimal due to the use of nearest-neighbor search on a static semantic space and is ineffective when faced with multi-label videos, where two semantically distinct co-occurring action categories cannot be predicted with high confidence. To overcome these limitations, we propose a ZSAR framework which does not rely on nearest neighbor classification, but rather consists of a pairwise scoring function. Given a video and a set of action classes, our method predicts a set of confidence scores for each class independently. This allows for the prediction of several semantically distinct classes within one video input. Our evaluations show that our method not only achieves strong performance on three single-label action classification datasets (UCF-101, HMDB, and RareAct), but also outperforms previous ZSAR approaches on a challenging multi-label dataset (AVA) and a real-world surprise activity detection dataset (MEVA).
null
Optimal Best-Arm Identification Methods for Tail-Risk Measures
https://papers.nips.cc/paper_files/paper/2021/hash/d69c7ebb6a253532b266151eac6591af-Abstract.html
Shubhada Agrawal, Wouter M. Koolen, Sandeep Juneja
https://papers.nips.cc/paper_files/paper/2021/hash/d69c7ebb6a253532b266151eac6591af-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13582-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d69c7ebb6a253532b266151eac6591af-Paper.pdf
https://openreview.net/forum?id=4wVlNqBJXg
https://papers.nips.cc/paper_files/paper/2021/file/d69c7ebb6a253532b266151eac6591af-Supplemental.pdf
Conditional value-at-risk (CVaR) and value-at-risk (VaR) are popular tail-risk measures in finance and insurance industries as well as in highly reliable, safety-critical uncertain environments where often the underlying probability distributions are heavy-tailed. We use the multi-armed bandit best-arm identification framework and consider the problem of identifying the arm from amongst finitely many that has the smallest CVaR, VaR, or weighted sum of CVaR and mean. The latter captures the risk-return trade-off common in finance. Our main contribution is an optimal $\delta$-correct algorithm that acts on general arms, including heavy-tailed distributions, and matches the lower bound on the expected number of samples needed, asymptotically (as $ \delta$ approaches $0$). The algorithm requires solving a non-convex optimization problem in the space of probability measures, that requires delicate analysis. En-route, we develop new non-asymptotic, anytime-valid, empirical-likelihood-based concentration inequalities for tail-risk measures.
null
SyMetric: Measuring the Quality of Learnt Hamiltonian Dynamics Inferred from Vision
https://papers.nips.cc/paper_files/paper/2021/hash/d6ef5f7fa914c19931a55bb262ec879c-Abstract.html
Irina Higgins, Peter Wirnsberger, Andrew Jaegle, Aleksandar Botev
https://papers.nips.cc/paper_files/paper/2021/hash/d6ef5f7fa914c19931a55bb262ec879c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13583-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d6ef5f7fa914c19931a55bb262ec879c-Paper.pdf
https://openreview.net/forum?id=9Qu0U9Fj7IP
https://papers.nips.cc/paper_files/paper/2021/file/d6ef5f7fa914c19931a55bb262ec879c-Supplemental.pdf
A recently proposed class of models attempts to learn latent dynamics from high-dimensional observations, like images, using priors informed by Hamiltonian mechanics. While these models have important potential applications in areas like robotics or autonomous driving, there is currently no good way to evaluate their performance: existing methods primarily rely on image reconstruction quality, which does not always reflect the quality of the learnt latent dynamics. In this work, we empirically highlight the problems with the existing measures and develop a set of new measures, including a binary indicator of whether the underlying Hamiltonian dynamics have been faithfully captured, which we call the Symplecticity Metric or SyMetric. Our measures take advantage of the known properties of Hamiltonian dynamics and are more discriminative of the model's ability to capture the underlying dynamics than reconstruction error. Using SyMetric, we identify a set of architectural choices that significantly improve the performance of a previously proposed model for inferring latent dynamics from pixels, the Hamiltonian Generative Network (HGN). Unlike the original HGN, the improved model is able to discover an interpretable phase space with physically meaningful latents on some datasets. Furthermore, it is stable for significantly longer rollouts on a diverse range of 13 datasets, producing rollouts of essentially infinite length both forward and backwards in time with no degradation in quality on a subset of the datasets.
null
Learning with Holographic Reduced Representations
https://papers.nips.cc/paper_files/paper/2021/hash/d71dd235287466052f1630f31bde7932-Abstract.html
Ashwinkumar Ganesan, Hang Gao, Sunil Gandhi, Edward Raff, Tim Oates, James Holt, Mark McLean
https://papers.nips.cc/paper_files/paper/2021/hash/d71dd235287466052f1630f31bde7932-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13584-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d71dd235287466052f1630f31bde7932-Paper.pdf
https://openreview.net/forum?id=RX6PrcpXP-
https://papers.nips.cc/paper_files/paper/2021/file/d71dd235287466052f1630f31bde7932-Supplemental.pdf
Holographic Reduced Representations (HRR) are a method for performing symbolic AI on top of real-valued vectors by associating each vector with an abstract concept, and providing mathematical operations to manipulate vectors as if they were classic symbolic objects. This method has seen little use outside of older symbolic AI work and cognitive science. Our goal is to revisit this approach to understand if it is viable for enabling a hybrid neural-symbolic approach to learning as a differentiable component of a deep learning architecture. HRRs today are not effective in a differentiable solution due to numerical instability, a problem we solve by introducing a projection step that forces the vectors to exist at a well-behaved point in space. In doing so we improve the concept retrieval efficacy of HRRs by over $100\times$. Using multi-label classification, we demonstrate how to leverage the symbolic HRR properties to develop an output layer and loss function that learn effectively, and allow us to investigate some of the pros and cons of an HRR neuro-symbolic learning approach.
null
Learning Barrier Certificates: Towards Safe Reinforcement Learning with Zero Training-time Violations
https://papers.nips.cc/paper_files/paper/2021/hash/d71fa38b648d86602d14ac610f2e6194-Abstract.html
Yuping Luo, Tengyu Ma
https://papers.nips.cc/paper_files/paper/2021/hash/d71fa38b648d86602d14ac610f2e6194-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13585-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d71fa38b648d86602d14ac610f2e6194-Paper.pdf
https://openreview.net/forum?id=K4Su8BIivap
https://papers.nips.cc/paper_files/paper/2021/file/d71fa38b648d86602d14ac610f2e6194-Supplemental.zip
Training-time safety violations have been a major concern when we deploy reinforcement learning algorithms in the real world. This paper explores the possibility of safe RL algorithms with zero training-time safety violations in the challenging setting where we are only given a safe but trivial-reward initial policy without any prior knowledge of the dynamics and additional offline data. We propose an algorithm, Co-trained Barrier Certificate for Safe RL (CRABS), which iteratively learns barrier certificates, dynamics models, and policies. The barrier certificates are learned via adversarial training and ensure the policy's safety assuming calibrated learned dynamics. We also add a regularization term to encourage larger certified regions to enable better exploration. Empirical simulations show that zero safety violations are already challenging for a suite of simple environments with only 2-4 dimensional state space, especially if high-reward policies have to visit regions near the safety boundary. Prior methods require hundreds of violations to achieve decent rewards on these tasks, whereas our proposed algorithms incur zero violations.
null
On the Second-order Convergence Properties of Random Search Methods
https://papers.nips.cc/paper_files/paper/2021/hash/d757719ed7c2b66dd17dcee2a3cb29f4-Abstract.html
Aurelien Lucchi, Antonio Orvieto, Adamos Solomou
https://papers.nips.cc/paper_files/paper/2021/hash/d757719ed7c2b66dd17dcee2a3cb29f4-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13586-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d757719ed7c2b66dd17dcee2a3cb29f4-Paper.pdf
https://openreview.net/forum?id=1oR_gQGp3Rm
https://papers.nips.cc/paper_files/paper/2021/file/d757719ed7c2b66dd17dcee2a3cb29f4-Supplemental.pdf
We study the theoretical convergence properties of random-search methods when optimizing non-convex objective functions without having access to derivatives. We prove that standard random-search methods that do not rely on second-order information converge to a second-order stationary point. However, they suffer from an exponential complexity in terms of the input dimension of the problem. In order to address this issue, we propose a novel variant of random search that exploits negative curvature by only relying on function evaluations. We prove that this approach converges to a second-order stationary point at a much faster rate than vanilla methods: namely, the complexity in terms of the number of function evaluations is only linear in the problem dimension. We test our algorithm empirically and find good agreement with our theoretical results.
null
Noether’s Learning Dynamics: Role of Symmetry Breaking in Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/d76d8deea9c19cc9aaf2237d2bf2f785-Abstract.html
Hidenori Tanaka, Daniel Kunin
https://papers.nips.cc/paper_files/paper/2021/hash/d76d8deea9c19cc9aaf2237d2bf2f785-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13587-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d76d8deea9c19cc9aaf2237d2bf2f785-Paper.pdf
https://openreview.net/forum?id=fiPtD7iXuhn
https://papers.nips.cc/paper_files/paper/2021/file/d76d8deea9c19cc9aaf2237d2bf2f785-Supplemental.pdf
In nature, symmetry governs regularities, while symmetry breaking brings texture. In artificial neural networks, symmetry has been a central design principle to efficiently capture regularities in the world, but the role of symmetry breaking is not well understood. Here, we develop a theoretical framework to study the "geometry of learning dynamics" in neural networks, and reveal a key mechanism of explicit symmetry breaking behind the efficiency and stability of modern neural networks. To build this understanding, we model the discrete learning dynamics of gradient descent using a continuous-time Lagrangian formulation, in which the learning rule corresponds to the kinetic energy and the loss function corresponds to the potential energy. Then, we identify "kinetic symmetry breaking" (KSB), the condition when the kinetic energy explicitly breaks the symmetry of the potential function. We generalize Noether’s theorem known in physics to take into account KSB and derive the resulting motion of the Noether charge: "Noether's Learning Dynamics" (NLD). Finally, we apply NLD to neural networks with normalization layers and reveal how KSB introduces a mechanism of implicit adaptive optimization, establishing an analogy between learning dynamics induced by normalization layers and RMSProp. Overall, through the lens of Lagrangian mechanics, we have established a theoretical foundation to discover geometric design principles for the learning dynamics of neural networks.
null
A Theory of the Distortion-Perception Tradeoff in Wasserstein Space
https://papers.nips.cc/paper_files/paper/2021/hash/d77e68596c15c53c2a33ad143739902d-Abstract.html
Dror Freirich, Tomer Michaeli, Ron Meir
https://papers.nips.cc/paper_files/paper/2021/hash/d77e68596c15c53c2a33ad143739902d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13588-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d77e68596c15c53c2a33ad143739902d-Paper.pdf
https://openreview.net/forum?id=qeaT2O5fNKC
https://papers.nips.cc/paper_files/paper/2021/file/d77e68596c15c53c2a33ad143739902d-Supplemental.pdf
The lower the distortion of an estimator, the more the distribution of its outputs generally deviates from the distribution of the signals it attempts to estimate. This phenomenon, known as the perception-distortion tradeoff, has captured significant attention in image restoration, where it implies that fidelity to ground truth images comes at the expense of perceptual quality (deviation from statistics of natural images). However, despite the increasing popularity of performing comparisons on the perception-distortion plane, there remains an important open question: what is the minimal distortion that can be achieved under a given perception constraint? In this paper, we derive a closed-form expression for this distortion-perception (DP) function for the mean squared-error (MSE) distortion and Wasserstein-2 perception index. We prove that the DP function is always quadratic, regardless of the underlying distribution. This stems from the fact that estimators on the DP curve form a geodesic in Wasserstein space. In the Gaussian setting, we further provide a closed-form expression for such estimators. For general distributions, we show how these estimators can be constructed from the estimators at the two extremes of the tradeoff: the global MSE minimizer, and a minimizer of the MSE under a perfect perceptual quality constraint. The latter can be obtained as a stochastic transformation of the former.
null
Neural Production Systems
https://papers.nips.cc/paper_files/paper/2021/hash/d785bf9067f8af9e078b93cf26de2b54-Abstract.html
Anirudh Goyal ALIAS PARTH GOYAL, Aniket Didolkar, Nan Rosemary Ke, Charles Blundell, Philippe Beaudoin, Nicolas Heess, Michael C. Mozer, Yoshua Bengio
https://papers.nips.cc/paper_files/paper/2021/hash/d785bf9067f8af9e078b93cf26de2b54-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13589-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d785bf9067f8af9e078b93cf26de2b54-Paper.pdf
https://openreview.net/forum?id=xQGYquca0gB
https://papers.nips.cc/paper_files/paper/2021/file/d785bf9067f8af9e078b93cf26de2b54-Supplemental.zip
Visual environments are structured, consisting of distinct objects or entities. These entities have properties---visible or latent---that determine the manner in which they interact with one another. To partition images into entities, deep-learning researchers have proposed structural inductive biases such as slot-based architectures. To model interactions among entities, equivariant graph neural nets (GNNs) are used, but these are not particularly well suited to the task for two reasons. First, GNNs do not predispose interactions to be sparse, as relationships among independent entities are likely to be. Second, GNNs do not factorize knowledge about interactions in an entity-conditional manner. As an alternative, we take inspiration from cognitive science and resurrect a classic approach, production systems, which consist of a set of rule templates that are applied by binding placeholder variables in the rules to specific entities. Rules are scored on their match to entities, and the best fitting rules are applied to update entity properties. In a series of experiments, we demonstrate that this architecture achieves a flexible, dynamic flow of control and serves to factorize entity-specific and rule-based information. This disentangling of knowledge achieves robust future-state prediction in rich visual environments, outperforming state-of-the-art methods using GNNs, and allows for the extrapolation from simple (few object) environments to more complex environments.
null
Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization
https://papers.nips.cc/paper_files/paper/2021/hash/d79c6256b9bdac53a55801a066b70da3-Abstract.html
Mher Safaryan, Filip Hanzely, Peter Richtarik
https://papers.nips.cc/paper_files/paper/2021/hash/d79c6256b9bdac53a55801a066b70da3-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13590-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d79c6256b9bdac53a55801a066b70da3-Paper.pdf
https://openreview.net/forum?id=MGX69TBAi07
https://papers.nips.cc/paper_files/paper/2021/file/d79c6256b9bdac53a55801a066b70da3-Supplemental.zip
Large scale distributed optimization has become the default tool for the training of supervised machine learning models with a large number of parameters and training data. Recent advancements in the field provide several mechanisms for speeding up the training, including {\em compressed communication}, {\em variance reduction} and {\em acceleration}. However, none of these methods is capable of exploiting the inherently rich data-dependent smoothness structure of the local losses beyond standard smoothness constants. In this paper, we argue that when training supervised models, {\em smoothness matrices}---information-rich generalizations of the ubiquitous smoothness constants---can and should be exploited for further dramatic gains, both in theory and practice. In order to further alleviate the communication burden inherent in distributed optimization, we propose a novel communication sparsification strategy that can take full advantage of the smoothness matrices associated with local losses. To showcase the power of this tool, we describe how our sparsification technique can be adapted to three distributed optimization algorithms---DCGD, DIANA and ADIANA---yielding significant savings in terms of communication complexity. The new methods always outperform the baselines, often dramatically so.
null
Increasing Liquid State Machine Performance with Edge-of-Chaos Dynamics Organized by Astrocyte-modulated Plasticity
https://papers.nips.cc/paper_files/paper/2021/hash/d79c8788088c2193f0244d8f1f36d2db-Abstract.html
Vladimir Ivanov, Konstantinos Michmizos
https://papers.nips.cc/paper_files/paper/2021/hash/d79c8788088c2193f0244d8f1f36d2db-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13591-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d79c8788088c2193f0244d8f1f36d2db-Paper.pdf
https://openreview.net/forum?id=0isj8oxdQys
https://papers.nips.cc/paper_files/paper/2021/file/d79c8788088c2193f0244d8f1f36d2db-Supplemental.pdf
The liquid state machine (LSM) combines low training complexity and biological plausibility, which has made it an attractive machine learning framework for edge and neuromorphic computing paradigms. Originally proposed as a model of brain computation, the LSM tunes its internal weights without backpropagation of gradients, which results in lower performance compared to multi-layer neural networks. Recent findings in neuroscience suggest that astrocytes, a long-neglected non-neuronal brain cell, modulate synaptic plasticity and brain dynamics, tuning brain networks to the vicinity of the computationally optimal critical phase transition between order and chaos. Inspired by this disruptive understanding of how brain networks self-tune, we propose the neuron-astrocyte liquid state machine (NALSM) that addresses under-performance through self-organized near-critical dynamics. Similar to its biological counterpart, the astrocyte model integrates neuronal activity and provides global feedback to spike-timing-dependent plasticity (STDP), which self-organizes NALSM dynamics around a critical branching factor that is associated with the edge-of-chaos. We demonstrate that NALSM achieves state-of-the-art accuracy versus comparable LSM methods, without the need for data-specific hand-tuning. With a top accuracy of $97.61\%$ on MNIST, $97.51\%$ on N-MNIST, and $85.84\%$ on Fashion-MNIST, NALSM achieved comparable performance to current fully-connected multi-layer spiking neural networks trained via backpropagation. Our findings suggest that the further development of brain-inspired machine learning methods has the potential to reach the performance of deep learning, with the added benefits of supporting robust and energy-efficient neuromorphic computing on the edge.
null
Fair Sortition Made Transparent
https://papers.nips.cc/paper_files/paper/2021/hash/d7b431b1a0cc5f032399870ff4710743-Abstract.html
Bailey Flanigan, Gregory Kehne, Ariel D. Procaccia
https://papers.nips.cc/paper_files/paper/2021/hash/d7b431b1a0cc5f032399870ff4710743-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13592-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d7b431b1a0cc5f032399870ff4710743-Paper.pdf
https://openreview.net/forum?id=kwU8HhoUi4W
https://papers.nips.cc/paper_files/paper/2021/file/d7b431b1a0cc5f032399870ff4710743-Supplemental.pdf
Sortition is an age-old democratic paradigm, widely manifested today through the random selection of citizens' assemblies. Recently-deployed algorithms select assemblies \textit{maximally fairly}, meaning that subject to demographic quotas, they give all potential participants as equal a chance as possible of being chosen. While these fairness gains can bolster the legitimacy of citizens' assemblies and facilitate their uptake, existing algorithms remain limited by their lack of transparency. To overcome this hurdle, in this work we focus on panel selection by uniform lottery, which is easy to realize in an observable way. By this approach, the final assembly is selected by uniformly sampling some pre-selected set of $m$ possible assemblies. We provide theoretical guarantees on the fairness attainable via this type of uniform lottery, as compared to the existing maximally fair but opaque algorithms, for two different fairness objectives. We complement these results with experiments on real-world instances that demonstrate the viability of the uniform lottery approach as a method of selecting assemblies both fairly and transparently.
null
A Max-Min Entropy Framework for Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2021/hash/d7b76edf790923bf7177f7ebba5978df-Abstract.html
Seungyul Han, Youngchul Sung
https://papers.nips.cc/paper_files/paper/2021/hash/d7b76edf790923bf7177f7ebba5978df-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13593-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d7b76edf790923bf7177f7ebba5978df-Paper.pdf
https://openreview.net/forum?id=Hj_PxeC8CiV
https://papers.nips.cc/paper_files/paper/2021/file/d7b76edf790923bf7177f7ebba5978df-Supplemental.zip
In this paper, we propose a max-min entropy framework for reinforcement learning (RL) to overcome the limitation of the soft actor-critic (SAC) algorithm implementing the maximum entropy RL in model-free sample-based learning. Whereas the maximum entropy RL guides learning for policies to reach states with high entropy in the future, the proposed max-min entropy framework aims to learn to visit states with low entropy and maximize the entropy of these low-entropy states to promote better exploration. For general Markov decision processes (MDPs), an efficient algorithm is constructed under the proposed max-min entropy framework based on disentanglement of exploration and exploitation. Numerical results show that the proposed algorithm yields drastic performance improvement over the current state-of-the-art RL algorithms.
null
Reward is enough for convex MDPs
https://papers.nips.cc/paper_files/paper/2021/hash/d7e4cdde82a894b8f633e6d61a01ef15-Abstract.html
Tom Zahavy, Brendan O'Donoghue, Guillaume Desjardins, Satinder Singh
https://papers.nips.cc/paper_files/paper/2021/hash/d7e4cdde82a894b8f633e6d61a01ef15-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13594-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d7e4cdde82a894b8f633e6d61a01ef15-Paper.pdf
https://openreview.net/forum?id=ELndVeVA-TR
https://papers.nips.cc/paper_files/paper/2021/file/d7e4cdde82a894b8f633e6d61a01ef15-Supplemental.pdf
Maximising a cumulative reward function that is Markov and stationary, i.e., defined over state-action pairs and independent of time, is sufficient to capture many kinds of goals in a Markov decision process (MDP). However, not all goals can be captured in this manner. In this paper we study convex MDPs in which goals are expressed as convex functions of the stationary distribution and show that they cannot be formulated using stationary reward functions. Convex MDPs generalize the standard reinforcement learning (RL) problem formulation to a larger framework that includes many supervised and unsupervised RL problems, such as apprenticeship learning, constrained MDPs, and so-called 'pure exploration'. Our approach is to reformulate the convex MDP problem as a min-max game involving policy and cost (negative reward) 'players', using Fenchel duality. We propose a meta-algorithm for solving this problem and show that it unifies many existing algorithms in the literature.
null
Fast Doubly-Adaptive MCMC to Estimate the Gibbs Partition Function with Weak Mixing Time Bounds
https://papers.nips.cc/paper_files/paper/2021/hash/d7f14b4988c30cc40e5e7b7d157bc018-Abstract.html
Shahrzad Haddadan, Yue Zhuang, Cyrus Cousins, Eli Upfal
https://papers.nips.cc/paper_files/paper/2021/hash/d7f14b4988c30cc40e5e7b7d157bc018-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13595-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d7f14b4988c30cc40e5e7b7d157bc018-Paper.pdf
https://openreview.net/forum?id=x8k1nAoGu1U
https://papers.nips.cc/paper_files/paper/2021/file/d7f14b4988c30cc40e5e7b7d157bc018-Supplemental.pdf
We present a novel method for reducing the computational complexity of rigorously estimating the partition functions of Gibbs (or Boltzmann) distributions, which arise ubiquitously in probabilistic graphical models. A major obstacle to applying the Gibbs distribution in practice is the need to estimate its partition function (normalizing constant). The state of the art in addressing this problem consists of multi-stage algorithms, which comprise a cooling schedule and a mean estimator in each step of the schedule. While the cooling schedule in these algorithms is adaptive, the mean estimate computations use MCMC as a black-box to draw approximately-independent samples. Here we develop a doubly adaptive approach, combining the adaptive cooling schedule with an adaptive MCMC mean estimator, whose number of Markov chain steps adapts dynamically to the underlying chain. Through rigorous theoretical analysis, we prove that our method outperforms the state-of-the-art algorithms in several factors: (1) The computational complexity of our method is smaller; (2) Our method is less sensitive to loose bounds on mixing times, an inherent component of these algorithms; and (3) The improvement obtained by our method is particularly significant in the most challenging regime of high-precision estimates. We demonstrate the advantage of our method in experiments run on classic factor graphs, such as voting models and Ising models.
null
Does enforcing fairness mitigate biases caused by subpopulation shift?
https://papers.nips.cc/paper_files/paper/2021/hash/d800149d2f947ad4d64f34668f8b20f6-Abstract.html
Subha Maity, Debarghya Mukherjee, Mikhail Yurochkin, Yuekai Sun
https://papers.nips.cc/paper_files/paper/2021/hash/d800149d2f947ad4d64f34668f8b20f6-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13596-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d800149d2f947ad4d64f34668f8b20f6-Paper.pdf
https://openreview.net/forum?id=6mUrD5rg-UU
https://papers.nips.cc/paper_files/paper/2021/file/d800149d2f947ad4d64f34668f8b20f6-Supplemental.pdf
Many instances of algorithmic bias are caused by subpopulation shifts. For example, ML models often perform worse on demographic groups that are underrepresented in the training data. In this paper, we study whether enforcing algorithmic fairness during training improves the performance of the trained model in the \emph{target domain}. On one hand, we conceive scenarios in which enforcing fairness does not improve performance in the target domain. In fact, it may even harm performance. On the other hand, we derive necessary and sufficient conditions under which enforcing algorithmic fairness leads to the Bayes model in the target domain. We also illustrate the practical implications of our theoretical results in simulations and on real data.
null
Implicit Deep Adaptive Design: Policy-Based Experimental Design without Likelihoods
https://papers.nips.cc/paper_files/paper/2021/hash/d811406316b669ad3d370d78b51b1d2e-Abstract.html
Desi R Ivanova, Adam Foster, Steven Kleinegesse, Michael U. Gutmann, Thomas Rainforth
https://papers.nips.cc/paper_files/paper/2021/hash/d811406316b669ad3d370d78b51b1d2e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13597-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d811406316b669ad3d370d78b51b1d2e-Paper.pdf
https://openreview.net/forum?id=on2DNSz2Qg
https://papers.nips.cc/paper_files/paper/2021/file/d811406316b669ad3d370d78b51b1d2e-Supplemental.zip
We introduce implicit Deep Adaptive Design (iDAD), a new method for performing adaptive experiments in real-time with implicit models. iDAD amortizes the cost of Bayesian optimal experimental design (BOED) by learning a design policy network upfront, which can then be deployed quickly at the time of the experiment. The iDAD network can be trained on any model which simulates differentiable samples, unlike previous design policy work that requires a closed form likelihood and conditionally independent experiments. At deployment, iDAD allows design decisions to be made in milliseconds, in contrast to traditional BOED approaches that require heavy computation during the experiment itself. We illustrate the applicability of iDAD on a number of experiments, and show that it provides a fast and effective mechanism for performing adaptive design with implicit models.
null
Sample-Efficient Learning of Stackelberg Equilibria in General-Sum Games
https://papers.nips.cc/paper_files/paper/2021/hash/d82118376df344b0010f53909b961db3-Abstract.html
Yu Bai, Chi Jin, Huan Wang, Caiming Xiong
https://papers.nips.cc/paper_files/paper/2021/hash/d82118376df344b0010f53909b961db3-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13598-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d82118376df344b0010f53909b961db3-Paper.pdf
https://openreview.net/forum?id=LZOG2YgDiRn
https://papers.nips.cc/paper_files/paper/2021/file/d82118376df344b0010f53909b961db3-Supplemental.pdf
Real-world applications such as economics and policy making often involve solving multi-agent games with two unique features: (1) The agents are inherently asymmetric and partitioned into leaders and followers; (2) The agents have different reward functions, thus the game is general-sum. The majority of existing results in this field focuses on either symmetric solution concepts (e.g. Nash equilibrium) or zero-sum games. It remains open how to learn the Stackelberg equilibrium---an asymmetric analog of the Nash equilibrium---in general-sum games efficiently from noisy samples. This paper initiates the theoretical study of sample-efficient learning of the Stackelberg equilibrium, in the bandit feedback setting where we only observe noisy samples of the reward. We consider three representative two-player general-sum games: bandit games, bandit-reinforcement learning (bandit-RL) games, and linear bandit games. In all these games, we identify a fundamental gap between the exact value of the Stackelberg equilibrium and its estimated version using finitely many noisy samples, which cannot be closed information-theoretically regardless of the algorithm. We then establish sharp positive results on sample-efficient learning of Stackelberg equilibria with value optimal up to the gap identified above, with matching lower bounds in the dependency on the gap, error tolerance, and the size of the action spaces. Overall, our results unveil unique challenges in learning Stackelberg equilibria under noisy bandit feedback, which we hope could shed light on future research on this topic.
null
Non-approximate Inference for Collective Graphical Models on Path Graphs via Discrete Difference of Convex Algorithm
https://papers.nips.cc/paper_files/paper/2021/hash/d827f12e35eae370ba9c65b7f6026695-Abstract.html
Yasunori Akagi, Naoki Marumo, Hideaki Kim, Takeshi Kurashima, Hiroyuki Toda
https://papers.nips.cc/paper_files/paper/2021/hash/d827f12e35eae370ba9c65b7f6026695-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13599-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d827f12e35eae370ba9c65b7f6026695-Paper.pdf
https://openreview.net/forum?id=QUaKP7557s
https://papers.nips.cc/paper_files/paper/2021/file/d827f12e35eae370ba9c65b7f6026695-Supplemental.pdf
The importance of aggregated count data, which is calculated from the data of multiple individuals, continues to increase. Collective Graphical Model (CGM) is a probabilistic approach to the analysis of aggregated data. One of the most important operations in CGM is maximum a posteriori (MAP) inference of unobserved variables under given observations. Because the MAP inference problem for general CGMs has been shown to be NP-hard, an approach that solves an approximate problem has been proposed. However, this approach has two major drawbacks. First, the quality of the solution deteriorates when the values in the count tables are small, because the approximation becomes inaccurate. Second, since continuous relaxation is applied, the integrality constraints of the output are violated. To resolve these problems, this paper proposes a new method for MAP inference for CGMs on path graphs. Our method is based on the Difference of Convex Algorithm (DCA), which is a general methodology to minimize a function represented as the sum of a convex function and a concave function. In our algorithm, important subroutines in DCA can be efficiently calculated by minimum convex cost flow algorithms. Experiments show that the proposed method outputs higher quality solutions than the conventional approach.
null
Implicit Task-Driven Probability Discrepancy Measure for Unsupervised Domain Adaptation
https://papers.nips.cc/paper_files/paper/2021/hash/d82f9436247aa0049767b776dceab4ed-Abstract.html
Mao Li, Kaiqi Jiang, Xinhua Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/d82f9436247aa0049767b776dceab4ed-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13600-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d82f9436247aa0049767b776dceab4ed-Paper.pdf
https://openreview.net/forum?id=DvxH_RCnSj3
https://papers.nips.cc/paper_files/paper/2021/file/d82f9436247aa0049767b776dceab4ed-Supplemental.pdf
The probability discrepancy measure is a fundamental construct for numerous machine learning models such as weakly supervised learning and generative modeling. However, most measures overlook the fact that the distributions are not the end-product of learning, but are the basis of a downstream predictor. Therefore it is important to warp the probability discrepancy measure towards the end tasks, and we hence propose a new bi-level optimization based approach so that the two distributions are compared not uniformly against the entire hypothesis space, but only with respect to the optimal predictor for the downstream end task. When applied to margin disparity discrepancy and contrastive domain discrepancy, our method significantly improves the performance in unsupervised domain adaptation, and enjoys a much more principled training process.
null
SBO-RNN: Reformulating Recurrent Neural Networks via Stochastic Bilevel Optimization
https://papers.nips.cc/paper_files/paper/2021/hash/d87ca511e2a8593c8039ef732f5bffed-Abstract.html
Ziming Zhang, Yun Yue, Guojun Wu, Yanhua Li, Haichong Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/d87ca511e2a8593c8039ef732f5bffed-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13601-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d87ca511e2a8593c8039ef732f5bffed-Paper.pdf
https://openreview.net/forum?id=r1pprsDm185
https://papers.nips.cc/paper_files/paper/2021/file/d87ca511e2a8593c8039ef732f5bffed-Supplemental.pdf
In this paper we consider the training stability of recurrent neural networks (RNNs) and propose a family of RNNs, namely SBO-RNN, that can be formulated using stochastic bilevel optimization (SBO). With the help of stochastic gradient descent (SGD), we manage to convert the SBO problem into an RNN where the feedforward and backpropagation solve the lower and upper-level optimization for learning hidden states and their hyperparameters, respectively. We prove that under mild conditions there is no vanishing or exploding gradient in training SBO-RNN. Empirically we demonstrate our approach with superior performance on several benchmark datasets, with fewer parameters, less training data, and much faster convergence. Code is available at https://zhang-vislab.github.io.
null
Navigating to the Best Policy in Markov Decision Processes
https://papers.nips.cc/paper_files/paper/2021/hash/d9896106ca98d3d05b8cbdf4fd8b13a1-Abstract.html
Aymen Al Marjani, Aurélien Garivier, Alexandre Proutiere
https://papers.nips.cc/paper_files/paper/2021/hash/d9896106ca98d3d05b8cbdf4fd8b13a1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13602-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d9896106ca98d3d05b8cbdf4fd8b13a1-Paper.pdf
https://openreview.net/forum?id=hY4rUScQOe
https://papers.nips.cc/paper_files/paper/2021/file/d9896106ca98d3d05b8cbdf4fd8b13a1-Supplemental.pdf
We investigate the classical active pure exploration problem in Markov Decision Processes, where the agent sequentially selects actions and, from the resulting system trajectory, aims at identifying the best policy as fast as possible. We propose a problem-dependent lower bound on the average number of steps required before a correct answer can be given with probability at least $1-\delta$. We further provide the first algorithm with an instance-specific sample complexity in this setting. This algorithm addresses the general case of communicating MDPs; we also propose a variant with a reduced exploration rate (and hence faster convergence) under an additional ergodicity assumption. This work extends previous results relative to the \emph{generative setting}~\cite{pmlr-v139-marjani21a}, where the agent could at each step query the random outcome of any (state, action) pair. In contrast, we show here how to deal with the \emph{navigation constraints}, induced by the \emph{online setting}. Our analysis relies on an ergodic theorem for non-homogeneous Markov chains which we consider of wide interest in the analysis of Markov Decision Processes.
null
A Faster Decentralized Algorithm for Nonconvex Minimax Problems
https://papers.nips.cc/paper_files/paper/2021/hash/d994e3728ba5e28defb88a3289cd7ee8-Abstract.html
Wenhan Xian, Feihu Huang, Yanfu Zhang, Heng Huang
https://papers.nips.cc/paper_files/paper/2021/hash/d994e3728ba5e28defb88a3289cd7ee8-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13603-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d994e3728ba5e28defb88a3289cd7ee8-Paper.pdf
https://openreview.net/forum?id=rjIjkiyAJao
https://papers.nips.cc/paper_files/paper/2021/file/d994e3728ba5e28defb88a3289cd7ee8-Supplemental.pdf
In this paper, we study the nonconvex-strongly-concave minimax optimization problem in the decentralized setting. Minimax problems are attracting increasing attention because of their popular practical applications such as policy evaluation and adversarial training. As training data become larger, distributed training has been broadly adopted in machine learning tasks. Recent research shows that decentralized distributed data-parallel training techniques are especially promising, because they achieve efficient communication and avoid both the bottleneck problem on the central node and the latency of low-bandwidth networks. However, decentralized minimax problems have seldom been studied in the literature, and the existing methods suffer from very high gradient complexity. To address this challenge, we propose a new, faster decentralized algorithm, named DM-HSGD, for nonconvex minimax problems, using the variance-reduction technique of hybrid stochastic gradient descent. We prove that our DM-HSGD algorithm achieves a stochastic first-order oracle (SFO) complexity of $O(\kappa^3 \epsilon^{-3})$ for the decentralized stochastic nonconvex-strongly-concave problem of finding an $\epsilon$-stationary point, which improves upon the existing best theoretical results. Moreover, we also prove that our algorithm achieves linear speedup with respect to the number of workers. Our experiments in decentralized settings show the superior performance of our new algorithm.
null
Generalization Bounds For Meta-Learning: An Information-Theoretic Analysis
https://papers.nips.cc/paper_files/paper/2021/hash/d9d347f57ae11f34235b4555710547d8-Abstract.html
Qi CHEN, Changjian Shui, Mario Marchand
https://papers.nips.cc/paper_files/paper/2021/hash/d9d347f57ae11f34235b4555710547d8-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13604-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d9d347f57ae11f34235b4555710547d8-Paper.pdf
https://openreview.net/forum?id=9J2wV5E1Aq_
https://papers.nips.cc/paper_files/paper/2021/file/d9d347f57ae11f34235b4555710547d8-Supplemental.pdf
We derive a novel information-theoretic analysis of the generalization property of meta-learning algorithms. Concretely, our analysis proposes a generic understanding in both the conventional learning-to-learn framework \citep{amit2018meta} and the modern model-agnostic meta-learning (MAML) algorithms \citep{finn2017model}. Moreover, we provide a data-dependent generalization bound for the stochastic variant of MAML, which is \emph{non-vacuous} for deep few-shot learning. As compared to previous bounds that depend on the square norms of gradients, empirical validations on both simulated data and a well-known few-shot benchmark show that our bound is orders of magnitude tighter in most conditions.
null
ReLU Regression with Massart Noise
https://papers.nips.cc/paper_files/paper/2021/hash/d9d3837ee7981e8c064774da6cdd98bf-Abstract.html
Ilias Diakonikolas, Jong Ho Park, Christos Tzamos
https://papers.nips.cc/paper_files/paper/2021/hash/d9d3837ee7981e8c064774da6cdd98bf-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13605-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d9d3837ee7981e8c064774da6cdd98bf-Paper.pdf
https://openreview.net/forum?id=7BlQMwp_44p
https://papers.nips.cc/paper_files/paper/2021/file/d9d3837ee7981e8c064774da6cdd98bf-Supplemental.pdf
We study the fundamental problem of ReLU regression, where the goal is to fit Rectified Linear Units (ReLUs) to data. This supervised learning task is efficiently solvable in the realizable setting, but is known to be computationally hard with adversarial label noise. In this work, we focus on ReLU regression in the Massart noise model, a natural and well-studied semi-random noise model. In this model, the label of every point is generated according to a function in the class, but an adversary is allowed to change this value arbitrarily with some probability, which is {\em at most} $\eta < 1/2$. We develop an efficient algorithm that achieves exact parameter recovery in this model under mild anti-concentration assumptions on the underlying distribution. Such assumptions are necessary for exact recovery to be information-theoretically possible. We demonstrate that our algorithm significantly outperforms naive applications of $\ell_1$ and $\ell_2$ regression on both synthetic and real data.
null
Identification of the Generalized Condorcet Winner in Multi-dueling Bandits
https://papers.nips.cc/paper_files/paper/2021/hash/d9de6a144a3cc26cb4b3c47b206a121a-Abstract.html
Björn Haddenhorst, Viktor Bengs, Eyke Hüllermeier
https://papers.nips.cc/paper_files/paper/2021/hash/d9de6a144a3cc26cb4b3c47b206a121a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13606-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d9de6a144a3cc26cb4b3c47b206a121a-Paper.pdf
https://openreview.net/forum?id=omDF-uQ_OZ
https://papers.nips.cc/paper_files/paper/2021/file/d9de6a144a3cc26cb4b3c47b206a121a-Supplemental.pdf
The reliable identification of the “best” arm while keeping the sample complexity as low as possible is a common task in the field of multi-armed bandits. In the multi-dueling variant of multi-armed bandits, where feedback is provided in the form of a winning arm among a set of k chosen ones, a reasonable notion of best arm is the generalized Condorcet winner (GCW). The latter is the arm that has the greatest probability of being the winner in each subset containing it. In this paper, we derive lower bounds on the sample complexity for the task of identifying the GCW under various assumptions. As a by-product, our lower bound results provide new insights for the special case of dueling bandits (k = 2). We propose the Dvoretzky–Kiefer–Wolfowitz tournament (DKWT) algorithm, which we prove to be nearly optimal. In a numerical study, we show that DKWT empirically outperforms current state-of-the-art algorithms, even in the special case of dueling bandits or under a Plackett-Luce assumption on the feedback mechanism.
null
Robust Inverse Reinforcement Learning under Transition Dynamics Mismatch
https://papers.nips.cc/paper_files/paper/2021/hash/d9e74f47610385b11e295eec4c58d473-Abstract.html
Luca Viano, Yu-Ting Huang, Parameswaran Kamalaruban, Adrian Weller, Volkan Cevher
https://papers.nips.cc/paper_files/paper/2021/hash/d9e74f47610385b11e295eec4c58d473-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13607-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d9e74f47610385b11e295eec4c58d473-Paper.pdf
https://openreview.net/forum?id=t8HduwpoQQv
https://papers.nips.cc/paper_files/paper/2021/file/d9e74f47610385b11e295eec4c58d473-Supplemental.pdf
We study the inverse reinforcement learning (IRL) problem under a transition dynamics mismatch between the expert and the learner. Specifically, we consider the Maximum Causal Entropy (MCE) IRL learner model and provide a tight upper bound on the learner's performance degradation based on the $\ell_1$-distance between the transition dynamics of the expert and the learner. Leveraging insights from the Robust RL literature, we propose a robust MCE IRL algorithm, which is a principled approach to help with this mismatch. Finally, we empirically demonstrate the stable performance of our algorithm compared to the standard MCE IRL algorithm under transition dynamics mismatches in both finite and continuous MDP problems.
null
Re-ranking for image retrieval and transductive few-shot classification
https://papers.nips.cc/paper_files/paper/2021/hash/d9fc0cdb67638d50f411432d0d41d0ba-Abstract.html
Xi SHEN, Yang Xiao, Shell Xu Hu, Othman Sbai, Mathieu Aubry
https://papers.nips.cc/paper_files/paper/2021/hash/d9fc0cdb67638d50f411432d0d41d0ba-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13608-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d9fc0cdb67638d50f411432d0d41d0ba-Paper.pdf
https://openreview.net/forum?id=sneJD9juaNl
https://papers.nips.cc/paper_files/paper/2021/file/d9fc0cdb67638d50f411432d0d41d0ba-Supplemental.pdf
In the problems of image retrieval and few-shot classification, the mainstream approaches focus on learning a better feature representation. However, directly tackling the distance or similarity measure between images could also be efficient. To this end, we revisit the idea of re-ranking the top-k retrieved images in the context of image retrieval (e.g., the k-reciprocal nearest neighbors) and generalize this idea to transductive few-shot learning. We propose to meta-learn the re-ranking updates such that the similarity graph converges towards the target similarity graph induced by the image labels. Specifically, the re-ranking module takes as input an initial similarity graph between the query image and the contextual images using a pre-trained feature extractor, and predicts an improved similarity graph by leveraging the structure among the involved images. We show that our re-ranking approach can be applied to unseen images and can further boost existing approaches for both image retrieval and few-shot learning problems. Our approach operates either independently or in conjunction with classical re-ranking approaches, yielding clear and consistent improvements on image retrieval (CUB, Cars, SOP, rOxford5K and rParis6K) and transductive few-shot classification (Mini-ImageNet, tiered-ImageNet and CIFAR-FS) benchmarks. Our code is available at https://imagine.enpc.fr/~shenx/SSR/.
null
Post-processing for Individual Fairness
https://papers.nips.cc/paper_files/paper/2021/hash/d9fea4ca7e4a74c318ec27c1deb0796c-Abstract.html
Felix Petersen, Debarghya Mukherjee, Yuekai Sun, Mikhail Yurochkin
https://papers.nips.cc/paper_files/paper/2021/hash/d9fea4ca7e4a74c318ec27c1deb0796c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13609-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/d9fea4ca7e4a74c318ec27c1deb0796c-Paper.pdf
https://openreview.net/forum?id=qGeqg4_hA2
https://papers.nips.cc/paper_files/paper/2021/file/d9fea4ca7e4a74c318ec27c1deb0796c-Supplemental.pdf
Post-processing in algorithmic fairness is a versatile approach for correcting bias in ML systems that are already used in production. The main appeal of post-processing is that it avoids expensive retraining. In this work, we propose general post-processing algorithms for individual fairness (IF). We consider a setting where the learner only has access to the predictions of the original model and a similarity graph between individuals, guiding the desired fairness constraints. We cast the IF post-processing problem as a graph smoothing problem corresponding to graph Laplacian regularization that preserves the desired "treat similar individuals similarly" interpretation. Our theoretical results demonstrate the connection of the new objective function to a local relaxation of the original individual fairness. Empirically, our post-processing algorithms correct individual biases in large-scale NLP models such as BERT, while preserving accuracy.
null
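The post-processing objective above can be illustrated with a minimal graph-Laplacian smoothing sketch. This is a generic instance of Laplacian regularization, not the paper's exact algorithm: given base-model scores `y` and a similarity graph `W` over individuals (both names are illustrative), it solves min_f ||f - y||^2 + lam * f^T L f, whose closed form is f = (I + lam*L)^{-1} y, pulling similar individuals' scores together ("treat similar individuals similarly").

```python
import numpy as np

def laplacian_smooth(y, W, lam=1.0):
    # unnormalized graph Laplacian L = D - W
    d = W.sum(axis=1)
    L = np.diag(d) - W
    n = len(y)
    # closed-form minimizer of ||f - y||^2 + lam * f^T L f
    return np.linalg.solve(np.eye(n) + lam * L, y)

# Three individuals; the first two are deemed similar, the third is isolated.
W = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 0.]])
y = np.array([0.9, 0.1, 0.5])
f = laplacian_smooth(y, W, lam=10.0)
# Similar individuals' scores move toward each other; the isolated score is unchanged.
```

With a large `lam`, the two similar individuals end up with nearly equal scores while their average is preserved, which is the intended "smoothing" effect of the constraint.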
OpenMatch: Open-Set Semi-supervised Learning with Open-set Consistency Regularization
https://papers.nips.cc/paper_files/paper/2021/hash/da11e8cd1811acb79ccf0fd62cd58f86-Abstract.html
Kuniaki Saito, Donghyun Kim, Kate Saenko
https://papers.nips.cc/paper_files/paper/2021/hash/da11e8cd1811acb79ccf0fd62cd58f86-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13610-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/da11e8cd1811acb79ccf0fd62cd58f86-Paper.pdf
https://openreview.net/forum?id=77cNKCCjgw
https://papers.nips.cc/paper_files/paper/2021/file/da11e8cd1811acb79ccf0fd62cd58f86-Supplemental.pdf
Semi-supervised learning (SSL) is an effective means to leverage unlabeled data to improve a model’s performance. Typical SSL methods like FixMatch assume that labeled and unlabeled data share the same label space. However, in practice, unlabeled data can contain categories unseen in the labeled set, i.e., outliers, which can significantly harm the performance of SSL algorithms. To address this problem, we propose a novel Open-set Semi-Supervised Learning (OSSL) approach called OpenMatch. Learning representations of inliers while rejecting outliers is essential for the success of OSSL. To this end, OpenMatch unifies FixMatch with novelty detection based on one-vs-all (OVA) classifiers. The OVA-classifier outputs the confidence score of a sample being an inlier, providing a threshold to detect outliers. Another key contribution is an open-set soft-consistency regularization loss, which enhances the smoothness of the OVA-classifier with respect to input transformations and greatly improves outlier detection. OpenMatch achieves state-of-the-art performance on three datasets, and even outperforms a fully supervised model in detecting outliers unseen in unlabeled data on CIFAR10. The code is available at https://github.com/VisionLearningGroup/OP_Match.
null
End-to-End Training of Multi-Document Reader and Retriever for Open-Domain Question Answering
https://papers.nips.cc/paper_files/paper/2021/hash/da3fde159d754a2555eaa198d2d105b2-Abstract.html
Devendra Singh, Siva Reddy, Will Hamilton, Chris Dyer, Dani Yogatama
https://papers.nips.cc/paper_files/paper/2021/hash/da3fde159d754a2555eaa198d2d105b2-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13611-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/da3fde159d754a2555eaa198d2d105b2-Paper.pdf
https://openreview.net/forum?id=5KWmB6JePx
https://papers.nips.cc/paper_files/paper/2021/file/da3fde159d754a2555eaa198d2d105b2-Supplemental.pdf
We present an end-to-end differentiable training method for retrieval-augmented open-domain question answering systems that combine information from multiple retrieved documents when generating answers. We model retrieval decisions as latent variables over sets of relevant documents. Since marginalizing over sets of retrieved documents is computationally hard, we approximate this using an expectation-maximization algorithm. We iteratively estimate the value of our latent variable (the set of relevant documents for a given question) and then use this estimate to update the retriever and reader parameters. We hypothesize that such end-to-end training allows training signals to flow to the reader and then to the retriever better than stage-wise training. This results in a retriever that is able to select more relevant documents for a question and a reader that is trained on more accurate documents to generate an answer. Experiments on three benchmark datasets demonstrate that our proposed method outperforms all existing approaches of comparable size by 2-3% absolute exact match points, achieving new state-of-the-art results. Our results also demonstrate the feasibility of learning to retrieve to improve answer generation without explicit supervision of retrieval decisions.
null
Fast Algorithms for $L_\infty$-constrained S-rectangular Robust MDPs
https://papers.nips.cc/paper_files/paper/2021/hash/da4fb5c6e93e74d3df8527599fa62642-Abstract.html
Bahram Behzadian, Marek Petrik, Chin Pang Ho
https://papers.nips.cc/paper_files/paper/2021/hash/da4fb5c6e93e74d3df8527599fa62642-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13612-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/da4fb5c6e93e74d3df8527599fa62642-Paper.pdf
https://openreview.net/forum?id=IdqbmJx-urQ
null
Robust Markov decision processes (RMDPs) are a useful building block of robust reinforcement learning algorithms but can be hard to solve. This paper proposes a fast, exact algorithm for computing the Bellman operator for S-rectangular robust Markov decision processes with $L_\infty$-constrained rectangular ambiguity sets. The algorithm combines a novel homotopy continuation method with a bisection method to solve S-rectangular ambiguity in quasi-linear time in the number of states and actions. The algorithm improves on the cubic time required by leading general linear programming methods. Our experimental results confirm the practical viability of our method and show that it outperforms a leading commercial optimization package by several orders of magnitude.
null
Instance-optimal Mean Estimation Under Differential Privacy
https://papers.nips.cc/paper_files/paper/2021/hash/da54dd5a0398011cdfa50d559c2c0ef8-Abstract.html
Ziyue Huang, Yuting Liang, Ke Yi
https://papers.nips.cc/paper_files/paper/2021/hash/da54dd5a0398011cdfa50d559c2c0ef8-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13613-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/da54dd5a0398011cdfa50d559c2c0ef8-Paper.pdf
https://openreview.net/forum?id=AjgFqUoD4U
https://papers.nips.cc/paper_files/paper/2021/file/da54dd5a0398011cdfa50d559c2c0ef8-Supplemental.zip
Mean estimation under differential privacy is a fundamental problem, but worst-case optimal mechanisms do not offer meaningful utility guarantees in practice when the global sensitivity is very large. Instead, various heuristics have been proposed to reduce the error on real-world data that do not resemble the worst-case instance. This paper takes a principled approach, yielding a mechanism that is instance-optimal in a strong sense. In addition to its theoretical optimality, the mechanism is also simple and practical, and adapts to a variety of data characteristics without the need of parameter tuning. It easily extends to the local and shuffle model as well.
null
Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis
https://papers.nips.cc/paper_files/paper/2021/hash/da94cbeff56cfda50785df477941308b-Abstract.html
Thomas FEL, Remi Cadene, Mathieu Chalvidal, Matthieu Cord, David Vigouroux, Thomas Serre
https://papers.nips.cc/paper_files/paper/2021/hash/da94cbeff56cfda50785df477941308b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13614-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/da94cbeff56cfda50785df477941308b-Paper.pdf
https://openreview.net/forum?id=hA-PHQGOjqQ
https://papers.nips.cc/paper_files/paper/2021/file/da94cbeff56cfda50785df477941308b-Supplemental.pdf
We describe a novel attribution method which is grounded in Sensitivity Analysis and uses Sobol indices. Beyond modeling the individual contributions of image regions, Sobol indices provide an efficient way to capture higher-order interactions between image regions and their contributions to a neural network's prediction through the lens of variance. We describe an approach that makes the computation of these indices efficient for high-dimensional problems by using perturbation masks coupled with efficient estimators to handle the high dimensionality of images. Importantly, we show that the proposed method leads to favorable scores on standard benchmarks for vision (and language models) while drastically reducing the computing time compared to other black-box methods -- even surpassing the accuracy of state-of-the-art white-box methods which require access to internal representations. Our code is freely available: github.com/fel-thomas/Sobol-Attribution-Method.
null
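The first-order Sobol index measures how much of a function's output variance is explained by a single input, S_i = Var(E[f(X)|X_i]) / Var(f(X)). A minimal sketch of a standard "pick-freeze" Monte-Carlo estimator follows; this is generic variance-based sensitivity analysis on a toy function, not the paper's masked image pipeline, and all names are illustrative.

```python
import numpy as np

def first_order_sobol(f, dim, i, n=100_000, seed=0):
    # Saltelli-style pick-freeze estimator of S_i = Var(E[f|X_i]) / Var(f)
    rng = np.random.default_rng(seed)
    a = rng.random((n, dim))
    b = rng.random((n, dim))
    ab = a.copy()
    ab[:, i] = b[:, i]          # replace coordinate i of A by B's column i
    fa, fb, fab = f(a), f(b), f(ab)
    var = np.var(np.concatenate([fa, fb]))
    return float(np.mean(fb * (fab - fa)) / var)

# Toy function: coordinate 0 carries most of the variance (Var = 16/12 vs 1/12).
f = lambda x: 4.0 * x[:, 0] + x[:, 1]
s0 = first_order_sobol(f, dim=2, i=0)   # close to 16/17
s1 = first_order_sobol(f, dim=2, i=1)   # close to 1/17
```

For this additive function the two indices sum to roughly 1; interaction effects between inputs would show up as the gap between the sum of first-order indices and 1.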
PatchGame: Learning to Signal Mid-level Patches in Referential Games
https://papers.nips.cc/paper_files/paper/2021/hash/dac32839a9f0baae954b41abee610cc0-Abstract.html
Kamal Gupta, Gowthami Somepalli, Anubhav Anubhav, Vinoj Yasanga Jayasundara Magalle Hewa, Matthias Zwicker, Abhinav Shrivastava
https://papers.nips.cc/paper_files/paper/2021/hash/dac32839a9f0baae954b41abee610cc0-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13615-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dac32839a9f0baae954b41abee610cc0-Paper.pdf
https://openreview.net/forum?id=DZKsFQyDB9
https://papers.nips.cc/paper_files/paper/2021/file/dac32839a9f0baae954b41abee610cc0-Supplemental.pdf
We study a referential game (a type of signaling game) where two agents communicate with each other via a discrete bottleneck to achieve a common goal. In our referential game, the goal of the speaker is to compose a message or a symbolic representation of "important" image patches, while the task for the listener is to match the speaker's message to a different view of the same image. We show that it is indeed possible for the two agents to develop a communication protocol without explicit or implicit supervision. We further investigate the developed protocol and show the applications in speeding up recent Vision Transformers by using only important patches, and as pre-training for downstream recognition tasks (e.g., classification).
null
Implicit Generative Copulas
https://papers.nips.cc/paper_files/paper/2021/hash/dac4a67bdc4a800113b0f1ad67ed696f-Abstract.html
Tim Janke, Mohamed Ghanmi, Florian Steinke
https://papers.nips.cc/paper_files/paper/2021/hash/dac4a67bdc4a800113b0f1ad67ed696f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13616-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dac4a67bdc4a800113b0f1ad67ed696f-Paper.pdf
https://openreview.net/forum?id=h1bPe7spQkr
https://papers.nips.cc/paper_files/paper/2021/file/dac4a67bdc4a800113b0f1ad67ed696f-Supplemental.pdf
Copulas are a powerful tool for modeling multivariate distributions as they allow one to separately estimate the univariate marginal distributions and the joint dependency structure. However, known parametric copulas offer limited flexibility especially in high dimensions, while commonly used non-parametric methods suffer from the curse of dimensionality. A popular remedy is to construct a tree-based hierarchy of conditional bivariate copulas. In this paper, we propose a flexible, yet conceptually simple alternative based on implicit generative neural networks. The key challenge is to ensure marginal uniformity of the estimated copula distribution. We achieve this by learning a multivariate latent distribution with unspecified marginals but the desired dependency structure. By applying the probability integral transform, we can then obtain samples from the high-dimensional copula distribution without relying on parametric assumptions or the need to find a suitable tree structure. Experiments on synthetic and real data from finance, physics, and image generation demonstrate the performance of this approach.
null
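The probability integral transform mentioned above is the basic mechanism behind copula modeling: pushing each marginal of a sample through its (empirical) CDF yields approximately uniform marginals while preserving the dependence structure. A minimal rank-based sketch, independent of the paper's neural model:

```python
import numpy as np

def to_pseudo_uniform(x):
    # empirical probability integral transform via ranks, scaled into (0, 1)
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1   # rank 1..n of each value
    return ranks / (n + 1)

rng = np.random.default_rng(0)
z = rng.normal(size=1000)
x = np.exp(z)                 # heavily right-skewed (log-normal) marginal
u = to_pseudo_uniform(x)
# u has approximately Uniform(0,1) marginals and preserves the ordering of x.
```

Applied column-wise to a multivariate sample, this strips the marginals away so that only the copula (the joint dependence among the uniform scores) remains.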
Tensor Normal Training for Deep Learning Models
https://papers.nips.cc/paper_files/paper/2021/hash/dae3312c4c6c7000a37ecfb7b0aeb0e4-Abstract.html
Yi Ren, Donald Goldfarb
https://papers.nips.cc/paper_files/paper/2021/hash/dae3312c4c6c7000a37ecfb7b0aeb0e4-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13617-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dae3312c4c6c7000a37ecfb7b0aeb0e4-Paper.pdf
https://openreview.net/forum?id=-t9LPHRYKmi
https://papers.nips.cc/paper_files/paper/2021/file/dae3312c4c6c7000a37ecfb7b0aeb0e4-Supplemental.pdf
Despite the predominant use of first-order methods for training deep learning models, second-order methods, and in particular, natural gradient methods, remain of interest because of their potential for accelerating training through the use of curvature information. Several methods with non-diagonal preconditioning matrices, including KFAC, Shampoo, and K-BFGS, have been proposed and shown to be effective. Based on the so-called tensor normal (TN) distribution, we propose and analyze a brand new approximate natural gradient method, Tensor Normal Training (TNT), which like Shampoo, only requires knowledge of the shape of the training parameters. By approximating the probabilistically based Fisher matrix, as opposed to the empirical Fisher matrix, our method uses the block-wise covariance of the sampling based gradient as the pre-conditioning matrix. Moreover, the assumption that the sampling-based (tensor) gradient follows a TN distribution, ensures that its covariance has a Kronecker separable structure, which leads to a tractable approximation to the Fisher matrix. Consequently, TNT's memory requirements and per-iteration computational costs are only slightly higher than those for first-order methods. In our experiments, TNT exhibited superior optimization performance to state-of-the-art first-order methods, and comparable optimization performance to the state-of-the-art second-order methods KFAC and Shampoo. Moreover, TNT demonstrated its ability to generalize as well as first-order methods, while using fewer epochs.
null
Unintended Selection: Persistent Qualification Rate Disparities and Interventions
https://papers.nips.cc/paper_files/paper/2021/hash/db00f1b7fdf48fd26b5fb5f309e9afaf-Abstract.html
Reilly Raab, Yang Liu
https://papers.nips.cc/paper_files/paper/2021/hash/db00f1b7fdf48fd26b5fb5f309e9afaf-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13618-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/db00f1b7fdf48fd26b5fb5f309e9afaf-Paper.pdf
https://openreview.net/forum?id=LBhruMnhgIB
https://papers.nips.cc/paper_files/paper/2021/file/db00f1b7fdf48fd26b5fb5f309e9afaf-Supplemental.pdf
Realistically---and equitably---modeling the dynamics of group-level disparities in machine learning remains an open problem. In particular, we desire models that do not suppose inherent differences between artificial groups of people---but rather endogenize disparities by appeal to unequal initial conditions of insular subpopulations. In this paper, agents each have a real-valued feature $X$ (e.g., credit score) informed by a ``true'' binary label $Y$ representing qualification (e.g., for a loan). Each agent alternately (1) receives a binary classification label $\hat{Y}$ (e.g., loan approval) from a Bayes-optimal machine learning classifier observing $X$ and (2) may update their qualification $Y$ by imitating successful strategies (e.g., seek a raise) within an isolated group $G$ of agents to which they belong. We consider the disparity of qualification rates $\Pr(Y=1)$ between different groups and how this disparity changes subject to a sequence of Bayes-optimal classifiers repeatedly retrained on the global population. We model the evolving qualification rates of each subpopulation (group) using the replicator equation, which derives from a class of imitation processes. We show that differences in qualification rates between subpopulations can persist indefinitely for a set of non-trivial equilibrium states due to uniformed classifier deployments, even when groups are identical in all aspects except initial qualification densities. We next simulate the effects of commonly proposed fairness interventions on this dynamical system along with a new feedback control mechanism capable of permanently eliminating group-level qualification rate disparities. We conclude by discussing the limitations of our model and findings and by outlining potential future work.
null
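The replicator equation used above has a simple one-dimensional form for a group's qualification rate q = Pr(Y=1): dq/dt = q(1-q)(payoff_qualified - payoff_unqualified), so q only moves when being qualified pays off differently from being unqualified. A toy Euler discretization (an illustrative assumption, not the paper's exact model or payoffs):

```python
def replicator_step(q, payoff_q, payoff_u, dt=0.01):
    # replicator dynamic: dq/dt = q * (payoff_q - average payoff)
    #                            = q * (1 - q) * (payoff_q - payoff_u)
    avg = q * payoff_q + (1 - q) * payoff_u
    return q + dt * q * (payoff_q - avg)

# When qualification pays off more, the rate climbs toward the q = 1 equilibrium.
q = 0.3
for _ in range(1000):
    q = replicator_step(q, payoff_q=1.2, payoff_u=1.0)
```

Note that q = 0 and q = 1 are fixed points, and equal payoffs leave q unchanged, which illustrates how qualification-rate gaps between otherwise identical groups can persist indefinitely under the dynamics.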
Revisiting 3D Object Detection From an Egocentric Perspective
https://papers.nips.cc/paper_files/paper/2021/hash/db182d2552835bec774847e06406bfa2-Abstract.html
Boyang Deng, Charles R Qi, Mahyar Najibi, Thomas Funkhouser, Yin Zhou, Dragomir Anguelov
https://papers.nips.cc/paper_files/paper/2021/hash/db182d2552835bec774847e06406bfa2-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13619-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/db182d2552835bec774847e06406bfa2-Paper.pdf
https://openreview.net/forum?id=OMNRFw1fX3a
https://papers.nips.cc/paper_files/paper/2021/file/db182d2552835bec774847e06406bfa2-Supplemental.pdf
3D object detection is a key module for safety-critical robotics applications such as autonomous driving. For these applications, we care most about how the detections affect the ego-agent’s behavior and safety (the egocentric perspective). Intuitively, we seek more accurate descriptions of object geometry when it’s more likely to interfere with the ego-agent’s motion trajectory. However, current detection metrics, based on box Intersection-over-Union (IoU), are object-centric and aren’t designed to capture the spatio-temporal relationship between objects and the ego-agent. To address this issue, we propose a new egocentric measure to evaluate 3D object detection, namely Support Distance Error (SDE). Our analysis based on SDE reveals that the egocentric detection quality is bounded by the coarse geometry of the bounding boxes. Given the insight that SDE would benefit from more accurate geometry descriptions, we propose to represent objects as amodal contours, specifically amodal star-shaped polygons, and devise a simple model, StarPoly, to predict such contours. Our experiments on the large-scale Waymo Open Dataset show that SDE better reflects the impact of detection quality on the ego-agent’s safety compared to IoU; and the estimated contours from StarPoly consistently improve the egocentric detection quality over recent 3D object detectors.
null
Optimizing Information-theoretical Generalization Bound via Anisotropic Noise of SGLD
https://papers.nips.cc/paper_files/paper/2021/hash/db2b4182156b2f1f817860ac9f409ad7-Abstract.html
Bohan Wang, Huishuai Zhang, Jieyu Zhang, Qi Meng, Wei Chen, Tie-Yan Liu
https://papers.nips.cc/paper_files/paper/2021/hash/db2b4182156b2f1f817860ac9f409ad7-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13620-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/db2b4182156b2f1f817860ac9f409ad7-Paper.pdf
https://openreview.net/forum?id=lN2Uqm-ScC
https://papers.nips.cc/paper_files/paper/2021/file/db2b4182156b2f1f817860ac9f409ad7-Supplemental.pdf
Recently, the information-theoretical framework has been proven to be able to obtain non-vacuous generalization bounds for large models trained by Stochastic Gradient Langevin Dynamics (SGLD) with isotropic noise. In this paper, we optimize the information-theoretical generalization bound by manipulating the noise structure in SGLD. We prove that, under a constraint guaranteeing low empirical risk, the optimal noise covariance is the square root of the expected gradient covariance if both the prior and the posterior are jointly optimized. This validates that the optimal noise is quite close to the empirical gradient covariance. Technically, we develop a new information-theoretical bound that enables such an optimization analysis. We then apply matrix analysis to derive the form of the optimal noise covariance. The presented constraints and results are validated by empirical observations.
null
Addressing Algorithmic Disparity and Performance Inconsistency in Federated Learning
https://papers.nips.cc/paper_files/paper/2021/hash/db8e1af0cb3aca1ae2d0018624204529-Abstract.html
Sen Cui, Weishen Pan, Jian Liang, Changshui Zhang, Fei Wang
https://papers.nips.cc/paper_files/paper/2021/hash/db8e1af0cb3aca1ae2d0018624204529-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13621-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/db8e1af0cb3aca1ae2d0018624204529-Paper.pdf
https://openreview.net/forum?id=WwqOoNnA8f
https://papers.nips.cc/paper_files/paper/2021/file/db8e1af0cb3aca1ae2d0018624204529-Supplemental.pdf
Federated learning (FL) has gained growing interest for its capability of learning from distributed data sources collectively without the need to access the raw data samples across different sources. So far, FL research has mostly focused on improving performance; how algorithmic disparity is impacted for a model learned through FL, and the impact of that disparity on utility inconsistency, remain largely unexplored. In this paper, we propose an FL framework to jointly consider performance consistency and algorithmic fairness across different local clients (data sources). We derive our framework from a constrained multi-objective optimization perspective, in which we learn a model satisfying fairness constraints on all clients with consistent performance. Specifically, we treat the prediction loss at each local client as an objective and maximize the worst-performing client with fairness constraints through optimizing a surrogate maximum function with all objectives involved. A gradient-based procedure is employed to achieve the Pareto optimality of this optimization problem. Theoretical analysis is provided to prove that our method can converge to a Pareto solution that achieves the min-max performance with fairness constraints on all clients. Comprehensive experiments on synthetic and real-world datasets demonstrate the superiority of our approach over baselines and its effectiveness in achieving both fairness and consistency across all local clients.
null
A Mathematical Framework for Quantifying Transferability in Multi-source Transfer Learning
https://papers.nips.cc/paper_files/paper/2021/hash/db9ad56c71619aeed9723314d1456037-Abstract.html
Xinyi Tong, Xiangxiang Xu, Shao-Lun Huang, Lizhong Zheng
https://papers.nips.cc/paper_files/paper/2021/hash/db9ad56c71619aeed9723314d1456037-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13622-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/db9ad56c71619aeed9723314d1456037-Paper.pdf
https://openreview.net/forum?id=wQZWg82TWx
https://papers.nips.cc/paper_files/paper/2021/file/db9ad56c71619aeed9723314d1456037-Supplemental.zip
Current transfer learning algorithm designs mainly focus on the similarities between source and target tasks, while the impacts of the sample sizes of these tasks are often not sufficiently addressed. This paper proposes a mathematical framework for quantifying the transferability in multi-source transfer learning problems, with both the task similarities and the sample complexity of learning models taken into account. In particular, we consider the setup where the models learned from different tasks are linearly combined for learning the target task, and use the optimal combining coefficients to measure the transferability. Then, we derive the analytical expression of this transferability measure, characterized by the sample sizes, model complexity, and the similarities between source and target tasks, which provides fundamental insights into the knowledge transfer mechanism and guidance for algorithm design. Furthermore, we apply our analyses to practical learning tasks, and establish a quantifiable transferability measure by exploiting a parameterized model. In addition, we develop an alternating iterative algorithm to implement our theoretical results for training deep neural networks in multi-source transfer learning tasks. Finally, experiments on image classification tasks show that our approach outperforms existing transfer learning algorithms in multi-source and few-shot scenarios.
null
Moiré Attack (MA): A New Potential Risk of Screen Photos
https://papers.nips.cc/paper_files/paper/2021/hash/db9eeb7e678863649bce209842e0d164-Abstract.html
Dantong Niu, Ruohao Guo, Yisen Wang
https://papers.nips.cc/paper_files/paper/2021/hash/db9eeb7e678863649bce209842e0d164-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13623-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/db9eeb7e678863649bce209842e0d164-Paper.pdf
https://openreview.net/forum?id=zdNEp82a-_q
https://papers.nips.cc/paper_files/paper/2021/file/db9eeb7e678863649bce209842e0d164-Supplemental.pdf
Images, captured by a camera, play a critical role in training Deep Neural Networks (DNNs). Usually, we assume the images acquired by cameras are consistent with the ones perceived by human eyes. However, due to the different physical mechanisms between human-vision and computer-vision systems, the final perceived images can be very different in some cases, for example when shooting digital monitors. In this paper, we identify a special phenomenon in digital image processing, the moiré effect, which could cause unnoticed security threats to DNNs. Based on it, we propose a Moiré Attack (MA) that adds a physical-world moiré pattern to images by mimicking the shooting process of digital devices. Extensive experiments demonstrate that our proposed digital Moiré Attack (MA) is a perfect camouflage for attackers to tamper with DNNs, with a high success rate ($100.0\%$ for untargeted and $97.0\%$ for targeted attacks with the noise budget $\epsilon=4$), high transferability across different models, and high robustness under various defenses. Furthermore, MA is highly stealthy because the moiré effect is unavoidable due to the camera's inner physical structure, and therefore hardly attracts human attention. Our code is available at https://github.com/Dantong88/Moire_Attack.
null