title | url | authors | detail_url | tags | Bibtex | Paper | Reviews And Public Comment » | Supplemental | abstract | Supplemental Errata |
---|---|---|---|---|---|---|---|---|---|---|
ProTo: Program-Guided Transformer for Program-Guided Tasks
|
https://papers.nips.cc/paper_files/paper/2021/hash/8d34201a5b85900908db6cae92723617-Abstract.html
|
Zelin Zhao, Karan Samel, Binghong Chen, lee song
|
https://papers.nips.cc/paper_files/paper/2021/hash/8d34201a5b85900908db6cae92723617-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12924-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8d34201a5b85900908db6cae92723617-Paper.pdf
|
https://openreview.net/forum?id=3BI2dazLpN
|
https://papers.nips.cc/paper_files/paper/2021/file/8d34201a5b85900908db6cae92723617-Supplemental.pdf
|
Programs, consisting of semantic and structural information, play an important role in the communication between humans and agents. Towards learning general program executors to unify perception, reasoning, and decision making, we formulate program-guided tasks which require learning to execute a given program on the observed task specification. Furthermore, we propose the Program-Guided Transformer (ProTo), which integrates both semantic and structural guidance of a program by leveraging cross-attention and masked self-attention to pass messages between the specification and routines in the program. ProTo executes a program in a learned latent space and enjoys stronger representation ability than previous neural-symbolic approaches. We demonstrate that ProTo significantly outperforms the previous state-of-the-art methods on GQA visual reasoning and 2D Minecraft policy learning datasets. Additionally, ProTo demonstrates better generalization to unseen, complex, and human-written programs.
| null |
An Efficient Transfer Learning Framework for Multiagent Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/8d9a6e908ed2b731fb96151d9bb94d49-Abstract.html
|
Tianpei Yang, Weixun Wang, Hongyao Tang, Jianye Hao, Zhaopeng Meng, Hangyu Mao, Dong Li, Wulong Liu, Yingfeng Chen, Yujing Hu, Changjie Fan, Chengwei Zhang
|
https://papers.nips.cc/paper_files/paper/2021/hash/8d9a6e908ed2b731fb96151d9bb94d49-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12925-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8d9a6e908ed2b731fb96151d9bb94d49-Paper.pdf
|
https://openreview.net/forum?id=GAiM0RXrMfF
|
https://papers.nips.cc/paper_files/paper/2021/file/8d9a6e908ed2b731fb96151d9bb94d49-Supplemental.pdf
|
Transfer Learning has shown great potential to enhance single-agent Reinforcement Learning (RL) efficiency. Similarly, Multiagent RL (MARL) can also be accelerated if agents can share knowledge with each other. However, how an agent should learn from other agents remains an open problem. In this paper, we propose a novel Multiagent Policy Transfer Framework (MAPTF) to improve MARL efficiency. MAPTF learns which agent's policy is the best to reuse for each agent and when to terminate it by modeling multiagent policy transfer as an option learning problem. Furthermore, in practice, the option module can only collect all agents' local experiences for its update due to the partial observability of the environment. In this setting, the agents' experiences may be inconsistent with one another, which can make the option-value estimate inaccurate and oscillatory. Therefore, we propose a novel option learning algorithm, successor representation option learning, which addresses this issue by decoupling the environment dynamics from rewards and learning the option-value under each agent's preference. MAPTF can be easily combined with existing deep RL and MARL approaches, and experimental results show it significantly boosts the performance of existing methods in both discrete and continuous state spaces.
| null |
Learning to Time-Decode in Spiking Neural Networks Through the Information Bottleneck
|
https://papers.nips.cc/paper_files/paper/2021/hash/8da57fac3313174128cc5f13328d4573-Abstract.html
|
Nicolas Skatchkovsky, Osvaldo Simeone, Hyeryung Jang
|
https://papers.nips.cc/paper_files/paper/2021/hash/8da57fac3313174128cc5f13328d4573-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12926-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8da57fac3313174128cc5f13328d4573-Paper.pdf
|
https://openreview.net/forum?id=Fw0IQgaGlhh
|
https://papers.nips.cc/paper_files/paper/2021/file/8da57fac3313174128cc5f13328d4573-Supplemental.pdf
|
One of the key challenges in training Spiking Neural Networks (SNNs) is that target outputs typically come in the form of natural signals, such as labels for classification or images for generative models, and need to be encoded into spikes. This is done by handcrafting target spiking signals, which in turn implicitly fixes the mechanisms used to decode spikes into natural signals, e.g., rate decoding. The arbitrary choice of target signals and decoding rule generally impairs the capacity of the SNN to encode and process information in the timing of spikes. To address this problem, this work introduces a hybrid variational autoencoder architecture, consisting of an encoding SNN and a decoding Artificial Neural Network (ANN). The role of the decoding ANN is to learn how to best convert the spiking signals output by the SNN into the target natural signal. A novel end-to-end learning rule is introduced that optimizes a directed information bottleneck training criterion via surrogate gradients. We demonstrate the applicability of the technique in experimental settings on various tasks, including real-life datasets.
| null |
NEO: Non Equilibrium Sampling on the Orbits of a Deterministic Transform
|
https://papers.nips.cc/paper_files/paper/2021/hash/8dd291cbea8f231982db0fb1716dfc55-Abstract.html
|
Achille Thin, Yazid Janati El Idrissi, Sylvain Le Corff, Charles Ollion, Eric Moulines, Arnaud Doucet, Alain Durmus, Christian X Robert
|
https://papers.nips.cc/paper_files/paper/2021/hash/8dd291cbea8f231982db0fb1716dfc55-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12927-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8dd291cbea8f231982db0fb1716dfc55-Paper.pdf
|
https://openreview.net/forum?id=76tTYokjtG
|
https://papers.nips.cc/paper_files/paper/2021/file/8dd291cbea8f231982db0fb1716dfc55-Supplemental.pdf
|
Sampling from a complex distribution $\pi$ and approximating its intractable normalizing constant $\mathrm{Z}$ are challenging problems. In this paper, a novel family of importance samplers (IS) and Markov chain Monte Carlo (MCMC) samplers is derived. Given an invertible map $\mathrm{T}$, these schemes combine (with weights) elements from the forward and backward Orbits through points sampled from a proposal distribution $\rho$. The map $\mathrm{T}$ does not leave the target $\pi$ invariant, hence the name NEO, standing for Non-Equilibrium Orbits. NEO-IS provides unbiased estimators of the normalizing constant and self-normalized IS estimators of expectations under $\pi$ while NEO-MCMC combines multiple NEO-IS estimates of the normalizing constant and an iterated sampling-importance resampling mechanism to sample from $\pi$. For $\mathrm{T}$ chosen as a discrete-time integrator of a conformal Hamiltonian system, NEO-IS achieves state-of-the-art performance on difficult benchmarks and NEO-MCMC is able to explore highly multimodal targets. Additionally, we provide detailed theoretical results for both methods. In particular, we show that NEO-MCMC is uniformly geometrically ergodic and establish explicit mixing time estimates under mild conditions.
| null |
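The NEO-IS estimator above generalizes plain importance sampling of a normalizing constant by reweighting points along the orbits of a deterministic transform. As background only, here is a minimal NumPy sketch of the vanilla importance-sampling estimate $\hat{\mathrm{Z}} = \tfrac{1}{N}\sum_i \tilde{\pi}(x_i)/\rho(x_i)$ that NEO builds on; the Gaussian-mixture target and Gaussian proposal are illustrative assumptions, and the paper's orbit-based weighting is not implemented.

```python
import numpy as np

rng = np.random.default_rng(0)

def unnormalized_target(x):
    # Unnormalized density of a 1-D Gaussian mixture (illustrative pi_tilde).
    return np.exp(-0.5 * (x - 2.0) ** 2) + 0.5 * np.exp(-0.5 * (x + 2.0) ** 2)

def proposal_logpdf(x, scale=3.0):
    # Log-density of the N(0, scale^2) proposal rho.
    return -0.5 * (x / scale) ** 2 - np.log(scale * np.sqrt(2.0 * np.pi))

def is_normalizing_constant(n_samples=100_000, scale=3.0):
    """Vanilla importance-sampling estimate of Z = integral of pi_tilde."""
    x = rng.normal(0.0, scale, size=n_samples)             # x_i ~ rho
    log_w = np.log(unnormalized_target(x)) - proposal_logpdf(x)
    return np.exp(log_w).mean()                            # unbiased estimate of Z

# For this mixture the true value is sqrt(2*pi) * 1.5, which the estimate should approach.
print(is_normalizing_constant(), np.sqrt(2.0 * np.pi) * 1.5)
```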
Relaxing Local Robustness
|
https://papers.nips.cc/paper_files/paper/2021/hash/8df6a65941e4c9da40a4fb899de65c55-Abstract.html
|
Klas Leino, Matt Fredrikson
|
https://papers.nips.cc/paper_files/paper/2021/hash/8df6a65941e4c9da40a4fb899de65c55-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12928-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8df6a65941e4c9da40a4fb899de65c55-Paper.pdf
|
https://openreview.net/forum?id=ahrSWZgjkg
|
https://papers.nips.cc/paper_files/paper/2021/file/8df6a65941e4c9da40a4fb899de65c55-Supplemental.pdf
|
Certifiable local robustness, which rigorously precludes small-norm adversarial examples, has received significant attention as a means of addressing security concerns in deep learning. However, for some classification problems, local robustness is not a natural objective, even in the presence of adversaries; for example, if an image contains two classes of subjects, the correct label for the image may be considered arbitrary between the two, and thus enforcing strict separation between them is unnecessary. In this work, we introduce two relaxed safety properties for classifiers that address this observation: (1) relaxed top-k robustness, which serves as the analogue of top-k accuracy; and (2) affinity robustness, which specifies which sets of labels must be separated by a robustness margin, and which can be $\epsilon$-close in $\ell_p$ space. We show how to construct models that can be efficiently certified against each relaxed robustness property, and trained with very little overhead relative to standard gradient descent. Finally, we demonstrate experimentally that these relaxed variants of robustness are well-suited to several significant classification problems, leading to lower rejection rates and higher certified accuracies than can be obtained when certifying "standard" local robustness.
| null |
Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer
|
https://papers.nips.cc/paper_files/paper/2021/hash/8df7c2e3c3c3be098ef7b382bd2c37ba-Abstract.html
|
Ge Yang, Edward Hu, Igor Babuschkin, Szymon Sidor, Xiaodong Liu, David Farhi, Nick Ryder, Jakub Pachocki, Weizhu Chen, Jianfeng Gao
|
https://papers.nips.cc/paper_files/paper/2021/hash/8df7c2e3c3c3be098ef7b382bd2c37ba-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12929-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8df7c2e3c3c3be098ef7b382bd2c37ba-Paper.pdf
|
https://openreview.net/forum?id=Bx6qKuBM2AD
|
https://papers.nips.cc/paper_files/paper/2021/file/8df7c2e3c3c3be098ef7b382bd2c37ba-Supplemental.pdf
|
Hyperparameter (HP) tuning in deep learning is an expensive process, prohibitively so for neural networks (NNs) with billions of parameters. We show that, in the recently discovered Maximal Update Parametrization ($\mu$P), many optimal HPs remain stable even as model size changes. This leads to a new HP tuning paradigm we call *$\mu$Transfer*: parametrize the target model in $\mu$P, tune the HP indirectly on a smaller model, and *zero-shot transfer* them to the full-sized model, i.e., without directly tuning the latter at all. We verify $\mu$Transfer on Transformer and ResNet. For example, 1) by transferring pretraining HPs from a model of 13M parameters, we outperform published numbers of BERT-large (350M parameters), with a total tuning cost equivalent to pretraining BERT-large once; 2) by transferring from 40M parameters, we outperform published numbers of the 6.7B GPT-3 model, with tuning cost only 7% of total pretraining cost. A Pytorch implementation of our technique can be found at github.com/microsoft/mup. See arxiv.org for the full, up-to-date version of this work.
| null |
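The workflow described above is: put the model in Maximal Update Parametrization, tune hyperparameters on a small proxy, and reuse them on the full-size model. The sketch below only illustrates the tune-small/reuse-large loop on a toy random-feature regressor; the width-aware $1/\sqrt{\text{width}}$ feature scaling is a stand-in assumption, not the paper's $\mu$P, and real $\mu$Transfer would use the authors' parametrization (e.g. the linked mup package) rather than this toy model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy linear target.
X = rng.normal(size=(512, 16))
y = X @ rng.normal(size=16) + 0.1 * rng.normal(size=512)

def train_random_feature_model(width, lr, steps=300):
    """Fit a random-feature regressor of a given width with plain SGD; return final MSE."""
    W = rng.normal(size=(16, width)) / np.sqrt(16)     # frozen random features
    feats = np.tanh(X @ W) / np.sqrt(width)            # width-aware scaling (stand-in, not muP)
    theta = np.zeros(width)
    for _ in range(steps):
        idx = rng.integers(0, len(X), size=32)
        grad = feats[idx].T @ (feats[idx] @ theta - y[idx]) / 32
        theta -= lr * grad
    return float(np.mean((feats @ theta - y) ** 2))

# Step 1: tune the learning rate on a small proxy model.
grid = [1e-2, 3e-2, 1e-1, 3e-1, 1.0]
best_lr = min(grid, key=lambda lr: train_random_feature_model(width=64, lr=lr))

# Step 2: reuse the tuned learning rate on a much wider target model
# without re-searching -- the "zero-shot transfer" step of the workflow.
print("tuned lr:", best_lr, "wide-model MSE:", train_random_feature_model(width=4096, lr=best_lr))
```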
Statistical Regeneration Guarantees of the Wasserstein Autoencoder with Latent Space Consistency
|
https://papers.nips.cc/paper_files/paper/2021/hash/8e036cc193d0af59aa9b22821248292b-Abstract.html
|
Anish Chakrabarty, Swagatam Das
|
https://papers.nips.cc/paper_files/paper/2021/hash/8e036cc193d0af59aa9b22821248292b-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12930-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8e036cc193d0af59aa9b22821248292b-Paper.pdf
|
https://openreview.net/forum?id=iFadi3f5V5I
|
https://papers.nips.cc/paper_files/paper/2021/file/8e036cc193d0af59aa9b22821248292b-Supplemental.pdf
|
The introduction of Variational Autoencoders (VAE) has been marked as a breakthrough in the history of representation learning models. Besides having several accolades of its own, VAE has successfully flagged off a series of inventions in the form of its immediate successors. The Wasserstein Autoencoder (WAE), being an heir to that realm, carries with it all of the goodness and heightened generative promises, matching even generative adversarial networks (GANs). Needless to say, recent years have witnessed a remarkable resurgence in statistical analyses of GANs. Similar examinations for Autoencoders, however, despite their diverse applicability and notable empirical performance, remain largely absent. To close this gap, in this paper, we investigate the statistical properties of WAE. Firstly, we provide statistical guarantees that WAE achieves the target distribution in the latent space, utilizing the Vapnik–Chervonenkis (VC) theory. The main result consequently ensures the regeneration of the input distribution, harnessing the potential offered by Optimal Transport of measures under the Wasserstein metric. This study, in turn, hints at the class of distributions WAE can reconstruct after suffering a compression in the form of a latent law.
| null |
Leveraging the Inductive Bias of Large Language Models for Abstract Textual Reasoning
|
https://papers.nips.cc/paper_files/paper/2021/hash/8e08227323cd829e449559bb381484b7-Abstract.html
|
Christopher Rytting, David Wingate
|
https://papers.nips.cc/paper_files/paper/2021/hash/8e08227323cd829e449559bb381484b7-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12931-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8e08227323cd829e449559bb381484b7-Paper.pdf
|
https://openreview.net/forum?id=urueR03mkng
|
https://papers.nips.cc/paper_files/paper/2021/file/8e08227323cd829e449559bb381484b7-Supplemental.zip
|
Large natural language models (LMs) (such as GPT-3 or T5) demonstrate impressive abilities across a range of general NLP tasks. Here, we show that the knowledge embedded in such models provides a useful inductive bias, not just on traditional NLP tasks, but also in the nontraditional task of training a symbolic reasoning engine. We observe that these engines learn quickly and generalize in a natural way that reflects human intuition. For example, training such a system to model block-stacking might naturally generalize to stacking other types of objects because of structure in the real world that has been partially captured by the language describing it. We study several abstract textual reasoning tasks, such as object manipulation and navigation, and demonstrate multiple types of generalization to novel scenarios and the symbols that comprise them. We also demonstrate the surprising utility of $\textit{compositional learning}$, where a learner dedicated to mastering a complicated task gains an advantage by training on relevant simpler tasks instead of jumping straight to the complicated task.
| null |
Differentiable Simulation of Soft Multi-body Systems
|
https://papers.nips.cc/paper_files/paper/2021/hash/8e296a067a37563370ded05f5a3bf3ec-Abstract.html
|
Yiling Qiao, Junbang Liang, Vladlen Koltun, Ming Lin
|
https://papers.nips.cc/paper_files/paper/2021/hash/8e296a067a37563370ded05f5a3bf3ec-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12932-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8e296a067a37563370ded05f5a3bf3ec-Paper.pdf
|
https://openreview.net/forum?id=j3fpZLKcXF
|
https://papers.nips.cc/paper_files/paper/2021/file/8e296a067a37563370ded05f5a3bf3ec-Supplemental.pdf
|
We present a method for differentiable simulation of soft articulated bodies. Our work enables the integration of differentiable physical dynamics into gradient-based pipelines. We develop a top-down matrix assembly algorithm within Projective Dynamics and derive a generalized dry friction model for soft continuum using a new matrix splitting strategy. We derive a differentiable control framework for soft articulated bodies driven by muscles, joint torques, or pneumatic tubes. The experiments demonstrate that our designs make soft body simulation more stable and realistic compared to other frameworks. Our method accelerates the solution of system identification problems by more than an order of magnitude, and enables efficient gradient-based learning of motion control with soft robots.
| null |
Good Classification Measures and How to Find Them
|
https://papers.nips.cc/paper_files/paper/2021/hash/8e489b4966fe8f703b5be647f1cbae63-Abstract.html
|
Martijn Gösgens, Anton Zhiyanov, Aleksey Tikhonov, Liudmila Prokhorenkova
|
https://papers.nips.cc/paper_files/paper/2021/hash/8e489b4966fe8f703b5be647f1cbae63-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12933-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8e489b4966fe8f703b5be647f1cbae63-Paper.pdf
|
https://openreview.net/forum?id=TLXpi2j6F7
|
https://papers.nips.cc/paper_files/paper/2021/file/8e489b4966fe8f703b5be647f1cbae63-Supplemental.pdf
|
Several performance measures can be used for evaluating classification results: accuracy, F-measure, and many others. Can we say that some of them are better than others, or, ideally, choose one measure that is best in all situations? To answer this question, we conduct a systematic analysis of classification performance measures: we formally define a list of desirable properties and theoretically analyze which measures satisfy which properties. We also prove an impossibility theorem: some desirable properties cannot be simultaneously satisfied. Finally, we propose a new family of measures satisfying all desirable properties except one. This family includes the Matthews Correlation Coefficient and a so-called Symmetric Balanced Accuracy that was not previously used in classification literature. We believe that our systematic approach gives an important tool to practitioners for adequately evaluating classification results.
| null |
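Since the family of measures above includes the Matthews Correlation Coefficient alongside a balanced-accuracy variant, a small reference implementation of MCC and the standard balanced accuracy from binary confusion counts may be useful; the paper's Symmetric Balanced Accuracy is not reproduced here, and the toy label vectors are made up.

```python
import numpy as np

def confusion_counts(y_true, y_pred):
    """Binary confusion-matrix counts (labels are 0/1)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return tp, tn, fp, fn

def matthews_corrcoef(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

def balanced_accuracy(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    tpr = tp / (tp + fn) if tp + fn else 0.0   # sensitivity
    tnr = tn / (tn + fp) if tn + fp else 0.0   # specificity
    return 0.5 * (tpr + tnr)

y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 0]
print(matthews_corrcoef(y_true, y_pred), balanced_accuracy(y_true, y_pred))
```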
Distilling Robust and Non-Robust Features in Adversarial Examples by Information Bottleneck
|
https://papers.nips.cc/paper_files/paper/2021/hash/8e5e15c4e6d09c8333a17843461041a9-Abstract.html
|
Junho Kim, Byung-Kwan Lee, Yong Man Ro
|
https://papers.nips.cc/paper_files/paper/2021/hash/8e5e15c4e6d09c8333a17843461041a9-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12934-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8e5e15c4e6d09c8333a17843461041a9-Paper.pdf
|
https://openreview.net/forum?id=90M-91IZ0JC
|
https://papers.nips.cc/paper_files/paper/2021/file/8e5e15c4e6d09c8333a17843461041a9-Supplemental.pdf
|
Adversarial examples, generated by carefully crafted perturbations, have attracted considerable attention in the research community. Recent works have argued that the existence of robust and non-robust features is a primary cause of adversarial examples, and have investigated their internal interactions in the feature space. In this paper, we propose a way of explicitly distilling the feature representation into robust and non-robust features, using the Information Bottleneck. Specifically, we inject noise variation into each feature unit and evaluate the information flow in the feature representation to dichotomize feature units into either robust or non-robust, based on the noise variation magnitude. Through comprehensive experiments, we demonstrate that the distilled features are highly correlated with adversarial prediction, and that they carry human-perceptible semantic information by themselves. Furthermore, we present an attack mechanism that intensifies the gradient of non-robust features directly related to the model prediction, and validate its effectiveness in breaking model robustness.
| null |
Vector-valued Gaussian Processes on Riemannian Manifolds via Gauge Independent Projected Kernels
|
https://papers.nips.cc/paper_files/paper/2021/hash/8e7991af8afa942dc572950e01177da5-Abstract.html
|
Michael Hutchinson, Alexander Terenin, Viacheslav Borovitskiy, So Takao, Yee Teh, Marc Deisenroth
|
https://papers.nips.cc/paper_files/paper/2021/hash/8e7991af8afa942dc572950e01177da5-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12935-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8e7991af8afa942dc572950e01177da5-Paper.pdf
|
https://openreview.net/forum?id=FwVmM8Zol_8
|
https://papers.nips.cc/paper_files/paper/2021/file/8e7991af8afa942dc572950e01177da5-Supplemental.pdf
|
Gaussian processes are machine learning models capable of learning unknown functions in a way that represents uncertainty, thereby facilitating construction of optimal decision-making systems. Motivated by a desire to deploy Gaussian processes in novel areas of science, a rapidly-growing line of research has focused on constructively extending these models to handle non-Euclidean domains, including Riemannian manifolds, such as spheres and tori. We propose techniques that generalize this class to model vector fields on Riemannian manifolds, which are important in a number of application areas in the physical sciences. To do so, we present a general recipe for constructing gauge independent kernels, which induce Gaussian vector fields, i.e. vector-valued Gaussian processes coherent with geometry, from scalar-valued Riemannian kernels. We extend standard Gaussian process training methods, such as variational inference, to this setting. This enables vector-valued Gaussian processes on Riemannian manifolds to be trained using standard methods and makes them accessible to machine learning practitioners.
| null |
On the Representation Power of Set Pooling Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/8ea1e4f9f24c38f168d538c9cfc50a14-Abstract.html
|
Christian Bueno, Alan Hylton
|
https://papers.nips.cc/paper_files/paper/2021/hash/8ea1e4f9f24c38f168d538c9cfc50a14-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12936-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8ea1e4f9f24c38f168d538c9cfc50a14-Paper.pdf
|
https://openreview.net/forum?id=sjRlHsawmRf
|
https://papers.nips.cc/paper_files/paper/2021/file/8ea1e4f9f24c38f168d538c9cfc50a14-Supplemental.pdf
|
Point clouds and sets are input data-types which pose unique problems to deep learning. Since sets can have variable cardinality and are unchanged by permutation, the input spaces for these problems naturally form infinite-dimensional non-Euclidean spaces. Despite these mathematical difficulties, PointNet (Qi et al. 2017) and Deep Sets (Zaheer et al. 2017) introduced foundational neural network architectures to address these problems. In this paper we present a unified framework to study the expressive power of such networks as well as their extensions beyond point clouds (partially addressing a conjecture on the extendibility of DeepSets along the way). To this end, we demonstrate the crucial role that the Hausdorff and Wasserstein metrics play and prove new cardinality-agnostic universality results to characterize exactly which functions can be approximated by these models. In particular, these results imply that PointNet generally cannot approximate averages of continuous functions over sets (e.g. center-of-mass or higher moments), implying that DeepSets is strictly more expressive than PointNet in the constant cardinality setting. Moreover, we obtain explicit lower-bounds on the approximation error and present a simple method to produce arbitrarily many examples of this failure-mode. Counterintuitively, we also prove that, in the unbounded cardinality setting, any function which can be uniformly approximated by both PointNet and normalized-DeepSets must be constant. Finally, we also prove theorems on the Lipschitz properties of PointNet and normalized-DeepSets which shed insight into exploitable inductive bias in these networks.
| null |
Learning Policies with Zero or Bounded Constraint Violation for Constrained MDPs
|
https://papers.nips.cc/paper_files/paper/2021/hash/8ec2ba5e96ec1c050bc631abda80f269-Abstract.html
|
Tao Liu, Ruida Zhou, Dileep Kalathil, Panganamala Kumar, Chao Tian
|
https://papers.nips.cc/paper_files/paper/2021/hash/8ec2ba5e96ec1c050bc631abda80f269-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12937-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8ec2ba5e96ec1c050bc631abda80f269-Paper.pdf
|
https://openreview.net/forum?id=Nl7VO_Y7K4Q
|
https://papers.nips.cc/paper_files/paper/2021/file/8ec2ba5e96ec1c050bc631abda80f269-Supplemental.pdf
|
We address the issue of safety in reinforcement learning. We pose the problem in an episodic framework of a constrained Markov decision process. Existing results have shown that it is possible to achieve a reward regret of $\tilde{\mathcal{O}}(\sqrt{K})$ while allowing an $\tilde{\mathcal{O}}(\sqrt{K})$ constraint violation in $K$ episodes. A critical question that arises is whether it is possible to keep the constraint violation even smaller. We show that when a strictly safe policy is known, then one can confine the system to zero constraint violation with arbitrarily high probability while keeping the reward regret of order $\tilde{\mathcal{O}}(\sqrt{K})$. The algorithm which does so employs the principle of optimistic pessimism in the face of uncertainty to achieve safe exploration. When no strictly safe policy is known, though one is known to exist, then it is possible to restrict the system to bounded constraint violation with arbitrarily high probability. This is shown to be realized by a primal-dual algorithm with an optimistic primal estimate and a pessimistic dual update.
| null |
A Prototype-Oriented Framework for Unsupervised Domain Adaptation
|
https://papers.nips.cc/paper_files/paper/2021/hash/8edd72158ccd2a879f79cb2538568fdc-Abstract.html
|
Korawat Tanwisuth, Xinjie Fan, Huangjie Zheng, Shujian Zhang, Hao Zhang, Bo Chen, Mingyuan Zhou
|
https://papers.nips.cc/paper_files/paper/2021/hash/8edd72158ccd2a879f79cb2538568fdc-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12938-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8edd72158ccd2a879f79cb2538568fdc-Paper.pdf
|
https://openreview.net/forum?id=yH2VrkpiCK6
|
https://papers.nips.cc/paper_files/paper/2021/file/8edd72158ccd2a879f79cb2538568fdc-Supplemental.pdf
|
Existing methods for unsupervised domain adaptation often rely on minimizing some statistical distance between the source and target samples in the latent space. To avoid the sampling variability, class imbalance, and data-privacy concerns that often plague these methods, we instead provide a memory and computation-efficient probabilistic framework to extract class prototypes and align the target features with them. We demonstrate the general applicability of our method on a wide range of scenarios, including single-source, multi-source, class-imbalance, and source-private domain adaptation. Requiring no additional model parameters and having a moderate increase in computation over the source model alone, the proposed method achieves competitive performance with state-of-the-art methods.
| null |
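A minimal sketch of the generic prototype idea underlying the abstract above: compute one prototype per class from labeled source features and label target features by their nearest prototype. The paper's probabilistic, transport-based alignment and its training procedure are not implemented; the Gaussian-blob "features" and the cosine-similarity assignment are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def class_prototypes(features, labels, num_classes):
    """Mean feature vector per class (one simple prototype per class)."""
    return np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])

def nearest_prototype_labels(features, prototypes):
    """Assign each target feature to its closest class prototype (cosine similarity)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return np.argmax(f @ p.T, axis=1)

# Placeholder "features": two well-separated Gaussian blobs per domain.
src_feats = np.concatenate([rng.normal(0, 1, (100, 8)), rng.normal(4, 1, (100, 8))])
src_labels = np.concatenate([np.zeros(100, dtype=int), np.ones(100, dtype=int)])
tgt_feats = np.concatenate([rng.normal(0.5, 1, (50, 8)), rng.normal(3.5, 1, (50, 8))])

protos = class_prototypes(src_feats, src_labels, num_classes=2)
pseudo_labels = nearest_prototype_labels(tgt_feats, protos)
print("fraction of target points assigned to class 1:", pseudo_labels.mean())
```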
Mining the Benefits of Two-stage and One-stage HOI Detection
|
https://papers.nips.cc/paper_files/paper/2021/hash/8f1d43620bc6bb580df6e80b0dc05c48-Abstract.html
|
Aixi Zhang, Yue Liao, Si Liu, Miao Lu, Yongliang Wang, Chen Gao, XIAOBO LI
|
https://papers.nips.cc/paper_files/paper/2021/hash/8f1d43620bc6bb580df6e80b0dc05c48-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12939-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8f1d43620bc6bb580df6e80b0dc05c48-Paper.pdf
|
https://openreview.net/forum?id=qDrpme0FAi
|
https://papers.nips.cc/paper_files/paper/2021/file/8f1d43620bc6bb580df6e80b0dc05c48-Supplemental.zip
|
Two-stage methods have dominated Human-Object Interaction (HOI) detection for several years. Recently, one-stage HOI detection methods have become popular. In this paper, we aim to explore the essential pros and cons of two-stage and one-stage methods. With this as the goal, we find that conventional two-stage methods mainly suffer from locating positive interactive human-object pairs, while one-stage methods struggle to make an appropriate trade-off in multi-task learning, i.e., object detection and interaction classification. Therefore, a core problem is how to take the essence and discard the dregs from the conventional two types of methods. To this end, we propose a novel one-stage framework that disentangles human-object detection and interaction classification in a cascade manner. In detail, we first design a human-object pair generator based on a state-of-the-art one-stage HOI detector by removing the interaction classification module or head, and then design a relatively isolated interaction classifier to classify each human-object pair. The two cascade decoders in our proposed framework can each focus on one specific task, detection or interaction classification. In terms of the specific implementation, we adopt a transformer-based HOI detector as our base model. The newly introduced disentangling paradigm outperforms existing methods by a large margin, with a significant relative mAP gain of 9.32% on HICO-Det. The source code is available at https://github.com/YueLiao/CDN.
| null |
Discerning Decision-Making Process of Deep Neural Networks with Hierarchical Voting Transformation
|
https://papers.nips.cc/paper_files/paper/2021/hash/8f1fa0193ca2b5d2fa0695827d8270e9-Abstract.html
|
Ying Sun, Hengshu Zhu, Chuan Qin, Fuzhen Zhuang, Qing He, Hui Xiong
|
https://papers.nips.cc/paper_files/paper/2021/hash/8f1fa0193ca2b5d2fa0695827d8270e9-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12940-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8f1fa0193ca2b5d2fa0695827d8270e9-Paper.pdf
|
https://openreview.net/forum?id=Qsd05NmZHIZ
|
https://papers.nips.cc/paper_files/paper/2021/file/8f1fa0193ca2b5d2fa0695827d8270e9-Supplemental.pdf
|
Neural network based deep learning techniques have shown great success for numerous applications. While it is desirable to understand their intrinsic decision-making processes, these deep neural networks often work in a black-box way. To this end, in this paper, we aim to discern the decision-making processes of neural networks through a hierarchical voting strategy by developing an explainable deep learning model, namely the Voting Transformation-based Explainable Neural Network (VOTEN). Specifically, instead of relying on massive feature combinations, VOTEN creatively models expressive single-valued voting functions between explicitly modeled latent concepts to achieve high fitting ability. Along this line, we first theoretically analyze the major components of VOTEN and prove the relationship and advantages of VOTEN compared with the Multi-Layer Perceptron (MLP), the basic structure of deep neural networks. Moreover, we design efficient algorithms to improve the model usability by explicitly showing the decision processes of VOTEN. Finally, extensive experiments on multiple real-world datasets clearly validate the performance and explainability of VOTEN.
| null |
Risk-averse Heteroscedastic Bayesian Optimization
|
https://papers.nips.cc/paper_files/paper/2021/hash/8f97d1d7e02158a83ceb2c14ff5372cd-Abstract.html
|
Anastasia Makarova, Ilnura Usmanova, Ilija Bogunovic, Andreas Krause
|
https://papers.nips.cc/paper_files/paper/2021/hash/8f97d1d7e02158a83ceb2c14ff5372cd-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12941-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8f97d1d7e02158a83ceb2c14ff5372cd-Paper.pdf
|
https://openreview.net/forum?id=QO93ev_yPqn
|
https://papers.nips.cc/paper_files/paper/2021/file/8f97d1d7e02158a83ceb2c14ff5372cd-Supplemental.pdf
|
Many black-box optimization tasks arising in high-stakes applications require risk-averse decisions. The standard Bayesian optimization (BO) paradigm, however, optimizes the expected value only. We generalize BO to trade mean and input-dependent variance of the objective, both of which we assume to be unknown a priori. In particular, we propose a novel risk-averse heteroscedastic Bayesian optimization algorithm (RAHBO) that aims to identify a solution with high return and low noise variance, while learning the noise distribution on the fly. To this end, we model both expectation and variance as (unknown) RKHS functions, and propose a novel risk-aware acquisition function. We bound the regret for our approach and provide a robust rule to report the final decision point for applications where only a single solution must be identified. We demonstrate the effectiveness of RAHBO on synthetic benchmark functions and hyperparameter tuning tasks.
| null |
Invertible DenseNets with Concatenated LipSwish
|
https://papers.nips.cc/paper_files/paper/2021/hash/8fb21ee7a2207526da55a679f0332de2-Abstract.html
|
Yura Perugachi-Diaz, Jakub Tomczak, Sandjai Bhulai
|
https://papers.nips.cc/paper_files/paper/2021/hash/8fb21ee7a2207526da55a679f0332de2-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12942-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8fb21ee7a2207526da55a679f0332de2-Paper.pdf
|
https://openreview.net/forum?id=btfPZRDdP2F
|
https://papers.nips.cc/paper_files/paper/2021/file/8fb21ee7a2207526da55a679f0332de2-Supplemental.pdf
|
We introduce Invertible Dense Networks (i-DenseNets), a more parameter-efficient extension of Residual Flows. The method relies on an analysis of the Lipschitz continuity of the concatenation in DenseNets, where we enforce invertibility of the network by satisfying the Lipschitz constant. Furthermore, we propose a learnable weighted concatenation, which not only improves the model performance but also indicates the importance of the concatenated weighted representation. Additionally, we introduce the Concatenated LipSwish as an activation function, for which we show how to enforce the Lipschitz condition and which boosts performance. The new architecture, i-DenseNet, outperforms Residual Flow and other flow-based models on density estimation evaluated in bits per dimension, where we utilize an equal parameter budget. Moreover, we show that the proposed model outperforms Residual Flows when trained as a hybrid model where the model is both a generative and a discriminative model.
| null |
Topological Detection of Trojaned Neural Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/8fd7f981e10b41330b618129afcaab2d-Abstract.html
|
Songzhu Zheng, Yikai Zhang, Hubert Wagner, Mayank Goswami, Chao Chen
|
https://papers.nips.cc/paper_files/paper/2021/hash/8fd7f981e10b41330b618129afcaab2d-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12943-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8fd7f981e10b41330b618129afcaab2d-Paper.pdf
|
https://openreview.net/forum?id=1r2EannVuIA
|
https://papers.nips.cc/paper_files/paper/2021/file/8fd7f981e10b41330b618129afcaab2d-Supplemental.pdf
|
Deep neural networks are known to have security issues. One particular threat is the Trojan attack. It occurs when the attackers stealthily manipulate the model's behavior through Trojaned training samples, which can later be exploited. Guided by basic neuroscientific principles, we discover subtle -- yet critical -- structural deviation characterizing Trojaned models. In our analysis we use topological tools. They allow us to model high-order dependencies in the networks, robustly compare different networks, and localize structural abnormalities. One interesting observation is that Trojaned models develop short-cuts from shallow to deep layers. Inspired by these observations, we devise a strategy for robust detection of Trojaned models. Compared to standard baselines it displays better performance on multiple benchmarks.
| null |
Provably Strict Generalisation Benefit for Invariance in Kernel Methods
|
https://papers.nips.cc/paper_files/paper/2021/hash/8fe04df45a22b63156ebabbb064fcd5e-Abstract.html
|
Bryn Elesedy
|
https://papers.nips.cc/paper_files/paper/2021/hash/8fe04df45a22b63156ebabbb064fcd5e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12944-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8fe04df45a22b63156ebabbb064fcd5e-Paper.pdf
|
https://openreview.net/forum?id=yKdYdQbo22W
| null |
It is a commonly held belief that enforcing invariance improves generalisation. Although this approach enjoys widespread popularity, it is only very recently that a rigorous theoretical demonstration of this benefit has been established. In this work we build on the function space perspective of Elesedy and Zaidi [8] to derive a strictly non-zero generalisation benefit of incorporating invariance in kernel ridge regression when the target is invariant to the action of a compact group. We study invariance enforced by feature averaging and find that generalisation is governed by a notion of effective dimension that arises from the interplay between the kernel and the group. In building towards this result, we find that the action of the group induces an orthogonal decomposition of both the reproducing kernel Hilbert space and its kernel, which may be of interest in its own right.
| null |
Formalizing the Generalization-Forgetting Trade-off in Continual Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/901797aebf0b23ecbab534d61ad33bb1-Abstract.html
|
Krishnan Raghavan, Prasanna Balaprakash
|
https://papers.nips.cc/paper_files/paper/2021/hash/901797aebf0b23ecbab534d61ad33bb1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12945-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/901797aebf0b23ecbab534d61ad33bb1-Paper.pdf
|
https://openreview.net/forum?id=u1XV9BPAB9
|
https://papers.nips.cc/paper_files/paper/2021/file/901797aebf0b23ecbab534d61ad33bb1-Supplemental.pdf
|
We formulate the continual learning (CL) problem via dynamic programming and model the trade-off between catastrophic forgetting and generalization as a two-player sequential game. In this approach, player 1 maximizes the cost due to lack of generalization whereas player 2 minimizes the cost due to catastrophic forgetting. We show theoretically that a balance point between the two players exists for each task and that this point is stable (once the balance is achieved, the two players stay at the balance point). Next, we introduce balanced continual learning (BCL), which is designed to attain balance between generalization and forgetting and empirically demonstrate that BCL is comparable to or better than the state of the art.
| null |
Risk-Aware Transfer in Reinforcement Learning using Successor Features
|
https://papers.nips.cc/paper_files/paper/2021/hash/90610aa0e24f63ec6d2637e06f9b9af2-Abstract.html
|
Michael Gimelfarb, Andre Barreto, Scott Sanner, Chi-Guhn Lee
|
https://papers.nips.cc/paper_files/paper/2021/hash/90610aa0e24f63ec6d2637e06f9b9af2-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12946-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/90610aa0e24f63ec6d2637e06f9b9af2-Paper.pdf
|
https://openreview.net/forum?id=a_f_NR8mMr9
|
https://papers.nips.cc/paper_files/paper/2021/file/90610aa0e24f63ec6d2637e06f9b9af2-Supplemental.pdf
|
Sample efficiency and risk-awareness are central to the development of practical reinforcement learning (RL) for complex decision-making. The former can be addressed by transfer learning, while the latter by optimizing some utility function of the return. However, the problem of transferring skills in a risk-aware manner is not well-understood. In this paper, we address the problem of transferring policies between tasks in a common domain that differ only in their reward functions, in which risk is measured by the variance of reward streams. Our approach begins by extending the idea of generalized policy improvement to maximize entropic utilities, thus extending the dynamic programming's policy improvement operation to sets of policies \emph{and} levels of risk-aversion. Next, we extend the idea of successor features (SF), a value function representation that decouples the environment dynamics from the rewards, to capture the variance of returns. Our resulting risk-aware successor features (RaSF) integrate seamlessly within the RL framework, inherit the superior task generalization ability of SFs, while incorporating risk into the decision-making. Experiments on a discrete navigation domain and control of a simulated robotic arm demonstrate the ability of RaSFs to outperform alternative methods including SFs, when taking the risk of the learned policies into account.
| null |
Causal Inference for Event Pairs in Multivariate Point Processes
|
https://papers.nips.cc/paper_files/paper/2021/hash/9078f2a8254704bd760460f027072e52-Abstract.html
|
Tian Gao, Dharmashankar Subramanian, Debarun Bhattacharjya, Xiao Shou, Nicholas Mattei, Kristin P Bennett
|
https://papers.nips.cc/paper_files/paper/2021/hash/9078f2a8254704bd760460f027072e52-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12947-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9078f2a8254704bd760460f027072e52-Paper.pdf
|
https://openreview.net/forum?id=x8qirBbT9xp
|
https://papers.nips.cc/paper_files/paper/2021/file/9078f2a8254704bd760460f027072e52-Supplemental.pdf
|
Causal inference and discovery from observational data has been extensively studied across multiple fields. However, most prior work has focused on independent and identically distributed (i.i.d.) data. In this paper, we propose a formalization for causal inference between pairs of event variables in multivariate recurrent event streams by extending Rubin's framework for the average treatment effect (ATE) and propensity scores to multivariate point processes. Analogous to a joint probability distribution representing i.i.d. data, a multivariate point process represents data involving asynchronous and irregularly spaced occurrences of various types of events over a common timeline. We theoretically justify our point process causal framework and show how to obtain unbiased estimates of the proposed measure. We conduct an experimental investigation using synthetic and real-world event datasets, where our proposed causal inference framework is shown to exhibit superior performance against a set of baseline pairwise causal association scores.
| null |
Evaluating model performance under worst-case subpopulations
|
https://papers.nips.cc/paper_files/paper/2021/hash/908075ea2c025c335f4865f7db427062-Abstract.html
|
Mike Li, Hongseok Namkoong, Shangzhou Xia
|
https://papers.nips.cc/paper_files/paper/2021/hash/908075ea2c025c335f4865f7db427062-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12948-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/908075ea2c025c335f4865f7db427062-Paper.pdf
|
https://openreview.net/forum?id=nehzxAdyJxF
|
https://papers.nips.cc/paper_files/paper/2021/file/908075ea2c025c335f4865f7db427062-Supplemental.zip
|
The performance of ML models degrades when the training population is different from that seen under operation. Towards assessing distributional robustness, we study the worst-case performance of a model over all subpopulations of a given size, defined with respect to core attributes $Z$. This notion of robustness can consider arbitrary (continuous) attributes $Z$, and automatically accounts for complex intersectionality in disadvantaged groups. We develop a scalable yet principled two-stage estimation procedure that can evaluate the robustness of state-of-the-art models. We prove that our procedure enjoys several finite-sample convergence guarantees, including dimension-free convergence. Instead of overly conservative notions based on Rademacher complexities, our evaluation error depends on the dimension of $Z$ only through the out-of-sample error in estimating the performance conditional on $Z$. On real datasets, we demonstrate that our method certifies the robustness of a model and prevents deployment of unreliable models.
| null |
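For per-example losses, the worst-case average loss over any subpopulation containing at least an $\alpha$-fraction of the data is the average of the worst $\alpha$-fraction of losses (a CVaR-style tail average). The plug-in sketch below illustrates only that quantity; it omits the paper's two-stage estimation of performance conditional on the attributes $Z$ and its finite-sample guarantees, and the exponential loss vector is synthetic.

```python
import numpy as np

def worst_case_subpopulation_loss(losses, alpha):
    """Average loss over the worst-performing subpopulation containing at least an
    alpha-fraction of the examples: a tail average (CVaR) of the per-example losses."""
    losses = np.sort(np.asarray(losses, dtype=float))[::-1]   # largest losses first
    k = max(1, int(np.ceil(alpha * len(losses))))
    return losses[:k].mean()

rng = np.random.default_rng(0)
losses = rng.exponential(scale=1.0, size=10_000)              # synthetic per-example losses

print("average loss:           ", losses.mean())
print("worst 10% subpopulation:", worst_case_subpopulation_loss(losses, alpha=0.10))
```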
Privately Publishable Per-instance Privacy
|
https://papers.nips.cc/paper_files/paper/2021/hash/9087b0efc7c7acd1ef7e153678809c77-Abstract.html
|
Rachel Redberg, Yu-Xiang Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/9087b0efc7c7acd1ef7e153678809c77-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12949-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9087b0efc7c7acd1ef7e153678809c77-Paper.pdf
|
https://openreview.net/forum?id=pPbrtkTHe9
|
https://papers.nips.cc/paper_files/paper/2021/file/9087b0efc7c7acd1ef7e153678809c77-Supplemental.pdf
|
We consider how to privately share the personalized privacy losses incurred by objective perturbation, using per-instance differential privacy (pDP). Standard differential privacy (DP) gives us a worst-case bound that might be orders of magnitude larger than the privacy loss to a particular individual relative to a fixed dataset. The pDP framework provides a more fine-grained analysis of the privacy guarantee to a target individual, but the per-instance privacy loss itself might be a function of sensitive data. In this paper, we analyze the per-instance privacy loss of releasing a private empirical risk minimizer learned via objective perturbation, and propose a group of methods to privately and accurately publish the pDP losses at little to no additional privacy cost.
| null |
Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning
|
https://papers.nips.cc/paper_files/paper/2021/hash/90cc440b1b8caa520c562ac4e4bbcb51-Abstract.html
|
Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm
|
https://papers.nips.cc/paper_files/paper/2021/hash/90cc440b1b8caa520c562ac4e4bbcb51-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12950-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/90cc440b1b8caa520c562ac4e4bbcb51-Paper.pdf
|
https://openreview.net/forum?id=1TuwAYxRAC
|
https://papers.nips.cc/paper_files/paper/2021/file/90cc440b1b8caa520c562ac4e4bbcb51-Supplemental.pdf
|
Unsupervised domain adaptation (UDA) enables cross-domain learning without target domain labels by transferring knowledge from a labeled source domain whose distribution differs from that of the target. However, UDA is not always successful and several accounts of 'negative transfer' have been reported in the literature. In this work, we prove a simple lower bound on the target domain error that complements the existing upper bound. Our bound shows the insufficiency of minimizing source domain error and marginal distribution mismatch for a guaranteed reduction in the target domain error, due to the possible increase of induced labeling function mismatch. This insufficiency is further illustrated through simple distributions for which the same UDA approach succeeds, fails, and may succeed or fail with an equal chance. Motivated by this, we propose novel data poisoning attacks to fool UDA methods into learning representations that produce large target domain errors. We evaluate the effect of these attacks on popular UDA methods using benchmark datasets where they have been previously shown to be successful. Our results show that poisoning can significantly decrease the target domain accuracy, dropping it to almost 0% in some cases, with the addition of only 10% poisoned data in the source domain. The failure of these UDA methods demonstrates their limitations at guaranteeing cross-domain generalization consistent with our lower bound. Thus, evaluating UDA methods in adversarial settings such as data poisoning provides a better sense of their robustness to data distributions unfavorable for UDA.
| null |
Coresets for Clustering with Missing Values
|
https://papers.nips.cc/paper_files/paper/2021/hash/90fd4f88f588ae64038134f1eeaa023f-Abstract.html
|
Vladimir Braverman, Shaofeng Jiang, Robert Krauthgamer, Xuan Wu
|
https://papers.nips.cc/paper_files/paper/2021/hash/90fd4f88f588ae64038134f1eeaa023f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12951-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/90fd4f88f588ae64038134f1eeaa023f-Paper.pdf
|
https://openreview.net/forum?id=1H6zA8wIhKk
|
https://papers.nips.cc/paper_files/paper/2021/file/90fd4f88f588ae64038134f1eeaa023f-Supplemental.pdf
|
We provide the first coreset for clustering points in $\mathbb{R}^d$ that have multiple missing values (coordinates). Previous coreset constructions only allow one missing coordinate. The challenge in this setting is that objective functions, like $k$-Means, are evaluated only on the set of available (non-missing) coordinates, which varies across points. Recall that an $\epsilon$-coreset of a large dataset is a small proxy, usually a reweighted subset of points, that $(1+\epsilon)$-approximates the clustering objective for every possible center set. Our coresets for $k$-Means and $k$-Median clustering have size $(jk)^{O(\min(j,k))} (\epsilon^{-1} d \log n)^2$, where $n$ is the number of data points, $d$ is the dimension and $j$ is the maximum number of missing coordinates for each data point. We further design an algorithm to construct these coresets in near-linear time, and consequently improve a recent quadratic-time PTAS for $k$-Means with missing values [Eiben et al., SODA 2021] to near-linear time. We validate our coreset construction, which is based on importance sampling and is easy to implement, on various real data sets. Our coreset exhibits a flexible tradeoff between coreset size and accuracy, and generally outperforms the uniform-sampling baseline. Furthermore, it significantly speeds up a Lloyd's-style heuristic for $k$-Means with missing values.
| null |
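The objective described above evaluates the $k$-Means cost of each point only on its available (non-missing) coordinates. A minimal NumPy version of that cost, with NaNs marking missing entries, is sketched below; the coreset construction itself (importance sampling with the stated size bound) is not reproduced, and the toy points and centers are made up.

```python
import numpy as np

def kmeans_cost_with_missing(points, centers):
    """k-Means cost where each point's squared distance to its nearest center is
    computed only over that point's available (non-NaN) coordinates."""
    total = 0.0
    for x in points:
        avail = ~np.isnan(x)                                   # observed coordinates of this point
        d2 = np.sum((centers[:, avail] - x[avail]) ** 2, axis=1)
        total += d2.min()                                      # assign the point to its best center
    return total

points = np.array([[1.0, 2.0, np.nan],
                   [np.nan, 0.0, 0.0],
                   [5.0, np.nan, 4.0]])
centers = np.array([[1.0, 1.0, 0.0],
                    [5.0, 5.0, 5.0]])
print(kmeans_cost_with_missing(points, centers))   # 3.0 for this toy instance
```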
Boosting with Multiple Sources
|
https://papers.nips.cc/paper_files/paper/2021/hash/9103820024efb30b451d006dc4ab3370-Abstract.html
|
Corinna Cortes, Mehryar Mohri, Dmitry Storcheus, Ananda Theertha Suresh
|
https://papers.nips.cc/paper_files/paper/2021/hash/9103820024efb30b451d006dc4ab3370-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12952-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9103820024efb30b451d006dc4ab3370-Paper.pdf
|
https://openreview.net/forum?id=1oP1duoZxx
|
https://papers.nips.cc/paper_files/paper/2021/file/9103820024efb30b451d006dc4ab3370-Supplemental.pdf
|
We study the problem of learning accurate ensemble predictors, in particular boosting, in the presence of multiple source domains. We show that the standard convex combination ensembles in general cannot succeed in this scenario and adopt instead a domain-weighted combination. We introduce and analyze a new boosting algorithm, MULTIBOOST, for this scenario and show that it benefits from favorable theoretical guarantees. We also report the results of several experiments with our algorithm demonstrating that it outperforms natural baselines on multi-source text-based, image-based and tabular data. We further present an extension of our algorithm to the federated learning scenario and report favorable experimental results for that setting as well. Additionally, we describe in detail an extension of our algorithm to the multi-class setting, MCMULTIBOOST, for which we also report experimental results.
| null |
Dynamic Neural Representational Decoders for High-Resolution Semantic Segmentation
|
https://papers.nips.cc/paper_files/paper/2021/hash/912d2b1c7b2826caf99687388d2e8f7c-Abstract.html
|
Bowen Zhang, Yifan liu, Zhi Tian, Chunhua Shen
|
https://papers.nips.cc/paper_files/paper/2021/hash/912d2b1c7b2826caf99687388d2e8f7c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12953-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/912d2b1c7b2826caf99687388d2e8f7c-Paper.pdf
|
https://openreview.net/forum?id=eL-mdUUwQVZ
|
https://papers.nips.cc/paper_files/paper/2021/file/912d2b1c7b2826caf99687388d2e8f7c-Supplemental.pdf
|
Semantic segmentation requires per-pixel prediction for a given image. Typically, the output resolution of a segmentation network is severely reduced due to the downsampling operations in the CNN backbone. Most previous methods employ upsampling decoders to recover the spatial resolution. Various decoders were designed in the literature. Here, we propose a novel decoder, termed dynamic neural representational decoder (NRD), which is simple yet significantly more efficient. As each location on the encoder's output corresponds to a local patch of the semantic labels, in this work, we represent these local patches of labels with compact neural networks. This neural representation enables our decoder to leverage the smoothness prior in the semantic label space, and thus makes our decoder more efficient. Furthermore, these neural representations are dynamically generated and conditioned on the outputs of the encoder networks. The desired semantic labels can be efficiently decoded from the neural representations, resulting in high-resolution semantic segmentation predictions. We empirically show that our proposed decoder can outperform the decoder in DeeplabV3+ with only $\sim 30\%$ computational complexity, and achieve competitive performance with the methods using dilated encoders with only $\sim 15\%$ computation. Experiments on Cityscapes, ADE20K, and Pascal Context demonstrate the effectiveness and efficiency of our proposed method.
| null |
Dense Keypoints via Multiview Supervision
|
https://papers.nips.cc/paper_files/paper/2021/hash/914101ec47c52b48a7b6ccc6f5a76f1f-Abstract.html
|
Zhixuan Yu, Haozheng Yu, Long Sha, Sujoy Ganguly, Hyun Soo Park
|
https://papers.nips.cc/paper_files/paper/2021/hash/914101ec47c52b48a7b6ccc6f5a76f1f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12954-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/914101ec47c52b48a7b6ccc6f5a76f1f-Paper.pdf
|
https://openreview.net/forum?id=hJOLFJIJ_zy
|
https://papers.nips.cc/paper_files/paper/2021/file/914101ec47c52b48a7b6ccc6f5a76f1f-Supplemental.pdf
|
This paper presents a new end-to-end semi-supervised framework to learn a dense keypoint detector using unlabeled multiview images. A key challenge lies in finding the exact correspondences between the dense keypoints in multiple views since the inverse of the keypoint mapping can be neither analytically derived nor differentiated. This limits applying existing multiview supervision approaches used to learn sparse keypoints that rely on the exact correspondences. To address this challenge, we derive a new probabilistic epipolar constraint that encodes the two desired properties. (1) Soft correspondence: we define a matchability, which measures a likelihood of a point matching to the other image’s corresponding point, thus relaxing the requirement of the exact correspondences. (2) Geometric consistency: every point in the continuous correspondence fields must satisfy the multiview consistency collectively. We formulate a probabilistic epipolar constraint using a weighted average of epipolar errors through the matchability thereby generalizing the point-to-point geometric error to the field-to-field geometric error. This generalization facilitates learning a geometrically coherent dense keypoint detection model by utilizing a large number of unlabeled multiview images. Additionally, to prevent degenerative cases, we employ a distillation-based regularization by using a pretrained model. Finally, we design a new neural network architecture, made of twin networks, that effectively minimizes the probabilistic epipolar errors of all possible correspondences between two view images by building affinity matrices. Our method shows superior performance compared to existing methods, including non-differentiable bootstrapping in terms of keypoint accuracy, multiview consistency, and 3D reconstruction accuracy.
| null |
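One concrete reading of the "weighted average of epipolar errors through the matchability" in the abstract above is sketched below in NumPy: a softmax over an affinity matrix plays the role of the matchability and weights symmetric epipolar distances computed from a fundamental matrix. The softmax normalization and the symmetric distance are illustrative choices, not the paper's exact formulation.

```python
# Illustrative field-to-field epipolar loss: expected epipolar error under a
# softmax "matchability" derived from an affinity matrix (names are hypothetical).
import numpy as np

def symmetric_epipolar_error(x1, x2, F):
    """x1: (N, 3) homogeneous points in view 1, x2: (M, 3) in view 2, F: (3, 3)."""
    l2 = x1 @ F.T                                   # epipolar lines in view 2, (N, 3)
    l1 = x2 @ F                                     # epipolar lines in view 1, (M, 3)
    num = (x2 @ F @ x1.T) ** 2                      # squared algebraic residuals, (M, N)
    return num / (l2[:, 0] ** 2 + l2[:, 1] ** 2)[None, :] \
         + num / (l1[:, 0] ** 2 + l1[:, 1] ** 2)[:, None]

def probabilistic_epipolar_loss(affinity, x1, x2, F):
    """affinity: (M, N) scores between view-2 and view-1 points."""
    m = np.exp(affinity - affinity.max(axis=0, keepdims=True))
    m /= m.sum(axis=0, keepdims=True)               # matchability: columns sum to 1
    return float((m * symmetric_epipolar_error(x1, x2, F)).sum(axis=0).mean())

# Toy usage with random points and a random rank-2 fundamental matrix.
rng = np.random.default_rng(0)
x1 = np.hstack([rng.uniform(0, 100, (50, 2)), np.ones((50, 1))])
x2 = np.hstack([rng.uniform(0, 100, (60, 2)), np.ones((60, 1))])
U, S, Vt = np.linalg.svd(rng.standard_normal((3, 3)))
F = U @ np.diag([S[0], S[1], 0.0]) @ Vt             # enforce rank 2
print(probabilistic_epipolar_loss(rng.standard_normal((60, 50)), x1, x2, F))
```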
Scatterbrain: Unifying Sparse and Low-rank Attention
|
https://papers.nips.cc/paper_files/paper/2021/hash/9185f3ec501c674c7c788464a36e7fb3-Abstract.html
|
Beidi Chen, Tri Dao, Eric Winsor, Zhao Song, Atri Rudra, Christopher Ré
|
https://papers.nips.cc/paper_files/paper/2021/hash/9185f3ec501c674c7c788464a36e7fb3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12955-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9185f3ec501c674c7c788464a36e7fb3-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=SehIKudiIo1
|
https://papers.nips.cc/paper_files/paper/2021/file/9185f3ec501c674c7c788464a36e7fb3-Supplemental.pdf
|
Recent advances in efficient Transformers have exploited either the sparsity or low-rank properties of attention matrices to reduce the computational and memory bottlenecks of modeling long sequences. However, it is still challenging to balance the trade-off between model quality and efficiency to perform a one-size-fits-all approximation for different tasks. To better understand this trade-off, we observe that sparse and low-rank approximations excel in different regimes, determined by the softmax temperature in attention, and sparse + low-rank can outperform each individually. Inspired by the classical robust-PCA algorithm for sparse and low-rank decomposition, we propose Scatterbrain, a novel way to unify sparse (via locality sensitive hashing) and low-rank (via kernel feature map) attention for accurate and efficient approximation. The estimation is unbiased with provably low error. We empirically show that Scatterbrain can achieve $2.1 \times$ lower error than baselines when serving as a drop-in replacement in BigGAN image generation and pre-trained T2T-ViT. On a pre-trained T2T Vision transformer, even without fine-tuning, Scatterbrain can reduce $98\%$ of attention memory at the cost of only $1\%$ drop in accuracy. We demonstrate Scatterbrain for end-to-end training with up to $4$ points better perplexity and 5 points better average accuracy than sparse or low-rank efficient transformers on language modeling and long-range-arena tasks.
| null |
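A toy single-head NumPy sketch of the sparse + low-rank idea in the Scatterbrain abstract above: the low-rank part uses Performer-style positive random features for the softmax kernel, and the sparse part adds the difference between exact and approximated scores on (query, key) pairs that collide under a simple random-hyperplane hash, standing in for the paper's LSH scheme. Scaling by $\sqrt{d}$ and numerical safeguards are omitted.

```python
# Toy sparse + low-rank attention in the spirit of Scatterbrain (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def softmax_kernel_features(x, proj):
    """Positive random features with E[phi(q) . phi(k)] = exp(q . k)."""
    m = proj.shape[1]
    return np.exp(x @ proj - (x ** 2).sum(-1, keepdims=True) / 2) / np.sqrt(m)

def scatterbrain_attention(Q, K, V, m=256, n_bits=4):
    n, d = Q.shape
    proj = rng.standard_normal((d, m))
    qf, kf = softmax_kernel_features(Q, proj), softmax_kernel_features(K, proj)
    num = qf @ (kf.T @ V)                     # low-rank numerator, (n, d_v)
    den = qf @ kf.sum(axis=0)                 # low-rank normalizer, (n,)
    # Sparse correction on pairs sharing a random-hyperplane hash bucket.
    planes = rng.standard_normal((d, n_bits))
    cq = (Q @ planes > 0).astype(int) @ (2 ** np.arange(n_bits))
    ck = (K @ planes > 0).astype(int) @ (2 ** np.arange(n_bits))
    for i in range(n):
        idx = np.where(ck == cq[i])[0]
        if idx.size:
            exact = np.exp(Q[i] @ K[idx].T)   # exact unnormalized scores
            approx = qf[i] @ kf[idx].T        # what the low-rank part already contributes
            num[i] += (exact - approx) @ V[idx]
            den[i] += (exact - approx).sum()
    return num / den[:, None]

Q, K, V = (rng.standard_normal((128, 16)) * 0.3 for _ in range(3))
out = scatterbrain_attention(Q, K, V)
exact = np.exp(Q @ K.T); exact /= exact.sum(axis=1, keepdims=True)
print(np.abs(out - exact @ V).max())          # approximation error of the toy estimator
```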
PTR: A Benchmark for Part-based Conceptual, Relational, and Physical Reasoning
|
https://papers.nips.cc/paper_files/paper/2021/hash/918f5cd5a5c0d48671d4d4fc54bab2e9-Abstract.html
|
Yining Hong, Li Yi, Josh Tenenbaum, Antonio Torralba, Chuang Gan
|
https://papers.nips.cc/paper_files/paper/2021/hash/918f5cd5a5c0d48671d4d4fc54bab2e9-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12956-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/918f5cd5a5c0d48671d4d4fc54bab2e9-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=dTp-VUFDIB
|
https://papers.nips.cc/paper_files/paper/2021/file/918f5cd5a5c0d48671d4d4fc54bab2e9-Supplemental.pdf
|
A critical aspect of human visual perception is the ability to parse visual scenes into individual objects and further into object parts, forming part-whole hierarchies. Such composite structures could induce a rich set of semantic concepts and relations, thus playing an important role in the interpretation and organization of visual signals as well as in the generalization of visual perception and reasoning. However, existing visual reasoning benchmarks mostly focus on objects rather than parts. Visual reasoning based on the full part-whole hierarchy is much more challenging than object-centric reasoning due to finer-grained concepts, richer geometry relations, and more complex physics. Therefore, to better support part-based conceptual, relational, and physical reasoning, we introduce a new large-scale diagnostic visual reasoning dataset named PTR. PTR contains around 80k RGBD synthetic images with ground truth object and part level annotations regarding semantic instance segmentation, color attributes, spatial and geometric relationships, and certain physical properties such as stability. These images are paired with 800k machine-generated questions covering various reasoning types, making them a good testbed for visual reasoning models. We examine several state-of-the-art visual reasoning models on this dataset and observe that they still make many surprising mistakes in situations where humans can easily infer the correct answer. We believe this dataset will open up new opportunities for part-based reasoning. The PTR dataset and baseline models are publicly available.
| null |
Property-Aware Relation Networks for Few-Shot Molecular Property Prediction
|
https://papers.nips.cc/paper_files/paper/2021/hash/91bc333f6967019ac47b49ca0f2fa757-Abstract.html
|
Yaqing Wang, Abulikemu Abuduweili, Quanming Yao, Dejing Dou
|
https://papers.nips.cc/paper_files/paper/2021/hash/91bc333f6967019ac47b49ca0f2fa757-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12957-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/91bc333f6967019ac47b49ca0f2fa757-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=vGjTOxss-Dl
|
https://papers.nips.cc/paper_files/paper/2021/file/91bc333f6967019ac47b49ca0f2fa757-Supplemental.pdf
|
Molecular property prediction plays a fundamental role in drug discovery to identify candidate molecules with target properties. However, molecular property prediction is essentially a few-shot problem, which makes it hard to use regular machine learning models. In this paper, we propose Property-Aware Relation networks (PAR) to handle this problem. In comparison to existing works, we leverage the fact that both relevant substructures and relationships among molecules change across different molecular properties. We first introduce a property-aware embedding function to transform the generic molecular embeddings to a substructure-aware space relevant to the target property. Further, we design an adaptive relation graph learning module to jointly estimate the molecular relation graph and refine molecular embeddings w.r.t. the target property, such that the limited labels can be effectively propagated among similar molecules. We adopt a meta-learning strategy where the parameters are selectively updated within tasks in order to model generic and property-aware knowledge separately. Extensive experiments on benchmark molecular property prediction datasets show that PAR consistently outperforms existing methods and can obtain property-aware molecular embeddings and model the molecular relation graph properly.
| null |
Differentially Private Learning with Adaptive Clipping
|
https://papers.nips.cc/paper_files/paper/2021/hash/91cff01af640a24e7f9f7a5ab407889f-Abstract.html
|
Galen Andrew, Om Thakkar, Brendan McMahan, Swaroop Ramaswamy
|
https://papers.nips.cc/paper_files/paper/2021/hash/91cff01af640a24e7f9f7a5ab407889f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12958-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/91cff01af640a24e7f9f7a5ab407889f-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=RUQ1zwZR8_
|
https://papers.nips.cc/paper_files/paper/2021/file/91cff01af640a24e7f9f7a5ab407889f-Supplemental.pdf
|
Existing approaches for training neural networks with user-level differential privacy (e.g., DP Federated Averaging) in federated learning (FL) settings involve bounding the contribution of each user's model update by {\em clipping} it to some constant value. However, there is no good {\em a priori} setting of the clipping norm across tasks and learning settings: the update norm distribution depends on the model architecture and loss, the amount of data on each device, the client learning rate, and possibly various other parameters. We propose a method wherein instead of a fixed clipping norm, one clips to a value at a specified quantile of the update norm distribution, where the value at the quantile is itself estimated online, with differential privacy. The method tracks the quantile closely, uses a negligible amount of privacy budget, is compatible with other federated learning technologies such as compression and secure aggregation, and has a straightforward joint DP analysis with DP-FedAvg. Experiments demonstrate that adaptive clipping to the median update norm works well across a range of federated learning tasks, eliminating the need to tune any clipping hyperparameter.
| null |
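The quantile-tracking rule described in the adaptive clipping abstract above lends itself to a very small simulation. In the NumPy sketch below, the clipping norm is multiplicatively nudged each round using a (noisy) fraction of un-clipped updates; the geometric update follows the abstract's description, while the constants and the way noise is injected are illustrative rather than the paper's exact DP mechanism.

```python
# Toy quantile-based adaptation of a clipping norm (constants are illustrative).
import numpy as np

rng = np.random.default_rng(0)

def adaptive_clip_round(update_norms, C, gamma=0.5, eta=0.2, noise_std=0.0):
    """One round: push the clipping norm C toward the gamma-quantile of update norms.

    noise_std stands in for the differentially private release of the un-clipped count.
    """
    m = len(update_norms)
    b = np.mean(update_norms <= C)                     # fraction of updates not clipped
    b_noisy = b + rng.normal(0.0, noise_std / max(m, 1))
    C_next = C * np.exp(-eta * (b_noisy - gamma))      # geometric update toward the quantile
    clipped = np.minimum(update_norms, C)              # norms after clipping at the old C
    return C_next, clipped

# Track the median of drifting update norms over rounds.
C = 1.0
for t in range(200):
    norms = np.abs(rng.normal(5.0 + 0.01 * t, 1.0, size=100))
    C, _ = adaptive_clip_round(norms, C, gamma=0.5, eta=0.2, noise_std=1.0)
print(f"final clipping norm ~ {C:.2f}")
```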
Can Less be More? When Increasing-to-Balancing Label Noise Rates Considered Beneficial
|
https://papers.nips.cc/paper_files/paper/2021/hash/91e50fe1e39af2869d3336eaaeebdb43-Abstract.html
|
Yang Liu, Jialu Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/91e50fe1e39af2869d3336eaaeebdb43-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12959-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/91e50fe1e39af2869d3336eaaeebdb43-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=VjKhSULF7Gb
|
https://papers.nips.cc/paper_files/paper/2021/file/91e50fe1e39af2869d3336eaaeebdb43-Supplemental.pdf
|
In this paper, we answer the question of when inserting label noise (less informative labels) can instead return us more accurate and fair models. We are primarily inspired by three observations: 1) In contrast to reducing label noise rates, increasing the noise rates is easy to implement; 2) Increasing a certain class of instances' label noise to balance the noise rates (increasing-to-balancing) results in an easier learning problem; 3) Increasing-to-balancing improves fairness guarantees against label bias. In this paper, we first quantify the trade-offs introduced by increasing a certain group of instances' label noise rate w.r.t. the loss of label informativeness and the lowered learning difficulties. We analytically demonstrate when such an increase is beneficial, in terms of either improved generalization power or the fairness guarantees. Then we present a method to insert label noise properly for the task of learning with noisy labels, either without or with a fairness constraint. The primary technical challenge we face is due to the fact that we would not know which data instances are suffering from higher noise, and we would not have the ground truth labels to verify any possible hypothesis. We propose a detection method that informs us which group of labels might suffer from higher noise without using ground truth labels. We formally establish the effectiveness of the proposed solution and demonstrate it with extensive experiments.
| null |
Projected GANs Converge Faster
|
https://papers.nips.cc/paper_files/paper/2021/hash/9219adc5c42107c4911e249155320648-Abstract.html
|
Axel Sauer, Kashyap Chitta, Jens Müller, Andreas Geiger
|
https://papers.nips.cc/paper_files/paper/2021/hash/9219adc5c42107c4911e249155320648-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12960-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=fUxqIofPPi
|
https://papers.nips.cc/paper_files/paper/2021/file/9219adc5c42107c4911e249155320648-Supplemental.pdf
|
Generative Adversarial Networks (GANs) produce high-quality images but are challenging to train. They need careful regularization, vast amounts of compute, and expensive hyper-parameter sweeps. We make significant headway on these issues by projecting generated and real samples into a fixed, pretrained feature space. Motivated by the finding that the discriminator cannot fully exploit features from deeper layers of the pretrained model, we propose a more effective strategy that mixes features across channels and resolutions. Our Projected GAN improves image quality, sample efficiency, and convergence speed. It is further compatible with resolutions of up to one Megapixel and advances the state-of-the-art Fréchet Inception Distance (FID) on twenty-two benchmark datasets. Importantly, Projected GANs match the previously lowest FIDs up to 40 times faster, cutting the wall-clock time from 5 days to less than 3 hours given the same computational resources.
| null |
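The central move in the Projected GAN abstract above, discriminating in a fixed pretrained feature space with random cross-channel mixing, can be sketched in PyTorch as follows. The ResNet-18 backbone, the single random 1x1 mixing convolution, and the small head are stand-ins for the paper's pretrained backbone and multi-scale feature-mixing blocks.

```python
# Sketch of a discriminator on frozen pretrained features with fixed random
# channel mixing (backbone choice and head architecture are illustrative).
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ProjectedDiscriminator(nn.Module):
    def __init__(self, feat_channels=512):
        super().__init__()
        backbone = resnet18()                 # in practice, ImageNet-pretrained weights
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        for p in self.features.parameters():
            p.requires_grad_(False)           # the projection stays fixed during GAN training
        self.mix = nn.Conv2d(feat_channels, feat_channels, 1, bias=False)
        nn.init.normal_(self.mix.weight, std=feat_channels ** -0.5)
        self.mix.weight.requires_grad_(False) # random mixing is also fixed
        self.head = nn.Sequential(            # only this small head is trained
            nn.Conv2d(feat_channels, 128, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 1),
        )

    def forward(self, img):                   # img: (B, 3, H, W)
        return self.head(self.mix(self.features(img)))   # per-location logits

# Hinge-style losses on dummy batches (generator omitted).
D = ProjectedDiscriminator()
real, fake = torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224)
d_loss = torch.relu(1 - D(real)).mean() + torch.relu(1 + D(fake)).mean()
print(float(d_loss))
```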
Generating High-Quality Explanations for Navigation in Partially-Revealed Environments
|
https://papers.nips.cc/paper_files/paper/2021/hash/926ec030f29f83ce5318754fdb631a33-Abstract.html
|
Gregory Stein
|
https://papers.nips.cc/paper_files/paper/2021/hash/926ec030f29f83ce5318754fdb631a33-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12961-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/926ec030f29f83ce5318754fdb631a33-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=jQuTUXeQsy8
|
https://papers.nips.cc/paper_files/paper/2021/file/926ec030f29f83ce5318754fdb631a33-Supplemental.zip
|
We present an approach for generating natural language explanations of high-level behavior of autonomous agents navigating in partially-revealed environments. Our counterfactual explanations communicate changes to interpretable statistics of the belief (e.g., the likelihood an exploratory action will reach the unseen goal) that are estimated from visual input via a deep neural network and used (via a Bellman equation variant) to inform planning far into the future. Additionally, our novel training procedure mimics explanation generation, allowing us to use planning performance as an objective measure of explanation quality. Simulated experiments validate that our explanations are both high quality and can be used in interventions to directly correct bad behavior; agents trained via our training-by-explaining procedure achieve 9.1% lower average cost than a non-learned baseline (12.7% after interventions) in environments derived from real-world floor plans.
| null |
De-randomizing MCMC dynamics with the diffusion Stein operator
|
https://papers.nips.cc/paper_files/paper/2021/hash/9271905e840548b8cada6d60c0cfd93b-Abstract.html
|
Zheyang Shen, Markus Heinonen, Samuel Kaski
|
https://papers.nips.cc/paper_files/paper/2021/hash/9271905e840548b8cada6d60c0cfd93b-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12962-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9271905e840548b8cada6d60c0cfd93b-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=ZAh31ihNaoF
|
https://papers.nips.cc/paper_files/paper/2021/file/9271905e840548b8cada6d60c0cfd93b-Supplemental.pdf
|
Approximate Bayesian inference estimates descriptors of an intractable target distribution - in essence, an optimization problem within a family of distributions. For example, Langevin dynamics (LD) extracts asymptotically exact samples from a diffusion process because the time evolution of its marginal distributions constitutes a curve that minimizes the KL-divergence via steepest descent in the Wasserstein space. Parallel to LD, Stein variational gradient descent (SVGD) similarly minimizes the KL, albeit endowed with a novel Stein-Wasserstein distance, by deterministically transporting a set of particle samples, thus de-randomizing the stochastic diffusion process. We propose de-randomized kernel-based particle samplers for all diffusion-based samplers known as MCMC dynamics. Following previous work in interpreting MCMC dynamics, we equip the Stein-Wasserstein space with a fiber-Riemannian Poisson structure, with the capacity of characterizing a fiber-gradient Hamiltonian flow that simulates MCMC dynamics. Such dynamics discretizes into generalized SVGD (GSVGD), a Stein-type deterministic particle sampler, with particle updates coinciding with applying the diffusion Stein operator to a kernel function. We demonstrate empirically that GSVGD can de-randomize complex MCMC dynamics, which combine the advantages of auxiliary momentum variables and Riemannian structure, while maintaining the high sample quality from an interacting particle system.
| null |
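As background for the de-randomized samplers above, the basic SVGD particle update that GSVGD generalizes is easy to state in NumPy; the RBF kernel and median-heuristic bandwidth below are the standard choices and not specific to this paper.

```python
# Plain SVGD update (the special case that GSVGD generalizes), for a Gaussian target.
import numpy as np

def svgd_step(x, grad_logp, step=0.1):
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]               # (n, n, dim)
    d2 = (diff ** 2).sum(-1)
    h = np.median(d2) / np.log(n + 1) + 1e-8            # median-heuristic bandwidth
    K = np.exp(-d2 / h)                                 # k(x_i, x_j)
    gradK = (-2.0 / h) * diff * K[:, :, None]           # grad of k w.r.t. its first argument
    phi = (K @ grad_logp(x) + gradK.sum(axis=0)) / n    # attraction + repulsion terms
    return x + step * phi

rng = np.random.default_rng(0)
x = rng.normal(3.0, 1.0, size=(200, 2))                 # particles start far from the target
for _ in range(500):
    x = svgd_step(x, lambda z: -z, step=0.2)            # target: standard 2-D Gaussian
print(x.mean(axis=0), x.std(axis=0))                    # approach ~[0, 0] and ~[1, 1]
```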
Sparsely Changing Latent States for Prediction and Planning in Partially Observable Domains
|
https://papers.nips.cc/paper_files/paper/2021/hash/927b028cfa24b23a09ff20c1a7f9b398-Abstract.html
|
Christian Gumbsch, Martin V. Butz, Georg Martius
|
https://papers.nips.cc/paper_files/paper/2021/hash/927b028cfa24b23a09ff20c1a7f9b398-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12963-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/927b028cfa24b23a09ff20c1a7f9b398-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=-VjKyYX-PI9
|
https://papers.nips.cc/paper_files/paper/2021/file/927b028cfa24b23a09ff20c1a7f9b398-Supplemental.pdf
|
A common approach to prediction and planning in partially observable domains is to use recurrent neural networks (RNNs), which ideally develop and maintain a latent memory about hidden, task-relevant factors. We hypothesize that many of these hidden factors in the physical world are constant over time, changing only sparsely. To study this hypothesis, we propose Gated $L_0$ Regularized Dynamics (GateL0RD), a novel recurrent architecture that incorporates the inductive bias to maintain stable, sparsely changing latent states. The bias is implemented by means of a novel internal gating function and a penalty on the $L_0$ norm of latent state changes. We demonstrate that GateL0RD can compete with or outperform state-of-the-art RNNs in a variety of partially observable prediction and control tasks. GateL0RD tends to encode the underlying generative factors of the environment, ignores spurious temporal dependencies, and generalizes better, improving sampling efficiency and overall performance in model-based planning and reinforcement learning tasks. Moreover, we show that the developing latent states can be easily interpreted, which is a step towards better explainability in RNNs.
| null |
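The inductive bias described in the GateL0RD abstract above, latent dimensions that change only when a gate opens, can be illustrated with the simplified recurrent cell below. The straight-through hard gate and the gate-count penalty are generic stand-ins for the paper's gating function and $L_0$ regularizer, chosen here for brevity.

```python
# Simplified "sparsely changing latent state" cell (not the paper's exact GateL0RD cell).
import torch
import torch.nn as nn

class SparseLatentCell(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.cand = nn.Linear(in_dim + hid_dim, hid_dim)   # proposed state change
        self.gate = nn.Linear(in_dim + hid_dim, hid_dim)   # gate pre-activation

    def forward(self, x, h):
        z = torch.cat([x, h], dim=-1)
        delta = torch.tanh(self.cand(z))
        g_soft = torch.sigmoid(self.gate(z))
        g = (g_soft > 0.5).float() + g_soft - g_soft.detach()  # straight-through binary gate
        h_new = h + g * delta                  # latent changes only where the gate is open
        open_gates = g.sum(dim=-1).mean()      # differentiable surrogate for ||delta h||_0
        return h_new, open_gates

cell = SparseLatentCell(in_dim=4, hid_dim=8)
h, penalties = torch.zeros(2, 8), []
for t in range(10):
    h, p = cell(torch.randn(2, 4), h)
    penalties.append(p)
l0_reg = torch.stack(penalties).mean()         # add lambda * l0_reg to the task loss
print(float(l0_reg))
```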
PreferenceNet: Encoding Human Preferences in Auction Design with Deep Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/92977ae4d2ba21425a59afb269c2a14e-Abstract.html
|
Neehar Peri, Michael Curry, Samuel Dooley, John Dickerson
|
https://papers.nips.cc/paper_files/paper/2021/hash/92977ae4d2ba21425a59afb269c2a14e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12964-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/92977ae4d2ba21425a59afb269c2a14e-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=amH9JxZN7C
|
https://papers.nips.cc/paper_files/paper/2021/file/92977ae4d2ba21425a59afb269c2a14e-Supplemental.pdf
|
The design of optimal auctions is a problem of interest in economics, game theory and computer science. Despite decades of effort, strategyproof, revenue-maximizing auction designs are still not known outside of restricted settings. However, recent methods using deep learning have shown some success in approximating optimal auctions, recovering several known solutions and outperforming strong baselines when optimal auctions are not known. In addition to maximizing revenue, auction mechanisms may also seek to encourage socially desirable constraints such as allocation fairness or diversity. However, these philosophical notions neither have standardization nor do they have widely accepted formal definitions. In this paper, we propose PreferenceNet, an extension of existing neural-network-based auction mechanisms to encode constraints using (potentially human-provided) exemplars of desirable allocations. In addition, we introduce a new metric to evaluate an auction allocation's adherence to such socially desirable constraints and demonstrate that our proposed method is competitive with current state-of-the-art neural-network-based auction designs. We validate our approach through human subject research and show that we are able to effectively capture real human preferences.
| null |
Large-Scale Learning with Fourier Features and Tensor Decompositions
|
https://papers.nips.cc/paper_files/paper/2021/hash/92a08bf918f44ccd961477be30023da1-Abstract.html
|
Frederiek Wesel, Kim Batselier
|
https://papers.nips.cc/paper_files/paper/2021/hash/92a08bf918f44ccd961477be30023da1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12965-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/92a08bf918f44ccd961477be30023da1-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=srzTZmjko0N
|
https://papers.nips.cc/paper_files/paper/2021/file/92a08bf918f44ccd961477be30023da1-Supplemental.pdf
|
Random Fourier features provide a way to tackle large-scale machine learning problems with kernel methods. Their slow Monte Carlo convergence rate has motivated the research of deterministic Fourier features whose approximation error can decrease exponentially in the number of basis functions. However, due to their tensor product extension to multiple dimensions, these methods suffer heavily from the curse of dimensionality, limiting their applicability to one-, two- or three-dimensional scenarios. In our approach we overcome said curse of dimensionality by exploiting the tensor product structure of deterministic Fourier features, which enables us to represent the model parameters as a low-rank tensor decomposition. We derive a monotonically converging block coordinate descent algorithm with linear complexity in both the sample size and the dimensionality of the inputs for a regularized squared loss function, allowing us to learn a parsimonious model in decomposed form using deterministic Fourier features. We demonstrate by means of numerical experiments how our low-rank tensor approach obtains the same performance as the corresponding nonparametric model, consistently outperforming random Fourier features.
| null |
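A small NumPy sketch of the model class in the abstract above: per-dimension deterministic Fourier features combined through a rank-$R$ CP decomposition of the weight tensor, fitted by sweeping a block coordinate (ALS-style) ridge update over the dimensions. The specific basis, regularization, and toy target are illustrative simplifications of the paper's algorithm.

```python
# Deterministic Fourier features + rank-R CP model, fitted by block coordinate descent.
import numpy as np

def fourier_features(x, M, period=2.0):
    """Deterministic 1-D Fourier basis on [-period/2, period/2]: 1, cos, sin, ..."""
    k = np.arange(1, M // 2 + 1)
    ang = 2 * np.pi * np.outer(x, k) / period
    return np.hstack([np.ones((len(x), 1)), np.cos(ang), np.sin(ang)])[:, :M]

def cp_predict(X, W, M):
    # G[n, r, d] = <phi(x_{n,d}), W[d][:, r]>; prediction = sum_r prod_d G[n, r, d]
    G = np.stack([fourier_features(X[:, d], M) @ W[d] for d in range(X.shape[1])], axis=-1)
    return G.prod(axis=-1).sum(axis=1)

def fit_cp_fourier(X, y, M=9, R=4, reg=1e-3, sweeps=10, seed=0):
    rng = np.random.default_rng(seed)
    N, D = X.shape
    W = [rng.normal(scale=0.5, size=(M, R)) for _ in range(D)]
    Phi = [fourier_features(X[:, d], M) for d in range(D)]       # (N, M) per dimension
    for _ in range(sweeps):
        for d in range(D):                                       # block coordinate descent
            others = np.ones((N, R))
            for d2 in range(D):
                if d2 != d:
                    others *= Phi[d2] @ W[d2]                    # (N, R)
            A = (Phi[d][:, :, None] * others[:, None, :]).reshape(N, M * R)
            w = np.linalg.solve(A.T @ A + reg * np.eye(M * R), A.T @ y)
            W[d] = w.reshape(M, R)
    return W

# Toy usage on a (nearly) separable target.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 3))
y = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1]) + 0.1 * X[:, 2]
W = fit_cp_fourier(X, y, M=9, R=4)
print(np.mean((cp_predict(X, W, M=9) - y) ** 2))                 # training MSE
```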
Hash Layers For Large Sparse Models
|
https://papers.nips.cc/paper_files/paper/2021/hash/92bf5e6240737e0326ea59846a83e076-Abstract.html
|
Stephen Roller, Sainbayar Sukhbaatar, arthur szlam, Jason Weston
|
https://papers.nips.cc/paper_files/paper/2021/hash/92bf5e6240737e0326ea59846a83e076-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12966-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/92bf5e6240737e0326ea59846a83e076-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=lMgDDWb1ULW
|
https://papers.nips.cc/paper_files/paper/2021/file/92bf5e6240737e0326ea59846a83e076-Supplemental.pdf
|
We investigate the training of sparse layers that use different parameters for different inputs based on hashing in large Transformer models. Specifically, we modify the feedforward layer to hash to different sets of weights depending on the current token, over all tokens in the sequence. We show that this procedure either outperforms or is competitive with learning-to-route mixture-of-expert methods such as Switch Transformers and BASE Layers, while requiring no routing parameters or extra terms in the objective function such as a load balancing loss, and no sophisticated assignment algorithm. We study the performance of different hashing techniques, hash sizes and input features, and show that balanced and random hashes focused on the most local features work best, compared to either learning clusters or using longer-range context. We show our approach works well both on large language modeling and dialogue tasks, and on downstream fine-tuning tasks.
| null |
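A minimal PyTorch sketch of the routing rule described in the Hash Layers abstract above: each token's id is hashed to one of K expert feed-forward blocks, so there are no routing parameters and no load-balancing loss. The fixed random token-to-expert table here stands in for the balanced hash variants studied in the paper.

```python
# Feed-forward layer whose expert is chosen by a fixed hash of the token id.
import torch
import torch.nn as nn

class HashFFN(nn.Module):
    def __init__(self, vocab_size, d_model=256, d_ff=1024, n_experts=8, seed=0):
        super().__init__()
        g = torch.Generator().manual_seed(seed)
        # Fixed random token->expert assignment (a simple stand-in for a balanced hash).
        self.register_buffer("route", torch.randint(n_experts, (vocab_size,), generator=g))
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, hidden, token_ids):        # hidden: (B, T, d), token_ids: (B, T)
        out = torch.zeros_like(hidden)
        expert_ids = self.route[token_ids]       # (B, T)
        for e, expert in enumerate(self.experts):
            mask = expert_ids == e
            if mask.any():
                out[mask] = expert(hidden[mask]) # each token goes to exactly one expert
        return out

layer = HashFFN(vocab_size=1000)
h = torch.randn(2, 16, 256)
ids = torch.randint(1000, (2, 16))
print(layer(h, ids).shape)                       # same shape as the input hidden states
```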
Sliced Mutual Information: A Scalable Measure of Statistical Dependence
|
https://papers.nips.cc/paper_files/paper/2021/hash/92c4661685bf6681f6a33b78ef729658-Abstract.html
|
Ziv Goldfeld, Kristjan Greenewald
|
https://papers.nips.cc/paper_files/paper/2021/hash/92c4661685bf6681f6a33b78ef729658-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12967-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/92c4661685bf6681f6a33b78ef729658-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=SvrYl-FDq2
| null |
Mutual information (MI) is a fundamental measure of statistical dependence, with a myriad of applications to information theory, statistics, and machine learning. While it possesses many desirable structural properties, the estimation of high-dimensional MI from samples suffers from the curse of dimensionality. Motivated by statistical scalability to high dimensions, this paper proposes sliced MI (SMI) as a surrogate measure of dependence. SMI is defined as an average of MI terms between one-dimensional random projections. We show that it preserves many of the structural properties of classic MI, while gaining scalable computation and efficient estimation from samples. Furthermore, and in contrast to classic MI, SMI can grow as a result of deterministic transformations. This enables leveraging SMI for feature extraction by optimizing it over processing functions of raw data to identify useful representations thereof. Our theory is supported by numerical studies of independence testing and feature extraction, which demonstrate the potential gains SMI offers over classic MI for high-dimensional inference.
| null |
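The definition above, an average of mutual informations between one-dimensional projections, admits a very direct Monte Carlo estimator. The NumPy sketch below uses a simple histogram-based estimate of 1-D MI; the number of projections, bins, and the binning estimator itself are illustrative choices.

```python
# Monte Carlo estimate of sliced mutual information via random 1-D projections.
import numpy as np

def mi_1d(u, v, bins=32):
    """Histogram plug-in estimate of I(U; V) for scalar samples u, v (in nats)."""
    joint, _, _ = np.histogram2d(u, v, bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def sliced_mi(X, Y, n_projections=200, bins=32, seed=0):
    """Average MI between random unit-norm 1-D projections of X and Y."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_projections):
        a = rng.standard_normal(X.shape[1]); a /= np.linalg.norm(a)
        b = rng.standard_normal(Y.shape[1]); b /= np.linalg.norm(b)
        total += mi_1d(X @ a, Y @ b, bins=bins)
    return total / n_projections

# Toy check: a dependent pair versus an independent pair.
rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 5))
print(sliced_mi(X, X + 0.1 * rng.standard_normal(X.shape)))   # clearly positive
print(sliced_mi(X, rng.standard_normal((5000, 5))))           # near zero, up to estimator bias
```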
Emergent Communication under Varying Sizes and Connectivities
|
https://papers.nips.cc/paper_files/paper/2021/hash/92dfa194391a59dc65b88b704599dbd6-Abstract.html
|
Jooyeon Kim, Alice Oh
|
https://papers.nips.cc/paper_files/paper/2021/hash/92dfa194391a59dc65b88b704599dbd6-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12968-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/92dfa194391a59dc65b88b704599dbd6-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=iqpFcg2TAa0
|
https://papers.nips.cc/paper_files/paper/2021/file/92dfa194391a59dc65b88b704599dbd6-Supplemental.pdf
|
Recent advances in deep neural networks allowed artificial agents to derive their own emergent languages that promote interaction, coordination, and collaboration within a group. Just as we humans have succeeded in creating a shared language that allows us to interact within a large group, can the emergent communication within an artificial group converge to a shared, agreed language? This research provides an analytical study of the shared emergent language within the group communication settings of different sizes and connectivities. As the group size increases up to hundreds, agents start to speak dissimilar languages, but the rate at which they successfully communicate is maintained. We observe the emergence of different dialects when we restrict the group communication to have local connectivities only. Finally, we provide optimization results of group communication graphs when the number of agents one can communicate with is restricted or when we penalize communication between distant agent pairs. The optimized communication graphs show superior communication success rates compared to graphs with the same number of links, as well as the emergence of hub nodes and scale-free networks.
| null |
Deep Bandits Show-Off: Simple and Efficient Exploration with Deep Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/92fde850d824c2ba9b563cb6fa4078c3-Abstract.html
|
Rong Zhu, Mattia Rigotti
|
https://papers.nips.cc/paper_files/paper/2021/hash/92fde850d824c2ba9b563cb6fa4078c3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12969-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/92fde850d824c2ba9b563cb6fa4078c3-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=mIKui9t0jDq
|
https://papers.nips.cc/paper_files/paper/2021/file/92fde850d824c2ba9b563cb6fa4078c3-Supplemental.pdf
|
Designing efficient exploration is central to Reinforcement Learning due to the fundamental problem posed by the exploration-exploitation dilemma. Bayesian exploration strategies like Thompson Sampling resolve this trade-off in a principled way by modeling and updating the distribution of the parameters of the action-value function, the outcome model of the environment. However, this technique becomes infeasible for complex environments due to the computational intractability of maintaining probability distributions over parameters of outcome models of corresponding complexity. Moreover, the approximation techniques introduced to mitigate this issue typically result in poor exploration-exploitation trade-offs, as observed in the case of deep neural network models with approximate posterior methods that have been shown to underperform in the deep bandit scenario. In this paper we introduce Sample Average Uncertainty (SAU), a simple and efficient uncertainty measure for contextual bandits. While Bayesian approaches like Thompson Sampling estimate outcomes uncertainty indirectly by first quantifying the variability over the parameters of the outcome model, SAU is a frequentist approach that directly estimates the uncertainty of the outcomes based on the value predictions. Importantly, we show theoretically that the uncertainty measure estimated by SAU asymptotically matches the uncertainty provided by Thompson Sampling, as well as its regret bounds. Because of its simplicity SAU can be seamlessly applied to deep contextual bandits as a very scalable drop-in replacement for epsilon-greedy exploration. We confirm empirically our theory by showing that SAU-based exploration outperforms current state-of-the-art deep Bayesian bandit methods on several real-world datasets at modest computation cost, and make the code to reproduce our results available at \url{https://github.com/ibm/sau-explore}.
| null |
Regret Minimization Experience Replay in Off-Policy Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/931af583573227f0220bc568c65ce104-Abstract.html
|
Xu-Hui Liu, Zhenghai Xue, Jingcheng Pang, Shengyi Jiang, Feng Xu, Yang Yu
|
https://papers.nips.cc/paper_files/paper/2021/hash/931af583573227f0220bc568c65ce104-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12970-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/931af583573227f0220bc568c65ce104-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=Ba3odanehCw
|
https://papers.nips.cc/paper_files/paper/2021/file/931af583573227f0220bc568c65ce104-Supplemental.pdf
|
In reinforcement learning, experience replay stores past samples for further reuse. Prioritized sampling is a promising technique to better utilize these samples. Previous criteria of prioritization include TD error, recentness and corrective feedback, which are mostly heuristically designed. In this work, we start from the regret minimization objective, and obtain an optimal prioritization strategy for Bellman update that can directly maximize the return of the policy. The theory suggests that data with higher hindsight TD error, better on-policiness and more accurate Q value should be assigned higher weights during sampling. Thus, most previous criteria only consider this strategy partially. We not only provide theoretical justifications for previous criteria, but also propose two new methods to compute the prioritization weight, namely ReMERN and ReMERT. ReMERN learns an error network, while ReMERT exploits the temporal ordering of states. Both methods outperform previous prioritized sampling algorithms in challenging RL benchmarks, including MuJoCo, Atari and Meta-World.
| null |
Relative Uncertainty Learning for Facial Expression Recognition
|
https://papers.nips.cc/paper_files/paper/2021/hash/9332c513ef44b682e9347822c2e457ac-Abstract.html
|
Yuhang Zhang, Chengrui Wang, Weihong Deng
|
https://papers.nips.cc/paper_files/paper/2021/hash/9332c513ef44b682e9347822c2e457ac-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12971-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9332c513ef44b682e9347822c2e457ac-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=h1-ilmYbdea
|
https://papers.nips.cc/paper_files/paper/2021/file/9332c513ef44b682e9347822c2e457ac-Supplemental.pdf
|
In facial expression recognition (FER), the uncertainties introduced by inherent noises like ambiguous facial expressions and inconsistent labels raise concerns about the credibility of recognition results. To quantify these uncertainties and achieve good performance under noisy data, we regard uncertainty as a relative concept and propose an innovative uncertainty learning method called Relative Uncertainty Learning (RUL). Rather than assuming Gaussian uncertainty distributions for all datasets, RUL builds an extra branch to learn uncertainty from the relative difficulty of samples by feature mixup. Specifically, we use uncertainties as weights to mix facial features and design an add-up loss to encourage uncertainty learning. It is easy to implement and adds little or no extra computation overhead. Extensive experiments show that RUL outperforms state-of-the-art FER uncertainty learning methods in both real-world and synthetic noisy FER datasets. Besides, RUL also works well on other datasets such as CIFAR and Tiny ImageNet. The code is available at https://github.com/zyh-uaiaaaa/Relative-Uncertainty-Learning.
| null |
An Information-theoretic Approach to Distribution Shifts
|
https://papers.nips.cc/paper_files/paper/2021/hash/93661c10ed346f9692f4d512319799b3-Abstract.html
|
Marco Federici, Ryota Tomioka, Patrick Forré
|
https://papers.nips.cc/paper_files/paper/2021/hash/93661c10ed346f9692f4d512319799b3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12972-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/93661c10ed346f9692f4d512319799b3-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=GrZmKDYCp6H
|
https://papers.nips.cc/paper_files/paper/2021/file/93661c10ed346f9692f4d512319799b3-Supplemental.pdf
|
Safely deploying machine learning models to the real world is often a challenging process. For example, models trained with data obtained from a specific geographic location tend to fail when queried with data obtained elsewhere, agents trained in a simulation can struggle to adapt when deployed in the real world or novel environments, and neural networks that are fit to a subset of the population might carry some selection bias into their decision process. In this work, we describe the problem of data shift from an information-theoretic perspective by (i) identifying and describing the different sources of error, (ii) comparing some of the most promising objectives explored in the recent domain generalization and fair classification literature. From our theoretical analysis and empirical evaluation, we conclude that the model selection procedure needs to be guided by careful considerations regarding the observed data, the factors used for correction, and the structure of the data-generating process.
| null |
TRS: Transferability Reduced Ensemble via Promoting Gradient Diversity and Model Smoothness
|
https://papers.nips.cc/paper_files/paper/2021/hash/937936029af671cf479fa893db91cbdd-Abstract.html
|
Zhuolin Yang, Linyi Li, Xiaojun Xu, Shiliang Zuo, Qian Chen, Pan Zhou, Benjamin Rubinstein, Ce Zhang, Bo Li
|
https://papers.nips.cc/paper_files/paper/2021/hash/937936029af671cf479fa893db91cbdd-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12973-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/937936029af671cf479fa893db91cbdd-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=UJw7jgbLgS
|
https://papers.nips.cc/paper_files/paper/2021/file/937936029af671cf479fa893db91cbdd-Supplemental.pdf
|
Adversarial Transferability is an intriguing property - an adversarial perturbation crafted against one model is also effective against another model, even when these models are from different model families or training processes. To better protect ML systems against adversarial attacks, several questions are raised: what are the sufficient conditions for adversarial transferability, and how to bound it? Is there a way to reduce the adversarial transferability in order to improve the robustness of an ensemble ML model? To answer these questions, in this work we first theoretically analyze and outline sufficient conditions for adversarial transferability between models; then propose a practical algorithm to reduce the transferability between base models within an ensemble to improve its robustness. Our theoretical analysis shows that only promoting the orthogonality between gradients of base models is not enough to ensure low transferability; at the same time, model smoothness is an important factor to control the transferability. We also provide the lower and upper bounds of adversarial transferability under certain conditions. Inspired by our theoretical analysis, we propose an effective Transferability Reduced Smooth (TRS) ensemble training strategy to train a robust ensemble with low transferability by enforcing both gradient orthogonality and model smoothness between base models. We conduct extensive experiments on TRS and compare with 6 state-of-the-art ensemble baselines against 8 whitebox attacks on different datasets, demonstrating that the proposed TRS outperforms all baselines significantly.
| null |
Towards Sample-Optimal Compressive Phase Retrieval with Sparse and Generative Priors
|
https://papers.nips.cc/paper_files/paper/2021/hash/939314105ce8701e67489642ef4d49e8-Abstract.html
|
Zhaoqiang Liu, Subhroshekhar Ghosh, Jonathan Scarlett
|
https://papers.nips.cc/paper_files/paper/2021/hash/939314105ce8701e67489642ef4d49e8-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12974-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/939314105ce8701e67489642ef4d49e8-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=k7Q71M4BPNK
|
https://papers.nips.cc/paper_files/paper/2021/file/939314105ce8701e67489642ef4d49e8-Supplemental.pdf
|
Compressive phase retrieval is a popular variant of the standard compressive sensing problem in which the measurements only contain magnitude information. In this paper, motivated by recent advances in deep generative models, we provide recovery guarantees with near-optimal sample complexity for phase retrieval with generative priors. We first show that when using i.i.d. Gaussian measurements and an $L$-Lipschitz continuous generative model with bounded $k$-dimensional inputs, roughly $O(k \log L)$ samples suffice to guarantee that any signal minimizing an amplitude-based empirical loss function is close to the true signal. Attaining this sample complexity with a practical algorithm remains a difficult challenge, and finding a good initialization for gradient-based methods has been observed to pose a major bottleneck. To partially address this, we further show that roughly $O(k \log L)$ samples ensure sufficient closeness between the underlying signal and any {\em globally optimal} solution to an optimization problem designed for spectral initialization (though finding such a solution may still be challenging). We also adapt this result to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional, matching an information-theoretic lower bound. While these guarantees do not directly correspond to a practical algorithm, we propose a practical spectral initialization method motivated by our findings, and experimentally observe performance gains over various existing spectral initialization methods for sparse phase retrieval.
| null |
Moser Flow: Divergence-based Generative Modeling on Manifolds
|
https://papers.nips.cc/paper_files/paper/2021/hash/93a27b0bd99bac3e68a440b48aa421ab-Abstract.html
|
Noam Rozen, Aditya Grover, Maximilian Nickel, Yaron Lipman
|
https://papers.nips.cc/paper_files/paper/2021/hash/93a27b0bd99bac3e68a440b48aa421ab-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12975-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/93a27b0bd99bac3e68a440b48aa421ab-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=qGvMv3undNJ
|
https://papers.nips.cc/paper_files/paper/2021/file/93a27b0bd99bac3e68a440b48aa421ab-Supplemental.pdf
|
We are interested in learning generative models for complex geometries described via manifolds, such as spheres, tori, and other implicit surfaces. Current extensions of existing (Euclidean) generative models are restricted to specific geometries and typically suffer from high computational costs. We introduce Moser Flow (MF), a new class of generative models within the family of continuous normalizing flows (CNF). MF also produces a CNF via a solution to the change-of-variable formula; however, differently from other CNF methods, its model (learned) density is parameterized as the source (prior) density minus the divergence of a neural network (NN). The divergence is a local, linear differential operator, easy to approximate and calculate on manifolds. Therefore, unlike other CNFs, MF does not require invoking or backpropagating through an ODE solver during training. Furthermore, representing the model density explicitly as the divergence of a NN rather than as a solution of an ODE facilitates learning high fidelity densities. Theoretically, we prove that MF constitutes a universal density approximator under suitable assumptions. Empirically, we demonstrate for the first time the use of flow models for sampling from general curved surfaces and achieve significant improvements in density estimation, sample quality, and training complexity over existing CNFs on challenging synthetic geometries and real-world benchmarks from the earth and climate sciences.
| null |
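The key modeling step above, parameterizing the learned density as the prior density minus the divergence of a neural network, is easy to write down in a Euclidean toy setting. The PyTorch sketch below computes the divergence with autograd and is only meant to show the parameterization, not the manifold machinery or the paper's full training objective.

```python
# Toy Euclidean version of the Moser-style density parameterization.
import torch
import torch.nn as nn

class MoserDensity(nn.Module):
    """Model density mu(x) = prior(x) - div v_theta(x)."""
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.v = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                               nn.Linear(hidden, hidden), nn.Tanh(),
                               nn.Linear(hidden, dim))
        self.dim = dim

    def prior(self, x):                              # standard Gaussian prior density
        return torch.exp(-0.5 * (x ** 2).sum(-1)) / (2 * torch.pi) ** (self.dim / 2)

    def divergence(self, x):                         # sum_i d v_i / d x_i via autograd
        x = x.requires_grad_(True)
        out = self.v(x)
        div = torch.zeros(x.shape[0], device=x.device)
        for i in range(self.dim):
            grad_i = torch.autograd.grad(out[:, i].sum(), x, create_graph=True)[0]
            div = div + grad_i[:, i]
        return div

    def forward(self, x):
        return self.prior(x) - self.divergence(x)    # can be negative if v is unconstrained

# One maximum-likelihood-style step on samples from a shifted Gaussian.
model = MoserDensity()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 2) + 1.0
density = model(x).clamp_min(1e-6)                   # the paper instead penalizes negativity
loss = -torch.log(density).mean()
loss.backward()
opt.step()
print(float(loss))
```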
Structure-Aware Random Fourier Kernel for Graphs
|
https://papers.nips.cc/paper_files/paper/2021/hash/93da579a65ce84cd1d4c85c2cbb84fc5-Abstract.html
|
Jinyuan Fang, Qiang Zhang, Zaiqiao Meng, Shangsong Liang
|
https://papers.nips.cc/paper_files/paper/2021/hash/93da579a65ce84cd1d4c85c2cbb84fc5-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12976-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/93da579a65ce84cd1d4c85c2cbb84fc5-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=xyoFSmocONi
|
https://papers.nips.cc/paper_files/paper/2021/file/93da579a65ce84cd1d4c85c2cbb84fc5-Supplemental.pdf
|
Gaussian Processes (GPs) define distributions over functions and their generalization capabilities depend heavily on the choice of kernels. In this paper, we propose a novel structure-aware random Fourier (SRF) kernel for GPs that brings several benefits when modeling graph-structured data. First, SRF kernel is defined with a spectral distribution based on the Fourier duality given by Bochner's theorem, transforming the kernel learning problem to a distribution inference problem. Second, SRF kernel admits a random Fourier feature formulation that makes the kernel scalable for optimization. Third, SRF kernel enables leveraging geometric structures by taking subgraphs as inputs. To effectively optimize GPs with SRF kernel, we develop a variational EM algorithm, which alternates between an inference procedure (E-step) and a learning procedure (M-step). Experimental results on five real-world datasets show that our model can achieve state-of-the-art performance in two typical graph learning tasks, i.e., object classification and link prediction.
| null |
Diffusion Schrödinger Bridge with Applications to Score-Based Generative Modeling
|
https://papers.nips.cc/paper_files/paper/2021/hash/940392f5f32a7ade1cc201767cf83e31-Abstract.html
|
Valentin De Bortoli, James Thornton, Jeremy Heng, Arnaud Doucet
|
https://papers.nips.cc/paper_files/paper/2021/hash/940392f5f32a7ade1cc201767cf83e31-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12977-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/940392f5f32a7ade1cc201767cf83e31-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=9BnCwiXB0ty
|
https://papers.nips.cc/paper_files/paper/2021/file/940392f5f32a7ade1cc201767cf83e31-Supplemental.pdf
|
Progressively applying Gaussian noise transforms complex data distributions to approximately Gaussian ones. Reversing this dynamic defines a generative model. When the forward noising process is given by a Stochastic Differential Equation (SDE), Song et al. (2021) demonstrate how the time inhomogeneous drift of the associated reverse-time SDE may be estimated using score-matching. A limitation of this approach is that the forward-time SDE must be run for a sufficiently long time for the final distribution to be approximately Gaussian. In contrast, solving the Schrödinger Bridge (SB) problem, i.e. an entropy-regularized optimal transport problem on path spaces, yields diffusions which generate samples from the data distribution in finite time. We present Diffusion SB (DSB), an original approximation of the Iterative Proportional Fitting (IPF) procedure to solve the SB problem, and provide theoretical analysis along with generative modeling experiments. The first DSB iteration recovers the methodology proposed by Song et al. (2021), with the flexibility of using shorter time intervals, as subsequent DSB iterations reduce the discrepancy between the final-time marginal of the forward (resp. backward) SDE with respect to the prior (resp. data) distribution. Beyond generative modeling, DSB offers a widely applicable computational optimal transport tool as the continuous state-space analogue of the popular Sinkhorn algorithm (Cuturi, 2013).
| null |
Improving Transferability of Representations via Augmentation-Aware Self-Supervision
|
https://papers.nips.cc/paper_files/paper/2021/hash/94130ea17023c4837f0dcdda95034b65-Abstract.html
|
Hankook Lee, Kibok Lee, Kimin Lee, Honglak Lee, Jinwoo Shin
|
https://papers.nips.cc/paper_files/paper/2021/hash/94130ea17023c4837f0dcdda95034b65-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12978-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/94130ea17023c4837f0dcdda95034b65-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=U34rQjnImpM
|
https://papers.nips.cc/paper_files/paper/2021/file/94130ea17023c4837f0dcdda95034b65-Supplemental.pdf
|
Recent unsupervised representation learning methods have been shown to be effective in a range of vision tasks by learning representations invariant to data augmentations such as random cropping and color jittering. However, such invariance could be harmful to downstream tasks if they rely on the characteristics of the data augmentations, e.g., location- or color-sensitive. This is not an issue just for unsupervised learning; we found that this occurs even in supervised learning because it also learns to predict the same label for all augmented samples of an instance. To avoid such failures and obtain more generalizable representations, we suggest optimizing an auxiliary self-supervised loss, coined AugSelf, that learns the difference of augmentation parameters (e.g., cropping positions, color adjustment intensities) between two randomly augmented samples. Our intuition is that AugSelf encourages preserving augmentation-aware information in learned representations, which could be beneficial for their transferability. Furthermore, AugSelf can easily be incorporated into recent state-of-the-art representation learning methods with a negligible additional training cost. Extensive experiments demonstrate that our simple idea consistently improves the transferability of representations learned by supervised and unsupervised methods in various transfer learning scenarios. The code is available at https://github.com/hankook/AugSelf.
| null |
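A minimal PyTorch sketch of the auxiliary objective described in the AugSelf abstract above: alongside any base self-supervised loss, a small head is trained to predict the difference between the augmentation parameters of two views from their concatenated representations. The particular parameter encoding (crop position/size plus a color-jitter strength) and the MSE loss are illustrative choices.

```python
# Auxiliary augmentation-aware head on top of a shared encoder (illustrative sizes).
import torch
import torch.nn as nn

class AugSelfHead(nn.Module):
    """Predicts augmentation-parameter differences between two views from their features."""
    def __init__(self, feat_dim=512, n_aug_params=5, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * feat_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_aug_params))

    def forward(self, z1, z2):
        return self.mlp(torch.cat([z1, z2], dim=-1))

def augself_loss(head, z1, z2, aug1, aug2):
    """aug1, aug2: (B, n_aug_params) encodings of each view's augmentation
    (e.g. normalized crop x, y, w, h and color-jitter intensity)."""
    return nn.functional.mse_loss(head(z1, z2), aug1 - aug2)

# Toy usage: total loss = base SSL loss + lambda * AugSelf loss.
B, D, P = 32, 512, 5
z1, z2 = torch.randn(B, D), torch.randn(B, D)        # features of the two views
aug1, aug2 = torch.rand(B, P), torch.rand(B, P)      # augmentation parameters recorded per view
head = AugSelfHead(feat_dim=D, n_aug_params=P)
loss_aux = augself_loss(head, z1, z2, aug1, aug2)
# total = base_ssl_loss + 0.5 * loss_aux             # the weight is a hyperparameter
print(float(loss_aux))
```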
Long-Short Transformer: Efficient Transformers for Language and Vision
|
https://papers.nips.cc/paper_files/paper/2021/hash/9425be43ba92c2b4454ca7bf602efad8-Abstract.html
|
Chen Zhu, Wei Ping, Chaowei Xiao, Mohammad Shoeybi, Tom Goldstein, Anima Anandkumar, Bryan Catanzaro
|
https://papers.nips.cc/paper_files/paper/2021/hash/9425be43ba92c2b4454ca7bf602efad8-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12979-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9425be43ba92c2b4454ca7bf602efad8-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=M_lkFOwVdYc
|
https://papers.nips.cc/paper_files/paper/2021/file/9425be43ba92c2b4454ca7bf602efad8-Supplemental.pdf
|
Transformers have achieved success in both language and vision domains. However, it is prohibitively expensive to scale them to long sequences such as long documents or high-resolution images, because the self-attention mechanism has quadratic time and memory complexities with respect to the input sequence length. In this paper, we propose Long-Short Transformer (Transformer-LS), an efficient self-attention mechanism for modeling long sequences with linear complexity for both language and vision tasks. It aggregates a novel long-range attention with dynamic projection to model distant correlations and a short-term attention to capture fine-grained local correlations. We propose a dual normalization strategy to account for the scale mismatch between the two attention mechanisms. Transformer-LS can be applied to both autoregressive and bidirectional models without additional complexity. Our method outperforms the state-of-the-art models on multiple tasks in language and vision domains, including the Long Range Arena benchmark, autoregressive language modeling, and ImageNet classification. For instance, Transformer-LS achieves 0.97 test BPC on enwik8 using half as many parameters as the previous method, while being faster and able to handle 3x longer sequences than its full-attention version on the same hardware. On ImageNet, it can obtain state-of-the-art results (e.g., a moderate-sized 55.8M model solely trained on 224x224 ImageNet-1K can obtain Top-1 accuracy 84.1%), while being more scalable on high-resolution images. The source code and models are released at https://github.com/NVIDIA/transformer-ls.
| null |
Post-Training Sparsity-Aware Quantization
|
https://papers.nips.cc/paper_files/paper/2021/hash/9431c87f273e507e6040fcb07dcb4509-Abstract.html
|
Gil Shomron, Freddy Gabbay, Samer Kurzum, Uri Weiser
|
https://papers.nips.cc/paper_files/paper/2021/hash/9431c87f273e507e6040fcb07dcb4509-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12980-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9431c87f273e507e6040fcb07dcb4509-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=qe9z54E_cqE
| null |
Quantization is a technique used in deep neural networks (DNNs) to increase execution performance and hardware efficiency. Uniform post-training quantization (PTQ) methods are common, since they can be implemented efficiently in hardware and do not require extensive hardware resources or a training set. Mapping FP32 models to INT8 using uniform PTQ yields models with negligible accuracy degradation; however, reducing precision below 8 bits with PTQ is challenging, as accuracy degradation becomes noticeable, due to the increase in quantization noise. In this paper, we propose a sparsity-aware quantization (SPARQ) method, in which the unstructured and dynamic activation sparsity is leveraged in different representation granularities. 4-bit quantization, for example, is employed by dynamically examining the bits of 8-bit values and choosing a window of 4 bits, while first skipping zero-value bits. Moreover, instead of quantizing activation-by-activation to 4 bits, we focus on pairs of 8-bit activations and examine whether one of the two is equal to zero. If one is equal to zero, the second can opportunistically use the other's 4-bit budget; if neither equals zero, then each is dynamically quantized to 4 bits, as described. SPARQ achieves minor accuracy degradation and a practical hardware implementation.
| null |
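The bit-level trick in the SPARQ abstract above can be illustrated with plain Python/NumPy: for each unsigned 8-bit activation, skip the leading zero bits and keep a 4-bit window anchored at the most significant set bit; if one activation in a pair is zero, its partner keeps the full 8-bit budget. Rounding (here simple truncation) and the pairing/hardware details are simplified.

```python
# Illustrative bit-window quantization with opportunistic budget sharing for pairs.
import numpy as np

def window_quantize(v, budget=4):
    """Keep a `budget`-bit window of an unsigned 8-bit value, anchored at its MSB."""
    if v == 0:
        return 0
    msb = int(v).bit_length() - 1                 # position of the most significant set bit
    shift = max(msb - (budget - 1), 0)            # drop the bits below the window
    return (int(v) >> shift) << shift             # zero out the discarded low bits

def sparq_pair(a, b, budget=4):
    """Quantize a pair of activations; a zero partner donates its bit budget."""
    if a == 0:
        return 0, window_quantize(b, 2 * budget)  # b keeps up to 8 bits
    if b == 0:
        return window_quantize(a, 2 * budget), 0
    return window_quantize(a, budget), window_quantize(b, budget)

acts = np.array([0, 3, 200, 77, 0, 255, 18, 129], dtype=np.uint8)
quant = [sparq_pair(acts[i], acts[i + 1]) for i in range(0, len(acts), 2)]
print(list(zip(acts.reshape(-1, 2).tolist(), quant)))
```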
The Implicit Bias of Minima Stability: A View from Function Space
|
https://papers.nips.cc/paper_files/paper/2021/hash/944a5ae3483ed5c1e10bbccb7942a279-Abstract.html
|
Rotem Mulayoff, Tomer Michaeli, Daniel Soudry
|
https://papers.nips.cc/paper_files/paper/2021/hash/944a5ae3483ed5c1e10bbccb7942a279-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12981-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/944a5ae3483ed5c1e10bbccb7942a279-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=2STmSnZAEt2
|
https://papers.nips.cc/paper_files/paper/2021/file/944a5ae3483ed5c1e10bbccb7942a279-Supplemental.pdf
|
The loss terrains of over-parameterized neural networks have multiple global minima. However, it is well known that stochastic gradient descent (SGD) can stably converge only to minima that are sufficiently flat w.r.t. SGD's step size. In this paper we study the effect that this mechanism has on the function implemented by the trained model. First, we extend the existing knowledge on minima stability to non-differentiable minima, which are common in ReLU nets. We then use our stability results to study a single hidden layer univariate ReLU network. In this setting, we show that SGD is biased towards functions whose second derivative (w.r.t the input) has a bounded weighted $L_1$ norm, and this is regardless of the initialization. In particular, we show that the function implemented by the network upon convergence gets smoother as the learning rate increases. The weight multiplying the second derivative is larger around the center of the support of the training distribution, and smaller towards its boundaries, suggesting that a trained model tends to be smoother at the center of the training distribution.
| null |
Breaking the Sample Complexity Barrier to Regret-Optimal Model-Free Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/94739e5a5164b4d2396e253a11d57044-Abstract.html
|
Gen Li, Laixi Shi, Yuxin Chen, Yuantao Gu, Yuejie Chi
|
https://papers.nips.cc/paper_files/paper/2021/hash/94739e5a5164b4d2396e253a11d57044-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12982-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/94739e5a5164b4d2396e253a11d57044-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=YA0wIYi-yM3
|
https://papers.nips.cc/paper_files/paper/2021/file/94739e5a5164b4d2396e253a11d57044-Supplemental.pdf
|
Achieving sample efficiency in online episodic reinforcement learning (RL) requires optimally balancing exploration and exploitation. When it comes to a finite-horizon episodic Markov decision process with $S$ states, $A$ actions and horizon length $H$, substantial progress has been achieved towards characterizing the minimax-optimal regret, which scales on the order of $\sqrt{H^2SAT}$ (modulo log factors) with $T$ the total number of samples. While several competing solution paradigms have been proposed to minimize regret, they are either memory-inefficient, or fall short of optimality unless the sample size exceeds an enormous threshold (e.g., $S^6A^4 \,\mathrm{poly}(H)$ for existing model-free methods). To overcome such a large sample size barrier to efficient RL, we design a novel model-free algorithm, with space complexity $O(SAH)$, that achieves near-optimal regret as soon as the sample size exceeds the order of $SA\,\mathrm{poly}(H)$. In terms of this sample size requirement (also referred to as the initial burn-in cost), our method improves --- by at least a factor of $S^5A^3$ --- upon any prior memory-efficient algorithm that is asymptotically regret-optimal. Leveraging the recently introduced variance reduction strategy (also called {\em reference-advantage decomposition}), the proposed algorithm employs an {\em early-settled} reference update rule, with the aid of two Q-learning sequences with upper and lower confidence bounds. The design principle of our early-settled variance reduction method might be of independent interest to other RL settings that involve intricate exploration-exploitation trade-offs.
| null |
Robust Auction Design in the Auto-bidding World
|
https://papers.nips.cc/paper_files/paper/2021/hash/948f847055c6bf156997ce9fb59919be-Abstract.html
|
Santiago Balseiro, Yuan Deng, Jieming Mao, Vahab Mirrokni, Song Zuo
|
https://papers.nips.cc/paper_files/paper/2021/hash/948f847055c6bf156997ce9fb59919be-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12983-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/948f847055c6bf156997ce9fb59919be-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=01884FCwbNf
|
https://papers.nips.cc/paper_files/paper/2021/file/948f847055c6bf156997ce9fb59919be-Supplemental.pdf
|
In classic auction theory, reserve prices are known to be effective for improving revenue for the auctioneer against quasi-linear utility maximizing bidders. The introduction of reserve prices, however, usually does not help improve the total welfare of the auctioneer and the bidders. In this paper, we focus on value maximizing bidders with return on spend constraints---a paradigm that has drawn considerable attention recently as more advertisers adopt auto-bidding algorithms in advertising platforms---and show that the introduction of reserve prices has a novel impact on the market. Namely, by choosing reserve prices appropriately the auctioneer can improve not only the total revenue but also the total welfare. Our results also demonstrate that reserve prices are robust to bidder types, i.e., reserve prices work well for different bidder types, such as value maximizers and utility maximizers, without using bidder type information. We generalize these results for a variety of auction mechanisms such as VCG, GSP, and first-price auctions. Moreover, we show how to combine these results with additive boosts to improve the welfare of the outcomes of the auction further. Finally, we complement our theoretical observations with an empirical study confirming the effectiveness of these ideas using data from online advertising auctions.
| null |
Weighted model estimation for offline model-based reinforcement learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/949694a5059302e7283073b502f094d7-Abstract.html
|
Toru Hishinuma, Kei Senda
|
https://papers.nips.cc/paper_files/paper/2021/hash/949694a5059302e7283073b502f094d7-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12984-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/949694a5059302e7283073b502f094d7-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=zdC5eXljMPy
|
https://papers.nips.cc/paper_files/paper/2021/file/949694a5059302e7283073b502f094d7-Supplemental.pdf
|
This paper discusses model estimation in offline model-based reinforcement learning (MBRL), which is important for subsequent policy improvement using an estimated model. From the viewpoint of covariate shift, a natural idea is model estimation weighted by the ratio of the state-action distributions of offline data and real future data. However, estimating such a natural weight is itself one of the main challenges in off-policy evaluation, making this weight difficult to use in practice. As an artificial alternative, this paper considers weighting with the state-action distribution ratio of offline data and simulated future data, which can be estimated relatively easily by standard density ratio estimation techniques for supervised learning. Based on the artificial weight, this paper defines a loss function for offline MBRL and presents an algorithm to optimize it. Weighting with the artificial weight is justified as evaluating an upper bound of the policy evaluation error. Numerical experiments demonstrate the effectiveness of weighting with the artificial weight.
| null |
Practical, Provably-Correct Interactive Learning in the Realizable Setting: The Power of True Believers
|
https://papers.nips.cc/paper_files/paper/2021/hash/94aada62f90dd50a84ca74304563d5db-Abstract.html
|
Julian Katz-Samuels, Blake Mason, Kevin G. Jamieson, Rob Nowak
|
https://papers.nips.cc/paper_files/paper/2021/hash/94aada62f90dd50a84ca74304563d5db-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12985-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/94aada62f90dd50a84ca74304563d5db-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=wJXWzCsGlZw
|
https://papers.nips.cc/paper_files/paper/2021/file/94aada62f90dd50a84ca74304563d5db-Supplemental.pdf
|
We consider interactive learning in the realizable setting and develop a general framework to handle problems ranging from best arm identification to active classification. We begin our investigation with the observation that agnostic algorithms \emph{cannot} be minimax-optimal in the realizable setting. Hence, we design novel computationally efficient algorithms for the realizable setting that match the minimax lower bound up to logarithmic factors and are general-purpose, accommodating a wide variety of function classes including kernel methods, H{\"o}lder smooth functions, and convex functions. The sample complexities of our algorithms can be quantified in terms of well-known quantities like the extended teaching dimension and haystack dimension. However, unlike algorithms based directly on those combinatorial quantities, our algorithms are computationally efficient. To achieve computational efficiency, our algorithms sample from the version space using Monte Carlo ``hit-and-run'' algorithms instead of maintaining the version space explicitly. Our approach has two key strengths. First, it is simple, consisting of two unifying, greedy algorithms. Second, our algorithms have the capability to seamlessly leverage prior knowledge that is often available and useful in practice. In addition to our new theoretical results, we demonstrate empirically that our algorithms are competitive with Gaussian process UCB methods.
| null |
Deconditional Downscaling with Gaussian Processes
|
https://papers.nips.cc/paper_files/paper/2021/hash/94aef38441efa3380a3bed3faf1f9d5d-Abstract.html
|
Siu Lun Chau, Shahine Bouabid, Dino Sejdinovic
|
https://papers.nips.cc/paper_files/paper/2021/hash/94aef38441efa3380a3bed3faf1f9d5d-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12986-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/94aef38441efa3380a3bed3faf1f9d5d-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=tX4OCWu3P7R
|
https://papers.nips.cc/paper_files/paper/2021/file/94aef38441efa3380a3bed3faf1f9d5d-Supplemental.pdf
|
Refining low-resolution (LR) spatial fields with high-resolution (HR) information, often known as statistical downscaling, is challenging as the diversity of spatial datasets often prevents direct matching of observations. Yet, when LR samples are modeled as aggregate conditional means of HR samples with respect to a mediating variable that is globally observed, the recovery of the underlying fine-grained field can be framed as taking an "inverse" of the conditional expectation, namely a deconditioning problem. In this work, we propose a Bayesian formulation of deconditioning which naturally recovers the initial reproducing kernel Hilbert space formulation from Hsu and Ramos (2019). We extend deconditioning to a downscaling setup and devise an efficient conditional mean embedding estimator for multiresolution data. By treating conditional expectations as inter-domain features of the underlying field, a posterior for the latent field can be established as a solution to the deconditioning problem. Furthermore, we show that this solution can be viewed as a two-staged vector-valued kernel ridge regressor and show that it has a minimax optimal convergence rate under mild assumptions. Lastly, we demonstrate its proficiency in a synthetic and a real-world atmospheric field downscaling problem, showing substantial improvements over existing methods.
| null |
Image Generation using Continuous Filter Atoms
|
https://papers.nips.cc/paper_files/paper/2021/hash/94c7bb58efc3b337800875b5d382a072-Abstract.html
|
Ze Wang, Seunghyun Hwang, Zichen Miao, Qiang Qiu
|
https://papers.nips.cc/paper_files/paper/2021/hash/94c7bb58efc3b337800875b5d382a072-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12987-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/94c7bb58efc3b337800875b5d382a072-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=877bJocr-w
|
https://papers.nips.cc/paper_files/paper/2021/file/94c7bb58efc3b337800875b5d382a072-Supplemental.pdf
|
In this paper, we model the subspace of convolutional filters with a neural ordinary differential equation (ODE) to enable gradual changes in generated images. Decomposing convolutional filters over a set of filter atoms allows efficiently modeling and sampling from a subspace of high-dimensional filters. By further modeling filter atoms with a neural ODE, we show both empirically and theoretically that such introduced continuity can be propagated to the generated images, and thus achieves gradually evolved image generation. We support the proposed framework of image generation with continuous filter atoms using various experiments, including image-to-image translation and image generation conditioned on continuous labels. Without auxiliary network components and heavy supervision, the proposed continuous filter atoms allow us to easily manipulate the gradual change of generated images by controlling the integration intervals of the neural ordinary differential equation. This research sheds light on using the subspace of network parameters to navigate the diverse appearance of image generation.
| null |
Latent Equilibrium: A unified learning theory for arbitrarily fast computation with arbitrarily slow neurons
|
https://papers.nips.cc/paper_files/paper/2021/hash/94cdbdb84e8e1de8a725fa2ed61498a4-Abstract.html
|
Paul Haider, Benjamin Ellenberger, Laura Kriener, Jakob Jordan, Walter Senn, Mihai A. Petrovici
|
https://papers.nips.cc/paper_files/paper/2021/hash/94cdbdb84e8e1de8a725fa2ed61498a4-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12988-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/94cdbdb84e8e1de8a725fa2ed61498a4-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=an8FSGbuCw
|
https://papers.nips.cc/paper_files/paper/2021/file/94cdbdb84e8e1de8a725fa2ed61498a4-Supplemental.pdf
|
The response time of physical computational elements is finite, and neurons are no exception. In hierarchical models of cortical networks each layer thus introduces a response lag. This inherent property of physical dynamical systems results in delayed processing of stimuli and causes a timing mismatch between network output and instructive signals, thus afflicting not only inference, but also learning. We introduce Latent Equilibrium, a new framework for inference and learning in networks of slow components which avoids these issues by harnessing the ability of biological neurons to phase-advance their output with respect to their membrane potential. This principle enables quasi-instantaneous inference independent of network depth and avoids the need for phased plasticity or computationally expensive network relaxation phases. We jointly derive disentangled neuron and synapse dynamics from a prospective energy function that depends on a network's generalized position and momentum. The resulting model can be interpreted as a biologically plausible approximation of error backpropagation in deep cortical networks with continuous-time, leaky neuronal dynamics and continuously active, local plasticity. We demonstrate successful learning of standard benchmark datasets, achieving competitive performance using both fully-connected and convolutional architectures, and show how our principle can be applied to detailed models of cortical microcircuitry. Furthermore, we study the robustness of our model to spatio-temporal substrate imperfections to demonstrate its feasibility for physical realization, be it in vivo or in silico.
| null |
Learning Fast-Inference Bayesian Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/94e70705efae423efda1088614128d0b-Abstract.html
|
Vaidyanathan Peruvemba Ramaswamy, Stefan Szeider
|
https://papers.nips.cc/paper_files/paper/2021/hash/94e70705efae423efda1088614128d0b-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12989-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/94e70705efae423efda1088614128d0b-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=UmCsy3C4xj
|
https://papers.nips.cc/paper_files/paper/2021/file/94e70705efae423efda1088614128d0b-Supplemental.zip
|
We propose new methods for learning Bayesian networks (BNs) that reliably support fast inference. We utilize maximum state space size as a more fine-grained measure for the BN's reasoning complexity than the standard treewidth measure, thereby accommodating the possibility that variables range over domains of different sizes. Our methods combine heuristic BN structure learning algorithms with the recently introduced MaxSAT-powered local improvement method (Peruvemba Ramaswamy and Szeider, AAAI'21). Our experiments show that our new learning methods produce BNs that support significantly faster exact probabilistic inference than BNs learned with treewidth bounds.
| null |
Per-Pixel Classification is Not All You Need for Semantic Segmentation
|
https://papers.nips.cc/paper_files/paper/2021/hash/950a4152c2b4aa3ad78bdd6b366cc179-Abstract.html
|
Bowen Cheng, Alex Schwing, Alexander Kirillov
|
https://papers.nips.cc/paper_files/paper/2021/hash/950a4152c2b4aa3ad78bdd6b366cc179-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12990-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/950a4152c2b4aa3ad78bdd6b366cc179-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=0lz69oI5iZP
|
https://papers.nips.cc/paper_files/paper/2021/file/950a4152c2b4aa3ad78bdd6b366cc179-Supplemental.pdf
|
Modern approaches typically formulate semantic segmentation as a per-pixel classification task, while instance-level segmentation is handled with an alternative mask classification. Our key insight: mask classification is sufficiently general to solve both semantic- and instance-level segmentation tasks in a unified manner using the exact same model, loss, and training procedure. Following this observation, we propose MaskFormer, a simple mask classification model which predicts a set of binary masks, each associated with a single global class label prediction. Overall, the proposed mask classification-based method simplifies the landscape of effective approaches to semantic and panoptic segmentation tasks and shows excellent empirical results. In particular, we observe that MaskFormer outperforms per-pixel classification baselines when the number of classes is large. Our mask classification-based method outperforms both current state-of-the-art semantic (55.6 mIoU on ADE20K) and panoptic segmentation (52.7 PQ on COCO) models.
| null |
Deep Markov Factor Analysis: Towards Concurrent Temporal and Spatial Analysis of fMRI Data
|
https://papers.nips.cc/paper_files/paper/2021/hash/951124d4a093eeae83d9726a20295498-Abstract.html
|
Amirreza Farnoosh, Sarah Ostadabbas
|
https://papers.nips.cc/paper_files/paper/2021/hash/951124d4a093eeae83d9726a20295498-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12991-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/951124d4a093eeae83d9726a20295498-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=ekVPXh9tYkL
|
https://papers.nips.cc/paper_files/paper/2021/file/951124d4a093eeae83d9726a20295498-Supplemental.pdf
|
Factor analysis methods have been widely used in neuroimaging to transfer high dimensional imaging data into low dimensional, ideally interpretable representations. However, most of these methods overlook the highly nonlinear and complex temporal dynamics of neural processes when factorizing their imaging data. In this paper, we present deep Markov factor analysis (DMFA), a generative model that employs the Markov property in a chain of low dimensional temporal embeddings together with spatial inductive assumptions, all related through neural networks, to capture temporal dynamics in functional magnetic resonance imaging (fMRI) data and tackle their high spatial dimensionality, respectively. Augmented with a discrete latent, DMFA is able to cluster fMRI data in its low dimensional temporal embedding with regard to subject and cognitive state variability, and therefore enables validation of a variety of fMRI-driven neuroscientific hypotheses. Experimental results on both synthetic and real fMRI data demonstrate the capacity of DMFA in revealing interpretable clusters and capturing nonlinear temporal dependencies in these high dimensional imaging data.
| null |
BooVAE: Boosting Approach for Continual Learning of VAE
|
https://papers.nips.cc/paper_files/paper/2021/hash/952285b9b7e7a1be5aa7849f32ffff05-Abstract.html
|
Evgenii Egorov, Anna Kuzina, Evgeny Burnaev
|
https://papers.nips.cc/paper_files/paper/2021/hash/952285b9b7e7a1be5aa7849f32ffff05-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12992-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/952285b9b7e7a1be5aa7849f32ffff05-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=zImiB39pyUL
|
https://papers.nips.cc/paper_files/paper/2021/file/952285b9b7e7a1be5aa7849f32ffff05-Supplemental.pdf
|
Variational autoencoder (VAE) is a deep generative model for unsupervised learning, allowing observations to be encoded into a meaningful latent space. VAE is prone to catastrophic forgetting when tasks arrive sequentially, and only the data for the current one is available. We address this problem of continual learning for VAEs. It is known that the choice of the prior distribution over the latent space is crucial for VAE in the non-continual setting. We argue that it can also be helpful to avoid catastrophic forgetting. We learn the approximation of the aggregated posterior as a prior for each task. This approximation is parametrised as an additive mixture of distributions induced by an encoder evaluated at trainable pseudo-inputs. We use a greedy boosting-like approach with entropy regularisation to learn the components. This method encourages component diversity, which is essential as we aim at memorising the current task with the fewest components possible. Based on the learnable prior, we introduce an end-to-end approach for continual learning of VAEs and provide empirical studies on commonly used benchmarks (MNIST, Fashion MNIST, NotMNIST) and CelebA datasets. For each dataset, the proposed method avoids catastrophic forgetting in a fully automatic way.
| null |
Handling Long-tailed Feature Distribution in AdderNets
|
https://papers.nips.cc/paper_files/paper/2021/hash/95323660ed2124450caaac2c46b5ed90-Abstract.html
|
Minjing Dong, Yunhe Wang, Xinghao Chen, Chang Xu
|
https://papers.nips.cc/paper_files/paper/2021/hash/95323660ed2124450caaac2c46b5ed90-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12993-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/95323660ed2124450caaac2c46b5ed90-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=d7skOEQClK
|
https://papers.nips.cc/paper_files/paper/2021/file/95323660ed2124450caaac2c46b5ed90-Supplemental.pdf
|
Adder neural networks (ANNs) are designed for low energy cost: they replace expensive multiplications in convolutional neural networks (CNNs) with cheaper additions to yield energy-efficient neural networks and hardware accelerations. Although ANNs achieve satisfactory efficiency, there exist gaps between ANNs and CNNs where the accuracy of ANNs can hardly match that of CNNs without the assistance of other training tricks, such as knowledge distillation. The inherent discrepancy lies in the similarity measurement between filters and features; however, how to alleviate this difference remains unexplored. To locate the potential problem of ANNs, we focus on the property difference due to similarity measurement. We demonstrate that unordered heavy tails in ANNs could be the key component which prevents ANNs from achieving superior classification performance, since fatter tails tend to overlap in feature space. Through pre-defining Multivariate Skew Laplace distributions and embedding feature distributions into the loss function, ANN features can be fully controlled and designed for various properties. We further present a novel method for tackling existing heavy tails in ANNs with only a modification of the classifier, where ANN features are clustered with their tails well-formulated through the proposed angle-based constraint on the distribution parameters to encourage high diversity of tails. Experiments conducted on several benchmarks and comparison with other distributions demonstrate the effectiveness of the proposed approach for boosting the performance of ANNs.
| null |
Pessimism Meets Invariance: Provably Efficient Offline Mean-Field Multi-Agent RL
|
https://papers.nips.cc/paper_files/paper/2021/hash/9559fc73b13fa721a816958488a5b449-Abstract.html
|
Minshuo Chen, Yan Li, Ethan Wang, Zhuoran Yang, Zhaoran Wang, Tuo Zhao
|
https://papers.nips.cc/paper_files/paper/2021/hash/9559fc73b13fa721a816958488a5b449-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12994-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9559fc73b13fa721a816958488a5b449-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=Ww1e07fy9fC
|
https://papers.nips.cc/paper_files/paper/2021/file/9559fc73b13fa721a816958488a5b449-Supplemental.pdf
|
Mean-Field Multi-Agent Reinforcement Learning (MF-MARL) is attractive in applications involving a large population of homogeneous agents, as it exploits the permutation invariance of agents and avoids the curse of many agents. Most existing results only focus on online settings, in which agents can interact with the environment during training. In some applications such as social welfare optimization, however, the interaction during training can be prohibitive or even unethical in societal systems. To bridge such a gap, we propose SAFARI (peSsimistic meAn-Field vAlue iteRatIon), an algorithm for offline MF-MARL which only requires a handful of pre-collected experience data. Theoretically, under a weak coverage assumption that the experience dataset contains enough information about the optimal policy, we prove that for an episodic mean-field MDP with horizon $H$ and $N$ training trajectories, SAFARI attains a sub-optimality gap of $\mathcal{O}(H^2 d_{\rm eff}/\sqrt{N})$, where $d_{\rm eff}$ is the effective dimension of the function class parameterizing the value function; notably, this gap is independent of the number of agents. Numerical experiments are provided.
| null |
A Law of Iterated Logarithm for Multi-Agent Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/955fd82131e15e7b5199cbc8f983306a-Abstract.html
|
Gugan Chandrashekhar Thoppe, Bhumesh Kumar
|
https://papers.nips.cc/paper_files/paper/2021/hash/955fd82131e15e7b5199cbc8f983306a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12995-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/955fd82131e15e7b5199cbc8f983306a-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=6veB3MCD-bu
|
https://papers.nips.cc/paper_files/paper/2021/file/955fd82131e15e7b5199cbc8f983306a-Supplemental.pdf
|
In Multi-Agent Reinforcement Learning (MARL), multiple agents interact with a common environment, as well as with each other, to solve a shared problem in sequential decision-making. It has wide-ranging applications in gaming, robotics, finance, communication, etc. In this work, we derive a novel law of iterated logarithm for a family of distributed nonlinear stochastic approximation schemes that is useful in MARL. In particular, our result describes the convergence rate on almost every sample path where the algorithm converges. This result is the first of its kind in the distributed setup and provides deeper insights than the existing ones, which only discuss convergence rates in the expected or the CLT sense. Importantly, our result holds under significantly weaker assumptions: the gossip matrix need not be doubly stochastic, nor the stepsizes square summable. As an application, we show that, for the stepsize $n^{-\gamma}$ with $\gamma \in (0, 1),$ the distributed TD(0) algorithm with linear function approximation has a convergence rate of $O(\sqrt{n^{-\gamma} \ln n })$ a.s.; for the $1/n$ type stepsize, the same is $O(\sqrt{n^{-1} \ln \ln n})$ a.s. These decay rates do not depend on the graph depicting the interactions among the different agents.
| null |
MOMA: Multi-Object Multi-Actor Activity Parsing
|
https://papers.nips.cc/paper_files/paper/2021/hash/95688ba636a4720a85b3634acfec8cdd-Abstract.html
|
Zelun Luo, Wanze Xie, Siddharth Kapoor, Yiyun Liang, Michael Cooper, Juan Carlos Niebles, Ehsan Adeli, Fei-Fei Li
|
https://papers.nips.cc/paper_files/paper/2021/hash/95688ba636a4720a85b3634acfec8cdd-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12996-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/95688ba636a4720a85b3634acfec8cdd-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=x4oe1W8Hpl3
|
https://papers.nips.cc/paper_files/paper/2021/file/95688ba636a4720a85b3634acfec8cdd-Supplemental.pdf
|
Complex activities often involve multiple humans utilizing different objects to complete actions (e.g., in healthcare settings, physicians, nurses, and patients interact with each other and various medical devices). Recognizing activities poses a challenge that requires a detailed understanding of actors' roles, objects' affordances, and their associated relationships. Furthermore, these purposeful activities are composed of multiple achievable steps, including sub-activities and atomic actions, which jointly define a hierarchy of action parts. This paper introduces Activity Parsing as the overarching task of temporal segmentation and classification of activities, sub-activities, and atomic actions, along with an instance-level understanding of actors, objects, and their relationships in videos. Because multiple entities (actors and objects) are involved, we argue that traditional pair-wise relationships, often used in scene or action graphs, do not appropriately represent the dynamics between them. Hence, we introduce Action Hypergraph, a spatial-temporal graph containing hyperedges (i.e., edges with higher-order relationships), as a new representation. In addition, we introduce Multi-Object Multi-Actor (MOMA), the first benchmark and dataset dedicated to activity parsing. Lastly, to parse a video, we propose the HyperGraph Activity Parsing (HGAP) network, which outperforms several baselines, including those based on regular graphs and raw video data.
| null |
The Pareto Frontier of model selection for general Contextual Bandits
|
https://papers.nips.cc/paper_files/paper/2021/hash/9570efef719d705326f0ff817ef084e6-Abstract.html
|
Teodor Vanislavov Marinov, Julian Zimmert
|
https://papers.nips.cc/paper_files/paper/2021/hash/9570efef719d705326f0ff817ef084e6-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12997-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9570efef719d705326f0ff817ef084e6-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=BbSPfmZqs4B
|
https://papers.nips.cc/paper_files/paper/2021/file/9570efef719d705326f0ff817ef084e6-Supplemental.pdf
|
Recent progress in model selection raises the question of the fundamental limits of these techniques. Under specific scrutiny has been model selection for general contextual bandits with nested policy classes, resulting in a COLT2020 open problem. It asks whether it is possible to obtain simultaneously the optimal single algorithm guarantees over all policies in a nested sequence of policy classes, or, otherwise, whether this is possible for a trade-off $\alpha\in[\frac{1}{2},1)$ between complexity term and time: $\ln(|\Pi_m|)^{1-\alpha}T^\alpha$. We give a disappointing answer to this question. Even in the purely stochastic regime, the desired results are unobtainable. We present a Pareto frontier of upper and lower bounds that match up to logarithmic factors, thereby proving that an increase in the complexity term $\ln(|\Pi_m|)$ independent of $T$ is unavoidable for general policy classes. As a side result, we also resolve a COLT2016 open problem concerning second-order bounds in full-information games.
| null |
Teaching an Active Learner with Contrastive Examples
|
https://papers.nips.cc/paper_files/paper/2021/hash/958adb57686c2fdec5796398de5f317a-Abstract.html
|
Chaoqi Wang, Adish Singla, Yuxin Chen
|
https://papers.nips.cc/paper_files/paper/2021/hash/958adb57686c2fdec5796398de5f317a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12998-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/958adb57686c2fdec5796398de5f317a-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=ZkGfZLEXZ20
|
https://papers.nips.cc/paper_files/paper/2021/file/958adb57686c2fdec5796398de5f317a-Supplemental.pdf
|
We study the problem of active learning with the added twist that the learner is assisted by a helpful teacher. We consider the following natural interaction protocol: At each round, the learner proposes a query asking for the label of an instance $x^q$, the teacher provides the requested label $\{x^q, y^q\}$ along with explanatory information to guide the learning process. In this paper, we view this information in the form of an additional contrastive example ($\{x^c, y^c\}$) where $x^c$ is picked from a set constrained by $x^q$ (e.g., dissimilar instances with the same label). Our focus is to design a teaching algorithm that can provide an informative sequence of contrastive examples to the learner to speed up the learning process. We show that this leads to a challenging sequence optimization problem where the algorithm's choices at a given round depend on the history of interactions. We investigate an efficient teaching algorithm that adaptively picks these contrastive examples. We derive strong performance guarantees for our algorithm based on two problem-dependent parameters and further show that for specific types of active learners (e.g., a generalized binary search learner), the proposed teaching algorithm exhibits strong approximation guarantees. Finally, we illustrate our bounds and demonstrate the effectiveness of our teaching framework via two numerical case studies.
| null |
Structured Denoising Diffusion Models in Discrete State-Spaces
|
https://papers.nips.cc/paper_files/paper/2021/hash/958c530554f78bcd8e97125b70e6973d-Abstract.html
|
Jacob Austin, Daniel D. Johnson, Jonathan Ho, Daniel Tarlow, Rianne van den Berg
|
https://papers.nips.cc/paper_files/paper/2021/hash/958c530554f78bcd8e97125b70e6973d-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12999-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/958c530554f78bcd8e97125b70e6973d-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=h7-XixPCAL
|
https://papers.nips.cc/paper_files/paper/2021/file/958c530554f78bcd8e97125b70e6973d-Supplemental.pdf
|
Denoising diffusion probabilistic models (DDPMs) [Ho et al. 2020] have shown impressive results on image and waveform generation in continuous state spaces. Here, we introduce Discrete Denoising Diffusion Probabilistic Models (D3PMs), diffusion-like generative models for discrete data that generalize the multinomial diffusion model of Hoogeboom et al. [2021] by going beyond corruption processes with uniform transition probabilities. This includes corruption with transition matrices that mimic Gaussian kernels in continuous space, matrices based on nearest neighbors in embedding space, and matrices that introduce absorbing states. The third option allows us to draw a connection between diffusion models and autoregressive and mask-based generative models. We show that the choice of transition matrix is an important design decision that leads to improved results in image and text domains. We also introduce a new loss function that combines the variational lower bound with an auxiliary cross entropy loss. For text, this model class achieves strong results on character-level text generation while scaling to large vocabularies on LM1B. On the image dataset CIFAR-10, our models approach the sample quality and exceed the log-likelihood of the continuous-space DDPM model.
| null |
Emergent Communication of Generalizations
|
https://papers.nips.cc/paper_files/paper/2021/hash/9597353e41e6957b5e7aa79214fcb256-Abstract.html
|
Jesse Mu, Noah Goodman
|
https://papers.nips.cc/paper_files/paper/2021/hash/9597353e41e6957b5e7aa79214fcb256-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13000-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9597353e41e6957b5e7aa79214fcb256-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=yq5MYHVaClG
|
https://papers.nips.cc/paper_files/paper/2021/file/9597353e41e6957b5e7aa79214fcb256-Supplemental.pdf
|
To build agents that can collaborate effectively with others, recent research has trained artificial agents to communicate with each other in Lewis-style referential games. However, this often leads to successful but uninterpretable communication. We argue that this is due to the game objective: communicating about a single object in a shared visual context is prone to overfitting and does not encourage language useful beyond concrete reference. In contrast, human language conveys a rich variety of abstract ideas. To promote such skills, we propose games that require communicating generalizations over sets of objects representing abstract visual concepts, optionally with separate contexts for each agent. We find that these games greatly improve systematicity and interpretability of the learned languages, according to several metrics in the literature. Finally, we propose a method for identifying logical operations embedded in the emergent languages by learning an approximate compositional reconstruction of the language.
| null |
Distributed Machine Learning with Sparse Heterogeneous Data
|
https://papers.nips.cc/paper_files/paper/2021/hash/959776b99b006e5785c3a3364949ce47-Abstract.html
|
Dominic Richards, Sahand Negahban, Patrick Rebeschini
|
https://papers.nips.cc/paper_files/paper/2021/hash/959776b99b006e5785c3a3364949ce47-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13001-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/959776b99b006e5785c3a3364949ce47-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=F9HNBbytcqT
|
https://papers.nips.cc/paper_files/paper/2021/file/959776b99b006e5785c3a3364949ce47-Supplemental.pdf
|
Motivated by distributed machine learning settings such as Federated Learning, we consider the problem of fitting a statistical model across a distributed collection of heterogeneous data sets whose similarity structure is encoded by a graph topology. Precisely, we analyse the case where each node is associated with fitting a sparse linear model, and edges join two nodes if the difference of their solutions is also sparse. We propose a method based on Basis Pursuit Denoising with a total variation penalty, and provide finite sample guarantees for sub-Gaussian design matrices. Taking the root of the tree as a reference node, we show that if the sparsity of the differences across nodes is smaller than the sparsity at the root, then recovery is successful with fewer samples than by solving the problems independently, or by using methods that rely on a large overlap in the signal supports, such as the group Lasso. We consider both the noiseless and noisy setting, and numerically investigate the performance of distributed methods based on Distributed Alternating Direction Methods of Multipliers (ADMM) and hyperspectral unmixing.
| null |
Manipulating SGD with Data Ordering Attacks
|
https://papers.nips.cc/paper_files/paper/2021/hash/959ab9a0695c467e7caf75431a872e5c-Abstract.html
|
I Shumailov, Zakhar Shumaylov, Dmitry Kazhdan, Yiren Zhao, Nicolas Papernot, Murat A. Erdogdu, Ross J Anderson
|
https://papers.nips.cc/paper_files/paper/2021/hash/959ab9a0695c467e7caf75431a872e5c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13002-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/959ab9a0695c467e7caf75431a872e5c-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=Z7xSQ3SXLQU
|
https://papers.nips.cc/paper_files/paper/2021/file/959ab9a0695c467e7caf75431a872e5c-Supplemental.pdf
|
Machine learning is vulnerable to a wide variety of attacks. It is now well understood that by changing the underlying data distribution, an adversary can poison the model trained with it or introduce backdoors. In this paper we present a novel class of training-time attacks that require no changes to the underlying dataset or model architecture, but instead only change the order in which data are supplied to the model. In particular, we find that the attacker can either prevent the model from learning, or poison it to learn behaviours specified by the attacker. Furthermore, we find that even a single adversarially-ordered epoch can be enough to slow down model learning, or even to reset all of the learning progress. Indeed, the attacks presented here are not specific to the model or dataset, but rather target the stochastic nature of modern learning procedures. We extensively evaluate our attacks on computer vision and natural language benchmarks to find that the adversary can disrupt model training and even introduce backdoors.
| null |
Graph Posterior Network: Bayesian Predictive Uncertainty for Node Classification
|
https://papers.nips.cc/paper_files/paper/2021/hash/95b431e51fc53692913da5263c214162-Abstract.html
|
Maximilian Stadler, Bertrand Charpentier, Simon Geisler, Daniel Zügner, Stephan Günnemann
|
https://papers.nips.cc/paper_files/paper/2021/hash/95b431e51fc53692913da5263c214162-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13003-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/95b431e51fc53692913da5263c214162-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=N0Pigj5tpHE
|
https://papers.nips.cc/paper_files/paper/2021/file/95b431e51fc53692913da5263c214162-Supplemental.pdf
|
The interdependence between nodes in graphs is key to improving class prediction on nodes, and is utilized in approaches like Label Propagation (LP) or Graph Neural Networks (GNNs). Nonetheless, uncertainty estimation for non-independent node-level predictions is under-explored. In this work, we explore uncertainty quantification for node classification in three ways: (1) We derive three axioms explicitly characterizing the expected predictive uncertainty behavior in homophilic attributed graphs. (2) We propose a new model, Graph Posterior Network (GPN), which explicitly performs Bayesian posterior updates for predictions on interdependent nodes. GPN provably obeys the proposed axioms. (3) We extensively evaluate GPN and a strong set of baselines on semi-supervised node classification, including detection of anomalous features and detection of left-out classes. GPN outperforms existing approaches for uncertainty estimation in the experiments.
| null |
Locality Sensitive Teaching
|
https://papers.nips.cc/paper_files/paper/2021/hash/95c3f1a8b262ec7a929a8739e21142d7-Abstract.html
|
Zhaozhuo Xu, Beidi Chen, Chaojian Li, Weiyang Liu, Le Song, Yingyan Lin, Anshumali Shrivastava
|
https://papers.nips.cc/paper_files/paper/2021/hash/95c3f1a8b262ec7a929a8739e21142d7-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13004-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/95c3f1a8b262ec7a929a8739e21142d7-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=Rav_oC35ToB
|
https://papers.nips.cc/paper_files/paper/2021/file/95c3f1a8b262ec7a929a8739e21142d7-Supplemental.pdf
|
The emergence of the Internet-of-Things (IoT) sheds light on applying machine teaching (MT) algorithms to online personalized education on home devices. This direction becomes more promising during the COVID-19 pandemic when in-person education becomes infeasible. However, as one of the most influential and practical MT paradigms, iterative machine teaching (IMT) is impractical on IoT devices due to its inefficient and unscalable algorithms. IMT is a paradigm where a teacher feeds examples iteratively and intelligently based on the learner's status. In each iteration, current IMT algorithms greedily traverse the whole training set to find an example for the learner, which is computationally expensive in practice. We propose a novel teaching framework, Locality Sensitive Teaching (LST), based on locality sensitive sampling, to overcome these challenges. LST has provable near-constant time complexity, which is exponentially better than the existing baseline. With up to 425.12x speedups and 99.76% energy savings over IMT, LST is the first algorithm that enables energy- and time-efficient machine teaching on IoT devices. Owing to LST's substantial efficiency and scalability, it is readily applicable in real-world education scenarios.
| null |
No-Press Diplomacy from Scratch
|
https://papers.nips.cc/paper_files/paper/2021/hash/95f2b84de5660ddf45c8a34933a2e66f-Abstract.html
|
Anton Bakhtin, David Wu, Adam Lerer, Noam Brown
|
https://papers.nips.cc/paper_files/paper/2021/hash/95f2b84de5660ddf45c8a34933a2e66f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13005-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/95f2b84de5660ddf45c8a34933a2e66f-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=Pq7wIzt3OUE
|
https://papers.nips.cc/paper_files/paper/2021/file/95f2b84de5660ddf45c8a34933a2e66f-Supplemental.pdf
|
Prior AI successes in complex games have largely focused on settings with at most hundreds of actions at each decision point. In contrast, Diplomacy is a game with more than 10^20 possible actions per turn. Previous attempts to address games with large branching factors, such as Diplomacy, StarCraft, and Dota, used human data to bootstrap the policy or used handcrafted reward shaping. In this paper, we describe an algorithm for action exploration and equilibrium approximation in games with combinatorial action spaces. This algorithm simultaneously performs value iteration while learning a policy proposal network. A double oracle step is used to explore additional actions to add to the policy proposals. At each state, the target state value and policy for the model training are computed via an equilibrium search procedure. Using this algorithm, we train an agent, DORA, completely from scratch for a popular two-player variant of Diplomacy and show that it achieves superhuman performance. Additionally, we extend our methods to full-scale no-press Diplomacy and for the first time train an agent from scratch with no human data. We present evidence that this agent plays a strategy that is incompatible with human-data bootstrapped agents. This presents the first strong evidence of multiple equilibria in Diplomacy and suggests that self play alone may be insufficient for achieving superhuman performance in Diplomacy.
| null |
Remember What You Want to Forget: Algorithms for Machine Unlearning
|
https://papers.nips.cc/paper_files/paper/2021/hash/9627c45df543c816a3ddf2d8ea686a99-Abstract.html
|
Ayush Sekhari, Jayadev Acharya, Gautam Kamath, Ananda Theertha Suresh
|
https://papers.nips.cc/paper_files/paper/2021/hash/9627c45df543c816a3ddf2d8ea686a99-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13006-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9627c45df543c816a3ddf2d8ea686a99-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=pvCLqcsLJ1N
|
https://papers.nips.cc/paper_files/paper/2021/file/9627c45df543c816a3ddf2d8ea686a99-Supplemental.pdf
|
We study the problem of unlearning datapoints from a learnt model. The learner first receives a dataset $S$ drawn i.i.d. from an unknown distribution, and outputs a model $\widehat{w}$ that performs well on unseen samples from the same distribution. However, at some point in the future, any training datapoint $z \in S$ can request to be unlearned, thus prompting the learner to modify its output model while still ensuring the same accuracy guarantees. We initiate a rigorous study of generalization in machine unlearning, where the goal is to perform well on previously unseen datapoints. Our focus is on both computational and storage complexity. For the setting of convex losses, we provide an unlearning algorithm that can unlearn up to $O(n/d^{1/4})$ samples, where $d$ is the problem dimension. In comparison, in general, differentially private learning (which implies unlearning) only guarantees deletion of $O(n/d^{1/2})$ samples. This demonstrates a novel separation between differential privacy and machine unlearning.
| null |
Learning latent causal graphs via mixture oracles
|
https://papers.nips.cc/paper_files/paper/2021/hash/966aad8981dcc75b5b8ab04427a833b2-Abstract.html
|
Bohdan Kivva, Goutham Rajendran, Pradeep Ravikumar, Bryon Aragam
|
https://papers.nips.cc/paper_files/paper/2021/hash/966aad8981dcc75b5b8ab04427a833b2-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13007-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/966aad8981dcc75b5b8ab04427a833b2-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=f9mSLa07Ncc
|
https://papers.nips.cc/paper_files/paper/2021/file/966aad8981dcc75b5b8ab04427a833b2-Supplemental.pdf
|
We study the problem of reconstructing a causal graphical model from data in the presence of latent variables. The main problem of interest is recovering the causal structure over the latent variables while allowing for general, potentially nonlinear dependencies. In many practical problems, the dependence between raw observations (e.g. pixels in an image) is much less relevant than the dependence between certain high-level, latent features (e.g. concepts or objects), and this is the setting of interest. We provide conditions under which both the latent representations and the underlying latent causal model are identifiable by a reduction to a mixture oracle. These results highlight an intriguing connection between the well-studied problem of learning the order of a mixture model and the problem of learning the bipartite structure between observables and unobservables. The proof is constructive, and leads to several algorithms for explicitly reconstructing the full graphical model. We discuss efficient algorithms and provide experiments illustrating the algorithms in practice.
| null |
ErrorCompensatedX: error compensation for variance reduced algorithms
|
https://papers.nips.cc/paper_files/paper/2021/hash/968c9b4f09cbb7d7925f38aea3484111-Abstract.html
|
Hanlin Tang, Yao Li, Ji Liu, Ming Yan
|
https://papers.nips.cc/paper_files/paper/2021/hash/968c9b4f09cbb7d7925f38aea3484111-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13008-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/968c9b4f09cbb7d7925f38aea3484111-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=fAWFaNaRVeF
|
https://papers.nips.cc/paper_files/paper/2021/file/968c9b4f09cbb7d7925f38aea3484111-Supplemental.pdf
|
Communication cost is one major bottleneck for the scalability of distributed learning. One approach to reduce the communication cost is to compress the gradient during communication. However, directly compressing the gradient decelerates the convergence speed, and the resulting algorithm may diverge for biased compression. Recent work addressed this problem for stochastic gradient descent by adding back the compression error from the previous step. This idea was further extended to one class of variance reduced algorithms, where the variance of the stochastic gradient is reduced by taking a moving average over all history gradients. However, our analysis shows that just adding the previous step's compression error, as done in existing work, does not fully compensate for the compression error. We therefore propose ErrorCompensatedX, which uses the compression error from the previous two steps. We show that ErrorCompensatedX can achieve the same asymptotic convergence rate as training without compression. Moreover, we provide a unified theoretical analysis framework for this class of variance reduced algorithms, with or without error compensation.
| null |
Deep Contextual Video Compression
|
https://papers.nips.cc/paper_files/paper/2021/hash/96b250a90d3cf0868c83f8c965142d2a-Abstract.html
|
Jiahao Li, Bin Li, Yan Lu
|
https://papers.nips.cc/paper_files/paper/2021/hash/96b250a90d3cf0868c83f8c965142d2a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13009-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/96b250a90d3cf0868c83f8c965142d2a-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=evqzNxmXsl3
|
https://papers.nips.cc/paper_files/paper/2021/file/96b250a90d3cf0868c83f8c965142d2a-Supplemental.pdf
|
Most of the existing neural video compression methods adopt the predictive coding framework, which first generates the predicted frame and then encodes its residue with the current frame. However, in terms of compression ratio, predictive coding is only a sub-optimal solution, as it uses a simple subtraction operation to remove the redundancy across frames. In this paper, we propose a deep contextual video compression framework to enable a paradigm shift from predictive coding to conditional coding. In particular, we try to answer the following questions: how to define, use, and learn the condition under a deep video compression framework. To tap the potential of conditional coding, we propose using the feature domain context as the condition. This enables us to leverage the high dimension context to carry rich information to both the encoder and the decoder, which helps reconstruct the high-frequency contents for higher video quality. Our framework is also extensible, in which the condition can be flexibly designed. Experiments show that our method can significantly outperform the previous state-of-the-art (SOTA) deep video compression methods. When compared with x265 using the veryslow preset, we can achieve 26.0% bitrate saving for 1080P standard test videos.
| null |
On the Frequency Bias of Generative Models
|
https://papers.nips.cc/paper_files/paper/2021/hash/96bf57c6ff19504ff145e2a32991ea96-Abstract.html
|
Katja Schwarz, Yiyi Liao, Andreas Geiger
|
https://papers.nips.cc/paper_files/paper/2021/hash/96bf57c6ff19504ff145e2a32991ea96-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13010-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/96bf57c6ff19504ff145e2a32991ea96-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=IARK9TWiFRb
|
https://papers.nips.cc/paper_files/paper/2021/file/96bf57c6ff19504ff145e2a32991ea96-Supplemental.zip
|
The key objective of Generative Adversarial Networks (GANs) is to generate new data with the same statistics as the provided training data. However, multiple recent works show that state-of-the-art architectures still struggle to achieve this goal. In particular, they report an elevated amount of high frequencies in the spectral statistics which makes it straightforward to distinguish real and generated images. Explanations for this phenomenon are controversial: While most works attribute the artifacts to the generator, other works point to the discriminator. We take a sober look at those explanations and provide insights on what makes proposed measures against high-frequency artifacts effective. To achieve this, we first independently assess the architectures of both the generator and discriminator and investigate if they exhibit a frequency bias that makes learning the distribution of high-frequency content particularly problematic. Based on these experiments, we make the following four observations: 1) Different upsampling operations bias the generator towards different spectral properties. 2) Checkerboard artifacts introduced by upsampling cannot explain the spectral discrepancies alone as the generator is able to compensate for these artifacts. 3) The discriminator does not struggle with detecting high frequencies per se but rather struggles with frequencies of low magnitude. 4) The downsampling operations in the discriminator can impair the quality of the training signal it provides. In light of these findings, we analyze proposed measures against high-frequency artifacts in state-of-the-art GAN training but find that none of the existing approaches can fully resolve spectral artifacts yet. Our results suggest that there is great potential in improving the discriminator and that this could be key to matching the distribution of the training data more closely.
| null |
Learning curves of generic features maps for realistic datasets with a teacher-student model
|
https://papers.nips.cc/paper_files/paper/2021/hash/9704a4fc48ae88598dcbdcdf57f3fdef-Abstract.html
|
Bruno Loureiro, Cedric Gerbelot, Hugo Cui, Sebastian Goldt, Florent Krzakala, Marc Mezard, Lenka Zdeborová
|
https://papers.nips.cc/paper_files/paper/2021/hash/9704a4fc48ae88598dcbdcdf57f3fdef-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13011-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9704a4fc48ae88598dcbdcdf57f3fdef-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=B1Kh0SVodY
|
https://papers.nips.cc/paper_files/paper/2021/file/9704a4fc48ae88598dcbdcdf57f3fdef-Supplemental.pdf
|
Teacher-student models provide a framework in which the typical-case performance of high-dimensional supervised learning can be described in closed form. The assumptions of Gaussian i.i.d. input data underlying the canonical teacher-student model may, however, be perceived as too restrictive to capture the behaviour of realistic data sets. In this paper, we introduce a Gaussian covariate generalisation of the model where the teacher and student can act on different spaces, generated with fixed, but generic feature maps. While still solvable in a closed form, this generalization is able to capture the learning curves for a broad range of realistic data sets, thus redeeming the potential of the teacher-student framework. Our contribution is then two-fold: first, we prove a rigorous formula for the asymptotic training loss and generalisation error. Second, we present a number of situations where the learning curve of the model captures that of a realistic data set learned with kernel regression and classification, with out-of-the-box feature maps such as random projections or scattering transforms, or with pre-learned ones, such as the features learned by training multi-layer neural networks. We discuss both the power and the limitations of the framework.
| null |
It Has Potential: Gradient-Driven Denoisers for Convergent Solutions to Inverse Problems
|
https://papers.nips.cc/paper_files/paper/2021/hash/97108695bd93b6be52fa0334874c8722-Abstract.html
|
Regev Cohen, Yochai Blau, Daniel Freedman, Ehud Rivlin
|
https://papers.nips.cc/paper_files/paper/2021/hash/97108695bd93b6be52fa0334874c8722-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13012-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/97108695bd93b6be52fa0334874c8722-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=MYvpQVjCK0_
|
https://papers.nips.cc/paper_files/paper/2021/file/97108695bd93b6be52fa0334874c8722-Supplemental.pdf
|
In recent years there has been increasing interest in leveraging denoisers for solving general inverse problems. Two leading frameworks are regularization-by-denoising (RED) and plug-and-play priors (PnP) which incorporate explicit likelihood functions with priors induced by denoising algorithms. RED and PnP have shown state-of-the-art performance in diverse imaging tasks when powerful denoisers are used, such as convolutional neural networks (CNNs). However, the study of their convergence remains an active line of research. Recent works derive the convergence of RED and PnP methods by treating CNN denoisers as approximations for maximum a posteriori (MAP) or minimum mean square error (MMSE) estimators. Yet, state-of-the-art denoisers cannot be interpreted as either MAP or MMSE estimators, since they typically do not exhibit symmetric Jacobians. Furthermore, obtaining stable inverse algorithms often requires controlling the Lipschitz constant of CNN denoisers during training. Precisely enforcing this constraint is impractical, hence, convergence cannot be completely guaranteed. In this work, we introduce image denoisers derived as the gradients of smooth scalar-valued deep neural networks, acting as potentials. This ensures two things: (1) the proposed denoisers display symmetric Jacobians, allowing for MAP and MMSE estimator interpretations; (2) the denoisers may be integrated into RED and PnP schemes with backtracking step size, removing the need for enforcing their Lipschitz constant. To show the latter, we develop a simple inversion method that utilizes the proposed denoisers. We theoretically establish its convergence to stationary points of an underlying objective function consisting of the learned potentials. We numerically validate our method through various imaging experiments, showing improved results compared to standard RED and PnP methods, and with additional provable stability.
| null |
Training Over-parameterized Models with Non-decomposable Objectives
|
https://papers.nips.cc/paper_files/paper/2021/hash/9713faa264b94e2bf346a1bb52587fd8-Abstract.html
|
Harikrishna Narasimhan, Aditya K. Menon
|
https://papers.nips.cc/paper_files/paper/2021/hash/9713faa264b94e2bf346a1bb52587fd8-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13013-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9713faa264b94e2bf346a1bb52587fd8-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=Y0fS_1N0rsk
|
https://papers.nips.cc/paper_files/paper/2021/file/9713faa264b94e2bf346a1bb52587fd8-Supplemental.pdf
|
Many modern machine learning applications come with complex and nuanced design goals such as minimizing the worst-case error, satisfying a given precision or recall target, or enforcing group-fairness constraints. Popular techniques for optimizing such non-decomposable objectives reduce the problem into a sequence of cost-sensitive learning tasks, each of which is then solved by re-weighting the training loss with example-specific costs. We point out that the standard approach of re-weighting the loss to incorporate label costs can produce unsatisfactory results when used to train over-parameterized models. As a remedy, we propose new cost-sensitive losses that extend the classical idea of logit adjustment to handle more general cost matrices. Our losses are calibrated, and can be further improved with distilled labels from a teacher model. Through experiments on benchmark image datasets, we showcase the effectiveness of our approach in training ResNet models with common robust and constrained optimization objectives.
| null |
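As a rough illustration of the direction described in the abstract above, the sketch below shows classical logit adjustment (shifting logits by scaled log class priors) with an optional per-class gain vector standing in for a cost matrix. The paper's losses for general cost matrices are more involved; the `gains` handling and the parameter `tau` here are assumptions.

```python
import torch
import torch.nn.functional as F

def logit_adjusted_loss(logits, targets, class_priors, tau=1.0, gains=None):
    """Cross-entropy with logit adjustment: each logit is shifted by
    tau * log(prior_k); an optional per-class gain vector further offsets
    the logits as a simple stand-in for cost-sensitivity."""
    adjustment = tau * torch.log(class_priors)
    if gains is not None:
        adjustment = adjustment - torch.log(gains)
    return F.cross_entropy(logits + adjustment, targets)

# toy usage: 3 classes with imbalanced priors
logits = torch.randn(8, 3)
targets = torch.randint(0, 3, (8,))
loss = logit_adjusted_loss(logits, targets, torch.tensor([0.7, 0.2, 0.1]))
```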
Reinforcement learning for optimization of variational quantum circuit architectures
|
https://papers.nips.cc/paper_files/paper/2021/hash/9724412729185d53a2e3e7f889d9f057-Abstract.html
|
Mateusz Ostaszewski, Lea M. Trenkwalder, Wojciech Masarczyk, Eleanor Scerri, Vedran Dunjko
|
https://papers.nips.cc/paper_files/paper/2021/hash/9724412729185d53a2e3e7f889d9f057-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13014-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9724412729185d53a2e3e7f889d9f057-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=JQznhE5mdyv
|
https://papers.nips.cc/paper_files/paper/2021/file/9724412729185d53a2e3e7f889d9f057-Supplemental.pdf
|
The study of Variational Quantum Eigensolvers (VQEs) has been in the spotlight in recent times as they may lead to real-world applications of near-term quantum devices. However, their performance depends on the structure of the used variational ansatz, which requires balancing the depth and expressivity of the corresponding circuit. At the same time, near-term restrictions limit the depth of the circuit we can expect to run. Thus, the optimization of the VQE ansatz requires maximizing the expressivity of the circuit while maintaining low depth. In recent years, various methods for VQE structure optimization have been introduced but the capacities of machine learning to aid with this problem have not yet been extensively investigated. In this work, we propose a reinforcement learning algorithm that autonomously explores the space of possible ansatzes, identifying economic circuits which still yield accurate ground energy estimates. The algorithm uses a feedback-driven curriculum learning method that autonomously adapts the complexity of the learning problem to the current performance of the learning algorithm and it incrementally improves the accuracy of the result while minimizing the circuit depth. We showcase the performance of our algorithm on the problem of estimating the ground-state energy of lithium hydride (LiH) in various configurations. In this well-known benchmark problem, we achieve chemical accuracy and state-of-the-art results in terms of circuit depth.
| null |
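One concrete ingredient of the method in the abstract above is the feedback-driven curriculum: the accuracy threshold defining success is tightened as the agent improves. The schedule below is an illustrative guess at such a rule, not the paper's exact update; `shrink`, `margin`, and the example target energy are assumptions.

```python
def update_threshold(threshold, best_energy, target_energy, shrink=0.5, margin=1e-3):
    """Feedback-driven curriculum sketch: once the agent's best energy estimate
    reaches the current threshold (within `margin`), move the threshold a
    fraction `shrink` of the remaining way toward the target ground-state energy."""
    if best_energy <= threshold + margin:
        threshold = target_energy + shrink * (threshold - target_energy)
    return threshold

# toy usage: the threshold tightens toward an assumed target energy (in Hartree)
t = -7.5
t = update_threshold(t, best_energy=-7.6, target_energy=-7.88)  # -> about -7.69
```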
Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
|
https://papers.nips.cc/paper_files/paper/2021/hash/97275a23ca44226c9964043c8462be96-Abstract.html
|
Max Ryabinin, Eduard Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko
|
https://papers.nips.cc/paper_files/paper/2021/hash/97275a23ca44226c9964043c8462be96-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13015-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/97275a23ca44226c9964043c8462be96-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=cwWfDHYpb1z
|
https://papers.nips.cc/paper_files/paper/2021/file/97275a23ca44226c9964043c8462be96-Supplemental.pdf
|
Training deep neural networks on large datasets can often be accelerated by using multiple compute nodes. This approach, known as distributed training, can utilize hundreds of computers via specialized message-passing protocols such as Ring All-Reduce. However, running these protocols at scale requires reliable high-speed networking that is only available in dedicated clusters. In contrast, many real-world applications, such as federated learning and cloud-based distributed training, operate on unreliable devices with unstable network bandwidth. As a result, these applications are restricted to using parameter servers or gossip-based averaging protocols. In this work, we lift that restriction by proposing Moshpit All-Reduce, an iterative averaging protocol that exponentially converges to the global average. We demonstrate the efficiency of our protocol for distributed optimization with strong theoretical guarantees. The experiments show 1.3x speedup for ResNet-50 training on ImageNet compared to competitive gossip-based strategies and 1.5x speedup when training ALBERT-large on preemptible compute nodes.
| null |
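The core averaging idea in the abstract above can be simulated locally: peers sit on a virtual grid and repeatedly average within groups that share all grid coordinates but one, which yields the exact global average after as many rounds as the grid has dimensions (under full participation). The NumPy simulation below is a sketch of that averaging pattern only, not the fault-tolerant networked protocol.

```python
import numpy as np

def moshpit_average_sim(values, grid_shape):
    """Simulate iterative group averaging: in round r, peers sharing all grid
    coordinates except coordinate r replace their values with the group mean."""
    x = np.asarray(values, dtype=float).reshape(grid_shape)
    for axis in range(x.ndim):
        x = np.broadcast_to(x.mean(axis=axis, keepdims=True), x.shape).copy()
    return x.reshape(-1)

# 16 peers on a 4x4 grid all hold the exact global mean after 2 rounds
print(moshpit_average_sim(np.arange(16.0), (4, 4)))  # every entry equals 7.5
```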
IRM—when it works and when it doesn't: A test case of natural language inference
|
https://papers.nips.cc/paper_files/paper/2021/hash/972cda1e62b72640cb7ac702714a115f-Abstract.html
|
Yana Dranker, He He, Yonatan Belinkov
|
https://papers.nips.cc/paper_files/paper/2021/hash/972cda1e62b72640cb7ac702714a115f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13016-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/972cda1e62b72640cb7ac702714a115f-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=KtvHbjCF4v
|
https://papers.nips.cc/paper_files/paper/2021/file/972cda1e62b72640cb7ac702714a115f-Supplemental.pdf
|
Invariant Risk Minimization (IRM) is a recently proposed framework for out-of-distribution (o.o.d.) generalization. Most of the studies on IRM so far have focused on theoretical results, toy problems, and simple models. In this work, we investigate the applicability of IRM to bias mitigation, a special case of o.o.d. generalization, in increasingly naturalistic settings and deep models. Using natural language inference (NLI) as a test case, we start with a setting where both the dataset and the bias are synthetic, continue with a natural dataset and synthetic bias, and end with a fully realistic setting with natural datasets and bias. Our results show that in naturalistic settings, learning complex features in place of the bias proves to be difficult, leading to a rather small improvement over empirical risk minimization. Moreover, we find that in addition to being sensitive to random seeds, the performance of IRM also depends on several critical factors, notably dataset size, bias prevalence, and bias strength, thus limiting IRM's advantage in practical scenarios. Our results highlight key challenges in applying IRM to real-world scenarios, calling for a more naturalistic characterization of the problem setup for o.o.d. generalization.
| null |
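For context, studies like the one above typically build on the IRMv1 objective of Arjovsky et al. (2019): a sum of per-environment risks plus a penalty given by the squared gradient of each environment's risk with respect to a dummy classifier scale. The sketch below is that generic objective; the paper's NLI-specific training setup and hyperparameters are not reproduced here.

```python
import torch
import torch.nn.functional as F

def irmv1_penalty(logits, labels):
    """Squared gradient of the environment risk w.r.t. a fixed dummy scale."""
    scale = torch.ones(1, device=logits.device, requires_grad=True)
    loss = F.cross_entropy(logits * scale, labels)
    (grad,) = torch.autograd.grad(loss, scale, create_graph=True)
    return (grad ** 2).sum()

def irmv1_objective(env_logits, env_labels, lam=100.0):
    """Mean per-environment risk plus lam times the mean IRM penalty."""
    risks = torch.stack([F.cross_entropy(l, y) for l, y in zip(env_logits, env_labels)])
    penalties = torch.stack([irmv1_penalty(l, y) for l, y in zip(env_logits, env_labels)])
    return risks.mean() + lam * penalties.mean()
```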
Self-Supervised Learning Disentangled Group Representation as Feature
|
https://papers.nips.cc/paper_files/paper/2021/hash/97416ac0f58056947e2eb5d5d253d4f2-Abstract.html
|
Tan Wang, Zhongqi Yue, Jianqiang Huang, Qianru Sun, Hanwang Zhang
|
https://papers.nips.cc/paper_files/paper/2021/hash/97416ac0f58056947e2eb5d5d253d4f2-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13017-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/97416ac0f58056947e2eb5d5d253d4f2-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=RQfcckT1M_4
|
https://papers.nips.cc/paper_files/paper/2021/file/97416ac0f58056947e2eb5d5d253d4f2-Supplemental.pdf
|
A good visual representation is an inference map from observations (images) to features (vectors) that faithfully reflects the hidden modularized generative factors (semantics). In this paper, we formulate the notion of "good" representation from a group-theoretic view using Higgins' definition of disentangled representation, and show that existing Self-Supervised Learning (SSL) only disentangles simple augmentation features such as rotation and colorization, thus unable to modularize the remaining semantics. To break the limitation, we propose an iterative SSL algorithm: Iterative Partition-based Invariant Risk Minimization (IP-IRM), which successfully grounds the abstract semantics and the group acting on them into concrete contrastive learning. At each iteration, IP-IRM first partitions the training samples into two subsets that correspond to an entangled group element. Then, it minimizes a subset-invariant contrastive loss, where the invariance guarantees to disentangle the group element. We prove that IP-IRM converges to a fully disentangled representation and show its effectiveness on various benchmarks. Codes are available at https://github.com/Wangt-CN/IP-IRM.
| null |
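A rough sketch of the ingredients named in the abstract above: a standard InfoNCE contrastive loss computed per subset of the current partition, plus a penalty that pushes the loss to be invariant across the two subsets. The cross-subset variance penalty below is a simplification standing in for the paper's IRM-style term, and the step that updates the partition itself is omitted.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.2):
    """InfoNCE loss between two batches of augmented-view embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

def subset_invariant_loss(z1, z2, partition, lam=1.0):
    """Contrastive loss averaged over both subsets plus a simple cross-subset
    variance penalty as a stand-in for the IRM term (assumes both subsets
    are non-empty)."""
    losses = torch.stack([info_nce(z1[partition == k], z2[partition == k]) for k in (0, 1)])
    return losses.mean() + lam * losses.var()
```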
SalKG: Learning From Knowledge Graph Explanations for Commonsense Reasoning
|
https://papers.nips.cc/paper_files/paper/2021/hash/9752d873fa71c19dc602bf2a0696f9b5-Abstract.html
|
Aaron Chan, Jiashu Xu, Boyuan Long, Soumya Sanyal, Tanishq Gupta, Xiang Ren
|
https://papers.nips.cc/paper_files/paper/2021/hash/9752d873fa71c19dc602bf2a0696f9b5-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13018-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9752d873fa71c19dc602bf2a0696f9b5-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=FUxXaBop-J_
|
https://papers.nips.cc/paper_files/paper/2021/file/9752d873fa71c19dc602bf2a0696f9b5-Supplemental.pdf
|
Augmenting pre-trained language models with knowledge graphs (KGs) has achieved success on various commonsense reasoning tasks. However, for a given task instance, the KG, or certain parts of the KG, may not be useful. Although KG-augmented models often use attention to focus on specific KG components, the KG is still always used, and the attention mechanism is never explicitly taught which KG components should be used. Meanwhile, saliency methods can measure how much a KG feature (e.g., graph, node, path) influences the model to make the correct prediction, thus explaining which KG features are useful. This paper explores how saliency explanations can be used to improve KG-augmented models' performance. First, we propose to create coarse (Is the KG useful?) and fine (Which nodes/paths in the KG are useful?) saliency explanations. Second, to motivate saliency-based supervision, we analyze oracle KG-augmented models which directly use saliency explanations as extra inputs for guiding their attention. Third, we propose SalKG, a framework for KG-augmented models to learn from coarse and/or fine saliency explanations. Given saliency explanations created from a task's training set, SalKG jointly trains the model to predict the explanations, then solve the task by attending to KG features highlighted by the predicted explanations. On three popular commonsense QA benchmarks (CSQA, OBQA, CODAH) and a range of KG-augmented models, we show that SalKG can yield considerable performance gains --- up to 2.76% absolute improvement on CSQA.
| null |
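A simplified sketch of the coarse-saliency idea above: mark the KG as useful for an instance when it raises the model's probability of the correct answer, then train with a joint task-plus-saliency loss. The zero margin, the loss weighting `alpha`, and the binary formulation are assumptions; fine (node/path-level) saliency is not shown.

```python
import torch
import torch.nn.functional as F

def coarse_saliency_labels(p_correct_with_kg, p_correct_no_kg, margin=0.0):
    """1 if using the KG increases the probability of the correct answer, else 0."""
    return (p_correct_with_kg - p_correct_no_kg > margin).long()

def joint_loss(task_logits, answers, saliency_logits, saliency_labels, alpha=0.5):
    """Weighted sum of the QA loss and the coarse saliency-prediction loss."""
    task = F.cross_entropy(task_logits, answers)
    saliency = F.cross_entropy(saliency_logits, saliency_labels)
    return (1 - alpha) * task + alpha * saliency
```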
Supervising the Transfer of Reasoning Patterns in VQA
|
https://papers.nips.cc/paper_files/paper/2021/hash/9766527f2b5d3e95d4a733fcfb77bd7e-Abstract.html
|
Corentin Kervadec, Christian Wolf, Grigory Antipov, Moez Baccouche, Madiha Nadri
|
https://papers.nips.cc/paper_files/paper/2021/hash/9766527f2b5d3e95d4a733fcfb77bd7e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13019-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9766527f2b5d3e95d4a733fcfb77bd7e-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=kqYiS7HEWfZ
|
https://papers.nips.cc/paper_files/paper/2021/file/9766527f2b5d3e95d4a733fcfb77bd7e-Supplemental.pdf
|
Methods for Visual Question Answering (VQA) are notorious for leveraging dataset biases rather than performing reasoning, hindering generalization. It has been recently shown that better reasoning patterns emerge in attention layers of a state-of-the-art VQA model when they are trained on perfect (oracle) visual inputs. This provides evidence that deep neural networks can learn to reason when training conditions are favorable enough. However, transferring this learned knowledge to deployable models is a challenge, as much of it is lost during the transfer. We propose a method for knowledge transfer based on a regularization term in our loss function, supervising the sequence of required reasoning operations. We provide a theoretical analysis based on PAC-learning, showing that such program prediction can lead to decreased sample complexity under mild hypotheses. We also demonstrate the effectiveness of this approach experimentally on the GQA dataset and show its complementarity to BERT-like self-supervised pre-training.
| null |
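The regularization term described in the abstract above amounts to an auxiliary loss that supervises the predicted sequence of reasoning operations alongside the usual answer loss. The sketch below only shows that overall shape; the weight `beta` and the per-step cross-entropy factorization are assumptions.

```python
import torch.nn.functional as F

def vqa_loss_with_program_supervision(answer_logits, answers,
                                      op_logits, op_targets, beta=0.1):
    """Answer-classification loss plus an auxiliary loss on the predicted
    reasoning operations (op_logits: (T, num_ops), op_targets: (T,))."""
    answer_loss = F.cross_entropy(answer_logits, answers)
    program_loss = F.cross_entropy(op_logits, op_targets)
    return answer_loss + beta * program_loss
```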
Conformal Bayesian Computation
|
https://papers.nips.cc/paper_files/paper/2021/hash/97785e0500ad16c18574c64189ccf4b4-Abstract.html
|
Edwin Fong, Chris C Holmes
|
https://papers.nips.cc/paper_files/paper/2021/hash/97785e0500ad16c18574c64189ccf4b4-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13020-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/97785e0500ad16c18574c64189ccf4b4-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=e95xWqO7ehi
|
https://papers.nips.cc/paper_files/paper/2021/file/97785e0500ad16c18574c64189ccf4b4-Supplemental.pdf
|
We develop scalable methods for producing conformal Bayesian predictive intervals with finite sample calibration guarantees. Bayesian posterior predictive distributions, $p(y \mid x)$, characterize subjective beliefs on outcomes of interest, $y$, conditional on predictors, $x$. Bayesian prediction is well-calibrated when the model is true, but the predictive intervals may exhibit poor empirical coverage when the model is misspecified, under the so called ${\cal{M}}$-open perspective. In contrast, conformal inference provides finite sample frequentist guarantees on predictive confidence intervals without the requirement of model fidelity. Using 'add-one-in' importance sampling, we show that conformal Bayesian predictive intervals are efficiently obtained from re-weighted posterior samples of model parameters. Our approach contrasts with existing conformal methods that require expensive refitting of models or data-splitting to achieve computational efficiency. We demonstrate the utility on a range of examples including extensions to partially exchangeable settings such as hierarchical models.
| null |
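A compact sketch of the 'add-one-in' computation described above, assuming posterior samples and pointwise log-likelihoods are already available. The conformity score (re-weighted predictive density) and the rank-based p-value follow the description in the abstract, but details such as the grid over candidate y values and tie handling are omitted.

```python
import numpy as np

def conformal_bayes_pvalue(loglik_train, loglik_candidate):
    """loglik_train: (T, n) array of log p(y_i | x_i, theta_t) over posterior samples.
    loglik_candidate: (T,) array of log p(y_cand | x_new, theta_t).
    Returns the conformal p-value for the candidate label y_cand."""
    w = np.exp(loglik_candidate - loglik_candidate.max())
    w /= w.sum()                                      # add-one-in importance weights
    score_train = w @ np.exp(loglik_train)            # (n,) predictive density of each y_i
    score_cand = float(w @ np.exp(loglik_candidate))  # predictive density of the candidate
    n = loglik_train.shape[1]
    return (np.sum(score_train <= score_cand) + 1) / (n + 1)
```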
A Unified Approach to Fair Online Learning via Blackwell Approachability
|
https://papers.nips.cc/paper_files/paper/2021/hash/97ea3cfb64eeaa1edba65501d0bb3c86-Abstract.html
|
Evgenii Chzhen, Christophe Giraud, Gilles Stoltz
|
https://papers.nips.cc/paper_files/paper/2021/hash/97ea3cfb64eeaa1edba65501d0bb3c86-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13021-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/97ea3cfb64eeaa1edba65501d0bb3c86-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=vMWHOumNj5
|
https://papers.nips.cc/paper_files/paper/2021/file/97ea3cfb64eeaa1edba65501d0bb3c86-Supplemental.pdf
|
We provide a setting and a general approach to fair online learning with stochastic sensitive and non-sensitive contexts. The setting is a repeated game between the Player and Nature, where at each stage both pick actions based on the contexts. Inspired by the notion of unawareness, we assume that the Player can only access the non-sensitive context before making a decision, while we discuss both cases of Nature accessing the sensitive contexts and Nature unaware of the sensitive contexts. Adapting Blackwell's approachability theory to handle the case of an unknown distribution of contexts, we provide a general necessary and sufficient condition for learning objectives to be compatible with some fairness constraints. This condition is instantiated on (group-wise) no-regret and (group-wise) calibration objectives, and on demographic parity as an additional constraint. When the objective is not compatible with the constraint, the provided framework permits characterising the optimal trade-off between the two.
| null |
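For concreteness, the demographic-parity constraint mentioned above compares a policy's average decision across sensitive groups; the helper below computes that empirical gap for two groups. It illustrates the constraint only, not the approachability-based learning algorithm.

```python
import numpy as np

def demographic_parity_gap(decisions, groups):
    """Absolute difference in average decision between sensitive groups 0 and 1."""
    decisions = np.asarray(decisions, dtype=float)
    groups = np.asarray(groups)
    return abs(decisions[groups == 0].mean() - decisions[groups == 1].mean())

# e.g., acceptance rates of 0.5 vs. 0.25 give a gap of 0.25
print(demographic_parity_gap([1, 0, 1, 0, 1, 0, 0, 0], [0, 0, 0, 0, 1, 1, 1, 1]))
```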
Training Neural Networks is ER-complete
|
https://papers.nips.cc/paper_files/paper/2021/hash/9813b270ed0288e7c0388f0fd4ec68f5-Abstract.html
|
Mikkel Abrahamsen, Linda Kleist, Tillmann Miltzow
|
https://papers.nips.cc/paper_files/paper/2021/hash/9813b270ed0288e7c0388f0fd4ec68f5-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13022-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9813b270ed0288e7c0388f0fd4ec68f5-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=fNo7un6Ilaj
| null |
Given a neural network, training data, and a threshold, finding weights for the neural network such that the total error is below the threshold is known to be NP-hard. We determine the algorithmic complexity of this fundamental problem precisely, by showing that it is $\exists\mathbb R$-complete. This means that the problem is equivalent, up to polynomial time reductions, to deciding whether a system of polynomial equations and inequalities with integer coefficients and real unknowns has a solution. If, as widely expected, $\exists\mathbb R$ is strictly larger than NP, our work implies that the problem of training neural networks is not even in NP. Neural networks are usually trained using some variation of backpropagation. The result of this paper explains why techniques commonly used to solve big instances of NP-complete problems (such as SAT solvers, IP solvers, local search, dynamic programming, etc.) seem to be of no use to this task.
| null |
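For reference, the class mentioned above can be written compactly. The statement of the training decision problem below uses squared error as an illustrative choice; the paper's exact architectures and error measure are not reproduced here.

```latex
% Existential theory of the reals: decide the truth of sentences
\exists x_1 \cdots \exists x_n \;\; \varphi(x_1, \dots, x_n),
% where \varphi is a quantifier-free Boolean combination of polynomial
% equalities and inequalities with integer coefficients.

% Neural-network training, decision version (illustrative error measure):
\exists w \;\; \sum_{j=1}^{m} \bigl( f_w(x_j) - y_j \bigr)^2 \le \delta
```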
Understanding the Under-Coverage Bias in Uncertainty Estimation
|
https://papers.nips.cc/paper_files/paper/2021/hash/9854d7afce413aa13cd0a1d39d0bcec5-Abstract.html
|
Yu Bai, Song Mei, Huan Wang, Caiming Xiong
|
https://papers.nips.cc/paper_files/paper/2021/hash/9854d7afce413aa13cd0a1d39d0bcec5-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13023-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9854d7afce413aa13cd0a1d39d0bcec5-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=te8iyHjbPQd
|
https://papers.nips.cc/paper_files/paper/2021/file/9854d7afce413aa13cd0a1d39d0bcec5-Supplemental.pdf
|
Estimating the data uncertainty in regression tasks is often done by learning a quantile function or a prediction interval of the true label conditioned on the input. It is frequently observed that quantile regression (a vanilla algorithm for learning quantiles with asymptotic guarantees) tends to *under-cover*, i.e., fall short of the desired coverage level, in practice. While various fixes have been proposed, a more fundamental understanding of why this under-coverage bias happens in the first place remains elusive. In this paper, we present a rigorous theoretical study on the coverage of uncertainty estimation algorithms in learning quantiles. We prove that quantile regression suffers from an inherent under-coverage bias, in a vanilla setting where we learn a realizable linear quantile function and there is more data than parameters. More quantitatively, for $\alpha>0.5$ and small $d/n$, the $\alpha$-quantile learned by quantile regression roughly achieves coverage $\alpha - (\alpha-1/2)\cdot d/n$ regardless of the noise distribution, where $d$ is the input dimension and $n$ is the number of training data. Our theory reveals that this under-coverage bias stems from a certain high-dimensional parameter estimation error that is not implied by existing theories on quantile regression. Experiments on simulated and real data verify our theory and further illustrate the effect of various factors such as sample size and model capacity on the under-coverage bias in more practical setups.
| null |
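The setting analyzed above is easy to probe in simulation: fit a realizable linear quantile model by minimizing the pinball loss and compare empirical test coverage with the nominal level. The sketch below uses plain subgradient descent rather than an exact linear-programming quantile-regression solver, so its coverage will only roughly track the stated $\alpha - (\alpha-1/2)\cdot d/n$ formula.

```python
import torch

def pinball_loss(pred, y, alpha):
    """Pinball (check) loss whose population minimizer is the alpha-quantile."""
    diff = y - pred
    return torch.mean(torch.maximum(alpha * diff, (alpha - 1) * diff))

def coverage_experiment(alpha=0.9, d=50, n=500, n_test=20000, steps=3000, lr=0.05):
    """Fit a linear alpha-quantile with an intercept and report test coverage;
    the theory predicts roughly alpha - (alpha - 0.5) * d / n."""
    torch.manual_seed(0)
    X, Xt = torch.randn(n, d), torch.randn(n_test, d)
    w_star = torch.randn(d) / d ** 0.5
    y, yt = X @ w_star + torch.randn(n), Xt @ w_star + torch.randn(n_test)
    w = torch.zeros(d, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt = torch.optim.SGD([w, b], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pinball_loss(X @ w + b, y, alpha).backward()
        opt.step()
    with torch.no_grad():
        return (yt <= Xt @ w + b).float().mean().item()

# alpha=0.9 with d/n=0.1 predicts coverage near 0.9 - 0.4 * 0.1 = 0.86
```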