title | url | authors | detail_url | tags | Bibtex | Paper | Reviews And Public Comment » | Supplemental | abstract | Supplemental Errata
---|---|---|---|---|---|---|---|---|---|---
Credal Self-Supervised Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/7866c91c59f8bffc92a79a7cd09f9af9-Abstract.html
|
Julian Lienen, Eyke Hüllermeier
|
https://papers.nips.cc/paper_files/paper/2021/hash/7866c91c59f8bffc92a79a7cd09f9af9-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12724-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7866c91c59f8bffc92a79a7cd09f9af9-Paper.pdf
|
https://openreview.net/forum?id=ypj3xKoRfmr
|
https://papers.nips.cc/paper_files/paper/2021/file/7866c91c59f8bffc92a79a7cd09f9af9-Supplemental.pdf
|
Self-training is an effective approach to semi-supervised learning. The key idea is to let the learner itself iteratively generate "pseudo-supervision" for unlabeled instances based on its current hypothesis. In combination with consistency regularization, pseudo-labeling has shown promising performance in various domains, for example in computer vision. To account for the hypothetical nature of the pseudo-labels, these are commonly provided in the form of probability distributions. Still, one may argue that even a probability distribution represents an excessive level of informedness, as it suggests that the learner precisely knows the ground-truth conditional probabilities. In our approach, we therefore allow the learner to label instances in the form of credal sets, that is, sets of (candidate) probability distributions. Thanks to this increased expressiveness, the learner is able to represent uncertainty and a lack of knowledge in a more flexible and more faithful manner. To learn from weakly labeled data of that kind, we leverage methods that have recently been proposed in the realm of so-called superset learning. In an exhaustive empirical evaluation, we compare our methodology to state-of-the-art self-supervision approaches, showing competitive to superior performance especially in low-label scenarios incorporating a high degree of uncertainty.
| null |
Spot the Difference: Detection of Topological Changes via Geometric Alignment
|
https://papers.nips.cc/paper_files/paper/2021/hash/7867d6557b82ed3b5d61e6591a2a2fd3-Abstract.html
|
Per Steffen Czolbe, Aasa Feragen, Oswin Krause
|
https://papers.nips.cc/paper_files/paper/2021/hash/7867d6557b82ed3b5d61e6591a2a2fd3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12725-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7867d6557b82ed3b5d61e6591a2a2fd3-Paper.pdf
|
https://openreview.net/forum?id=a-Lbgfy9RqV
|
https://papers.nips.cc/paper_files/paper/2021/file/7867d6557b82ed3b5d61e6591a2a2fd3-Supplemental.pdf
|
Geometric alignment appears in a variety of applications, ranging from domain adaptation, optimal transport, and normalizing flows in machine learning, to optical flow and learned augmentation in computer vision, and deformable registration within biomedical imaging. A recurring challenge is the alignment of domains whose topology is not the same; a problem that is routinely ignored, potentially introducing bias in downstream analysis. As a first step towards solving such alignment problems, we propose an unsupervised algorithm for the detection of changes in image topology. The model is based on a conditional variational auto-encoder and detects topological changes between two images during the registration step. We account for both topological changes in the image under spatial variation and unexpected transformations. Our approach is validated on two tasks and datasets: detection of topological changes in microscopy images of cells, and unsupervised anomaly detection in brain imaging.
| null |
Rethinking the Variational Interpretation of Accelerated Optimization Methods
|
https://papers.nips.cc/paper_files/paper/2021/hash/788d986905533aba051261497ecffcbb-Abstract.html
|
Peiyuan Zhang, Antonio Orvieto, Hadi Daneshmand
|
https://papers.nips.cc/paper_files/paper/2021/hash/788d986905533aba051261497ecffcbb-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12726-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/788d986905533aba051261497ecffcbb-Paper.pdf
|
https://openreview.net/forum?id=Si3SSyPDiJd
|
https://papers.nips.cc/paper_files/paper/2021/file/788d986905533aba051261497ecffcbb-Supplemental.zip
|
The continuous-time model of Nesterov's momentum provides a thought-provoking perspective for understanding the nature of the acceleration phenomenon in convex optimization. One of the main ideas in this line of research comes from the field of classical mechanics and proposes to link Nesterov's trajectory to the solution of a set of Euler-Lagrange equations relative to the so-called Bregman Lagrangian. In recent years, this approach led to the discovery of many new (stochastic) accelerated algorithms and provided a solid theoretical foundation for the design of structure-preserving accelerated methods. In this work, we revisit this idea and provide an in-depth analysis of the action relative to the Bregman Lagrangian from the point of view of calculus of variations. Our main finding is that, while Nesterov's method is a stationary point for the action, it is often not a minimizer but instead a saddle point for this functional in the space of differentiable curves. This finding challenges the main intuition behind the variational interpretation of Nesterov's method and provides additional insights into the intriguing geometry of accelerated paths.
| null |
Linear and Kernel Classification in the Streaming Model: Improved Bounds for Heavy Hitters
|
https://papers.nips.cc/paper_files/paper/2021/hash/78ccad7da4c2fc2646d1848e965794c5-Abstract.html
|
Arvind Mahankali, David Woodruff
|
https://papers.nips.cc/paper_files/paper/2021/hash/78ccad7da4c2fc2646d1848e965794c5-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12727-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/78ccad7da4c2fc2646d1848e965794c5-Paper.pdf
|
https://openreview.net/forum?id=_pmQOVi3gHx
|
https://papers.nips.cc/paper_files/paper/2021/file/78ccad7da4c2fc2646d1848e965794c5-Supplemental.zip
|
We study linear and kernel classification in the streaming model. For linear classification, we improve upon the algorithm of (Tai et al., 2018), which solves the $\ell_1$ point query problem on the optimal weight vector $w_* \in \mathbb{R}^d$ in sublinear space. We first give an algorithm solving the more difficult $\ell_2$ point query problem on $w_*$, also in sublinear space. We also give an algorithm which solves the $\ell_2$ heavy hitter problem on $w_*$, in sublinear space and running time. Finally, we give an algorithm which can $\textit{deterministically}$ solve the $\ell_1$ point query problem on $w_*$, with sublinear space improving upon that of (Tai et al., 2018). For kernel classification, if $w_* \in \mathbb{R}^{d^p}$ is the optimal weight vector classifying points in the stream according to their $p^{th}$-degree polynomial kernel, then we give an algorithm solving the $\ell_2$ point query problem on $w_*$ in $\text{poly}(\frac{p \log d}{\varepsilon})$ space, and an algorithm solving the $\ell_2$ heavy hitter problem in $\text{poly}(\frac{p \log d}{\varepsilon})$ space and running time. Note that our space and running time are polynomial in $p$, making our algorithms well-suited to high-degree polynomial kernels and the Gaussian kernel (approximated by the polynomial kernel of degree $p = \Theta(\log T)$).
| null |
A PAC-Bayes Analysis of Adversarial Robustness
|
https://papers.nips.cc/paper_files/paper/2021/hash/78e8dffe65a2898eef68a33b8db35b78-Abstract.html
|
Paul Viallard, Eric Guillaume VIDOT, Amaury Habrard, Emilie Morvant
|
https://papers.nips.cc/paper_files/paper/2021/hash/78e8dffe65a2898eef68a33b8db35b78-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12728-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/78e8dffe65a2898eef68a33b8db35b78-Paper.pdf
|
https://openreview.net/forum?id=sUBSPowU3L5
| null |
We propose the first general PAC-Bayesian generalization bounds for adversarial robustness, which estimate, at test time, how invariant a model will be to imperceptible perturbations in the input. Instead of deriving a worst-case analysis of the risk of a hypothesis over all the possible perturbations, we leverage the PAC-Bayesian framework to bound the averaged risk on the perturbations for majority votes (over the whole class of hypotheses). Our theoretically founded analysis has the advantage of providing general bounds (i) that are valid for any kind of attack (i.e., adversarial attacks), (ii) that are tight thanks to the PAC-Bayesian framework, and (iii) that can be directly minimized during the learning phase to obtain a model robust to different attacks at test time.
| null |
SE(3)-equivariant prediction of molecular wavefunctions and electronic densities
|
https://papers.nips.cc/paper_files/paper/2021/hash/78f1893678afbeaa90b1fa01b9cfb860-Abstract.html
|
Oliver Unke, Mihail Bogojeski, Michael Gastegger, Mario Geiger, Tess Smidt, Klaus-Robert Müller
|
https://papers.nips.cc/paper_files/paper/2021/hash/78f1893678afbeaa90b1fa01b9cfb860-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12729-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/78f1893678afbeaa90b1fa01b9cfb860-Paper.pdf
|
https://openreview.net/forum?id=auGY2UQfhSu
|
https://papers.nips.cc/paper_files/paper/2021/file/78f1893678afbeaa90b1fa01b9cfb860-Supplemental.pdf
|
Machine learning has enabled the prediction of quantum chemical properties with high accuracy and efficiency, making it possible to bypass computationally costly ab initio calculations. Instead of training on a fixed set of properties, more recent approaches attempt to learn the electronic wavefunction (or density) as a central quantity of atomistic systems, from which all other observables can be derived. This is complicated by the fact that wavefunctions transform non-trivially under molecular rotations, which makes them a challenging prediction target. To solve this issue, we introduce general SE(3)-equivariant operations and building blocks for constructing deep learning architectures for geometric point cloud data and apply them to reconstruct wavefunctions of atomistic systems with unprecedented accuracy. Our model achieves speedups of over three orders of magnitude compared to ab initio methods and reduces prediction errors by up to two orders of magnitude compared to the previous state-of-the-art. This accuracy makes it possible to derive properties such as energies and forces directly from the wavefunction in an end-to-end manner. We demonstrate the potential of our approach in a transfer learning application, where a model trained on low accuracy reference wavefunctions implicitly learns to correct for electronic many-body interactions from observables computed at a higher level of theory. Such machine-learned wavefunction surrogates pave the way towards novel semi-empirical methods, offering resolution at an electronic level while drastically decreasing computational cost. Additionally, the predicted wavefunctions can serve as an initial guess in conventional ab initio methods, decreasing the number of iterations required to arrive at a converged solution, thus leading to significant speedups without any loss of accuracy or robustness. While we focus on physics applications in this contribution, the proposed equivariant framework for deep learning on point clouds is promising beyond them as well, for example in computer vision or graphics.
| null |
Modified Frank Wolfe in Probability Space
|
https://papers.nips.cc/paper_files/paper/2021/hash/79121bb953a3bd47c076f20234bafd2e-Abstract.html
|
Carson Kent, Jiajin Li, Jose Blanchet, Peter W Glynn
|
https://papers.nips.cc/paper_files/paper/2021/hash/79121bb953a3bd47c076f20234bafd2e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12730-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/79121bb953a3bd47c076f20234bafd2e-Paper.pdf
|
https://openreview.net/forum?id=YzasumDKCWV
|
https://papers.nips.cc/paper_files/paper/2021/file/79121bb953a3bd47c076f20234bafd2e-Supplemental.pdf
|
We propose a novel Frank-Wolfe (FW) procedure for the optimization of infinite-dimensional functionals of probability measures - a task which arises naturally in a wide range of areas including statistical learning (e.g. variational inference) and artificial intelligence (e.g. generative adversarial networks). Our FW procedure takes advantage of Wasserstein gradient flows and strong duality results recently developed in Distributionally Robust Optimization so that gradient steps (in the Wasserstein space) can be efficiently computed using finite-dimensional, convex optimization methods. We show how to choose the step sizes in order to guarantee exponentially fast iteration convergence, under mild assumptions on the functional to optimize. We apply our algorithm to a range of functionals arising from applications in nonparametric estimation.
| null |
Bayesian Optimization of Function Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/792c7b5aae4a79e78aaeda80516ae2ac-Abstract.html
|
Raul Astudillo, Peter Frazier
|
https://papers.nips.cc/paper_files/paper/2021/hash/792c7b5aae4a79e78aaeda80516ae2ac-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12731-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/792c7b5aae4a79e78aaeda80516ae2ac-Paper.pdf
|
https://openreview.net/forum?id=HTk8q08-zI
|
https://papers.nips.cc/paper_files/paper/2021/file/792c7b5aae4a79e78aaeda80516ae2ac-Supplemental.pdf
|
We consider Bayesian optimization of the output of a network of functions, where each function takes as input the output of its parent nodes, and where the network takes significant time to evaluate. Such problems arise, for example, in reinforcement learning, engineering design, and manufacturing. While the standard Bayesian optimization approach observes only the final output, our approach delivers greater query efficiency by leveraging information that the former ignores: intermediate output within the network. This is achieved by modeling the nodes of the network using Gaussian processes and choosing the points to evaluate using, as our acquisition function, the expected improvement computed with respect to the implied posterior on the objective. Although the non-Gaussian nature of this posterior prevents computing our acquisition function in closed form, we show that it can be efficiently maximized via sample average approximation. In addition, we prove that our method is asymptotically consistent, meaning that it finds a globally optimal solution as the number of evaluations grows to infinity, thus generalizing previously known convergence results for the expected improvement. Notably, this holds even though our method might not evaluate the domain densely, instead leveraging problem structure to leave regions unexplored. Finally, we show that our approach dramatically outperforms standard Bayesian optimization methods in several synthetic and real-world problems.
| null |
Look at What I’m Doing: Self-Supervised Spatial Grounding of Narrations in Instructional Videos
|
https://papers.nips.cc/paper_files/paper/2021/hash/792dd774336314c3c27a04bb260cf2cf-Abstract.html
|
Reuben Tan, Bryan Plummer, Kate Saenko, Hailin Jin, Bryan Russell
|
https://papers.nips.cc/paper_files/paper/2021/hash/792dd774336314c3c27a04bb260cf2cf-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12732-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/792dd774336314c3c27a04bb260cf2cf-Paper.pdf
|
https://openreview.net/forum?id=tMFTT3BDEK9
|
https://papers.nips.cc/paper_files/paper/2021/file/792dd774336314c3c27a04bb260cf2cf-Supplemental.pdf
|
We introduce the task of spatially localizing narrated interactions in videos. Key to our approach is the ability to learn to spatially localize interactions with self-supervision on a large corpus of videos with accompanying transcribed narrations. To achieve this goal, we propose a multilayer cross-modal attention network that enables effective optimization of a contrastive loss during training. We introduce a divided strategy that alternates between computing inter- and intra-modal attention across the visual and natural language modalities, which allows effective training via directly contrasting the two modalities' representations. We demonstrate the effectiveness of our approach by self-training on the HowTo100M instructional video dataset and evaluating on a newly collected dataset of localized described interactions in the YouCook2 dataset. We show that our approach outperforms alternative baselines, including shallow co-attention and full cross-modal attention. We also apply our approach to grounding phrases in images with weak supervision on Flickr30K and show that stacking multiple attention layers is effective and, when combined with a word-to-region loss, achieves state of the art on recall-at-one and pointing hand accuracies.
| null |
RETRIEVE: Coreset Selection for Efficient and Robust Semi-Supervised Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/793bc52a941b3951dfdb85fb04f9fd06-Abstract.html
|
Krishnateja Killamsetty, Xujiang Zhao, Feng Chen, Rishabh Iyer
|
https://papers.nips.cc/paper_files/paper/2021/hash/793bc52a941b3951dfdb85fb04f9fd06-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12733-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/793bc52a941b3951dfdb85fb04f9fd06-Paper.pdf
|
https://openreview.net/forum?id=jSz59N8NvUP
|
https://papers.nips.cc/paper_files/paper/2021/file/793bc52a941b3951dfdb85fb04f9fd06-Supplemental.pdf
|
Semi-supervised learning (SSL) algorithms have had great success in recent years in limited labeled data regimes. However, the current state-of-the-art SSL algorithms are computationally expensive and entail significant compute time and energy requirements. This can prove to be a huge limitation for many smaller companies and academic groups. Our main insight is that training on a subset of unlabeled data instead of the entire unlabeled data enables the current SSL algorithms to converge faster, significantly reducing computational costs. In this work, we propose RETRIEVE, a coreset selection framework for efficient and robust semi-supervised learning. RETRIEVE selects the coreset by solving a mixed discrete-continuous bi-level optimization problem such that the selected coreset minimizes the labeled set loss. We use a one-step gradient approximation and show that the discrete optimization problem is approximately submodular, enabling simple greedy algorithms to obtain the coreset. We empirically demonstrate on several real-world datasets that existing SSL algorithms like VAT, Mean-Teacher, and FixMatch, when used with RETRIEVE, achieve a) faster training times and b) better performance when the unlabeled data contains Out-of-Distribution (OOD) data or class imbalance. More specifically, we show that with minimal accuracy degradation, RETRIEVE achieves a speedup of around $3\times$ in the traditional SSL setting and achieves a speedup of $5\times$ compared to state-of-the-art (SOTA) robust SSL algorithms in the case of imbalance and OOD data. RETRIEVE is available as a part of the CORDS toolkit: https://github.com/decile-team/cords.
| null |
Collaborating with Humans without Human Data
|
https://papers.nips.cc/paper_files/paper/2021/hash/797134c3e42371bb4979a462eb2f042a-Abstract.html
|
DJ Strouse, Kevin McKee, Matt Botvinick, Edward Hughes, Richard Everett
|
https://papers.nips.cc/paper_files/paper/2021/hash/797134c3e42371bb4979a462eb2f042a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12734-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/797134c3e42371bb4979a462eb2f042a-Paper.pdf
|
https://openreview.net/forum?id=1Kof-nkmQB8
| null |
Collaborating with humans requires rapidly adapting to their individual strengths, weaknesses, and preferences. Unfortunately, most standard multi-agent reinforcement learning techniques, such as self-play (SP) or population play (PP), produce agents that overfit to their training partners and do not generalize well to humans. Alternatively, researchers can collect human data, train a human model using behavioral cloning, and then use that model to train "human-aware" agents ("behavioral cloning play", or BCP). While such an approach can improve the generalization of agents to new human co-players, it involves the onerous and expensive step of collecting large amounts of human data first. Here, we study the problem of how to train agents that collaborate well with human partners without using human data. We argue that the crux of the problem is to produce a diverse set of training partners. Drawing inspiration from successful multi-agent approaches in competitive domains, we find that a surprisingly simple approach is highly effective. We train our agent partner as the best response to a population of self-play agents and their past checkpoints taken throughout training, a method we call Fictitious Co-Play (FCP). Our experiments focus on a two-player collaborative cooking simulator that has recently been proposed as a challenge problem for coordination with humans. We find that FCP agents score significantly higher than SP, PP, and BCP when paired with novel agent and human partners. Furthermore, humans also report a strong subjective preference for partnering with FCP agents over all baselines.
| null |
Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State
|
https://papers.nips.cc/paper_files/paper/2021/hash/79a49b3e3762632813f9e35f4ba53d6c-Abstract.html
|
Mingqing Xiao, Qingyan Meng, Zongpeng Zhang, Yisen Wang, Zhouchen Lin
|
https://papers.nips.cc/paper_files/paper/2021/hash/79a49b3e3762632813f9e35f4ba53d6c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12735-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/79a49b3e3762632813f9e35f4ba53d6c-Paper.pdf
|
https://openreview.net/forum?id=f2Llmm_z5Sm
|
https://papers.nips.cc/paper_files/paper/2021/file/79a49b3e3762632813f9e35f4ba53d6c-Supplemental.pdf
|
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware. However, the supervised training of SNNs remains a hard problem due to the discontinuity of the spiking neuron model. Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks, and use surrogate derivatives or compute gradients with respect to the spiking time to deal with the problem. These approaches either accumulate approximation errors or only propagate information limitedly through existing spikes, and usually require information propagation along time steps with large memory costs and biological implausibility. In this work, we consider feedback spiking neural networks, which are more brain-like, and propose a novel training method that does not rely on the exact reverse of the forward computation. First, we show that the average firing rates of SNNs with feedback connections would gradually evolve to an equilibrium state along time, which follows a fixed-point equation. Then by viewing the forward computation of feedback SNNs as a black-box solver for this equation, and leveraging the implicit differentiation on the equation, we can compute the gradient for parameters without considering the exact forward procedure. In this way, the forward and backward procedures are decoupled and therefore the problem of non-differentiable spiking functions is avoided. We also briefly discuss the biological plausibility of implicit differentiation, which only requires computing another equilibrium. Extensive experiments on MNIST, Fashion-MNIST, N-MNIST, CIFAR-10, and CIFAR-100 demonstrate the superior performance of our method for feedback models with fewer neurons and parameters in a small number of time steps. Our code is available at https://github.com/pkuxmq/IDE-FSNN.
| null |
Online Selective Classification with Limited Feedback
|
https://papers.nips.cc/paper_files/paper/2021/hash/79b6245ff93841eb8c120cec9bf8be14-Abstract.html
|
Aditya Gangrade, Anil Kag, Ashok Cutkosky, Venkatesh Saligrama
|
https://papers.nips.cc/paper_files/paper/2021/hash/79b6245ff93841eb8c120cec9bf8be14-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12736-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/79b6245ff93841eb8c120cec9bf8be14-Paper.pdf
|
https://openreview.net/forum?id=cCQAzuT5q4
|
https://papers.nips.cc/paper_files/paper/2021/file/79b6245ff93841eb8c120cec9bf8be14-Supplemental.pdf
|
Motivated by applications to resource-limited and safety-critical domains, we study selective classification in the online learning model, wherein a predictor may abstain from classifying an instance. For example, this may model an adaptive decision to invoke more resources on this instance. Two salient aspects of the setting we consider are that the data may be non-realisable, due to which abstention may be a valid long-term action, and that feedback is only received when the learner abstains, which models the fact that reliable labels are only available when the resource-intensive processing is invoked. Within this framework, we explore strategies that make few mistakes, while not abstaining too many times more than the best-in-hindsight error-free classifier from a given class, that is, the one that makes no mistakes while abstaining the fewest number of times. We construct simple versioning-based schemes for any $\mu \in (0,1]$ that make at most $T^\mu$ mistakes while incurring $\tilde{O}(T^{1-\mu})$ excess abstention against adaptive adversaries. We further show that this dependence on $T$ is tight, and provide illustrative experiments on realistic datasets.
| null |
Controlled Text Generation as Continuous Optimization with Multiple Constraints
|
https://papers.nips.cc/paper_files/paper/2021/hash/79ec2a4246feb2126ecf43c4a4418002-Abstract.html
|
Sachin Kumar, Eric Malmi, Aliaksei Severyn, Yulia Tsvetkov
|
https://papers.nips.cc/paper_files/paper/2021/hash/79ec2a4246feb2126ecf43c4a4418002-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12737-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/79ec2a4246feb2126ecf43c4a4418002-Paper.pdf
|
https://openreview.net/forum?id=kTy7bbm-4I4
|
https://papers.nips.cc/paper_files/paper/2021/file/79ec2a4246feb2126ecf43c4a4418002-Supplemental.pdf
|
As large-scale language model pretraining pushes the state-of-the-art in text generation, recent work has turned to controlling attributes of the text such models generate. While modifying the pretrained models via fine-tuning remains the popular approach, it incurs a significant computational cost and can be infeasible due to a lack of appropriate data. As an alternative, we propose \textsc{MuCoCO}---a flexible and modular algorithm for controllable inference from pretrained models. We formulate the decoding process as an optimization problem that allows for multiple attributes we aim to control to be easily incorporated as differentiable constraints. By relaxing this discrete optimization to a continuous one, we make use of Lagrangian multipliers and gradient-descent-based techniques to generate the desired text. We evaluate our approach on controllable machine translation and style transfer with multiple sentence-level attributes and observe significant improvements over baselines.
| null |
S$^3$: Sign-Sparse-Shift Reparametrization for Effective Training of Low-bit Shift Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/7a1d9028a78f418cb8f01909a348d9b2-Abstract.html
|
Xinlin Li, Bang Liu, Yaoliang Yu, Wulong Liu, Chunjing XU, Vahid Partovi Nia
|
https://papers.nips.cc/paper_files/paper/2021/hash/7a1d9028a78f418cb8f01909a348d9b2-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12738-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7a1d9028a78f418cb8f01909a348d9b2-Paper.pdf
|
https://openreview.net/forum?id=kQDPhAZHYi
|
https://papers.nips.cc/paper_files/paper/2021/file/7a1d9028a78f418cb8f01909a348d9b2-Supplemental.pdf
|
Shift neural networks reduce computation complexity by removing expensive multiplication operations and quantizing continuous weights into low-bit discrete values, which makes them fast and energy-efficient compared to conventional neural networks. However, existing shift networks are sensitive to the weight initialization and yield degraded performance caused by the vanishing gradient and weight sign freezing problems. To address these issues, we propose S$^3$ re-parameterization, a novel technique for training low-bit shift networks. Our method decomposes a discrete parameter in a sign-sparse-shift 3-fold manner. This way, it efficiently learns a low-bit network with weight dynamics similar to full-precision networks and insensitive to weight initialization. Our proposed training method pushes the boundaries of shift neural networks and shows that 3-bit shift networks compete with their full-precision counterparts in terms of top-1 accuracy on ImageNet.
| null |
Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions
|
https://papers.nips.cc/paper_files/paper/2021/hash/7a430339c10c642c4b2251756fd1b484-Abstract.html
|
Mathias Niepert, Pasquale Minervini, Luca Franceschi
|
https://papers.nips.cc/paper_files/paper/2021/hash/7a430339c10c642c4b2251756fd1b484-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12739-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7a430339c10c642c4b2251756fd1b484-Paper.pdf
|
https://openreview.net/forum?id=lR4aaWCQgB
|
https://papers.nips.cc/paper_files/paper/2021/file/7a430339c10c642c4b2251756fd1b484-Supplemental.pdf
|
Combining discrete probability distributions and combinatorial optimization problems with neural network components has numerous applications but poses several challenges. We propose Implicit Maximum Likelihood Estimation (I-MLE), a framework for end-to-end learning of models combining discrete exponential family distributions and differentiable neural components. I-MLE is widely applicable as it only requires the ability to compute the most probable states and does not rely on smooth relaxations. The framework encompasses several approaches such as perturbation-based implicit differentiation and recent methods to differentiate through black-box combinatorial solvers. We introduce a novel class of noise distributions for approximating marginals via perturb-and-MAP. Moreover, we show that I-MLE simplifies to maximum likelihood estimation when used in some recently studied learning settings that involve combinatorial solvers. Experiments on several datasets suggest that I-MLE is competitive with and often outperforms existing approaches which rely on problem-specific relaxations.
| null |
Scaling up Continuous-Time Markov Chains Helps Resolve Underspecification
|
https://papers.nips.cc/paper_files/paper/2021/hash/7a50d83a1e70e9d96c3357438aed7a44-Abstract.html
|
Alkis Gotovos, Rebekka Burkholz, John Quackenbush, Stefanie Jegelka
|
https://papers.nips.cc/paper_files/paper/2021/hash/7a50d83a1e70e9d96c3357438aed7a44-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12740-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7a50d83a1e70e9d96c3357438aed7a44-Paper.pdf
|
https://openreview.net/forum?id=PnY8rTJGOuU
|
https://papers.nips.cc/paper_files/paper/2021/file/7a50d83a1e70e9d96c3357438aed7a44-Supplemental.pdf
|
Modeling the time evolution of discrete sets of items (e.g., genetic mutations) is a fundamental problem in many biomedical applications. We approach this problem through the lens of continuous-time Markov chains, and show that the resulting learning task is generally underspecified in the usual setting of cross-sectional data. We explore a perhaps surprising remedy: including a number of additional independent items can help determine time order, and hence resolve underspecification. This is in sharp contrast to the common practice of limiting the analysis to a small subset of relevant items, which is followed largely due to poor scaling of existing methods. To put our theoretical insight into practice, we develop an approximate likelihood maximization method for learning continuous-time Markov chains, which can scale to hundreds of items and is orders of magnitude faster than previous methods. We demonstrate the effectiveness of our approach on synthetic and real cancer data.
| null |
Do Neural Optimal Transport Solvers Work? A Continuous Wasserstein-2 Benchmark
|
https://papers.nips.cc/paper_files/paper/2021/hash/7a6a6127ff85640ec69691fb0f7cb1a2-Abstract.html
|
Alexander Korotin, Lingxiao Li, Aude Genevay, Justin M. Solomon, Alexander Filippov, Evgeny Burnaev
|
https://papers.nips.cc/paper_files/paper/2021/hash/7a6a6127ff85640ec69691fb0f7cb1a2-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12741-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7a6a6127ff85640ec69691fb0f7cb1a2-Paper.pdf
|
https://openreview.net/forum?id=CI0T_3l-n1
|
https://papers.nips.cc/paper_files/paper/2021/file/7a6a6127ff85640ec69691fb0f7cb1a2-Supplemental.pdf
|
Despite the recent popularity of neural network-based solvers for optimal transport (OT), there is no standard quantitative way to evaluate their performance. In this paper, we address this issue for quadratic-cost transport---specifically, computation of the Wasserstein-2 distance, a commonly-used formulation of optimal transport in machine learning. To overcome the challenge of computing ground truth transport maps between continuous measures needed to assess these solvers, we use input-convex neural networks (ICNN) to construct pairs of measures whose ground truth OT maps can be obtained analytically. This strategy yields pairs of continuous benchmark measures in high-dimensional spaces such as spaces of images. We thoroughly evaluate existing optimal transport solvers using these benchmark measures. Even though these solvers perform well in downstream tasks, many do not faithfully recover optimal transport maps. To investigate the cause of this discrepancy, we further test the solvers in a setting of image generation. Our study reveals crucial limitations of existing solvers and shows that increased OT accuracy does not necessarily correlate to better results downstream.
| null |
Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients
|
https://papers.nips.cc/paper_files/paper/2021/hash/7a6bda9ad6ffdac035c752743b7e9d0e-Abstract.html
|
Aritra Mitra, Rayana Jaafar, George J. Pappas, Hamed Hassani
|
https://papers.nips.cc/paper_files/paper/2021/hash/7a6bda9ad6ffdac035c752743b7e9d0e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12742-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7a6bda9ad6ffdac035c752743b7e9d0e-Paper.pdf
|
https://openreview.net/forum?id=h7FqQ6hCK18
|
https://papers.nips.cc/paper_files/paper/2021/file/7a6bda9ad6ffdac035c752743b7e9d0e-Supplemental.pdf
|
We consider a standard federated learning (FL) setup where a group of clients periodically coordinate with a central server to train a statistical model. We develop a general algorithmic framework called FedLin to tackle some of the key challenges intrinsic to FL, namely objective heterogeneity, systems heterogeneity, and infrequent and imprecise communication. Our framework is motivated by the observation that under these challenges, various existing FL algorithms suffer from a fundamental speed-accuracy conflict: they either guarantee linear convergence but to an incorrect point, or convergence to the global minimum but at a sub-linear rate, i.e., fast convergence comes at the expense of accuracy. In contrast, when the clients' local loss functions are smooth and strongly convex, we show that FedLin guarantees linear convergence to the global minimum, despite arbitrary objective and systems heterogeneity. We then establish matching upper and lower bounds on the convergence rate of FedLin that highlight the effects of infrequent, periodic communication. Finally, we show that FedLin preserves linear convergence rates under aggressive gradient sparsification, and quantify the effect of the compression level on the convergence rate. Notably, our work is the first to provide tight linear convergence rate guarantees, and constitutes the first comprehensive analysis of gradient sparsification in FL.
| null |
On the Convergence of Prior-Guided Zeroth-Order Optimization Algorithms
|
https://papers.nips.cc/paper_files/paper/2021/hash/7aaece81f2d731fbf8ee0ad3521002ac-Abstract.html
|
Shuyu Cheng, Guoqiang Wu, Jun Zhu
|
https://papers.nips.cc/paper_files/paper/2021/hash/7aaece81f2d731fbf8ee0ad3521002ac-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12743-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7aaece81f2d731fbf8ee0ad3521002ac-Paper.pdf
|
https://openreview.net/forum?id=anxHcl9_sE
|
https://papers.nips.cc/paper_files/paper/2021/file/7aaece81f2d731fbf8ee0ad3521002ac-Supplemental.pdf
|
Zeroth-order (ZO) optimization is widely used to handle challenging tasks, such as query-based black-box adversarial attacks and reinforcement learning. Various attempts have been made to integrate prior information into the gradient estimation procedure based on finite differences, with promising empirical results. However, their convergence properties are not well understood. This paper attempts to fill this gap by analyzing the convergence of prior-guided ZO algorithms under a greedy descent framework with various gradient estimators. We provide a convergence guarantee for the prior-guided random gradient-free (PRGF) algorithms. Moreover, to further accelerate over greedy descent methods, we present a new accelerated random search (ARS) algorithm that incorporates prior information, together with a convergence analysis. Finally, our theoretical results are confirmed by experiments on several numerical benchmarks as well as adversarial attacks.
| null |
Revisit Multimodal Meta-Learning through the Lens of Multi-Task Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/7b3403f79b478699224bb449509694cf-Abstract.html
|
Milad Abdollahzadeh, Touba Malekzadeh, Ngai-Man (Man) Cheung
|
https://papers.nips.cc/paper_files/paper/2021/hash/7b3403f79b478699224bb449509694cf-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12744-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7b3403f79b478699224bb449509694cf-Paper.pdf
|
https://openreview.net/forum?id=V5prUHOrOP4
|
https://papers.nips.cc/paper_files/paper/2021/file/7b3403f79b478699224bb449509694cf-Supplemental.pdf
|
Multimodal meta-learning is a recent problem that extends conventional few-shot meta-learning by generalizing its setup to diverse multimodal task distributions. This setup makes a step towards mimicking how humans make use of a diverse set of prior skills to learn new skills. Previous work has achieved encouraging performance. In particular, in spite of the diversity of the multimodal tasks, previous work claims that a single meta-learner trained on a multimodal distribution can sometimes outperform multiple specialized meta-learners trained on individual unimodal distributions. The improvement is attributed to knowledge transfer between different modes of task distributions. However, there is no deep investigation to verify and understand the knowledge transfer between multimodal tasks. Our work makes two contributions to multimodal meta-learning. First, we propose a method to quantify knowledge transfer between tasks of different modes at a micro-level. Our quantitative, task-level analysis is inspired by the recent transference idea from multi-task learning. Second, inspired by hard parameter sharing in multi-task learning and a new interpretation of related work, we propose a new multimodal meta-learner that outperforms existing work by considerable margins. While the major focus is on multimodal meta-learning, our work also attempts to shed light on task interaction in conventional meta-learning. The code for this project is available at https://miladabd.github.io/KML.
| null |
Dynamic Sasvi: Strong Safe Screening for Norm-Regularized Least Squares
|
https://papers.nips.cc/paper_files/paper/2021/hash/7b5b23f4aadf9513306bcd59afb6e4c9-Abstract.html
|
Hiroaki Yamada, Makoto Yamada
|
https://papers.nips.cc/paper_files/paper/2021/hash/7b5b23f4aadf9513306bcd59afb6e4c9-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12745-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7b5b23f4aadf9513306bcd59afb6e4c9-Paper.pdf
|
https://openreview.net/forum?id=GlbCMt4vSFv
|
https://papers.nips.cc/paper_files/paper/2021/file/7b5b23f4aadf9513306bcd59afb6e4c9-Supplemental.pdf
|
A recently introduced technique, called "safe screening," for a sparse optimization problem allows us to identify irrelevant variables in the early stages of optimization. In this paper, we first propose a flexible framework for safe screening based on the Fenchel--Rockafellar duality and then derive a strong safe screening rule for norm-regularized least squares using the proposed framework. We refer to the proposed screening rule for norm-regularized least squares as "dynamic Sasvi" because it can be interpreted as a generalization of Sasvi. Unlike the original Sasvi, it does not require the exact solution of a more strongly regularized problem; hence, it works safely in practice. We show that our screening rule always eliminates more features compared with the existing state-of-the-art methods.
| null |
What Matters for Adversarial Imitation Learning?
|
https://papers.nips.cc/paper_files/paper/2021/hash/7b647a7d88f4d6319bf0d600d168dbeb-Abstract.html
|
Manu Orsini, Anton Raichuk, Leonard Hussenot, Damien Vincent, Robert Dadashi, Sertan Girgin, Matthieu Geist, Olivier Bachem, Olivier Pietquin, Marcin Andrychowicz
|
https://papers.nips.cc/paper_files/paper/2021/hash/7b647a7d88f4d6319bf0d600d168dbeb-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12746-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7b647a7d88f4d6319bf0d600d168dbeb-Paper.pdf
|
https://openreview.net/forum?id=-OrwaD3bG91
|
https://papers.nips.cc/paper_files/paper/2021/file/7b647a7d88f4d6319bf0d600d168dbeb-Supplemental.pdf
|
Adversarial imitation learning has become a popular framework for imitation in continuous control. Over the years, several variations of its components were proposed to enhance the performance of the learned policies as well as the sample complexity of the algorithm. In practice, these choices are rarely tested all together in rigorous empirical studies. It is therefore difficult to discuss and understand what choices, among the high-level algorithmic options as well as low-level implementation details, matter. To tackle this issue, we implement more than 50 of these choices in a generic adversarial imitation learning framework and investigate their impacts in a large-scale study (>500k trained agents) with both synthetic and human-generated demonstrations. We analyze the key results and highlight the most surprising findings.
| null |
Sequential Causal Imitation Learning with Unobserved Confounders
|
https://papers.nips.cc/paper_files/paper/2021/hash/7b670d553471ad0fd7491c75bad587ff-Abstract.html
|
Daniel Kumor, Junzhe Zhang, Elias Bareinboim
|
https://papers.nips.cc/paper_files/paper/2021/hash/7b670d553471ad0fd7491c75bad587ff-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12747-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7b670d553471ad0fd7491c75bad587ff-Paper.pdf
|
https://openreview.net/forum?id=o6-k168bBD8
| null |
"Monkey see monkey do" is an age-old adage, referring to naive imitation without a deep understanding of a system's underlying mechanics. Indeed, if a demonstrator has access to information unavailable to the imitator (monkey), such as a different set of sensors, then no matter how perfectly the imitator models its perceived environment (See), attempting to directly reproduce the demonstrator's behavior (Do) can lead to poor outcomes. Imitation learning in the presence of a mismatch between demonstrator and imitator has been studied in the literature under the rubric of causal imitation learning (Zhang et. al. 2020), but existing solutions are limited to single-stage decision-making. This paper investigates the problem of causal imitation learning in sequential settings, where the imitator must make multiple decisions per episode. We develop a graphical criterion that is both necessary and sufficient for determining the feasibility of causal imitation, providing conditions when an imitator can match a demonstrator's performance despite differing capabilities. Finally, we provide an efficient algorithm for determining imitability, and corroborate our theory with simulations.
| null |
Topic Modeling Revisited: A Document Graph-based Neural Network Perspective
|
https://papers.nips.cc/paper_files/paper/2021/hash/7b6982e584636e6a1cda934f1410299c-Abstract.html
|
Dazhong Shen, Chuan Qin, Chao Wang, Zheng Dong, Hengshu Zhu, Hui Xiong
|
https://papers.nips.cc/paper_files/paper/2021/hash/7b6982e584636e6a1cda934f1410299c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12748-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7b6982e584636e6a1cda934f1410299c-Paper.pdf
|
https://openreview.net/forum?id=yewqeLly5D8
|
https://papers.nips.cc/paper_files/paper/2021/file/7b6982e584636e6a1cda934f1410299c-Supplemental.pdf
|
Most topic modeling approaches are based on the bag-of-words assumption, where each word is required to be conditionally independent in the same document. As a result, both the generative story and the topic formulation completely ignore the semantic dependency among words, which is important for improving semantic comprehension and model interpretability. To this end, in this paper, we revisit the task of topic modeling by transforming each document into a directed graph with word dependency as edges between word nodes, and develop a novel approach, namely Graph Neural Topic Model (GNTM). Specifically, in GNTM, a well-defined probabilistic generative story is designed to model both the graph structure and word sets with multinomial distributions on the vocabulary and word dependency edge set as the topics. Meanwhile, a Neural Variational Inference (NVI) approach is proposed to learn our model with graph neural networks to encode the document graphs. Besides, we theoretically demonstrate that Latent Dirichlet Allocation (LDA) can be derived from GNTM as a special case with similar objective functions. Finally, extensive experiments on four benchmark datasets have clearly demonstrated the effectiveness and interpretability of GNTM compared with state-of-the-art baselines.
| null |
Hard-Attention for Scalable Image Classification
|
https://papers.nips.cc/paper_files/paper/2021/hash/7b7916dd2de56297aa29cccb2bbf48d4-Abstract.html
|
Athanasios Papadopoulos, Pawel Korus, Nasir Memon
|
https://papers.nips.cc/paper_files/paper/2021/hash/7b7916dd2de56297aa29cccb2bbf48d4-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12749-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7b7916dd2de56297aa29cccb2bbf48d4-Paper.pdf
|
https://openreview.net/forum?id=_6DawVPqyl
|
https://papers.nips.cc/paper_files/paper/2021/file/7b7916dd2de56297aa29cccb2bbf48d4-Supplemental.pdf
|
Can we leverage high-resolution information without the unsustainable quadratic complexity to input scale? We propose Traversal Network (TNet), a novel multi-scale hard-attention architecture, which traverses image scale-space in a top-down fashion, visiting only the most informative image regions along the way. TNet offers an adjustable trade-off between accuracy and complexity, by changing the number of attended image locations. We compare our model against hard-attention baselines on ImageNet, achieving higher accuracy with less resources (FLOPs, processing time and memory). We further test our model on fMoW dataset, where we process satellite images of size up to $896 \times 896$ px, getting up to $2.5$x faster processing compared to baselines operating on the same resolution, while achieving higher accuracy as well. TNet is modular, meaning that most classification models could be adopted as its backbone for feature extraction, making the reported performance gains orthogonal to benefits offered by existing optimized deep models. Finally, hard-attention guarantees a degree of interpretability to our model's predictions, without any extra cost beyond inference.
| null |
Fast Routing under Uncertainty: Adaptive Learning in Congestion Games via Exponential Weights
|
https://papers.nips.cc/paper_files/paper/2021/hash/7b86f36d139d8581d4b5a4f155ba431c-Abstract.html
|
Dong Quan Vu, Kimon Antonakopoulos, Panayotis Mertikopoulos
|
https://papers.nips.cc/paper_files/paper/2021/hash/7b86f36d139d8581d4b5a4f155ba431c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12750-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7b86f36d139d8581d4b5a4f155ba431c-Paper.pdf
|
https://openreview.net/forum?id=njIekVo3wLP
|
https://papers.nips.cc/paper_files/paper/2021/file/7b86f36d139d8581d4b5a4f155ba431c-Supplemental.pdf
|
We examine an adaptive learning framework for nonatomic congestion games where the players' cost functions may be subject to exogenous fluctuations (e.g., due to disturbances in the network, variations in the traffic going through a link). In this setting, the popular multiplicative/exponential weights algorithm enjoys an $\mathcal{O}(1/\sqrt{T})$ equilibrium convergence rate; however, this rate is suboptimal in static environments---i.e., when the network is not subject to randomness. In this static regime, accelerated algorithms achieve an $\mathcal{O}(1/T^{2})$ convergence speed, but they fail to converge altogether in stochastic problems. To fill this gap, we propose a novel, adaptive exponential weights method---dubbed AdaWeight---that seamlessly interpolates between the $\mathcal{O}(1/T^{2})$ and $\mathcal{O}(1/\sqrt{T})$ rates in the static and stochastic regimes respectively. Importantly, this "best-of-both-worlds" guarantee does not require any prior knowledge of the problem's parameters or tuning by the optimizer; in addition, the method's convergence speed depends subquadratically on the size of the network (number of vertices and edges), so it scales gracefully to large, real-life urban networks.
| null |
Profiling Pareto Front With Multi-Objective Stein Variational Gradient Descent
|
https://papers.nips.cc/paper_files/paper/2021/hash/7bb16972da003e87724f048d76b7e0e1-Abstract.html
|
Xingchao Liu, Xin Tong, Qiang Liu
|
https://papers.nips.cc/paper_files/paper/2021/hash/7bb16972da003e87724f048d76b7e0e1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12751-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7bb16972da003e87724f048d76b7e0e1-Paper.pdf
|
https://openreview.net/forum?id=S2-j0ZegyrE
|
https://papers.nips.cc/paper_files/paper/2021/file/7bb16972da003e87724f048d76b7e0e1-Supplemental.pdf
|
Finding diverse and representative Pareto solutions from the Pareto front is a key challenge in multi-objective optimization (MOO). In this work, we propose a novel gradient-based algorithm for profiling Pareto front by using Stein variational gradient descent (SVGD). We also provide a counterpart of our method based on Langevin dynamics. Our methods iteratively update a set of points in a parallel fashion to push them towards the Pareto front using multiple gradient descent, while encouraging the diversity between the particles by using the repulsive force mechanism in SVGD, or diffusion noise in Langevin dynamics. Compared with existing gradient-based methods that require predefined preference functions, our method can work efficiently in high dimensional problems, and can obtain more diverse solutions evenly distributed in the Pareto front. Moreover, our methods are theoretically guaranteed to converge to the Pareto front. We demonstrate the effectiveness of our method, especially the SVGD algorithm, through extensive experiments, showing its superiority over existing gradient-based algorithms.
| null |
MAP Propagation Algorithm: Faster Learning with a Team of Reinforcement Learning Agents
|
https://papers.nips.cc/paper_files/paper/2021/hash/7c05147f3029c97ce26c0cb0b2469fca-Abstract.html
|
Stephen Chung
|
https://papers.nips.cc/paper_files/paper/2021/hash/7c05147f3029c97ce26c0cb0b2469fca-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12752-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7c05147f3029c97ce26c0cb0b2469fca-Paper.pdf
|
https://openreview.net/forum?id=utt-q6jW5_w
|
https://papers.nips.cc/paper_files/paper/2021/file/7c05147f3029c97ce26c0cb0b2469fca-Supplemental.pdf
|
Nearly all state-of-the-art deep learning algorithms rely on error backpropagation, which is generally regarded as biologically implausible. An alternative way of training an artificial neural network is to treat each unit in the network as a reinforcement learning agent, so that the network is considered a team of agents. As such, all units can be trained by REINFORCE, a local learning rule modulated by a global signal that is more consistent with biologically observed forms of synaptic plasticity. Although this learning rule follows the gradient of return in expectation, it suffers from high variance and thus a low speed of learning, rendering it impractical to train deep networks. We therefore propose a novel algorithm called MAP propagation to reduce this variance significantly while retaining the local property of the learning rule. Experiments demonstrated that MAP propagation could solve common reinforcement learning tasks at a similar speed to backpropagation when applied to an actor-critic network. Our work thus allows for the broader application of teams of agents in deep reinforcement learning.
| null |
TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up
|
https://papers.nips.cc/paper_files/paper/2021/hash/7c220a2091c26a7f5e9f1cfb099511e3-Abstract.html
|
Yifan Jiang, Shiyu Chang, Zhangyang Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/7c220a2091c26a7f5e9f1cfb099511e3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12753-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7c220a2091c26a7f5e9f1cfb099511e3-Paper.pdf
|
https://openreview.net/forum?id=1GTpBZvNUrk
|
https://papers.nips.cc/paper_files/paper/2021/file/7c220a2091c26a7f5e9f1cfb099511e3-Supplemental.pdf
|
The recent explosive interest in transformers has suggested their potential to become powerful "universal" models for computer vision tasks, such as classification, detection, and segmentation. While those attempts mainly study the discriminative models, we explore transformers on some more notoriously difficult vision tasks, e.g., generative adversarial networks (GANs). Our goal is to conduct the first pilot study in building a GAN \textit{completely free of convolutions}, using only pure transformer-based architectures. Our vanilla GAN architecture, dubbed \textbf{TransGAN}, consists of a memory-friendly transformer-based generator that progressively increases feature resolution, and correspondingly a multi-scale discriminator to simultaneously capture semantic contexts and low-level textures. On top of them, we introduce the new module of grid self-attention for alleviating the memory bottleneck further, in order to scale up TransGAN to high-resolution generation. We also develop a unique training recipe including a series of techniques that can mitigate the training instability issues of TransGAN, such as data augmentation, modified normalization, and relative position encoding. Our best architecture achieves highly competitive performance compared to current state-of-the-art GANs using convolutional backbones. Specifically, TransGAN sets a \textbf{new state-of-the-art} inception score of 10.43 and FID of 18.28 on STL-10. It also reaches an inception score of 9.02 and FID of 9.26 on CIFAR-10, and 5.28 FID on CelebA $\mathbf{128} \times \mathbf{128}$, respectively: both on par with the current best results and outperforming StyleGAN-V2. When it comes to higher-resolution (e.g. $\mathbf{256} \times \mathbf{256}$) generation tasks, such as on CelebA-HQ and LSUN-Church, TransGAN continues to produce diverse visual examples with high fidelity and impressive texture details. In addition, we dive deep into the transformer-based generation models to understand how their behaviors differ from convolutional ones, by visualizing training dynamics. The code is available at: https://github.com/VITA-Group/TransGAN.
| null |
A Central Limit Theorem for Differentially Private Query Answering
|
https://papers.nips.cc/paper_files/paper/2021/hash/7c2c48a32443ad8f805e48520f3b26a4-Abstract.html
|
Jinshuo Dong, Weijie Su, Linjun Zhang
|
https://papers.nips.cc/paper_files/paper/2021/hash/7c2c48a32443ad8f805e48520f3b26a4-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12754-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7c2c48a32443ad8f805e48520f3b26a4-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=YscYPF8bU13
|
https://papers.nips.cc/paper_files/paper/2021/file/7c2c48a32443ad8f805e48520f3b26a4-Supplemental.pdf
|
Perhaps the single most important use case for differential privacy is to privately answer numerical queries, which is usually achieved by adding noise to the answer vector. The central question is, therefore, to understand which noise distribution optimizes the privacy-accuracy trade-off, especially when the dimension of the answer vector is high. Accordingly, an extensive literature has been dedicated to the question and the upper and lower bounds have been successfully matched up to constant factors (Bun et al., 2018; Steinke & Ullman, 2017). In this paper, we take a novel approach to address this important optimality question. We first demonstrate an intriguing central limit theorem phenomenon in the high-dimensional regime. More precisely, we prove that a mechanism is approximately Gaussian Differentially Private (Dong et al., 2021) if the added noise satisfies certain conditions. In particular, densities proportional to $\mathrm{e}^{-\|x\|_p^\alpha}$, where $\|x\|_p$ is the standard $\ell_p$-norm, satisfy the conditions. Taking this perspective, we make use of the Cramer--Rao inequality and show an "uncertainty principle"-style result: the product of the privacy parameter and the $\ell_2$-loss of the mechanism is lower bounded by the dimension. Furthermore, the Gaussian mechanism achieves the constant-sharp optimal privacy-accuracy trade-off among all such noises. Our findings are corroborated by numerical experiments.
| null |
Differential Privacy Dynamics of Langevin Diffusion and Noisy Gradient Descent
|
https://papers.nips.cc/paper_files/paper/2021/hash/7c6c1a7bfde175bed616b39247ccace1-Abstract.html
|
Rishav Chourasia, Jiayuan Ye, Reza Shokri
|
https://papers.nips.cc/paper_files/paper/2021/hash/7c6c1a7bfde175bed616b39247ccace1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12755-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7c6c1a7bfde175bed616b39247ccace1-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=Nfbe1usrgx4
|
https://papers.nips.cc/paper_files/paper/2021/file/7c6c1a7bfde175bed616b39247ccace1-Supplemental.zip
|
What is the information leakage of an iterative randomized learning algorithm about its training data, when the internal state of the algorithm is \emph{private}? How much is the contribution of each specific training epoch to the information leakage through the released model? We study this problem for noisy gradient descent algorithms, and model the \emph{dynamics} of R\'enyi differential privacy loss throughout the training process. Our analysis traces a provably \emph{tight} bound on the R\'enyi divergence between the pair of probability distributions over parameters of models trained on neighboring datasets. We prove that the privacy loss converges exponentially fast, for smooth and strongly convex loss functions, which is a significant improvement over composition theorems (which over-estimate the privacy loss by upper-bounding its total value over all intermediate gradient computations). For Lipschitz, smooth, and strongly convex loss functions, we prove optimal utility with a small gradient complexity for noisy gradient descent algorithms.
| null |
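For concreteness, the noisy gradient descent iteration whose privacy dynamics are analyzed typically takes the form below (the step size $\eta$ and noise scale $\sigma$ are generic symbols of this sketch, not quoted from the paper):

$$\theta_{t+1} \;=\; \theta_t \;-\; \eta\,\nabla_\theta \mathcal{L}_D(\theta_t) \;+\; \sqrt{2\eta\sigma^2}\;\xi_t, \qquad \xi_t \sim \mathcal{N}(0, I_d),$$

with the Langevin diffusion recovered as $\eta \to 0$; the analysis then tracks the Rényi divergence between the distributions of $\theta_t$ induced by two neighboring datasets $D$ and $D'$ across iterations.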
Data driven semi-supervised learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/7c93ebe873ef213123c8af4b188e7558-Abstract.html
|
Maria-Florina F. Balcan, Dravyansh Sharma
|
https://papers.nips.cc/paper_files/paper/2021/hash/7c93ebe873ef213123c8af4b188e7558-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12756-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7c93ebe873ef213123c8af4b188e7558-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=n11B-1GmTJl
|
https://papers.nips.cc/paper_files/paper/2021/file/7c93ebe873ef213123c8af4b188e7558-Supplemental.pdf
|
We consider a novel data driven approach for designing semi-supervised learning algorithms that can effectively learn with only a small number of labeled examples. We focus on graph-based techniques, where the unlabeled examples are connected in a graph under the implicit assumption that similar nodes likely have similar labels. Over the past two decades, several elegant graph-based semi-supervised learning algorithms for inferring the labels of the unlabeled examples given the graph and a few labeled examples have been proposed. However, the problem of how to create the graph (which impacts the practical usefulness of these methods significantly) has been relegated to heuristics and domain-specific art, and no general principles have been proposed. In this work we present a novel data driven approach for learning the graph and provide strong formal guarantees in both the distributional and online learning formalizations. We show how to leverage problem instances coming from an underlying problem domain to learn the graph hyperparameters for commonly used parametric families of graphs that provably perform well on new instances from the same domain. We obtain low regret and efficient algorithms in the online setting, and generalization guarantees in the distributional setting. We also show how to combine several very different similarity metrics and learn multiple hyperparameters; our results hold for large classes of problems. We expect some of the tools and techniques we develop along the way to be of independent interest, for data driven algorithms more generally.
| null |
Online Meta-Learning via Learning with Layer-Distributed Memory
|
https://papers.nips.cc/paper_files/paper/2021/hash/7c9e9afa5a9dc68ccaf27d9effeb9383-Abstract.html
|
Sudarshan Babu, Pedro Savarese, Michael Maire
|
https://papers.nips.cc/paper_files/paper/2021/hash/7c9e9afa5a9dc68ccaf27d9effeb9383-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12757-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7c9e9afa5a9dc68ccaf27d9effeb9383-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=8YSqxvRhi-Q
|
https://papers.nips.cc/paper_files/paper/2021/file/7c9e9afa5a9dc68ccaf27d9effeb9383-Supplemental.pdf
|
We demonstrate that efficient meta-learning can be achieved via end-to-end training of deep neural networks with memory distributed across layers. The persistent state of this memory assumes the entire burden of guiding task adaptation. Moreover, its distributed nature is instrumental in orchestrating adaptation. Ablation experiments demonstrate that providing relevant feedback to memory units distributed across the depth of the network enables them to guide adaptation throughout the entire network. Our results show that this is a successful strategy for simplifying meta-learning -- often cast as a bi-level optimization problem -- to standard end-to-end training, while outperforming gradient-based, prototype-based, and other memory-based meta-learning strategies. Additionally, our adaptation strategy naturally handles online learning scenarios with a significant delay between observing a sample and its corresponding label -- a setting in which other approaches struggle. Adaptation via distributed memory is effective across a wide range of learning tasks, ranging from classification to online few-shot semantic segmentation.
| null |
Physics-Integrated Variational Autoencoders for Robust and Interpretable Generative Modeling
|
https://papers.nips.cc/paper_files/paper/2021/hash/7ca57a9f85a19a6e4b9a248c1daca185-Abstract.html
|
Naoya Takeishi, Alexandros Kalousis
|
https://papers.nips.cc/paper_files/paper/2021/hash/7ca57a9f85a19a6e4b9a248c1daca185-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12758-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7ca57a9f85a19a6e4b9a248c1daca185-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=0p0gt1Pn2Gv
|
https://papers.nips.cc/paper_files/paper/2021/file/7ca57a9f85a19a6e4b9a248c1daca185-Supplemental.pdf
|
Integrating physics models within machine learning models holds considerable promise toward learning robust models with improved interpretability and abilities to extrapolate. In this work, we focus on the integration of incomplete physics models into deep generative models. In particular, we introduce an architecture of variational autoencoders (VAEs) in which a part of the latent space is grounded by physics. A key technical challenge is to strike a balance between the incomplete physics and trainable components such as neural networks for ensuring that the physics part is used in a meaningful manner. To this end, we propose a regularized learning method that controls the effect of the trainable components and preserves the semantics of the physics-based latent variables as intended. We not only demonstrate generative performance improvements over a set of synthetic and real-world datasets, but we also show that we learn robust models that can consistently extrapolate beyond the training distribution in a meaningful manner. Moreover, we show that we can control the generative process in an interpretable manner.
| null |
Characterizing the risk of fairwashing
|
https://papers.nips.cc/paper_files/paper/2021/hash/7caf5e22ea3eb8175ab518429c8589a4-Abstract.html
|
Ulrich Aïvodji, Hiromi Arai, Sébastien Gambs, Satoshi Hara
|
https://papers.nips.cc/paper_files/paper/2021/hash/7caf5e22ea3eb8175ab518429c8589a4-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12759-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7caf5e22ea3eb8175ab518429c8589a4-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=9PnKduzf-FT
|
https://papers.nips.cc/paper_files/paper/2021/file/7caf5e22ea3eb8175ab518429c8589a4-Supplemental.pdf
|
Fairwashing refers to the risk that an unfair black-box model can be explained by a fairer model through post-hoc explanation manipulation. In this paper, we investigate the capability of fairwashing attacks by analyzing their fidelity-unfairness trade-offs. In particular, we show that fairwashed explanation models can generalize beyond the suing group (i.e., data points that are being explained), meaning that a fairwashed explainer can be used to rationalize subsequent unfair decisions of a black-box model. We also demonstrate that fairwashing attacks can transfer across black-box models, meaning that other black-box models can perform fairwashing without explicitly using their predictions. This generalization and transferability of fairwashing attacks imply that their detection will be difficult in practice. Finally, we propose an approach to quantify the risk of fairwashing, which is based on the computation of the range of the unfairness of high-fidelity explainers.
| null |
Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples
|
https://papers.nips.cc/paper_files/paper/2021/hash/7cc234202e98d2722580858573fd0817-Abstract.html
|
Kanghyun Choi, Deokki Hong, Noseong Park, Youngsok Kim, Jinho Lee
|
https://papers.nips.cc/paper_files/paper/2021/hash/7cc234202e98d2722580858573fd0817-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12760-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7cc234202e98d2722580858573fd0817-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=ejo1_Weiart
|
https://papers.nips.cc/paper_files/paper/2021/file/7cc234202e98d2722580858573fd0817-Supplemental.pdf
|
Model quantization is known as a promising method to compress deep neural networks, especially for inferences on lightweight mobile or edge devices. However, model quantization usually requires access to the original training data to maintain the accuracy of the full-precision models, which is often infeasible in real-world scenarios for security and privacy issues. A popular approach to perform quantization without access to the original data is to use synthetically generated samples, based on batch-normalization statistics or adversarial learning. However, the drawback of such approaches is that they primarily rely on random noise input to the generator to attain diversity of the synthetic samples. We find that this is often insufficient to capture the distribution of the original data, especially around the decision boundaries. To this end, we propose Qimera, a method that uses superposed latent embeddings to generate synthetic boundary supporting samples. For the superposed embeddings to better reflect the original distribution, we also propose using an additional disentanglement mapping layer and extracting information from the full-precision model. The experimental results show that Qimera achieves state-of-the-art performances for various settings on data-free quantization. Code is available at https://github.com/iamkanghyunchoi/qimera.
| null |
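A minimal sketch of the superposed-embedding idea referenced in the Qimera abstract above; the function names, the simplex-weighted mixing, and the example values are assumptions of this note, not the authors' released code:

```python
# Mix the embeddings of several classes so that a conditional generator is pushed
# toward synthetic samples near the decision boundary between those classes.
import torch

def superposed_embedding(class_embedding: torch.nn.Embedding,
                         classes: torch.Tensor,
                         mixing_logits: torch.Tensor) -> torch.Tensor:
    """Return a convex combination of the chosen class embeddings.

    class_embedding : learnable embedding table, one row per class
    classes         : (k,) indices of the classes to superpose
    mixing_logits   : (k,) unconstrained weights, softmax-ed onto the simplex
    """
    e = class_embedding(classes)                 # (k, d)
    w = torch.softmax(mixing_logits, dim=0)      # (k,) convex combination weights
    return (w.unsqueeze(1) * e).sum(dim=0)       # (d,) boundary-supporting embedding

# Example: superpose classes 3 and 7 with roughly equal weight before feeding the
# result (together with noise) to the sample generator.
emb = torch.nn.Embedding(num_embeddings=10, embedding_dim=64)
z_boundary = superposed_embedding(emb, torch.tensor([3, 7]), torch.zeros(2))
```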
Embedding Principle of Loss Landscape of Deep Neural Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/7cc532d783a7461f227a5da8ea80bfe1-Abstract.html
|
Yaoyu Zhang, Zhongwang Zhang, Tao Luo, Zhiqin J Xu
|
https://papers.nips.cc/paper_files/paper/2021/hash/7cc532d783a7461f227a5da8ea80bfe1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12761-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7cc532d783a7461f227a5da8ea80bfe1-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=8AgtfqiHUhs
|
https://papers.nips.cc/paper_files/paper/2021/file/7cc532d783a7461f227a5da8ea80bfe1-Supplemental.pdf
|
Understanding the structure of the loss landscape of deep neural networks (DNNs) is obviously important. In this work, we prove an embedding principle that the loss landscape of a DNN "contains" all the critical points of all the narrower DNNs. More precisely, we propose a critical embedding such that any critical point, e.g., local or global minima, of a narrower DNN can be embedded into a critical point/affine subspace of the target DNN with higher degeneracy and preserving the DNN output function. Note that, given any training data, differentiable loss function and differentiable activation function, this embedding structure of critical points holds. This general structure of DNNs is starkly different from other nonconvex problems such as protein-folding. Empirically, we find that a wide DNN is often attracted by highly-degenerate critical points that are embedded from narrow DNNs. The embedding principle provides a new perspective to study the general easy optimization of wide DNNs and unravels a potential implicit low-complexity regularization during the training. Overall, our work provides a skeleton for the study of the loss landscape of DNNs and its implication, by which a more exact and comprehensive understanding can be anticipated in the near future.
| null |
Adversarial Reweighting for Partial Domain Adaptation
|
https://papers.nips.cc/paper_files/paper/2021/hash/7ce3284b743aefde80ffd9aec500e085-Abstract.html
|
Xiang Gu, Xi Yu, yan yang, Jian Sun, Zongben Xu
|
https://papers.nips.cc/paper_files/paper/2021/hash/7ce3284b743aefde80ffd9aec500e085-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12762-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7ce3284b743aefde80ffd9aec500e085-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=f5liPryFRoA
|
https://papers.nips.cc/paper_files/paper/2021/file/7ce3284b743aefde80ffd9aec500e085-Supplemental.pdf
|
Partial domain adaptation (PDA) has gained much attention due to its practical setting. The current PDA methods usually adapt the feature extractor by aligning the target and reweighted source domain distributions. In this paper, we experimentally find that the feature adaptation by the reweighted distribution alignment in some state-of-the-art PDA methods is not robust to the ``noisy'' weights of source domain data, leading to negative domain transfer on some challenging benchmarks. To tackle the challenge of negative domain transfer, we propose a novel Adversarial Reweighting (AR) approach that adversarially learns the weights of source domain data to align the source and target domain distributions, and the transferable deep recognition network is learned on the reweighted source domain data. Based on this idea, we propose a training algorithm that alternately updates the parameters of the network and optimizes the weights of source domain data. Extensive experiments show that our method achieves state-of-the-art results on the benchmarks of ImageNet-Caltech, Office-Home, VisDA-2017, and DomainNet. Ablation studies also confirm the effectiveness of our approach.
| null |
M-FAC: Efficient Matrix-Free Approximations of Second-Order Information
|
https://papers.nips.cc/paper_files/paper/2021/hash/7cfd5df443b4eb0d69886a583b33de4c-Abstract.html
|
Elias Frantar, Eldar Kurtic, Dan Alistarh
|
https://papers.nips.cc/paper_files/paper/2021/hash/7cfd5df443b4eb0d69886a583b33de4c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12763-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7cfd5df443b4eb0d69886a583b33de4c-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=EEq6YUrDyfO
|
https://papers.nips.cc/paper_files/paper/2021/file/7cfd5df443b4eb0d69886a583b33de4c-Supplemental.pdf
|
Efficiently approximating local curvature information of the loss function is a useful tool for the optimization and compression of deep neural networks. Yet, most existing methods to approximate second-order information have high computational or storage costs, limiting their practicality. In this work, we investigate matrix-free approaches for estimating Inverse-Hessian Vector Products (IHVPs) for the case when the Hessian can be approximated as a sum of rank-one matrices, as in the classic approximation of the Hessian by the empirical Fisher matrix. The first algorithm we propose is tailored towards network compression and can compute the IHVP for dimension $d$ given a fixed set of $m$ rank-one matrices using $O(dm^2)$ precomputation, $O(dm)$ cost for computing the IHVP and query cost $O(m)$ for computing any single element of the inverse Hessian approximation. The second algorithm targets an optimization setting, where we wish to compute the product between the inverse Hessian, estimated over a sliding window of optimization steps, and a given gradient direction. We give an algorithm with cost $O(dm + m^2)$ for computing the IHVP and $O(dm + m^3)$ for adding or removing any gradient from the sliding window. We show that both algorithms yield competitive results for network pruning and optimization, respectively, with significantly lower computational overhead relative to existing second-order methods.
| null |
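As a concrete illustration of the rank-one structure the first M-FAC algorithm exploits, here is a minimal NumPy sketch (an assumption-laden reconstruction, not the authors' optimized implementation) that applies the Sherman-Morrison identity recursively to $H = \lambda I + \frac{1}{m}\sum_i g_i g_i^\top$, matching the stated $O(dm^2)$ precomputation and $O(dm)$ per-query cost:

```python
import numpy as np

def mfac_precompute(grads, lam):
    """O(d m^2) precomputation: q[i] stores H_{i-1}^{-1} g_i."""
    m, d = grads.shape
    q = np.zeros((m, d))
    denom = np.zeros(m)
    for i in range(m):
        v = grads[i] / lam                            # H_0^{-1} g_i, with H_0 = lam * I
        for j in range(i):
            v -= (grads[j] @ v) / denom[j] * q[j]     # fold in the j-th rank-one update
        q[i] = v
        denom[i] = m + grads[i] @ v
    return q, denom

def mfac_ihvp(x, grads, q, denom, lam):
    """Inverse-Hessian-vector product H^{-1} x in O(d m) per query."""
    v = x / lam
    for i in range(len(denom)):
        v -= (grads[i] @ v) / denom[i] * q[i]
    return v

# Usage: q, denom = mfac_precompute(G, 1e-4); ihvp = mfac_ihvp(grad, G, q, denom, 1e-4)
```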
Graph Adversarial Self-Supervised Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/7d3010c11d08cf990b7614d2c2ca9098-Abstract.html
|
Longqi Yang, Liangliang Zhang, Wenjing Yang
|
https://papers.nips.cc/paper_files/paper/2021/hash/7d3010c11d08cf990b7614d2c2ca9098-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12764-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7d3010c11d08cf990b7614d2c2ca9098-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=nVwJse40s1
|
https://papers.nips.cc/paper_files/paper/2021/file/7d3010c11d08cf990b7614d2c2ca9098-Supplemental.zip
|
This paper studies a long-standing problem of learning the representations of a whole graph without human supervision. The recent self-supervised learning methods train models to be invariant to the transformations (views) of the inputs. However, designing these views requires the experience of human experts. Inspired by adversarial training, we propose an adversarial self-supervised learning (\texttt{GASSL}) framework for learning unsupervised representations of graph data without any handcrafted views. \texttt{GASSL} automatically generates challenging views by adding perturbations to the input and is adversarially trained with respect to the encoder. Our method optimizes the min-max problem and utilizes a gradient accumulation strategy to accelerate the training process. Experiments on ten graph classification datasets show that the proposed approach is superior to state-of-the-art self-supervised learning baselines, which are competitive with supervised models.
| null |
Anti-Backdoor Learning: Training Clean Models on Poisoned Data
|
https://papers.nips.cc/paper_files/paper/2021/hash/7d38b1e9bd793d3f45e0e212a729a93c-Abstract.html
|
Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, Xingjun Ma
|
https://papers.nips.cc/paper_files/paper/2021/hash/7d38b1e9bd793d3f45e0e212a729a93c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12765-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7d38b1e9bd793d3f45e0e212a729a93c-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=cAw860ncLRW
|
https://papers.nips.cc/paper_files/paper/2021/file/7d38b1e9bd793d3f45e0e212a729a93c-Supplemental.pdf
|
Backdoor attack has emerged as a major security threat to deep neural networks (DNNs). While existing defense methods have demonstrated promising results on detecting or erasing backdoors, it is still not clear whether robust training methods can be devised to prevent the backdoor triggers being injected into the trained model in the first place. In this paper, we introduce the concept of \emph{anti-backdoor learning}, aiming to train \emph{clean} models given backdoor-poisoned data. We frame the overall learning process as a dual-task of learning the \emph{clean} and the \emph{backdoor} portions of data. From this view, we identify two inherent characteristics of backdoor attacks as their weaknesses: 1) the models learn backdoored data much faster than learning with clean data, and the stronger the attack, the faster the model converges on backdoored data; 2) the backdoor task is tied to a specific class (the backdoor target class). Based on these two weaknesses, we propose a general learning scheme, Anti-Backdoor Learning (ABL), to automatically prevent backdoor attacks during training. ABL introduces a two-stage \emph{gradient ascent} mechanism for standard training to 1) help isolate backdoor examples at an early training stage, and 2) break the correlation between backdoor examples and the target class at a later training stage. Through extensive experiments on multiple benchmark datasets against 10 state-of-the-art attacks, we empirically show that ABL-trained models on backdoor-poisoned data achieve the same performance as if they had been trained on purely clean data. Code is available at \url{https://github.com/bboylyg/ABL}.
| null |
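A hedged sketch of the two-stage gradient-ascent idea described in the ABL abstract above; the loss names, the threshold `gamma`, and the isolation rule are illustrative assumptions of this note, not the paper's exact recipe:

```python
import torch
import torch.nn.functional as F

def isolation_stage_loss(logits, targets, gamma=0.5):
    # Stage 1: flip the gradient to ascent whenever a sample's loss drops below gamma,
    # so suspiciously easy (likely backdoored) examples stop being fitted and stay isolatable.
    ce = F.cross_entropy(logits, targets, reduction="none")
    return (torch.sign(ce - gamma) * ce).mean()

def unlearning_stage_loss(logits, targets, is_isolated):
    # Stage 2: keep descending on the presumed-clean portion while ascending (negated loss)
    # on the isolated examples, breaking the link between the trigger and the target class.
    ce = F.cross_entropy(logits, targets, reduction="none")
    sign = torch.where(is_isolated, -torch.ones_like(ce), torch.ones_like(ce))
    return (sign * ce).mean()
```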
Locally Most Powerful Bayesian Test for Out-of-Distribution Detection using Deep Generative Models
|
https://papers.nips.cc/paper_files/paper/2021/hash/7d3e28d14440d6c07f73b7557e3d9602-Abstract.html
|
Keunseo Kim, JunCheol Shin, Heeyoung Kim
|
https://papers.nips.cc/paper_files/paper/2021/hash/7d3e28d14440d6c07f73b7557e3d9602-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12766-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7d3e28d14440d6c07f73b7557e3d9602-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=-nLW4nhdkO
|
https://papers.nips.cc/paper_files/paper/2021/file/7d3e28d14440d6c07f73b7557e3d9602-Supplemental.pdf
|
Several out-of-distribution (OOD) detection scores have been recently proposed for deep generative models because the direct use of the likelihood threshold for OOD detection has been shown to be problematic. In this paper, we propose a new OOD score based on a Bayesian hypothesis test called the locally most powerful Bayesian test (LMPBT). The LMPBT is locally most powerful in that the alternative hypothesis (the representative parameter for the OOD sample) is specified to maximize the probability that the Bayes factor exceeds the evidence threshold in favor of the alternative hypothesis provided that the parameter specified under the alternative hypothesis is in the neighborhood of the parameter specified under the null hypothesis. That is, under this neighborhood parameter condition, the test with the proposed alternative hypothesis maximizes the probability of correct detection of OOD samples. We also propose numerical strategies for more efficient and reliable computation of the LMPBT for practical application to deep generative models. Evaluations conducted of the OOD detection performance of the LMPBT on various benchmark datasets demonstrate its superior performance over existing OOD detection methods.
| null |
Stable Neural ODE with Lyapunov-Stable Equilibrium Points for Defending Against Adversarial Attacks
|
https://papers.nips.cc/paper_files/paper/2021/hash/7d5430cf85f78c4b7aa09813b14bce0d-Abstract.html
|
Qiyu Kang, Yang Song, Qinxu Ding, Wee Peng Tay
|
https://papers.nips.cc/paper_files/paper/2021/hash/7d5430cf85f78c4b7aa09813b14bce0d-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12767-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7d5430cf85f78c4b7aa09813b14bce0d-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=9CPc4EIr2t1
|
https://papers.nips.cc/paper_files/paper/2021/file/7d5430cf85f78c4b7aa09813b14bce0d-Supplemental.pdf
|
Deep neural networks (DNNs) are well-known to be vulnerable to adversarial attacks, where malicious human-imperceptible perturbations are included in the input to the deep network to fool it into making a wrong classification. Recent studies have demonstrated that neural Ordinary Differential Equations (ODEs) are intrinsically more robust against adversarial attacks compared to vanilla DNNs. In this work, we propose a neural ODE with Lyapunov-stable equilibrium points for defending against adversarial attacks (SODEF). By ensuring that the equilibrium points of the ODE solution used as part of SODEF are Lyapunov-stable, the ODE solution for an input with a small perturbation converges to the same solution as the unperturbed input. We provide theoretical results that give insights into the stability of SODEF as well as the choice of regularizers to ensure its stability. Our analysis suggests that our proposed regularizers force the extracted feature points to be within a neighborhood of the Lyapunov-stable equilibrium points of the SODEF ODE. SODEF is compatible with many defense methods and can be applied to any neural network's final regressor layer to enhance its stability against adversarial attacks.
| null |
Robust Compressed Sensing MRI with Deep Generative Priors
|
https://papers.nips.cc/paper_files/paper/2021/hash/7d6044e95a16761171b130dcb476a43e-Abstract.html
|
Ajil Jalal, Marius Arvinte, Giannis Daras, Eric Price, Alexandros G. Dimakis, Jon Tamir
|
https://papers.nips.cc/paper_files/paper/2021/hash/7d6044e95a16761171b130dcb476a43e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12768-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7d6044e95a16761171b130dcb476a43e-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=wHoIjrT6MMb
|
https://papers.nips.cc/paper_files/paper/2021/file/7d6044e95a16761171b130dcb476a43e-Supplemental.pdf
|
The CSGM framework (Bora-Jalal-Price-Dimakis'17) has shown that deep generative priors can be powerful tools for solving inverse problems. However, to date this framework has been empirically successful only on certain datasets (for example, human faces and MNIST digits), and it is known to perform poorly on out-of-distribution samples. In this paper, we present the first successful application of the CSGM framework on clinical MRI data. We train a generative prior on brain scans from the fastMRI dataset, and show that posterior sampling via Langevin dynamics achieves high quality reconstructions. Furthermore, our experiments and theory show that posterior sampling is robust to changes in the ground-truth distribution and measurement process. Our code and models are available at: \url{https://github.com/utcsilab/csgm-mri-langevin}.
| null |
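A schematic form of the posterior sampling step referenced above, written for measurements $y = A x + \text{noise}$ with a learned score $s_\theta(x) \approx \nabla_x \log p(x)$ (the step size $\eta_t$, the annealing schedule, and the Gaussian noise model are assumptions of this sketch):

$$x_{t+1} \;=\; x_t \;+\; \frac{\eta_t}{2}\left( s_\theta(x_t) \;+\; \frac{A^{H}\,(y - A x_t)}{\sigma^2} \right) \;+\; \sqrt{\eta_t}\;\zeta_t, \qquad \zeta_t \sim \mathcal{N}(0, I),$$

where $A$ is the undersampled Fourier measurement operator; the data-consistency term keeps the Langevin iterates close to the acquired k-space measurements while the learned score supplies the generative prior.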
H-NeRF: Neural Radiance Fields for Rendering and Temporal Reconstruction of Humans in Motion
|
https://papers.nips.cc/paper_files/paper/2021/hash/7d62a275027741d98073d42b8f735c68-Abstract.html
|
Hongyi Xu, Thiemo Alldieck, Cristian Sminchisescu
|
https://papers.nips.cc/paper_files/paper/2021/hash/7d62a275027741d98073d42b8f735c68-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12769-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7d62a275027741d98073d42b8f735c68-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=s-NI4H4e3Rf
|
https://papers.nips.cc/paper_files/paper/2021/file/7d62a275027741d98073d42b8f735c68-Supplemental.zip
|
We present neural radiance fields for rendering and temporal (4D) reconstruction of humans in motion (H-NeRF), as captured by a sparse set of cameras or even from a monocular video. Our approach combines ideas from neural scene representation, novel-view synthesis, and implicit statistical geometric human representations, coupled using novel loss functions. Instead of learning a radiance field with a uniform occupancy prior, we constrain it by a structured implicit human body model, represented using signed distance functions. This allows us to robustly fuse information from sparse views and generalize well beyond the poses or views observed in training. Moreover, we apply geometric constraints to co-learn the structure of the observed subject -- including both body and clothing -- and to regularize the radiance field to geometrically plausible solutions. Extensive experiments on multiple datasets demonstrate the robustness and the accuracy of our approach, its generalization capabilities significantly outside a small training set of poses and views, and statistical extrapolation beyond the observed shape.
| null |
DOBF: A Deobfuscation Pre-Training Objective for Programming Languages
|
https://papers.nips.cc/paper_files/paper/2021/hash/7d6548bdc0082aacc950ed35e91fcccb-Abstract.html
|
Marie-Anne Lachaux, Baptiste Roziere, Marc Szafraniec, Guillaume Lample
|
https://papers.nips.cc/paper_files/paper/2021/hash/7d6548bdc0082aacc950ed35e91fcccb-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12770-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7d6548bdc0082aacc950ed35e91fcccb-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=3ez9BSHTNT
|
https://papers.nips.cc/paper_files/paper/2021/file/7d6548bdc0082aacc950ed35e91fcccb-Supplemental.pdf
|
Recent advances in self-supervised learning have dramatically improved the state of the art on a wide variety of tasks. However, research in language model pre-training has mostly focused on natural languages, and it is unclear whether models like BERT and its variants provide the best pre-training when applied to other modalities, such as source code. In this paper, we introduce a new pre-training objective, DOBF, that leverages the structural aspect of programming languages and pre-trains a model to recover the original version of obfuscated source code. We show that models pre-trained with DOBF significantly outperform existing approaches on multiple downstream tasks, providing relative improvements of up to 12.2% in unsupervised code translation, and 5.3% in natural language code search. Incidentally, we found that our pre-trained model is able to deobfuscate fully obfuscated source files, and to suggest descriptive variable names.
| null |
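A toy illustration (not taken from the paper's data) of the deobfuscation objective: every identifier in the input is replaced by a placeholder token, and the model is trained to recover the original names.

```python
obfuscated_input = '''
def FUNC_0(VAR_0):
    VAR_1 = 0
    for VAR_2 in VAR_0:
        VAR_1 += VAR_2
    return VAR_1
'''

# Training target: the mapping from placeholder tokens back to the original identifiers.
recovery_target = {
    "FUNC_0": "sum_values",
    "VAR_0": "values",
    "VAR_1": "total",
    "VAR_2": "value",
}
```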
Detecting Errors and Estimating Accuracy on Unlabeled Data with Self-training Ensembles
|
https://papers.nips.cc/paper_files/paper/2021/hash/7dd3ed2e12d7967b656d156d50308263-Abstract.html
|
Jiefeng Chen, Frederick Liu, Besim Avci, Xi Wu, Yingyu Liang, Somesh Jha
|
https://papers.nips.cc/paper_files/paper/2021/hash/7dd3ed2e12d7967b656d156d50308263-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12771-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7dd3ed2e12d7967b656d156d50308263-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=apK65PUH0l9
|
https://papers.nips.cc/paper_files/paper/2021/file/7dd3ed2e12d7967b656d156d50308263-Supplemental.zip
|
When a deep learning model is deployed in the wild, it can encounter test data drawn from distributions different from the training data distribution and suffer a drop in performance. For safe deployment, it is essential to estimate the accuracy of the pre-trained model on the test data. However, the labels for the test inputs are usually not immediately available in practice, and obtaining them can be expensive. This observation leads to two challenging tasks: (1) unsupervised accuracy estimation, which aims to estimate the accuracy of a pre-trained classifier on a set of unlabeled test inputs; (2) error detection, which aims to identify mis-classified test inputs. In this paper, we propose a principled and practically effective framework that simultaneously addresses the two tasks. The proposed framework iteratively learns an ensemble of models to identify mis-classified data points and performs self-training to improve the ensemble with the identified points. Theoretical analysis demonstrates that our framework enjoys provable guarantees for both accuracy estimation and error detection under mild conditions readily satisfied by practical deep learning models. Along with the framework, we propose and experiment with two instantiations and achieve state-of-the-art results on 59 tasks. For example, on iWildCam, one instantiation reduces the estimation error for unsupervised accuracy estimation by at least 70% and improves the F1 score for error detection by at least 4.7% compared to existing methods.
| null |
Exploiting Chain Rule and Bayes' Theorem to Compare Probability Distributions
|
https://papers.nips.cc/paper_files/paper/2021/hash/7e0ff37942c2de60cbcbd27041196ce3-Abstract.html
|
Huangjie Zheng, Mingyuan Zhou
|
https://papers.nips.cc/paper_files/paper/2021/hash/7e0ff37942c2de60cbcbd27041196ce3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12772-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7e0ff37942c2de60cbcbd27041196ce3-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=f-ggKIDTu5D
|
https://papers.nips.cc/paper_files/paper/2021/file/7e0ff37942c2de60cbcbd27041196ce3-Supplemental.pdf
|
To measure the difference between two probability distributions, referred to as the source and target, respectively, we exploit both the chain rule and Bayes' theorem to construct conditional transport (CT), which is constituted by both a forward component and a backward one. The forward CT is the expected cost of moving a source data point to a target one, with their joint distribution defined by the product of the source probability density function (PDF) and a source-dependent conditional distribution, which is related to the target PDF via Bayes' theorem. The backward CT is defined by reversing the direction. The CT cost can be approximated by replacing the source and target PDFs with their discrete empirical distributions supported on mini-batches, making it amenable to implicit distributions and stochastic gradient descent-based optimization. When applied to train a generative model, CT is shown to strike a good balance between mode-covering and mode-seeking behaviors and strongly resist mode collapse. On a wide variety of benchmark datasets for generative modeling, substituting the default statistical distance of an existing generative adversarial network with CT is shown to consistently improve the performance. PyTorch code is provided.
| null |
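To make the mini-batch approximation concrete, with a source batch $\{x_i\}_{i=1}^{n}$ and a target batch $\{y_j\}_{j=1}^{n}$, one way to write the forward cost (the point-to-point cost $c$ and the learned affinity $d_\phi$ defining the source-dependent conditional are notational assumptions of this sketch) is

$$\mathcal{C}_{\rightarrow} \;\approx\; \frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n} \frac{\exp\!\big(d_\phi(x_i, y_j)\big)}{\sum_{j'=1}^{n}\exp\!\big(d_\phi(x_i, y_{j'})\big)}\; c(x_i, y_j),$$

so each source point is transported to target points in proportion to a Bayes-rule-style conditional over the target batch; the backward cost exchanges the roles of source and target, and the two components are combined to form the training objective.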
Actively Identifying Causal Effects with Latent Variables Given Only Response Variable Observable
|
https://papers.nips.cc/paper_files/paper/2021/hash/7e7e69ea3384874304911625ac34321c-Abstract.html
|
Tian-Zuo Wang, Zhi-Hua Zhou
|
https://papers.nips.cc/paper_files/paper/2021/hash/7e7e69ea3384874304911625ac34321c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12773-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7e7e69ea3384874304911625ac34321c-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=tCB-SCt5wWG
|
https://papers.nips.cc/paper_files/paper/2021/file/7e7e69ea3384874304911625ac34321c-Supplemental.pdf
|
In many real tasks, it is generally desired to study the causal effect on a specific target (response variable) only, without the need to identify the complete set of causal effects involving all variables. In this paper, we attempt to identify such effects by a few active interventions where only the response variable is observable. This task is challenging because the causal graph is unknown and there may even exist latent confounders. To learn the necessary structure for identifying the effects, we provide the graphical characterization that allows us to efficiently estimate all possible causal effects in a partially mixed ancestral graph (PMAG) by the generalized back-door criterion. The characterization guides learning a local structure with the interventional data. Theoretical analysis and empirical studies validate the effectiveness and efficiency of our proposed approach.
| null |
Interventional Sum-Product Networks: Causal Inference with Tractable Probabilistic Models
|
https://papers.nips.cc/paper_files/paper/2021/hash/7eb7eabbe9bd03c2fc99881d04da9cbd-Abstract.html
|
Matej Zečević, Devendra Dhami, Athresh Karanam, Sriraam Natarajan, Kristian Kersting
|
https://papers.nips.cc/paper_files/paper/2021/hash/7eb7eabbe9bd03c2fc99881d04da9cbd-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12774-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7eb7eabbe9bd03c2fc99881d04da9cbd-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=9QwPhXWmuRp
|
https://papers.nips.cc/paper_files/paper/2021/file/7eb7eabbe9bd03c2fc99881d04da9cbd-Supplemental.pdf
|
While probabilistic models are an important tool for studying causality, doing so suffers from the intractability of inference. As a step towards tractable causal models, we consider the problem of learning interventional distributions using sum-product networks (SPNs) that are over-parameterized by gate functions, e.g., neural networks. Given an arbitrarily intervened causal graph as input, effectively subsuming Pearl's do-operator, the gate function predicts the parameters of the SPN. The resulting interventional SPNs are motivated and illustrated by a structural causal model themed around personal health. Our empirical evaluation against competing methods from both generative and causal modelling demonstrates that interventional SPNs indeed are both expressive and causally adequate.
| null |
PettingZoo: Gym for Multi-Agent Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/7ed2d3454c5eea71148b11d0c25104ff-Abstract.html
|
J Terry, Benjamin Black, Nathaniel Grammel, Mario Jayakumar, Ananth Hari, Ryan Sullivan, Luis S Santos, Clemens Dieffendahl, Caroline Horsch, Rodrigo Perez-Vicente, Niall Williams, Yashas Lokesh, Praveen Ravi
|
https://papers.nips.cc/paper_files/paper/2021/hash/7ed2d3454c5eea71148b11d0c25104ff-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12775-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7ed2d3454c5eea71148b11d0c25104ff-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=fLnsj7fpbPI
|
https://papers.nips.cc/paper_files/paper/2021/file/7ed2d3454c5eea71148b11d0c25104ff-Supplemental.pdf
|
This paper introduces the PettingZoo library and the accompanying Agent Environment Cycle ("AEC") games model. PettingZoo is a library of diverse sets of multi-agent environments with a universal, elegant Python API. PettingZoo was developed with the goal of accelerating research in Multi-Agent Reinforcement Learning ("MARL"), by making work more interchangeable, accessible and reproducible akin to what OpenAI's Gym library did for single-agent reinforcement learning. PettingZoo's API, while inheriting many features of Gym, is unique amongst MARL APIs in that it's based around the novel AEC games model. We argue, in part through case studies on major problems in popular MARL environments, that the popular game models are poor conceptual models of the games commonly used with MARL, that they promote severe bugs that are hard to detect, and that the AEC games model addresses these problems.
| null |
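A minimal sketch of the AEC interaction loop described in the PettingZoo abstract above. The environment module and the exact return signature of `env.last()` vary across PettingZoo releases, so treat the names below as illustrative rather than definitive.

```python
from pettingzoo.butterfly import pistonball_v4

env = pistonball_v4.env()
env.reset()
for agent in env.agent_iter():                      # agents act one at a time, in turn
    observation, reward, done, info = env.last()    # outcome of the previous step for this agent
    action = None if done else env.action_spaces[agent].sample()
    env.step(action)                                # steps only the currently selected agent
```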
Parametric Complexity Bounds for Approximating PDEs with Neural Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/7edccc661418aeb5761dbcdc06ad490c-Abstract.html
|
Tanya Marwah, Zachary Lipton, Andrej Risteski
|
https://papers.nips.cc/paper_files/paper/2021/hash/7edccc661418aeb5761dbcdc06ad490c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12776-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7edccc661418aeb5761dbcdc06ad490c-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=HbViCqfbd7
|
https://papers.nips.cc/paper_files/paper/2021/file/7edccc661418aeb5761dbcdc06ad490c-Supplemental.pdf
|
Recent experiments have shown that deep networks can approximate solutions to high-dimensional PDEs, seemingly escaping the curse of dimensionality. However, questions regarding the theoretical basis for such approximations, including the required network size remain open. In this paper, we investigate the representational power of neural networks for approximating solutions to linear elliptic PDEs with Dirichlet boundary conditions. We prove that when a PDE's coefficients are representable by small neural networks, the parameters required to approximate its solution scale polynomially with the input dimension $d$ and proportionally to the parameter counts of the coefficient networks. To this end, we develop a proof technique that simulates gradient descent (in an appropriate Hilbert space) by growing a neural network architecture whose iterates each participate as sub-networks in their (slightly larger) successors, and converge to the solution of the PDE.
| null |
Learning-to-learn non-convex piecewise-Lipschitz functions
|
https://papers.nips.cc/paper_files/paper/2021/hash/7ee6f2b3b68a212d3b7a4f6557eb8cc7-Abstract.html
|
Maria-Florina F. Balcan, Mikhail Khodak, Dravyansh Sharma, Ameet Talwalkar
|
https://papers.nips.cc/paper_files/paper/2021/hash/7ee6f2b3b68a212d3b7a4f6557eb8cc7-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12777-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7ee6f2b3b68a212d3b7a4f6557eb8cc7-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=USq7LP5pnDH
|
https://papers.nips.cc/paper_files/paper/2021/file/7ee6f2b3b68a212d3b7a4f6557eb8cc7-Supplemental.pdf
|
We analyze the meta-learning of the initialization and step-size of learning algorithms for piecewise-Lipschitz functions, a non-convex setting with applications to both machine learning and algorithms. Starting from recent regret bounds for the exponential forecaster on losses with dispersed discontinuities, we generalize them to be initialization-dependent and then use this result to propose a practical meta-learning procedure that learns both the initialization and the step-size of the algorithm from multiple online learning tasks. Asymptotically, we guarantee that the average regret across tasks scales with a natural notion of task-similarity that measures the amount of overlap between near-optimal regions of different tasks. Finally, we instantiate the method and its guarantee in two important settings: robust meta-learning and multi-task data-driven algorithm design.
| null |
Uncertain Decisions Facilitate Better Preference Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/7f141cf8e7136ce8701dc6636c2a6fe4-Abstract.html
|
Cassidy Laidlaw, Stuart Russell
|
https://papers.nips.cc/paper_files/paper/2021/hash/7f141cf8e7136ce8701dc6636c2a6fe4-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12778-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7f141cf8e7136ce8701dc6636c2a6fe4-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=sNKpWhzEDWS
|
https://papers.nips.cc/paper_files/paper/2021/file/7f141cf8e7136ce8701dc6636c2a6fe4-Supplemental.pdf
|
Existing observational approaches for learning human preferences, such as inverse reinforcement learning, usually make strong assumptions about the observability of the human's environment. However, in reality, people make many important decisions under uncertainty. To better understand preference learning in these cases, we study the setting of inverse decision theory (IDT), a previously proposed framework where a human is observed making non-sequential binary decisions under uncertainty. In IDT, the human's preferences are conveyed through their loss function, which expresses a tradeoff between different types of mistakes. We give the first statistical analysis of IDT, providing conditions necessary to identify these preferences and characterizing the sample complexity—the number of decisions that must be observed to learn the tradeoff the human is making to a desired precision. Interestingly, we show that it is actually easier to identify preferences when the decision problem is more uncertain. Furthermore, uncertain decision problems allow us to relax the unrealistic assumption that the human is an optimal decision maker but still identify their exact preferences; we give sample complexities in this suboptimal case as well. Our analysis contradicts the intuition that partial observability should make preference learning more difficult. It also provides a first step towards understanding and improving preference learning methods for uncertain and suboptimal humans.
| null |
Decision Transformer: Reinforcement Learning via Sequence Modeling
|
https://papers.nips.cc/paper_files/paper/2021/hash/7f489f642a0ddb10272b5c31057f0663-Abstract.html
|
Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch
|
https://papers.nips.cc/paper_files/paper/2021/hash/7f489f642a0ddb10272b5c31057f0663-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12779-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7f489f642a0ddb10272b5c31057f0663-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=a7APmM4B9d
|
https://papers.nips.cc/paper_files/paper/2021/file/7f489f642a0ddb10272b5c31057f0663-Supplemental.pdf
|
We introduce a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem. This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity, Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks.
| null |
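A compact PyTorch sketch (an illustrative reconstruction, not the authors' released implementation) of the conditional sequence-modeling view: trajectories are flattened into (return-to-go, state, action) tokens and a causally masked transformer predicts the action at each step from the state token.

```python
import torch
import torch.nn as nn

class DecisionTransformerSketch(nn.Module):
    def __init__(self, state_dim, act_dim, d_model=128, n_layers=3, n_heads=4, max_len=64):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_action = nn.Linear(act_dim, d_model)
        self.pos = nn.Embedding(3 * max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.predict_action = nn.Linear(d_model, act_dim)

    def forward(self, rtg, states, actions):
        # rtg: (B, T, 1)  states: (B, T, state_dim)  actions: (B, T, act_dim)
        B, T, _ = states.shape
        tokens = torch.stack([self.embed_rtg(rtg),
                              self.embed_state(states),
                              self.embed_action(actions)], dim=2).reshape(B, 3 * T, -1)
        tokens = tokens + self.pos(torch.arange(3 * T, device=tokens.device))
        causal = torch.triu(torch.full((3 * T, 3 * T), float("-inf"),
                                       device=tokens.device), diagonal=1)
        h = self.backbone(tokens, mask=causal)
        return self.predict_action(h[:, 1::3])   # action prediction from each state token
```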
Probability Paths and the Structure of Predictions over Time
|
https://papers.nips.cc/paper_files/paper/2021/hash/7f53f8c6c730af6aeb52e66eb74d8507-Abstract.html
|
Zhiyuan Jerry Lin, Hao Sheng, Sharad Goel
|
https://papers.nips.cc/paper_files/paper/2021/hash/7f53f8c6c730af6aeb52e66eb74d8507-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12780-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7f53f8c6c730af6aeb52e66eb74d8507-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=OU4LL1qP3Dg
|
https://papers.nips.cc/paper_files/paper/2021/file/7f53f8c6c730af6aeb52e66eb74d8507-Supplemental.pdf
|
In settings ranging from weather forecasts to political prognostications to financial projections, probability estimates of future binary outcomes often evolve over time. For example, the estimated likelihood of rain on a specific day changes by the hour as new information becomes available. Given a collection of such probability paths, we introduce a Bayesian framework -- which we call the Gaussian latent information martingale, or GLIM -- for modeling the structure of dynamic predictions over time. Suppose, for example, that the likelihood of rain in a week is 50%, and consider two hypothetical scenarios. In the first, one expects the forecast to be equally likely to become either 25% or 75% tomorrow; in the second, one expects the forecast to stay constant for the next several days. A time-sensitive decision-maker might select a course of action immediately in the latter scenario, but may postpone their decision in the former, knowing that new information is imminent. We model these trajectories by assuming predictions update according to a latent process of information flow, which is inferred from historical data. In contrast to general methods for time series analysis, this approach preserves important properties of probability paths such as the martingale structure and appropriate amount of volatility and better quantifies future uncertainties around probability paths. We show that GLIM outperforms three popular baseline methods, producing better estimated posterior probability path distributions measured by three different metrics. By elucidating the dynamic structure of predictions over time, we hope to help individuals make more informed choices.
| null |
Deep Extended Hazard Models for Survival Analysis
|
https://papers.nips.cc/paper_files/paper/2021/hash/7f6caf1f0ba788cd7953d817724c2b6e-Abstract.html
|
Qixian Zhong, Jonas W. Mueller, Jane-Ling Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/7f6caf1f0ba788cd7953d817724c2b6e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12781-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7f6caf1f0ba788cd7953d817724c2b6e-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=GUD7rNkaWKr
|
https://papers.nips.cc/paper_files/paper/2021/file/7f6caf1f0ba788cd7953d817724c2b6e-Supplemental.pdf
|
Unlike standard prediction tasks, survival analysis requires modeling right censored data, which must be treated with care. While deep neural networks excel in traditional supervised learning, it remains unclear how to best utilize these models in survival analysis. A key question asks which data-generating assumptions of traditional survival models should be retained and which should be made more flexible via the function-approximating capabilities of neural networks. Rather than estimating the survival function targeted by most existing methods, we introduce a Deep Extended Hazard (DeepEH) model to provide a flexible and general framework for deep survival analysis. The extended hazard model includes the conventional Cox proportional hazards and accelerated failure time models as special cases, so DeepEH subsumes the popular Deep Cox proportional hazard (DeepSurv) and Deep Accelerated Failure Time (DeepAFT) models. We additionally provide theoretical support for the proposed DeepEH model by establishing consistency and convergence rate of the survival function estimator, which underscore the attractive feature that deep learning is able to detect low-dimensional structure of data in high-dimensional space. Numerical experiments also provide evidence that the proposed methods outperform existing statistical and deep learning approaches to survival analysis.
| null |
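For reference, the extended hazard family referred to above can be written as

$$\lambda(t \mid x) \;=\; \lambda_0\!\big(t\, e^{g_1(x)}\big)\, e^{g_2(x)},$$

which reduces to the Cox proportional hazards model when $g_1 \equiv 0$ and to the accelerated failure time model when $g_1 \equiv g_2$; in DeepEH the covariate effects would be parameterized by neural networks (the notation $g_1$, $g_2$ and this particular presentation are assumptions of this note rather than quoted from the paper).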
TNASP: A Transformer-based NAS Predictor with a Self-evolution Framework
|
https://papers.nips.cc/paper_files/paper/2021/hash/7fa1575cbd7027c9a799983a485c3c2f-Abstract.html
|
Shun Lu, Jixiang Li, Jianchao Tan, Sen Yang, Ji Liu
|
https://papers.nips.cc/paper_files/paper/2021/hash/7fa1575cbd7027c9a799983a485c3c2f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12782-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7fa1575cbd7027c9a799983a485c3c2f-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=_aJnkoYKj6s
|
https://papers.nips.cc/paper_files/paper/2021/file/7fa1575cbd7027c9a799983a485c3c2f-Supplemental.pdf
|
Predictor-based Neural Architecture Search (NAS) continues to be an important topic because it aims to mitigate the time-consuming search procedure of traditional NAS methods. A promising performance predictor determines the quality of final searched models in predictor-based NAS methods. Most existing predictor-based methodologies train model-based predictors under a proxy dataset setting, which may suffer from accuracy decline and poor generalization, mainly due to their poor abilities to represent spatial topology information of the graph structure data. Besides the poor encoding for spatial topology information, these works did not take advantage of the temporal information such as historical evaluations during training. Thus, we propose a Transformer-based NAS performance predictor, associated with a Laplacian-matrix-based positional encoding strategy, which better represents topology information and achieves better performance than previous state-of-the-art methods on NAS-Bench-101, NAS-Bench-201, and DARTS search space. Furthermore, we also propose a self-evolution framework that can fully utilize temporal information as guidance. This framework iteratively involves the evaluations of previously predicted results as constraints into current optimization iteration, thus further improving the performance of our predictor. Such a framework is model-agnostic and can thus enhance performance on various backbone structures for the prediction task. Our proposed method helped us rank 2nd among all teams in CVPR 2021 NAS Competition Track 2: Performance Prediction Track.
| null |
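A hedged NumPy sketch of Laplacian positional encodings for an architecture graph: the eigenvectors of the symmetrically normalized Laplacian attached to the smallest eigenvalues serve as per-node position vectors for the transformer predictor. The symmetrization of the cell DAG and the choice of `k` are assumptions of this sketch, not necessarily the paper's exact variant.

```python
import numpy as np

def laplacian_positional_encoding(adj, k):
    a = np.asarray(adj, dtype=float)
    a = np.maximum(a, a.T)                     # treat the cell DAG as undirected here
    deg = a.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    lap = np.eye(len(a)) - d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]
    eigval, eigvec = np.linalg.eigh(lap)       # eigenvalues in ascending order
    return eigvec[:, :k]                       # one k-dimensional positional vector per node
```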
Automorphic Equivalence-aware Graph Neural Network
|
https://papers.nips.cc/paper_files/paper/2021/hash/7ffb4e0ece07869880d51662a2234143-Abstract.html
|
Fengli Xu, Quanming Yao, Pan Hui, Yong Li
|
https://papers.nips.cc/paper_files/paper/2021/hash/7ffb4e0ece07869880d51662a2234143-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12783-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/7ffb4e0ece07869880d51662a2234143-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=TmKQ_XeezEB
|
https://papers.nips.cc/paper_files/paper/2021/file/7ffb4e0ece07869880d51662a2234143-Supplemental.pdf
|
Distinguishing the automorphic equivalence of nodes in a graph plays an essential role in many scientific domains, e.g., computational biology and social network analysis. However, existing graph neural networks (GNNs) fail to capture such an important property. To make GNN aware of automorphic equivalence, we first introduce a localized variant of this concept --- ego-centered automorphic equivalence (Ego-AE). Then, we design a novel variant of GNN, i.e., GRAPE, that uses learnable AE-aware aggregators to explicitly differentiate the Ego-AE of each node's neighbors with the aid of various subgraph templates. While the design of subgraph templates can be hard, we further propose a genetic algorithm to automatically search them from graph data. Moreover, we theoretically prove that GRAPE is expressive in terms of generating distinct representations for nodes with different Ego-AE features, which fills in a fundamental gap of existing GNN variants. Finally, we empirically validate our model on eight real-world graph datasets, including a social network, an e-commerce co-purchase network, and citation networks, and show that it consistently outperforms existing GNNs. The source code is publicly available at https://github.com/tsinghua-fib-lab/GRAPE.
| null |
Random Shuffling Beats SGD Only After Many Epochs on Ill-Conditioned Problems
|
https://papers.nips.cc/paper_files/paper/2021/hash/803ef56843860e4a48fc4cdb3065e8ce-Abstract.html
|
Itay Safran, Ohad Shamir
|
https://papers.nips.cc/paper_files/paper/2021/hash/803ef56843860e4a48fc4cdb3065e8ce-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12784-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/803ef56843860e4a48fc4cdb3065e8ce-Paper.pdf
|
https://openreview.net/forum?id=fNKwtwJHjx
|
https://papers.nips.cc/paper_files/paper/2021/file/803ef56843860e4a48fc4cdb3065e8ce-Supplemental.pdf
|
Recently, there has been much interest in studying the convergence rates of without-replacement SGD, and proving that it is faster than with-replacement SGD in the worst case. However, known lower bounds ignore the problem's geometry, including its condition number, whereas the upper bounds explicitly depend on it. Perhaps surprisingly, we prove that when the condition number is taken into account, without-replacement SGD \emph{does not} significantly improve on with-replacement SGD in terms of worst-case bounds, unless the number of epochs (passes over the data) is larger than the condition number. Since many problems in machine learning and other areas are both ill-conditioned and involve large datasets, this indicates that without-replacement does not necessarily improve over with-replacement sampling for realistic iteration budgets. We show this by providing new lower and upper bounds which are tight (up to log factors), for quadratic problems with commuting quadratic terms, precisely quantifying the dependence on the problem parameters.
| null |
Analytic Study of Families of Spurious Minima in Two-Layer ReLU Neural Networks: A Tale of Symmetry II
|
https://papers.nips.cc/paper_files/paper/2021/hash/806d926414ce19d907700e23177ab4ff-Abstract.html
|
Yossi Arjevani, Michael Field
|
https://papers.nips.cc/paper_files/paper/2021/hash/806d926414ce19d907700e23177ab4ff-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12785-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/806d926414ce19d907700e23177ab4ff-Paper.pdf
|
https://openreview.net/forum?id=wxBGz3ScBBo
|
https://papers.nips.cc/paper_files/paper/2021/file/806d926414ce19d907700e23177ab4ff-Supplemental.pdf
|
We study the optimization problem associated with fitting two-layer ReLU neural networks with respect to the squared loss, where labels are generated by a target network. We make use of the rich symmetry structure to develop a novel set of tools for studying families of spurious minima. In contrast to existing approaches which operate in limiting regimes, our technique directly addresses the nonconvex loss landscape for finite number of inputs $d$ and neurons $k$, and provides analytic, rather than heuristic, information. In particular, we derive analytic estimates for the loss at different minima, and prove that, modulo $O(d^{-1/2})$-terms, the Hessian spectrum concentrates near small positive constants, with the exception of $\Theta(d)$ eigenvalues which grow linearly with~$d$. We further show that the Hessian spectrum at global and spurious minima coincide to $O(d^{-1/2})$-order, thus challenging our ability to argue about statistical generalization through local curvature. Lastly, our technique provides the exact \emph{fractional} dimensionality at which families of critical points turn from saddles into spurious minima. This makes possible the study of the creation and the annihilation of spurious minima using powerful tools from equivariant bifurcation theory.
| null |
CAM-GAN: Continual Adaptation Modules for Generative Adversarial Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/8073bd4ed0fe0c330290c58056a2cd5e-Abstract.html
|
Sakshi Varshney, Vinay Kumar Verma, P. K. Srijith, Lawrence Carin, Piyush Rai
|
https://papers.nips.cc/paper_files/paper/2021/hash/8073bd4ed0fe0c330290c58056a2cd5e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12786-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8073bd4ed0fe0c330290c58056a2cd5e-Paper.pdf
|
https://openreview.net/forum?id=2r6F9duQ6o5
|
https://papers.nips.cc/paper_files/paper/2021/file/8073bd4ed0fe0c330290c58056a2cd5e-Supplemental.pdf
|
We present a continual learning approach for generative adversarial networks (GANs), by designing and leveraging parameter-efficient feature map transformations. Our approach is based on learning a set of global and task-specific parameters. The global parameters are fixed across tasks whereas the task-specific parameters act as local adapters for each task, and help in efficiently obtaining task-specific feature maps. Moreover, we propose an element-wise addition of residual bias in the transformed feature space, which further helps stabilize GAN training in such settings. Our approach also leverages task similarities based on the Fisher information matrix. Leveraging this knowledge from previous tasks significantly improves the model performance. In addition, the similarity measure also helps reduce the parameter growth in continual adaptation and helps to learn a compact model. In contrast to the recent approaches for continually-learned GANs, the proposed approach provides a memory-efficient way to perform effective continual data generation. Through extensive experiments on challenging and diverse datasets, we show that the feature-map-transformation approach outperforms state-of-the-art methods for continually-learned GANs, with substantially fewer parameters. The proposed method generates high-quality samples that can also improve the generative-replay-based continual learning for discriminative tasks.
| null |
Structured Dropout Variational Inference for Bayesian Neural Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/80a160ff31266be2f93012a2a3eca713-Abstract.html
|
Son Nguyen, Duong Nguyen, Khai Nguyen, Khoat Than, Hung Bui, Nhat Ho
|
https://papers.nips.cc/paper_files/paper/2021/hash/80a160ff31266be2f93012a2a3eca713-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12787-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/80a160ff31266be2f93012a2a3eca713-Paper.pdf
|
https://openreview.net/forum?id=QgNAUqQLh4
|
https://papers.nips.cc/paper_files/paper/2021/file/80a160ff31266be2f93012a2a3eca713-Supplemental.pdf
|
Approximate inference in Bayesian deep networks faces a dilemma: how to yield high-fidelity posterior approximations while maintaining computational efficiency and scalability. We tackle this challenge by introducing a novel variational structured approximation inspired by the Bayesian interpretation of Dropout regularization. Concretely, we focus on the inflexibility of the factorized structure of the Dropout posterior and then propose an improved method called Variational Structured Dropout (VSD). VSD employs an orthogonal transformation to learn a structured representation of the variational Gaussian noise at manageable complexity, and consequently induces statistical dependencies in the approximate posterior. Theoretically, VSD successfully addresses the pathologies of previous Variational Dropout methods and thus offers a standard Bayesian justification. We further show that VSD induces an adaptive regularization term with several desirable properties which contribute to better generalization. Finally, we conduct extensive experiments on standard benchmarks to demonstrate the effectiveness of VSD over state-of-the-art variational methods on predictive accuracy, uncertainty estimation, and out-of-distribution detection.
| null |
Neural Relightable Participating Media Rendering
|
https://papers.nips.cc/paper_files/paper/2021/hash/80f24ef493982c552b6943f1411f7e2c-Abstract.html
|
Quan Zheng, Gurprit Singh, Hans-peter Seidel
|
https://papers.nips.cc/paper_files/paper/2021/hash/80f24ef493982c552b6943f1411f7e2c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12788-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/80f24ef493982c552b6943f1411f7e2c-Paper.pdf
|
https://openreview.net/forum?id=lkYOOQIcC0L
|
https://papers.nips.cc/paper_files/paper/2021/file/80f24ef493982c552b6943f1411f7e2c-Supplemental.pdf
|
Learning a neural radiance field of a scene has recently enabled realistic novel view synthesis, but such models can only synthesize images under the original, fixed lighting condition. They are therefore not flexible enough for much-desired tasks like relighting, scene editing, and scene composition. To tackle this problem, several recent methods propose to disentangle reflectance and illumination from the radiance field. These methods can cope with solid objects that have opaque surfaces, but participating media are neglected. Moreover, they take into account only direct illumination or at most one-bounce indirect illumination, and thus suffer from energy loss because higher-order indirect illumination is ignored. We propose to learn neural representations for participating media with a complete simulation of global illumination. We estimate direct illumination via ray tracing and compute indirect illumination with spherical harmonics. Our approach avoids computing the lengthy indirect bounces and does not suffer from energy loss. Our experiments on multiple scenes show that our approach achieves superior visual quality and numerical performance compared to state-of-the-art methods, and that it generalizes to solid objects with opaque surfaces as well.
| null |
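The spherical-harmonics (SH) representation used above for indirect illumination stores a directional signal as a few coefficients that can be evaluated cheaply in any direction. A minimal sketch showing only bands 0 and 1 with illustrative coefficient values; the paper's learned representation, band count, and integration with ray tracing are not reproduced here.

    # Sketch: evaluating a radiance signal stored as low-order real SH coefficients.
    import numpy as np

    def sh_basis_l1(direction: np.ndarray) -> np.ndarray:
        """Real spherical-harmonic basis functions up to degree 1 for a unit direction."""
        x, y, z = direction
        return np.array([
            0.282095,        # Y_0^0
            0.488603 * y,    # Y_1^{-1}
            0.488603 * z,    # Y_1^{0}
            0.488603 * x,    # Y_1^{1}
        ])

    def eval_sh(coeffs: np.ndarray, direction: np.ndarray) -> float:
        direction = direction / np.linalg.norm(direction)
        return float(coeffs @ sh_basis_l1(direction))

    coeffs = np.array([1.0, 0.0, 0.5, 0.0])   # mostly ambient, brighter toward +z
    print(eval_sh(coeffs, np.array([0.0, 0.0, 1.0])))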
Efficient Neural Network Training via Forward and Backward Propagation Sparsification
|
https://papers.nips.cc/paper_files/paper/2021/hash/80f2f15983422987ea30d77bb531be86-Abstract.html
|
Xiao Zhou, Weizhong Zhang, Zonghao Chen, SHIZHE DIAO, Tong Zhang
|
https://papers.nips.cc/paper_files/paper/2021/hash/80f2f15983422987ea30d77bb531be86-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12789-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/80f2f15983422987ea30d77bb531be86-Paper.pdf
|
https://openreview.net/forum?id=JnAU9HkXr2
|
https://papers.nips.cc/paper_files/paper/2021/file/80f2f15983422987ea30d77bb531be86-Supplemental.pdf
|
Sparse training is a natural idea to accelerate the training of deep neural networks and reduce memory usage, especially since large modern neural networks are significantly over-parameterized. However, most existing methods cannot achieve this goal in practice because the chain-rule-based gradient estimators (w.r.t. structure parameters) adopted by previous methods require dense computation at least in the backward propagation step. This paper solves this problem by proposing an efficient sparse training method with completely sparse forward and backward passes. We first formulate the training process as a continuous minimization problem under a global sparsity constraint. We then separate the optimization process into two steps, corresponding to weight update and structure parameter update. For the former step, we use the conventional chain rule, which can be made sparse by exploiting the sparse structure. For the latter step, instead of using chain-rule-based gradient estimators as in existing methods, we propose a variance-reduced policy gradient estimator, which only requires two forward passes without backward propagation, thus achieving completely sparse training. We prove that the variance of our gradient estimator is bounded. Extensive experimental results on real-world datasets demonstrate that, compared to previous methods, our algorithm is much more effective in accelerating the training process, up to an order of magnitude faster.
| null |
Learning to Ground Multi-Agent Communication with Autoencoders
|
https://papers.nips.cc/paper_files/paper/2021/hash/80fee67c8a4c4989bf8a580b4bbb0cd2-Abstract.html
|
Toru Lin, Jacob Huh, Christopher Stauffer, Ser Nam Lim, Phillip Isola
|
https://papers.nips.cc/paper_files/paper/2021/hash/80fee67c8a4c4989bf8a580b4bbb0cd2-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12790-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/80fee67c8a4c4989bf8a580b4bbb0cd2-Paper.pdf
|
https://openreview.net/forum?id=8bxZ7WGg4b2
|
https://papers.nips.cc/paper_files/paper/2021/file/80fee67c8a4c4989bf8a580b4bbb0cd2-Supplemental.zip
|
Communication requires having a common language, a lingua franca, between agents. This language could emerge via a consensus process, but it may require many generations of trial and error. Alternatively, the lingua franca can be given by the environment, where agents ground their language in representations of the observed world. We demonstrate a simple way to ground language in learned representations, which facilitates decentralized multi-agent communication and coordination. We find that a standard representation learning algorithm -- autoencoding -- is sufficient for arriving at a grounded common language. When agents broadcast these representations, they learn to understand and respond to each other's utterances and achieve surprisingly strong task performance across a variety of multi-agent communication environments.
| null |
Large-Scale Wasserstein Gradient Flows
|
https://papers.nips.cc/paper_files/paper/2021/hash/810dfbbebb17302018ae903e9cb7a483-Abstract.html
|
Petr Mokrov, Alexander Korotin, Lingxiao Li, Aude Genevay, Justin M. Solomon, Evgeny Burnaev
|
https://papers.nips.cc/paper_files/paper/2021/hash/810dfbbebb17302018ae903e9cb7a483-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12791-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/810dfbbebb17302018ae903e9cb7a483-Paper.pdf
|
https://openreview.net/forum?id=nlLjIuHsMHp
|
https://papers.nips.cc/paper_files/paper/2021/file/810dfbbebb17302018ae903e9cb7a483-Supplemental.pdf
|
Wasserstein gradient flows provide a powerful means of understanding and solving many diffusion equations. Specifically, Fokker-Planck equations, which model the diffusion of probability measures, can be understood as gradient descent over entropy functionals in Wasserstein space. This equivalence, introduced by Jordan, Kinderlehrer and Otto, inspired the so-called JKO scheme to approximate these diffusion processes via an implicit discretization of the gradient flow in Wasserstein space. Solving the optimization problem associated with each JKO step, however, presents serious computational challenges. We introduce a scalable method to approximate Wasserstein gradient flows, targeted to machine learning applications. Our approach relies on input-convex neural networks (ICNNs) to discretize the JKO steps, which can be optimized by stochastic gradient descent. Contrarily to previous work, our method does not require domain discretization or particle simulation. As a result, we can sample from the measure at each time step of the diffusion and compute its probability density. We demonstrate the performance of our algorithm by computing diffusions following the Fokker-Planck equation and apply it to unnormalized density sampling as well as nonlinear filtering.
| null |
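For reference, the JKO step mentioned above is the proximal recursion $$\rho_{k+1} \in \operatorname*{arg\,min}_{\rho}\ \frac{1}{2\tau}\, W_2^2(\rho, \rho_k) + F(\rho),$$ where $W_2$ is the 2-Wasserstein distance, $\tau$ the time step, and, for the Fokker-Planck equation, $F(\rho) = \int V\, \mathrm{d}\rho + \int \rho \log \rho\, \mathrm{d}x$ (potential energy plus negative entropy). The paper's contribution is to parameterize and solve this step with input-convex neural networks; the formula above is the standard scheme, stated here only for context.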
Who Leads and Who Follows in Strategic Classification?
|
https://papers.nips.cc/paper_files/paper/2021/hash/812214fb8e7066bfa6e32c626c2c688b-Abstract.html
|
Tijana Zrnic, Eric Mazumdar, Shankar Sastry, Michael Jordan
|
https://papers.nips.cc/paper_files/paper/2021/hash/812214fb8e7066bfa6e32c626c2c688b-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12792-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/812214fb8e7066bfa6e32c626c2c688b-Paper.pdf
|
https://openreview.net/forum?id=cknBzDV6XvN
|
https://papers.nips.cc/paper_files/paper/2021/file/812214fb8e7066bfa6e32c626c2c688b-Supplemental.pdf
|
As predictive models are deployed into the real world, they must increasingly contend with strategic behavior. A growing body of work on strategic classification treats this problem as a Stackelberg game: the decision-maker "leads" in the game by deploying a model, and the strategic agents "follow" by playing their best response to the deployed model. Importantly, in this framing, the burden of learning is placed solely on the decision-maker, while the agents’ best responses are implicitly treated as instantaneous. In this work, we argue that the order of play in strategic classification is fundamentally determined by the relative frequencies at which the decision-maker and the agents adapt to each other’s actions. In particular, by generalizing the standard model to allow both players to learn over time, we show that a decision-maker that makes updates faster than the agents can reverse the order of play, meaning that the agents lead and the decision-maker follows. We observe in standard learning settings that such a role reversal can be desirable for both the decision-maker and the strategic agents. Finally, we show that a decision-maker with the freedom to choose their update frequency can induce learning dynamics that converge to Stackelberg equilibria with either order of play.
| null |
Unadversarial Examples: Designing Objects for Robust Vision
|
https://papers.nips.cc/paper_files/paper/2021/hash/816a6db41f0e44644bc65808b6db5ca4-Abstract.html
|
Hadi Salman, Andrew Ilyas, Logan Engstrom, Sai Vemprala, Aleksander Madry, Ashish Kapoor
|
https://papers.nips.cc/paper_files/paper/2021/hash/816a6db41f0e44644bc65808b6db5ca4-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12793-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/816a6db41f0e44644bc65808b6db5ca4-Paper.pdf
|
https://openreview.net/forum?id=DxXNxZQVcc
|
https://papers.nips.cc/paper_files/paper/2021/file/816a6db41f0e44644bc65808b6db5ca4-Supplemental.pdf
|
We study a class of computer vision settings wherein one can modify the design of the objects being recognized. We develop a framework that leverages this capability---and deep networks' unusual sensitivity to input perturbations---to design ``robust objects,'' i.e., objects that are explicitly optimized to be confidently classified. Our framework yields improved performance on standard benchmarks, a simulated robotics environment, and physical-world experiments.
| null |
Deep Jump Learning for Off-Policy Evaluation in Continuous Treatment Settings
|
https://papers.nips.cc/paper_files/paper/2021/hash/816b112c6105b3ebd537828a39af4818-Abstract.html
|
Hengrui Cai, Chengchun Shi, Rui Song, Wenbin Lu
|
https://papers.nips.cc/paper_files/paper/2021/hash/816b112c6105b3ebd537828a39af4818-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12794-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/816b112c6105b3ebd537828a39af4818-Paper.pdf
|
https://openreview.net/forum?id=rvKD3iqtBdk
| null |
We consider off-policy evaluation (OPE) in continuous treatment settings, such as personalized dose-finding. In OPE, one aims to estimate the mean outcome under a new treatment decision rule using historical data generated by a different decision rule. Most existing works on OPE focus on discrete treatment settings. To handle continuous treatments, we develop a novel estimation method for OPE using deep jump learning. The key ingredient of our method lies in adaptively discretizing the treatment space using deep discretization, by leveraging deep learning and multi-scale change point detection. This allows us to apply existing OPE methods in discrete treatments to handle continuous treatments. Our method is further justified by theoretical results, simulations, and a real application to Warfarin Dosing.
| null |
Attention Approximates Sparse Distributed Memory
|
https://papers.nips.cc/paper_files/paper/2021/hash/8171ac2c5544a5cb54ac0f38bf477af4-Abstract.html
|
Trenton Bricken, Cengiz Pehlevan
|
https://papers.nips.cc/paper_files/paper/2021/hash/8171ac2c5544a5cb54ac0f38bf477af4-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12795-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8171ac2c5544a5cb54ac0f38bf477af4-Paper.pdf
|
https://openreview.net/forum?id=WVYzd7GvaOM
|
https://papers.nips.cc/paper_files/paper/2021/file/8171ac2c5544a5cb54ac0f38bf477af4-Supplemental.pdf
|
While Attention has come to be an important mechanism in deep learning, there remains limited intuition for why it works so well. Here, we show that Transformer Attention can be closely related under certain data conditions to Kanerva's Sparse Distributed Memory (SDM), a biologically plausible associative memory model. We confirm that these conditions are satisfied in pre-trained GPT2 Transformer models. We discuss the implications of the Attention-SDM map and provide new computational and biological interpretations of Attention.
| null |
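A schematic way to see the correspondence claimed above: softmax attention reads a value as $$\mathrm{Attn}(q) = \sum_i \frac{\exp(\beta\, q^\top k_i)}{\sum_j \exp(\beta\, q^\top k_j)}\, v_i,$$ while an SDM read returns a weighted sum $\sum_i w\big(d(q, k_i)\big)\, v_i$ over stored patterns whose addresses fall within a read radius of the query, with $d$ a distance between addresses. The paper's claim is that, under certain data conditions, the exponential softmax weights track SDM's radius-based weight profile; the exact conditions and the weighting function $w$ are as specified in the paper, not here.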
Augmented Shortcuts for Vision Transformers
|
https://papers.nips.cc/paper_files/paper/2021/hash/818f4654ed39a1c147d1e51a00ffb4cb-Abstract.html
|
Yehui Tang, Kai Han, Chang Xu, An Xiao, Yiping Deng, Chao Xu, Yunhe Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/818f4654ed39a1c147d1e51a00ffb4cb-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12796-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/818f4654ed39a1c147d1e51a00ffb4cb-Paper.pdf
|
https://openreview.net/forum?id=XiZYCewdxMQ
|
https://papers.nips.cc/paper_files/paper/2021/file/818f4654ed39a1c147d1e51a00ffb4cb-Supplemental.pdf
|
Transformer models have recently achieved great progress on computer vision tasks. The rapid development of vision transformers is mainly driven by their high representation ability for extracting informative features from input images. However, mainstream transformer models are designed with deep architectures, and feature diversity is continuously reduced as depth increases, i.e., feature collapse. In this paper, we theoretically analyze the feature collapse phenomenon and study the relationship between shortcuts and feature diversity in these transformer models. Then, we present an augmented shortcut scheme, which inserts additional paths with learnable parameters in parallel with the original shortcuts. To save computational cost, we further explore an efficient approach that uses block-circulant projections to implement the augmented shortcuts. Extensive experiments conducted on benchmark datasets demonstrate the effectiveness of the proposed method, which yields an accuracy increase of about 1% for state-of-the-art vision transformers without noticeably increasing their parameters or FLOPs.
| null |
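The block-circulant projection mentioned above is cheap because a circulant matrix-vector product can be computed with FFTs in $O(d \log d)$ time instead of forming the dense $d \times d$ matrix. A minimal single-block sketch; the paper's block layout and learnable parameterization are omitted.

    # Sketch: applying a circulant projection via FFT and checking it against the
    # explicit dense circulant matrix.
    import numpy as np

    def circulant_matvec(c: np.ndarray, x: np.ndarray) -> np.ndarray:
        """Multiply the circulant matrix whose first column is c by the vector x."""
        return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

    rng = np.random.default_rng(0)
    c = rng.normal(size=8)
    x = rng.normal(size=8)
    dense = np.array([np.roll(c, k) for k in range(8)]).T   # column k is roll(c, k)
    print(np.allclose(dense @ x, circulant_matvec(c, x)))    # True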
Finding Regions of Heterogeneity in Decision-Making via Expected Conditional Covariance
|
https://papers.nips.cc/paper_files/paper/2021/hash/81930c54e08b6d26d9638dd2e4656dc1-Abstract.html
|
Justin Lim, Christina X Ji, Michael Oberst, Saul Blecker, Leora Horwitz, David Sontag
|
https://papers.nips.cc/paper_files/paper/2021/hash/81930c54e08b6d26d9638dd2e4656dc1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12797-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/81930c54e08b6d26d9638dd2e4656dc1-Paper.pdf
|
https://openreview.net/forum?id=4hBXGTdS6Lc
|
https://papers.nips.cc/paper_files/paper/2021/file/81930c54e08b6d26d9638dd2e4656dc1-Supplemental.pdf
|
Individuals often make different decisions when faced with the same context, due to personal preferences and background. For instance, judges may vary in their leniency towards certain drug-related offenses, and doctors may vary in their preference for how to start treatment for certain types of patients. With these examples in mind, we present an algorithm for identifying types of contexts (e.g., types of cases or patients) with high inter-decision-maker disagreement. We formalize this as a causal inference problem, seeking a region where the assignment of decision-maker has a large causal effect on the decision. Our algorithm finds such a region by maximizing an empirical objective, and we give a generalization bound for its performance. In a semi-synthetic experiment, we show that our algorithm recovers the correct region of heterogeneity accurately compared to baselines. Finally, we apply our algorithm to real-world healthcare datasets, recovering variation that aligns with existing clinical knowledge.
| null |
Identifying and Benchmarking Natural Out-of-Context Prediction Problems
|
https://papers.nips.cc/paper_files/paper/2021/hash/81c2f886f91e18fe16d6f4e865877cb6-Abstract.html
|
David Madras, Richard Zemel
|
https://papers.nips.cc/paper_files/paper/2021/hash/81c2f886f91e18fe16d6f4e865877cb6-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12798-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/81c2f886f91e18fe16d6f4e865877cb6-Paper.pdf
|
https://openreview.net/forum?id=ByPR_hOE_EY
|
https://papers.nips.cc/paper_files/paper/2021/file/81c2f886f91e18fe16d6f4e865877cb6-Supplemental.pdf
|
Deep learning systems frequently fail at out-of-context (OOC) prediction, the problem of making reliable predictions on uncommon or unusual inputs or subgroups of the training distribution. Accordingly, a number of benchmarks for measuring OOC performance have recently been introduced. In this work, we introduce a framework unifying the literature on OOC performance measurement, and demonstrate how rich auxiliary information can be leveraged to identify candidate sets of OOC examples in existing datasets. We present NOOCh: a suite of naturally-occurring "challenge sets", and show how varying notions of context can be used to probe specific OOC failure modes. Experimentally, we explore the tradeoffs between various learning approaches on these challenge sets and demonstrate how the choices made in designing OOC benchmarks can yield varying conclusions.
| null |
Label Disentanglement in Partition-based Extreme Multilabel Classification
|
https://papers.nips.cc/paper_files/paper/2021/hash/81c8727c62e800be708dbf37c4695dff-Abstract.html
|
Xuanqing Liu, Wei-Cheng Chang, Hsiang-Fu Yu, Cho-Jui Hsieh, Inderjit Dhillon
|
https://papers.nips.cc/paper_files/paper/2021/hash/81c8727c62e800be708dbf37c4695dff-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12799-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/81c8727c62e800be708dbf37c4695dff-Paper.pdf
|
https://openreview.net/forum?id=eUTd06-qCrT
|
https://papers.nips.cc/paper_files/paper/2021/file/81c8727c62e800be708dbf37c4695dff-Supplemental.pdf
|
Partition-based methods are increasingly used in extreme multi-label classification (XMC) problems due to their scalability to large output spaces (e.g., millions or more). However, existing methods partition the large label space into mutually exclusive clusters, which is sub-optimal when labels have multi-modality and rich semantics. For instance, the label “Apple” can be the fruit or the brand name, which leads to the following research question: can we disentangle these multi-modal labels with non-exclusive clustering tailored for downstream XMC tasks? In this paper, we show that the label assignment problem in partition-based XMC can be formulated as an optimization problem, with the objective of maximizing precision rates. This leads to an efficient algorithm to form flexible and overlapping label clusters, and a method that alternately optimizes the cluster assignments and the model parameters for partition-based XMC. Experimental results on synthetic and real datasets show that our method can successfully disentangle multi-modal labels, leading to state-of-the-art (SOTA) results on four XMC benchmarks.
| null |
Leveraging SE(3) Equivariance for Self-supervised Category-Level Object Pose Estimation from Point Clouds
|
https://papers.nips.cc/paper_files/paper/2021/hash/81e74d678581a3bb7a720b019f4f1a93-Abstract.html
|
Xiaolong Li, Yijia Weng, Li Yi, Leonidas J. Guibas, A. Abbott, Shuran Song, He Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/81e74d678581a3bb7a720b019f4f1a93-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12800-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/81e74d678581a3bb7a720b019f4f1a93-Paper.pdf
|
https://openreview.net/forum?id=wGRNAqVBQT2
|
https://papers.nips.cc/paper_files/paper/2021/file/81e74d678581a3bb7a720b019f4f1a93-Supplemental.pdf
|
Category-level object pose estimation aims to find 6D object poses of previously unseen object instances from known categories without access to object CAD models. To reduce the huge amount of pose annotations needed for category-level learning, we propose for the first time a self-supervised learning framework to estimate category-level 6D object pose from single 3D point clouds. During training, our method assumes no ground-truth pose annotations, no CAD models, and no multi-view supervision. The key to our method is to disentangle shape and pose through an invariant shape reconstruction module and an equivariant pose estimation module, empowered by SE(3) equivariant point cloud networks. The invariant shape reconstruction module learns to perform aligned reconstructions, yielding a category-level reference frame without using any annotations. In addition, the equivariant pose estimation module achieves category-level pose estimation accuracy that is comparable to some fully supervised methods. Extensive experiments demonstrate the effectiveness of our approach on both complete and partial depth point clouds from the ModelNet40 benchmark, and on real depth point clouds from the NOCS-REAL 275 dataset. The project page with code and visualizations can be found at: dragonlong.github.io/equi-pose.
| null |
A Theoretical Analysis of Fine-tuning with Linear Teachers
|
https://papers.nips.cc/paper_files/paper/2021/hash/82039d16dce0aab3913b6a7ac73deff7-Abstract.html
|
Gal Shachaf, Alon Brutzkus, Amir Globerson
|
https://papers.nips.cc/paper_files/paper/2021/hash/82039d16dce0aab3913b6a7ac73deff7-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12801-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/82039d16dce0aab3913b6a7ac73deff7-Paper.pdf
|
https://openreview.net/forum?id=DyE8hmj2dse
|
https://papers.nips.cc/paper_files/paper/2021/file/82039d16dce0aab3913b6a7ac73deff7-Supplemental.pdf
|
Fine-tuning is a common practice in deep learning, achieving excellent generalization results on downstream tasks using relatively little training data. Although widely used in practice, it is not well understood theoretically. Here we analyze the sample complexity of this scheme for regression with linear teachers in several settings. Intuitively, the success of fine-tuning depends on the similarity between the source tasks and the target task. But what is the right way of measuring this similarity? We show that the relevant measure has to do with the relation between the source task, the target task and the covariance structure of the target data. In the setting of linear regression, we show that under realistic settings there can be substantial sample complexity reduction when the above measure is low. For deep linear regression, we propose a novel result regarding the inductive bias of gradient-based training when the network is initialized with pretrained weights. Using this result we show that the similarity measure for this setting is also affected by the depth of the network. We conclude with results on shallow ReLU models, and analyze the dependence of sample complexity there on source and target tasks. We empirically demonstrate our results for both synthetic and realistic data.
| null |
Overinterpretation reveals image classification model pathologies
|
https://papers.nips.cc/paper_files/paper/2021/hash/8217bb4e7fa0541e0f5e04fea764ab91-Abstract.html
|
Brandon Carter, Siddhartha Jain, Jonas W. Mueller, David Gifford
|
https://papers.nips.cc/paper_files/paper/2021/hash/8217bb4e7fa0541e0f5e04fea764ab91-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12802-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8217bb4e7fa0541e0f5e04fea764ab91-Paper.pdf
|
https://openreview.net/forum?id=ohZjthN1ncg
|
https://papers.nips.cc/paper_files/paper/2021/file/8217bb4e7fa0541e0f5e04fea764ab91-Supplemental.pdf
|
Image classifiers are typically scored on their test set accuracy, but high accuracy can mask a subtle type of model failure. We find that high scoring convolutional neural networks (CNNs) on popular benchmarks exhibit troubling pathologies that allow them to display high accuracy even in the absence of semantically salient features. When a model provides a high-confidence decision without salient supporting input features, we say the classifier has overinterpreted its input, finding too much class-evidence in patterns that appear nonsensical to humans. Here, we demonstrate that neural networks trained on CIFAR-10 and ImageNet suffer from overinterpretation, and we find models on CIFAR-10 make confident predictions even when 95% of input images are masked and humans cannot discern salient features in the remaining pixel-subsets. We introduce Batched Gradient SIS, a new method for discovering sufficient input subsets for complex datasets, and use this method to show the sufficiency of border pixels in ImageNet for training and testing. Although these patterns portend potential model fragility in real-world deployment, they are in fact valid statistical patterns of the benchmark that alone suffice to attain high test accuracy. Unlike adversarial examples, overinterpretation relies upon unmodified image pixels. We find ensembling and input dropout can each help mitigate overinterpretation.
| null |
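An illustrative version of the masking experiment described above: keep only a small, high-scoring fraction of pixels, blank the rest, and re-query the classifier to see whether its confidence persists. This is a toy stand-in, not the paper's Batched Gradient SIS procedure; the pixel-scoring function and the input here are placeholders.

    # Sketch: retain the top fraction of pixels by score and mask everything else.
    import numpy as np

    def keep_top_pixels(image: np.ndarray, scores: np.ndarray,
                        keep_frac: float = 0.05, mask_value: float = 0.0) -> np.ndarray:
        """Zero out every pixel except the highest-scoring keep_frac fraction."""
        flat = scores.ravel()
        k = max(1, int(keep_frac * flat.size))
        threshold = np.partition(flat, -k)[-k]
        return np.where(scores >= threshold, image, mask_value)

    rng = np.random.default_rng(0)
    image = rng.random((32, 32))          # stand-in for a real input image
    pixel_scores = rng.random((32, 32))   # stand-in for a real attribution method
    masked = keep_top_pixels(image, pixel_scores, keep_frac=0.05)
    print("pixels kept:", int((masked != 0).sum()), "of", image.size)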
Neural Circuit Synthesis from Specification Patterns
|
https://papers.nips.cc/paper_files/paper/2021/hash/8230bea7d54bcdf99cdfe85cb07313d5-Abstract.html
|
Frederik Schmitt, Christopher Hahn, Markus N Rabe, Bernd Finkbeiner
|
https://papers.nips.cc/paper_files/paper/2021/hash/8230bea7d54bcdf99cdfe85cb07313d5-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12803-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8230bea7d54bcdf99cdfe85cb07313d5-Paper.pdf
|
https://openreview.net/forum?id=O4TE57kehc1
|
https://papers.nips.cc/paper_files/paper/2021/file/8230bea7d54bcdf99cdfe85cb07313d5-Supplemental.pdf
|
We train hierarchical Transformers on the task of synthesizing hardware circuits directly out of high-level logical specifications in linear-time temporal logic (LTL). The LTL synthesis problem is a well-known algorithmic challenge with a long history and an annual competition is organized to track the improvement of algorithms and tooling over time. New approaches using machine learning might open a lot of possibilities in this area, but suffer from the lack of sufficient amounts of training data. In this paper, we consider a method to generate large amounts of additional training data, i.e., pairs of specifications and circuits implementing them. We ensure that this synthetic data is sufficiently close to human-written specifications by mining common patterns from the specifications used in the synthesis competitions. We show that hierarchical Transformers trained on this synthetic data solve a significant portion of problems from the synthesis competitions, and even out-of-distribution examples from a recent case study.
| null |
Directional Message Passing on Molecular Graphs via Synthetic Coordinates
|
https://papers.nips.cc/paper_files/paper/2021/hash/82489c9737cc245530c7a6ebef3753ec-Abstract.html
|
Johannes Gasteiger, Chandan Yeshwanth, Stephan Günnemann
|
https://papers.nips.cc/paper_files/paper/2021/hash/82489c9737cc245530c7a6ebef3753ec-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12804-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/82489c9737cc245530c7a6ebef3753ec-Paper.pdf
|
https://openreview.net/forum?id=ZRu0_3azrCd
|
https://papers.nips.cc/paper_files/paper/2021/file/82489c9737cc245530c7a6ebef3753ec-Supplemental.pdf
|
Graph neural networks that leverage coordinates via directional message passing have recently set the state of the art on multiple molecular property prediction tasks. However, they rely on atom position information that is often unavailable, and obtaining it is usually prohibitively expensive or even impossible. In this paper we propose synthetic coordinates that enable the use of advanced GNNs without requiring the true molecular configuration. We propose two distances as synthetic coordinates: Distance bounds that specify the rough range of molecular configurations, and graph-based distances using a symmetric variant of personalized PageRank. To leverage both distance and angular information we propose a method of transforming normal graph neural networks into directional MPNNs. We show that with this transformation we can reduce the error of a normal graph neural network by 55% on the ZINC benchmark. We furthermore set the state of the art on ZINC and coordinate-free QM9 by incorporating synthetic coordinates in the SMP and DimeNet++ models. Our implementation is available online.
| null |
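One way to obtain a graph-based synthetic coordinate of the kind described above is to turn personalized PageRank (PPR) scores into a pairwise distance. The sketch below uses a naive dense power iteration and symmetrizes by simple averaging; both choices are illustrative assumptions, and the paper defines its own symmetric PPR variant alongside the distance-bound coordinates.

    # Sketch: a pairwise distance derived from personalized PageRank scores.
    import numpy as np

    def personalized_pagerank(adj: np.ndarray, alpha: float = 0.15, iters: int = 100) -> np.ndarray:
        """Dense PPR matrix: row i holds the PPR vector restarting at node i."""
        deg = adj.sum(axis=1, keepdims=True)
        p = np.divide(adj, deg, out=np.zeros_like(adj), where=deg > 0)
        n = adj.shape[0]
        ppr = np.full((n, n), 1.0 / n)
        restart = np.eye(n)
        for _ in range(iters):
            ppr = alpha * restart + (1 - alpha) * ppr @ p
        return ppr

    def ppr_distance(adj: np.ndarray) -> np.ndarray:
        ppr = personalized_pagerank(adj)
        sym = 0.5 * (ppr + ppr.T)                       # naive symmetrization
        return -np.log(np.clip(sym, 1e-12, None))       # small PPR -> large distance

    adj = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 0],
                    [1, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
    print(ppr_distance(adj).round(2))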
Federated Multi-Task Learning under a Mixture of Distributions
|
https://papers.nips.cc/paper_files/paper/2021/hash/82599a4ec94aca066873c99b4c741ed8-Abstract.html
|
Othmane Marfoq, Giovanni Neglia, Aurélien Bellet, Laetitia Kameni, Richard Vidal
|
https://papers.nips.cc/paper_files/paper/2021/hash/82599a4ec94aca066873c99b4c741ed8-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12805-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/82599a4ec94aca066873c99b4c741ed8-Paper.pdf
|
https://openreview.net/forum?id=YCqx6zhEzRp
|
https://papers.nips.cc/paper_files/paper/2021/file/82599a4ec94aca066873c99b4c741ed8-Supplemental.pdf
|
The increasing size of data generated by smartphones and IoT devices motivated the development of Federated Learning (FL), a framework for on-device collaborative training of machine learning models. First efforts in FL focused on learning a single global model with good average performance across clients, but the global model may be arbitrarily bad for a given client, due to the inherent heterogeneity of local data distributions. Federated multi-task learning (MTL) approaches can learn personalized models by formulating an opportune penalized optimization problem. The penalization term can capture complex relations among personalized models, but eschews clear statistical assumptions about local data distributions. In this work, we propose to study federated MTL under the flexible assumption that each local data distribution is a mixture of unknown underlying distributions. This assumption encompasses most of the existing personalized FL approaches and leads to federated EM-like algorithms for both client-server and fully decentralized settings. Moreover, it provides a principled way to serve personalized models to clients not seen at training time. The algorithms' convergence is analyzed through a novel federated surrogate optimization framework, which can be of general interest. Experimental results on FL benchmarks show that our approach provides models with higher accuracy and fairness than state-of-the-art methods.
| null |
Learning Generative Vision Transformer with Energy-Based Latent Space for Saliency Prediction
|
https://papers.nips.cc/paper_files/paper/2021/hash/8289889263db4a40463e3f358bb7c7a1-Abstract.html
|
Jing Zhang, Jianwen Xie, Nick Barnes, Ping Li
|
https://papers.nips.cc/paper_files/paper/2021/hash/8289889263db4a40463e3f358bb7c7a1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12806-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8289889263db4a40463e3f358bb7c7a1-Paper.pdf
|
https://openreview.net/forum?id=LoUdcqLuPej
| null |
Vision transformer networks have shown superiority in many computer vision tasks. In this paper, we take a step further by proposing a novel generative vision transformer with latent variables following an informative energy-based prior for salient object detection. Both the vision transformer network and the energy-based prior model are jointly trained via Markov chain Monte Carlo-based maximum likelihood estimation, in which sampling from the intractable posterior and prior distributions of the latent variables is performed by Langevin dynamics. Further, with the generative vision transformer, we can easily obtain a pixel-wise uncertainty map from an image, which indicates the model's confidence in predicting saliency from the image. Different from existing generative models, which define the prior distribution of the latent variables as a simple isotropic Gaussian distribution, our model uses an energy-based informative prior that is more expressive in capturing the latent space of the data. We apply the proposed framework to both RGB and RGB-D salient object detection tasks. Extensive experimental results show that our framework can achieve not only accurate saliency predictions but also meaningful uncertainty maps that are consistent with human perception.
| null |
Regularization in ResNet with Stochastic Depth
|
https://papers.nips.cc/paper_files/paper/2021/hash/82ba9d6eee3f026be339bb287651c3d8-Abstract.html
|
Soufiane Hayou, Fadhel Ayed
|
https://papers.nips.cc/paper_files/paper/2021/hash/82ba9d6eee3f026be339bb287651c3d8-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12807-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/82ba9d6eee3f026be339bb287651c3d8-Paper.pdf
|
https://openreview.net/forum?id=8v4Sev9pXv
|
https://papers.nips.cc/paper_files/paper/2021/file/82ba9d6eee3f026be339bb287651c3d8-Supplemental.zip
|
Regularization plays a major role in modern deep learning. From classic techniques such as L1, L2 penalties to other noise-based methods such as Dropout, regularization often yields better generalization properties by avoiding overfitting. Recently, Stochastic Depth (SD) has emerged as an alternative regularization technique for residual neural networks (ResNets) and has proven to boost the performance of ResNet on many tasks [Huang et al., 2016]. Despite the recent success of SD, little is known about this technique from a theoretical perspective. This paper provides a hybrid analysis combining perturbation analysis and signal propagation to shed light on different regularization effects of SD. Our analysis allows us to derive principled guidelines for choosing the survival rates used for training with SD.
| null |
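For reference, the Stochastic Depth mechanism analyzed above keeps each residual branch with some survival probability during training and skips it otherwise. A minimal forward-pass sketch of one such block; the two-layer branch, the fixed survival rate, and the "inverted" scaling convention are illustrative choices rather than prescriptions from the paper.

    # Sketch: a residual block with Stochastic Depth (drop-path) applied at training time.
    import numpy as np

    rng = np.random.default_rng(0)

    def residual_branch(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
        return np.maximum(x @ w1, 0.0) @ w2              # placeholder two-layer branch

    def stochastic_depth_block(x, w1, w2, survival_prob=0.8, training=True):
        if training:
            if rng.random() < survival_prob:
                # Inverted scaling keeps the expected training output at x + branch(x).
                return x + residual_branch(x, w1, w2) / survival_prob
            return x                                     # the whole branch is skipped
        return x + residual_branch(x, w1, w2)            # deterministic pass at test time

    dim = 16
    w1 = rng.normal(size=(dim, dim)) / np.sqrt(dim)
    w2 = rng.normal(size=(dim, dim)) / np.sqrt(dim)
    x = rng.normal(size=(4, dim))
    print(stochastic_depth_block(x, w1, w2).shape)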
ResT: An Efficient Transformer for Visual Recognition
|
https://papers.nips.cc/paper_files/paper/2021/hash/82c2559140b95ccda9c6ca4a8b981f1e-Abstract.html
|
Qinglong Zhang, Yu-Bin Yang
|
https://papers.nips.cc/paper_files/paper/2021/hash/82c2559140b95ccda9c6ca4a8b981f1e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12808-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/82c2559140b95ccda9c6ca4a8b981f1e-Paper.pdf
|
https://openreview.net/forum?id=6Ab68Ip4Mu
|
https://papers.nips.cc/paper_files/paper/2021/file/82c2559140b95ccda9c6ca4a8b981f1e-Supplemental.pdf
|
This paper presents an efficient multi-scale vision Transformer, called ResT, that capably serves as a general-purpose backbone for image recognition. Unlike existing Transformer methods, which employ standard Transformer blocks to tackle raw images at a fixed resolution, ResT has several advantages: (1) A memory-efficient multi-head self-attention is built, which compresses the memory via a simple depth-wise convolution and projects the interaction across the attention-head dimension while preserving the diversity of the heads; (2) Positional encoding is constructed as spatial attention, which is more flexible and can handle input images of arbitrary size without interpolation or fine-tuning; (3) Instead of straightforward tokenization at the beginning of each stage, we design the patch embedding as a stack of overlapping convolution operations with stride on the token map. We comprehensively validate ResT on image classification and downstream tasks. Experimental results show that the proposed ResT outperforms recent state-of-the-art backbones by a large margin, demonstrating the potential of ResT as a strong backbone. The code and models will be made publicly available at https://github.com/wofmanaf/ResT.
| null |
Adversarial Examples for k-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams
|
https://papers.nips.cc/paper_files/paper/2021/hash/82ca5dd156cc926b2992f73c2896f761-Abstract.html
|
Chawin Sitawarin, Evgenios Kornaropoulos, Dawn Song, David Wagner
|
https://papers.nips.cc/paper_files/paper/2021/hash/82ca5dd156cc926b2992f73c2896f761-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12809-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/82ca5dd156cc926b2992f73c2896f761-Paper.pdf
|
https://openreview.net/forum?id=2j3B_YkC8r
|
https://papers.nips.cc/paper_files/paper/2021/file/82ca5dd156cc926b2992f73c2896f761-Supplemental.pdf
|
Adversarial examples are a widely studied phenomenon in machine learning models. While most of the attention has been focused on neural networks, other practical models also suffer from this issue. In this work, we propose an algorithm for evaluating the adversarial robustness of $k$-nearest neighbor classification, i.e., finding a minimum-norm adversarial example. Diverging from previous proposals, we propose the first geometric approach, performing a search that expands outwards from a given input point. On a high level, the search radius expands to the nearby higher-order Voronoi cells until we find a cell that classifies differently from the input point. To scale the algorithm to a large $k$, we introduce approximation steps that find perturbations with smaller norms than the baselines across a variety of datasets. Furthermore, we analyze the structural properties of a dataset where our approach outperforms the competition.
| null |
Adversarially Robust 3D Point Cloud Recognition Using Self-Supervisions
|
https://papers.nips.cc/paper_files/paper/2021/hash/82cadb0649a3af4968404c9f6031b233-Abstract.html
|
Jiachen Sun, Yulong Cao, Christopher B Choy, Zhiding Yu, Anima Anandkumar, Zhuoqing Morley Mao, Chaowei Xiao
|
https://papers.nips.cc/paper_files/paper/2021/hash/82cadb0649a3af4968404c9f6031b233-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12810-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/82cadb0649a3af4968404c9f6031b233-Paper.pdf
|
https://openreview.net/forum?id=srHp6A1c2z-
|
https://papers.nips.cc/paper_files/paper/2021/file/82cadb0649a3af4968404c9f6031b233-Supplemental.pdf
|
3D point cloud data is increasingly used in safety-critical applications such as autonomous driving. Thus, the robustness of 3D deep learning models against adversarial attacks becomes a major consideration. In this paper, we systematically study the impact of various self-supervised learning proxy tasks on different architectures and threat models for 3D point clouds with adversarial training. Specifically, we study MLP-based (PointNet), convolution-based (DGCNN), and transformer-based (PCT) 3D architectures. Through extensive experimentation, we demonstrate that appropriate applications of self-supervision can significantly enhance the robustness in 3D point cloud recognition, achieving considerable improvements compared to the standard adversarial training baseline. Our analysis reveals that local feature learning is desirable for adversarial robustness in point clouds since it limits the adversarial propagation between the point-level input perturbations and the model's final output. This insight also explains the success of DGCNN and the jigsaw proxy task in achieving stronger 3D adversarial robustness.
| null |
Tuning Mixed Input Hyperparameters on the Fly for Efficient Population Based AutoRL
|
https://papers.nips.cc/paper_files/paper/2021/hash/82debd8a12b498e765a11a8e51159440-Abstract.html
|
Jack Parker-Holder, Vu Nguyen, Shaan Desai, Stephen J Roberts
|
https://papers.nips.cc/paper_files/paper/2021/hash/82debd8a12b498e765a11a8e51159440-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12811-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/82debd8a12b498e765a11a8e51159440-Paper.pdf
|
https://openreview.net/forum?id=no-Jsrx9ytl
|
https://papers.nips.cc/paper_files/paper/2021/file/82debd8a12b498e765a11a8e51159440-Supplemental.pdf
|
Despite a series of recent successes in reinforcement learning (RL), many RL algorithms remain sensitive to hyperparameters. As such, there has recently been interest in the field of AutoRL, which seeks to automate design decisions to create more general algorithms. Recent work suggests that population based approaches may be effective AutoRL algorithms, by learning hyperparameter schedules on the fly. In particular, the PB2 algorithm is able to achieve strong performance in RL tasks by formulating online hyperparameter optimization as a time-varying GP-bandit problem, while also providing theoretical guarantees. However, PB2 is only designed to work for \emph{continuous} hyperparameters, which severely limits its utility in practice. In this paper we introduce a new (provably) efficient hierarchical approach for optimizing \emph{both continuous and categorical} variables, using a new time-varying bandit algorithm specifically designed for the population based training regime. We evaluate our approach on the challenging Procgen benchmark, where we show that explicitly modelling dependence between data augmentation and other hyperparameters improves generalization.
| null |
Neural Algorithmic Reasoners are Implicit Planners
|
https://papers.nips.cc/paper_files/paper/2021/hash/82e9e7a12665240d13d0b928be28f230-Abstract.html
|
Andreea-Ioana Deac, Petar Veličković, Ognjen Milinkovic, Pierre-Luc Bacon, Jian Tang, Mladen Nikolic
|
https://papers.nips.cc/paper_files/paper/2021/hash/82e9e7a12665240d13d0b928be28f230-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12812-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/82e9e7a12665240d13d0b928be28f230-Paper.pdf
|
https://openreview.net/forum?id=K5YKjaMjbja
|
https://papers.nips.cc/paper_files/paper/2021/file/82e9e7a12665240d13d0b928be28f230-Supplemental.pdf
|
Implicit planning has emerged as an elegant technique for combining learned models of the world with end-to-end model-free reinforcement learning. We study the class of implicit planners inspired by value iteration, an algorithm that is guaranteed to yield perfect policies in fully-specified tabular environments. We find that prior approaches either assume that the environment is provided in such a tabular form---which is highly restrictive---or infer "local neighbourhoods" of states to run value iteration over---for which we discover an algorithmic bottleneck effect. This effect is caused by explicitly running the planning algorithm based on scalar predictions in every state, which can be harmful to data efficiency if such scalars are improperly predicted. We propose eXecuted Latent Value Iteration Networks (XLVINs), which alleviate the above limitations. Our method performs all planning computations in a high-dimensional latent space, breaking the algorithmic bottleneck. It maintains alignment with value iteration by carefully leveraging neural graph-algorithmic reasoning and contrastive self-supervised learning. Across seven low-data settings---including classical control, navigation and Atari---XLVINs provide significant improvements to data efficiency against value iteration-based implicit planners, as well as relevant model-free baselines. Lastly, we empirically verify that XLVINs can closely align with value iteration.
| null |
Self-Supervised Learning with Kernel Dependence Maximization
|
https://papers.nips.cc/paper_files/paper/2021/hash/83004190b1793d7aa15f8d0d49a13eba-Abstract.html
|
Yazhe Li, Roman Pogodin, Danica J. Sutherland, Arthur Gretton
|
https://papers.nips.cc/paper_files/paper/2021/hash/83004190b1793d7aa15f8d0d49a13eba-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12813-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/83004190b1793d7aa15f8d0d49a13eba-Paper.pdf
|
https://openreview.net/forum?id=0HW7A5YZjq7
|
https://papers.nips.cc/paper_files/paper/2021/file/83004190b1793d7aa15f8d0d49a13eba-Supplemental.pdf
|
We approach self-supervised learning of image representations from a statistical dependence perspective, proposing Self-Supervised Learning with the Hilbert-Schmidt Independence Criterion (SSL-HSIC). SSL-HSIC maximizes dependence between representations of transformations of an image and the image identity, while minimizing the kernelized variance of those representations. This framework yields a new understanding of InfoNCE, a variational lower bound on the mutual information (MI) between different transformations. While the MI itself is known to have pathologies which can result in learning meaningless representations, its bound is much better behaved: we show that it implicitly approximates SSL-HSIC (with a slightly different regularizer). Our approach also gives us insight into BYOL, a negative-free SSL method, since SSL-HSIC similarly learns local neighborhoods of samples. SSL-HSIC allows us to directly optimize statistical dependence in time linear in the batch size, without restrictive data assumptions or indirect mutual information estimators. Trained with or without a target network, SSL-HSIC matches the current state-of-the-art for standard linear evaluation on ImageNet, semi-supervised learning and transfer to other classification and vision tasks such as semantic segmentation, depth estimation and object recognition. Code is available at https://github.com/deepmind/ssl_hsic.
| null |
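The dependence measure at the core of SSL-HSIC above is the Hilbert-Schmidt Independence Criterion; its standard biased empirical estimator is $\widehat{\mathrm{HSIC}}(K, L) = \mathrm{tr}(K H L H)/(n-1)^2$ with centering matrix $H = I - \tfrac{1}{n}\mathbf{1}\mathbf{1}^\top$. The sketch below computes this quantity with a Gaussian kernel on toy features; the paper's estimator, kernels, and full objective differ in detail.

    # Sketch: the biased empirical HSIC estimator on toy data.
    import numpy as np

    def gaussian_kernel(x: np.ndarray, bandwidth: float = 1.0) -> np.ndarray:
        sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
        return np.exp(-sq_dists / (2 * bandwidth ** 2))

    def hsic(k: np.ndarray, l: np.ndarray) -> float:
        n = k.shape[0]
        h = np.eye(n) - np.ones((n, n)) / n              # centering matrix
        return float(np.trace(k @ h @ l @ h)) / (n - 1) ** 2

    rng = np.random.default_rng(0)
    z = rng.normal(size=(64, 8))                          # e.g. image representations
    y = z[:, :1] + 0.1 * rng.normal(size=(64, 1))         # a variable dependent on z
    print("HSIC(z, y):         ", hsic(gaussian_kernel(z), gaussian_kernel(y)))
    print("HSIC(z, shuffled y):", hsic(gaussian_kernel(z), gaussian_kernel(rng.permutation(y))))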
CROCS: Clustering and Retrieval of Cardiac Signals Based on Patient Disease Class, Sex, and Age
|
https://papers.nips.cc/paper_files/paper/2021/hash/8303a79b1e19a194f1875981be5bdb6f-Abstract.html
|
Dani Kiyasseh, Tingting Zhu, David Clifton
|
https://papers.nips.cc/paper_files/paper/2021/hash/8303a79b1e19a194f1875981be5bdb6f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12814-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8303a79b1e19a194f1875981be5bdb6f-Paper.pdf
|
https://openreview.net/forum?id=JW2nIBL2tzN
|
https://papers.nips.cc/paper_files/paper/2021/file/8303a79b1e19a194f1875981be5bdb6f-Supplemental.pdf
|
The process of manually searching for relevant instances in, and extracting information from, clinical databases underpins a multitude of clinical tasks. Such tasks include disease diagnosis, clinical trial recruitment, and continuing medical education. This manual search-and-extract process, however, has been hampered by the growth of large-scale clinical databases and the increased prevalence of unlabelled instances. To address this challenge, we propose a supervised contrastive learning framework, CROCS, where representations of cardiac signals associated with a set of patient-specific attributes (e.g., disease class, sex, age) are attracted to learnable embeddings entitled clinical prototypes. We exploit such prototypes for both the clustering and retrieval of unlabelled cardiac signals based on multiple patient attributes. We show that CROCS outperforms the state-of-the-art method, DTC, when clustering and also retrieves relevant cardiac signals from a large database. We also show that clinical prototypes adopt a semantically meaningful arrangement based on patient attributes and thus confer a high degree of interpretability.
| null |
Representing Hyperbolic Space Accurately using Multi-Component Floats
|
https://papers.nips.cc/paper_files/paper/2021/hash/832353270aacb6e3322f493a66aaf5b9-Abstract.html
|
Tao Yu, Christopher M. De Sa
|
https://papers.nips.cc/paper_files/paper/2021/hash/832353270aacb6e3322f493a66aaf5b9-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12815-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/832353270aacb6e3322f493a66aaf5b9-Paper.pdf
|
https://openreview.net/forum?id=U8xJty6RHH
|
https://papers.nips.cc/paper_files/paper/2021/file/832353270aacb6e3322f493a66aaf5b9-Supplemental.pdf
|
Hyperbolic space is particularly useful for embedding data with hierarchical structure; however, representing hyperbolic space with ordinary floating-point numbers greatly affects the performance due to its \emph{ineluctable} numerical errors. Simply increasing the precision of floats fails to solve the problem and incurs a high computation cost for simulating greater-than-double-precision floats on hardware such as GPUs, which does not support them. In this paper, we propose a simple, feasible-on-GPUs, and easy-to-understand solution for numerically accurate learning on hyperbolic space. We do this with a new approach to represent hyperbolic space using multi-component floating-point (MCF) in the Poincar{\'e} upper-half space model. Theoretically and experimentally we show our model has small numerical error, and on embedding tasks across various datasets, models represented by multi-component floating-points gain more capacity and run significantly faster on GPUs than prior work.
| null |
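Multi-component floats represent one high-precision value as an unevaluated sum of ordinary floats. The sketch below shows the classic error-free "two-sum" operation and a simple two-component (double-double) addition, the kind of primitive such representations are built from; this is standard floating-point technique, not the paper's specific construction for the upper-half space model.

```python
def two_sum(a: float, b: float):
    """Return (s, e) with s = fl(a + b) and a + b = s + e exactly (Knuth's TwoSum)."""
    s = a + b
    v = s - a
    e = (a - (s - v)) + (b - v)
    return s, e

def dd_add(x_hi: float, x_lo: float, y_hi: float, y_lo: float):
    """Add two double-double numbers (hi + lo), renormalising the result."""
    s, e = two_sum(x_hi, y_hi)
    e += x_lo + y_lo
    return two_sum(s, e)

# 1 + 1e-20 is not representable in a single double, but survives as two components.
hi, lo = dd_add(1.0, 0.0, 1e-20, 0.0)
print(hi, lo)   # 1.0 1e-20
```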
Dimensionality Reduction for Wasserstein Barycenter
|
https://papers.nips.cc/paper_files/paper/2021/hash/8346db44a721fa863ca38180638bad3d-Abstract.html
|
Zachary Izzo, Sandeep Silwal, Samson Zhou
|
https://papers.nips.cc/paper_files/paper/2021/hash/8346db44a721fa863ca38180638bad3d-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12816-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8346db44a721fa863ca38180638bad3d-Paper.pdf
|
https://openreview.net/forum?id=cDPFOsj2G6B
| null |
The Wasserstein barycenter is a geometric construct which captures the notion of centrality among probability distributions, and which has found many applications in machine learning. However, most algorithms for finding even an approximate barycenter suffer an exponential dependence on the dimension $d$ of the underlying space of the distributions. In order to cope with this ``curse of dimensionality,'' we study dimensionality reduction techniques for the Wasserstein barycenter problem. When the barycenter is restricted to support of size $n$, we show that randomized dimensionality reduction can be used to map the problem to a space of dimension $O(\log n)$ independent of both $d$ and $k$ (the number of input distributions), and that \emph{any} solution found in the reduced dimension will have its cost preserved up to arbitrarily small error in the original space. We provide matching upper and lower bounds on the size of the reduced dimension, showing that our methods are optimal up to constant factors. We also provide a coreset construction for the Wasserstein barycenter problem that significantly decreases the number of input distributions. The coresets can be used in conjunction with random projections and thus further improve computation time. Lastly, our experimental results validate the speedup provided by dimensionality reduction while maintaining solution quality.
| null |
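The dimensionality-reduction step described in the abstract above can be illustrated with a plain Johnson-Lindenstrauss-style random projection shared across the support points of all input distributions; the barycenter problem is then solved in the projected space. The target dimension and Gaussian scaling below are illustrative choices, not the paper's constants or guarantees.

```python
import numpy as np

def project_supports(distributions, target_dim, seed=0):
    """Apply one shared Gaussian random projection to every distribution's support.

    `distributions` is a list of (support, weights) pairs with support of shape
    (n_i, d); reusing the same projection matrix keeps pairwise transport costs
    approximately preserved with high probability.
    """
    rng = np.random.default_rng(seed)
    d = distributions[0][0].shape[1]
    proj = rng.normal(size=(d, target_dim)) / np.sqrt(target_dim)
    return [(support @ proj, weights) for support, weights in distributions]

# Example: 10 empirical distributions in d = 1000, projected to a much smaller space.
rng = np.random.default_rng(1)
dists = [(rng.normal(size=(50, 1000)), np.full(50, 1 / 50)) for _ in range(10)]
low_dim = project_supports(dists, target_dim=20)
print(low_dim[0][0].shape)   # (50, 20)
```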
Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception
|
https://papers.nips.cc/paper_files/paper/2021/hash/8383f931b0cefcc631f070480ef340e1-Abstract.html
|
Joel Dapello, Jenelle Feather, Hang Le, Tiago Marques, David Cox, Josh McDermott, James J DiCarlo, Sueyeon Chung
|
https://papers.nips.cc/paper_files/paper/2021/hash/8383f931b0cefcc631f070480ef340e1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12817-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8383f931b0cefcc631f070480ef340e1-Paper.pdf
|
https://openreview.net/forum?id=BfcE_TDjaG6
|
https://papers.nips.cc/paper_files/paper/2021/file/8383f931b0cefcc631f070480ef340e1-Supplemental.pdf
|
Adversarial examples are often cited by neuroscientists and machine learning researchers as an example of how computational models diverge from biological sensory systems. Recent work has proposed adding biologically-inspired components to visual neural networks as a way to improve their adversarial robustness. One surprisingly effective component for reducing adversarial vulnerability is response stochasticity, like that exhibited by biological neurons. Here, using recently developed geometrical techniques from computational neuroscience, we investigate how adversarial perturbations influence the internal representations of standard, adversarially trained, and biologically-inspired stochastic networks. We find distinct geometric signatures for each type of network, revealing different mechanisms for achieving robust representations. Next, we generalize these results to the auditory domain, showing that neural stochasticity also makes auditory models more robust to adversarial perturbations. Geometric analysis of the stochastic networks reveals overlap between representations of clean and adversarially perturbed stimuli, and quantitatively demonstrates that competing geometric effects of stochasticity mediate a tradeoff between adversarial and clean performance. Our results shed light on the strategies of robust perception utilized by adversarially trained and stochastic networks, and help explain how stochasticity may be beneficial to machine and biological computation.
| null |
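As a toy illustration of the response stochasticity discussed above, the sketch below injects activation-dependent Gaussian noise after a ReLU, loosely mimicking Poisson-like neural variability. The noise model and where it is inserted are assumptions for this sketch; the paper's stochastic networks are not reproduced here.

```python
import torch
import torch.nn as nn

class StochasticReLU(nn.Module):
    """ReLU followed by additive Gaussian noise whose scale tracks the response."""
    def __init__(self, noise_scale: float = 0.5):
        super().__init__()
        self.noise_scale = noise_scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = torch.relu(x)
        # Noise proportional to the square root of the response, a rough stand-in
        # for Poisson-like variability of biological neurons.
        return a + self.noise_scale * torch.sqrt(a + 1e-8) * torch.randn_like(a)

layer = StochasticReLU()
x = torch.randn(4, 16)
print(layer(x).shape)   # two forward passes on the same input will differ
```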
Unsupervised Learning of Compositional Energy Concepts
|
https://papers.nips.cc/paper_files/paper/2021/hash/838aac83e00e8c5ca0f839c96d6cb3be-Abstract.html
|
Yilun Du, Shuang Li, Yash Sharma, Josh Tenenbaum, Igor Mordatch
|
https://papers.nips.cc/paper_files/paper/2021/hash/838aac83e00e8c5ca0f839c96d6cb3be-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12818-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/838aac83e00e8c5ca0f839c96d6cb3be-Paper.pdf
|
https://openreview.net/forum?id=2RgFZHCrI0l
|
https://papers.nips.cc/paper_files/paper/2021/file/838aac83e00e8c5ca0f839c96d6cb3be-Supplemental.pdf
|
Humans are able to rapidly understand scenes by utilizing concepts extracted from prior experience. Such concepts are diverse, and include global scene descriptors, such as the weather or lighting, as well as local scene descriptors, such as the color or size of a particular object. So far, unsupervised discovery of concepts has focused on either modeling the global scene-level or the local object-level factors of variation, but not both. In this work, we propose COMET, which discovers and represents concepts as separate energy functions, enabling us to represent both global concepts as well as objects under a unified framework. COMET discovers energy functions through recomposing the input image, which we find captures independent factors without additional supervision. Sample generation in COMET is formulated as an optimization process on underlying energy functions, enabling us to generate images with permuted and composed concepts. Finally, discovered visual concepts in COMET generalize well, enabling us to compose concepts between separate modalities of images as well as with other concepts discovered by a separate instance of COMET trained on a different dataset. Code and data available at https://energy-based-model.github.io/comet/.
| null |
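Sample generation in COMET is described as optimization on learned energy functions. The generic recipe, composing concepts by summing their energies and refining an image by gradient descent on that sum, can be sketched as follows; the toy energy networks, step size, and number of steps are stand-ins, not the paper's architecture or schedule.

```python
import torch
import torch.nn as nn

def compose_and_generate(energy_fns, image_shape, steps=50, step_size=0.05):
    """Minimise the summed energy of several concept energy functions."""
    x = torch.rand(1, *image_shape, requires_grad=True)
    for _ in range(steps):
        energy = sum(e(x).sum() for e in energy_fns)
        (grad,) = torch.autograd.grad(energy, x)
        with torch.no_grad():
            x -= step_size * grad      # gradient step towards lower energy
            x.clamp_(0.0, 1.0)
        x.requires_grad_(True)
    return x.detach()

# Two toy "concept" energies over 3x32x32 images.
concept_a = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))
concept_b = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))
sample = compose_and_generate([concept_a, concept_b], (3, 32, 32))
print(sample.shape)   # torch.Size([1, 3, 32, 32])
```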
Nearly Horizon-Free Offline Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/8396b14c5dff55d13eea57487bf8ed26-Abstract.html
|
Tongzheng Ren, Jialian Li, Bo Dai, Simon S. Du, Sujay Sanghavi
|
https://papers.nips.cc/paper_files/paper/2021/hash/8396b14c5dff55d13eea57487bf8ed26-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12819-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8396b14c5dff55d13eea57487bf8ed26-Paper.pdf
|
https://openreview.net/forum?id=bhCEPHQR0hB
|
https://papers.nips.cc/paper_files/paper/2021/file/8396b14c5dff55d13eea57487bf8ed26-Supplemental.pdf
|
We revisit offline reinforcement learning on episodic time-homogeneous Markov Decision Processes (MDP). For tabular MDP with $S$ states and $A$ actions, or linear MDP with anchor points and feature dimension $d$, given $K$ collected episodes with minimum visiting probability $d_m$ over (anchor) state-action pairs, we obtain nearly horizon $H$-free sample complexity bounds for offline reinforcement learning when the total reward is upper bounded by 1. Specifically: (i) For offline policy evaluation, we obtain an $\tilde{O}\left(\sqrt{\frac{1}{Kd_m}} \right)$ error bound for the plug-in estimator, which matches the lower bound up to logarithmic factors and does not have additional dependency on $\mathrm{poly}(H, S, A, d)$ in the higher-order term. (ii) For offline policy optimization, we obtain an $\tilde{O}\left(\sqrt{\frac{1}{Kd_m}} + \frac{\min(S, d)}{Kd_m}\right)$ sub-optimality gap for the empirical optimal policy, which approaches the lower bound up to logarithmic factors and a higher-order term, improving upon the best known result of [Cui and Yang 2020], which has additional $\mathrm{poly}(H, S, d)$ factors in the main term. To the best of our knowledge, these are the first set of nearly horizon-free bounds for episodic time-homogeneous offline tabular MDP and linear MDP with anchor points. Central to our analysis is a simple yet effective recursion-based method to bound a "total variance" term in the offline setting, which could be of independent interest.
| null |
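The plug-in estimator referred to in the abstract has a simple tabular form: estimate transitions and rewards by empirical counts from the offline episodes, then run backward induction on the estimated MDP. The sketch below assumes a time-homogeneous episodic MDP with known horizon and synthetic logged transitions; it illustrates the estimator only, not the paper's bounds or analysis.

```python
import numpy as np

def plug_in_policy_value(episodes, num_states, num_actions, horizon, policy):
    """episodes: list of trajectories of (s, a, r, s_next) tuples; policy: (S, A) matrix."""
    counts = np.zeros((num_states, num_actions, num_states))
    reward_sum = np.zeros((num_states, num_actions))
    for traj in episodes:
        for s, a, r, s_next in traj:
            counts[s, a, s_next] += 1
            reward_sum[s, a] += r
    visits = counts.sum(axis=2)
    safe = np.maximum(visits, 1)                       # unvisited pairs act as absorbing
    p_hat = counts / safe[:, :, None]                  # empirical transition kernel
    r_hat = reward_sum / safe                          # empirical mean rewards
    v = np.zeros(num_states)
    for _ in range(horizon):                           # backward induction on the plug-in MDP
        q = r_hat + p_hat @ v
        v = (policy * q).sum(axis=1)
    return v

# Tiny example: 2 states, 2 actions, synthetic logged tuples, evaluate a uniform policy.
rng = np.random.default_rng(0)
eps = [[(rng.integers(2), rng.integers(2), rng.random(), rng.integers(2))
        for _ in range(5)] for _ in range(100)]
print(plug_in_policy_value(eps, 2, 2, horizon=5, policy=np.full((2, 2), 0.5)))
```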
Combinatorial Optimization for Panoptic Segmentation: A Fully Differentiable Approach
|
https://papers.nips.cc/paper_files/paper/2021/hash/83a368f54768f506b833130584455df4-Abstract.html
|
Ahmed Abbas, Paul Swoboda
|
https://papers.nips.cc/paper_files/paper/2021/hash/83a368f54768f506b833130584455df4-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12820-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/83a368f54768f506b833130584455df4-Paper.pdf
|
https://openreview.net/forum?id=70eD741FHyI
|
https://papers.nips.cc/paper_files/paper/2021/file/83a368f54768f506b833130584455df4-Supplemental.pdf
|
We propose a fully differentiable architecture for simultaneous semantic and instance segmentation (a.k.a. panoptic segmentation) consisting of a convolutional neural network and an asymmetric multiway cut problem solver. The latter solves a combinatorial optimization problem that elegantly incorporates semantic and boundary predictions to produce a panoptic labeling. Our formulation allows us to directly maximize a smooth surrogate of the panoptic quality metric by backpropagating the gradient through the optimization problem. Experimental evaluation shows that backpropagating through the optimization problem improves results over comparable approaches on the Cityscapes and COCO datasets. Overall, our approach of combinatorial optimization for panoptic segmentation (COPS) shows the utility of using optimization in tandem with deep learning in a challenging large scale real-world problem and showcases benefits and insights into training such an architecture.
| null |
Reinforcement Learning with State Observation Costs in Action-Contingent Noiselessly Observable Markov Decision Processes
|
https://papers.nips.cc/paper_files/paper/2021/hash/83e8fe6279ad25f15b23c6298c6a3584-Abstract.html
|
HyunJi Alex Nam, Scott Fleming, Emma Brunskill
|
https://papers.nips.cc/paper_files/paper/2021/hash/83e8fe6279ad25f15b23c6298c6a3584-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12821-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/83e8fe6279ad25f15b23c6298c6a3584-Paper.pdf
|
https://openreview.net/forum?id=jgze2dDL9y8
|
https://papers.nips.cc/paper_files/paper/2021/file/83e8fe6279ad25f15b23c6298c6a3584-Supplemental.pdf
|
Many real-world problems that require making optimal sequences of decisions under uncertainty involve costs when the agent wishes to obtain information about its environment. We design and analyze algorithms for reinforcement learning (RL) in Action-Contingent Noiselessly Observable MDPs (ACNO-MDPs), a special class of POMDPs in which the agent can choose to either (1) fully observe the state at a cost and then act; or (2) act without any immediate observation information, relying on past observations to infer the underlying state. ACNO-MDPs arise frequently in important real-world application domains like healthcare, in which clinicians must balance the value of information gleaned from medical tests (e.g., blood-based biomarkers) with the costs of gathering that information (e.g., the costs of labor and materials required to administer such tests). We develop a PAC RL algorithm for tabular ACNO-MDPs that provides substantially tighter bounds, compared to generic POMDP-RL algorithms, on the total number of episodes exhibiting worse than near-optimal performance. For continuous-state ACNO-MDPs, we propose a novel method of incorporating observation information that, when coupled with modern RL algorithms, yields significantly faster learning compared to other POMDP-RL algorithms in several simulated environments.
| null |
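An ACNO-MDP augments every action with a binary choice: pay a cost to observe the state noiselessly, or act without observing. A minimal wrapper expressing that interface is sketched below; the observation cost, the None placeholder for "no observation", and the toy environment are illustrative, and none of the paper's algorithms are included.

```python
class ACNOWrapper:
    """Wraps an environment exposing reset() and step(a) -> (state, reward, done)."""

    def __init__(self, env, observe_cost: float = 0.1):
        self.env = env
        self.observe_cost = observe_cost

    def reset(self):
        self.env.reset()
        return None                      # the initial state is not revealed for free

    def step(self, action, observe: bool):
        state, reward, done = self.env.step(action)
        if observe:
            return state, reward - self.observe_cost, done
        return None, reward, done        # act-only: the agent sees no state


class ToyChain:
    """Three-state chain: action 1 moves right; reaching state 2 gives reward 1."""
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s = min(self.s + (1 if a == 1 else 0), 2)
        return self.s, float(self.s == 2), self.s == 2


env = ACNOWrapper(ToyChain(), observe_cost=0.1)
env.reset()
print(env.step(1, observe=True))    # (1, -0.1, False): paid to see the state
print(env.step(1, observe=False))   # (None, 1.0, True): acted blindly, reached the goal
```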
Iterative Amortized Policy Optimization
|
https://papers.nips.cc/paper_files/paper/2021/hash/83fa5a432ae55c253d0e60dbfa716723-Abstract.html
|
Joseph Marino, Alexandre Piche, Alessandro Davide Ialongo, Yisong Yue
|
https://papers.nips.cc/paper_files/paper/2021/hash/83fa5a432ae55c253d0e60dbfa716723-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12822-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/83fa5a432ae55c253d0e60dbfa716723-Paper.pdf
|
https://openreview.net/forum?id=vuFJO_W85VU
|
https://papers.nips.cc/paper_files/paper/2021/file/83fa5a432ae55c253d0e60dbfa716723-Supplemental.pdf
|
Policy networks are a central feature of deep reinforcement learning (RL) algorithms for continuous control, enabling the estimation and sampling of high-value actions. From the variational inference perspective on RL, policy networks, when used with entropy or KL regularization, are a form of amortized optimization, optimizing network parameters rather than the policy distributions directly. However, direct amortized mappings can yield suboptimal policy estimates and restricted distributions, limiting performance and exploration. Given this perspective, we consider the more flexible class of iterative amortized optimizers. We demonstrate that the resulting technique, iterative amortized policy optimization, yields performance improvements over direct amortization on benchmark continuous control tasks.
| null |
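Direct amortization maps a state straight to policy parameters; iterative amortization instead learns an update network that repeatedly refines those parameters using feedback such as the gradient of an estimated objective. The schematic sketch below shows only the control flow: the critic, update network, step count, and entropy weight are stand-in assumptions, not the paper's method details.

```python
import torch
import torch.nn as nn

state_dim, action_dim, refine_steps = 8, 2, 5
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, 1))
# Update network: takes current (mean, log_std) and their gradients, outputs a correction.
updater = nn.Sequential(nn.Linear(4 * action_dim, 64), nn.ReLU(), nn.Linear(64, 2 * action_dim))

def iterative_policy(state):
    mean = torch.zeros(action_dim, requires_grad=True)
    log_std = torch.zeros(action_dim, requires_grad=True)
    for _ in range(refine_steps):
        dist = torch.distributions.Normal(mean, log_std.exp())
        action = dist.rsample()
        # Entropy-regularised objective estimated with one reparameterised sample.
        objective = critic(torch.cat([state, action])) + 0.1 * dist.entropy().sum()
        g_mean, g_log_std = torch.autograd.grad(objective.sum(), (mean, log_std))
        delta = updater(torch.cat([mean.detach(), log_std.detach(), g_mean, g_log_std]))
        with torch.no_grad():
            mean += delta[:action_dim]
            log_std += delta[action_dim:]
        mean.requires_grad_(True)
        log_std.requires_grad_(True)
    return mean.detach(), log_std.exp().detach()

print(iterative_policy(torch.randn(state_dim)))
```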
Revisiting the Calibration of Modern Neural Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/8420d359404024567b5aefda1231af24-Abstract.html
|
Matthias Minderer, Josip Djolonga, Rob Romijnders, Frances Hubis, Xiaohua Zhai, Neil Houlsby, Dustin Tran, Mario Lucic
|
https://papers.nips.cc/paper_files/paper/2021/hash/8420d359404024567b5aefda1231af24-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12823-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8420d359404024567b5aefda1231af24-Paper.pdf
|
https://openreview.net/forum?id=QRBvLayFXI
|
https://papers.nips.cc/paper_files/paper/2021/file/8420d359404024567b5aefda1231af24-Supplemental.pdf
|
Accurate estimation of predictive uncertainty (model calibration) is essential for the safe application of neural networks. Many instances of miscalibration in modern neural networks have been reported, suggesting a trend that newer, more accurate models produce poorly calibrated predictions. Here, we revisit this question for recent state-of-the-art image classification models. We systematically relate model calibration and accuracy, and find that the most recent models, notably those not using convolutions, are among the best calibrated. Trends observed in prior model generations, such as decay of calibration with distribution shift or model size, are less pronounced in recent architectures. We also show that model size and amount of pretraining do not fully explain these differences, suggesting that architecture is a major determinant of calibration properties.
| null |
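Calibration studies of this kind typically report the expected calibration error (ECE): predictions are binned by confidence and the gap between average confidence and accuracy is averaged across bins. A standard equal-width-bin implementation is sketched below; the paper's exact binning, metrics, and models may differ.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """confidences: max softmax probability per example; correct: boolean array."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    n = len(confidences)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += (mask.sum() / n) * gap
    return ece

# Example: an over-confident classifier (confidence near 0.95, 70% accuracy) scores poorly.
rng = np.random.default_rng(0)
conf = rng.uniform(0.9, 1.0, size=1000)
acc = rng.random(1000) < 0.7
print(expected_calibration_error(conf, acc))
```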