Dataset fields (one per record, in order): title, url, authors, detail_url, tags (all "NIPS 2021"), Bibtex, Paper, Reviews And Public Comment », Supplemental, abstract, Supplemental Errata.
Fast Bayesian Inference for Gaussian Cox Processes via Path Integral Formulation
https://papers.nips.cc/paper_files/paper/2021/hash/dba31bb5c75992690f20c2d3b370ec7c-Abstract.html
Hideaki Kim
https://papers.nips.cc/paper_files/paper/2021/hash/dba31bb5c75992690f20c2d3b370ec7c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13624-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dba31bb5c75992690f20c2d3b370ec7c-Paper.pdf
https://openreview.net/forum?id=NvXnBQQw0Jb
https://papers.nips.cc/paper_files/paper/2021/file/dba31bb5c75992690f20c2d3b370ec7c-Supplemental.pdf
Gaussian Cox processes are widely-used point process models that use a Gaussian process to describe the Bayesian a priori uncertainty present in latent intensity functions. In this paper, we propose a novel Bayesian inference scheme for Gaussian Cox processes by exploiting a conceptually-intuitive {\it path integral} formulation. The proposed scheme does not rely on domain discretization, scales linearly with the number of observed events, has a lower complexity than the state-of-the-art variational Bayesian schemes with respect to the number of inducing points, and is applicable to a wide range of Gaussian Cox processes with various types of link functions. Our scheme is especially beneficial under the multi-dimensional input setting, where the number of inducing points tends to be large. We evaluate our scheme on synthetic and real-world data, and show that it achieves comparable predictive accuracy while being tens of times faster than reference methods.
null
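For readers who want the model written out, a minimal sketch in standard Gaussian Cox process notation (general background, not the paper's specific path-integral derivation): a latent function drawn from a Gaussian process is pushed through a non-negative link function $\rho$ to give the intensity of an inhomogeneous Poisson process over the domain $\mathcal{T}$.

```latex
f \sim \mathcal{GP}(0, k), \qquad
\lambda(x) = \rho\big(f(x)\big) \ge 0, \qquad
p\big(\{x_i\}_{i=1}^{N} \mid \lambda\big)
  = \exp\!\Big(-\!\int_{\mathcal{T}} \lambda(x)\,dx\Big)
    \prod_{i=1}^{N} \lambda(x_i).
```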
Lattice partition recovery with dyadic CART
https://papers.nips.cc/paper_files/paper/2021/hash/dba4c1a117472f6aca95211285d0587e-Abstract.html
OSCAR HERNAN MADRID PADILLA, Yi Yu, Alessandro Rinaldo
https://papers.nips.cc/paper_files/paper/2021/hash/dba4c1a117472f6aca95211285d0587e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13625-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dba4c1a117472f6aca95211285d0587e-Paper.pdf
https://openreview.net/forum?id=yITJ6t31eAE
https://papers.nips.cc/paper_files/paper/2021/file/dba4c1a117472f6aca95211285d0587e-Supplemental.pdf
We study piece-wise constant signals corrupted by additive Gaussian noise over a $d$-dimensional lattice. Data of this form naturally arise in a host of applications, and the tasks of signal detection or testing, de-noising and estimation have been studied extensively in the statistical and signal processing literature. In this paper we consider instead the problem of partition recovery, i.e.~of estimating the partition of the lattice induced by the constancy regions of the unknown signal, using the computationally-efficient dyadic classification and regression tree (DCART) methodology proposed by \citep{donoho1997cart}. We prove that, under appropriate regularity conditions on the shape of the partition elements, a DCART-based procedure consistently estimates the underlying partition at a rate of order $\sigma^2 k^* \log (N)/\kappa^2$, where $k^*$ is the minimal number of rectangular sub-graphs obtained using recursive dyadic partitions supporting the signal partition, $\sigma^2$ is the noise variance, $\kappa$ is the minimal magnitude of the signal difference among contiguous elements of the partition and $N$ is the size of the lattice. Furthermore, under stronger assumptions, our method attains a sharper estimation error of order $\sigma^2\log(N)/\kappa^2$, independent of $k^*$, which we show to be minimax rate optimal. Our theoretical guarantees further extend to the partition estimator based on the optimal regression tree estimator (ORT) of \cite{chatterjee2019adaptive} and to the one obtained through an NP-hard exhaustive search method. We corroborate our theoretical findings and the effectiveness of DCART for partition recovery in simulations.
null
Robust Deep Reinforcement Learning through Adversarial Loss
https://papers.nips.cc/paper_files/paper/2021/hash/dbb422937d7ff56e049d61da730b3e11-Abstract.html
Tuomas Oikarinen, Wang Zhang, Alexandre Megretski, Luca Daniel, Tsui-Wei Weng
https://papers.nips.cc/paper_files/paper/2021/hash/dbb422937d7ff56e049d61da730b3e11-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13626-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dbb422937d7ff56e049d61da730b3e11-Paper.pdf
https://openreview.net/forum?id=eaAM_bdW0Q
https://papers.nips.cc/paper_files/paper/2021/file/dbb422937d7ff56e049d61da730b3e11-Supplemental.pdf
Recent studies have shown that deep reinforcement learning agents are vulnerable to small adversarial perturbations on the agent's inputs, which raises concerns about deploying such agents in the real world. To address this issue, we propose RADIAL-RL, a principled framework to train reinforcement learning agents with improved robustness against $l_p$-norm bounded adversarial attacks. Our framework is compatible with popular deep reinforcement learning algorithms and we demonstrate its performance with deep Q-learning, A3C and PPO. We experiment on three deep RL benchmarks (Atari, MuJoCo and ProcGen) to show the effectiveness of our robust training algorithm. Our RADIAL-RL agents consistently outperform prior methods when tested against attacks of varying strength and are more computationally efficient to train. In addition, we propose a new evaluation method called Greedy-Worst-Case Reward (GWC) to measure attack agnostic robustness of deep RL agents. We show that GWC can be evaluated efficiently and is a good estimate of the reward under the worst possible sequence of adversarial attacks. All code used for our experiments is available at https://github.com/tuomaso/radial_rl_v2.
null
Provable Model-based Nonlinear Bandit and Reinforcement Learning: Shelve Optimism, Embrace Virtual Curvature
https://papers.nips.cc/paper_files/paper/2021/hash/dc5d637ed5e62c36ecb73b654b05ba2a-Abstract.html
Kefan Dong, Jiaqi Yang, Tengyu Ma
https://papers.nips.cc/paper_files/paper/2021/hash/dc5d637ed5e62c36ecb73b654b05ba2a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13627-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dc5d637ed5e62c36ecb73b654b05ba2a-Paper.pdf
https://openreview.net/forum?id=eOEs9Wa91qH
https://papers.nips.cc/paper_files/paper/2021/file/dc5d637ed5e62c36ecb73b654b05ba2a-Supplemental.pdf
This paper studies model-based bandit and reinforcement learning (RL) with nonlinear function approximations. We propose to study convergence to approximate local maxima because we show that global convergence is statistically intractable even for one-layer neural net bandit with a deterministic reward. For both nonlinear bandit and RL, the paper presents a model-based algorithm, Virtual Ascent with Online Model Learner (ViOlin), which provably converges to a local maximum with sample complexity that only depends on the sequential Rademacher complexity of the model class. Our results imply novel global or local regret bounds on several concrete settings such as linear bandit with finite or sparse model class, and two-layer neural net bandit. A key algorithmic insight is that optimism may lead to over-exploration even for two-layer neural net model class. On the other hand, for convergence to local maxima, it suffices to maximize the virtual return if the model can also reasonably predict the gradient and Hessian of the real return.
null
You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection
https://papers.nips.cc/paper_files/paper/2021/hash/dc912a253d1e9ba40e2c597ed2376640-Abstract.html
Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu
https://papers.nips.cc/paper_files/paper/2021/hash/dc912a253d1e9ba40e2c597ed2376640-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13628-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dc912a253d1e9ba40e2c597ed2376640-Paper.pdf
https://openreview.net/forum?id=nVofoXjTmA_
https://papers.nips.cc/paper_files/paper/2021/file/dc912a253d1e9ba40e2c597ed2376640-Supplemental.pdf
Can Transformer perform $2\mathrm{D}$ object- and region-level recognition from a pure sequence-to-sequence perspective with minimal knowledge about the $2\mathrm{D}$ spatial structure? To answer this question, we present You Only Look at One Sequence (YOLOS), a series of object detection models based on the vanilla Vision Transformer with the fewest possible modifications, region priors, as well as inductive biases of the target task. We find that YOLOS pre-trained only on the mid-sized ImageNet-$1k$ dataset can already achieve quite competitive performance on the challenging COCO object detection benchmark, e.g., YOLOS-Base directly adopted from the BERT-Base architecture can obtain $42.0$ box AP on COCO val. We also discuss the impacts as well as limitations of current pre-train schemes and model scaling strategies for Transformer in vision through YOLOS. Code and pre-trained models are available at https://github.com/hustvl/YOLOS.
null
Learning to delegate for large-scale vehicle routing
https://papers.nips.cc/paper_files/paper/2021/hash/dc9fa5f217a1e57b8a6adeb065560b38-Abstract.html
Sirui Li, Zhongxia Yan, Cathy Wu
https://papers.nips.cc/paper_files/paper/2021/hash/dc9fa5f217a1e57b8a6adeb065560b38-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13629-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dc9fa5f217a1e57b8a6adeb065560b38-Paper.pdf
https://openreview.net/forum?id=rm0I5y2zkG8
https://papers.nips.cc/paper_files/paper/2021/file/dc9fa5f217a1e57b8a6adeb065560b38-Supplemental.pdf
Vehicle routing problems (VRPs) form a class of combinatorial problems with wide practical applications. While previous heuristic or learning-based works achieve decent solutions on small problem instances, their performance deteriorates in large problems. This article presents a novel learning-augmented local search framework to solve large-scale VRP. The method iteratively improves the solution by identifying appropriate subproblems and \textit{delegating} their improvement to a black box subsolver. At each step, we leverage spatial locality to consider only a linear number of subproblems, rather than exponential. We frame subproblem selection as regression and train a Transformer on a generated training set of problem instances. Our method accelerates state-of-the-art VRP solvers by 10x to 100x while achieving competitive solution qualities for VRPs with sizes ranging from 500 to 3000. Learned subproblem selection offers a 1.5x to 2x speedup over heuristic or random selection. Our results generalize to a variety of VRP distributions, variants, and solvers.
null
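To make the delegation loop in the abstract concrete, here is a rough Python sketch of a learning-augmented local search of this kind. The callables `propose_fn`, `score_fn`, and `solve_fn` are hypothetical stand-ins for the paper's spatially local subproblem generator, learned Transformer regressor, and black-box subsolver, respectively.

```python
def improve_solution(solution, propose_fn, score_fn, solve_fn, n_steps=100):
    """Learning-augmented local search sketch for large-scale VRPs.

    propose_fn(solution)     -> list of spatially local candidate subproblems
    score_fn(solution, sub)  -> predicted improvement from re-solving `sub`
                                (the learned regression model)
    solve_fn(solution, sub)  -> new solution with `sub` re-optimized by a
                                black-box subsolver
    """
    for _ in range(n_steps):
        candidates = propose_fn(solution)       # linear, not exponential, in number
        if not candidates:
            break
        best = max(candidates, key=lambda sub: score_fn(solution, sub))
        solution = solve_fn(solution, best)     # delegate the improvement
    return solution
```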
Effective Meta-Regularization by Kernelized Proximal Regularization
https://papers.nips.cc/paper_files/paper/2021/hash/dcc5c249e15c211f21e1da0f3ba66169-Abstract.html
Weisen Jiang, James Kwok, Yu Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/dcc5c249e15c211f21e1da0f3ba66169-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13630-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dcc5c249e15c211f21e1da0f3ba66169-Paper.pdf
https://openreview.net/forum?id=mekyxmlLJNd
https://papers.nips.cc/paper_files/paper/2021/file/dcc5c249e15c211f21e1da0f3ba66169-Supplemental.zip
We study the problem of meta-learning, which has proved advantageous for accelerating the learning of new tasks from only a few samples. Recent approaches based on deep kernels achieve state-of-the-art performance. However, the regularizers in their base learners are not learnable. In this paper, we propose an algorithm called MetaProx to learn a proximal regularizer for the base learner. We theoretically establish the convergence of MetaProx. Experimental results confirm the advantage of the proposed algorithm.
null
Towards Context-Agnostic Learning Using Synthetic Data
https://papers.nips.cc/paper_files/paper/2021/hash/dccb1c3a558c50d389c24d69a9856730-Abstract.html
Charles Jin, Martin Rinard
https://papers.nips.cc/paper_files/paper/2021/hash/dccb1c3a558c50d389c24d69a9856730-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13631-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dccb1c3a558c50d389c24d69a9856730-Paper.pdf
https://openreview.net/forum?id=KCsNBfdYI7E
https://papers.nips.cc/paper_files/paper/2021/file/dccb1c3a558c50d389c24d69a9856730-Supplemental.pdf
We propose a novel setting for learning, where the input domain is the image of a map defined on the product of two sets, one of which completely determines the labels. We derive a new risk bound for this setting that decomposes into a bias and an error term, and exhibits a surprisingly weak dependence on the true labels. Inspired by these results, we present an algorithm aimed at minimizing the bias term by exploiting the ability to sample from each set independently. We apply our setting to visual classification tasks, where our approach enables us to train classifiers on datasets that consist entirely of a single synthetic example of each class. On several standard benchmarks for real-world image classification, we achieve robust performance in the context-agnostic setting, with good generalization to real world domains, whereas training directly on real world data without our techniques yields classifiers that are brittle to perturbations of the background.
null
Minimax Optimal Quantile and Semi-Adversarial Regret via Root-Logarithmic Regularizers
https://papers.nips.cc/paper_files/paper/2021/hash/dcd2f3f312b6705fb06f4f9f1b55b55c-Abstract.html
Jeffrey Negrea, Blair Bilodeau, Nicolò Campolongo, Francesco Orabona, Dan Roy
https://papers.nips.cc/paper_files/paper/2021/hash/dcd2f3f312b6705fb06f4f9f1b55b55c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13632-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dcd2f3f312b6705fb06f4f9f1b55b55c-Paper.pdf
https://openreview.net/forum?id=8SEJ8AT_6Dl
https://papers.nips.cc/paper_files/paper/2021/file/dcd2f3f312b6705fb06f4f9f1b55b55c-Supplemental.pdf
Quantile (and, more generally, KL) regret bounds, such as those achieved by NormalHedge (Chaudhuri, Freund, and Hsu 2009) and its variants, relax the goal of competing against the best individual expert to only competing against a majority of experts on adversarial data. More recently, the semi-adversarial paradigm (Bilodeau, Negrea, and Roy 2020) provides an alternative relaxation of adversarial online learning by considering data that may be neither fully adversarial nor stochastic (I.I.D.). We achieve the minimax optimal regret in both paradigms using FTRL with separate, novel, root-logarithmic regularizers, both of which can be interpreted as yielding variants of NormalHedge. We extend existing KL regret upper bounds, which hold uniformly over target distributions, to possibly uncountable expert classes with arbitrary priors; provide the first full-information lower bounds for quantile regret on finite expert classes (which are tight); and provide an adaptively minimax optimal algorithm for the semi-adversarial paradigm that adapts to the true, unknown constraint faster, leading to uniformly improved regret bounds over existing methods.
null
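For context, the generic follow-the-regularized-leader (FTRL) template that the abstract refers to, stated over the simplex of expert weights $\Delta$ with per-round losses $\ell_s$; the paper's contribution is the particular root-logarithmic regularizers $R_t$, which are not reproduced here:

```latex
p_{t+1} \;=\; \operatorname*{arg\,min}_{p \in \Delta}\;
  \Big\langle p,\ \sum_{s=1}^{t} \ell_s \Big\rangle \;+\; R_t(p).
```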
Gradient-Free Adversarial Training Against Image Corruption for Learning-based Steering
https://papers.nips.cc/paper_files/paper/2021/hash/dce8af15f064d1accb98887a21029b08-Abstract.html
Yu Shen, Laura Zheng, Manli Shu, Weizi Li, Tom Goldstein, Ming Lin
https://papers.nips.cc/paper_files/paper/2021/hash/dce8af15f064d1accb98887a21029b08-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13633-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dce8af15f064d1accb98887a21029b08-Paper.pdf
https://openreview.net/forum?id=FCrLunb8-G
https://papers.nips.cc/paper_files/paper/2021/file/dce8af15f064d1accb98887a21029b08-Supplemental.pdf
We introduce a simple yet effective framework for improving the robustness of learning algorithms against image corruptions for autonomous driving. These corruptions can occur due to both internal (e.g., sensor noises and hardware abnormalities) and external factors (e.g., lighting, weather, visibility, and other environmental effects). Using sensitivity analysis with FID-based parameterization, we propose a novel algorithm exploiting basis perturbations to improve the overall performance of autonomous steering and other image processing tasks, such as classification and detection, for self-driving cars. Our model not only improves the performance on the original dataset, but also achieves significant performance improvement on datasets with multiple and unseen perturbations, up to 87% and 77%, respectively. A comparison between our approach and other SOTA techniques confirms the effectiveness of our technique in improving the robustness of neural network training for learning-based steering and other image processing tasks.
null
Deep Proxy Causal Learning and its Application to Confounded Bandit Policy Evaluation
https://papers.nips.cc/paper_files/paper/2021/hash/dcf3219715a7c9cd9286f19db46f2384-Abstract.html
Liyuan Xu, Heishiro Kanagawa, Arthur Gretton
https://papers.nips.cc/paper_files/paper/2021/hash/dcf3219715a7c9cd9286f19db46f2384-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13634-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dcf3219715a7c9cd9286f19db46f2384-Paper.pdf
https://openreview.net/forum?id=0FDxsIEv9G
https://papers.nips.cc/paper_files/paper/2021/file/dcf3219715a7c9cd9286f19db46f2384-Supplemental.pdf
Proxy causal learning (PCL) is a method for estimating the causal effect of treatments on outcomes in the presence of unobserved confounding, using proxies (structured side information) for the confounder. This is achieved via two-stage regression: in the first stage, we model relations among the treatment and proxies; in the second stage, we use this model to learn the effect of treatment on the outcome, given the context provided by the proxies. PCL guarantees recovery of the true causal effect, subject to identifiability conditions. We propose a novel method for PCL, the deep feature proxy variable method (DFPV), to address the case where the proxies, treatments, and outcomes are high-dimensional and have nonlinear complex relationships, as represented by deep neural network features. We show that DFPV outperforms recent state-of-the-art PCL methods on challenging synthetic benchmarks, including settings involving high dimensional image data. Furthermore, we show that PCL can be applied to off-policy evaluation for the confounded bandit problem, in which DFPV also exhibits competitive performance.
null
Certifying Robustness to Programmable Data Bias in Decision Trees
https://papers.nips.cc/paper_files/paper/2021/hash/dcf531edc9b229acfe0f4b87e1e278dd-Abstract.html
Anna Meyer, Aws Albarghouthi, Loris D'Antoni
https://papers.nips.cc/paper_files/paper/2021/hash/dcf531edc9b229acfe0f4b87e1e278dd-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13635-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dcf531edc9b229acfe0f4b87e1e278dd-Paper.pdf
https://openreview.net/forum?id=du_Rss0tW8
https://papers.nips.cc/paper_files/paper/2021/file/dcf531edc9b229acfe0f4b87e1e278dd-Supplemental.pdf
Datasets can be biased due to societal inequities, human biases, under-representation of minorities, etc. Our goal is to certify that models produced by a learning algorithm are pointwise-robust to dataset biases. This is a challenging problem: it entails learning models for a large, or even infinite, number of datasets, ensuring that they all produce the same prediction. We focus on decision-tree learning due to the interpretable nature of the models. Our approach allows programmatically specifying \emph{bias models} across a variety of dimensions (e.g., label-flipping or missing data), composing types of bias, and targeting bias towards a specific group. To certify robustness, we use a novel symbolic technique to evaluate a decision-tree learner on a large, or infinite, number of datasets, certifying that each and every dataset produces the same prediction for a specific test point. We evaluate our approach on datasets that are commonly used in the fairness literature, and demonstrate our approach's viability on a range of bias models.
null
TöRF: Time-of-Flight Radiance Fields for Dynamic Scene View Synthesis
https://papers.nips.cc/paper_files/paper/2021/hash/dd03de08bfdff4d8ab01117276564cc7-Abstract.html
Benjamin Attal, Eliot Laidlaw, Aaron Gokaslan, Changil Kim, Christian Richardt, James Tompkin, Matthew O'Toole
https://papers.nips.cc/paper_files/paper/2021/hash/dd03de08bfdff4d8ab01117276564cc7-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13636-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dd03de08bfdff4d8ab01117276564cc7-Paper.pdf
https://openreview.net/forum?id=CaKvIT5UMfd
https://papers.nips.cc/paper_files/paper/2021/file/dd03de08bfdff4d8ab01117276564cc7-Supplemental.pdf
Neural networks can represent and accurately reconstruct radiance fields for static 3D scenes (e.g., NeRF). Several works extend these to dynamic scenes captured with monocular video, with promising performance. However, the monocular setting is known to be an under-constrained problem, and so methods rely on data-driven priors for reconstructing dynamic content. We replace these priors with measurements from a time-of-flight (ToF) camera, and introduce a neural representation based on an image formation model for continuous-wave ToF cameras. Instead of working with processed depth maps, we model the raw ToF sensor measurements to improve reconstruction quality and avoid issues with low reflectance regions, multi-path interference, and a sensor's limited unambiguous depth range. We show that this approach improves robustness of dynamic scene reconstruction to erroneous calibration and large motions, and discuss the benefits and limitations of integrating RGB+ToF sensors now available on modern smartphones.
null
Sequence-to-Sequence Learning with Latent Neural Grammars
https://papers.nips.cc/paper_files/paper/2021/hash/dd17e652cd2a08fdb8bf7f68e2ad3814-Abstract.html
Yoon Kim
https://papers.nips.cc/paper_files/paper/2021/hash/dd17e652cd2a08fdb8bf7f68e2ad3814-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13637-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dd17e652cd2a08fdb8bf7f68e2ad3814-Paper.pdf
https://openreview.net/forum?id=0vaPiltED1N
https://papers.nips.cc/paper_files/paper/2021/file/dd17e652cd2a08fdb8bf7f68e2ad3814-Supplemental.pdf
Sequence-to-sequence learning with neural networks has become the de facto standard for sequence modeling. This approach typically models the local distribution over the next element with a powerful neural network that can condition on arbitrary context. While flexible and performant, these models often require large datasets for training and can fail spectacularly on benchmarks designed to test for compositional generalization. This work explores an alternative, hierarchical approach to sequence-to-sequence learning with synchronous grammars, where each node in the target tree is transduced by a subset of nodes in the source tree. The source and target trees are treated as fully latent and marginalized out during training. We develop a neural parameterization of the grammar which enables parameter sharing over combinatorial structures without the need for manual feature engineering. We apply this latent neural grammar to various domains---a diagnostic language navigation task designed to test for compositional generalization (SCAN), style transfer, and small-scale machine translation---and find that it performs respectably compared to standard baselines.
null
Exploration-Exploitation in Multi-Agent Competition: Convergence with Bounded Rationality
https://papers.nips.cc/paper_files/paper/2021/hash/dd1970fb03877a235d530476eb727dab-Abstract.html
Stefanos Leonardos, Georgios Piliouras, Kelly Spendlove
https://papers.nips.cc/paper_files/paper/2021/hash/dd1970fb03877a235d530476eb727dab-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13638-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dd1970fb03877a235d530476eb727dab-Paper.pdf
https://openreview.net/forum?id=OSLVL-tIBei
https://papers.nips.cc/paper_files/paper/2021/file/dd1970fb03877a235d530476eb727dab-Supplemental.pdf
The interplay between exploration and exploitation in competitive multi-agent learning is still far from being well understood. Motivated by this, we study smooth Q-learning, a prototypical learning model that explicitly captures the balance between game rewards and exploration costs. We show that Q-learning always converges to the unique quantal-response equilibrium (QRE), the standard solution concept for games under bounded rationality, in weighted zero-sum polymatrix games with heterogeneous learning agents using positive exploration rates. Complementing recent results about convergence in weighted potential games [16,34], we show that fast convergence of Q-learning in competitive settings obtains regardless of the number of agents and without any need for parameter fine-tuning. As showcased by our experiments in network zero-sum games, these theoretical results provide the necessary guarantees for an algorithmic approach to the currently open problem of equilibrium selection in competitive multi-agent settings.
null
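As background, one standard way to write the smooth (Boltzmann) Q-learning policy mentioned in the abstract: agent $i$ with exploration rate $T_i > 0$ plays each action with probability proportional to the exponentiated Q-value, and a quantal-response equilibrium is a joint strategy profile that is a fixed point of these smoothed best responses. This is generic notation, not quoted from the paper.

```latex
x_i(a) \;=\; \frac{\exp\!\big(Q_i(a)/T_i\big)}{\sum_{a'} \exp\!\big(Q_i(a')/T_i\big)}.
```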
Low-Rank Extragradient Method for Nonsmooth and Low-Rank Matrix Optimization Problems
https://papers.nips.cc/paper_files/paper/2021/hash/dd32544610bf007f0def4abc9b7ff9ef-Abstract.html
Atara Kaplan, Dan Garber
https://papers.nips.cc/paper_files/paper/2021/hash/dd32544610bf007f0def4abc9b7ff9ef-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13639-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dd32544610bf007f0def4abc9b7ff9ef-Paper.pdf
https://openreview.net/forum?id=90c-FVYJ5rL
https://papers.nips.cc/paper_files/paper/2021/file/dd32544610bf007f0def4abc9b7ff9ef-Supplemental.pdf
Low-rank and nonsmooth matrix optimization problems capture many fundamental tasks in statistics and machine learning. While significant progress has been made in recent years in developing efficient methods for \textit{smooth} low-rank optimization problems that avoid maintaining high-rank matrices and computing expensive high-rank SVDs, advances for nonsmooth problems have been slow-paced. In this paper we consider standard convex relaxations for such problems. Mainly, we prove that under a natural \textit{generalized strict complementarity} condition and under the relatively mild assumption that the nonsmooth objective can be written as a maximum of smooth functions, the \textit{extragradient method}, when initialized with a ``warm-start'' point, converges to an optimal solution with rate $O(1/t)$ while requiring only two \textit{low-rank} SVDs per iteration. We give a precise trade-off between the rank of the SVDs required and the radius of the ball in which we need to initialize the method. We support our theoretical results with empirical experiments on several nonsmooth low-rank matrix recovery tasks, demonstrating that using simple initializations, the extragradient method produces exactly the same iterates when full-rank SVDs are replaced with SVDs of rank that matches the rank of the (low-rank) ground-truth matrix to be recovered.
null
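For reference, the classical extragradient step for $\min_{X \in \mathcal{K}} f(X)$ with step size $\eta$ and projection $\Pi_{\mathcal{K}}$; the paper's point is that, under strict complementarity and a warm start, the two projections can be computed with low-rank SVDs rather than full ones (the update below is the textbook version, not the paper's low-rank variant):

```latex
Y_t = \Pi_{\mathcal{K}}\big(X_t - \eta \nabla f(X_t)\big), \qquad
X_{t+1} = \Pi_{\mathcal{K}}\big(X_t - \eta \nabla f(Y_t)\big).
```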
Which Mutual-Information Representation Learning Objectives are Sufficient for Control?
https://papers.nips.cc/paper_files/paper/2021/hash/dd45045f8c68db9f54e70c67048d32e8-Abstract.html
Kate Rakelly, Abhishek Gupta, Carlos Florensa, Sergey Levine
https://papers.nips.cc/paper_files/paper/2021/hash/dd45045f8c68db9f54e70c67048d32e8-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13640-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dd45045f8c68db9f54e70c67048d32e8-Paper.pdf
https://openreview.net/forum?id=haSQRA5RnuM
https://papers.nips.cc/paper_files/paper/2021/file/dd45045f8c68db9f54e70c67048d32e8-Supplemental.pdf
Mutual information (MI) maximization provides an appealing formalism for learning representations of data. In the context of reinforcement learning (RL), such representations can accelerate learning by discarding irrelevant and redundant information, while retaining the information necessary for control. Much prior work on these methods has addressed the practical difficulties of estimating MI from samples of high-dimensional observations, while comparatively less is understood about which MI objectives yield representations that are sufficient for RL from a theoretical perspective. In this paper, we formalize the sufficiency of a state representation for learning and representing the optimal policy, and study several popular MI based objectives through this lens. Surprisingly, we find that two of these objectives can yield insufficient representations given mild and common assumptions on the structure of the MDP. We corroborate our theoretical results with empirical experiments on a simulated game environment with visual observations.
null
A Geometric Perspective towards Neural Calibration via Sensitivity Decomposition
https://papers.nips.cc/paper_files/paper/2021/hash/dda99de58ff020cfb57fec1404c97003-Abstract.html
Junjiao Tian, Dylan Yung, Yen-Chang Hsu, Zsolt Kira
https://papers.nips.cc/paper_files/paper/2021/hash/dda99de58ff020cfb57fec1404c97003-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13641-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dda99de58ff020cfb57fec1404c97003-Paper.pdf
https://openreview.net/forum?id=W2rRWbI4CTW
https://papers.nips.cc/paper_files/paper/2021/file/dda99de58ff020cfb57fec1404c97003-Supplemental.pdf
It is well known that vision classification models suffer from poor calibration in the face of data distribution shifts. In this paper, we take a geometric approach to this problem. We propose Geometric Sensitivity Decomposition (GSD) which decomposes the norm of a sample feature embedding and the angular similarity to a target classifier into an instance-dependent and an instance-independent component. The instance-dependent component captures the sensitive information about changes in the input while the instance-independent component represents the insensitive information serving solely to minimize the loss on the training dataset. Inspired by the decomposition, we analytically derive a simple extension to current softmax-linear models, which learns to disentangle the two components during training. On several common vision models, the disentangled model outperforms other calibration methods on standard calibration metrics in the face of out-of-distribution (OOD) data and corruption with significantly less complexity. Specifically, we surpass the current state of the art by 30.8% relative improvement on corrupted CIFAR100 in Expected Calibration Error.
null
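A small numpy illustration of the norm–angle factorization the abstract starts from: each softmax-linear logit $w_k^\top f(x)$ splits into the feature norm, the class-weight norm, and the cosine of the angle between them. How GSD then separates instance-dependent and instance-independent parts during training is specific to the paper and not shown here.

```python
import numpy as np

def norm_angle_decomposition(feature, class_weights):
    """Split logits w_k . f into ||f|| * ||w_k|| * cos(theta_k)."""
    feat_norm = np.linalg.norm(feature)                 # instance-dependent scale
    w_norms = np.linalg.norm(class_weights, axis=1)
    cosines = class_weights @ feature / (w_norms * feat_norm + 1e-12)
    logits = feat_norm * w_norms * cosines              # equals class_weights @ feature
    return feat_norm, cosines, logits

feature = np.random.randn(16)            # a feature embedding
weights = np.random.randn(10, 16)        # a 10-class linear classifier
_, cosines, logits = norm_angle_decomposition(feature, weights)
assert np.allclose(logits, weights @ feature)
```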
Towards a Unified Information-Theoretic Framework for Generalization
https://papers.nips.cc/paper_files/paper/2021/hash/ddbc86dc4b2fbfd8a62e12096227e068-Abstract.html
Mahdi Haghifam, Gintare Karolina Dziugaite, Shay Moran, Dan Roy
https://papers.nips.cc/paper_files/paper/2021/hash/ddbc86dc4b2fbfd8a62e12096227e068-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13642-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/ddbc86dc4b2fbfd8a62e12096227e068-Paper.pdf
https://openreview.net/forum?id=Dzy8YEm5dX
https://papers.nips.cc/paper_files/paper/2021/file/ddbc86dc4b2fbfd8a62e12096227e068-Supplemental.pdf
In this work, we investigate the expressiveness of the "conditional mutual information" (CMI) framework of Steinke and Zakynthinou (2020) and the prospect of using it to provide a unified framework for proving generalization bounds in the realizable setting. We first demonstrate that one can use this framework to express non-trivial (but sub-optimal) bounds for any learning algorithm that outputs hypotheses from a class of bounded VC dimension. We then explore two directions of strengthening this bound: (i) Can the CMI framework express optimal bounds for VC classes? (ii) Can the CMI framework be used to analyze algorithms whose output hypothesis space is unrestricted (i.e. has an unbounded VC dimension)? With respect to Item (i) we prove that the CMI framework yields the optimal bound on the expected risk of Support Vector Machines (SVMs) for learning halfspaces. This result is an application of our general result showing that stable compression schemes (Bousquet et al., 2020) of size $k$ have uniformly bounded CMI of order $O(k)$. We further show that an inherent limitation of proper learning of VC classes contradicts the existence of a proper learner with constant CMI, and it implies a negative resolution to an open problem of Steinke and Zakynthinou (2020). We further study the CMI of empirical risk minimizers (ERMs) of class $H$ and show that it is possible to output all consistent classifiers (version space) with bounded CMI if and only if $H$ has a bounded star number (Hanneke and Yang, 2015). With respect to Item (ii) we prove a general reduction showing that "leave-one-out" analysis is expressible via the CMI framework. As a corollary we investigate the CMI of the one-inclusion-graph algorithm proposed by Haussler et al. (1994). More generally, we show that the CMI framework is universal in the sense that for every consistent algorithm and data distribution, the expected risk vanishes as the number of samples diverges if and only if its evaluated CMI has sublinear growth with the number of samples.
null
Bayesian decision-making under misspecified priors with applications to meta-learning
https://papers.nips.cc/paper_files/paper/2021/hash/ddcbe25988981920c872c1787382f04d-Abstract.html
Max Simchowitz, Christopher Tosh, Akshay Krishnamurthy, Daniel J. Hsu, Thodoris Lykouris, Miro Dudik, Robert E. Schapire
https://papers.nips.cc/paper_files/paper/2021/hash/ddcbe25988981920c872c1787382f04d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13643-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/ddcbe25988981920c872c1787382f04d-Paper.pdf
https://openreview.net/forum?id=5Re03X8Iigi
https://papers.nips.cc/paper_files/paper/2021/file/ddcbe25988981920c872c1787382f04d-Supplemental.pdf
Thompson sampling and other Bayesian sequential decision-making algorithms are among the most popular approaches to tackle explore/exploit trade-offs in (contextual) bandits. The choice of prior in these algorithms offers flexibility to encode domain knowledge but can also lead to poor performance when misspecified. In this paper, we demonstrate that performance degrades gracefully with misspecification. We prove that the expected reward accrued by Thompson sampling (TS) with a misspecified prior differs by at most $\tilde{O}(H^2 \epsilon)$ from TS with a well-specified prior, where $\epsilon$ is the total-variation distance between priors and $H$ is the learning horizon. Our bound does not require the prior to have any parametric form. For priors with bounded support, our bound is independent of the cardinality or structure of the action space, and we show that it is tight up to universal constants in the worst case. Building on our sensitivity analysis, we establish generic PAC guarantees for algorithms in the recently studied Bayesian meta-learning setting and derive corollaries for various families of priors. Our results generalize along two axes: (1) they apply to a broader family of Bayesian decision-making algorithms, including a Monte-Carlo implementation of the knowledge gradient algorithm (KG), and (2) they apply to Bayesian POMDPs, the most general Bayesian decision-making setting, encompassing contextual bandits as a special case. Through numerical simulations, we illustrate how prior misspecification and the deployment of one-step look-ahead (as in KG) can impact the convergence of meta-learning in multi-armed and contextual bandits with structured and correlated priors.
null
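A toy numpy sketch of the object under analysis: Thompson sampling for a Bernoulli bandit run with a Beta prior that may be misspecified relative to the environment. The Beta/Bernoulli setup and the numbers are illustrative only, not the paper's construction or bounds.

```python
import numpy as np

def thompson_sampling(true_means, prior_alpha, prior_beta, horizon, rng):
    """Bernoulli Thompson sampling with a (possibly misspecified) Beta prior."""
    alpha, beta = prior_alpha.astype(float).copy(), prior_beta.astype(float).copy()
    total_reward = 0.0
    for _ in range(horizon):
        samples = rng.beta(alpha, beta)              # posterior sample per arm
        arm = int(np.argmax(samples))
        reward = rng.binomial(1, true_means[arm])
        alpha[arm] += reward                         # conjugate posterior update
        beta[arm] += 1 - reward
        total_reward += reward
    return total_reward

rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.5, 0.7])
# a misspecified prior that (wrongly) favors the worst arm
reward = thompson_sampling(true_means,
                           prior_alpha=np.array([50.0, 1.0, 1.0]),
                           prior_beta=np.array([1.0, 1.0, 1.0]),
                           horizon=1000, rng=rng)
```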
Neural Trees for Learning on Graphs
https://papers.nips.cc/paper_files/paper/2021/hash/ddf88ea64eaed0f3de5531ac964a0a1a-Abstract.html
Rajat Talak, Siyi Hu, Lisa Peng, Luca Carlone
https://papers.nips.cc/paper_files/paper/2021/hash/ddf88ea64eaed0f3de5531ac964a0a1a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13644-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/ddf88ea64eaed0f3de5531ac964a0a1a-Paper.pdf
https://openreview.net/forum?id=UwSwML5iJkp
https://papers.nips.cc/paper_files/paper/2021/file/ddf88ea64eaed0f3de5531ac964a0a1a-Supplemental.pdf
Graph Neural Networks (GNNs) have emerged as a flexible and powerful approach for learning over graphs. Despite this success, existing GNNs are constrained by their local message-passing architecture and are provably limited in their expressive power. In this work, we propose a new GNN architecture – the Neural Tree. The neural tree architecture does not perform message passing on the input graph, but on a tree-structured graph, called the H-tree, that is constructed from the input graph. Nodes in the H-tree correspond to subgraphs in the input graph, and they are reorganized in a hierarchical manner such that the parent of a node in the H-tree always corresponds to a larger subgraph in the input graph. We show that the neural tree architecture can approximate any smooth probability distribution function over an undirected graph. We also prove that the number of parameters needed to achieve an $\epsilon$-approximation of the distribution function is exponential in the treewidth of the input graph, but linear in its size. We prove that any continuous G-invariant/equivariant function can be approximated by a nonlinear combination of such probability distribution functions over G. We apply the neural tree to semi-supervised node classification in 3D scene graphs, and show that these theoretical properties translate into significant gains in prediction accuracy, over the more traditional GNN architectures. We also show the applicability of the neural tree architecture to citation networks with large treewidth, by using a graph sub-sampling technique.
null
Enabling Fast Differentially Private SGD via Just-in-Time Compilation and Vectorization
https://papers.nips.cc/paper_files/paper/2021/hash/ddf9029977a61241841edeae15e9b53f-Abstract.html
Pranav Subramani, Nicholas Vadivelu, Gautam Kamath
https://papers.nips.cc/paper_files/paper/2021/hash/ddf9029977a61241841edeae15e9b53f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13645-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/ddf9029977a61241841edeae15e9b53f-Paper.pdf
https://openreview.net/forum?id=hw2VfWAr6t6
https://papers.nips.cc/paper_files/paper/2021/file/ddf9029977a61241841edeae15e9b53f-Supplemental.zip
A common pain point in differentially private machine learning is the significant runtime overhead incurred when executing Differentially Private Stochastic Gradient Descent (DPSGD), which may be as large as two orders of magnitude. We thoroughly demonstrate that by exploiting powerful language primitives, including vectorization, just-in-time compilation, and static graph optimization, one can dramatically reduce these overheads, in many cases nearly matching the best non-private running times. These gains are realized in two frameworks. The first is JAX, which provides rich support for these primitives through the XLA compiler. The second is TensorFlow Privacy, for which we rebuild core parts, integrating more effective vectorization as well as XLA compilation and granting significant memory and runtime improvements over previous release versions. Our proposed approaches allow us to achieve up to 50x speedups compared to the best alternatives. Our code is available at https://github.com/TheSalon/fast-dpsgd.
null
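A minimal JAX sketch of the ingredients the abstract highlights for fast DPSGD: per-example gradients via `vmap`, per-example clipping, Gaussian noise, and `jit` compilation of the whole step. The linear-model loss and the hyperparameter values are placeholders, not the paper's benchmark configuration.

```python
import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    # stand-in per-example loss: linear model with squared error
    return (x @ params - y) ** 2

# vectorize the per-example gradient over the batch dimension
per_example_grads = jax.vmap(jax.grad(loss_fn), in_axes=(None, 0, 0))

@jax.jit
def dpsgd_step(params, x_batch, y_batch, key,
               lr=0.1, clip_norm=1.0, noise_multiplier=1.1):
    grads = per_example_grads(params, x_batch, y_batch)     # shape (batch, dim)
    norms = jnp.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads / jnp.maximum(1.0, norms / clip_norm)   # per-example clipping
    mean_grad = clipped.mean(axis=0)
    noise = noise_multiplier * clip_norm * jax.random.normal(key, mean_grad.shape)
    return params - lr * (mean_grad + noise / x_batch.shape[0])

params = jnp.zeros(4)
x = jax.random.normal(jax.random.PRNGKey(0), (32, 4))
y = jnp.ones(32)
params = dpsgd_step(params, x, y, jax.random.PRNGKey(1))
```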
The effectiveness of feature attribution methods and its correlation with automatic evaluation scores
https://papers.nips.cc/paper_files/paper/2021/hash/de043a5e421240eb846da8effe472ff1-Abstract.html
Giang Nguyen, Daeyoung Kim, Anh Nguyen
https://papers.nips.cc/paper_files/paper/2021/hash/de043a5e421240eb846da8effe472ff1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13646-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/de043a5e421240eb846da8effe472ff1-Paper.pdf
https://openreview.net/forum?id=OKPS9YdZ8Va
https://papers.nips.cc/paper_files/paper/2021/file/de043a5e421240eb846da8effe472ff1-Supplemental.pdf
Explaining the decisions of an Artificial Intelligence (AI) model is increasingly critical in many real-world, high-stakes applications. Hundreds of papers have either proposed new feature attribution methods or discussed and harnessed these tools in their work. However, despite humans being the target end-users, most attribution methods were only evaluated on proxy automatic-evaluation metrics (Zhang et al. 2018; Zhou et al. 2016; Petsiuk et al. 2018). In this paper, we conduct the first user study to measure attribution map effectiveness in assisting humans in ImageNet classification and Stanford Dogs fine-grained classification, and when an image is natural or adversarial (i.e., contains adversarial perturbations). Overall, feature attribution is surprisingly not more effective than showing humans nearest training-set examples. On a harder task of fine-grained dog categorization, presenting attribution maps to humans does not help, but instead hurts the performance of human-AI teams compared to AI alone. Importantly, we found automatic attribution-map evaluation measures to correlate poorly with the actual human-AI team performance. Our findings encourage the community to rigorously test their methods on the downstream human-in-the-loop applications and to rethink the existing evaluation metrics.
null
Coordinated Proximal Policy Optimization
https://papers.nips.cc/paper_files/paper/2021/hash/de73998802680548b916f1947ffbad76-Abstract.html
Zifan Wu, Chao Yu, Deheng Ye, Junge Zhang, haiyin piao, Hankz Hankui Zhuo
https://papers.nips.cc/paper_files/paper/2021/hash/de73998802680548b916f1947ffbad76-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13647-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/de73998802680548b916f1947ffbad76-Paper.pdf
https://openreview.net/forum?id=iCJFwoy1T-q
https://papers.nips.cc/paper_files/paper/2021/file/de73998802680548b916f1947ffbad76-Supplemental.pdf
We present Coordinated Proximal Policy Optimization (CoPPO), an algorithm that extends the original Proximal Policy Optimization (PPO) to the multi-agent setting. The key idea lies in the coordinated adaptation of step size during the policy update process among multiple agents. We prove the monotonicity of policy improvement when optimizing a theoretically-grounded joint objective, and derive a simplified optimization objective based on a set of approximations. We then interpret such an objective in CoPPO as achieving dynamic credit assignment among agents, thereby alleviating the high variance issue during the concurrent update of agent policies. Finally, we demonstrate that CoPPO outperforms several strong baselines and is competitive with the latest multi-agent PPO method (i.e. MAPPO) under typical multi-agent settings, including cooperative matrix games and the StarCraft II micromanagement tasks.
null
Unbiased Classification through Bias-Contrastive and Bias-Balanced Learning
https://papers.nips.cc/paper_files/paper/2021/hash/de8aa43e5d5fa8536cf23e54244476fa-Abstract.html
Youngkyu Hong, Eunho Yang
https://papers.nips.cc/paper_files/paper/2021/hash/de8aa43e5d5fa8536cf23e54244476fa-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13648-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/de8aa43e5d5fa8536cf23e54244476fa-Paper.pdf
https://openreview.net/forum?id=2OqZZAqxnn
https://papers.nips.cc/paper_files/paper/2021/file/de8aa43e5d5fa8536cf23e54244476fa-Supplemental.pdf
Datasets for training machine learning models tend to be biased unless the data is collected with complete care. In such a biased dataset, models are susceptible to making predictions based on the biased features of the data. The biased model fails to generalize to the case where correlations between biases and targets are shifted. To mitigate this, we propose Bias-Contrastive (BiasCon) loss based on the contrastive learning framework, which effectively leverages the knowledge of bias labels. We further suggest Bias-Balanced (BiasBal) regression which trains the classification model toward the data distribution with balanced target-bias correlation. Furthermore, we propose Soft Bias-Contrastive (SoftCon) loss which handles the dataset without bias labels by softening the pair assignment of the BiasCon loss based on the distance in the feature space of the bias-capturing model. Our experiments show that our proposed methods significantly improve previous debiasing methods in various realistic datasets.
null
Learning from Inside: Self-driven Siamese Sampling and Reasoning for Video Question Answering
https://papers.nips.cc/paper_files/paper/2021/hash/dea184826614d3f4c608731389ed0c74-Abstract.html
Weijiang Yu, Haoteng Zheng, Mengfei Li, Lei Ji, Lijun Wu, Nong Xiao, Nan Duan
https://papers.nips.cc/paper_files/paper/2021/hash/dea184826614d3f4c608731389ed0c74-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13649-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dea184826614d3f4c608731389ed0c74-Paper.pdf
https://openreview.net/forum?id=lDVeaQIScg
null
Recent advances in the video question answering (i.e., VideoQA) task have achieved strong success by following the paradigm of fine-tuning each clip-text pair independently on the pretrained transformer-based model via supervised learning. Intuitively, multiple samples (i.e., clips) should be interdependent to capture similar visual and key semantic information in the same video. To consider the interdependent knowledge between contextual clips into the network inference, we propose a Siamese Sampling and Reasoning (SiaSamRea) approach, which consists of a siamese sampling mechanism to generate sparse and similar clips (i.e., siamese clips) from the same video, and a novel reasoning strategy for integrating the interdependent knowledge between contextual clips into the network. The reasoning strategy contains two modules: (1) siamese knowledge generation to learn the inter-relationship among clips; (2) siamese knowledge reasoning to produce the refined soft label by propagating the weights of inter-relationship to the predicted candidates of all clips. Finally, our SiaSamRea can endow the current multimodal reasoning paradigm with the ability of learning from inside via the guidance of soft labels. Extensive experiments demonstrate our SiaSamRea achieves state-of-the-art performance on five VideoQA benchmarks, e.g., a significant +2.1% gain on MSRVTT-QA, +2.9% on MSVD-QA, +1.0% on ActivityNet-QA, +1.8% on How2QA and +4.3% (action) on TGIF-QA.
null
Identification and Estimation of Joint Probabilities of Potential Outcomes in Observational Studies with Covariate Information
https://papers.nips.cc/paper_files/paper/2021/hash/dea9ddb25cbf2352cf4dec30222a02a5-Abstract.html
Ryusei Shingaki, manabu kuroki
https://papers.nips.cc/paper_files/paper/2021/hash/dea9ddb25cbf2352cf4dec30222a02a5-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13650-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dea9ddb25cbf2352cf4dec30222a02a5-Paper.pdf
https://openreview.net/forum?id=LJSnwCx7wzj
https://papers.nips.cc/paper_files/paper/2021/file/dea9ddb25cbf2352cf4dec30222a02a5-Supplemental.pdf
The joint probabilities of potential outcomes are fundamental components of causal inference in the sense that (i) if they are identifiable, then the causal risk is also identifiable, but not vice versa (Pearl, 2009; Tian and Pearl, 2000) and (ii) they enable us to evaluate the probabilistic aspects of ``necessity'', ``sufficiency'', and ``necessity and sufficiency'', which are important concepts of successful explanation (Watson et al., 2020). However, because they are not identifiable without any assumptions, various assumptions have been utilized to evaluate the joint probabilities of potential outcomes, e.g., the assumption of monotonicity (Pearl, 2009; Tian and Pearl, 2000), the independence between potential outcomes (Robins and Richardson, 2011), the condition of gain equality (Li and Pearl, 2019), and the specific functional relationships between cause and effect (Pearl, 2009). Unlike existing identification conditions, in order to evaluate the joint probabilities of potential outcomes without such assumptions, this paper proposes two types of novel identification conditions using covariate information. In addition, when the joint probabilities of potential outcomes are identifiable through the proposed conditions, the estimation problem of the joint probabilities of potential outcomes reduces to that of singular models and thus they cannot be evaluated by standard statistical estimation methods. To solve the problem, this paper proposes a new statistical estimation method based on the augmented Lagrangian method and shows the asymptotic normality of the proposed estimators. Given space constraints, the proofs, the details on the statistical estimation method, some numerical experiments, and the case study are provided in the supplementary material.
null
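For readers unfamiliar with the quantities referenced in the abstract, the standard definitions (as in Pearl, 2009; Tian and Pearl, 2000) of the probabilities of necessity and sufficiency for binary treatment $X$ and outcome $Y$; all three are functionals of the joint distribution of the potential outcomes $(Y_{x=1}, Y_{x=0})$, which is why identifying that joint distribution matters:

```latex
\mathrm{PNS} = P\big(Y_{x=1}=1,\, Y_{x=0}=0\big), \qquad
\mathrm{PN}  = P\big(Y_{x=0}=0 \mid X=1,\, Y=1\big), \qquad
\mathrm{PS}  = P\big(Y_{x=1}=1 \mid X=0,\, Y=0\big).
```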
Online false discovery rate control for anomaly detection in time series
https://papers.nips.cc/paper_files/paper/2021/hash/def130d0b67eb38b7a8f4e7121ed432c-Abstract.html
Quentin Rebjock, Baris Kurt, Tim Januschowski, Laurent Callot
https://papers.nips.cc/paper_files/paper/2021/hash/def130d0b67eb38b7a8f4e7121ed432c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13651-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/def130d0b67eb38b7a8f4e7121ed432c-Paper.pdf
https://openreview.net/forum?id=NvN_B_ZEY5c
https://papers.nips.cc/paper_files/paper/2021/file/def130d0b67eb38b7a8f4e7121ed432c-Supplemental.pdf
This article proposes novel rules for false discovery rate control (FDRC) geared towards online anomaly detection in time series. Online FDRC rules make it possible to control the properties of a sequence of statistical tests. In the context of anomaly detection, the null hypothesis is that an observation is normal and the alternative is that it is anomalous. FDRC rules allow users to target a lower bound on precision in unsupervised settings. The methods proposed in this article overcome shortcomings of previous FDRC rules in the context of anomaly detection, in particular ensuring that power remains high even when the alternative is exceedingly rare (typical in anomaly detection) and the test statistics are serially dependent (typical in time series). We show the soundness of these rules in both theory and experiments.
null
Pragmatic Image Compression for Human-in-the-Loop Decision-Making
https://papers.nips.cc/paper_files/paper/2021/hash/df0aab058ce179e4f7ab135ed4e641a9-Abstract.html
Sid Reddy, Anca Dragan, Sergey Levine
https://papers.nips.cc/paper_files/paper/2021/hash/df0aab058ce179e4f7ab135ed4e641a9-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13652-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/df0aab058ce179e4f7ab135ed4e641a9-Paper.pdf
https://openreview.net/forum?id=ClwfZc4ooKM
https://papers.nips.cc/paper_files/paper/2021/file/df0aab058ce179e4f7ab135ed4e641a9-Supplemental.zip
Standard lossy image compression algorithms aim to preserve an image's appearance, while minimizing the number of bits needed to transmit it. However, the amount of information actually needed by the user for downstream tasks -- e.g., deciding which product to click on in a shopping website -- is likely much lower. To achieve this lower bitrate, we would ideally only transmit the visual features that drive user behavior, while discarding details irrelevant to the user's decisions. We approach this problem by training a compression model through human-in-the-loop learning as the user performs tasks with the compressed images. The key insight is to train the model to produce a compressed image that induces the user to take the same action that they would have taken had they seen the original image. To approximate the loss function for this model, we train a discriminator that tries to distinguish whether a user's action was taken in response to the compressed image or the original. We evaluate our method through experiments with human participants on four tasks: reading handwritten digits, verifying photos of faces, browsing an online shopping catalogue, and playing a car racing video game. The results show that our method learns to match the user's actions with and without compression at lower bitrates than baseline methods, and adapts the compression model to the user's behavior: it preserves the digit number and randomizes handwriting style in the digit reading task, preserves hats and eyeglasses while randomizing faces in the photo verification task, preserves the perceived price of an item while randomizing its color and background in the online shopping task, and preserves upcoming bends in the road in the car racing game.
null
Generalized Linear Bandits with Local Differential Privacy
https://papers.nips.cc/paper_files/paper/2021/hash/df0e09d6f25a15a815563df9827f48fa-Abstract.html
Yuxuan Han, Zhipeng Liang, Yang Wang, Jiheng Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/df0e09d6f25a15a815563df9827f48fa-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13653-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/df0e09d6f25a15a815563df9827f48fa-Paper.pdf
https://openreview.net/forum?id=BEVDmheFG0
https://papers.nips.cc/paper_files/paper/2021/file/df0e09d6f25a15a815563df9827f48fa-Supplemental.pdf
Contextual bandit algorithms are useful in personalized online decision-making. However, many applications such as personalized medicine and online advertising require the utilization of individual-specific information for effective learning, while user's data should remain private from the server due to privacy concerns. This motivates the introduction of local differential privacy (LDP), a stringent notion in privacy, to contextual bandits. In this paper, we design LDP algorithms for stochastic generalized linear bandits to achieve the same regret bound as in non-privacy settings. Our main idea is to develop a stochastic gradient-based estimator and update mechanism to ensure LDP. We then exploit the flexibility of stochastic gradient descent (SGD), whose theoretical guarantee for bandit problems is rarely explored, in dealing with generalized linear bandits. We also develop an estimator and update mechanism based on Ordinary Least Square (OLS) for linear bandits. Finally, we conduct experiments with both simulation and real-world datasets to demonstrate the consistently superb performance of our algorithms under LDP constraints with reasonably small parameters $(\varepsilon, \delta)$ to ensure strong privacy protection.
null
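A schematic numpy sketch of the division of labor described in the abstract: each user computes a gradient of a generalized-linear-model loss locally, clips it and adds Gaussian noise calibrated to $(\varepsilon, \delta)$ before sending it, and the server runs plain SGD on the privatized gradients. The logistic link and the standard Gaussian-mechanism calibration below are generic illustrations, not the paper's exact estimator or update mechanism.

```python
import numpy as np

def local_private_gradient(theta, x, reward, eps, delta, clip=1.0, rng=None):
    """User side: clipped GLM gradient plus Gaussian noise (local DP)."""
    rng = rng or np.random.default_rng()
    pred = 1.0 / (1.0 + np.exp(-x @ theta))                   # logistic link
    grad = (pred - reward) * x
    grad = grad / max(1.0, np.linalg.norm(grad) / clip)       # bound the sensitivity
    sigma = clip * np.sqrt(2.0 * np.log(1.25 / delta)) / eps  # standard Gaussian mechanism
    return grad + rng.normal(0.0, sigma, size=grad.shape)

def server_sgd_update(theta, noisy_grad, lr=0.05):
    """Server side: SGD on the privatized gradient only."""
    return theta - lr * noisy_grad

rng = np.random.default_rng(0)
theta = np.zeros(5)
x, reward = rng.normal(size=5), 1.0
theta = server_sgd_update(theta, local_private_gradient(theta, x, reward,
                                                        eps=0.5, delta=1e-5, rng=rng))
```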
On the Algorithmic Stability of Adversarial Training
https://papers.nips.cc/paper_files/paper/2021/hash/df1f1d20ee86704251795841e6a9405a-Abstract.html
Yue Xing, Qifan Song, Guang Cheng
https://papers.nips.cc/paper_files/paper/2021/hash/df1f1d20ee86704251795841e6a9405a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13654-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/df1f1d20ee86704251795841e6a9405a-Paper.pdf
https://openreview.net/forum?id=xz80iPFIjvG
https://papers.nips.cc/paper_files/paper/2021/file/df1f1d20ee86704251795841e6a9405a-Supplemental.pdf
Adversarial training is a popular tool to remedy the vulnerability of deep learning models against adversarial attacks, and there is rich theoretical literature on the training loss of adversarial training algorithms. In contrast, this paper studies the algorithmic stability of a generic adversarial training algorithm, which can further help to establish an upper bound for generalization error. By figuring out the stability upper bound and lower bound, we argue that the non-differentiability issue of adversarial training causes adversarially trained models to have worse algorithmic stability than their naturally trained counterparts. To tackle this problem, we consider a noise injection method. While the non-differentiability problem seriously affects the stability of adversarial training, injecting noise enables the training trajectory to avoid the occurrence of non-differentiability with dominating probability, hence enhancing the stability performance of adversarial training. Our analysis also studies the relation between the algorithm stability and numerical approximation error of adversarial attacks.
null
Width-based Lookaheads with Learnt Base Policies and Heuristics Over the Atari-2600 Benchmark
https://papers.nips.cc/paper_files/paper/2021/hash/df42e2244c97a0d80d565ae8176d3351-Abstract.html
Stefan O'Toole, Nir Lipovetzky, Miquel Ramirez, Adrian Pearce
https://papers.nips.cc/paper_files/paper/2021/hash/df42e2244c97a0d80d565ae8176d3351-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13655-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/df42e2244c97a0d80d565ae8176d3351-Paper.pdf
https://openreview.net/forum?id=U0k2DVAED5
https://papers.nips.cc/paper_files/paper/2021/file/df42e2244c97a0d80d565ae8176d3351-Supplemental.pdf
We propose new width-based planning and learning algorithms inspired by a careful analysis of the design decisions made by previous width-based planners. The algorithms are applied to the Atari-2600 games, and our best performing algorithm, Novelty guided Critical Path Learning (N-CPL), outperforms the previously introduced width-based planning and learning algorithms $\pi$-IW(1), $\pi$-IW(1)+ and $\pi$-HIW(n, 1). Furthermore, we present a taxonomy of the Atari-2600 games according to some of their defining characteristics. This analysis of the games provides further insight into the behaviour and performance of the algorithms introduced. Namely, for games with large branching factors and games with sparse meaningful rewards, N-CPL outperforms $\pi$-IW(1), $\pi$-IW(1)+ and $\pi$-HIW(n, 1).
null
Characterizing possible failure modes in physics-informed neural networks
https://papers.nips.cc/paper_files/paper/2021/hash/df438e5206f31600e6ae4af72f2725f1-Abstract.html
Aditi Krishnapriyan, Amir Gholami, Shandian Zhe, Robert Kirby, Michael W. Mahoney
https://papers.nips.cc/paper_files/paper/2021/hash/df438e5206f31600e6ae4af72f2725f1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13656-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/df438e5206f31600e6ae4af72f2725f1-Paper.pdf
https://openreview.net/forum?id=a2Gr9gNFD-J
https://papers.nips.cc/paper_files/paper/2021/file/df438e5206f31600e6ae4af72f2725f1-Supplemental.pdf
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models. The typical approach is to incorporate physical domain knowledge as soft constraints on an empirical loss function and use existing machine learning methodologies to train the model. We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena for even slightly more complex problems. In particular, we analyze several distinct situations of widespread physical interest, including learning differential equations with convection, reaction, and diffusion operators. We provide evidence that the soft regularization in PINNs, which involves PDE-based differential operators, can introduce a number of subtle problems, including making the problem more ill-conditioned. Importantly, we show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize. We then describe two promising solutions to address these failure modes. The first approach is to use curriculum regularization, where the PINN's loss term starts from a simple PDE regularization, and becomes progressively more complex as the NN gets trained. The second approach is to pose the problem as a sequence-to-sequence learning task, rather than learning to predict the entire space-time at once. Extensive testing shows that we can achieve up to 1-2 orders of magnitude lower error with these methods as compared to regular PINN training.
null
Artistic Style Transfer with Internal-external Learning and Contrastive Learning
https://papers.nips.cc/paper_files/paper/2021/hash/df5354693177e83e8ba089e94b7b6b55-Abstract.html
Haibo Chen, lei zhao, Zhizhong Wang, Huiming Zhang, Zhiwen Zuo, Ailin Li, Wei Xing, Dongming Lu
https://papers.nips.cc/paper_files/paper/2021/hash/df5354693177e83e8ba089e94b7b6b55-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13657-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/df5354693177e83e8ba089e94b7b6b55-Paper.pdf
https://openreview.net/forum?id=hm0i-cunzGW
https://papers.nips.cc/paper_files/paper/2021/file/df5354693177e83e8ba089e94b7b6b55-Supplemental.pdf
Although existing artistic style transfer methods have achieved significant improvement with deep neural networks, they still suffer from artifacts such as disharmonious colors and repetitive patterns. Motivated by this, we propose an internal-external style transfer method with two contrastive losses. Specifically, we utilize the internal statistics of a single style image to determine the colors and texture patterns of the stylized image, while we leverage the external information of the large-scale style dataset to learn human-aware style information, which makes the color distributions and texture patterns in the stylized image more reasonable and harmonious. In addition, we argue that existing style transfer methods only consider the content-to-stylization and style-to-stylization relations, neglecting the stylization-to-stylization relations. To address this issue, we introduce two contrastive losses, which pull multiple stylization embeddings closer to each other when they share the same content or style, but push them far apart otherwise. We conduct extensive experiments, showing that our proposed method can not only produce visually more harmonious and satisfying artistic images, but also promote the stability and consistency of rendered video clips.
null
Fast Abductive Learning by Similarity-based Consistency Optimization
https://papers.nips.cc/paper_files/paper/2021/hash/df7e148cabfd9b608090fa5ee3348bfe-Abstract.html
Yu-Xuan Huang, Wang-Zhou Dai, Le-Wen Cai, Stephen H Muggleton, Yuan Jiang
https://papers.nips.cc/paper_files/paper/2021/hash/df7e148cabfd9b608090fa5ee3348bfe-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13658-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/df7e148cabfd9b608090fa5ee3348bfe-Paper.pdf
https://openreview.net/forum?id=UMrf6F4Tg9c
https://papers.nips.cc/paper_files/paper/2021/file/df7e148cabfd9b608090fa5ee3348bfe-Supplemental.pdf
To utilize raw inputs and symbolic knowledge simultaneously, some recent neuro-symbolic learning methods use abduction, i.e., abductive reasoning, to integrate sub-symbolic perception and logical inference. When the perception model, e.g., a neural network, outputs facts that are inconsistent with the symbolic background knowledge base, abduction can help revise the incorrectly perceived facts by minimizing the inconsistency between them and the background knowledge. However, to enable effective abduction, previous approaches need an initialized perception model that can discriminate the raw input instances. This limits the application of these methods, as the discrimination ability is usually acquired through thorough pre-training when the raw inputs are difficult to classify. In this paper, we propose a novel abduction strategy, which leverages the similarity between samples, rather than the output of the perceptual neural network, to guide the search in abduction. Based on this principle, we further present ABductive Learning with Similarity (ABLSim) and apply it to some difficult neuro-symbolic learning tasks. Experiments show that the efficiency of ABLSim is significantly higher than that of state-of-the-art neuro-symbolic methods, allowing it to achieve better performance with less labeled data and weaker domain knowledge.
null
To Beam Or Not To Beam: That is a Question of Cooperation for Language GANs
https://papers.nips.cc/paper_files/paper/2021/hash/df9028fcb6b065e000ffe8a4f03eeb38-Abstract.html
Thomas Scialom, Paul-Alexis Dray, Jacopo Staiano, Sylvain Lamprier, Benjamin Piwowarski
https://papers.nips.cc/paper_files/paper/2021/hash/df9028fcb6b065e000ffe8a4f03eeb38-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13659-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/df9028fcb6b065e000ffe8a4f03eeb38-Paper.pdf
https://openreview.net/forum?id=jUL1lnsiU9
https://papers.nips.cc/paper_files/paper/2021/file/df9028fcb6b065e000ffe8a4f03eeb38-Supplemental.pdf
Due to the discrete nature of words, language GANs must be optimized from rewards provided by discriminator networks via reinforcement learning methods. This is a much harder setting than that of continuous tasks, which enjoy gradient flows from discriminators to generators, and it usually leads to dramatic learning instabilities. However, we claim that this can be solved by making the discriminator and generator networks cooperate to produce output sequences during training. These cooperative outputs, inherently built to obtain higher discrimination scores, not only provide denser rewards for training but also form a more compact artificial set for discriminator training, hence improving its accuracy and stability. In this paper, we show that our SelfGAN framework, built on this cooperative principle, outperforms Teacher Forcing and obtains state-of-the-art results on two challenging tasks, Summarization and Question Generation.
null
Shapley Residuals: Quantifying the limits of the Shapley value for explanations
https://papers.nips.cc/paper_files/paper/2021/hash/dfc6aa246e88ab3e32caeaaecf433550-Abstract.html
Indra Kumar, Carlos Scheidegger, Suresh Venkatasubramanian, Sorelle Friedler
https://papers.nips.cc/paper_files/paper/2021/hash/dfc6aa246e88ab3e32caeaaecf433550-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13660-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dfc6aa246e88ab3e32caeaaecf433550-Paper.pdf
https://openreview.net/forum?id=0XJDcC07tQs
https://papers.nips.cc/paper_files/paper/2021/file/dfc6aa246e88ab3e32caeaaecf433550-Supplemental.pdf
Popular feature importance techniques compute additive approximations to nonlinear models by first defining a cooperative game describing the value of different subsets of the model's features, then calculating the resulting game's Shapley values to attribute credit additively between the features. However, the specific modeling settings in which the Shapley values are a poor approximation for the true game have not been well-described. In this paper we utilize an interpretation of Shapley values as the result of an orthogonal projection between vector spaces to calculate a residual representing the kernel component of that projection. We provide an algorithm for computing these residuals, characterize different modeling settings based on the value of the residuals, and demonstrate that they capture information about model predictions that Shapley values cannot. Shapley residuals can thus act as a warning to practitioners against overestimating the degree to which Shapley-value-based explanations give them insight into a model.
null
The Elastic Lottery Ticket Hypothesis
https://papers.nips.cc/paper_files/paper/2021/hash/dfccdb8b1cc7e4dab6d33db0fef12b88-Abstract.html
Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Jingjing Liu, Zhangyang Wang
https://papers.nips.cc/paper_files/paper/2021/hash/dfccdb8b1cc7e4dab6d33db0fef12b88-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13661-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dfccdb8b1cc7e4dab6d33db0fef12b88-Paper.pdf
https://openreview.net/forum?id=zL1szwVKdwc
https://papers.nips.cc/paper_files/paper/2021/file/dfccdb8b1cc7e4dab6d33db0fef12b88-Supplemental.pdf
The Lottery Ticket Hypothesis (LTH) has drawn keen attention to identifying sparse trainable subnetworks, or winning tickets, which can be trained in isolation to achieve similar or even better performance compared to the full models. Despite many efforts, the most effective method to identify such winning tickets is still Iterative Magnitude-based Pruning (IMP), which is computationally expensive and has to be run thoroughly for every different network. A natural question is: can we “transform” the winning ticket found in one network to another with a different architecture, yielding a winning ticket for the latter at the beginning, without re-doing the expensive IMP? Answering this question is not only practically relevant for efficient “once-for-all” winning ticket finding, but also theoretically appealing for uncovering inherently scalable sparse patterns in networks. We conduct extensive experiments on CIFAR-10 and ImageNet, and propose a variety of strategies to tweak the winning tickets found from different networks of the same model family (e.g., ResNets). Based on these results, we articulate the Elastic Lottery Ticket Hypothesis (E-LTH): by mindfully replicating (or dropping) and re-ordering layers for one network, its corresponding winning ticket could be stretched (or squeezed) into a subnetwork for another deeper (or shallower) network from the same family, whose performance is nearly as competitive as the latter’s winning ticket directly found by IMP. We have also extensively compared E-LTH with pruning-at-initialization and dynamic sparse training methods, as well as discussed the generalizability of E-LTH to different model families, layer types, and across datasets. Code is available at https://github.com/VITA-Group/ElasticLTH.
null
Joint Inference for Neural Network Depth and Dropout Regularization
https://papers.nips.cc/paper_files/paper/2021/hash/dfce06801e1a85d6d06f1fdd4475dacd-Abstract.html
Kishan K C, Rui Li, MohammadMahdi Gilany
https://papers.nips.cc/paper_files/paper/2021/hash/dfce06801e1a85d6d06f1fdd4475dacd-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13662-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dfce06801e1a85d6d06f1fdd4475dacd-Paper.pdf
https://openreview.net/forum?id=S5LLQZ-yUP
https://papers.nips.cc/paper_files/paper/2021/file/dfce06801e1a85d6d06f1fdd4475dacd-Supplemental.pdf
Dropout regularization methods prune a neural network's pre-determined backbone structure to avoid overfitting. However, a deep model still tends to be poorly calibrated, with high confidence on incorrect predictions. We propose a unified Bayesian model selection method to jointly infer the most plausible network depth warranted by data and perform dropout regularization simultaneously. In particular, to infer network depth we define a beta process over the number of hidden layers, which allows the depth to go to infinity. Layer-wise activation probabilities induced by the beta process modulate neuron activation via binary vectors of a conjugate Bernoulli process. Experiments across domains show that by adapting network depth and dropout regularization to data, our method achieves superior performance compared to state-of-the-art methods, with well-calibrated uncertainty estimates. In continual learning, our method enables neural networks to dynamically evolve their depths to accommodate incrementally available data beyond their initial structures, and alleviates catastrophic forgetting.
null
Tractable Density Estimation on Learned Manifolds with Conformal Embedding Flows
https://papers.nips.cc/paper_files/paper/2021/hash/dfd786998e082758be12670d856df755-Abstract.html
Brendan Ross, Jesse Cresswell
https://papers.nips.cc/paper_files/paper/2021/hash/dfd786998e082758be12670d856df755-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13663-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/dfd786998e082758be12670d856df755-Paper.pdf
https://openreview.net/forum?id=DqU-rIHy4Eh
https://papers.nips.cc/paper_files/paper/2021/file/dfd786998e082758be12670d856df755-Supplemental.pdf
Normalizing flows are generative models that provide tractable density estimation via an invertible transformation from a simple base distribution to a complex target distribution. However, this technique cannot directly model data supported on an unknown low-dimensional manifold, a common occurrence in real-world domains such as image data. Recent attempts to remedy this limitation have introduced geometric complications that defeat a central benefit of normalizing flows: exact density estimation. We recover this benefit with Conformal Embedding Flows, a framework for designing flows that learn manifolds with tractable densities. We argue that composing a standard flow with a trainable conformal embedding is the most natural way to model manifold-supported data. To this end, we present a series of conformal building blocks and apply them in experiments with synthetic and real-world data to demonstrate that flows can model manifold-supported distributions without sacrificing tractable likelihoods.
null
The Limits of Optimal Pricing in the Dark
https://papers.nips.cc/paper_files/paper/2021/hash/e0126439e08ddfbdf4faa952dc910590-Abstract.html
Quinlan Dawkins, Minbiao Han, Haifeng Xu
https://papers.nips.cc/paper_files/paper/2021/hash/e0126439e08ddfbdf4faa952dc910590-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13664-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e0126439e08ddfbdf4faa952dc910590-Paper.pdf
https://openreview.net/forum?id=TqvwWkdlLIk
https://papers.nips.cc/paper_files/paper/2021/file/e0126439e08ddfbdf4faa952dc910590-Supplemental.pdf
A ubiquitous learning problem in today’s digital market is how, during repeated interactions between a seller and a buyer, the seller can gradually learn optimal pricing decisions based on the buyer’s past purchase responses. A fundamental challenge of learning in such a strategic setup is that the buyer will naturally have incentives to manipulate his responses in order to induce more favorable learning outcomes for him. To understand the limits of the seller’s learning when facing such a strategic and possibly manipulative buyer, we study a natural yet powerful buyer manipulation strategy. That is, before the pricing game starts, the buyer simply commits to “imitate” a different value function by pretending to always react optimally according to this imitative value function. We fully characterize the optimal imitative value function that the buyer should imitate, as well as the resultant seller revenue and buyer surplus under this optimal buyer manipulation. Our characterizations reveal many useful insights about what happens at equilibrium. For example, a seller with concave production cost will obtain essentially 0 revenue at equilibrium, whereas the revenue for a seller with convex production cost is the Bregman divergence of her cost function between no production and certain production. Finally, and importantly, we show that a more powerful class of pricing schemes does not necessarily increase, and may in fact be harmful to, the seller’s revenue. Our results not only lead to an effective prescriptive way for buyers to manipulate learning algorithms but also shed light on the limits of what a seller can really achieve when pricing in the dark.
null
No RL, No Simulation: Learning to Navigate without Navigating
https://papers.nips.cc/paper_files/paper/2021/hash/e02a35b1563d0db53486ec068ebab80f-Abstract.html
Meera Hahn, Devendra Singh Chaplot, Shubham Tulsiani, Mustafa Mukadam, James M. Rehg, Abhinav Gupta
https://papers.nips.cc/paper_files/paper/2021/hash/e02a35b1563d0db53486ec068ebab80f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13665-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e02a35b1563d0db53486ec068ebab80f-Paper.pdf
https://openreview.net/forum?id=8vXYx6d8Wc
https://papers.nips.cc/paper_files/paper/2021/file/e02a35b1563d0db53486ec068ebab80f-Supplemental.pdf
Most prior methods for learning navigation policies require access to simulation environments, as they need online policy interaction and rely on ground-truth maps for rewards. However, building simulators is expensive (it requires manual effort for each and every scene) and creates challenges in transferring learned policies to robotic platforms in the real world, due to the sim-to-real domain gap. In this paper, we pose a simple question: Do we really need active interaction, ground-truth maps, or even reinforcement learning (RL) in order to solve the image-goal navigation task? We propose a self-supervised approach to learn to navigate from only passive videos of roaming. Our approach, No RL, No Simulator (NRNS), is simple and scalable, yet highly effective. NRNS outperforms RL-based formulations by a significant margin. We present NRNS as a strong baseline for any future image-based navigation tasks that use RL or Simulation.
null
Analogous to Evolutionary Algorithm: Designing a Unified Sequence Model
https://papers.nips.cc/paper_files/paper/2021/hash/e02e27e04fdff967ba7d76fb24b8069d-Abstract.html
Jiangning Zhang, Chao Xu, Jian Li, Wenzhou Chen, Yabiao Wang, Ying Tai, Shuo Chen, Chengjie Wang, Feiyue Huang, Yong Liu
https://papers.nips.cc/paper_files/paper/2021/hash/e02e27e04fdff967ba7d76fb24b8069d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13666-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e02e27e04fdff967ba7d76fb24b8069d-Paper.pdf
https://openreview.net/forum?id=yn267zYn8Eg
https://papers.nips.cc/paper_files/paper/2021/file/e02e27e04fdff967ba7d76fb24b8069d-Supplemental.pdf
Inspired by biological evolution, we explain the rationality of the Vision Transformer by analogy with the proven and practical Evolutionary Algorithm (EA) and derive that both of them have a consistent mathematical representation. Analogous to the dynamic local population in EA, we improve the existing transformer structure and propose a more efficient EAT model, and design task-related heads to deal with different tasks more flexibly. Moreover, we introduce a space-filling curve into the current vision transformer to sequence image data into a uniform sequential format. Thus we can design a unified EAT framework to address multi-modal tasks, separating the network architecture from the data format adaptation. Our approach achieves state-of-the-art results on the ImageNet classification task compared with recent vision transformer works while having fewer parameters and greater throughput. We further conduct multi-modal tasks to demonstrate the superiority of the unified EAT, e.g., Text-Based Image Retrieval, where our approach improves rank-1 accuracy by +3.7 points over the baseline on the CSS dataset.
null
Improving Compositionality of Neural Networks by Decoding Representations to Inputs
https://papers.nips.cc/paper_files/paper/2021/hash/e0308d73972d8dd5e2dd27853106386e-Abstract.html
Mike Wu, Noah Goodman, Stefano Ermon
https://papers.nips.cc/paper_files/paper/2021/hash/e0308d73972d8dd5e2dd27853106386e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13667-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e0308d73972d8dd5e2dd27853106386e-Paper.pdf
https://openreview.net/forum?id=jfd_GB546GJ
https://papers.nips.cc/paper_files/paper/2021/file/e0308d73972d8dd5e2dd27853106386e-Supplemental.pdf
In traditional software programs, it is easy to trace program logic from variables back to input, apply assertion statements to block erroneous behavior, and compose programs together. Although deep learning programs have demonstrated strong performance on novel applications, they sacrifice many of the functionalities of traditional software programs. With this as motivation, we take a modest first step towards improving deep learning programs by jointly training a generative model to constrain neural network activations to "decode" back to inputs. We call this design a Decodable Neural Network, or DecNN. Doing so enables a form of compositionality in neural networks, where one can recursively compose DecNN with itself to create an ensemble-like model with uncertainty. In our experiments, we demonstrate applications of this uncertainty to out-of-distribution detection, adversarial example detection, and calibration --- while matching standard neural networks in accuracy. We further explore this compositionality by combining DecNN with pretrained models, where we show promising results indicating that neural networks can be regularized away from using protected features.
null
The Hardness Analysis of Thompson Sampling for Combinatorial Semi-bandits with Greedy Oracle
https://papers.nips.cc/paper_files/paper/2021/hash/e0688d13958a19e087e123148555e4b4-Abstract.html
Fang Kong, Yueran Yang, Wei Chen, Shuai Li
https://papers.nips.cc/paper_files/paper/2021/hash/e0688d13958a19e087e123148555e4b4-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13668-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e0688d13958a19e087e123148555e4b4-Paper.pdf
https://openreview.net/forum?id=N6ubGJ2lQf
null
Thompson sampling (TS) has attracted a lot of interest in the bandit area. It was introduced in the 1930s, but its theoretical guarantees were not established until recent years. All existing analyses of TS in the combinatorial multi-armed bandit (CMAB) setting require an exact oracle to provide optimal solutions for any input. However, such an oracle is usually not feasible since many combinatorial optimization problems are NP-hard and only approximation oracles are available. An example \cite{WangC18} has shown the failure of TS to learn with an approximation oracle. However, this oracle is uncommon and is designed only for a specific problem instance. It remains an open question whether the convergence analysis of TS can be extended beyond the exact oracle in CMAB. In this paper, we study this question under the greedy oracle, which is a common (approximation) oracle with theoretical guarantees for solving many (offline) combinatorial optimization problems. We provide a problem-dependent regret lower bound of order $\Omega(\log T/\Delta^2)$ to quantify the hardness for TS of solving CMAB problems with the greedy oracle, where $T$ is the time horizon and $\Delta$ is some reward gap. We also provide an almost matching regret upper bound. These are the first theoretical results for TS to solve CMAB with a common approximation oracle, and they break the misconception that TS cannot work with approximation oracles.
null
Universal Semi-Supervised Learning
https://papers.nips.cc/paper_files/paper/2021/hash/e06f967fb0d355592be4e7674fa31d26-Abstract.html
Zhuo Huang, Chao Xue, Bo Han, Jian Yang, Chen Gong
https://papers.nips.cc/paper_files/paper/2021/hash/e06f967fb0d355592be4e7674fa31d26-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13669-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e06f967fb0d355592be4e7674fa31d26-Paper.pdf
https://openreview.net/forum?id=zmVumB1Flg
https://papers.nips.cc/paper_files/paper/2021/file/e06f967fb0d355592be4e7674fa31d26-Supplemental.pdf
Universal Semi-Supervised Learning (UniSSL) aims to solve the open-set problem where both the class distribution (i.e., class set) and the feature distribution (i.e., feature domain) differ between the labeled dataset and the unlabeled dataset. Such a problem seriously hinders the real-world deployment of classical SSL. Unlike existing SSL methods targeting the open-set problem, which only study one particular scenario of class distribution mismatch and ignore the feature distribution mismatch, we consider a more general case where a mismatch exists in both the class and feature distributions. In this case, we propose a ''Class-shAring data detection and Feature Adaptation'' (CAFA) framework which requires no prior knowledge of the class relationship between the labeled dataset and the unlabeled dataset. Particularly, CAFA utilizes a novel scoring strategy to detect the data in the shared class set. Then, it conducts domain adaptation to fully exploit the value of the detected class-sharing data for better semi-supervised consistency training. Exhaustive experiments on several benchmark datasets show the effectiveness of our method in tackling open-set problems.
null
Improving Deep Learning Interpretability by Saliency Guided Training
https://papers.nips.cc/paper_files/paper/2021/hash/e0cd3f16f9e883ca91c2a4c24f47b3d9-Abstract.html
Aya Abdelsalam Ismail, Hector Corrada Bravo, Soheil Feizi
https://papers.nips.cc/paper_files/paper/2021/hash/e0cd3f16f9e883ca91c2a4c24f47b3d9-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13670-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e0cd3f16f9e883ca91c2a4c24f47b3d9-Paper.pdf
https://openreview.net/forum?id=x4zs7eC-BsI
https://papers.nips.cc/paper_files/paper/2021/file/e0cd3f16f9e883ca91c2a4c24f47b3d9-Supplemental.pdf
Saliency methods have been widely used to highlight important input features in model predictions. Most existing methods use backpropagation on a modified gradient function to generate saliency maps. Thus, noisy gradients can result in unfaithful feature attributions. In this paper, we tackle this issue and introduce a {\it saliency guided training} procedure for neural networks to reduce noisy gradients used in predictions while retaining the predictive performance of the model. Our saliency guided training procedure iteratively masks features with small and potentially noisy gradients while maximizing the similarity of model outputs for both masked and unmasked inputs. We apply the saliency guided training procedure to various synthetic and real data sets from computer vision, natural language processing, and time series across diverse neural architectures, including Recurrent Neural Networks, Convolutional Networks, and Transformers. Through qualitative and quantitative evaluations, we show that the saliency guided training procedure significantly improves model interpretability across various domains while preserving its predictive performance.
null
SurvITE: Learning Heterogeneous Treatment Effects from Time-to-Event Data
https://papers.nips.cc/paper_files/paper/2021/hash/e0eacd983971634327ae1819ea8b6214-Abstract.html
Alicia Curth, Changhee Lee, Mihaela van der Schaar
https://papers.nips.cc/paper_files/paper/2021/hash/e0eacd983971634327ae1819ea8b6214-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13671-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e0eacd983971634327ae1819ea8b6214-Paper.pdf
https://openreview.net/forum?id=f0_tkoEJV88
https://papers.nips.cc/paper_files/paper/2021/file/e0eacd983971634327ae1819ea8b6214-Supplemental.pdf
We study the problem of inferring heterogeneous treatment effects from time-to-event data. While both the related problems of (i) estimating treatment effects for binary or continuous outcomes and (ii) predicting survival outcomes have been well studied in the recent machine learning literature, their combination -- albeit of high practical relevance -- has received considerably less attention. With the ultimate goal of reliably estimating the effects of treatments on instantaneous risk and survival probabilities, we focus on the problem of learning (discrete-time) treatment-specific conditional hazard functions. We find that unique challenges arise in this context due to a variety of covariate shift issues that go beyond a mere combination of well-studied confounding and censoring biases. We theoretically analyse their effects by adapting recent generalization bounds from domain adaptation and treatment effect estimation to our setting and discuss implications for model design. We use the resulting insights to propose a novel deep learning method for treatment-specific hazard estimation based on balancing representations. We investigate performance across a range of experimental settings and empirically confirm that our method outperforms baselines by addressing covariate shifts from various sources.
null
Optimal Rates for Nonparametric Density Estimation under Communication Constraints
https://papers.nips.cc/paper_files/paper/2021/hash/e1021d43911ca2c1845910d84f40aeae-Abstract.html
Jayadev Acharya, Clement Canonne, Aditya Vikram Singh, Himanshu Tyagi
https://papers.nips.cc/paper_files/paper/2021/hash/e1021d43911ca2c1845910d84f40aeae-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13672-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e1021d43911ca2c1845910d84f40aeae-Paper.pdf
https://openreview.net/forum?id=CsV-Gms_JKy
https://papers.nips.cc/paper_files/paper/2021/file/e1021d43911ca2c1845910d84f40aeae-Supplemental.pdf
We consider density estimation for Besov spaces when the estimator is restricted to use only a limited number of bits about each sample. We provide a noninteractive adaptive estimator which exploits the sparsity of wavelet bases, along with a simulate-and-infer technique from parametric estimation under communication constraints. We show that our estimator is nearly rate-optimal by deriving minimax lower bounds that hold even when interactive protocols are allowed. Interestingly, while our wavelet-based estimator is almost rate-optimal for Sobolev spaces as well, it is unclear whether the standard Fourier basis, which arises naturally for those spaces, can be used to achieve the same performance.
null
Rank Overspecified Robust Matrix Recovery: Subgradient Method and Exact Recovery
https://papers.nips.cc/paper_files/paper/2021/hash/e13748298cfb23c19fdfd134a2221e7b-Abstract.html
Lijun Ding, Liwei Jiang, Yudong Chen, Qing Qu, Zhihui Zhu
https://papers.nips.cc/paper_files/paper/2021/hash/e13748298cfb23c19fdfd134a2221e7b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13673-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e13748298cfb23c19fdfd134a2221e7b-Paper.pdf
https://openreview.net/forum?id=bm1Mrc3WHSe
https://papers.nips.cc/paper_files/paper/2021/file/e13748298cfb23c19fdfd134a2221e7b-Supplemental.pdf
We study the robust recovery of a low-rank matrix from sparsely and grossly corrupted Gaussian measurements, with no prior knowledge on the intrinsic rank. We consider the robust matrix factorization approach. We employ a robust $\ell_1$ loss function and deal with the challenge of the unknown rank by using an overspecified factored representation of the matrix variable. We then solve the associated nonconvex nonsmooth problem using a subgradient method with diminishing stepsizes. We show that under a regularity condition on the sensing matrices and corruption, which we call restricted direction preserving property (RDPP), even with rank overspecified, the subgradient method converges to the exact low-rank solution at a sublinear rate. Moreover, our result is more general in the sense that it automatically speeds up to a linear rate once the factor rank matches the unknown rank. On the other hand, we show that the RDPP condition holds under generic settings, such as Gaussian measurements under independent or adversarial sparse corruptions, where the result could be of independent interest. Both the exact recovery and the convergence rate of the proposed subgradient method are numerically verified in the overspecified regime. Moreover, our experiment further shows that our particular design of diminishing stepsize effectively prevents overfitting for robust recovery under overparameterized models, such as robust matrix sensing and learning robust deep image prior. This regularization effect is worth further investigation.
null
Improving Computational Efficiency in Visual Reinforcement Learning via Stored Embeddings
https://papers.nips.cc/paper_files/paper/2021/hash/e140dbab44e01e699491a59c9978b924-Abstract.html
Lili Chen, Kimin Lee, Aravind Srinivas, Pieter Abbeel
https://papers.nips.cc/paper_files/paper/2021/hash/e140dbab44e01e699491a59c9978b924-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13674-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e140dbab44e01e699491a59c9978b924-Paper.pdf
https://openreview.net/forum?id=2NJstikrGfP
https://papers.nips.cc/paper_files/paper/2021/file/e140dbab44e01e699491a59c9978b924-Supplemental.zip
Recent advances in off-policy deep reinforcement learning (RL) have led to impressive success in complex tasks from visual observations. Experience replay improves sample-efficiency by reusing experiences from the past, and convolutional neural networks (CNNs) process high-dimensional inputs effectively. However, such techniques demand high memory and computational bandwidth. In this paper, we present Stored Embeddings for Efficient Reinforcement Learning (SEER), a simple modification of existing off-policy RL methods, to address these computational and memory requirements. To reduce the computational overhead of gradient updates in CNNs, we freeze the lower layers of CNN encoders early in training due to early convergence of their parameters. Additionally, we reduce memory requirements by storing the low-dimensional latent vectors for experience replay instead of high-dimensional images, enabling an adaptive increase in the replay buffer capacity, a useful technique in constrained-memory settings. In our experiments, we show that SEER does not degrade the performance of RL agents while significantly saving computation and memory across a diverse set of DeepMind Control environments and Atari games.
null
Learning Generalized Gumbel-max Causal Mechanisms
https://papers.nips.cc/paper_files/paper/2021/hash/e143c01e314f7b950daca31188cb5d0f-Abstract.html
Guy Lorberbom, Daniel D. Johnson, Chris J. Maddison, Daniel Tarlow, Tamir Hazan
https://papers.nips.cc/paper_files/paper/2021/hash/e143c01e314f7b950daca31188cb5d0f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13675-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e143c01e314f7b950daca31188cb5d0f-Paper.pdf
https://openreview.net/forum?id=oErdeq9ajjX
null
To perform counterfactual reasoning in Structural Causal Models (SCMs), one needs to know the causal mechanisms, which provide factorizations of conditional distributions into noise sources and deterministic functions mapping realizations of noise to samples. Unfortunately, the causal mechanism is not uniquely identified by data that can be gathered by observing and interacting with the world, so there remains the question of how to choose causal mechanisms. In recent work, Oberst & Sontag (2019) propose Gumbel-max SCMs, which use Gumbel-max reparameterizations as the causal mechanism due to an appealing counterfactual stability property. However, the justification requires appealing to intuition. In this work, we instead argue for choosing a causal mechanism that is best under a quantitative criterion, such as minimizing variance when estimating counterfactual treatment effects. We propose a parameterized family of causal mechanisms that generalize Gumbel-max. We show that they can be trained to minimize counterfactual effect variance and other losses on a distribution of queries of interest, yielding lower-variance estimates of counterfactual treatment effects than fixed alternatives, while also generalizing to queries not seen at training time.
null
Bandit Learning with Delayed Impact of Actions
https://papers.nips.cc/paper_files/paper/2021/hash/e17184bcb70dcf3942c54e0b537ffc6d-Abstract.html
Wei Tang, Chien-Ju Ho, Yang Liu
https://papers.nips.cc/paper_files/paper/2021/hash/e17184bcb70dcf3942c54e0b537ffc6d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13676-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e17184bcb70dcf3942c54e0b537ffc6d-Paper.pdf
https://openreview.net/forum?id=i2bTx7ZWFfI
null
We consider a stochastic multi-armed bandit (MAB) problem with delayed impact of actions. In our setting, actions taken in the past impact the arm rewards in the subsequent future. This delayed impact of actions is prevalent in the real world. For example, the capability to pay back a loan for people in a certain social group might depend on how frequently that group's loan applications have historically been approved. If banks keep rejecting loan applications from people in a disadvantaged group, it could create a feedback loop and further damage the chance of getting loans for people in that group. In this paper, we formulate this delayed and long-term impact of actions within the context of multi-armed bandits. We generalize the bandit setting to encode the dependency of this ``bias'' on the action history during learning. The goal is to maximize the collected utilities over time while taking into account the dynamics created by the delayed impacts of historical actions. We propose an algorithm that achieves a regret of $\tilde{O}(KT^{2/3})$ and show a matching regret lower bound of $\Omega(KT^{2/3})$, where $K$ is the number of arms and $T$ is the learning horizon. Our results complement the bandit literature by adding techniques to deal with actions with long-term impacts and have implications for designing fair algorithms.
null
A Stochastic Newton Algorithm for Distributed Convex Optimization
https://papers.nips.cc/paper_files/paper/2021/hash/e17a5a399de92e1d01a56c50afb2a68e-Abstract.html
Brian Bullins, Kshitij Patel, Ohad Shamir, Nathan Srebro, Blake E. Woodworth
https://papers.nips.cc/paper_files/paper/2021/hash/e17a5a399de92e1d01a56c50afb2a68e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13677-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e17a5a399de92e1d01a56c50afb2a68e-Paper.pdf
https://openreview.net/forum?id=ui0sz9Y2x9X
null
We propose and analyze a stochastic Newton algorithm for homogeneous distributed stochastic convex optimization, where each machine can calculate stochastic gradients of the same population objective, as well as stochastic Hessian-vector products (products of an independent unbiased estimator of the Hessian of the population objective with arbitrary vectors), with many such stochastic computations performed between rounds of communication. We show that our method can reduce the number, and frequency, of required communication rounds, compared to existing methods without hurting performance, by proving convergence guarantees for quasi-self-concordant objectives (e.g., logistic regression), alongside empirical evidence.
null
Are Transformers more robust than CNNs?
https://papers.nips.cc/paper_files/paper/2021/hash/e19347e1c3ca0c0b97de5fb3b690855a-Abstract.html
Yutong Bai, Jieru Mei, Alan L. Yuille, Cihang Xie
https://papers.nips.cc/paper_files/paper/2021/hash/e19347e1c3ca0c0b97de5fb3b690855a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13678-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e19347e1c3ca0c0b97de5fb3b690855a-Paper.pdf
https://openreview.net/forum?id=hbHkvGBZB9
null
The Transformer has emerged as a powerful tool for visual recognition. In addition to demonstrating competitive performance on a broad range of visual benchmarks, recent works also argue that Transformers are much more robust than Convolutional Neural Networks (CNNs). Nonetheless, surprisingly, we find these conclusions are drawn from unfair experimental settings, where Transformers and CNNs are compared at different scales and are applied with distinct training frameworks. In this paper, we aim to provide the first fair and in-depth comparison between Transformers and CNNs, focusing on robustness evaluations. With our unified training setup, we first challenge the previous belief that Transformers outshine CNNs when measuring adversarial robustness. More surprisingly, we find CNNs can easily be as robust as Transformers in defending against adversarial attacks, if they properly adopt Transformers' training recipes. Regarding generalization on out-of-distribution samples, we show that pre-training on (external) large-scale datasets is not a fundamental requirement for enabling Transformers to achieve better performance than CNNs. Moreover, our ablations suggest that such stronger generalization largely benefits from the Transformer's self-attention-like architecture per se, rather than from other training setups. We hope this work can help the community better understand and benchmark the robustness of Transformers and CNNs. The code and models are publicly available at: https://github.com/ytongbai/ViTs-vs-CNNs.
null
Towards Sharper Generalization Bounds for Structured Prediction
https://papers.nips.cc/paper_files/paper/2021/hash/e1b90346c92331860b1391257a106bb1-Abstract.html
Shaojie Li, Yong Liu
https://papers.nips.cc/paper_files/paper/2021/hash/e1b90346c92331860b1391257a106bb1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13679-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e1b90346c92331860b1391257a106bb1-Paper.pdf
https://openreview.net/forum?id=Wt3unuWMyl
https://papers.nips.cc/paper_files/paper/2021/file/e1b90346c92331860b1391257a106bb1-Supplemental.pdf
In this paper, we investigate the generalization performance of structured prediction learning and obtain state-of-the-art generalization bounds. Our analysis is based on the factor graph decomposition of structured prediction algorithms, and we present novel margin guarantees from three different perspectives: Lipschitz continuity, smoothness, and a space capacity condition. In the Lipschitz continuity scenario, we improve the square-root dependency on the label set cardinality of existing bounds to a logarithmic dependence. In the smoothness scenario, we provide generalization bounds that not only have a logarithmic dependency on the label set cardinality but also achieve a faster convergence rate of order $\mathcal{O}(\frac{1}{n})$ in the sample size $n$. In the space capacity scenario, we obtain bounds that do not depend on the label set cardinality and have faster convergence rates than $\mathcal{O}(\frac{1}{\sqrt{n}})$. In each scenario, applications are provided to suggest that these conditions are easy to satisfy.
null
Automated Discovery of Adaptive Attacks on Adversarial Defenses
https://papers.nips.cc/paper_files/paper/2021/hash/e1c13a13fc6b87616b787b986f98a111-Abstract.html
Chengyuan Yao, Pavol Bielik, Petar Tsankov, Martin Vechev
https://papers.nips.cc/paper_files/paper/2021/hash/e1c13a13fc6b87616b787b986f98a111-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13680-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e1c13a13fc6b87616b787b986f98a111-Paper.pdf
https://openreview.net/forum?id=nWz-Si-uTzt
https://papers.nips.cc/paper_files/paper/2021/file/e1c13a13fc6b87616b787b986f98a111-Supplemental.pdf
Reliable evaluation of adversarial defenses is a challenging task, currently limited to an expert who manually crafts attacks that exploit the defense’s inner workings, or to approaches based on ensembles of fixed attacks, none of which may be effective for the specific defense at hand. Our key observation is that adaptive attacks are composed of a set of reusable building blocks that can be formalized in a search space and used to automatically discover attacks for unknown defenses. We evaluate our approach on 24 adversarial defenses and show that it outperforms AutoAttack, the current state-of-the-art tool for reliable evaluation of adversarial defenses: our tool discovered significantly stronger attacks by producing 3.0%-50.8% additional adversarial examples for 10 models, while obtaining attacks with slightly stronger or similar strength for the remaining models.
null
PolarStream: Streaming Object Detection and Segmentation with Polar Pillars
https://papers.nips.cc/paper_files/paper/2021/hash/e1e32e235eee1f970470a3a6658dfdd5-Abstract.html
Qi Chen, Sourabh Vora, Oscar Beijbom
https://papers.nips.cc/paper_files/paper/2021/hash/e1e32e235eee1f970470a3a6658dfdd5-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13681-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e1e32e235eee1f970470a3a6658dfdd5-Paper.pdf
https://openreview.net/forum?id=9N_vdopOU0h
https://papers.nips.cc/paper_files/paper/2021/file/e1e32e235eee1f970470a3a6658dfdd5-Supplemental.pdf
Recent works recognized lidars as an inherently streaming data source and showed that the end-to-end latency of lidar perception models can be reduced significantly by operating on wedge-shaped point cloud sectors rather than the full point cloud. However, due to the use of Cartesian coordinate systems, these methods represent the sectors as rectangular regions, wasting memory and compute. In this work we propose using a polar coordinate system and make two key improvements on this design. First, we increase the spatial context by using multi-scale padding from neighboring sectors: the preceding sector from the current scan and/or the following sector from the past scan. Second, we improve the core polar convolutional architecture by introducing feature undistortion and range stratified convolutions. Experimental results on the nuScenes dataset show significant improvements over other streaming-based methods. We also achieve comparable results to existing non-streaming methods but with lower latencies.
null
Representation Costs of Linear Neural Networks: Analysis and Design
https://papers.nips.cc/paper_files/paper/2021/hash/e22cb9d6bbb4c290a94e4fff4d68a831-Abstract.html
Zhen Dai, Mina Karzand, Nathan Srebro
https://papers.nips.cc/paper_files/paper/2021/hash/e22cb9d6bbb4c290a94e4fff4d68a831-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13682-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e22cb9d6bbb4c290a94e4fff4d68a831-Paper.pdf
https://openreview.net/forum?id=pSNs0PKx0Mw
null
For different parameterizations (mappings from parameters to predictors), we study the regularization cost in predictor space induced by $l_2$ regularization on the parameters (weights). We focus on linear neural networks as parameterizations of linear predictors. We identify the representation cost of certain sparse linear ConvNets and residual networks. In order to get a better understanding of how the architecture and parameterization affect the representation cost, we also study the reverse problem, identifying which regularizers on linear predictors (e.g., $l_p$ norms, group norms, the $k$-support-norm, elastic net) can be the representation cost induced by simple $l_2$ regularization, and designing the parameterizations that do so.
null
Teaching via Best-Case Counterexamples in the Learning-with-Equivalence-Queries Paradigm
https://papers.nips.cc/paper_files/paper/2021/hash/e22dd5dabde45eda5a1a67772c8e25dd-Abstract.html
Akash Kumar, Yuxin Chen, Adish Singla
https://papers.nips.cc/paper_files/paper/2021/hash/e22dd5dabde45eda5a1a67772c8e25dd-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13683-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e22dd5dabde45eda5a1a67772c8e25dd-Paper.pdf
https://openreview.net/forum?id=Ee7IOrpLwT
https://papers.nips.cc/paper_files/paper/2021/file/e22dd5dabde45eda5a1a67772c8e25dd-Supplemental.pdf
We study the sample complexity of teaching, termed as "teaching dimension" (TD) in the literature, for the learning-with-equivalence-queries (LwEQ) paradigm. More concretely, we consider a learner who asks equivalence queries (i.e., "is the queried hypothesis the target hypothesis?"), and a teacher responds either "yes" or "no" along with a counterexample to the queried hypothesis. This learning paradigm has been extensively studied when the learner receives worst-case or random counterexamples; in this paper, we consider the optimal teacher who picks best-case counterexamples to teach the target hypothesis within a hypothesis class. For this optimal teacher, we introduce LwEQ-TD, a notion of TD capturing the teaching complexity (i.e., the number of queries made) in this paradigm. We show that a significant reduction in queries can be achieved with best-case counterexamples, in contrast to worst-case or random counterexamples, for different hypothesis classes. Furthermore, we establish new connections of LwEQ-TD to the well-studied notions of TD in the learning-from-samples paradigm.
null
Distilling Meta Knowledge on Heterogeneous Graph for Illicit Drug Trafficker Detection on Social Media
https://papers.nips.cc/paper_files/paper/2021/hash/e234e195f3789f05483378c397db1cb5-Abstract.html
Yiyue Qian, Yiming Zhang, Yanfang (Fa Ye, Chuxu Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/e234e195f3789f05483378c397db1cb5-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13684-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e234e195f3789f05483378c397db1cb5-Paper.pdf
https://openreview.net/forum?id=9jRH00HT4-4
https://papers.nips.cc/paper_files/paper/2021/file/e234e195f3789f05483378c397db1cb5-Supplemental.pdf
Driven by considerable profits, the crime of drug trafficking (a.k.a. illicit drug trading) has co-evolved with modern technologies; e.g., social media such as Instagram has become a popular platform for marketing and selling illicit drugs. The activities of online drug trafficking are nimble and resilient, which calls for novel techniques to effectively detect, disrupt, and dismantle illicit drug trades. In this paper, we propose a holistic framework named MetaHG to automatically detect illicit drug traffickers on social media (i.e., Instagram), by tackling the following two new challenges: (1) different from existing works which merely focus on analyzing post content, MetaHG is capable of jointly modeling multi-modal content and relational structured information on social media for illicit drug trafficker detection; (2) in addition, through the proposed meta-learning technique, MetaHG addresses the issue of requiring sufficient data for model training. More specifically, in our proposed MetaHG, we first build a heterogeneous graph (HG) to comprehensively characterize the complex ecosystem of drug trafficking on social media. Then, we employ a relation-based graph convolutional neural network to learn node (i.e., user) representations over the built HG, in which we introduce graph structure refinement to compensate for the sparse connections among entities in the HG for more robust node representation learning. Afterwards, we propose a meta-learning algorithm for model optimization. A self-supervised module and a knowledge distillation module are further designed to exploit unlabeled data for improving the model. Extensive experiments based on real-world data collected from Instagram demonstrate that the proposed MetaHG outperforms state-of-the-art methods.
null
Curriculum Disentangled Recommendation with Noisy Multi-feedback
https://papers.nips.cc/paper_files/paper/2021/hash/e242660df1b69b74dcc7fde711f924ff-Abstract.html
Hong Chen, Yudong Chen, Xin Wang, Ruobing Xie, Rui Wang, Feng Xia, Wenwu Zhu
https://papers.nips.cc/paper_files/paper/2021/hash/e242660df1b69b74dcc7fde711f924ff-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13685-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e242660df1b69b74dcc7fde711f924ff-Paper.pdf
https://openreview.net/forum?id=d1FHmxHPEQ0
https://papers.nips.cc/paper_files/paper/2021/file/e242660df1b69b74dcc7fde711f924ff-Supplemental.pdf
Learning disentangled representations for user intentions from multi-feedback (i.e., positive and negative feedback) can enhance the accuracy and explainability of recommendation algorithms. However, learning such disentangled representations from multi-feedback data is challenging because i) multi-feedback is complex: there exist complex relations among different types of feedback (e.g., click, unclick, and dislike) as well as various user intentions, and ii) multi-feedback is noisy: there exists noisy (useless) information in both features and labels, which may deteriorate the recommendation performance. Existing works on disentangled representation learning only focus on positive feedback, failing to handle the complex relations and noise hidden in multi-feedback data. To solve this problem, in this work we propose a Curriculum Disentangled Recommendation (CDR) model that is capable of efficiently learning disentangled representations from complex and noisy multi-feedback for better recommendation. Concretely, we design a co-filtering dynamic routing mechanism that simultaneously captures the complex relations among different behavioral feedback and user intentions and denoises the representations at the feature level. We then present an adjustable self-evaluating curriculum that is able to evaluate sample difficulties for better model training and conduct denoising at the label level by disregarding useless information. Our extensive experiments on several real-world datasets demonstrate that the proposed CDR model can significantly outperform several state-of-the-art methods in terms of recommendation accuracy.
null
Interpretable agent communication from scratch (with a generic visual processor emerging on the side)
https://papers.nips.cc/paper_files/paper/2021/hash/e250c59336b505ed411d455abaa30b4d-Abstract.html
Roberto Dessi, Eugene Kharitonov, Baroni Marco
https://papers.nips.cc/paper_files/paper/2021/hash/e250c59336b505ed411d455abaa30b4d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13686-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e250c59336b505ed411d455abaa30b4d-Paper.pdf
https://openreview.net/forum?id=1AvtkM4H-y7
https://papers.nips.cc/paper_files/paper/2021/file/e250c59336b505ed411d455abaa30b4d-Supplemental.pdf
As deep networks begin to be deployed as autonomous agents, the issue of how they can communicate with each other becomes important. Here, we train two deep nets from scratch to perform realistic referent identification through unsupervised emergent communication. We show that the largely interpretable emergent protocol allows the nets to successfully communicate even about object types they did not see at training time. The visual representations induced as a by-product of our training regime, moreover, show comparable quality, when re-used as generic visual features, to a recent self-supervised learning model. Our results provide concrete evidence of the viability of (interpretable) emergent deep net communication in a more realistic scenario than previously considered, as well as establishing an intriguing link between this field and self-supervised visual learning.
null
MAU: A Motion-Aware Unit for Video Prediction and Beyond
https://papers.nips.cc/paper_files/paper/2021/hash/e25cfa90f04351958216f97e3efdabe9-Abstract.html
Zheng Chang, Xinfeng Zhang, Shanshe Wang, Siwei Ma, Yan Ye, Xiang Xinguang, Wen Gao
https://papers.nips.cc/paper_files/paper/2021/hash/e25cfa90f04351958216f97e3efdabe9-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13687-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e25cfa90f04351958216f97e3efdabe9-Paper.pdf
https://openreview.net/forum?id=qwtfY-3ibt7
null
Accurately predicting inter-frame motion information plays a key role in video prediction tasks. In this paper, we propose a Motion-Aware Unit (MAU) to capture reliable inter-frame motion information by broadening the temporal receptive field of the predictive units. The MAU consists of two modules, the attention module and the fusion module. The attention module aims to learn an attention map based on the correlations between the current spatial state and the historical spatial states. Based on the learned attention map, the historical temporal states are aggregated into augmented motion information (AMI). In this way, the predictive unit can perceive more temporal dynamics from a wider receptive field. Then, the fusion module further aggregates the augmented motion information (AMI) and the current appearance information (the current spatial state) into the final predicted frame. The computational load of MAU is relatively low, and the proposed unit can be easily applied to other predictive models. Moreover, an information recalling scheme is employed in the encoders and decoders to help preserve the visual details of the predictions. We evaluate the MAU on both video prediction and early action recognition tasks. Experimental results show that the MAU outperforms the state-of-the-art methods on both tasks.
null
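The MAU abstract above describes an attention module that scores historical spatial states against the current one and then aggregates historical temporal states into augmented motion information (AMI). Below is a minimal, hypothetical NumPy sketch of that attention-and-aggregate step; the tensor shapes, normalized-correlation scoring, and softmax weighting are my own assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def motion_aware_attention(curr_spatial, hist_spatial, hist_temporal):
    """Toy sketch of an MAU-style attention module.

    curr_spatial  : (C, H, W)    current spatial state
    hist_spatial  : (T, C, H, W) historical spatial states
    hist_temporal : (T, C, H, W) historical temporal states
    Returns AMI   : (C, H, W)    attention-weighted aggregation of hist_temporal
    """
    T = hist_spatial.shape[0]
    # Correlation between the current spatial state and each historical spatial state.
    scores = np.array([
        np.sum(curr_spatial * hist_spatial[t]) /
        (np.linalg.norm(curr_spatial) * np.linalg.norm(hist_spatial[t]) + 1e-8)
        for t in range(T)
    ])
    # Softmax over time yields the attention weights along the temporal axis.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Aggregate historical temporal states into augmented motion information (AMI).
    return np.tensordot(weights, hist_temporal, axes=(0, 0))

# Example: 8 past frames, 16 channels, 4x4 feature maps.
rng = np.random.default_rng(0)
ami = motion_aware_attention(rng.normal(size=(16, 4, 4)),
                             rng.normal(size=(8, 16, 4, 4)),
                             rng.normal(size=(8, 16, 4, 4)))
print(ami.shape)  # (16, 4, 4)
```

A fusion module would then combine this AMI with the current spatial state to produce the predicted frame; that part is not shown here.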
Successor Feature Landmarks for Long-Horizon Goal-Conditioned Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2021/hash/e27c71957d1e6c223e0d48a165da2ee1-Abstract.html
Christopher Hoang, Sungryull Sohn, Jongwook Choi, Wilka Carvalho, Honglak Lee
https://papers.nips.cc/paper_files/paper/2021/hash/e27c71957d1e6c223e0d48a165da2ee1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13688-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e27c71957d1e6c223e0d48a165da2ee1-Paper.pdf
https://openreview.net/forum?id=rD6ulZFTbf
https://papers.nips.cc/paper_files/paper/2021/file/e27c71957d1e6c223e0d48a165da2ee1-Supplemental.pdf
Operating in the real-world often requires agents to learn about a complex environment and apply this understanding to achieve a breadth of goals. This problem, known as goal-conditioned reinforcement learning (GCRL), becomes especially challenging for long-horizon goals. Current methods have tackled this problem by augmenting goal-conditioned policies with graph-based planning algorithms. However, they struggle to scale to large, high-dimensional state spaces and assume access to exploration mechanisms for efficiently collecting training data. In this work, we introduce Successor Feature Landmarks (SFL), a framework for exploring large, high-dimensional environments so as to obtain a policy that is proficient for any goal. SFL leverages the ability of successor features (SF) to capture transition dynamics, using it to drive exploration by estimating state-novelty and to enable high-level planning by abstracting the state-space as a non-parametric landmark-based graph. We further exploit SF to directly compute a goal-conditioned policy for inter-landmark traversal, which we use to execute plans to "frontier" landmarks at the edge of the explored state space. We show in our experiments on MiniGrid and ViZDoom that SFL enables efficient exploration of large, high-dimensional state spaces and outperforms state-of-the-art baselines on long-horizon GCRL tasks.
null
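Successor features (SF), which the SFL framework above builds on, satisfy a Bellman-style recursion: psi(s, a) = phi(s) + gamma * E[psi(s', a')]. The sketch below shows a tabular TD(0) update for SF under a fixed policy; the feature map, state/action indices, and hyperparameters are hypothetical placeholders, and SFL's novelty-driven exploration and landmark graph are not shown.

```python
import numpy as np

def sf_td_update(psi, s, a, s_next, a_next, phi, gamma=0.99, lr=0.1):
    """One TD(0) update of tabular successor features.

    psi : (num_states, num_actions, feat_dim) successor-feature table
    phi : (num_states, feat_dim) state feature map
    """
    target = phi[s] + gamma * psi[s_next, a_next]
    psi[s, a] += lr * (target - psi[s, a])
    return psi

num_states, num_actions, feat_dim = 5, 2, 3
psi = np.zeros((num_states, num_actions, feat_dim))
phi = np.eye(num_states, feat_dim)   # toy features: (partial) one-hot encoding of states
psi = sf_td_update(psi, s=0, a=1, s_next=2, a_next=0, phi=phi)
print(psi[0, 1])
```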
Streaming Belief Propagation for Community Detection
https://papers.nips.cc/paper_files/paper/2021/hash/e2a2dcc36a08a345332c751b2f2e476c-Abstract.html
Yuchen Wu, Jakab Tardos, Mohammadhossein Bateni, André Linhares, Filipe Miguel Goncalves de Almeida, Andrea Montanari, Ashkan Norouzi-Fard
https://papers.nips.cc/paper_files/paper/2021/hash/e2a2dcc36a08a345332c751b2f2e476c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13689-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e2a2dcc36a08a345332c751b2f2e476c-Paper.pdf
https://openreview.net/forum?id=3-F0-Zpcrno
null
The community detection problem requires clustering the nodes of a network into a small number of well-connected ‘communities’. There has been substantial recent progress in characterizing the fundamental statistical limits of community detection under simple stochastic block models. However, in real-world applications, the network structure is typically dynamic, with nodes that join over time. In this setting, we would like a detection algorithm to perform only a limited number of updates at each node arrival. While standard voting approaches satisfy this constraint, it is unclear whether they exploit the network information optimally. We introduce a simple model for networks growing over time which we refer to as the streaming stochastic block model (StSBM). Within this model, we prove that voting algorithms have fundamental limitations. We also develop a streaming belief-propagation (STREAMBP) approach, for which we prove optimality in certain regimes. We validate our theoretical findings on synthetic and real data.
null
The staircase property: How hierarchical structure can guide deep learning
https://papers.nips.cc/paper_files/paper/2021/hash/e2db7186375992e729165726762cb4c1-Abstract.html
Emmanuel Abbe, Enric Boix-Adsera, Matthew S Brennan, Guy Bresler, Dheeraj Nagaraj
https://papers.nips.cc/paper_files/paper/2021/hash/e2db7186375992e729165726762cb4c1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13690-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e2db7186375992e729165726762cb4c1-Paper.pdf
https://openreview.net/forum?id=fj6rFciApc
https://papers.nips.cc/paper_files/paper/2021/file/e2db7186375992e729165726762cb4c1-Supplemental.pdf
This paper identifies a structural property of data distributions that enables deep neural networks to learn hierarchically. We define the ``staircase'' property for functions over the Boolean hypercube, which posits that high-order Fourier coefficients are reachable from lower-order Fourier coefficients along increasing chains. We prove that functions satisfying this property can be learned in polynomial time using layerwise stochastic coordinate descent on regular neural networks -- a class of network architectures and initializations that have homogeneity properties. Our analysis shows that for such staircase functions and neural networks, the gradient-based algorithm learns high-level features by greedily combining lower-level features along the depth of the network. We further back our theoretical results with experiments showing that staircase functions are learnable by more standard ResNet architectures with stochastic gradient descent. Both the theoretical and experimental results support the fact that the staircase property has a role to play in understanding the capabilities of gradient-based learning on regular networks, in contrast to general polynomial-size networks that can emulate any Statistical Query or PAC algorithm, as recently shown.
null
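A commonly cited example of a staircase function over the Boolean hypercube {-1, +1}^n is S(x) = x1 + x1*x2 + x1*x2*x3 + ..., where each higher-order monomial extends the previous one by a single coordinate, so every Fourier coefficient is reachable from a lower-order one along an increasing chain. The snippet below just evaluates this example; it is an illustration of the property as I understand it, not code from the paper.

```python
import itertools

def staircase(x):
    """S(x) = x1 + x1*x2 + ... + x1*...*xn for x in {-1, +1}^n."""
    total, prod = 0, 1
    for xi in x:
        prod *= xi       # extend the current monomial by one coordinate
        total += prod
    return total

# The support of the Fourier expansion forms the chain
# {x1} subset {x1 x2} subset {x1 x2 x3} ... -- the "staircase" structure.
for x in itertools.product([-1, 1], repeat=3):
    print(x, staircase(x))
```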
MagNet: A Neural Network for Directed Graphs
https://papers.nips.cc/paper_files/paper/2021/hash/e32084632d369461572832e6582aac36-Abstract.html
Xitong Zhang, Yixuan He, Nathan Brugnone, Michael Perlmutter, Matthew Hirn
https://papers.nips.cc/paper_files/paper/2021/hash/e32084632d369461572832e6582aac36-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13691-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e32084632d369461572832e6582aac36-Paper.pdf
https://openreview.net/forum?id=TRDAFiwDq8A
https://papers.nips.cc/paper_files/paper/2021/file/e32084632d369461572832e6582aac36-Supplemental.pdf
The prevalence of graph-based data has spurred the rapid development of graph neural networks (GNNs) and related machine learning algorithms. Yet, despite the many datasets naturally modeled as directed graphs, including citation, website, and traffic networks, the vast majority of this research focuses on undirected graphs. In this paper, we propose MagNet, a GNN for directed graphs based on a complex Hermitian matrix known as the magnetic Laplacian. This matrix encodes undirected geometric structure in the magnitude of its entries and directional information in their phase. A charge parameter attunes spectral information to variation among directed cycles. We apply our network to a variety of directed graph node classification and link prediction tasks showing that MagNet performs well on all tasks and that its performance exceeds all other methods on a majority of such tasks. The underlying principles of MagNet are such that it can be adapted to other GNN architectures.
null
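The magnetic Laplacian that MagNet builds on is a complex Hermitian matrix formed from the symmetrized adjacency (the magnitude) and an antisymmetric phase that encodes edge direction. A minimal NumPy construction is sketched below, assuming an unweighted directed adjacency matrix and the standard unnormalized parameterization with charge parameter q; the normalization and default q used in the paper may differ.

```python
import numpy as np

def magnetic_laplacian(A, q=0.25):
    """Unnormalized magnetic Laplacian of a directed adjacency matrix A.

    A : (n, n) 0/1 adjacency, A[i, j] = 1 for edge i -> j
    q : charge parameter controlling how direction enters the phase
    """
    A_sym = 0.5 * (A + A.T)                  # undirected "magnitude" part
    theta = 2.0 * np.pi * q * (A - A.T)      # antisymmetric phase encodes direction
    H = A_sym * np.exp(1j * theta)           # complex Hermitian "adjacency"
    D = np.diag(A_sym.sum(axis=1))           # degrees of the symmetrized graph
    return D - H

A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)       # a directed 3-cycle
L = magnetic_laplacian(A)
print(np.allclose(L, L.conj().T))            # True: the matrix is Hermitian
print(np.linalg.eigvalsh(L))                 # real eigenvalues, usable for spectral filters
```

Because the spectrum is real and non-negative, spectral graph convolutions can be defined on directed graphs exactly as in the undirected case, which is the design choice MagNet exploits.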
Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning
https://papers.nips.cc/paper_files/paper/2021/hash/e3251075554389fe91d17a794861d47b-Abstract.html
Hayeon Lee, Sewoong Lee, Song Chong, Sung Ju Hwang
https://papers.nips.cc/paper_files/paper/2021/hash/e3251075554389fe91d17a794861d47b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13692-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e3251075554389fe91d17a794861d47b-Paper.pdf
https://openreview.net/forum?id=oE5lMpPRm0
https://papers.nips.cc/paper_files/paper/2021/file/e3251075554389fe91d17a794861d47b-Supplemental.pdf
For deployment, neural architecture search should be hardware-aware, in order to satisfy device-specific constraints (e.g., memory usage, latency, and energy consumption) and enhance model efficiency. Existing methods for hardware-aware NAS collect a large number of samples (e.g., accuracy and latency) from a target device and either build a lookup table or train a latency estimator. However, such approaches are impractical in real-world scenarios, as there exist numerous devices with different hardware specifications and collecting samples from such a large number of devices would require prohibitive computational and monetary costs. To overcome these limitations, we propose the Hardware-adaptive Efficient Latency Predictor (HELP), which formulates device-specific latency estimation as a meta-learning problem, such that we can estimate the latency of a model for a given task on an unseen device with only a few samples. To this end, we introduce novel hardware embeddings that represent any device as a black-box function that outputs latencies, and meta-learn the hardware-adaptive latency predictor in a device-dependent manner using these hardware embeddings. We validate the proposed HELP for its latency estimation performance on unseen platforms, on which it achieves high estimation performance with as few as 10 measurement samples, outperforming all relevant baselines. We also validate end-to-end NAS frameworks using HELP against ones without it, and show that it largely reduces the total time cost of the base NAS method in latency-constrained settings.
null
Topological Relational Learning on Graphs
https://papers.nips.cc/paper_files/paper/2021/hash/e334fd9dac68f13fa1a57796148cf812-Abstract.html
Yuzhou Chen, Baris Coskunuzer, Yulia Gel
https://papers.nips.cc/paper_files/paper/2021/hash/e334fd9dac68f13fa1a57796148cf812-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13693-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e334fd9dac68f13fa1a57796148cf812-Paper.pdf
https://openreview.net/forum?id=YOc9i6-NrQk
https://papers.nips.cc/paper_files/paper/2021/file/e334fd9dac68f13fa1a57796148cf812-Supplemental.pdf
Graph neural networks (GNNs) have emerged as a powerful tool for graph classification and representation learning. However, GNNs tend to suffer from over-smoothing problems and are vulnerable to graph perturbations. To address these challenges, we propose a novel topological neural framework of topological relational inference (TRI) which allows for integrating higher-order graph information into GNNs and for systematically learning a local graph structure. The key idea is to rewire the original graph by using the persistent homology of the small neighborhoods of the nodes and then to incorporate the extracted topological summaries as side information into the local algorithm. As a result, the new framework enables us to harness both the conventional information on the graph structure and information on higher-order topological properties of the graph. We derive theoretical properties on the stability of the new local topological representation of the graph and discuss its implications for the graph's algebraic connectivity. The experimental results on node classification tasks demonstrate that the new TRI-GNN outperforms all 14 state-of-the-art baselines on 6 out of 7 graphs and exhibits higher robustness to perturbations, yielding up to 10\% better performance under noisy scenarios.
null
Learning Theory Can (Sometimes) Explain Generalisation in Graph Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/e34376937c784505d9b4fcd980c2f1ce-Abstract.html
Pascal Esser, Leena Chennuru Vankadara, Debarghya Ghoshdastidar
https://papers.nips.cc/paper_files/paper/2021/hash/e34376937c784505d9b4fcd980c2f1ce-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13694-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e34376937c784505d9b4fcd980c2f1ce-Paper.pdf
https://openreview.net/forum?id=Pye1c7itBu
https://papers.nips.cc/paper_files/paper/2021/file/e34376937c784505d9b4fcd980c2f1ce-Supplemental.pdf
In recent years, several results in the supervised learning setting suggested that classical statistical learning-theoretic measures, such as VC dimension, do not adequately explain the performance of deep learning models, which prompted a slew of work in the infinite-width and iteration regimes. However, there is little theoretical explanation for the success of neural networks beyond the supervised setting. In this paper we argue that, under some distributional assumptions, classical learning-theoretic measures can sufficiently explain generalization for graph neural networks in the transductive setting. In particular, we provide a rigorous analysis of the performance of neural networks in the context of transductive inference, specifically by analysing the generalisation properties of graph convolutional networks for the problem of node classification. While VC-dimension does result in trivial generalisation error bounds in this setting as well, we show that transductive Rademacher complexity can explain the generalisation properties of graph convolutional networks for stochastic block models. We further use the generalisation error bounds based on transductive Rademacher complexity to demonstrate the role of graph convolutions and network architectures in achieving smaller generalisation error and provide insights into when the graph structure can help in learning. The findings of this paper could renew interest in studying generalisation in neural networks in terms of learning-theoretic measures, albeit in specific problems.
null
Federated Linear Contextual Bandits
https://papers.nips.cc/paper_files/paper/2021/hash/e347c51419ffb23ca3fd5050202f9c3d-Abstract.html
Ruiquan Huang, Weiqiang Wu, Jing Yang, Cong Shen
https://papers.nips.cc/paper_files/paper/2021/hash/e347c51419ffb23ca3fd5050202f9c3d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13695-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e347c51419ffb23ca3fd5050202f9c3d-Paper.pdf
https://openreview.net/forum?id=Rt5mjXAqHrY
https://papers.nips.cc/paper_files/paper/2021/file/e347c51419ffb23ca3fd5050202f9c3d-Supplemental.pdf
This paper presents a novel federated linear contextual bandits model, where individual clients face different $K$-armed stochastic bandits coupled through common global parameters. By leveraging the geometric structure of the linear rewards, a collaborative algorithm called Fed-PE is proposed to cope with the heterogeneity across clients without exchanging local feature vectors or raw data. Fed-PE relies on a novel multi-client G-optimal design, and achieves near-optimal regrets for both disjoint and shared parameter cases with logarithmic communication costs. In addition, a new concept called collinearly-dependent policies is introduced, based on which a tight minimax regret lower bound for the disjoint parameter case is derived. Experiments demonstrate the effectiveness of the proposed algorithms on both synthetic and real-world datasets.
null
Least Square Calibration for Peer Reviews
https://papers.nips.cc/paper_files/paper/2021/hash/e354fd90b2d5c777bfec87a352a18976-Abstract.html
Sijun Tan, Jibang Wu, Xiaohui Bei, Haifeng Xu
https://papers.nips.cc/paper_files/paper/2021/hash/e354fd90b2d5c777bfec87a352a18976-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13696-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e354fd90b2d5c777bfec87a352a18976-Paper.pdf
https://openreview.net/forum?id=rTxCRLXRtk9
https://papers.nips.cc/paper_files/paper/2021/file/e354fd90b2d5c777bfec87a352a18976-Supplemental.zip
Peer review systems such as conference paper review often suffer from the issue of miscalibration. Previous works on peer review calibration usually only use the ordinal information or assume simplistic reviewer scoring functions such as linear functions. In practice, applications like academic conferences often rely on manual methods, such as open discussions, to mitigate miscalibration. It remains an important question to develop algorithms that can handle different types of miscalibration based on available prior knowledge. In this paper, we propose a flexible framework, namely \emph{least square calibration} (LSC), for selecting top candidates from peer ratings. Our framework provably performs perfect calibration for noiseless linear scoring functions under mild assumptions, yet also provides competitive calibration results when the scoring function comes from broader classes beyond linear functions and with arbitrary noise. On our synthetic dataset, we empirically demonstrate that our algorithm consistently outperforms the baseline that selects top papers based on the highest average ratings.
null
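To make the miscalibration setting above concrete, here is a hypothetical toy baseline that is much simpler than the LSC framework in the paper: assume each reviewer only adds a constant bias to the true paper quality and recover both by ordinary least squares on the observed ratings. The additive-bias model, variable names, and centering convention are my own assumptions for illustration; LSC handles general (noisy, possibly nonlinear) reviewer scoring functions, which this toy model does not.

```python
import numpy as np

def calibrate_additive_bias(ratings):
    """ratings[i, j] = observed score of paper i from reviewer j (NaN if unassigned).

    Fits score_ij ~ q_i + b_j by least squares and returns estimated qualities q.
    """
    n_papers, n_reviewers = ratings.shape
    obs = [(i, j, ratings[i, j]) for i in range(n_papers)
           for j in range(n_reviewers) if not np.isnan(ratings[i, j])]
    X = np.zeros((len(obs), n_papers + n_reviewers))
    y = np.zeros(len(obs))
    for k, (i, j, s) in enumerate(obs):
        X[k, i] = 1.0                  # paper-quality coefficient
        X[k, n_papers + j] = 1.0       # reviewer-bias coefficient
        y[k] = s
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    q = theta[:n_papers]
    return q - q.mean()                # qualities are identifiable only up to a shift

ratings = np.array([[7.0, 5.0, np.nan],
                    [6.0, np.nan, 4.0],
                    [np.nan, 3.0, 2.0]])
print(calibrate_additive_bias(ratings))
```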
Scaling Up Exact Neural Network Compression by ReLU Stability
https://papers.nips.cc/paper_files/paper/2021/hash/e35d7a5768c4b85b4780384d55dc3620-Abstract.html
Thiago Serra, Xin Yu, Abhinav Kumar, Srikumar Ramalingam
https://papers.nips.cc/paper_files/paper/2021/hash/e35d7a5768c4b85b4780384d55dc3620-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13697-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e35d7a5768c4b85b4780384d55dc3620-Paper.pdf
https://openreview.net/forum?id=tqQ-8MuSqm
https://papers.nips.cc/paper_files/paper/2021/file/e35d7a5768c4b85b4780384d55dc3620-Supplemental.pdf
We can compress a rectifier network while exactly preserving its underlying functionality with respect to a given input domain if some of its neurons are stable. However, current approaches to determine the stability of neurons with Rectified Linear Unit (ReLU) activations require solving or finding a good approximation to multiple discrete optimization problems. In this work, we introduce an algorithm based on solving a single optimization problem to identify all stable neurons. Our approach is, at the median, 183 times faster than the state-of-the-art method on CIFAR-10, which allows us to explore exact compression on deeper (5 x 100) and wider (2 x 800) networks within minutes. For classifiers trained under an amount of L1 regularization that does not worsen accuracy, we can remove up to 56% of the connections on the CIFAR-10 dataset. The code is available at https://github.com/yuxwind/ExactCompression.
null
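Stable neurons are ReLUs that are provably always active or always inactive over the input domain. The sketch below finds a (possibly smaller) set of them with simple interval bound propagation over a box domain, which is a cruder test than the single optimization problem proposed in the paper; the layer sizes and box bounds are hypothetical.

```python
import numpy as np

def interval_bounds(W, b, lower, upper):
    """Pre-activation bounds of W x + b when x lies in [lower, upper] element-wise."""
    mid, rad = (lower + upper) / 2.0, (upper - lower) / 2.0
    center = W @ mid + b
    radius = np.abs(W) @ rad
    return center - radius, center + radius

def stable_neurons(weights, biases, x_lower, x_upper):
    """Return, per hidden layer, masks of stably-active and stably-inactive ReLUs."""
    lo, hi = x_lower, x_upper
    report = []
    for W, b in zip(weights, biases):
        pre_lo, pre_hi = interval_bounds(W, b, lo, hi)
        report.append((pre_lo >= 0, pre_hi <= 0))            # (always on, always off)
        lo, hi = np.maximum(pre_lo, 0), np.maximum(pre_hi, 0) # ReLU is monotone
    return report

rng = np.random.default_rng(0)
Ws = [rng.normal(size=(8, 4)), rng.normal(size=(6, 8))]
bs = [rng.normal(size=8), rng.normal(size=6)]
for k, (on, off) in enumerate(stable_neurons(Ws, bs, -np.ones(4), np.ones(4))):
    print(f"layer {k}: stably active {on.sum()}, stably inactive {off.sum()}")
```

Stably-inactive neurons can be deleted and stably-active ones folded into the next layer's affine map without changing the network's function on the domain, which is what makes exact compression possible.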
Passive attention in artificial neural networks predicts human visual selectivity
https://papers.nips.cc/paper_files/paper/2021/hash/e360367584297ee8d2d5afa709cd440e-Abstract.html
Thomas Langlois, Haicheng Zhao, Erin Grant, Ishita Dasgupta, Tom Griffiths, Nori Jacoby
https://papers.nips.cc/paper_files/paper/2021/hash/e360367584297ee8d2d5afa709cd440e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13698-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e360367584297ee8d2d5afa709cd440e-Paper.pdf
https://openreview.net/forum?id=AzmEMstdf3o
https://papers.nips.cc/paper_files/paper/2021/file/e360367584297ee8d2d5afa709cd440e-Supplemental.pdf
Developments in machine learning interpretability techniques over the past decade have provided new tools to observe the image regions that are most informative for classification and localization in artificial neural networks (ANNs). Are the same regions similarly informative to human observers? Using data from 79 new experiments and 7,810 participants, we show that passive attention techniques reveal a significant overlap with human visual selectivity estimates derived from 6 distinct behavioral tasks including visual discrimination, spatial localization, recognizability, free-viewing, cued-object search, and saliency search fixations. We find that input visualizations derived from relatively simple ANN architectures probed using guided backpropagation methods are the best predictors of a shared component in the joint variability of the human measures. We validate these correlational results with causal manipulations using recognition experiments. We show that images masked with ANN attention maps were easier for humans to classify than control masks in a speeded recognition experiment. Similarly, we find that recognition performance in the same ANN models was likewise influenced by masking input images using human visual selectivity maps. This work contributes a new approach to evaluating the biological and psychological validity of leading ANNs as models of human vision: by examining their similarities and differences in terms of their visual selectivity to the information contained in images.
null
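One of the simplest "passive attention" techniques in this space is a gradient-based saliency map, i.e., the input gradient of the class score. The PyTorch sketch below computes such a map for an arbitrary classifier; it uses plain input gradients rather than the guided backpropagation variant highlighted in the abstract, and the toy model and input shapes are placeholders.

```python
import torch
import torch.nn as nn

def saliency_map(model, image, target_class):
    """Absolute input gradient of the target class score, max-pooled over channels."""
    model.eval()
    image = image.detach().clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    return image.grad.abs().amax(dim=0)   # (H, W) saliency

# Toy classifier and input, stand-ins for a trained ANN and a natural image.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(3, 32, 32)
print(saliency_map(model, image, target_class=3).shape)  # torch.Size([32, 32])
```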
GRIN: Generative Relation and Intention Network for Multi-agent Trajectory Prediction
https://papers.nips.cc/paper_files/paper/2021/hash/e3670ce0c315396e4836d7024abcf3dd-Abstract.html
Longyuan Li, Jian Yao, Li Wenliang, Tong He, Tianjun Xiao, Junchi Yan, David Wipf, Zheng Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/e3670ce0c315396e4836d7024abcf3dd-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13699-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e3670ce0c315396e4836d7024abcf3dd-Paper.pdf
https://openreview.net/forum?id=ephWA7KaWmD
https://papers.nips.cc/paper_files/paper/2021/file/e3670ce0c315396e4836d7024abcf3dd-Supplemental.pdf
Learning the distribution of future trajectories conditioned on the past is a crucial problem for understanding multi-agent systems. This is challenging because humans make decisions based on complex social relations and personal intents, resulting in highly complex uncertainties over trajectories. To address this problem, we propose a conditional deep generative model that combines advances in graph neural networks. The prior and recognition model encodes two types of latent codes for each agent: an inter-agent latent code to represent social relations and an intra-agent latent code to represent agent intentions. The decoder is carefully devised to leverage the codes in a disentangled way to predict multi-modal future trajectory distribution. Specifically, a graph attention network built upon inter-agent latent code is used to learn continuous pair-wise relations, and an agent's motion is controlled by its latent intents and its observations of all other agents. Through experiments on both synthetic and real-world datasets, we show that our model outperforms previous work in multiple performance metrics. We also show that our model generates realistic multi-modal trajectories.
null
Instance-Dependent Partial Label Learning
https://papers.nips.cc/paper_files/paper/2021/hash/e38e37a99f7de1f45d169efcdb288dd1-Abstract.html
Ning Xu, Congyu Qiao, Xin Geng, Min-Ling Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/e38e37a99f7de1f45d169efcdb288dd1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13700-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e38e37a99f7de1f45d169efcdb288dd1-Paper.pdf
https://openreview.net/forum?id=AnJUTpZiiWD
https://papers.nips.cc/paper_files/paper/2021/file/e38e37a99f7de1f45d169efcdb288dd1-Supplemental.pdf
Partial label learning (PLL) is a typical weakly supervised learning problem, where each training example is associated with a set of candidate labels among which only one is true. Most existing PLL approaches assume that the incorrect labels in each training example are randomly picked as the candidate labels. However, this assumption is not realistic since the candidate labels are always instance-dependent. In this paper, we consider instance-dependent PLL and assume that each example is associated with a latent label distribution consisting of a real-valued degree for each label, representing how well that label describes the feature. Incorrect labels with high degrees are more likely to be annotated as candidate labels. Therefore, the latent label distribution is the essential labeling information in partially labeled examples and is worth being leveraged for predictive model training. Motivated by this consideration, we propose a novel PLL method that recovers the label distribution as a label enhancement (LE) process and trains the predictive model iteratively in every epoch. Specifically, we assume the true posterior density of the latent label distribution takes on a variational approximate Dirichlet density parameterized by an inference model. The evidence lower bound is then derived for optimizing the inference model, and the label distributions generated from the variational posterior are utilized for training the predictive model. Experiments on benchmark and real-world datasets validate the effectiveness of the proposed method. Source code is available at https://github.com/palm-ml/valen.
null
Deep Learning with Label Differential Privacy
https://papers.nips.cc/paper_files/paper/2021/hash/e3a54649aeec04cf1c13907bc6c5c8aa-Abstract.html
Badih Ghazi, Noah Golowich, Ravi Kumar, Pasin Manurangsi, Chiyuan Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/e3a54649aeec04cf1c13907bc6c5c8aa-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13701-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e3a54649aeec04cf1c13907bc6c5c8aa-Paper.pdf
https://openreview.net/forum?id=RYcgfqmAOHh
https://papers.nips.cc/paper_files/paper/2021/file/e3a54649aeec04cf1c13907bc6c5c8aa-Supplemental.pdf
The Randomized Response (RR) algorithm is a classical technique to improve robustness in survey aggregation, and has been widely adopted in applications with differential privacy guarantees. We propose a novel algorithm, Randomized Response with Prior (RRWithPrior), which can provide more accurate results while maintaining the same level of privacy guaranteed by RR. We then apply RRWithPrior to learn neural networks with label differential privacy (LabelDP), and show that when only the label needs to be protected, the model performance can be significantly improved over the previous state-of-the-art private baselines. Moreover, we study different ways to obtain priors, which when used with RRWithPrior can additionally improve the model performance, further reducing the accuracy gap between private and non-private models. We complement the empirical results with theoretical analysis showing that LabelDP is provably easier than protecting both the inputs and labels.
null
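RRWithPrior builds on the classical K-ary randomized response mechanism, which already provides epsilon-differential privacy on labels: keep the true label with probability e^eps / (e^eps + K - 1), otherwise output one of the other labels uniformly. The snippet below implements that classical baseline only, not RRWithPrior itself.

```python
import math, random

def randomized_response(label, num_classes, epsilon):
    """Classical K-ary randomized response: an epsilon-DP randomizer for a single label."""
    keep_prob = math.exp(epsilon) / (math.exp(epsilon) + num_classes - 1)
    if random.random() < keep_prob:
        return label
    other = random.randrange(num_classes - 1)
    return other if other < label else other + 1   # uniform over the other labels

random.seed(0)
noisy = [randomized_response(2, num_classes=10, epsilon=1.0) for _ in range(5)]
print(noisy)  # privatized labels that can be used to train a LabelDP model
```

RRWithPrior improves utility at the same epsilon by exploiting a prior over labels (favoring outputs the prior deems plausible); see the paper for the exact mechanism.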
Semialgebraic Representation of Monotone Deep Equilibrium Models and Applications to Certification
https://papers.nips.cc/paper_files/paper/2021/hash/e3b21256183cf7c2c7a66be163579d37-Abstract.html
Tong Chen, Jean B. Lasserre, Victor Magron, Edouard Pauwels
https://papers.nips.cc/paper_files/paper/2021/hash/e3b21256183cf7c2c7a66be163579d37-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13702-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e3b21256183cf7c2c7a66be163579d37-Paper.pdf
https://openreview.net/forum?id=m4rb1Rlfdi
https://papers.nips.cc/paper_files/paper/2021/file/e3b21256183cf7c2c7a66be163579d37-Supplemental.pdf
Deep equilibrium models are based on implicitly defined functional relations and have shown competitive performance compared with traditional deep networks. Monotone operator equilibrium networks (monDEQ) retain competitive performance with additional theoretical guarantees. Existing certification tools for classical deep networks cannot be directly applied to monDEQs, for which far fewer tools exist. We introduce a semialgebraic representation for ReLU-based monDEQs which allows us to approximate the corresponding input-output relation by semidefinite programs (SDPs). We present several applications to network certification and obtain SDP models for the following problems: robustness certification, Lipschitz constant estimation, and ellipsoidal uncertainty propagation. We use these models to certify robustness of monDEQs with respect to a general $L_p$ norm. Experimental results show that the proposed models outperform existing approaches for monDEQ certification. Furthermore, our investigations suggest that monDEQs are much more robust to $L_2$ perturbations than $L_{\infty}$ perturbations.
null
The Role of Global Labels in Few-Shot Classification and How to Infer Them
https://papers.nips.cc/paper_files/paper/2021/hash/e3b6fb0fd4df098162eede3313c54a8d-Abstract.html
Ruohan Wang, Massimiliano Pontil, Carlo Ciliberto
https://papers.nips.cc/paper_files/paper/2021/hash/e3b6fb0fd4df098162eede3313c54a8d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13703-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e3b6fb0fd4df098162eede3313c54a8d-Paper.pdf
https://openreview.net/forum?id=3S0z0IjWkyl
https://papers.nips.cc/paper_files/paper/2021/file/e3b6fb0fd4df098162eede3313c54a8d-Supplemental.pdf
Few-shot learning is a central problem in meta-learning, where learners must quickly adapt to new tasks given limited training data. Recently, feature pre-training has become a ubiquitous component in state-of-the-art meta-learning methods and is shown to provide significant performance improvement. However, there is limited theoretical understanding of the connection between pre-training and meta-learning. Further, pre-training requires global labels shared across tasks, which may be unavailable in practice. In this paper, we show why exploiting pre-training is theoretically advantageous for meta-learning, and in particular the critical role of global labels. This motivates us to propose Meta Label Learning (MeLa), a novel meta-learning framework that automatically infers global labels to obtain robust few-shot models. Empirically, we demonstrate that MeLa is competitive with existing methods and provide extensive ablation experiments to highlight its key properties.
null
NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction
https://papers.nips.cc/paper_files/paper/2021/hash/e41e164f7485ec4a28741a2d0ea41c74-Abstract.html
Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, Wenping Wang
https://papers.nips.cc/paper_files/paper/2021/hash/e41e164f7485ec4a28741a2d0ea41c74-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13704-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e41e164f7485ec4a28741a2d0ea41c74-Paper.pdf
https://openreview.net/forum?id=D7bPRxNt_AP
https://papers.nips.cc/paper_files/paper/2021/file/e41e164f7485ec4a28741a2d0ea41c74-Supplemental.pdf
We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs. Existing neural surface reconstruction approaches, such as DVR [Niemeyer et al., 2020] and IDR [Yariv et al., 2020], require foreground masks as supervision, easily get trapped in local minima, and therefore struggle with the reconstruction of objects with severe self-occlusion or thin structures. Meanwhile, recent neural methods for novel view synthesis, such as NeRF [Mildenhall et al., 2020] and its variants, use volume rendering to produce a neural scene representation with robust optimization, even for highly complex objects. However, extracting high-quality surfaces from this learned implicit representation is difficult because there are not sufficient surface constraints in the representation. In NeuS, we propose to represent a surface as the zero-level set of a signed distance function (SDF) and develop a new volume rendering method to train a neural SDF representation. We observe that the conventional volume rendering method causes inherent geometric errors (i.e. bias) for surface reconstruction, and therefore propose a new formulation that is free of bias in the first order of approximation, thus leading to more accurate surface reconstruction even without mask supervision. Experiments on the DTU dataset and the BlendedMVS dataset show that NeuS outperforms the state of the art in high-quality surface reconstruction, especially for objects and scenes with complex structures and self-occlusion.
null
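To ground the "SDF plus volume rendering" idea above, here is a rough NumPy sketch of one way to turn signed-distance samples along a ray into volume-rendering weights via a logistic CDF of the SDF, loosely in the spirit of the discrete NeuS formulation as I understand it; the exact unbiased construction is given in the paper, and the sharpness parameter and toy ray below are made up.

```python
import numpy as np

def sdf_to_render_weights(sdf_vals, sharpness=64.0):
    """Convert SDF samples along a ray into alpha-compositing weights.

    sdf_vals : (N,) signed distances at successive sample points along the ray
    """
    phi = 1.0 / (1.0 + np.exp(-sharpness * sdf_vals))               # logistic CDF of the SDF
    alpha = np.clip((phi[:-1] - phi[1:]) / (phi[:-1] + 1e-8), 0.0, 1.0)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]   # accumulated transmittance
    return trans * alpha                                            # weights for compositing colors

# A ray that crosses the surface (SDF changes sign) puts most weight near the zero level set.
t = np.linspace(0.0, 2.0, 64)
sdf = 1.0 - t                      # toy SDF of a plane located at t = 1
w = sdf_to_render_weights(sdf)
print(t[:-1][np.argmax(w)])        # close to 1.0: the weight peaks near the zero crossing
```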
Improved Guarantees for Offline Stochastic Matching via new Ordered Contention Resolution Schemes
https://papers.nips.cc/paper_files/paper/2021/hash/e43739bba7cdb577e9e3e4e42447f5a5-Abstract.html
Brian Brubach, Nathaniel Grammel, Will Ma, Aravind Srinivasan
https://papers.nips.cc/paper_files/paper/2021/hash/e43739bba7cdb577e9e3e4e42447f5a5-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13705-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e43739bba7cdb577e9e3e4e42447f5a5-Paper.pdf
https://openreview.net/forum?id=z9Xs6T0y9Eg
https://papers.nips.cc/paper_files/paper/2021/file/e43739bba7cdb577e9e3e4e42447f5a5-Supplemental.pdf
Matching is one of the most fundamental and broadly applicable problems across many domains. In these diverse real-world applications, there is often a degree of uncertainty in the input which has led to the study of stochastic matching models. Here, each edge in the graph has a known, independent probability of existing derived from some prediction. Algorithms must probe edges to determine existence and match them irrevocably if they exist. Further, each vertex may have a patience constraint denoting how many of its neighboring edges can be probed. We present new ordered contention resolution schemes yielding improved approximation guarantees for some of the foundational problems studied in this area. For stochastic matching with patience constraints in general graphs, we provide a $0.382$-approximate algorithm, significantly improving over the previous best $0.31$-approximation of Baveja et al. (2018). When the vertices do not have patience constraints, we describe a $0.432$-approximate random order probing algorithm with several corollaries such as an improved guarantee for the Prophet Secretary problem under Edge Arrivals. Finally, for the special case of bipartite graphs with unit patience constraints on one of the partitions, we show a $0.632$-approximate algorithm that improves on the recent $1/3$-guarantee of Hikima et al. (2021).
null
UFC-BERT: Unifying Multi-Modal Controls for Conditional Image Synthesis
https://papers.nips.cc/paper_files/paper/2021/hash/e46bc064f8e92ac2c404b9871b2a4ef2-Abstract.html
Zhu Zhang, Jianxin Ma, Chang Zhou, Rui Men, Zhikang Li, Ming Ding, Jie Tang, Jingren Zhou, Hongxia Yang
https://papers.nips.cc/paper_files/paper/2021/hash/e46bc064f8e92ac2c404b9871b2a4ef2-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13706-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e46bc064f8e92ac2c404b9871b2a4ef2-Paper.pdf
https://openreview.net/forum?id=iEEAPq3TUEZ
https://papers.nips.cc/paper_files/paper/2021/file/e46bc064f8e92ac2c404b9871b2a4ef2-Supplemental.pdf
Conditional image synthesis aims to create an image according to some multi-modal guidance in the forms of textual descriptions, reference images, and image blocks to preserve, as well as their combinations. In this paper, instead of investigating these control signals separately, we propose a new two-stage architecture, UFC-BERT, to unify any number of multi-modal controls. In UFC-BERT, both the diverse control signals and the synthesized image are uniformly represented as a sequence of discrete tokens to be processed by Transformer. Different from existing two-stage autoregressive approaches such as DALL-E and VQGAN, UFC-BERT adopts non-autoregressive generation (NAR) at the second stage to enhance the holistic consistency of the synthesized image, to support preserving specified image blocks, and to improve the synthesis speed. Further, we design a progressive algorithm that iteratively improves the non-autoregressively generated image, with the help of two estimators developed for evaluating the compliance with the controls and evaluating the fidelity of the synthesized image, respectively. Extensive experiments on a newly collected large-scale clothing dataset M2C-Fashion and a facial dataset Multi-Modal CelebA-HQ verify that UFC-BERT can synthesize high-fidelity images that comply with flexible multi-modal controls.
null
Is Bang-Bang Control All You Need? Solving Continuous Control with Bernoulli Policies
https://papers.nips.cc/paper_files/paper/2021/hash/e46be61f0050f9cc3a98d5d2192cb0eb-Abstract.html
Tim Seyde, Igor Gilitschenski, Wilko Schwarting, Bartolomeo Stellato, Martin Riedmiller, Markus Wulfmeier, Daniela Rus
https://papers.nips.cc/paper_files/paper/2021/hash/e46be61f0050f9cc3a98d5d2192cb0eb-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13707-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e46be61f0050f9cc3a98d5d2192cb0eb-Paper.pdf
https://openreview.net/forum?id=9BvDIW6_qxZ
https://papers.nips.cc/paper_files/paper/2021/file/e46be61f0050f9cc3a98d5d2192cb0eb-Supplemental.pdf
Reinforcement learning (RL) for continuous control typically employs distributions whose support covers the entire action space. In this work, we investigate the colloquially known phenomenon that trained agents often prefer actions at the boundaries of that space. We draw theoretical connections to the emergence of bang-bang behavior in optimal control, and provide extensive empirical evaluation across a variety of recent RL algorithms. We replace the normal Gaussian by a Bernoulli distribution that solely considers the extremes along each action dimension - a bang-bang controller. Surprisingly, this achieves state-of-the-art performance on several continuous control benchmarks - in contrast to robotic hardware, where energy and maintenance cost affect controller choices. Since exploration, learning, and the final solution are entangled in RL, we provide additional imitation learning experiments to reduce the impact of exploration on our analysis. Finally, we show that our observations generalize to environments that aim to model real-world challenges and evaluate factors to mitigate the emergence of bang-bang solutions. Our findings emphasise challenges for benchmarking continuous control algorithms, particularly in light of potential real-world applications.
null
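The central object of the paper above, a per-dimension Bernoulli (bang-bang) policy, only ever emits the two extremes of each action dimension. A minimal sketch of mapping Bernoulli samples to such actions is shown below; the probabilities would normally come from a policy network, and the action bounds here are placeholders.

```python
import numpy as np

def bang_bang_action(p_high, low, high, rng):
    """Sample a bang-bang action: each dimension is either its lower or upper bound.

    p_high : (d,) per-dimension probability of choosing the upper bound
    low, high : (d,) action bounds
    """
    choose_high = rng.random(p_high.shape) < p_high
    return np.where(choose_high, high, low)

rng = np.random.default_rng(0)
p = np.array([0.9, 0.2, 0.5])            # would come from a policy network in practice
low, high = np.array([-1., -2., 0.]), np.array([1., 2., 1.])
print(bang_bang_action(p, low, high, rng))
```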
Improving Generalization in Meta-RL with Imaginary Tasks from Latent Dynamics Mixture
https://papers.nips.cc/paper_files/paper/2021/hash/e48e13207341b6bffb7fb1622282247b-Abstract.html
Suyoung Lee, Sae-Young Chung
https://papers.nips.cc/paper_files/paper/2021/hash/e48e13207341b6bffb7fb1622282247b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13708-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e48e13207341b6bffb7fb1622282247b-Paper.pdf
https://openreview.net/forum?id=PnpS7_SlNZi
https://papers.nips.cc/paper_files/paper/2021/file/e48e13207341b6bffb7fb1622282247b-Supplemental.pdf
The generalization ability of most meta-reinforcement learning (meta-RL) methods is largely limited to test tasks that are sampled from the same distribution used to sample training tasks. To overcome the limitation, we propose Latent Dynamics Mixture (LDM) that trains a reinforcement learning agent with imaginary tasks generated from mixtures of learned latent dynamics. By training a policy on mixture tasks along with original training tasks, LDM allows the agent to prepare for unseen test tasks during training and prevents the agent from overfitting the training tasks. LDM significantly outperforms standard meta-RL methods in test returns on the gridworld navigation and MuJoCo tasks where we strictly separate the training task distribution and the test task distribution.
null
Localization with Sampling-Argmax
https://papers.nips.cc/paper_files/paper/2021/hash/e4a6222cdb5b34375400904f03d8e6a5-Abstract.html
Jiefeng Li, Tong Chen, Ruiqi Shi, Yujing Lou, Yong-Lu Li, Cewu Lu
https://papers.nips.cc/paper_files/paper/2021/hash/e4a6222cdb5b34375400904f03d8e6a5-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13709-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e4a6222cdb5b34375400904f03d8e6a5-Paper.pdf
https://openreview.net/forum?id=lVBu4PqM9HU
https://papers.nips.cc/paper_files/paper/2021/file/e4a6222cdb5b34375400904f03d8e6a5-Supplemental.pdf
The soft-argmax operation is commonly adopted in detection-based methods to localize the target position in a differentiable manner. However, training the neural network with soft-argmax leaves the shape of the probability map unconstrained. Consequently, the model lacks pixel-wise supervision through the map during training, leading to performance degradation. In this work, we propose sampling-argmax, a differentiable training method that imposes implicit constraints on the shape of the probability map by minimizing the expectation of the localization error. To approximate the expectation, we introduce a continuous formulation of the output distribution and develop a differentiable sampling process. The expectation can be approximated by calculating the average error of all samples drawn from the output distribution. We show that sampling-argmax can seamlessly replace the conventional soft-argmax operation in various localization tasks. Comprehensive experiments demonstrate the effectiveness and flexibility of the proposed method. Code is available at https://github.com/Jeff-sjtu/sampling-argmax
null
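The conventional operation being replaced, soft-argmax, takes the probability-weighted average of coordinates, whereas sampling-argmax minimizes the expected localization error under the probability map. The NumPy sketch below shows soft-argmax and, for a discrete 1-D map, the expected-error objective that sampling approximates; it is an illustration of the two objectives, not the paper's implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def soft_argmax(heatmap, coords):
    """Differentiable localization: probability-weighted average of coordinates."""
    return np.sum(softmax(heatmap) * coords)

def expected_localization_error(heatmap, coords, target):
    """E_{x ~ p}[|x - target|]; sampling-argmax minimizes a sampled estimate of this."""
    return np.sum(softmax(heatmap) * np.abs(coords - target))

coords = np.linspace(0.0, 1.0, 11)
heatmap = -50.0 * (coords - 0.3) ** 2          # a map peaked near coordinate 0.3
print(soft_argmax(heatmap, coords))            # approximately 0.3
print(expected_localization_error(heatmap, coords, target=0.3))
```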
Improved Regularization and Robustness for Fine-tuning in Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/e4a93f0332b2519177ed55741ea4e5e7-Abstract.html
Dongyue Li, Hongyang Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/e4a93f0332b2519177ed55741ea4e5e7-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13710-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e4a93f0332b2519177ed55741ea4e5e7-Paper.pdf
https://openreview.net/forum?id=QX32YlxrQJc
null
A widely used algorithm for transfer learning is fine-tuning, where a pre-trained model is fine-tuned on a target task with a small amount of labeled data. When the capacity of the pre-trained model is much larger than the size of the target data set, fine-tuning is prone to overfitting and "memorizing" the training labels. Hence, an important question is to regularize fine-tuning and ensure its robustness to noise. To address this question, we begin by analyzing the generalization properties of fine-tuning. We present a PAC-Bayes generalization bound that depends on the distance traveled in each layer during fine-tuning and the noise stability of the fine-tuned model. We empirically measure these quantities. Based on the analysis, we propose regularized self-labeling---the interpolation between regularization and self-labeling methods, including (i) layer-wise regularization to constrain the distance traveled in each layer; (ii) self label-correction and label-reweighting to correct mislabeled data points (that the model is confident) and reweight less confident data points. We validate our approach on an extensive collection of image and text data sets using multiple pre-trained model architectures. Our approach improves baseline methods by 1.76% (on average) for seven image classification tasks and 0.75% for a few-shot classification task. When the target data set includes noisy labels, our approach outperforms baseline methods by 3.56% on average in two noisy settings.
null
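The "layer-wise regularization to constrain the distance traveled in each layer" described above can be made concrete as a penalty on each layer's distance from its pre-trained weights, added to the task loss. The PyTorch sketch below shows such a penalty with per-layer coefficients; the coefficient values and toy model are placeholders, and the label-correction and label-reweighting parts of the method are not shown.

```python
import torch
import torch.nn as nn

def layerwise_distance_penalty(model, pretrained_state, coeffs):
    """sum_l coeffs[l] * ||w_l - w_l_pretrained||^2 over named parameters."""
    penalty = torch.zeros(())
    for name, param in model.named_parameters():
        ref = pretrained_state[name]
        penalty = penalty + coeffs.get(name, 1e-3) * (param - ref).pow(2).sum()
    return penalty

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
pretrained_state = {k: v.detach().clone() for k, v in model.named_parameters()}

# One fine-tuning step: task loss plus the layer-wise regularizer.
x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
loss = nn.functional.cross_entropy(model(x), y) \
       + layerwise_distance_penalty(model, pretrained_state, coeffs={})
loss.backward()
print(loss.item())
```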
BARTScore: Evaluating Generated Text as Text Generation
https://papers.nips.cc/paper_files/paper/2021/hash/e4d2b6e6fdeca3e60e0f1a62fee3d9dd-Abstract.html
Weizhe Yuan, Graham Neubig, Pengfei Liu
https://papers.nips.cc/paper_files/paper/2021/hash/e4d2b6e6fdeca3e60e0f1a62fee3d9dd-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13711-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e4d2b6e6fdeca3e60e0f1a62fee3d9dd-Paper.pdf
https://openreview.net/forum?id=5Ya8PbvpZ9
https://papers.nips.cc/paper_files/paper/2021/file/e4d2b6e6fdeca3e60e0f1a62fee3d9dd-Supplemental.pdf
A wide variety of NLP applications, such as machine translation, summarization, and dialog, involve text generation. One major challenge for these applications is how to evaluate whether such generated texts are actually fluent, accurate, or effective. In this work, we conceptualize the evaluation of generated text as a text generation problem, modeled using pre-trained sequence-to-sequence models. The general idea is that models trained to convert the generated text to/from a reference output or the source text will achieve higher scores when the generated text is better. We operationalize this idea using BART, an encoder-decoder based pre-trained model, and propose a metric BARTScore with a number of variants that can be flexibly applied in an unsupervised fashion to evaluation of text from different perspectives (e.g. informativeness, fluency, or factuality). BARTScore is conceptually simple and empirically effective. It can outperform existing top-scoring metrics in 16 of 22 test settings, covering evaluation of 16 datasets (e.g., machine translation, text summarization) and 7 different perspectives (e.g., informativeness, factuality). Code to calculate BARTScore is available at https://github.com/neulab/BARTScore, and we have released an interactive leaderboard for meta-evaluation at http://explainaboard.nlpedia.ai/leaderboard/task-meval/ on the ExplainaBoard platform, which allows us to interactively understand the strengths, weaknesses, and complementarity of each metric.
null
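The core quantity behind BARTScore is the average log-likelihood that a pre-trained BART model assigns to one text conditioned on another (e.g., hypothesis given source). The sketch below computes that quantity with the Hugging Face transformers library; it is a bare-bones illustration of the idea rather than the released BARTScore implementation, which adds direction variants, weighting, and prompting, and the checkpoint name is just one plausible choice.

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

def bart_loglik_score(model, tokenizer, source, target):
    """Average per-token log-likelihood of `target` given `source` under BART."""
    src = tokenizer(source, return_tensors="pt", truncation=True)
    tgt = tokenizer(target, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(input_ids=src["input_ids"],
                    attention_mask=src["attention_mask"],
                    labels=tgt["input_ids"])
    return -out.loss.item()   # out.loss is the mean token-level cross-entropy

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn").eval()
doc = "The cat sat on the mat and refused to move all afternoon."
summary = "A cat stayed on the mat all afternoon."
print(bart_loglik_score(model, tokenizer, doc, summary))  # higher is better
```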
An analysis of Ermakov-Zolotukhin quadrature using kernels
https://papers.nips.cc/paper_files/paper/2021/hash/e531e258fe3098c3bdd707c30a687d73-Abstract.html
Ayoub Belhadji
https://papers.nips.cc/paper_files/paper/2021/hash/e531e258fe3098c3bdd707c30a687d73-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13712-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e531e258fe3098c3bdd707c30a687d73-Paper.pdf
https://openreview.net/forum?id=Eec8D4UNceq
https://papers.nips.cc/paper_files/paper/2021/file/e531e258fe3098c3bdd707c30a687d73-Supplemental.pdf
We study a quadrature, proposed by Ermakov and Zolotukhin in the sixties, through the lens of kernel methods. The nodes of this quadrature rule follow the distribution of a determinantal point process, while the weights are defined through a linear system, similarly to the optimal kernel quadrature. In this work, we show how these two classes of quadrature are related, and we prove a tractable formula of the expected value of the squared worst-case integration error on the unit ball of an RKHS of the former quadrature. In particular, this formula involves the eigenvalues of the corresponding kernel and leads to improving on the existing theoretical guarantees of the optimal kernel quadrature with determinantal point processes.
null
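The abstract above contrasts the Ermakov-Zolotukhin rule with the optimal kernel quadrature, whose weights likewise come from a linear system: with nodes x_1..x_n, Gram matrix K and kernel mean embedding z_i = integral of k(x_i, y) dmu(y), the weights solve K w = z. The NumPy sketch below sets this up for a Gaussian kernel with the embedding estimated by Monte Carlo; the kernel, measure, and node choice are illustrative assumptions (in particular, the nodes are i.i.d. here rather than drawn from a determinantal point process as in the quadrature being analyzed).

```python
import numpy as np

def gaussian_kernel(x, y, lengthscale=0.3):
    return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / lengthscale ** 2)

rng = np.random.default_rng(0)
nodes = rng.uniform(0.0, 1.0, size=8)            # quadrature nodes (i.i.d. for illustration)
mc = rng.uniform(0.0, 1.0, size=20000)           # samples from the base measure mu

K = gaussian_kernel(nodes, nodes)                # Gram matrix of the nodes
z = gaussian_kernel(nodes, mc).mean(axis=1)      # Monte Carlo estimate of the mean embedding
weights = np.linalg.solve(K + 1e-10 * np.eye(len(nodes)), z)

f = lambda x: np.sin(2 * np.pi * x) + x          # a smooth test integrand
estimate = weights @ f(nodes)
print(estimate, f(mc).mean())                    # kernel quadrature vs. plain Monte Carlo
```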
Towards Understanding Why Lookahead Generalizes Better Than SGD and Beyond
https://papers.nips.cc/paper_files/paper/2021/hash/e53a0a2978c28872a4505bdb51db06dc-Abstract.html
Pan Zhou, Hanshu Yan, Xiaotong Yuan, Jiashi Feng, Shuicheng Yan
https://papers.nips.cc/paper_files/paper/2021/hash/e53a0a2978c28872a4505bdb51db06dc-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13713-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e53a0a2978c28872a4505bdb51db06dc-Paper.pdf
https://openreview.net/forum?id=43mzkbz_QKG
https://papers.nips.cc/paper_files/paper/2021/file/e53a0a2978c28872a4505bdb51db06dc-Supplemental.pdf
To train networks, the lookahead algorithm~\cite{zhang2019lookahead} updates its fast weights $k$ times via an inner-loop optimizer before updating its slow weights once by using the latest fast weights. Any optimizer, e.g. SGD, can serve as the inner-loop optimizer, and the derived lookahead generally enjoys a remarkable test performance improvement over the vanilla optimizer. But a theoretical understanding of lookahead's test performance improvement remains absent. To solve this issue, we theoretically justify the advantages of lookahead in terms of the excess risk error, which measures the test performance. Specifically, we prove that lookahead using SGD as its inner-loop optimizer can better balance the optimization error and generalization error to achieve a smaller excess risk error than vanilla SGD on (strongly) convex problems and on nonconvex problems satisfying the Polyak-{\L}ojasiewicz condition, which has been observed/proved in neural networks. Moreover, we show that the stagewise optimization strategy~\cite{barshan2015stage}, which decays the learning rate several times during training, can also benefit lookahead by improving its optimization and generalization errors on strongly convex problems. Finally, we propose a stagewise locally-regularized lookahead (SLRLA) algorithm which sums up the vanilla objective and a local regularizer to minimize at each stage and provably enjoys optimization and generalization improvements over the conventional (stagewise) lookahead. Experimental results on CIFAR10/100 and ImageNet testify to its advantages. Code is available at \url{https://github.com/sail-sg/SLRLA-optimizer}.
null
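For reference, the lookahead update itself is simple: run any inner optimizer for k steps on the fast weights, then move the slow weights a fraction alpha toward the result and restart the fast weights there. Below is a minimal NumPy sketch with plain gradient descent as the inner optimizer on a toy quadratic; the hyperparameters and objective are arbitrary illustrations.

```python
import numpy as np

def lookahead(grad, w0, k=5, alpha=0.5, inner_lr=0.1, outer_steps=50):
    """Lookahead wrapping gradient descent: k fast steps, then one slow interpolation."""
    slow = w0.copy()
    for _ in range(outer_steps):
        fast = slow.copy()
        for _ in range(k):                 # inner-loop optimizer (here: plain GD)
            fast -= inner_lr * grad(fast)
        slow += alpha * (fast - slow)      # slow weights move toward the fast weights
    return slow

# Toy objective f(w) = 0.5 * ||A w - b||^2 with gradient A^T (A w - b).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad = lambda w: A.T @ (A @ w - b)
print(lookahead(grad, w0=np.zeros(2)))     # approaches the least-squares solution (0.6, -0.8)
```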
Online Market Equilibrium with Application to Fair Division
https://papers.nips.cc/paper_files/paper/2021/hash/e562cd9c0768d5464b64cf61da7fc6bb-Abstract.html
Yuan Gao, Alex Peysakhovich, Christian Kroer
https://papers.nips.cc/paper_files/paper/2021/hash/e562cd9c0768d5464b64cf61da7fc6bb-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13714-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e562cd9c0768d5464b64cf61da7fc6bb-Paper.pdf
https://openreview.net/forum?id=CRPNhlp4jM
https://papers.nips.cc/paper_files/paper/2021/file/e562cd9c0768d5464b64cf61da7fc6bb-Supplemental.zip
Computing market equilibria is a problem of both theoretical and applied interest. Much research to date focuses on the case of static Fisher markets with full information on buyers' utility functions and item supplies. Motivated by real-world markets, we consider an online setting: individuals have linear, additive utility functions; items arrive sequentially and must be allocated and priced irrevocably. We define the notion of an online market equilibrium in such a market as time-indexed allocations and prices which guarantee buyer optimality and market clearance in hindsight. We propose a simple, scalable and interpretable allocation and pricing dynamics termed as PACE. When items are drawn i.i.d. from an unknown distribution (with a possibly continuous support), we show that PACE leads to an online market equilibrium asymptotically. In particular, PACE ensures that buyers' time-averaged utilities converge to the equilibrium utilities w.r.t. a static market with item supplies being the unknown distribution and that buyers' time-averaged expenditures converge to their per-period budget. Hence, many desirable properties of market equilibrium-based fair division such as envy-freeness, Pareto optimality, and the proportional-share guarantee are also attained asymptotically in the online setting. Next, we extend the dynamics to handle quasilinear buyer utilities, which gives the first online algorithm for computing first-price pacing equilibria. Finally, numerical experiments on real and synthetic datasets show that the dynamics converges quickly under various metrics.
null
Dynamic Resolution Network
https://papers.nips.cc/paper_files/paper/2021/hash/e56954b4f6347e897f954495eab16a88-Abstract.html
Mingjian Zhu, Kai Han, Enhua Wu, Qiulin Zhang, Ying Nie, Zhenzhong Lan, Yunhe Wang
https://papers.nips.cc/paper_files/paper/2021/hash/e56954b4f6347e897f954495eab16a88-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13715-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e56954b4f6347e897f954495eab16a88-Paper.pdf
https://openreview.net/forum?id=J8JXWSlgvyY
https://papers.nips.cc/paper_files/paper/2021/file/e56954b4f6347e897f954495eab16a88-Supplemental.pdf
Deep convolutional neural networks (CNNs) are often of sophisticated design with numerous learnable parameters for the sake of accuracy. To alleviate the expensive cost of deploying them on mobile devices, recent works have made huge efforts to excavate redundancy in pre-defined architectures. Nevertheless, the redundancy in the input resolution of modern CNNs has not been fully investigated, i.e., the resolution of the input image is fixed. In this paper, we observe that the smallest resolution needed to accurately predict an image varies from image to image, even for the same neural network. To this end, we propose a novel dynamic-resolution network (DRNet) in which the input resolution is determined dynamically for each input sample. A resolution predictor with negligible computational cost is designed and optimized jointly with the desired network. Specifically, the predictor learns the smallest resolution that can retain, and even exceed, the original recognition accuracy for each image. During inference, each input image is resized to its predicted resolution to minimize the overall computation burden. We then conduct extensive experiments on several benchmark networks and datasets. The results show that our DRNet can be embedded in any off-the-shelf network architecture to obtain a considerable reduction in computational complexity. For instance, DR-ResNet-50 achieves similar performance with about a 34% reduction in computation, and gains a 1.4% accuracy increase with a 10% computation reduction, compared to the original ResNet-50 on ImageNet. Code will be available at https://gitee.com/mindspore/models/tree/master/research/cv/DRNet.
null
Gauge Equivariant Transformer
https://papers.nips.cc/paper_files/paper/2021/hash/e57c6b956a6521b28495f2886ca0977a-Abstract.html
Lingshen He, Yiming Dong, Yisen Wang, Dacheng Tao, Zhouchen Lin
https://papers.nips.cc/paper_files/paper/2021/hash/e57c6b956a6521b28495f2886ca0977a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13716-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e57c6b956a6521b28495f2886ca0977a-Paper.pdf
https://openreview.net/forum?id=fyL9HD-kImm
https://papers.nips.cc/paper_files/paper/2021/file/e57c6b956a6521b28495f2886ca0977a-Supplemental.pdf
The attention mechanism has shown great performance and efficiency in many deep learning models, in which relative position encoding plays a crucial role. However, when introducing attention to manifolds, there is no canonical local coordinate system to parameterize neighborhoods. To address this issue, we propose an equivariant transformer that makes our model agnostic to the orientation of local coordinate systems (\textit{i.e.}, gauge equivariant) and that employs multi-head self-attention to jointly incorporate both position-based and content-based information. To enhance expressive power, we adopt regular fields of cyclic groups as feature fields in intermediate layers, and propose a novel method to parallel-transport the feature vectors in these fields. In addition, we project the position vector of each point onto its local coordinate system to disentangle it from the orientation of the coordinate system in ambient space (\textit{i.e.}, the global coordinate system), achieving rotation invariance. To the best of our knowledge, we are the first to introduce gauge equivariance to self-attention; we therefore name our model the Gauge Equivariant Transformer (GET), which can be efficiently implemented on triangle meshes. Extensive experiments show that GET achieves state-of-the-art performance on two common recognition tasks.
null
Unsupervised Object-Based Transition Models For 3D Partially Observable Environments
https://papers.nips.cc/paper_files/paper/2021/hash/e5841df2166dd424a57127423d276bbe-Abstract.html
Antonia Creswell, Rishabh Kabra, Chris Burgess, Murray Shanahan
https://papers.nips.cc/paper_files/paper/2021/hash/e5841df2166dd424a57127423d276bbe-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13717-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e5841df2166dd424a57127423d276bbe-Paper.pdf
https://openreview.net/forum?id=1oVNAVhJ4GP
null
We present a slot-wise, object-based transition model that decomposes a scene into objects, aligns them (with respect to a slot-wise object memory) to maintain a consistent order across time, and predicts how those objects evolve over successive frames. The model is trained end-to-end without supervision using transition losses at the level of the object-structured representation rather than pixels. Thanks to the introduction of our novel alignment module, the model deals properly with two issues that are not handled satisfactorily by other transition models, namely object persistence and object identity. We show that the combination of an object-level loss and correct object alignment over time enables the model to outperform a state-of-the-art baseline, and allows it to deal well with object occlusion and re-appearance in partially observable environments.
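As a rough illustration of the alignment idea (not the paper's alignment module), the sketch below matches a frame's object slots to a slot-wise memory with Hungarian matching on cosine similarity so that each object keeps a consistent slot index over time; the slot dimensionality and noise level are arbitrary assumptions.

import numpy as np
from scipy.optimize import linear_sum_assignment

def align_slots(memory, slots, eps=1e-8):
    """memory, slots: arrays of shape (num_slots, slot_dim); returns slots
    reordered so that slot i corresponds to memory slot i."""
    m = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + eps)
    s = slots / (np.linalg.norm(slots, axis=1, keepdims=True) + eps)
    similarity = m @ s.T                           # (num_slots, num_slots)
    row, col = linear_sum_assignment(-similarity)  # maximize total similarity
    return slots[col], col

# toy check: a permuted, slightly noisy copy of the memory is re-aligned to it
rng = np.random.default_rng(0)
memory = rng.normal(size=(4, 8))
permutation = rng.permutation(4)
noisy = memory[permutation] + 0.05 * rng.normal(size=(4, 8))
aligned, order = align_slots(memory, noisy)
print("aligned slots track the memory:", np.allclose(aligned, memory, atol=0.3))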
null
Robust Contrastive Learning Using Negative Samples with Diminished Semantics
https://papers.nips.cc/paper_files/paper/2021/hash/e5afb0f2dbc6d39b312d7406054cb4c6-Abstract.html
Songwei Ge, Shlok Mishra, Chun-Liang Li, Haohan Wang, David Jacobs
https://papers.nips.cc/paper_files/paper/2021/hash/e5afb0f2dbc6d39b312d7406054cb4c6-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13718-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e5afb0f2dbc6d39b312d7406054cb4c6-Paper.pdf
https://openreview.net/forum?id=xLExSzfIDmo
https://papers.nips.cc/paper_files/paper/2021/file/e5afb0f2dbc6d39b312d7406054cb4c6-Supplemental.pdf
Unsupervised learning has recently made exceptional progress because of the development of more effective contrastive learning methods. However, CNNs are prone to relying on low-level features that humans deem non-semantic. This dependency has been conjectured to induce a lack of robustness to image perturbations or domain shift. In this paper, we show that by generating carefully designed negative samples, contrastive learning can learn more robust representations with less dependence on such features. Contrastive learning utilizes positive pairs which preserve semantic information while perturbing superficial features in the training images. Similarly, we propose to generate negative samples in the reverse way, preserving only the superfluous rather than the semantic features. We develop two methods, texture-based and patch-based augmentations, to generate negative samples. Models trained with these samples achieve better generalization, especially under out-of-domain settings. We also analyze our method and the generated texture-based samples, showing that texture features are indispensable for classifying particular ImageNet classes, especially fine-grained classes. We also show that the model's bias between texture and shape features manifests differently under different test settings.
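To make the patch-based idea concrete, here is a minimal sketch of one plausible "semantics-diminished" negative augmentation (an illustration of the general idea, not the authors' exact procedure): shuffle a grid of image patches so that global object structure is destroyed while local texture statistics are kept. The grid size is an arbitrary assumption.

import numpy as np

def patch_shuffle(image, grid=4, rng=None):
    """image: (H, W, C) array with H and W divisible by `grid`.
    Returns an image whose grid x grid patches are randomly permuted."""
    rng = np.random.default_rng() if rng is None else rng
    h, w, c = image.shape
    ph, pw = h // grid, w // grid
    patches = (image.reshape(grid, ph, grid, pw, c)
                    .transpose(0, 2, 1, 3, 4)
                    .reshape(grid * grid, ph, pw, c))
    patches = patches[rng.permutation(grid * grid)]
    return (patches.reshape(grid, grid, ph, pw, c)
                   .transpose(0, 2, 1, 3, 4)
                   .reshape(h, w, c))

# usage: turn a training image into a negative sample with scrambled semantics
negative = patch_shuffle(np.random.rand(224, 224, 3), grid=4)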
null
General Low-rank Matrix Optimization: Geometric Analysis and Sharper Bounds
https://papers.nips.cc/paper_files/paper/2021/hash/e60e81c4cbe5171cd654662d9887aec2-Abstract.html
Haixiang Zhang, Yingjie Bi, Javad Lavaei
https://papers.nips.cc/paper_files/paper/2021/hash/e60e81c4cbe5171cd654662d9887aec2-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13719-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e60e81c4cbe5171cd654662d9887aec2-Paper.pdf
https://openreview.net/forum?id=P9ld0c4dwUF
https://papers.nips.cc/paper_files/paper/2021/file/e60e81c4cbe5171cd654662d9887aec2-Supplemental.pdf
This paper considers the global geometry of general low-rank minimization problems via the Burer-Monteiro factorization approach. For the rank-$1$ case, we prove that there is no spurious second-order critical point for both symmetric and asymmetric problems if the rank-$2$ RIP constant $\delta$ is less than $1/2$. Combined with a counterexample with $\delta=1/2$, this shows that the derived bound is the sharpest possible. For the arbitrary rank-$r$ case, the same property is established when the rank-$2r$ RIP constant $\delta$ is at most $1/3$. We design a counterexample to show that the non-existence of spurious second-order critical points may not hold if $\delta$ is at least $1/2$. In addition, for any problem with $\delta$ between $1/3$ and $1/2$, we prove that all second-order critical points have a positive correlation to the ground truth. Finally, the strict saddle property, which can lead to the polynomial-time global convergence of various algorithms, is established for both the symmetric and asymmetric problems when the rank-$2r$ RIP constant $\delta$ is less than $1/3$. The results of this paper significantly extend several existing bounds in the literature.
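For reference, one common way to formalize the setup above in the quadratic-loss special case (the paper treats more general objectives, so this is only an illustrative instance) is:
\[
\min_{U \in \mathbb{R}^{n \times r},\, V \in \mathbb{R}^{m \times r}} \; f(UV^{\top}),
\qquad
f(X) = \tfrac{1}{2}\,\|\mathcal{A}(X) - b\|_2^2,
\]
\[
\text{rank-}2r\ \text{RIP:}\quad
(1-\delta)\,\|K\|_F^2 \;\le\; \|\mathcal{A}(K)\|_2^2 \;\le\; (1+\delta)\,\|K\|_F^2
\quad \text{for all } K \text{ with } \operatorname{rank}(K) \le 2r .
\]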
null
Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation
https://papers.nips.cc/paper_files/paper/2021/hash/e614f646836aaed9f89ce58e837e2310-Abstract.html
Emmanuel Bengio, Moksh Jain, Maksym Korablyov, Doina Precup, Yoshua Bengio
https://papers.nips.cc/paper_files/paper/2021/hash/e614f646836aaed9f89ce58e837e2310-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13720-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e614f646836aaed9f89ce58e837e2310-Paper.pdf
https://openreview.net/forum?id=Arn2E4IRjEB
https://papers.nips.cc/paper_files/paper/2021/file/e614f646836aaed9f89ce58e837e2310-Supplemental.pdf
This paper is about the problem of learning a stochastic policy for generating an object (like a molecular graph) from a sequence of actions, such that the probability of generating an object is proportional to a given positive reward for that object. Whereas standard return maximization tends to converge to a single return-maximizing sequence, there are cases where we would like to sample a diverse set of high-return solutions. These arise, for example, in black-box function optimization when few rounds are possible, each with large batches of queries, where the batches should be diverse, e.g., in the design of new molecules. One can also see this as a problem of approximately converting an energy function to a generative distribution. While MCMC methods can achieve that, they are expensive and generally only perform local exploration. Instead, training a generative policy amortizes the cost of search during training and yields fast generation. Using insights from Temporal Difference learning, we propose GFlowNet, based on a view of the generative process as a flow network, making it possible to handle the tricky case where different trajectories can yield the same final state, e.g., there are many ways to sequentially add atoms to generate some molecular graph. We cast the set of trajectories as a flow and convert the flow consistency equations into a learning objective, akin to the casting of the Bellman equations into Temporal Difference methods. We prove that any global minimum of the proposed objectives yields a policy which samples from the desired distribution, and demonstrate the improved performance and diversity of GFlowNet on a simple domain where there are many modes in the reward function, and on a molecule synthesis task.
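Paraphrasing the flow-network view described above (see the paper for the exact objective), the flow-consistency condition that the learning objective enforces can be written, for every non-initial state $s'$, as
\[
\sum_{s,a:\,T(s,a)=s'} F(s,a) \;=\; R(s') \;+\; \sum_{a' \in \mathcal{A}(s')} F(s',a'),
\qquad
\pi(a \mid s) \;\propto\; F(s,a),
\]
where $F(s,a)$ is the flow on the edge taken by action $a$ at state $s$, $T$ is the deterministic transition function, $R(s')$ is zero for non-terminal states, and terminal states have no outgoing flow; the training loss penalizes violations of this equality (in the paper, on a log scale for numerical stability).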
null
Policy Finetuning: Bridging Sample-Efficient Offline and Online Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2021/hash/e61eaa38aed621dd776d0e67cfeee366-Abstract.html
Tengyang Xie, Nan Jiang, Huan Wang, Caiming Xiong, Yu Bai
https://papers.nips.cc/paper_files/paper/2021/hash/e61eaa38aed621dd776d0e67cfeee366-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13721-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e61eaa38aed621dd776d0e67cfeee366-Paper.pdf
https://openreview.net/forum?id=MjNFN44NbZm
https://papers.nips.cc/paper_files/paper/2021/file/e61eaa38aed621dd776d0e67cfeee366-Supplemental.pdf
Recent theoretical work studies sample-efficient reinforcement learning (RL) extensively in two settings: learning interactively in the environment (online RL), or learning from an offline dataset (offline RL). However, existing algorithms and theories for learning near-optimal policies in these two settings are rather different and disconnected. Towards bridging this gap, this paper initiates the theoretical study of *policy finetuning*, that is, online RL where the learner has additional access to a "reference policy" $\mu$ close to the optimal policy $\pi_\star$ in a certain sense. We consider the policy finetuning problem in episodic Markov Decision Processes (MDPs) with $S$ states, $A$ actions, and horizon length $H$. We first design a sharp *offline reduction* algorithm---which simply executes $\mu$ and runs offline policy optimization on the collected dataset---that finds an $\varepsilon$ near-optimal policy within $\widetilde{O}(H^3SC^\star/\varepsilon^2)$ episodes, where $C^\star$ is the single-policy concentrability coefficient between $\mu$ and $\pi_\star$. This offline result is the first that matches the sample complexity lower bound in this setting, and resolves a recent open question in offline RL. We then establish an $\Omega(H^3S\min\{C^\star, A\}/\varepsilon^2)$ sample complexity lower bound for *any* policy finetuning algorithm, including those that can adaptively explore the environment. This implies that---perhaps surprisingly---the optimal policy finetuning algorithm is either offline reduction or a purely online RL algorithm that does not use $\mu$. Finally, we design a new hybrid offline/online algorithm for policy finetuning that achieves better sample complexity than both vanilla offline reduction and purely online RL algorithms, in a relaxed setting where $\mu$ only satisfies concentrability partially up to a certain time step. Overall, our results offer a quantitative understanding of the benefit of a good reference policy, and take a step towards bridging offline and online RL.
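For concreteness, one standard definition of the single-policy concentrability coefficient consistent with the usage above (the paper's exact definition may differ in minor details) is
\[
C^\star \;=\; \max_{h \in [H],\, s \in \mathcal{S},\, a \in \mathcal{A}} \; \frac{d^{\pi_\star}_h(s,a)}{d^{\mu}_h(s,a)},
\]
where $d^{\pi}_h(s,a)$ denotes the probability that policy $\pi$ visits the state-action pair $(s,a)$ at step $h$, with the convention $0/0 = 0$; a small $C^\star$ means the reference policy $\mu$ adequately covers the state-actions visited by the optimal policy $\pi_\star$.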
null
Reducing Information Bottleneck for Weakly Supervised Semantic Segmentation
https://papers.nips.cc/paper_files/paper/2021/hash/e6384711491713d29bc63fc5eeb5ba4f-Abstract.html
Jungbeom Lee, Jooyoung Choi, Jisoo Mok, Sungroh Yoon
https://papers.nips.cc/paper_files/paper/2021/hash/e6384711491713d29bc63fc5eeb5ba4f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13722-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e6384711491713d29bc63fc5eeb5ba4f-Paper.pdf
https://openreview.net/forum?id=MYs3AVBLeY8
https://papers.nips.cc/paper_files/paper/2021/file/e6384711491713d29bc63fc5eeb5ba4f-Supplemental.pdf
Weakly supervised semantic segmentation produces pixel-level localization from class labels; however, a classifier trained on such labels is likely to focus on a small discriminative region of the target object. We interpret this phenomenon using the information bottleneck principle: the final layer of a deep neural network, activated by the sigmoid or softmax activation functions, causes an information bottleneck, and as a result, only a subset of the task-relevant information is passed on to the output. We first support this argument through a simulated toy experiment and then propose a method to reduce the information bottleneck by removing the last activation function. In addition, we introduce a new pooling method that further encourages the transmission of information from non-discriminative regions to the classification. Our experimental evaluations demonstrate that this simple modification significantly improves the quality of localization maps on both the PASCAL VOC 2012 and MS COCO 2014 datasets, exhibiting a new state-of-the-art performance for weakly supervised semantic segmentation.
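As a minimal illustration of the "remove the final activation" idea (a sketch only, not the authors' RIB implementation; the paper additionally proposes a new pooling method, which is replaced here by plain global average pooling), localization maps can be taken from pre-activation class scores, with the sigmoid folded into the multi-label loss rather than applied as an explicit final layer:

import torch
import torch.nn as nn

class CAMHead(nn.Module):
    """1x1 classifier over backbone features; returns logits and raw score maps."""
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.classifier = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, features):
        score_maps = self.classifier(features)   # (B, C, H, W), no activation applied
        logits = score_maps.mean(dim=(2, 3))     # global average pooling over space
        return logits, score_maps

# hypothetical shapes: a ResNet-style backbone output and 20 foreground classes
head = CAMHead(in_channels=2048, num_classes=20)
features = torch.randn(2, 2048, 14, 14)
logits, cams = head(features)                    # `cams` serve as localization maps
targets = torch.randint(0, 2, (2, 20)).float()
loss = nn.BCEWithLogitsLoss()(logits, targets)   # sigmoid lives inside the loss only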
null
SGD: The Role of Implicit Regularization, Batch-size and Multiple-epochs
https://papers.nips.cc/paper_files/paper/2021/hash/e64c9ec33f19c7de745bd6b6d1a7a86e-Abstract.html
Ayush Sekhari, Karthik Sridharan, Satyen Kale
https://papers.nips.cc/paper_files/paper/2021/hash/e64c9ec33f19c7de745bd6b6d1a7a86e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13723-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/e64c9ec33f19c7de745bd6b6d1a7a86e-Paper.pdf
https://openreview.net/forum?id=jqjsLUrB8F
https://papers.nips.cc/paper_files/paper/2021/file/e64c9ec33f19c7de745bd6b6d1a7a86e-Supplemental.pdf
Multi-epoch, small-batch, Stochastic Gradient Descent (SGD) has been the method of choice for learning with large over-parameterized models. A popular theory for explaining why SGD works well in practice is that the algorithm has an implicit regularization that biases its output towards a good solution. Perhaps the theoretically best-understood learning setting for SGD is that of Stochastic Convex Optimization (SCO), where it is well known that SGD learns at a rate of $O(1/\sqrt{n})$, where $n$ is the number of samples. In this paper, we consider the problem of SCO and explore the role of implicit regularization, batch size and multiple epochs for SGD. Our main contributions are threefold: * We show that for any regularizer, there is an SCO problem for which Regularized Empirical Risk Minimization fails to learn. This automatically rules out any implicit-regularization-based explanation for the success of SGD. * We provide a separation between SGD and learning via Gradient Descent on empirical loss (GD) in terms of sample complexity. We show that there is an SCO problem such that GD with any step size and number of iterations can only learn at a suboptimal rate: at least $\widetilde{\Omega}(1/n^{5/12})$. * We present a multi-epoch variant of SGD commonly used in practice. We prove that this algorithm is at least as good as single-pass SGD in the worst case. However, for certain SCO problems, taking multiple passes over the dataset can significantly outperform single-pass SGD. We extend our results to the general learning setting by showing a problem which is learnable for any data distribution, and for this problem, SGD is strictly better than RERM for any regularization function. We conclude by discussing the implications of our results for deep learning, and show a separation between SGD and ERM for two-layer diagonal neural networks.
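For reference, the classical single-pass SGD guarantee in SCO alluded to above (convex, Lipschitz losses over a bounded domain $\mathcal{W}$, step size $\eta \propto 1/\sqrt{n}$) reads
\[
w_{t+1} \;=\; \Pi_{\mathcal{W}}\!\big(w_t - \eta\,\nabla f(w_t; z_t)\big),
\qquad
\bar{w}_n \;=\; \frac{1}{n}\sum_{t=1}^{n} w_t,
\qquad
\mathbb{E}\big[F(\bar{w}_n)\big] - \min_{w \in \mathcal{W}} F(w) \;\le\; O\!\Big(\tfrac{1}{\sqrt{n}}\Big),
\]
where $F(w) = \mathbb{E}_{z \sim \mathcal{D}}[f(w; z)]$ and each sample $z_t$ is used exactly once.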
null