Each record below lists the following fields in order: title, url, authors, detail_url, tags, Bibtex, Paper, Reviews And Public Comment », Supplemental, abstract, Supplemental Errata.
DECAF: Generating Fair Synthetic Data Using Causally-Aware Generative Networks
https://papers.nips.cc/paper_files/paper/2021/hash/ba9fab001f67381e56e410575874d967-Abstract.html
Boris van Breugel, Trent Kyono, Jeroen Berrevoets, Mihaela van der Schaar
https://papers.nips.cc/paper_files/paper/2021/hash/ba9fab001f67381e56e410575874d967-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13325-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/ba9fab001f67381e56e410575874d967-Paper.pdf
https://openreview.net/forum?id=XN1M27T6uux
https://papers.nips.cc/paper_files/paper/2021/file/ba9fab001f67381e56e410575874d967-Supplemental.pdf
Machine learning models have been criticized for reflecting unfair biases in the training data. Instead of addressing this by introducing fair learning algorithms directly, we focus on generating fair synthetic data, such that any downstream learner is fair. Generating fair synthetic data from unfair data - while remaining truthful to the underlying data-generating process (DGP) - is non-trivial. In this paper, we introduce DECAF: a GAN-based fair synthetic data generator for tabular data. With DECAF we embed the DGP explicitly as a structural causal model in the input layers of the generator, allowing each variable to be reconstructed conditioned on its causal parents. This procedure enables inference-time debiasing, where biased edges can be strategically removed to satisfy user-defined fairness requirements. The DECAF framework is versatile and compatible with several popular definitions of fairness. In our experiments, we show that DECAF successfully removes undesired bias and - in contrast to existing methods - is capable of generating high-quality synthetic data. Furthermore, we provide theoretical guarantees on the generator's convergence and the fairness of downstream models.
null
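To make the causal-generation idea above concrete, here is a minimal numpy sketch of sampling variables in topological order from their causal parents and removing one biased edge at inference time. The three-variable DAG, coefficients, and the dropped edge are illustrative assumptions, not DECAF's architecture (which uses GAN generators per variable).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical three-variable DAG: A (protected) -> X -> Y, plus a direct A -> Y edge.
# Each variable is drawn from its causal parents plus noise, mirroring the idea of
# reconstructing every variable conditioned on its parents.
def generate(n, drop_edges=frozenset()):
    a = rng.binomial(1, 0.5, size=n).astype(float)            # protected attribute
    x = 0.8 * a + rng.normal(scale=0.3, size=n)               # mediator
    a_to_y = 0.0 if "A->Y" in drop_edges else 1.5             # inference-time edge removal
    y = a_to_y * a + 0.5 * x + rng.normal(scale=0.3, size=n)  # outcome
    return np.stack([a, x, y], axis=1)

biased = generate(10_000)
debiased = generate(10_000, drop_edges={"A->Y"})              # "debiased" synthetic data

# The outcome gap between protected groups shrinks once the biased edge is dropped.
for name, d in [("biased", biased), ("debiased", debiased)]:
    gap = d[d[:, 0] == 1, 2].mean() - d[d[:, 0] == 0, 2].mean()
    print(f"{name}: group outcome gap = {gap:.2f}")
```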
EvoGrad: Efficient Gradient-Based Meta-Learning and Hyperparameter Optimization
https://papers.nips.cc/paper_files/paper/2021/hash/bac49b876d5dfc9cd169c22ef5178ca7-Abstract.html
Ondrej Bohdal, Yongxin Yang, Timothy Hospedales
https://papers.nips.cc/paper_files/paper/2021/hash/bac49b876d5dfc9cd169c22ef5178ca7-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13326-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bac49b876d5dfc9cd169c22ef5178ca7-Paper.pdf
https://openreview.net/forum?id=YT_hOa02tqO
https://papers.nips.cc/paper_files/paper/2021/file/bac49b876d5dfc9cd169c22ef5178ca7-Supplemental.zip
Gradient-based meta-learning and hyperparameter optimization have seen significant progress recently, enabling practical end-to-end training of neural networks together with many hyperparameters. Nevertheless, existing approaches are relatively expensive as they need to compute second-order derivatives and store a longer computational graph. This cost prevents scaling them to larger network architectures. We present EvoGrad, a new approach to meta-learning that draws upon evolutionary techniques to more efficiently compute hypergradients. EvoGrad estimates hypergradients with respect to hyperparameters without calculating second-order gradients or storing a longer computational graph, leading to significant improvements in efficiency. We evaluate EvoGrad on three substantial recent meta-learning applications, namely cross-domain few-shot learning with feature-wise transformations, noisy label learning with Meta-Weight-Net and low-resource cross-lingual learning with meta representation transformation. The results show that EvoGrad significantly improves efficiency and enables scaling meta-learning to bigger architectures, such as from ResNet10 to ResNet34.
null
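As a rough illustration of the efficiency argument above, the sketch below mimics EvoGrad's core trick on a one-parameter toy problem: perturb the weights, softmax-reweight the copies by their training loss, and differentiate the validation loss through that reweighting only, so no second-order derivatives are needed. The losses, temperature, and perturbation scale are made-up placeholders, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_loss(theta, lam):          # toy training loss, depends on hyperparameter lam
    return (theta - 1.0) ** 2 + lam * (theta - 3.0) ** 2

def val_loss(theta):                 # toy validation loss
    return (theta - 2.0) ** 2

theta, lam, sigma, tau, K = 0.0, 0.5, 0.5, 1.0, 64
eps = rng.normal(size=K)
thetas = theta + sigma * eps                     # perturbed weight copies
losses = train_loss(thetas, lam)
s = -losses / tau
w = np.exp(s - s.max()); w /= w.sum()            # softmax over (negative) losses
theta_new = np.dot(w, thetas)                    # evolutionary inner "update"

# d(theta_new)/d(lam): differentiate only through the softmax weights (first-order).
ds_dlam = -((thetas - 3.0) ** 2) / tau
dw_dlam = w * (ds_dlam - np.dot(w, ds_dlam))     # softmax Jacobian-vector product
dtheta_dlam = np.dot(dw_dlam, thetas)

hypergrad = 2.0 * (theta_new - 2.0) * dtheta_dlam   # chain rule through val_loss
print("estimated hypergradient dL_val/dlam:", hypergrad)
```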
Biological learning in key-value memory networks
https://papers.nips.cc/paper_files/paper/2021/hash/bacadc62d6e67d7897cef027fa2d416c-Abstract.html
Danil Tyulmankov, Ching Fang, Annapurna Vadaparty, Guangyu Robert Yang
https://papers.nips.cc/paper_files/paper/2021/hash/bacadc62d6e67d7897cef027fa2d416c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13327-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bacadc62d6e67d7897cef027fa2d416c-Paper.pdf
https://openreview.net/forum?id=sMRdrUIrZbT
https://papers.nips.cc/paper_files/paper/2021/file/bacadc62d6e67d7897cef027fa2d416c-Supplemental.pdf
In neuroscience, classical Hopfield networks are the standard biologically plausible model of long-term memory, relying on Hebbian plasticity for storage and attractor dynamics for recall. In contrast, memory-augmented neural networks in machine learning commonly use a key-value mechanism to store and read out memories in a single step. Such augmented networks achieve impressive feats of memory compared to traditional variants, yet their biological relevance is unclear. We propose an implementation of basic key-value memory that stores inputs using a combination of biologically plausible three-factor plasticity rules. The same rules are recovered when network parameters are meta-learned. Our network performs on par with classical Hopfield networks on autoassociative memory tasks and can be naturally extended to continual recall, heteroassociative memory, and sequence learning. Our results suggest a compelling alternative to the classical Hopfield network as a model of biological long-term memory.
null
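For readers unfamiliar with the machine-learning side of the comparison above, here is a generic key-value memory in numpy: values are written to slots in a single step and read back by softmax similarity over keys. This is only the standard mechanism the paper builds on; the biologically plausible three-factor plasticity rules themselves are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
d_key, d_val, n_slots = 16, 16, 32

K = np.zeros((n_slots, d_key))     # key matrix (one row per memory slot)
V = np.zeros((n_slots, d_val))     # value matrix

def write(slot, key, value):
    # One-shot, local storage: each slot holds one key/value pair.
    K[slot], V[slot] = key, value

def read(query, beta=10.0):
    # Similarity-gated readout: softmax over key matches, then weighted values.
    scores = K @ query
    w = np.exp(beta * (scores - scores.max()))
    w /= w.sum()
    return w @ V

# Heteroassociative demo: store random key -> value pairs, recall from a noisy key.
keys = rng.normal(size=(5, d_key)); keys /= np.linalg.norm(keys, axis=1, keepdims=True)
vals = rng.normal(size=(5, d_val))
for i in range(5):
    write(i, keys[i], vals[i])

noisy = keys[2] + 0.1 * rng.normal(size=d_key)
print("recall error:", np.linalg.norm(read(noisy) - vals[2]))
```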
Correlated Stochastic Block Models: Exact Graph Matching with Applications to Recovering Communities
https://papers.nips.cc/paper_files/paper/2021/hash/baf4f1a5938b8d520b328c13b51ccf11-Abstract.html
Miklos Racz, Anirudh Sridhar
https://papers.nips.cc/paper_files/paper/2021/hash/baf4f1a5938b8d520b328c13b51ccf11-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13328-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/baf4f1a5938b8d520b328c13b51ccf11-Paper.pdf
https://openreview.net/forum?id=hg0s8od-jd
https://papers.nips.cc/paper_files/paper/2021/file/baf4f1a5938b8d520b328c13b51ccf11-Supplemental.pdf
We consider the task of learning latent community structure from multiple correlated networks. First, we study the problem of learning the latent vertex correspondence between two edge-correlated stochastic block models, focusing on the regime where the average degree is logarithmic in the number of vertices. We derive the precise information-theoretic threshold for exact recovery: above the threshold there exists an estimator that outputs the true correspondence with probability close to 1, while below it no estimator can recover the true correspondence with probability bounded away from 0. As an application of our results, we show how one can exactly recover the latent communities using \emph{multiple} correlated graphs in parameter regimes where it is information-theoretically impossible to do so using just a single graph.
null
Twice regularized MDPs and the equivalence between robustness and regularization
https://papers.nips.cc/paper_files/paper/2021/hash/bb1443cc31d7396bf73e7858cea114e1-Abstract.html
Esther Derman, Matthieu Geist, Shie Mannor
https://papers.nips.cc/paper_files/paper/2021/hash/bb1443cc31d7396bf73e7858cea114e1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13329-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bb1443cc31d7396bf73e7858cea114e1-Paper.pdf
https://openreview.net/forum?id=x00mCNwbH8Q
https://papers.nips.cc/paper_files/paper/2021/file/bb1443cc31d7396bf73e7858cea114e1-Supplemental.pdf
Robust Markov decision processes (MDPs) aim to handle changing or partially known system dynamics. To solve them, one typically resorts to robust optimization methods. However, this significantly increases computational complexity and limits scalability in both learning and planning. On the other hand, regularized MDPs show more stability in policy learning without impairing time complexity. Yet, they generally do not encompass uncertainty in the model dynamics. In this work, we aim to learn robust MDPs using regularization. We first show that regularized MDPs are a particular instance of robust MDPs with uncertain reward. We thus establish that policy iteration on reward-robust MDPs can have the same time complexity as on regularized MDPs. We further extend this relationship to MDPs with uncertain transitions: this leads to a regularization term with an additional dependence on the value function. We finally generalize regularized MDPs to twice regularized MDPs (R${}^2$ MDPs), i.e., MDPs with $\textit{both}$ value and policy regularization. The corresponding Bellman operators enable developing policy iteration schemes with convergence and robustness guarantees. It also reduces planning and learning in robust MDPs to regularized MDPs.
null
Online Sign Identification: Minimization of the Number of Errors in Thresholding Bandits
https://papers.nips.cc/paper_files/paper/2021/hash/bb3ea2b28c563b1fd6bc89a32ff4d14b-Abstract.html
Reda Ouhamma, Rémy Degenne, Pierre Gaillard, Vianney Perchet
https://papers.nips.cc/paper_files/paper/2021/hash/bb3ea2b28c563b1fd6bc89a32ff4d14b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13043-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bb3ea2b28c563b1fd6bc89a32ff4d14b-Paper.pdf
https://openreview.net/forum?id=5fPBtLSGk21
https://papers.nips.cc/paper_files/paper/2021/file/bb3ea2b28c563b1fd6bc89a32ff4d14b-Supplemental.zip
In the fixed budget thresholding bandit problem, an algorithm sequentially allocates a budgeted number of samples to different distributions. It then predicts whether the mean of each distribution is larger or lower than a given threshold. We introduce a large family of algorithms (containing most existing relevant ones), inspired by the Frank-Wolfe algorithm, and provide a thorough yet generic analysis of their performance. This allows us to construct new explicit algorithms, for a broad class of problems, whose losses are within a small constant factor of the non-adaptive oracle ones. Quite interestingly, we observe that adaptive methods empirically greatly outperform non-adaptive oracles, an uncommon behavior in standard online learning settings, such as regret minimization. We explain this surprising phenomenon on an insightful toy problem.
null
Nearly Minimax Optimal Reinforcement Learning for Discounted MDPs
https://papers.nips.cc/paper_files/paper/2021/hash/bb57db42f77807a9c5823bd8c2d9aaef-Abstract.html
Jiafan He, Dongruo Zhou, Quanquan Gu
https://papers.nips.cc/paper_files/paper/2021/hash/bb57db42f77807a9c5823bd8c2d9aaef-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13330-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bb57db42f77807a9c5823bd8c2d9aaef-Paper.pdf
https://openreview.net/forum?id=9lwprXiGdR4
https://papers.nips.cc/paper_files/paper/2021/file/bb57db42f77807a9c5823bd8c2d9aaef-Supplemental.pdf
We study the reinforcement learning problem for discounted Markov Decision Processes (MDPs) under the tabular setting. We propose a model-based algorithm named UCBVI-$\gamma$, which is based on the \emph{optimism in the face of uncertainty principle} and the Bernstein-type bonus. We show that UCBVI-$\gamma$ achieves an $\tilde{O}\big({\sqrt{SAT}}/{(1-\gamma)^{1.5}}\big)$ regret, where $S$ is the number of states, $A$ is the number of actions, $\gamma$ is the discount factor and $T$ is the number of steps. In addition, we construct a class of hard MDPs and show that for any algorithm, the expected regret is at least $\tilde{\Omega}\big({\sqrt{SAT}}/{(1-\gamma)^{1.5}}\big)$. Our upper bound matches the minimax lower bound up to logarithmic factors, which suggests that UCBVI-$\gamma$ is nearly minimax optimal for discounted MDPs.
null
Sparse Deep Learning: A New Framework Immune to Local Traps and Miscalibration
https://papers.nips.cc/paper_files/paper/2021/hash/bb836c01cdc9120a9c984c525e4b1a4a-Abstract.html
Yan Sun, Wenjun Xiong, Faming Liang
https://papers.nips.cc/paper_files/paper/2021/hash/bb836c01cdc9120a9c984c525e4b1a4a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13331-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bb836c01cdc9120a9c984c525e4b1a4a-Paper.pdf
https://openreview.net/forum?id=VMAfyuC3uXP
https://papers.nips.cc/paper_files/paper/2021/file/bb836c01cdc9120a9c984c525e4b1a4a-Supplemental.pdf
Deep learning has powered recent successes of artificial intelligence (AI). However, the deep neural network, as the basic model of deep learning, has suffered from issues such as local traps and miscalibration. In this paper, we provide a new framework for sparse deep learning, which addresses the above issues in a coherent way. In particular, we lay down a theoretical foundation for sparse deep learning and propose prior annealing algorithms for learning sparse neural networks. The former has successfully tamed the sparse deep neural network into the framework of statistical modeling, enabling prediction uncertainty to be correctly quantified. The latter can be asymptotically guaranteed to converge to the global optimum, ensuring the validity of downstream statistical inference. Numerical results indicate the superiority of the proposed method compared to existing ones.
null
Calibrating Predictions to Decisions: A Novel Approach to Multi-Class Calibration
https://papers.nips.cc/paper_files/paper/2021/hash/bbc92a647199b832ec90d7cf57074e9e-Abstract.html
Shengjia Zhao, Michael Kim, Roshni Sahoo, Tengyu Ma, Stefano Ermon
https://papers.nips.cc/paper_files/paper/2021/hash/bbc92a647199b832ec90d7cf57074e9e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13332-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bbc92a647199b832ec90d7cf57074e9e-Paper.pdf
https://openreview.net/forum?id=iFF-zKCgzS
https://papers.nips.cc/paper_files/paper/2021/file/bbc92a647199b832ec90d7cf57074e9e-Supplemental.pdf
When facing uncertainty, decision-makers want predictions they can trust. A machine learning provider can convey confidence to decision-makers by guaranteeing their predictions are distribution calibrated--- amongst the inputs that receive a predicted vector of class probabilities q, the actual distribution over classes is given by q. For multi-class prediction problems, however, directly optimizing predictions under distribution calibration tends to be infeasible, requiring sample complexity that grows exponentially in the number of classes C. In this work, we introduce a new notion---decision calibration---that requires the predicted distribution and true distribution over classes to be ``indistinguishable'' to downstream decision-makers. This perspective gives a new characterization of distribution calibration: a predictor is distribution calibrated if and only if it is decision calibrated with respect to all decision-makers. Our main result shows that under a mild restriction, unlike distribution calibration, decision calibration is actually feasible. We design a recalibration algorithm that provably achieves decision calibration efficiently, provided that the decision-makers have a bounded number of actions (e.g., polynomial in C). We validate our recalibration algorithm empirically: compared to existing methods, decision calibration improves decision-making on skin lesion and ImageNet classification with modern neural network predictors.
null
Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks
https://papers.nips.cc/paper_files/paper/2021/hash/bc37e109d92bdc1ea71da6c919d54907-Abstract.html
Dmitry Kovalev, Elnur Gasanov, Alexander Gasnikov, Peter Richtarik
https://papers.nips.cc/paper_files/paper/2021/hash/bc37e109d92bdc1ea71da6c919d54907-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13333-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bc37e109d92bdc1ea71da6c919d54907-Paper.pdf
https://openreview.net/forum?id=L8-54wkift
https://papers.nips.cc/paper_files/paper/2021/file/bc37e109d92bdc1ea71da6c919d54907-Supplemental.pdf
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network whose links are allowed to change in time. We solve two fundamental problems for this task. First, we establish {\em the first lower bounds} on the number of decentralized communication rounds and the number of local computations required to find an $\epsilon$-accurate solution. Second, we design two {\em optimal algorithms} that attain these lower bounds: (i) a variant of the recently proposed algorithm ADOM (Kovalev et al, 2021) enhanced via a multi-consensus subroutine, which is optimal in the case when access to the dual gradients is assumed, and (ii) a novel algorithm, called ADOM+, which is optimal in the case when access to the primal gradients is assumed. We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
null
Testing Probabilistic Circuits
https://papers.nips.cc/paper_files/paper/2021/hash/bc573864331a9e42e4511de6f678aa83-Abstract.html
Yash Pralhad Pote, Kuldeep S Meel
https://papers.nips.cc/paper_files/paper/2021/hash/bc573864331a9e42e4511de6f678aa83-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13334-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bc573864331a9e42e4511de6f678aa83-Paper.pdf
https://openreview.net/forum?id=sHu8-ux9VH
https://papers.nips.cc/paper_files/paper/2021/file/bc573864331a9e42e4511de6f678aa83-Supplemental.pdf
Probabilistic circuits (PCs) are a powerful modeling framework for representing tractable probability distributions over combinatorial spaces. In machine learning and probabilistic programming, one is often interested in understanding whether the distributions learned using PCs are close to the desired distribution. Thus, given two probabilistic circuits, a fundamental problem of interest is to determine whether their distributions are close to each other. The primary contribution of this paper is a closeness test for PCs with respect to the total variation distance metric. Our algorithm utilizes two common PC queries, counting and sampling. In particular, we provide a poly-time probabilistic algorithm to check the closeness of two PCs, when the PCs support tractable approximate counting and sampling. We demonstrate the practical efficiency of our algorithmic framework via a detailed experimental evaluation of a prototype implementation against a set of 375 PC benchmarks. We find that our test correctly decides the closeness of all 375 PCs within 3600 seconds.
null
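The two primitives named above, counting (which yields probabilities) and sampling, are enough to estimate the quantity being tested. Below is a hypothetical numpy illustration using the identity d_TV(P, Q) = E_{x~P}[max(0, 1 - Q(x)/P(x))]; the small arrays stand in for circuits, and this plain Monte Carlo estimator is not the paper's tester and carries none of its guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two small discrete distributions standing in for circuits.
P = np.array([0.4, 0.3, 0.2, 0.1])
Q = np.array([0.25, 0.25, 0.25, 0.25])

exact_tv = 0.5 * np.abs(P - Q).sum()

n = 200_000
xs = rng.choice(len(P), size=n, p=P)                     # "sampling" query on P
est_tv = np.mean(np.maximum(0.0, 1.0 - Q[xs] / P[xs]))   # "counting" queries on P and Q

print(f"exact d_TV = {exact_tv:.3f}, Monte Carlo estimate = {est_tv:.3f}")
```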
Pseudo-Spherical Contrastive Divergence
https://papers.nips.cc/paper_files/paper/2021/hash/bc5fcb0018cecacba559dc512740091b-Abstract.html
Lantao Yu, Jiaming Song, Yang Song, Stefano Ermon
https://papers.nips.cc/paper_files/paper/2021/hash/bc5fcb0018cecacba559dc512740091b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13335-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bc5fcb0018cecacba559dc512740091b-Paper.pdf
https://openreview.net/forum?id=8qa6hkGYDJk
https://papers.nips.cc/paper_files/paper/2021/file/bc5fcb0018cecacba559dc512740091b-Supplemental.pdf
Energy-based models (EBMs) offer flexible distribution parametrization. However, due to the intractable partition function, they are typically trained via contrastive divergence for maximum likelihood estimation. In this paper, we propose pseudo-spherical contrastive divergence (PS-CD) to generalize maximum likelihood learning of EBMs. PS-CD is derived from the maximization of a family of strictly proper homogeneous scoring rules, which avoids the computation of the intractable partition function and provides a generalized family of learning objectives that include contrastive divergence as a special case. Moreover, PS-CD allows us to flexibly choose various learning objectives to train EBMs without additional computational cost or variational minimax optimization. Theoretical analysis on the proposed method and extensive experiments on both synthetic data and commonly used image datasets demonstrate the effectiveness and modeling flexibility of PS-CD, as well as its robustness to data contamination, thus showing its superiority over maximum likelihood and $f$-EBMs.
null
NORESQA: A Framework for Speech Quality Assessment using Non-Matching References
https://papers.nips.cc/paper_files/paper/2021/hash/bc6d753857fe3dd4275dff707dedf329-Abstract.html
Pranay Manocha, Buye Xu, Anurag Kumar
https://papers.nips.cc/paper_files/paper/2021/hash/bc6d753857fe3dd4275dff707dedf329-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13336-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bc6d753857fe3dd4275dff707dedf329-Paper.pdf
https://openreview.net/forum?id=RwASmRpLp-
https://papers.nips.cc/paper_files/paper/2021/file/bc6d753857fe3dd4275dff707dedf329-Supplemental.zip
The perceptual task of speech quality assessment (SQA) is challenging for machines. Objective SQA methods that rely on the availability of the corresponding clean reference have been the primary go-to approaches for SQA. Clearly, these methods fail in real-world scenarios where the ground truth clean references are not available. In recent years, non-intrusive methods that train neural networks to predict ratings or scores have attracted much attention, but they suffer from several shortcomings such as lack of robustness, reliance on labeled data for training and so on. In this work, we propose a new direction for speech quality assessment. Inspired by humans' innate ability to compare and assess the quality of speech signals even when they have non-matching contents, we propose a novel framework that predicts a subjective relative quality score for the given speech signal with respect to any provided reference without using any subjective data. We show that neural networks trained using our framework produce scores that correlate well with subjective mean opinion scores (MOS) and are also competitive with methods such as DNSMOS, which explicitly relies on MOS from humans for training networks. Moreover, our method also provides a natural way to embed quality-related information in neural networks, which we show is helpful for downstream tasks such as speech enhancement.
null
AFEC: Active Forgetting of Negative Transfer in Continual Learning
https://papers.nips.cc/paper_files/paper/2021/hash/bc6dc48b743dc5d013b1abaebd2faed2-Abstract.html
Liyuan Wang, Mingtian Zhang, Zhongfan Jia, Qian Li, Chenglong Bao, Kaisheng Ma, Jun Zhu, Yi Zhong
https://papers.nips.cc/paper_files/paper/2021/hash/bc6dc48b743dc5d013b1abaebd2faed2-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13337-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bc6dc48b743dc5d013b1abaebd2faed2-Paper.pdf
https://openreview.net/forum?id=SI-vB7AYS_c
https://papers.nips.cc/paper_files/paper/2021/file/bc6dc48b743dc5d013b1abaebd2faed2-Supplemental.pdf
Continual learning aims to learn a sequence of tasks from dynamic data distributions. Without access to the old training samples, knowledge transfer from the old tasks to each new task is difficult to determine, which might be either positive or negative. If the old knowledge interferes with the learning of a new task, i.e., the forward knowledge transfer is negative, then precisely remembering the old tasks will further aggravate the interference, thus decreasing the performance of continual learning. By contrast, biological neural networks can actively forget the old knowledge that conflicts with the learning of a new experience, through regulating the learning-triggered synaptic expansion and synaptic convergence. Inspired by biological active forgetting, we propose to actively forget the old knowledge that limits the learning of new tasks to benefit continual learning. Under the framework of Bayesian continual learning, we develop a novel approach named Active Forgetting with synaptic Expansion-Convergence (AFEC). Our method dynamically expands parameters to learn each new task and then selectively combines them, which is formally consistent with the underlying mechanism of biological active forgetting. We extensively evaluate AFEC on a variety of continual learning benchmarks, including CIFAR-10 regression tasks, visual classification tasks and Atari reinforcement tasks, where AFEC effectively improves the learning of new tasks and achieves state-of-the-art performance in a plug-and-play way.
null
Heterogeneous Multi-player Multi-armed Bandits: Closing the Gap and Generalization
https://papers.nips.cc/paper_files/paper/2021/hash/bcb3303a96a92dc38c12992941de7627-Abstract.html
Chengshuai Shi, Wei Xiong, Cong Shen, Jing Yang
https://papers.nips.cc/paper_files/paper/2021/hash/bcb3303a96a92dc38c12992941de7627-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13338-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bcb3303a96a92dc38c12992941de7627-Paper.pdf
https://openreview.net/forum?id=q4Dln9kWFA0
https://papers.nips.cc/paper_files/paper/2021/file/bcb3303a96a92dc38c12992941de7627-Supplemental.pdf
Despite significant interest and much progress in decentralized multi-player multi-armed bandit (MP-MAB) problems in recent years, the regret gap to the natural centralized lower bound in the heterogeneous MP-MAB setting remains open. In this paper, we propose BEACON -- Batched Exploration with Adaptive COmmunicatioN -- that closes this gap. BEACON accomplishes this goal with novel contributions in implicit communication and efficient exploration. For the former, we propose a novel adaptive differential communication (ADC) design that significantly improves the implicit communication efficiency. For the latter, a carefully crafted batched exploration scheme is developed to enable incorporation of the combinatorial upper confidence bound (CUCB) principle. We then generalize the existing linear-reward MP-MAB problems, where the system reward is always the sum of individually collected rewards, to a new MP-MAB problem where the system reward is a general (nonlinear) function of individual rewards. We extend BEACON to solve this problem and prove a logarithmic regret. BEACON bridges the algorithm design and regret analysis of combinatorial MAB (CMAB) and MP-MAB, two largely disjoint areas in MAB, and the results in this paper suggest that this previously ignored connection is worth further investigation.
null
SWAD: Domain Generalization by Seeking Flat Minima
https://papers.nips.cc/paper_files/paper/2021/hash/bcb41ccdc4363c6848a1d760f26c28a0-Abstract.html
Junbum Cha, Sanghyuk Chun, Kyungjae Lee, Han-Cheol Cho, Seunghyun Park, Yunsung Lee, Sungrae Park
https://papers.nips.cc/paper_files/paper/2021/hash/bcb41ccdc4363c6848a1d760f26c28a0-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13339-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bcb41ccdc4363c6848a1d760f26c28a0-Paper.pdf
https://openreview.net/forum?id=zkHlu_3sJYU
https://papers.nips.cc/paper_files/paper/2021/file/bcb41ccdc4363c6848a1d760f26c28a0-Supplemental.pdf
Domain generalization (DG) methods aim to achieve generalizability to an unseen target domain by using only training data from the source domains. Although a variety of DG methods have been proposed, a recent study shows that under a fair evaluation protocol, called DomainBed, the simple empirical risk minimization (ERM) approach performs comparably to or even outperforms previous methods. Unfortunately, simply solving ERM on a complex, non-convex loss function can easily lead to sub-optimal generalizability by seeking sharp minima. In this paper, we theoretically show that finding flat minima results in a smaller domain generalization gap. We also propose a simple yet effective method, named Stochastic Weight Averaging Densely (SWAD), to find flat minima. SWAD finds flatter minima and suffers less from overfitting than vanilla SWA, thanks to a dense and overfit-aware stochastic weight sampling strategy. SWAD shows state-of-the-art performance on five DG benchmarks, namely PACS, VLCS, OfficeHome, TerraIncognita, and DomainNet, with consistent and large margins of +1.6% on average in out-of-domain accuracy. We also compare SWAD with conventional generalization methods, such as data augmentation and consistency regularization methods, to verify that the remarkable performance improvements originate from seeking flat minima, not from better in-domain generalizability. Last but not least, SWAD is readily adaptable to existing DG methods without modification; the combination of SWAD and an existing DG method further improves DG performance. Source code is available at https://github.com/khanrc/swad.
null
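A minimal sketch of the dense weight averaging idea described above: instead of one snapshot per epoch as in vanilla SWA, every iterate inside a chosen window is averaged. The fake gradient, fixed window, and flat-array "model" are placeholders; SWAD additionally selects the window with an overfit-aware criterion based on validation loss.

```python
import numpy as np

rng = np.random.default_rng(0)

theta = rng.normal(size=1000)          # stand-in for flattened model weights
running_sum, count = np.zeros_like(theta), 0
start_iter, end_iter = 200, 800        # averaging window (chosen from val loss in SWAD)

for it in range(1000):
    grad = 0.01 * theta + 0.001 * rng.normal(size=theta.shape)  # fake gradient
    theta = theta - 0.1 * grad                                   # one SGD step
    if start_iter <= it < end_iter:    # densely accumulate every iterate in the window
        running_sum += theta
        count += 1

theta_swad = running_sum / count       # averaged weights used at test time
print("||theta_swad - theta_final|| =", np.linalg.norm(theta_swad - theta))
```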
Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting
https://papers.nips.cc/paper_files/paper/2021/hash/bcc0d400288793e8bdcd7c19a8ac0c2b-Abstract.html
Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long
https://papers.nips.cc/paper_files/paper/2021/hash/bcc0d400288793e8bdcd7c19a8ac0c2b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13340-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bcc0d400288793e8bdcd7c19a8ac0c2b-Paper.pdf
https://openreview.net/forum?id=J4gRj6d5Qm
https://papers.nips.cc/paper_files/paper/2021/file/bcc0d400288793e8bdcd7c19a8ac0c2b-Supplemental.pdf
Extending the forecasting time is a critical demand for real applications, such as extreme weather early warning and long-term energy consumption planning. This paper studies the long-term forecasting problem of time series. Prior Transformer-based models adopt various self-attention mechanisms to discover the long-range dependencies. However, intricate temporal patterns of the long-term future prohibit the model from finding reliable dependencies. Also, Transformers have to adopt sparse versions of point-wise self-attention for efficiency on long series, resulting in an information-utilization bottleneck. Going beyond Transformers, we design Autoformer as a novel decomposition architecture with an Auto-Correlation mechanism. We break with the pre-processing convention of series decomposition and renovate it as a basic inner block of deep models. This design empowers Autoformer with progressive decomposition capacities for complex time series. Further, inspired by stochastic process theory, we design the Auto-Correlation mechanism based on the series periodicity, which conducts dependency discovery and representation aggregation at the sub-series level. Auto-Correlation outperforms self-attention in both efficiency and accuracy. In long-term forecasting, Autoformer yields state-of-the-art accuracy, with a 38% relative improvement on six benchmarks, covering five practical applications: energy, traffic, economics, weather and disease. Code is available at this repository: https://github.com/thuml/Autoformer.
null
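To illustrate the period-based dependency discovery mentioned above, the sketch below estimates autocorrelation with an FFT, keeps the top-k lags, and aggregates the correspondingly delayed sub-series with softmax weights. It operates on a single 1-D series with assumed hyperparameters; the actual Auto-Correlation block works on learned query/key/value projections inside the model.

```python
import numpy as np

rng = np.random.default_rng(0)

L, k = 256, 4
t = np.arange(L)
series = np.sin(2 * np.pi * t / 24) + 0.1 * rng.normal(size=L)  # period-24 signal + noise

spec = np.fft.rfft(series)
acf = np.fft.irfft(spec * np.conj(spec), n=L) / L    # circular autocorrelation per lag
lags = np.argsort(acf[1:])[-k:] + 1                  # top-k non-zero lags
w = np.exp(acf[lags] - acf[lags].max()); w /= w.sum()

# Sub-series level aggregation: weighted sum of the series rolled by each selected lag.
out = sum(wi * np.roll(series, lag) for wi, lag in zip(w, lags))
print("top lags:", sorted(lags.tolist()))            # peaks at the period and its circular aliases
print("aggregated output shape:", out.shape)
```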
Predicting Event Memorability from Contextual Visual Semantics
https://papers.nips.cc/paper_files/paper/2021/hash/bcc2bdb799f873f02080ae277f291da1-Abstract.html
Qianli Xu, Fen Fang, Ana Molino, Vigneshwaran Subbaraju, Joo-Hwee Lim
https://papers.nips.cc/paper_files/paper/2021/hash/bcc2bdb799f873f02080ae277f291da1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13341-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bcc2bdb799f873f02080ae277f291da1-Paper.pdf
https://openreview.net/forum?id=luWTh5Q63e
https://papers.nips.cc/paper_files/paper/2021/file/bcc2bdb799f873f02080ae277f291da1-Supplemental.pdf
Episodic event memory is a key component of human cognition. Predicting event memorability, i.e., to what extent an event is recalled, is a tough challenge in memory research and has profound implications for artificial intelligence. In this study, we investigate factors that affect event memorability according to a cued recall process. Specifically, we explore whether event memorability is contingent on the event context, as well as the intrinsic visual attributes of image cues. We design a novel experiment protocol and conduct a large-scale experiment with 47 elderly subjects over 3 months. Subjects’ memory of life events is tested in a cued recall process. Using advanced visual analytics methods, we build a first-of-its-kind event memorability dataset (called R3) with rich information about event context and visual semantic features. Furthermore, we propose a contextual event memory network (CEMNet) that tackles multi-modal input to predict item-wise event memorability, which outperforms competitive benchmarks. The findings inform a deeper understanding of episodic event memory, and open up a new avenue for prediction of human episodic memory. Source code is available at https://github.com/ffzzy840304/Predicting-Event-Memorability.
null
Achieving Forgetting Prevention and Knowledge Transfer in Continual Learning
https://papers.nips.cc/paper_files/paper/2021/hash/bcd0049c35799cdf57d06eaf2eb3cff6-Abstract.html
Zixuan Ke, Bing Liu, Nianzu Ma, Hu Xu, Lei Shu
https://papers.nips.cc/paper_files/paper/2021/hash/bcd0049c35799cdf57d06eaf2eb3cff6-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13342-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bcd0049c35799cdf57d06eaf2eb3cff6-Paper.pdf
https://openreview.net/forum?id=RJ7XFI15Q8f
https://papers.nips.cc/paper_files/paper/2021/file/bcd0049c35799cdf57d06eaf2eb3cff6-Supplemental.pdf
Continual learning (CL) learns a sequence of tasks incrementally with the goal of achieving two main objectives: overcoming catastrophic forgetting (CF) and encouraging knowledge transfer (KT) across tasks. However, most existing techniques focus only on overcoming CF and have no mechanism to encourage KT, and thus do not do well in KT. Although several papers have tried to deal with both CF and KT, our experiments show that they suffer from serious CF when the tasks do not have much shared knowledge. Another observation is that most current CL methods do not use pre-trained models, but it has been shown that such models can significantly improve the end task performance. For example, in natural language processing, fine-tuning a BERT-like pre-trained language model is one of the most effective approaches. However, for CL, this approach suffers from serious CF. An interesting question is how to make the best use of pre-trained models for CL. This paper proposes a novel model called CTR to solve these problems. Our experimental results demonstrate the effectiveness of CTR.
null
Bandits with many optimal arms
https://papers.nips.cc/paper_files/paper/2021/hash/bd33f02c4e28615b5af2d24703e066d5-Abstract.html
Rianne de Heide, James Cheshire, Pierre Ménard, Alexandra Carpentier
https://papers.nips.cc/paper_files/paper/2021/hash/bd33f02c4e28615b5af2d24703e066d5-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13343-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bd33f02c4e28615b5af2d24703e066d5-Paper.pdf
https://openreview.net/forum?id=ERzpLwEDOY
https://papers.nips.cc/paper_files/paper/2021/file/bd33f02c4e28615b5af2d24703e066d5-Supplemental.pdf
We consider a stochastic bandit problem with a possibly infinite number of arms. We write $p^*$ for the proportion of optimal arms and $\Delta$ for the minimal mean-gap between optimal and sub-optimal arms. We characterize the optimal learning rates both in the cumulative regret setting, and in the best-arm identification setting in terms of the problem parameters $T$ (the budget), $p^*$ and $\Delta$. For the objective of minimizing the cumulative regret, we provide a lower bound of order $\Omega(\log(T)/(p^*\Delta))$ and a UCB-style algorithm with matching upper bound up to a factor of $\log(1/\Delta)$. Our algorithm needs $p^*$ to calibrate its parameters, and we prove that this knowledge is necessary, since adapting to $p^*$ in this setting is impossible. For best-arm identification we also provide a lower bound of order $\Omega(\exp(-cT\Delta^2p^*))$ on the probability of outputting a sub-optimal arm where $c>0$ is an absolute constant. We also provide an elimination algorithm with an upper bound matching the lower bound up to a factor of order $\log(T)$ in the exponential, and that does not need $p^*$ or $\Delta$ as parameter. Our results apply directly to the three related problems of competing against the $j$-th best arm, identifying an $\epsilon$ good arm, and finding an arm with mean larger than a quantile of a known order.
null
Combiner: Full Attention Transformer with Sparse Computation Cost
https://papers.nips.cc/paper_files/paper/2021/hash/bd4a6d0563e0604510989eb8f9ff71f5-Abstract.html
Hongyu Ren, Hanjun Dai, Zihang Dai, Mengjiao Yang, Jure Leskovec, Dale Schuurmans, Bo Dai
https://papers.nips.cc/paper_files/paper/2021/hash/bd4a6d0563e0604510989eb8f9ff71f5-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13344-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bd4a6d0563e0604510989eb8f9ff71f5-Paper.pdf
https://openreview.net/forum?id=MQQeeDiO5vv
https://papers.nips.cc/paper_files/paper/2021/file/bd4a6d0563e0604510989eb8f9ff71f5-Supplemental.pdf
Transformers provide a class of expressive architectures that are extremely effective for sequence modeling. However, the key limitation of transformers is their quadratic memory and time complexity $\mathcal{O}(L^2)$ with respect to the sequence length in attention layers, which restricts application in extremely long sequences. Most existing approaches leverage sparsity or low-rank assumptions in the attention matrix to reduce cost, but sacrifice expressiveness. Instead, we propose Combiner, which provides full attention capability in each attention head while maintaining low computation and memory complexity. The key idea is to treat the self-attention mechanism as a conditional expectation over embeddings at each location, and approximate the conditional distribution with a structured factorization. Each location can attend to all other locations, either via direct attention, or through indirect attention to abstractions, which are again conditional expectations of embeddings from corresponding local regions. We show that most sparse attention patterns used in existing sparse transformers are able to inspire the design of such factorization for full attention, resulting in the same sub-quadratic cost ($\mathcal{O}(L\log(L))$ or $\mathcal{O}(L\sqrt{L})$). Combiner is a drop-in replacement for attention layers in existing transformers and can be easily implemented in common frameworks. An experimental evaluation on both autoregressive and bidirectional sequence tasks demonstrates the effectiveness of this approach, yielding state-of-the-art results on several image and text modeling tasks.
null
Geometry Processing with Neural Fields
https://papers.nips.cc/paper_files/paper/2021/hash/bd686fd640be98efaae0091fa301e613-Abstract.html
Guandao Yang, Serge Belongie, Bharath Hariharan, Vladlen Koltun
https://papers.nips.cc/paper_files/paper/2021/hash/bd686fd640be98efaae0091fa301e613-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13345-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bd686fd640be98efaae0091fa301e613-Paper.pdf
https://openreview.net/forum?id=JG-SlCAx5_K
https://papers.nips.cc/paper_files/paper/2021/file/bd686fd640be98efaae0091fa301e613-Supplemental.pdf
Most existing geometry processing algorithms use meshes as the default shape representation. Manipulating meshes, however, requires one to maintain high quality in the surface discretization. For example, changing the topology of a mesh usually requires additional procedures such as remeshing. This paper instead proposes the use of neural fields for geometry processing. Neural fields can compactly store complicated shapes without spatial discretization. Moreover, neural fields are infinitely differentiable, which allows them to be optimized for objectives that involve higher-order derivatives. This raises the question: can geometry processing be done entirely using neural fields? We introduce loss functions and architectures to show that some of the most challenging geometry processing tasks, such as deformation and filtering, can be done with neural fields. Experimental results show that our methods are on par with the well-established mesh-based methods without committing to a particular surface discretization. Code is available at https://github.com/stevenygd/NFGP.
null
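The key property used above, that a neural field is a smooth function whose spatial derivatives are available everywhere without a mesh or grid, can be seen in a tiny numpy example: a one-hidden-layer MLP field and its closed-form gradient (e.g., a surface-normal direction for an SDF-like field). The weights are random placeholders rather than a trained shape, and this is not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "neural field" here is just an MLP mapping a 3-D point to a scalar field value.
W1, b1 = rng.normal(size=(64, 3)), rng.normal(size=64)
w2, b2 = rng.normal(size=64), 0.0

def field(x):                       # x: (3,) point -> scalar value
    return w2 @ np.tanh(W1 @ x + b1) + b2

def field_grad(x):                  # exact spatial gradient, no discretization needed
    h = np.tanh(W1 @ x + b1)
    return W1.T @ ((1.0 - h ** 2) * w2)

x = np.array([0.1, -0.3, 0.2])
g = field_grad(x)
print("value:", field(x), "| normal direction:", g / np.linalg.norm(g))
```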
Contextual Recommendations and Low-Regret Cutting-Plane Algorithms
https://papers.nips.cc/paper_files/paper/2021/hash/bdc6c33585d0cf5d2a8cb83141cd037f-Abstract.html
Sreenivas Gollapudi, Guru Guruganesh, Kostas Kollias, Pasin Manurangsi, Renato Leme, Jon Schneider
https://papers.nips.cc/paper_files/paper/2021/hash/bdc6c33585d0cf5d2a8cb83141cd037f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13346-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bdc6c33585d0cf5d2a8cb83141cd037f-Paper.pdf
https://openreview.net/forum?id=45GfBQYtYlp
https://papers.nips.cc/paper_files/paper/2021/file/bdc6c33585d0cf5d2a8cb83141cd037f-Supplemental.pdf
We consider the following variant of contextual linear bandits motivated by routing applications in navigational engines and recommendation systems. We wish to learn a hidden $d$-dimensional value $w^*$. Every round, we are presented with a subset $\mathcal{X}_t \subseteq \mathbb{R}^d$ of possible actions. If we choose (i.e. recommend to the user) action $x_t$, we obtain utility $\langle x_t, w^* \rangle$ but only learn the identity of the best action $\arg\max_{x \in \mathcal{X}_t} \langle x, w^* \rangle$. We design algorithms for this problem which achieve regret $O(d\log T)$ and $\exp(O(d \log d))$. To accomplish this, we design novel cutting-plane algorithms with low “regret” -- the total distance between the true point $w^*$ and the hyperplanes the separation oracle returns. We also consider the variant where we are allowed to provide a list of several recommendations. In this variant, we give an algorithm with $O(d^2 \log d)$ regret and list size $\mathrm{poly}(d)$. Finally, we construct nearly tight algorithms for a weaker variant of this problem where the learner only learns the identity of an action that is better than the recommendation. Our results rely on new algorithmic techniques in convex geometry (including a variant of Steiner’s formula for the centroid of a convex set) which may be of independent interest.
null
Speech Separation Using an Asynchronous Fully Recurrent Convolutional Neural Network
https://papers.nips.cc/paper_files/paper/2021/hash/be1bc7997695495f756312886f566110-Abstract.html
Xiaolin Hu, Kai Li, Weiyi Zhang, Yi Luo, Jean-Marie Lemercier, Timo Gerkmann
https://papers.nips.cc/paper_files/paper/2021/hash/be1bc7997695495f756312886f566110-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13347-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/be1bc7997695495f756312886f566110-Paper.pdf
https://openreview.net/forum?id=SlxH2AbBBC2
https://papers.nips.cc/paper_files/paper/2021/file/be1bc7997695495f756312886f566110-Supplemental.zip
Recent advances in the design of neural network architectures, in particular those specialized in modeling sequences, have provided significant improvements in speech separation performance. In this work, we propose to use a bio-inspired architecture called Fully Recurrent Convolutional Neural Network (FRCNN) to solve the separation task. This model contains bottom-up, top-down and lateral connections to fuse information processed at various time-scales represented by stages. In contrast to the traditional approach updating stages in parallel, we propose to first update the stages one by one in the bottom-up direction, then fuse information from adjacent stages simultaneously and finally fuse information from all stages to the bottom stage together. Experiments showed that this asynchronous updating scheme achieved significantly better results with far fewer parameters than the traditional synchronous updating scheme on speech separation. In addition, the proposed model achieved competitive or better results with high efficiency as compared to other state-of-the-art approaches on two benchmark datasets.
null
Reinforcement Learning Enhanced Explainer for Graph Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/be26abe76fb5c8a4921cf9d3e865b454-Abstract.html
Caihua Shan, Yifei Shen, Yao Zhang, Xiang Li, Dongsheng Li
https://papers.nips.cc/paper_files/paper/2021/hash/be26abe76fb5c8a4921cf9d3e865b454-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13348-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/be26abe76fb5c8a4921cf9d3e865b454-Paper.pdf
https://openreview.net/forum?id=nUtLCcV24hL
https://papers.nips.cc/paper_files/paper/2021/file/be26abe76fb5c8a4921cf9d3e865b454-Supplemental.pdf
Graph neural networks (GNNs) have recently emerged as revolutionary technologies for machine learning tasks on graphs. In GNNs, the graph structure is generally incorporated with node representation via the message passing scheme, making the explanation much more challenging. Given a trained GNN model, a GNN explainer aims to identify a most influential subgraph to interpret the prediction of an instance (e.g., a node or a graph), which is essentially a combinatorial optimization problem over graphs. The existing works solve this problem by continuous relaxation or search-based heuristics. But they suffer from key issues such as violation of message passing and hand-crafted heuristics, leading to inferior interpretability. To address these issues, we propose an RL-enhanced GNN explainer, RG-Explainer, which consists of three main components: starting point selection, iterative graph generation and stopping criteria learning. RG-Explainer could construct a connected explanatory subgraph by sequentially adding nodes from the boundary of the current generated graph, which is consistent with the message passing scheme. Further, we design an effective seed locator to select the starting point, and learn stopping criteria to generate superior explanations. Extensive experiments on both synthetic and real datasets show that RG-Explainer outperforms state-of-the-art GNN explainers. Moreover, RG-Explainer can be applied in the inductive setting, demonstrating its better generalization ability.
null
NAS-Bench-x11 and the Power of Learning Curves
https://papers.nips.cc/paper_files/paper/2021/hash/be3159ad04564bfb90db9e32851ebf9c-Abstract.html
Shen Yan, Colin White, Yash Savani, Frank Hutter
https://papers.nips.cc/paper_files/paper/2021/hash/be3159ad04564bfb90db9e32851ebf9c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13349-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/be3159ad04564bfb90db9e32851ebf9c-Paper.pdf
https://openreview.net/forum?id=V8PcLz1NoQ0
https://papers.nips.cc/paper_files/paper/2021/file/be3159ad04564bfb90db9e32851ebf9c-Supplemental.pdf
While early research in neural architecture search (NAS) required extreme computational resources, the recent releases of tabular and surrogate benchmarks have greatly increased the speed and reproducibility of NAS research. However, two of the most popular benchmarks do not provide the full training information for each architecture. As a result, on these benchmarks it is not possible to evaluate many types of multi-fidelity algorithms, such as learning curve extrapolation, that require evaluating architectures at arbitrary epochs. In this work, we present a method using singular value decomposition and noise modeling to create surrogate benchmarks, NAS-Bench-111, NAS-Bench-311, and NAS-Bench-NLP11, that output the full training information for each architecture, rather than just the final validation accuracy. We demonstrate the power of using the full training information by introducing a learning curve extrapolation framework to modify single-fidelity algorithms, showing that it leads to improvements over popular single-fidelity algorithms that were claimed to be state-of-the-art upon release.
null
Observation-Free Attacks on Stochastic Bandits
https://papers.nips.cc/paper_files/paper/2021/hash/be315e7f05e9f13629031915fe87ad44-Abstract.html
Yinglun Xu, Bhuvesh Kumar, Jacob D. Abernethy
https://papers.nips.cc/paper_files/paper/2021/hash/be315e7f05e9f13629031915fe87ad44-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13350-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/be315e7f05e9f13629031915fe87ad44-Paper.pdf
https://openreview.net/forum?id=8gmBGNeOfT
https://papers.nips.cc/paper_files/paper/2021/file/be315e7f05e9f13629031915fe87ad44-Supplemental.pdf
We study data corruption attacks on stochastic multi-armed bandit algorithms. Existing attack methodologies assume that the attacker can observe the multi-armed bandit algorithm's realized behavior, which is in contrast to the adversaries modeled in the robust multi-armed bandit algorithms literature. To the best of our knowledge, we develop the first data corruption attack on stochastic multi-armed bandit algorithms which works without observing the algorithm's realized behavior. Through this attack, we also discover a sufficient condition for a stochastic multi-armed bandit algorithm to be susceptible to adversarial data corruptions. We show that any bandit algorithm that makes decisions using only each arm's empirical mean reward and the number of times that arm has been pulled in the past can suffer from linear regret under data corruption attacks. We further show that various popular stochastic multi-armed bandit algorithms such as UCB, $\epsilon$-greedy and Thompson Sampling satisfy this sufficient condition and are thus prone to data corruption attacks. We further analyze the behavior of our attack for these algorithms and show that using only $o(T)$ corruptions, our attack can force these algorithms to select a potentially non-optimal target arm preferred by the attacker for all but $o(T)$ rounds.
null
Learning Disentangled Behavior Embeddings
https://papers.nips.cc/paper_files/paper/2021/hash/be37ff14df68192d976f6ce76c6cbd15-Abstract.html
Changhao Shi, Sivan Schwartz, Shahar Levy, Shay Achvat, Maisan Abboud, Amir Ghanayim, Jackie Schiller, Gal Mishne
https://papers.nips.cc/paper_files/paper/2021/hash/be37ff14df68192d976f6ce76c6cbd15-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13351-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/be37ff14df68192d976f6ce76c6cbd15-Paper.pdf
https://openreview.net/forum?id=ThbM9_6DNU
https://papers.nips.cc/paper_files/paper/2021/file/be37ff14df68192d976f6ce76c6cbd15-Supplemental.zip
To understand the relationship between behavior and neural activity, experiments in neuroscience often include an animal performing a repeated behavior such as a motor task. Recent progress in computer vision and deep learning has shown great potential in the automated analysis of behavior by leveraging large and high-quality video datasets. In this paper, we design Disentangled Behavior Embedding (DBE) to learn robust behavioral embeddings from unlabeled, multi-view, high-resolution behavioral videos across different animals and multiple sessions. We further combine DBE with a stochastic temporal model to propose Variational Disentangled Behavior Embedding (VDBE), an end-to-end approach that learns meaningful discrete behavior representations and generates interpretable behavioral videos. Our models learn consistent behavior representations by explicitly disentangling the dynamic behavioral factors (pose) from time-invariant, non-behavioral nuisance factors (context) in a deep autoencoder, and exploit the temporal structures of pose dynamics. Compared to competing approaches, DBE and VDBE enjoy superior performance on downstream tasks such as fine-grained behavioral motif generation and behavior decoding.
null
The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2021/hash/be3e9d3f7d70537357c67bb3f4086846-Abstract.html
Yujin Tang, David Ha
https://papers.nips.cc/paper_files/paper/2021/hash/be3e9d3f7d70537357c67bb3f4086846-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13352-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/be3e9d3f7d70537357c67bb3f4086846-Paper.pdf
https://openreview.net/forum?id=wtLW-Amuds
https://papers.nips.cc/paper_files/paper/2021/file/be3e9d3f7d70537357c67bb3f4086846-Supplemental.pdf
In complex systems, we often observe complex global behavior emerge from a collection of agents interacting with each other in their environment, with each individual agent acting only on locally available information, without knowing the full picture. Such systems have inspired development of artificial intelligence algorithms in areas such as swarm optimization and cellular automata. Motivated by the emergence of collective behavior from complex cellular systems, we build systems that feed each sensory input from the environment into distinct, but identical neural networks, each with no fixed relationship with one another. We show that these sensory networks can be trained to integrate information received locally, and through communication via an attention mechanism, can collectively produce a globally coherent policy. Moreover, the system can still perform its task even if the ordering of its inputs is randomly permuted several times during an episode. These permutation invariant systems also display useful robustness and generalization properties that are broadly applicable. Interactive demo and videos of our results: https://attentionneuron.github.io
null
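A small numpy sketch of the permutation-invariance mechanism described above: every observation dimension passes through the same tiny "sensory neuron" that emits a key and a value, and a single query attends over all of them, so shuffling the inputs leaves the pooled output unchanged. The two-feature per-neuron encoding and random weights are illustrative assumptions; the paper's agents also feed in previous actions and train the weights.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_msg = 8, 4
W_k = rng.normal(size=(2, d_msg))        # shared per-neuron weights (same for every input)
W_v = rng.normal(size=(2, d_msg))
query = rng.normal(size=d_msg)

def policy_features(obs):
    per_neuron_in = np.stack([obs, np.tanh(obs)], axis=1)   # identical per-element encoding
    keys = per_neuron_in @ W_k
    vals = per_neuron_in @ W_v
    scores = keys @ query
    att = np.exp(scores - scores.max()); att /= att.sum()
    return att @ vals                                       # permutation-invariant pooling

obs = rng.normal(size=d_in)
perm = rng.permutation(d_in)
print(np.allclose(policy_features(obs), policy_features(obs[perm])))  # True
```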
Fast Extra Gradient Methods for Smooth Structured Nonconvex-Nonconcave Minimax Problems
https://papers.nips.cc/paper_files/paper/2021/hash/be767243ca8f574c740fb4c26cc6dceb-Abstract.html
Sucheol Lee, Donghwan Kim
https://papers.nips.cc/paper_files/paper/2021/hash/be767243ca8f574c740fb4c26cc6dceb-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13353-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/be767243ca8f574c740fb4c26cc6dceb-Paper.pdf
https://openreview.net/forum?id=U7vVeHydyR
https://papers.nips.cc/paper_files/paper/2021/file/be767243ca8f574c740fb4c26cc6dceb-Supplemental.pdf
Modern minimax problems, such as generative adversarial networks and adversarial training, are often under a nonconvex-nonconcave setting, and developing an efficient method for such a setting is of interest. Recently, two variants of the extragradient (EG) method have been studied in that direction. First, a two-time-scale variant of the EG, named EG+, was proposed under a smooth structured nonconvex-nonconcave setting, with a slow $\mathcal{O}(1/k)$ rate on the squared gradient norm, where $k$ denotes the number of iterations. Second, another variant of EG with an anchoring technique, named extra anchored gradient (EAG), was studied under a smooth convex-concave setting, yielding a fast $\mathcal{O}(1/k^2)$ rate on the squared gradient norm. Built upon EG+ and EAG, this paper proposes a two-time-scale EG with anchoring, named fast extragradient (FEG), that has a fast $\mathcal{O}(1/k^2)$ rate on the squared gradient norm for smooth structured nonconvex-nonconcave problems whose corresponding saddle-gradient operator satisfies the negative comonotonicity condition. This paper further develops its backtracking line-search version, named FEG-A, for the case where the problem parameters are not available. The stochastic analysis of FEG is also provided.
null
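The sketch below shows the anchored extragradient template that EAG and FEG share, on a toy bilinear saddle problem min_x max_y xy with saddle-gradient operator F(x, y) = (y, -x): each step blends a pull toward the initial (anchor) point, with weight shrinking like 1/k, into both halves of an extragradient update. The step size and anchor schedule are illustrative and are not FEG's exact two-time-scale parameters.

```python
import numpy as np

def F(z):
    # Saddle-gradient operator of f(x, y) = x * y.
    x, y = z
    return np.array([y, -x])

z0 = np.array([5.0, 3.0])   # initial point, also used as the anchor
z = z0.copy()
alpha = 0.3                  # illustrative step size
for k in range(200):
    beta = 1.0 / (k + 2)                          # anchor coefficient, shrinks like 1/k
    z_half = z + beta * (z0 - z) - alpha * F(z)   # extrapolation step with anchoring
    z = z + beta * (z0 - z) - alpha * F(z_half)   # update step with anchoring

print("||F(z)|| after 200 iterations:", np.linalg.norm(F(z)))
```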
Analysis of Sensing Spectral for Signal Recovery under a Generalized Linear Model
https://papers.nips.cc/paper_files/paper/2021/hash/becc353586042b6dbcc42c1b794c37b6-Abstract.html
Junjie Ma, Ji Xu, Arian Maleki
https://papers.nips.cc/paper_files/paper/2021/hash/becc353586042b6dbcc42c1b794c37b6-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13354-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/becc353586042b6dbcc42c1b794c37b6-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=5-iRjd9FreV
https://papers.nips.cc/paper_files/paper/2021/file/becc353586042b6dbcc42c1b794c37b6-Supplemental.pdf
We consider a nonlinear inverse problem $\mathbf{y}= f(\mathbf{Ax})$, where observations $\mathbf{y} \in \mathbb{R}^m$ are the componentwise nonlinear transformation of $\mathbf{Ax} \in \mathbb{R}^m$, $\mathbf{x} \in \mathbb{R}^n$ is the signal of interest and $\mathbf{A}$ is a known linear mapping. By properly specifying the nonlinear processing function, this model can be particularized to many signal processing problems, including compressed sensing and phase retrieval. Our main goal in this paper is to understand the impact of sensing matrices, or more specifically the spectrum of sensing matrices, on the difficulty of recovering $\mathbf{x}$ from $\mathbf{y}$. Towards this goal, we study the performance of one of the most successful recovery methods, i.e. the expectation propagation algorithm (EP). We define a notion for the spikiness of the spectrum of $\mathbf{A}$ and show the importance of this measure in the performance of the EP. Whether the spikiness of the spectrum can hurt or help the recovery performance of EP depends on $f$. We define certain quantities based on the function $f$ that enable us to describe the impact of the spikiness of the spectrum on EP recovery. Based on our framework, we are able to show that, for instance, in phase-retrieval problems, matrices with spikier spectra are better for EP, while in 1-bit compressed sensing problems, less spiky (flatter) spectra offer better recoveries. Our results unify and substantially generalize the existing results that compare sub-Gaussian and orthogonal matrices, and provide a platform toward designing optimal sensing systems.
null
Revisiting ResNets: Improved Training and Scaling Strategies
https://papers.nips.cc/paper_files/paper/2021/hash/bef4d169d8bddd17d68303877a3ea945-Abstract.html
Irwan Bello, William Fedus, Xianzhi Du, Ekin Dogus Cubuk, Aravind Srinivas, Tsung-Yi Lin, Jonathon Shlens, Barret Zoph
https://papers.nips.cc/paper_files/paper/2021/hash/bef4d169d8bddd17d68303877a3ea945-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13355-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bef4d169d8bddd17d68303877a3ea945-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=dsmxf7FKiaY
https://papers.nips.cc/paper_files/paper/2021/file/bef4d169d8bddd17d68303877a3ea945-Supplemental.pdf
Novel computer vision architectures monopolize the spotlight, but the impact of the model architecture is often conflated with simultaneous changes to training methodology and scaling strategies. Our work revisits the canonical ResNet and studies these three aspects in an effort to disentangle them. Perhaps surprisingly, we find that training and scaling strategies may matter more than architectural changes, and further, that the resulting ResNets match recent state-of-the-art models. We show that the best performing scaling strategy depends on the training regime and offer two new scaling strategies: (1) scale model depth in regimes where overfitting can occur (width scaling is preferable otherwise); (2) increase image resolution more slowly than previously recommended. Using improved training and scaling strategies, we design a family of ResNet architectures, ResNet-RS, which are 1.7x - 2.7x faster than EfficientNets on TPUs, while achieving similar accuracies on ImageNet. In a large-scale semi-supervised learning setup, ResNet-RS achieves 86.2% top-1 ImageNet accuracy, while being 4.7x faster than EfficientNet-NoisyStudent. The training techniques improve transfer performance on a suite of downstream tasks (rivaling state-of-the-art self-supervised algorithms) and extend to video classification on Kinetics-400. We recommend practitioners use these simple revised ResNets as baselines for future research.
null
Sparse Flows: Pruning Continuous-depth Models
https://papers.nips.cc/paper_files/paper/2021/hash/bf1b2f4b901c21a1d8645018ea9aeb05-Abstract.html
Lucas Liebenwein, Ramin Hasani, Alexander Amini, Daniela Rus
https://papers.nips.cc/paper_files/paper/2021/hash/bf1b2f4b901c21a1d8645018ea9aeb05-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13356-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bf1b2f4b901c21a1d8645018ea9aeb05-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=_1VZo_-aUiT
https://papers.nips.cc/paper_files/paper/2021/file/bf1b2f4b901c21a1d8645018ea9aeb05-Supplemental.zip
Continuous deep learning architectures enable learning of flexible probabilistic models for predictive modeling as neural ordinary differential equations (ODEs), and for generative modeling as continuous normalizing flows. In this work, we design a framework to decipher the internal dynamics of these continuous depth models by pruning their network architectures. Our empirical results suggest that pruning improves generalization for neural ODEs in generative modeling. We empirically show that the improvement arises because pruning helps avoid mode collapse and flattens the loss surface. Moreover, pruning finds efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy. We hope our results will invigorate further research into the performance-size trade-offs of modern continuous-depth models.
null
Spectrum-to-Kernel Translation for Accurate Blind Image Super-Resolution
https://papers.nips.cc/paper_files/paper/2021/hash/bf25356fd2a6e038f1a3a59c26687e80-Abstract.html
Guangpin Tao, Xiaozhong Ji, Wenzhuo Wang, Shuo Chen, Chuming Lin, Yun Cao, Tong Lu, Donghao Luo, Ying Tai
https://papers.nips.cc/paper_files/paper/2021/hash/bf25356fd2a6e038f1a3a59c26687e80-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13357-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bf25356fd2a6e038f1a3a59c26687e80-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=94Sj1CcC_Jl
https://papers.nips.cc/paper_files/paper/2021/file/bf25356fd2a6e038f1a3a59c26687e80-Supplemental.pdf
Deep-learning based Super-Resolution (SR) methods have exhibited promising performance under the non-blind setting where the blur kernel is known; however, the blur kernels of Low-Resolution (LR) images in different practical applications are usually unknown. This may lead to a significant performance drop when the degradation process of the training images deviates from that of real images. In this paper, we propose a novel blind SR framework to super-resolve LR images degraded by an arbitrary blur kernel, with accurate kernel estimation in the frequency domain. To the best of our knowledge, this is the first deep learning method that conducts blur kernel estimation in the frequency domain. Specifically, we first demonstrate that feature representation in the frequency domain is more conducive to blur kernel reconstruction than in the spatial domain. Next, we present a Spectrum-to-Kernel (S$2$K) network to estimate general blur kernels in diverse forms. We use a conditional GAN (CGAN) combined with an SR-oriented optimization target to learn the end-to-end translation from degraded images' spectra to unknown kernels. Extensive experiments on both synthetic and real-world images demonstrate that our proposed method sufficiently reduces the blur kernel estimation error to enable off-the-shelf non-blind SR methods to work effectively under the blind setting, and achieves superior performance over state-of-the-art blind SR methods, by an average of 1.39 dB and 0.48 dB (Gaussian kernels) and 6.15 dB and 4.57 dB (motion kernels) for scales $2\times$ and $4\times$, respectively.
null
On the Rate of Convergence of Regularized Learning in Games: From Bandits and Uncertainty to Optimism and Beyond
https://papers.nips.cc/paper_files/paper/2021/hash/bf40f0ab4e5e63171dd16036913ae828-Abstract.html
Angeliki Giannou, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Panayotis Mertikopoulos
https://papers.nips.cc/paper_files/paper/2021/hash/bf40f0ab4e5e63171dd16036913ae828-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13358-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bf40f0ab4e5e63171dd16036913ae828-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=8IiakjcUHFH
https://papers.nips.cc/paper_files/paper/2021/file/bf40f0ab4e5e63171dd16036913ae828-Supplemental.pdf
In this paper, we examine the convergence rate of a wide range of regularized methods for learning in games. To that end, we propose a unified algorithmic template that we call “follow the generalized leader” (FTGL), and which includes as special cases the canonical “follow the regularized leader” algorithm, its optimistic variants, extra-gradient schemes, and many others. The proposed framework is also sufficiently flexible to account for several different feedback models – from full information to bandit feedback. In this general setting, we show that FTGL algorithms converge locally to strict Nash equilibria at a rate which does not depend on the level of uncertainty faced by the players, but only on the geometry of the regularizer near the equilibrium. In particular, we show that algorithms based on entropic regularization – like the exponential weights algorithm – enjoy a linear convergence rate, while Euclidean projection methods converge to equilibrium in a finite number of iterations, even with bandit feedback.
null
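For concreteness, the canonical "follow the regularized leader" template that FTGL generalizes can be written as follows (generic notation, not necessarily the paper's): with payoff vectors $v_s$, regularizer $h$, and learning rate $\eta$,
$$X_{t+1} = \arg\max_{x \in \mathcal{X}} \Big\{ \eta \sum_{s=1}^{t} \langle v_s, x \rangle - h(x) \Big\},$$
and choosing the entropic regularizer $h(x) = \sum_i x_i \log x_i$ over the simplex yields the exponential weights update
$$X_{t+1,i} = \frac{\exp\big(\eta \sum_{s \le t} v_{s,i}\big)}{\sum_j \exp\big(\eta \sum_{s \le t} v_{s,j}\big)},$$
the case for which the entry above reports a linear local convergence rate to strict Nash equilibria.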
SLAPS: Self-Supervision Improves Structure Learning for Graph Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/bf499a12e998d178afd964adf64a60cb-Abstract.html
Bahare Fatemi, Layla El Asri, Seyed Mehran Kazemi
https://papers.nips.cc/paper_files/paper/2021/hash/bf499a12e998d178afd964adf64a60cb-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13359-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bf499a12e998d178afd964adf64a60cb-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=JWRRBHFPKTJ
https://papers.nips.cc/paper_files/paper/2021/file/bf499a12e998d178afd964adf64a60cb-Supplemental.pdf
Graph neural networks (GNNs) work well when the graph structure is provided. However, this structure may not always be available in real-world applications. One solution to this problem is to infer a task-specific latent structure and then apply a GNN to the inferred graph. Unfortunately, the space of possible graph structures grows super-exponentially with the number of nodes and so the task-specific supervision may be insufficient for learning both the structure and the GNN parameters. In this work, we propose the Simultaneous Learning of Adjacency and GNN Parameters with Self-supervision, or SLAPS, a method that provides more supervision for inferring a graph structure through self-supervision. A comprehensive experimental study demonstrates that SLAPS scales to large graphs with hundreds of thousands of nodes and outperforms several models that have been proposed to learn a task-specific graph structure on established benchmarks.
null
Aligning Pretraining for Detection via Object-Level Contrastive Learning
https://papers.nips.cc/paper_files/paper/2021/hash/bf5cd8b2509011b9502a72296edc14a0-Abstract.html
Fangyun Wei, Yue Gao, Zhirong Wu, Han Hu, Stephen Lin
https://papers.nips.cc/paper_files/paper/2021/hash/bf5cd8b2509011b9502a72296edc14a0-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13360-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bf5cd8b2509011b9502a72296edc14a0-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=8PA2nX9v_r2
https://papers.nips.cc/paper_files/paper/2021/file/bf5cd8b2509011b9502a72296edc14a0-Supplemental.pdf
Image-level contrastive representation learning has proven to be highly effective as a generic model for transfer learning. Such generality for transfer learning, however, sacrifices specificity if we are interested in a certain downstream task. We argue that this could be sub-optimal and thus advocate a design principle which encourages alignment between the self-supervised pretext task and the downstream task. In this paper, we follow this principle with a pretraining method specifically designed for the task of object detection. We attain alignment in the following three aspects: 1) object-level representations are introduced via selective search bounding boxes as object proposals; 2) the pretraining network architecture incorporates the same dedicated modules used in the detection pipeline (e.g. FPN); 3) the pretraining is equipped with object detection properties such as object-level translation invariance and scale invariance. Our method, called Selective Object COntrastive learning (SoCo), achieves state-of-the-art results for transfer performance on COCO detection using a Mask R-CNN framework. Code is available at https://github.com/hologerry/SoCo.
null
Double/Debiased Machine Learning for Dynamic Treatment Effects
https://papers.nips.cc/paper_files/paper/2021/hash/bf65417dcecc7f2b0006e1f5793b7143-Abstract.html
Greg Lewis, Vasilis Syrgkanis
https://papers.nips.cc/paper_files/paper/2021/hash/bf65417dcecc7f2b0006e1f5793b7143-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13361-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bf65417dcecc7f2b0006e1f5793b7143-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=StKuQ0-dltN
https://papers.nips.cc/paper_files/paper/2021/file/bf65417dcecc7f2b0006e1f5793b7143-Supplemental.zip
We consider the estimation of treatment effects in settings when multiple treatments are assigned over time and treatments can have a causal effect on future outcomes. We propose an extension of the double/debiased machine learning framework to estimate the dynamic effects of treatments and apply it to a concrete linear Markovian high-dimensional state space model and to general structural nested mean models. Our method allows the use of arbitrary machine learning methods to control for the high dimensional state, subject to a mean square error guarantee, while still allowing parametric estimation and construction of confidence intervals for the dynamic treatment effect parameters of interest. Our method is based on a sequential regression peeling process, which we show can be equivalently interpreted as a Neyman orthogonal moment estimator. This allows us to show root-n asymptotic normality of the estimated causal effects.
null
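The static double/debiased ML building block that the entry above extends to dynamic treatments can be sketched as follows. The data-generating process, the random forest nuisance learners, and the two-fold cross-fitting below are illustrative assumptions, not the paper's dynamic estimator.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

# Static double/debiased ML (partialling-out) on synthetic data; names are illustrative only.
rng = np.random.default_rng(0)
n, d = 2000, 10
X = rng.normal(size=(n, d))
T = X[:, 0] + rng.normal(size=n)             # treatment depends on the observed state X
theta_true = 1.5
Y = theta_true * T + 2.0 * X[:, 0] + rng.normal(size=n)

res_T, res_Y = np.zeros(n), np.zeros(n)
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
    # Cross-fitting: nuisances are fit on one fold and residualised on the other.
    res_T[test] = T[test] - RandomForestRegressor().fit(X[train], T[train]).predict(X[test])
    res_Y[test] = Y[test] - RandomForestRegressor().fit(X[train], Y[train]).predict(X[test])

theta_hat = (res_T @ res_Y) / (res_T @ res_T)   # residual-on-residual regression
print(theta_hat)                                 # close to 1.5
```

Cross-fitting plus the residual-on-residual regression is what makes the moment Neyman orthogonal, so first-stage machine learning errors only enter the treatment-effect estimate at second order.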
Local Disentanglement in Variational Auto-Encoders Using Jacobian $L_1$ Regularization
https://papers.nips.cc/paper_files/paper/2021/hash/bfd2308e9e75263970f8079115edebbd-Abstract.html
Travers Rhodes, Daniel Lee
https://papers.nips.cc/paper_files/paper/2021/hash/bfd2308e9e75263970f8079115edebbd-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13362-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/bfd2308e9e75263970f8079115edebbd-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=8xyNqPvFZwC
https://papers.nips.cc/paper_files/paper/2021/file/bfd2308e9e75263970f8079115edebbd-Supplemental.pdf
There have been many recent advances in representation learning; however, unsupervised representation learning can still struggle with model identification issues related to rotations of the latent space. Variational Auto-Encoders (VAEs) and their extensions such as $\beta$-VAEs have been shown to improve local alignment of latent variables with PCA directions, which can help to improve model disentanglement under some conditions. Borrowing inspiration from Independent Component Analysis (ICA) and sparse coding, we propose applying an $L_1$ loss to the VAE's generative Jacobian during training to encourage local latent variable alignment with independent factors of variation in images of multiple objects or images with multiple parts. We demonstrate our results on a variety of datasets, giving qualitative and quantitative results using information theoretic and modularity measures that show our added $L_1$ cost encourages local axis alignment of the latent representation with individual factors of variation.
null
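A minimal sketch of an $L_1$ penalty on a decoder's Jacobian with respect to the latent code is shown below. The decoder, the penalty weight, and the placeholder reconstruction term are hypothetical; the paper applies the penalty inside the full VAE objective rather than this toy loss.

```python
import torch

torch.manual_seed(0)
latent_dim, out_dim = 8, 64

# Stand-in decoder; in a VAE this is the generative network mapping z to a reconstruction.
decoder = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, 32), torch.nn.Softplus(),
    torch.nn.Linear(32, out_dim),
)

z = torch.randn(latent_dim)   # a single latent sample
# Jacobian of the decoder output w.r.t. z, kept differentiable so the penalty can be trained.
jac = torch.autograd.functional.jacobian(decoder, z, create_graph=True)  # (out_dim, latent_dim)
l1_penalty = jac.abs().sum()  # encourages each latent coordinate to affect only a few outputs

recon = decoder(z).pow(2).mean()        # placeholder reconstruction term for illustration
loss = recon + 1e-2 * l1_penalty        # hypothetical penalty weight
loss.backward()                          # gradients flow through the Jacobian penalty
```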
Design of Experiments for Stochastic Contextual Linear Bandits
https://papers.nips.cc/paper_files/paper/2021/hash/c00193e70e8e27e70601b26161b4ae86-Abstract.html
Andrea Zanette, Kefan Dong, Jonathan N Lee, Emma Brunskill
https://papers.nips.cc/paper_files/paper/2021/hash/c00193e70e8e27e70601b26161b4ae86-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13363-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c00193e70e8e27e70601b26161b4ae86-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=KsfuvGB3vco
https://papers.nips.cc/paper_files/paper/2021/file/c00193e70e8e27e70601b26161b4ae86-Supplemental.pdf
In the stochastic linear contextual bandit setting there exist several minimax procedures for exploration with policies that are reactive to the data being acquired. In practice, there can be a significant engineering overhead to deploy these algorithms, especially when the dataset is collected in a distributed fashion or when a human in the loop is needed to implement a different policy. Exploring with a single non-reactive policy is beneficial in such cases. Assuming some batch contexts are available, we design a single stochastic policy to collect a good dataset from which a near-optimal policy can be extracted. We present a theoretical analysis as well as numerical experiments on both synthetic and real-world datasets.
null
Encoding Spatial Distribution of Convolutional Features for Texture Representation
https://papers.nips.cc/paper_files/paper/2021/hash/c04c19c2c2474dbf5f7ac4372c5b9af1-Abstract.html
Yong Xu, Feng Li, Zhile Chen, Jinxiu Liang, Yuhui Quan
https://papers.nips.cc/paper_files/paper/2021/hash/c04c19c2c2474dbf5f7ac4372c5b9af1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13364-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c04c19c2c2474dbf5f7ac4372c5b9af1-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=KnN6mh23cSX
https://papers.nips.cc/paper_files/paper/2021/file/c04c19c2c2474dbf5f7ac4372c5b9af1-Supplemental.pdf
Existing convolutional neural networks (CNNs) often use global average pooling (GAP) to aggregate feature maps into a single representation. However, GAP cannot well characterize complex distributive patterns of spatial features, while such patterns play an important role in texture-oriented applications, e.g., material recognition and ground terrain classification. In the context of texture representation, this paper addresses the issue by proposing Fractal Encoding (FE), a feature encoding module grounded in multi-fractal geometry. Considering a CNN feature map as a union of level sets of points lying in the 2D space, FE characterizes their spatial layout via a local-global hierarchical fractal analysis which examines the multi-scale power behavior on each level set. This enables a CNN to encode the regularity of the spatial arrangement of image features, leading to a robust yet discriminative spectrum descriptor. In addition, FE has trainable parameters for data adaptivity and can be easily incorporated into existing CNNs for end-to-end training. We applied FE to ResNet-based texture classification and retrieval, and demonstrated its effectiveness on several benchmark datasets.
null
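To make the fractal-analysis intuition concrete, the sketch below estimates a plain global box-counting dimension of a single binary level set. The FE module described above instead performs a local-global, multi-scale analysis over all level sets of a CNN feature map with trainable parameters, so this is only the basic primitive.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a binary 2-D point set."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        # Count boxes of side s that contain at least one foreground pixel.
        boxes = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # Slope of log(count) versus log(1/size) gives the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# A filled square has dimension close to 2; sparser point sets score lower.
mask = np.zeros((128, 128), dtype=bool)
mask[32:96, 32:96] = True
print(box_counting_dimension(mask))
```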
Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds
https://papers.nips.cc/paper_files/paper/2021/hash/c055dcc749c2632fd4dd806301f05ba6-Abstract.html
Yujia Huang, Huan Zhang, Yuanyuan Shi, J. Zico Kolter, Anima Anandkumar
https://papers.nips.cc/paper_files/paper/2021/hash/c055dcc749c2632fd4dd806301f05ba6-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13365-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c055dcc749c2632fd4dd806301f05ba6-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=FTt28RYj5Pc
https://papers.nips.cc/paper_files/paper/2021/file/c055dcc749c2632fd4dd806301f05ba6-Supplemental.pdf
Certified robustness is a desirable property for deep neural networks in safety-critical applications, and popular training algorithms can certify robustness of a neural network by computing a global bound on its Lipschitz constant. However, such a bound is often loose: it tends to over-regularize the neural network and degrade its natural accuracy. A tighter Lipschitz bound may provide a better tradeoff between natural and certified accuracy, but is generally hard to compute exactly due to non-convexity of the network. In this work, we propose an efficient and trainable \emph{local} Lipschitz upper bound by considering the interactions between activation functions (e.g. ReLU) and weight matrices. Specifically, when computing the induced norm of a weight matrix, we eliminate the corresponding rows and columns where the activation function is guaranteed to be a constant in the neighborhood of each given data point, which provides a provably tighter bound than the global Lipschitz constant of the neural network. Our method can be used as a plug-in module to tighten the Lipschitz bound in many certifiable training algorithms. Furthermore, we propose to clip activation functions (e.g., ReLU and MaxMin) with a learnable upper threshold and a sparsity loss to assist the network to achieve an even tighter local Lipschitz bound. Experimentally, we show that our method consistently outperforms state-of-the-art methods in both clean and certified accuracy on MNIST, CIFAR-10 and TinyImageNet datasets with various network architectures.
null
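A two-layer ReLU sketch of the row-elimination idea is given below. The interval bound computation, layer sizes, and neighbourhood radius are illustrative assumptions; the paper's method is considerably more refined (for example, learnable activation clipping and integration into certified training).

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out = 10, 64, 5
W1 = rng.normal(size=(d_hid, d_in)) / np.sqrt(d_in)
b1 = rng.normal(size=d_hid)
W2 = rng.normal(size=(d_out, d_hid)) / np.sqrt(d_hid)

x0, eps = rng.normal(size=d_in), 0.1          # data point and l_inf neighbourhood radius

# Interval bounds on the pre-activations z = W1 x + b1 over the l_inf ball around x0.
z0 = W1 @ x0 + b1
slack = eps * np.abs(W1).sum(axis=1)
upper = z0 + slack

# Neurons with upper <= 0 are provably inactive (ReLU is constant zero there), so the
# corresponding rows of W1 and columns of W2 can be eliminated from the norm computation.
maybe_active = upper > 0

global_bound = np.linalg.norm(W2, 2) * np.linalg.norm(W1, 2)
local_bound = (np.linalg.norm(W2[:, maybe_active], 2)
               * np.linalg.norm(W1[maybe_active, :], 2))
print(local_bound <= global_bound + 1e-9)     # the local bound is never looser
```

Because the spectral norm of a submatrix never exceeds that of the full matrix, the local product bound is provably no looser than the global one.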
Average-Reward Learning and Planning with Options
https://papers.nips.cc/paper_files/paper/2021/hash/c058f544c737782deacefa532d9add4c-Abstract.html
Yi Wan, Abhishek Naik, Rich Sutton
https://papers.nips.cc/paper_files/paper/2021/hash/c058f544c737782deacefa532d9add4c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13366-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c058f544c737782deacefa532d9add4c-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=guAXBsPR4tY
null
We extend the options framework for temporal abstraction in reinforcement learning from discounted Markov decision processes (MDPs) to average-reward MDPs. Our contributions include general convergent off-policy inter-option learning algorithms, intra-option algorithms for learning values and models, as well as sample-based planning variants of our learning algorithms. Our algorithms and convergence proofs extend those recently developed by Wan, Naik, and Sutton. We also extend the notion of option-interrupting behaviour from the discounted to the average-reward formulation. We show the efficacy of the proposed algorithms with experiments on a continuing version of the Four-Room domain.
null
SSAL: Synergizing between Self-Training and Adversarial Learning for Domain Adaptive Object Detection
https://papers.nips.cc/paper_files/paper/2021/hash/c0cccc24dd23ded67404f5e511c342b0-Abstract.html
Muhammad Akhtar Munir, Muhammad Haris Khan, M. Sarfraz, Mohsen Ali
https://papers.nips.cc/paper_files/paper/2021/hash/c0cccc24dd23ded67404f5e511c342b0-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13367-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c0cccc24dd23ded67404f5e511c342b0-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=F93Z9Au6HxE
https://papers.nips.cc/paper_files/paper/2021/file/c0cccc24dd23ded67404f5e511c342b0-Supplemental.pdf
We study adapting trained object detectors to unseen domains manifesting significant variations of object appearance, viewpoints and backgrounds. Most current methods align domains by using either image-level or instance-level feature alignment in an adversarial fashion. This often suffers from the presence of unwanted background and, as such, lacks class-specific alignment. A common remedy to promote class-level alignment is to use high confidence predictions on the unlabelled domain as pseudo labels. These high confidence predictions are often fallacious since the model is poorly calibrated under domain shift. In this paper, we propose to leverage the model’s predictive uncertainty to strike the right balance between adversarial feature alignment and class-level alignment. Specifically, we measure predictive uncertainty on class assignments and the bounding box predictions. Model predictions with low uncertainty are used to generate pseudo-labels for self-supervision, whereas the ones with higher uncertainty are used to generate tiles for an adversarial feature alignment stage. This synergy between tiling around the uncertain object regions and generating pseudo-labels from highly certain object regions allows us to capture both the image and instance level context during the model adaptation stage. We perform extensive experiments covering various domain shift scenarios. Our approach improves upon existing state-of-the-art methods with visible margins.
null
Counterexample Guided RL Policy Refinement Using Bayesian Optimization
https://papers.nips.cc/paper_files/paper/2021/hash/c0e19ce0dbabbc0d17a4f8d4324cc8e3-Abstract.html
Briti Gangopadhyay, Pallab Dasgupta
https://papers.nips.cc/paper_files/paper/2021/hash/c0e19ce0dbabbc0d17a4f8d4324cc8e3-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13368-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c0e19ce0dbabbc0d17a4f8d4324cc8e3-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=bayZPpw9lM
https://papers.nips.cc/paper_files/paper/2021/file/c0e19ce0dbabbc0d17a4f8d4324cc8e3-Supplemental.pdf
Constructing Reinforcement Learning (RL) policies that adhere to safety requirements is an emerging field of study. RL agents learn via trial and error with an objective to optimize a reward signal. Often policies that are designed to accumulate rewards do not satisfy safety specifications. We present a methodology for counterexample guided refinement of a trained RL policy against a given safety specification. Our approach has two main components. The first component is an approach to discover failure trajectories using Bayesian optimization over multiple parameters of uncertainty from a policy learnt in a model-free setting. The second component selectively modifies the failure points of the policy using gradient-based updates. The approach has been tested on several RL environments, and we demonstrate that the policy can be made to respect the safety specifications through such targeted changes.
null
Stable, Fast and Accurate: Kernelized Attention with Relative Positional Encoding
https://papers.nips.cc/paper_files/paper/2021/hash/c0f168ce8900fa56e57789e2a2f2c9d0-Abstract.html
Shengjie Luo, Shanda Li, Tianle Cai, Di He, Dinglan Peng, Shuxin Zheng, Guolin Ke, Liwei Wang, Tie-Yan Liu
https://papers.nips.cc/paper_files/paper/2021/hash/c0f168ce8900fa56e57789e2a2f2c9d0-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13369-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c0f168ce8900fa56e57789e2a2f2c9d0-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=X7XNPor93uG
https://papers.nips.cc/paper_files/paper/2021/file/c0f168ce8900fa56e57789e2a2f2c9d0-Supplemental.pdf
The attention module, which is a crucial component in Transformer, cannot scale efficiently to long sequences due to its quadratic complexity. Many works focus on approximating the dot-then-exponentiate softmax function in the original attention, leading to sub-quadratic or even linear-complexity Transformer architectures. However, we show that these methods cannot be applied to more powerful attention modules that go beyond the dot-then-exponentiate style, e.g., Transformers with relative positional encoding (RPE). Since in many state-of-the-art models, relative positional encoding is used as default, designing efficient Transformers that can incorporate RPE is appealing. In this paper, we propose a novel way to accelerate attention calculation for Transformers with RPE on top of the kernelized attention. Based upon the observation that relative positional encoding forms a Toeplitz matrix, we mathematically show that kernelized attention with RPE can be calculated efficiently using Fast Fourier Transform (FFT). With FFT, our method achieves $\mathcal{O}(n\log n)$ time complexity. Interestingly, we further demonstrate that properly using relative positional encoding can mitigate the training instability problem of vanilla kernelized attention. On a wide range of tasks, we empirically show that our models can be trained from scratch without any optimization issues. The learned model performs better than many efficient Transformer variants and is faster than standard Transformer in the long-sequence regime.
null
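The computational primitive behind the $\mathcal{O}(n\log n)$ claim above is that a Toeplitz matrix (indexed by relative positions) can be applied to a vector via circulant embedding and the FFT. A self-contained sketch of that primitive follows; it is not the full kernelized-attention implementation, and the relative-position weights are random placeholders.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(c, r, x):
    """Multiply the Toeplitz matrix with first column c and first row r by x
    in O(n log n), via embedding into a circulant matrix and using the FFT."""
    n = len(x)
    # First column of the 2n x 2n circulant embedding (the padding entry is arbitrary).
    circ = np.concatenate([c, [0.0], r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(circ) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

rng = np.random.default_rng(0)
n = 512
c = rng.normal(size=n)                                 # weights for offsets 0, -1, ..., -(n-1)
r = np.concatenate([[c[0]], rng.normal(size=n - 1)])   # weights for offsets 0, +1, ..., +(n-1)
x = rng.normal(size=n)
assert np.allclose(toeplitz_matvec(c, r, x), toeplitz(c, r) @ x)
```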
Learning in Non-Cooperative Configurable Markov Decision Processes
https://papers.nips.cc/paper_files/paper/2021/hash/c0f52c6624ae1359e105c8a5d8cd956a-Abstract.html
Giorgia Ramponi, Alberto Maria Metelli, Alessandro Concetti, Marcello Restelli
https://papers.nips.cc/paper_files/paper/2021/hash/c0f52c6624ae1359e105c8a5d8cd956a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13370-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c0f52c6624ae1359e105c8a5d8cd956a-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=t-0eCf8L4-a
https://papers.nips.cc/paper_files/paper/2021/file/c0f52c6624ae1359e105c8a5d8cd956a-Supplemental.pdf
The Configurable Markov Decision Process framework includes two entities: a Reinforcement Learning agent and a configurator that can modify some environmental parameters to improve the agent's performance. This presupposes that the two actors have the same reward functions. What if the configurator does not have the same intentions as the agent? This paper introduces the Non-Cooperative Configurable Markov Decision Process, a setting that allows having two (possibly different) reward functions for the configurator and the agent. Then, we consider an online learning problem, where the configurator has to find the best among a finite set of possible configurations. We propose two learning algorithms that minimize the configurator's expected regret by exploiting the problem's structure, depending on the agent's feedback. While a naive application of the UCB algorithm yields a regret that grows indefinitely over time, we show that our approach suffers only bounded regret. Furthermore, we empirically show the performance of our algorithm in simulated domains.
null
Identification of Partially Observed Linear Causal Models: Graphical Conditions for the Non-Gaussian and Heterogeneous Cases
https://papers.nips.cc/paper_files/paper/2021/hash/c0f6fb5d3a389de216345e490469145e-Abstract.html
Jeffrey Adams, Niels Hansen, Kun Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/c0f6fb5d3a389de216345e490469145e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13371-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c0f6fb5d3a389de216345e490469145e-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=yCA2i3bGbfC
https://papers.nips.cc/paper_files/paper/2021/file/c0f6fb5d3a389de216345e490469145e-Supplemental.pdf
In causal discovery, linear non-Gaussian acyclic models (LiNGAMs) have been studied extensively. While the causally sufficient case is well understood, in many real problems the observed variables are not causally related. Rather, they are generated by latent variables, such as confounders and mediators, which may themselves be causally related. Existing results on the identification of the causal structure among the latent variables often require very strong graphical assumptions. In this paper, we consider partially observed linear models with either non-Gaussian or heterogeneous errors. In that case we give two graphical conditions which are necessary for identification of the causal structure. These conditions are closely related to sparsity of the causal edges. Together with one additional condition on the coefficients, which holds generically for any graph, the two graphical conditions are also sufficient for identifiability. These new conditions can be satisfied even when there is a large number of latent variables. We demonstrate the validity of our results on synthetic data.
null
DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer
https://papers.nips.cc/paper_files/paper/2021/hash/c0f971d8cd24364f2029fcb9ac7b71f5-Abstract.html
Wenzheng Chen, Joey Litalien, Jun Gao, Zian Wang, Clement Fuji Tsang, Sameh Khamis, Or Litany, Sanja Fidler
https://papers.nips.cc/paper_files/paper/2021/hash/c0f971d8cd24364f2029fcb9ac7b71f5-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13372-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c0f971d8cd24364f2029fcb9ac7b71f5-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=gRqHB07GGz3
https://papers.nips.cc/paper_files/paper/2021/file/c0f971d8cd24364f2029fcb9ac7b71f5-Supplemental.pdf
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers. Many previous learning-based approaches for inverse graphics adopt rasterization-based renderers and assume naive lighting and material models, which often fail to account for non-Lambertian, specular reflections commonly observed in the wild. In this work, we propose DIBR++, a hybrid differentiable renderer which supports these photorealistic effects by combining rasterization and ray-tracing, taking advantage of their respective strengths---speed and realism. Our renderer incorporates environmental lighting and spatially-varying material models to efficiently approximate light transport, either through direct estimation or via spherical basis functions. Compared to more advanced physics-based differentiable renderers leveraging path tracing, DIBR++ is highly performant due to its compact and expressive shading model, which enables easy integration with learning frameworks for geometry, reflectance and lighting prediction from a single image without requiring any ground-truth. We experimentally demonstrate that our approach achieves superior material and lighting disentanglement on synthetic and real data compared to existing rasterization-based approaches and showcase several artistic applications including material editing and relighting.
null
Coresets for Time Series Clustering
https://papers.nips.cc/paper_files/paper/2021/hash/c115ba9e04ab27fbbb664f932112246d-Abstract.html
Lingxiao Huang, K Sudhir, Nisheeth Vishnoi
https://papers.nips.cc/paper_files/paper/2021/hash/c115ba9e04ab27fbbb664f932112246d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13373-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c115ba9e04ab27fbbb664f932112246d-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=jar9C-V8GH
https://papers.nips.cc/paper_files/paper/2021/file/c115ba9e04ab27fbbb664f932112246d-Supplemental.pdf
We study the problem of constructing coresets for clustering problems with time series data. This problem has gained importance across many fields including biology, medicine, and economics due to the proliferation of sensors facilitating real-time measurement and rapid drop in storage costs. In particular, we consider the setting where the time series data on $N$ entities is generated from a Gaussian mixture model with autocorrelations over $k$ clusters in $\mathbb{R}^d$. Our main contribution is an algorithm to construct coresets for the maximum likelihood objective for this mixture model. Our algorithm is efficient, and under a mild boundedness assumption on the covariance matrices of the underlying Gaussians, the size of the coreset is independent of the number of entities $N$ and the number of observations for each entity, and depends only polynomially on $k$, $d$ and $1/\varepsilon$, where $\varepsilon$ is the error parameter. We empirically assess the performance of our coreset with synthetic data.
null
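As a reminder of the object being constructed (a generic definition, not the paper's exact statement for the autocorrelated Gaussian mixture likelihood): a weighted subset $S \subseteq X$ with weights $w$ is an $\varepsilon$-coreset for a loss $\ell$ if, for every candidate parameter $\theta$,
$$(1-\varepsilon)\sum_{x \in X} \ell(x,\theta) \;\le\; \sum_{s \in S} w(s)\,\ell(s,\theta) \;\le\; (1+\varepsilon)\sum_{x \in X} \ell(x,\theta),$$
so optimizing the maximum likelihood objective over the coreset approximates optimizing it over all $N$ entities.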
A Variational Perspective on Diffusion-Based Generative Models and Score Matching
https://papers.nips.cc/paper_files/paper/2021/hash/c11abfd29e4d9b4d4b566b01114d8486-Abstract.html
Chin-Wei Huang, Jae Hyun Lim, Aaron C. Courville
https://papers.nips.cc/paper_files/paper/2021/hash/c11abfd29e4d9b4d4b566b01114d8486-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13374-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c11abfd29e4d9b4d4b566b01114d8486-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=bXehDYUjjXi
https://papers.nips.cc/paper_files/paper/2021/file/c11abfd29e4d9b4d4b566b01114d8486-Supplemental.pdf
Discrete-time diffusion-based generative models and score matching methods have shown promising results in modeling high-dimensional image data. Recently, Song et al. (2021) show that diffusion processes that transform data into noise can be reversed via learning the score function, i.e. the gradient of the log-density of the perturbed data. They propose to plug the learned score function into an inverse formula to define a generative diffusion process. Despite the empirical success, a theoretical underpinning of this procedure is still lacking. In this work, we approach the (continuous-time) generative diffusion directly and derive a variational framework for likelihood estimation, which includes continuous-time normalizing flows as a special case, and can be seen as an infinitely deep variational autoencoder. Under this framework, we show that minimizing the score-matching loss is equivalent to maximizing a lower bound of the likelihood of the plug-in reverse SDE proposed by Song et al. (2021), bridging the theoretical gap.
null
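For context, the continuous-time construction of Song et al. (2021) referenced above pairs a forward noising SDE with a reverse-time SDE driven by the score (standard formulation, reproduced here for readability):
$$\mathrm{d}\mathbf{x} = f(\mathbf{x},t)\,\mathrm{d}t + g(t)\,\mathrm{d}\mathbf{w}, \qquad \mathrm{d}\mathbf{x} = \big[f(\mathbf{x},t) - g(t)^2\,\nabla_{\mathbf{x}}\log p_t(\mathbf{x})\big]\,\mathrm{d}t + g(t)\,\mathrm{d}\bar{\mathbf{w}},$$
where the second SDE runs backward in time and $\nabla_{\mathbf{x}}\log p_t(\mathbf{x})$ is the score that score matching estimates. The paper's contribution is the variational bound showing that minimizing the score-matching loss maximizes a lower bound on the likelihood of this plug-in reverse SDE.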
Online Active Learning with Surrogate Loss Functions
https://papers.nips.cc/paper_files/paper/2021/hash/c1619d2ad66f7629c12c87fe21d32a58-Abstract.html
Giulia DeSalvo, Claudio Gentile, Tobias Sommer Thune
https://papers.nips.cc/paper_files/paper/2021/hash/c1619d2ad66f7629c12c87fe21d32a58-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13375-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c1619d2ad66f7629c12c87fe21d32a58-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=iKYO63MOWwi
https://papers.nips.cc/paper_files/paper/2021/file/c1619d2ad66f7629c12c87fe21d32a58-Supplemental.pdf
We derive a novel active learning algorithm in the streaming setting for binary classification tasks. The algorithm leverages weak labels to minimize the number of label requests, and trains a model to optimize a surrogate loss on a resulting set of labeled and weak-labeled points. Our algorithm jointly admits two crucial properties: theoretical guarantees in the general agnostic setting and strong empirical performance. Our theoretical analysis shows that the algorithm attains favorable generalization and label complexity bounds, while our empirical study on 18 real-world datasets demonstrates that the algorithm outperforms standard baselines, including the Margin Algorithm (also known as Uncertainty Sampling), a high-performing active learning algorithm favored by practitioners.
null
Does Preprocessing Help Training Over-parameterized Neural Networks?
https://papers.nips.cc/paper_files/paper/2021/hash/c164bbc9d6c72a52c599bbb43d8db8e1-Abstract.html
Zhao Song, Shuo Yang, Ruizhe Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/c164bbc9d6c72a52c599bbb43d8db8e1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13376-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c164bbc9d6c72a52c599bbb43d8db8e1-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=LuxVfHv-59s
https://papers.nips.cc/paper_files/paper/2021/file/c164bbc9d6c72a52c599bbb43d8db8e1-Supplemental.pdf
Deep neural networks have achieved impressive performance in many areas. Designing a fast and provable method for training neural networks is a fundamental question in machine learning. The classical training method requires paying $\Omega(mnd)$ cost for both forward computation and backward computation, where $m$ is the width of the neural network, and we are given $n$ training points in $d$-dimensional space. In this paper, we propose two novel preprocessing ideas to bypass this $\Omega(mnd)$ barrier. First, by preprocessing the initial weights of the neural networks, we can train the neural network in $\widetilde{O}(m^{1-\Theta(1/d)} n d)$ cost per iteration. Second, by preprocessing the input data points, we can train the neural network in $\widetilde{O} (m^{4/5} nd )$ cost per iteration. From the technical perspective, our result is a sophisticated combination of tools from different fields: greedy-type convergence analysis in optimization, sparsity observations from practical work, high-dimensional geometric search in data structures, and concentration and anti-concentration in probability. Our results also provide theoretical insights for a large number of previously established fast training methods. In addition, our classical algorithm can be generalized to the quantum computation model. Interestingly, we can get a similar sublinear cost per iteration but avoid preprocessing the initial weights or the input data points.
null
Causal Influence Detection for Improving Efficiency in Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2021/hash/c1722a7941d61aad6e651a35b65a9c3e-Abstract.html
Maximilian Seitzer, Bernhard Schölkopf, Georg Martius
https://papers.nips.cc/paper_files/paper/2021/hash/c1722a7941d61aad6e651a35b65a9c3e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13377-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c1722a7941d61aad6e651a35b65a9c3e-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=DXJl9826dm
https://papers.nips.cc/paper_files/paper/2021/file/c1722a7941d61aad6e651a35b65a9c3e-Supplemental.pdf
Many reinforcement learning (RL) environments consist of independent entities that interact sparsely. In such environments, RL agents have only limited influence over other entities in any particular situation. Our idea in this work is that learning can be efficiently guided by knowing when and what the agent can influence with its actions. To achieve this, we introduce a measure of situation-dependent causal influence based on conditional mutual information and show that it can reliably detect states of influence. We then propose several ways to integrate this measure into RL algorithms to improve exploration and off-policy learning. All modified algorithms show strong increases in data efficiency on robotic manipulation tasks.
null
LADA: Look-Ahead Data Acquisition via Augmentation for Deep Active Learning
https://papers.nips.cc/paper_files/paper/2021/hash/c1b70d965ca504aa751ddb62ad69c63f-Abstract.html
Yoon-Yeong Kim, Kyungwoo Song, JoonHo Jang, Il-chul Moon
https://papers.nips.cc/paper_files/paper/2021/hash/c1b70d965ca504aa751ddb62ad69c63f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13378-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c1b70d965ca504aa751ddb62ad69c63f-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=eATOjMwxfUQ
https://papers.nips.cc/paper_files/paper/2021/file/c1b70d965ca504aa751ddb62ad69c63f-Supplemental.pdf
Active learning effectively collects data instances for training deep learning models when the labeled dataset is limited and the annotation cost is high. Data augmentation is another effective technique to enlarge the limited amount of labeled instances. The scarcity of labeled data leads us to consider the integration of data augmentation and active learning. One possible approach is a pipelined combination, which selects informative instances via the acquisition function and generates virtual instances from the selected instances via augmentation. However, this pipelined approach would not guarantee the informativeness of the virtual instances. This paper proposes Look-Ahead Data Acquisition via augmentation, or the LADA framework, which looks ahead to the effect of data augmentation in the acquisition process. LADA jointly considers both 1) the unlabeled data instance to be selected and 2) the virtual data instance to be generated by data augmentation, to construct the acquisition function. Moreover, to generate maximally informative virtual instances, LADA optimizes the data augmentation policy to maximize the predictive acquisition score, resulting in the proposal of InfoSTN and InfoMixup. The experimental results of LADA show a significant improvement over the recent augmentation and acquisition baselines that were independently applied.
null
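For reference, vanilla mixup, the augmentation that InfoMixup extends by additionally learning the mixing policy to maximize the acquisition score, generates a virtual instance from a pair $(x_i, y_i)$, $(x_j, y_j)$ as
$$\tilde{x} = \lambda x_i + (1-\lambda)\, x_j, \qquad \tilde{y} = \lambda y_i + (1-\lambda)\, y_j, \qquad \lambda \sim \mathrm{Beta}(\alpha, \alpha).$$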
Policy Optimization in Adversarial MDPs: Improved Exploration via Dilated Bonuses
https://papers.nips.cc/paper_files/paper/2021/hash/c1b8bf9e071c0dabb899e7a27f353762-Abstract.html
Haipeng Luo, Chen-Yu Wei, Chung-Wei Lee
https://papers.nips.cc/paper_files/paper/2021/hash/c1b8bf9e071c0dabb899e7a27f353762-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13379-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c1b8bf9e071c0dabb899e7a27f353762-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=XeM4Lld0zTR
https://papers.nips.cc/paper_files/paper/2021/file/c1b8bf9e071c0dabb899e7a27f353762-Supplemental.pdf
Policy optimization is a widely-used method in reinforcement learning. Due to its local-search nature, however, theoretical guarantees on global optimality often rely on extra assumptions on the Markov Decision Processes (MDPs) that bypass the challenge of global exploration. To eliminate the need of such assumptions, in this work, we develop a general solution that adds dilated bonuses to the policy update to facilitate global exploration. To showcase the power and generality of this technique, we apply it to several episodic MDP settings with adversarial losses and bandit feedback, improving and generalizing the state-of-the-art. Specifically, in the tabular case, we obtain $\widetilde{\mathcal{O}}(\sqrt{T})$ regret where $T$ is the number of episodes, improving the $\widetilde{\mathcal{O}}({T}^{\frac{2}{3}})$ regret bound by Shani et al. [2020]. When the number of states is infinite, under the assumption that the state-action values are linear in some low-dimensional features, we obtain $\widetilde{\mathcal{O}}({T}^{\frac{2}{3}})$ regret with the help of a simulator, matching the result of Neu and Olkhovskaya [2020] while importantly removing the need of an exploratory policy that their algorithm requires. To our knowledge, this is the first algorithm with sublinear regret for linear function approximation with adversarial losses, bandit feedback, and no exploratory assumptions. Finally, we also discuss how to further improve the regret or remove the need of a simulator using dilated bonuses, when an exploratory policy is available.
null
Multiclass versus Binary Differentially Private PAC Learning
https://papers.nips.cc/paper_files/paper/2021/hash/c1d53b7a97707b5cd1815c8d228d8ef1-Abstract.html
Satchit Sivakumar, Mark Bun, Marco Gaboardi
https://papers.nips.cc/paper_files/paper/2021/hash/c1d53b7a97707b5cd1815c8d228d8ef1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13380-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c1d53b7a97707b5cd1815c8d228d8ef1-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=MBxJ0ydw6b
null
We show a generic reduction from multiclass differentially private PAC learning to binary private PAC learning. We apply this transformation to a recently proposed binary private PAC learner to obtain a private multiclass learner with sample complexity that has a polynomial dependence on the multiclass Littlestone dimension and a poly-logarithmic dependence on the number of classes. This yields a doubly exponential improvement in the dependence on both parameters over learners from previous work. Our proof extends the notion of $\Psi$-dimension defined in work of Ben-David et al. [JCSS, 1995] to the online setting and explores its general properties.
null
Adversarially Robust Change Point Detection
https://papers.nips.cc/paper_files/paper/2021/hash/c1e39d912d21c91dce811d6da9929ae8-Abstract.html
Mengchu Li, Yi Yu
https://papers.nips.cc/paper_files/paper/2021/hash/c1e39d912d21c91dce811d6da9929ae8-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13381-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c1e39d912d21c91dce811d6da9929ae8-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=xmMHxfE1qS6
https://papers.nips.cc/paper_files/paper/2021/file/c1e39d912d21c91dce811d6da9929ae8-Supplemental.pdf
Change point detection is becoming increasingly popular in many application areas. On one hand, most of the theoretically-justified methods are investigated in an ideal setting without model violations, or are merely robust against an identical heavy-tailed noise distribution across time and/or against isolated outliers; on the other hand, we are aware that there have been exponentially growing attacks from adversaries, who may pose systematic contamination on data to purposely create spurious change points or disguise true change points. In light of the timely need for a change point detection method that is robust against adversaries, we start with, arguably, the simplest univariate mean change point detection problem. The adversarial attacks are formulated through the Huber $\varepsilon$-contamination framework, which in particular allows the contamination distributions to be different at each time point. In this paper, we demonstrate a phase transition phenomenon in change point detection. This detection boundary is a function of the contamination proportion~$\varepsilon$ and is shown here for the first time in the literature. In addition, we derive the minimax-rate optimal localisation error rate, quantifying the cost of accuracy in terms of the contamination proportion. We propose a computationally feasible method, matching the minimax lower bound under certain conditions, save for logarithmic factors. Extensive numerical experiments are conducted with comparisons to robust change point detection methods in the existing literature.
null
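The adversarial model referenced above is the Huber $\varepsilon$-contamination framework with time-varying contamination: at each time $t$ the observation is drawn as
$$y_t \sim (1-\varepsilon)\, P_t + \varepsilon\, Q_t,$$
where $P_t$ is the clean distribution (with a piecewise-constant mean in the univariate setting studied here) and $Q_t$ is an arbitrary contamination distribution that may differ across time points.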
Cycle Self-Training for Domain Adaptation
https://papers.nips.cc/paper_files/paper/2021/hash/c1fea270c48e8079d8ddf7d06d26ab52-Abstract.html
Hong Liu, Jianmin Wang, Mingsheng Long
https://papers.nips.cc/paper_files/paper/2021/hash/c1fea270c48e8079d8ddf7d06d26ab52-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13382-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c1fea270c48e8079d8ddf7d06d26ab52-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=-iu9-C_lan
https://papers.nips.cc/paper_files/paper/2021/file/c1fea270c48e8079d8ddf7d06d26ab52-Supplemental.pdf
Mainstream approaches for unsupervised domain adaptation (UDA) learn domain-invariant representations to narrow the domain shift, which are empirically effective but theoretically challenged by the hardness or impossibility theorems. Recently, self-training has been gaining momentum in UDA, which exploits unlabeled target data by training with target pseudo-labels. However, as corroborated in this work, under distributional shift, the pseudo-labels can be unreliable in terms of their large discrepancy from target ground truth. In this paper, we propose Cycle Self-Training (CST), a principled self-training algorithm that explicitly enforces pseudo-labels to generalize across domains. CST cycles between a forward step and a reverse step until convergence. In the forward step, CST generates target pseudo-labels with a source-trained classifier. In the reverse step, CST trains a target classifier using target pseudo-labels, and then updates the shared representations to make the target classifier perform well on the source data. We introduce the Tsallis entropy as a confidence-friendly regularization to improve the quality of target pseudo-labels. We analyze CST theoretically under realistic assumptions, and provide hard cases where CST recovers target ground truth, while both invariant feature learning and vanilla self-training fail. Empirical results indicate that CST significantly improves over the state of the art on visual recognition and sentiment analysis benchmarks.
null
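For reference, a common parameterization of the Tsallis entropy used as the confidence-friendly regularizer (the paper's exact normalization may differ) is
$$S_\alpha(p) = \frac{1}{\alpha - 1}\Big(1 - \sum_i p_i^{\alpha}\Big), \qquad \alpha > 0,$$
which recovers the Shannon entropy $-\sum_i p_i \log p_i$ in the limit $\alpha \to 1$.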
Novel Visual Category Discovery with Dual Ranking Statistics and Mutual Knowledge Distillation
https://papers.nips.cc/paper_files/paper/2021/hash/c203d8a151612acf12457e4d67635a95-Abstract.html
Bingchen Zhao, Kai Han
https://papers.nips.cc/paper_files/paper/2021/hash/c203d8a151612acf12457e4d67635a95-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13383-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c203d8a151612acf12457e4d67635a95-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=xWq1MVj7YrE
https://papers.nips.cc/paper_files/paper/2021/file/c203d8a151612acf12457e4d67635a95-Supplemental.pdf
In this paper, we tackle the problem of novel visual category discovery, i.e., grouping unlabelled images from new classes into different semantic partitions by leveraging a labelled dataset that contains images from other different but relevant categories. This is a more realistic and challenging setting than conventional semi-supervised learning. We propose a two-branch learning framework for this problem, with one branch focusing on local part-level information and the other branch focusing on overall characteristics. To transfer knowledge from the labelled data to the unlabelled, we propose using dual ranking statistics on both branches to generate pseudo labels for training on the unlabelled data. We further introduce a mutual knowledge distillation method to allow information exchange and encourage agreement between the two branches for discovering new categories, allowing our model to enjoy the benefits of global and local features. We comprehensively evaluate our method on public benchmarks for generic object classification, as well as the more challenging datasets for fine-grained visual recognition, achieving state-of-the-art performance.
null
Stochastic Anderson Mixing for Nonconvex Stochastic Optimization
https://papers.nips.cc/paper_files/paper/2021/hash/c203e4a1bdef9372cb9864bfc9b511cc-Abstract.html
Fuchao Wei, Chenglong Bao, Yang Liu
https://papers.nips.cc/paper_files/paper/2021/hash/c203e4a1bdef9372cb9864bfc9b511cc-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13384-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c203e4a1bdef9372cb9864bfc9b511cc-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=hx2Ckkzdf53
https://papers.nips.cc/paper_files/paper/2021/file/c203e4a1bdef9372cb9864bfc9b511cc-Supplemental.pdf
Anderson mixing (AM) is an acceleration method for fixed-point iterations. Despite its success and wide usage in scientific computing, the convergence theory of AM remains unclear, and its applications to machine learning problems are not well explored. In this paper, by introducing damped projection and adaptive regularization to the classical AM, we propose a Stochastic Anderson Mixing (SAM) scheme to solve nonconvex stochastic optimization problems. Under mild assumptions, we establish the convergence theory of SAM, including the almost sure convergence to stationary points and the worst-case iteration complexity. Moreover, the complexity bound can be improved when randomly choosing an iterate as the output. To further accelerate the convergence, we incorporate a variance reduction technique into the proposed SAM. We also propose a preconditioned mixing strategy for SAM which can empirically achieve faster convergence or better generalization ability. Finally, we apply the SAM method to train various neural networks including the vanilla CNN, ResNets, WideResNet, ResNeXt, DenseNet and LSTM. Experimental results on image classification and language modeling demonstrate the advantages of our method.
null
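For readers unfamiliar with the base scheme, a minimal sketch of classical (deterministic) Anderson mixing for a fixed point $x = g(x)$ is given below. SAM as proposed above additionally introduces damped projection, adaptive regularization, and stochastic gradients, none of which appear in this toy version.

```python
import numpy as np

def anderson_mixing(g, x0, m=5, beta=1.0, iters=50):
    """Classical Anderson mixing / acceleration for the fixed-point problem x = g(x)."""
    x = x0.copy()
    X_hist, F_hist = [], []                 # histories of iterate / residual differences
    x_prev, f_prev = None, None
    for _ in range(iters):
        f = g(x) - x                        # residual of the fixed-point map
        if f_prev is not None:
            X_hist.append(x - x_prev)
            F_hist.append(f - f_prev)
            X_hist, F_hist = X_hist[-m:], F_hist[-m:]
        x_prev, f_prev = x.copy(), f.copy()
        if F_hist:
            Fk, Xk = np.stack(F_hist, axis=1), np.stack(X_hist, axis=1)
            gamma, *_ = np.linalg.lstsq(Fk, f, rcond=None)   # least-squares mixing coefficients
            x = x + beta * f - (Xk + beta * Fk) @ gamma
        else:
            x = x + beta * f                # plain mixing step until history exists
    return x

# Toy linear fixed point x = Ax + b with spectral radius below one.
rng = np.random.default_rng(0)
A = 0.5 * rng.normal(size=(20, 20)) / np.sqrt(20)
b = rng.normal(size=20)
sol = anderson_mixing(lambda x: A @ x + b, np.zeros(20))
print(np.linalg.norm(sol - (A @ sol + b)))   # near-zero residual
```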
Sample-Efficient Reinforcement Learning for Linearly-Parameterized MDPs with a Generative Model
https://papers.nips.cc/paper_files/paper/2021/hash/c21f4ce780c5c9d774f79841b81fdc6d-Abstract.html
Bingyan Wang, Yuling Yan, Jianqing Fan
https://papers.nips.cc/paper_files/paper/2021/hash/c21f4ce780c5c9d774f79841b81fdc6d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13385-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c21f4ce780c5c9d774f79841b81fdc6d-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=J2YvvXDp7H
https://papers.nips.cc/paper_files/paper/2021/file/c21f4ce780c5c9d774f79841b81fdc6d-Supplemental.pdf
The curse of dimensionality is a widely known issue in reinforcement learning (RL). In the tabular setting where the state space $\mathcal{S}$ and the action space $\mathcal{A}$ are both finite, to obtain a near optimal policy with sampling access to a generative model, the minimax optimal sample complexity scales linearly with $|\mathcal{S}|\times|\mathcal{A}|$, which can be prohibitively large when $\mathcal{S}$ or $\mathcal{A}$ is large. This paper considers a Markov decision process (MDP) that admits a set of state-action features, which can linearly express (or approximate) its probability transition kernel. We show that a model-based approach (resp.$~$Q-learning) provably learns an $\varepsilon$-optimal policy (resp.$~$Q-function) with high probability as soon as the sample size exceeds the order of $\frac{K}{(1-\gamma)^{3}\varepsilon^{2}}$ (resp.$~$$\frac{K}{(1-\gamma)^{4}\varepsilon^{2}}$), up to some logarithmic factor. Here $K$ is the feature dimension and $\gamma\in(0,1)$ is the discount factor of the MDP. Both sample complexity bounds are provably tight, and our result for the model-based approach matches the minimax lower bound. Our results show that for arbitrarily large-scale MDP, both the model-based approach and Q-learning are sample-efficient when $K$ is relatively small, and hence the title of this paper.
null
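As a back-of-the-envelope reading of the thresholds quoted in the abstract above, the snippet below simply evaluates $K/((1-\gamma)^{3}\varepsilon^{2})$ and $K/((1-\gamma)^{4}\varepsilon^{2})$ for arbitrary illustrative values; constants and logarithmic factors are ignored.

```python
def model_based_threshold(K, gamma, eps):
    # order of samples sufficient for the model-based approach (log factors omitted)
    return K / ((1 - gamma) ** 3 * eps ** 2)

def q_learning_threshold(K, gamma, eps):
    # Q-learning pays one extra factor of 1 / (1 - gamma)
    return K / ((1 - gamma) ** 4 * eps ** 2)

print(model_based_threshold(K=100, gamma=0.99, eps=0.1))  # ~1e10
print(q_learning_threshold(K=100, gamma=0.99, eps=0.1))   # ~1e12
```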
NN-Baker: A Neural-network Infused Algorithmic Framework for Optimization Problems on Geometric Intersection Graphs
https://papers.nips.cc/paper_files/paper/2021/hash/c236337b043acf93c7df397fdb9082b3-Abstract.html
Evan McCarty, Qi Zhao, Anastasios Sidiropoulos, Yusu Wang
https://papers.nips.cc/paper_files/paper/2021/hash/c236337b043acf93c7df397fdb9082b3-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13386-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c236337b043acf93c7df397fdb9082b3-Paper.pdf
https://openreview.net/forum?id=5TuGBbNSyAc
https://papers.nips.cc/paper_files/paper/2021/file/c236337b043acf93c7df397fdb9082b3-Supplemental.pdf
Recent years have witnessed a surge of approaches that use neural networks to help tackle combinatorial optimization problems, including graph optimization problems. However, theoretical understanding of such approaches remains limited. In this paper, we consider the geometric setting, where graphs are induced by points in a fixed-dimensional Euclidean space. We show that several graph optimization problems can be approximated by an algorithm that is polynomial in the graph size n via a framework we propose, called the Baker-paradigm. More importantly, a key advantage of the Baker-paradigm is that it decomposes the input problem into (at most a linear number of) small sub-problems of fixed size (independent of the size of the input). For the family of such fixed-size sub-problems, we can now design neural networks with universal approximation guarantees to solve them. This leads to a mixed algorithmic-ML framework, which we call NN-Baker, that has the capacity to approximately solve a family of graph optimization problems (e.g., maximum independent set and minimum vertex cover) in time linear in the input graph size and only polynomial in the approximation parameter. We instantiate NN-Baker with a CNN version and a GNN version, and demonstrate the effectiveness and efficiency of our approach via a range of experiments.
null
A Note on Sparse Generalized Eigenvalue Problem
https://papers.nips.cc/paper_files/paper/2021/hash/c2368d3d45705a56e51ec5940e187f8d-Abstract.html
Yunfeng Cai, Guanhua Fang, Ping Li
https://papers.nips.cc/paper_files/paper/2021/hash/c2368d3d45705a56e51ec5940e187f8d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13387-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c2368d3d45705a56e51ec5940e187f8d-Paper.pdf
https://openreview.net/forum?id=WKR5ZUs91n
https://papers.nips.cc/paper_files/paper/2021/file/c2368d3d45705a56e51ec5940e187f8d-Supplemental.pdf
The sparse generalized eigenvalue problem (SGEP) aims to find the leading eigenvector with sparsity structure. SGEP plays an important role in statistical learning and has wide applications including, but not limited to, sparse principal component analysis, sparse canonical correlation analysis and sparse Fisher discriminant analysis. Due to the sparsity constraint, the solution of SGEP exhibits interesting properties from both numerical and statistical perspectives. In this paper, we provide a detailed sensitivity analysis for SGEP and establish the rate-optimal perturbation bound under the sparse setting. Specifically, we show that the bound is related to the perturbation/noise level as well as to the recovery of the true support of the leading eigenvector. We also investigate the estimator of SGEP obtained by imposing a non-convex regularization. Such an estimator can achieve the optimal error rate and can recover the sparsity structure as well. Extensive numerical experiments corroborate our theoretical findings using an alternating direction method of multipliers (ADMM)-based computational method.
null
RMIX: Learning Risk-Sensitive Policies for Cooperative Reinforcement Learning Agents
https://papers.nips.cc/paper_files/paper/2021/hash/c2626d850c80ea07e7511bbae4c76f4b-Abstract.html
Wei Qiu, Xinrun Wang, Runsheng Yu, Rundong Wang, Xu He, Bo An, Svetlana Obraztsova, Zinovi Rabinovich
https://papers.nips.cc/paper_files/paper/2021/hash/c2626d850c80ea07e7511bbae4c76f4b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13388-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c2626d850c80ea07e7511bbae4c76f4b-Paper.pdf
https://openreview.net/forum?id=83SeeJals7j
https://papers.nips.cc/paper_files/paper/2021/file/c2626d850c80ea07e7511bbae4c76f4b-Supplemental.zip
Current value-based multi-agent reinforcement learning methods optimize individual Q values to guide individuals' behaviours via centralized training with decentralized execution (CTDE). However, such an expected, i.e., risk-neutral, Q value is not sufficient even with CTDE due to the randomness of rewards and the uncertainty in environments, which causes these methods to fail to train coordinating agents in complex environments. To address these issues, we propose RMIX, a novel cooperative MARL method with the Conditional Value at Risk (CVaR) measure over the learned distributions of individuals' Q values. Specifically, we first learn the return distributions of individuals to analytically calculate CVaR for decentralized execution. Then, to handle the temporal nature of the stochastic outcomes during execution, we propose a dynamic risk level predictor for risk level tuning. Finally, we optimize the CVaR policies, using CVaR values to estimate the target in the TD error during centralized training and as auxiliary local rewards to update the local distribution via a Quantile Regression loss. Empirically, we show that our method outperforms many state-of-the-art methods on various multi-agent risk-sensitive navigation scenarios and challenging StarCraft II cooperative tasks, demonstrating enhanced coordination and improved sample efficiency.
null
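For intuition about the risk measure used above, here is a tiny NumPy computation of CVaR from a set of learned return quantiles (the mean of the worst alpha-fraction of outcomes). The quantile values and risk level are made up, and this is only the measure itself, not the RMIX training pipeline.

```python
import numpy as np

def cvar_from_quantiles(quantile_values, alpha):
    """CVaR_alpha approximated from equally weighted quantile estimates:
    the mean of the worst alpha-fraction of returns."""
    q = np.sort(np.asarray(quantile_values))
    k = max(1, int(np.ceil(alpha * len(q))))
    return q[:k].mean()

# toy return distribution of one agent, represented by 10 quantile values
quantiles = [-2.0, -1.2, -0.5, 0.1, 0.4, 0.9, 1.3, 1.8, 2.5, 3.0]
print(cvar_from_quantiles(quantiles, alpha=0.25))  # mean of the 3 worst returns
print(cvar_from_quantiles(quantiles, alpha=1.0))   # risk-neutral: the plain mean
```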
Optimal Policies Tend To Seek Power
https://papers.nips.cc/paper_files/paper/2021/hash/c26820b8a4c1b3c2aa868d6d57e14a79-Abstract.html
Alex Turner, Logan Smith, Rohin Shah, Andrew Critch, Prasad Tadepalli
https://papers.nips.cc/paper_files/paper/2021/hash/c26820b8a4c1b3c2aa868d6d57e14a79-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13389-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c26820b8a4c1b3c2aa868d6d57e14a79-Paper.pdf
https://openreview.net/forum?id=l7-DBWawSZH
https://papers.nips.cc/paper_files/paper/2021/file/c26820b8a4c1b3c2aa868d6d57e14a79-Supplemental.pdf
Some researchers speculate that intelligent reinforcement learning (RL) agents would be incentivized to seek resources and power in pursuit of the objectives we specify for them. Other researchers point out that RL agents need not have human-like power-seeking instincts. To clarify this discussion, we develop the first formal theory of the statistical tendencies of optimal policies. In the context of Markov decision processes, we prove that certain environmental symmetries are sufficient for optimal policies to tend to seek power over the environment. These symmetries exist in many environments in which the agent can be shut down or destroyed. We prove that in these environments, most reward functions make it optimal to seek power by keeping a range of options available and, when maximizing average reward, by navigating towards larger sets of potential terminal states.
null
Catalytic Role Of Noise And Necessity Of Inductive Biases In The Emergence Of Compositional Communication
https://papers.nips.cc/paper_files/paper/2021/hash/c2839bed26321da8b466c80a032e4714-Abstract.html
Łukasz Kuciński, Tomasz Korbak, Paweł Kołodziej, Piotr Miłoś
https://papers.nips.cc/paper_files/paper/2021/hash/c2839bed26321da8b466c80a032e4714-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13390-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c2839bed26321da8b466c80a032e4714-Paper.pdf
https://openreview.net/forum?id=Pgv4fwfh63L
https://papers.nips.cc/paper_files/paper/2021/file/c2839bed26321da8b466c80a032e4714-Supplemental.pdf
Communication is compositional if complex signals can be represented as a combination of simpler subparts. In this paper, we theoretically show that inductive biases on both the training framework and the data are needed for compositional communication to develop. Moreover, we prove that compositionality spontaneously arises in signaling games, where agents communicate over a noisy channel. We experimentally confirm that a range of noise levels, which depends on the model and the data, indeed promotes compositionality. Finally, we provide a comprehensive study of this dependence and report results in terms of recently studied compositionality metrics: topographical similarity, conflict count, and context independence.
null
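As a pointer to one of the metrics mentioned above, the sketch below computes topographical similarity as the Spearman correlation between pairwise distances in meaning space and in message space. The toy meanings and messages are invented, and Hamming distance is just one common choice of distance.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def topographical_similarity(meanings, messages):
    """Spearman correlation between pairwise Hamming distances of
    meanings (attribute vectors) and messages (symbol sequences)."""
    d_meaning = pdist(np.asarray(meanings), metric="hamming")
    d_message = pdist(np.asarray(messages), metric="hamming")
    return spearmanr(d_meaning, d_message).correlation

# 4 meanings with 2 attributes, and the messages an agent emits for them
meanings = [[0, 0], [0, 1], [1, 0], [1, 1]]
messages = [[0, 0], [0, 1], [1, 0], [1, 1]]   # perfectly compositional code
print(topographical_similarity(meanings, messages))  # 1.0
```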
PLUR: A Unifying, Graph-Based View of Program Learning, Understanding, and Repair
https://papers.nips.cc/paper_files/paper/2021/hash/c2937f3a1b3a177d2408574da0245a19-Abstract.html
Zimin Chen, Vincent J Hellendoorn, Pascal Lamblin, Petros Maniatis, Pierre-Antoine Manzagol, Daniel Tarlow, Subhodeep Moitra
https://papers.nips.cc/paper_files/paper/2021/hash/c2937f3a1b3a177d2408574da0245a19-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13391-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c2937f3a1b3a177d2408574da0245a19-Paper.pdf
https://openreview.net/forum?id=GEm4o9A6Jfb
https://papers.nips.cc/paper_files/paper/2021/file/c2937f3a1b3a177d2408574da0245a19-Supplemental.pdf
Machine learning for understanding and editing source code has recently attracted significant interest, with many developments in new models, new code representations, and new tasks. This proliferation can appear disparate and disconnected, making each approach seemingly unique and incompatible, thus obscuring the core machine learning challenges and contributions. In this work, we demonstrate that the landscape can be significantly simplified by taking a general approach of mapping a graph to a sequence of tokens and pointers. Our main result is to show that 16 recently published tasks of different shapes can be cast in this form, based on which a single model architecture achieves near or above state-of-the-art results on nearly all tasks, outperforming custom models like code2seq and alternative generic models like Transformers. This unification further enables multi-task learning and a series of cross-cutting experiments about the importance of different modeling choices for code understanding and repair tasks. The full framework, called PLUR, is easily extensible to more tasks, and will be open-sourced (https://github.com/google-research/plur).
null
COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining
https://papers.nips.cc/paper_files/paper/2021/hash/c2c2a04512b35d13102459f8784f1a2d-Abstract.html
Yu Meng, Chenyan Xiong, Payal Bajaj, saurabh tiwary, Paul Bennett, Jiawei Han, XIA SONG
https://papers.nips.cc/paper_files/paper/2021/hash/c2c2a04512b35d13102459f8784f1a2d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13392-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c2c2a04512b35d13102459f8784f1a2d-Paper.pdf
https://openreview.net/forum?id=pk4q0SD_r1X
https://papers.nips.cc/paper_files/paper/2021/file/c2c2a04512b35d13102459f8784f1a2d-Supplemental.pdf
We present a self-supervised learning framework, COCO-LM, that pretrains Language Models by COrrecting and COntrasting corrupted text sequences. Following ELECTRA-style pretraining, COCO-LM employs an auxiliary language model to corrupt text sequences, upon which it constructs two new tasks for pretraining the main model. The first token-level task, Corrective Language Modeling, is to detect and correct tokens replaced by the auxiliary model, in order to better capture token-level semantics. The second sequence-level task, Sequence Contrastive Learning, is to align text sequences originating from the same source input while ensuring uniformity in the representation space. Experiments on GLUE and SQuAD demonstrate that COCO-LM not only outperforms recent state-of-the-art pretrained models in accuracy, but also improves pretraining efficiency. It achieves the MNLI accuracy of ELECTRA with 50% of its pretraining GPU hours. With the same number of pretraining steps as standard base/large-sized models, COCO-LM outperforms the previous best models by 1+ GLUE average points.
null
Minibatch and Momentum Model-based Methods for Stochastic Weakly Convex Optimization
https://papers.nips.cc/paper_files/paper/2021/hash/c2c701fe341a7756ca7fd4eaa83ff63f-Abstract.html
Qi Deng, Wenzhi Gao
https://papers.nips.cc/paper_files/paper/2021/hash/c2c701fe341a7756ca7fd4eaa83ff63f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13393-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c2c701fe341a7756ca7fd4eaa83ff63f-Paper.pdf
https://openreview.net/forum?id=PqkKlKQuGZw
https://papers.nips.cc/paper_files/paper/2021/file/c2c701fe341a7756ca7fd4eaa83ff63f-Supplemental.zip
Stochastic model-based methods have received increasing attention lately due to their appealing robustness to the stepsize selection and provable efficiency guarantees. We make two important extensions for improving model-based methods on stochastic weakly convex optimization. First, we propose new minibatch model-based methods by involving a set of samples to approximate the model function in each iteration. For the first time, we show that stochastic algorithms achieve linear speedup over the batch size even for non-smooth and non-convex (particularly, weakly convex) problems. To this end, we develop a novel sensitivity analysis of the proximal mapping involved in each algorithm iteration. Our analysis appears to be of independent interest in more general settings. Second, motivated by the success of momentum stochastic gradient descent, we propose a new stochastic extrapolated model-based method, greatly extending the classic Polyak momentum technique to a wider class of stochastic algorithms for weakly convex optimization. The rate of convergence to some natural stationarity condition is established over a fairly flexible range of extrapolation terms. While mainly focusing on weakly convex optimization, we also extend our work to convex optimization. We apply the minibatch and extrapolated model-based methods to stochastic convex optimization, for which we provide a new complexity bound and promising linear speedup in batch size. Moreover, an accelerated model-based method based on Nesterov's momentum is presented, for which we establish an optimal complexity bound for reaching optimality.
null
XDO: A Double Oracle Algorithm for Extensive-Form Games
https://papers.nips.cc/paper_files/paper/2021/hash/c2e06e9a80370952f6ec5463c77cbace-Abstract.html
Stephen McAleer, JB Lanier, Kevin A Wang, Pierre Baldi, Roy Fox
https://papers.nips.cc/paper_files/paper/2021/hash/c2e06e9a80370952f6ec5463c77cbace-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13394-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c2e06e9a80370952f6ec5463c77cbace-Paper.pdf
https://openreview.net/forum?id=WDLf8cTq_V8
https://papers.nips.cc/paper_files/paper/2021/file/c2e06e9a80370952f6ec5463c77cbace-Supplemental.pdf
Policy Space Response Oracles (PSRO) is a reinforcement learning (RL) algorithm for two-player zero-sum games that has been empirically shown to find approximate Nash equilibria in large games. Although PSRO is guaranteed to converge to an approximate Nash equilibrium and can handle continuous actions, it may take an exponential number of iterations as the number of information states (infostates) grows. We propose Extensive-Form Double Oracle (XDO), an extensive-form double oracle algorithm for two-player zero-sum games that is guaranteed to converge to an approximate Nash equilibrium linearly in the number of infostates. Unlike PSRO, which mixes best responses at the root of the game, XDO mixes best responses at every infostate. We also introduce Neural XDO (NXDO), where the best response is learned through deep RL. In tabular experiments on Leduc poker, we find that XDO achieves an approximate Nash equilibrium in a number of iterations an order of magnitude smaller than PSRO. Experiments on a modified Leduc poker game and Oshi-Zumo show that tabular XDO achieves a lower exploitability than CFR with the same amount of computation. We also find that NXDO outperforms PSRO and NFSP on a sequential multidimensional continuous-action game. NXDO is the first deep RL method that can find an approximate Nash equilibrium in high-dimensional continuous-action sequential games.
null
Active Assessment of Prediction Services as Accuracy Surface Over Attribute Combinations
https://papers.nips.cc/paper_files/paper/2021/hash/c2f32522a84d5e6357e6abac087f1b0b-Abstract.html
Vihari Piratla, Soumen Chakrabarti, Sunita Sarawagi
https://papers.nips.cc/paper_files/paper/2021/hash/c2f32522a84d5e6357e6abac087f1b0b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13395-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c2f32522a84d5e6357e6abac087f1b0b-Paper.pdf
https://openreview.net/forum?id=7CKWdq2yQv
https://papers.nips.cc/paper_files/paper/2021/file/c2f32522a84d5e6357e6abac087f1b0b-Supplemental.pdf
Our goal is to evaluate the accuracy of a black-box classification model, not as a single aggregate on a given test data distribution, but as a surface over a large number of combinations of attributes characterizing multiple test data distributions. Such attributed accuracy measures become important as machine learning models get deployed as a service, where the training data distribution is hidden from clients, and different clients may be interested in diverse regions of the data distribution. We present Attributed Accuracy Assay (AAA) --- a Gaussian Process (GP)-based probabilistic estimator for such an accuracy surface. Each attribute combination, called an 'arm', is associated with a Beta density from which the service's accuracy is sampled. We expect the GP to smooth the parameters of the Beta density over related arms to mitigate sparsity. We show that a straightforward application of GPs cannot address the challenge of heteroscedastic uncertainty over a huge attribute space that is sparsely and unevenly populated. In response, we present two enhancements: pooling sparse observations, and regularizing the scale parameter of the Beta densities. After introducing these innovations, we establish the effectiveness of AAA both in terms of its estimation accuracy and exploration efficiency, through extensive experiments and analysis.
null
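The per-arm Beta model described above can be illustrated without the GP smoothing that is the paper's main machinery: each attribute combination gets a Beta posterior over the service's accuracy from its own correct/incorrect counts. The counts, attribute names, and uniform prior below are invented.

```python
import numpy as np

# per-arm (attribute-combination) counts of correct and incorrect predictions
observations = {("female", "young"): (45, 5),
                ("female", "old"):   (8, 4),
                ("male",   "young"): (30, 20)}

alpha0, beta0 = 1.0, 1.0      # uniform Beta prior on the service's accuracy
for arm, (correct, wrong) in observations.items():
    a, b = alpha0 + correct, beta0 + wrong
    mean = a / (a + b)
    std = np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    print(arm, f"accuracy ~ {mean:.2f} +/- {std:.2f}")
```

Arms with few observations keep wide posteriors, which is exactly the sparsity that the pooling and GP smoothing enhancements are meant to address.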
A mechanistic multi-area recurrent network model of decision-making
https://papers.nips.cc/paper_files/paper/2021/hash/c2f599841f21aaefeeabd2a60ef7bfe8-Abstract.html
Michael Kleinman, Chandramouli Chandrasekaran, Jonathan Kao
https://papers.nips.cc/paper_files/paper/2021/hash/c2f599841f21aaefeeabd2a60ef7bfe8-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13396-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c2f599841f21aaefeeabd2a60ef7bfe8-Paper.pdf
https://openreview.net/forum?id=FP7_Q6Np8w
https://papers.nips.cc/paper_files/paper/2021/file/c2f599841f21aaefeeabd2a60ef7bfe8-Supplemental.pdf
Recurrent neural networks (RNNs) trained on neuroscience-based tasks have been widely used as models for cortical areas performing analogous tasks. However, very few tasks involve a single cortical area, and instead require the coordination of multiple brain areas. Despite the importance of multi-area computation, there is a limited understanding of the principles underlying such computation. We propose to use multi-area RNNs with neuroscience-inspired architecture constraints to derive key features of multi-area computation. In particular, we show that incorporating multiple areas and Dale's Law is critical for biasing the networks to learn biologically plausible solutions. Additionally, we leverage the full observability of the RNNs to show that output-relevant information is preferentially propagated between areas. These results suggest that cortex uses modular computation to generate minimal sufficient representations of task information. More broadly, our results suggest that constrained multi-area RNNs can produce experimentally testable hypotheses for computations that occur within and across multiple brain areas, enabling new insights into distributed computation in neural systems.
null
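A minimal PyTorch-style sketch of the kind of architectural constraint mentioned above: recurrent weights masked so that each unit is purely excitatory or inhibitory (Dale's law) and connections are only allowed within an area or between adjacent areas. The sizes, the 80/20 excitatory/inhibitory split, and the adjacency rule are illustrative, not the paper's exact configuration.

```python
import torch

n_per_area, n_areas = 50, 3
N = n_per_area * n_areas
W_raw = torch.randn(N, N) * 0.1        # unconstrained trainable parameter

# Dale's law: each presynaptic unit is excitatory (+1) or inhibitory (-1)
sign = torch.ones(N)
sign[torch.rand(N) < 0.2] = -1.0

# area mask: allow connections within an area and between adjacent areas only
area = torch.arange(N) // n_per_area
conn_mask = ((area[None, :] - area[:, None]).abs() <= 1).float()

# effective recurrent weights: non-negative magnitudes x signs x allowed links
W_eff = torch.relu(W_raw) * sign[None, :] * conn_mask
print(W_eff.shape)   # torch.Size([150, 150])
```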
Learning to Compose Visual Relations
https://papers.nips.cc/paper_files/paper/2021/hash/c3008b2c6f5370b744850a98a95b73ad-Abstract.html
Nan Liu, Shuang Li, Yilun Du, Josh Tenenbaum, Antonio Torralba
https://papers.nips.cc/paper_files/paper/2021/hash/c3008b2c6f5370b744850a98a95b73ad-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13397-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c3008b2c6f5370b744850a98a95b73ad-Paper.pdf
https://openreview.net/forum?id=fzwx-pzQGxe
https://papers.nips.cc/paper_files/paper/2021/file/c3008b2c6f5370b744850a98a95b73ad-Supplemental.pdf
The visual world around us can be described as a structured set of objects and their associated relations. An image of a room may be conjured given only the description of the underlying objects and their associated relations. While there has been significant work on designing deep neural networks which may compose individual objects together, less work has been done on composing the individual relations between objects. A principal difficulty is that while the placement of objects is mutually independent, their relations are entangled and dependent on each other. To circumvent this issue, existing works primarily compose relations by utilizing a holistic encoder, in the form of text or graphs. In this work, we instead propose to represent each relation as an unnormalized density (an energy-based model), enabling us to compose separate relations in a factorized manner. We show that such a factorized decomposition allows the model to both generate and edit scenes that have multiple sets of relations more faithfully. We further show that decomposition enables our model to effectively understand the underlying relational scene structure.
null
Identity testing for Mallows model
https://papers.nips.cc/paper_files/paper/2021/hash/c315f0320b7cd4ec85756fac52d78076-Abstract.html
Róbert Busa-Fekete, Dimitris Fotakis, Balazs Szorenyi, Emmanouil Zampetakis
https://papers.nips.cc/paper_files/paper/2021/hash/c315f0320b7cd4ec85756fac52d78076-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13398-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c315f0320b7cd4ec85756fac52d78076-Paper.pdf
https://openreview.net/forum?id=M7emZFOLbH
https://papers.nips.cc/paper_files/paper/2021/file/c315f0320b7cd4ec85756fac52d78076-Supplemental.zip
In this paper, we devise identity tests for ranking data that is generated from the Mallows model, both in the \emph{asymptotic} and \emph{non-asymptotic} settings. First, we consider the case when the central ranking is known, and devise two algorithms for testing the spread parameter of the Mallows model. The first one is obtained by constructing a Uniformly Most Powerful Unbiased (UMPU) test in the asymptotic setting and then converting it into a sample-optimal non-asymptotic identity test. The resulting test is, however, impractical even for medium-sized data, because it requires computing the distribution of the sufficient statistic. The second non-asymptotic test is derived from an optimal learning algorithm for the Mallows model. This test is both easy to compute and sample-optimal for a wide range of parameters. Next, we consider testing Mallows models in the unknown central ranking case. This case can be tackled in the asymptotic setting by introducing a bias that exponentially decays with the sample size. We support all our findings with extensive numerical experiments and show that the proposed tests scale gracefully with the number of items to be ranked.
null
Bandits with Knapsacks beyond the Worst Case
https://papers.nips.cc/paper_files/paper/2021/hash/c3395dd46c34fa7fd8d729d8cf88b7a8-Abstract.html
Karthik Abinav Sankararaman, Aleksandrs Slivkins
https://papers.nips.cc/paper_files/paper/2021/hash/c3395dd46c34fa7fd8d729d8cf88b7a8-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13399-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c3395dd46c34fa7fd8d729d8cf88b7a8-Paper.pdf
https://openreview.net/forum?id=QGauQ-9y42C
https://papers.nips.cc/paper_files/paper/2021/file/c3395dd46c34fa7fd8d729d8cf88b7a8-Supplemental.pdf
Bandits with Knapsacks (BwK) is a general model for multi-armed bandits under supply/budget constraints. While worst-case regret bounds for BwK are well understood, we present three results that go beyond the worst-case perspective. First, we provide upper and lower bounds which amount to a full characterization of logarithmic, instance-dependent regret rates. Second, we consider "simple regret" in BwK, which tracks the algorithm's performance in a given round, and prove that it is small in all but a few rounds. Third, we provide a "general reduction" from BwK to bandits which takes advantage of some known helpful structure, and apply this reduction to combinatorial semi-bandits, linear contextual bandits, and multinomial-logit bandits. Our results build on the BwK algorithm from prior work, providing new analyses thereof.
null
Closing the loop in medical decision support by understanding clinical decision-making: A case study on organ transplantation
https://papers.nips.cc/paper_files/paper/2021/hash/c344336196d5ec19bd54fd14befdde87-Abstract.html
Yuchao Qin, Fergus Imrie, Alihan Hüyük, Daniel Jarrett, alexander gimson, Mihaela van der Schaar
https://papers.nips.cc/paper_files/paper/2021/hash/c344336196d5ec19bd54fd14befdde87-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13400-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c344336196d5ec19bd54fd14befdde87-Paper.pdf
https://openreview.net/forum?id=gBxJ0R9Oua
https://papers.nips.cc/paper_files/paper/2021/file/c344336196d5ec19bd54fd14befdde87-Supplemental.pdf
Significant effort has been placed on developing decision support tools to improve patient care. However, drivers of real-world clinical decisions in complex medical scenarios are not yet well-understood, resulting in substantial gaps between these tools and practical applications. In light of this, we highlight that more attention on understanding clinical decision-making is required both to elucidate current clinical practices and to enable effective human-machine interactions. This is imperative in high-stakes scenarios with scarce available resources. Using organ transplantation as a case study, we formalize the desiderata of methods for understanding clinical decision-making. We show that most existing machine learning methods are insufficient to meet these requirements and propose iTransplant, a novel data-driven framework to learn the factors affecting decisions on organ offers in an instance-wise fashion directly from clinical data, as a possible solution. Through experiments on real-world liver transplantation data from OPTN, we demonstrate the use of iTransplant to: (1) discover which criteria are most important to clinicians for organ offer acceptance; (2) identify patient-specific organ preferences of clinicians allowing automatic patient stratification; and (3) explore variations in transplantation practices between different transplant centers. Finally, we emphasize that the insights gained by iTransplant can be used to inform the development of future decision support tools.
null
Change Point Detection via Multivariate Singular Spectrum Analysis
https://papers.nips.cc/paper_files/paper/2021/hash/c348616cd8a86ee661c7c98800678fad-Abstract.html
Arwa Alanqary, Abdullah Alomar, Devavrat Shah
https://papers.nips.cc/paper_files/paper/2021/hash/c348616cd8a86ee661c7c98800678fad-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13401-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c348616cd8a86ee661c7c98800678fad-Paper.pdf
https://openreview.net/forum?id=i0DmV60aeK
https://papers.nips.cc/paper_files/paper/2021/file/c348616cd8a86ee661c7c98800678fad-Supplemental.pdf
The objective of change point detection (CPD) is to detect significant and abrupt changes in the dynamics of the underlying system of interest through multivariate time series observations. In this work, we develop and analyze an algorithm for CPD that is inspired by a variant of the classical singular spectrum analysis (SSA) approach for time series by combining it with the classical cumulative sum (CUSUM) statistic from sequential hypothesis testing. In particular, we model the underlying dynamics of multivariate time series observations through the spatio-temporal model introduced recently in the multivariate SSA (mSSA) literature. The change point in such a setting corresponds to a change in the underlying spatio-temporal model. As the primary contributions of this work, we develop an algorithm based on the CUSUM statistic to detect such change points in an online fashion. We extend the analysis of CUSUM statistics, traditionally done for the setting of independent observations, to the dependent setting of (multivariate) time series under the spatio-temporal model. Specifically, for a given parameter $h > 0$, our method achieves the following desirable trade-off: when a change happens, it detects it within $O(h)$ time delay on average, while in the absence of change, it does not declare false detection for at least $\exp(\Omega(h))$ time length on average. We conduct empirical experiments using benchmark and synthetic datasets. We find that the proposed method performs competitively or outperforms the state-of-the-art change point detection methods across datasets.
null
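For orientation, here is the plain one-sided CUSUM recursion that the method above builds on, applied to a univariate detection statistic; the paper's contribution is computing that statistic from the mSSA spatio-temporal model, which this sketch does not do. The threshold and synthetic data are arbitrary.

```python
import numpy as np

def cusum(stats, threshold):
    """One-sided CUSUM: accumulate positive drift of the detection statistic
    and declare a change once the running sum exceeds the threshold."""
    s = 0.0
    for t, x in enumerate(stats):
        s = max(0.0, s + x)
        if s > threshold:
            return t            # index at which a change is declared
    return None                 # no change declared

rng = np.random.default_rng(0)
# statistic drifts negative before the change at t=200 and positive after it
stream = np.concatenate([rng.normal(-0.5, 1.0, 200), rng.normal(0.5, 1.0, 200)])
print(cusum(stream, threshold=20.0))   # typically detected shortly after t = 200
```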
Meta-learning to Improve Pre-training
https://papers.nips.cc/paper_files/paper/2021/hash/c3810d4a9513b028fc0f2a83cb6d7b50-Abstract.html
Aniruddh Raghu, Jonathan Lorraine, Simon Kornblith, Matthew McDermott, David K. Duvenaud
https://papers.nips.cc/paper_files/paper/2021/hash/c3810d4a9513b028fc0f2a83cb6d7b50-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13402-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c3810d4a9513b028fc0f2a83cb6d7b50-Paper.pdf
https://openreview.net/forum?id=Wiq6Mg8btwT
https://papers.nips.cc/paper_files/paper/2021/file/c3810d4a9513b028fc0f2a83cb6d7b50-Supplemental.pdf
Pre-training (PT) followed by fine-tuning (FT) is an effective method for training neural networks, and has led to significant performance improvements in many domains. PT can incorporate various design choices such as task and data reweighting strategies, augmentation policies, and noise models, all of which can significantly impact the quality of representations learned. The hyperparameters introduced by these strategies therefore must be tuned appropriately. However, setting the values of these hyperparameters is challenging. Most existing methods either struggle to scale to high dimensions, are too slow and memory-intensive, or cannot be directly applied to the two-stage PT and FT learning process. In this work, we propose an efficient, gradient-based algorithm to meta-learn PT hyperparameters. We formalize the PT hyperparameter optimization problem and propose a novel method to obtain PT hyperparameter gradients by combining implicit differentiation and backpropagation through unrolled optimization. We demonstrate that our method improves predictive performance on two real-world domains. First, we optimize high-dimensional task weighting hyperparameters for multitask pre-training on protein-protein interaction graphs and improve AUROC by up to 3.9%. Second, we optimize a data augmentation neural network for self-supervised PT with SimCLR on electrocardiography data and improve AUROC by up to 1.9%.
null
Fair Sparse Regression with Clustering: An Invex Relaxation for a Combinatorial Problem
https://papers.nips.cc/paper_files/paper/2021/hash/c39b9a47811f1eaf3244a63ae8c22734-Abstract.html
Adarsh Barik, Jean Honorio
https://papers.nips.cc/paper_files/paper/2021/hash/c39b9a47811f1eaf3244a63ae8c22734-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13403-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c39b9a47811f1eaf3244a63ae8c22734-Paper.pdf
https://openreview.net/forum?id=l-0rLXvctI
https://papers.nips.cc/paper_files/paper/2021/file/c39b9a47811f1eaf3244a63ae8c22734-Supplemental.pdf
In this paper, we study the problem of fair sparse regression on a biased dataset where bias depends upon a hidden binary attribute. The presence of a hidden attribute adds an extra layer of complexity to the problem by combining sparse regression and clustering with unknown binary labels. The corresponding optimization problem is combinatorial, but we propose a novel relaxation of it as an invex optimization problem. To the best of our knowledge, this is the first invex relaxation for a combinatorial problem. We show that the inclusion of the debiasing/fairness constraint in our model has no adverse effect on the performance. Rather, it enables the recovery of the hidden attribute. The support of our recovered regression parameter vector matches exactly with the true parameter vector. Moreover, we simultaneously solve the clustering problem by recovering the exact value of the hidden attribute for each sample. Our method uses carefully constructed primal dual witnesses to provide theoretical guarantees for the combinatorial problem. To that end, we show that the sample complexity of our method is logarithmic in terms of the dimension of the regression parameter vector.
null
Probabilistic Margins for Instance Reweighting in Adversarial Training
https://papers.nips.cc/paper_files/paper/2021/hash/c3a690be93aa602ee2dc0ccab5b7b67e-Abstract.html
qizhou wang, Feng Liu, Bo Han, Tongliang Liu, Chen Gong, Gang Niu, Mingyuan Zhou, Masashi Sugiyama
https://papers.nips.cc/paper_files/paper/2021/hash/c3a690be93aa602ee2dc0ccab5b7b67e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13404-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c3a690be93aa602ee2dc0ccab5b7b67e-Paper.pdf
https://openreview.net/forum?id=rg8gNkvs3u
https://papers.nips.cc/paper_files/paper/2021/file/c3a690be93aa602ee2dc0ccab5b7b67e-Supplemental.pdf
Reweighting adversarial data during training has been recently shown to improve adversarial robustness, where data closer to the current decision boundaries are regarded as more critical and given larger weights. However, existing methods measuring the closeness are not very reliable: they are discrete and can take only a few values, and they are path-dependent, i.e., they may change given the same start and end points with different attack paths. In this paper, we propose three types of probabilistic margin (PM), which are continuous and path-independent, for measuring the aforementioned closeness and reweighting adversarial data. Specifically, a PM is defined as the difference between two estimated class-posterior probabilities, e.g., such a probability of the true label minus the probability of the most confusing label given some natural data. Though different PMs capture different geometric properties, all three PMs share a negative correlation with the vulnerability of data: data with larger/smaller PMs are safer/riskier and should have smaller/larger weights. Experiments demonstrated that PMs are reliable and PM-based reweighting methods outperformed state-of-the-art counterparts.
null
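To make the first definition above concrete, the snippet below computes a probabilistic margin as the estimated probability of the true class minus that of the most confusing other class, and then turns margins into weights so that riskier points get larger weights. The softmax scores and the exponential weighting rule are illustrative, not the paper's exact scheme.

```python
import numpy as np

def probabilistic_margin(probs, true_label):
    """PM = p(true class) - max over other classes of p(class)."""
    probs = np.asarray(probs)
    return probs[true_label] - np.delete(probs, true_label).max()

# class-posterior estimates for two adversarial examples (3 classes, true label 0)
safe = probabilistic_margin([0.70, 0.20, 0.10], true_label=0)   #  0.50
risky = probabilistic_margin([0.35, 0.40, 0.25], true_label=0)  # -0.05

# illustrative reweighting: smaller margin (riskier data) gets a larger weight
weights = np.exp(-2.0 * np.array([safe, risky]))
print(weights / weights.sum())   # roughly [0.25, 0.75]
```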
Unbalanced Optimal Transport through Non-negative Penalized Linear Regression
https://papers.nips.cc/paper_files/paper/2021/hash/c3c617a9b80b3ae1ebd868b0017cc349-Abstract.html
Laetitia Chapel, Rémi Flamary, Haoran Wu, Cédric Févotte, Gilles Gasso
https://papers.nips.cc/paper_files/paper/2021/hash/c3c617a9b80b3ae1ebd868b0017cc349-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13405-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c3c617a9b80b3ae1ebd868b0017cc349-Paper.pdf
https://openreview.net/forum?id=O8Ffv3aRJr
https://papers.nips.cc/paper_files/paper/2021/file/c3c617a9b80b3ae1ebd868b0017cc349-Supplemental.pdf
This paper addresses the problem of Unbalanced Optimal Transport (UOT) in which the marginal conditions are relaxed (using weighted penalties in lieu of equality) and no additional regularization is enforced on the OT plan. In this context, we show that the corresponding optimization problem can be reformulated as a non-negative penalized linear regression problem. This reformulation allows us to propose novel algorithms inspired by inverse problems and nonnegative matrix factorization. In particular, we consider majorization-minimization, which leads in our setting to efficient multiplicative updates for a variety of penalties. Furthermore, we derive for the first time an efficient algorithm to compute the regularization path of UOT with quadratic penalties. The proposed algorithm provides a continuum of piecewise-linear OT plans converging to the solution of balanced OT (corresponding to infinite penalty weights). We perform several numerical experiments on simulated and real data illustrating the new algorithms, and provide a detailed discussion about more sophisticated optimization tools that can further be used to solve OT problems thanks to our reformulation.
null
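As a toy illustration of the relaxed-marginal objective above with quadratic penalties, the snippet below solves a tiny unbalanced OT instance directly as a bound-constrained problem with SciPy. It does not implement the paper's regression reformulation or its multiplicative majorization-minimization updates; the marginals, cost matrix, and penalty weight are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
a = np.array([0.6, 0.4])            # source marginal (total mass 1.0)
b = np.array([0.3, 0.3, 0.5])       # target marginal (total mass 1.1, unbalanced)
C = rng.random((2, 3))              # transport cost matrix
lam = 50.0                          # weight of the quadratic marginal penalties

def objective(t):
    T = t.reshape(2, 3)
    marginal_penalty = ((T.sum(1) - a) ** 2).sum() + ((T.sum(0) - b) ** 2).sum()
    return (C * T).sum() + lam * marginal_penalty

res = minimize(objective, x0=np.full(6, 0.1),
               bounds=[(0.0, None)] * 6, method="L-BFGS-B")
print(res.x.reshape(2, 3).round(3))   # non-negative plan with relaxed marginals
```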
The Difficulty of Passive Learning in Deep Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2021/hash/c3e0c62ee91db8dc7382bde7419bb573-Abstract.html
Georg Ostrovski, Pablo Samuel Castro, Will Dabney
https://papers.nips.cc/paper_files/paper/2021/hash/c3e0c62ee91db8dc7382bde7419bb573-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13406-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c3e0c62ee91db8dc7382bde7419bb573-Paper.pdf
https://openreview.net/forum?id=nPHA8fGicZk
https://papers.nips.cc/paper_files/paper/2021/file/c3e0c62ee91db8dc7382bde7419bb573-Supplemental.pdf
Learning to act from observational data without active environmental interaction is a well-known challenge in Reinforcement Learning (RL). Recent approaches involve constraints on the learned policy or conservative updates, preventing strong deviations from the state-action distribution of the dataset. Although these methods are evaluated using non-linear function approximation, theoretical justifications are mostly limited to the tabular or linear cases. Given the impressive results of deep reinforcement learning, we argue for a need to more clearly understand the challenges in this setting. In the vein of Held & Hein's classic 1963 experiment, we propose the "tandem learning" experimental paradigm which facilitates our empirical analysis of the difficulties in offline reinforcement learning. We identify function approximation in conjunction with fixed data distributions as the strongest factors, thereby extending but also challenging hypotheses stated in past work. Our results provide relevant insights for offline deep reinforcement learning, while also shedding new light on phenomena observed in the online case of learning control.
null
Intriguing Properties of Vision Transformers
https://papers.nips.cc/paper_files/paper/2021/hash/c404a5adbf90e09631678b13b05d9d7a-Abstract.html
Muhammad Muzammal Naseer, Kanchana Ranasinghe, Salman H Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang
https://papers.nips.cc/paper_files/paper/2021/hash/c404a5adbf90e09631678b13b05d9d7a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13407-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c404a5adbf90e09631678b13b05d9d7a-Paper.pdf
https://openreview.net/forum?id=o2mbl-Hmfgd
https://papers.nips.cc/paper_files/paper/2021/file/c404a5adbf90e09631678b13b05d9d7a-Supplemental.pdf
Vision transformers (ViT) have demonstrated impressive performance across numerous machine vision tasks. These models are based on multi-head self-attention mechanisms that can flexibly attend to a sequence of image patches to encode contextual cues. An important question is how such flexibility (in attending image-wide context conditioned on a given patch) can facilitate handling nuisances in natural images e.g., severe occlusions, domain shifts, spatial permutations, adversarial and natural perturbations. We systematically study this question via an extensive set of experiments encompassing three ViT families and provide comparisons with a high-performing convolutional neural network (CNN). We show and analyze the following intriguing properties of ViT: (a) Transformers are highly robust to severe occlusions, perturbations and domain shifts, e.g., retain as high as 60% top-1 accuracy on ImageNet even after randomly occluding 80% of the image content. (b) The robustness towards occlusions is not due to texture bias; instead we show that ViTs are significantly less biased towards local textures, compared to CNNs. When properly trained to encode shape-based features, ViTs demonstrate shape recognition capability comparable to that of the human visual system, previously unmatched in the literature. (c) Using ViTs to encode shape representation leads to an interesting consequence of accurate semantic segmentation without pixel-level supervision. (d) Off-the-shelf features from a single ViT model can be combined to create a feature ensemble, leading to high accuracy rates across a range of classification datasets in both traditional and few-shot learning paradigms. We show that the effective features of ViTs are due to the flexible and dynamic receptive fields made possible by self-attention mechanisms. Our code will be publicly released.
null
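The occlusion stress test quoted above (randomly dropping 80% of image content) can be mimicked with a few lines of PyTorch that zero out a random fraction of non-overlapping 16x16 patches; the model call at the end is a hypothetical placeholder, and the exact occlusion protocol in the paper may differ.

```python
import torch

def random_patch_occlusion(images, patch=16, drop_ratio=0.8):
    """Zero out a random drop_ratio fraction of non-overlapping patches."""
    b, c, h, w = images.shape
    gh, gw = h // patch, w // patch
    keep = (torch.rand(b, gh * gw) >= drop_ratio).float()
    mask = keep.view(b, 1, gh, gw)
    mask = mask.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return images * mask

images = torch.randn(4, 3, 224, 224)
occluded = random_patch_occlusion(images)
print((occluded == 0).float().mean())   # roughly 0.8 of the pixels are dropped
# logits = vit_model(occluded)          # hypothetical ViT forward pass
```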
PartialFed: Cross-Domain Personalized Federated Learning via Partial Initialization
https://papers.nips.cc/paper_files/paper/2021/hash/c429429bf1f2af051f2021dc92a8ebea-Abstract.html
Benyuan Sun, Hongxing Huo, YI YANG, Bo Bai
https://papers.nips.cc/paper_files/paper/2021/hash/c429429bf1f2af051f2021dc92a8ebea-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13408-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c429429bf1f2af051f2021dc92a8ebea-Paper.pdf
https://openreview.net/forum?id=lgDP84byd5
https://papers.nips.cc/paper_files/paper/2021/file/c429429bf1f2af051f2021dc92a8ebea-Supplemental.pdf
The burst of applications empowered by massive data has raised unprecedented privacy concerns in the AI community. Data confidentiality protection is currently a core issue in deep model training. Federated Learning (FL), which enables privacy-preserving training across multiple silos, has gained rising popularity for its parameter-only communication. However, previous works have shown that FL suffers a significant performance drop if the data distributions are heterogeneous among different clients, especially when the clients have cross-domain characteristics, such as traffic, aerial and indoor imagery. To address this challenging problem, we propose a novel idea, PartialFed, which loads a subset of the global model's parameters rather than the entire model as in most previous works. We first validate our algorithm with manually decided loading strategies inspired by various expert priors, named PartialFed-Fix. Then we develop PartialFed-Adaptive, which automatically selects a personalized loading strategy for each client. The superiority of our algorithm is demonstrated by new state-of-the-art results on cross-domain federated classification and detection. In particular, solely by initializing a small fraction of layers locally, we improve the performance of FedAvg on Office-Home and UODB by 4.88% and 2.65%, respectively. Further studies show that the adaptive strategy performs significantly better on domains with large deviation, e.g., improving AP50 by 4.03% and 4.89% on aerial and medical image detection compared to FedAvg.
null
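A minimal PyTorch sketch of the core mechanism above: a client initializes only a subset of its layers from the global model and keeps the rest locally initialized. The fixed rule used here (load everything except the final classifier) is just one illustrative strategy, standing in for the manually decided or adaptively learned strategies in the paper.

```python
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

global_model = make_model()    # model aggregated on the server
client_model = make_model()    # personalized copy held by one client

# fixed strategy: load all global parameters except the last (classifier) layer
partial_state = {k: v for k, v in global_model.state_dict().items()
                 if not k.startswith("2.")}
result = client_model.load_state_dict(partial_state, strict=False)
print(result.missing_keys)     # classifier weights that stay locally initialized
```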
Adaptive Diffusion in Graph Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/c42af2fa7356818e0389593714f59b52-Abstract.html
Jialin Zhao, Yuxiao Dong, Ming Ding, Evgeny Kharlamov, Jie Tang
https://papers.nips.cc/paper_files/paper/2021/hash/c42af2fa7356818e0389593714f59b52-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13409-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c42af2fa7356818e0389593714f59b52-Paper.pdf
https://openreview.net/forum?id=0Kb33DHJ1g
https://papers.nips.cc/paper_files/paper/2021/file/c42af2fa7356818e0389593714f59b52-Supplemental.zip
The success of graph neural networks (GNNs) largely relies on the process of aggregating information from neighbors defined by the input graph structures. Notably, message passing based GNNs, e.g., graph convolutional networks, leverage the immediate neighbors of each node during the aggregation process, and recently, graph diffusion convolution (GDC) is proposed to expand the propagation neighborhood by leveraging generalized graph diffusion. However, the neighborhood size in GDC is manually tuned for each graph by conducting grid search over the validation set, making its generalization practically limited. To address this issue, we propose the adaptive diffusion convolution (ADC) strategy to automatically learn the optimal neighborhood size from the data. Furthermore, we break the conventional assumption that all GNN layers and feature channels (dimensions) should use the same neighborhood for propagation. We design strategies to enable ADC to learn a dedicated propagation neighborhood for each GNN layer and each feature channel, making the GNN architecture fully coupled with graph structures---the unique property that distinguishes GNNs from traditional neural networks. By directly plugging ADC into existing GNNs, we observe consistent and significant outperformance over both GDC and their vanilla versions across various datasets, demonstrating the improved model capacity brought by automatically learning unique neighborhood size per layer and per channel in GNNs.
null
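For context on the propagation matrix being learned above, the sketch below builds a generalized graph diffusion matrix with heat-kernel weights, S = sum_k e^{-t} t^k / k! * T^k, on a toy graph. Here t is a fixed scalar, whereas ADC's point is to learn it (per layer and per channel) from data.

```python
import numpy as np
from math import factorial

def heat_diffusion_matrix(A, t=1.0, K=10):
    """S = sum_{k=0}^{K} theta_k T^k with theta_k = e^{-t} t^k / k!, where T is
    the symmetrically normalized adjacency matrix with self-loops."""
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    T = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    S, T_pow = np.zeros_like(T), np.eye(A.shape[0])
    for k in range(K + 1):
        S += np.exp(-t) * t ** k / factorial(k) * T_pow
        T_pow = T_pow @ T
    return S

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = np.random.randn(4, 8)                    # toy node features
print(heat_diffusion_matrix(A, t=2.0) @ X)   # diffused features
```

A larger t spreads each node's features over a wider neighborhood, which is the neighborhood-size knob that GDC tunes by grid search and ADC learns end to end.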
Recurrent Submodular Welfare and Matroid Blocking Semi-Bandits
https://papers.nips.cc/paper_files/paper/2021/hash/c44bebb973e14fe539676e0e9155b121-Abstract.html
Orestis Papadigenopoulos, Constantine Caramanis
https://papers.nips.cc/paper_files/paper/2021/hash/c44bebb973e14fe539676e0e9155b121-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13410-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c44bebb973e14fe539676e0e9155b121-Paper.pdf
https://openreview.net/forum?id=lS-ocNKV-u
https://papers.nips.cc/paper_files/paper/2021/file/c44bebb973e14fe539676e0e9155b121-Supplemental.pdf
A recent line of research focuses on the study of stochastic multi-armed bandits (MAB), in the case where temporal correlations of specific structure are imposed between the player's actions and the reward distributions of the arms. These correlations lead to (sub-)optimal solutions that exhibit interesting dynamical patterns -- a phenomenon that yields new challenges both from an algorithmic as well as a learning perspective. In this work, we extend the above direction to a combinatorial semi-bandit setting and study a variant of stochastic MAB, where arms are subject to matroid constraints and each arm becomes unavailable (blocked) for a fixed number of rounds after each play. A natural common generalization of the state-of-the-art for blocking bandits, and that for matroid bandits, only guarantees a $\frac{1}{2}$-approximation for general matroids. In this paper we develop the novel technique of correlated (interleaved) scheduling, which allows us to obtain a polynomial-time $(1 - \frac{1}{e})$-approximation algorithm (asymptotically and in expectation) for any matroid. Along the way, we discover an interesting connection to a variant of Submodular Welfare Maximization, for which we provide (asymptotically) matching upper and lower approximability bounds. In the case where the mean arm rewards are unknown, our technique naturally decouples the scheduling from the learning problem, and thus allows to control the $(1-\frac{1}{e})$-approximate regret of a UCB-based adaptation of our online algorithm.
null
Representer Point Selection via Local Jacobian Expansion for Post-hoc Classifier Explanation of Deep Neural Networks and Ensemble Models
https://papers.nips.cc/paper_files/paper/2021/hash/c460dc0f18fc309ac07306a4a55d2fd6-Abstract.html
Yi Sui, Ga Wu, Scott Sanner
https://papers.nips.cc/paper_files/paper/2021/hash/c460dc0f18fc309ac07306a4a55d2fd6-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13411-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c460dc0f18fc309ac07306a4a55d2fd6-Paper.pdf
https://openreview.net/forum?id=Wl32WBZnSP4
https://papers.nips.cc/paper_files/paper/2021/file/c460dc0f18fc309ac07306a4a55d2fd6-Supplemental.pdf
Explaining the influence of training data on deep neural network predictions is a critical tool for debugging models through data curation. A recent tractable and appealing approach for this task was provided via the concept of Representer Point Selection (RPS), i.e., a method that leverages the dual form of $l_2$-regularized optimization in the last layer of the neural network to identify the contribution of training points to the prediction. However, two key drawbacks of RPS are that it (i) leads to disagreement between the originally trained network and the RPS-regularized network modification and (ii) often yields a static ranking of training data for the same class, independent of the data being classified. Inspired by the RPS approach, we propose an alternative method based on a local Jacobian Taylor expansion (LJE). We empirically compared RPS-LJE with the original RPS-$l_2$ on image classification (with ResNet), text classification (with Bi-LSTM recurrent neural networks), and tabular classification (with XGBoost) tasks. Quantitatively, we show that RPS-LJE slightly outperforms RPS-$l_2$ and other state-of-the-art data explanation methods by up to 3% on a data debugging task. Qualitatively, we observe that RPS-LJE provides individualized explanations for each test data point rather than the class-specific static ranking of points in the original approach. Overall, RPS-LJE represents a novel approach to RPS that provides a powerful tool for data-oriented explanation and debugging.
null
Editing a classifier by rewriting its prediction rules
https://papers.nips.cc/paper_files/paper/2021/hash/c46489a2d5a9a9ecfc53b17610926ddd-Abstract.html
Shibani Santurkar, Dimitris Tsipras, Mahalaxmi Elango, David Bau, Antonio Torralba, Aleksander Madry
https://papers.nips.cc/paper_files/paper/2021/hash/c46489a2d5a9a9ecfc53b17610926ddd-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13412-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c46489a2d5a9a9ecfc53b17610926ddd-Paper.pdf
https://openreview.net/forum?id=aM7UsuOAzB3
https://papers.nips.cc/paper_files/paper/2021/file/c46489a2d5a9a9ecfc53b17610926ddd-Supplemental.pdf
We propose a methodology for modifying the behavior of a classifier by directly rewriting its prediction rules. Our method requires virtually no additional data collection and can be applied to a variety of settings, including adapting a model to new environments, and modifying it to ignore spurious features.
null
How Modular should Neural Module Networks Be for Systematic Generalization?
https://papers.nips.cc/paper_files/paper/2021/hash/c467978aaae44a0e8054e174bc0da4bb-Abstract.html
Vanessa D'Amario, Tomotake Sasaki, Xavier Boix
https://papers.nips.cc/paper_files/paper/2021/hash/c467978aaae44a0e8054e174bc0da4bb-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13413-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c467978aaae44a0e8054e174bc0da4bb-Paper.pdf
https://openreview.net/forum?id=dwY40cSK-dt
https://papers.nips.cc/paper_files/paper/2021/file/c467978aaae44a0e8054e174bc0da4bb-Supplemental.pdf
Neural Module Networks (NMNs) aim at Visual Question Answering (VQA) via composition of modules that tackle a sub-task. NMNs are a promising strategy to achieve systematic generalization, i.e., overcoming biasing factors in the training distribution. However, the aspects of NMNs that facilitate systematic generalization are not fully understood. In this paper, we demonstrate that the degree of modularity of the NMN has a large influence on systematic generalization. In a series of experiments on three VQA datasets (VQA-MNIST, SQOOP, and CLEVR-CoGenT), our results reveal that tuning the degree of modularity, especially at the image encoder stage, achieves substantially higher systematic generalization. These findings lead to new NMN architectures that outperform previous ones in terms of systematic generalization.
null
Contrast and Mix: Temporal Contrastive Video Domain Adaptation with Background Mixing
https://papers.nips.cc/paper_files/paper/2021/hash/c47e93742387750baba2e238558fa12d-Abstract.html
Aadarsh Sahoo, Rutav Shah, Rameswar Panda, Kate Saenko, Abir Das
https://papers.nips.cc/paper_files/paper/2021/hash/c47e93742387750baba2e238558fa12d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13414-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c47e93742387750baba2e238558fa12d-Paper.pdf
https://openreview.net/forum?id=a1wQOh27zcy
https://papers.nips.cc/paper_files/paper/2021/file/c47e93742387750baba2e238558fa12d-Supplemental.pdf
Unsupervised domain adaptation which aims to adapt models trained on a labeled source domain to a completely unlabeled target domain has attracted much attention in recent years. While many domain adaptation techniques have been proposed for images, the problem of unsupervised domain adaptation in videos remains largely underexplored. In this paper, we introduce Contrast and Mix (CoMix), a new contrastive learning framework that aims to learn discriminative invariant feature representations for unsupervised video domain adaptation. First, unlike existing methods that rely on adversarial learning for feature alignment, we utilize temporal contrastive learning to bridge the domain gap by maximizing the similarity between encoded representations of an unlabeled video at two different speeds as well as minimizing the similarity between different videos played at different speeds. Second, we propose a novel extension to the temporal contrastive loss by using background mixing that allows additional positives per anchor, thus adapting contrastive learning to leverage action semantics shared across both domains. Moreover, we also integrate a supervised contrastive learning objective using target pseudo-labels to enhance discriminability of the latent space for video domain adaptation. Extensive experiments on several benchmark datasets demonstrate the superiority of our proposed approach over state-of-the-art methods. Project page: https://cvir.github.io/projects/comix.
null
The Flip Side of the Reweighted Coin: Duality of Adaptive Dropout and Regularization
https://papers.nips.cc/paper_files/paper/2021/hash/c4b8bb990423f770dd7f26ff79168416-Abstract.html
Daniel LeJeune, Hamid Javadi, Richard Baraniuk
https://papers.nips.cc/paper_files/paper/2021/hash/c4b8bb990423f770dd7f26ff79168416-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13415-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c4b8bb990423f770dd7f26ff79168416-Paper.pdf
https://openreview.net/forum?id=bV89lw5OF8x
https://papers.nips.cc/paper_files/paper/2021/file/c4b8bb990423f770dd7f26ff79168416-Supplemental.pdf
Among the most successful methods for sparsifying deep (neural) networks are those that adaptively mask the network weights throughout training. By examining this masking, or dropout, in the linear case, we uncover a duality between such adaptive methods and regularization through the so-called “η-trick” that casts both as iteratively reweighted optimizations. We show that any dropout strategy that adapts to the weights in a monotonic way corresponds to an effective subquadratic regularization penalty, and therefore leads to sparse solutions. We obtain the effective penalties for several popular sparsification strategies, which are remarkably similar to classical penalties commonly used in sparse optimization. Considering variational dropout as a case study, we demonstrate similar empirical behavior between the adaptive dropout method and classical methods on the task of deep network sparsification, validating our theory.
null
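The "η-trick" duality referenced in the abstract above can be made concrete with a small sketch. Assuming the standard identity $|w| = \min_{\eta>0}\big(\tfrac{w^2}{2\eta} + \tfrac{\eta}{2}\big)$, an $\ell_1$-penalized regression can be solved by alternating a per-coordinate reweighted ridge step with the closed-form update $\eta_j = |w_j|$, and the iterates become sparse. This is a generic illustration of iteratively reweighted optimization, not the paper's adaptive-dropout procedure.

```python
# Minimal sketch (an illustration, not the paper's algorithm): the eta-trick
# writes |w| = min_{eta>0} w^2/(2*eta) + eta/2, so an l1 penalty can be
# minimized by alternating a reweighted ridge step with eta_j = |w_j|.
import numpy as np

def irls_lasso(X, y, lam=1.0, iters=100, eps=1e-8):
    n, d = X.shape
    w = np.zeros(d)
    eta = np.ones(d)
    for _ in range(iters):
        # Reweighted ridge step: min_w 0.5*||Xw - y||^2 + 0.5*lam*sum_j w_j^2/eta_j
        A = X.T @ X + lam * np.diag(1.0 / (eta + eps))
        w = np.linalg.solve(A, X.T @ y)
        eta = np.abs(w)          # closed-form eta update from the eta-trick
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
w_true = np.zeros(20); w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.standard_normal(50)
print(np.round(irls_lasso(X, y), 3))   # most coordinates shrink toward zero
```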
Active Learning of Convex Halfspaces on Graphs
https://papers.nips.cc/paper_files/paper/2021/hash/c4bf1e24f3e6f92ca9dfd9a7a1a1049c-Abstract.html
Maximilian Thiessen, Thomas Gaertner
https://papers.nips.cc/paper_files/paper/2021/hash/c4bf1e24f3e6f92ca9dfd9a7a1a1049c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13416-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c4bf1e24f3e6f92ca9dfd9a7a1a1049c-Paper.pdf
https://openreview.net/forum?id=-__S9T8QrDD
null
We systematically study the query complexity of learning geodesically convex halfspaces on graphs. Geodesic convexity is a natural generalisation of Euclidean convexity and allows the definition of convex sets and halfspaces on graphs. We prove an upper bound on the query complexity linear in the treewidth and the minimum hull set size but only logarithmic in the diameter. We show tight lower bounds along well-established separation axioms and identify the Radon number as a central parameter of the query complexity and the VC dimension. While previous bounds typically depend on the cut size of the labelling, all parameters in our bounds can be computed from the unlabelled graph. We provide evidence that ground-truth communities in real-world graphs are often convex and empirically compare our proposed approach with other active learning algorithms.
null
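For intuition about the geodesic convexity used in the abstract above, the sketch below computes the geodesic convex hull of a vertex set by closing it under shortest-path intervals; testing whether hull(S) == S is one way to check whether a community is convex. This is an illustrative construction built on networkx, not the paper's active learning algorithm.

```python
# Minimal sketch (illustrative, not the paper's method): the geodesic convex
# hull of a vertex set S is the smallest superset closed under shortest-path
# intervals, i.e. containing every vertex on a shortest path between members.
import networkx as nx

def interval(G, u, v, dist):
    """All vertices w on some shortest u-v path: d(u,w) + d(w,v) = d(u,v)."""
    duv = dist[u][v]
    return {w for w in G
            if dist[u].get(w, float("inf")) + dist[w].get(v, float("inf")) == duv}

def geodesic_hull(G, S):
    dist = dict(nx.all_pairs_shortest_path_length(G))
    hull = set(S)
    changed = True
    while changed:
        new = set(hull)
        for u in hull:
            for v in hull:
                new |= interval(G, u, v, dist)
        changed = new != hull
        hull = new
    return hull

G = nx.cycle_graph(6)
print(geodesic_hull(G, {0, 2}))   # {0, 1, 2}: vertex 1 lies on the shortest 0-2 path
```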
Differentiable Spike: Rethinking Gradient-Descent for Training Spiking Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/c4ca4238a0b923820dcc509a6f75849b-Abstract.html
Yuhang Li, Yufei Guo, Shanghang Zhang, Shikuang Deng, Yongqing Hai, Shi Gu
https://papers.nips.cc/paper_files/paper/2021/hash/c4ca4238a0b923820dcc509a6f75849b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13417-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c4ca4238a0b923820dcc509a6f75849b-Paper.pdf
https://openreview.net/forum?id=H4e7mBnC9f0
null
Spiking Neural Networks (SNNs) have emerged as a biology-inspired method mimicking the spiking nature of brain neurons. This bio-mimicry underlies SNNs' energy-efficient inference on neuromorphic hardware. However, it also creates an intrinsic difficulty in training high-performing SNNs from scratch, since the discrete spikes prohibit gradient calculation. To overcome this issue, the surrogate gradient (SG) approach has been proposed as a continuous relaxation. Yet the heuristic choice of SG leaves it unclear how the SG benefits SNN training. In this work, we first theoretically study the gradient descent problem in SNN training and introduce the finite difference gradient to quantitatively analyze the training behavior of SNNs. Based on this finite difference gradient, we propose a new family of Differentiable Spike (Dspike) functions that can adaptively evolve during training to find the optimal shape and smoothness for gradient estimation. Extensive experiments over several popular network structures show that training SNNs with Dspike consistently outperforms state-of-the-art training methods. For example, on the CIFAR10-DVS classification task, we can train a spiking ResNet-18 and achieve 75.4% top-1 accuracy with 10 time steps.
null
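The surrogate gradient (SG) trick that the abstract above builds on can be sketched with a custom autograd function: a hard Heaviside spike in the forward pass and a smooth surrogate derivative in the backward pass. The sigmoid-shaped surrogate and its temperature below are illustrative choices, not the paper's Dspike family, which additionally adapts the surrogate's shape and smoothness during training.

```python
# Minimal sketch (illustrative, not the paper's Dspike definition): a Heaviside
# spike forward, with a smooth surrogate derivative backward.  The temperature
# `b` controls the surrogate's sharpness.
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, membrane_potential, b=3.0):
        ctx.save_for_backward(membrane_potential)
        ctx.b = b
        return (membrane_potential >= 0).float()        # discrete spike

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        s = torch.sigmoid(ctx.b * u)
        # Derivative of a sigmoid-shaped relaxation of the Heaviside step.
        surrogate = ctx.b * s * (1 - s)
        return grad_output * surrogate, None

u = torch.randn(4, requires_grad=True)
spikes = SurrogateSpike.apply(u)
spikes.sum().backward()
print(spikes, u.grad)   # binary spikes forward, smooth surrogate gradients backward
```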
Probabilistic Entity Representation Model for Reasoning over Knowledge Graphs
https://papers.nips.cc/paper_files/paper/2021/hash/c4d2ce3f3ebb5393a77c33c0cd95dc93-Abstract.html
Nurendra Choudhary, Nikhil Rao, Sumeet Katariya, Karthik Subbian, Chandan Reddy
https://papers.nips.cc/paper_files/paper/2021/hash/c4d2ce3f3ebb5393a77c33c0cd95dc93-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13418-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c4d2ce3f3ebb5393a77c33c0cd95dc93-Paper.pdf
https://openreview.net/forum?id=ACV8iBHtbR
https://papers.nips.cc/paper_files/paper/2021/file/c4d2ce3f3ebb5393a77c33c0cd95dc93-Supplemental.pdf
Logical reasoning over Knowledge Graphs (KGs) is a fundamental technique that can provide an efficient querying mechanism over large and incomplete databases. Current approaches employ spatial geometries such as boxes to learn query representations that encompass the answer entities and model the logical operations of projection and intersection. However, their geometry is restrictive and leads to non-smooth strict boundaries, which further results in ambiguous answer entities. Furthermore, previous works propose transformation tricks to handle unions, which results in non-closure and thus prevents chaining in a stream. In this paper, we propose a Probabilistic Entity Representation Model (PERM) that encodes each entity as a multivariate Gaussian density whose mean and covariance parameters capture its semantic position and smooth decision boundary, respectively. Additionally, we define the closed logical operations of projection, intersection, and union, which can be aggregated using an end-to-end objective function. On the logical query reasoning problem, we demonstrate that the proposed PERM significantly outperforms the state-of-the-art methods on various public benchmark KG datasets on standard evaluation metrics. We also evaluate PERM’s competence on a COVID-19 drug-repurposing case study and show that our proposed work is able to recommend drugs with substantially better F1 than current methods. Finally, we demonstrate PERM’s query-answering process through a low-dimensional visualization of the Gaussian representations.
null
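The Gaussian "intersection" operation mentioned in the abstract above admits a natural closed form if one assumes it is implemented as a (renormalized) product of Gaussian densities, whose precision matrices add and whose mean is precision-weighted; the sketch below shows that operation. PERM's exact parameterization may differ, so treat this as an assumption for illustration.

```python
# Minimal sketch (an assumption for illustration, not PERM's exact operator):
# intersecting two Gaussian query densities N(mu1, Sigma1) and N(mu2, Sigma2)
# via the product-of-Gaussians rule.
import numpy as np

def gaussian_intersection(mu1, Sigma1, mu2, Sigma2):
    P1, P2 = np.linalg.inv(Sigma1), np.linalg.inv(Sigma2)   # precision matrices
    Sigma = np.linalg.inv(P1 + P2)                           # precisions add
    mu = Sigma @ (P1 @ mu1 + P2 @ mu2)                       # precision-weighted mean
    return mu, Sigma

mu_a, S_a = np.array([0.0, 0.0]), np.eye(2)
mu_b, S_b = np.array([2.0, 0.0]), 0.25 * np.eye(2)
mu, S = gaussian_intersection(mu_a, S_a, mu_b, S_b)
print(mu)   # pulled toward the tighter (lower-variance) operand: [1.6, 0.0]
```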
Black Box Probabilistic Numerics
https://papers.nips.cc/paper_files/paper/2021/hash/c4de8ced6214345614d33fb0b16a8acd-Abstract.html
Onur Teymur, Christopher Foley, Philip Breen, Toni Karvonen, Chris J. Oates
https://papers.nips.cc/paper_files/paper/2021/hash/c4de8ced6214345614d33fb0b16a8acd-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13419-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c4de8ced6214345614d33fb0b16a8acd-Paper.pdf
https://openreview.net/forum?id=FackmHUDcXX
https://papers.nips.cc/paper_files/paper/2021/file/c4de8ced6214345614d33fb0b16a8acd-Supplemental.pdf
Probabilistic numerics casts numerical tasks, such as the numerical solution of differential equations, as inference problems to be solved. One approach is to model the unknown quantity of interest as a random variable and to constrain this variable using data generated during the course of a traditional numerical method. However, data may be nonlinearly related to the quantity of interest, rendering the proper conditioning of random variables difficult and limiting the range of numerical tasks that can be addressed. Instead, this paper proposes to construct probabilistic numerical methods based only on the final output from a traditional method. A convergent sequence of approximations to the quantity of interest constitutes a dataset, from which the limiting quantity of interest can be extrapolated, in a probabilistic analogue of Richardson’s deferred approach to the limit. This black box approach (1) massively expands the range of tasks to which probabilistic numerics can be applied, (2) inherits the features and performance of state-of-the-art numerical methods, and (3) enables provably higher orders of convergence to be achieved. Applications are presented for nonlinear ordinary and partial differential equations, as well as for eigenvalue problems, a setting for which no probabilistic numerical methods have yet been developed.
null
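The "deferred approach to the limit" referenced in the abstract above is classical Richardson extrapolation; the sketch below applies it to a short sequence of trapezoid-rule estimates. The paper's contribution is a probabilistic, black-box treatment of such sequences, which is not reproduced here.

```python
# Minimal sketch: deterministic Richardson extrapolation of a convergent
# sequence of trapezoid-rule estimates (error ~ C*h^2 for the trapezoid rule).
import numpy as np

f = np.sin
a, b, exact = 0.0, np.pi, 2.0

def trapezoid(n):
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    y = f(x)
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

T_h, T_h2 = trapezoid(8), trapezoid(16)          # step sizes h and h/2
richardson = (4.0 * T_h2 - T_h) / 3.0            # cancels the O(h^2) error term
print(abs(T_h2 - exact), abs(richardson - exact))  # extrapolation is far closer
```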
Interpolation can hurt robust generalization even when there is no noise
https://papers.nips.cc/paper_files/paper/2021/hash/c4f2c88e16a579900657c18726641c81-Abstract.html
Konstantin Donhauser, Alexandru Tifrea, Michael Aerni, Reinhard Heckel, Fanny Yang
https://papers.nips.cc/paper_files/paper/2021/hash/c4f2c88e16a579900657c18726641c81-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13420-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c4f2c88e16a579900657c18726641c81-Paper.pdf
https://openreview.net/forum?id=nNfj0pVn4Q
https://papers.nips.cc/paper_files/paper/2021/file/c4f2c88e16a579900657c18726641c81-Supplemental.pdf
Numerous recent works show that overparameterization implicitly reduces variance for min-norm interpolators and max-margin classifiers. These findings suggest that ridge regularization has vanishing benefits in high dimensions. We challenge this narrative by showing that, even in the absence of noise, avoiding interpolation through ridge regularization can significantly improve generalization. We prove this phenomenon for the robust risk of both linear regression and classification, and hence provide the first theoretical result on \emph{robust overfitting}.
null
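As a toy companion to the abstract above, the sketch below fits ridge regression at several strengths on noiseless overparameterized data and evaluates an $\ell_\infty$-robust squared error, for which a linear predictor has the closed form $(|\theta^\top x - y| + \epsilon\|\theta\|_1)^2$. This is a generic adversarial-perturbation model, not necessarily the paper's "consistent" perturbation setting, and it is meant only to show how the robust risk can be probed as a function of the ridge strength.

```python
# Toy sketch (illustrative assumptions, not the paper's construction): noiseless
# overparameterized linear regression; the l_inf-robust squared error of a
# linear predictor has the closed form (|theta^T x - y| + eps*||theta||_1)^2.
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 40, 200, 0.05
theta_star = np.zeros(d); theta_star[:5] = 1.0
X, Xte = rng.standard_normal((n, d)), rng.standard_normal((2000, d))
y, yte = X @ theta_star, Xte @ theta_star          # no label noise

def ridge(lam):
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def robust_risk(theta):
    worst_case = np.abs(Xte @ theta - yte) + eps * np.abs(theta).sum()
    return np.mean(worst_case ** 2)

# Tiny lambda approximates the min-norm interpolator; larger values avoid interpolation.
for lam in [1e-6, 1e-2, 1.0, 10.0]:
    print(lam, round(robust_risk(ridge(lam)), 4))
```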
On the Equivalence between Neural Network and Support Vector Machine
https://papers.nips.cc/paper_files/paper/2021/hash/c559da2ba967eb820766939a658022c8-Abstract.html
Yilan Chen, Wei Huang, Lam Nguyen, Tsui-Wei Weng
https://papers.nips.cc/paper_files/paper/2021/hash/c559da2ba967eb820766939a658022c8-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13421-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c559da2ba967eb820766939a658022c8-Paper.pdf
https://openreview.net/forum?id=npUxA--_nyX
https://papers.nips.cc/paper_files/paper/2021/file/c559da2ba967eb820766939a658022c8-Supplemental.pdf
Recent research shows that the dynamics of an infinitely wide neural network (NN) trained by gradient descent can be characterized by Neural Tangent Kernel (NTK) \citep{jacot2018neural}. Under the squared loss, the infinite-width NN trained by gradient descent with an infinitely small learning rate is equivalent to kernel regression with NTK \citep{arora2019exact}. However, the equivalence is only known for ridge regression currently \citep{arora2019harnessing}, while the equivalence between NN and other kernel machines (KMs), e.g. support vector machine (SVM), remains unknown. Therefore, in this work, we propose to establish the equivalence between NN and SVM, and specifically, the infinitely wide NN trained by soft margin loss and the standard soft margin SVM with NTK trained by subgradient descent. Our main theoretical results include establishing the equivalence between NN and a broad family of $\ell_2$ regularized KMs with finite-width bounds, which cannot be handled by prior work, and showing that every finite-width NN trained by such regularized loss functions is approximately a KM. Furthermore, we demonstrate our theory can enable three practical applications, including (i) \textit{non-vacuous} generalization bound of NN via the corresponding KM; (ii) \textit{nontrivial} robustness certificate for the infinite-width NN (while existing robustness verification methods would provide vacuous bounds); (iii) intrinsically more robust infinite-width NNs than those from previous kernel regression.
null
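A finite-width, empirical version of the NN-as-kernel-machine correspondence in the abstract above can be sketched as follows: compute the empirical NTK Gram matrix from a small network's parameter Jacobians and feed it to a standard soft-margin SVM with a precomputed kernel. This is a finite-width approximation for intuition, not the paper's infinite-width equivalence.

```python
# Minimal sketch (finite-width approximation, not the paper's result): the
# empirical NTK is K(x, x') = <J(x), J(x')>, where J is the gradient of the
# scalar network output w.r.t. all parameters.
import numpy as np
import torch
from sklearn.svm import SVC

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))

def param_jacobian(x):
    net.zero_grad()
    out = net(torch.tensor(x, dtype=torch.float32)).squeeze()
    out.backward()
    return torch.cat([p.grad.flatten() for p in net.parameters()]).numpy()

X = np.random.default_rng(0).standard_normal((40, 2))
y = np.sign(X[:, 0] * X[:, 1])                       # simple XOR-like labels
J = np.stack([param_jacobian(x) for x in X])
K = J @ J.T                                           # empirical NTK Gram matrix

svm = SVC(kernel="precomputed", C=1.0).fit(K, y)      # soft-margin SVM on the NTK
print("train accuracy:", svm.score(K, y))
```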
Learning Semantic Representations to Verify Hardware Designs
https://papers.nips.cc/paper_files/paper/2021/hash/c5aa65949d20f6b20e1a922c13d974e7-Abstract.html
Shobha Vasudevan, Wenjie (Joe) Jiang, David Bieber, Rishabh Singh, hamid shojaei, C. Richard Ho, Charles Sutton
https://papers.nips.cc/paper_files/paper/2021/hash/c5aa65949d20f6b20e1a922c13d974e7-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13422-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c5aa65949d20f6b20e1a922c13d974e7-Paper.pdf
https://openreview.net/forum?id=oIhzg4GJeOf
https://papers.nips.cc/paper_files/paper/2021/file/c5aa65949d20f6b20e1a922c13d974e7-Supplemental.pdf
Verification is a serious bottleneck in the industrial hardware design cycle, routinely requiring person-years of effort. Practical verification relies on a "best effort" process that simulates the design on test inputs. This suggests a new research question: Can this simulation data be exploited to learn a continuous representation of a hardware design that allows us to predict its functionality? As a first approach to this new problem, we introduce Design2Vec, a deep architecture that learns semantic abstractions of hardware designs. The key idea is to work at a higher level of abstraction than the gate or the bit level, namely the Register Transfer Level (RTL), which is somewhat analogous to software source code, and can be represented by a graph that incorporates control and data flow. This allows us to learn representations of RTL syntax and semantics using a graph neural network. We apply these representations to several tasks within verification, including predicting what cover points of the design will be exercised by a test, and generating new tests that will exercise desired cover points. We evaluate Design2Vec on three real-world hardware designs, including an industrial chip used in commercial data centers. Our results demonstrate that Design2Vec dramatically outperforms baseline approaches that do not incorporate the RTL semantics, scales to industrial designs, and can generate tests that exercise design points that are currently hard to cover with manually written tests by design verification experts.
null
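To make the prediction task in the abstract above concrete, here is a tiny message-passing sketch over an RTL control/data-flow graph that scores whether a given test will exercise a chosen cover point. The architecture, feature dimensions, and names are illustrative assumptions, not the Design2Vec model.

```python
# Minimal sketch (illustrative only, not the Design2Vec architecture): one
# message-passing layer over a control/data-flow graph, plus a head scoring
# whether a (test, cover point) pair will be hit.
import torch
import torch.nn as nn

class TinyRTLGNN(nn.Module):
    def __init__(self, node_dim, test_dim, hidden=64):
        super().__init__()
        self.msg = nn.Linear(node_dim, hidden)
        self.upd = nn.Linear(node_dim + hidden, hidden)
        self.head = nn.Sequential(nn.Linear(hidden + test_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))

    def forward(self, node_feats, adj, cover_idx, test_feats):
        # node_feats: (N, node_dim); adj: (N, N) normalized adjacency;
        # cover_idx: index of the cover-point node; test_feats: (test_dim,)
        messages = adj @ torch.relu(self.msg(node_feats))         # aggregate neighbors
        h = torch.relu(self.upd(torch.cat([node_feats, messages], dim=-1)))
        pooled = h[cover_idx]                                      # cover-point embedding
        logit = self.head(torch.cat([pooled, test_feats], dim=-1))
        return torch.sigmoid(logit)                                # P(cover point hit)

model = TinyRTLGNN(node_dim=8, test_dim=4)
N = 10
out = model(torch.randn(N, 8), torch.eye(N), cover_idx=3, test_feats=torch.randn(4))
print(out)   # probability that the test exercises the chosen cover point
```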
Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training
https://papers.nips.cc/paper_files/paper/2021/hash/c5ab6cebaca97f7171139e4d414ff5a6-Abstract.html
Minguk Kang, Woohyeon Shim, Minsu Cho, Jaesik Park
https://papers.nips.cc/paper_files/paper/2021/hash/c5ab6cebaca97f7171139e4d414ff5a6-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/13423-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/c5ab6cebaca97f7171139e4d414ff5a6-Paper.pdf
https://openreview.net/forum?id=Ja-hVQrfeGZ
null
Conditional Generative Adversarial Networks (cGAN) generate realistic images by incorporating class information into GAN. While one of the most popular cGANs is an auxiliary classifier GAN with softmax cross-entropy loss (ACGAN), it is widely known that training ACGAN is challenging as the number of classes in the dataset increases. ACGAN also tends to generate easily classifiable samples with a lack of diversity. In this paper, we introduce two cures for ACGAN. First, we identify that gradient exploding in the classifier can cause an undesirable collapse in early training, and projecting input vectors onto a unit hypersphere can resolve the problem. Second, we propose the Data-to-Data Cross-Entropy loss (D2D-CE) to exploit relational information in the class-labeled dataset. On this foundation, we propose the Rebooted Auxiliary Classifier Generative Adversarial Network (ReACGAN). The experimental results show that ReACGAN achieves state-of-the-art generation results on CIFAR10, Tiny-ImageNet, CUB200, and ImageNet datasets. We also verify that ReACGAN benefits from differentiable augmentations and that D2D-CE harmonizes with StyleGAN2 architecture. Model weights and a software package that provides implementations of representative cGANs and all experiments in our paper are available at https://github.com/POSTECH-CVLab/PyTorch-StudioGAN.
null
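The two fixes named in the abstract above can be sketched jointly: project embeddings onto the unit hypersphere, then apply a data-to-data contrastive cross-entropy in which each sample is pulled toward its class proxy and pushed away from samples of other classes. This follows the spirit of D2D-CE but is not the exact published loss; see the linked StudioGAN repository for the reference implementation.

```python
# Minimal sketch (in the spirit of D2D-CE, not the exact published loss):
# unit-hypersphere projection plus a contrastive cross-entropy with
# sample-to-proxy positives and data-to-data (different-class) negatives.
import torch
import torch.nn.functional as F

def d2d_ce_like_loss(features, labels, class_proxies, temperature=0.25):
    f = F.normalize(features, dim=1)                 # (N, D) on the unit hypersphere
    p = F.normalize(class_proxies, dim=1)            # (C, D)
    pos = (f * p[labels]).sum(dim=1, keepdim=True)            # (N, 1) sample-to-proxy
    sample_sim = f @ f.t()                                     # (N, N) data-to-data
    diff_class = labels.unsqueeze(0) != labels.unsqueeze(1)    # negatives mask
    neg = sample_sim.masked_fill(~diff_class, float("-inf"))   # keep only negatives
    logits = torch.cat([pos, neg], dim=1) / temperature
    target = torch.zeros(f.size(0), dtype=torch.long)          # positive is column 0
    return F.cross_entropy(logits, target)

features = torch.randn(8, 16)
labels = torch.randint(0, 4, (8,))
proxies = torch.randn(4, 16)
print(d2d_ce_like_loss(features, labels, proxies))
```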