Fields per record: title, url, authors, detail_url, tags, Bibtex, Paper, Reviews And Public Comment, Supplemental, abstract, Supplemental Errata.
Rectifying the Shortcut Learning of Background for Few-Shot Learning
https://papers.nips.cc/paper_files/paper/2021/hash/6cfe0e6127fa25df2a0ef2ae1067d915-Abstract.html
Xu Luo, Longhui Wei, Liangjian Wen, Jinrong Yang, Lingxi Xie, Zenglin Xu, Qi Tian
https://papers.nips.cc/paper_files/paper/2021/hash/6cfe0e6127fa25df2a0ef2ae1067d915-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12624-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6cfe0e6127fa25df2a0ef2ae1067d915-Paper.pdf
https://openreview.net/forum?id=N1i6BJzouX4
https://papers.nips.cc/paper_files/paper/2021/file/6cfe0e6127fa25df2a0ef2ae1067d915-Supplemental.pdf
The category gap between training and evaluation has been characterised as one of the main obstacles to the success of Few-Shot Learning (FSL). In this paper, we empirically identify, for the first time, image background, common in realistic images, as shortcut knowledge that helps in-class classification but does not generalize beyond the training categories in FSL. A novel framework, COSOC, is designed to tackle this problem by extracting foreground objects in images at both training and evaluation without any extra supervision. Extensive experiments carried out on inductive FSL tasks demonstrate the effectiveness of our approach.
null
SEAL: Self-supervised Embodied Active Learning using Exploration and 3D Consistency
https://papers.nips.cc/paper_files/paper/2021/hash/6d0c932802f6953f70eb20931645fa40-Abstract.html
Devendra Singh Chaplot, Murtaza Dalal, Saurabh Gupta, Jitendra Malik, Russ R. Salakhutdinov
https://papers.nips.cc/paper_files/paper/2021/hash/6d0c932802f6953f70eb20931645fa40-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12625-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6d0c932802f6953f70eb20931645fa40-Paper.pdf
https://openreview.net/forum?id=guHXB1dcD3l
https://papers.nips.cc/paper_files/paper/2021/file/6d0c932802f6953f70eb20931645fa40-Supplemental.pdf
In this paper, we explore how we can build upon the data and models of Internet images and use them to adapt to robot vision without requiring any extra labels. We present a framework called Self-supervised Embodied Active Learning (SEAL). It utilizes perception models trained on internet images to learn an active exploration policy. The observations gathered by this exploration policy are labelled using 3D consistency and used to improve the perception model. We build and utilize 3D semantic maps to learn both action and perception in a completely self-supervised manner. The semantic map is used to compute an intrinsic motivation reward for training the exploration policy and for labelling the agent observations using spatio-temporal 3D consistency and label propagation. We demonstrate that the SEAL framework can be used to close the action-perception loop: it improves object detection and instance segmentation performance of a pretrained perception model by just moving around in training environments and the improved perception model can be used to improve Object Goal Navigation.
null
Sifting through the noise: Universal first-order methods for stochastic variational inequalities
https://papers.nips.cc/paper_files/paper/2021/hash/6d65b5ac1f4ec80b9a7309311f4f9b13-Abstract.html
Kimon Antonakopoulos, Thomas Pethick, Ali Kavis, Panayotis Mertikopoulos, Volkan Cevher
https://papers.nips.cc/paper_files/paper/2021/hash/6d65b5ac1f4ec80b9a7309311f4f9b13-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12626-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6d65b5ac1f4ec80b9a7309311f4f9b13-Paper.pdf
https://openreview.net/forum?id=VsUQQkpEXgr
https://papers.nips.cc/paper_files/paper/2021/file/6d65b5ac1f4ec80b9a7309311f4f9b13-Supplemental.pdf
We examine a flexible algorithmic framework for solving monotone variational inequalities in the presence of randomness and uncertainty. The proposed template encompasses a wide range of popular first-order methods, including dual averaging, dual extrapolation and optimistic gradient algorithms – both adaptive and non-adaptive. Our first result is that the algorithm achieves the optimal rates of convergence for cocoercive problems when the profile of the randomness is known to the optimizer: $\mathcal{O}(1/\sqrt{T})$ for absolute noise profiles, and $\mathcal{O}(1/T)$ for relative ones. Subsequently, we drop all prior knowledge requirements (the absolute/relative variance of the randomness affecting the problem, the operator's cocoercivity constant, etc.), and we analyze an adaptive instance of the method that gracefully interpolates between the above rates – i.e. it achieves $\mathcal{O}(1/\sqrt{T})$ and $\mathcal{O}(1/T)$ in the absolute and relative cases, respectively. To our knowledge, this is the first universality result of its kind in the literature and, somewhat surprisingly, it shows that an extra-gradient proxy step is not required to achieve optimal rates.
null
Accommodating Picky Customers: Regret Bound and Exploration Complexity for Multi-Objective Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2021/hash/6d7d394c9d0c886e9247542e06ebb705-Abstract.html
Jingfeng Wu, Vladimir Braverman, Lin Yang
https://papers.nips.cc/paper_files/paper/2021/hash/6d7d394c9d0c886e9247542e06ebb705-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12627-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6d7d394c9d0c886e9247542e06ebb705-Paper.pdf
https://openreview.net/forum?id=xdk17QJpf5q
https://papers.nips.cc/paper_files/paper/2021/file/6d7d394c9d0c886e9247542e06ebb705-Supplemental.pdf
In this paper, we consider multi-objective reinforcement learning where the objectives are balanced using preferences. In practice, the preferences are often given in an adversarial manner, e.g., customers can be picky in many applications. We formalize this problem as an episodic learning problem on a Markov decision process, where transitions are unknown and a reward function is the inner product of a preference vector with pre-specified multi-objective reward functions. We consider two settings. In the online setting, the agent receives an (adversarial) preference every episode and proposes policies to interact with the environment. We provide a model-based algorithm that achieves a nearly minimax optimal regret bound $\widetilde{\mathcal{O}}\bigl(\sqrt{\min\{d,S\}\cdot H^2 SAK}\bigr)$, where $d$ is the number of objectives, $S$ is the number of states, $A$ is the number of actions, $H$ is the length of the horizon, and $K$ is the number of episodes. Furthermore, we consider preference-free exploration, i.e., the agent first interacts with the environment without specifying any preference and then is able to accommodate an arbitrary preference vector up to $\epsilon$ error. Our proposed algorithm is provably efficient with a nearly optimal trajectory complexity $\widetilde{\mathcal{O}}\bigl({\min\{d,S\}\cdot H^3 SA}/{\epsilon^2}\bigr)$. This result partly resolves an open problem raised by \citet{jin2020reward}.
null
Exact Privacy Guarantees for Markov Chain Implementations of the Exponential Mechanism with Artificial Atoms
https://papers.nips.cc/paper_files/paper/2021/hash/6d96718a701f5bfba283bbdc71dfa5c4-Abstract.html
Jeremy Seeman, Matthew Reimherr, Aleksandra Slavković
https://papers.nips.cc/paper_files/paper/2021/hash/6d96718a701f5bfba283bbdc71dfa5c4-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12628-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6d96718a701f5bfba283bbdc71dfa5c4-Paper.pdf
https://openreview.net/forum?id=SbGpYmQHlS8
https://papers.nips.cc/paper_files/paper/2021/file/6d96718a701f5bfba283bbdc71dfa5c4-Supplemental.pdf
Implementations of the exponential mechanism in differential privacy often require sampling from intractable distributions. When approximate procedures like Markov chain Monte Carlo (MCMC) are used, the end result incurs costs to both privacy and accuracy. Existing work has examined these effects asymptotically, but implementable finite sample results are needed in practice so that users can specify privacy budgets in advance and implement samplers with exact privacy guarantees. In this paper, we use tools from ergodic theory and perfect simulation to design exact finite runtime sampling algorithms for the exponential mechanism by introducing an intermediate modified target distribution using artificial atoms. We propose an additional modification of this sampling algorithm that maintains its $\epsilon$-DP guarantee and has improved runtime at the cost of some utility. We then compare these methods in scenarios where we can explicitly calculate a $\delta$ cost (as in $(\epsilon, \delta)$-DP) incurred when using standard MCMC techniques. Much as there is a well known trade-off between privacy and utility, we demonstrate that there is also a trade-off between privacy guarantees and runtime.
null
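The abstract assumes familiarity with the exponential mechanism; for reference, its standard definition (McSherry and Talwar) is the distribution the paper's samplers target. The notation ($u$, $\Delta u$) below is chosen for this note rather than taken from the paper.

```latex
% Exponential mechanism: given a utility function u(x, r) over candidate
% outputs r, with sensitivity \Delta u, release r with probability
\Pr[\mathcal{M}(x) = r] \;\propto\; \exp\!\left(\frac{\varepsilon\, u(x, r)}{2\,\Delta u}\right),
% which satisfies \varepsilon-differential privacy. Sampling from this
% (often intractable) distribution exactly, in finite runtime, is what the
% paper's artificial-atom construction addresses.
```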
The Emergence of Objectness: Learning Zero-shot Segmentation from Videos
https://papers.nips.cc/paper_files/paper/2021/hash/6d9cb7de5e8ac30bd5e8734bc96a35c1-Abstract.html
Runtao Liu, Zhirong Wu, Stella Yu, Stephen Lin
https://papers.nips.cc/paper_files/paper/2021/hash/6d9cb7de5e8ac30bd5e8734bc96a35c1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12629-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6d9cb7de5e8ac30bd5e8734bc96a35c1-Paper.pdf
https://openreview.net/forum?id=grfI7Rnv5P
https://papers.nips.cc/paper_files/paper/2021/file/6d9cb7de5e8ac30bd5e8734bc96a35c1-Supplemental.zip
Humans can easily detect and segment moving objects simply by observing how they move, even without knowledge of object semantics. Inspired by this, we develop a zero-shot unsupervised approach for learning object segmentations. The model comprises two visual pathways: an appearance pathway that segments individual RGB images into coherent object regions, and a motion pathway that predicts the flow vector for each region between consecutive video frames. The two pathways jointly reconstruct a new representation called segment flow. This decoupled representation of appearance and motion is trained in a self-supervised manner to reconstruct one frame from another. When pretrained on an unlabeled video corpus, the model can be useful for a variety of applications, including 1) primary object segmentation from a single image in a zero-shot fashion; 2) moving object segmentation from a video with unsupervised test-time adaptation; 3) image semantic segmentation by supervised fine-tuning on a labeled image dataset. We demonstrate encouraging experimental results on all of these tasks using pretrained models.
null
Direct Multi-view Multi-person 3D Pose Estimation
https://papers.nips.cc/paper_files/paper/2021/hash/6da9003b743b65f4c0ccd295cc484e57-Abstract.html
tao wang, Jianfeng Zhang, Yujun Cai, Shuicheng Yan, Jiashi Feng
https://papers.nips.cc/paper_files/paper/2021/hash/6da9003b743b65f4c0ccd295cc484e57-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12630-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6da9003b743b65f4c0ccd295cc484e57-Paper.pdf
https://openreview.net/forum?id=rG2ponW2Si
https://papers.nips.cc/paper_files/paper/2021/file/6da9003b743b65f4c0ccd295cc484e57-Supplemental.pdf
We present Multi-view Pose transformer (MvP) for estimating multi-person 3D poses from multi-view images. Instead of estimating 3D joint locations from costly volumetric representation or reconstructing the per-person 3D pose from multiple detected 2D poses as in previous methods, MvP directly regresses the multi-person 3D poses in a clean and efficient way, without relying on intermediate tasks. Specifically, MvP represents skeleton joints as learnable query embeddings and lets them progressively attend to and reason over the multi-view information from the input images to directly regress the actual 3D joint locations. To improve the accuracy of such a simple pipeline, MvP presents a hierarchical scheme to concisely represent query embeddings of multi-person skeleton joints and introduces an input-dependent query adaptation approach. Further, MvP designs a novel geometrically guided attention mechanism, called projective attention, to more precisely fuse the cross-view information for each joint. MvP also introduces a RayConv operation to integrate the view-dependent camera geometry into the feature representations for augmenting the projective attention. We show experimentally that our MvP model outperforms the state-of-the-art methods on several benchmarks while being much more efficient. Notably, it achieves 92.3% AP25 on the challenging Panoptic dataset, improving upon the previous best approach [35] by 9.8%. MvP is general and also extendable to recovering human mesh represented by the SMPL model, thus useful for modeling multi-person body shapes. Code and models are available at https://github.com/sail-sg/mvp.
null
MST: Masked Self-Supervised Transformer for Visual Representation
https://papers.nips.cc/paper_files/paper/2021/hash/6dbbe6abe5f14af882ff977fc3f35501-Abstract.html
Zhaowen Li, Zhiyang Chen, Fan Yang, Wei Li, Yousong Zhu, Chaoyang Zhao, Rui Deng, Liwei Wu, Rui Zhao, Ming Tang, Jinqiao Wang
https://papers.nips.cc/paper_files/paper/2021/hash/6dbbe6abe5f14af882ff977fc3f35501-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12631-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6dbbe6abe5f14af882ff977fc3f35501-Paper.pdf
https://openreview.net/forum?id=y_OmkmCH9w
https://papers.nips.cc/paper_files/paper/2021/file/6dbbe6abe5f14af882ff977fc3f35501-Supplemental.zip
Transformer has been widely used for self-supervised pre-training in Natural Language Processing (NLP) and achieved great success. However, it has not been fully explored in visual self-supervised learning. Meanwhile, previous methods only consider high-level features and learn representations from a global perspective, which may fail to transfer to the downstream dense prediction tasks focusing on local features. In this paper, we present a novel Masked Self-supervised Transformer approach named MST, which can explicitly capture the local context of an image while preserving the global semantic information. Specifically, inspired by the Masked Language Modeling (MLM) in NLP, we propose a masked token strategy based on the multi-head self-attention map, which dynamically masks some tokens of local patches without damaging the crucial structure for self-supervised learning. More importantly, the masked tokens together with the remaining tokens are further recovered by a global image decoder, which preserves the spatial information of the image and is more friendly to the downstream dense prediction tasks. The experiments on multiple datasets demonstrate the effectiveness and generality of the proposed method. For instance, MST achieves Top-1 accuracy of 76.9% with DeiT-S under linear evaluation using only 300-epoch pre-training, which outperforms supervised methods trained for the same number of epochs by 0.4% and its comparable variant DINO by 1.0%. For dense prediction tasks, MST also achieves 42.7% mAP on MS COCO object detection and 74.04% mIoU on Cityscapes segmentation with only 100-epoch pre-training.
null
Exploiting Opponents Under Utility Constraints in Sequential Games
https://papers.nips.cc/paper_files/paper/2021/hash/6de0f2761a44ff1e2ca60131058d8297-Abstract.html
Martino Bernasconi-de-Luca, Federico Cacciamani, Simone Fioravanti, Nicola Gatti, Alberto Marchesi, Francesco Trovò
https://papers.nips.cc/paper_files/paper/2021/hash/6de0f2761a44ff1e2ca60131058d8297-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12632-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6de0f2761a44ff1e2ca60131058d8297-Paper.pdf
https://openreview.net/forum?id=m72s2rDrm3G
https://papers.nips.cc/paper_files/paper/2021/file/6de0f2761a44ff1e2ca60131058d8297-Supplemental.pdf
Recently, game-playing agents based on AI techniques have demonstrated super-human performance in several sequential games, such as chess, Go, and poker. Surprisingly, the multi-agent learning techniques that made these achievements possible do not take into account the actual behavior of the human player, potentially leaving a substantial gap in performance. In this paper, we address the problem of designing artificial agents that learn how to effectively exploit unknown human opponents while playing repeatedly against them in an online fashion. We study the case in which the agent's strategy during each repetition of the game is subject to constraints ensuring that the human's expected utility is within some lower and upper thresholds. Our framework encompasses several real-world problems, such as human engagement in repeated game playing and human education by means of serious games. As a first result, we formalize a set of linear inequalities encoding the conditions that the agent's strategy must satisfy at each iteration in order not to violate the given bounds for the human's expected utility. Then, we use this formulation in an upper confidence bound algorithm, and we prove that the resulting procedure suffers sublinear regret and guarantees that the constraints are satisfied with high probability at each iteration. Finally, we empirically evaluate the convergence of our algorithm on standard testbeds of sequential games.
null
A Compositional Atlas of Tractable Circuit Operations for Probabilistic Inference
https://papers.nips.cc/paper_files/paper/2021/hash/6e01383fd96a17ae51cc3e15447e7533-Abstract.html
Antonio Vergari, YooJung Choi, Anji Liu, Stefano Teso, Guy Van den Broeck
https://papers.nips.cc/paper_files/paper/2021/hash/6e01383fd96a17ae51cc3e15447e7533-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12633-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6e01383fd96a17ae51cc3e15447e7533-Paper.pdf
https://openreview.net/forum?id=9SD2Rb3NiWu
https://papers.nips.cc/paper_files/paper/2021/file/6e01383fd96a17ae51cc3e15447e7533-Supplemental.pdf
Circuit representations are becoming the lingua franca to express and reason about tractable generative and discriminative models. In this paper, we show how complex inference scenarios for these models that commonly arise in machine learning---from computing the expectations of decision tree ensembles to information-theoretic divergences of sum-product networks---can be represented in terms of tractable modular operations over circuits. Specifically, we characterize the tractability of simple transformations---sums, products, quotients, powers, logarithms, and exponentials---in terms of sufficient structural constraints of the circuits they operate on, and present novel hardness results for the cases in which these properties are not satisfied. Building on these operations, we derive a unified framework for reasoning about tractable models that generalizes several results in the literature and opens up novel tractable inference scenarios.
null
Demystifying and Generalizing BinaryConnect
https://papers.nips.cc/paper_files/paper/2021/hash/6e0cf80a83327822a972bcde3c1d9740-Abstract.html
Tim Dockhorn, Yaoliang Yu, Eyyüb Sari, Mahdi Zolnouri, Vahid Partovi Nia
https://papers.nips.cc/paper_files/paper/2021/hash/6e0cf80a83327822a972bcde3c1d9740-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12634-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6e0cf80a83327822a972bcde3c1d9740-Paper.pdf
https://openreview.net/forum?id=FyaSaEbNm1W
https://papers.nips.cc/paper_files/paper/2021/file/6e0cf80a83327822a972bcde3c1d9740-Supplemental.pdf
BinaryConnect (BC) and its many variations have become the de facto standard for neural network quantization. However, our understanding of the inner workings of BC is still quite limited. We attempt to close this gap in four different aspects: (a) we show that existing quantization algorithms, including post-training quantization, are surprisingly similar to each other; (b) we argue for proximal maps as a natural family of quantizers that is both easy to design and analyze; (c) we refine the observation that BC is a special case of dual averaging, which itself is a special case of the generalized conditional gradient algorithm; (d) consequently, we propose ProxConnect (PC) as a generalization of BC and we prove its convergence properties by exploiting the established connections. We conduct experiments on CIFAR-10 and ImageNet, and verify that PC achieves competitive performance.
null
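For context, the BinaryConnect scheme that the abstract generalizes can be sketched in a few lines of numpy: binarize the weights for the forward/backward pass, but apply the gradient to the underlying real-valued weights. The function and parameter names below are illustrative assumptions, not the paper's code; ProxConnect replaces the sign function with a more general proximal map.

```python
import numpy as np

def binaryconnect_step(w_real, grad_fn, lr=0.1):
    """One BinaryConnect-style update: the network is evaluated with the
    binarized weights sign(w), while the gradient is accumulated in the
    latent full-precision weights."""
    w_bin = np.sign(w_real)    # binary weights used in the forward/backward pass
    g = grad_fn(w_bin)         # gradient of the loss, evaluated at the quantized point
    return w_real - lr * g     # update the real-valued weights
```

Because the binarized iterate depends only on the running sum of past gradients held in the real-valued weights, this is the dual-averaging view the abstract refers to.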
CARMS: Categorical-Antithetic-REINFORCE Multi-Sample Gradient Estimator
https://papers.nips.cc/paper_files/paper/2021/hash/6e16656a6ee1de7232164767ccfa7920-Abstract.html
Alek Dimitriev, Mingyuan Zhou
https://papers.nips.cc/paper_files/paper/2021/hash/6e16656a6ee1de7232164767ccfa7920-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12635-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6e16656a6ee1de7232164767ccfa7920-Paper.pdf
https://openreview.net/forum?id=kwOjbvNyM-K
https://papers.nips.cc/paper_files/paper/2021/file/6e16656a6ee1de7232164767ccfa7920-Supplemental.pdf
Accurately backpropagating the gradient through categorical variables is a challenging task that arises in various domains, such as training discrete latent variable models. To this end, we propose CARMS, an unbiased estimator for categorical random variables based on multiple mutually negatively correlated (jointly antithetic) samples. CARMS combines REINFORCE with copula based sampling to avoid duplicate samples and reduce its variance, while keeping the estimator unbiased using importance sampling. It generalizes both the ARMS antithetic estimator for binary variables, which is CARMS for two categories, as well as LOORF/VarGrad, the leave-one-out REINFORCE estimator, which is CARMS with independent samples. We evaluate CARMS on several benchmark datasets on a generative modeling task, as well as a structured output prediction task, and find it to outperform competing methods including a strong self-control baseline. The code is publicly available.
null
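The leave-one-out REINFORCE estimator (LOORF/VarGrad) that the abstract identifies as the independent-sample special case of CARMS can be written down directly; the categorical-with-logits parameterization and the names below are assumptions of this sketch.

```python
import numpy as np

def loorf_grad(logits, f, K=8, rng=None):
    """Leave-one-out REINFORCE estimate of d/d(logits) E_{z~Cat(softmax(logits))}[f(z)],
    using K independent samples; each sample's baseline is the mean of f over the
    other K-1 samples, which keeps the estimator unbiased."""
    rng = np.random.default_rng(0) if rng is None else rng
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    C = len(probs)
    z = rng.choice(C, size=K, p=probs)               # K independent categorical samples
    fz = np.array([f(zk) for zk in z], dtype=float)
    baseline = (fz.sum() - fz) / (K - 1)             # leave-one-out mean for each sample
    score = np.eye(C)[z] - probs                     # d log q(z)/d(logits), shape (K, C)
    return ((fz - baseline)[:, None] * score).mean(axis=0)
```

CARMS replaces the independent samples with mutually negatively correlated (jointly antithetic) samples drawn via copula-based sampling, which is where its additional variance reduction comes from.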
Learning to Learn Dense Gaussian Processes for Few-Shot Learning
https://papers.nips.cc/paper_files/paper/2021/hash/6e2713a6efee97bacb63e52c54f0ada0-Abstract.html
Ze Wang, Zichen Miao, Xiantong Zhen, Qiang Qiu
https://papers.nips.cc/paper_files/paper/2021/hash/6e2713a6efee97bacb63e52c54f0ada0-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12636-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6e2713a6efee97bacb63e52c54f0ada0-Paper.pdf
https://openreview.net/forum?id=6p2jG0FJ5j
https://papers.nips.cc/paper_files/paper/2021/file/6e2713a6efee97bacb63e52c54f0ada0-Supplemental.pdf
Gaussian processes with deep neural networks have proven to be strong learners for few-shot learning, since they combine the strengths of deep learning and kernels while capturing uncertainty well. However, it remains an open problem to leverage the shared knowledge provided by related tasks. In this paper, we propose to learn Gaussian processes with dense inducing variables by meta-learning for few-shot learning. In contrast to sparse Gaussian processes, we define a set of dense inducing variables to be of a much larger size than the support set in each task, which collects prior knowledge from experienced tasks. The dense inducing variables specify a shared Gaussian process prior over prediction functions of all tasks, which are learned in a variational inference framework and offer a strong inductive bias for learning new tasks. To achieve task-specific prediction functions, we propose to adapt the inducing variables to each task by efficient gradient descent. We conduct extensive experiments on common benchmark datasets for a variety of few-shot learning tasks. Our dense Gaussian processes present significant improvements over vanilla Gaussian processes and achieve comparable or even better performance than state-of-the-art methods.
null
Stochastic Solutions for Linear Inverse Problems using the Prior Implicit in a Denoiser
https://papers.nips.cc/paper_files/paper/2021/hash/6e28943943dbed3c7f82fc05f269947a-Abstract.html
Zahra Kadkhodaie, Eero Simoncelli
https://papers.nips.cc/paper_files/paper/2021/hash/6e28943943dbed3c7f82fc05f269947a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12637-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6e28943943dbed3c7f82fc05f269947a-Paper.pdf
https://openreview.net/forum?id=x5hh6N9bUUb
https://papers.nips.cc/paper_files/paper/2021/file/6e28943943dbed3c7f82fc05f269947a-Supplemental.pdf
Deep neural networks have provided state-of-the-art solutions for problems such as image denoising, which implicitly rely on a prior probability model of natural images. Two recent lines of work – Denoising Score Matching and Plug-and-Play – propose methodologies for drawing samples from this implicit prior and using it to solve inverse problems, respectively. Here, we develop a parsimonious and robust generalization of these ideas. We rely on a classic statistical result that shows the least-squares solution for removing additive Gaussian noise can be written directly in terms of the gradient of the log of the noisy signal density. We use this to derive a stochastic coarse-to-fine gradient ascent procedure for drawing high-probability samples from the implicit prior embedded within a CNN trained to perform blind denoising. A generalization of this algorithm to constrained sampling provides a method for using the implicit prior to solve any deterministic linear inverse problem, with no additional training, thus extending the power of supervised learning for denoising to a much broader set of problems. The algorithm relies on minimal assumptions and exhibits robust convergence over a wide range of parameter choices. To demonstrate the generality of our method, we use it to obtain state-of-the-art levels of unsupervised performance for deblurring, super-resolution, and compressive sensing.
null
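The "classic statistical result" the abstract relies on is the Miyasawa/Tweedie identity relating the least-squares denoiser to the score of the noisy density; it is stated here for reference, with the notation chosen for this note rather than taken from the paper.

```latex
% For an observation y = x + n with n ~ N(0, \sigma^2 I) and noisy density p(y),
% the least-squares (MMSE) denoiser is
\hat{x}(y) \;=\; \mathbb{E}[x \mid y] \;=\; y + \sigma^{2}\,\nabla_{y} \log p(y).
% A network trained for blind least-squares denoising therefore provides an
% approximation of the score \nabla_y \log p(y), which the paper's stochastic
% coarse-to-fine ascent uses to draw samples from the implicit prior.
```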
Towards Stable and Robust AdderNets
https://papers.nips.cc/paper_files/paper/2021/hash/6e3197aae95c2ff8fcab35cb730f6a86-Abstract.html
Minjing Dong, Yunhe Wang, Xinghao Chen, Chang Xu
https://papers.nips.cc/paper_files/paper/2021/hash/6e3197aae95c2ff8fcab35cb730f6a86-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12638-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6e3197aae95c2ff8fcab35cb730f6a86-Paper.pdf
https://openreview.net/forum?id=hPgy4_gpbmU
https://papers.nips.cc/paper_files/paper/2021/file/6e3197aae95c2ff8fcab35cb730f6a86-Supplemental.pdf
Adder neural networks (AdderNets) replace the massive multiplications in conventional convolutions with cheap additions while achieving comparable performance, thus yielding a series of energy-efficient neural networks. Compared with convolutional neural networks (CNNs), the training of AdderNets is much more sophisticated, including several techniques for adjusting gradients and batch normalization. In addition, the variances of both weights and activations in the resulting adder networks are enormous, which limits their performance and their potential for application to other tasks. To enhance the stability and robustness of AdderNets, we first thoroughly analyze the variance estimation of weight parameters and output features of an arbitrary adder layer. Then, we develop a weight normalization scheme for adaptively optimizing the weight distribution of AdderNets during the training procedure, which can reduce the perturbation on running mean and variance in batch normalization layers. Meanwhile, the proposed weight normalization can also be utilized to enhance the adversarial robustness of resulting networks. Experiments conducted on several benchmarks demonstrate the superiority of the proposed approach for generating AdderNets with higher performance.
null
Representing Long-Range Context for Graph Neural Networks with Global Attention
https://papers.nips.cc/paper_files/paper/2021/hash/6e67691b60ed3e4a55935261314dd534-Abstract.html
Zhanghao Wu, Paras Jain, Matthew Wright, Azalia Mirhoseini, Joseph E. Gonzalez, Ion Stoica
https://papers.nips.cc/paper_files/paper/2021/hash/6e67691b60ed3e4a55935261314dd534-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12639-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6e67691b60ed3e4a55935261314dd534-Paper.pdf
https://openreview.net/forum?id=nYz2_BbZnYk
null
Graph neural networks are powerful architectures for structured datasets. However, current methods struggle to represent long-range dependencies. Scaling the depth or width of GNNs is insufficient to broaden receptive fields as larger GNNs encounter optimization instabilities such as vanishing gradients and representation oversmoothing, while pooling-based approaches have yet to become as universally useful as in computer vision. In this work, we propose the use of Transformer-based self-attention to learn long-range pairwise relationships, with a novel “readout” mechanism to obtain a global graph embedding. Inspired by recent computer vision results that find position-invariant attention performant in learning long-range relationships, our method, which we call GraphTrans, applies a permutation-invariant Transformer module after a standard GNN module. This simple architecture leads to state-of-the-art results on several graph classification tasks, outperforming methods that explicitly encode graph structure. Our results suggest that purely-learning-based approaches without graph structure may be suitable for learning high-level, long-range relationships on graphs. Code for GraphTrans is available at https://github.com/ucbrise/graphtrans.
null
Beyond Bandit Feedback in Online Multiclass Classification
https://papers.nips.cc/paper_files/paper/2021/hash/6e79ed05baec2754e25b4eac73a332d2-Abstract.html
Dirk van der Hoeven, Federico Fusco, Nicolò Cesa-Bianchi
https://papers.nips.cc/paper_files/paper/2021/hash/6e79ed05baec2754e25b4eac73a332d2-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12640-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6e79ed05baec2754e25b4eac73a332d2-Paper.pdf
https://openreview.net/forum?id=AVvcLO2UYGA
https://papers.nips.cc/paper_files/paper/2021/file/6e79ed05baec2754e25b4eac73a332d2-Supplemental.zip
We study the problem of online multiclass classification in a setting where the learner's feedback is determined by an arbitrary directed graph. While including bandit feedback as a special case, feedback graphs allow a much richer set of applications, including filtering and label efficient classification. We introduce \textproc{Gappletron}, the first online multiclass algorithm that works with arbitrary feedback graphs. For this new algorithm, we prove surrogate regret bounds that hold, both in expectation and with high probability, for a large class of surrogate losses. Our bounds are of order $B\sqrt{\rho KT}$, where $B$ is the diameter of the prediction space, $K$ is the number of classes, $T$ is the time horizon, and $\rho$ is the domination number (a graph-theoretic parameter affecting the amount of exploration). In the full information case, we show that \textproc{Gappletron} achieves a constant surrogate regret of order $B^2K$. We also prove a general lower bound of order $\max\big\{B^2K,\sqrt{T}\big\}$ showing that our upper bounds are not significantly improvable. Experiments on synthetic data show that for various feedback graphs our algorithm is competitive against known baselines.
null
Learning Student-Friendly Teacher Networks for Knowledge Distillation
https://papers.nips.cc/paper_files/paper/2021/hash/6e7d2da6d3953058db75714ac400b584-Abstract.html
Dae Young Park, Moon-Hyun Cha, changwook jeong, Daesin Kim, Bohyung Han
https://papers.nips.cc/paper_files/paper/2021/hash/6e7d2da6d3953058db75714ac400b584-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12641-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6e7d2da6d3953058db75714ac400b584-Paper.pdf
https://openreview.net/forum?id=0xs40KGnsq3
https://papers.nips.cc/paper_files/paper/2021/file/6e7d2da6d3953058db75714ac400b584-Supplemental.pdf
We propose a novel knowledge distillation approach to facilitate the transfer of dark knowledge from a teacher to a student. Contrary to most of the existing methods that rely on effective training of student models given pretrained teachers, we aim to learn the teacher models that are friendly to students and, consequently, more appropriate for knowledge transfer. In other words, at the time of optimizing a teacher model, the proposed algorithm learns the student branches jointly to obtain student-friendly representations. Since the main goal of our approach lies in training teacher models and the subsequent knowledge distillation procedure is straightforward, most of the existing knowledge distillation methods can adopt this technique to improve the performance of diverse student models in terms of accuracy and convergence speed. The proposed algorithm demonstrates outstanding accuracy in several well-known knowledge distillation techniques with various combinations of teacher and student models, even when their architectures are heterogeneous and there is no prior knowledge about the student models at the time of training the teacher networks.
null
Implicit Transformer Network for Screen Content Image Continuous Super-Resolution
https://papers.nips.cc/paper_files/paper/2021/hash/6e7d5d259be7bf56ed79029c4e621f44-Abstract.html
Jingyu Yang, Sheng Shen, Huanjing Yue, Kun Li
https://papers.nips.cc/paper_files/paper/2021/hash/6e7d5d259be7bf56ed79029c4e621f44-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12642-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6e7d5d259be7bf56ed79029c4e621f44-Paper.pdf
https://openreview.net/forum?id=x4t0fxWPNdi
https://papers.nips.cc/paper_files/paper/2021/file/6e7d5d259be7bf56ed79029c4e621f44-Supplemental.pdf
Nowadays, there is an explosive growth of screen contents due to the wide application of screen sharing, remote cooperation, and online education. To match the limited terminal bandwidth, high-resolution (HR) screen contents may be downsampled and compressed. At the receiver side, the super-resolution (SR) of low-resolution (LR) screen content images (SCIs) is highly demanded by the HR display or by the users to zoom in for detail observation. However, image SR methods mostly designed for natural images do not generalize well for SCIs due to the very different image characteristics as well as the requirement of SCI browsing at arbitrary scales. To this end, we propose a novel Implicit Transformer Super-Resolution Network (ITSRN) for SCI SR. For high-quality continuous SR at arbitrary ratios, pixel values at query coordinates are inferred from image features at key coordinates by the proposed implicit transformer, and an implicit position encoding scheme is proposed to aggregate similar neighboring pixel values to the query one. We construct benchmark SCI1K and SCI1K-compression datasets with LR and HR SCI pairs. Extensive experiments show that the proposed ITSRN significantly outperforms several competitive continuous and discrete SR methods for both compressed and uncompressed SCIs.
null
Channel Permutations for N:M Sparsity
https://papers.nips.cc/paper_files/paper/2021/hash/6e8404c3b93a9527c8db241a1846599a-Abstract.html
Jeff Pool, Chong Yu
https://papers.nips.cc/paper_files/paper/2021/hash/6e8404c3b93a9527c8db241a1846599a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12643-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6e8404c3b93a9527c8db241a1846599a-Paper.pdf
https://openreview.net/forum?id=WAO1STUPWPP
https://papers.nips.cc/paper_files/paper/2021/file/6e8404c3b93a9527c8db241a1846599a-Supplemental.pdf
We introduce channel permutations as a method to maximize the accuracy of N:M sparse networks. N:M sparsity requires N out of M consecutive elements to be zero and has been shown to maintain accuracy for many models and tasks with a simple prune and fine-tune workflow. By permuting weight matrices along their channel dimension and adjusting the surrounding layers appropriately, we demonstrate accuracy recovery for even small, parameter-efficient networks, without affecting inference run-time. We also present both a quality metric to simplify judging permutations as well as efficient methods to search for high-quality permutations, including two optimizations to escape local minima. Finally, we share an ablation study to show the importance of each part of our search algorithm, experimental results showing correlation between our quality metric and final network accuracy, improved sparse network accuracy using our techniques with insignificant overhead to training time, and the transformation of unstructured to structured sparse workloads. Code to use these techniques when generating a 2:4 sparse network is available at https://github.com/NVIDIA/apex/tree/master/apex/contrib/sparsity.
null
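The N:M constraint in the abstract is easy to state concretely: in every group of M consecutive weights, at most N may be non-zero. The sketch below applies the 2:4 pattern by magnitude pruning; it illustrates the constraint only, whereas the paper's contribution is the channel permutation searched for before this pruning step (the names here are illustrative, not the paper's code).

```python
import numpy as np

def prune_n_of_m(weights, n=2, m=4):
    """Keep only the n largest-magnitude entries in each group of m consecutive
    elements along the last axis; all other entries are set to zero."""
    w = np.array(weights, dtype=float)                      # work on a copy
    assert w.shape[-1] % m == 0, "last dimension must be divisible by m"
    groups = w.reshape(-1, m)                               # one row per group of m
    drop = np.argsort(np.abs(groups), axis=1)[:, : m - n]   # smallest-magnitude indices
    np.put_along_axis(groups, drop, 0.0, axis=1)            # zero them in place
    return w

w_dense = np.random.randn(8, 16)
w_sparse = prune_n_of_m(w_dense)   # every 4 consecutive weights now keep exactly 2 non-zeros
assert ((w_sparse.reshape(-1, 4) != 0).sum(axis=1) == 2).all()
```

Permuting the columns of the weight matrix before this step changes which values end up grouped together, which is why a good permutation can preserve far more of the large-magnitude weights.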
Curriculum Learning for Vision-and-Language Navigation
https://papers.nips.cc/paper_files/paper/2021/hash/6f0442558302a6ededff195daf67f79b-Abstract.html
Jiwen Zhang, zhongyu wei, Jianqing Fan, Jiajie Peng
https://papers.nips.cc/paper_files/paper/2021/hash/6f0442558302a6ededff195daf67f79b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12644-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6f0442558302a6ededff195daf67f79b-Paper.pdf
https://openreview.net/forum?id=1fr3bOX2t69
https://papers.nips.cc/paper_files/paper/2021/file/6f0442558302a6ededff195daf67f79b-Supplemental.pdf
Vision-and-Language Navigation (VLN) is a task where an agent navigates in an embodied indoor environment under human instructions. Previous works ignore the distribution of sample difficulty, and we argue that this potentially degrades agent performance. To tackle this issue, we propose a novel curriculum-based training paradigm for VLN tasks that can balance human prior knowledge and agent learning progress about training samples. We develop the principle of curriculum design and re-arrange the benchmark Room-to-Room (R2R) dataset to make it suitable for curriculum training. Experiments show that our method is model-agnostic and can significantly improve the performance, the generalizability, and the training efficiency of current state-of-the-art navigation agents without increasing model complexity.
null
Better Algorithms for Individually Fair $k$-Clustering
https://papers.nips.cc/paper_files/paper/2021/hash/6f221fcb5c504fe96789df252123770b-Abstract.html
Maryam Negahbani, Deeparnab Chakrabarty
https://papers.nips.cc/paper_files/paper/2021/hash/6f221fcb5c504fe96789df252123770b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12645-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6f221fcb5c504fe96789df252123770b-Paper.pdf
https://openreview.net/forum?id=HEVfOwxrmQh
https://papers.nips.cc/paper_files/paper/2021/file/6f221fcb5c504fe96789df252123770b-Supplemental.pdf
We study data clustering problems with $\ell_p$-norm objectives (e.g. \textsc{$k$-Median} and \textsc{$k$-Means}) in the context of individual fairness. The dataset consists of $n$ points, and we want to find $k$ centers such that (a) the objective is minimized, while (b) respecting the individual fairness constraint that every point $v$ has a center within a distance at most $r(v)$, where $r(v)$ is $v$'s distance to its $(n/k)$th nearest point. Jung, Kannan, and Lutz [FORC 2020] introduced this concept and designed a clustering algorithm with provable (approximate) fairness and objective guarantees for the $\ell_\infty$ or \textsc{$k$-Center} objective. Mahabadi and Vakilian [ICML 2020] revisited this problem to give a local-search algorithm for all $\ell_p$-norms. Empirically, their algorithms outperform Jung et al.'s by a large margin in terms of cost (for \textsc{$k$-Median} and \textsc{$k$-Means}), but they incur a reasonable loss in fairness. In this paper, our main contribution is to use Linear Programming (LP) techniques to obtain better algorithms for this problem, both in theory and in practice. We prove that by modifying known LP rounding techniques, one gets a worst-case guarantee on the objective which is much better than in MV20, and empirically, this objective is extremely close to the optimum. Furthermore, our theoretical fairness guarantees are comparable with MV20 in theory, and empirically, we obtain noticeably fairer solutions. Although solving the LP {\em exactly} might be prohibitive, we demonstrate that in practice, a simple sparsification technique drastically improves the run-time of our algorithm.
null
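The individual-fairness radius r(v) used in the abstract can be computed directly from pairwise distances; a brute-force sketch follows, where the ceil rounding of n/k and the exclusion of v from its own neighbours are assumptions made here.

```python
import numpy as np

def fairness_radii(points, k):
    """For each point v, return r(v): the distance from v to its (n/k)-th
    nearest other point. A clustering is individually fair if every v has a
    center within distance r(v)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    q = int(np.ceil(n / k))
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    dists_sorted = np.sort(dists, axis=1)   # column 0 holds the zero self-distance
    return dists_sorted[:, q]               # distance to the q-th nearest other point

pts = np.random.rand(200, 2)
r = fairness_radii(pts, k=10)   # constraint: each point needs a center within r[v]
```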
Video Instance Segmentation using Inter-Frame Communication Transformers
https://papers.nips.cc/paper_files/paper/2021/hash/6f2688a5fce7d48c8d19762b88c32c3b-Abstract.html
Sukjun Hwang, Miran Heo, Seoung Wug Oh, Seon Joo Kim
https://papers.nips.cc/paper_files/paper/2021/hash/6f2688a5fce7d48c8d19762b88c32c3b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12646-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6f2688a5fce7d48c8d19762b88c32c3b-Paper.pdf
https://openreview.net/forum?id=pvjfA4wogD6
https://papers.nips.cc/paper_files/paper/2021/file/6f2688a5fce7d48c8d19762b88c32c3b-Supplemental.zip
We propose a novel end-to-end solution for video instance segmentation (VIS) based on transformers. Recently, the per-clip pipeline shows superior performance over per-frame methods leveraging richer information from multiple frames. However, previous per-clip models require heavy computation and memory usage to achieve frame-to-frame communications, limiting practicality. In this work, we propose Inter-frame Communication Transformers (IFC), which significantly reduces the overhead for information-passing between frames by efficiently encoding the context within the input clip. Specifically, we propose to utilize concise memory tokens as a means of conveying information as well as summarizing each frame scene. The features of each frame are enriched and correlated with other frames through exchange of information between the precisely encoded memory tokens. We validate our method on the latest benchmark sets and achieved state-of-the-art performance (AP 42.6 on YouTube-VIS 2019 val set using the offline inference) while having a considerably fast runtime (89.4 FPS). Our method can also be applied to near-online inference for processing a video in real-time with only a small delay. The code is available at https://github.com/sukjunhwang/IFC
null
Progressive Coordinate Transforms for Monocular 3D Object Detection
https://papers.nips.cc/paper_files/paper/2021/hash/6f3ef77ac0e3619e98159e9b6febf557-Abstract.html
Li Wang, Li Zhang, Yi Zhu, Zhi Zhang, Tong He, Mu Li, Xiangyang Xue
https://papers.nips.cc/paper_files/paper/2021/hash/6f3ef77ac0e3619e98159e9b6febf557-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12647-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6f3ef77ac0e3619e98159e9b6febf557-Paper.pdf
https://openreview.net/forum?id=4G2dEuRZ7eO
https://papers.nips.cc/paper_files/paper/2021/file/6f3ef77ac0e3619e98159e9b6febf557-Supplemental.pdf
Recognizing and localizing objects in the 3D space is a crucial ability for an AI agent to perceive its surrounding environment. While significant progress has been achieved with expensive LiDAR point clouds, it poses a great challenge for 3D object detection given only a monocular image. While there exist different alternatives for tackling this problem, it is found that they are either equipped with heavy networks to fuse RGB and depth information or empirically ineffective to process millions of pseudo-LiDAR points. With in-depth examination, we realize that these limitations are rooted in inaccurate object localization. In this paper, we propose a novel and lightweight approach, dubbed {\em Progressive Coordinate Transforms} (PCT) to facilitate learning coordinate representations. Specifically, a localization boosting mechanism with confidence-aware loss is introduced to progressively refine the localization prediction. In addition, semantic image representation is also exploited to compensate for the usage of patch proposals. Despite being lightweight and simple, our strategy allows us to establish a new state-of-the-art among the monocular 3D detectors on the competitive KITTI benchmark. At the same time, our proposed PCT shows great generalization to most coordinate-based 3D detection frameworks.
null
Structured Reordering for Modeling Latent Alignments in Sequence Transduction
https://papers.nips.cc/paper_files/paper/2021/hash/6f46dd176364ccec308c2760189a4605-Abstract.html
bailin wang, Mirella Lapata, Ivan Titov
https://papers.nips.cc/paper_files/paper/2021/hash/6f46dd176364ccec308c2760189a4605-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12648-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6f46dd176364ccec308c2760189a4605-Paper.pdf
https://openreview.net/forum?id=X2Cxixkcpx
https://papers.nips.cc/paper_files/paper/2021/file/6f46dd176364ccec308c2760189a4605-Supplemental.pdf
Despite success in many domains, neural models struggle in settings where train and test examples are drawn from different distributions. In particular, in contrast to humans, conventional sequence-to-sequence (seq2seq) models fail to generalize systematically, i.e., interpret sentences representing novel combinations of concepts (e.g., text segments) seen in training. Traditional grammar formalisms excel in such settings by implicitly encoding alignments between input and output segments, but are hard to scale and maintain. Instead of engineering a grammar, we directly model segment-to-segment alignments as discrete structured latent variables within a neural seq2seq model. To efficiently explore the large space of alignments, we introduce a reorder-first align-later framework whose central component is a neural reordering module producing separable permutations. We present an efficient dynamic programming algorithm performing exact marginal inference of separable permutations, and, thus, enabling end-to-end differentiable training of our model. The resulting seq2seq model exhibits better systematic generalization than standard models on synthetic problems and NLP tasks (i.e., semantic parsing and machine translation).
null
A universal probabilistic spike count model reveals ongoing modulation of neural variability
https://papers.nips.cc/paper_files/paper/2021/hash/6f5216f8d89b086c18298e043bfe48ed-Abstract.html
David Liu, Mate Lengyel
https://papers.nips.cc/paper_files/paper/2021/hash/6f5216f8d89b086c18298e043bfe48ed-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12649-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6f5216f8d89b086c18298e043bfe48ed-Paper.pdf
https://openreview.net/forum?id=6ZdqOpE_UVF
https://papers.nips.cc/paper_files/paper/2021/file/6f5216f8d89b086c18298e043bfe48ed-Supplemental.pdf
Neural responses are variable: even under identical experimental conditions, single neuron and population responses typically differ from trial to trial and across time. Recent work has demonstrated that this variability has predictable structure, can be modulated by sensory input and behaviour, and bears critical signatures of the underlying network dynamics and computations. However, current methods for characterising neural variability are primarily geared towards sensory coding in the laboratory: they require trials with repeatable experimental stimuli and behavioural covariates. In addition, they make strong assumptions about the parametric form of variability, rely on assumption-free but data-inefficient histogram-based approaches, or are altogether ill-suited for capturing variability modulation by covariates. Here we present a universal probabilistic spike count model that eliminates these shortcomings. Our method builds on sparse Gaussian processes and can model arbitrary spike count distributions (SCDs) with flexible dependence on observed as well as latent covariates, using scalable variational inference to jointly infer the covariate-to-SCD mappings and latent trajectories in a data efficient way. Without requiring repeatable trials, it can flexibly capture covariate-dependent joint SCDs, and provide interpretable latent causes underlying the statistical dependencies between neurons. We apply the model to recordings from a canonical non-sensory neural population: head direction cells in the mouse. We find that variability in these cells defies a simple parametric relationship with mean spike count as assumed in standard models, its modulation by external covariates can be comparably strong to that of the mean firing rate, and slow low-dimensional latent factors explain away neural correlations. Our approach paves the way to understanding the mechanisms and computations underlying neural variability under naturalistic conditions, beyond the realm of sensory coding with repeatable stimuli.
null
Bellman Eluder Dimension: New Rich Classes of RL Problems, and Sample-Efficient Algorithms
https://papers.nips.cc/paper_files/paper/2021/hash/6f5e4e86a87220e5d361ad82f1ebc335-Abstract.html
Chi Jin, Qinghua Liu, Sobhan Miryoosefi
https://papers.nips.cc/paper_files/paper/2021/hash/6f5e4e86a87220e5d361ad82f1ebc335-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12650-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6f5e4e86a87220e5d361ad82f1ebc335-Paper.pdf
https://openreview.net/forum?id=b8Kl8mcK6tb
https://papers.nips.cc/paper_files/paper/2021/file/6f5e4e86a87220e5d361ad82f1ebc335-Supplemental.pdf
Finding the minimal structural assumptions that empower sample-efficient learning is one of the most important research directions in Reinforcement Learning (RL). This paper advances our understanding of this fundamental question by introducing a new complexity measure—Bellman Eluder (BE) dimension. We show that the family of RL problems of low BE dimension is remarkably rich, which subsumes a vast majority of existing tractable RL problems including but not limited to tabular MDPs, linear MDPs, reactive POMDPs, low Bellman rank problems as well as low Eluder dimension problems. This paper further designs a new optimization-based algorithm—GOLF, and reanalyzes a hypothesis elimination-based algorithm—OLIVE (proposed in Jiang et al. (2017)). We prove that both algorithms learn the near-optimal policies of low BE dimension problems in a number of samples that is polynomial in all relevant parameters, but independent of the size of state-action space. Our regret and sample complexity results match or improve the best existing results for several well-known subclasses of low BE dimension problems.
null
Detecting Anomalous Event Sequences with Temporal Point Processes
https://papers.nips.cc/paper_files/paper/2021/hash/6faa8040da20ef399b63a72d0e4ab575-Abstract.html
Oleksandr Shchur, Ali Caner Turkmen, Tim Januschowski, Jan Gasthaus, Stephan Günnemann
https://papers.nips.cc/paper_files/paper/2021/hash/6faa8040da20ef399b63a72d0e4ab575-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12651-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6faa8040da20ef399b63a72d0e4ab575-Paper.pdf
https://openreview.net/forum?id=MTMyxzrIKsM
https://papers.nips.cc/paper_files/paper/2021/file/6faa8040da20ef399b63a72d0e4ab575-Supplemental.pdf
Automatically detecting anomalies in event data can provide substantial value in domains such as healthcare, DevOps, and information security. In this paper, we frame the problem of detecting anomalous continuous-time event sequences as out-of-distribution (OOD) detection for temporal point processes (TPPs). First, we show how this problem can be approached using goodness-of-fit (GoF) tests. We then demonstrate the limitations of popular GoF statistics for TPPs and propose a new test that addresses these shortcomings. The proposed method can be combined with various TPP models, such as neural TPPs, and is easy to implement. In our experiments, we show that the proposed statistic excels at both traditional GoF testing, as well as at detecting anomalies in simulated and real-world data.
null
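As a concrete example of the goodness-of-fit testing the abstract starts from, the classical time-rescaling check for a temporal point process is sketched below for a homogeneous Poisson model with a known rate; this is a standard GoF statistic of the kind the paper analyzes and improves upon, not the paper's proposed test.

```python
import numpy as np
from scipy import stats

def poisson_gof_pvalue(event_times, rate):
    """Time-rescaling GoF check: under a homogeneous Poisson(rate) model, the
    rescaled inter-event times are i.i.d. Exp(1), so a KS test against Exp(1)
    yields a p-value that is small for poorly fitting (anomalous) sequences."""
    t = np.sort(np.asarray(event_times, dtype=float))
    rescaled = rate * np.diff(np.concatenate(([0.0], t)))   # compensator increments
    return stats.kstest(rescaled, "expon").pvalue

rng = np.random.default_rng(0)
seq = np.cumsum(rng.exponential(1.0, size=500))   # simulated unit-rate Poisson sequence
print(poisson_gof_pvalue(seq, rate=1.0))          # roughly uniform p-value under the true model
print(poisson_gof_pvalue(seq, rate=2.0))          # misspecified model: p-value near zero
```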
HNPE: Leveraging Global Parameters for Neural Posterior Estimation
https://papers.nips.cc/paper_files/paper/2021/hash/6fbd841e2e4b2938351a4f9b68f12e6b-Abstract.html
Pedro Rodrigues, Thomas Moreau, Gilles Louppe, Alexandre Gramfort
https://papers.nips.cc/paper_files/paper/2021/hash/6fbd841e2e4b2938351a4f9b68f12e6b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12652-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6fbd841e2e4b2938351a4f9b68f12e6b-Paper.pdf
https://openreview.net/forum?id=E8BxwYR8op
https://papers.nips.cc/paper_files/paper/2021/file/6fbd841e2e4b2938351a4f9b68f12e6b-Supplemental.pdf
Inferring the parameters of a stochastic model based on experimental observations is central to the scientific method. A particularly challenging setting is when the model is strongly indeterminate, i.e. when distinct sets of parameters yield identical observations. This arises in many practical situations, such as when inferring the distance and power of a radio source (is the source close and weak or far and strong?) or when estimating the amplifier gain and underlying brain activity of an electrophysiological experiment. In this work, we present hierarchical neural posterior estimation (HNPE), a novel method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters. Our method extends recent developments in simulation-based inference (SBI) based on normalizing flows to Bayesian hierarchical models. We validate quantitatively our proposal on a motivating example amenable to analytical solutions and then apply it to invert a well known non-linear model from computational neuroscience, using both simulated and real EEG data.
null
Alignment Attention by Matching Key and Query Distributions
https://papers.nips.cc/paper_files/paper/2021/hash/6fd6b030c6afec018415662d0db43f9d-Abstract.html
Shujian Zhang, Xinjie Fan, Huangjie Zheng, Korawat Tanwisuth, Mingyuan Zhou
https://papers.nips.cc/paper_files/paper/2021/hash/6fd6b030c6afec018415662d0db43f9d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12653-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6fd6b030c6afec018415662d0db43f9d-Paper.pdf
https://openreview.net/forum?id=th788unrdTj
https://papers.nips.cc/paper_files/paper/2021/file/6fd6b030c6afec018415662d0db43f9d-Supplemental.pdf
The neural attention mechanism has been incorporated into deep neural networks to achieve state-of-the-art performance in various domains. Most such models use multi-head self-attention which is appealing for the ability to attend to information from different perspectives. This paper introduces alignment attention that explicitly encourages self-attention to match the distributions of the key and query within each head. The resulting alignment attention networks can be optimized as an unsupervised regularization in the existing attention framework. It is simple to convert any models with self-attention, including pre-trained ones, to the proposed alignment attention. On a variety of language understanding tasks, we show the effectiveness of our method in accuracy, uncertainty estimation, generalization across domains, and robustness to adversarial attacks. We further demonstrate the general applicability of our approach on graph attention and visual question answering, showing the great potential of incorporating our alignment method into various attention-related tasks.
null
Settling the Variance of Multi-Agent Policy Gradients
https://papers.nips.cc/paper_files/paper/2021/hash/6fe6a8a6e6cb710584efc4af0c34ce50-Abstract.html
Jakub Grudzien Kuba, Muning Wen, Linghui Meng, shangding gu, Haifeng Zhang, David Mguni, Jun Wang, Yaodong Yang
https://papers.nips.cc/paper_files/paper/2021/hash/6fe6a8a6e6cb710584efc4af0c34ce50-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12654-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6fe6a8a6e6cb710584efc4af0c34ce50-Paper.pdf
https://openreview.net/forum?id=jI97GGA0H_
https://papers.nips.cc/paper_files/paper/2021/file/6fe6a8a6e6cb710584efc4af0c34ce50-Supplemental.pdf
Policy gradient (PG) methods are popular reinforcement learning (RL) methods where a baseline is often applied to reduce the variance of gradient estimates. In multi-agent RL (MARL), although the PG theorem can be naturally extended, the effectiveness of multi-agent PG (MAPG) methods degrades as the variance of gradient estimates increases rapidly with the number of agents. In this paper, we offer a rigorous analysis of MAPG methods by, firstly, quantifying the contributions of the number of agents and agents' explorations to the variance of MAPG estimators. Based on this analysis, we derive the optimal baseline (OB) that achieves the minimal variance. In comparison to the OB, we measure the excess variance of existing MARL algorithms such as vanilla MAPG and COMA. Considering the use of deep neural networks, we also propose a surrogate version of OB, which can be seamlessly plugged into any existing PG method in MARL. On the Multi-Agent MuJoCo and StarCraft benchmarks, our OB technique effectively stabilises training and improves the performance of multi-agent PPO and COMA algorithms by a significant margin. Code is released at \url{https://github.com/morning9393/Optimal-Baseline-for-Multi-agent-Policy-Gradients}.
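A hedged sketch of how a per-agent baseline enters a multi-agent policy-gradient estimate; the variance-minimising optimal baseline (OB) derived in the paper is not reproduced, and the shapes and names below are illustrative assumptions.

```python
import numpy as np

def mapg_estimate(log_prob_grads, joint_q, baselines):
    """Illustrative baseline-subtracted multi-agent policy-gradient estimate.

    log_prob_grads: (n_agents, param_dim) gradients of log pi_i(a_i | s).
    joint_q:        scalar estimate of the joint action value Q(s, a).
    baselines:      (n_agents,) per-agent baselines; the paper derives the
                    variance-minimising choice (OB), which is not shown here.
    """
    advantages = joint_q - np.asarray(baselines)      # one advantage per agent
    return log_prob_grads * advantages[:, None]       # per-agent gradient contributions

rng = np.random.default_rng(1)
grads = rng.normal(size=(3, 5))                       # 3 agents, 5 policy parameters each
print(mapg_estimate(grads, joint_q=1.7, baselines=[0.5, 0.9, 1.1]))
```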
null
For high-dimensional hierarchical models, consider exchangeability of effects across covariates instead of across datasets
https://papers.nips.cc/paper_files/paper/2021/hash/6ffad86b9a8dd4a3e98df1b0830d1c8c-Abstract.html
Brian Trippe, Hilary Finucane, Tamara Broderick
https://papers.nips.cc/paper_files/paper/2021/hash/6ffad86b9a8dd4a3e98df1b0830d1c8c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12655-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/6ffad86b9a8dd4a3e98df1b0830d1c8c-Paper.pdf
https://openreview.net/forum?id=28NikxkK6kJ
https://papers.nips.cc/paper_files/paper/2021/file/6ffad86b9a8dd4a3e98df1b0830d1c8c-Supplemental.pdf
Hierarchical Bayesian methods enable information sharing across regression problems on multiple groups of data. While standard practice is to model regression parameters (effects) as (1) exchangeable across the groups and (2) correlated to differing degrees across covariates, we show that this approach exhibits poor statistical performance when the number of covariates exceeds the number of groups. For instance, in statistical genetics, we might regress dozens of traits (defining groups) for thousands of individuals (responses) on up to millions of genetic variants (covariates). When an analyst has more covariates than groups, we argue that it is often preferable to instead model effects as (1) exchangeable across covariates and (2) correlated to differing degrees across groups. To this end, we propose a hierarchical model expressing our alternative perspective. We devise an empirical Bayes estimator for learning the degree of correlation between groups. We develop theory that demonstrates that our method outperforms the classic approach when the number of covariates dominates the number of groups, and corroborate this result empirically on several high-dimensional multiple regression and classification problems.
null
Efficient Algorithms for Learning Depth-2 Neural Networks with General ReLU Activations
https://papers.nips.cc/paper_files/paper/2021/hash/700fdb2ba62d4554dc268c65add4b16e-Abstract.html
Pranjal Awasthi, Alex Tang, Aravindan Vijayaraghavan
https://papers.nips.cc/paper_files/paper/2021/hash/700fdb2ba62d4554dc268c65add4b16e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12656-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/700fdb2ba62d4554dc268c65add4b16e-Paper.pdf
https://openreview.net/forum?id=6Ddt0bvKoeh
https://papers.nips.cc/paper_files/paper/2021/file/700fdb2ba62d4554dc268c65add4b16e-Supplemental.pdf
We present polynomial time and sample efficient algorithms for learning an unknown depth-2 feedforward neural network with general ReLU activations, under mild non-degeneracy assumptions. In particular, we consider learning an unknown network of the form $f(x) = {a}^{\mathsf{T}}\sigma({W}^\mathsf{T}x+b)$, where $x$ is drawn from the Gaussian distribution, and $\sigma(t) = \max(t,0)$ is the ReLU activation. Prior works for learning networks with ReLU activations assume that the bias ($b$) is zero. In order to deal with the presence of the bias terms, our proposed algorithm consists of robustly decomposing multiple higher order tensors arising from the Hermite expansion of the function $f(x)$. Using these ideas we also establish identifiability of the network parameters under very mild assumptions.
null
Controllable and Compositional Generation with Latent-Space Energy-Based Models
https://papers.nips.cc/paper_files/paper/2021/hash/701d804549a4a23d3cae801dac6c2c75-Abstract.html
Weili Nie, Arash Vahdat, Anima Anandkumar
https://papers.nips.cc/paper_files/paper/2021/hash/701d804549a4a23d3cae801dac6c2c75-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12657-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/701d804549a4a23d3cae801dac6c2c75-Paper.pdf
https://openreview.net/forum?id=kcI3T5qe1jr
https://papers.nips.cc/paper_files/paper/2021/file/701d804549a4a23d3cae801dac6c2c75-Supplemental.pdf
Controllable generation is one of the key requirements for the successful adoption of deep generative models in real-world applications, but it still remains a great challenge. In particular, the compositional ability to generate novel concept combinations is out of reach for most current models. In this work, we use energy-based models (EBMs) to handle compositional generation over a set of attributes. To make them scalable to high-resolution image generation, we introduce an EBM in the latent space of a pre-trained generative model such as StyleGAN. We propose a novel EBM formulation representing the joint distribution of data and attributes together, and we show how sampling from it is formulated as solving an ordinary differential equation (ODE). Given a pre-trained generator, all we need for controllable generation is to train an attribute classifier. Sampling with ODEs is done efficiently in the latent space and is robust to hyperparameters. Thus, our method is simple, fast to train, and efficient to sample. Experimental results show that our method outperforms the state-of-the-art in both conditional sampling and sequential editing. In compositional generation, our method excels at zero-shot generation of unseen attribute combinations. Also, by composing energy functions with logical operators, this work is the first to achieve such compositionality in generating photo-realistic images of resolution 1024x1024.
null
Reverse-Complement Equivariant Networks for DNA Sequences
https://papers.nips.cc/paper_files/paper/2021/hash/706608cfdbcc1886bb7eea5513f90133-Abstract.html
Vincent Mallet, Jean-Philippe Vert
https://papers.nips.cc/paper_files/paper/2021/hash/706608cfdbcc1886bb7eea5513f90133-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12658-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/706608cfdbcc1886bb7eea5513f90133-Paper.pdf
https://openreview.net/forum?id=AD4jtD8wE1w
https://papers.nips.cc/paper_files/paper/2021/file/706608cfdbcc1886bb7eea5513f90133-Supplemental.pdf
As DNA sequencing technologies keep improving in scale and cost, there is a growing need to develop machine learning models to analyze DNA sequences, e.g., to decipher regulatory signals from DNA fragments bound by a particular protein of interest. As a double helix made of two complementary strands, a DNA fragment can be sequenced as two equivalent, so-called reverse complement (RC) sequences of nucleotides. Taking this inherent symmetry of the data into account in machine learning models can facilitate learning. In this sense, several authors have recently proposed particular RC-equivariant convolutional neural networks (CNNs). However, it remains unknown whether other RC-equivariant architectures exist, which could potentially increase the set of basic models adapted to DNA sequences for practitioners. Here, we close this gap by characterizing the set of all linear RC-equivariant layers, and show in particular that new architectures exist beyond the ones already explored. We further discuss RC-equivariant pointwise nonlinearities adapted to different architectures, as well as RC-equivariant embeddings of $k$-mers as an alternative to one-hot encoding of nucleotides. We show experimentally that the new architectures can outperform existing ones.
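A small sketch of the reverse-complement symmetry on one-hot DNA and of a classic RC-invariant read-out used in earlier RC-aware CNNs (scan both strands, then pool); this is context for the symmetry being discussed, not one of the new architectures characterized in the paper, and the helper names are assumptions.

```python
import numpy as np

# One-hot channel order assumed to be A, C, G, T: reversing the channel axis
# maps A<->T and C<->G, i.e. it complements each base.
def reverse_complement(x):
    """x: (seq_len, 4) one-hot DNA sequence -> its reverse complement."""
    return x[::-1, ::-1]

def conv1d_valid(x, kernel):
    """Minimal 'valid' 1-D convolution: x (L, 4), kernel (k, 4) -> (L-k+1,)."""
    k = kernel.shape[0]
    return np.array([np.sum(x[i:i + k] * kernel) for i in range(x.shape[0] - k + 1)])

def rc_invariant_feature(x, kernel):
    """Simple RC-invariant read-out from earlier RC-aware CNNs:
    scan both strands with the same filter and max-pool over positions."""
    fwd = conv1d_valid(x, kernel)
    rev = conv1d_valid(reverse_complement(x), kernel)
    return max(fwd.max(), rev.max())

rng = np.random.default_rng(2)
seq = np.eye(4)[rng.integers(0, 4, size=20)]     # random length-20 one-hot sequence
kernel = rng.normal(size=(5, 4))
# The read-out is unchanged when the input is reverse-complemented.
assert np.isclose(rc_invariant_feature(seq, kernel),
                  rc_invariant_feature(reverse_complement(seq), kernel))
```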
null
Provably Efficient Reinforcement Learning with Linear Function Approximation under Adaptivity Constraints
https://papers.nips.cc/paper_files/paper/2021/hash/70a32110fff0f26d301e58ebbca9cb9f-Abstract.html
Tianhao Wang, Dongruo Zhou, Quanquan Gu
https://papers.nips.cc/paper_files/paper/2021/hash/70a32110fff0f26d301e58ebbca9cb9f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12659-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/70a32110fff0f26d301e58ebbca9cb9f-Paper.pdf
https://openreview.net/forum?id=vYZmTEDFoqP
https://papers.nips.cc/paper_files/paper/2021/file/70a32110fff0f26d301e58ebbca9cb9f-Supplemental.pdf
We study reinforcement learning (RL) with linear function approximation under the adaptivity constraint. We consider two popular limited adaptivity models: the batch learning model and the rare policy switch model, and propose two efficient online RL algorithms for episodic linear Markov decision processes, where the transition probability and the reward function can be represented as a linear function of some known feature mapping. Specifically, for the batch learning model, our proposed LSVI-UCB-Batch algorithm achieves an $\tilde O(\sqrt{d^3H^3T} + dHT/B)$ regret, where $d$ is the dimension of the feature mapping, $H$ is the episode length, $T$ is the number of interactions and $B$ is the number of batches. Our result suggests that it suffices to use only $\sqrt{T/dH}$ batches to obtain $\tilde O(\sqrt{d^3H^3T})$ regret. For the rare policy switch model, our proposed LSVI-UCB-RareSwitch algorithm enjoys an $\tilde O(\sqrt{d^3H^3T[1+T/(dH)]^{dH/B}})$ regret, which implies that $dH\log T$ policy switches suffice to obtain the $\tilde O(\sqrt{d^3H^3T})$ regret. Our algorithms achieve the same regret as the LSVI-UCB algorithm \citep{jin2020provably}, yet with a substantially smaller amount of adaptivity. We also establish a lower bound for the batch learning model, which suggests that the dependency on $B$ in our regret bound is tight.
null
Nonsmooth Implicit Differentiation for Machine-Learning and Optimization
https://papers.nips.cc/paper_files/paper/2021/hash/70afbf2259b4449d8ae1429e054df1b1-Abstract.html
Jérôme Bolte, Tam Le, Edouard Pauwels, Tony Silveti-Falls
https://papers.nips.cc/paper_files/paper/2021/hash/70afbf2259b4449d8ae1429e054df1b1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12660-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/70afbf2259b4449d8ae1429e054df1b1-Paper.pdf
https://openreview.net/forum?id=FGAi8TP3ShV
https://papers.nips.cc/paper_files/paper/2021/file/70afbf2259b4449d8ae1429e054df1b1-Supplemental.pdf
In view of training increasingly complex learning architectures, we establish a nonsmooth implicit function theorem with an operational calculus. Our result applies to most practical problems (i.e., definable problems) provided that a nonsmooth form of the classical invertibility condition is fulfilled. This approach allows for formal subdifferentiation: for instance, replacing derivatives by Clarke Jacobians in the usual differentiation formulas is fully justified for a wide class of nonsmooth problems. Moreover this calculus is entirely compatible with algorithmic differentiation (e.g., backpropagation). We provide several applications such as training deep equilibrium networks, training neural nets with conic optimization layers, or hyperparameter-tuning for nonsmooth Lasso-type models. To show the sharpness of our assumptions, we present numerical experiments showcasing the extremely pathological gradient dynamics one can encounter when applying implicit algorithmic differentiation without any hypothesis.
null
Heuristic-Guided Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2021/hash/70d31b87bd021441e5e6bf23eb84a306-Abstract.html
Ching-An Cheng, Andrey Kolobov, Adith Swaminathan
https://papers.nips.cc/paper_files/paper/2021/hash/70d31b87bd021441e5e6bf23eb84a306-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12661-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/70d31b87bd021441e5e6bf23eb84a306-Paper.pdf
https://openreview.net/forum?id=HipwnJKnp3
https://papers.nips.cc/paper_files/paper/2021/file/70d31b87bd021441e5e6bf23eb84a306-Supplemental.pdf
We provide a framework to accelerate reinforcement learning (RL) algorithms by heuristics that are constructed by domain knowledge or offline data. Tabula rasa RL algorithms require environment interactions or computation that scales with the horizon of the sequential decision-making task. Using our framework, we show how heuristic-guided RL induces a much shorter horizon sub-problem that provably solves the original task. Our framework can be viewed as a horizon-based regularization for controlling bias and variance in RL under a finite interaction budget. In theory, we characterize the properties of a good heuristic and the resulting impact on RL acceleration. In particular, we introduce the novel concept of an improvable heuristic that can allow any RL agent to conservatively extrapolate beyond its prior knowledge. In practice, we instantiate our framework to accelerate several state-of-the-art algorithms in simulated robotic control tasks and procedurally generated games. Our framework complements the rich literature on warm-starting RL using expert demonstrations or exploratory data-sets, and creates a unified channel to inject prior knowledge into RL.
null
Statistical Undecidability in Linear, Non-Gaussian Causal Models in the Presence of Latent Confounders
https://papers.nips.cc/paper_files/paper/2021/hash/70d355680e628fe1c552221f690d8da4-Abstract.html
Konstantin Genin
https://papers.nips.cc/paper_files/paper/2021/hash/70d355680e628fe1c552221f690d8da4-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12662-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/70d355680e628fe1c552221f690d8da4-Paper.pdf
https://openreview.net/forum?id=KmPSe18DGLs
https://papers.nips.cc/paper_files/paper/2021/file/70d355680e628fe1c552221f690d8da4-Supplemental.pdf
If causal relationships are linear and acyclic and noise terms are independent and Gaussian, causal orientation is not identified from observational data --- even if faithfulness is satisfied (Spirtes et al., 2002). Shimizu et al. (2006) showed that acyclic, linear, {\bf non}-Gaussian (LiNGAM) causal models {\em are} identified from observational data, so long as no latent confounders are present. That holds even when faithfulness fails. Genin and Mayo-Wilson (2020) refine that result: not only are causal relationships identified, but causal orientation is {\em statistically decidable}. That means that for every $\epsilon>0,$ there is a method that converges in probability to the correct orientation and, at every sample size, outputs an incorrect orientation with probability less than $\epsilon.$ These results naturally raise questions about what happens in the presence of latent confounders. Hoyer et al. (2008) and Salehkaleybar et al. (2020) show that, although the causal model is not uniquely identified, causal orientation among observed variables is identified in the presence of latent confounders, so long as faithfulness is satisfied. This paper refines these results: although it is possible to converge to the right orientation in the limit, causal orientation is no longer statistically decidable---it is not possible to converge to the correct orientation with finite-sample bounds on the probability of orientation errors, even if faithfulness is satisfied. However, that limiting result suggests several adjustments to the LiNGAM model that may recover decidability.
null
A novel notion of barycenter for probability distributions based on optimal weak mass transport
https://papers.nips.cc/paper_files/paper/2021/hash/70d5212dd052b2ef06e5e562f6f9ab9c-Abstract.html
Elsa Cazelles, Felipe Tobar, Joaquin Fontbona
https://papers.nips.cc/paper_files/paper/2021/hash/70d5212dd052b2ef06e5e562f6f9ab9c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12663-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/70d5212dd052b2ef06e5e562f6f9ab9c-Paper.pdf
https://openreview.net/forum?id=PwVruv8s3_Q
https://papers.nips.cc/paper_files/paper/2021/file/70d5212dd052b2ef06e5e562f6f9ab9c-Supplemental.pdf
We introduce weak barycenters of a family of probability distributions, based on the recently developed notion of optimal weak transport of mass by Gozlan et al. (2017) and Backhoff-Veraguas et al. (2020). We provide a theoretical analysis of this object and discuss its interpretation in the light of convex ordering between probability measures. In particular, we show that, rather than averaging the input distributions in a geometric way (as the Wasserstein barycenter based on classic optimal transport does) weak barycenters extract common geometric information shared by all the input distributions, encoded as a latent random variable that underlies all of them. We also provide an iterative algorithm to compute a weak barycenter for a finite family of input distributions, and a stochastic algorithm that computes them for arbitrary populations of laws. The latter approach is particularly well suited for the streaming setting, i.e., when distributions are observed sequentially. The notion of weak barycenter and our approaches to compute it are illustrated on synthetic examples, validated on 2D real-world data and compared to standard Wasserstein barycenters.
null
Temporal-attentive Covariance Pooling Networks for Video Recognition
https://papers.nips.cc/paper_files/paper/2021/hash/70efdf2ec9b086079795c442636b55fb-Abstract.html
Zilin Gao, Qilong Wang, Bingbing Zhang, Qinghua Hu, Peihua Li
https://papers.nips.cc/paper_files/paper/2021/hash/70efdf2ec9b086079795c442636b55fb-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12664-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/70efdf2ec9b086079795c442636b55fb-Paper.pdf
https://openreview.net/forum?id=T2yRQao67x
https://papers.nips.cc/paper_files/paper/2021/file/70efdf2ec9b086079795c442636b55fb-Supplemental.pdf
For the video recognition task, a global representation summarizing the whole contents of the video snippets plays an important role in the final performance. However, existing video architectures usually generate it with a simple, global average pooling (GAP) method, which has limited ability to capture the complex dynamics of videos. For the image recognition task, there is evidence showing that covariance pooling has stronger representation ability than GAP. Unfortunately, such plain covariance pooling used in image recognition is an orderless representation, which cannot model the spatio-temporal structure inherent in videos. Therefore, this paper proposes Temporal-attentive Covariance Pooling (TCP), inserted at the end of deep architectures, to produce powerful video representations. Specifically, our TCP first develops a temporal attention module to adaptively calibrate spatio-temporal features for the succeeding covariance pooling, approximately producing attentive covariance representations. Then, temporal covariance pooling performs temporal pooling of the attentive covariance representations to characterize both intra-frame correlations and inter-frame cross-correlations of the calibrated features. As such, the proposed TCP can capture complex temporal dynamics. Finally, a fast matrix power normalization is introduced to exploit the geometry of covariance representations. Note that our TCP is model-agnostic and can be flexibly integrated into any video architecture, resulting in TCPNet for effective video recognition. Extensive experiments on six benchmarks (e.g., Kinetics, Something-Something V1 and Charades) using various video architectures show that our TCPNet is clearly superior to its counterparts, while having strong generalization ability. The source code is publicly available.
null
Revisiting Smoothed Online Learning
https://papers.nips.cc/paper_files/paper/2021/hash/70fc5f043205720a49d973d280eb83e7-Abstract.html
Lijun Zhang, Wei Jiang, Shiyin Lu, Tianbao Yang
https://papers.nips.cc/paper_files/paper/2021/hash/70fc5f043205720a49d973d280eb83e7-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12665-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/70fc5f043205720a49d973d280eb83e7-Paper.pdf
https://openreview.net/forum?id=sn0wj3Dci2J
https://papers.nips.cc/paper_files/paper/2021/file/70fc5f043205720a49d973d280eb83e7-Supplemental.pdf
In this paper, we revisit the problem of smoothed online learning, in which the online learner suffers both a hitting cost and a switching cost, and target two performance metrics: competitive ratio and dynamic regret with switching cost. To bound the competitive ratio, we assume the hitting cost is known to the learner in each round, and investigate the simple idea of balancing the two costs by an optimization problem. Surprisingly, we find that minimizing the hitting cost alone is $\max(1, \frac{2}{\alpha})$-competitive for $\alpha$-polyhedral functions and $1 + \frac{4}{\lambda}$-competitive for $\lambda$-quadratic growth functions, both of which improve state-of-the-art results significantly. Moreover, when the hitting cost is both convex and $\lambda$-quadratic growth, we reduce the competitive ratio to $1 + \frac{2}{\sqrt{\lambda}}$ by minimizing the weighted sum of the hitting cost and the switching cost. To bound the dynamic regret with switching cost, we follow the standard setting of online convex optimization, in which the hitting cost is convex but hidden from the learner before making predictions. We slightly modify Ader, an existing algorithm designed for dynamic regret, to take the switching cost into account when measuring the performance. The proposed algorithm, named Smoothed Ader, attains an optimal $O(\sqrt{T(1+P_T)})$ bound for dynamic regret with switching cost, where $P_T$ is the path-length of the comparator sequence. Furthermore, if the hitting cost is accessible at the beginning of each round, we obtain a similar guarantee without the bounded gradient condition, and establish an $\Omega(\sqrt{T(1+P_T)})$ lower bound to confirm the optimality.
null
Marginalised Gaussian Processes with Nested Sampling
https://papers.nips.cc/paper_files/paper/2021/hash/712a67567ec10c52c2b966224cf94d1e-Abstract.html
Fergus Simpson, Vidhi Lalchand, Carl Edward Rasmussen
https://papers.nips.cc/paper_files/paper/2021/hash/712a67567ec10c52c2b966224cf94d1e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12666-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/712a67567ec10c52c2b966224cf94d1e-Paper.pdf
https://openreview.net/forum?id=zHj5fx11jQC
https://papers.nips.cc/paper_files/paper/2021/file/712a67567ec10c52c2b966224cf94d1e-Supplemental.pdf
Gaussian Process models are a rich distribution over functions with inductive biases controlled by a kernel function. Learning occurs through optimisation of the kernel hyperparameters using the marginal likelihood as the objective. This work proposes nested sampling as a means of marginalising kernel hyperparameters, because it is a technique that is well-suited to exploring complex, multi-modal distributions. We benchmark against Hamiltonian Monte Carlo on time-series and two-dimensional regression tasks, finding that a principled approach to quantifying hyperparameter uncertainty substantially improves the quality of prediction intervals.
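A minimal sketch of what marginalising kernel hyperparameters means for the predictive mean: a weighted mixture of GP posterior means over hyperparameter samples. In the paper the samples and weights would come from nested sampling; here they are assumed given, and the RBF kernel and function names are illustrative.

```python
import numpy as np

def rbf(x1, x2, lengthscale, variance):
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, lengthscale, variance, noise):
    K = rbf(x_train, x_train, lengthscale, variance) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train, lengthscale, variance)
    return Ks @ np.linalg.solve(K, y_train)

def marginal_mean(x_train, y_train, x_test, hyper_samples, weights):
    """Predictive mean marginalised over kernel hyperparameters.

    hyper_samples: list of (lengthscale, variance, noise) tuples; in the paper
    these and the weights would come from nested sampling, here they are assumed.
    """
    weights = np.asarray(weights) / np.sum(weights)
    means = [gp_posterior_mean(x_train, y_train, x_test, *h) for h in hyper_samples]
    return np.sum(weights[:, None] * np.array(means), axis=0)

x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x)
samples = [(0.2, 1.0, 1e-2), (0.4, 0.8, 1e-2), (0.1, 1.2, 1e-2)]
print(marginal_mean(x, y, np.array([0.25, 0.75]), samples, [0.5, 0.3, 0.2]))
```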
null
Provable Benefits of Actor-Critic Methods for Offline Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2021/hash/713fd63d76c8a57b16fc433fb4ae718a-Abstract.html
Andrea Zanette, Martin J Wainwright, Emma Brunskill
https://papers.nips.cc/paper_files/paper/2021/hash/713fd63d76c8a57b16fc433fb4ae718a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12667-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/713fd63d76c8a57b16fc433fb4ae718a-Paper.pdf
https://openreview.net/forum?id=EnmG3G5SYR
https://papers.nips.cc/paper_files/paper/2021/file/713fd63d76c8a57b16fc433fb4ae718a-Supplemental.pdf
Actor-critic methods are widely used in offline reinforcement learning practice, but are not so well-understood theoretically. We propose a new offline actor-critic algorithm that naturally incorporates the pessimism principle, leading to several key advantages compared to the state of the art. The algorithm can operate when the Bellman evaluation operator is closed with respect to the action value function of the actor's policies; this is a more general setting than the low-rank MDP model. Despite the added generality, the procedure is computationally tractable as it involves the solution of a sequence of second-order programs. We prove an upper bound on the suboptimality gap of the policy returned by the procedure that depends on the data coverage of any arbitrary, possibly data dependent comparator policy. The achievable guarantee is complemented with a minimax lower bound that is matching up to logarithmic factors.
null
Bayesian Bellman Operators
https://papers.nips.cc/paper_files/paper/2021/hash/7180cffd6a8e829dacfc2a31b3f72ece-Abstract.html
Mattie Fellows, Kristian Hartikainen, Shimon Whiteson
https://papers.nips.cc/paper_files/paper/2021/hash/7180cffd6a8e829dacfc2a31b3f72ece-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12668-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/7180cffd6a8e829dacfc2a31b3f72ece-Paper.pdf
https://openreview.net/forum?id=_MQBBpJzoZd
https://papers.nips.cc/paper_files/paper/2021/file/7180cffd6a8e829dacfc2a31b3f72ece-Supplemental.zip
We introduce a novel perspective on Bayesian reinforcement learning (RL); whereas existing approaches infer a posterior over the transition distribution or Q-function, we characterise the uncertainty in the Bellman operator. Our Bayesian Bellman operator (BBO) framework is motivated by the insight that when bootstrapping is introduced, model-free approaches actually infer a posterior over Bellman operators, not value functions. In this paper, we use BBO to provide a rigorous theoretical analysis of model-free Bayesian RL to better understand its relationship to established frequentist RL methodologies. We prove that Bayesian solutions are consistent with frequentist RL solutions, even when approximate inference is used, and derive conditions for which convergence properties hold. Empirically, we demonstrate that algorithms derived from the BBO framework have sophisticated deep exploration properties that enable them to solve continuous control tasks at which state-of-the-art regularised actor-critic algorithms fail catastrophically.
null
Uncertainty Calibration for Ensemble-Based Debiasing Methods
https://papers.nips.cc/paper_files/paper/2021/hash/71a8b2ffe0b594a5c1b3c28090384fd7-Abstract.html
Ruibin Xiong, Yimeng Chen, Liang Pang, Xueqi Cheng, Zhi-Ming Ma, Yanyan Lan
https://papers.nips.cc/paper_files/paper/2021/hash/71a8b2ffe0b594a5c1b3c28090384fd7-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12669-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/71a8b2ffe0b594a5c1b3c28090384fd7-Paper.pdf
https://openreview.net/forum?id=4azYdmhHCG
https://papers.nips.cc/paper_files/paper/2021/file/71a8b2ffe0b594a5c1b3c28090384fd7-Supplemental.pdf
Ensemble-based debiasing methods have been shown effective in mitigating the reliance of classifiers on specific dataset bias, by exploiting the output of a bias-only model to adjust the learning target. In this paper, we focus on the bias-only model in these ensemble-based methods, which plays an important role but has not gained much attention in the existing literature. Theoretically, we prove that the debiasing performance can be damaged by inaccurate uncertainty estimations of the bias-only model. Empirically, we show that existing bias-only models fall short in producing accurate uncertainty estimations. Motivated by these findings, we propose to conduct calibration on the bias-only model, thus achieving a three-stage ensemble-based debiasing framework, including bias modeling, model calibrating, and debiasing. Experimental results on NLI and fact verification tasks show that our proposed three-stage debiasing framework consistently outperforms the traditional two-stage one in out-of-distribution accuracy.
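The abstract does not specify which calibration method is used, so as one common possibility only, here is a sketch of temperature scaling fitted on validation data for a bias-only model; all names and the grid-search procedure are hypothetical.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    probs = softmax(logits, T)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(val_logits, val_labels, grid=np.linspace(0.5, 5.0, 46)):
    """Pick the temperature minimising validation NLL of the bias-only model.
    Temperature scaling is one standard calibration method; the paper's exact
    calibration procedure may differ."""
    return min(grid, key=lambda T: nll(val_logits, val_labels, T))

rng = np.random.default_rng(3)
logits = rng.normal(size=(200, 3)) * 4.0      # overconfident bias-only logits
labels = rng.integers(0, 3, size=200)
T = fit_temperature(logits, labels)
calibrated = softmax(logits, T)               # would then adjust the main model's target
print(T)
```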
null
Provably Faster Algorithms for Bilevel Optimization
https://papers.nips.cc/paper_files/paper/2021/hash/71cc107d2e0408e60a3d3c44f47507bd-Abstract.html
Junjie Yang, Kaiyi Ji, Yingbin Liang
https://papers.nips.cc/paper_files/paper/2021/hash/71cc107d2e0408e60a3d3c44f47507bd-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12670-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/71cc107d2e0408e60a3d3c44f47507bd-Paper.pdf
https://openreview.net/forum?id=10anajdGZm
https://papers.nips.cc/paper_files/paper/2021/file/71cc107d2e0408e60a3d3c44f47507bd-Supplemental.pdf
Bilevel optimization has been widely applied in many important machine learning applications such as hyperparameter optimization and meta-learning. Recently, several momentum-based algorithms have been proposed to solve bilevel optimization problems faster. However, those momentum-based algorithms do not achieve provably better computational complexity than the $\mathcal{\widetilde O}(\epsilon^{-2})$ complexity of SGD-based algorithms. In this paper, we propose two new algorithms for bilevel optimization, where the first algorithm adopts momentum-based recursive iterations, and the second algorithm adopts recursive gradient estimations in nested loops to decrease the variance. We show that both algorithms achieve the complexity of $\mathcal{\widetilde O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude. Our experiments validate our theoretical results and demonstrate the superior empirical performance of our algorithms in hyperparameter applications.
null
Neo-GNNs: Neighborhood Overlap-aware Graph Neural Networks for Link Prediction
https://papers.nips.cc/paper_files/paper/2021/hash/71ddb91e8fa0541e426a54e538075a5a-Abstract.html
Seongjun Yun, Seoyoon Kim, Junhyun Lee, Jaewoo Kang, Hyunwoo J. Kim
https://papers.nips.cc/paper_files/paper/2021/hash/71ddb91e8fa0541e426a54e538075a5a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12671-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/71ddb91e8fa0541e426a54e538075a5a-Paper.pdf
https://openreview.net/forum?id=Ic9vRN3VpZ
https://papers.nips.cc/paper_files/paper/2021/file/71ddb91e8fa0541e426a54e538075a5a-Supplemental.pdf
Graph Neural Networks (GNNs) have been widely applied to various fields for learning over graph-structured data. They have shown significant improvements over traditional heuristic methods in various tasks such as node classification and graph classification. However, since GNNs heavily rely on smoothed node features rather than graph structure, they often perform worse than simple heuristic methods in link prediction, where the structural information, e.g., overlapped neighborhoods, degrees, and shortest paths, is crucial. To address this limitation, we propose Neighborhood Overlap-aware Graph Neural Networks (Neo-GNNs) that learn useful structural features from an adjacency matrix and estimate overlapped neighborhoods for link prediction. Our Neo-GNNs generalize neighborhood overlap-based heuristic methods and handle overlapped multi-hop neighborhoods. Our extensive experiments on Open Graph Benchmark datasets (OGB) demonstrate that Neo-GNNs consistently achieve state-of-the-art performance in link prediction.
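For context, a sketch of the neighborhood-overlap heuristics (common neighbours, Adamic-Adar) that Neo-GNNs generalize; this is not the Neo-GNN model itself, and the function name is an assumption.

```python
import numpy as np

def overlap_scores(adj, i, j):
    """Classic neighborhood-overlap link-prediction heuristics for a candidate edge (i, j).

    adj: dense symmetric 0/1 adjacency matrix. These are the kinds of structural
    scores that Neo-GNNs learn to generalize (Adamic-Adar assumes each shared
    neighbour has degree > 1 so the log is nonzero)."""
    common = np.flatnonzero(adj[i] * adj[j])          # shared neighbours of i and j
    degrees = adj.sum(axis=1)
    cn = len(common)                                  # common-neighbour count
    aa = np.sum(1.0 / np.log(degrees[common]))        # Adamic-Adar score
    return cn, aa

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 1, 1, 0]])
print(overlap_scores(adj, 0, 3))   # nodes 0 and 3 share neighbours 1 and 2
```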
null
Self-Supervised Multi-Object Tracking with Cross-input Consistency
https://papers.nips.cc/paper_files/paper/2021/hash/71e09b16e21f7b6919bbfc43f6a5b2f0-Abstract.html
Favyen Bastani, Songtao He, Samuel Madden
https://papers.nips.cc/paper_files/paper/2021/hash/71e09b16e21f7b6919bbfc43f6a5b2f0-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12672-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/71e09b16e21f7b6919bbfc43f6a5b2f0-Paper.pdf
https://openreview.net/forum?id=Vc2SXubOUuf
https://papers.nips.cc/paper_files/paper/2021/file/71e09b16e21f7b6919bbfc43f6a5b2f0-Supplemental.pdf
In this paper, we propose a self-supervised learning procedure for training a robust multi-object tracking (MOT) model given only unlabeled video. While several self-supervisory learning signals have been proposed in prior work on single-object tracking, such as color propagation and cycle-consistency, these signals are not effective for training RNN models, which are needed to achieve accurate MOT: they yield degenerate models that, for instance, always match new detections to tracks with the closest initial detections. We propose a novel self-supervisory signal that we call cross-input consistency: we construct two distinct inputs for the same sequence of video, by hiding different information about the sequence in each input. We then compute tracks in that sequence by applying an RNN model independently on each input, and train the model to produce consistent tracks across the two inputs. We evaluate our unsupervised method on MOT17 and KITTI --- remarkably, we find that, despite training only on unlabeled video, our unsupervised approach outperforms four supervised methods published in the last 1--2 years, including Tracktor++, FAMNet, GSM, and mmMOT.
null
Tree in Tree: from Decision Trees to Decision Graphs
https://papers.nips.cc/paper_files/paper/2021/hash/71f6278d140af599e06ad9bf1ba03cb0-Abstract.html
Bingzhao Zhu, Mahsa Shoaran
https://papers.nips.cc/paper_files/paper/2021/hash/71f6278d140af599e06ad9bf1ba03cb0-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12673-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/71f6278d140af599e06ad9bf1ba03cb0-Paper.pdf
https://openreview.net/forum?id=SlXwiSeyE1
https://papers.nips.cc/paper_files/paper/2021/file/71f6278d140af599e06ad9bf1ba03cb0-Supplemental.pdf
Decision trees have been widely used as classifiers in many machine learning applications thanks to their lightweight and interpretable decision process. This paper introduces the Tree in Tree decision graph (TnT), a framework that extends the conventional decision tree to a more generic and powerful directed acyclic graph. TnT constructs decision graphs by recursively growing decision trees inside the internal or leaf nodes instead of greedy training. The time complexity of TnT is linear in the number of nodes in the graph, so it can construct decision graphs on large datasets. Compared to decision trees, we show that TnT achieves better classification performance with reduced model size, both as a stand-alone classifier and as a base-estimator in bagging/AdaBoost ensembles. Our proposed model is a novel, more efficient, and accurate alternative to the widely-used decision trees.
null
Test-time Collective Prediction
https://papers.nips.cc/paper_files/paper/2021/hash/722caafb4825ef5d8670710fa29087cf-Abstract.html
Celestine Mendler-Dünner, Wenshuo Guo, Stephen Bates, Michael Jordan
https://papers.nips.cc/paper_files/paper/2021/hash/722caafb4825ef5d8670710fa29087cf-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12674-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/722caafb4825ef5d8670710fa29087cf-Paper.pdf
https://openreview.net/forum?id=hrkY-fe8nJV
https://papers.nips.cc/paper_files/paper/2021/file/722caafb4825ef5d8670710fa29087cf-Supplemental.pdf
An increasingly common setting in machine learning involves multiple parties, each with their own data, who want to jointly make predictions on future test points. Agents wish to benefit from the collective expertise of the full set of agents to make better predictions than they would individually, but may not be willing to release labeled data or model parameters. In this work, we explore a decentralized mechanism to make collective predictions at test time, that is inspired by the literature in social science on human consensus-making. Building on a query model to facilitate information exchange among agents, our approach leverages each agent’s pre-trained model without relying on external validation, model retraining, or data pooling. A theoretical analysis shows that our approach recovers inverse mean-squared-error (MSE) weighting in the large-sample limit which is known to be the optimal way to combine independent, unbiased estimators. Empirically, we demonstrate that our scheme effectively combines models with differing quality across the input space: the proposed consensus prediction achieves significant gains over classical model averaging, and even outperforms weighted averaging schemes that have access to additional validation data. Finally, we propose a decentralized Jackknife procedure as a tool to evaluate the sensitivity of the collective predictions with respect to a single agent's opinion.
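A minimal sketch of the inverse-MSE weighting rule that the paper's decentralized scheme is shown to recover in the large-sample limit; the query-based consensus mechanism itself is not reproduced, and the per-agent MSE estimates below are assumed given.

```python
import numpy as np

def inverse_mse_combine(predictions, mse_estimates):
    """Combine per-agent predictions with weights proportional to 1/MSE,
    the optimal rule for independent, unbiased estimators.

    predictions:   (n_agents, n_points) point predictions.
    mse_estimates: (n_agents,) estimated mean-squared errors of each agent.
    """
    w = 1.0 / np.asarray(mse_estimates)
    w = w / w.sum()
    return w @ np.asarray(predictions)

preds = np.array([[1.0, 2.0, 3.0],
                  [1.2, 1.8, 3.3],
                  [0.7, 2.4, 2.9]])
# The most accurate agent (lowest MSE) dominates the consensus prediction.
print(inverse_mse_combine(preds, mse_estimates=[0.1, 0.4, 0.2]))
```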
null
A Continuous Mapping For Augmentation Design
https://papers.nips.cc/paper_files/paper/2021/hash/7230b2b03e2da37352abf1a659545b44-Abstract.html
Keyu Tian, Chen Lin, Ser Nam Lim, Wanli Ouyang, Puneet Dokania, Philip Torr
https://papers.nips.cc/paper_files/paper/2021/hash/7230b2b03e2da37352abf1a659545b44-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12675-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/7230b2b03e2da37352abf1a659545b44-Paper.pdf
https://openreview.net/forum?id=gB4hvxTzQLQ
https://papers.nips.cc/paper_files/paper/2021/file/7230b2b03e2da37352abf1a659545b44-Supplemental.pdf
Automated data augmentation (ADA) techniques have played an important role in boosting the performance of deep models. Such techniques mostly aim to optimize a parameterized distribution over a discrete augmentation space and are thus restricted by the discretization of the search space, which is normally handcrafted. To overcome these limitations, we take the first step toward constructing a continuous mapping from $\mathbb{R}^d$ to image transformations (an augmentation space). Using this mapping, we take a novel approach where 1) we pose ADA as a continuous optimization problem over the parameters of the augmentation distribution; and 2) we use Stochastic Gradient Langevin Dynamics to learn and sample augmentations. This allows us to potentially explore the space of infinitely many possible augmentations, which was otherwise not possible due to the discretization of the space. This view of ADA is radically different from the standard discretization-based view of ADA, and it opens avenues for utilizing the vast array of efficient gradient-based algorithms available for continuous optimization problems. Results over multiple benchmarks demonstrate the efficiency improvement of this work compared with previous methods.
null
Neural Routing by Memory
https://papers.nips.cc/paper_files/paper/2021/hash/7241bd19bb709da0f46807bde88aed25-Abstract.html
Kaipeng Zhang, Zhenqiang Li, Zhifeng Li, Wei Liu, Yoichi Sato
https://papers.nips.cc/paper_files/paper/2021/hash/7241bd19bb709da0f46807bde88aed25-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12676-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/7241bd19bb709da0f46807bde88aed25-Paper.pdf
https://openreview.net/forum?id=gbEreV4H2Jv
null
Recent Convolutional Neural Networks (CNNs) have achieved significant success by stacking multiple convolutional blocks, named procedures in this paper, to extract semantic features. However, they use the same procedure sequence for all inputs, regardless of the intermediate features. This paper proffers a simple yet effective idea of constructing parallel procedures and assigning similar intermediate features to the same specialized procedures in a divide-and-conquer fashion. It relieves each procedure's learning difficulty and thus leads to superior performance. Specifically, we propose a routing-by-memory mechanism for existing CNN architectures. In each stage of the network, we introduce parallel Procedural Units (PUs). A PU consists of a memory head and a procedure. The memory head maintains a summary of a type of features. For an intermediate feature, we search its closest memory and forward it to the corresponding procedure in both training and testing. In this way, different procedures are tailored to different features and therefore tackle them better. Networks with the proposed mechanism can be trained efficiently using a four-step training strategy. Experimental results show that our method improves VGGNet, ResNet, and EfficientNet's accuracies on Tiny ImageNet, ImageNet, and CIFAR-100 benchmarks with a negligible extra computational cost.
null
GeoMol: Torsional Geometric Generation of Molecular 3D Conformer Ensembles
https://papers.nips.cc/paper_files/paper/2021/hash/725215ed82ab6306919b485b81ff9615-Abstract.html
Octavian Ganea, Lagnajit Pattanaik, Connor Coley, Regina Barzilay, Klavs Jensen, William Green, Tommi Jaakkola
https://papers.nips.cc/paper_files/paper/2021/hash/725215ed82ab6306919b485b81ff9615-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12677-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/725215ed82ab6306919b485b81ff9615-Paper.pdf
https://openreview.net/forum?id=af_hng9tuNj
https://papers.nips.cc/paper_files/paper/2021/file/725215ed82ab6306919b485b81ff9615-Supplemental.pdf
Prediction of a molecule’s 3D conformer ensemble from the molecular graph holds a key role in areas of cheminformatics and drug discovery. Existing generative models have several drawbacks including lack of modeling important molecular geometry elements (e.g., torsion angles), separate optimization stages prone to error accumulation, and the need for structure fine-tuning based on approximate classical force-fields or computationally expensive methods. We propose GEOMOL --- an end-to-end, non-autoregressive, and SE(3)-invariant machine learning approach to generate distributions of low-energy molecular 3D conformers. Leveraging the power of message passing neural networks (MPNNs) to capture local and global graph information, we predict local atomic 3D structures and torsion angles, avoiding unnecessary over-parameterization of the geometric degrees of freedom (e.g., one angle per non-terminal bond). Such local predictions suffice both for the training loss computation and for the full deterministic conformer assembly (at test time). We devise a non-adversarial optimal transport based loss function to promote diverse conformer generation. GEOMOL predominantly outperforms popular open-source, commercial, or state-of-the-art machine learning (ML) models, while achieving significant speed-ups. We expect such differentiable 3D structure generators to significantly impact molecular modeling and related applications.
null
CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression
https://papers.nips.cc/paper_files/paper/2021/hash/7274a60c83145b1082be9caa91926ecf-Abstract.html
Zhize Li, Peter Richtarik
https://papers.nips.cc/paper_files/paper/2021/hash/7274a60c83145b1082be9caa91926ecf-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12678-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/7274a60c83145b1082be9caa91926ecf-Paper.pdf
https://openreview.net/forum?id=eNB4WXnNczJ
https://papers.nips.cc/paper_files/paper/2021/file/7274a60c83145b1082be9caa91926ecf-Supplemental.pdf
Due to the high communication cost in distributed and federated learning, methods relying on compressed communication are becoming increasingly popular. Besides, the best theoretically and practically performing gradient-type methods invariably rely on some form of acceleration/momentum to reduce the number of communications (faster convergence), e.g., Nesterov's accelerated gradient descent [31, 32] and Adam [14]. In order to combine the benefits of communication compression and convergence acceleration, we propose a \emph{compressed and accelerated} gradient method based on ANITA [20] for distributed optimization, which we call CANITA. Our CANITA achieves the \emph{first accelerated rate} $O\bigg(\sqrt{\Big(1+\sqrt{\frac{\omega^3}{n}}\Big)\frac{L}{\epsilon}} + \omega\big(\frac{1}{\epsilon}\big)^{\frac{1}{3}}\bigg)$, which improves upon the state-of-the-art non-accelerated rate $O\left((1+\frac{\omega}{n})\frac{L}{\epsilon} + \frac{\omega^2+\omega}{\omega+n}\frac{1}{\epsilon}\right)$ of DIANA [12] for distributed general convex problems, where $\epsilon$ is the target error, $L$ is the smooth parameter of the objective, $n$ is the number of machines/devices, and $\omega$ is the compression parameter (larger $\omega$ means more compression can be applied, and no compression implies $\omega=0$). Our results show that as long as the number of devices $n$ is large (often true in distributed/federated learning), or the compression $\omega$ is not very high, CANITA achieves the faster convergence rate $O\Big(\sqrt{\frac{L}{\epsilon}}\Big)$, i.e., the number of communication rounds is $O\Big(\sqrt{\frac{L}{\epsilon}}\Big)$ (vs. $O\big(\frac{L}{\epsilon}\big)$ achieved by previous works). As a result, CANITA enjoys the advantages of both compression (compressed communication in each round) and acceleration (much fewer communication rounds).
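For intuition about the compression parameter $\omega$ in the rates above, a sketch of the classic unbiased rand-$k$ sparsifier, for which $\omega = d/k - 1$; this illustrates the standard compression model assumed, not the CANITA algorithm itself.

```python
import numpy as np

def rand_k(x, k, rng):
    """Unbiased rand-k sparsifier: keep k random coordinates, rescale by d/k.

    For this operator the variance parameter is omega = d/k - 1; omega = 0
    (k = d) corresponds to no compression, matching the abstract's convention."""
    d = x.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(x)
    out[idx] = x[idx] * (d / k)
    return out

rng = np.random.default_rng(4)
g = rng.normal(size=10)
compressed = rand_k(g, k=3, rng=rng)
# Unbiasedness check: averaging many compressions approaches the original vector.
avg = np.mean([rand_k(g, 3, rng) for _ in range(20000)], axis=0)
print(np.max(np.abs(avg - g)))
```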
null
Drop-DTW: Aligning Common Signal Between Sequences While Dropping Outliers
https://papers.nips.cc/paper_files/paper/2021/hash/729c68884bd359ade15d5f163166738a-Abstract.html
Mikita Dvornik, Isma Hadji, Konstantinos G. Derpanis, Animesh Garg, Allan Jepson
https://papers.nips.cc/paper_files/paper/2021/hash/729c68884bd359ade15d5f163166738a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12679-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/729c68884bd359ade15d5f163166738a-Paper.pdf
https://openreview.net/forum?id=A_Aeb-XLozL
https://papers.nips.cc/paper_files/paper/2021/file/729c68884bd359ade15d5f163166738a-Supplemental.zip
In this work, we consider the problem of sequence-to-sequence alignment for signals containing outliers. Assuming the absence of outliers, the standard Dynamic Time Warping (DTW) algorithm efficiently computes the optimal alignment between two (generally) variable-length sequences. While DTW is robust to temporal shifts and dilations of the signal, it fails to align sequences in a meaningful way in the presence of outliers that can be arbitrarily interspersed in the sequences. To address this problem, we introduce Drop-DTW, a novel algorithm that aligns the common signal between the sequences while automatically dropping the outlier elements from the matching. The entire procedure is implemented as a single dynamic program that is efficient and fully differentiable. In our experiments, we show that Drop-DTW is a robust similarity measure for sequence retrieval and demonstrate its effectiveness as a training loss on diverse applications. With Drop-DTW, we address temporal step localization on instructional videos, representation learning from noisy videos, and cross-modal representation learning for audio-visual retrieval and localization. In all applications, we take a weakly- or unsupervised approach and demonstrate state-of-the-art results under these settings.
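For reference, a sketch of the classic DTW dynamic program that Drop-DTW extends; the extra states that allow dropping outlier elements at a cost are not implemented here.

```python
import numpy as np

def dtw(x, y, dist=lambda a, b: abs(a - b)):
    """Classic dynamic-time-warping cost between 1-D sequences x and y.

    Drop-DTW extends this dynamic program so that elements can also be
    dropped at a cost; only plain DTW is shown in this sketch."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(x[i - 1], y[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw([0.0, 1.0, 2.0, 3.0], [0.0, 0.9, 1.1, 2.0, 3.2]))
```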
null
Safe Reinforcement Learning with Natural Language Constraints
https://papers.nips.cc/paper_files/paper/2021/hash/72f67e70f6b7cdc4cc893edaddf0c4c6-Abstract.html
Tsung-Yen Yang, Michael Y Hu, Yinlam Chow, Peter J Ramadge, Karthik Narasimhan
https://papers.nips.cc/paper_files/paper/2021/hash/72f67e70f6b7cdc4cc893edaddf0c4c6-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12680-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/72f67e70f6b7cdc4cc893edaddf0c4c6-Paper.pdf
https://openreview.net/forum?id=qxKh67NNJ2I
https://papers.nips.cc/paper_files/paper/2021/file/72f67e70f6b7cdc4cc893edaddf0c4c6-Supplemental.pdf
While safe reinforcement learning (RL) holds great promise for many practical applications like robotics or autonomous cars, current approaches require specifying constraints in mathematical form. Such specifications demand domain expertise, limiting the adoption of safe RL. In this paper, we propose learning to interpret natural language constraints for safe RL. To this end, we first introduce HAZARDWORLD, a new multi-task benchmark that requires an agent to optimize reward while not violating constraints specified in free-form text. We then develop an agent with a modular architecture that can interpret and adhere to such textual constraints while learning new tasks. Our model consists of (1) a constraint interpreter that encodes textual constraints into spatial and temporal representations of forbidden states, and (2) a policy network that uses these representations to produce a policy achieving minimal constraint violations during training. Across different domains in HAZARDWORLD, we show that our method achieves higher rewards (up to 11x) and fewer constraint violations (by 1.8x) compared to existing approaches. However, in terms of absolute performance, HAZARDWORLD still poses significant challenges for agents to learn efficiently, motivating the need for future work.
null
Compositional Modeling of Nonlinear Dynamical Systems with ODE-based Random Features
https://papers.nips.cc/paper_files/paper/2021/hash/72fe6f9fdab5f4d465ac6da028e4544c-Abstract.html
Thomas McDonald, Mauricio Álvarez
https://papers.nips.cc/paper_files/paper/2021/hash/72fe6f9fdab5f4d465ac6da028e4544c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12681-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/72fe6f9fdab5f4d465ac6da028e4544c-Paper.pdf
https://openreview.net/forum?id=9fHO2sjdZq8
https://papers.nips.cc/paper_files/paper/2021/file/72fe6f9fdab5f4d465ac6da028e4544c-Supplemental.pdf
Effectively modeling phenomena present in highly nonlinear dynamical systems whilst also accurately quantifying uncertainty is a challenging task, which often requires problem-specific techniques. We present a novel, domain-agnostic approach to tackling this problem, using compositions of physics-informed random features, derived from ordinary differential equations. The architecture of our model leverages recent advances in approximate inference for deep Gaussian processes, such as layer-wise weight-space approximations which allow us to incorporate random Fourier features, and stochastic variational inference for approximate Bayesian inference. We provide evidence that our model is capable of capturing highly nonlinear behaviour in real-world multivariate time series data. In addition, we find that our approach achieves comparable performance to a number of other probabilistic models on benchmark regression tasks.
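A sketch of the generic random Fourier feature ingredient mentioned in the abstract, approximating an RBF kernel; the paper's physics-informed features derived from ODEs and its deep GP composition are not reproduced, and the function name is an assumption.

```python
import numpy as np

def random_fourier_features(X, num_features, lengthscale, rng):
    """Standard random Fourier features approximating an RBF kernel.

    The paper composes physics-informed features derived from ODEs; this
    sketch only shows the generic RFF building block."""
    d = X.shape[1]
    W = rng.normal(scale=1.0 / lengthscale, size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

rng = np.random.default_rng(5)
X = rng.normal(size=(6, 3))
Phi = random_fourier_features(X, num_features=500, lengthscale=1.0, rng=rng)
# Phi @ Phi.T approximates the RBF kernel matrix k(x, x') = exp(-||x - x'||^2 / 2).
print(np.round(Phi @ Phi.T, 2))
```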
null
Implicit Semantic Response Alignment for Partial Domain Adaptation
https://papers.nips.cc/paper_files/paper/2021/hash/731b03008e834f92a03085ef47061c4a-Abstract.html
Wenxiao Xiao, Zhengming Ding, Hongfu Liu
https://papers.nips.cc/paper_files/paper/2021/hash/731b03008e834f92a03085ef47061c4a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12682-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/731b03008e834f92a03085ef47061c4a-Paper.pdf
https://openreview.net/forum?id=LNXTIrMqyGz
https://papers.nips.cc/paper_files/paper/2021/file/731b03008e834f92a03085ef47061c4a-Supplemental.pdf
Partial Domain Adaptation (PDA) addresses the unsupervised domain adaptation problem where the target label space is a subset of the source label space. Most state-of-the-art PDA methods tackle the inconsistent label space by assigning weights to classes or individual samples, in an attempt to discard the source data that belongs to the irrelevant classes. However, we believe samples from those extra categories would still contain valuable information to promote positive transfer. In this paper, we propose Implicit Semantic Response Alignment to explore the intrinsic relationships among different categories by applying a weighted schema on the feature level. Specifically, we design a class2vec module to extract the implicit semantic topics from the visual features. With an attention layer, we calculate the semantic response according to each implicit semantic topic. Then semantic responses of source and target data are aligned to retain the relevant information contained in multiple categories by weighting the features, instead of samples. Experiments on several cross-domain benchmark datasets demonstrate the effectiveness of our method over the state-of-the-art PDA methods. Moreover, we provide in-depth analyses to further explore implicit semantic alignment.
null
ToAlign: Task-Oriented Alignment for Unsupervised Domain Adaptation
https://papers.nips.cc/paper_files/paper/2021/hash/731c83db8d2ff01bdc000083fd3c3740-Abstract.html
Guoqiang Wei, Cuiling Lan, Wenjun Zeng, Zhizheng Zhang, Zhibo Chen
https://papers.nips.cc/paper_files/paper/2021/hash/731c83db8d2ff01bdc000083fd3c3740-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12683-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/731c83db8d2ff01bdc000083fd3c3740-Paper.pdf
https://openreview.net/forum?id=XP9SZpjZkq
https://papers.nips.cc/paper_files/paper/2021/file/731c83db8d2ff01bdc000083fd3c3740-Supplemental.pdf
Unsupervised domain adaptive classification intends to improve the classification performance on an unlabeled target domain. To alleviate the adverse effect of domain shift, many approaches align the source and target domains in the feature space. However, a feature is usually taken as a whole for alignment without explicitly making domain alignment proactively serve the classification task, leading to a sub-optimal solution. In this paper, we propose an effective Task-oriented Alignment (ToAlign) for unsupervised domain adaptation (UDA). We study what features should be aligned across domains and propose to make the domain alignment proactively serve classification by performing feature decomposition and alignment under the guidance of the prior knowledge induced from the classification task itself. Particularly, we explicitly decompose a feature in the source domain into a task-related/discriminative feature that should be aligned, and a task-irrelevant feature that should be avoided/ignored, based on the classification meta-knowledge. Extensive experimental results on various benchmarks (e.g., Office-Home, Visda-2017, and DomainNet) under different domain adaptation settings demonstrate the effectiveness of ToAlign, which helps achieve state-of-the-art performance. The code is publicly available at https://github.com/microsoft/UDA.
null
Prior-independent Dynamic Auctions for a Value-maximizing Buyer
https://papers.nips.cc/paper_files/paper/2021/hash/735143e9ff8c47def504f1ba0442df98-Abstract.html
Yuan Deng, Hanrui Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/735143e9ff8c47def504f1ba0442df98-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12684-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/735143e9ff8c47def504f1ba0442df98-Paper.pdf
https://openreview.net/forum?id=iU88qpcgh2X
https://papers.nips.cc/paper_files/paper/2021/file/735143e9ff8c47def504f1ba0442df98-Supplemental.pdf
We study prior-independent dynamic auction design with production costs for a value-maximizing buyer, a paradigm that has recently become prevalent with the development of automatic bidding algorithms in advertising platforms. In contrast to a utility-maximizing buyer, who maximizes the difference between her total value and total payment, a value-maximizing buyer aims to maximize her total value subject to a return on investment (ROI) constraint. Our main result is a dynamic mechanism with regret $\tilde{O}(T^{2/3})$, where $T$ is the time horizon, against the first-best benchmark, i.e., the maximum amount of revenue the seller can extract assuming all values of the buyer are publicly known.
null
Safe Reinforcement Learning by Imagining the Near Future
https://papers.nips.cc/paper_files/paper/2021/hash/73b277c11266681122132d024f53a75b-Abstract.html
Garrett Thomas, Yuping Luo, Tengyu Ma
https://papers.nips.cc/paper_files/paper/2021/hash/73b277c11266681122132d024f53a75b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12685-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/73b277c11266681122132d024f53a75b-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=vIDBSGl3vzl
https://papers.nips.cc/paper_files/paper/2021/file/73b277c11266681122132d024f53a75b-Supplemental.zip
Safe reinforcement learning is a promising path toward applying reinforcement learning algorithms to real-world problems, where suboptimal behaviors may lead to actual negative consequences. In this work, we focus on the setting where unsafe states can be avoided by planning ahead a short time into the future. In this setting, a model-based agent with a sufficiently accurate model can avoid unsafe states. We devise a model-based algorithm that heavily penalizes unsafe trajectories, and derive guarantees that our algorithm can avoid unsafe states under certain assumptions. Experiments demonstrate that our algorithm can achieve competitive rewards with fewer safety violations in several continuous control tasks.
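A minimal sketch of the "plan a short time ahead" idea: roll a learned dynamics model forward a few steps under the current policy and heavily penalize actions whose imagined trajectory hits an unsafe state. The callables (dynamics, policy, is_unsafe) and the penalty value are placeholders, not the paper's API.

def penalized_reward(state, action, reward, dynamics, policy, is_unsafe,
                     horizon=5, penalty=100.0):
    s = dynamics(state, action)                  # imagine one step ahead
    for _ in range(horizon - 1):
        if is_unsafe(s):
            return reward - penalty              # unsafe within the lookahead window
        s = dynamics(s, policy(s))               # keep imagining under the current policy
    return reward - penalty if is_unsafe(s) else reward

# toy 1-D example: states below -1 are unsafe, action -1 drifts the state left
r = penalized_reward(0.0, -1.0, 1.0,
                     dynamics=lambda s, a: s + 0.5 * a,
                     policy=lambda s: -1.0,
                     is_unsafe=lambda s: s < -1.0)   # r = -99.0: penalized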
null
Contrastive Active Inference
https://papers.nips.cc/paper_files/paper/2021/hash/73c730319cf839f143bf40954448ce39-Abstract.html
Pietro Mazzaglia, Tim Verbelen, Bart Dhoedt
https://papers.nips.cc/paper_files/paper/2021/hash/73c730319cf839f143bf40954448ce39-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12686-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/73c730319cf839f143bf40954448ce39-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=5t5FPwzE6mq
https://papers.nips.cc/paper_files/paper/2021/file/73c730319cf839f143bf40954448ce39-Supplemental.pdf
Active inference is a unifying theory for perception and action resting upon the idea that the brain maintains an internal model of the world by minimizing free energy. From a behavioral perspective, active inference agents can be seen as self-evidencing beings that act to fulfill their optimistic predictions, namely preferred outcomes or goals. In contrast, reinforcement learning requires human-designed rewards to accomplish any desired outcome. Although active inference could provide a more natural self-supervised objective for control, its applicability has been limited because of the shortcomings in scaling the approach to complex environments. In this work, we propose a contrastive objective for active inference that strongly reduces the computational burden in learning the agent's generative model and planning future actions. Our method performs notably better than likelihood-based active inference in image-based tasks, while also being computationally cheaper and easier to train. We compare to reinforcement learning agents that have access to human-designed reward functions, showing that our approach closely matches their performance. Finally, we also show that contrastive methods perform significantly better in the case of distractors in the environment and that our method is able to generalize goals to variations in the background.
null
Overparameterization Improves Robustness to Covariate Shift in High Dimensions
https://papers.nips.cc/paper_files/paper/2021/hash/73fed7fd472e502d8908794430511f4d-Abstract.html
Nilesh Tripuraneni, Ben Adlam, Jeffrey Pennington
https://papers.nips.cc/paper_files/paper/2021/hash/73fed7fd472e502d8908794430511f4d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12687-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/73fed7fd472e502d8908794430511f4d-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=PxMfDdPnTfV
https://papers.nips.cc/paper_files/paper/2021/file/73fed7fd472e502d8908794430511f4d-Supplemental.pdf
A significant obstacle in the development of robust machine learning models is \emph{covariate shift}, a form of distribution shift that occurs when the input distributions of the training and test sets differ while the conditional label distributions remain the same. Despite the prevalence of covariate shift in real-world applications, a theoretical understanding in the context of modern machine learning has remained lacking. In this work, we examine the exact high-dimensional asymptotics of random feature regression under covariate shift and present a precise characterization of the limiting test error, bias, and variance in this setting. Our results motivate a natural partial order over covariate shifts that provides a sufficient condition for determining when the shift will harm (or even help) test performance. We find that overparameterized models exhibit enhanced robustness to covariate shift, providing one of the first theoretical explanations for this ubiquitous empirical phenomenon. Additionally, our analysis reveals an exact linear relationship between the in-distribution and out-of-distribution generalization performance, offering an explanation for this surprising recent observation.
null
Logarithmic Regret in Feature-based Dynamic Pricing
https://papers.nips.cc/paper_files/paper/2021/hash/742141ceda6b8f6786609d31c8ef129f-Abstract.html
Jianyu Xu, Yu-Xiang Wang
https://papers.nips.cc/paper_files/paper/2021/hash/742141ceda6b8f6786609d31c8ef129f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12688-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/742141ceda6b8f6786609d31c8ef129f-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=ii5mGEbRo93
https://papers.nips.cc/paper_files/paper/2021/file/742141ceda6b8f6786609d31c8ef129f-Supplemental.pdf
Feature-based dynamic pricing is an increasingly popular model of setting prices for highly differentiated products with applications in digital marketing, online sales, real estate and so on. The problem was formally studied as an online learning problem [Javanmard & Nazerzadeh, 2019] where a seller needs to propose prices on the fly for a sequence of $T$ products based on their features $x$ while having a small regret relative to the best ---"omniscient"--- pricing strategy she could have come up with in hindsight. We revisit this problem and provide two algorithms (EMLP and ONSP) for stochastic and adversarial feature settings, respectively, and prove the optimal $O(d\log{T})$ regret bounds for both. In comparison, the best existing results are $O\left(\min\left\{\frac{1}{\lambda_{\min}^2}\log{T}, \sqrt{T}\right\}\right)$ and $O(T^{2/3})$ respectively, with $\lambda_{\min}$ being the smallest eigenvalue of $\mathbb{E}[xx^T]$ that could be arbitrarily close to $0$. We also prove an $\Omega(\sqrt{T})$ information-theoretic lower bound for a slightly more general setting, which demonstrates that "knowing-the-demand-curve" leads to an exponential improvement in feature-based dynamic pricing.
null
Dimension-free empirical entropy estimation
https://papers.nips.cc/paper_files/paper/2021/hash/74378afe5e8b20910cf1f939e57f0480-Abstract.html
Doron Cohen, Aryeh Kontorovich, Aaron Koolyk, Geoffrey Wolfer
https://papers.nips.cc/paper_files/paper/2021/hash/74378afe5e8b20910cf1f939e57f0480-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12689-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/74378afe5e8b20910cf1f939e57f0480-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=HFvPfNDTShj
https://papers.nips.cc/paper_files/paper/2021/file/74378afe5e8b20910cf1f939e57f0480-Supplemental.zip
We seek an entropy estimator for discrete distributions with fully empirical accuracy bounds. As stated, this goal is infeasible without some prior assumptions on the distribution. We discover that a certain information moment assumption renders the problem feasible. We argue that the moment assumption is natural and, in some sense, {\em minimalistic} --- weaker than finite support or tail decay conditions. Under the moment assumption, we provide the first finite-sample entropy estimates for infinite alphabets, nearly recovering the known minimax rates. Moreover, we demonstrate that our empirical bounds are significantly sharper than the state-of-the-art bounds, for various natural distributions and non-trivial sample regimes. Along the way, we give a dimension-free analogue of the Cover-Thomas result on entropy continuity (with respect to total variation distance) for finite alphabets, which may be of independent interest.
null
Towards Biologically Plausible Convolutional Networks
https://papers.nips.cc/paper_files/paper/2021/hash/746b02b6680562f44ad7526675bac026-Abstract.html
Roman Pogodin, Yash Mehta, Timothy Lillicrap, Peter E Latham
https://papers.nips.cc/paper_files/paper/2021/hash/746b02b6680562f44ad7526675bac026-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12690-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/746b02b6680562f44ad7526675bac026-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=ibD-yZEVBUX
https://papers.nips.cc/paper_files/paper/2021/file/746b02b6680562f44ad7526675bac026-Supplemental.pdf
Convolutional networks are ubiquitous in deep learning. They are particularly useful for images, as they reduce the number of parameters, reduce training time, and increase accuracy. However, as a model of the brain they are seriously problematic, since they require weight sharing - something real neurons simply cannot do. Consequently, while neurons in the brain can be locally connected (one of the features of convolutional networks), they cannot be convolutional. Locally connected but non-convolutional networks, however, significantly underperform convolutional ones. This is troublesome for studies that use convolutional networks to explain activity in the visual system. Here we study plausible alternatives to weight sharing that aim at the same regularization principle, which is to make each neuron within a pool react similarly to identical inputs. The most natural way to do that is by showing the network multiple translations of the same image, akin to saccades in animal vision. However, this approach requires many translations, and doesn't remove the performance gap. We propose instead to add lateral connectivity to a locally connected network, and allow learning via Hebbian plasticity. This requires the network to pause occasionally for a sleep-like phase of "weight sharing". This method enables locally connected networks to achieve nearly convolutional performance on ImageNet and improves their fit to the ventral stream data, thus supporting convolutional networks as a model of the visual stream.
null
DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification
https://papers.nips.cc/paper_files/paper/2021/hash/747d3443e319a22747fbb873e8b2f9f2-Abstract.html
Yongming Rao, Wenliang Zhao, Benlin Liu, Jiwen Lu, Jie Zhou, Cho-Jui Hsieh
https://papers.nips.cc/paper_files/paper/2021/hash/747d3443e319a22747fbb873e8b2f9f2-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12691-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/747d3443e319a22747fbb873e8b2f9f2-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=kR95DuwwXHZ
null
Attention is sparse in vision transformers. We observe that the final prediction in vision transformers is based on only a subset of the most informative tokens, which is sufficient for accurate image recognition. Based on this observation, we propose a dynamic token sparsification framework to prune redundant tokens progressively and dynamically based on the input. Specifically, we devise a lightweight prediction module to estimate the importance score of each token given the current features. The module is added to different layers to prune redundant tokens hierarchically. To optimize the prediction module in an end-to-end manner, we propose an attention masking strategy to differentiably prune a token by blocking its interactions with other tokens. Benefiting from the nature of self-attention, the unstructured sparse tokens are still hardware friendly, which makes our framework easy to achieve actual speed-up. By hierarchically pruning 66% of the input tokens, our method reduces FLOPs by 31%$\sim$37% and improves the throughput by over 40% while the drop in accuracy is within 0.5% for various vision transformers. Equipped with the dynamic token sparsification framework, DynamicViT models can achieve very competitive complexity/accuracy trade-offs compared to state-of-the-art CNNs and vision transformers on ImageNet. Code is available at https://github.com/raoyongming/DynamicViT
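A simplified PyTorch sketch of the attention masking strategy: a tiny two-way head scores each token, a hard Gumbel-softmax sample gives a differentiable keep mask, and dropped tokens are hidden from everyone except themselves. The hierarchical pruning schedule and keep-ratio losses of the paper are omitted, and the names are illustrative.

import torch
import torch.nn.functional as F

def masked_attention(x, q_proj, k_proj, v_proj, score_head, tau=1.0):
    """x: (B, N, d); returns attended tokens plus the binary keep decisions."""
    keep = F.gumbel_softmax(score_head(x), tau=tau, hard=True)[..., 0]   # (B, N) in {0, 1}
    q, k, v = q_proj(x), k_proj(x), v_proj(x)
    attn = (q @ k.transpose(-2, -1)) / x.size(-1) ** 0.5                 # (B, N, N)
    eye = torch.eye(x.size(1), dtype=torch.bool, device=x.device)
    block = (keep[:, None, :] == 0) & ~eye       # dropped tokens stay visible only to themselves
    attn = attn.masked_fill(block, float('-inf'))
    return F.softmax(attn, dim=-1) @ v, keep

B, N, d = 2, 16, 64
x = torch.randn(B, N, d)
lin = lambda: torch.nn.Linear(d, d)
out, keep = masked_attention(x, lin(), lin(), lin(), torch.nn.Linear(d, 2))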
null
Learning Transferable Adversarial Perturbations
https://papers.nips.cc/paper_files/paper/2021/hash/7486cef2522ee03547cfb970a404a874-Abstract.html
Krishna kanth Nakka, Mathieu Salzmann
https://papers.nips.cc/paper_files/paper/2021/hash/7486cef2522ee03547cfb970a404a874-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12692-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/7486cef2522ee03547cfb970a404a874-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=sIDvIyR5I1R
https://papers.nips.cc/paper_files/paper/2021/file/7486cef2522ee03547cfb970a404a874-Supplemental.pdf
While effective, deep neural networks (DNNs) are vulnerable to adversarial attacks. In particular, recent work has shown that such attacks could be generated by another deep network, leading to significant speedups over optimization-based perturbations. However, the ability of such generative methods to generalize to different test-time situations has not been systematically studied. In this paper, we, therefore, investigate the transferability of generated perturbations when the conditions at inference time differ from the training ones in terms of the target architecture, target data, and target task. Specifically, we identify the mid-level features extracted by the intermediate layers of DNNs as common ground across different architectures, datasets, and tasks. This lets us introduce a loss function based on such mid-level features to learn an effective, transferable perturbation generator. Our experiments demonstrate that our approach outperforms the state-of-the-art universal and transferable attack strategies.
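A hedged sketch of the mid-level feature objective: the perturbation generator is trained so that the perturbed image's intermediate features move away from the clean image's, on the premise that mid-level features are common ground across architectures, datasets and tasks. Here `generator` and `mid_layer_features` are placeholders, and the exact distance used by the authors may differ.

import torch
import torch.nn.functional as F

def midlevel_feature_objective(x, generator, mid_layer_features, eps=10 / 255):
    delta = generator(x)                                    # unbounded perturbation
    x_adv = torch.clamp(x + eps * torch.tanh(delta), 0, 1)  # stay in an L_inf ball and valid range
    f_clean = mid_layer_features(x).detach()
    f_adv = mid_layer_features(x_adv)
    # minimizing this similarity w.r.t. the generator pushes the mid-level features apart
    return F.cosine_similarity(f_adv.flatten(1), f_clean.flatten(1)).mean()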
null
PortaSpeech: Portable and High-Quality Generative Text-to-Speech
https://papers.nips.cc/paper_files/paper/2021/hash/748d6b6ed8e13f857ceaa6cfbdca14b8-Abstract.html
Yi Ren, Jinglin Liu, Zhou Zhao
https://papers.nips.cc/paper_files/paper/2021/hash/748d6b6ed8e13f857ceaa6cfbdca14b8-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12693-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/748d6b6ed8e13f857ceaa6cfbdca14b8-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=xmJsuh8xlq
https://papers.nips.cc/paper_files/paper/2021/file/748d6b6ed8e13f857ceaa6cfbdca14b8-Supplemental.zip
Non-autoregressive text-to-speech (NAR-TTS) models such as FastSpeech 2 and Glow-TTS can synthesize high-quality speech from the given text in parallel. After analyzing two kinds of generative NAR-TTS models (VAE and normalizing flow), we find that: VAE is good at capturing the long-range semantics features (e.g., prosody) even with small model size but suffers from blurry and unnatural results; and normalizing flow is good at reconstructing the frequency bin-wise details but performs poorly when the number of model parameters is limited. Inspired by these observations, to generate diverse speech with natural details and rich prosody using a lightweight architecture, we propose PortaSpeech, a portable and high-quality generative text-to-speech model. Specifically, 1) to model both the prosody and mel-spectrogram details accurately, we adopt a lightweight VAE with an enhanced prior followed by a flow-based post-net with strong conditional inputs as the main architecture. 2) To further compress the model size and memory footprint, we introduce the grouped parameter sharing mechanism to the affine coupling layers in the post-net. 3) To improve the expressiveness of synthesized speech and reduce the dependency on accurate fine-grained alignment between text and speech, we propose a linguistic encoder with mixture alignment combining hard word-level alignment and soft phoneme-level alignment, which explicitly extracts word-level semantic information. Experimental results show that PortaSpeech outperforms other TTS models in both voice quality and prosody modeling in terms of subjective and objective evaluation metrics, and shows only a slight performance degradation when reducing the model parameters to 6.7M (about 4x model size and 3x runtime memory compression ratio compared with FastSpeech 2). Our extensive ablation studies demonstrate that each design in PortaSpeech is effective.
null
Exponential Graph is Provably Efficient for Decentralized Deep Training
https://papers.nips.cc/paper_files/paper/2021/hash/74e1ed8b55ea44fd7dbb685c412568a4-Abstract.html
Bicheng Ying, Kun Yuan, Yiming Chen, Hanbin Hu, PAN PAN, Wotao Yin
https://papers.nips.cc/paper_files/paper/2021/hash/74e1ed8b55ea44fd7dbb685c412568a4-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12694-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/74e1ed8b55ea44fd7dbb685c412568a4-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=l2UWXn5iBQI
https://papers.nips.cc/paper_files/paper/2021/file/74e1ed8b55ea44fd7dbb685c412568a4-Supplemental.pdf
Decentralized SGD is an emerging training method for deep learning known for requiring much less (and thus faster) communication per iteration, which relaxes the averaging step in parallel SGD to inexact averaging. The less exact the averaging, however, the more iterations training takes. Therefore, the key to making decentralized SGD efficient is to realize nearly-exact averaging using little communication. This requires a skillful choice of communication topology, which is an under-studied topic in decentralized optimization. In this paper, we study so-called exponential graphs where every node is connected to $O(\log(n))$ neighbors and $n$ is the total number of nodes. This work proves that such graphs can lead to both fast communication and effective averaging simultaneously. We also discover that a sequence of $\log(n)$ one-peer exponential graphs, in which each node communicates with one single neighbor per iteration, can together achieve exact averaging. This favorable property enables the one-peer exponential graph to average as effectively as its static counterpart while communicating more efficiently. We apply these exponential graphs in decentralized (momentum) SGD to obtain the state-of-the-art balance between per-iteration communication and iteration complexity among all commonly-used topologies. Experimental results on a variety of tasks and models demonstrate that decentralized (momentum) SGD over exponential graphs promises both fast and high-quality training. Our code is implemented through BlueFog and available at https://github.com/Bluefog-Lib/NeurIPS2021-Exponential-Graph.
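The one-peer averaging property is easy to check numerically: with n a power of two, cycling once through the log2(n) one-peer exponential graphs (at round k, node i averages only with node (i + 2^k) mod n) recovers the exact global average. The sketch below is just this sanity check, not the BlueFog implementation.

import numpy as np

n = 16
x = np.random.randn(n)
y = x.copy()
for k in range(int(np.log2(n))):
    partner = (np.arange(n) + 2 ** k) % n
    y = 0.5 * (y + y[partner])          # each node talks to a single neighbor this round
print(np.allclose(y, x.mean()))         # True: exact averaging after log2(n) one-peer rounds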
null
CLIP-It! Language-Guided Video Summarization
https://papers.nips.cc/paper_files/paper/2021/hash/7503cfacd12053d309b6bed5c89de212-Abstract.html
Medhini Narasimhan, Anna Rohrbach, Trevor Darrell
https://papers.nips.cc/paper_files/paper/2021/hash/7503cfacd12053d309b6bed5c89de212-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12695-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/7503cfacd12053d309b6bed5c89de212-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=3ccoZ40Us0N
https://papers.nips.cc/paper_files/paper/2021/file/7503cfacd12053d309b6bed5c89de212-Supplemental.zip
A generic video summary is an abridged version of a video that conveys the whole story and features the most important scenes. Yet the importance of scenes in a video is often subjective, and users should have the option of customizing the summary by using natural language to specify what is important to them. Further, existing models for fully automatic generic summarization have not exploited available language models, which can serve as an effective prior for saliency. This work introduces CLIP-It, a single framework for addressing both generic and query-focused video summarization, typically approached separately in the literature. We propose a language-guided multimodal transformer that learns to score frames in a video based on their importance relative to one another and their correlation with a user-defined query (for query-focused summarization) or an automatically generated dense video caption (for generic video summarization). Our model can be extended to the unsupervised setting by training without ground-truth supervision. We outperform baselines and prior work by a significant margin on both standard video summarization datasets (TVSum and SumMe) and a query-focused video summarization dataset (QFVS). Particularly, we achieve large improvements in the transfer setting, attesting to our method's strong generalization capabilities.
null
Learning Treatment Effects in Panels with General Intervention Patterns
https://papers.nips.cc/paper_files/paper/2021/hash/7504adad8bb96320eb3afdd4df6e1f60-Abstract.html
Vivek Farias, Andrew Li, Tianyi Peng
https://papers.nips.cc/paper_files/paper/2021/hash/7504adad8bb96320eb3afdd4df6e1f60-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12696-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/7504adad8bb96320eb3afdd4df6e1f60-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=gEXbJVhVK5_
null
The problem of causal inference with panel data is a central econometric question. The following is a fundamental version of this problem: Let $M^*$ be a low rank matrix and $E$ be a zero-mean noise matrix. For a `treatment' matrix $Z$ with entries in $\{0,1\}$ we observe the matrix $O$ with entries $O_{ij} := M^*_{ij} + E_{ij} + \mathcal{T}_{ij} Z_{ij}$ where $\mathcal{T}_{ij} $ are unknown, heterogenous treatment effects. The problem requires we estimate the average treatment effect $\tau^* := \sum_{ij} \mathcal{T}_{ij} Z_{ij} / \sum_{ij} Z_{ij}$. The synthetic control paradigm provides an approach to estimating $\tau^*$ when $Z$ places support on a single row. This paper extends that framework to allow rate-optimal recovery of $\tau^*$ for general $Z$, thus broadly expanding its applicability. Our guarantees are the first of their type in this general setting. Computational experiments on synthetic and real-world data show a substantial advantage over competing estimators.
null
Lossy Compression for Lossless Prediction
https://papers.nips.cc/paper_files/paper/2021/hash/7535bbb91c8fde347ad861f293126633-Abstract.html
Yann Dubois, Benjamin Bloem-Reddy, Karen Ullrich, Chris J. Maddison
https://papers.nips.cc/paper_files/paper/2021/hash/7535bbb91c8fde347ad861f293126633-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12697-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/7535bbb91c8fde347ad861f293126633-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=wZrOOO9XBn
https://papers.nips.cc/paper_files/paper/2021/file/7535bbb91c8fde347ad861f293126633-Supplemental.pdf
Most data is automatically collected and only ever "seen" by algorithms. Yet, data compressors preserve perceptual fidelity rather than just the information needed by algorithms performing downstream tasks. In this paper, we characterize the bit-rate required to ensure high performance on all predictive tasks that are invariant under a set of transformations, such as data augmentations. Based on our theory, we design unsupervised objectives for training neural compressors. Using these objectives, we train a generic image compressor that achieves substantial rate savings (more than 1000x on ImageNet) compared to JPEG on 8 datasets, without decreasing downstream classification performance.
null
From Optimality to Robustness: Adaptive Re-Sampling Strategies in Stochastic Bandits
https://papers.nips.cc/paper_files/paper/2021/hash/75429d136f65d2d6168b9b6c5f6ec951-Abstract.html
Dorian Baudry, Patrick Saux, Odalric-Ambrym Maillard
https://papers.nips.cc/paper_files/paper/2021/hash/75429d136f65d2d6168b9b6c5f6ec951-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12698-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/75429d136f65d2d6168b9b6c5f6ec951-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=t6EL1tTI3D
https://papers.nips.cc/paper_files/paper/2021/file/75429d136f65d2d6168b9b6c5f6ec951-Supplemental.pdf
The stochastic multi-armed bandit problem has been extensively studied under standard assumptions on the arms' distributions (e.g., bounded with known support, exponential family, etc.). These assumptions are suitable for many real-world problems but sometimes they require knowledge (on tails, for instance) that may not be precisely accessible to the practitioner, raising the question of the robustness of bandit algorithms to model misspecification. In this paper we study a generic \emph{Dirichlet Sampling} (DS) algorithm, based on pairwise comparisons of empirical indices computed with \textit{re-sampling} of the arms' observations and a data-dependent \textit{exploration bonus}. We show that different variants of this strategy achieve provably optimal regret guarantees when the distributions are bounded and logarithmic regret for semi-bounded distributions with a mild quantile condition. We also show that a simple tuning achieves robustness with respect to a large class of unbounded distributions, at the cost of slightly worse than logarithmic asymptotic regret. We finally provide numerical experiments showing the merits of DS in a decision-making problem on synthetic agriculture data.
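A minimal sketch of the re-sampling index at the core of Dirichlet Sampling: a challenger arm draws a random Dirichlet re-weighting of its own observations (a Bayesian-bootstrap mean) and is played when that index beats the leader's empirical mean. The data-dependent exploration bonus and the full duelling schedule of the paper are omitted, and the numbers are toy values.

import numpy as np

rng = np.random.default_rng(0)

def dirichlet_index(rewards):
    w = rng.dirichlet(np.ones(len(rewards)))    # random convex weights over past observations
    return float(w @ np.asarray(rewards))

leader_mean = np.mean([0.6, 0.7, 0.65])
challenger_rewards = [0.4, 0.9, 0.5]
play_challenger = dirichlet_index(challenger_rewards) >= leader_mean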
null
CCVS: Context-aware Controllable Video Synthesis
https://papers.nips.cc/paper_files/paper/2021/hash/757b505cfd34c64c85ca5b5690ee5293-Abstract.html
Guillaume Le Moing, Jean Ponce, Cordelia Schmid
https://papers.nips.cc/paper_files/paper/2021/hash/757b505cfd34c64c85ca5b5690ee5293-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12699-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/757b505cfd34c64c85ca5b5690ee5293-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=VvGIT6AOGsx
https://papers.nips.cc/paper_files/paper/2021/file/757b505cfd34c64c85ca5b5690ee5293-Supplemental.pdf
This presentation introduces a self-supervised learning approach to the synthesis of new video clips from old ones, with several new key elements for improved spatial resolution and realism: it conditions the synthesis process on contextual information for temporal continuity and on ancillary information for fine control. The prediction model is doubly autoregressive, in the latent space of an autoencoder for forecasting, and in image space for updating contextual information, which is also used to enforce spatio-temporal consistency through a learnable optical flow module. Adversarial training of the autoencoder in the appearance and temporal domains is used to further improve the realism of its output. A quantizer inserted between the encoder and the transformer in charge of forecasting future frames in latent space (and its inverse inserted between the transformer and the decoder) adds even more flexibility by affording simple mechanisms for handling multimodal ancillary information for controlling the synthesis process (e.g., a few sample frames, an audio track, a trajectory in image space) and taking into account the intrinsically uncertain nature of the future by allowing multiple predictions. Experiments with an implementation of the proposed approach give very good qualitative and quantitative results on multiple tasks and standard benchmarks.
null
An Online Riemannian PCA for Stochastic Canonical Correlation Analysis
https://papers.nips.cc/paper_files/paper/2021/hash/758a06618c69880a6cee5314ee42d52f-Abstract.html
Zihang Meng, Rudrasis Chakraborty, Vikas Singh
https://papers.nips.cc/paper_files/paper/2021/hash/758a06618c69880a6cee5314ee42d52f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12700-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/758a06618c69880a6cee5314ee42d52f-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=21uqYo8soks
https://papers.nips.cc/paper_files/paper/2021/file/758a06618c69880a6cee5314ee42d52f-Supplemental.pdf
We present an efficient stochastic algorithm (RSG+) for canonical correlation analysis (CCA) using a reparametrization of the projection matrices. We show how this reparametrization (into structured matrices), simple in hindsight, directly presents an opportunity to repurpose/adjust mature techniques for numerical optimization on Riemannian manifolds. Our developments nicely complement existing methods for this problem which either require $O(d^3)$ time complexity per iteration with $O(\frac{1}{\sqrt{t}})$ convergence rate (where $d$ is the dimensionality) or only extract the top $1$ component with $O(\frac{1}{t})$ convergence rate. In contrast, our algorithm offers a strict improvement for this classical problem: it achieves $O(d^2k)$ runtime complexity per iteration for extracting the top $k$ canonical components with $O(\frac{1}{t})$ convergence rate. While the paper primarily focuses on the formulation and technical analysis of its properties, our experiments show that the empirical behavior on common datasets is quite promising. We also explore a potential application in training fair models where the label of the protected attribute is missing or otherwise unavailable.
null
Predify: Augmenting deep neural networks with brain-inspired predictive coding dynamics
https://papers.nips.cc/paper_files/paper/2021/hash/75c58d36157505a600e0695ed0b3a22d-Abstract.html
Bhavin Choksi, Milad Mozafari, Callum Biggs O'May, B. ADOR, Andrea Alamia, Rufin VanRullen
https://papers.nips.cc/paper_files/paper/2021/hash/75c58d36157505a600e0695ed0b3a22d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12701-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/75c58d36157505a600e0695ed0b3a22d-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=v4vjMuXF-B
https://papers.nips.cc/paper_files/paper/2021/file/75c58d36157505a600e0695ed0b3a22d-Supplemental.pdf
Deep neural networks excel at image classification, but their performance is far less robust to input perturbations than human perception. In this work we explore whether this shortcoming may be partly addressed by incorporating brain-inspired recurrent dynamics in deep convolutional networks. We take inspiration from a popular framework in neuroscience: "predictive coding". At each layer of the hierarchical model, generative feedback "predicts" (i.e., reconstructs) the pattern of activity in the previous layer. The reconstruction errors are used to iteratively update the network’s representations across timesteps, and to optimize the network's feedback weights over the natural image dataset--a form of unsupervised training. We show that implementing this strategy into two popular networks, VGG16 and EfficientNetB0, improves their robustness against various corruptions and adversarial attacks. We hypothesize that other feedforward networks could similarly benefit from the proposed framework. To promote research in this direction, we provide an open-sourced PyTorch-based package called \textit{Predify}, which can be used to implement and investigate the impacts of the predictive coding dynamics in any convolutional neural network.
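A schematic single timestep of the recurrent dynamics, assuming generic feedforward and feedback modules; the exact update coefficients and their per-layer values in the Predify package differ, so treat beta, lam and alpha below as placeholders. Each layer's new state mixes its feedforward drive, the feedback prediction from the layer above and its previous state, minus a gradient step on its own reconstruction error of the layer below.

import torch

def predictive_coding_step(reps, feedforward, feedback, beta=0.4, lam=0.3, alpha=0.01):
    """reps[0] is the fixed input; feedback[i] predicts layer i from layer i+1."""
    new_reps = [reps[0]]
    for n in range(1, len(reps)):
        e_n = reps[n].detach().requires_grad_(True)
        err = ((feedback[n - 1](e_n) - reps[n - 1]) ** 2).mean()   # reconstruction error of layer n-1
        grad = torch.autograd.grad(err, e_n)[0]
        has_top = n + 1 < len(reps)
        ff = feedforward[n - 1](reps[n - 1])
        fb = lam * feedback[n](reps[n + 1]) if has_top else 0.0
        mem = (1 - beta - (lam if has_top else 0.0)) * reps[n]
        new_reps.append((beta * ff + fb + mem - alpha * grad).detach())
    return new_reps

d0, d1, d2 = 8, 6, 4
ff = [torch.nn.Linear(d0, d1), torch.nn.Linear(d1, d2)]
fb = [torch.nn.Linear(d1, d0), torch.nn.Linear(d2, d1)]
reps = [torch.randn(1, d0), torch.randn(1, d1), torch.randn(1, d2)]
reps = predictive_coding_step(reps, ff, fb)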
null
Deep Extrapolation for Attribute-Enhanced Generation
https://papers.nips.cc/paper_files/paper/2021/hash/75da5036f659fe64b53f3d9b39412967-Abstract.html
Alvin Chan, Ali Madani, Ben Krause, Nikhil Naik
https://papers.nips.cc/paper_files/paper/2021/hash/75da5036f659fe64b53f3d9b39412967-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12702-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/75da5036f659fe64b53f3d9b39412967-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=NCDMYD2y5kK
https://papers.nips.cc/paper_files/paper/2021/file/75da5036f659fe64b53f3d9b39412967-Supplemental.pdf
Attribute extrapolation in sample generation is challenging for deep neural networks operating beyond the training distribution. We formulate a new task for extrapolation in sequence generation, focusing on natural language and proteins, and propose GENhance, a generative framework that enhances attributes through a learned latent space. Trained on movie reviews and a computed protein stability dataset, GENhance can generate strongly-positive text reviews and highly stable protein sequences without being exposed to similar data during training. We release our benchmark tasks and models to contribute to the study of generative modeling extrapolation and data-driven design in biology and chemistry.
null
Generalized DataWeighting via Class-Level Gradient Manipulation
https://papers.nips.cc/paper_files/paper/2021/hash/75ebb02f92fc30a8040bbd625af999f1-Abstract.html
Can Chen, Shuhao Zheng, Xi Chen, Erqun Dong, Xue (Steve) Liu, Hao Liu, Dejing Dou
https://papers.nips.cc/paper_files/paper/2021/hash/75ebb02f92fc30a8040bbd625af999f1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12703-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/75ebb02f92fc30a8040bbd625af999f1-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=xRrdX_wV1JI
https://papers.nips.cc/paper_files/paper/2021/file/75ebb02f92fc30a8040bbd625af999f1-Supplemental.pdf
Label noise and class imbalance are two major issues coexisting in real-world datasets. To alleviate the two issues, state-of-the-art methods reweight each instance by leveraging a small amount of clean and unbiased data. Yet, these methods overlook class-level information within each instance, which can be further utilized to improve performance. To this end, in this paper, we propose Generalized Data Weighting (GDW) to simultaneously mitigate label noise and class imbalance by manipulating gradients at the class level. To be specific, GDW unrolls the loss gradient to class-level gradients by the chain rule and reweights the flow of each gradient separately. In this way, GDW achieves remarkable performance improvement on both issues. Aside from the performance gain, GDW efficiently obtains class-level weights without introducing any extra computational cost compared with instance weighting methods. Specifically, GDW performs a gradient descent step on class-level weights, which only relies on intermediate gradients. Extensive experiments in various settings verify the effectiveness of GDW. For example, GDW outperforms state-of-the-art methods by $2.56\%$ under the $60\%$ uniform noise setting in CIFAR10. Our code is available at https://github.com/GGchen1997/GDW-NIPS2021.
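The class-level reweighting can be illustrated in a few lines: for softmax cross-entropy, the gradient with respect to the logits has one component per class, and a hook rescales each class's flow before it propagates into the network. The fixed weights below are illustrative; GDW learns instance-wise class-level weights with a small clean meta set.

import torch
import torch.nn.functional as F

logits = torch.randn(4, 3, requires_grad=True)
targets = torch.tensor([0, 2, 1, 0])
class_weights = torch.tensor([1.0, 0.5, 2.0])        # illustrative per-class flow weights

logits.register_hook(lambda g: g * class_weights)    # rescale each class's gradient flow
loss = F.cross_entropy(logits, targets)
loss.backward()
print(logits.grad)                                   # column c is scaled by class_weights[c]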
null
Slow Learning and Fast Inference: Efficient Graph Similarity Computation via Knowledge Distillation
https://papers.nips.cc/paper_files/paper/2021/hash/75fc093c0ee742f6dddaa13fff98f104-Abstract.html
Can Qin, Handong Zhao, Lichen Wang, Huan Wang, Yulun Zhang, Yun Fu
https://papers.nips.cc/paper_files/paper/2021/hash/75fc093c0ee742f6dddaa13fff98f104-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12704-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/75fc093c0ee742f6dddaa13fff98f104-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=kAFq29tuVw0
null
Graph Similarity Computation (GSC) is essential to wide-ranging graph applications such as retrieval, plagiarism/anomaly detection, etc. The exact computation of graph similarity, e.g., Graph Edit Distance (GED), is an NP-hard problem that cannot be exactly solved within an adequate time given large graphs. Thanks to the strong representation power of graph neural networks (GNNs), a variety of GNN-based inexact methods have emerged. To capture the subtle differences across graphs, the key to success is designing dense interaction with feature fusion at an early stage, which, however, is a trade-off between speed and accuracy. On the slow-learning side, this paper proposes a novel early-fusion approach by designing a co-attention-based feature fusion network on multilevel GNN features. To further improve the speed without much accuracy drop, we introduce an efficient GSC solution by distilling the knowledge from the slow early-fusion model to a student model for fast inference. Such a student model also enables the offline collection of individual graph embeddings, speeding up inference by orders of magnitude. To address instability during knowledge transfer, we decompose the dynamic joint embedding into static pseudo-individual ones for precise teacher-student alignment. The experimental analysis on real-world datasets demonstrates the superiority of our approach over state-of-the-art methods in both accuracy and efficiency. In particular, we speed up the prior art by more than 10x on the benchmark AIDS data.
null
Meta Learning Backpropagation And Improving It
https://papers.nips.cc/paper_files/paper/2021/hash/7608de7a475c0c878f60960d72a92654-Abstract.html
Louis Kirsch, Jürgen Schmidhuber
https://papers.nips.cc/paper_files/paper/2021/hash/7608de7a475c0c878f60960d72a92654-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12705-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/7608de7a475c0c878f60960d72a92654-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=hhU9TEvB6AF
https://papers.nips.cc/paper_files/paper/2021/file/7608de7a475c0c878f60960d72a92654-Supplemental.pdf
Many concepts have been proposed for meta learning with neural networks (NNs), e.g., NNs that learn to reprogram fast weights, Hebbian plasticity, learned learning rules, and meta recurrent NNs. Our Variable Shared Meta Learning (VSML) unifies the above and demonstrates that simple weight-sharing and sparsity in an NN is sufficient to express powerful learning algorithms (LAs) in a reusable fashion. A simple implementation of VSML where the weights of a neural network are replaced by tiny LSTMs allows for implementing the backpropagation LA solely by running in forward-mode. It can even meta learn new LAs that differ from online backpropagation and generalize to datasets outside of the meta training distribution without explicit gradient calculation. Introspection reveals that our meta learned LAs learn through fast association in a way that is qualitatively different from gradient descent.
null
Posterior Meta-Replay for Continual Learning
https://papers.nips.cc/paper_files/paper/2021/hash/761b42cfff120aac30045f7a110d0256-Abstract.html
Christian Henning, Maria Cervera, Francesco D'Angelo, Johannes von Oswald, Regina Traber, Benjamin Ehret, Seijin Kobayashi, Benjamin F. Grewe, João Sacramento
https://papers.nips.cc/paper_files/paper/2021/hash/761b42cfff120aac30045f7a110d0256-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12706-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/761b42cfff120aac30045f7a110d0256-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=AhuVLaYp6gn
https://papers.nips.cc/paper_files/paper/2021/file/761b42cfff120aac30045f7a110d0256-Supplemental.pdf
Learning a sequence of tasks without access to i.i.d. observations is a widely studied form of continual learning (CL) that remains challenging. In principle, Bayesian learning directly applies to this setting, since recursive and one-off Bayesian updates yield the same result. In practice, however, recursive updating often leads to poor trade-off solutions across tasks because approximate inference is necessary for most models of interest. Here, we describe an alternative Bayesian approach where task-conditioned parameter distributions are continually inferred from data. We offer a practical deep learning implementation of our framework based on probabilistic task-conditioned hypernetworks, an approach we term posterior meta-replay. Experiments on standard benchmarks show that our probabilistic hypernetworks compress sequences of posterior parameter distributions with virtually no forgetting. We obtain considerable performance gains compared to existing Bayesian CL methods, and identify task inference as our major limiting factor. This limitation has several causes that are independent of the considered sequential setting, opening up new avenues for progress in CL.
null
Optimizing Reusable Knowledge for Continual Learning via Metalearning
https://papers.nips.cc/paper_files/paper/2021/hash/761e6675f9e54673cc778e7fdb2823d2-Abstract.html
Julio Hurtado, Alain Raymond, Alvaro Soto
https://papers.nips.cc/paper_files/paper/2021/hash/761e6675f9e54673cc778e7fdb2823d2-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12707-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/761e6675f9e54673cc778e7fdb2823d2-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=sFyrGPCKQJC
https://papers.nips.cc/paper_files/paper/2021/file/761e6675f9e54673cc778e7fdb2823d2-Supplemental.zip
When learning tasks over time, artificial neural networks suffer from a problem known as Catastrophic Forgetting (CF). This happens when the weights of a network are overwritten during the training of a new task, causing forgetting of old information. To address this issue, we propose MetA Reusable Knowledge, or MARK, a new method that fosters weight reusability instead of overwriting when learning a new task. Specifically, MARK keeps a set of shared weights among tasks. We envision these shared weights as a common Knowledge Base (KB) that is not only used to learn new tasks, but also enriched with new knowledge as the model learns new tasks. The key components behind MARK are two-fold. On the one hand, a metalearning approach provides the key mechanism to incrementally enrich the KB with new knowledge and to foster weight reusability among tasks. On the other hand, a set of trainable masks provides the key mechanism to selectively choose from the KB the relevant weights to solve each task. By using MARK, we achieve state-of-the-art results in several popular benchmarks, surpassing the best performing methods in terms of average accuracy by over 10% on the 20-Split-MiniImageNet dataset, while achieving almost zero forgetting using 55% of the number of parameters. Furthermore, an ablation study provides evidence that MARK is indeed learning reusable knowledge that is selectively used by each task.
null
A sampling-based circuit for optimal decision making
https://papers.nips.cc/paper_files/paper/2021/hash/76444b3132fda0e2aca778051d776f1c-Abstract.html
Camille Rullán Buxó, Cristina Savin
https://papers.nips.cc/paper_files/paper/2021/hash/76444b3132fda0e2aca778051d776f1c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12708-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/76444b3132fda0e2aca778051d776f1c-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=9FREJhzo1q
https://papers.nips.cc/paper_files/paper/2021/file/76444b3132fda0e2aca778051d776f1c-Supplemental.pdf
Many features of human and animal behavior can be understood in the framework of Bayesian inference and optimal decision making, but the biological substrate of such processes is not fully understood. Neural sampling provides a flexible code for probabilistic inference in high dimensions and explains key features of sensory responses under experimental manipulations of uncertainty. However, since it encodes uncertainty implicitly, across time and neurons, it remains unclear how such representations can be used for decision making. Here we propose a spiking network model that maps neural samples of a task-specific marginal distribution into an instantaneous representation of uncertainty via a procedure inspired by online kernel density estimation, so that its output can be readily used for decision making. Our model is consistent with experimental results at the level of single neurons and populations, and makes predictions for how neural responses and decisions could be modulated by uncertainty and prior biases. More generally, our work brings together conflicting perspectives on probabilistic brain computation.
null
Compressed Video Contrastive Learning
https://papers.nips.cc/paper_files/paper/2021/hash/7647966b7343c29048673252e490f736-Abstract.html
Yuqi Huo, Mingyu Ding, Haoyu Lu, Nanyi Fei, Zhiwu Lu, Ji-Rong Wen, Ping Luo
https://papers.nips.cc/paper_files/paper/2021/hash/7647966b7343c29048673252e490f736-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12709-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/7647966b7343c29048673252e490f736-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=RdWt-VDPZEG
null
This work concerns self-supervised video representation learning (SSVRL), one topic that has received much attention recently. Since videos are storage-intensive and contain a rich source of visual content, models designed for SSVRL are expected to be storage- and computation-efficient, as well as effective. However, most existing methods only focus on one of the two objectives, failing to consider both at the same time. In this work, for the first time, the seemingly contradictory goals are simultaneously achieved by exploiting compressed videos and capturing mutual information between two input streams. Specifically, a novel Motion Vector based Cross Guidance Contrastive learning approach (MVCGC) is proposed. For storage and computation efficiency, we choose to directly decode RGB frames and motion vectors (that resemble low-resolution optical flows) from compressed videos on-the-fly. To enhance the representation ability of the motion vectors, hence the effectiveness of our method, we design a cross guidance contrastive learning algorithm based on multi-instance InfoNCE loss, where motion vectors can take supervision signals from RGB frames and vice versa. Comprehensive experiments on two downstream tasks show that our MVCGC yields new state-of-the-art while being significantly more efficient than its competitors.
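A hedged sketch of the cross-guidance objective: RGB-frame and motion-vector embeddings of the same clip are positives, other clips in the batch are negatives, and InfoNCE is applied in both directions so each stream supervises the other. The paper's multi-instance variant (several positives per clip) is collapsed to a single positive here, and the temperature is an illustrative value.

import torch
import torch.nn.functional as F

def cross_guidance_infonce(z_rgb, z_mv, tau=0.07):
    z_rgb, z_mv = F.normalize(z_rgb, dim=1), F.normalize(z_mv, dim=1)
    logits = z_rgb @ z_mv.t() / tau                      # (B, B) cross-modal similarities
    labels = torch.arange(z_rgb.size(0), device=z_rgb.device)
    return 0.5 * (F.cross_entropy(logits, labels) +      # RGB guided by motion vectors
                  F.cross_entropy(logits.t(), labels))   # motion vectors guided by RGB

loss = cross_guidance_infonce(torch.randn(8, 128), torch.randn(8, 128))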
null
Uniform-PAC Bounds for Reinforcement Learning with Linear Function Approximation
https://papers.nips.cc/paper_files/paper/2021/hash/7695ea769f021803c508817dd374bb27-Abstract.html
Jiafan He, Dongruo Zhou, Quanquan Gu
https://papers.nips.cc/paper_files/paper/2021/hash/7695ea769f021803c508817dd374bb27-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12710-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/7695ea769f021803c508817dd374bb27-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=ZOeN0pU8jae
https://papers.nips.cc/paper_files/paper/2021/file/7695ea769f021803c508817dd374bb27-Supplemental.pdf
We study reinforcement learning (RL) with linear function approximation. Existing algorithms for this problem only have high-probability regret and/or Probably Approximately Correct (PAC) sample complexity guarantees, which cannot guarantee the convergence to the optimal policy. In this paper, in order to overcome the limitation of existing algorithms, we propose a new algorithm called FLUTE, which enjoys uniform-PAC convergence to the optimal policy with high probability. The uniform-PAC guarantee is the strongest possible guarantee for reinforcement learning in the literature, which can directly imply both PAC and high probability regret bounds, making our algorithm superior to all existing algorithms with linear function approximation. At the core of our algorithm is a novel minimax value function estimator and a multi-level partition scheme to select the training samples from historical observations. Both of these techniques are new and of independent interest.
null
Attention Bottlenecks for Multimodal Fusion
https://papers.nips.cc/paper_files/paper/2021/hash/76ba9f564ebbc35b1014ac498fafadd0-Abstract.html
Arsha Nagrani, Shan Yang, Anurag Arnab, Aren Jansen, Cordelia Schmid, Chen Sun
https://papers.nips.cc/paper_files/paper/2021/hash/76ba9f564ebbc35b1014ac498fafadd0-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12711-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/76ba9f564ebbc35b1014ac498fafadd0-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=KJ5h-yfUHa
https://papers.nips.cc/paper_files/paper/2021/file/76ba9f564ebbc35b1014ac498fafadd0-Supplemental.pdf
Humans perceive the world by concurrently processing and fusing high-dimensional inputs from multiple modalities such as vision and audio. Machine perception models, in stark contrast, are typically modality-specific and optimised for unimodal benchmarks. A common approach for building multimodal models is to simply combine multiple of these modality-specific architectures using late-stage fusion of final representations or predictions ('late-fusion'). Instead, we introduce a novel transformer-based architecture that uses 'attention bottlenecks' for modality fusion at multiple layers. Compared to traditional pairwise self-attention, these bottlenecks force information between different modalities to pass through a small number of 'bottleneck' latent units, requiring the model to collate and condense the most relevant information in each modality and only share what is necessary. We find that such a strategy improves fusion performance while reducing computational cost. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple audio-visual classification benchmarks including Audioset, Epic-Kitchens and VGGSound. All code and models will be released.
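A stripped-down single fusion layer with attention bottlenecks, assuming two modalities and separate attention blocks per modality: each modality attends over its own tokens plus a handful of shared bottleneck tokens, and the two updated copies of the bottleneck are averaged, so cross-modal information can only flow through those few units. Layer norms, MLPs and multi-layer stacking are omitted, and this module layout is an assumption rather than the paper's exact architecture.

import torch

d, n_bottleneck = 64, 4
attn_a = torch.nn.MultiheadAttention(d, num_heads=4, batch_first=True)
attn_v = torch.nn.MultiheadAttention(d, num_heads=4, batch_first=True)

def bottleneck_fusion(audio, video, z):
    """audio: (B, Na, d), video: (B, Nv, d), z: (B, n_bottleneck, d) shared bottleneck tokens."""
    xa = torch.cat([audio, z], dim=1)
    xv = torch.cat([video, z], dim=1)
    ya, _ = attn_a(xa, xa, xa)                 # audio tokens + bottleneck attend among themselves
    yv, _ = attn_v(xv, xv, xv)                 # video tokens + bottleneck attend among themselves
    z_new = 0.5 * (ya[:, -n_bottleneck:] + yv[:, -n_bottleneck:])   # fusion happens only here
    return ya[:, :audio.size(1)], yv[:, :video.size(1)], z_new

a, v = torch.randn(2, 10, d), torch.randn(2, 12, d)
z = torch.randn(2, n_bottleneck, d)
a, v, z = bottleneck_fusion(a, v, z)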
null
Convergence of adaptive algorithms for constrained weakly convex optimization
https://papers.nips.cc/paper_files/paper/2021/hash/76c073d8a82d9ddaf993300be03ac70f-Abstract.html
Ahmet Alacaoglu, Yura Malitsky, Volkan Cevher
https://papers.nips.cc/paper_files/paper/2021/hash/76c073d8a82d9ddaf993300be03ac70f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12712-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/76c073d8a82d9ddaf993300be03ac70f-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=GNFcszMtYvV
https://papers.nips.cc/paper_files/paper/2021/file/76c073d8a82d9ddaf993300be03ac70f-Supplemental.zip
We analyze the adaptive first order algorithm AMSGrad, for solving a constrained stochastic optimization problem with a weakly convex objective. We prove the $\mathcal{\tilde O}(t^{-1/2})$ rate of convergence for the squared norm of the gradient of Moreau envelope, which is the standard stationarity measure for this class of problems. It matches the known rates that adaptive algorithms enjoy for the specific case of unconstrained smooth nonconvex stochastic optimization. Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly unbounded optimization domains. Finally, we illustrate the applications and extensions of our results to specific problems and algorithms.
null
On the Convergence of Step Decay Step-Size for Stochastic Optimization
https://papers.nips.cc/paper_files/paper/2021/hash/76c538125fc5c9ec6ad1d05650a57de5-Abstract.html
Xiaoyu Wang, Sindri Magnússon, Mikael Johansson
https://papers.nips.cc/paper_files/paper/2021/hash/76c538125fc5c9ec6ad1d05650a57de5-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12713-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/76c538125fc5c9ec6ad1d05650a57de5-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=M-W0asp3fD
https://papers.nips.cc/paper_files/paper/2021/file/76c538125fc5c9ec6ad1d05650a57de5-Supplemental.pdf
The convergence of stochastic gradient descent is highly dependent on the step-size, especially on non-convex problems such as neural network training. Step decay step-size schedules (constant and then cut) are widely used in practice because of their excellent convergence and generalization qualities, but their theoretical properties are not yet well understood. We provide convergence results for step decay in the non-convex regime, ensuring that the gradient norm vanishes at an $\mathcal{O}(\ln T/\sqrt{T})$ rate. We also provide near-optimal (and sometimes provably tight) convergence guarantees for general, possibly non-smooth, convex and strongly convex problems. The practical efficiency of the step decay step-size is demonstrated in several large-scale deep neural network training tasks.
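The schedule itself is simple to state in code: the step-size is held constant within each stage and cut by a fixed factor at stage boundaries. The stage count and decay factor below are illustrative choices, not the values prescribed by the analysis.

def step_decay_lr(t, total_steps, lr0=0.1, num_stages=5, factor=0.5):
    stage = min(t * num_stages // total_steps, num_stages - 1)
    return lr0 * factor ** stage

schedule = [step_decay_lr(t, total_steps=1000) for t in range(1000)]
# 0.1 for the first 200 steps, then 0.05, 0.025, 0.0125, 0.00625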
null
BernNet: Learning Arbitrary Graph Spectral Filters via Bernstein Approximation
https://papers.nips.cc/paper_files/paper/2021/hash/76f1cfd7754a6e4fc3281bcccb3d0902-Abstract.html
Mingguo He, Zhewei Wei, zengfeng Huang, Hongteng Xu
https://papers.nips.cc/paper_files/paper/2021/hash/76f1cfd7754a6e4fc3281bcccb3d0902-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12714-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/76f1cfd7754a6e4fc3281bcccb3d0902-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=WigDnV-_Gq
https://papers.nips.cc/paper_files/paper/2021/file/76f1cfd7754a6e4fc3281bcccb3d0902-Supplemental.zip
Many representative graph neural networks, $e.g.$, GPR-GNN and ChebNet, approximate graph convolutions with graph spectral filters. However, existing work either applies predefined filter weights or learns them without necessary constraints, which may lead to oversimplified or ill-posed filters. To overcome these issues, we propose $\textit{BernNet}$, a novel graph neural network with theoretical support that provides a simple but effective scheme for designing and learning arbitrary graph spectral filters. In particular, for any filter over the normalized Laplacian spectrum of a graph, our BernNet estimates it by an order-$K$ Bernstein polynomial approximation and designs its spectral property by setting the coefficients of the Bernstein basis. Moreover, we can learn the coefficients (and the corresponding filter weights) based on observed graphs and their associated signals and thus achieve the BernNet specialized for the data. Our experiments demonstrate that BernNet can learn arbitrary spectral filters, including complicated band-rejection and comb filters, and it achieves superior performance in real-world graph modeling tasks. Code is available at https://github.com/ivam-he/BernNet.
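The Bernstein filter can be written out on a toy dense graph: h(L) x = sum_k theta_k * C(K, k) / 2^K * (2I - L)^{K-k} L^k x, with L the symmetrically normalized Laplacian (spectrum in [0, 2]) and theta_k the learnable coefficients. The sketch below is a NumPy illustration of that formula, not the released (sparse, propagation-based) implementation; equal coefficients reduce the filter to a constant gain, which makes a handy sanity check.

import numpy as np
from math import comb

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
deg = A.sum(1)
L = np.eye(4) - A / np.sqrt(np.outer(deg, deg))       # normalized Laplacian, eigenvalues in [0, 2]

def bernstein_filter(x, theta):
    K = len(theta) - 1
    out = np.zeros_like(x)
    for k in range(K + 1):
        basis = np.linalg.matrix_power(2 * np.eye(4) - L, K - k) @ np.linalg.matrix_power(L, k)
        out += theta[k] * comb(K, k) / 2 ** K * basis @ x
    return out

x = np.random.randn(4)
y = bernstein_filter(x, theta=0.5 * np.ones(11))      # equal thetas: y == 0.5 * x (constant filter)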
null
Co-evolution Transformer for Protein Contact Prediction
https://papers.nips.cc/paper_files/paper/2021/hash/770f8e448d07586afbf77bb59f698587-Abstract.html
He Zhang, Fusong Ju, Jianwei Zhu, Liang He, Bin Shao, Nanning Zheng, Tie-Yan Liu
https://papers.nips.cc/paper_files/paper/2021/hash/770f8e448d07586afbf77bb59f698587-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12715-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/770f8e448d07586afbf77bb59f698587-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=PcpExudEmDd
https://papers.nips.cc/paper_files/paper/2021/file/770f8e448d07586afbf77bb59f698587-Supplemental.zip
Proteins are the main machinery of life and protein functions are largely determined by their 3D structures. The measurement of the pairwise proximity between amino acids of a protein, known as inter-residue contact map, well characterizes the structural information of a protein. Protein contact prediction (PCP) is an essential building block of many protein structure related applications. The prevalent approach to contact prediction is based on estimating the inter-residue contacts using hand-crafted coevolutionary features derived from multiple sequence alignments (MSAs). To mitigate the information loss caused by hand-crafted features, some recently proposed methods try to learn residue co-evolutions directly from MSAs. These methods generally derive coevolutionary features by aggregating the learned residue representations from individual sequences with equal weights, which is inconsistent with the premise that residue co-evolutions are a reflection of collective covariation patterns of numerous homologous proteins. Moreover, non-homologous residues and gaps commonly exist in MSAs. By aggregating features from all homologs equally, the non-homologous information may cause misestimation of the residue co-evolutions. To overcome these issues, we propose an attention-based architecture, Co-evolution Transformer (CoT), for PCP. CoT jointly considers the information from all homologous sequences in the MSA to better capture global coevolutionary patterns. To mitigate the influence of the non-homologous information, CoT selectively aggregates the features from different homologs by assigning smaller weights to non-homologous sequences or residue pairs. Extensive experiments on two rigorous benchmark datasets demonstrate the effectiveness of CoT. In particular, CoT achieves a $51.6\%$ top-L long-range precision score for the Free Modeling (FM) domains on the CASP14 benchmark, which outperforms the winner group of CASP14 contact prediction challenge by $9.8\%$.
null
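To make the "selective aggregation" idea in the preceding CoT abstract concrete, here is a toy NumPy sketch that pools per-homolog residue-pair features with attention-style weights instead of equal weights. In CoT the weights come from learned attention; here the homology scores, shapes, and function names are supplied by hand as illustrative assumptions.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def aggregate_homologs(pair_features, homology_scores):
    """pair_features: (N, L, L, d) features from N sequences in the MSA;
    homology_scores: (N,) scores, lower for non-homologous sequences.
    Returns an (L, L, d) co-evolution feature map in which poorly matching
    homologs contribute with smaller weights (unlike equal-weight averaging)."""
    w = softmax(homology_scores)
    return np.tensordot(w, pair_features, axes=1)

rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 5, 5, 4))                              # 3 homologs, length-5 protein
print(aggregate_homologs(feats, np.array([2.0, 1.5, -3.0])).shape)  # (5, 5, 4)
```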
Unsupervised Foreground Extraction via Deep Region Competition
https://papers.nips.cc/paper_files/paper/2021/hash/77369e37b2aa1404f416275183ab055f-Abstract.html
Peiyu Yu, Sirui Xie, Xiaojian Ma, Yixin Zhu, Ying Nian Wu, Song-Chun Zhu
https://papers.nips.cc/paper_files/paper/2021/hash/77369e37b2aa1404f416275183ab055f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12716-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/77369e37b2aa1404f416275183ab055f-Paper.pdf
https://openreview.net/forum?id=tu5Wg41hWl_
https://papers.nips.cc/paper_files/paper/2021/file/77369e37b2aa1404f416275183ab055f-Supplemental.pdf
We present Deep Region Competition (DRC), an algorithm designed to extract foreground objects from images in a fully unsupervised manner. Foreground extraction can be viewed as a special case of generic image segmentation that focuses on identifying and disentangling objects from the background. In this work, we rethink foreground extraction by reconciling an energy-based prior with generative image modeling in the form of a Mixture of Experts (MoE), where we further introduce learned pixel re-assignment as the essential inductive bias to capture the regularities of background regions. With this modeling, the foreground-background partition can be naturally found through Expectation-Maximization (EM). We show that the proposed method effectively exploits the interaction between the mixture components during the partitioning process, which closely connects to region competition, a seminal approach for generic image segmentation. Experiments demonstrate that DRC exhibits more competitive performance on complex real-world data and challenging multi-object scenes compared with prior methods. Moreover, we show empirically that DRC can potentially generalize to novel foreground objects even from categories unseen during training.
null
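The EM-based partitioning that the DRC abstract refers to can be illustrated, in a heavily simplified form, with a two-component Gaussian mixture over pixel intensities. The toy sketch below is not the paper's energy-based Mixture-of-Experts model or its learned pixel re-assignment; it only shows how alternating E- and M-steps let two regions compete for pixels.

```python
import numpy as np

def em_two_regions(pixels, n_iter=50):
    """Toy EM for a two-component 1-D Gaussian mixture over pixel intensities."""
    mu = np.percentile(pixels, [25, 75]).astype(float)
    var = np.array([pixels.var(), pixels.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each region for each pixel.
        lik = pi * np.exp(-(pixels[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: each region re-estimates its statistics from the soft assignment.
        nk = resp.sum(axis=0)
        mu = (resp * pixels[:, None]).sum(axis=0) / nk
        var = (resp * (pixels[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / len(pixels)
    return resp.argmax(axis=1)          # hard foreground/background partition

rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.8, 0.05, 500)])
print(np.bincount(em_two_regions(pixels)))   # roughly 500 pixels per region
```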
Leveraging Spatial and Temporal Correlations in Sparsified Mean Estimation
https://papers.nips.cc/paper_files/paper/2021/hash/77b88288ebae7b17b7c8610a48c40dd1-Abstract.html
Divyansh Jhunjhunwala, Ankur Mallick, Advait Gadhikar, Swanand Kadhe, Gauri Joshi
https://papers.nips.cc/paper_files/paper/2021/hash/77b88288ebae7b17b7c8610a48c40dd1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12717-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/77b88288ebae7b17b7c8610a48c40dd1-Paper.pdf
https://openreview.net/forum?id=BKeJmkspvc
https://papers.nips.cc/paper_files/paper/2021/file/77b88288ebae7b17b7c8610a48c40dd1-Supplemental.pdf
We study the problem of estimating at a central server the mean of a set of vectors distributed across several nodes (one vector per node). When the vectors are high-dimensional, the communication cost of sending entire vectors may be prohibitive, and it may be imperative for them to use sparsification techniques. While most existing work on sparsified mean estimation is agnostic to the characteristics of the data vectors, in many practical applications such as federated learning, there may be spatial correlations (similarities in the vectors sent by different nodes) or temporal correlations (similarities in the data sent by a single node over different iterations of the algorithm) in the data vectors. We leverage these correlations by simply modifying the decoding method used by the server to estimate the mean. We provide an analysis of the resulting estimation error as well as experiments for PCA, K-Means and Logistic Regression, which show that our estimators consistently outperform more sophisticated and expensive sparsification methods.
null
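As a rough illustration of how a server-side decoder can exploit temporal correlation, the snippet below lets each node send only k random coordinates and has the server fall back to the previous round's estimate for coordinates nobody sent. This is a simplified stand-in rather than the estimator or error analysis from the paper, and the helper names are made up.

```python
import numpy as np

def rand_k_sparsify(v, k, rng):
    """Each node transmits only k of its d coordinates (indices plus values)."""
    idx = rng.choice(len(v), size=k, replace=False)
    return idx, v[idx]

def decode_with_memory(messages, prev_estimate, d):
    """Server decoding: average received values per coordinate and fill the
    coordinates no node sent with the previous round's estimate."""
    total, count = np.zeros(d), np.zeros(d)
    for idx, vals in messages:
        total[idx] += vals
        count[idx] += 1
    return np.where(count > 0, total / np.maximum(count, 1), prev_estimate)

rng = np.random.default_rng(0)
d, n, k = 20, 5, 5
vectors = rng.normal(size=(n, d))
prev_round = vectors.mean(axis=0) + 0.01 * rng.normal(size=d)  # stand-in for the last iterate
messages = [rand_k_sparsify(v, k, rng) for v in vectors]
print(decode_with_memory(messages, prev_round, d))
```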
Last-iterate Convergence in Extensive-Form Games
https://papers.nips.cc/paper_files/paper/2021/hash/77bb14f6132ea06dea456584b7d5581e-Abstract.html
Chung-Wei Lee, Christian Kroer, Haipeng Luo
https://papers.nips.cc/paper_files/paper/2021/hash/77bb14f6132ea06dea456584b7d5581e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12718-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/77bb14f6132ea06dea456584b7d5581e-Paper.pdf
https://openreview.net/forum?id=C0bV8xGhsz
https://papers.nips.cc/paper_files/paper/2021/file/77bb14f6132ea06dea456584b7d5581e-Supplemental.pdf
Regret-based algorithms are highly efficient at finding approximate Nash equilibria in sequential games such as poker games. However, most regret-based algorithms, including counterfactual regret minimization (CFR) and its variants, rely on iterate averaging to achieve convergence. Inspired by recent advances on last-iterate convergence of optimistic algorithms in zero-sum normal-form games, we study this phenomenon in sequential games, and provide a comprehensive study of last-iterate convergence for zero-sum extensive-form games with perfect recall (EFGs), using various optimistic regret-minimization algorithms over treeplexes. This includes algorithms using the vanilla entropy or squared Euclidean norm regularizers, as well as their dilated versions which admit more efficient implementation. In contrast to CFR, we show that all of these algorithms enjoy last-iterate convergence, with some of them even converging exponentially fast. We also provide experiments to further support our theoretical results.
null
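The last-iterate behaviour discussed in the preceding abstract is easiest to see in the normal-form special case. The sketch below runs projected optimistic gradient descent-ascent on matching pennies, where the last iterate drifts toward the uniform equilibrium instead of cycling; it is only an illustration of the optimistic update, not the dilated-regularizer treeplex algorithms analysed in the paper.

```python
import numpy as np

def project_simplex(x):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(x)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(x)) + 1) > 0)[0][-1]
    lam = (1 - css[rho]) / (rho + 1)
    return np.maximum(x + lam, 0)

A = np.array([[1.0, -1.0], [-1.0, 1.0]])        # matching pennies payoff matrix
x, y = np.array([0.9, 0.1]), np.array([0.2, 0.8])
gx_prev, gy_prev = A @ y, A.T @ x
eta = 0.1
for _ in range(2000):
    gx, gy = A @ y, A.T @ x
    # Optimistic step: current gradient plus a correction using the previous one.
    x = project_simplex(x - eta * (2 * gx - gx_prev))
    y = project_simplex(y + eta * (2 * gy - gy_prev))
    gx_prev, gy_prev = gx, gy
print(x, y)   # both strategies should end up near (0.5, 0.5)
```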
Class-Incremental Learning via Dual Augmentation
https://papers.nips.cc/paper_files/paper/2021/hash/77ee3bc58ce560b86c2b59363281e914-Abstract.html
Fei Zhu, Zhen Cheng, Xu-yao Zhang, Cheng-lin Liu
https://papers.nips.cc/paper_files/paper/2021/hash/77ee3bc58ce560b86c2b59363281e914-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12719-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/77ee3bc58ce560b86c2b59363281e914-Paper.pdf
https://openreview.net/forum?id=8dqEeFuhgMG
null
Deep learning systems typically suffer from catastrophic forgetting of past knowledge when acquiring new skills continually. In this paper, we highlight two dilemmas in class-incremental learning, representation bias and classifier bias, and present a simple and novel approach that employs explicit class augmentation (classAug) and implicit semantic augmentation (semanAug) to address the two biases, respectively. On the one hand, we propose to address the representation bias by learning transferable and diverse representations. Specifically, we investigate the feature representations in incremental learning based on spectral analysis and present a simple technique called classAug that lets the model see more classes during training, so it learns representations transferable across classes. On the other hand, to overcome the classifier bias, semanAug implicitly generates an infinite number of instances of old classes simultaneously in the deep feature space, which imposes tighter constraints that maintain the decision boundaries of previously learned classes. Without storing any old samples, our method performs comparably with representative data-replay-based approaches.
null
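One way to picture the explicit classAug idea from the preceding entry is to synthesize samples of new pseudo-classes by fusing images from two existing classes, so the feature extractor sees more classes during training. The fusion rule and pseudo-label indexing below are illustrative assumptions, not the exact augmentations used in the paper, and semanAug (the implicit feature-space augmentation) is not reproduced here.

```python
import numpy as np

def class_aug_fuse(x_a, y_a, x_b, y_b, n_base_classes, lam=0.5):
    """Fuse two images from different classes into a sample of a new pseudo-class."""
    assert y_a != y_b, "classAug-style fusion mixes samples from two distinct classes"
    x_new = lam * x_a + (1.0 - lam) * x_b
    pseudo_class = n_base_classes + (y_a * n_base_classes + y_b)   # hypothetical indexing scheme
    return x_new, pseudo_class

rng = np.random.default_rng(0)
img_a, img_b = rng.random((32, 32, 3)), rng.random((32, 32, 3))
x_new, y_new = class_aug_fuse(img_a, 0, img_b, 1, n_base_classes=10)
print(x_new.shape, y_new)    # (32, 32, 3) and pseudo-class index 11
```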
Robust and Fully-Dynamic Coreset for Continuous-and-Bounded Learning (With Outliers) Problems
https://papers.nips.cc/paper_files/paper/2021/hash/7806689d934e610d660caf5536fea0b2-Abstract.html
Zixiu Wang, Yiwen Guo, Hu Ding
https://papers.nips.cc/paper_files/paper/2021/hash/7806689d934e610d660caf5536fea0b2-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12720-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/7806689d934e610d660caf5536fea0b2-Paper.pdf
https://openreview.net/forum?id=we8d1FjibAc
null
In many machine learning tasks, a common approach for dealing with large-scale data is to build a small summary, {\em e.g.,} a coreset, that can efficiently represent the original input. However, real-world datasets usually contain outliers, and most existing coreset construction methods are not resilient against them (in particular, an adversarial attacker can place an outlier arbitrarily in the space). In this paper, we propose a novel robust coreset method for the class of {\em continuous-and-bounded learning} problems (with outliers), which includes a broad range of popular optimization objectives in machine learning, {\em e.g.,} logistic regression and $k$-means clustering. Moreover, our robust coreset can be efficiently maintained in a fully dynamic environment. To the best of our knowledge, this is the first robust and fully-dynamic coreset construction method for these optimization problems. Another highlight is that our coreset size can depend on the doubling dimension of the parameter space, rather than the VC dimension of the objective function, which could be very large or even challenging to compute. Finally, we conduct experiments on real-world datasets to evaluate the effectiveness of our proposed robust coreset method.
null
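For readers unfamiliar with coresets, the snippet below only illustrates the guarantee a coreset is meant to give: a small weighted subset whose weighted objective tracks the full objective (here, a weighted k-means cost), using plain uniform sampling as a stand-in. The paper's construction, which additionally handles adversarial outliers and fully dynamic updates, is not reproduced.

```python
import numpy as np

def weighted_kmeans_cost(points, weights, centers):
    """sum_i w_i * min_j ||p_i - c_j||^2 over a candidate set of centers."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return float((weights * d2.min(axis=1)).sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 2))
idx = rng.choice(len(X), size=200, replace=False)
S, w = X[idx], np.full(200, len(X) / 200)          # each sampled point stands in for N/m points
centers = rng.normal(size=(3, 2))
full_cost = weighted_kmeans_cost(X, np.ones(len(X)), centers)
subset_cost = weighted_kmeans_cost(S, w, centers)
print(full_cost, subset_cost)                      # the two costs should be close
```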
Rethinking and Reweighting the Univariate Losses for Multi-Label Ranking: Consistency and Generalization
https://papers.nips.cc/paper_files/paper/2021/hash/781397bc0630d47ab531ea850bddcf63-Abstract.html
Guoqiang Wu, Chongxuan LI, Kun Xu, Jun Zhu
https://papers.nips.cc/paper_files/paper/2021/hash/781397bc0630d47ab531ea850bddcf63-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12721-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/781397bc0630d47ab531ea850bddcf63-Paper.pdf
https://openreview.net/forum?id=42yEyjooGSC
https://papers.nips.cc/paper_files/paper/2021/file/781397bc0630d47ab531ea850bddcf63-Supplemental.pdf
The (partial) ranking loss is a commonly used evaluation measure for multi-label classification, which is usually optimized with convex surrogates for computational efficiency. Prior theoretical efforts on multi-label ranking mainly focus on (Fisher) consistency analyses. However, there is a gap between existing theory and practice --- some inconsistent pairwise losses can lead to promising performance, while some consistent univariate losses usually have no clear superiority in practice. To take a step towards filling up this gap, this paper presents a systematic study from two complementary perspectives of consistency and generalization error bounds of learning algorithms. We theoretically find two key factors of the distribution (or dataset) that affect the learning guarantees of algorithms: the instance-wise class imbalance and the label size $c$. Specifically, in an extremely imbalanced case, the algorithm with the consistent univariate loss has an error bound of $O(c)$, while the one with the inconsistent pairwise loss depends on $O(\sqrt{c})$ as shown in prior work. This may shed light on the superior performance of pairwise methods in practice, where real datasets are usually highly imbalanced. Moreover, we present an inconsistent reweighted univariate loss-based algorithm that enjoys an error bound of $O(\sqrt{c})$ for promising performance as well as the computational efficiency of univariate losses. Finally, experimental results confirm our theoretical findings.
null
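To fix notation for the preceding entry, the snippet below computes the ranking loss on a single instance and a univariate (label-wise) logistic surrogate with hooks for per-label reweighting. The placeholder weights only indicate where an instance-dependent reweighting such as the paper's would enter; they are not the scheme analysed there.

```python
import numpy as np

def ranking_loss(scores, labels):
    """Fraction of (relevant, irrelevant) label pairs that are not correctly ordered."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    if len(pos) == 0 or len(neg) == 0:
        return 0.0
    return float((pos[:, None] <= neg[None, :]).mean())

def univariate_logistic_loss(scores, labels, pos_weight=1.0, neg_weight=1.0):
    """Label-wise (univariate) surrogate; pos_weight / neg_weight mark where an
    instance-dependent reweighting could be plugged in."""
    margins = np.where(labels == 1, scores, -scores)
    weights = np.where(labels == 1, pos_weight, neg_weight)
    return float((weights * np.log1p(np.exp(-margins))).mean())

scores = np.array([2.1, -0.3, 0.7, -1.5])
labels = np.array([1, 0, 1, 0])
print(ranking_loss(scores, labels), univariate_logistic_loss(scores, labels))
```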
Fair Clustering Under a Bounded Cost
https://papers.nips.cc/paper_files/paper/2021/hash/781877bda0783aac5f1cf765c128b437-Abstract.html
Seyed Esmaeili, Brian Brubach, Aravind Srinivasan, John Dickerson
https://papers.nips.cc/paper_files/paper/2021/hash/781877bda0783aac5f1cf765c128b437-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12722-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/781877bda0783aac5f1cf765c128b437-Paper.pdf
https://openreview.net/forum?id=Zr9YPpxg2B1
https://papers.nips.cc/paper_files/paper/2021/file/781877bda0783aac5f1cf765c128b437-Supplemental.pdf
Clustering is a fundamental unsupervised learning problem where a dataset is partitioned into clusters that consist of nearby points in a metric space. A recent variant, fair clustering, associates a color with each point representing its group membership and requires that each color has (approximately) equal representation in each cluster to satisfy group fairness. In this model, the cost of the clustering objective increases due to enforcing fairness in the algorithm. The relative increase in the cost, the "price of fairness," can indeed be unbounded. Therefore, in this paper we propose to treat an upper bound on the clustering objective as a constraint on the clustering problem, and to maximize equality of representation subject to it. We consider two fairness objectives: the group utilitarian objective and the group egalitarian objective, as well as the group leximin objective which generalizes the group egalitarian objective. We derive fundamental lower bounds on the approximation of the utilitarian and egalitarian objectives and introduce algorithms with provable guarantees for them. For the leximin objective we introduce an effective heuristic algorithm. We further derive impossibility results for other natural fairness objectives. We conclude with experimental results on real-world datasets that demonstrate the validity of our algorithms.
null
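As a rough, assumption-laden illustration of "maximizing equality of representation subject to a cost bound", the snippet below evaluates a k-median-style clustering cost against an upper bound and reports a worst-case gap between each group's share within a cluster and its overall share. The paper's utilitarian, egalitarian, and leximin objectives are defined more carefully than this proxy.

```python
import numpy as np

def kmedian_cost(points, centers, assign):
    return float(np.linalg.norm(points - centers[assign], axis=1).sum())

def representation_gap(assign, colors, n_clusters):
    """Worst-case |in-cluster share - overall share| over groups and clusters."""
    overall = np.bincount(colors) / len(colors)
    worst = 0.0
    for c in range(n_clusters):
        members = colors[assign == c]
        if len(members) == 0:
            continue
        share = np.bincount(members, minlength=len(overall)) / len(members)
        worst = max(worst, float(np.abs(share - overall).max()))
    return worst

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
colors = rng.integers(0, 2, size=300)                     # two demographic groups
centers = X[rng.choice(300, size=3, replace=False)]
assign = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
cost_bound = 1.1 * kmedian_cost(X, centers, assign)       # example upper bound on the cost
print(kmedian_cost(X, centers, assign) <= cost_bound, representation_gap(assign, colors, 3))
```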
Improving Calibration through the Relationship with Adversarial Robustness
https://papers.nips.cc/paper_files/paper/2021/hash/78421a2e0e1168e5cd1b7a8d23773ce6-Abstract.html
Yao Qin, Xuezhi Wang, Alex Beutel, Ed Chi
https://papers.nips.cc/paper_files/paper/2021/hash/78421a2e0e1168e5cd1b7a8d23773ce6-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12723-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/78421a2e0e1168e5cd1b7a8d23773ce6-Paper.pdf
https://openreview.net/forum?id=NJex-5TZIQa
https://papers.nips.cc/paper_files/paper/2021/file/78421a2e0e1168e5cd1b7a8d23773ce6-Supplemental.pdf
Neural networks lack adversarial robustness, i.e., they are vulnerable to adversarial examples that cause incorrect predictions through small perturbations to the inputs. Further, trust is undermined when models give miscalibrated predictions, i.e., the predicted probability is not a good indicator of how much we should trust our model. In this paper, we study the connection between adversarial robustness and calibration and find that the inputs for which the model is sensitive to small perturbations (i.e., easily attacked) are more likely to receive poorly calibrated predictions. Based on this insight, we examine whether calibration can be improved by addressing those adversarially unrobust inputs. To this end, we propose Adversarial Robustness based Adaptive Label Smoothing (AR-AdaLS), which integrates the correlation between adversarial robustness and calibration into training by adaptively softening the labels of an example based on how easily it can be attacked by an adversary. We find that our method, by taking the adversarial robustness of the in-distribution data into consideration, leads to better model calibration even under distributional shift. In addition, AR-AdaLS can also be applied to an ensemble model to further improve model calibration.
null
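Below is a minimal sketch of the adaptive label-softening idea behind AR-AdaLS, assuming a per-example vulnerability score in [0, 1] (e.g., derived from how easily an attack flips the prediction) is already available. The paper adapts the smoothing per robustness group during training, so this per-example variant and its parameter names are only illustrative.

```python
import numpy as np

def adaptive_smooth_labels(onehot, vulnerability, eps_min=0.0, eps_max=0.2):
    """Soften each example's one-hot label by an amount that grows with its
    estimated adversarial vulnerability (0 = robust, 1 = easily attacked)."""
    eps = eps_min + (eps_max - eps_min) * vulnerability        # per-example smoothing strength
    n_classes = onehot.shape[1]
    return (1.0 - eps)[:, None] * onehot + (eps / n_classes)[:, None]

labels = np.eye(3)[[0, 2, 1]]                  # three examples over three classes
vulnerability = np.array([0.1, 0.9, 0.5])      # stand-in for an attack-derived score
print(adaptive_smooth_labels(labels, vulnerability))
```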