title | url | authors | detail_url | tags | Bibtex | Paper | Reviews And Public Comment » | Supplemental | abstract | Supplemental Errata
---|---|---|---|---|---|---|---|---|---|---|
The decomposition of the higher-order homology embedding constructed from the $k$-Laplacian
|
https://papers.nips.cc/paper_files/paper/2021/hash/842424a1d0595b76ec4fa03c46e8d755-Abstract.html
|
Yu-Chia Chen, Marina Meila
|
https://papers.nips.cc/paper_files/paper/2021/hash/842424a1d0595b76ec4fa03c46e8d755-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12824-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/842424a1d0595b76ec4fa03c46e8d755-Paper.pdf
|
https://openreview.net/forum?id=026hEw26i3-
|
https://papers.nips.cc/paper_files/paper/2021/file/842424a1d0595b76ec4fa03c46e8d755-Supplemental.zip
|
The null space of the $k$-th order Laplacian $\mathbf{\mathcal L}_k$, known as the {\em $k$-th homology vector space}, encodes the non-trivial topology of a manifold or a network. Understanding the structure of the homology embedding can thus disclose geometric or topological information from the data. The study of the null space embedding of the graph Laplacian $\mathbf{\mathcal L}_0$ has spurred new research and applications, such as spectral clustering algorithms with theoretical guarantees and estimators of the Stochastic Block Model. In this work, we investigate the geometry of the $k$-th homology embedding and focus on cases reminiscent of spectral clustering. Namely, we analyze the {\em connected sum} of manifolds as a perturbation to the direct sum of their homology embeddings. We propose an algorithm to factorize the homology embedding into subspaces corresponding to a manifold's simplest topological components. The proposed framework is applied to the {\em shortest homologous loop detection} problem, a problem known to be NP-hard in general. Our spectral loop detection algorithm scales better than existing methods and is effective on diverse data such as point clouds and images.
| null |
Breaking the Moments Condition Barrier: No-Regret Algorithm for Bandits with Super Heavy-Tailed Payoffs
|
https://papers.nips.cc/paper_files/paper/2021/hash/843a4d7fb5b1641b0bb8e3c2b2e75231-Abstract.html
|
Han Zhong, Jiayi Huang, Lin Yang, Liwei Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/843a4d7fb5b1641b0bb8e3c2b2e75231-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12825-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/843a4d7fb5b1641b0bb8e3c2b2e75231-Paper.pdf
|
https://openreview.net/forum?id=fHfdquSc2kT
|
https://papers.nips.cc/paper_files/paper/2021/file/843a4d7fb5b1641b0bb8e3c2b2e75231-Supplemental.zip
|
Despite a large amount of effort in dealing with heavy-tailed error in machine learning, little is known when moments of the error can become non-existent: the random noise $\eta$ satisfies Pr$\left[|\eta| > |y|\right] \le 1/|y|^{\alpha}$ for some $\alpha > 0$. We make the first attempt to actively handle such super heavy-tailed noise in bandit learning problems: we propose a novel robust statistical estimator, the mean of medians, which estimates a random variable by computing the empirical mean of a sequence of empirical medians. We then present a generic reductionist algorithmic framework for solving bandit learning problems (including the multi-armed and linear bandit problems): the mean-of-medians estimator can be applied to nearly any bandit learning algorithm as a black-box filter for its reward signals, yielding a regret bound similar to the one obtained when the reward is sub-Gaussian. We show that the regret bound is near-optimal even with very heavy-tailed noise. We also empirically demonstrate the effectiveness of the proposed algorithm, which further corroborates our theoretical results.
| null |
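To make the mean-of-medians idea in the abstract above concrete, here is a minimal NumPy sketch: split the observed rewards into blocks, take each block's empirical median, then average the medians. The block count and the standard Cauchy reward distribution are illustrative assumptions, not the paper's exact construction or tuning.

```python
import numpy as np

def mean_of_medians(samples, num_blocks):
    """Toy mean-of-medians estimator: split `samples` into `num_blocks`
    groups, take the empirical median of each group, then average those
    medians. Block sizes and grouping are illustrative choices."""
    samples = np.asarray(samples, dtype=float)
    blocks = np.array_split(samples, num_blocks)
    medians = np.array([np.median(b) for b in blocks])
    return medians.mean()

# Example: heavy-tailed rewards (a standard Cauchy has no finite mean),
# where the plain sample mean is unstable but the mean of medians is not.
rng = np.random.default_rng(0)
rewards = rng.standard_cauchy(10_000)
print("sample mean     :", rewards.mean())
print("mean of medians :", mean_of_medians(rewards, num_blocks=100))
```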
A nonparametric method for gradual change problems with statistical guarantees
|
https://papers.nips.cc/paper_files/paper/2021/hash/8452a95c40e2b232acd9b8a8712935d7-Abstract.html
|
Lizhen Nie, Dan Nicolae
|
https://papers.nips.cc/paper_files/paper/2021/hash/8452a95c40e2b232acd9b8a8712935d7-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12826-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8452a95c40e2b232acd9b8a8712935d7-Paper.pdf
|
https://openreview.net/forum?id=bo75bBsQzeZ
| null |
We consider the detection and localization of gradual changes in the distribution of a sequence of time-ordered observations. Existing literature focuses mostly on the simpler abrupt setting which assumes a discontinuity jump in distribution, and is unrealistic for some applied settings. We propose a general method for detecting and localizing gradual changes that does not require any specific data generating model, any particular data type, or any prior knowledge about which features of the distribution are subject to change. Despite relaxed assumptions, the proposed method possesses proven theoretical guarantees for both detection and localization.
| null |
Nested Graph Neural Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/8462a7c229aea03dde69da754c3bbcc4-Abstract.html
|
Muhan Zhang, Pan Li
|
https://papers.nips.cc/paper_files/paper/2021/hash/8462a7c229aea03dde69da754c3bbcc4-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12827-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8462a7c229aea03dde69da754c3bbcc4-Paper.pdf
|
https://openreview.net/forum?id=7_eLEvFjCi3
|
https://papers.nips.cc/paper_files/paper/2021/file/8462a7c229aea03dde69da754c3bbcc4-Supplemental.pdf
|
The success of graph neural networks (GNNs) in graph classification is closely related to the Weisfeiler-Lehman (1-WL) algorithm. By iteratively aggregating neighboring node features to a center node, both 1-WL and GNNs obtain a node representation that encodes a rooted subtree around the center node. These rooted subtree representations are then pooled into a single representation of the whole graph. However, rooted subtrees have limited expressiveness for representing non-tree graphs. To address this, we propose Nested Graph Neural Networks (NGNNs). NGNN represents a graph with rooted subgraphs instead of rooted subtrees, so that two graphs sharing many identical subgraphs (rather than subtrees) tend to have similar representations. The key is to make each node representation encode a subgraph around it rather than just a subtree. To achieve this, NGNN extracts a local subgraph around each node and applies a base GNN to each subgraph to learn a subgraph representation. The whole-graph representation is then obtained by pooling these subgraph representations. We provide a rigorous theoretical analysis showing that NGNN is strictly more powerful than 1-WL. In particular, we prove that NGNN can discriminate almost all r-regular graphs, where 1-WL always fails. Moreover, unlike other more powerful GNNs, NGNN only introduces a constant-factor higher time complexity than standard GNNs. NGNN is a plug-and-play framework that can be combined with various base GNNs. We test NGNN with different base GNNs on several benchmark datasets; NGNN uniformly improves their performance and shows highly competitive results on all datasets.
| null |
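A toy sketch of the rooted-subgraph idea described in the abstract above, assuming networkx is available: extract each node's h-hop ego subgraph, summarize it with a stand-in "base GNN" (here just a normalized degree histogram, purely a placeholder for a learned message-passing network), and mean-pool the per-node summaries into a whole-graph representation.

```python
import networkx as nx
import numpy as np

def toy_base_gnn(subgraph, max_degree=10):
    """Placeholder for a base GNN: a normalized degree histogram of the
    rooted subgraph. A real NGNN runs a learned GNN on each subgraph."""
    hist = np.zeros(max_degree + 1)
    for _, deg in subgraph.degree():
        hist[min(deg, max_degree)] += 1
    return hist / max(subgraph.number_of_nodes(), 1)

def nested_graph_representation(G, num_hops=2):
    """Encode each node by its h-hop rooted subgraph, then mean-pool."""
    node_reprs = [toy_base_gnn(nx.ego_graph(G, v, radius=num_hops))
                  for v in G.nodes()]
    return np.mean(node_reprs, axis=0)

# Two 2-regular graphs that 1-WL (and hence rooted-subtree pooling)
# cannot distinguish, but whose rooted subgraphs differ:
g1 = nx.cycle_graph(6)                                        # one 6-cycle
g2 = nx.disjoint_union(nx.cycle_graph(3), nx.cycle_graph(3))  # two triangles
print(nested_graph_representation(g1))
print(nested_graph_representation(g2))
```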
Multimodal and Multilingual Embeddings for Large-Scale Speech Mining
|
https://papers.nips.cc/paper_files/paper/2021/hash/8466f9ace6a9acbe71f75762ffc890f1-Abstract.html
|
Paul-Ambroise Duquenne, Hongyu Gong, Holger Schwenk
|
https://papers.nips.cc/paper_files/paper/2021/hash/8466f9ace6a9acbe71f75762ffc890f1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12828-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8466f9ace6a9acbe71f75762ffc890f1-Paper.pdf
|
https://openreview.net/forum?id=6fmgB38rLI1
|
https://papers.nips.cc/paper_files/paper/2021/file/8466f9ace6a9acbe71f75762ffc890f1-Supplemental.pdf
|
We present an approach to encode a speech signal into a fixed-size representation which minimizes the cosine loss with the existing massively multilingual LASER text embedding space. Sentences are close in this embedding space, independently of their language and modality, either text or audio. Using a similarity metric in that multimodal embedding space, we perform mining of audio in German, French, Spanish and English from Librivox against billions of sentences from Common Crawl. This yielded more than twenty thousand hours of aligned speech translations. To evaluate the automatically mined speech/text corpora, we train neural speech translation systems for several language pairs. Adding the mined data achieves significant improvements in the BLEU score on the CoVoST2 and the MUST-C test sets with respect to a very competitive baseline. Our approach can also be used to directly perform speech-to-speech mining, without the need to first transcribe or translate the data. We obtain more than one thousand three hundred hours of aligned speech in French, German, Spanish and English. This speech corpus has the potential to boost research in speech-to-speech translation, which suffers from a scarcity of natural end-to-end training data. All the mined multimodal corpora will be made freely available.
| null |
Necessary and sufficient graphical conditions for optimal adjustment sets in causal graphical models with hidden variables
|
https://papers.nips.cc/paper_files/paper/2021/hash/8485ae387a981d783f8764e508151cd9-Abstract.html
|
Jakob Runge
|
https://papers.nips.cc/paper_files/paper/2021/hash/8485ae387a981d783f8764e508151cd9-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12829-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8485ae387a981d783f8764e508151cd9-Paper.pdf
|
https://openreview.net/forum?id=s6MWPKgL5XB
|
https://papers.nips.cc/paper_files/paper/2021/file/8485ae387a981d783f8764e508151cd9-Supplemental.pdf
|
The problem of selecting optimal backdoor adjustment sets to estimate causal effects in graphical models with hidden and conditioned variables is addressed. Previous work has defined optimality as achieving the smallest asymptotic estimation variance and derived an optimal set for the case without hidden variables. For the case with hidden variables, there can be settings where no optimal set exists, and currently only a sufficient graphical optimality criterion of limited applicability has been derived. In the present work, optimality is characterized as maximizing a certain adjustment information, which allows us to derive a necessary and sufficient graphical criterion for the existence of an optimal adjustment set together with a definition and algorithm to construct it. Further, the optimal set is valid if and only if a valid adjustment set exists, and it has higher (or equal) adjustment information than the Adjust-set proposed in Perkovi{\'c} et al. [Journal of Machine Learning Research, 18: 1--62, 2018] for any graph. The results translate to minimal asymptotic estimation variance for a class of estimators whose asymptotic variance follows a certain information-theoretic relation. Numerical experiments indicate that the asymptotic results also hold for relatively small sample sizes and that the optimal adjustment set or minimized variants thereof often yield lower variance even beyond that estimator class. Surprisingly, more than 90\% of the randomly created setups fulfill the optimality conditions, indicating that graphical optimality may also hold in many real-world scenarios.
|
https://papers.nips.cc/paper_files/paper/2021/file/8485ae387a981d783f8764e508151cd9-Supplemental%20Errata.pdf
|
On Blame Attribution for Accountable Multi-Agent Sequential Decision Making
|
https://papers.nips.cc/paper_files/paper/2021/hash/848c4965359e617d5e16c924b4a85fd9-Abstract.html
|
Stelios Triantafyllou, Adish Singla, Goran Radanovic
|
https://papers.nips.cc/paper_files/paper/2021/hash/848c4965359e617d5e16c924b4a85fd9-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12830-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/848c4965359e617d5e16c924b4a85fd9-Paper.pdf
|
https://openreview.net/forum?id=Rizxjst0_2B
|
https://papers.nips.cc/paper_files/paper/2021/file/848c4965359e617d5e16c924b4a85fd9-Supplemental.pdf
|
Blame attribution is one of the key aspects of accountable decision making, as it provides means to quantify the responsibility of an agent for a decision making outcome. In this paper, we study blame attribution in the context of cooperative multi-agent sequential decision making. As a particular setting of interest, we focus on cooperative decision making formalized by Multi-Agent Markov Decision Processes (MMDPs), and we analyze different blame attribution methods derived from or inspired by existing concepts in cooperative game theory. We formalize desirable properties of blame attribution in the setting of interest, and we analyze the relationship between these properties and the studied blame attribution methods. Interestingly, we show that some of the well-known blame attribution methods, such as the Shapley value, are not performance-incentivizing, while others, such as the Banzhaf index, may over-blame agents. To mitigate these value misalignment and fairness issues, we introduce a novel blame attribution method, unique in the set of properties it satisfies, which trades off explanatory power (by under-blaming agents) for the aforementioned properties. We further show how to account for uncertainty about agents' decision making policies, and we experimentally: a) validate the qualitative properties of the studied blame attribution methods, and b) analyze their robustness to uncertainty.
| null |
FLEX: Unifying Evaluation for Few-Shot NLP
|
https://papers.nips.cc/paper_files/paper/2021/hash/8493eeaccb772c0878f99d60a0bd2bb3-Abstract.html
|
Jonathan Bragg, Arman Cohan, Kyle Lo, Iz Beltagy
|
https://papers.nips.cc/paper_files/paper/2021/hash/8493eeaccb772c0878f99d60a0bd2bb3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12831-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8493eeaccb772c0878f99d60a0bd2bb3-Paper.pdf
|
https://openreview.net/forum?id=_WnGcwXLYOE
|
https://papers.nips.cc/paper_files/paper/2021/file/8493eeaccb772c0878f99d60a0bd2bb3-Supplemental.pdf
|
Few-shot NLP research is highly active, yet conducted in disjoint research threads with evaluation suites that lack challenging-yet-realistic testing setups and fail to employ careful experimental design. Consequently, the community does not know which techniques perform best or even if they outperform simple baselines. In response, we formulate the FLEX Principles, a set of requirements and best practices for unified, rigorous, valid, and cost-sensitive few-shot NLP evaluation. These principles include Sample Size Design, a novel approach to benchmark design that optimizes statistical accuracy and precision while keeping evaluation costs manageable. Following the principles, we release the FLEX benchmark, which includes four few-shot transfer settings, zero-shot evaluation, and a public leaderboard that covers diverse NLP tasks. In addition, we present UniFew, a prompt-based model for few-shot learning that unifies pretraining and finetuning prompt formats, eschewing complex machinery of recent prompt-based approaches in adapting downstream task formats to language model pretraining objectives. We demonstrate that despite simplicity, UniFew achieves results competitive with both popular meta-learning and prompt-based approaches.
| null |
A flow-based latent state generative model of neural population responses to natural images
|
https://papers.nips.cc/paper_files/paper/2021/hash/84a529a92de322be42dd3365afd54f91-Abstract.html
|
Mohammad Bashiri, Edgar Walker, Konstantin-Klemens Lurz, Akshay Jagadish, Taliah Muhammad, Zhiwei Ding, Zhuokun Ding, Andreas Tolias, Fabian Sinz
|
https://papers.nips.cc/paper_files/paper/2021/hash/84a529a92de322be42dd3365afd54f91-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12832-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/84a529a92de322be42dd3365afd54f91-Paper.pdf
|
https://openreview.net/forum?id=1yeYYtLqq7K
|
https://papers.nips.cc/paper_files/paper/2021/file/84a529a92de322be42dd3365afd54f91-Supplemental.pdf
|
We present a joint deep neural system identification model for two major sources of neural variability: stimulus-driven and stimulus-conditioned fluctuations. To this end, we combine (1) state-of-the-art deep networks for stimulus-driven activity and (2) a flexible, normalizing flow-based generative model to capture the stimulus-conditioned variability including noise correlations. This allows us to train the model end-to-end without the need for sophisticated probabilistic approximations associated with many latent state models for stimulus-conditioned fluctuations. We train the model on the responses of thousands of neurons from multiple areas of the mouse visual cortex to natural images. We show that our model outperforms previous state-of-the-art models in predicting the distribution of neural population responses to novel stimuli, including shared stimulus-conditioned variability. Furthermore, it successfully learns known latent factors of the population responses that are related to behavioral variables such as pupil dilation, and other factors that vary systematically with brain area or retinotopic location. Overall, our model accurately accounts for two critical sources of neural variability while avoiding several complexities associated with many existing latent state models. It thus provides a useful tool for uncovering the interplay between different factors that contribute to variability in neural activity.
| null |
Learnable Fourier Features for Multi-dimensional Spatial Positional Encoding
|
https://papers.nips.cc/paper_files/paper/2021/hash/84c2d4860a0fc27bcf854c444fb8b400-Abstract.html
|
Yang Li, Si Si, Gang Li, Cho-Jui Hsieh, Samy Bengio
|
https://papers.nips.cc/paper_files/paper/2021/hash/84c2d4860a0fc27bcf854c444fb8b400-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12833-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/84c2d4860a0fc27bcf854c444fb8b400-Paper.pdf
|
https://openreview.net/forum?id=R0h3NUMao_U
| null |
Attentional mechanisms are order-invariant. Positional encoding is a crucial component to allow attention-based deep model architectures such as Transformer to address sequences or images where the position of information matters. In this paper, we propose a novel positional encoding method based on learnable Fourier features. Instead of hard-coding each position as a token or a vector, we represent each position, which can be multi-dimensional, as a trainable encoding based on learnable Fourier feature mapping, modulated with a multi-layer perceptron. The representation is particularly advantageous for a spatial multi-dimensional position, e.g., pixel positions on an image, where $L_2$ distances or more complex positional relationships need to be captured. Our experiments based on several public benchmark tasks show that our learnable Fourier feature representation for multi-dimensional positional encoding outperforms existing methods by both improving the accuracy and allowing faster convergence.
| null |
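A short PyTorch sketch of the kind of module described in the abstract above: a trainable linear map produces Fourier features (cos/sin) of a multi-dimensional position, and an MLP then modulates them into the final encoding. The dimensions, the GELU MLP, and the scaling factor are illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class LearnableFourierPositionalEncoding(nn.Module):
    """Sketch of a learnable-Fourier-feature positional encoding:
    a trainable frequency matrix maps a multi-dimensional position to
    [cos(xW), sin(xW)], which an MLP then transforms."""
    def __init__(self, pos_dim=2, fourier_dim=64, hidden_dim=128, out_dim=256):
        super().__init__()
        self.proj = nn.Linear(pos_dim, fourier_dim // 2, bias=False)  # trainable frequencies
        self.mlp = nn.Sequential(
            nn.Linear(fourier_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, out_dim),
        )
        self.scale = fourier_dim ** -0.5

    def forward(self, positions):
        # positions: (..., pos_dim), e.g. (H*W, 2) pixel coordinates
        x = self.proj(positions)
        fourier = self.scale * torch.cat([torch.cos(x), torch.sin(x)], dim=-1)
        return self.mlp(fourier)

# Example: encode the 2-D pixel positions of an 8x8 grid.
ys, xs = torch.meshgrid(torch.arange(8.), torch.arange(8.), indexing="ij")
pos = torch.stack([ys, xs], dim=-1).reshape(-1, 2)
print(LearnableFourierPositionalEncoding()(pos).shape)  # torch.Size([64, 256])
```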
Doubly Robust Thompson Sampling with Linear Payoffs
|
https://papers.nips.cc/paper_files/paper/2021/hash/84d5711e9bf5547001b765878e7b0157-Abstract.html
|
Wonyoung Kim, Gi-Soo Kim, Myunghee Cho Paik
|
https://papers.nips.cc/paper_files/paper/2021/hash/84d5711e9bf5547001b765878e7b0157-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12834-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/84d5711e9bf5547001b765878e7b0157-Paper.pdf
|
https://openreview.net/forum?id=WBVbl8POq8v
|
https://papers.nips.cc/paper_files/paper/2021/file/84d5711e9bf5547001b765878e7b0157-Supplemental.pdf
|
A challenging aspect of the bandit problem is that a stochastic reward is observed only for the chosen arm and the rewards of other arms remain missing. The dependence of the arm choice on the past context and reward pairs compounds the complexity of regret analysis. We propose a novel multi-armed contextual bandit algorithm called Doubly Robust Thompson Sampling (DRTS), which applies the doubly robust estimator used in the missing data literature to Thompson Sampling with contexts (\texttt{LinTS}). Different from previous works relying on missing data techniques (Dimakopoulou et al. [2019], Kim and Paik [2019]), the proposed algorithm is designed to allow a novel additive regret decomposition leading to an improved regret bound of order $\tilde{O}(\phi^{-2}\sqrt{T})$, where $\phi^2$ is the minimum eigenvalue of the covariance matrix of contexts. This is the first regret bound of \texttt{LinTS} using $\phi^2$ without $d$, where $d$ is the dimension of the context. Applying the relationship between $\phi^2$ and $d$, the regret bound of the proposed algorithm is $\tilde{O}(d\sqrt{T})$ in many practical scenarios, improving the bound of \texttt{LinTS} by a factor of $\sqrt{d}$. A benefit of the proposed method is that it uses all the context data, chosen or not chosen, thus allowing us to circumvent the technical definition of unsaturated arms used in the theoretical analysis of \texttt{LinTS}. Empirical studies show the advantage of the proposed algorithm over \texttt{LinTS}.
| null |
A Computationally Efficient Method for Learning Exponential Family Distributions
|
https://papers.nips.cc/paper_files/paper/2021/hash/84f7e69969dea92a925508f7c1f9579a-Abstract.html
|
Abhin Shah, Devavrat Shah, Gregory Wornell
|
https://papers.nips.cc/paper_files/paper/2021/hash/84f7e69969dea92a925508f7c1f9579a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12835-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/84f7e69969dea92a925508f7c1f9579a-Paper.pdf
|
https://openreview.net/forum?id=fxGT4XaLkpX
| null |
We consider the question of learning the natural parameters of a $k$ parameter \textit{minimal} exponential family from i.i.d. samples in a computationally and statistically efficient manner. We focus on the setting where the support as well as the natural parameters are appropriately bounded. While the traditional maximum likelihood estimator for this class of exponential family is consistent, asymptotically normal, and asymptotically efficient, evaluating it is computationally hard. In this work, we propose a computationally efficient estimator that is consistent as well as asymptotically normal under mild conditions. We provide finite sample guarantees to achieve an ($\ell_2$) error of $\alpha$ in the parameter estimation with sample complexity $O(\mathrm{poly}(k/\alpha))$ and computational complexity ${O}(\mathrm{poly}(k/\alpha))$. To establish these results, we show that, at the population level, our method can be viewed as the maximum likelihood estimation of a re-parameterized distribution belonging to the same class of exponential family.
| null |
Rethinking Neural Operations for Diverse Tasks
|
https://papers.nips.cc/paper_files/paper/2021/hash/84fdbc3ac902561c00871c9b0c226756-Abstract.html
|
Nicholas Roberts, Mikhail Khodak, Tri Dao, Liam Li, Christopher Ré, Ameet Talwalkar
|
https://papers.nips.cc/paper_files/paper/2021/hash/84fdbc3ac902561c00871c9b0c226756-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12836-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/84fdbc3ac902561c00871c9b0c226756-Paper.pdf
|
https://openreview.net/forum?id=je4ymjfb5LC
|
https://papers.nips.cc/paper_files/paper/2021/file/84fdbc3ac902561c00871c9b0c226756-Supplemental.pdf
|
An important goal of AutoML is to automate away the design of neural networks on new tasks in under-explored domains. Motivated by this goal, we study the problem of enabling users to discover the right neural operations given data from their specific domain. We introduce a search space of operations called XD-Operations that mimic the inductive bias of standard multi-channel convolutions while being much more expressive: we prove that it includes many named operations across multiple application areas. Starting with any standard backbone such as ResNet, we show how to transform it into a search space over XD-operations and how to traverse the space using a simple weight sharing scheme. On a diverse set of tasks—solving PDEs, distance prediction for protein folding, and music modeling—our approach consistently yields models with lower error than baseline networks and often even lower error than expert-designed domain-specific approaches.
| null |
Motif-based Graph Self-Supervised Learning for Molecular Property Prediction
|
https://papers.nips.cc/paper_files/paper/2021/hash/85267d349a5e647ff0a9edcb5ffd1e02-Abstract.html
|
ZAIXI ZHANG, Qi Liu, Hao Wang, Chengqiang Lu, Chee-Kong Lee
|
https://papers.nips.cc/paper_files/paper/2021/hash/85267d349a5e647ff0a9edcb5ffd1e02-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12837-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/85267d349a5e647ff0a9edcb5ffd1e02-Paper.pdf
|
https://openreview.net/forum?id=gwGYN1fQY8H
|
https://papers.nips.cc/paper_files/paper/2021/file/85267d349a5e647ff0a9edcb5ffd1e02-Supplemental.pdf
|
Predicting molecular properties with data-driven methods has drawn much attention in recent years. Particularly, Graph Neural Networks (GNNs) have demonstrated remarkable success in various molecular generation and prediction tasks. In cases where labeled data is scarce, GNNs can be pre-trained on unlabeled molecular data to first learn the general semantic and structural information before being finetuned for specific tasks. However, most existing self-supervised pretraining frameworks for GNNs only focus on node-level or graph-level tasks. These approaches cannot capture the rich information in subgraphs or graph motifs. For example, functional groups (frequently occurring subgraphs in molecular graphs) often carry indicative information about the molecular properties. To bridge this gap, we propose Motif-based Graph Self-supervised Learning (MGSSL) by introducing a novel self-supervised motif generation framework for GNNs. First, for motif extraction from molecular graphs, we design a molecule fragmentation method that leverages the retrosynthesis-based algorithm BRICS and additional rules for controlling the size of the motif vocabulary. Second, we design a general motif-based generative pretraining framework in which GNNs are asked to make topological and label predictions. This generative framework can be implemented in two different ways, i.e., breadth-first or depth-first. Finally, to take the multi-scale information in molecular graphs into consideration, we introduce a multi-level self-supervised pre-training scheme. Extensive experiments on various downstream benchmark tasks show that our methods outperform all state-of-the-art baselines.
| null |
On Inductive Biases for Heterogeneous Treatment Effect Estimation
|
https://papers.nips.cc/paper_files/paper/2021/hash/8526e0962a844e4a2f158d831d5fddf7-Abstract.html
|
Alicia Curth, Mihaela van der Schaar
|
https://papers.nips.cc/paper_files/paper/2021/hash/8526e0962a844e4a2f158d831d5fddf7-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12838-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8526e0962a844e4a2f158d831d5fddf7-Paper.pdf
|
https://openreview.net/forum?id=HWshP75OfKR
|
https://papers.nips.cc/paper_files/paper/2021/file/8526e0962a844e4a2f158d831d5fddf7-Supplemental.pdf
|
We investigate how to exploit structural similarities of an individual's potential outcomes (POs) under different treatments to obtain better estimates of conditional average treatment effects in finite samples. Especially when it is unknown whether a treatment has an effect at all, it is natural to hypothesize that the POs are similar -- yet, some existing strategies for treatment effect estimation employ regularization schemes that implicitly encourage heterogeneity even when it does not exist and fail to fully make use of shared structure. In this paper, we investigate and compare three end-to-end learning strategies to overcome this problem -- based on regularization, reparametrization and a flexible multi-task architecture -- each encoding inductive bias favoring shared behavior across POs. To build understanding of their relative strengths, we implement all strategies using neural networks and conduct a wide range of semi-synthetic experiments. We observe that all three approaches can lead to substantial improvements upon numerous baselines and gain insight into performance differences across various experimental settings.
| null |
DP-SSL: Towards Robust Semi-supervised Learning with A Few Labeled Samples
|
https://papers.nips.cc/paper_files/paper/2021/hash/854d6fae5ee42911677c739ee1734486-Abstract.html
|
Yi Xu, Jiandong Ding, Lu Zhang, Shuigeng Zhou
|
https://papers.nips.cc/paper_files/paper/2021/hash/854d6fae5ee42911677c739ee1734486-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12839-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/854d6fae5ee42911677c739ee1734486-Paper.pdf
|
https://openreview.net/forum?id=NlLynLBBi01
|
https://papers.nips.cc/paper_files/paper/2021/file/854d6fae5ee42911677c739ee1734486-Supplemental.pdf
|
The scarcity of labeled data is a critical obstacle to deep learning. Semi-supervised learning (SSL) provides a promising way to leverage unlabeled data by pseudo labels. However, when the size of labeled data is very small (say a few labeled samples per class), SSL performs poorly and unstably, possibly due to the low quality of learned pseudo labels. In this paper, we propose a new SSL method called DP-SSL that adopts an innovative data programming (DP) scheme to generate probabilistic labels for unlabeled data. Different from existing DP methods that rely on human experts to provide initial labeling functions (LFs), we develop a multiple-choice learning (MCL) based approach to automatically generate LFs from scratch in SSL style. With the noisy labels produced by the LFs, we design a label model to resolve the conflict and overlap among the noisy labels, and finally infer probabilistic labels for unlabeled samples. Extensive experiments on four standard SSL benchmarks show that DP-SSL can provide reliable labels for unlabeled data and achieve better classification performance on test sets than existing SSL methods, especially when only a small number of labeled samples are available. Concretely, for CIFAR-10 with only 40 labeled samples, DP-SSL achieves 93.82% annotation accuracy on unlabeled data and 93.46% classification accuracy on test data, which are higher than the SOTA results.
| null |
Transformer in Transformer
|
https://papers.nips.cc/paper_files/paper/2021/hash/854d9fca60b4bd07f9bb215d59ef5561-Abstract.html
|
Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing XU, Yunhe Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/854d9fca60b4bd07f9bb215d59ef5561-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12840-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/854d9fca60b4bd07f9bb215d59ef5561-Paper.pdf
|
https://openreview.net/forum?id=iFODavhthGZ
|
https://papers.nips.cc/paper_files/paper/2021/file/854d9fca60b4bd07f9bb215d59ef5561-Supplemental.pdf
|
Transformer is a new kind of neural architecture which encodes the input data as powerful features via the attention mechanism. Basically, visual transformers first divide the input images into several local patches and then calculate both representations and their relationship. Since natural images are of high complexity with abundant detail and color information, the granularity of this patch division is not fine enough for excavating features of objects at different scales and locations. In this paper, we point out that the attention inside these local patches is also essential for building visual transformers with high performance and we explore a new architecture, namely, Transformer iN Transformer (TNT). Specifically, we regard the local patches (e.g., 16$\times$16) as “visual sentences” and propose to further divide them into smaller patches (e.g., 4$\times$4) as “visual words”. The attention of each word will be calculated with other words in the given visual sentence with negligible computational costs. Features of both words and sentences will be aggregated to enhance the representation ability. Experiments on several benchmarks demonstrate the effectiveness of the proposed TNT architecture, e.g., we achieve an 81.5\% top-1 accuracy on ImageNet, which is about 1.7\% higher than that of the state-of-the-art visual transformer with similar computational cost. The PyTorch code is available at \url{https://github.com/huawei-noah/CV-Backbones}, and the MindSpore code is available at \url{https://gitee.com/mindspore/models/tree/master/research/cv/TNT}.
| null |
Adversarial Graph Augmentation to Improve Graph Contrastive Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/854f1fb6f65734d9e49f708d6cd84ad6-Abstract.html
|
Susheel Suresh, Pan Li, Cong Hao, Jennifer Neville
|
https://papers.nips.cc/paper_files/paper/2021/hash/854f1fb6f65734d9e49f708d6cd84ad6-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12841-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/854f1fb6f65734d9e49f708d6cd84ad6-Paper.pdf
|
https://openreview.net/forum?id=ioyq7NsR1KJ
|
https://papers.nips.cc/paper_files/paper/2021/file/854f1fb6f65734d9e49f708d6cd84ad6-Supplemental.pdf
|
Self-supervised learning of graph neural networks (GNNs) is in great demand because of the widespread label scarcity in real-world graph/network data. Graph contrastive learning (GCL), by training GNNs to maximize the correspondence between the representations of the same graph in its different augmented forms, may yield robust and transferable GNNs even without using labels. However, GNNs trained by traditional GCL often risk capturing redundant graph features and thus may be brittle and provide sub-par performance in downstream tasks. Here, we propose a novel principle, termed adversarial-GCL (\textit{AD-GCL}), which enables GNNs to avoid capturing redundant information during training by optimizing adversarial graph augmentation strategies used in GCL. We pair AD-GCL with theoretical explanations and design a practical instantiation based on trainable edge-dropping graph augmentation. We experimentally validate AD-GCL by comparing with the state-of-the-art GCL methods and achieve performance gains of up to 14\% in unsupervised, 6\% in transfer, and 3\% in semi-supervised learning settings overall, across 18 different benchmark datasets for the tasks of molecule property regression and classification, and social network classification.
| null |
Online Control of Unknown Time-Varying Dynamical Systems
|
https://papers.nips.cc/paper_files/paper/2021/hash/856b503e276cc491e7e6e0ac1b9f4b17-Abstract.html
|
Edgar Minasyan, Paula Gradu, Max Simchowitz, Elad Hazan
|
https://papers.nips.cc/paper_files/paper/2021/hash/856b503e276cc491e7e6e0ac1b9f4b17-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12842-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/856b503e276cc491e7e6e0ac1b9f4b17-Paper.pdf
|
https://openreview.net/forum?id=Ao2METZY4n
|
https://papers.nips.cc/paper_files/paper/2021/file/856b503e276cc491e7e6e0ac1b9f4b17-Supplemental.pdf
|
We study online control of time-varying linear systems with unknown dynamics in the nonstochastic control model. At a high level, we demonstrate that this setting is \emph{qualitatively harder} than that of either unknown time-invariant or known time-varying dynamics, and complement our negative results with algorithmic upper bounds in regimes where sublinear regret is possible. More specifically, we study regret bounds with respect to common classes of policies: Disturbance Action (SLS), Disturbance Response (Youla), and linear feedback policies. While these three classes are essentially equivalent for LTI systems, we demonstrate that these equivalences break down for time-varying systems. We prove a lower bound that no algorithm can obtain sublinear regret with respect to the first two classes unless a certain measure of system variability also scales sublinearly in the horizon. Furthermore, we show that offline planning over the state linear feedback policies is NP-hard, suggesting hardness of the online learning problem. On the positive side, we give an efficient algorithm that attains a sublinear regret bound against the class of Disturbance Response policies up to the aforementioned system variability term. In fact, our algorithm enjoys sublinear \emph{adaptive} regret bounds, which is a strictly stronger metric than standard regret and is more appropriate for time-varying systems. We sketch extensions to Disturbance Action policies and partial observation, and propose an inefficient algorithm for regret against linear state feedback policies.
| null |
Contrastive Reinforcement Learning of Symbolic Reasoning Domains
|
https://papers.nips.cc/paper_files/paper/2021/hash/859555c74e9afd45ab771c615c1e49a6-Abstract.html
|
Gabriel Poesia, WenXin Dong, Noah Goodman
|
https://papers.nips.cc/paper_files/paper/2021/hash/859555c74e9afd45ab771c615c1e49a6-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12843-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/859555c74e9afd45ab771c615c1e49a6-Paper.pdf
|
https://openreview.net/forum?id=ZarM_uLVyGw
|
https://papers.nips.cc/paper_files/paper/2021/file/859555c74e9afd45ab771c615c1e49a6-Supplemental.pdf
|
Abstract symbolic reasoning, as required in domains such as mathematics and logic, is a key component of human intelligence. Solvers for these domains have important applications, especially to computer-assisted education. But learning to solve symbolic problems is challenging for machine learning algorithms. Existing models either learn from human solutions or use hand-engineered features, making them expensive to apply in new domains. In this paper, we instead consider symbolic domains as simple environments where states and actions are given as unstructured text, and binary rewards indicate whether a problem is solved. This flexible setup makes it easy to specify new domains, but search and planning become challenging. We introduce five environments inspired by the Mathematics Common Core Curriculum, and observe that existing Reinforcement Learning baselines perform poorly. We then present a novel learning algorithm, Contrastive Policy Learning (ConPoLe) that explicitly optimizes the InfoNCE loss, which lower bounds the mutual information between the current state and next states that continue on a path to the solution. ConPoLe successfully solves all four domains. Moreover, problem representations learned by ConPoLe enable accurate prediction of the categories of problems in a real mathematics curriculum. Our results suggest new directions for reinforcement learning in symbolic domains, as well as applications to mathematics education.
| null |
Spatial Ensemble: a Novel Model Smoothing Mechanism for Student-Teacher Framework
|
https://papers.nips.cc/paper_files/paper/2021/hash/8597a6cfa74defcbde3047c891d78f90-Abstract.html
|
Tengteng Huang, Yifan Sun, Xun Wang, Haotian Yao, Chi Zhang
|
https://papers.nips.cc/paper_files/paper/2021/hash/8597a6cfa74defcbde3047c891d78f90-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12844-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8597a6cfa74defcbde3047c891d78f90-Paper.pdf
|
https://openreview.net/forum?id=jR3WPq3WTd
| null |
Model smoothing is of central importance for obtaining a reliable teacher model in the student-teacher framework, where the teacher generates surrogate supervision signals to train the student. A popular model smoothing method is the Temporal Moving Average (TMA), which continuously averages the teacher parameters with the up-to-date student parameters. In this paper, we propose ''Spatial Ensemble'', a novel model smoothing mechanism in parallel with TMA. Spatial Ensemble randomly picks up a small fragment of the student model to directly replace the corresponding fragment of the teacher model. Consequently, it stitches different fragments of historical student models into a unified whole, yielding the ''Spatial Ensemble'' effect. Spatial Ensemble obtains comparable student-teacher learning performance by itself and demonstrates valuable complementarity with temporal moving average. Their integration, named Spatial-Temporal Smoothing, brings general (sometimes significant) improvement to the student-teacher learning framework on a variety of state-of-the-art methods. For example, based on the self-supervised method BYOL, it yields +0.9% top-1 accuracy improvement on ImageNet, while based on the semi-supervised approach FixMatch, it increases the top-1 accuracy by around +6% on CIFAR-10 when only a few training labels are available. Codes and models are available at: https://github.com/tengteng95/Spatial_Ensemble.
| null |
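A minimal PyTorch sketch contrasting the two smoothing updates described in the abstract above: the standard temporal moving average versus a "Spatial Ensemble"-style update that copies a randomly chosen small fragment of the student directly into the teacher. Using an element-wise random mask as the "fragment" is an illustrative simplification, not the paper's exact fragment-selection scheme.

```python
import torch

@torch.no_grad()
def temporal_moving_average(teacher, student, momentum=0.99):
    """Standard TMA update: teacher <- m * teacher + (1 - m) * student."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)

@torch.no_grad()
def spatial_ensemble(teacher, student, replace_ratio=0.01):
    """Sketch of a Spatial-Ensemble-style update: pick a small random
    fraction of weights and copy the student's values into the teacher
    directly instead of averaging (element-wise mask as the 'fragment')."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        mask = torch.rand_like(t_param) < replace_ratio
        t_param[mask] = s_param[mask]

# Usage with any pair of identically shaped models:
teacher = torch.nn.Linear(16, 4)
student = torch.nn.Linear(16, 4)
temporal_moving_average(teacher, student)
spatial_ensemble(teacher, student)
```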
Probabilistic Tensor Decomposition of Neural Population Spiking Activity
|
https://papers.nips.cc/paper_files/paper/2021/hash/859b755563f548d008f936906a959c8f-Abstract.html
|
Hugo Soulat, Sepiedeh Keshavarzi, Troy Margrie, Maneesh Sahani
|
https://papers.nips.cc/paper_files/paper/2021/hash/859b755563f548d008f936906a959c8f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12845-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/859b755563f548d008f936906a959c8f-Paper.pdf
|
https://openreview.net/forum?id=1bBF5Zq1YHz
|
https://papers.nips.cc/paper_files/paper/2021/file/859b755563f548d008f936906a959c8f-Supplemental.pdf
|
The firing of neural populations is coordinated across cells, in time, and across experimental conditions or repeated experimental trials; and so a full understanding of the computational significance of neural responses must be based on a separation of these different contributions to structured activity. Tensor decomposition is an approach to untangling the influence of multiple factors in data that is common in many fields. However, despite some recent interest in neuroscience, wider applicability of the approach is hampered by the lack of a full probabilistic treatment allowing principled inference of a decomposition from non-Gaussian spike-count data. Here, we extend the Pólya-Gamma (PG) augmentation, previously used in sampling-based Bayesian inference, to implement scalable variational inference in non-conjugate spike-count models. Using this new approach, we develop techniques related to automatic relevance determination to infer the most appropriate tensor rank, as well as to incorporate priors based on known brain anatomy such as the segregation of cell response properties by brain area. We apply the model to neural recordings taken under conditions of visual-vestibular sensory integration, revealing how the encoding of self- and visual-motion signals is modulated by the sensory information available to the animal.
| null |
Recurrent Bayesian Classifier Chains for Exact Multi-Label Classification
|
https://papers.nips.cc/paper_files/paper/2021/hash/859bf1416b8b8761c5d588dee78dc65f-Abstract.html
|
Walter Gerych, Tom Hartvigsen, Luke Buquicchio, Emmanuel Agu, Elke A. Rundensteiner
|
https://papers.nips.cc/paper_files/paper/2021/hash/859bf1416b8b8761c5d588dee78dc65f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12846-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/859bf1416b8b8761c5d588dee78dc65f-Paper.pdf
|
https://openreview.net/forum?id=RQJWn82Xga2
|
https://papers.nips.cc/paper_files/paper/2021/file/859bf1416b8b8761c5d588dee78dc65f-Supplemental.pdf
|
Exact multi-label classification is the task of assigning each datapoint a set of class labels such that the assigned set exactly matches the ground truth. Optimizing for exact multi-label classification is important in domains where missing a single label can be especially costly, such as in object detection for autonomous vehicles or symptom classification for disease diagnosis. Recurrent Classifier Chains (RCCs), a recurrent neural network extension of ensemble-based classifier chains, are the state-of-the-art exact multi-label classification method for maximizing subset accuracy. However, RCCs iteratively predict classes with an unprincipled ordering, and therefore indiscriminately condition class probabilities. These disadvantages make RCCs prone to predicting inaccurate label sets. In this work we propose Recurrent Bayesian Classifier Chains (RBCCs), which learn a Bayesian network of class dependencies and leverage this network in order to condition the prediction of child nodes only on their parents. By conditioning predictions in this way, we perform principled and non-noisy class prediction. We demonstrate the effectiveness of our RBCC method on a variety of real-world multi-label datasets, where we routinely outperform state-of-the-art methods for exact multi-label classification.
| null |
Wasserstein Flow Meets Replicator Dynamics: A Mean-Field Analysis of Representation Learning in Actor-Critic
|
https://papers.nips.cc/paper_files/paper/2021/hash/85a4413ecea7122bcc399cf0a53bba26-Abstract.html
|
Yufeng Zhang, Siyu Chen, Zhuoran Yang, Michael Jordan, Zhaoran Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/85a4413ecea7122bcc399cf0a53bba26-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12847-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/85a4413ecea7122bcc399cf0a53bba26-Paper.pdf
|
https://openreview.net/forum?id=9IJLHPuLpvZ
|
https://papers.nips.cc/paper_files/paper/2021/file/85a4413ecea7122bcc399cf0a53bba26-Supplemental.pdf
|
Actor-critic (AC) algorithms, empowered by neural networks, have had significant empirical success in recent years. However, most of the existing theoretical support for AC algorithms focuses on the case of linear function approximations, or linearized neural networks, where the feature representation is fixed throughout training. Such a limitation fails to capture the key aspect of representation learning in neural AC, which is pivotal in practical problems. In this work, we take a mean-field perspective on the evolution and convergence of feature-based neural AC. Specifically, we consider a version of AC where the actor and critic are represented by overparameterized two-layer neural networks and are updated with two-timescale learning rates. The critic is updated by temporal-difference (TD) learning with a larger stepsize while the actor is updated via proximal policy optimization (PPO) with a smaller stepsize. In the continuous-time and infinite-width limiting regime, when the timescales are properly separated, we prove that neural AC finds the globally optimal policy at a sublinear rate. Additionally, we prove that the feature representation induced by the critic network is allowed to evolve within a neighborhood of the initial one.
| null |
Assessing Fairness in the Presence of Missing Data
|
https://papers.nips.cc/paper_files/paper/2021/hash/85dca1d270f7f9aef00c9d372f114482-Abstract.html
|
Yiliang Zhang, Qi Long
|
https://papers.nips.cc/paper_files/paper/2021/hash/85dca1d270f7f9aef00c9d372f114482-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12848-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/85dca1d270f7f9aef00c9d372f114482-Paper.pdf
|
https://openreview.net/forum?id=myJO35O7Gg
|
https://papers.nips.cc/paper_files/paper/2021/file/85dca1d270f7f9aef00c9d372f114482-Supplemental.pdf
|
Missing data are prevalent and present daunting challenges in real data analysis. While there is a growing body of literature on fairness in analysis of fully observed data, there has been little theoretical work on investigating fairness in analysis of incomplete data. In practice, a popular analytical approach for dealing with missing data is to use only the set of complete cases, i.e., observations with all features fully observed to train a prediction algorithm. However, depending on the missing data mechanism, the distribution of complete cases and the distribution of the complete data may be substantially different. When the goal is to develop a fair algorithm in the complete data domain where there are no missing values, an algorithm that is fair in the complete case domain may show disproportionate bias towards some marginalized groups in the complete data domain. To fill this significant gap, we study the problem of estimating fairness in the complete data domain for an arbitrary model evaluated merely using complete cases. We provide upper and lower bounds on the fairness estimation error and conduct numerical experiments to assess our theoretical results. Our work provides the first known theoretical results on fairness guarantee in analysis of incomplete data.
| null |
Adversarial Attack Generation Empowered by Min-Max Optimization
|
https://papers.nips.cc/paper_files/paper/2021/hash/85e5526a360b0bcf082d8d42e7bf100b-Abstract.html
|
Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, Makan Fardad, Bo Li
|
https://papers.nips.cc/paper_files/paper/2021/hash/85e5526a360b0bcf082d8d42e7bf100b-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12849-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/85e5526a360b0bcf082d8d42e7bf100b-Paper.pdf
|
https://openreview.net/forum?id=xlNpxfGMTTu
|
https://papers.nips.cc/paper_files/paper/2021/file/85e5526a360b0bcf082d8d42e7bf100b-Supplemental.pdf
|
The worst-case training principle that minimizes the maximal adversarial loss, also known as adversarial training (AT), has been shown to be a state-of-the-art approach for enhancing adversarial robustness. Nevertheless, min-max optimization beyond the purpose of AT has not been rigorously explored in the adversarial context. In this paper, we show how a general notion of min-max optimization over multiple domains can be leveraged in the design of different types of adversarial attacks. In particular, given a set of risk sources, minimizing the worst-case attack loss can be reformulated as a min-max problem by introducing domain weights that are maximized over the probability simplex of the domain set. We showcase this unified framework in three attack generation problems -- attacking model ensembles, devising universal perturbation under multiple inputs, and crafting attacks resilient to data transformations. Extensive experiments demonstrate that our approach leads to substantial attack improvement over the existing heuristic strategies as well as robustness improvement over state-of-the-art defense methods against multiple perturbation types. Furthermore, we find that the self-adjusted domain weights learned from min-max optimization can provide a holistic tool to explain the difficulty level of attack across domains.
| null |
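A small sketch of the outer maximization described in the abstract above, under the assumption that the domain weights live on the probability simplex and are updated by exponentiated-gradient (mirror) ascent on the per-domain attack losses. The inner step (crafting the perturbation that minimizes the weighted loss) and the paper's exact update rule are not reproduced here.

```python
import numpy as np

def update_domain_weights(weights, domain_losses, step_size=1.0):
    """One exponentiated-gradient ascent step on the simplex:
    w_i <- w_i * exp(eta * loss_i), then renormalize, so weight shifts
    toward the domains where the attack loss is currently highest."""
    w = np.asarray(weights) * np.exp(step_size * np.asarray(domain_losses))
    return w / w.sum()

# Example: three risk sources (e.g. three models in an ensemble).
w = np.ones(3) / 3
for losses in ([0.9, 0.2, 0.4], [0.8, 0.3, 0.5]):  # per-domain attack losses
    w = update_domain_weights(w, losses)
print(w)  # most weight on the hardest (highest-loss) domain
```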
Safe Pontryagin Differentiable Programming
|
https://papers.nips.cc/paper_files/paper/2021/hash/85ea6fd7a2ca3960d0cf5201933ac998-Abstract.html
|
Wanxin Jin, Shaoshuai Mou, George J. Pappas
|
https://papers.nips.cc/paper_files/paper/2021/hash/85ea6fd7a2ca3960d0cf5201933ac998-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12850-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/85ea6fd7a2ca3960d0cf5201933ac998-Paper.pdf
|
https://openreview.net/forum?id=wfGbrrWgXDm
|
https://papers.nips.cc/paper_files/paper/2021/file/85ea6fd7a2ca3960d0cf5201933ac998-Supplemental.pdf
|
We propose a Safe Pontryagin Differentiable Programming (Safe PDP) methodology, which establishes a theoretical and algorithmic framework to solve a broad class of safety-critical learning and control tasks---problems that require the guarantee of safety constraint satisfaction at any stage of the learning and control process. In the spirit of interior-point methods, Safe PDP handles different types of system constraints on states and inputs by incorporating them into the cost or loss through barrier functions. We prove three fundamental properties of the proposed Safe PDP: first, both the solution and its gradient in the backward pass can be approximated by solving their more efficient unconstrained counterparts; second, the approximation for both the solution and its gradient can be controlled for arbitrary accuracy by a barrier parameter; and third, importantly, all intermediate results throughout the approximation and optimization strictly respect the constraints, thus guaranteeing safety throughout the entire learning and control process. We demonstrate the capabilities of Safe PDP in solving various safety-critical tasks, including safe policy optimization, safe motion planning, and learning MPCs from demonstrations, on different challenging systems such as a 6-DoF maneuvering quadrotor and 6-DoF rocket-powered landing.
| null |
Class-Disentanglement and Applications in Adversarial Detection and Defense
|
https://papers.nips.cc/paper_files/paper/2021/hash/8606f35ec6c77858dfb80a385d0d1151-Abstract.html
|
Kaiwen Yang, Tianyi Zhou, Yonggang Zhang, Xinmei Tian, Dacheng Tao
|
https://papers.nips.cc/paper_files/paper/2021/hash/8606f35ec6c77858dfb80a385d0d1151-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12851-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8606f35ec6c77858dfb80a385d0d1151-Paper.pdf
|
https://openreview.net/forum?id=jFMzBeLyTc0
|
https://papers.nips.cc/paper_files/paper/2021/file/8606f35ec6c77858dfb80a385d0d1151-Supplemental.pdf
|
What is the minimum necessary information required by a neural net $D(\cdot)$ from an image $x$ to accurately predict its class? Extracting such information in the input space from $x$ can reveal the areas that $D(\cdot)$ mainly attends to and shed novel insight on the detection of and defense against adversarial attacks. In this paper, we propose ''class-disentanglement'' that trains a variational autoencoder $G(\cdot)$ to extract this class-dependent information as $x - G(x)$ via a trade-off between reconstructing $x$ by $G(x)$ and classifying $x$ by $D(x-G(x))$, where the former competes with the latter in decomposing $x$ so the latter retains only necessary information for classification in $x-G(x)$. We apply it to both clean images and their adversarial images and discover that the perturbations generated by adversarial attacks mainly lie in the class-dependent part $x-G(x)$. The decomposition results also provide novel interpretations of classification and attack models. Inspired by these observations, we propose to conduct adversarial detection and adversarial defense respectively on $x - G(x)$ and $G(x)$, which consistently outperform the results on the original $x$. In experiments, this simple approach substantially improves the detection and defense against different types of adversarial attacks.
| null |
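A rough PyTorch sketch of the trade-off described in the abstract above: $G(x)$ is trained to reconstruct $x$ while the classifier $D$ must work from the residual $x - G(x)$. The plain autoencoder $G$, the MSE reconstruction term, and the weighting `lam` are illustrative simplifications (the paper uses a variational autoencoder).

```python
import torch
import torch.nn.functional as F

def class_disentanglement_loss(G, D, x, labels, lam=0.5):
    """Combined objective: G(x) should reconstruct x (class-independent
    content) while the residual x - G(x) should retain what D needs for
    classification. `lam` balances the two competing terms."""
    recon = G(x)
    loss_recon = F.mse_loss(recon, x)          # reconstruct x with G(x)
    logits = D(x - recon)                      # classify the residual
    loss_cls = F.cross_entropy(logits, labels)
    return lam * loss_recon + (1.0 - lam) * loss_cls

# Usage with any reconstruction network G and classifier D (toy shapes):
G = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 784),
                        torch.nn.Unflatten(1, (1, 28, 28)))
D = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
x = torch.randn(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
print(class_disentanglement_loss(G, D, x, y))
```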
Active 3D Shape Reconstruction from Vision and Touch
|
https://papers.nips.cc/paper_files/paper/2021/hash/8635b5fd6bc675033fb72e8a3ccc10a0-Abstract.html
|
Edward Smith, David Meger, Luis Pineda, Roberto Calandra, Jitendra Malik, Adriana Romero Soriano, Michal Drozdzal
|
https://papers.nips.cc/paper_files/paper/2021/hash/8635b5fd6bc675033fb72e8a3ccc10a0-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12852-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8635b5fd6bc675033fb72e8a3ccc10a0-Paper.pdf
|
https://openreview.net/forum?id=zdTW91r2wKO
|
https://papers.nips.cc/paper_files/paper/2021/file/8635b5fd6bc675033fb72e8a3ccc10a0-Supplemental.pdf
|
Humans build 3D understandings of the world through active object exploration, using jointly their senses of vision and touch. However, in 3D shape reconstruction, most recent progress has relied on static datasets of limited sensory data such as RGB images, depth maps or haptic readings, leaving the active exploration of the shape largely unexplored. In active touch sensing for 3D reconstruction, the goal is to actively select the tactile readings that maximize the improvement in shape reconstruction accuracy. However, the development of deep learning-based active touch models is largely limited by the lack of frameworks for shape exploration. In this paper, we focus on this problem and introduce a system composed of: 1) a haptic simulator leveraging high spatial resolution vision-based tactile sensors for active touching of 3D objects; 2) a mesh-based 3D shape reconstruction model that relies on tactile or visuotactile signals; and 3) a set of data-driven solutions with either tactile or visuotactile priors to guide the shape exploration. Our framework enables the development of the first fully data-driven solutions to active touch on top of learned models for object understanding. Our experiments show the benefits of such solutions in the task of 3D shape understanding where our models consistently outperform natural baselines. We provide our framework as a tool to foster future research in this direction.
| null |
CAPE: Encoding Relative Positions with Continuous Augmented Positional Embeddings
|
https://papers.nips.cc/paper_files/paper/2021/hash/865bf46435bd84fa5d89f64cf3ba7347-Abstract.html
|
Tatiana Likhomanenko, Qiantong Xu, Gabriel Synnaeve, Ronan Collobert, Alex Rogozhnikov
|
https://papers.nips.cc/paper_files/paper/2021/hash/865bf46435bd84fa5d89f64cf3ba7347-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12853-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/865bf46435bd84fa5d89f64cf3ba7347-Paper.pdf
|
https://openreview.net/forum?id=n-FqqWXnWW
|
https://papers.nips.cc/paper_files/paper/2021/file/865bf46435bd84fa5d89f64cf3ba7347-Supplemental.pdf
|
Without positional information, attention-based Transformer neural networks are permutation-invariant. Absolute or relative positional embeddings are the most popular ways to feed Transformer models with positional information. Absolute positional embeddings are simple to implement, but suffer from generalization issues when evaluating on sequences longer than seen at training time. Relative positions are more robust to input length change, but are more complex to implement and yield inferior model throughput due to extra computational and memory costs. In this paper, we propose an augmentation-based approach (CAPE) for absolute positional embeddings, which keeps the advantages of both absolute (simplicity and speed) and relative positional embeddings (better generalization). In addition, our empirical evaluation on state-of-the-art models in machine translation, image and speech recognition demonstrates that CAPE leads to better generalization performance as well as increased stability with respect to training hyper-parameters.
| null |
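A rough numpy sketch of the augmentation idea described above: treat positions as continuous values, randomly shift and rescale them at training time, and only then apply a standard sinusoidal embedding. The augmentation ranges and helper names below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def sinusoidal_embedding(positions, dim):
    # positions: (seq_len,) array of (possibly augmented) continuous positions
    freqs = 1.0 / (10000 ** (2 * np.arange(dim // 2) / dim))
    angles = positions[:, None] * freqs[None, :]
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

def augment_positions(seq_len, max_shift=5.0, max_jitter=0.5, max_log_scale=0.3):
    pos = np.arange(seq_len, dtype=np.float64)
    pos = pos + np.random.uniform(-max_shift, max_shift)                  # global shift
    pos = pos + np.random.uniform(-max_jitter, max_jitter, size=seq_len)  # local jitter
    return pos * np.exp(np.random.uniform(-max_log_scale, max_log_scale)) # global scaling

emb = sinusoidal_embedding(augment_positions(seq_len=16), dim=64)  # (16, 64) embedding
```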
Multi-armed Bandit Requiring Monotone Arm Sequences
|
https://papers.nips.cc/paper_files/paper/2021/hash/865dfbde8a344b44095495f3591f7407-Abstract.html
|
Ningyuan Chen
|
https://papers.nips.cc/paper_files/paper/2021/hash/865dfbde8a344b44095495f3591f7407-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12854-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/865dfbde8a344b44095495f3591f7407-Paper.pdf
|
https://openreview.net/forum?id=iZDMbX1W8AV
|
https://papers.nips.cc/paper_files/paper/2021/file/865dfbde8a344b44095495f3591f7407-Supplemental.pdf
|
In many online learning or multi-armed bandit problems, the taken actions or pulled arms are ordinal and required to be monotone over time. Examples include dynamic pricing, in which the firms use markup pricing policies to please early adopters and deter strategic waiting, and clinical trials, in which the dose allocation usually follows the dose escalation principle to prevent dose limiting toxicities. We consider the continuum-armed bandit problem when the arm sequence is required to be monotone. We show that when the unknown objective function is Lipschitz continuous, the regret is $O(T)$. When in addition the objective function is unimodal or quasiconcave, the regret is $\tilde O(T^{3/4})$ under the proposed algorithm, which is also shown to be the optimal rate. This deviates from the optimal rate $\tilde O(T^{2/3})$ in the continuous-armed bandit literature and demonstrates the cost to the learning efficiency brought by the monotonicity requirement.
| null |
Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/8682cc30db9c025ecd3fee433f8ab54c-Abstract.html
|
Xinyi Xu, Lingjuan Lyu, Xingjun Ma, Chenglin Miao, Chuan Sheng Foo, Bryan Kian Hsiang Low
|
https://papers.nips.cc/paper_files/paper/2021/hash/8682cc30db9c025ecd3fee433f8ab54c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12855-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8682cc30db9c025ecd3fee433f8ab54c-Paper.pdf
|
https://openreview.net/forum?id=yRfsADObu18
|
https://papers.nips.cc/paper_files/paper/2021/file/8682cc30db9c025ecd3fee433f8ab54c-Supplemental.pdf
|
In collaborative machine learning (CML), multiple agents pool their resources (e.g., data) together for a common learning task. In realistic CML settings where the agents are self-interested and not altruistic, they may be unwilling to share data or model information without adequate rewards. Furthermore, as the data/model information shared by the agents may differ in quality, designing rewards which are fair to them is important so that they would not feel exploited nor discouraged from sharing. In this paper, we adopt federated learning as the CML paradigm, propose a novel cosine gradient Shapley value (CGSV) to fairly evaluate the expected marginal contribution of each agent’s uploaded model parameter update/gradient without needing an auxiliary validation dataset, and based on the CGSV, design a novel training-time gradient reward mechanism with a fairness guarantee by sparsifying the aggregated parameter update/gradient downloaded from the server as reward to each agent such that its resulting quality is commensurate to that of the agent’s uploaded parameter update/gradient. We empirically demonstrate the effectiveness of our fair gradient reward mechanism on multiple benchmark datasets in terms of fairness, predictive performance, and time overhead.
| null |
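A small sketch of the gradient-alignment scoring underlying the cosine gradient reward idea above: each agent's flattened update is compared, by cosine similarity, to the aggregated update. Plain averaging for aggregation and the function name are simplifying assumptions; the paper's sparsification-based reward step is not reproduced here.

```python
import torch
import torch.nn.functional as F

def cosine_contribution_scores(agent_updates):
    # agent_updates: list of 1-D tensors, one flattened parameter update per agent
    aggregate = torch.stack(agent_updates).mean(dim=0)
    return [F.cosine_similarity(u, aggregate, dim=0).item() for u in agent_updates]

scores = cosine_contribution_scores([torch.randn(1000) for _ in range(5)])
```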
Generalizable Imitation Learning from Observation via Inferring Goal Proximity
|
https://papers.nips.cc/paper_files/paper/2021/hash/868b7df964b1af24c8c0a9e43a330c6a-Abstract.html
|
Youngwoon Lee, Andrew Szot, Shao-Hua Sun, Joseph J. Lim
|
https://papers.nips.cc/paper_files/paper/2021/hash/868b7df964b1af24c8c0a9e43a330c6a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12856-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/868b7df964b1af24c8c0a9e43a330c6a-Paper.pdf
|
https://openreview.net/forum?id=lp9foO8AFoD
|
https://papers.nips.cc/paper_files/paper/2021/file/868b7df964b1af24c8c0a9e43a330c6a-Supplemental.pdf
|
Task progress is intuitive and readily available task information that can guide an agent closer to the desired goal. Furthermore, a task progress estimator can generalize to new situations. From this intuition, we propose a simple yet effective imitation learning from observation method for a goal-directed task using a learned goal proximity function as a task progress estimator for better generalization to unseen states and goals. We obtain this goal proximity function from expert demonstrations and online agent experience, and then use the learned goal proximity as a dense reward for policy training. We demonstrate that our proposed method can robustly generalize compared to prior imitation learning methods on a set of goal-directed tasks in navigation, locomotion, and robotic manipulation, even with demonstrations that cover only a part of the states.
| null |
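One way to read the dense-reward recipe above is sketched below: a regression network trained on demonstrations predicts how close a state is to the goal, and the agent is rewarded for increasing that prediction. The network `proximity_net` and the difference-based reward are assumptions for illustration.

```python
import torch

def proximity_reward(proximity_net, obs, next_obs):
    # obs, next_obs: single observation tensors; proximity_net outputs a scalar in [0, 1]
    with torch.no_grad():
        return (proximity_net(next_obs) - proximity_net(obs)).item()
```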
DualNet: Continual Learning, Fast and Slow
|
https://papers.nips.cc/paper_files/paper/2021/hash/86a1fa88adb5c33bd7a68ac2f9f3f96b-Abstract.html
|
Quang Pham, Chenghao Liu, Steven Hoi
|
https://papers.nips.cc/paper_files/paper/2021/hash/86a1fa88adb5c33bd7a68ac2f9f3f96b-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12857-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/86a1fa88adb5c33bd7a68ac2f9f3f96b-Paper.pdf
|
https://openreview.net/forum?id=eQ7Kh-QeWnO
|
https://papers.nips.cc/paper_files/paper/2021/file/86a1fa88adb5c33bd7a68ac2f9f3f96b-Supplemental.pdf
|
According to Complementary Learning Systems (CLS) theory~\cite{mcclelland1995there} in neuroscience, humans do effective \emph{continual learning} through two complementary systems: a fast learning system centered on the hippocampus for rapid learning of the specifics and individual experiences, and a slow learning system located in the neocortex for the gradual acquisition of structured knowledge about the environment. Motivated by this theory, we propose a novel continual learning framework named ``DualNet", which comprises a fast learning system for supervised learning of pattern-separated representation from specific tasks and a slow learning system for unsupervised representation learning of task-agnostic general representation via a Self-Supervised Learning (SSL) technique. The two fast and slow learning systems are complementary and work seamlessly in a holistic continual learning framework. Our extensive experiments on two challenging continual learning benchmarks of CORE50 and miniImageNet show that DualNet outperforms state-of-the-art continual learning methods by a large margin. We further conduct ablation studies of different SSL objectives to validate DualNet's efficacy, robustness, and scalability. Code is publicly available at \url{https://github.com/phquang/DualNet}.
| null |
Deformable Butterfly: A Highly Structured and Sparse Linear Transform
|
https://papers.nips.cc/paper_files/paper/2021/hash/86b122d4358357d834a87ce618a55de0-Abstract.html
|
Rui Lin, Jie Ran, King Hung Chiu, Graziano Chesi, Ngai Wong
|
https://papers.nips.cc/paper_files/paper/2021/hash/86b122d4358357d834a87ce618a55de0-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12858-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/86b122d4358357d834a87ce618a55de0-Paper.pdf
|
https://openreview.net/forum?id=P-if5sUWBn
|
https://papers.nips.cc/paper_files/paper/2021/file/86b122d4358357d834a87ce618a55de0-Supplemental.pdf
|
We introduce a new kind of linear transform named Deformable Butterfly (DeBut) that generalizes the conventional butterfly matrices and can be adapted to various input-output dimensions. It inherits the fine-to-coarse-grained learnable hierarchy of traditional butterflies and, when deployed to neural networks, the prominent structures and sparsity in a DeBut layer constitute a new way for network compression. We apply DeBut as a drop-in replacement of standard fully connected and convolutional layers, and demonstrate its superiority in homogenizing a neural network and rendering it favorable properties such as light weight and low inference complexity, without compromising accuracy. The natural complexity-accuracy tradeoff arising from the myriad deformations of a DeBut layer also opens up new avenues for analytical and practical research. The codes and Appendix are publicly available at: https://github.com/ruilin0212/DeBut.
| null |
Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning
|
https://papers.nips.cc/paper_files/paper/2021/hash/86b3e165b8154656a71ffe8a327ded7d-Abstract.html
|
Colin Wei, Sang Michael Xie, Tengyu Ma
|
https://papers.nips.cc/paper_files/paper/2021/hash/86b3e165b8154656a71ffe8a327ded7d-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12859-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/86b3e165b8154656a71ffe8a327ded7d-Paper.pdf
|
https://openreview.net/forum?id=MDMV2SxCboX
|
https://papers.nips.cc/paper_files/paper/2021/file/86b3e165b8154656a71ffe8a327ded7d-Supplemental.pdf
|
Pretrained language models have achieved state-of-the-art performance when adapted to a downstream NLP task. However, theoretical analysis of these models is scarce and challenging since the pretraining and downstream tasks can be very different. We propose an analysis framework that links the pretraining and downstream tasks with an underlying latent variable generative model of text -- the downstream classifier must recover a function of the posterior distribution over the latent variables. We analyze head tuning (learning a classifier on top of the frozen pretrained model) and prompt tuning in this setting. The generative model in our analysis is either a Hidden Markov Model (HMM) or an HMM augmented with a latent memory component, motivated by long-term dependencies in natural language. We show that 1) under certain non-degeneracy conditions on the HMM, simple classification heads can solve the downstream task, 2) prompt tuning obtains downstream guarantees with weaker non-degeneracy conditions, and 3) our recovery guarantees for the memory-augmented HMM are stronger than for the vanilla HMM because task-relevant information is easier to recover from the long-term memory. Experiments on synthetically generated data from HMMs back our theoretical findings.
| null |
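A short sketch of the "head tuning" setting analyzed above: the pretrained model is frozen and only a linear classifier on top of its features is trained. The feature dimension, data loader and training schedule are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def head_tune(pretrained, feat_dim, num_classes, loader, epochs=3, lr=1e-3):
    for p in pretrained.parameters():
        p.requires_grad_(False)               # freeze the pretrained body
    head = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                feats = pretrained(x)         # frozen features
            loss = F.cross_entropy(head(feats), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```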
Learning Diverse Policies in MOBA Games via Macro-Goals
|
https://papers.nips.cc/paper_files/paper/2021/hash/86dba86754c0ad93997a11fa947d97b2-Abstract.html
|
Yiming Gao, Bei Shi, Xueying Du, Liang Wang, Guangwei Chen, Zhenjie Lian, Fuhao Qiu, GUOAN HAN, Weixuan Wang, Deheng Ye, Qiang Fu, Wei Yang, Lanxiao Huang
|
https://papers.nips.cc/paper_files/paper/2021/hash/86dba86754c0ad93997a11fa947d97b2-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12860-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/86dba86754c0ad93997a11fa947d97b2-Paper.pdf
|
https://openreview.net/forum?id=xVs5d5ZSWaa
|
https://papers.nips.cc/paper_files/paper/2021/file/86dba86754c0ad93997a11fa947d97b2-Supplemental.pdf
|
Recently, many researchers have made successful progress in building AI systems for MOBA game playing with deep reinforcement learning, such as on Dota 2 and Honor of Kings. Even though these AI systems have achieved or even exceeded human-level performance, they still suffer from the lack of policy diversity. In this paper, we propose a novel Macro-Goals Guided framework, called MGG, to learn diverse policies in MOBA games. MGG abstracts strategies as macro-goals from human demonstrations and trains a Meta-Controller to predict these macro-goals. To enhance policy diversity, MGG samples macro-goals from the Meta-Controller prediction and guides the training process towards these goals. Experimental results on the typical MOBA game Honor of Kings demonstrate that MGG can execute diverse policies in different matches and lineups, and also outperforms state-of-the-art methods over 102 heroes.
| null |
Evaluation of Human-AI Teams for Learned and Rule-Based Agents in Hanabi
|
https://papers.nips.cc/paper_files/paper/2021/hash/86e8f7ab32cfd12577bc2619bc635690-Abstract.html
|
Ho Chit Siu, Jaime Peña, Edenna Chen, Yutai Zhou, Victor Lopez, Kyle Palko, Kimberlee Chang, Ross Allen
|
https://papers.nips.cc/paper_files/paper/2021/hash/86e8f7ab32cfd12577bc2619bc635690-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12861-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/86e8f7ab32cfd12577bc2619bc635690-Paper.pdf
|
https://openreview.net/forum?id=x_JOyw5CLP
| null |
Deep reinforcement learning has generated superhuman AI in competitive games such as Go and StarCraft. Can similar learning techniques create a superior AI teammate for human-machine collaborative games? Will humans prefer AI teammates that improve objective team performance or those that improve subjective metrics of trust? In this study, we perform a single-blind evaluation of teams of humans and AI agents in the cooperative card game Hanabi, with both rule-based and learning-based agents. In addition to the game score, used as an objective metric of the human-AI team performance, we also quantify subjective measures of the human's perceived performance, teamwork, interpretability, trust, and overall preference of AI teammate. We find that humans have a clear preference toward a rule-based AI teammate (SmartBot) over a state-of-the-art learning-based AI teammate (Other-Play) across nearly all subjective metrics, and generally view the learning-based agent negatively, despite no statistical difference in the game score. This result has implications for future AI design and reinforcement learning benchmarking, highlighting the need to incorporate subjective metrics of human-AI teaming rather than a singular focus on objective task performance.
| null |
Counterfactual Invariance to Spurious Correlations in Text Classification
|
https://papers.nips.cc/paper_files/paper/2021/hash/8710ef761bbb29a6f9d12e4ef8e4379c-Abstract.html
|
Victor Veitch, Alexander D'Amour, Steve Yadlowsky, Jacob Eisenstein
|
https://papers.nips.cc/paper_files/paper/2021/hash/8710ef761bbb29a6f9d12e4ef8e4379c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12862-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8710ef761bbb29a6f9d12e4ef8e4379c-Paper.pdf
|
https://openreview.net/forum?id=BdKxQp0iBi8
|
https://papers.nips.cc/paper_files/paper/2021/file/8710ef761bbb29a6f9d12e4ef8e4379c-Supplemental.pdf
|
Informally, a 'spurious correlation' is the dependence of a model on some aspect of the input data that an analyst thinks shouldn't matter. In machine learning, these have a know-it-when-you-see-it character; e.g., changing the gender of a sentence's subject changes a sentiment predictor's output. To check for spurious correlations, we can 'stress test' models by perturbing irrelevant parts of input data and seeing if model predictions change. In this paper, we study stress testing using the tools of causal inference. We introduce counterfactual invariance as a formalization of the requirement that changing irrelevant parts of the input shouldn't change model predictions. We connect counterfactual invariance to out-of-domain model performance, and provide practical schemes for learning (approximately) counterfactual invariant predictors (without access to counterfactual examples). It turns out that both the means and implications of counterfactual invariance depend fundamentally on the true underlying causal structure of the data---in particular, whether the label causes the features or the features cause the label. Distinct causal structures require distinct regularization schemes to induce counterfactual invariance. Similarly, counterfactual invariance implies different domain shift guarantees depending on the underlying causal structure. This theory is supported by empirical results on text classification.
| null |
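A tiny sketch of the "stress test" mentioned above: perturb a nominally irrelevant part of each input and count how often the model's prediction changes. The perturbation rule and classifier interface are illustrative assumptions.

```python
def prediction_flip_rate(classify, texts, perturb):
    # classify: text -> label; perturb: text -> text with an irrelevant attribute changed
    flips = sum(classify(t) != classify(perturb(t)) for t in texts)
    return flips / len(texts)

# e.g. perturb = lambda t: t.replace(" he ", " she "), a crude subject-gender swap
```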
Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training
|
https://papers.nips.cc/paper_files/paper/2021/hash/8726bb30dc7ce15023daa8ff8402bcfd-Abstract.html
|
Lue Tao, Lei Feng, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen
|
https://papers.nips.cc/paper_files/paper/2021/hash/8726bb30dc7ce15023daa8ff8402bcfd-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12863-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8726bb30dc7ce15023daa8ff8402bcfd-Paper.pdf
|
https://openreview.net/forum?id=I39u89067j
|
https://papers.nips.cc/paper_files/paper/2021/file/8726bb30dc7ce15023daa8ff8402bcfd-Supplemental.pdf
|
Delusive attacks aim to substantially deteriorate the test accuracy of the learning model by slightly perturbing the features of correctly labeled training examples. By formalizing this malicious attack as finding the worst-case training data within a specific $\infty$-Wasserstein ball, we show that minimizing adversarial risk on the perturbed data is equivalent to optimizing an upper bound of natural risk on the original data. This implies that adversarial training can serve as a principled defense against delusive attacks. Thus, the test accuracy decreased by delusive attacks can be largely recovered by adversarial training. To further understand the internal mechanism of the defense, we disclose that adversarial training can resist the delusive perturbations by preventing the learner from overly relying on non-robust features in a natural setting. Finally, we complement our theoretical findings with a set of experiments on popular benchmark datasets, which show that the defense withstands six different practical attacks. Both theoretical and empirical results vote for adversarial training when confronted with delusive adversaries.
| null |
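A compact PGD-based adversarial training step, the defense advocated above. The attack radius, step size and number of inner steps are illustrative defaults; `model`, `opt` and the batch `(x, y)` come from an ordinary training loop.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, opt, x, y, eps=8/255, alpha=2/255, steps=10):
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):                                   # inner maximization (PGD)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    opt.zero_grad()
    F.cross_entropy(model(x + delta), y).backward()          # outer minimization
    opt.step()
```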
Determinantal point processes based on orthogonal polynomials for sampling minibatches in SGD
|
https://papers.nips.cc/paper_files/paper/2021/hash/8744cf92c88433f8cb04a02e6db69a0d-Abstract.html
|
Rémi Bardenet, Subhroshekhar Ghosh, Meixia LIN
|
https://papers.nips.cc/paper_files/paper/2021/hash/8744cf92c88433f8cb04a02e6db69a0d-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12864-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8744cf92c88433f8cb04a02e6db69a0d-Paper.pdf
|
https://openreview.net/forum?id=QpRufbD4xdn
|
https://papers.nips.cc/paper_files/paper/2021/file/8744cf92c88433f8cb04a02e6db69a0d-Supplemental.pdf
|
Stochastic gradient descent (SGD) is a cornerstone of machine learning. When the number $N$ of data items is large, SGD relies on constructing an unbiased estimator of the gradient of the empirical risk using a small subset of the original dataset, called a minibatch. Default minibatch construction involves uniformly sampling a subset of the desired size, but alternatives have been explored for variance reduction. In particular, experimental evidence suggests drawing minibatches from determinantal point processes (DPPs), tractable distributions over minibatches that favour diversity among selected items. However, like in recent work on DPPs for coresets, providing a systematic and principled understanding of how and why DPPs help has been difficult. In this work, we contribute an orthogonal polynomial-based determinantal point process paradigm for performing minibatch sampling in SGD. Our approach leverages the specific data distribution at hand, which endows it with greater sensitivity and power over existing data-agnostic methods. We substantiate our method via a detailed theoretical analysis of its convergence properties, interweaving between the discrete data set and the underlying continuous domain. In particular, we show how specific DPPs and a string of controlled approximations can lead to gradient estimators with a variance that decays faster with the batchsize than under uniform sampling. Coupled with existing finite-time guarantees for SGD on convex objectives, this entails that, for a large enough batchsize and a fixed budget of item-level gradients to evaluate, DPP minibatches lead to a smaller bound on the mean square approximation error than uniform minibatches. Moreover, our estimators are amenable to a recent algorithm that directly samples linear statistics of DPPs (i.e., the gradient estimator) without sampling the underlying DPP (i.e., the minibatch), thereby reducing computational overhead. We provide detailed synthetic as well as real data experiments to substantiate our theoretical claims.
| null |
Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations
|
https://papers.nips.cc/paper_files/paper/2021/hash/8757150decbd89b0f5442ca3db4d0e0e-Abstract.html
|
Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, Luc V Gool
|
https://papers.nips.cc/paper_files/paper/2021/hash/8757150decbd89b0f5442ca3db4d0e0e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12865-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8757150decbd89b0f5442ca3db4d0e0e-Paper.pdf
|
https://openreview.net/forum?id=j2gshvolULz
|
https://papers.nips.cc/paper_files/paper/2021/file/8757150decbd89b0f5442ca3db4d0e0e-Supplemental.pdf
|
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection. However, current methods are still primarily applied to curated datasets like ImageNet. In this paper, we first study how biases in the dataset affect existing methods. Our results show that an approach like MoCo works surprisingly well across: (i) object- versus scene-centric, (ii) uniform versus long-tailed and (iii) general versus domain-specific datasets. Second, given the generality of the approach, we try to realize further gains with minor modifications. We show that learning additional invariances - through the use of multi-scale cropping, stronger augmentations and nearest neighbors - improves the representations. Finally, we observe that MoCo learns spatially structured representations when trained with a multi-crop strategy. The representations can be used for semantic segment retrieval and video instance segmentation without finetuning. Moreover, the results are on par with specialized models. We hope this work will serve as a useful study for other researchers.
| null |
Neural Analysis and Synthesis: Reconstructing Speech from Self-Supervised Representations
|
https://papers.nips.cc/paper_files/paper/2021/hash/87682805257e619d49b8e0dfdc14affa-Abstract.html
|
Hyeong-Seok Choi, Juheon Lee, Wansoo Kim, Jie Lee, Hoon Heo, Kyogu Lee
|
https://papers.nips.cc/paper_files/paper/2021/hash/87682805257e619d49b8e0dfdc14affa-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12866-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/87682805257e619d49b8e0dfdc14affa-Paper.pdf
|
https://openreview.net/forum?id=Aw96fN64soV
|
https://papers.nips.cc/paper_files/paper/2021/file/87682805257e619d49b8e0dfdc14affa-Supplemental.pdf
|
We present a neural analysis and synthesis (NANSY) framework that can manipulate the voice, pitch, and speed of an arbitrary speech signal. Most of the previous works have focused on using information bottleneck to disentangle analysis features for controllable synthesis, which usually results in poor reconstruction quality. We address this issue by proposing a novel training strategy based on information perturbation. The idea is to perturb information in the original input signal (e.g., formant, pitch, and frequency response), thereby letting synthesis networks selectively take essential attributes to reconstruct the input signal. Because NANSY does not need any bottleneck structures, it enjoys both high reconstruction quality and controllability. Furthermore, NANSY does not require any labels associated with speech data such as text and speaker information, but rather uses a new set of analysis features, i.e., wav2vec feature and newly proposed pitch feature, Yingram, which allows for fully self-supervised training. Taking advantage of fully self-supervised training, NANSY can be easily extended to a multilingual setting by simply training it with a multilingual dataset. The experiments show that NANSY can achieve significant improvement in performance in several applications such as zero-shot voice conversion, pitch shift, and time-scale modification.
| null |
Auto-Encoding Knowledge Graph for Unsupervised Medical Report Generation
|
https://papers.nips.cc/paper_files/paper/2021/hash/876e1c59023b1a0e95808168e1a8ff89-Abstract.html
|
Fenglin Liu, Chenyu You, Xian Wu, Shen Ge, Sheng wang, Xu Sun
|
https://papers.nips.cc/paper_files/paper/2021/hash/876e1c59023b1a0e95808168e1a8ff89-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12867-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/876e1c59023b1a0e95808168e1a8ff89-Paper.pdf
|
https://openreview.net/forum?id=nIL7Q-p7-Sh
| null |
Medical report generation, which aims to automatically generate a long and coherent report of a given medical image, has been receiving growing research interest. Existing approaches mainly adopt a supervised manner and heavily rely on coupled image-report pairs. However, in the medical domain, building a large-scale image-report paired dataset is both time-consuming and expensive. To relax the dependency on paired data, we propose an unsupervised model Knowledge Graph Auto-Encoder (KGAE) which accepts independent sets of images and reports in training. KGAE consists of a pre-constructed knowledge graph, a knowledge-driven encoder and a knowledge-driven decoder. The knowledge graph works as the shared latent space to bridge the visual and textual domains; the knowledge-driven encoder projects medical images and reports to the corresponding coordinates in this latent space and the knowledge-driven decoder generates a medical report given a coordinate in this space. Since the knowledge-driven encoder and decoder can be trained with independent sets of images and reports, KGAE is unsupervised. The experiments show that the unsupervised KGAE generates desirable medical reports without using any image-report training pairs. Moreover, KGAE can also work in both semi-supervised and supervised settings, and accept paired images and reports in training. By further fine-tuning with image-report pairs, KGAE consistently outperforms the current state-of-the-art models on two datasets.
| null |
Diffusion Normalizing Flow
|
https://papers.nips.cc/paper_files/paper/2021/hash/876f1f9954de0aa402d91bb988d12cd4-Abstract.html
|
Qinsheng Zhang, Yongxin Chen
|
https://papers.nips.cc/paper_files/paper/2021/hash/876f1f9954de0aa402d91bb988d12cd4-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12868-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/876f1f9954de0aa402d91bb988d12cd4-Paper.pdf
|
https://openreview.net/forum?id=x1Lp2bOlVIo
|
https://papers.nips.cc/paper_files/paper/2021/file/876f1f9954de0aa402d91bb988d12cd4-Supplemental.pdf
|
We present a novel generative modeling method called diffusion normalizing flow based on stochastic differential equations (SDEs). The algorithm consists of two neural SDEs: a forward SDE that gradually adds noise to the data to transform the data into Gaussian random noise, and a backward SDE that gradually removes the noise to sample from the data distribution. By jointly training the two neural SDEs to minimize a common cost function that quantifies the difference between the two, the backward SDE converges to a diffusion process that starts with a Gaussian distribution and ends with the desired data distribution. Our method is closely related to normalizing flow and diffusion probabilistic models, and can be viewed as a combination of the two. Compared with normalizing flow, diffusion normalizing flow is able to learn distributions with sharp boundaries. Compared with diffusion probabilistic models, diffusion normalizing flow requires fewer discretization steps and thus has better sampling efficiency. Our algorithm demonstrates competitive performance in both high-dimension data density estimation and image generation tasks.
| null |
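An Euler-Maruyama sketch of the two coupled processes described above: a forward SDE that gradually noises data and a backward SDE that gradually denoises samples. The drift networks `drift_fwd` / `drift_bwd` and the scalar noise schedule `g` stand in for the learned components and are assumptions for illustration.

```python
import torch

def forward_sde_step(x, t, drift_fwd, g, dt):
    # x_{t+dt} = x_t + f(x_t, t) dt + g(t) sqrt(dt) * noise
    return x + drift_fwd(x, t) * dt + g(t) * dt**0.5 * torch.randn_like(x)

def backward_sde_step(x, t, drift_bwd, g, dt):
    # reverse-time step from t to t - dt, driven by the learned backward drift
    return x + drift_bwd(x, t) * dt + g(t) * dt**0.5 * torch.randn_like(x)
```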
Introspective Distillation for Robust Question Answering
|
https://papers.nips.cc/paper_files/paper/2021/hash/878d5691c824ee2aaf770f7d36c151d6-Abstract.html
|
Yulei Niu, Hanwang Zhang
|
https://papers.nips.cc/paper_files/paper/2021/hash/878d5691c824ee2aaf770f7d36c151d6-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12869-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/878d5691c824ee2aaf770f7d36c151d6-Paper.pdf
|
https://openreview.net/forum?id=OBLl2xoDHPw
|
https://papers.nips.cc/paper_files/paper/2021/file/878d5691c824ee2aaf770f7d36c151d6-Supplemental.pdf
|
Question answering (QA) models are well-known to exploit data bias, e.g., the language prior in visual QA and the position bias in reading comprehension. Recent debiasing methods achieve good out-of-distribution (OOD) generalizability with a considerable sacrifice of the in-distribution (ID) performance. Therefore, they are only applicable in domains where the test distribution is known in advance. In this paper, we present a novel debiasing method called Introspective Distillation (IntroD) to make the best of both worlds for QA. Our key technical contribution is to blend the inductive bias of OOD and ID by introspecting whether a training sample fits in the factual ID world or the counterfactual OOD one. Experiments on visual QA datasets VQA v2, VQA-CP, and reading comprehension dataset SQuAD demonstrate that our proposed IntroD maintains the competitive OOD performance compared to other debiasing methods, while sacrificing little or even achieving better ID performance compared to the non-debiasing ones.
| null |
Rethinking the Pruning Criteria for Convolutional Neural Network
|
https://papers.nips.cc/paper_files/paper/2021/hash/87ae6fb631f7c8a627e8e28785d9992d-Abstract.html
|
Zhongzhan Huang, Wenqi Shao, Xinjiang Wang, Liang Lin, Ping Luo
|
https://papers.nips.cc/paper_files/paper/2021/hash/87ae6fb631f7c8a627e8e28785d9992d-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12870-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/87ae6fb631f7c8a627e8e28785d9992d-Paper.pdf
|
https://openreview.net/forum?id=HL_4vjPTdtp
|
https://papers.nips.cc/paper_files/paper/2021/file/87ae6fb631f7c8a627e8e28785d9992d-Supplemental.pdf
|
Channel pruning is a popular technique for compressing convolutional neural networks (CNNs), where various pruning criteria have been proposed to remove the redundant filters. From our comprehensive experiments, we found two blind spots of pruning criteria: (1) Similarity: There are some strong similarities among several primary pruning criteria that are widely cited and compared. According to these criteria, the ranks of filters’ Importance Scores are almost identical, resulting in similar pruned structures. (2) Applicability: The filters' Importance Scores measured by some pruning criteria are too close to distinguish the network redundancy well. In this paper, we analyze the above blind spots on different types of pruning criteria with layer-wise pruning or global pruning. We also break some stereotypes, such as that the results of $\ell_1$ and $\ell_2$ pruning are not always similar. These analyses are based on the empirical experiments and our assumption (Convolutional Weight Distribution Assumption) that the well-trained convolutional filters in each layer approximately follow a Gaussian-like distribution. This assumption has been verified through systematic and extensive statistical tests.
| null |
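A quick way to see the "similarity" blind spot described above: score each convolutional filter by its $\ell_1$ and $\ell_2$ norm and compare the two importance rankings. The randomly initialized layer and the use of Spearman correlation are stand-ins for the paper's experiments on trained networks.

```python
import torch
from scipy.stats import spearmanr

conv = torch.nn.Conv2d(64, 128, kernel_size=3)              # stand-in layer
w = conv.weight.detach().flatten(start_dim=1)                # one row per output filter
l1_scores = w.abs().sum(dim=1)
l2_scores = w.pow(2).sum(dim=1).sqrt()
rho, _ = spearmanr(l1_scores.numpy(), l2_scores.numpy())
print(f"rank correlation between the l1 and l2 criteria: {rho:.3f}")
```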
Adaptive Machine Unlearning
|
https://papers.nips.cc/paper_files/paper/2021/hash/87f7ee4fdb57bdfd52179947211b7ebb-Abstract.html
|
Varun Gupta, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, Chris Waites
|
https://papers.nips.cc/paper_files/paper/2021/hash/87f7ee4fdb57bdfd52179947211b7ebb-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12871-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/87f7ee4fdb57bdfd52179947211b7ebb-Paper.pdf
|
https://openreview.net/forum?id=Goz-qsH1F14
|
https://papers.nips.cc/paper_files/paper/2021/file/87f7ee4fdb57bdfd52179947211b7ebb-Supplemental.pdf
|
Data deletion algorithms aim to remove the influence of deleted data points from trained models at a cheaper computational cost than fully retraining those models. However, for sequences of deletions, most prior work in the non-convex setting gives valid guarantees only for sequences that are chosen independently of the models that are published. If people choose to delete their data as a function of the published models (because they don’t like what the models reveal about them, for example), then the update sequence is adaptive. In this paper, we give a general reduction from deletion guarantees against adaptive sequences to deletion guarantees against non-adaptive sequences, using differential privacy and its connection to max information. Combined with ideas from prior work which give guarantees for non-adaptive deletion sequences, this leads to extremely flexible algorithms able to handle arbitrary model classes and training methodologies, giving strong provable deletion guarantees for adaptive deletion sequences. We show in theory how prior work for non-convex models fails against adaptive deletion sequences, and use this intuition to design a practical attack against the SISA algorithm of Bourtoule et al. [2021] on CIFAR-10, MNIST, Fashion-MNIST.
| null |
EditGAN: High-Precision Semantic Image Editing
|
https://papers.nips.cc/paper_files/paper/2021/hash/880610aa9f9de9ea7c545169c716f477-Abstract.html
|
Huan Ling, Karsten Kreis, Daiqing Li, Seung Wook Kim, Antonio Torralba, Sanja Fidler
|
https://papers.nips.cc/paper_files/paper/2021/hash/880610aa9f9de9ea7c545169c716f477-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12872-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/880610aa9f9de9ea7c545169c716f477-Paper.pdf
|
https://openreview.net/forum?id=ppv5yqhpNyE
| null |
Generative adversarial networks (GANs) have recently found applications in image editing. However, most GAN-based image editing methods often require large-scale datasets with semantic segmentation annotations for training, only provide high-level control, or merely interpolate between different images. Here, we propose EditGAN, a novel method for high-quality, high-precision semantic image editing, allowing users to edit images by modifying their highly detailed part segmentation masks, e.g., drawing a new mask for the headlight of a car. EditGAN builds on a GAN framework that jointly models images and their semantic segmentation, requiring only a handful of labeled examples – making it a scalable tool for editing. Specifically, we embed an image into the GAN’s latent space and perform conditional latent code optimization according to the segmentation edit, which effectively also modifies the image. To amortize optimization, we find “editing vectors” in latent space that realize the edits. The framework allows us to learn an arbitrary number of editing vectors, which can then be directly applied on other images at interactive rates. We experimentally show that EditGAN can manipulate images with an unprecedented level of detail and freedom while preserving full image quality. We can also easily combine multiple edits and perform plausible edits beyond EditGAN’s training data. We demonstrate EditGAN on a wide variety of image types and quantitatively outperform several previous editing methods on standard editing benchmark tasks.
| null |
Deep Molecular Representation Learning via Fusing Physical and Chemical Information
|
https://papers.nips.cc/paper_files/paper/2021/hash/884d247c6f65a96a7da4d1105d584ddd-Abstract.html
|
Shuwen Yang, Ziyao Li, Guojie Song, Lingsheng Cai
|
https://papers.nips.cc/paper_files/paper/2021/hash/884d247c6f65a96a7da4d1105d584ddd-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12873-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/884d247c6f65a96a7da4d1105d584ddd-Paper.pdf
|
https://openreview.net/forum?id=Uxi7X1EqywV
|
https://papers.nips.cc/paper_files/paper/2021/file/884d247c6f65a96a7da4d1105d584ddd-Supplemental.pdf
|
Molecular representation learning is the first yet vital step in combining deep learning and molecular science. To push the boundaries of molecular representation learning, we present PhysChem, a novel neural architecture that learns molecular representations via fusing physical and chemical information of molecules. PhysChem is composed of a physicist network (PhysNet) and a chemist network (ChemNet). PhysNet is a neural physical engine that learns molecular conformations through simulating molecular dynamics with parameterized forces; ChemNet implements geometry-aware deep message-passing to learn chemical / biomedical properties of molecules. Two networks specialize in their own tasks and cooperate by providing expertise to each other. By fusing physical and chemical information, PhysChem achieved state-of-the-art performances on MoleculeNet, a standard molecular machine learning benchmark. The effectiveness of PhysChem was further corroborated on cutting-edge datasets of SARS-CoV-2.
| null |
Neural optimal feedback control with local learning rules
|
https://papers.nips.cc/paper_files/paper/2021/hash/88591b4d3219675bdeb33584b755f680-Abstract.html
|
Johannes Friedrich, Siavash Golkar, Shiva Farashahi, Alexander Genkin, Anirvan Sengupta, Dmitri Chklovskii
|
https://papers.nips.cc/paper_files/paper/2021/hash/88591b4d3219675bdeb33584b755f680-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12874-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/88591b4d3219675bdeb33584b755f680-Paper.pdf
|
https://openreview.net/forum?id=hzioAx8g9x
|
https://papers.nips.cc/paper_files/paper/2021/file/88591b4d3219675bdeb33584b755f680-Supplemental.pdf
|
A major problem in motor control is understanding how the brain plans and executes proper movements in the face of delayed and noisy stimuli. A prominent framework for addressing such control problems is Optimal Feedback Control (OFC). OFC generates control actions that optimize behaviorally relevant criteria by integrating noisy sensory stimuli and the predictions of an internal model using the Kalman filter or its extensions. However, a satisfactory neural model of Kalman filtering and control is lacking because existing proposals have the following limitations: not considering the delay of sensory feedback, training in alternating phases, and requiring knowledge of the noise covariance matrices as well as of the system dynamics. Moreover, the majority of these studies considered Kalman filtering in isolation, and not jointly with control. To address these shortcomings, we introduce a novel online algorithm which combines adaptive Kalman filtering with a model-free control approach (i.e., a policy gradient algorithm). We implement this algorithm in a biologically plausible neural network with local synaptic plasticity rules. This network performs system identification, Kalman filtering and control with delayed noisy sensory feedback, without the need for multiple phases with distinct update rules or knowledge of the noise covariances; state estimation under delayed feedback is made possible by an internal model. It learns the control policy without requiring any knowledge of the dynamics, thus avoiding the need for weight transport. In this way, our implementation of OFC solves the credit assignment problem needed to produce the appropriate sensory-motor control in the presence of stimulus delay.
| null |
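For reference, the textbook Kalman predict/update step that the circuit above is argued to implement with local, online rules; the matrix names follow standard notation and are not tied to the paper's network variables.

```python
import numpy as np

def kalman_step(x, P, u, y, A, B, C, Q, R):
    # predict with the internal model
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # update with the (possibly delayed) observation y
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```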
Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection
|
https://papers.nips.cc/paper_files/paper/2021/hash/8860e834a67da41edd6ffe8a1c58fa55-Abstract.html
|
Matteo Papini, Andrea Tirinzoni, Aldo Pacchiano, Marcello Restelli, Alessandro Lazaric, Matteo Pirotta
|
https://papers.nips.cc/paper_files/paper/2021/hash/8860e834a67da41edd6ffe8a1c58fa55-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12875-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8860e834a67da41edd6ffe8a1c58fa55-Paper.pdf
|
https://openreview.net/forum?id=bGVZ6_u08Jy
| null |
We study the role of the representation of state-action value functions in regret minimization in finite-horizon Markov Decision Processes (MDPs) with linear structure. We first derive a necessary condition on the representation, called universally spanning optimal features (UNISOFT), to achieve constant regret in any MDP with linear reward function. This result encompasses the well-known settings of low-rank MDPs and, more generally, zero inherent Bellman error (also known as the Bellman closure assumption). We then demonstrate that this condition is also sufficient for these classes of problems by deriving a constant regret bound for two optimistic algorithms (LSVI-UCB and ELEANOR). Finally, we propose an algorithm for representation selection and we prove that it achieves constant regret when one of the given representations, or a suitable combination of them, satisfies the UNISOFT condition.
| null |
Noether Networks: meta-learning useful conserved quantities
|
https://papers.nips.cc/paper_files/paper/2021/hash/886ad506e0c115cf590d18ebb6c26561-Abstract.html
|
Ferran Alet, Dylan Doblar, Allan Zhou, Josh Tenenbaum, Kenji Kawaguchi, Chelsea Finn
|
https://papers.nips.cc/paper_files/paper/2021/hash/886ad506e0c115cf590d18ebb6c26561-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12876-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/886ad506e0c115cf590d18ebb6c26561-Paper.pdf
|
https://openreview.net/forum?id=_NOwVKCmSo
|
https://papers.nips.cc/paper_files/paper/2021/file/886ad506e0c115cf590d18ebb6c26561-Supplemental.pdf
|
Progress in machine learning (ML) stems from a combination of data availability, computational resources, and an appropriate encoding of inductive biases. Useful biases often exploit symmetries in the prediction problem, such as convolutional networks relying on translation equivariance. Automatically discovering these useful symmetries holds the potential to greatly improve the performance of ML systems, but still remains a challenge. In this work, we focus on sequential prediction problems and take inspiration from Noether's theorem to reduce the problem of finding inductive biases to meta-learning useful conserved quantities. We propose Noether Networks: a new type of architecture where a meta-learned conservation loss is optimized inside the prediction function. We show, theoretically and experimentally, that Noether Networks improve prediction quality, providing a general framework for discovering inductive biases in sequential problems.
| null |
Uncertainty-Driven Loss for Single Image Super-Resolution
|
https://papers.nips.cc/paper_files/paper/2021/hash/88a199611ac2b85bd3f76e8ee7e55650-Abstract.html
|
Qian Ning, Weisheng Dong, Xin Li, Jinjian Wu, GUANGMING Shi
|
https://papers.nips.cc/paper_files/paper/2021/hash/88a199611ac2b85bd3f76e8ee7e55650-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12877-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/88a199611ac2b85bd3f76e8ee7e55650-Paper.pdf
|
https://openreview.net/forum?id=MXmmuhJYPdU
|
https://papers.nips.cc/paper_files/paper/2021/file/88a199611ac2b85bd3f76e8ee7e55650-Supplemental.pdf
|
In low-level vision such as single image super-resolution (SISR), traditional MSE or L1 loss function treats every pixel equally with the assumption that the importance of all pixels is the same. However, it has been long recognized that texture and edge areas carry more important visual information than smooth areas in photographic images. How to achieve such spatial adaptation in a principled manner has been an open problem in both traditional model-based and modern learning-based approaches toward SISR. In this paper, we propose a new adaptive weighted loss for SISR to train deep networks focusing on challenging situations such as textured and edge pixels with high uncertainty. Specifically, we introduce variance estimation characterizing the uncertainty on a pixel-by-pixel basis into SISR solutions so the targeted pixels in a high-resolution image (mean) and their corresponding uncertainty (variance) can be learned simultaneously. Moreover, uncertainty estimation allows us to leverage conventional wisdom such as sparsity prior for regularizing SISR solutions. Ultimately, pixels with large uncertainty (e.g., texture and edge pixels) will be prioritized for SISR according to their importance to visual quality. For the first time, we demonstrate that such uncertainty-driven loss can achieve better results than MSE or L1 loss for a wide range of network architectures. Experimental results on three popular SISR networks show that our proposed uncertainty-driven loss has achieved better PSNR performance than traditional loss functions without any increased computation during testing. The code is available at https://see.xidian.edu.cn/faculty/wsdong/Projects/UDL-SR.htm
| null |
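A loose sketch of an uncertainty-weighted reconstruction loss in the spirit of the abstract above: the network predicts a per-pixel mean and log-variance, and pixels with higher estimated uncertainty receive larger weight. This is a simplified illustration, not the paper's exact formulation.

```python
import torch

def uncertainty_weighted_l1(mean, log_var, target, eps=1e-8):
    uncertainty = log_var.exp().detach()                # no gradient through the weights
    weight = uncertainty / (uncertainty.mean() + eps)   # normalize weights around 1
    return (weight * (mean - target).abs()).mean()
```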
GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training
|
https://papers.nips.cc/paper_files/paper/2021/hash/88ae6372cfdc5df69a976e893f4d554b-Abstract.html
|
Chen Zhu, Renkun Ni, Zheng Xu, Kezhi Kong, W. Ronny Huang, Tom Goldstein
|
https://papers.nips.cc/paper_files/paper/2021/hash/88ae6372cfdc5df69a976e893f4d554b-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12878-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/88ae6372cfdc5df69a976e893f4d554b-Paper.pdf
|
https://openreview.net/forum?id=eXlxB3aLOe
|
https://papers.nips.cc/paper_files/paper/2021/file/88ae6372cfdc5df69a976e893f4d554b-Supplemental.pdf
|
Innovations in neural architectures have fostered significant breakthroughs in language modeling and computer vision. Unfortunately, novel architectures often result in challenging hyper-parameter choices and training instability if the network parameters are not properly initialized. A number of architecture-specific initialization schemes have been proposed, but these schemes are not always portable to new architectures. This paper presents GradInit, an automated and architecture-agnostic method for initializing neural networks. GradInit is based on a simple heuristic: the norm of each network layer is adjusted so that a single step of SGD or Adam with prescribed hyperparameters results in the smallest possible loss value. This adjustment is done by introducing a scalar multiplier variable in front of each parameter block, and then optimizing these variables using a simple numerical scheme. GradInit accelerates the convergence and test performance of many convolutional architectures, both with and without skip connections, and even without normalization layers. It also improves the stability of the original Transformer architecture for machine translation, enabling training it without learning rate warmup using either Adam or SGD under a wide range of learning rates and momentum coefficients. Code is available at https://github.com/zhuchen03/gradinit.
| null |
Capacity and Bias of Learned Geometric Embeddings for Directed Graphs
|
https://papers.nips.cc/paper_files/paper/2021/hash/88d25099b103efd638163ecb40a55589-Abstract.html
|
Michael Boratko, Dongxu Zhang, Nicholas Monath, Luke Vilnis, Kenneth L Clarkson, Andrew McCallum
|
https://papers.nips.cc/paper_files/paper/2021/hash/88d25099b103efd638163ecb40a55589-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12879-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/88d25099b103efd638163ecb40a55589-Paper.pdf
|
https://openreview.net/forum?id=0IqTX6FcZWv
|
https://papers.nips.cc/paper_files/paper/2021/file/88d25099b103efd638163ecb40a55589-Supplemental.pdf
|
A wide variety of machine learning tasks such as knowledge base completion, ontology alignment, and multi-label classification can benefit from incorporating into learning differentiable representations of graphs or taxonomies. While vectors in Euclidean space can theoretically represent any graph, much recent work shows that alternatives such as complex, hyperbolic, order, or box embeddings have geometric properties better suited to modeling real-world graphs. Experimentally these gains are seen only in lower dimensions, however, with performance benefits diminishing in higher dimensions. In this work, we introduce a novel variant of box embeddings that uses a learned smoothing parameter to achieve better representational capacity than vector models in low dimensions, while also avoiding performance saturation common to other geometric models in high dimensions. Further, we present theoretical results that prove box embeddings can represent any DAG. We perform rigorous empirical evaluations of vector, hyperbolic, and region-based geometric representations on several families of synthetic and real-world directed graphs. Analysis of these results exposes correlations between different families of graphs, graph characteristics, model size, and embedding geometry, providing useful insights into the inductive biases of various differentiable graph representations.
| null |
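A sketch of the kind of smoothed box operation behind the embeddings discussed above: box volumes are computed with a softplus instead of a hard max, and containment is scored as an intersection-over-volume ratio. The smoothing temperature and the exact volume definition are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def soft_volume(lo, hi, temp=1.0):
    # smoothed side lengths avoid zero-gradient regions of hard box volumes
    return F.softplus(hi - lo, beta=1.0 / temp).prod(dim=-1)

def containment_score(lo_a, hi_a, lo_b, hi_b, temp=1.0):
    # score for "B is inside A", roughly vol(A intersect B) / vol(B)
    inter_lo = torch.maximum(lo_a, lo_b)
    inter_hi = torch.minimum(hi_a, hi_b)
    return soft_volume(inter_lo, inter_hi, temp) / soft_volume(lo_b, hi_b, temp)
```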
Online Learning Of Neural Computations From Sparse Temporal Feedback
|
https://papers.nips.cc/paper_files/paper/2021/hash/88e1ce84f9feef5a08d0df0334c53468-Abstract.html
|
Lukas Braun, Tim Vogels
|
https://papers.nips.cc/paper_files/paper/2021/hash/88e1ce84f9feef5a08d0df0334c53468-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12880-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/88e1ce84f9feef5a08d0df0334c53468-Paper.pdf
|
https://openreview.net/forum?id=nJUDGEc69a5
|
https://papers.nips.cc/paper_files/paper/2021/file/88e1ce84f9feef5a08d0df0334c53468-Supplemental.pdf
|
Neuronal computations depend on synaptic connectivity and intrinsic electrophysiological properties. Synaptic connectivity determines which inputs from presynaptic neurons are integrated, while cellular properties determine how inputs are filtered over time. Unlike their biological counterparts, most computational approaches to learning in simulated neural networks are limited to changes in synaptic connectivity. However, if intrinsic parameters change, neural computations are altered drastically. Here, we include the parameters that determine the intrinsic properties, e.g., time constants and reset potential, into the learning paradigm. Using sparse feedback signals that indicate target spike times, and gradient-based parameter updates, we show that the intrinsic parameters can be learned along with the synaptic weights to produce specific input-output functions. Specifically, we use a teacher-student paradigm in which a randomly initialised leaky integrate-and-fire or resonate-and-fire neuron must recover the parameters of a teacher neuron. We show that complex temporal functions can be learned online and without backpropagation through time, relying on event-based updates only. Our results are a step towards online learning of neural computations from ungraded and unsigned sparse feedback signals with a biologically inspired learning mechanism.
| null |
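Below is a bare-bones leaky integrate-and-fire simulation in which the intrinsic parameters (membrane time constant, threshold and reset potential) are ordinary learnable tensors, mirroring the teacher-student setup above. The hard threshold and the parameter values are simplifications; the paper's event-based gradient updates are not reproduced.

```python
import torch

tau   = torch.tensor(20.0, requires_grad=True)   # membrane time constant
v_th  = torch.tensor(1.0,  requires_grad=True)   # spike threshold
v_rst = torch.tensor(0.0,  requires_grad=True)   # reset potential

def lif_forward(input_current, dt=1.0):
    # input_current: 1-D tensor of input drive over time; returns spike times (indices)
    v, spikes = torch.zeros(()), []
    for t, i in enumerate(input_current):
        v = v + dt / tau * (-v + i)               # leaky integration
        if v > v_th:
            spikes.append(t)
            v = v_rst.clone()                     # reset after a spike
    return spikes

print(lif_forward(torch.full((100,), 1.5)))
```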
Self-Supervised Learning with Data Augmentations Provably Isolates Content from Style
|
https://papers.nips.cc/paper_files/paper/2021/hash/8929c70f8d710e412d38da624b21c3c8-Abstract.html
|
Julius von Kügelgen, Yash Sharma, Luigi Gresele, Wieland Brendel, Bernhard Schölkopf, Michel Besserve, Francesco Locatello
|
https://papers.nips.cc/paper_files/paper/2021/hash/8929c70f8d710e412d38da624b21c3c8-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12881-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8929c70f8d710e412d38da624b21c3c8-Paper.pdf
|
https://openreview.net/forum?id=4pf_pOo0Dt
|
https://papers.nips.cc/paper_files/paper/2021/file/8929c70f8d710e412d38da624b21c3c8-Supplemental.pdf
|
Self-supervised representation learning has shown remarkable success in a number of domains. A common practice is to perform data augmentation via hand-crafted transformations intended to leave the semantics of the data invariant. We seek to understand the empirical success of this approach from a theoretical perspective. We formulate the augmentation process as a latent variable model by postulating a partition of the latent representation into a content component, which is assumed invariant to augmentation, and a style component, which is allowed to change. Unlike prior work on disentanglement and independent component analysis, we allow for both nontrivial statistical and causal dependencies in the latent space. We study the identifiability of the latent representation based on pairs of views of the observations and prove sufficient conditions that allow us to identify the invariant content partition up to an invertible mapping in both generative and discriminative settings. We find numerical simulations with dependent latent variables are consistent with our theory. Lastly, we introduce Causal3DIdent, a dataset of high-dimensional, visually complex images with rich causal dependencies, which we use to study the effect of data augmentations performed in practice.
| null |
Instance-Conditional Knowledge Distillation for Object Detection
|
https://papers.nips.cc/paper_files/paper/2021/hash/892c91e0a653ba19df81a90f89d99bcd-Abstract.html
|
Zijian Kang, Peizhen Zhang, Xiangyu Zhang, Jian Sun, Nanning Zheng
|
https://papers.nips.cc/paper_files/paper/2021/hash/892c91e0a653ba19df81a90f89d99bcd-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12882-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/892c91e0a653ba19df81a90f89d99bcd-Paper.pdf
|
https://openreview.net/forum?id=k7aeAz4Vbb
|
https://papers.nips.cc/paper_files/paper/2021/file/892c91e0a653ba19df81a90f89d99bcd-Supplemental.pdf
|
Knowledge distillation has shown great success in classification; however, it is still challenging for detection. In a typical image for detection, representations from different locations may have different contributions to detection targets, making the distillation hard to balance. In this paper, we propose a conditional distillation framework to distill the desired knowledge, namely knowledge that is beneficial in terms of both classification and localization for every instance. The framework introduces a learnable conditional decoding module, which retrieves information given each target instance as query. Specifically, we encode the condition information as query and use the teacher's representations as key. The attention between query and key is used to measure the contribution of different features, guided by a localization-recognition-sensitive auxiliary task. Extensive experiments demonstrate the efficacy of our method: we observe impressive improvements under various settings. Notably, we boost RetinaNet with ResNet-50 backbone from $37.4$ to $40.7$ mAP ($+3.3$) under $1\times$ schedule, which even surpasses the teacher ($40.4$ mAP) with ResNet-101 backbone under $3\times$ schedule. Code has been released on https://github.com/megvii-research/ICD.
| null |
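To make the query-key mechanism above concrete, here is a minimal, hypothetical sketch (tensor shapes, function name, and wiring are illustrative assumptions, not the authors' released code) of scoring a teacher's feature map against per-instance queries with scaled dot-product attention:

```python
import torch

def instance_conditional_attention(instance_queries, teacher_feats, dim=256):
    """Score teacher features against per-instance queries.

    instance_queries: (num_instances, dim)  -- encoded ground-truth instances
    teacher_feats:    (num_locations, dim)  -- flattened teacher feature map
    Returns attention weights (num_instances, num_locations) and the
    per-instance aggregation of teacher features.
    """
    scores = instance_queries @ teacher_feats.t() / dim ** 0.5   # (I, L)
    attn = scores.softmax(dim=-1)            # each instance attends over all locations
    aggregated = attn @ teacher_feats        # (I, dim) instance-conditioned teacher knowledge
    return attn, aggregated

# toy usage
q = torch.randn(5, 256)      # 5 instances in the image
k = torch.randn(1024, 256)   # a 32x32 teacher feature map, flattened
attn, agg = instance_conditional_attention(q, k)
```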
Self-Supervised Representation Learning on Neural Network Weights for Model Characteristic Prediction
|
https://papers.nips.cc/paper_files/paper/2021/hash/89562dccfeb1d0394b9ae7e09544dc70-Abstract.html
|
Konstantin Schürholt, Dimche Kostadinov, Damian Borth
|
https://papers.nips.cc/paper_files/paper/2021/hash/89562dccfeb1d0394b9ae7e09544dc70-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12883-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/89562dccfeb1d0394b9ae7e09544dc70-Paper.pdf
|
https://openreview.net/forum?id=F1D8buayXQT
|
https://papers.nips.cc/paper_files/paper/2021/file/89562dccfeb1d0394b9ae7e09544dc70-Supplemental.pdf
|
Self-Supervised Learning (SSL) has been shown to learn useful and information-preserving representations. Neural Networks (NNs) are widely applied, yet their weight space is still not fully understood. Therefore, we propose to use SSL to learn hyper-representations of the weights of populations of NNs. To that end, we introduce domain specific data augmentations and an adapted attention architecture. Our empirical evaluation demonstrates that self-supervised representation learning in this domain is able to recover diverse NN model characteristics. Further, we show that the proposed learned representations outperform prior work for predicting hyper-parameters, test accuracy, and generalization gap as well as transfer to out-of-distribution settings.
| null |
Multimodal Virtual Point 3D Detection
|
https://papers.nips.cc/paper_files/paper/2021/hash/895daa408f494ad58006c47a30f51c1f-Abstract.html
|
Tianwei Yin, Xingyi Zhou, Philipp Krähenbühl
|
https://papers.nips.cc/paper_files/paper/2021/hash/895daa408f494ad58006c47a30f51c1f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12884-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/895daa408f494ad58006c47a30f51c1f-Paper.pdf
|
https://openreview.net/forum?id=8xoN9ZdSW8
| null |
Lidar-based sensing drives current autonomous vehicles. Despite rapid progress, current Lidar sensors still lag two decades behind traditional color cameras in terms of resolution and cost. For autonomous driving, this means that large objects close to the sensors are easily visible, but far-away or small objects comprise only one measurement or two. This is an issue, especially when these objects turn out to be driving hazards. On the other hand, these same objects are clearly visible in onboard RGB sensors. In this work, we present an approach to seamlessly fuse RGB sensors into Lidar-based 3D recognition. Our approach takes a set of 2D detections to generate dense 3D virtual points to augment an otherwise sparse 3D point cloud. These virtual points naturally integrate into any standard Lidar-based 3D detectors along with regular Lidar measurements. The resulting multi-modal detector is simple and effective. Experimental results on the large-scale nuScenes dataset show that our framework improves a strong CenterPoint baseline by a significant $6.6$ mAP, and outperforms competing fusion approaches. Code and more visualizations are available at https://tianweiy.github.io/mvp/
| null |
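A rough sketch of the core lifting step, under simplifying assumptions (a pinhole camera model and depths already assigned to sampled pixels; function and variable names are hypothetical): 2D pixels drawn from a detection are unprojected into 3D virtual points that can be appended to the lidar point cloud.

```python
import numpy as np

def unproject_to_virtual_points(pixels_uv, depths, K):
    """Lift 2D pixels with assigned depths to 3D points in the camera frame.

    pixels_uv: (N, 2) pixel coordinates sampled inside a 2D detection
    depths:    (N,)   depths borrowed from nearby lidar returns
    K:         (3, 3) camera intrinsic matrix
    """
    ones = np.ones((pixels_uv.shape[0], 1))
    homo = np.concatenate([pixels_uv, ones], axis=1)   # homogeneous pixel coordinates
    rays = (np.linalg.inv(K) @ homo.T).T               # normalized camera rays
    return rays * depths[:, None]                       # (N, 3) virtual points

K = np.array([[1000., 0., 640.], [0., 1000., 360.], [0., 0., 1.]])
uv = np.array([[600., 300.], [650., 320.]])
pts = unproject_to_virtual_points(uv, np.array([12.0, 12.5]), K)
```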
On Joint Learning for Solving Placement and Routing in Chip Design
|
https://papers.nips.cc/paper_files/paper/2021/hash/898aef0932f6aaecda27aba8e9903991-Abstract.html
|
Ruoyu Cheng, Junchi Yan
|
https://papers.nips.cc/paper_files/paper/2021/hash/898aef0932f6aaecda27aba8e9903991-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12885-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/898aef0932f6aaecda27aba8e9903991-Paper.pdf
|
https://openreview.net/forum?id=rsd-9hCIit3
| null |
Owing to its advantages in GPU acceleration and reduced dependency on human experts, machine learning has emerged as a tool for solving the placement and routing problems, two critical steps in the modern chip design flow. As the field is still in its early stage, several fundamental issues remain unresolved: scalability, reward design, and the end-to-end learning paradigm, among others. To achieve end-to-end placement learning, we first propose a joint learning method for the placement of macros and standard cells, by integrating reinforcement learning with a gradient-based optimization scheme. To further bridge the placement with the subsequent routing task, we also develop a joint learning approach via reinforcement learning. One key design in our (reinforcement) learning paradigm involves a multi-view embedding model to encode both global graph-level and local node-level information of the input macros. Moreover, random network distillation is devised to encourage exploration. Experiments on public chip design benchmarks show that our method can effectively learn from experience and also provide high-quality intermediate placement for the subsequent standard cell placement, within a few hours of training.
| null |
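As an illustration of gradient-based placement (not the paper's exact objective), a common differentiable wirelength proxy is the log-sum-exp approximation of half-perimeter wirelength; the sketch below, with hypothetical names and a toy netlist, minimizes it over cell coordinates with plain gradient descent.

```python
import torch

def lse_wirelength(x, y, nets, gamma=1.0):
    """Smooth half-perimeter wirelength over a list of nets.

    x, y:  (num_cells,) placement coordinates (require grad)
    nets:  list of index tensors, each listing the cells connected by one net
    """
    total = 0.0
    for net in nets:
        for coord in (x[net], y[net]):
            # log-sum-exp smoothly upper-bounds max - min of the net's coordinates
            total = total + gamma * (torch.logsumexp(coord / gamma, 0)
                                     + torch.logsumexp(-coord / gamma, 0))
    return total

x = torch.randn(6, requires_grad=True)
y = torch.randn(6, requires_grad=True)
nets = [torch.tensor([0, 1, 2]), torch.tensor([2, 3, 4, 5])]
opt = torch.optim.Adam([x, y], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    loss = lse_wirelength(x, y, nets)
    loss.backward()
    opt.step()
```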
Learning with Algorithmic Supervision via Continuous Relaxations
|
https://papers.nips.cc/paper_files/paper/2021/hash/89ae0fe22c47d374bc9350ef99e01685-Abstract.html
|
Felix Petersen, Christian Borgelt, Hilde Kuehne, Oliver Deussen
|
https://papers.nips.cc/paper_files/paper/2021/hash/89ae0fe22c47d374bc9350ef99e01685-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12886-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/89ae0fe22c47d374bc9350ef99e01685-Paper.pdf
|
https://openreview.net/forum?id=w0ZNeU5S-l
|
https://papers.nips.cc/paper_files/paper/2021/file/89ae0fe22c47d374bc9350ef99e01685-Supplemental.pdf
|
The integration of algorithmic components into neural architectures has gained increased attention recently, as it allows training neural networks with new forms of supervision such as ordering constraints or silhouettes instead of using ground truth labels. Many approaches in the field focus on the continuous relaxation of a specific task and show promising results in this context. However, the focus on single tasks also limits the applicability of the proposed concepts to a narrow range of applications. In this work, we build on those ideas to propose an approach that allows algorithms to be integrated into end-to-end trainable neural network architectures based on a general approximation of discrete conditions. To this end, we relax these conditions in control structures such as conditional statements, loops, and indexing, so that the resulting algorithms are smoothly differentiable. To obtain meaningful gradients, each relevant variable is perturbed via logistic distributions and the expectation value under this perturbation is approximated. We evaluate the proposed continuous relaxation model on four challenging tasks and show that it can keep up with relaxations specifically designed for each individual task.
| null |
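A minimal sketch of the idea of relaxing a discrete condition (names are illustrative): perturbing the comparison with logistic noise makes the probability that the branch is taken a sigmoid, and the relaxed statement returns the expectation over branches, so gradients flow through the condition.

```python
import torch

def soft_if(cond_lhs, cond_rhs, then_val, else_val, beta=1.0):
    """Smoothly relaxed `if cond_lhs < cond_rhs: then_val else: else_val`.

    Under logistic perturbation of cond_lhs - cond_rhs, the probability that
    the condition holds is a sigmoid; the relaxed branch is the expectation.
    """
    p_then = torch.sigmoid(beta * (cond_rhs - cond_lhs))
    return p_then * then_val + (1.0 - p_then) * else_val

a = torch.tensor(0.3, requires_grad=True)
b = torch.tensor(0.5)
out = soft_if(a, b, then_val=a * 2, else_val=b * 2)
out.backward()   # gradients flow through the relaxed condition
```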
Differentiable Multiple Shooting Layers
|
https://papers.nips.cc/paper_files/paper/2021/hash/89b9c689a57b82e59074c6ba09aa394d-Abstract.html
|
Stefano Massaroli, Michael Poli, Sho Sonoda, Taiji Suzuki, Jinkyoo Park, Atsushi Yamashita, Hajime Asama
|
https://papers.nips.cc/paper_files/paper/2021/hash/89b9c689a57b82e59074c6ba09aa394d-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12887-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/89b9c689a57b82e59074c6ba09aa394d-Paper.pdf
|
https://openreview.net/forum?id=NKNjbKb5dK
|
https://papers.nips.cc/paper_files/paper/2021/file/89b9c689a57b82e59074c6ba09aa394d-Supplemental.pdf
|
We detail a novel class of implicit neural models. Leveraging time-parallel methods for differential equations, Multiple Shooting Layers (MSLs) seek solutions of initial value problems via parallelizable root-finding algorithms. MSLs broadly serve as drop-in replacements for neural ordinary differential equations (Neural ODEs) with improved efficiency in number of function evaluations (NFEs) and wall-clock inference time. We develop the algorithmic framework of MSLs, analyzing the different choices of solution methods from a theoretical and computational perspective. MSLs are showcased in long horizon optimal control of ODEs and PDEs and as latent models for sequence generation. Finally, we investigate the speedups obtained through application of MSL inference in neural controlled differential equations (Neural CDEs) for time series classification of medical data.
| null |
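A toy sketch of the multiple-shooting idea (forward Euler segments and a Jacobi-style sweep; a simplification under stated assumptions, not the paper's solver): the initial value problem is split into segments whose boundary values are updated simultaneously until the matching conditions hold, so the per-segment integrations can run in parallel.

```python
import numpy as np

def segment_flow(b, t0, t1, f, steps=20):
    """Integrate dx/dt = f(x) from t0 to t1 with forward Euler (one segment)."""
    x, dt = b.copy(), (t1 - t0) / steps
    for _ in range(steps):
        x = x + dt * f(x)
    return x

def multiple_shooting(b0, t_grid, f, sweeps=30):
    """Jacobi-style fixed-point iteration on the matching conditions
    B[i+1] = flow(B[i]); every segment can be integrated in parallel."""
    n = len(t_grid) - 1
    B = np.tile(b0, (n + 1, 1)).astype(float)      # guesses at segment boundaries
    for _ in range(sweeps):
        ends = np.stack([segment_flow(B[i], t_grid[i], t_grid[i + 1], f)
                         for i in range(n)])        # independent per-segment integrations
        B[1:] = ends                                # update all boundary guesses at once
    return B

f = lambda x: -x                                    # simple linear ODE dx/dt = -x
B = multiple_shooting(np.array([1.0]), np.linspace(0.0, 2.0, 5), f)
```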
Global-aware Beam Search for Neural Abstractive Summarization
|
https://papers.nips.cc/paper_files/paper/2021/hash/89d4402dc03d3b7318bbac10203034ab-Abstract.html
|
Ye Ma, Zixun Lan, Lu Zong, Kaizhu Huang
|
https://papers.nips.cc/paper_files/paper/2021/hash/89d4402dc03d3b7318bbac10203034ab-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12888-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/89d4402dc03d3b7318bbac10203034ab-Paper.pdf
|
https://openreview.net/forum?id=NvdzzasFiGr
|
https://papers.nips.cc/paper_files/paper/2021/file/89d4402dc03d3b7318bbac10203034ab-Supplemental.pdf
|
This study develops a calibrated beam-based algorithm with awareness of the global attention distribution for neural abstractive summarization, aiming to address the local optimality problem of the original beam search in a rigorous way. Specifically, a novel global protocol is proposed based on the attention distribution to stipulate how a globally optimal hypothesis should attend to the source. A global scoring mechanism is then developed to regulate beam search to generate summaries in a near-globally optimal fashion. This novel design enjoys a distinctive property, i.e., the global attention distribution can be predicted before inference, enabling step-wise improvements on the beam search through the global scoring mechanism. Extensive experiments on nine datasets show that the global (attention)-aware inference significantly improves state-of-the-art summarization models even using empirical hyper-parameters. The algorithm also proves robust, as it continues to generate meaningful text even with corrupted attention distributions. The codes and a comprehensive set of examples are available.
| null |
DROID-SLAM: Deep Visual SLAM for Monocular, Stereo, and RGB-D Cameras
|
https://papers.nips.cc/paper_files/paper/2021/hash/89fcd07f20b6785b92134bd6c1d0fa42-Abstract.html
|
Zachary Teed, Jia Deng
|
https://papers.nips.cc/paper_files/paper/2021/hash/89fcd07f20b6785b92134bd6c1d0fa42-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12889-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/89fcd07f20b6785b92134bd6c1d0fa42-Paper.pdf
|
https://openreview.net/forum?id=ZBfUo_dr4H
|
https://papers.nips.cc/paper_files/paper/2021/file/89fcd07f20b6785b92134bd6c1d0fa42-Supplemental.pdf
|
We introduce DROID-SLAM, a new deep learning based SLAM system. DROID-SLAM consists of recurrent iterative updates of camera pose and pixelwise depth through a Dense Bundle Adjustment layer. DROID-SLAM is accurate, achieving large improvements over prior work, and robust, suffering from substantially fewer catastrophic failures. Despite training on monocular video, it can leverage stereo or RGB-D video to achieve improved performance at test time. The URL to our open source code is https://github.com/princeton-vl/DROID-SLAM.
| null |
Few-Shot Object Detection via Association and DIscrimination
|
https://papers.nips.cc/paper_files/paper/2021/hash/8a1e808b55fde9455cb3d8857ed88389-Abstract.html
|
Yuhang Cao, Jiaqi Wang, Ying Jin, Tong Wu, Kai Chen, Ziwei Liu, Dahua Lin
|
https://papers.nips.cc/paper_files/paper/2021/hash/8a1e808b55fde9455cb3d8857ed88389-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12890-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8a1e808b55fde9455cb3d8857ed88389-Paper.pdf
|
https://openreview.net/forum?id=XjIm8hOCxTm
|
https://papers.nips.cc/paper_files/paper/2021/file/8a1e808b55fde9455cb3d8857ed88389-Supplemental.pdf
|
Object detection has achieved substantial progress in the last decade. However, detecting novel classes with only a few samples remains challenging, since deep learning under a low-data regime usually leads to a degraded feature space. Existing works employ a holistic fine-tuning paradigm to tackle this problem, where the model is first pre-trained on all base classes with abundant samples, and then it is used to carve the novel class feature space. Nonetheless, this paradigm is still imperfect. During fine-tuning, a novel class may implicitly leverage the knowledge of multiple base classes to construct its feature space, which induces a scattered feature space, hence violating the inter-class separability. To overcome these obstacles, we propose a two-step fine-tuning framework, Few-shot object detection via Association and DIscrimination (FADI), which builds up a discriminative feature space for each novel class with two integral steps. 1) In the association step, in contrast to implicitly leveraging multiple base classes, we construct a compact novel class feature space via explicitly imitating a specific base class feature space. Specifically, we associate each novel class with a base class according to their semantic similarity. After that, the feature space of a novel class can readily imitate the well-trained feature space of the associated base class. 2) In the discrimination step, to ensure the separability between the novel classes and associated base classes, we disentangle the classification branches for base and novel classes. To further enlarge the inter-class separability between all classes, a set-specialized margin loss is imposed. Extensive experiments on standard Pascal VOC and MS-COCO datasets demonstrate that FADI achieves new state-of-the-art performance, significantly improving the baseline in any shot/split by +18.7. Notably, the advantage of FADI is most pronounced in extremely few-shot scenarios (e.g., 1- and 3-shot).
| null |
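A minimal sketch of the association step, under the assumption that class semantics are given as embedding vectors (the inputs here are hypothetical): each novel class is assigned the base class with the highest cosine similarity.

```python
import numpy as np

def associate_novel_to_base(novel_embs, base_embs):
    """Assign each novel class to its most semantically similar base class.

    novel_embs: (n_novel, d) class-name embeddings of novel classes
    base_embs:  (n_base, d)  class-name embeddings of base classes
    """
    novel = novel_embs / np.linalg.norm(novel_embs, axis=1, keepdims=True)
    base = base_embs / np.linalg.norm(base_embs, axis=1, keepdims=True)
    sim = novel @ base.T                  # cosine similarity matrix
    return sim.argmax(axis=1)             # index of the associated base class

rng = np.random.default_rng(0)
assoc = associate_novel_to_base(rng.normal(size=(3, 50)), rng.normal(size=(10, 50)))
```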
Neural Dubber: Dubbing for Videos According to Scripts
|
https://papers.nips.cc/paper_files/paper/2021/hash/8a9c8ac001d3ef9e4ce39b1177295e03-Abstract.html
|
Chenxu Hu, Qiao Tian, Tingle Li, Wang Yuping, Yuxuan Wang, Hang Zhao
|
https://papers.nips.cc/paper_files/paper/2021/hash/8a9c8ac001d3ef9e4ce39b1177295e03-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12891-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8a9c8ac001d3ef9e4ce39b1177295e03-Paper.pdf
|
https://openreview.net/forum?id=_H7TNRQQeH8
|
https://papers.nips.cc/paper_files/paper/2021/file/8a9c8ac001d3ef9e4ce39b1177295e03-Supplemental.zip
|
Dubbing is a post-production process of re-recording actors’ dialogues, which is extensively used in filmmaking and video production. It is usually performed manually by professional voice actors who read lines with proper prosody, and in synchronization with the pre-recorded videos. In this work, we propose Neural Dubber, the first neural network model to solve a novel automatic video dubbing (AVD) task: synthesizing human speech synchronized with the given video from the text. Neural Dubber is a multi-modal text-to-speech (TTS) model that utilizes the lip movement in the video to control the prosody of the generated speech. Furthermore, an image-based speaker embedding (ISE) module is developed for the multi-speaker setting, which enables Neural Dubber to generate speech with a reasonable timbre according to the speaker’s face. Experiments on the chemistry lecture single-speaker dataset and LRS2 multi-speaker dataset show that Neural Dubber can generate speech audios on par with state-of-the-art TTS models in terms of speech quality. Most importantly, both qualitative and quantitative evaluations show that Neural Dubber can control the prosody of synthesized speech by the video, and generate high-fidelity speech temporally synchronized with the video.
| null |
Neural Bootstrapper
|
https://papers.nips.cc/paper_files/paper/2021/hash/8abfe8ac9ec214d68541fcb888c0b4c3-Abstract.html
|
Minsuk Shin, Hyungjoo Cho, Hyun-seok Min, Sungbin Lim
|
https://papers.nips.cc/paper_files/paper/2021/hash/8abfe8ac9ec214d68541fcb888c0b4c3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12892-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8abfe8ac9ec214d68541fcb888c0b4c3-Paper.pdf
|
https://openreview.net/forum?id=Hk2oOy4GJlH
|
https://papers.nips.cc/paper_files/paper/2021/file/8abfe8ac9ec214d68541fcb888c0b4c3-Supplemental.pdf
|
Bootstrapping has been a primary tool for ensemble and uncertainty quantification in machine learning and statistics. However, due to its reliance on repeated training and resampling, bootstrapping deep neural networks is computationally burdensome; hence it is difficult to apply in practice to uncertainty estimation and related tasks. To overcome this computational bottleneck, we propose a novel approach called Neural Bootstrapper (NeuBoots), which learns to generate bootstrapped neural networks through single model training. NeuBoots injects the bootstrap weights into the high-level feature layers of the backbone network and outputs the bootstrapped predictions of the target, without additional parameters or repeated computations from scratch. We apply NeuBoots to various machine learning tasks related to uncertainty quantification, including prediction calibration in image classification and semantic segmentation, active learning, and detection of out-of-distribution samples. Our empirical results show that NeuBoots outperforms other bagging-based methods at a much lower computational cost without losing the validity of bootstrapping.
| null |
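A rough, hypothetical sketch of the weight-injection idea (block count, sampling scheme, and wiring are simplifications and not the actual NeuBoots design): bootstrap weights are sampled per forward pass and concatenated to high-level features, so a single trained network can emit many bootstrapped predictions.

```python
import torch
import torch.nn as nn

class BootstrappedHead(nn.Module):
    """Prediction head conditioned on sampled bootstrap weights (illustrative sketch)."""
    def __init__(self, feat_dim, num_blocks, num_classes):
        super().__init__()
        self.fc = nn.Linear(feat_dim + num_blocks, num_classes)

    def forward(self, feats, boot_weights):
        # boot_weights: (batch, num_blocks), one sampled weight vector per forward pass
        return self.fc(torch.cat([feats, boot_weights], dim=1))

def sample_bootstrap_weights(batch, num_blocks):
    # multinomial resampling counts over blocks, shared across the batch
    idx = torch.multinomial(torch.full((num_blocks,), 1.0 / num_blocks),
                            num_blocks, replacement=True)
    w = torch.bincount(idx, minlength=num_blocks).float()
    return w.unsqueeze(0).expand(batch, -1)

head = BootstrappedHead(feat_dim=128, num_blocks=16, num_classes=10)
logits = head(torch.randn(4, 128), sample_bootstrap_weights(4, 16))
```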
An Axiomatic Theory of Provably-Fair Welfare-Centric Machine Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/8b0bb3eff8c1e5bf7f206125959921d7-Abstract.html
|
Cyrus Cousins
|
https://papers.nips.cc/paper_files/paper/2021/hash/8b0bb3eff8c1e5bf7f206125959921d7-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12893-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8b0bb3eff8c1e5bf7f206125959921d7-Paper.pdf
|
https://openreview.net/forum?id=ZdyLIxqgz29
|
https://papers.nips.cc/paper_files/paper/2021/file/8b0bb3eff8c1e5bf7f206125959921d7-Supplemental.pdf
|
We address an inherent difficulty in welfare-theoretic fair machine learning (ML), by proposing an equivalently-axiomatically justified alternative setting, and studying the resulting computational and statistical learning questions. Welfare metrics quantify overall wellbeing across a population of groups, and welfare-based objectives and constraints have recently been proposed to incentivize fair ML methods to satisfy their diverse needs. However, many ML problems are cast as loss minimization tasks, rather than utility maximization, and thus require nontrivial modeling to construct utility functions. We define a complementary metric, termed malfare, measuring overall societal harm, with axiomatic justification via the standard axioms of cardinal welfare, and cast fair ML as malfare minimization over the risk values (expected losses) of each group. Surprisingly, the axioms of cardinal welfare (malfare) dictate that this is not equivalent to simply defining utility as negative loss and maximizing welfare. Building upon these concepts, we define fair-PAC learning, where a fair-PAC learner is an algorithm that learns an ε-δ malfare-optimal model with bounded sample complexity, for any data distribution and (axiomatically justified) malfare concept. Finally, we show conditions under which many standard PAC-learners may be converted to fair-PAC learners, which places fair-PAC learning on firm theoretical ground, as it yields statistical — and in some cases computational — efficiency guarantees for many well-studied ML models. Fair-PAC learning is also practically relevant, as it democratizes fair ML by providing concrete training algorithms with rigorous generalization guarantees.
| null |
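Since a malfare function satisfying the cardinal axioms takes the form of a weighted power mean of per-group risks, here is a small sketch of evaluating it (the group risks and weights below are made-up numbers): p = 1 recovers the average risk, and larger p places more weight on the worst-off group.

```python
import numpy as np

def power_mean_malfare(group_risks, group_weights, p):
    """Weighted p-power-mean malfare of per-group risks (p >= 1)."""
    r = np.asarray(group_risks, dtype=float)
    w = np.asarray(group_weights, dtype=float)
    w = w / w.sum()
    if np.isinf(p):
        return r.max()                 # egalitarian limit: worst-off group's risk
    return (w @ r ** p) ** (1.0 / p)

risks = [0.10, 0.25, 0.40]             # expected loss of each group (illustrative)
weights = [0.5, 0.3, 0.2]
print(power_mean_malfare(risks, weights, p=2))
```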
HSVA: Hierarchical Semantic-Visual Adaptation for Zero-Shot Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/8b0d268963dd0cfb808aac48a549829f-Abstract.html
|
Shiming Chen, Guosen Xie, Yang Liu, Qinmu Peng, Baigui Sun, Hao Li, Xinge You, Ling Shao
|
https://papers.nips.cc/paper_files/paper/2021/hash/8b0d268963dd0cfb808aac48a549829f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12894-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8b0d268963dd0cfb808aac48a549829f-Paper.pdf
|
https://openreview.net/forum?id=JCodE7xOcc5
|
https://papers.nips.cc/paper_files/paper/2021/file/8b0d268963dd0cfb808aac48a549829f-Supplemental.pdf
|
Zero-shot learning (ZSL) tackles the unseen class recognition problem, transferring semantic knowledge from seen classes to unseen ones. Typically, to guarantee desirable knowledge transfer, a common (latent) space is adopted for associating the visual and semantic domains in ZSL. However, existing common space learning methods align the semantic and visual domains by merely mitigating distribution disagreement through one-step adaptation. This strategy is usually ineffective due to the heterogeneous nature of the feature representations in the two domains, which intrinsically contain both distribution and structure variations. To address this and advance ZSL, we propose a novel hierarchical semantic-visual adaptation (HSVA) framework. Specifically, HSVA aligns the semantic and visual domains by adopting a hierarchical two-step adaptation, i.e., structure adaptation and distribution adaptation. In the structure adaptation step, we take two task-specific encoders to encode the source data (visual domain) and the target data (semantic domain) into a structure-aligned common space. To this end, a supervised adversarial discrepancy (SAD) module is proposed to adversarially minimize the discrepancy between the predictions of two task-specific classifiers, thus making the visual and semantic feature manifolds more closely aligned. In the distribution adaptation step, we directly minimize the Wasserstein distance between the latent multivariate Gaussian distributions to align the visual and semantic distributions using a common encoder. Finally, the structure and distribution adaptation are derived in a unified framework under two partially-aligned variational autoencoders. Extensive experiments on four benchmark datasets demonstrate that HSVA achieves superior performance on both conventional and generalized ZSL. The code is available at \url{https://github.com/shiming-chen/HSVA}.
| null |
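For the distribution adaptation step, a minimal sketch assuming diagonal-Gaussian (VAE-style) latent codes, where the squared 2-Wasserstein distance between the visual and semantic latent distributions has a closed form (names and shapes are illustrative):

```python
import torch

def w2_diag_gaussians(mu1, logvar1, mu2, logvar2):
    """Squared 2-Wasserstein distance between diagonal Gaussians
    N(mu1, diag(exp(logvar1))) and N(mu2, diag(exp(logvar2)))."""
    s1 = torch.exp(0.5 * logvar1)
    s2 = torch.exp(0.5 * logvar2)
    return ((mu1 - mu2) ** 2).sum(-1) + ((s1 - s2) ** 2).sum(-1)

# toy usage: align latent codes produced by visual and semantic encoders
mu_v, lv_v = torch.randn(8, 64), torch.zeros(8, 64)
mu_s, lv_s = torch.randn(8, 64), torch.zeros(8, 64)
loss = w2_diag_gaussians(mu_v, lv_v, mu_s, lv_s).mean()
```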
Higher Order Kernel Mean Embeddings to Capture Filtrations of Stochastic Processes
|
https://papers.nips.cc/paper_files/paper/2021/hash/8b2dfbe0c1d43f9537dae01e96458ff1-Abstract.html
|
Cristopher Salvi, Maud Lemercier, Chong Liu, Blanka Horvath, Theodoros Damoulas, Terry Lyons
|
https://papers.nips.cc/paper_files/paper/2021/hash/8b2dfbe0c1d43f9537dae01e96458ff1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12895-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8b2dfbe0c1d43f9537dae01e96458ff1-Paper.pdf
|
https://openreview.net/forum?id=CtugaUzfYw
|
https://papers.nips.cc/paper_files/paper/2021/file/8b2dfbe0c1d43f9537dae01e96458ff1-Supplemental.pdf
|
Stochastic processes are random variables with values in some space of paths. However, reducing a stochastic process to a path-valued random variable ignores its filtration, i.e. the flow of information carried by the process through time. By conditioning the process on its filtration, we introduce a family of higher order kernel mean embeddings (KMEs) that generalizes the notion of KME to capture additional information related to the filtration. We derive empirical estimators for the associated higher order maximum mean discrepancies (MMDs) and prove consistency. We then construct a filtration-sensitive kernel two-sample test able to capture information that gets missed by the standard MMD test. In addition, leveraging our higher order MMDs we construct a family of universal kernels on stochastic processes that allows to solve real-world calibration and optimal stopping problems in quantitative finance (such as the pricing of American options) via classical kernel-based regression methods. Finally, adapting existing tests for conditional independence to the case of stochastic processes, we design a causal-discovery algorithm to recover the causal graph of structural dependencies among interacting bodies solely from observations of their multidimensional trajectories.
| null |
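For reference, the standard (first-order) unbiased MMD² estimator with an RBF kernel on vector samples, which the higher-order, filtration-aware estimators generalize; this is a sketch of the classical quantity, not the paper's estimator.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased estimate of MMD^2 between samples X (m, d) and Y (n, d)."""
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = rbf_kernel(X, X, sigma), rbf_kernel(Y, Y, sigma), rbf_kernel(X, Y, sigma)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * Kxy.mean()

rng = np.random.default_rng(0)
print(mmd2_unbiased(rng.normal(0, 1, (200, 2)), rng.normal(0.5, 1, (200, 2))))
```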
Low-Rank Subspaces in GANs
|
https://papers.nips.cc/paper_files/paper/2021/hash/8b4066554730ddfaa0266346bdc1b202-Abstract.html
|
Jiapeng Zhu, Ruili Feng, Yujun Shen, Deli Zhao, Zheng-Jun Zha, Jingren Zhou, Qifeng Chen
|
https://papers.nips.cc/paper_files/paper/2021/hash/8b4066554730ddfaa0266346bdc1b202-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12896-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8b4066554730ddfaa0266346bdc1b202-Paper.pdf
|
https://openreview.net/forum?id=Xp5BhDKdil5
|
https://papers.nips.cc/paper_files/paper/2021/file/8b4066554730ddfaa0266346bdc1b202-Supplemental.pdf
|
The latent space of a Generative Adversarial Network (GAN) has been shown to encode rich semantics within some subspaces. To identify these subspaces, researchers typically analyze the statistical information from a collection of synthesized data, and the identified subspaces tend to control image attributes globally (i.e., manipulating an attribute causes the change of an entire image). By contrast, this work introduces low-rank subspaces that enable more precise control of GAN generation. Concretely, given an arbitrary image and a region of interest (e.g., eyes of face images), we manage to relate the latent space to the image region with the Jacobian matrix and then use low-rank factorization to discover steerable latent subspaces. Our approach, aptly called LowRankGAN, has three distinguishable strengths. First, compared to analytic algorithms in prior work, our low-rank factorization of Jacobians is able to find the low-dimensional representation of the attribute manifold, making image editing more precise and controllable. Second, low-rank factorization naturally yields a null space of attributes such that moving the latent code within it only affects the outer region of interest. Therefore, local image editing can be simply achieved by projecting an attribute vector into the null space without relying on a spatial mask as existing methods do. Third, our method can robustly work with a local region from one image for analysis yet generalizes well to other images, making it much easier to use in practice. Extensive experiments on state-of-the-art GAN models (including StyleGAN2 and BigGAN) trained on various datasets demonstrate the effectiveness of our LowRankGAN.
| null |
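A small sketch of the null-space mechanism (shapes and the toy Jacobian are illustrative): an attribute direction is projected onto the approximate null space of the Jacobian of the region that should stay fixed, so moving along the projected direction leaves that region unchanged to first order.

```python
import numpy as np

def project_to_null_space(J, attr_dir, rank):
    """Project a latent direction onto the (approximate) null space of the
    region Jacobian J (pixels_in_region x latent_dim)."""
    _, _, Vt = np.linalg.svd(J, full_matrices=True)
    null_basis = Vt[rank:]                          # right singular vectors with small singular values
    return null_basis.T @ (null_basis @ attr_dir)   # component of attr_dir inside the null space

rng = np.random.default_rng(0)
J = rng.normal(size=(40, 5)) @ rng.normal(size=(5, 512))   # a rank-5 region Jacobian (toy)
direction = project_to_null_space(J, rng.normal(size=512), rank=5)
```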
Neural Symplectic Form: Learning Hamiltonian Equations on General Coordinate Systems
|
https://papers.nips.cc/paper_files/paper/2021/hash/8b519f198dd26772e3e82874826b04aa-Abstract.html
|
Yuhan Chen, Takashi Matsubara, Takaharu Yaguchi
|
https://papers.nips.cc/paper_files/paper/2021/hash/8b519f198dd26772e3e82874826b04aa-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12897-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8b519f198dd26772e3e82874826b04aa-Paper.pdf
|
https://openreview.net/forum?id=4h4oqp-ATxb
|
https://papers.nips.cc/paper_files/paper/2021/file/8b519f198dd26772e3e82874826b04aa-Supplemental.pdf
|
In recent years, substantial research on the methods for learning Hamiltonian equations has been conducted. Although these approaches are very promising, the commonly used representation of the Hamilton equation uses the generalized momenta, which are generally unknown. Therefore, the training data must be represented in this unknown coordinate system, and this causes difficulty in applying the model to real data. Meanwhile, Hamiltonian equations also have a coordinate-free expression that is expressed by using the symplectic 2-form. In this study, we propose a model that learns the symplectic form from data using neural networks, thereby providing a method for learning Hamiltonian equations from data represented in general coordinate systems, which are not limited to the generalized coordinates and the generalized momenta. Consequently, the proposed method is capable not only of modeling target equations of both Hamiltonian and Lagrangian formalisms but also of extracting unknown Hamiltonian structures hidden in the data. For example, many polynomial ordinary differential equations such as the Lotka-Volterra equation are known to admit non-trivial Hamiltonian structures, and our numerical experiments show that such structures can be certainly learned from data. Technically, each symplectic 2-form is associated with a skew-symmetric matrix, but not all skew-symmetric matrices define the symplectic 2-form. In the proposed method, using the fact that symplectic 2-forms are derived as the exterior derivative of certain differential 1-forms, we model the differential 1-form by neural networks, thereby improving the efficiency of learning.
| null |
Sample-Efficient Reinforcement Learning Is Feasible for Linearly Realizable MDPs with Limited Revisiting
|
https://papers.nips.cc/paper_files/paper/2021/hash/8b5700012be65c9da25f49408d959ca0-Abstract.html
|
Gen Li, Yuxin Chen, Yuejie Chi, Yuantao Gu, Yuting Wei
|
https://papers.nips.cc/paper_files/paper/2021/hash/8b5700012be65c9da25f49408d959ca0-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12898-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8b5700012be65c9da25f49408d959ca0-Paper.pdf
|
https://openreview.net/forum?id=2e_VWzcU4j7
|
https://papers.nips.cc/paper_files/paper/2021/file/8b5700012be65c9da25f49408d959ca0-Supplemental.pdf
|
Low-complexity models such as linear function representation play a pivotal role in enabling sample-efficient reinforcement learning (RL). The current paper pertains to a scenario with value-based linear representation, which postulates linear realizability of the optimal Q-function (also called the ``linear $Q^{\star}$ problem''). While linear realizability alone does not allow for sample-efficient solutions in general, the presence of a large sub-optimality gap is a potential game changer, depending on the sampling mechanism in use. Informally, sample efficiency is achievable with a large sub-optimality gap when a generative model is available, but is unfortunately infeasible when we turn to standard online RL settings. We make progress towards understanding this linear $Q^{\star}$ problem by investigating a new sampling protocol, which draws samples in an online/exploratory fashion but allows one to backtrack and revisit previous states. This protocol is more flexible than the standard online RL setting, while being practically relevant and far more restrictive than the generative model. We develop an algorithm tailored to this setting, achieving a sample complexity that scales polynomially with the feature dimension, the horizon, and the inverse sub-optimality gap, but not the size of the state/action space. Our findings underscore the fundamental interplay between sampling protocols and low-complexity function representation in RL.
| null |
Self-Paced Contrastive Learning for Semi-supervised Medical Image Segmentation with Meta-labels
|
https://papers.nips.cc/paper_files/paper/2021/hash/8b5c8441a8ff8e151b191c53c1842a38-Abstract.html
|
Jizong Peng, Ping Wang, Christian Desrosiers, Marco Pedersoli
|
https://papers.nips.cc/paper_files/paper/2021/hash/8b5c8441a8ff8e151b191c53c1842a38-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12899-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8b5c8441a8ff8e151b191c53c1842a38-Paper.pdf
|
https://openreview.net/forum?id=8Uui49rOfc
|
https://papers.nips.cc/paper_files/paper/2021/file/8b5c8441a8ff8e151b191c53c1842a38-Supplemental.pdf
|
The contrastive pre-training of a recognition model on a large dataset of unlabeled data often boosts the model’s performance on downstream tasks like image classification. However, in domains such as medical imaging, collecting unlabeled data can be challenging and expensive. In this work, we consider the task of medical image segmentation and adapt contrastive learning with meta-label annotations to scenarios where no additional unlabeled data is available. Meta-labels, such as the location of a 2D slice in a 3D MRI scan, often come for free during the acquisition process. We use these meta-labels to pre-train the image encoder, as well as in a semi-supervised learning step that leverages a reduced set of annotated data. A self-paced learning strategy exploiting the weak annotations is proposed to further help the learning process and discriminate useful labels from noise. Results on five medical image segmentation datasets show that our approach: i) highly boosts the performance of a model trained on a few scans, ii) outperforms previous contrastive and semi-supervised approaches, and iii) reaches close to the performance of a model trained on the full data.
| null |
Reverse engineering recurrent neural networks with Jacobian switching linear dynamical systems
|
https://papers.nips.cc/paper_files/paper/2021/hash/8b77b4b5156dc11dec152c6c71481565-Abstract.html
|
Jimmy Smith, Scott Linderman, David Sussillo
|
https://papers.nips.cc/paper_files/paper/2021/hash/8b77b4b5156dc11dec152c6c71481565-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12900-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8b77b4b5156dc11dec152c6c71481565-Paper.pdf
|
https://openreview.net/forum?id=od-00q5T2vB
|
https://papers.nips.cc/paper_files/paper/2021/file/8b77b4b5156dc11dec152c6c71481565-Supplemental.pdf
|
Recurrent neural networks (RNNs) are powerful models for processing time-series data, but it remains challenging to understand how they function. Improving this understanding is of substantial interest to both the machine learning and neuroscience communities. The framework of reverse engineering a trained RNN by linearizing around its fixed points has provided insight, but the approach has significant challenges. These include difficulty choosing which fixed point to expand around when studying RNN dynamics and error accumulation when reconstructing the nonlinear dynamics with the linearized dynamics. We present a new model that overcomes these limitations by co-training an RNN with a novel switching linear dynamical system (SLDS) formulation. A first-order Taylor series expansion of the co-trained RNN and an auxiliary function trained to pick out the RNN's fixed points govern the SLDS dynamics. The results are a trained SLDS variant that closely approximates the RNN, an auxiliary function that can produce a fixed point for each point in state-space, and a trained nonlinear RNN whose dynamics have been regularized such that its first-order terms perform the computation, if possible. This model removes the post-training fixed point optimization and allows us to unambiguously study the learned dynamics of the SLDS at any point in state-space. It also generalizes SLDS models to continuous manifolds of switching points while sharing parameters across switches. We validate the utility of the model on two synthetic tasks relevant to previous work reverse engineering RNNs. We then show that our model can be used as a drop-in in more complex architectures, such as LFADS, and apply this LFADS hybrid to analyze single-trial spiking activity from the motor system of a non-human primate.
| null |
Learning-Augmented Dynamic Power Management with Multiple States via New Ski Rental Bounds
|
https://papers.nips.cc/paper_files/paper/2021/hash/8b8388180314a337c9aa3c5aa8e2f37a-Abstract.html
|
Antonios Antoniadis, Christian Coester, Marek Elias, Adam Polak, Bertrand Simon
|
https://papers.nips.cc/paper_files/paper/2021/hash/8b8388180314a337c9aa3c5aa8e2f37a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12901-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8b8388180314a337c9aa3c5aa8e2f37a-Paper.pdf
|
https://openreview.net/forum?id=xkQ4MhLv52X
|
https://papers.nips.cc/paper_files/paper/2021/file/8b8388180314a337c9aa3c5aa8e2f37a-Supplemental.zip
|
We study the online problem of minimizing power consumption in systems with multiple power-saving states. During idle periods of unknown lengths, an algorithm has to choose between power-saving states of different energy consumption and wake-up costs. We develop a learning-augmented online algorithm that makes decisions based on (potentially inaccurate) predicted lengths of the idle periods. The algorithm's performance is near-optimal when predictions are accurate and degrades gracefully with increasing prediction error, with a worst-case guarantee almost identical to the optimal classical online algorithm for the problem. A key ingredient in our approach is a new algorithm for the online ski-rental problem in the learning augmented setting with tight dependence on the prediction error. We support our theoretical findings with experiments.
| null |
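For intuition, a sketch of the classical two-state (rent-or-buy) learning-augmented ski rental rule with a trust parameter λ; this is the standard prediction-based threshold algorithm, not the paper's multi-state algorithm.

```python
import math

def ski_rental_with_prediction(buy_cost, predicted_days, lam, actual_days):
    """Deterministic learning-augmented ski rental (rent 1/day or buy for buy_cost).

    lam in (0, 1] trades off consistency (trust the prediction) vs. robustness.
    Returns the total cost paid when the true season length is actual_days.
    """
    if predicted_days >= buy_cost:
        buy_day = math.ceil(lam * buy_cost)       # prediction says "long": buy early
    else:
        buy_day = math.ceil(buy_cost / lam)       # prediction says "short": buy late
    if actual_days >= buy_day:
        return (buy_day - 1) + buy_cost           # rent until buy_day, then buy
    return actual_days                            # season ended while still renting

print(ski_rental_with_prediction(buy_cost=10, predicted_days=30, lam=0.5, actual_days=30))
```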
Learning Equivariant Energy Based Models with Equivariant Stein Variational Gradient Descent
|
https://papers.nips.cc/paper_files/paper/2021/hash/8b9e7ab295e87570551db122a04c6f7c-Abstract.html
|
Priyank Jaini, Lars Holdijk, Max Welling
|
https://papers.nips.cc/paper_files/paper/2021/hash/8b9e7ab295e87570551db122a04c6f7c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12902-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8b9e7ab295e87570551db122a04c6f7c-Paper.pdf
|
https://openreview.net/forum?id=syu7m80S_CA
|
https://papers.nips.cc/paper_files/paper/2021/file/8b9e7ab295e87570551db122a04c6f7c-Supplemental.pdf
|
We focus on the problem of efficient sampling and learning of probability densities by incorporating symmetries in probabilistic models. We first introduce Equivariant Stein Variational Gradient Descent algorithm -- an equivariant sampling method based on Stein's identity for sampling from densities with symmetries. Equivariant SVGD explicitly incorporates symmetry information in a density through equivariant kernels which makes the resultant sampler efficient both in terms of sample complexity and the quality of generated samples. Subsequently, we define equivariant energy based models to model invariant densities that are learned using contrastive divergence. By utilizing our equivariant SVGD for training equivariant EBMs, we propose new ways of improving and scaling up training of energy based models. We apply these equivariant energy models for modelling joint densities in regression and classification tasks for image datasets, many-body particle systems and molecular structure generation.
| null |
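For reference, one update of vanilla (non-equivariant) SVGD with an RBF kernel, the baseline that the equivariant kernel construction modifies; a sketch with illustrative names and a Gaussian target.

```python
import numpy as np

def svgd_step(particles, score_fn, step=0.1, h=1.0):
    """One vanilla SVGD update.

    particles: (n, d) current particle positions
    score_fn:  maps (n, d) -> (n, d), the score grad_x log p(x)
    """
    n = particles.shape[0]
    diff = particles[:, None, :] - particles[None, :, :]     # (n, n, d): x_i - x_j
    K = np.exp(-(diff ** 2).sum(-1) / (2 * h ** 2))          # RBF kernel matrix
    grad_K = diff * K[..., None] / h ** 2                     # grad_{x_j} k(x_j, x_i)
    phi = (K @ score_fn(particles) + grad_K.sum(axis=1)) / n  # driving + repulsive terms
    return particles + step * phi

score = lambda x: -x                                          # score of a standard Gaussian
pts = np.random.default_rng(0).normal(size=(50, 2)) * 3
for _ in range(200):
    pts = svgd_step(pts, score)
```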
Information Directed Sampling for Sparse Linear Bandits
|
https://papers.nips.cc/paper_files/paper/2021/hash/8ba6c657b03fc7c8dd4dff8e45defcd2-Abstract.html
|
Botao Hao, Tor Lattimore, Wei Deng
|
https://papers.nips.cc/paper_files/paper/2021/hash/8ba6c657b03fc7c8dd4dff8e45defcd2-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12903-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8ba6c657b03fc7c8dd4dff8e45defcd2-Paper.pdf
|
https://openreview.net/forum?id=syIj5ggwCYJ
|
https://papers.nips.cc/paper_files/paper/2021/file/8ba6c657b03fc7c8dd4dff8e45defcd2-Supplemental.pdf
|
Stochastic sparse linear bandits offer a practical model for high-dimensional online decision-making problems and have a rich information-regret structure. In this work we explore the use of information-directed sampling (IDS), which naturally balances the information-regret trade-off. We develop a class of information-theoretic Bayesian regret bounds that nearly match existing lower bounds on a variety of problem instances, demonstrating the adaptivity of IDS. To efficiently implement sparse IDS, we propose an empirical Bayesian approach for sparse posterior sampling using a spike-and-slab Gaussian-Laplace prior. Numerical results demonstrate significant regret reductions by sparse IDS relative to several baselines.
| null |
Linear Convergence of Gradient Methods for Estimating Structured Transition Matrices in High-dimensional Vector Autoregressive Models
|
https://papers.nips.cc/paper_files/paper/2021/hash/8be627bc543fd91be4d7f26ee86f5ee9-Abstract.html
|
Xiao Lv, Wei Cui, Yulong Liu
|
https://papers.nips.cc/paper_files/paper/2021/hash/8be627bc543fd91be4d7f26ee86f5ee9-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12904-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8be627bc543fd91be4d7f26ee86f5ee9-Paper.pdf
|
https://openreview.net/forum?id=dsevgAwUH4m
|
https://papers.nips.cc/paper_files/paper/2021/file/8be627bc543fd91be4d7f26ee86f5ee9-Supplemental.pdf
|
In this paper, we present non-asymptotic optimization guarantees of gradient descent methods for estimating structured transition matrices in high-dimensional vector autoregressive (VAR) models. We adopt the projected gradient descent (PGD) for single-structured transition matrices and the alternating projected gradient descent (AltPGD) for superposition-structured ones. Our analysis demonstrates that both gradient algorithms converge linearly to the statistical error even though the strong convexity of the objective function is absent under the high-dimensional settings. Moreover, our result is sharp (up to a constant factor) in the sense of matching the phase transition theory of the corresponding model with independent samples. To the best of our knowledge, this analysis constitutes the first non-asymptotic optimization guarantee of a linear rate for regularized estimation in high-dimensional VAR models. Numerical results are provided to support our theoretical analysis.
| null |
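A toy sketch of PGD for a single-structured (sparse) VAR(1) transition matrix, where the projection is hard thresholding onto s-sparse matrices; the step size, iteration count, and synthetic data are illustrative, not the paper's setup.

```python
import numpy as np

def hard_threshold(A, s):
    """Keep the s largest-magnitude entries of A (projection onto s-sparse matrices)."""
    flat = np.abs(A).ravel()
    if s < flat.size:
        cutoff = np.partition(flat, -s)[-s]
        A = np.where(np.abs(A) >= cutoff, A, 0.0)
    return A

def pgd_sparse_var(X, s, step=0.05, iters=300):
    """Projected gradient descent for a sparse VAR(1) transition matrix.

    X: (T, d) observed time series; model x_{t+1} ≈ A x_t with s-sparse A.
    """
    Xt, Xn = X[:-1], X[1:]                         # inputs and one-step-ahead targets
    d = X.shape[1]
    A = np.zeros((d, d))
    for _ in range(iters):
        grad = (A @ Xt.T - Xn.T) @ Xt / len(Xt)    # gradient of the least-squares loss
        A = hard_threshold(A - step * grad, s)
    return A

rng = np.random.default_rng(0)
A_true = np.zeros((5, 5))
A_true[0, 1], A_true[2, 3] = 0.8, -0.5
X = np.zeros((400, 5))
for t in range(399):
    X[t + 1] = A_true @ X[t] + 0.1 * rng.normal(size=5)
A_hat = pgd_sparse_var(X, s=2)
```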
Large-Scale Unsupervised Object Discovery
|
https://papers.nips.cc/paper_files/paper/2021/hash/8bf1211fd4b7b94528899de0a43b9fb3-Abstract.html
|
Van Huy Vo, Elena Sizikova, Cordelia Schmid, Patrick Pérez, Jean Ponce
|
https://papers.nips.cc/paper_files/paper/2021/hash/8bf1211fd4b7b94528899de0a43b9fb3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12905-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8bf1211fd4b7b94528899de0a43b9fb3-Paper.pdf
|
https://openreview.net/forum?id=ys6L_NWchCp
|
https://papers.nips.cc/paper_files/paper/2021/file/8bf1211fd4b7b94528899de0a43b9fb3-Supplemental.pdf
|
Existing approaches to unsupervised object discovery (UOD) do not scale up to large datasets without approximations that compromise their performance. We propose a novel formulation of UOD as a ranking problem, amenable to the arsenal of distributed methods available for eigenvalue problems and link analysis. Through the use of self-supervised features, we also demonstrate the first effective fully unsupervised pipeline for UOD. Extensive experiments on COCO~\cite{Lin2014cocodataset} and OpenImages~\cite{openimages} show that, in the single-object discovery setting where a single prominent object is sought in each image, the proposed LOD (Large-scale Object Discovery) approach is on par with, or better than the state of the art for medium-scale datasets (up to 120K images), and over 37\% better than the only other algorithms capable of scaling up to 1.7M images. In the multi-object discovery setting where multiple objects are sought in each image, the proposed LOD is over 14\% better in average precision (AP) than all other methods for datasets ranging from 20K to 1.7M images. Using self-supervised features, we also show that the proposed method obtains state-of-the-art UOD performance on OpenImages.
| null |
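As an illustration of casting discovery as a ranking/eigenvalue problem (not the paper's exact formulation), region proposals can be scored by power iteration on a similarity graph, PageRank-style; features, damping, and sizes below are illustrative.

```python
import numpy as np

def rank_regions(similarity, damping=0.85, iters=100):
    """Rank region proposals by the stationary score of a random walk on the
    similarity graph (a PageRank-style eigenvalue problem solved by power iteration)."""
    S = np.maximum(similarity, 0)
    P = S / S.sum(axis=1, keepdims=True)               # row-stochastic transition matrix
    n = S.shape[0]
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (P.T @ r)    # propagate scores along edges
    return r

rng = np.random.default_rng(0)
feats = rng.normal(size=(30, 64))                      # features of 30 candidate regions
scores = rank_regions(feats @ feats.T)
top = scores.argsort()[::-1][:5]                        # the most "object-like" regions
```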
Sparse Steerable Convolutions: An Efficient Learning of SE(3)-Equivariant Features for Estimation and Tracking of Object Poses in 3D Space
|
https://papers.nips.cc/paper_files/paper/2021/hash/8c1b6fa97c4288a4514365198566c6fa-Abstract.html
|
Jiehong Lin, Hongyang Li, Ke Chen, Jiangbo Lu, Kui Jia
|
https://papers.nips.cc/paper_files/paper/2021/hash/8c1b6fa97c4288a4514365198566c6fa-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12906-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8c1b6fa97c4288a4514365198566c6fa-Paper.pdf
|
https://openreview.net/forum?id=Fa-w-10s7YQ
| null |
As a basic component of SE(3)-equivariant deep feature learning, steerable convolution has recently demonstrated its advantages for 3D semantic analysis. The advantages are, however, brought by expensive computations on dense, volumetric data, which prevent its practical use for efficient processing of 3D data that are inherently sparse. In this paper, we propose a novel design of Sparse Steerable Convolution (SS-Conv) to address the shortcoming; SS-Conv greatly accelerates steerable convolution with sparse tensors, while strictly preserving the property of SE(3)-equivariance. Based on SS-Conv, we propose a general pipeline for precise estimation of object poses, wherein a key design is a Feature-Steering module that takes the full advantage of SE(3)-equivariance and is able to conduct an efficient pose refinement. To verify our designs, we conduct thorough experiments on three tasks of 3D object semantic analysis, including instance-level 6D pose estimation, category-level 6D pose and size estimation, and category-level 6D pose tracking. Our proposed pipeline based on SS-Conv outperforms existing methods on almost all the metrics evaluated by the three tasks. Ablation studies also show the superiority of our SS-Conv over alternative convolutions in terms of both accuracy and efficiency. Our code is released publicly at https://github.com/Gorilla-Lab-SCUT/SS-Conv.
| null |
Noisy Adaptation Generates Lévy Flights in Attractor Neural Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/8c249675aea6c3cbd91661bbae767ff1-Abstract.html
|
Xingsi Dong, Tianhao Chu, Tiejun Huang, Zilong Ji, Si Wu
|
https://papers.nips.cc/paper_files/paper/2021/hash/8c249675aea6c3cbd91661bbae767ff1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12907-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8c249675aea6c3cbd91661bbae767ff1-Paper.pdf
|
https://openreview.net/forum?id=Pvzqji3at5
|
https://papers.nips.cc/paper_files/paper/2021/file/8c249675aea6c3cbd91661bbae767ff1-Supplemental.pdf
|
Lévy flights describe a special class of random walks whose step sizes satisfy a power-law tailed distribution. As an efficient searching strategy in unknown environments, Lévy flights are widely observed in animal foraging behaviors. Recent studies further showed that human cognitive functions also exhibit the characteristics of Lévy flights. Despite being a general phenomenon, the neural mechanism at the circuit level for generating Lévy flights remains unresolved. Here, we investigate how Lévy flights can be achieved in attractor neural networks. To elucidate the underlying mechanism clearly, we first study continuous attractor neural networks (CANNs), and find that noisy neural adaptation, exemplified by spike frequency adaptation (SFA) in this work, can generate Lévy flights representing transitions of the network state in the attractor space. Specifically, the strength of SFA defines a travelling wave boundary, below which the network state displays local Brownian motion, and above which the network state displays long-jump motion. Noise in neural adaptation causes the network state to intermittently switch between these two motion modes, manifesting the characteristics of Lévy flights. We further extend the study to a general attractor neural network, and demonstrate that our model can explain the Lévy-flight phenomenon observed during free memory retrieval in humans. We hope that this study will give us insight into the neural mechanism for optimal information processing in the brain.
| null |
On Linear Stability of SGD and Input-Smoothness of Neural Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/8c26d2fad09dc76f3ff36b6ea752b0e1-Abstract.html
|
Chao Ma, Lexing Ying
|
https://papers.nips.cc/paper_files/paper/2021/hash/8c26d2fad09dc76f3ff36b6ea752b0e1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12908-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8c26d2fad09dc76f3ff36b6ea752b0e1-Paper.pdf
|
https://openreview.net/forum?id=yAvCV6NwWQ
|
https://papers.nips.cc/paper_files/paper/2021/file/8c26d2fad09dc76f3ff36b6ea752b0e1-Supplemental.pdf
|
The multiplicative structure of parameters and input data in the first layer of neural networks is explored to build a connection between the landscape of the loss function with respect to parameters and the landscape of the model function with respect to input data. Through this connection, it is shown that flat minima regularize the gradient of the model function, which explains the good generalization performance of flat minima. Then, we go beyond flatness and consider high-order moments of the gradient noise, and show that Stochastic Gradient Descent (SGD) tends to impose constraints on these moments by a linear stability analysis of SGD around global minima. Together with the multiplicative structure, we identify the Sobolev regularization effect of SGD, i.e., SGD regularizes the Sobolev seminorms of the model function with respect to the input data. Finally, bounds for generalization error and adversarial robustness are provided for solutions found by SGD under assumptions on the data distribution.
| null |
Joint inference and input optimization in equilibrium networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/8c3c27ac7d298331a1bdfd0a5e8703d3-Abstract.html
|
Swaminathan Gurumurthy, Shaojie Bai, Zachary Manchester, J. Zico Kolter
|
https://papers.nips.cc/paper_files/paper/2021/hash/8c3c27ac7d298331a1bdfd0a5e8703d3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12909-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8c3c27ac7d298331a1bdfd0a5e8703d3-Paper.pdf
|
https://openreview.net/forum?id=RgH0gGH9B64
|
https://papers.nips.cc/paper_files/paper/2021/file/8c3c27ac7d298331a1bdfd0a5e8703d3-Supplemental.pdf
|
Many tasks in deep learning involve optimizing over the inputs to a network to minimize or maximize some objective; examples include optimization over latent spaces in a generative model to match a target image, or adversarially perturbing an input to worsen classifier performance. Performing such optimization, however, is traditionally quite costly, as it involves a complete forward and backward pass through the network for each gradient step. In a separate line of work, a recent thread of research has developed the deep equilibrium (DEQ) model, a class of models that foregoes traditional network depth and instead computes the output of a network by finding the fixed point of a single nonlinear layer. In this paper, we show that there is a natural synergy between these two settings. Although, naively using DEQs for these optimization problems is expensive (owing to the time needed to compute a fixed point for each gradient step), we can leverage the fact that gradient-based optimization can itself be cast as a fixed point iteration to substantially improve the overall speed. That is, we simultaneously both solve for the DEQ fixed point and optimize over network inputs, all within a single "augmented" DEQ model that jointly encodes both the original network and the optimization process. Indeed, the procedure is fast enough that it allows us to efficiently train DEQ models for tasks traditionally relying on an "inner" optimization loop. We demonstrate this strategy on various tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training and gradient based meta-learning.
| null |
A unified framework for bandit multiple testing
|
https://papers.nips.cc/paper_files/paper/2021/hash/8c460674cd61bf189e62b4da4bd9d7c1-Abstract.html
|
Ziyu Xu, Ruodu Wang, Aaditya Ramdas
|
https://papers.nips.cc/paper_files/paper/2021/hash/8c460674cd61bf189e62b4da4bd9d7c1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12910-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8c460674cd61bf189e62b4da4bd9d7c1-Paper.pdf
|
https://openreview.net/forum?id=0fPgXqP1Mq
|
https://papers.nips.cc/paper_files/paper/2021/file/8c460674cd61bf189e62b4da4bd9d7c1-Supplemental.pdf
|
In bandit multiple hypothesis testing, each arm corresponds to a different null hypothesis that we wish to test, and the goal is to design adaptive algorithms that correctly identify a large set of interesting arms (true discoveries), while only mistakenly identifying a few uninteresting ones (false discoveries). One common metric in non-bandit multiple testing is the false discovery rate (FDR). We propose a unified, modular framework for bandit FDR control that emphasizes the decoupling of exploration and summarization of evidence. We utilize the powerful martingale-based concept of "e-processes" to ensure FDR control for arbitrary composite nulls, exploration rules and stopping times in generic problem settings. In particular, valid FDR control holds even if the reward distributions of the arms could be dependent, multiple arms may be queried simultaneously, and multiple (cooperating or competing) agents may be querying arms, covering combinatorial semi-bandit type settings as well. Prior work has considered in great detail the setting where each arm's reward distribution is independent and sub-Gaussian, and a single arm is queried at each step. Our framework recovers matching sample complexity guarantees in this special case, and performs comparably or better in practice. For other settings, sample complexities will depend on the finer details of the problem (composite nulls being tested, exploration algorithm, data dependence structure, stopping rule) and we do not explore these; our contribution is to show that the FDR guarantee is clean and entirely agnostic to these details.
| null |
Recovering Latent Causal Factor for Generalization to Distributional Shifts
|
https://papers.nips.cc/paper_files/paper/2021/hash/8c6744c9d42ec2cb9e8885b54ff744d0-Abstract.html
|
Xinwei Sun, Botong Wu, Xiangyu Zheng, Chang Liu, Wei Chen, Tao Qin, Tie-Yan Liu
|
https://papers.nips.cc/paper_files/paper/2021/hash/8c6744c9d42ec2cb9e8885b54ff744d0-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12911-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8c6744c9d42ec2cb9e8885b54ff744d0-Paper.pdf
|
https://openreview.net/forum?id=go3GvM7aFD
|
https://papers.nips.cc/paper_files/paper/2021/file/8c6744c9d42ec2cb9e8885b54ff744d0-Supplemental.pdf
|
Distributional shifts between training and target domains may degrade the prediction accuracy of learned models, mainly because these models often learn features that possess only correlation rather than causal relation with the output. Such a correlation, which is known as ``spurious correlation'' statistically, is domain-dependent hence may fail to generalize to unseen domains. To avoid such a spurious correlation, we propose \textbf{La}tent \textbf{C}ausal \textbf{I}nvariance \textbf{M}odels (LaCIM) that specifies the underlying causal structure of the data and the source of distributional shifts, guiding us to pursue only causal factor for prediction. Specifically, the LaCIM introduces a pair of correlated latent factors: (a) causal factor and (b) others, while the extent of this correlation is governed by a domain variable that characterizes the distributional shifts. On the basis of this, we prove that the distribution of observed variables conditioning on latent variables is shift-invariant. Equipped with such an invariance, we prove that the causal factor can be recovered without mixing information from others, which induces the ground-truth predicting mechanism. We propose a Variational-Bayesian-based method to learn this invariance for prediction. The utility of our approach is verified by improved generalization to distributional shifts on various real-world data. Our code is freely available at \url{https://github.com/wubotong/LaCIM}.
| null |
Graph Differentiable Architecture Search with Structure Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/8c9f32e03aeb2e3000825c8c875c4edd-Abstract.html
|
Yijian Qin, Xin Wang, Zeyang Zhang, Wenwu Zhu
|
https://papers.nips.cc/paper_files/paper/2021/hash/8c9f32e03aeb2e3000825c8c875c4edd-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12912-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8c9f32e03aeb2e3000825c8c875c4edd-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=kSv_AMdehh3
|
https://papers.nips.cc/paper_files/paper/2021/file/8c9f32e03aeb2e3000825c8c875c4edd-Supplemental.pdf
|
Discovering ideal Graph Neural Network (GNN) architectures for different tasks is labor-intensive and time-consuming. To save human effort, Neural Architecture Search (NAS) has recently been used to automatically discover adequate GNN architectures for certain tasks, achieving competitive or even better performance compared with manually designed architectures. However, existing works utilizing NAS to search GNN structures fail to answer the question: how is NAS able to select the desired GNN architectures? In this paper, we investigate this question for the first time. We conduct a measurement study with experiments and find that gradient-based NAS methods tend to select proper architectures based on the usefulness of different types of information with respect to the target task. Our explorations further show that gradient-based NAS also suffers from noise hidden in the graph, leading it to search for suboptimal GNN architectures. Based on our findings, we propose a Graph differentiable Architecture Search model with Structure Optimization (GASSO), which allows differentiable search of the architecture with gradient descent and is able to discover graph neural architectures with better performance by employing graph structure learning as a denoising process within the search procedure. The proposed GASSO model is capable of simultaneously searching the optimal architecture and adaptively adjusting the graph structure by jointly optimizing graph architecture search and graph structure denoising. Extensive experiments on real-world graph datasets demonstrate that our proposed GASSO model achieves state-of-the-art performance compared with existing baselines.
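A minimal PyTorch sketch of the gradient-based relaxation that such differentiable graph NAS methods search over: a softmax-weighted mixture of candidate message-passing operators whose architecture logits are trained jointly with the op parameters and later discretized. GASSO's structure-learning (graph denoising) component is omitted, and the candidate ops, class name, and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedGraphOp(nn.Module):
    """DARTS-style mixture over candidate graph operators on node features x
    with a (dense, normalized) adjacency matrix adj_norm."""
    def __init__(self, dim):
        super().__init__()
        self.lin_gcn = nn.Linear(dim, dim)
        self.lin_mlp = nn.Linear(dim, dim)
        self.alpha = nn.Parameter(torch.zeros(3))   # one logit per candidate op

    def forward(self, x, adj_norm):
        ops = [
            adj_norm @ self.lin_gcn(x),   # neighborhood aggregation (GCN-like)
            self.lin_mlp(x),              # node-wise transform, ignores the graph
            x,                            # identity / skip connection
        ]
        w = F.softmax(self.alpha, dim=0)  # architecture weights, trained by gradient descent
        return sum(wi * op for wi, op in zip(w, ops))
```

After search, the op with the largest `alpha` entry would typically be kept; structure learning would additionally treat `adj_norm` as a learnable, denoised quantity.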
| null |
Designing Counterfactual Generators using Deep Model Inversion
|
https://papers.nips.cc/paper_files/paper/2021/hash/8ca01ea920679a0fe3728441494041b9-Abstract.html
|
Jayaraman Thiagarajan, Vivek Sivaraman Narayanaswamy, Deepta Rajan, Jia Liang, Akshay Chaudhari, Andreas Spanias
|
https://papers.nips.cc/paper_files/paper/2021/hash/8ca01ea920679a0fe3728441494041b9-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12913-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8ca01ea920679a0fe3728441494041b9-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=iHisgL7PFj2
|
https://papers.nips.cc/paper_files/paper/2021/file/8ca01ea920679a0fe3728441494041b9-Supplemental.pdf
|
Explanation techniques that synthesize small, interpretable changes to a given image while producing desired changes in the model prediction have become popular for introspecting black-box models. Commonly referred to as counterfactuals, the synthesized explanations are required to contain discernible changes (for easy interpretability) while also being realistic (consistent with the data manifold). In this paper, we focus on the case where we have access only to the trained deep classifier and not the actual training data. While the problem of inverting deep models to synthesize images from the training distribution has been explored, our goal is to develop a deep inversion approach to generate counterfactual explanations for a given query image. Despite their effectiveness in conditional image synthesis, we show that existing deep inversion methods are insufficient for producing meaningful counterfactuals. We propose DISC (Deep Inversion for Synthesizing Counterfactuals), which improves upon deep inversion by utilizing (a) stronger image priors, (b) a novel manifold consistency objective, and (c) a progressive optimization strategy. We find that, in addition to producing visually meaningful explanations, the counterfactuals from DISC are effective at learning classifier decision boundaries and are robust to unknown test-time corruptions.
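A rough sketch of the input-optimization loop underlying this family of counterfactual generators, assuming a frozen PyTorch classifier: the query image is perturbed until the model predicts the target class, while a proximity term and a total-variation prior keep the edit small and smooth. DISC's stronger image priors, manifold-consistency objective, and progressive schedule are omitted; all names and coefficients are illustrative.

```python
import torch
import torch.nn.functional as F

def total_variation(img):
    # simple anisotropic TV prior encouraging piecewise-smooth edits
    dh = (img[..., 1:, :] - img[..., :-1, :]).abs().mean()
    dw = (img[..., :, 1:] - img[..., :, :-1]).abs().mean()
    return dh + dw

def counterfactual(model, x_query, target_class, steps=200, lr=0.05,
                   lam_prox=1.0, lam_tv=0.1):
    """Gradient-based counterfactual for a frozen classifier `model`.

    Optimizes a copy of the query image (shape (1, C, H, W)) so the model
    predicts `target_class`, while staying close to the query and smooth.
    """
    model.eval()
    x = x_query.clone().detach().requires_grad_(True)
    target = torch.tensor([target_class])
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (F.cross_entropy(model(x), target)
                + lam_prox * (x - x_query).pow(2).mean()
                + lam_tv * total_variation(x))
        loss.backward()
        opt.step()
    return x.detach()
```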
| null |
A Faster Maximum Cardinality Matching Algorithm with Applications in Machine Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/8ca696ca160520b1cf5a569b4be525e8-Abstract.html
|
Nathaniel Lahn, Sharath Raghvendra, Jiacheng Ye
|
https://papers.nips.cc/paper_files/paper/2021/hash/8ca696ca160520b1cf5a569b4be525e8-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12914-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8ca696ca160520b1cf5a569b4be525e8-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=FM8auLVlRMo
|
https://papers.nips.cc/paper_files/paper/2021/file/8ca696ca160520b1cf5a569b4be525e8-Supplemental.pdf
|
Maximum cardinality bipartite matching is an important graph optimization problem with several applications. For instance, maximum cardinality matching in a $\delta$-disc graph can be used in the computation of the bottleneck matching as well as the $\infty$-Wasserstein and the Lévy-Prokhorov distances between probability distributions. For any point sets $A, B \subset \mathbb{R}^2$, the $\delta$-disc graph is a bipartite graph formed by connecting every pair of points $(a,b) \in A\times B$ by an edge if the Euclidean distance between them is at most $\delta$. Using the classical Hopcroft-Karp algorithm, a maximum-cardinality matching on any $\delta$-disc graph can be found in $\tilde{O}(n^{3/2})$ time.~\footnote{We use $\tilde{O}(\cdot)$ to suppress poly-logarithmic terms in the complexity.} In this paper, we present a simplification of a recent algorithm (Lahn and Raghvendra, JoCG 2021) for the maximum cardinality matching problem and describe how a maximum cardinality matching in a $\delta$-disc graph can be computed asymptotically faster than $O(n^{3/2})$ time for any moderately dense point set. As applications, we show that if $A$ and $B$ are point sets drawn uniformly at random from a unit square, an exact bottleneck matching can be computed in $\tilde{O}(n^{4/3})$ time. By contrast, experiments suggest that the Hopcroft-Karp algorithm takes roughly $\Theta(n^{3/2})$ time in this case. This translates to substantial improvements in execution time for larger inputs.
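A small sketch of the $\delta$-disc construction and its use for bottleneck matching, using SciPy's Hopcroft-Karp-based `maximum_bipartite_matching`; this reflects the $\tilde{O}(n^{3/2})$ baseline discussed above rather than the faster algorithm proposed in the paper, and it assumes $|A| = |B|$ so a perfect matching exists at the largest candidate $\delta$.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching
from scipy.spatial.distance import cdist

def delta_disc_matching_size(A, B, delta):
    """Size of a maximum-cardinality matching in the delta-disc graph of A, B.

    An edge connects a in A to b in B iff ||a - b|| <= delta.
    """
    adj = csr_matrix((cdist(A, B) <= delta).astype(np.uint8))  # biadjacency matrix
    match = maximum_bipartite_matching(adj, perm_type='column')
    return int((match != -1).sum())

def bottleneck_distance(A, B):
    """Smallest delta admitting a perfect matching, found by bisection over
    the candidate pairwise distances (assumes len(A) == len(B))."""
    dists = np.unique(cdist(A, B))
    lo, hi, n = 0, len(dists) - 1, len(A)
    while lo < hi:
        mid = (lo + hi) // 2
        if delta_disc_matching_size(A, B, dists[mid]) == n:
            hi = mid
        else:
            lo = mid + 1
    return dists[lo]
```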
| null |
Dynamic population-based meta-learning for multi-agent communication with natural language
|
https://papers.nips.cc/paper_files/paper/2021/hash/8caa38721906c1a0bb95c80fab33a893-Abstract.html
|
Abhinav Gupta, Marc Lanctot, Angeliki Lazaridou
|
https://papers.nips.cc/paper_files/paper/2021/hash/8caa38721906c1a0bb95c80fab33a893-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12915-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8caa38721906c1a0bb95c80fab33a893-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=NFurmj-rIWe
|
https://papers.nips.cc/paper_files/paper/2021/file/8caa38721906c1a0bb95c80fab33a893-Supplemental.pdf
|
In this work, our goal is to train agents that can coordinate with seen partners, unseen partners, and humans in a multi-agent communication environment involving natural language. Previous work using a single set of agents has shown great progress in generalizing to known partners; however, it struggles when coordinating with unfamiliar agents. To mitigate this, recent work explored the use of population-based approaches, where multiple agents interact with each other with the goal of learning more generic protocols. These methods, while able to produce good coordination between unseen partners, only do so for simple languages, and thus fail to adapt to human partners using natural language. We attribute this to the use of static populations and instead propose a dynamic population-based meta-learning approach that builds such a population in an iterative manner. We perform a holistic evaluation of our method on two different referential games, and show that our agents outperform all prior work when communicating with seen partners and humans. Furthermore, we analyze the natural language generation skills of our agents, finding that they also outperform strong baselines. Finally, we test the robustness of our agents when communicating with out-of-population agents and carefully test the importance of each component of our method through ablation studies.
| null |
Adversarial Neuron Pruning Purifies Backdoored Deep Models
|
https://papers.nips.cc/paper_files/paper/2021/hash/8cbe9ce23f42628c98f80fa0fac8b19a-Abstract.html
|
Dongxian Wu, Yisen Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/8cbe9ce23f42628c98f80fa0fac8b19a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12916-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8cbe9ce23f42628c98f80fa0fac8b19a-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=4cEapqXfP30
|
https://papers.nips.cc/paper_files/paper/2021/file/8cbe9ce23f42628c98f80fa0fac8b19a-Supplemental.pdf
|
As deep neural networks (DNNs) grow larger, their requirements for computational resources become huge, which makes outsourced training increasingly popular. Training on a third-party platform, however, introduces the risk that a malicious trainer will return backdoored DNNs, which behave normally on clean samples but output targeted misclassifications whenever a trigger appears at test time. Without any knowledge of the trigger, it is difficult to distinguish or recover benign DNNs from backdoored ones. In this paper, we first identify an unexpected sensitivity of backdoored DNNs: they collapse much more easily, and tend to predict the target label on clean samples, when their neurons are adversarially perturbed. Based on these observations, we propose a novel model-repairing method, termed Adversarial Neuron Pruning (ANP), which prunes some sensitive neurons to purify the injected backdoor. Experiments show that, even with only an extremely small amount of clean data (e.g., 1%), ANP effectively removes the injected backdoor without causing obvious performance degradation.
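A heavily simplified PyTorch sketch of the kind of neuron-sensitivity signal such pruning builds on, assuming a `Conv2d` layer: a per-channel scale is attached by a forward hook, its gradient on a clean batch scores how sensitive the loss is to perturbing each neuron, and the most sensitive channels are masked. The actual ANP method optimizes continuous masks under adversarial weight perturbation; the hard top-k mask here is only a stand-in.

```python
import torch
import torch.nn.functional as F

def channel_sensitivity(model, layer, x_clean, y_clean):
    """Score each output channel of `layer` by how much a small scaling of its
    activations would change the clean-data loss (one backward pass)."""
    m = torch.ones(layer.out_channels, device=x_clean.device, requires_grad=True)
    handle = layer.register_forward_hook(
        lambda mod, inp, out: out * m.view(1, -1, 1, 1))
    loss = F.cross_entropy(model(x_clean), y_clean)
    loss.backward()
    handle.remove()
    # A large positive gradient means shrinking the channel would lower the
    # clean loss, i.e. the channel is behaving suspiciously.
    return m.grad.detach()

def prune_most_sensitive(layer, sensitivity, k):
    """Zero out the k most suspicious channels (a hard-mask stand-in for the
    soft masks that ANP optimizes adversarially)."""
    idx = torch.topk(sensitivity, k).indices
    with torch.no_grad():
        layer.weight[idx] = 0
        if layer.bias is not None:
            layer.bias[idx] = 0
```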
| null |
Towards Robust and Reliable Algorithmic Recourse
|
https://papers.nips.cc/paper_files/paper/2021/hash/8ccfb1140664a5fa63177fb6e07352f0-Abstract.html
|
Sohini Upadhyay, Shalmali Joshi, Himabindu Lakkaraju
|
https://papers.nips.cc/paper_files/paper/2021/hash/8ccfb1140664a5fa63177fb6e07352f0-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12917-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8ccfb1140664a5fa63177fb6e07352f0-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=AuVKs6JmBtY
|
https://papers.nips.cc/paper_files/paper/2021/file/8ccfb1140664a5fa63177fb6e07352f0-Supplemental.pdf
|
As predictive models are increasingly deployed in high-stakes decision making (e.g., loan approvals), there has been growing interest in post-hoc techniques that provide recourse to affected individuals. These techniques generate recourses under the assumption that the underlying predictive model does not change. However, in practice, models are often updated regularly for a variety of reasons (e.g., dataset shifts), thereby rendering previously prescribed recourses ineffective. To address this problem, we propose a novel framework, RObust Algorithmic Recourse (ROAR), that leverages adversarial training to find recourses that are robust to model shifts. To the best of our knowledge, this work proposes the first solution to this critical problem. We also carry out a theoretical analysis that underscores the importance of constructing recourses that are robust to model shifts: 1) we quantify the probability of invalidation for recourses generated without accounting for model shifts; 2) we prove that the additional cost incurred by the robust recourses output by our framework is bounded. Experimental evaluation on multiple synthetic and real-world datasets demonstrates the efficacy of the proposed framework.
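To make the robustness objective concrete, here is a toy NumPy sketch assuming a linear scorer f(x) = w·x + b, for which the worst case over weight shifts with ||Δw||₂ ≤ ε has the closed form w·x + b − ε||x||. ROAR itself handles general differentiable models via adversarial training; the step sizes, penalty, and numbers below are illustrative.

```python
import numpy as np

def robust_recourse_linear(x, w, b, eps=0.1, lam=0.5, lr=0.05, steps=500):
    """Find a recourse x_cf whose positive prediction survives any weight
    shift with ||dw||_2 <= eps, by ascending the worst-case (robust) score
    while penalizing squared distance to the original instance."""
    x_cf = np.asarray(x, dtype=float).copy()
    x0 = x_cf.copy()
    for _ in range(steps):
        norm = np.linalg.norm(x_cf) + 1e-12
        robust_score = w @ x_cf + b - eps * norm
        if robust_score > 0:                 # valid under every allowed shift
            break
        grad_score = w - eps * x_cf / norm   # gradient of the robust score
        grad_cost = 2.0 * (x_cf - x0)        # gradient of the recourse cost
        x_cf += lr * (grad_score - lam * grad_cost)
    return x_cf

# toy usage: a denied applicant (score < 0) nudged to a robustly approved one
w, b = np.array([1.0, 2.0]), -3.0
print(robust_recourse_linear(np.array([0.5, 0.5]), w, b, eps=0.2))
```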
| null |
Neural Rule-Execution Tracking Machine For Transformer-Based Text Generation
|
https://papers.nips.cc/paper_files/paper/2021/hash/8ce241e1ed84937ee48322b170b9b18c-Abstract.html
|
Yufei Wang, Can Xu, Huang Hu, Chongyang Tao, Stephen Wan, Mark Dras, Mark Johnson, Daxin Jiang
|
https://papers.nips.cc/paper_files/paper/2021/hash/8ce241e1ed84937ee48322b170b9b18c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12918-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8ce241e1ed84937ee48322b170b9b18c-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=vecLnc6g6iQ
|
https://papers.nips.cc/paper_files/paper/2021/file/8ce241e1ed84937ee48322b170b9b18c-Supplemental.pdf
|
Sequence-to-Sequence (Seq2Seq) neural text generation models, especially pre-trained ones (e.g., BART and T5), have exhibited compelling performance on various natural language generation tasks. However, the black-box nature of these models limits their application in tasks where specific rules (e.g., controllable constraints, prior knowledge) need to be executed. Previous works either design specific model structures (e.g., the Copy Mechanism corresponding to the rule "the generated output should include certain words in the source input") or implement specialized inference algorithms (e.g., Constrained Beam Search) to execute particular rules during text generation. These methods require careful case-by-case design and struggle to support multiple rules concurrently. In this paper, we propose a novel module named Neural Rule-Execution Tracking Machine (NRETM) that can be integrated into various transformer-based generators to leverage multiple rules simultaneously, guiding the neural generation model toward superior generation performance in a unified and scalable way. Extensive experiments on several benchmarks verify the effectiveness of our proposed model in both controllable and general text generation tasks.
| null |
Scalable Online Planning via Reinforcement Learning Fine-Tuning
|
https://papers.nips.cc/paper_files/paper/2021/hash/8ce8b102d40392a688f8c04b3cd6cae0-Abstract.html
|
Arnaud Fickinger, Hengyuan Hu, Brandon Amos, Stuart Russell, Noam Brown
|
https://papers.nips.cc/paper_files/paper/2021/hash/8ce8b102d40392a688f8c04b3cd6cae0-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12919-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8ce8b102d40392a688f8c04b3cd6cae0-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=D0xGh031I9m
|
https://papers.nips.cc/paper_files/paper/2021/file/8ce8b102d40392a688f8c04b3cd6cae0-Supplemental.zip
|
Lookahead search has been a critical component of recent AI successes, such as in the games of chess, go, and poker. However, the search methods used in these games, and in many other settings, are tabular. Tabular search methods do not scale well with the size of the search space, and this problem is exacerbated by stochasticity and partial observability. In this work we replace tabular search with online model-based fine-tuning of a policy neural network via reinforcement learning, and show that this approach outperforms state-of-the-art search algorithms in benchmark settings. In particular, we use our search algorithm to achieve a new state-of-the-art result in self-play Hanabi, and show the generality of our algorithm by also showing that it outperforms tabular search in the Atari game Ms. Pacman.
| null |
Adversarial Regression with Doubly Non-negative Weighting Matrices
|
https://papers.nips.cc/paper_files/paper/2021/hash/8cfef17bee2b7a75a3ce09d40b497f6b-Abstract.html
|
Tam Le, Truyen Nguyen, Makoto Yamada, Jose Blanchet, Viet Anh Nguyen
|
https://papers.nips.cc/paper_files/paper/2021/hash/8cfef17bee2b7a75a3ce09d40b497f6b-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12920-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8cfef17bee2b7a75a3ce09d40b497f6b-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=npvEdo4Ftb1
|
https://papers.nips.cc/paper_files/paper/2021/file/8cfef17bee2b7a75a3ce09d40b497f6b-Supplemental.pdf
|
Many machine learning tasks that involve predicting an output response can be solved by training a weighted regression model. Unfortunately, the predictive power of this type of model may severely deteriorate under low sample sizes or covariate perturbations. Reweighting the training samples has emerged as an effective mitigation strategy for these problems. In this paper, we propose a novel and coherent scheme for kernel-reweighted regression by reparametrizing the sample weights using a doubly non-negative matrix. When the weighting matrix is confined in an uncertainty set defined using either the log-determinant divergence or the Bures-Wasserstein distance, we show that the adversarially reweighted estimate can be computed efficiently using first-order methods. Numerical experiments show that our reweighting strategy delivers promising results on numerous datasets.
| null |
Learned Robust PCA: A Scalable Deep Unfolding Approach for High-Dimensional Outlier Detection
|
https://papers.nips.cc/paper_files/paper/2021/hash/8d2355364e9a2ba1f82f975414937b43-Abstract.html
|
HanQin Cai, Jialin Liu, Wotao Yin
|
https://papers.nips.cc/paper_files/paper/2021/hash/8d2355364e9a2ba1f82f975414937b43-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12921-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8d2355364e9a2ba1f82f975414937b43-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=G7W2mriQLxf
|
https://papers.nips.cc/paper_files/paper/2021/file/8d2355364e9a2ba1f82f975414937b43-Supplemental.pdf
|
Robust principal component analysis (RPCA) is a critical tool in modern machine learning that detects outliers in the task of low-rank matrix reconstruction. In this paper, we propose a scalable and learnable non-convex approach for high-dimensional RPCA problems, which we call Learned Robust PCA (LRPCA). LRPCA is highly efficient, and its free parameters can be effectively learned via deep unfolding. Moreover, we extend deep unfolding from finite iterations to infinite iterations via a novel feedforward-recurrent-mixed neural network model. We establish a recovery guarantee for LRPCA under mild assumptions for RPCA. Numerical experiments show that LRPCA outperforms state-of-the-art RPCA algorithms, such as ScaledGD and AltProj, on both synthetic datasets and real-world applications.
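A plain NumPy sketch of the alternating RPCA iteration that deep unfolding parameterizes: soft-threshold the residual to estimate the outliers, then re-fit the low-rank part with a truncated SVD. In the learned variant the per-iteration thresholds become trained parameters and the SVD is replaced by cheaper factored updates; the fixed threshold and toy data below are illustrative.

```python
import numpy as np

def soft_threshold(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_alternating(M, rank, tau=0.1, iters=50):
    """Alternate between (i) a sparse outlier update by soft-thresholding the
    residual M - L and (ii) a rank-`rank` truncated-SVD update of L."""
    L = np.zeros_like(M)
    for _ in range(iters):
        S = soft_threshold(M - L, tau)
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * sig[:rank]) @ Vt[:rank]
    return L, S

# toy usage: recover a rank-5 matrix corrupted by sparse spikes
rng = np.random.default_rng(0)
L0 = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 100))
S0 = np.zeros((100, 100)); S0[rng.random((100, 100)) < 0.05] = 10.0
L_hat, S_hat = rpca_alternating(L0 + S0, rank=5, tau=1.0)
```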
| null |
Proxy-Normalizing Activations to Match Batch Normalization while Removing Batch Dependence
|
https://papers.nips.cc/paper_files/paper/2021/hash/8d2a5f7d4afa5d0530789d3066945330-Abstract.html
|
Antoine Labatie, Dominic Masters, Zach Eaton-Rosen, Carlo Luschi
|
https://papers.nips.cc/paper_files/paper/2021/hash/8d2a5f7d4afa5d0530789d3066945330-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12922-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8d2a5f7d4afa5d0530789d3066945330-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=pzmwfDLoANS
|
https://papers.nips.cc/paper_files/paper/2021/file/8d2a5f7d4afa5d0530789d3066945330-Supplemental.pdf
|
We investigate the reasons for the performance degradation incurred with batch-independent normalization. We find that the prototypical techniques of layer normalization and instance normalization both induce the appearance of failure modes in the neural network's pre-activations: (i) layer normalization induces a collapse towards channel-wise constant functions; (ii) instance normalization induces a lack of variability in instance statistics, symptomatic of an alteration of the expressivity. To alleviate failure mode (i) without aggravating failure mode (ii), we introduce the technique "Proxy Normalization" that normalizes post-activations using a proxy distribution. When combined with layer normalization or group normalization, this batch-independent normalization emulates batch normalization's behavior and consistently matches or exceeds its performance.
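One plausible reading of the proxy-normalization idea, sketched in PyTorch: post-activations are standardized using the mean and variance that the same affine transform and activation would produce on a proxy Gaussian input (estimated here by sampling). This is an assumption-laden illustration of the abstract's description, not the paper's exact construction; the block structure, activation, and sample count are all illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxyNormalizedBlock(nn.Module):
    """LayerNorm -> affine -> activation, followed by normalization of the
    post-activations with statistics computed on a proxy Gaussian variable
    pushed through the same affine transform and activation."""
    def __init__(self, dim, n_proxy=4096):
        super().__init__()
        self.ln = nn.LayerNorm(dim, elementwise_affine=False)
        self.gamma = nn.Parameter(torch.ones(dim))
        self.beta = nn.Parameter(torch.zeros(dim))
        self.register_buffer("proxy", torch.randn(n_proxy, dim))

    def forward(self, x):
        y = F.gelu(self.gamma * self.ln(x) + self.beta)
        # statistics of the activation under a standard-normal proxy input,
        # transformed by the same per-channel affine parameters
        proxy_y = F.gelu(self.gamma * self.proxy + self.beta)
        mean = proxy_y.mean(dim=0)
        std = proxy_y.std(dim=0) + 1e-5
        return (y - mean) / std
```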
| null |
Dynamic Bottleneck for Robust Self-Supervised Exploration
|
https://papers.nips.cc/paper_files/paper/2021/hash/8d3369c4c086f236fabf61d614a32818-Abstract.html
|
Chenjia Bai, Lingxiao Wang, Lei Han, Animesh Garg, Jianye Hao, Peng Liu, Zhaoran Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/8d3369c4c086f236fabf61d614a32818-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12923-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/8d3369c4c086f236fabf61d614a32818-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=-t6TeG3A6Do
|
https://papers.nips.cc/paper_files/paper/2021/file/8d3369c4c086f236fabf61d614a32818-Supplemental.pdf
|
Exploration methods based on pseudo-counts of transitions or curiosity about dynamics have achieved promising results in reinforcement learning with sparse rewards. However, such methods are usually sensitive to information that is irrelevant to the environment dynamics, e.g., white noise. To handle such dynamics-irrelevant information, we propose a Dynamic Bottleneck (DB) model, which attains a dynamics-relevant representation based on the information-bottleneck principle. Based on the DB model, we further propose DB-bonus, which encourages the agent to explore state-action pairs with high information gain. We establish theoretical connections between the proposed DB-bonus, the upper confidence bound (UCB) in the linear case, and the visitation count in the tabular case. We evaluate the proposed method on the Atari suite with dynamics-irrelevant noise. Our experiments show that exploration with the DB bonus outperforms several state-of-the-art exploration methods in noisy environments.
| null |