title | url | authors | detail_url | tags | Bibtex | Paper | Reviews And Public Comment » | Supplemental | abstract | Supplemental Errata
---|---|---|---|---|---|---|---|---|---|---|
Data-Efficient GAN Training Beyond (Just) Augmentations: A Lottery Ticket Perspective
|
https://papers.nips.cc/paper_files/paper/2021/hash/af4f00ca48321fb026865c5a1772dafd-Abstract.html
|
Tianlong Chen, Yu Cheng, Zhe Gan, Jingjing Liu, Zhangyang Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/af4f00ca48321fb026865c5a1772dafd-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13225-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/af4f00ca48321fb026865c5a1772dafd-Paper.pdf
|
https://openreview.net/forum?id=WQkGUUNsPu6
|
https://papers.nips.cc/paper_files/paper/2021/file/af4f00ca48321fb026865c5a1772dafd-Supplemental.pdf
|
Training generative adversarial networks (GANs) with limited real image data generally results in deteriorated performance and collapsed models. To conquer this challenge, we are inspired by the recent observation that one can discover independently trainable and highly sparse subnetworks (a.k.a. lottery tickets) from GANs. Treating this as an inductive prior, we suggest a brand-new angle towards data-efficient GAN training: first identifying the lottery ticket from the original GAN using the small training set of real images, and then focusing on training that sparse subnetwork by re-using the same set. We find our coordinated framework to offer orthogonal gains to existing real image data augmentation methods, and we additionally present a new feature-level augmentation that can be applied together with them. Comprehensive experiments endorse the effectiveness of our proposed framework, across various GAN architectures (SNGAN, BigGAN, and StyleGAN-V2) and diverse datasets (CIFAR-10, CIFAR-100, Tiny-ImageNet, ImageNet, and multiple few-shot generation datasets). Code is available at: https://github.com/VITA-Group/Ultra-Data-Efficient-GAN-Training.
| null |
When Are Solutions Connected in Deep Networks?
|
https://papers.nips.cc/paper_files/paper/2021/hash/af5baf594e9197b43c9f26f17b205e5b-Abstract.html
|
Quynh N. Nguyen, Pierre Bréchet, Marco Mondelli
|
https://papers.nips.cc/paper_files/paper/2021/hash/af5baf594e9197b43c9f26f17b205e5b-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13226-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/af5baf594e9197b43c9f26f17b205e5b-Paper.pdf
|
https://openreview.net/forum?id=GuTIBjOSIw8
|
https://papers.nips.cc/paper_files/paper/2021/file/af5baf594e9197b43c9f26f17b205e5b-Supplemental.pdf
|
The question of how and why the phenomenon of mode connectivity occurs in training deep neural networks has gained remarkable attention in the research community. From a theoretical perspective, two possible explanations have been proposed: (i) the loss function has connected sublevel sets, and (ii) the solutions found by stochastic gradient descent are dropout stable. While these explanations provide insights into the phenomenon, their assumptions are not always satisfied in practice. In particular, the first approach requires the network to have one layer with order of $N$ neurons ($N$ being the number of training samples), while the second one requires the loss to be almost invariant after removing half of the neurons at each layer (up to some rescaling of the remaining ones). In this work, we improve both conditions by exploiting the quality of the features at every intermediate layer together with a milder over-parameterization requirement. More specifically, we show that: (i) under generic assumptions on the features of intermediate layers, it suffices that the last two hidden layers have order of $\sqrt{N}$ neurons, and (ii) if subsets of features at each layer are linearly separable, then almost no over-parameterization is needed to show the connectivity. Our experiments confirm that the proposed condition ensures the connectivity of solutions found by stochastic gradient descent, even in settings where the previous requirements do not hold.
| null |
TOHAN: A One-step Approach towards Few-shot Hypothesis Adaptation
|
https://papers.nips.cc/paper_files/paper/2021/hash/af5d5ef24881f3c3049a7b9bfe74d58b-Abstract.html
|
Haoang Chi, Feng Liu, Wenjing Yang, Long Lan, Tongliang Liu, Bo Han, William Cheung, James Kwok
|
https://papers.nips.cc/paper_files/paper/2021/hash/af5d5ef24881f3c3049a7b9bfe74d58b-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13227-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/af5d5ef24881f3c3049a7b9bfe74d58b-Paper.pdf
|
https://openreview.net/forum?id=vrkQ07gp0kq
|
https://papers.nips.cc/paper_files/paper/2021/file/af5d5ef24881f3c3049a7b9bfe74d58b-Supplemental.pdf
|
In few-shot domain adaptation (FDA), classifiers for the target domain are trained with \emph{accessible} labeled data in the source domain (SD) and few labeled data in the target domain (TD). However, in the current era, data usually contain private information, e.g., data distributed on personal phones. Thus, the private data will be leaked if we directly access data in SD to train a target-domain classifier (required by FDA methods). In this paper, to prevent privacy leakage in SD, we consider a very challenging problem setting, where the classifier for the TD has to be trained using few labeled target data and a well-trained SD classifier, named few-shot hypothesis adaptation (FHA). In FHA, we cannot access data in SD; as a result, the private information in SD is well protected. To this end, we propose a target-oriented hypothesis adaptation network (TOHAN) to solve the FHA problem, where we generate highly-compatible unlabeled data (i.e., an intermediate domain) to help train a target-domain classifier. TOHAN maintains two deep networks simultaneously, in which one focuses on learning an intermediate domain and the other takes care of the intermediate-to-target distributional adaptation and the target-risk minimization. Experimental results show that TOHAN outperforms competitive baselines significantly.
| null |
Learning Graph Cellular Automata
|
https://papers.nips.cc/paper_files/paper/2021/hash/af87f7cdcda223c41c3f3ef05a3aaeea-Abstract.html
|
Daniele Grattarola, Lorenzo Livi, Cesare Alippi
|
https://papers.nips.cc/paper_files/paper/2021/hash/af87f7cdcda223c41c3f3ef05a3aaeea-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13228-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/af87f7cdcda223c41c3f3ef05a3aaeea-Paper.pdf
|
https://openreview.net/forum?id=H2Vl40HAFSB
|
https://papers.nips.cc/paper_files/paper/2021/file/af87f7cdcda223c41c3f3ef05a3aaeea-Supplemental.zip
|
Cellular automata (CA) are a class of computational models that exhibit rich dynamics emerging from the local interaction of cells arranged in a regular lattice. In this work we focus on a generalised version of typical CA, called graph cellular automata (GCA), in which the lattice structure is replaced by an arbitrary graph. In particular, we extend previous work that used convolutional neural networks to learn the transition rule of conventional CA and we use graph neural networks to learn a variety of transition rules for GCA. First, we present a general-purpose architecture for learning GCA, and we show that it can represent any arbitrary GCA with finite and discrete state space. Then, we test our approach on three different tasks: 1) learning the transition rule of a GCA on a Voronoi tessellation; 2) imitating the behaviour of a group of flocking agents; 3) learning a rule that converges to a desired target state.
| null |
Efficient Online Estimation of Causal Effects by Deciding What to Observe
|
https://papers.nips.cc/paper_files/paper/2021/hash/af8d1eb220186400c494db7091e402b0-Abstract.html
|
Shantanu Gupta, Zachary Lipton, David Childers
|
https://papers.nips.cc/paper_files/paper/2021/hash/af8d1eb220186400c494db7091e402b0-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13229-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/af8d1eb220186400c494db7091e402b0-Paper.pdf
|
https://openreview.net/forum?id=aVKRtX-0rdW
|
https://papers.nips.cc/paper_files/paper/2021/file/af8d1eb220186400c494db7091e402b0-Supplemental.pdf
|
Researchers often face data fusion problems, where multiple data sources are available, each capturing a distinct subset of variables. While problem formulations typically take the data as given, in practice, data acquisition can be an ongoing process. In this paper, we introduce the problem of deciding, at each time, which data source to sample from. Our goal is to estimate a given functional of the parameters of a probabilistic model as efficiently as possible. We propose online moment selection (OMS), a framework in which structural assumptions are encoded as moment conditions. The optimal action at each step depends, in part, on the very moments that identify the functional of interest. Our algorithms balance exploration with choosing the best action as suggested by estimated moments. We propose two selection strategies: (1) explore-then-commit (ETC) and (2) explore-then-greedy (ETG), proving that both achieve zero asymptotic regret as assessed by MSE. We instantiate our setup for average treatment effect estimation, where structural assumptions are given by a causal graph and data sources include subsets of mediators, confounders, and instrumental variables.
| null |
Perturbation Theory for the Information Bottleneck
|
https://papers.nips.cc/paper_files/paper/2021/hash/af8d9c4e238c63fb074b44eb6aed80ae-Abstract.html
|
Vudtiwat Ngampruetikorn, David J. Schwab
|
https://papers.nips.cc/paper_files/paper/2021/hash/af8d9c4e238c63fb074b44eb6aed80ae-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13230-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/af8d9c4e238c63fb074b44eb6aed80ae-Paper.pdf
|
https://openreview.net/forum?id=A2HvBPoSBMs
|
https://papers.nips.cc/paper_files/paper/2021/file/af8d9c4e238c63fb074b44eb6aed80ae-Supplemental.pdf
|
Extracting relevant information from data is crucial for all forms of learning. The information bottleneck (IB) method formalizes this, offering a mathematically precise and conceptually appealing framework for understanding learning phenomena. However, the nonlinearity of the IB problem makes it computationally expensive and analytically intractable in general. Here we derive a perturbation theory for the IB method and report the first complete characterization of the learning onset, the limit of maximum relevant information per bit extracted from data. We test our results on synthetic probability distributions, finding good agreement with the exact numerical solution near the onset of learning. We explore the differences and subtleties between our derivation and previous attempts at deriving a perturbation theory for the learning onset, and attribute the discrepancy to a flawed assumption. Our work also provides a fresh perspective on the intimate relationship between the IB method and the strong data processing inequality.
| null |
Deconvolutional Networks on Graph Data
|
https://papers.nips.cc/paper_files/paper/2021/hash/afa299a4d1d8c52e75dd8a24c3ce534f-Abstract.html
|
Jia Li, Jiajin Li, Yang Liu, Jianwei Yu, Yueting Li, Hong Cheng
|
https://papers.nips.cc/paper_files/paper/2021/hash/afa299a4d1d8c52e75dd8a24c3ce534f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13231-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/afa299a4d1d8c52e75dd8a24c3ce534f-Paper.pdf
|
https://openreview.net/forum?id=y2p9IIXwdg2
|
https://papers.nips.cc/paper_files/paper/2021/file/afa299a4d1d8c52e75dd8a24c3ce534f-Supplemental.zip
|
In this paper, we consider an inverse problem in graph learning domain -- "given the graph representations smoothed by Graph Convolutional Network (GCN), how can we reconstruct the input graph signal?" We propose Graph Deconvolutional Network (GDN) and motivate the design of GDN via a combination of inverse filters in spectral domain and de-noising layers in wavelet domain, as the inverse operation results in a high frequency amplifier and may amplify the noise. We demonstrate the effectiveness of the proposed method on several tasks including graph feature imputation and graph structure generation.
| null |
Variational Multi-Task Learning with Gumbel-Softmax Priors
|
https://papers.nips.cc/paper_files/paper/2021/hash/afd4836712c5e77550897e25711e1d96-Abstract.html
|
Jiayi Shen, Xiantong Zhen, Marcel Worring, Ling Shao
|
https://papers.nips.cc/paper_files/paper/2021/hash/afd4836712c5e77550897e25711e1d96-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13232-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/afd4836712c5e77550897e25711e1d96-Paper.pdf
|
https://openreview.net/forum?id=-2rkcde3CDJ
|
https://papers.nips.cc/paper_files/paper/2021/file/afd4836712c5e77550897e25711e1d96-Supplemental.pdf
|
Multi-task learning aims to explore task relatedness to improve individual tasks, which is of particular significance in the challenging scenario that only limited data is available for each task. To tackle this challenge, we propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks. We cast multi-task learning as a variational Bayesian inference problem, in which task relatedness is explored in a unified manner by specifying priors. To incorporate shared knowledge into each task, we design the prior of a task to be a learnable mixture of the variational posteriors of other related tasks, which is learned by the Gumbel-Softmax technique. In contrast to previous methods, our VMTL can exploit task relatedness for both representations and classifiers in a principled way by jointly inferring their posteriors. This enables individual tasks to fully leverage inductive biases provided by related tasks, therefore improving the overall performance of all tasks. Experimental results demonstrate that the proposed VMTL is able to effectively tackle a variety of challenging multi-task learning settings with limited training data for both classification and regression. Our method consistently surpasses previous methods, including strong Bayesian approaches, and achieves state-of-the-art performance on five benchmark datasets.
| null |
Accelerating Quadratic Optimization with Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/afdec7005cc9f14302cd0474fd0f3c96-Abstract.html
|
Jeffrey Ichnowski, Paras Jain, Bartolomeo Stellato, Goran Banjac, Michael Luo, Francesco Borrelli, Joseph E. Gonzalez, Ion Stoica, Ken Goldberg
|
https://papers.nips.cc/paper_files/paper/2021/hash/afdec7005cc9f14302cd0474fd0f3c96-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13233-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/afdec7005cc9f14302cd0474fd0f3c96-Paper.pdf
|
https://openreview.net/forum?id=5FtUGRvwEF
| null |
First-order methods for quadratic optimization such as OSQP are widely used for large-scale machine learning and embedded optimal control, where many related problems must be rapidly solved. These methods face two persistent challenges: manual hyperparameter tuning and convergence time to high-accuracy solutions. To address these, we explore how Reinforcement Learning (RL) can learn a policy to tune parameters to accelerate convergence. In experiments with well-known QP benchmarks we find that our RL policy, RLQP, significantly outperforms state-of-the-art QP solvers by up to 3x. RLQP generalizes surprisingly well to previously unseen problems with varying dimension and structure from different applications, including the QPLIB, Netlib LP and Maros-M{\'e}sz{\'a}ros problems. Code, models, and videos are available at https://berkeleyautomation.github.io/rlqp/.
| null |
Deep Residual Learning in Spiking Neural Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/afe434653a898da20044041262b3ac74-Abstract.html
|
Wei Fang, Zhaofei Yu, Yanqi Chen, Tiejun Huang, Timothée Masquelier, Yonghong Tian
|
https://papers.nips.cc/paper_files/paper/2021/hash/afe434653a898da20044041262b3ac74-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13234-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/afe434653a898da20044041262b3ac74-Paper.pdf
|
https://openreview.net/forum?id=6OoCDvFV4m
|
https://papers.nips.cc/paper_files/paper/2021/file/afe434653a898da20044041262b3ac74-Supplemental.pdf
|
Deep Spiking Neural Networks (SNNs) present optimization difficulties for gradient-based approaches due to discrete binary activation and complex spatial-temporal dynamics. Considering the huge success of ResNet in deep learning, it would be natural to train deep SNNs with residual learning. The previous Spiking ResNet mimics the standard residual block in ANNs and simply replaces ReLU activation layers with spiking neurons, which suffers from the degradation problem and can hardly implement residual learning. In this paper, we propose the spike-element-wise (SEW) ResNet to realize residual learning in deep SNNs. We prove that the SEW ResNet can easily implement identity mapping and overcome the vanishing/exploding gradient problems of Spiking ResNet. We evaluate our SEW ResNet on ImageNet, DVS Gesture, and CIFAR10-DVS datasets, and show that SEW ResNet outperforms the state-of-the-art directly trained SNNs in both accuracy and time-steps. Moreover, SEW ResNet can achieve higher performance by simply adding more layers, providing a simple method to train deep SNNs. To the best of our knowledge, this is the first time that directly training deep SNNs with more than 100 layers has become possible. Our code is available at https://github.com/fangwei123456/Spike-Element-Wise-ResNet.
| null |
Duplex Sequence-to-Sequence Learning for Reversible Machine Translation
|
https://papers.nips.cc/paper_files/paper/2021/hash/afecc60f82be41c1b52f6705ec69e0f1-Abstract.html
|
Zaixiang Zheng, Hao Zhou, Shujian Huang, Jiajun Chen, Jingjing Xu, Lei Li
|
https://papers.nips.cc/paper_files/paper/2021/hash/afecc60f82be41c1b52f6705ec69e0f1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13235-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/afecc60f82be41c1b52f6705ec69e0f1-Paper.pdf
|
https://openreview.net/forum?id=2BbDxFtDht7
| null |
Sequence-to-sequence learning naturally has two directions; how can we effectively utilize supervision signals from both directions? Existing approaches either require two separate models, or a multitask-learned model with inferior performance. In this paper, we propose REDER (Reversible Duplex Transformer), a parameter-efficient model, and apply it to machine translation. Either end of REDER can simultaneously input and output a distinct language. Thus REDER enables {\em reversible machine translation} by simply flipping the input and output ends. Experiments verify that REDER achieves the first success of reversible machine translation, which helps it outperform its multitask-trained baselines by up to 1.3 BLEU.
| null |
Improved Coresets and Sublinear Algorithms for Power Means in Euclidean Spaces
|
https://papers.nips.cc/paper_files/paper/2021/hash/b035d6563a2adac9f822940c145263ce-Abstract.html
|
Vincent Cohen-Addad, David Saulpic, Chris Schwiegelshohn
|
https://papers.nips.cc/paper_files/paper/2021/hash/b035d6563a2adac9f822940c145263ce-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13236-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b035d6563a2adac9f822940c145263ce-Paper.pdf
|
https://openreview.net/forum?id=wvylaMP20_b
|
https://papers.nips.cc/paper_files/paper/2021/file/b035d6563a2adac9f822940c145263ce-Supplemental.pdf
|
In this paper, we consider the problem of finding high-dimensional power means: given a set $A$ of $n$ points in $\mathbb{R}^d$, find the point $m$ that minimizes the sum of Euclidean distances, raised to the power $z$, over all input points. Special cases of this problem include the well-known Fermat-Weber problem -- or geometric median problem -- where $z = 1$, the mean or centroid where $z = 2$, and the Minimum Enclosing Ball problem, where $z = \infty$. We consider these problems in the big data regime. Here, we are interested in sampling as few points as possible such that we can accurately estimate $m$. More specifically, we consider sublinear algorithms as well as coresets for these problems. Sublinear algorithms have random query access to $A$, and the goal is to minimize the number of queries. Here, we show that $\tilde{O}(\varepsilon^{-z-3})$ samples are sufficient to achieve a $(1+\varepsilon)$ approximation, generalizing the results of Cohen, Lee, Miller, Pachocki, and Sidford [STOC '16] and Inaba, Katoh, and Imai [SoCG '94] to arbitrary $z$. Moreover, we show that this bound is nearly optimal, as any algorithm requires at least $\Omega(\varepsilon^{-z+1})$ queries to achieve said approximation. The second contribution is coresets for these problems, where we aim to find a small, weighted subset of the points that approximates the cost of every candidate point $c \in \mathbb{R}^d$ up to a $(1\pm\varepsilon)$ factor. Here, we show that $\tilde{O}(\varepsilon^{-2})$ points are sufficient, improving on the $\tilde{O}(d\varepsilon^{-2})$ bound of Feldman and Langberg [STOC '11] and the $\tilde{O}(\varepsilon^{-4})$ bound of Braverman, Jiang, Krauthgamer, and Wu [SODA '21].
| null |
Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks
|
https://papers.nips.cc/paper_files/paper/2021/hash/b0490b85e92b64dbb5db76bf8fca6a82-Abstract.html
|
Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, Joseph Naor, Daniel Soudry
|
https://papers.nips.cc/paper_files/paper/2021/hash/b0490b85e92b64dbb5db76bf8fca6a82-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13237-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b0490b85e92b64dbb5db76bf8fca6a82-Paper.pdf
|
https://openreview.net/forum?id=vRWZsBLKqA
|
https://papers.nips.cc/paper_files/paper/2021/file/b0490b85e92b64dbb5db76bf8fca6a82-Supplemental.pdf
|
Unstructured pruning reduces the memory footprint in deep neural networks (DNNs). Recently, researchers have proposed different types of structural pruning intended to also reduce the computational complexity. In this work, we first suggest a new measure called mask-diversity, which correlates with the expected accuracy of the different types of structural pruning. We focus on the recently suggested N:M fine-grained block sparsity mask, in which for each block of M weights, we have at least N zeros. While N:M fine-grained block sparsity allows acceleration in actual modern hardware, it can be used only to accelerate the inference phase. In order to allow for similar accelerations in the training phase, we suggest a novel transposable fine-grained sparsity mask, where the same mask can be used for both forward and backward passes. Our transposable mask guarantees that both the weight matrix and its transpose follow the same sparsity pattern; thus, the matrix multiplication required for passing the error backward can also be accelerated. We formulate the problem of finding the optimal transposable-mask as a minimum-cost flow problem. Additionally, to speed up the minimum-cost flow computation, we also introduce a fast linear-time approximation that can be used when the masks dynamically change during training. Our experiments suggest a 2x speed-up in the matrix multiplications with no accuracy degradation over vision and language models. Finally, to solve the problem of switching between different structure constraints, we suggest a method to convert a pre-trained model with unstructured sparsity to an N:M fine-grained block sparsity model with little to no training. A reference implementation can be found at https://github.com/papers-submission/structuredtransposablemasks.
| null |
Learning and Generalization in RNNs
|
https://papers.nips.cc/paper_files/paper/2021/hash/b04c387c8384ca083a71b8da516f65f6-Abstract.html
|
Abhishek Panigrahi, Navin Goyal
|
https://papers.nips.cc/paper_files/paper/2021/hash/b04c387c8384ca083a71b8da516f65f6-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13238-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b04c387c8384ca083a71b8da516f65f6-Paper.pdf
|
https://openreview.net/forum?id=yr7nrY18Xu
|
https://papers.nips.cc/paper_files/paper/2021/file/b04c387c8384ca083a71b8da516f65f6-Supplemental.pdf
|
Simple recurrent neural networks (RNNs) and their more advanced cousins LSTMs etc. have been very successful in sequence modeling. Their theoretical understanding, however, is lacking and has not kept pace with the progress for feedforward networks, where a reasonably complete understanding in the special case of highly overparametrized one-hidden-layer networks has emerged. In this paper, we make progress towards remedying this situation by proving that RNNs can learn functions of sequences. In contrast to the previous work that could only deal with functions of sequences that are sums of functions of individual tokens in the sequence, we allow general functions. Conceptually and technically, we introduce new ideas which enable us to extract information from the hidden state of the RNN in our proofs---addressing a crucial weakness in previous work. We illustrate our results on some regular language recognition problems.
| null |
Improving Visual Quality of Image Synthesis by A Token-based Generator with Transformers
|
https://papers.nips.cc/paper_files/paper/2021/hash/b056eb1587586b71e2da9acfe4fbd19e-Abstract.html
|
Yanhong Zeng, Huan Yang, Hongyang Chao, Jianbo Wang, Jianlong Fu
|
https://papers.nips.cc/paper_files/paper/2021/hash/b056eb1587586b71e2da9acfe4fbd19e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13239-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b056eb1587586b71e2da9acfe4fbd19e-Paper.pdf
|
https://openreview.net/forum?id=lGoKo9WS2A_
|
https://papers.nips.cc/paper_files/paper/2021/file/b056eb1587586b71e2da9acfe4fbd19e-Supplemental.zip
|
We present a new perspective on achieving image synthesis by viewing this task as a visual token generation problem. Different from existing paradigms that directly synthesize a full image from a single input (e.g., a latent code), the new formulation enables a flexible local manipulation for different image regions, which makes it possible to learn content-aware and fine-grained style control for image synthesis. Specifically, it takes as input a sequence of latent tokens to predict the visual tokens for synthesizing an image. Under this perspective, we propose a token-based generator (i.e., TokenGAN). Particularly, the TokenGAN inputs two semantically different visual tokens, i.e., the learned constant content tokens and the style tokens from the latent space. Given a sequence of style tokens, the TokenGAN is able to control the image synthesis by assigning the styles to the content tokens via an attention mechanism with a Transformer. We conduct extensive experiments and show that the proposed TokenGAN has achieved state-of-the-art results on several widely-used image synthesis benchmarks, including FFHQ and LSUN CHURCH with different resolutions. In particular, the generator is able to synthesize high-fidelity images at 1024x1024 resolution, dispensing with convolutions entirely.
| null |
The Effect of the Intrinsic Dimension on the Generalization of Quadratic Classifiers
|
https://papers.nips.cc/paper_files/paper/2021/hash/b0928f2d4ba7ea33b05024f21d937f48-Abstract.html
|
Fabian Latorre, Leello Tadesse Dadi, Paul Rolland, Volkan Cevher
|
https://papers.nips.cc/paper_files/paper/2021/hash/b0928f2d4ba7ea33b05024f21d937f48-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13240-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b0928f2d4ba7ea33b05024f21d937f48-Paper.pdf
|
https://openreview.net/forum?id=_hKvtsqItc
|
https://papers.nips.cc/paper_files/paper/2021/file/b0928f2d4ba7ea33b05024f21d937f48-Supplemental.pdf
|
It has been recently observed that neural networks, unlike kernel methods, enjoy a reduced sample complexity when the distribution is isotropic (i.e., when the covariance matrix is the identity). We find that this sensitivity to the data distribution is not exclusive to neural networks, and the same phenomenon can be observed on the class of quadratic classifiers (i.e., the sign of a quadratic polynomial) with a nuclear-norm constraint. We demonstrate this by deriving an upper bound on the Rademacher Complexity that depends on two key quantities: (i) the intrinsic dimension, which is a measure of isotropy, and (ii) the largest eigenvalue of the second moment (covariance) matrix of the distribution. Our result improves the dependence on the dimension over the best previously known bound and precisely quantifies the relation between the sample complexity and the level of isotropy of the distribution.
| null |
DeepReduce: A Sparse-tensor Communication Framework for Federated Deep Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/b0ab42fcb7133122b38521d13da7120b-Abstract.html
|
Hang Xu, Kelly Kostopoulou, Aritra Dutta, Xin Li, Alexandros Ntoulas, Panos Kalnis
|
https://papers.nips.cc/paper_files/paper/2021/hash/b0ab42fcb7133122b38521d13da7120b-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13241-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b0ab42fcb7133122b38521d13da7120b-Paper.pdf
|
https://openreview.net/forum?id=OAy508Q3T8
|
https://papers.nips.cc/paper_files/paper/2021/file/b0ab42fcb7133122b38521d13da7120b-Supplemental.pdf
|
Sparse tensors appear frequently in federated deep learning, either as a direct artifact of the deep neural network’s gradients, or as a result of an explicit sparsification process. Existing communication primitives are agnostic to the peculiarities of deep learning; consequently, they impose unnecessary communication overhead. This paper introduces DeepReduce, a versatile framework for the compressed communication of sparse tensors, tailored to federated deep learning. DeepReduce decomposes sparse tensors into two sets, values and indices, and allows both independent and combined compression of these sets. We support a variety of common compressors, such as Deflate for values, or run-length encoding for indices. We also propose two novel compression schemes that achieve superior results: curve fitting-based for values, and bloom filter-based for indices. DeepReduce is orthogonal to existing gradient sparsifiers and can be applied in conjunction with them, transparently to the end-user, to significantly lower the communication overhead. As proof of concept, we implement our approach on TensorFlow and PyTorch. Our experiments with large real models demonstrate that DeepReduce transmits 320% less data than existing sparsifiers, without affecting accuracy. Code is available at https://github.com/hangxu0304/DeepReduce.
| null |
Provably Efficient Causal Reinforcement Learning with Confounded Observational Data
|
https://papers.nips.cc/paper_files/paper/2021/hash/b0b79da57b95837f14be95aaa4d54cf8-Abstract.html
|
Lingxiao Wang, Zhuoran Yang, Zhaoran Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/b0b79da57b95837f14be95aaa4d54cf8-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13242-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b0b79da57b95837f14be95aaa4d54cf8-Paper.pdf
|
https://openreview.net/forum?id=tUeeRzMXJZ
|
https://papers.nips.cc/paper_files/paper/2021/file/b0b79da57b95837f14be95aaa4d54cf8-Supplemental.zip
|
Empowered by neural networks, deep reinforcement learning (DRL) achieves tremendous empirical success. However, DRL requires a large dataset by interacting with the environment, which is unrealistic in critical scenarios such as autonomous driving and personalized medicine. In this paper, we study how to incorporate the dataset collected in the offline setting to improve the sample efficiency in the online setting. To incorporate the observational data, we face two challenges. (a) The behavior policy that generates the observational data may depend on unobserved random variables (confounders), which affect the received rewards and transition dynamics. (b) Exploration in the online setting requires quantifying the uncertainty given both the observational and interventional data. To tackle such challenges, we propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner. DOVI explicitly adjusts for the confounding bias in the observational data, where the confounders are partially observed or unobserved. In both cases, such adjustments allow us to construct the bonus based on a notion of information gain, which takes into account the amount of information acquired from the offline setting. In particular, we prove that the regret of DOVI is smaller than the optimal regret achievable in the pure online setting when the confounded observational data are informative upon the adjustments.
| null |
Predicting Deep Neural Network Generalization with Perturbation Response Curves
|
https://papers.nips.cc/paper_files/paper/2021/hash/b0dd033cbe58aa5ea27747271bfd84e3-Abstract.html
|
Yair Schiff, Brian Quanz, Payel Das, Pin-Yu Chen
|
https://papers.nips.cc/paper_files/paper/2021/hash/b0dd033cbe58aa5ea27747271bfd84e3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13243-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b0dd033cbe58aa5ea27747271bfd84e3-Paper.pdf
|
https://openreview.net/forum?id=lRYfPNKCRu
|
https://papers.nips.cc/paper_files/paper/2021/file/b0dd033cbe58aa5ea27747271bfd84e3-Supplemental.pdf
|
The field of Deep Learning is rich with empirical evidence of human-like performance on a variety of prediction tasks. However, despite these successes, the recent Predicting Generalization in Deep Learning (PGDL) NeurIPS 2020 competition suggests that there is a need for more robust and efficient measures of network generalization. In this work, we propose a new framework for evaluating the generalization capabilities of trained networks. We use perturbation response (PR) curves that capture the accuracy change of a given network as a function of varying levels of training sample perturbation. From these PR curves, we derive novel statistics that capture generalization capability. Specifically, we introduce two new measures for accurately predicting generalization gaps: the Gi-score and Pal-score, which are inspired by the Gini coefficient and Palma ratio (measures of income inequality). Applying our framework to intra- and inter-class sample mixup, we attain better predictive scores than the current state-of-the-art measures on a majority of tasks in the PGDL competition. In addition, we show that our framework and the proposed statistics can be used to capture to what extent a trained network is invariant to a given parametric input transformation, such as rotation or translation. Therefore, these generalization gap prediction statistics also provide a useful means for selecting optimal network architectures and hyperparameters that are invariant to a certain perturbation.
| null |
Exploiting Domain-Specific Features to Enhance Domain Generalization
|
https://papers.nips.cc/paper_files/paper/2021/hash/b0f2ad44d26e1a6f244201fe0fd864d1-Abstract.html
|
Manh-Ha Bui, Toan Tran, Anh Tran, Dinh Phung
|
https://papers.nips.cc/paper_files/paper/2021/hash/b0f2ad44d26e1a6f244201fe0fd864d1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13244-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b0f2ad44d26e1a6f244201fe0fd864d1-Paper.pdf
|
https://openreview.net/forum?id=vKxFYApxBjr
|
https://papers.nips.cc/paper_files/paper/2021/file/b0f2ad44d26e1a6f244201fe0fd864d1-Supplemental.pdf
|
Domain Generalization (DG) aims to train a model, from multiple observed source domains, in order to perform well on unseen target domains. To obtain the generalization capability, prior DG approaches have focused on extracting domain-invariant information across sources to generalize on target domains, while useful domain-specific information, which strongly correlates with labels in individual domains and with generalization to target domains, is usually ignored. In this paper, we propose meta-Domain Specific-Domain Invariant (mDSDI) - a novel, theoretically sound framework that extends beyond the invariance view to further capture the usefulness of domain-specific information. Our key insight is to disentangle features in the latent space while jointly learning both domain-invariant and domain-specific features in a unified framework. The domain-specific representation is optimized through the meta-learning framework to adapt from source domains, targeting a robust generalization on unseen domains. We empirically show that mDSDI provides competitive results with state-of-the-art techniques in DG. A further ablation study with our generated dataset, Background-Colored-MNIST, confirms the hypothesis that domain-specific information is essential, leading to better results compared with using only domain-invariant features.
| null |
Optimal Order Simple Regret for Gaussian Process Bandits
|
https://papers.nips.cc/paper_files/paper/2021/hash/b1300291698eadedb559786c809cc592-Abstract.html
|
Sattar Vakili, Nacime Bouziani, Sepehr Jalali, Alberto Bernacchia, Da-shan Shiu
|
https://papers.nips.cc/paper_files/paper/2021/hash/b1300291698eadedb559786c809cc592-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13245-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b1300291698eadedb559786c809cc592-Paper.pdf
|
https://openreview.net/forum?id=r_KsP_YjX3O
|
https://papers.nips.cc/paper_files/paper/2021/file/b1300291698eadedb559786c809cc592-Supplemental.pdf
|
Consider the sequential optimization of a continuous, possibly non-convex, and expensive to evaluate objective function $f$. The problem can be cast as a Gaussian Process (GP) bandit where $f$ lives in a reproducing kernel Hilbert space (RKHS). The state of the art analysis of several learning algorithms shows a significant gap between the lower and upper bounds on the simple regret performance. When $N$ is the number of exploration trials and $\gamma_N$ is the maximal information gain, we prove an $\tilde{\mathcal{O}}(\sqrt{\gamma_N/N})$ bound on the simple regret performance of a pure exploration algorithm that is significantly tighter than the existing bounds. We show that this bound is order optimal up to logarithmic factors for the cases where a lower bound on regret is known. To establish these results, we prove novel and sharp confidence intervals for GP models applicable to RKHS elements which may be of broader interest.
| null |
Generalization Guarantee of SGD for Pairwise Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/b1301141feffabac455e1f90a7de2054-Abstract.html
|
Yunwen Lei, Mingrui Liu, Yiming Ying
|
https://papers.nips.cc/paper_files/paper/2021/hash/b1301141feffabac455e1f90a7de2054-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13246-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b1301141feffabac455e1f90a7de2054-Paper.pdf
|
https://openreview.net/forum?id=lHFCaetMa_
|
https://papers.nips.cc/paper_files/paper/2021/file/b1301141feffabac455e1f90a7de2054-Supplemental.pdf
|
Recently, there has been growing interest in studying pairwise learning, since it includes many important machine learning tasks as specific examples, e.g., metric learning, AUC maximization, and ranking. While stochastic gradient descent (SGD) is an efficient method, there is a lack of studies on its generalization behavior for pairwise learning. In this paper, we present a systematic study on the generalization analysis of SGD for pairwise learning to understand the balance between generalization and optimization. We develop a novel high-probability generalization bound for uniformly-stable algorithms to incorporate the variance information for better generalization, based on which we establish the first nonsmooth learning algorithm to achieve almost optimal high-probability and dimension-independent generalization bounds in linear time. We consider both convex and nonconvex pairwise learning problems. Our stability analysis for convex problems shows how interpolation can help generalization. We establish a uniform convergence of gradients, and apply it to derive the first generalization bounds on population gradients for nonconvex problems. Finally, we develop better generalization bounds for gradient-dominated problems.
| null |
Supercharging Imbalanced Data Learning With Energy-based Contrastive Representation Transfer
|
https://papers.nips.cc/paper_files/paper/2021/hash/b151ce4935a3c2807e1dd9963eda16d8-Abstract.html
|
Junya Chen, Zidi Xiu, Benjamin Goldstein, Ricardo Henao, Lawrence Carin, Chenyang Tao
|
https://papers.nips.cc/paper_files/paper/2021/hash/b151ce4935a3c2807e1dd9963eda16d8-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13247-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b151ce4935a3c2807e1dd9963eda16d8-Paper.pdf
|
https://openreview.net/forum?id=kpTMw7ZMJB
|
https://papers.nips.cc/paper_files/paper/2021/file/b151ce4935a3c2807e1dd9963eda16d8-Supplemental.pdf
|
Dealing with severe class imbalance poses a major challenge for many real-world applications, especially when the accurate classification and generalization of minority classes are of primary interest. In computer vision and NLP, learning from datasets with long-tail behavior is a recurring theme, especially for naturally occurring labels. Existing solutions mostly appeal to sampling or weighting adjustments to alleviate the extreme imbalance, or impose inductive bias to prioritize generalizable associations. Here we take a novel perspective to promote sample efficiency and model generalization based on the invariance principles of causality. Our contribution posits a meta-distributional scenario, where the causal generating mechanism for label-conditional features is invariant across different labels. Such a causal assumption enables efficient knowledge transfer from the dominant classes to their under-represented counterparts, even if their feature distributions show apparent disparities. This allows us to leverage a causal data augmentation procedure to enlarge the representation of minority classes. Our development is orthogonal to existing imbalanced data learning techniques and thus can be seamlessly integrated. The proposed approach is validated on an extensive set of synthetic and real-world tasks against state-of-the-art solutions.
| null |
Heavy Ball Momentum for Conditional Gradient
|
https://papers.nips.cc/paper_files/paper/2021/hash/b166b57d195370cd41f80dd29ed523d9-Abstract.html
|
Bingcong Li, Alireza Sadeghi, Georgios Giannakis
|
https://papers.nips.cc/paper_files/paper/2021/hash/b166b57d195370cd41f80dd29ed523d9-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13248-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b166b57d195370cd41f80dd29ed523d9-Paper.pdf
|
https://openreview.net/forum?id=XBFZ6GXjalo
|
https://papers.nips.cc/paper_files/paper/2021/file/b166b57d195370cd41f80dd29ed523d9-Supplemental.pdf
|
Conditional gradient algorithms, a.k.a. Frank-Wolfe (FW) algorithms, have well-documented merits in machine learning and signal processing applications. Unlike projection-based methods, momentum cannot improve the convergence rate of FW, in general. This limitation motivates the present work, which deals with heavy ball momentum and its impact on FW. Specifically, it is established that heavy ball offers a unifying perspective on the primal-dual (PD) convergence, and enjoys a tighter \textit{per iteration} PD error rate, for multiple choices of step sizes, where PD error can serve as the stopping criterion in practice. In addition, it is asserted that restart, a scheme typically employed jointly with Nesterov's momentum, can further tighten this PD error bound. Numerical results demonstrate the usefulness of heavy ball momentum in FW iterations.
| null |
PARP: Prune, Adjust and Re-Prune for Self-Supervised Speech Recognition
|
https://papers.nips.cc/paper_files/paper/2021/hash/b17c0907e67d868b4e0feb43dbbe6f11-Abstract.html
|
Cheng-I Jeff Lai, Yang Zhang, Alexander H. Liu, Shiyu Chang, Yi-Lun Liao, Yung-Sung Chuang, Kaizhi Qian, Sameer Khurana, David Cox, Jim Glass
|
https://papers.nips.cc/paper_files/paper/2021/hash/b17c0907e67d868b4e0feb43dbbe6f11-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13249-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b17c0907e67d868b4e0feb43dbbe6f11-Paper.pdf
|
https://openreview.net/forum?id=UoVpP8R2Vn
|
https://papers.nips.cc/paper_files/paper/2021/file/b17c0907e67d868b4e0feb43dbbe6f11-Supplemental.pdf
|
Self-supervised speech representation learning (speech SSL) has demonstrated the benefit of scale in learning rich representations for Automatic Speech Recognition (ASR) with limited paired data, such as wav2vec 2.0. We investigate the existence of sparse subnetworks in pre-trained speech SSL models that achieve even better low-resource ASR results. However, directly applying widely adopted pruning methods such as the Lottery Ticket Hypothesis (LTH) is suboptimal in terms of the computational cost needed. Moreover, we show that the discovered subnetworks yield minimal performance gain compared to the original dense network. We present Prune-Adjust-Re-Prune (PARP), which discovers and finetunes subnetworks for much better performance, while only requiring a single downstream ASR finetuning run. PARP is inspired by our surprising observation that subnetworks pruned for pre-training tasks need merely a slight adjustment to achieve a sizeable performance boost in downstream ASR tasks. Extensive experiments on low-resource ASR verify (1) sparse subnetworks exist in mono-lingual/multi-lingual pre-trained speech SSL, and (2) the computational advantage and performance gain of PARP over baseline pruning methods. In particular, on the 10min Librispeech split without LM decoding, PARP discovers subnetworks from wav2vec 2.0 with an absolute 10.9%/12.6% WER decrease compared to the full model. We further demonstrate the effectiveness of PARP via cross-lingual pruning without any phone recognition degradation, the discovery of a multi-lingual subnetwork for 10 spoken languages in 1 finetuning run, and its applicability to pre-trained BERT/XLNet for natural language tasks.
| null |
Robust Learning of Optimal Auctions
|
https://papers.nips.cc/paper_files/paper/2021/hash/b19aa25ff58940d974234b48391b9549-Abstract.html
|
Wenshuo Guo, Michael Jordan, Emmanouil Zampetakis
|
https://papers.nips.cc/paper_files/paper/2021/hash/b19aa25ff58940d974234b48391b9549-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13250-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b19aa25ff58940d974234b48391b9549-Paper.pdf
|
https://openreview.net/forum?id=Gl3ADZLz9ir
|
https://papers.nips.cc/paper_files/paper/2021/file/b19aa25ff58940d974234b48391b9549-Supplemental.pdf
|
We study the problem of learning revenue-optimal multi-bidder auctions from samples when the samples of bidders' valuations can be adversarially corrupted or drawn from distributions that are adversarially perturbed. First, we prove tight upper bounds on the revenue we can obtain with a corrupted distribution under a population model, for both regular valuation distributions and distributions with monotone hazard rate (MHR). We then propose new algorithms that, given only an ``approximate distribution'' for the bidder's valuation, can learn a mechanism whose revenue is nearly optimal simultaneously for all ``true distributions'' that are $\alpha$-close to the original distribution in Kolmogorov-Smirnov distance. The proposed algorithms operate beyond the setting of bounded distributions that have been studied in prior works, and are guaranteed to obtain a fraction $1-O(\alpha)$ of the optimal revenue under the true distribution when the distributions are MHR. Moreover, they are guaranteed to yield at least a fraction $1-O(\sqrt{\alpha})$ of the optimal revenue when the distributions are regular. We prove that these upper bounds cannot be further improved, by providing matching lower bounds. Lastly, we derive sample complexity upper bounds for learning a near-optimal auction for both MHR and regular distributions.
| null |
Disrupting Deep Uncertainty Estimation Without Harming Accuracy
|
https://papers.nips.cc/paper_files/paper/2021/hash/b1b20d09041289e6c3fbb81850c5da54-Abstract.html
|
Ido Galil, Ran El-Yaniv
|
https://papers.nips.cc/paper_files/paper/2021/hash/b1b20d09041289e6c3fbb81850c5da54-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13251-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b1b20d09041289e6c3fbb81850c5da54-Paper.pdf
|
https://openreview.net/forum?id=tgdoUMqlwMv
|
https://papers.nips.cc/paper_files/paper/2021/file/b1b20d09041289e6c3fbb81850c5da54-Supplemental.pdf
|
Deep neural networks (DNNs) have proven to be powerful predictors and are widely used for various tasks. Credible uncertainty estimation of their predictions, however, is crucial for their deployment in many risk-sensitive applications. In this paper we present a novel and simple attack which, unlike adversarial attacks, does not cause incorrect predictions but instead cripples the network's capacity for uncertainty estimation. The result is that after the attack, the DNN is more confident about its incorrect predictions than about its correct ones, without having its accuracy reduced. We present two versions of the attack. The first scenario focuses on a black-box regime (where the attacker has no knowledge of the target network) and the second scenario attacks a white-box setting. The proposed attack is only required to be of minuscule magnitude for its perturbations to cause severe uncertainty estimation damage, with larger magnitudes resulting in completely unusable uncertainty estimations. We demonstrate successful attacks on three of the most popular uncertainty estimation methods: the vanilla softmax score, Deep Ensembles and MC-Dropout. Additionally, we show an attack on SelectiveNet, the selective classification architecture. We test the proposed attack on several contemporary architectures such as MobileNetV2 and EfficientNetB0, all trained to classify ImageNet.
| null |
SOFT: Softmax-free Transformer with Linear Complexity
|
https://papers.nips.cc/paper_files/paper/2021/hash/b1d10e7bafa4421218a51b1e1f1b0ba2-Abstract.html
|
Jiachen Lu, Jinghan Yao, Junge Zhang, Xiatian Zhu, Hang Xu, Weiguo Gao, Chunjing XU, Tao Xiang, Li Zhang
|
https://papers.nips.cc/paper_files/paper/2021/hash/b1d10e7bafa4421218a51b1e1f1b0ba2-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13252-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b1d10e7bafa4421218a51b1e1f1b0ba2-Paper.pdf
|
https://openreview.net/forum?id=rndqBJsGoKh
|
https://papers.nips.cc/paper_files/paper/2021/file/b1d10e7bafa4421218a51b1e1f1b0ba2-Supplemental.pdf
|
Vision transformers (ViTs) have pushed the state-of-the-art for various visual recognition tasks by patch-wise image tokenization followed by self-attention. However, the employment of self-attention modules results in a quadratic complexity in both computation and memory usage. Various attempts at approximating the self-attention computation with linear complexity have been made in Natural Language Processing. However, an in-depth analysis in this work shows that they are either theoretically flawed or empirically ineffective for visual recognition. We further identify that their limitations are rooted in keeping the softmax self-attention during approximations. Specifically, conventional self-attention is computed by normalizing the scaled dot-product between token feature vectors. Keeping this softmax operation challenges any subsequent linearization efforts. Based on this insight, for the first time, a softmax-free transformer or SOFT is proposed. To remove softmax in self-attention, a Gaussian kernel function is used to replace the dot-product similarity without further normalization. This enables a full self-attention matrix to be approximated via a low-rank matrix decomposition. The robustness of the approximation is achieved by calculating its Moore-Penrose inverse using a Newton-Raphson method. Extensive experiments on ImageNet show that our SOFT significantly improves the computational efficiency of existing ViT variants. Crucially, with a linear complexity, much longer token sequences are permitted in SOFT, resulting in a superior trade-off between accuracy and complexity.
| null |
Task-Adaptive Neural Network Search with Meta-Contrastive Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/b20bb95ab626d93fd976af958fbc61ba-Abstract.html
|
Wonyong Jeong, Hayeon Lee, Geon Park, Eunyoung Hyung, Jinheon Baek, Sung Ju Hwang
|
https://papers.nips.cc/paper_files/paper/2021/hash/b20bb95ab626d93fd976af958fbc61ba-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13253-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b20bb95ab626d93fd976af958fbc61ba-Paper.pdf
|
https://openreview.net/forum?id=U68DvXABbJ3
|
https://papers.nips.cc/paper_files/paper/2021/file/b20bb95ab626d93fd976af958fbc61ba-Supplemental.pdf
|
Most conventional Neural Architecture Search (NAS) approaches are limited in that they only generate architectures without searching for the optimal parameters. While some NAS methods handle this issue by utilizing a supernet trained on a large-scale dataset such as ImageNet, they may be suboptimal if the target tasks are highly dissimilar from the dataset the supernet is trained on. To address such limitations, we introduce a novel problem of Neural Network Search (NNS), whose goal is to search for the optimal pretrained network for a novel dataset and constraints (e.g. number of parameters), from a model zoo. Then, we propose a novel framework to tackle the problem, namely Task-Adaptive Neural Network Search (TANS). Given a model-zoo that consists of networks pretrained on diverse datasets, we use a novel amortized meta-learning framework to learn a cross-modal latent space with contrastive loss, to maximize the similarity between a dataset and a high-performing network on it, and minimize the similarity between irrelevant dataset-network pairs. We validate the effectiveness and efficiency of our method on ten real-world datasets, against existing NAS/AutoML baselines. The results show that our method instantly retrieves networks that outperform models obtained with the baselines, while requiring significantly fewer training steps to reach the target performance, thus minimizing the total cost of obtaining a task-optimal network. Our code and the model-zoo are available at https://anonymous.4open.science/r/TANS-33D6
| null |
Neural Flows: Efficient Alternative to Neural ODEs
|
https://papers.nips.cc/paper_files/paper/2021/hash/b21f9f98829dea9a48fd8aaddc1f159d-Abstract.html
|
Marin Biloš, Johanna Sommer, Syama Sundar Rangapuram, Tim Januschowski, Stephan Günnemann
|
https://papers.nips.cc/paper_files/paper/2021/hash/b21f9f98829dea9a48fd8aaddc1f159d-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13254-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b21f9f98829dea9a48fd8aaddc1f159d-Paper.pdf
|
https://openreview.net/forum?id=XCs9rM255KZ
|
https://papers.nips.cc/paper_files/paper/2021/file/b21f9f98829dea9a48fd8aaddc1f159d-Supplemental.pdf
|
Neural ordinary differential equations describe how values change in time. This is the reason why they gained importance in modeling sequential data, especially when the observations are made at irregular intervals. In this paper we propose an alternative by directly modeling the solution curves - the flow of an ODE - with a neural network. This immediately eliminates the need for expensive numerical solvers while still maintaining the modeling capability of neural ODEs. We propose several flow architectures suitable for different applications by establishing precise conditions on when a function defines a valid flow. Apart from computational efficiency, we also provide empirical evidence of favorable generalization performance via applications in time series modeling, forecasting, and density estimation.
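A minimal sketch of the idea of modeling the solution curve directly: a ResNet-style flow F(x0, t) whose time gate vanishes at t = 0, so the initial condition holds by construction and no numerical solver is needed to query the state at any time. This is one flow parameterization in the spirit of the paper; the class name and network sizes are placeholders, and the invertibility conditions discussed in the paper are not enforced here.

```python
import torch
import torch.nn as nn

class ResNetFlow(nn.Module):
    """Directly models the ODE solution x(t) = F(x0, t) with F(x0, 0) = x0."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.Tanh(), nn.Linear(hidden, dim)
        )

    def forward(self, x0, t):
        # t: (batch, 1); tanh(t) vanishes at t = 0, enforcing the initial condition
        return x0 + torch.tanh(t) * self.net(torch.cat([x0, t], dim=-1))

flow = ResNetFlow(dim=2)
x0 = torch.randn(5, 2)
print(torch.allclose(flow(x0, torch.zeros(5, 1)), x0))  # True: F(x0, 0) = x0
x_at_t = flow(x0, torch.full((5, 1), 0.7))              # state at t = 0.7, no ODE solver
```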
| null |
Multi-Objective Meta Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/b23975176653284f1f7356ba5539cfcb-Abstract.html
|
Feiyang YE, Baijiong Lin, Zhixiong Yue, Pengxin Guo, Qiao Xiao, Yu Zhang
|
https://papers.nips.cc/paper_files/paper/2021/hash/b23975176653284f1f7356ba5539cfcb-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13255-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b23975176653284f1f7356ba5539cfcb-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=wKf9iSu_TEm
|
https://papers.nips.cc/paper_files/paper/2021/file/b23975176653284f1f7356ba5539cfcb-Supplemental.pdf
|
Meta learning with multiple objectives has attracted much attention recently, since many applications need to consider multiple factors when designing learning models. Existing gradient-based works on meta learning with multiple objectives mainly combine multiple objectives into a single objective in a weighted sum manner. This simple strategy usually works, but it requires tuning the weights associated with all the objectives, which can be time consuming. Different from those works, in this paper we propose a gradient-based Multi-Objective Meta Learning (MOML) framework without manually tuning weights. Specifically, MOML formulates the objective function of meta learning with multiple objectives as a Multi-Objective Bi-Level optimization Problem (MOBLP), where the upper-level subproblem is to solve several possibly conflicting objectives for the meta learner. To solve the MOBLP, we devise the first gradient-based optimization algorithm by alternately solving the lower-level and upper-level subproblems via the gradient descent method and the gradient-based multi-objective optimization method, respectively. Theoretically, we prove the convergence properties of the proposed gradient-based optimization algorithm. Empirically, we show the effectiveness of the proposed MOML framework on several meta learning problems, including few-shot learning, domain adaptation, multi-task learning, and neural architecture search. The source code of MOML is available at https://github.com/Baijiong-Lin/MOML.
| null |
A self consistent theory of Gaussian Processes captures feature learning effects in finite CNNs
|
https://papers.nips.cc/paper_files/paper/2021/hash/b24d21019de5e59da180f1661904f49a-Abstract.html
|
Gadi Naveh, Zohar Ringel
|
https://papers.nips.cc/paper_files/paper/2021/hash/b24d21019de5e59da180f1661904f49a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13256-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b24d21019de5e59da180f1661904f49a-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=vBYwwBxVcsE
|
https://papers.nips.cc/paper_files/paper/2021/file/b24d21019de5e59da180f1661904f49a-Supplemental.pdf
|
Deep neural networks (DNNs) in the infinite width/channel limit have received much attention recently, as they provide a clear analytical window to deep learning via mappings to Gaussian Processes (GPs). Despite its theoretical appeal, this viewpoint lacks a crucial ingredient of deep learning in finite DNNs, lying at the heart of their success --- \textit{feature learning}. Here we consider DNNs trained with noisy gradient descent on a large training set and derive a self-consistent Gaussian Process theory accounting for \textit{strong} finite-DNN and feature learning effects. Applying this to a toy model of a two-layer linear convolutional neural network (CNN) shows good agreement with experiments. We further identify, both analytically and numerically, a sharp transition between a feature learning regime and a lazy learning regime in this model. Strong finite-DNN effects are also derived for a non-linear two-layer fully connected network. We have numerical evidence demonstrating that the assumptions required for our theory hold true in more realistic settings (Myrtle5 CNN trained on CIFAR-10). Our self-consistent theory provides a rich and versatile analytical framework for studying strong finite-DNN effects, most notably feature learning.
| null |
Mini-Batch Consistent Slot Set Encoder for Scalable Set Encoding
|
https://papers.nips.cc/paper_files/paper/2021/hash/b24d516bb65a5a58079f0f3526c87c57-Abstract.html
|
Andreis Bruno, Jeffrey Willette, Juho Lee, Sung Ju Hwang
|
https://papers.nips.cc/paper_files/paper/2021/hash/b24d516bb65a5a58079f0f3526c87c57-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13257-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b24d516bb65a5a58079f0f3526c87c57-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=FTdrVlWfvsz
|
https://papers.nips.cc/paper_files/paper/2021/file/b24d516bb65a5a58079f0f3526c87c57-Supplemental.pdf
|
Most existing set encoding algorithms operate under the implicit assumption that all the set elements are accessible, and that there are ample computational and memory resources to load the set into memory during training and inference. However, both assumptions fail when the set is excessively large such that it is impossible to load all set elements into memory, or when data arrives in a stream. To tackle such practical challenges in large-scale set encoding, the general set-function constraints of permutation invariance and equivariance are not sufficient. We introduce a new property termed Mini-Batch Consistency (MBC) that is required for large scale mini-batch set encoding. Additionally, we present a scalable and efficient attention-based set encoding mechanism that is amenable to mini-batch processing of sets, and capable of updating set representations as data arrives. The proposed method adheres to the required symmetries of invariance and equivariance as well as maintaining MBC for any partition of the input set. We perform extensive experiments and show that our method is computationally efficient and results in rich set encoding representations for set-structured data.
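The Mini-Batch Consistency property itself can be illustrated with a toy sum-decomposable encoder: streaming the set through in arbitrary chunks while carrying only a small running state yields exactly the full-set encoding. This is only a stand-in for the paper's attention-based slot set encoder, chosen because the consistency check is easy to verify.

```python
import numpy as np

def encode_full(x):
    # toy permutation-invariant encoder: mean of element features
    return x.mean(axis=0)

def encode_streaming(chunks):
    # process the set in mini-batches, carrying only a running (sum, count) state
    total, count = 0.0, 0
    for c in chunks:
        total = total + c.sum(axis=0)
        count += len(c)
    return total / count

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 16))
chunks = np.array_split(x, 7)          # arbitrary partition of the set
print(np.allclose(encode_full(x), encode_streaming(chunks)))  # True: mini-batch consistent
```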
| null |
Efficient and Local Parallel Random Walks
|
https://papers.nips.cc/paper_files/paper/2021/hash/b282d1735283e8eea45bce393cefe265-Abstract.html
|
Michael Kapralov, Silvio Lattanzi, Navid Nouri, Jakab Tardos
|
https://papers.nips.cc/paper_files/paper/2021/hash/b282d1735283e8eea45bce393cefe265-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13258-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b282d1735283e8eea45bce393cefe265-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=yKoZfSVFtAx
|
https://papers.nips.cc/paper_files/paper/2021/file/b282d1735283e8eea45bce393cefe265-Supplemental.pdf
|
Random walks are a fundamental primitive used in many machine learning algorithms with several applications in clustering and semi-supervised learning. Despite their relevance, the first efficient parallel algorithm to compute random walks has been introduced very recently (Łącki et al.). Unfortunately their method has a fundamental shortcoming: their algorithm is non-local in that it heavily relies on computing random walks out of all nodes in the input graph, even though in many practical applications one is interested in computing random walks only from a small subset of nodes in the graph. In this paper, we present a new algorithm that overcomes this limitation by building random walks efficiently and locally at the same time. We show that our technique is both memory and round efficient, and in particular yields an efficient parallel local clustering algorithm. Finally, we complement our theoretical analysis with experimental results showing that our algorithm is significantly more scalable than previous approaches.
| null |
Amortized Variational Inference for Simple Hierarchical Models
|
https://papers.nips.cc/paper_files/paper/2021/hash/b28d7c6b6aec04f5525b453411ff4336-Abstract.html
|
Abhinav Agrawal, Justin Domke
|
https://papers.nips.cc/paper_files/paper/2021/hash/b28d7c6b6aec04f5525b453411ff4336-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13259-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b28d7c6b6aec04f5525b453411ff4336-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=Rw_fo_Z2vV
|
https://papers.nips.cc/paper_files/paper/2021/file/b28d7c6b6aec04f5525b453411ff4336-Supplemental.pdf
|
It is difficult to use subsampling with variational inference in hierarchical models since the number of local latent variables scales with the dataset. Thus, inference in hierarchical models remains a challenge at a large scale. It is helpful to use a variational family with a structure matching the posterior, but optimization is still slow due to the huge number of local distributions. Instead, this paper suggests an amortized approach where shared parameters simultaneously represent all local distributions. This approach is about as accurate as using a given joint distribution (e.g., a full-rank Gaussian) but is feasible on datasets that are several orders of magnitude larger. It is also dramatically faster than using a structured variational distribution.
| null |
Online Matching in Sparse Random Graphs: Non-Asymptotic Performances of Greedy Algorithm
|
https://papers.nips.cc/paper_files/paper/2021/hash/b294504229c668e750dfcc4ea9617f0a-Abstract.html
|
Nathan Noiry, Vianney Perchet, Flore Sentenac
|
https://papers.nips.cc/paper_files/paper/2021/hash/b294504229c668e750dfcc4ea9617f0a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13260-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b294504229c668e750dfcc4ea9617f0a-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=TZ0eEqEBRA
|
https://papers.nips.cc/paper_files/paper/2021/file/b294504229c668e750dfcc4ea9617f0a-Supplemental.pdf
|
Motivated by sequential budgeted allocation problems, we investigate online matching problems where connections between vertices are not i.i.d., but they have fixed degree distributions -- the so-called configuration model. We estimate the competitive ratio of the simplest algorithm, GREEDY, by approximating some relevant stochastic discrete processes by their continuous counterparts, that are solutions of an explicit system of partial differential equations. This technique gives precise bounds on the estimation errors, with arbitrarily high probability as the problem size increases. In particular, it allows the formal comparison between different configuration models. We also prove that, quite surprisingly, GREEDY can have better performance guarantees than RANKING, another celebrated algorithm for online matching that usually outperforms the former.
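A minimal sketch of the GREEDY algorithm analyzed in the paper: each arriving online vertex is matched to an arbitrary free offline neighbor, if one exists. The graph representation and random tie-breaking below are illustrative, not part of the paper's configuration-model analysis.

```python
import random

def greedy_online_matching(online_neighbors, num_offline, seed=0):
    """online_neighbors: list, per arriving vertex, of its offline-neighbor indices."""
    rng = random.Random(seed)
    matched_offline = [False] * num_offline
    matching = []
    for u, nbrs in enumerate(online_neighbors):
        free = [v for v in nbrs if not matched_offline[v]]
        if free:
            v = rng.choice(free)        # GREEDY: any free neighbor works
            matched_offline[v] = True
            matching.append((u, v))
    return matching

# tiny example: 3 online vertices arriving against 3 offline vertices
print(greedy_online_matching([[0, 1], [0], [1, 2]], num_offline=3))
```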
| null |
End-to-end reconstruction meets data-driven regularization for inverse problems
|
https://papers.nips.cc/paper_files/paper/2021/hash/b2df0a0d4116c55f81fd5aa1ef876510-Abstract.html
|
Subhadip Mukherjee, Marcello Carioni, Ozan Öktem, Carola-Bibiane Schönlieb
|
https://papers.nips.cc/paper_files/paper/2021/hash/b2df0a0d4116c55f81fd5aa1ef876510-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13261-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b2df0a0d4116c55f81fd5aa1ef876510-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=_0kknjKJ6BH
|
https://papers.nips.cc/paper_files/paper/2021/file/b2df0a0d4116c55f81fd5aa1ef876510-Supplemental.pdf
|
We propose a new approach for learning end-to-end reconstruction operators based on unpaired training data for ill-posed inverse problems. The proposed method combines the classical variational framework with iterative unrolling and essentially seeks to minimize a weighted combination of the expected distortion in the measurement space and the Wasserstein-1 distance between the distributions of the reconstruction and the ground-truth. More specifically, the regularizer in the variational setting is parametrized by a deep neural network and learned simultaneously with the unrolled reconstruction operator. The variational problem is then initialized with the output of the reconstruction network and solved iteratively till convergence. Notably, it takes significantly fewer iterations to converge as compared to variational methods, thanks to the excellent initialization obtained via the unrolled operator. The resulting approach combines the computational efficiency of end-to-end unrolled reconstruction with the well-posedness and noise-stability guarantees of the variational setting. Moreover, we demonstrate with the example of image reconstruction in X-ray computed tomography (CT) that our approach outperforms state-of-the-art unsupervised methods and that it outperforms or is at least on par with state-of-the-art supervised data-driven reconstruction approaches.
| null |
An online passive-aggressive algorithm for difference-of-squares classification
|
https://papers.nips.cc/paper_files/paper/2021/hash/b2ea5e977c5fc1ccfa74171a9723dd61-Abstract.html
|
Lawrence Saul
|
https://papers.nips.cc/paper_files/paper/2021/hash/b2ea5e977c5fc1ccfa74171a9723dd61-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13262-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b2ea5e977c5fc1ccfa74171a9723dd61-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=i3MWzYAT4G
|
https://papers.nips.cc/paper_files/paper/2021/file/b2ea5e977c5fc1ccfa74171a9723dd61-Supplemental.pdf
|
We investigate a low-rank model of quadratic classification inspired by previous work on factorization machines, polynomial networks, and capsule-based architectures for visual object recognition. The model is parameterized by a pair of affine transformations, and it classifies examples by comparing the magnitudes of vectors that these transformations produce. The model is also over-parameterized in the sense that different pairs of affine transformations can describe classifiers with the same decision boundary and confidence scores. We show that such pairs arise from discrete and continuous symmetries of the model’s parameter space: in particular, the latter define symmetry groups of rotations and Lorentz transformations, and we use these group structures to devise appropriately invariant procedures for model alignment and averaging. We also leverage the form of the model’s decision boundary to derive simple margin-based updates for online learning. Here we explore a strategy of passive-aggressive learning: for each example, we compute the minimum change in parameters that is required to predict its correct label with high confidence. We derive these updates by solving a quadratically constrained quadratic program (QCQP); interestingly, this QCQP is nonconvex but tractable, and it can be solved efficiently by elementary methods. We highlight the conceptual and practical contributions of this approach. Conceptually, we show that it extends the paradigm of passive-aggressive learning to a larger family of nonlinear models for classification. Practically, we show that these models perform well on large-scale problems in online learning.
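The classification rule itself is easy to state in code: score an example by the difference of squared norms of two affine transforms and predict its sign. The sketch below shows only this forward rule with randomly drawn parameters; the passive-aggressive QCQP update and the symmetry-aware averaging are omitted, and all names are illustrative.

```python
import numpy as np

def dos_score(x, A, a, B, b):
    # difference-of-squares score: compare magnitudes of two affine transforms of x
    u = A @ x + a
    v = B @ x + b
    return u @ u - v @ v

def dos_predict(x, params):
    return 1 if dos_score(x, *params) > 0 else -1

rng = np.random.default_rng(0)
d, k = 8, 3                                # input dimension, rank of each transform
params = (rng.normal(size=(k, d)), rng.normal(size=k),
          rng.normal(size=(k, d)), rng.normal(size=k))
x = rng.normal(size=d)
print(dos_predict(x, params))
```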
| null |
Finite-Sample Analysis of Off-Policy TD-Learning via Generalized Bellman Operators
|
https://papers.nips.cc/paper_files/paper/2021/hash/b2eeb7362ef83deff5c7813a67e14f0a-Abstract.html
|
Zaiwei Chen, Siva Theja Maguluri, Sanjay Shakkottai, Karthikeyan Shanmugam
|
https://papers.nips.cc/paper_files/paper/2021/hash/b2eeb7362ef83deff5c7813a67e14f0a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13263-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b2eeb7362ef83deff5c7813a67e14f0a-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=esCx4oejjxw
|
https://papers.nips.cc/paper_files/paper/2021/file/b2eeb7362ef83deff5c7813a67e14f0a-Supplemental.pdf
|
In TD-learning, off-policy sampling is known to be more practical than on-policy sampling, and by decoupling learning from data collection, it enables data reuse. It is known that policy evaluation has the interpretation of solving a generalized Bellman equation. In this paper, we derive finite-sample bounds for any general off-policy TD-like stochastic approximation algorithm that solves for the fixed-point of this generalized Bellman operator. Our key step is to show that the generalized Bellman operator is simultaneously a contraction mapping with respect to a weighted $\ell_p$-norm for each $p$ in $[1,\infty)$, with a common contraction factor. Off-policy TD-learning is known to suffer from high variance due to the product of importance sampling ratios. A number of algorithms (e.g. $Q^\pi(\lambda)$, Tree-Backup$(\lambda)$, Retrace$(\lambda)$, and $Q$-trace) have been proposed in the literature to address this issue. Our results immediately imply finite-sample bounds of these algorithms. In particular, we provide first-known finite-sample guarantees for $Q^\pi(\lambda)$, Tree-Backup$(\lambda)$, and Retrace$(\lambda)$, and improve the best known bounds of $Q$-trace in \citep{chen2021finite}. Moreover, we show the bias-variance trade-offs in each of these algorithms.
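For concreteness, the sketch below computes a Retrace(λ)-style backup, one member of the family the paper covers, with the clipped importance-sampling trace coefficients that control variance. It is a plain tabular sketch that assumes the expected next-state value under the target policy is supplied; it is not the paper's analysis or a production implementation.

```python
import numpy as np

def retrace_targets(q_sa, v_next, rewards, rho, gamma=0.99, lam=0.9):
    """Tabular Retrace(lambda) backup along one trajectory of length T.

    q_sa: Q(s_t, a_t) for the taken actions, shape (T,)
    v_next: E_pi[Q(s_{t+1}, .)] under the target policy, shape (T,)
    rho: importance ratios pi(a_t | s_t) / mu(a_t | s_t), shape (T,)
    """
    T = len(rewards)
    targets = np.empty(T)
    correction = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * v_next[t] - q_sa[t]
        c_next = lam * min(1.0, rho[t + 1]) if t + 1 < T else 0.0  # clipped trace coefficient
        correction = delta + gamma * c_next * correction
        targets[t] = q_sa[t] + correction
    return targets

# toy usage on a random 5-step trajectory
rng = np.random.default_rng(0)
print(retrace_targets(rng.normal(size=5), rng.normal(size=5),
                      rng.normal(size=5), rng.uniform(0.5, 2.0, size=5)))
```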
| null |
A Bi-Level Framework for Learning to Solve Combinatorial Optimization on Graphs
|
https://papers.nips.cc/paper_files/paper/2021/hash/b2f627fff19fda463cb386442eac2b3d-Abstract.html
|
Runzhong Wang, Zhigang Hua, Gan Liu, Jiayi Zhang, Junchi Yan, Feng Qi, Shuang Yang, Jun Zhou, Xiaokang Yang
|
https://papers.nips.cc/paper_files/paper/2021/hash/b2f627fff19fda463cb386442eac2b3d-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13264-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b2f627fff19fda463cb386442eac2b3d-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=nwWLJsTJfv
|
https://papers.nips.cc/paper_files/paper/2021/file/b2f627fff19fda463cb386442eac2b3d-Supplemental.pdf
|
Combinatorial Optimization (CO) has been a long-standing challenging research topic featured by its NP-hard nature. Traditionally such problems are approximately solved with heuristic algorithms which are usually fast but may sacrifice the solution quality. Currently, machine learning for combinatorial optimization (MLCO) has become a trending research topic, but most existing MLCO methods treat CO as a single-level optimization by directly learning the end-to-end solutions, which are hard to scale up and mostly limited by the capacity of ML models given the high complexity of CO. In this paper, we propose a hybrid approach to combine the best of the two worlds, in which a bi-level framework is developed with an upper-level learning method to optimize the graph (e.g. add, delete or modify edges in a graph), fused with a lower-level heuristic algorithm solving on the optimized graph. Such a bi-level approach simplifies the learning on the original hard CO and can effectively mitigate the demand for model capacity. The experiments and results on several popular CO problems like Directed Acyclic Graph scheduling, Graph Edit Distance and Hamiltonian Cycle Problem show its effectiveness over manually designed heuristics and single-level learning methods.
| null |
Improved Learning Rates of a Functional Lasso-type SVM with Sparse Multi-Kernel Representation
|
https://papers.nips.cc/paper_files/paper/2021/hash/b31df16a88ce00fed951f24b46e08649-Abstract.html
|
shaogao lv, Junhui Wang, Jiankun Liu, Yong Liu
|
https://papers.nips.cc/paper_files/paper/2021/hash/b31df16a88ce00fed951f24b46e08649-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13265-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b31df16a88ce00fed951f24b46e08649-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=MgsSQPOYNx1
|
https://papers.nips.cc/paper_files/paper/2021/file/b31df16a88ce00fed951f24b46e08649-Supplemental.pdf
|
In this paper, we provide theoretical results of estimation bounds and excess risk upper bounds for support vector machine (SVM) with sparse multi-kernel representation. These convergence rates for multi-kernel SVM are established by analyzing a Lasso-type regularized learning scheme within composite multi-kernel spaces. It is shown that the oracle rates of convergence of classifiers depend on the complexity of multi-kernels, the sparsity, a Bernstein condition and the sample size, which significantly improves on previous results even for the additive or linear cases. In summary, this paper not only provides unified theoretical results for multi-kernel SVMs, but also enriches the literature on high-dimensional nonparametric classification.
| null |
When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?
|
https://papers.nips.cc/paper_files/paper/2021/hash/b36ed8a07e3cd80ee37138524690eca1-Abstract.html
|
Lijie Fan, Sijia Liu, Pin-Yu Chen, Gaoyuan Zhang, Chuang Gan
|
https://papers.nips.cc/paper_files/paper/2021/hash/b36ed8a07e3cd80ee37138524690eca1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13266-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b36ed8a07e3cd80ee37138524690eca1-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=70kOIgjKhbA
|
https://papers.nips.cc/paper_files/paper/2021/file/b36ed8a07e3cd80ee37138524690eca1-Supplemental.pdf
|
Contrastive learning (CL) can learn generalizable feature representations and achieve state-of-the-art performance on downstream tasks by finetuning a linear classifier on top of it. However, as adversarial robustness becomes vital in image classification, it remains unclear whether or not CL is able to preserve robustness to downstream tasks. The main challenge is that in the self-supervised pretraining + supervised finetuning paradigm, adversarial robustness is easily forgotten due to a learning task mismatch from pretraining to finetuning. We call this challenge 'cross-task robustness transferability'. To address the above problem, in this paper we revisit and advance CL principles through the lens of robustness enhancement. We show that (1) the design of contrastive views matters: high-frequency components of images are beneficial to improving model robustness; (2) augmenting CL with pseudo-supervision stimulus (e.g., resorting to feature clustering) helps preserve robustness without forgetting. Equipped with our new designs, we propose AdvCL, a novel adversarial contrastive pretraining framework. We show that AdvCL is able to enhance cross-task robustness transferability without loss of model accuracy and finetuning efficiency. With a thorough experimental study, we demonstrate that AdvCL outperforms the state-of-the-art self-supervised robust learning methods across multiple datasets (CIFAR-10, CIFAR-100, and STL-10) and finetuning schemes (linear evaluation and full model finetuning).
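Point (1) suggests using a high-frequency view of an image as an extra contrastive positive. One simple way to obtain such a view, shown below, is to subtract a Gaussian-blurred copy of the image; the exact view construction in AdvCL may differ, and the blur strength `sigma` is an arbitrary choice here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def high_frequency_view(img, sigma=2.0):
    """Return a high-frequency view of an (H, W, C) image by removing a low-pass copy."""
    low = gaussian_filter(img, sigma=(sigma, sigma, 0))   # blur spatial dims only
    high = img - low
    # rescale to [0, 1] so the view can go through the usual augmentation pipeline
    return (high - high.min()) / (high.max() - high.min() + 1e-8)

img = np.random.default_rng(0).random((32, 32, 3))
views = [img, high_frequency_view(img)]   # views fed to the contrastive loss
print(views[1].shape, views[1].min() >= 0.0, views[1].max() <= 1.0)
```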
| null |
Learning Transferable Features for Point Cloud Detection via 3D Contrastive Co-training
|
https://papers.nips.cc/paper_files/paper/2021/hash/b3b25a26a0828ea5d48d8f8aa0d6f9af-Abstract.html
|
Zeng Yihan, Chunwei Wang, Yunbo Wang, Hang Xu, Chaoqiang Ye, Zhen Yang, Chao Ma
|
https://papers.nips.cc/paper_files/paper/2021/hash/b3b25a26a0828ea5d48d8f8aa0d6f9af-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13267-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b3b25a26a0828ea5d48d8f8aa0d6f9af-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=iH1_KBzbwQq
|
https://papers.nips.cc/paper_files/paper/2021/file/b3b25a26a0828ea5d48d8f8aa0d6f9af-Supplemental.pdf
|
Most existing point cloud detection models require large-scale, densely annotated datasets. They typically underperform in domain adaptation settings, due to geometry shifts caused by different physical environments or LiDAR sensor configurations. Therefore, it is challenging but valuable to learn transferable features between a labeled source domain and a novel target domain, without any access to target labels. To tackle this problem, we introduce the framework of 3D Contrastive Co-training (3D-CoCo) with two technical contributions. First, 3D-CoCo is inspired by our observation that the bird-eye-view (BEV) features are more transferable than low-level geometry features. We thus propose a new co-training architecture that includes separate 3D encoders with domain-specific parameters, as well as a BEV transformation module for learning domain-invariant features. Second, 3D-CoCo extends the approach of contrastive instance alignment to point cloud detection, whose performance was largely hindered by the mismatch between the fictitious distribution of BEV features, induced by pseudo-labels, and the true distribution. The mismatch is greatly reduced by 3D-CoCo with transformed point clouds, which are carefully designed by considering specific geometry priors. We construct new domain adaptation benchmarks using three large-scale 3D datasets. Experimental results show that our proposed 3D-CoCo effectively closes the domain gap and outperforms the state-of-the-art methods by large margins.
| null |
SILG: The Multi-domain Symbolic Interactive Language Grounding Benchmark
|
https://papers.nips.cc/paper_files/paper/2021/hash/b3e3e393c77e35a4a3f3cbd1e429b5dc-Abstract.html
|
Victor Zhong, Austin W. Hanjie, Sida Wang, Karthik Narasimhan, Luke Zettlemoyer
|
https://papers.nips.cc/paper_files/paper/2021/hash/b3e3e393c77e35a4a3f3cbd1e429b5dc-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13268-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b3e3e393c77e35a4a3f3cbd1e429b5dc-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=tW7L9dKZ0OM
| null |
Existing work in language grounding typically studies single environments. How do we build unified models that apply across multiple environments? We propose the multi-environment Symbolic Interactive Language Grounding benchmark (SILG), which unifies a collection of diverse grounded language learning environments under a common interface. SILG consists of grid-world environments that require generalization to new dynamics, entities, and partially observed worlds (RTFM, Messenger, NetHack), as well as symbolic counterparts of visual worlds that require interpreting rich natural language with respect to complex scenes (ALFWorld, Touchdown). Together, these environments provide diverse grounding challenges in richness of observation space, action space, language specification, and plan complexity. In addition, we propose the first shared model architecture for RL on these environments, and evaluate recent advances such as egocentric local convolution, recurrent state-tracking, entity-centric attention, and pretrained LMs using SILG. Our shared architecture achieves comparable performance to environment-specific architectures. Moreover, we find that many recent modelling advances do not result in significant gains on environments other than the one they were designed for. This highlights the need for a multi-environment benchmark. Finally, the best models significantly underperform humans on SILG, which suggests ample room for future work. We hope SILG enables the community to quickly identify new methodologies for language grounding that generalize to a diverse set of environments and their associated challenges.
| null |
A Surrogate Objective Framework for Prediction+Programming with Soft Constraints
|
https://papers.nips.cc/paper_files/paper/2021/hash/b427426b8acd2c2e53827970f2c2f526-Abstract.html
|
Kai Yan, Jie Yan, Chuan Luo, Liting Chen, Qingwei Lin, Dongmei Zhang
|
https://papers.nips.cc/paper_files/paper/2021/hash/b427426b8acd2c2e53827970f2c2f526-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13269-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b427426b8acd2c2e53827970f2c2f526-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=9Sa2xh4mGR
|
https://papers.nips.cc/paper_files/paper/2021/file/b427426b8acd2c2e53827970f2c2f526-Supplemental.pdf
|
Prediction+optimization is a common real-world paradigm where we have to predict problem parameters before solving the optimization problem. However, the criteria by which the prediction model is trained are often inconsistent with the goal of the downstream optimization problem. Recently, decision-focused prediction approaches, such as SPO+ and direct optimization, have been proposed to fill this gap. However, they cannot directly handle the soft constraints with the max operator required in many real-world objectives. This paper proposes a novel analytically differentiable surrogate objective framework for real-world linear and semi-definite negative quadratic programming problems with soft linear and non-negative hard constraints. This framework gives theoretical bounds on the constraints’ multipliers, and derives the closed-form solution with respect to predictive parameters, and thus gradients for any variable in the problem. We evaluate our method in three applications extended with soft constraints: synthetic linear programming, portfolio optimization, and resource provisioning, demonstrating that our method outperforms traditional two-stage methods and other decision-focused approaches.
| null |
Learning to Predict Trustworthiness with Steep Slope Loss
|
https://papers.nips.cc/paper_files/paper/2021/hash/b432f34c5a997c8e7c806a895ecc5e25-Abstract.html
|
Yan Luo, Yongkang Wong, Mohan S. Kankanhalli, Qi Zhao
|
https://papers.nips.cc/paper_files/paper/2021/hash/b432f34c5a997c8e7c806a895ecc5e25-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13270-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b432f34c5a997c8e7c806a895ecc5e25-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=cBWFSWwjBSC
|
https://papers.nips.cc/paper_files/paper/2021/file/b432f34c5a997c8e7c806a895ecc5e25-Supplemental.pdf
|
Understanding the trustworthiness of a prediction yielded by a classifier is critical for the safe and effective use of AI models. Prior efforts have been proven to be reliable on small-scale datasets. In this work, we study the problem of predicting trustworthiness on real-world large-scale datasets, where the task is more challenging due to high-dimensional features, diverse visual concepts, and a large number of samples. In such a setting, we observe that the trustworthiness predictors trained with prior-art loss functions, i.e., the cross entropy loss, focal loss, and true class probability confidence loss, are prone to view both correct predictions and incorrect predictions to be trustworthy. The reasons are two-fold. Firstly, correct predictions are generally dominant over incorrect predictions. Secondly, due to the data complexity, it is challenging to differentiate the incorrect predictions from the correct ones on real-world large-scale datasets. To improve the generalizability of trustworthiness predictors, we propose a novel steep slope loss to separate the features w.r.t. correct predictions from the ones w.r.t. incorrect predictions by two slide-like curves that oppose each other. The proposed loss is evaluated with two representative deep learning models, i.e., Vision Transformer and ResNet, as trustworthiness predictors. We conduct comprehensive experiments and analyses on ImageNet, which show that the proposed loss effectively improves the generalizability of trustworthiness predictors. The code and pre-trained trustworthiness predictors for reproducibility are available at \url{https://github.com/luoyan407/predict_trustworthiness}.
| null |
On the Periodic Behavior of Neural Network Training with Batch Normalization and Weight Decay
|
https://papers.nips.cc/paper_files/paper/2021/hash/b433da1b32b5ca96c0ba7fcb9edba97d-Abstract.html
|
Ekaterina Lobacheva, Maxim Kodryan, Nadezhda Chirkova, Andrey Malinin, Dmitry P. Vetrov
|
https://papers.nips.cc/paper_files/paper/2021/hash/b433da1b32b5ca96c0ba7fcb9edba97d-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13271-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b433da1b32b5ca96c0ba7fcb9edba97d-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=B6uDDaDoW4a
|
https://papers.nips.cc/paper_files/paper/2021/file/b433da1b32b5ca96c0ba7fcb9edba97d-Supplemental.pdf
|
Training neural networks with batch normalization and weight decay has become a common practice in recent years. In this work, we show that their combined use may result in a surprising periodic behavior of optimization dynamics: the training process regularly exhibits destabilizations that, however, do not lead to complete divergence but cause a new period of training. We rigorously investigate the mechanism underlying the discovered periodic behavior from both empirical and theoretical points of view and analyze the conditions in which it occurs in practice. We also demonstrate that periodic behavior can be regarded as a generalization of two previously opposing perspectives on training with batch normalization and weight decay, namely the equilibrium presumption and the instability presumption.
| null |
NeRV: Neural Representations for Videos
|
https://papers.nips.cc/paper_files/paper/2021/hash/b44182379bf9fae976e6ae5996e13cd8-Abstract.html
|
Hao Chen, Bo He, Hanyu Wang, Yixuan Ren, Ser Nam Lim, Abhinav Shrivastava
|
https://papers.nips.cc/paper_files/paper/2021/hash/b44182379bf9fae976e6ae5996e13cd8-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13272-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b44182379bf9fae976e6ae5996e13cd8-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=BbikqBWZTGB
|
https://papers.nips.cc/paper_files/paper/2021/file/b44182379bf9fae976e6ae5996e13cd8-Supplemental.pdf
|
We propose a novel neural representation for videos (NeRV) which encodes videos in neural networks. Unlike conventional representations that treat videos as frame sequences, we represent videos as neural networks taking a frame index as input. Given a frame index, NeRV outputs the corresponding RGB image. Video encoding in NeRV is simply fitting a neural network to video frames, and the decoding process is a simple feedforward operation. As an image-wise implicit representation, NeRV outputs the whole image and shows great efficiency compared to pixel-wise implicit representations, improving the encoding speed by $\textbf{25}\times$ to $\textbf{70}\times$ and the decoding speed by $\textbf{38}\times$ to $\textbf{132}\times$, while achieving better video quality. With such a representation, we can treat videos as neural networks, simplifying several video-related tasks. For example, conventional video compression methods are restricted by a long and complex pipeline, specifically designed for the task. In contrast, with NeRV, we can use any neural network compression method as a proxy for video compression, and achieve comparable performance to traditional frame-based video compression approaches (H.264, HEVC, etc.). Besides compression, we demonstrate the generalization of NeRV for video denoising. The source code and pre-trained model can be found at https://github.com/haochen-rye/NeRV.git.
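A toy version of the idea, frame index in, full frame out, fits in a few lines. The model below uses a sinusoidal encoding of the index and a small MLP head; the actual NeRV decoder is convolutional and far larger, so treat this only as an illustration of the image-wise implicit formulation. Encoding a video would mean fitting this network to the frames with a reconstruction loss; decoding is just the forward pass shown at the end.

```python
import torch
import torch.nn as nn

class TinyNeRV(nn.Module):
    """Maps a normalized frame index t in [0, 1] to a full RGB frame."""
    def __init__(self, num_freqs=8, h=32, w=32, hidden=256):
        super().__init__()
        self.freqs = 2.0 ** torch.arange(num_freqs)
        self.net = nn.Sequential(
            nn.Linear(2 * num_freqs, hidden), nn.GELU(),
            nn.Linear(hidden, 3 * h * w), nn.Sigmoid(),
        )
        self.h, self.w = h, w

    def forward(self, t):
        # sinusoidal positional encoding of the frame index
        ang = t[:, None] * self.freqs[None, :] * torch.pi
        enc = torch.cat([torch.sin(ang), torch.cos(ang)], dim=-1)
        return self.net(enc).view(-1, 3, self.h, self.w)

model = TinyNeRV()
frames = model(torch.tensor([0.0, 0.5, 1.0]))   # decode three frames by index
print(frames.shape)                              # torch.Size([3, 3, 32, 32])
```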
| null |
Surrogate Regret Bounds for Polyhedral Losses
|
https://papers.nips.cc/paper_files/paper/2021/hash/b4572f47b7c69e27b8e46646d9579e67-Abstract.html
|
Rafael Frongillo, Bo Waggoner
|
https://papers.nips.cc/paper_files/paper/2021/hash/b4572f47b7c69e27b8e46646d9579e67-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13273-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b4572f47b7c69e27b8e46646d9579e67-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=VUJlv99HgAZ
|
https://papers.nips.cc/paper_files/paper/2021/file/b4572f47b7c69e27b8e46646d9579e67-Supplemental.pdf
|
Surrogate risk minimization is a ubiquitous paradigm in supervised machine learning, wherein a target problem is solved by minimizing a surrogate loss on a dataset. Surrogate regret bounds, also called excess risk bounds, are a common tool to prove generalization rates for surrogate risk minimization. While surrogate regret bounds have been developed for certain classes of loss functions, such as proper losses, general results are relatively sparse. We provide two general results. The first gives a linear surrogate regret bound for any polyhedral (piecewise-linear and convex) surrogate, meaning that surrogate generalization rates translate directly to target rates. The second shows that for sufficiently non-polyhedral surrogates, the regret bound is a square root, meaning fast surrogate generalization rates translate to slow rates for the target. Together, these results suggest polyhedral surrogates are optimal in many cases.
| null |
Last iterate convergence of SGD for Least-Squares in the Interpolation regime.
|
https://papers.nips.cc/paper_files/paper/2021/hash/b4a0e0fbaa9f16d8947c49f4e610b549-Abstract.html
|
Aditya Vardhan Varre, Loucas Pillaud-Vivien, Nicolas Flammarion
|
https://papers.nips.cc/paper_files/paper/2021/hash/b4a0e0fbaa9f16d8947c49f4e610b549-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13274-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b4a0e0fbaa9f16d8947c49f4e610b549-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=zsq86HNvXr6
|
https://papers.nips.cc/paper_files/paper/2021/file/b4a0e0fbaa9f16d8947c49f4e610b549-Supplemental.pdf
|
Motivated by the recent successes of neural networks that have the ability to fit the data perfectly \emph{and} generalize well, we study the noiseless model in the fundamental least-squares setup. We assume that an optimum predictor perfectly fits the inputs and outputs $\langle \theta_* , \phi(X) \rangle = Y$, where $\phi(X)$ stands for a possibly infinite dimensional non-linear feature map. To solve this problem, we consider the estimator given by the last iterate of stochastic gradient descent (SGD) with constant step-size. In this context, our contribution is two fold: (i) \emph{from a (stochastic) optimization perspective}, we exhibit an archetypal problem where we can show explicitly the convergence of SGD final iterate for a non-strongly convex problem with constant step-size whereas usual results use some form of average and (ii) \emph{from a statistical perspective}, we give explicit non-asymptotic convergence rates in the over-parameterized setting and leverage a \emph{fine-grained} parameterization of the problem to exhibit polynomial rates that can be faster than $O(1/T)$. The link with reproducing kernel Hilbert spaces is established.
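The setting can be reproduced in a few lines: constant step-size SGD on a noiseless, over-parameterized least-squares problem, reading off the last iterate rather than an average. The dimensions and step size below are arbitrary illustrative choices that respect the usual step-size condition; this is a numerical sanity check, not the paper's fine-grained rate analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 200                           # over-parameterized: d > n, so interpolation is possible
X = rng.normal(size=(n, d)) / np.sqrt(d)
theta_star = rng.normal(size=d)
y = X @ theta_star                       # noiseless targets: an optimum fits the data exactly

theta = np.zeros(d)
step = 1.0                               # constant step size (< 2 / max_i ||x_i||^2 here)
for t in range(20000):
    i = rng.integers(n)                  # sample one data point
    theta -= step * (X[i] @ theta - y[i]) * X[i]

print(np.mean((X @ theta - y) ** 2))     # training loss of the *last* iterate -> ~0
```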
| null |
Generative vs. Discriminative: Rethinking The Meta-Continual Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/b4e267d84075f66ebd967d95331fcc03-Abstract.html
|
Mohammadamin Banayeeanzade, Rasoul Mirzaiezadeh, Hosein Hasani, Mahdieh Soleymani
|
https://papers.nips.cc/paper_files/paper/2021/hash/b4e267d84075f66ebd967d95331fcc03-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13275-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b4e267d84075f66ebd967d95331fcc03-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=soDi-HkzC1
|
https://papers.nips.cc/paper_files/paper/2021/file/b4e267d84075f66ebd967d95331fcc03-Supplemental.pdf
|
Deep neural networks have achieved human-level capabilities in various learning tasks. However, they generally lose performance in more realistic scenarios like learning in a continual manner. In contrast, humans can incorporate their prior knowledge to learn new concepts efficiently without forgetting older ones. In this work, we leverage meta-learning to encourage the model to learn how to learn continually. Inspired by human concept learning, we develop a generative classifier that efficiently uses data-driven experience to learn new concepts even from few samples while being immune to forgetting. Along with cognitive and theoretical insights, extensive experiments on standard benchmarks demonstrate the effectiveness of the proposed method. The ability to remember all previous concepts, with negligible computational and structural overheads, suggests that generative models provide a natural way for alleviating catastrophic forgetting, which is a major drawback of discriminative models.
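One way to see why a generative classifier side-steps forgetting is that each class keeps its own sufficient statistics, so learning a new class never overwrites an old one. The sketch below uses per-class diagonal Gaussians with Welford-style online updates; it is a much simpler stand-in for the meta-learned generative model in the paper and is meant only to illustrate the forgetting-free property.

```python
import numpy as np

class GaussianGenerativeClassifier:
    """Class-conditional diagonal Gaussians; new classes never overwrite old ones."""
    def __init__(self):
        self.stats = {}                     # label -> (count, running mean, running M2)

    def update(self, x, y):
        n, mu, m2 = self.stats.get(y, (0, np.zeros_like(x), np.zeros_like(x)))
        n += 1
        delta = x - mu
        mu = mu + delta / n
        m2 = m2 + delta * (x - mu)          # Welford's online variance update
        self.stats[y] = (n, mu, m2)

    def predict(self, x):
        def log_lik(s):
            n, mu, m2 = s
            var = m2 / max(n - 1, 1) + 1e-3
            return -0.5 * np.sum((x - mu) ** 2 / var + np.log(var))
        return max(self.stats, key=lambda y: log_lik(self.stats[y]))

clf = GaussianGenerativeClassifier()
rng = np.random.default_rng(0)
for y, center in enumerate([0.0, 3.0, -3.0]):   # classes arrive strictly sequentially
    for _ in range(50):
        clf.update(rng.normal(loc=center, size=4), y)
print(clf.predict(np.full(4, 3.1)))              # -> 1, earlier classes remain intact
```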
| null |
Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model
|
https://papers.nips.cc/paper_files/paper/2021/hash/b4f8e5c5fb53f5ba81072451531d5460-Abstract.html
|
Antoine Bodin, Nicolas Macris
|
https://papers.nips.cc/paper_files/paper/2021/hash/b4f8e5c5fb53f5ba81072451531d5460-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13276-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b4f8e5c5fb53f5ba81072451531d5460-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=luCVRHASXC0
|
https://papers.nips.cc/paper_files/paper/2021/file/b4f8e5c5fb53f5ba81072451531d5460-Supplemental.pdf
|
Recent evidence has shown the existence of a so-called double-descent and even triple-descent behavior for the generalization error of deep-learning models. This important phenomenon commonly appears in implemented neural network architectures, and also seems to emerge in epoch-wise curves during the training process. A recent line of research has highlighted that random matrix tools can be used to obtain precise analytical asymptotics of the generalization (and training) errors of the random feature model. In this contribution, we analyze the whole temporal behavior of the generalization and training errors under gradient flow for the random feature model. We show that in the asymptotic limit of large system size the full time-evolution path of both errors can be calculated analytically. This allows us to observe how the double and triple descents develop over time, if and when early stopping is an option, and also observe time-wise descent structures. Our techniques are based on Cauchy complex integral representations of the errors together with recent random matrix methods based on linear pencils.
| null |
Rethinking Graph Transformers with Spectral Attention
|
https://papers.nips.cc/paper_files/paper/2021/hash/b4fd1d2cb085390fbbadae65e07876a7-Abstract.html
|
Devin Kreuzer, Dominique Beaini, Will Hamilton, Vincent Létourneau, Prudencio Tossou
|
https://papers.nips.cc/paper_files/paper/2021/hash/b4fd1d2cb085390fbbadae65e07876a7-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13277-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b4fd1d2cb085390fbbadae65e07876a7-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=huAdB-Tj4yG
|
https://papers.nips.cc/paper_files/paper/2021/file/b4fd1d2cb085390fbbadae65e07876a7-Supplemental.pdf
|
In recent years, the Transformer architecture has proven to be very successful in sequence processing, but its application to other data structures, such as graphs, has remained limited due to the difficulty of properly defining positions. Here, we present the \textit{Spectral Attention Network} (SAN), which uses a learned positional encoding (LPE) that can take advantage of the full Laplacian spectrum to learn the position of each node in a given graph. This LPE is then added to the node features of the graph and passed to a fully-connected Transformer. By leveraging the full spectrum of the Laplacian, our model is theoretically powerful in distinguishing graphs, and can better detect similar sub-structures from their resonance. Further, by fully connecting the graph, the Transformer does not suffer from over-squashing, an information bottleneck of most GNNs, and enables better modeling of physical phenomena such as heat transfer and electric interaction. When tested empirically on a set of 4 standard datasets, our model performs on par with or better than state-of-the-art GNNs, and outperforms any attention-based model by a wide margin, becoming the first fully-connected architecture to perform well on graph benchmarks.
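The starting point of the method, positions derived from the Laplacian spectrum, can be sketched directly: compute the normalized Laplacian's eigendecomposition and append low-frequency eigenvectors to the node features before the Transformer. SAN learns its positional encoding from the full spectrum rather than using raw eigenvectors, so the snippet below is only the fixed-encoding baseline of that idea, with an arbitrary choice of k.

```python
import numpy as np

def laplacian_positional_encoding(adj, k=4):
    """Return the k lowest-frequency normalized-Laplacian eigenvectors as node encodings."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(lap)        # full spectrum; SAN learns an encoding from it
    return eigvecs[:, :k], eigvals[:k]

# 5-node cycle graph
n = 5
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
pe, freqs = laplacian_positional_encoding(adj, k=3)
node_features = np.random.default_rng(0).normal(size=(n, 8))
tokens = np.concatenate([node_features, pe], axis=1)   # input to a fully-connected Transformer
print(tokens.shape)   # (5, 11)
```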
| null |
Perceptual Score: What Data Modalities Does Your Model Perceive?
|
https://papers.nips.cc/paper_files/paper/2021/hash/b51a15f382ac914391a58850ab343b00-Abstract.html
|
Itai Gat, Idan Schwartz, Alex Schwing
|
https://papers.nips.cc/paper_files/paper/2021/hash/b51a15f382ac914391a58850ab343b00-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13278-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b51a15f382ac914391a58850ab343b00-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=0V2Xd-26Kj
|
https://papers.nips.cc/paper_files/paper/2021/file/b51a15f382ac914391a58850ab343b00-Supplemental.pdf
|
Machine learning advances in the last decade have relied significantly on large-scale datasets that continue to grow in size. Increasingly, those datasets also contain different data modalities. However, large multi-modal datasets are hard to annotate, and annotations may contain biases that we are often unaware of. Deep-net-based classifiers, in turn, are prone to exploit those biases and to find shortcuts. To study and quantify this concern, we introduce the perceptual score, a metric that assesses the degree to which a model relies on the different subsets of the input features, i.e., modalities. Using the perceptual score, we find a surprisingly consistent trend across four popular datasets: recent, more accurate state-of-the-art multi-modal models for visual question-answering or visual dialog tend to perceive the visual data less than their predecessors. This is concerning as answers are hence increasingly inferred from textual cues only. Using the perceptual score also helps to analyze model biases by decomposing the score into data subset contributions. We hope to spur a discussion on the perceptiveness of multi-modal models and also hope to encourage the community working on multi-modal classifiers to start quantifying perceptiveness via the proposed perceptual score.
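A permutation-based estimate in the spirit of the perceptual score: shuffle one modality across the batch and measure the resulting accuracy drop. The exact normalization in the paper may differ; the toy model below deliberately ignores images, so its image score is near zero while its text score is large.

```python
import numpy as np

def perceptual_score(model, text, image, labels, modality="image", seed=0, trials=5):
    """Accuracy drop when one modality is shuffled across the batch."""
    rng = np.random.default_rng(seed)
    base = np.mean(model(text, image) == labels)
    drops = []
    for _ in range(trials):
        perm = rng.permutation(len(labels))
        if modality == "image":
            acc = np.mean(model(text, image[perm]) == labels)
        else:
            acc = np.mean(model(text[perm], image) == labels)
        drops.append(base - acc)
    return float(np.mean(drops))

# toy model that ignores images entirely: its image perceptual score is ~0
model = lambda text, image: (text.sum(axis=1) > 0).astype(int)
rng = np.random.default_rng(1)
text, image = rng.normal(size=(200, 4)), rng.normal(size=(200, 4))
labels = (text.sum(axis=1) > 0).astype(int)
print(perceptual_score(model, text, image, labels, "image"))  # ~0.0
print(perceptual_score(model, text, image, labels, "text"))   # large positive drop
```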
| null |
PiRank: Scalable Learning To Rank via Differentiable Sorting
|
https://papers.nips.cc/paper_files/paper/2021/hash/b5200c6107fc3d41d19a2b66835c3974-Abstract.html
|
Robin Swezey, Aditya Grover, Bruno Charron, Stefano Ermon
|
https://papers.nips.cc/paper_files/paper/2021/hash/b5200c6107fc3d41d19a2b66835c3974-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13279-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b5200c6107fc3d41d19a2b66835c3974-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=dL8p6rLFTS3
|
https://papers.nips.cc/paper_files/paper/2021/file/b5200c6107fc3d41d19a2b66835c3974-Supplemental.pdf
|
A key challenge with machine learning approaches for ranking is the gap between the performance metrics of interest and the surrogate loss functions that can be optimized with gradient-based methods. This gap arises because ranking metrics typically involve a sorting operation which is not differentiable w.r.t. the model parameters. Prior works have proposed surrogates that are loosely related to ranking metrics or simple smoothed versions thereof, and often fail to scale to real-world applications. We propose PiRank, a new class of differentiable surrogates for ranking, which employ a continuous, temperature-controlled relaxation to the sorting operator based on NeuralSort [1]. We show that PiRank exactly recovers the desired metrics in the limit of zero temperature and further propose a divide-and-conquer extension that scales favorably to large list sizes, both in theory and practice. Empirically, we demonstrate the role of larger list sizes during training and show that PiRank significantly improves over comparable approaches on publicly available Internet-scale learning-to-rank benchmarks.
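The core building block, a temperature-controlled relaxation of the sort operator, follows NeuralSort [1]: each row of the relaxed permutation matrix is a softmax that sharpens to the true descending-sort permutation as the temperature goes to zero. The sketch below implements only that relaxation; the PiRank losses and the divide-and-conquer extension are built on top of it, and the example values are arbitrary.

```python
import torch

def neural_sort(scores, tau=1.0):
    """Continuous relaxation of the sorting permutation matrix (NeuralSort).

    scores: (n,) predicted relevance scores. Returns a row-stochastic (n, n) matrix
    that approaches the descending-sort permutation as tau -> 0.
    """
    s = scores.unsqueeze(-1)                               # (n, 1)
    n = s.shape[0]
    A = (s - s.t()).abs()                                  # pairwise |s_j - s_k|
    B = A @ torch.ones(n, 1)                               # row sums of A, shape (n, 1)
    scaling = (n + 1 - 2 * torch.arange(1, n + 1, dtype=s.dtype)).unsqueeze(-1)
    C = scaling * s.t()                                    # (n, n): (n + 1 - 2i) * s_j
    return torch.softmax((C - B.t()) / tau, dim=-1)

scores = torch.tensor([0.1, 2.0, -1.0, 0.7])
P = neural_sort(scores, tau=0.1)
print(P.argmax(dim=-1))   # ~ descending-sort order of the scores: tensor([1, 3, 0, 2])
```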
| null |
Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data
|
https://papers.nips.cc/paper_files/paper/2021/hash/b534ba68236ba543ae44b22bd110a1d6-Abstract.html
|
Liming Jiang, Bo Dai, Wayne Wu, Chen Change Loy
|
https://papers.nips.cc/paper_files/paper/2021/hash/b534ba68236ba543ae44b22bd110a1d6-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13280-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b534ba68236ba543ae44b22bd110a1d6-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=spjlJ4jeM_
|
https://papers.nips.cc/paper_files/paper/2021/file/b534ba68236ba543ae44b22bd110a1d6-Supplemental.zip
|
Generative adversarial networks (GANs) typically require ample data for training in order to synthesize high-fidelity images. Recent studies have shown that training GANs with limited data remains formidable due to discriminator overfitting, the underlying cause that impedes the generator's convergence. This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA) to encourage healthy competition between the generator and the discriminator. As an alternative method to existing approaches that rely on standard data augmentations or model regularization, APA alleviates overfitting by employing the generator itself to augment the real data distribution with generated images, which deceives the discriminator adaptively. Extensive experiments demonstrate the effectiveness of APA in improving synthesis quality in the low-data regime. We provide a theoretical analysis to examine the convergence and rationality of our new training strategy. APA is simple and effective. It can be added seamlessly to powerful contemporary GANs, such as StyleGAN2, with negligible computational cost. Code: https://github.com/EndlessSora/DeceiveD.
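The mechanism can be sketched in a few lines: with an adaptive probability p, the batch presented to the discriminator as "real" is replaced by generator samples, and p is nudged up when the discriminator looks overconfident on real data. The p-update rule below is a simplified stand-in for the paper's heuristic, and the stand-in generator exists only to make the snippet runnable.

```python
import torch

def apa_real_batch(real, generator, z_dim, p):
    """With probability p, show generator samples to D in place of the real batch."""
    if torch.rand(()) < p:
        with torch.no_grad():
            z = torch.randn(real.shape[0], z_dim)
            return generator(z)          # pseudo "real" images deceive D adaptively
    return real

def update_p(p, d_logits_real, step=0.01):
    # crude overfitting signal: average sign of D's logits on real data
    overfit = torch.sign(d_logits_real).mean().item()
    return float(min(max(p + step * (1 if overfit > 0 else -1), 0.0), 1.0))

# toy usage with a stand-in generator mapping z (B, 16) -> "image" (B, 8)
g = lambda z: torch.tanh(z @ torch.randn(16, 8))
real = torch.randn(4, 8)
p = update_p(0.2, d_logits_real=torch.randn(4))
batch = apa_real_batch(real, g, z_dim=16, p=p)
print(p, batch.shape)
```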
| null |
CoFrNets: Interpretable Neural Architecture Inspired by Continued Fractions
|
https://papers.nips.cc/paper_files/paper/2021/hash/b538f279cb2ca36268b23f557a831508-Abstract.html
|
Isha Puri, Amit Dhurandhar, Tejaswini Pedapati, Karthikeyan Shanmugam, Dennis Wei, Kush R. Varshney
|
https://papers.nips.cc/paper_files/paper/2021/hash/b538f279cb2ca36268b23f557a831508-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13281-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b538f279cb2ca36268b23f557a831508-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=kGXlIEQgvC
|
https://papers.nips.cc/paper_files/paper/2021/file/b538f279cb2ca36268b23f557a831508-Supplemental.pdf
|
In recent years there has been a considerable amount of research on local post hoc explanations for neural networks. However, work on building interpretable neural architectures has been relatively sparse. In this paper, we present a novel neural architecture, CoFrNet, inspired by the form of continued fractions which are known to have many attractive properties in number theory, such as fast convergence of approximations to real numbers. We show that CoFrNets can be efficiently trained as well as interpreted leveraging their particular functional form. Moreover, we prove that such architectures are universal approximators based on a proof strategy that is different than the typical strategy used to prove universal approximation results for neural networks based on infinite width (or depth), which is likely to be of independent interest. We experiment on nonlinear synthetic functions and are able to accurately model as well as estimate feature attributions and even higher order terms in some cases, which is a testament to the representational power as well as interpretability of such architectures. To further showcase the power of CoFrNets, we experiment on seven real datasets spanning tabular, text and image modalities, and show that they are either comparable or significantly better than other interpretable models and multilayer perceptrons, sometimes approaching the accuracies of state-of-the-art models.
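The functional form is a ladder of continued fractions whose rungs are linear in the input, which is what makes the units directly readable. A minimal evaluation routine is sketched below; the epsilon guard against small denominators and the depth are illustrative choices, not the paper's exact training-time safeguards.

```python
import numpy as np

def cofr_ladder(x, weights, eps=1e-3):
    """Evaluate a continued-fraction ladder: a0(x) + 1/(a1(x) + 1/(a2(x) + ...)).

    Each rung a_k(x) = w_k . x is linear in the input; eps keeps denominators
    away from zero so the evaluation stays finite.
    """
    value = 0.0
    for w in reversed(weights[1:]):        # evaluate from the deepest rung upward
        denom = w @ x + value
        if abs(denom) < eps:
            denom = eps if denom >= 0 else -eps
        value = 1.0 / denom
    return weights[0] @ x + value

rng = np.random.default_rng(0)
weights = [rng.normal(size=4) for _ in range(5)]   # depth-5 ladder on 4 features
print(cofr_ladder(rng.normal(size=4), weights))
```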
| null |
Iterative Teaching by Label Synthesis
|
https://papers.nips.cc/paper_files/paper/2021/hash/b5488aeff42889188d03c9895255cecc-Abstract.html
|
Weiyang Liu, Zhen Liu, Hanchen Wang, Liam Paull, Bernhard Schölkopf, Adrian Weller
|
https://papers.nips.cc/paper_files/paper/2021/hash/b5488aeff42889188d03c9895255cecc-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13282-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b5488aeff42889188d03c9895255cecc-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=9rphbXqgmqM
|
https://papers.nips.cc/paper_files/paper/2021/file/b5488aeff42889188d03c9895255cecc-Supplemental.pdf
|
In this paper, we consider the problem of iterative machine teaching, where a teacher provides examples sequentially based on the current iterative learner. In contrast to previous methods that have to scan over the entire pool and select teaching examples from it in each iteration, we propose a label synthesis teaching framework where the teacher randomly selects input teaching examples (e.g., images) and then synthesizes suitable outputs (e.g., labels) for them. We show that this framework can avoid costly example selection while still provably achieving exponential teachability. We propose multiple novel teaching algorithms in this framework. Finally, we empirically demonstrate the value of our framework.
| null |
Variational Diffusion Models
|
https://papers.nips.cc/paper_files/paper/2021/hash/b578f2a52a0229873fefc2a4b06377fa-Abstract.html
|
Diederik Kingma, Tim Salimans, Ben Poole, Jonathan Ho
|
https://papers.nips.cc/paper_files/paper/2021/hash/b578f2a52a0229873fefc2a4b06377fa-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13283-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b578f2a52a0229873fefc2a4b06377fa-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=2LdBqxc1Yv
|
https://papers.nips.cc/paper_files/paper/2021/file/b578f2a52a0229873fefc2a4b06377fa-Supplemental.pdf
|
Diffusion-based generative models have demonstrated a capacity for perceptually impressive synthesis, but can they also be great likelihood-based models? We answer this in the affirmative, and introduce a family of diffusion-based generative models that obtain state-of-the-art likelihoods on standard image density estimation benchmarks. Unlike other diffusion-based models, our method allows for efficient optimization of the noise schedule jointly with the rest of the model. We show that the variational lower bound (VLB) simplifies to a remarkably short expression in terms of the signal-to-noise ratio of the diffused data, thereby improving our theoretical understanding of this model class. Using this insight, we prove an equivalence between several models proposed in the literature. In addition, we show that the continuous-time VLB is invariant to the noise schedule, except for the signal-to-noise ratio at its endpoints. This enables us to learn a noise schedule that minimizes the variance of the resulting VLB estimator, leading to faster optimization. Combining these advances with architectural improvements, we obtain state-of-the-art likelihoods on image density estimation benchmarks, outperforming autoregressive models that have dominated these benchmarks for many years, with often significantly faster optimization. In addition, we show how to use the model as part of a bits-back compression scheme, and demonstrate lossless compression rates close to the theoretical optimum.
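The signal-to-noise-ratio view can be sketched for the variance-preserving case: the forward process depends on time only through gamma(t), with alpha_t^2 = sigmoid(-gamma(t)) and sigma_t^2 = sigmoid(gamma(t)), so SNR(t) = exp(-gamma(t)). In the paper gamma is a learned monotonic network and the VLB is optimized jointly with it; below gamma is a fixed linear schedule purely for illustration.

```python
import torch

def gamma(t, gamma_min=-6.0, gamma_max=6.0):
    # negative log signal-to-noise ratio; linear here, a learned monotonic net in the paper
    return gamma_min + (gamma_max - gamma_min) * t

def diffuse(x, t):
    """Variance-preserving forward process q(z_t | x) parameterized by the SNR."""
    g = gamma(t)
    alpha = torch.sqrt(torch.sigmoid(-g))      # alpha_t^2 = sigmoid(-gamma(t))
    sigma = torch.sqrt(torch.sigmoid(g))       # sigma_t^2 = sigmoid(gamma(t)); alpha^2 + sigma^2 = 1
    eps = torch.randn_like(x)
    return alpha * x + sigma * eps, eps

x = torch.randn(8, 3, 4, 4)
z_mid, eps = diffuse(x, torch.tensor(0.5))
snr = torch.exp(-gamma(torch.tensor(0.5)))     # SNR(t) = exp(-gamma(t))
print(z_mid.shape, float(snr))
```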
| null |
FastCorrect: Fast Error Correction with Edit Alignment for Automatic Speech Recognition
|
https://papers.nips.cc/paper_files/paper/2021/hash/b597460c506e8e35fb0cc1c1905dd3bc-Abstract.html
|
Yichong Leng, Xu Tan, Linchen Zhu, Jin Xu, Renqian Luo, Linquan Liu, Tao Qin, Xiangyang Li, Edward Lin, Tie-Yan Liu
|
https://papers.nips.cc/paper_files/paper/2021/hash/b597460c506e8e35fb0cc1c1905dd3bc-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13284-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b597460c506e8e35fb0cc1c1905dd3bc-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=N3oi7URBakV
|
https://papers.nips.cc/paper_files/paper/2021/file/b597460c506e8e35fb0cc1c1905dd3bc-Supplemental.zip
|
Error correction techniques have been used to refine the output sentences from automatic speech recognition (ASR) models and achieve a lower word error rate (WER) than the original ASR outputs. Previous works usually use a sequence-to-sequence model to correct an ASR output sentence autoregressively, which causes large latency and cannot be deployed in online ASR services. A straightforward solution to reduce latency, inspired by non-autoregressive (NAR) neural machine translation, is to use an NAR sequence generation model for ASR error correction, which, however, comes at the cost of a significantly increased ASR error rate. In this paper, observing distinctive error patterns and correction operations (i.e., insertion, deletion, and substitution) in ASR, we propose FastCorrect, a novel NAR error correction model based on edit alignment. In training, FastCorrect aligns each source token from an ASR output sentence to the target tokens from the corresponding ground-truth sentence based on the edit distance between the source and target sentences, and extracts the number of target tokens corresponding to each source token during editing/correction, which is then used to train a length predictor and to adjust the source tokens to match the length of the target sentence for parallel generation. In inference, the token number predicted by the length predictor is used to adjust the source tokens for target sequence generation. Experiments on the public AISHELL-1 dataset and an internal industrial-scale ASR dataset show the effectiveness of FastCorrect for ASR error correction: 1) it speeds up inference by 6-9 times and maintains accuracy (8-14% WER reduction) compared with the autoregressive correction model; and 2) it outperforms the popular NAR models adopted in neural machine translation and text editing by a large margin.
| null |
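A small self-contained sketch of the edit-alignment step described in the FastCorrect abstract above: align source (ASR output) tokens to target (ground-truth) tokens along a minimum-edit-distance path and count how many target tokens each source token should produce. The tie-breaking rules (e.g., which neighbouring source token absorbs an insertion) are simplifying assumptions, not the paper's exact procedure.

```python
def edit_alignment(source, target):
    """Count, for every source token, how many target tokens it aligns to along one minimum-edit-distance
    path (deletions give a count of 0; insertions are attached to an adjacent source token)."""
    m, n = len(source), len(target)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if source[i - 1] == target[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + sub)
    counts = [0] * m
    i, j = m, n
    while i > 0 or j > 0:
        sub = None if i == 0 or j == 0 else (0 if source[i - 1] == target[j - 1] else 1)
        if sub is not None and dp[i][j] == dp[i - 1][j - 1] + sub:
            counts[i - 1] += 1          # match / substitution: one target token for this source token
            i, j = i - 1, j - 1
        elif j > 0 and dp[i][j] == dp[i][j - 1] + 1:
            counts[max(i - 1, 0)] += 1  # insertion: attach the extra target token to a neighbouring source token
            j -= 1
        else:
            i -= 1                      # deletion: this source token emits no target token
    return counts

# The per-token counts sum to the target length; they are what the length predictor is trained to output.
print(edit_alignment("the cat sat".split(), "the black cat sat".split()))  # [2, 1, 1]
```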
Integrated Latent Heterogeneity and Invariance Learning in Kernel Space
|
https://papers.nips.cc/paper_files/paper/2021/hash/b59a51a3c0bf9c5228fde841714f523a-Abstract.html
|
Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen
|
https://papers.nips.cc/paper_files/paper/2021/hash/b59a51a3c0bf9c5228fde841714f523a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13285-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b59a51a3c0bf9c5228fde841714f523a-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=5oX9BvY5VTU
|
https://papers.nips.cc/paper_files/paper/2021/file/b59a51a3c0bf9c5228fde841714f523a-Supplemental.pdf
|
The ability to generalize under distributional shifts is essential to reliable machine learning, while models optimized with empirical risk minimization usually fail on non-$i.i.d.$ testing data. Recently, invariant learning methods for out-of-distribution (OOD) generalization have proposed to find causally invariant relationships using multiple environments. However, modern datasets are frequently multi-sourced without explicit source labels, rendering many invariant learning methods inapplicable. In this paper, we propose the Kernelized Heterogeneous Risk Minimization (KerHRM) algorithm, which achieves both latent heterogeneity exploration and invariant learning in kernel space, and then gives feedback to the original neural network by providing an invariant gradient direction. We theoretically justify our algorithm and empirically validate its effectiveness with extensive experiments.
| null |
Hierarchical Reinforcement Learning with Timed Subgoals
|
https://papers.nips.cc/paper_files/paper/2021/hash/b59c21a078fde074a6750e91ed19fb21-Abstract.html
|
Nico Gürtler, Dieter Büchler, Georg Martius
|
https://papers.nips.cc/paper_files/paper/2021/hash/b59c21a078fde074a6750e91ed19fb21-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13286-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b59c21a078fde074a6750e91ed19fb21-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=96ULbah4DC
|
https://papers.nips.cc/paper_files/paper/2021/file/b59c21a078fde074a6750e91ed19fb21-Supplemental.pdf
|
Hierarchical reinforcement learning (HRL) holds great potential for sample-efficient learning on challenging long-horizon tasks. In particular, letting a higher level assign subgoals to a lower level has been shown to enable fast learning on difficult problems. However, such subgoal-based methods have been designed with static reinforcement learning environments in mind and consequently struggle with dynamic elements beyond the immediate control of the agent even though they are ubiquitous in real-world problems. In this paper, we introduce Hierarchical reinforcement learning with Timed Subgoals (HiTS), an HRL algorithm that enables the agent to adapt its timing to a dynamic environment by not only specifying what goal state is to be reached but also when. We discuss how communicating with a lower level in terms of such timed subgoals results in a more stable learning problem for the higher level. Our experiments on a range of standard benchmarks and three new challenging dynamic reinforcement learning environments show that our method is capable of sample-efficient learning where an existing state-of-the-art subgoal-based HRL method fails to learn stable solutions.
| null |
Fair Scheduling for Time-dependent Resources
|
https://papers.nips.cc/paper_files/paper/2021/hash/b5b1d9ada94bb80609d21eecf7a2ce7a-Abstract.html
|
Bo Li, Minming Li, Ruilong Zhang
|
https://papers.nips.cc/paper_files/paper/2021/hash/b5b1d9ada94bb80609d21eecf7a2ce7a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13287-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b5b1d9ada94bb80609d21eecf7a2ce7a-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=QMJb9BvLqXU
|
https://papers.nips.cc/paper_files/paper/2021/file/b5b1d9ada94bb80609d21eecf7a2ce7a-Supplemental.pdf
|
We study a fair resource scheduling problem, where a set of interval jobs is to be allocated to heterogeneous machines controlled by intelligent agents. Each job is associated with a release time, a deadline, and a processing time, such that it can be processed if its complete processing period lies between its release time and deadline. The machines gain possibly different utilities by processing different jobs, and all jobs assigned to the same machine should be processed without overlap. We consider two widely studied solution concepts, namely maximin share fairness and envy-freeness. For both criteria, we discuss the extent to which fair allocations exist and present constant approximation algorithms for various settings.
| null |
SNIPS: Solving Noisy Inverse Problems Stochastically
|
https://papers.nips.cc/paper_files/paper/2021/hash/b5c01503041b70d41d80e3dbe31bbd8c-Abstract.html
|
Bahjat Kawar, Gregory Vaksman, Michael Elad
|
https://papers.nips.cc/paper_files/paper/2021/hash/b5c01503041b70d41d80e3dbe31bbd8c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13288-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b5c01503041b70d41d80e3dbe31bbd8c-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=pBKOx_dxYAN
|
https://papers.nips.cc/paper_files/paper/2021/file/b5c01503041b70d41d80e3dbe31bbd8c-Supplemental.pdf
|
In this work we introduce a novel stochastic algorithm dubbed SNIPS, which draws samples from the posterior distribution of any linear inverse problem, where the observation is assumed to be contaminated by additive white Gaussian noise. Our solution incorporates ideas from Langevin dynamics and Newton's method, and exploits a pre-trained minimum mean squared error (MMSE) Gaussian denoiser. The proposed approach relies on an intricate derivation of the posterior score function that includes a singular value decomposition (SVD) of the degradation operator, in order to obtain a tractable iterative algorithm for the desired sampling. Due to its stochasticity, the algorithm can produce multiple high perceptual quality samples for the same noisy observation. We demonstrate the abilities of the proposed paradigm for image deblurring, super-resolution, and compressive sensing. We show that the samples produced are sharp, detailed and consistent with the given measurements, and their diversity exposes the inherent uncertainty in the inverse problem being solved.
| null |
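To make the ingredients of SNIPS concrete, the sketch below shows two generic building blocks the abstract mentions: turning a pretrained MMSE Gaussian denoiser into a score estimate (via Tweedie's formula) and running annealed Langevin dynamics with it. The actual algorithm additionally works in the SVD basis of the degradation operator and conditions on the noisy measurements, which this simplified, hypothetical sketch omits; `denoiser` is a user-supplied network and all constants are illustrative.

```python
import numpy as np

def score_from_denoiser(denoiser, x, sigma):
    """Tweedie's formula: for an MMSE Gaussian denoiser, grad_x log p_sigma(x) ~ (denoiser(x, sigma) - x) / sigma**2."""
    return (denoiser(x, sigma) - x) / sigma ** 2

def annealed_langevin(denoiser, x0, sigmas, steps_per_level=100, step_scale=2e-5, rng=None):
    """Plain annealed Langevin dynamics over a decreasing noise schedule `sigmas` (largest first)."""
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    for sigma in sigmas:
        alpha = step_scale * (sigma / sigmas[-1]) ** 2      # step size shrinks with the noise level
        for _ in range(steps_per_level):
            noise = rng.standard_normal(x.shape)
            x = x + alpha * score_from_denoiser(denoiser, x, sigma) + np.sqrt(2 * alpha) * noise
    return x
```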
Stateful ODE-Nets using Basis Function Expansions
|
https://papers.nips.cc/paper_files/paper/2021/hash/b5d62aa6024ab6a65a12c78c4c2d4efc-Abstract.html
|
Alejandro Queiruga, N. Benjamin Erichson, Liam Hodgkinson, Michael W. Mahoney
|
https://papers.nips.cc/paper_files/paper/2021/hash/b5d62aa6024ab6a65a12c78c4c2d4efc-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13289-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b5d62aa6024ab6a65a12c78c4c2d4efc-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=KvjtYlrmAj
|
https://papers.nips.cc/paper_files/paper/2021/file/b5d62aa6024ab6a65a12c78c4c2d4efc-Supplemental.pdf
|
The recently-introduced class of ordinary differential equation networks (ODE-Nets) establishes a fruitful connection between deep learning and dynamical systems. In this work, we reconsider formulations of the weights as continuous-in-depth functions using linear combinations of basis functions, which enables us to leverage parameter transformations such as function projections. In turn, this view allows us to formulate a novel stateful ODE-Block that handles stateful layers. The benefits of this new ODE-Block are twofold: first, it enables incorporating meaningful continuous-in-depth batch normalization layers to achieve state-of-the-art performance; second, it enables compressing the weights through a change of basis, without retraining, while maintaining near state-of-the-art performance and reducing both inference time and memory footprint. Performance is demonstrated by applying our stateful ODE-Block to (a) image classification tasks using convolutional units and (b) sentence-tagging tasks using transformer encoder units.
| null |
Beyond the Signs: Nonparametric Tensor Completion via Sign Series
|
https://papers.nips.cc/paper_files/paper/2021/hash/b60c5ab647a27045b462934977ccad9a-Abstract.html
|
Chanwoo Lee, Miaoyan Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/b60c5ab647a27045b462934977ccad9a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13290-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b60c5ab647a27045b462934977ccad9a-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=rkA36z2plsI
|
https://papers.nips.cc/paper_files/paper/2021/file/b60c5ab647a27045b462934977ccad9a-Supplemental.pdf
|
We consider the problem of tensor estimation from noisy observations with possibly missing entries. A nonparametric approach to tensor completion is developed based on a new model that we coin sign representable tensors. The model represents the signal tensor of interest using a series of structured sign tensors. Unlike earlier methods, the sign series representation effectively addresses both low- and high-rank signals, while encompassing many existing tensor models---including CP models, Tucker models, single index models, and structured tensors with repeating entries---as special cases. We provably reduce the tensor estimation problem to a series of structured classification tasks, and we develop a learning reduction machinery to empower existing low-rank tensor algorithms for more challenging high-rank estimation. Excess risk bounds, estimation errors, and sample complexities are established. We demonstrate that our approach outperforms previous methods on two datasets, one on human brain connectivity networks and the other on topic data mining.
| null |
Functional Variational Inference based on Stochastic Process Generators
|
https://papers.nips.cc/paper_files/paper/2021/hash/b613e70fd9f59310cf0a8d33de3f2800-Abstract.html
|
Chao Ma, José Miguel Hernández-Lobato
|
https://papers.nips.cc/paper_files/paper/2021/hash/b613e70fd9f59310cf0a8d33de3f2800-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13291-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b613e70fd9f59310cf0a8d33de3f2800-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=KLILoGYuOfw
|
https://papers.nips.cc/paper_files/paper/2021/file/b613e70fd9f59310cf0a8d33de3f2800-Supplemental.pdf
|
Bayesian inference in the space of functions has been an important topic for Bayesian modeling in the past. In this paper, we propose a new solution to this problem called Functional Variational Inference (FVI). In FVI, we minimize a divergence in function space between the variational distribution and the posterior process. This is done by using as functional variational family a new class of flexible distributions called Stochastic Process Generators (SPGs), which are cleverly designed so that the functional ELBO can be estimated efficiently using analytic solutions and mini-batch sampling. FVI can be applied to stochastic process priors when random function samples from those priors are available. Our experiments show that FVI consistently outperforms weight-space and function space VI methods on several tasks, which validates the effectiveness of our approach.
| null |
TTT++: When Does Self-Supervised Test-Time Training Fail or Thrive?
|
https://papers.nips.cc/paper_files/paper/2021/hash/b618c3210e934362ac261db280128c22-Abstract.html
|
Yuejiang Liu, Parth Kothari, Bastien van Delft, Baptiste Bellot-Gurlet, Taylor Mordan, Alexandre Alahi
|
https://papers.nips.cc/paper_files/paper/2021/hash/b618c3210e934362ac261db280128c22-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13292-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b618c3210e934362ac261db280128c22-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=86NHK__yFDl
|
https://papers.nips.cc/paper_files/paper/2021/file/b618c3210e934362ac261db280128c22-Supplemental.pdf
|
Test-time training (TTT) through self-supervised learning (SSL) is an emerging paradigm to tackle distributional shifts. Despite encouraging results, it remains unclear when this approach thrives or fails. In this work, we first provide an in-depth look at its limitations and show that TTT can possibly deteriorate, instead of improving, the test-time performance in the presence of severe distribution shifts. To address this issue, we introduce a test-time feature alignment strategy utilizing offline feature summarization and online moment matching, which regularizes adaptation without revisiting training data. We further scale this strategy in the online setting through batch-queue decoupling to enable robust moment estimates even with limited batch size. Given aligned feature distributions, we then shed light on the strong potential of TTT by theoretically analyzing its performance post adaptation. This analysis motivates our use of more informative self-supervision in the form of contrastive learning for visual recognition problems. We empirically demonstrate that our modified version of test-time training, termed TTT++, outperforms state-of-the-art methods by significant margins on several benchmarks. Our result indicates that storing and exploiting extra information, in addition to model parameters, can be a promising direction towards robust test-time adaptation.
| null |
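A minimal sketch of the offline feature summarization and online moment matching described in the TTT++ abstract above. It assumes plain mean/covariance statistics and omits the batch-queue decoupling used for small online batches; the function names are illustrative, not the authors' API.

```python
import torch

def summarize_features(feats):
    """Mean and covariance of a feature matrix of shape (num_samples, dim); computed offline on training data."""
    mu = feats.mean(dim=0)
    centered = feats - mu
    cov = centered.T @ centered / (feats.shape[0] - 1)
    return mu, cov

def alignment_loss(test_feats, mu_src, cov_src):
    """Online moment matching: penalize the gap between test-batch statistics and the stored offline summary,
    so adaptation can be regularized without revisiting the training data."""
    mu_t, cov_t = summarize_features(test_feats)
    return ((mu_t - mu_src) ** 2).sum() + ((cov_t - cov_src) ** 2).sum()
```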
Double Machine Learning Density Estimation for Local Treatment Effects with Instruments
|
https://papers.nips.cc/paper_files/paper/2021/hash/b61a560ed1b918340a0ddd00e08c990e-Abstract.html
|
Yonghan Jung, Jin Tian, Elias Bareinboim
|
https://papers.nips.cc/paper_files/paper/2021/hash/b61a560ed1b918340a0ddd00e08c990e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13293-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b61a560ed1b918340a0ddd00e08c990e-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=mSuBvrUJFsF
|
https://papers.nips.cc/paper_files/paper/2021/file/b61a560ed1b918340a0ddd00e08c990e-Supplemental.pdf
|
Local treatment effects are a common quantity found throughout the empirical sciences that measure the treatment effect among those who comply with what they are assigned. Most of the literature is focused on estimating the average of such quantity, which is called the ``local average treatment effect (LATE)'' [Imbens and Angrist, 1994]. In this work, we study how to estimate the density of the local treatment effect, which is naturally more informative than its average. Specifically, we develop two families of methods for this task, namely, kernel-smoothing and model-based approaches. The kernel-smoothing-based approach estimates the density through some smooth kernel functions. The model-based approach estimates the density by projecting it onto a finite-dimensional density class. For both approaches, we derive the corresponding double/debiased machine learning-based estimators [Chernozhukov et al., 2018]. We further study the asymptotic convergence rates of the estimators and show that they are robust to the biases in nuisance function estimation. The use of the proposed methods is illustrated through both synthetic data and a real dataset called 401(k).
| null |
Dirichlet Energy Constrained Learning for Deep Graph Neural Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/b6417f112bd27848533e54885b66c288-Abstract.html
|
Kaixiong Zhou, Xiao Huang, Daochen Zha, Rui Chen, Li Li, Soo-Hyun Choi, Xia Hu
|
https://papers.nips.cc/paper_files/paper/2021/hash/b6417f112bd27848533e54885b66c288-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13294-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b6417f112bd27848533e54885b66c288-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=6YL_BntJrz6
|
https://papers.nips.cc/paper_files/paper/2021/file/b6417f112bd27848533e54885b66c288-Supplemental.pdf
|
Graph neural networks (GNNs) integrate deep architectures and topological structure modeling in an effective way. However, the performance of existing GNNs decreases significantly when many layers are stacked, because of the over-smoothing issue: node embeddings tend to converge to similar vectors when GNNs keep recursively aggregating the representations of neighbors. To enable deep GNNs, several methods have been explored recently, but they are developed from either techniques in convolutional neural networks or heuristic strategies; there is no generalizable theoretical principle to guide the design of deep GNNs. To this end, we analyze the bottleneck of deep GNNs by leveraging the Dirichlet energy of node embeddings, and propose a generalizable principle to guide the training of deep GNNs. Based on this principle, we design a novel deep GNN framework -- Energetic Graph Neural Networks (EGNN) -- which provides lower and upper constraints in terms of Dirichlet energy at each layer to avoid over-smoothing. Experimental results demonstrate that EGNN achieves state-of-the-art performance by using deep layers.
| null |
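Since the EGNN abstract above hinges on the Dirichlet energy of node embeddings, here is a small sketch of that quantity using the symmetrically normalized graph Laplacian (one common convention; the paper's exact normalization may differ). Energy near zero corresponds to the over-smoothing regime that the layer-wise constraints are designed to avoid.

```python
import numpy as np

def dirichlet_energy(X, A):
    """Dirichlet energy tr(X^T L_sym X) of node embeddings X under the symmetrically normalized Laplacian.
    Values near zero indicate over-smoothed (nearly identical) embeddings."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L_sym = np.eye(A.shape[0]) - (d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :])
    return float(np.trace(X.T @ L_sym @ X))

# Toy check: identical embeddings on a connected (regular) graph give (near) zero energy.
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
print(dirichlet_energy(np.ones((3, 2)), A))          # ~0.0 -> fully over-smoothed
print(dirichlet_energy(np.random.randn(3, 2), A))    # > 0
```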
Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives
|
https://papers.nips.cc/paper_files/paper/2021/hash/b6846b0186a035fcc76b1b1d26fd42fa-Abstract.html
|
Murtaza Dalal, Deepak Pathak, Russ R. Salakhutdinov
|
https://papers.nips.cc/paper_files/paper/2021/hash/b6846b0186a035fcc76b1b1d26fd42fa-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13295-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b6846b0186a035fcc76b1b1d26fd42fa-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=48uzkHOKMfz
|
https://papers.nips.cc/paper_files/paper/2021/file/b6846b0186a035fcc76b1b1d26fd42fa-Supplemental.pdf
|
Despite the potential of reinforcement learning (RL) for building general-purpose robotic systems, training RL agents to solve robotics tasks still remains challenging due to the difficulty of exploration in purely continuous action spaces. Addressing this problem is an active area of research with the majority of focus on improving RL methods via better optimization or more efficient exploration. An alternate but important component to consider improving is the interface of the RL algorithm with the robot. In this work, we manually specify a library of robot action primitives (RAPS), parameterized with arguments that are learned by an RL policy. These parameterized primitives are expressive, simple to implement, enable efficient exploration and can be transferred across robots, tasks and environments. We perform a thorough empirical study across challenging tasks in three distinct domains with image input and a sparse terminal reward. We find that our simple change to the action interface substantially improves both the learning efficiency and task performance irrespective of the underlying RL algorithm, significantly outperforming prior methods which learn skills from offline expert data.
| null |
Boosted CVaR Classification
|
https://papers.nips.cc/paper_files/paper/2021/hash/b691334ccf10d4ab144d672f7783c8a3-Abstract.html
|
Runtian Zhai, Chen Dan, Arun Suggala, J. Zico Kolter, Pradeep Ravikumar
|
https://papers.nips.cc/paper_files/paper/2021/hash/b691334ccf10d4ab144d672f7783c8a3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13296-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b691334ccf10d4ab144d672f7783c8a3-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=INsYqFjBWnF
|
https://papers.nips.cc/paper_files/paper/2021/file/b691334ccf10d4ab144d672f7783c8a3-Supplemental.pdf
|
Many modern machine learning tasks require models with high tail performance, i.e. high performance over the worst-off samples in the dataset. This problem has been widely studied in fields such as algorithmic fairness, class imbalance, and risk-sensitive decision making. A popular approach to maximize the model's tail performance is to minimize the CVaR (Conditional Value at Risk) loss, which computes the average risk over the tails of the loss. However, for classification tasks where models are evaluated by the zero-one loss, we show that if the classifiers are deterministic, then the minimizer of the average zero-one loss also minimizes the CVaR zero-one loss, suggesting that CVaR loss minimization is not helpful without additional assumptions. We circumvent this negative result by minimizing the CVaR loss over randomized classifiers, for which the minimizers of the average zero-one loss and the CVaR zero-one loss are no longer the same, so minimizing the latter can lead to better tail performance. To learn such randomized classifiers, we propose the Boosted CVaR Classification framework which is motivated by a direct relationship between CVaR and a classical boosting algorithm called LPBoost. Based on this framework, we design an algorithm called $\alpha$-AdaLPBoost. We empirically evaluate our proposed algorithm on four benchmark datasets and show that it achieves higher tail performance than deterministic model training methods.
| null |
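A tiny sketch of the CVaR objective discussed above, in the common finite-sample form "mean of the worst alpha-fraction of per-sample losses" (a simplification of the general definition). On deterministic zero-one losses like the toy vector below, the tail average tracks the ordinary average in the way the abstract describes, which is why the paper moves to randomized classifiers.

```python
import numpy as np

def cvar_loss(per_sample_losses, alpha):
    """CVaR_alpha: the mean loss over the worst alpha-fraction of samples (alpha = 1 recovers the average loss)."""
    losses = np.sort(per_sample_losses)[::-1]          # descending: worst samples first
    k = max(1, int(np.ceil(alpha * len(losses))))
    return losses[:k].mean()

losses = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0])  # zero-one losses
print(cvar_loss(losses, alpha=1.0))   # 0.3  (average loss)
print(cvar_loss(losses, alpha=0.3))   # 1.0  (tail performance is much worse)
```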
Disentangled Contrastive Learning on Graphs
|
https://papers.nips.cc/paper_files/paper/2021/hash/b6cda17abb967ed28ec9610137aa45f7-Abstract.html
|
Haoyang Li, Xin Wang, Ziwei Zhang, Zehuan Yuan, Hang Li, Wenwu Zhu
|
https://papers.nips.cc/paper_files/paper/2021/hash/b6cda17abb967ed28ec9610137aa45f7-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13297-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b6cda17abb967ed28ec9610137aa45f7-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=rJq1SdaNPX4
|
https://papers.nips.cc/paper_files/paper/2021/file/b6cda17abb967ed28ec9610137aa45f7-Supplemental.pdf
|
Recently, self-supervised learning for graph neural networks (GNNs) has attracted considerable attention because of its notable successes in learning representations of graph-structured data. However, the formation of a real-world graph typically arises from the highly complex interaction of many latent factors. The existing self-supervised learning methods for GNNs are inherently holistic and neglect the entanglement of the latent factors, leaving the learned representations suboptimal for downstream tasks and difficult to interpret. Learning disentangled graph representations with self-supervised learning poses great challenges and remains largely ignored by the existing literature. In this paper, we introduce the Disentangled Graph Contrastive Learning (DGCL) method, which is able to learn disentangled graph-level representations with self-supervision. In particular, we first identify the latent factors of the input graph and derive its factorized representations. Each of the factorized representations describes a latent and disentangled aspect pertinent to a specific latent factor of the graph. Then we propose a novel factor-wise discrimination objective in a contrastive learning manner, which can force the factorized representations to independently reflect the expressive information from different latent factors. Extensive experiments on both synthetic and real-world datasets demonstrate the superiority of our method against several state-of-the-art baselines.
| null |
Widening the Pipeline in Human-Guided Reinforcement Learning with Explanation and Context-Aware Data Augmentation
|
https://papers.nips.cc/paper_files/paper/2021/hash/b6f8dc086b2d60c5856e4ff517060392-Abstract.html
|
Lin Guan, Mudit Verma, Suna (Sihang) Guo, Ruohan Zhang, Subbarao Kambhampati
|
https://papers.nips.cc/paper_files/paper/2021/hash/b6f8dc086b2d60c5856e4ff517060392-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13298-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b6f8dc086b2d60c5856e4ff517060392-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=SEz-FQltAYN
|
https://papers.nips.cc/paper_files/paper/2021/file/b6f8dc086b2d60c5856e4ff517060392-Supplemental.pdf
|
Human explanation (e.g., in terms of feature importance) has been recently used to extend the communication channel between human and agent in interactive machine learning. Under this setting, human trainers provide not only the ground truth but also some form of explanation. However, this kind of human guidance was only investigated in supervised learning tasks, and it remains unclear how to best incorporate this type of human knowledge into deep reinforcement learning. In this paper, we present the first study of using human visual explanations in human-in-the-loop reinforcement learning (HIRL). We focus on the task of learning from feedback, in which the human trainer not only gives binary evaluative "good" or "bad" feedback for queried state-action pairs, but also provides a visual explanation by annotating relevant features in images. We propose EXPAND (EXPlanation AugmeNted feeDback) to encourage the model to encode task-relevant features through a context-aware data augmentation that only perturbs irrelevant features in human salient information. We choose five tasks, namely Pixel-Taxi and four Atari games, to evaluate the performance and sample efficiency of this approach. We show that our method significantly outperforms methods leveraging human explanation that are adapted from supervised learning, and Human-in-the-loop RL baselines that only utilize evaluative feedback.
| null |
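A minimal sketch of the context-aware data augmentation idea from the EXPAND abstract above: perturb only the regions the human explanation marks as irrelevant, leaving the annotated salient pixels untouched. Additive Gaussian noise is used here as one possible perturbation (an assumption; the method may use other corruptions), and the array shapes are arbitrary.

```python
import numpy as np

def context_aware_augment(image, saliency_mask, noise_std=0.1, rng=None):
    """Perturb only the pixels the human did NOT mark as relevant; salient pixels are left untouched, so a model
    trained to give consistent outputs on (image, augmented image) is pushed to rely on the annotated features."""
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(0.0, noise_std, size=image.shape)
    irrelevant = (saliency_mask == 0).astype(image.dtype)     # 1 where the annotation says "irrelevant"
    return image + noise * irrelevant

img = np.random.rand(84, 84, 3)
mask = np.zeros((84, 84, 1)); mask[20:40, 20:40] = 1          # hypothetical human annotation
aug = context_aware_augment(img, mask)
assert np.allclose(aug[20:40, 20:40], img[20:40, 20:40])      # annotated region is unchanged
```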
SOLQ: Segmenting Objects by Learning Queries
|
https://papers.nips.cc/paper_files/paper/2021/hash/b7087c1f4f89e63af8d46f3b20271153-Abstract.html
|
Bin Dong, Fangao Zeng, Tiancai Wang, Xiangyu Zhang, Yichen Wei
|
https://papers.nips.cc/paper_files/paper/2021/hash/b7087c1f4f89e63af8d46f3b20271153-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13299-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b7087c1f4f89e63af8d46f3b20271153-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=78GFU9e56Dq
|
https://papers.nips.cc/paper_files/paper/2021/file/b7087c1f4f89e63af8d46f3b20271153-Supplemental.pdf
|
In this paper, we propose an end-to-end framework for instance segmentation. Based on the recently introduced DETR, our method, termed SOLQ, segments objects by learning unified queries. In SOLQ, each query represents one object and has multiple representations: class, location, and mask. The learned object queries perform classification, box regression, and mask encoding simultaneously in a unified vector form. During the training phase, the encoded mask vectors are supervised by the compression coding of raw spatial masks. At inference time, the produced mask vectors can be directly transformed into spatial masks by the inverse of the compression coding. Experimental results show that SOLQ can achieve state-of-the-art performance, surpassing most existing approaches. Moreover, the joint learning of a unified query representation can greatly improve the detection performance of DETR. We hope SOLQ can serve as a strong baseline for Transformer-based instance segmentation.
| null |
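The SOLQ abstract above supervises mask vectors with a "compression coding" of raw spatial masks. The sketch below shows one natural instantiation of such a coding, a truncated 2D DCT, purely as an assumption for illustration; the coefficient count, mask size, and threshold are arbitrary.

```python
import numpy as np
from scipy.fft import dctn, idctn

def encode_mask(mask, n_coeffs=16):
    """Compress a binary spatial mask into a short vector: 2D DCT, keep the top-left n x n low-frequency block."""
    coeffs = dctn(mask.astype(float), norm='ortho')
    return coeffs[:n_coeffs, :n_coeffs].flatten()

def decode_mask(vector, mask_shape, n_coeffs=16):
    """Inverse of encode_mask: zero-pad the coefficient block, invert the DCT, then threshold."""
    coeffs = np.zeros(mask_shape)
    coeffs[:n_coeffs, :n_coeffs] = vector.reshape(n_coeffs, n_coeffs)
    return (idctn(coeffs, norm='ortho') > 0.5).astype(np.uint8)

mask = np.zeros((64, 64), dtype=np.uint8); mask[16:48, 16:48] = 1
rec = decode_mask(encode_mask(mask), mask.shape)
print((rec == mask).mean())   # reconstruction accuracy close to 1.0 from a 256-dimensional code
```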
Extending Lagrangian and Hamiltonian Neural Networks with Differentiable Contact Models
|
https://papers.nips.cc/paper_files/paper/2021/hash/b7a8486459730bea9569414ef76cf03f-Abstract.html
|
Yaofeng Desmond Zhong, Biswadip Dey, Amit Chakraborty
|
https://papers.nips.cc/paper_files/paper/2021/hash/b7a8486459730bea9569414ef76cf03f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13300-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b7a8486459730bea9569414ef76cf03f-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=pZQrKCkbas
|
https://papers.nips.cc/paper_files/paper/2021/file/b7a8486459730bea9569414ef76cf03f-Supplemental.pdf
|
The incorporation of appropriate inductive bias plays a critical role in learning dynamics from data. A growing body of work has been exploring ways to enforce energy conservation in the learned dynamics by encoding Lagrangian or Hamiltonian dynamics into the neural network architecture. These existing approaches are based on differential equations, which do not allow discontinuity in the states and thereby limit the class of systems one can learn. However, in reality, most physical systems, such as legged robots and robotic manipulators, involve contacts and collisions, which introduce discontinuities in the states. In this paper, we introduce a differentiable contact model, which can capture contact mechanics: frictionless/frictional, as well as elastic/inelastic. This model can also accommodate inequality constraints, such as limits on the joint angles. The proposed contact model extends the scope of Lagrangian and Hamiltonian neural networks by allowing simultaneous learning of contact and system properties. We demonstrate this framework on a series of challenging 2D and 3D physical systems with different coefficients of restitution and friction. The learned dynamics can be used as a differentiable physics simulator for downstream gradient-based optimization tasks, such as planning and control.
| null |
Best-case lower bounds in online learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/b7da6669894867f04b8727876a69ffc0-Abstract.html
|
Cristóbal Guzmán, Nishant Mehta, Ali Mortazavi
|
https://papers.nips.cc/paper_files/paper/2021/hash/b7da6669894867f04b8727876a69ffc0-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13301-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b7da6669894867f04b8727876a69ffc0-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=uZJJFpFl60W
|
https://papers.nips.cc/paper_files/paper/2021/file/b7da6669894867f04b8727876a69ffc0-Supplemental.pdf
|
Much of the work in online learning focuses on the study of sublinear upper bounds on the regret. In this work, we initiate the study of best-case lower bounds in online convex optimization, wherein we bound the largest \emph{improvement} an algorithm can obtain relative to the single best action in hindsight. This problem is motivated by the goal of better understanding the adaptivity of a learning algorithm. Another motivation comes from fairness: it is known that best-case lower bounds are instrumental in obtaining algorithms for decision-theoretic online learning (DTOL) that satisfy a notion of group fairness. Our contributions are a general method to provide best-case lower bounds in Follow The Regularized Leader (FTRL) algorithms with time-varying regularizers, which we use to show that best-case lower bounds are of the same order as existing upper regret bounds: this includes situations with a fixed learning rate, decreasing learning rates, timeless methods, and adaptive gradient methods. In stark contrast, we show that the linearized version of FTRL can attain negative linear regret. Finally, in DTOL with two experts and binary losses, we fully characterize the best-case sequences, which provides a finer understanding of the best-case lower bounds.
| null |
A Comprehensively Tight Analysis of Gradient Descent for PCA
|
https://papers.nips.cc/paper_files/paper/2021/hash/b7f7ada7d848002260ee5eb7d8835709-Abstract.html
|
Zhiqiang Xu, Ping Li
|
https://papers.nips.cc/paper_files/paper/2021/hash/b7f7ada7d848002260ee5eb7d8835709-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13302-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b7f7ada7d848002260ee5eb7d8835709-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=1W2WuCYbz_C
|
https://papers.nips.cc/paper_files/paper/2021/file/b7f7ada7d848002260ee5eb7d8835709-Supplemental.pdf
|
We study the Riemannian gradient method for PCA, for which a crucial fact is that, despite the simplicity of the considered setting (i.e., the deterministic version of Krasulina's method), the convergence rate has not yet been well understood. In this work, we provide a general tight analysis for the gap-dependent rate of $O(\frac{1}{\Delta}\log\frac{1}{\epsilon})$ that holds for any real symmetric matrix. More importantly, when the gap $\Delta$ is significantly smaller than the target accuracy $\epsilon$ on the objective sub-optimality of the final solution, a rate of this type is actually no longer tight, which calls for a worst-case rate. We further give the first worst-case analysis that achieves a rate of convergence of $O(\frac{1}{\epsilon}\log\frac{1}{\epsilon})$. The two analyses naturally yield a comprehensively tight convergence rate of $O(\frac{1}{\max\{\Delta,\epsilon\}}\log\frac{1}{\epsilon})$. In particular, our gap-dependent analysis suggests a new promising learning rate for stochastic variance-reduced PCA algorithms. Experiments are conducted to confirm our findings as well.
| null |
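A compact sketch of the method analyzed in the abstract above: Riemannian gradient ascent for the leading eigenvector (the deterministic Krasulina/Oja setting). The step size, iteration count, and toy matrix with a small eigengap are illustrative choices only.

```python
import numpy as np

def riemannian_gd_pca(A, eta=0.1, iters=500, seed=0):
    """Riemannian gradient ascent for the top eigenvector of a symmetric matrix A: project the Euclidean
    gradient A w onto the tangent space of the unit sphere, take a step, then renormalize."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(A.shape[0]); w /= np.linalg.norm(w)
    for _ in range(iters):
        grad = A @ w - (w @ A @ w) * w        # Riemannian gradient on the sphere
        w = w + eta * grad
        w /= np.linalg.norm(w)
    return w

A = np.diag([3.0, 2.9, 1.0])                  # small eigengap between the top two eigenvalues
w = riemannian_gd_pca(A)
print(w @ A @ w)                              # approaches the top eigenvalue 3.0
```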
On Robust Optimal Transport: Computational Complexity and Barycenter Computation
|
https://papers.nips.cc/paper_files/paper/2021/hash/b80ba73857eed2a36dc7640e2310055a-Abstract.html
|
Khang Le, Huy Nguyen, Quang M Nguyen, Tung Pham, Hung Bui, Nhat Ho
|
https://papers.nips.cc/paper_files/paper/2021/hash/b80ba73857eed2a36dc7640e2310055a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13303-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b80ba73857eed2a36dc7640e2310055a-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=xRLT28nnlFV
|
https://papers.nips.cc/paper_files/paper/2021/file/b80ba73857eed2a36dc7640e2310055a-Supplemental.pdf
|
We consider robust variants of the standard optimal transport, named robust optimal transport, where marginal constraints are relaxed via Kullback-Leibler divergence. We show that Sinkhorn-based algorithms can approximate the optimal cost of robust optimal transport in $\widetilde{\mathcal{O}}(\frac{n^2}{\varepsilon})$ time, in which $n$ is the number of supports of the probability distributions and $\varepsilon$ is the desired error. Furthermore, we investigate a fixed-support robust barycenter problem between $m$ discrete probability distributions with at most $n$ number of supports and develop an approximating algorithm based on iterative Bregman projections (IBP). For the specific case $m = 2$, we show that this algorithm can approximate the optimal barycenter value in $\widetilde{\mathcal{O}}(\frac{mn^2}{\varepsilon})$ time, thus being better than the previous complexity $\widetilde{\mathcal{O}}(\frac{mn^2}{\varepsilon^2})$ of the IBP algorithm for approximating the Wasserstein barycenter.
| null |
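To illustrate the kind of iteration the complexity results above refer to, here is a generic Sinkhorn-style solver for entropically regularized OT with KL-relaxed marginals (the standard unbalanced-Sinkhorn scaling updates). It is a simplified stand-in rather than the paper's exact algorithm or stopping rule; `eps`, `tau`, and the iteration count are arbitrary.

```python
import numpy as np

def robust_sinkhorn(C, a, b, eps=0.05, tau=1.0, iters=500):
    """Sinkhorn-style iterations for entropically regularized OT whose marginal constraints are relaxed
    via a KL penalty of strength tau; returns the (soft-marginal) transport plan."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    v = np.ones_like(b)
    rho = tau / (tau + eps)                   # exponent induced by the KL marginal penalty
    for _ in range(iters):
        u = (a / (K @ v)) ** rho
        v = (b / (K.T @ u)) ** rho
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
x, y = rng.random((5, 2)), rng.random((6, 2))
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
P = robust_sinkhorn(C, np.full(5, 1 / 5), np.full(6, 1 / 6))
print(P.sum(axis=1))   # close to, but not exactly, the first marginal: constraints are only softly enforced
```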
Asymptotically Best Causal Effect Identification with Multi-Armed Bandits
|
https://papers.nips.cc/paper_files/paper/2021/hash/b8102d1fa5df93e62cf26cd4400a0727-Abstract.html
|
Alan Malek, Silvia Chiappa
|
https://papers.nips.cc/paper_files/paper/2021/hash/b8102d1fa5df93e62cf26cd4400a0727-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13304-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b8102d1fa5df93e62cf26cd4400a0727-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=zqo2sqixxbE
|
https://papers.nips.cc/paper_files/paper/2021/file/b8102d1fa5df93e62cf26cd4400a0727-Supplemental.pdf
|
This paper considers the problem of selecting a formula for identifying a causal quantity of interest among a set of available formulas. We assume an online setting in which the investigator may alter the data collection mechanism in a data-dependent way with the aim of identifying the formula with lowest asymptotic variance in as few samples as possible. We formalize this setting by using the best-arm-identification bandit framework where the standard goal of learning the arm with the lowest loss is replaced with the goal of learning the arm that will produce the best estimate. We introduce new tools for constructing finite-sample confidence bounds on estimates of the asymptotic variance that account for the estimation of potentially complex nuisance functions, and adapt the best-arm-identification algorithms of LUCB and Successive Elimination to use these bounds. We validate our method by providing upper bounds on the sample complexity and an empirical study on artificially generated data.
| null |
Learning rule influences recurrent network representations but not attractor structure in decision-making tasks
|
https://papers.nips.cc/paper_files/paper/2021/hash/b87039703fe79778e9f140b78621d7fb-Abstract.html
|
Brandon McMahan, Michael Kleinman, Jonathan Kao
|
https://papers.nips.cc/paper_files/paper/2021/hash/b87039703fe79778e9f140b78621d7fb-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13305-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b87039703fe79778e9f140b78621d7fb-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=Bix4uw5GcbE
|
https://papers.nips.cc/paper_files/paper/2021/file/b87039703fe79778e9f140b78621d7fb-Supplemental.zip
|
Recurrent neural networks (RNNs) are popular tools for studying computational dynamics in neurobiological circuits. However, due to the dizzying array of design choices, it is unclear if computational dynamics unearthed from RNNs provide reliable neurobiological inferences. Understanding the effects of design choices on RNN computation is valuable in two ways. First, invariant properties that persist in RNNs across a wide range of design choices are more likely to be candidate neurobiological mechanisms. Second, understanding what design choices lead to similar dynamical solutions reduces the burden of imposing that all design choices be totally faithful replications of biology. We focus our investigation on how RNN learning rule and task design affect RNN computation. We trained large populations of RNNs with different, but commonly used, learning rules on decision-making tasks inspired by neuroscience literature. For relatively complex tasks, we find that attractor topology is invariant to the choice of learning rule, but representational geometry is not. For simple tasks, we find that attractor topology depends on task input noise. However, when a task becomes increasingly complex, RNN attractor topology becomes invariant to input noise. Together, our results suggest that RNN dynamics are robust across learning rules but can be sensitive to the training task design, especially for simpler tasks.
| null |
Few-Shot Segmentation via Cycle-Consistent Transformer
|
https://papers.nips.cc/paper_files/paper/2021/hash/b8b12f949378552c21f28deff8ba8eb6-Abstract.html
|
Gengwei Zhang, Guoliang Kang, Yi Yang, Yunchao Wei
|
https://papers.nips.cc/paper_files/paper/2021/hash/b8b12f949378552c21f28deff8ba8eb6-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13306-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b8b12f949378552c21f28deff8ba8eb6-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=LWH-C1HoQG_
|
https://papers.nips.cc/paper_files/paper/2021/file/b8b12f949378552c21f28deff8ba8eb6-Supplemental.pdf
|
Few-shot segmentation aims to train a segmentation model that can fast adapt to novel classes with few exemplars. The conventional training paradigm is to learn to make predictions on query images conditioned on the features from support images. Previous methods only utilized the semantic-level prototypes of support images as the conditional information. These methods cannot utilize all pixel-wise support information for the query predictions, which is however critical for the segmentation task. In this paper, we focus on utilizing pixel-wise relationships between support and target images to facilitate the few-shot semantic segmentation task. We design a novel Cycle-Consistent Transformer (CyCTR) module to aggregate pixel-wise support features into query ones. CyCTR performs cross-attention between features from different images, i.e. support and query images. We observe that there may exist unexpected irrelevant pixel-level support features. Directly performing cross-attention may aggregate these features from support to query and bias the query features. Thus, we propose using a novel cycle-consistent attention mechanism to filter out possible harmful support features and encourage query features to attend to the most informative pixels from support images. Experiments on all few-shot segmentation benchmarks demonstrate that our proposed CyCTR leads to remarkable improvement compared to previous state-of-the-art methods. Specifically, on Pascal-5^i and COCO-20^i datasets, we achieve 66.6% and 45.6% mIoU for 5-shot segmentation, outperforming previous state-of-the-art by 4.6% and 7.1% respectively.
| null |
DropGNN: Random Dropouts Increase the Expressiveness of Graph Neural Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/b8b2926bd27d4307569ad119b6025f94-Abstract.html
|
Pál András Papp, Karolis Martinkus, Lukas Faber, Roger Wattenhofer
|
https://papers.nips.cc/paper_files/paper/2021/hash/b8b2926bd27d4307569ad119b6025f94-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13307-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b8b2926bd27d4307569ad119b6025f94-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=fpQojkIV5q8
|
https://papers.nips.cc/paper_files/paper/2021/file/b8b2926bd27d4307569ad119b6025f94-Supplemental.pdf
|
This paper studies Dropout Graph Neural Networks (DropGNNs), a new approach that aims to overcome the limitations of standard GNN frameworks. In DropGNNs, we execute multiple runs of a GNN on the input graph, with some of the nodes randomly and independently dropped in each of these runs. Then, we combine the results of these runs to obtain the final result. We prove that DropGNNs can distinguish various graph neighborhoods that cannot be separated by message passing GNNs. We derive theoretical bounds for the number of runs required to ensure a reliable distribution of dropouts, and we prove several properties regarding the expressive capabilities and limits of DropGNNs. We experimentally validate our theoretical findings on expressiveness. Furthermore, we show that DropGNNs perform competitively on established GNN benchmarks.
| null |
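A minimal sketch of the DropGNN execution scheme described above: several independent runs of the same GNN with nodes dropped at random, followed by aggregation of the runs. A PyG-style `(x, edge_index)` interface and mean aggregation are assumptions made here for illustration; the paper's exact dropout and combination details may differ.

```python
import torch

def dropgnn_forward(gnn, x, edge_index, num_runs=10, p_drop=0.1):
    """Execute several independent runs of the same GNN; in each run every node is dropped i.i.d. with
    probability p_drop (its features are zeroed and its incident edges removed), then average the outputs."""
    outs = []
    for _ in range(num_runs):
        keep = torch.rand(x.shape[0]) > p_drop                       # True for surviving nodes
        edge_keep = keep[edge_index[0]] & keep[edge_index[1]]        # drop edges touching a dropped node
        outs.append(gnn(x * keep.float().unsqueeze(1), edge_index[:, edge_keep]))
    return torch.stack(outs, dim=0).mean(dim=0)
```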
Photonic Differential Privacy with Direct Feedback Alignment
|
https://papers.nips.cc/paper_files/paper/2021/hash/b8c4c8b2271787e2f78b5fe2ce193caa-Abstract.html
|
Ruben Ohana, Hamlet Medina, Julien Launay, Alessandro Cappelli, Iacopo Poli, Liva Ralaivola, Alain Rakotomamonjy
|
https://papers.nips.cc/paper_files/paper/2021/hash/b8c4c8b2271787e2f78b5fe2ce193caa-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13308-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b8c4c8b2271787e2f78b5fe2ce193caa-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=OgtWS4bkNO8
|
https://papers.nips.cc/paper_files/paper/2021/file/b8c4c8b2271787e2f78b5fe2ce193caa-Supplemental.zip
|
Optical Processing Units (OPUs) -- low-power photonic chips dedicated to large scale random projections -- have been used in previous work to train deep neural networks using Direct Feedback Alignment (DFA), an effective alternative to backpropagation. Here, we demonstrate how to leverage the intrinsic noise of optical random projections to build a differentially private DFA mechanism, making OPUs a solution of choice to provide a \emph{private-by-design} training. We provide a theoretical analysis of our adaptive privacy mechanism, carefully measuring how the noise of optical random projections propagates in the process and gives rise to provable Differential Privacy. Finally, we conduct experiments demonstrating the ability of our learning procedure to achieve solid end-task performance.
| null |
Searching Parameterized AP Loss for Object Detection
|
https://papers.nips.cc/paper_files/paper/2021/hash/b9009beb804fa097c04d226a8ba5102e-Abstract.html
|
Tao Chenxin, Zizhang Li, Xizhou Zhu, Gao Huang, Yong Liu, Jifeng Dai
|
https://papers.nips.cc/paper_files/paper/2021/hash/b9009beb804fa097c04d226a8ba5102e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13309-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b9009beb804fa097c04d226a8ba5102e-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=hLTZCN7f3M-
|
https://papers.nips.cc/paper_files/paper/2021/file/b9009beb804fa097c04d226a8ba5102e-Supplemental.pdf
|
Loss functions play an important role in training deep-network-based object detectors. The most widely used evaluation metric for object detection is Average Precision (AP), which captures the performance of localization and classification sub-tasks simultaneously. However, due to the non-differentiable nature of the AP metric, traditional object detectors adopt separate differentiable losses for the two sub-tasks. Such a misalignment may well lead to performance degradation. To address this, existing works seek to design surrogate losses for the AP metric manually, which requires expertise and may still be sub-optimal. In this paper, we propose Parameterized AP Loss, where parameterized functions are introduced to substitute the non-differentiable components in the AP calculation. Different AP approximations are thus represented by a family of parameterized functions in a unified formula. An automatic parameter search algorithm is then employed to search for the optimal parameters. Extensive experiments on the COCO benchmark with three different object detectors (i.e., RetinaNet, Faster R-CNN, and Deformable DETR) demonstrate that the proposed Parameterized AP Loss consistently outperforms existing handcrafted losses. Code shall be released.
| null |
Fair Exploration via Axiomatic Bargaining
|
https://papers.nips.cc/paper_files/paper/2021/hash/b90c46963248e6d7aab1e0f429743ca0-Abstract.html
|
Jackie Baek, Vivek Farias
|
https://papers.nips.cc/paper_files/paper/2021/hash/b90c46963248e6d7aab1e0f429743ca0-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13310-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b90c46963248e6d7aab1e0f429743ca0-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=GEKTIKvslP
|
https://papers.nips.cc/paper_files/paper/2021/file/b90c46963248e6d7aab1e0f429743ca0-Supplemental.zip
|
Motivated by the consideration of fairly sharing the cost of exploration between multiple groups in learning problems, we develop the Nash bargaining solution in the context of multi-armed bandits. Specifically, the 'grouped' bandit associated with any multi-armed bandit problem associates, with each time step, a single group from some finite set of groups. The utility gained by a given group under some learning policy is naturally viewed as the reduction in that group's regret relative to the regret that group would have incurred 'on its own'. We derive policies that yield the Nash bargaining solution relative to the set of incremental utilities possible under any policy. We show that on the one hand, the 'price of fairness' under such policies is limited, while on the other hand, regret optimal policies are arbitrarily unfair under generic conditions. Our theoretical development is complemented by a case study on contextual bandits for warfarin dosing where we are concerned with the cost of exploration across multiple races and age groups.
| null |
Unifying lower bounds on prediction dimension of convex surrogates
|
https://papers.nips.cc/paper_files/paper/2021/hash/b91a76b0b2fa7ce160212f53f3d2edba-Abstract.html
|
Jessica Finocchiaro, Rafael Frongillo, Bo Waggoner
|
https://papers.nips.cc/paper_files/paper/2021/hash/b91a76b0b2fa7ce160212f53f3d2edba-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13311-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b91a76b0b2fa7ce160212f53f3d2edba-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=DfWL8kIb0eF
|
https://papers.nips.cc/paper_files/paper/2021/file/b91a76b0b2fa7ce160212f53f3d2edba-Supplemental.pdf
|
The convex consistency dimension of a supervised learning task is the lowest prediction dimension $d$ such that there exists a convex surrogate $L : \mathbb{R}^d \times \mathcal Y \to \mathbb R$ that is consistent for the given task. We present a new tool based on property elicitation, $d$-flats, for lower-bounding convex consistency dimension. This tool unifies approaches from a variety of domains, including continuous and discrete prediction problems. We use $d$-flats to obtain a new lower bound on the convex consistency dimension of risk measures, resolving an open question due to Frongillo and Kash (NeurIPS 2015). In discrete prediction settings, we show that the $d$-flats approach recovers and even tightens previous lower bounds using feasible subspace dimension.
| null |
Ultrahyperbolic Neural Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/b91b1facf3b3a7890177f02ac188f14c-Abstract.html
|
Marc Law
|
https://papers.nips.cc/paper_files/paper/2021/hash/b91b1facf3b3a7890177f02ac188f14c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13312-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b91b1facf3b3a7890177f02ac188f14c-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=sf2BxJNXC3K
|
https://papers.nips.cc/paper_files/paper/2021/file/b91b1facf3b3a7890177f02ac188f14c-Supplemental.pdf
|
Riemannian space forms, such as the Euclidean space, sphere and hyperbolic space, are popular and powerful representation spaces in machine learning. For instance, hyperbolic geometry is appropriate to represent graphs without cycles and has been used to extend Graph Neural Networks. Recently, some pseudo-Riemannian space forms that generalize both hyperbolic and spherical geometries have been exploited to learn a specific type of nonparametric embedding called ultrahyperbolic. The lack of geodesic between every pair of ultrahyperbolic points makes the task of learning parametric models (e.g., neural networks) difficult. This paper introduces a method to learn parametric models in ultrahyperbolic space. We experimentally show the relevance of our approach in the tasks of graph and node classification.
| null |
NeuroMLR: Robust & Reliable Route Recommendation on Road Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/b922ede9c9eb9eabec1c1fecbdecb45d-Abstract.html
|
Jayant Jain, Vrittika Bagadia, Sahil Manchanda, Sayan Ranu
|
https://papers.nips.cc/paper_files/paper/2021/hash/b922ede9c9eb9eabec1c1fecbdecb45d-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13313-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b922ede9c9eb9eabec1c1fecbdecb45d-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=Sl0WX9H6ZJg
|
https://papers.nips.cc/paper_files/paper/2021/file/b922ede9c9eb9eabec1c1fecbdecb45d-Supplemental.pdf
|
Predicting the most likely route from a source location to a destination is a core functionality in mapping services. Although the problem has been studied in the literature, two key limitations remain to be addressed. First, our study reveals that a significant portion of the routes recommended by existing methods fail to reach the destination. Second, existing techniques are transductive in nature; hence, they fail to recommend routes if unseen roads are encountered at inference time. In this paper, we address these limitations through an inductive algorithm called NeuroMLR. NeuroMLR learns a generative model from historical trajectories by conditioning on three explanatory factors: the current location, the destination, and real-time traffic conditions. The conditional distributions are learned through a novel combination of Lipschitz embedding with Graph Convolutional Networks (GCN) using historical trajectory data. Through in-depth experiments on real-world datasets, we establish that NeuroMLR imparts significant improvement in accuracy over the state of the art. More importantly, NeuroMLR generalizes dramatically better to unseen data and the recommended routes reach the destination with much higher likelihood than existing techniques.
| null |
Risk Bounds and Calibration for a Smart Predict-then-Optimize Method
|
https://papers.nips.cc/paper_files/paper/2021/hash/b943325cc7b7422d2871b345bf9b067f-Abstract.html
|
Heyuan Liu, Paul Grigas
|
https://papers.nips.cc/paper_files/paper/2021/hash/b943325cc7b7422d2871b345bf9b067f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13314-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b943325cc7b7422d2871b345bf9b067f-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=pSitk34qYit
|
https://papers.nips.cc/paper_files/paper/2021/file/b943325cc7b7422d2871b345bf9b067f-Supplemental.pdf
|
The predict-then-optimize framework is fundamental in practical stochastic decision-making problems: first predict unknown parameters of an optimization model, then solve the problem using the predicted values. A natural loss function in this setting is defined by measuring the decision error induced by the predicted parameters, which was named the Smart Predict-then-Optimize (SPO) loss by Elmachtoub and Grigas [2021]. Since the SPO loss is typically nonconvex and possibly discontinuous, Elmachtoub and Grigas [2021] introduced a convex surrogate, called the SPO+ loss, that importantly accounts for the underlying structure of the optimization model. In this paper, we greatly expand upon the consistency results for the SPO+ loss provided by Elmachtoub and Grigas [2021]. We develop risk bounds and uniform calibration results for the SPO+ loss relative to the SPO loss, which provide a quantitative way to transfer the excess surrogate risk to excess true risk. By combining our risk bounds with generalization bounds, we show that the empirical minimizer of the SPO+ loss achieves low excess true risk with high probability. We first demonstrate these results in the case when the feasible region of the underlying optimization problem is a polyhedron, and then we show that the results can be strengthened substantially when the feasible region is a level set of a strongly convex function. We perform experiments to empirically demonstrate the strength of the SPO+ surrogate, as compared to standard $\ell_1$ and squared $\ell_2$ prediction error losses, on portfolio allocation and cost-sensitive multi-class classification problems.
| null |
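For reference, a small sketch of the SPO and SPO+ losses of Elmachtoub and Grigas discussed in the abstract above, specialized to a feasible region given explicitly by its vertices so that every inner optimization reduces to an argmin/argmax over rows. The toy "two paths" example is illustrative only.

```python
import numpy as np

def spo_plus_loss(c_hat, c, vertices):
    """SPO+ surrogate for min_{w in S} c^T w, with S given by its vertices (rows of `vertices`):
    max_{w in S} (c - 2*c_hat)^T w + 2*c_hat^T w*(c) - z*(c)."""
    z_vals = vertices @ c                       # objective of each vertex under the true cost
    w_star = vertices[np.argmin(z_vals)]        # true optimal decision
    support = np.max(vertices @ (c - 2 * c_hat))
    return support + 2 * c_hat @ w_star - z_vals.min()

def spo_loss(c_hat, c, vertices):
    """True SPO loss: excess cost of the decision induced by the predicted cost vector."""
    w_hat = vertices[np.argmin(vertices @ c_hat)]
    return c @ w_hat - np.min(vertices @ c)

# Toy example: choosing the cheaper of two "paths" encoded as 0/1 decision vectors.
V = np.array([[1.0, 0.0], [0.0, 1.0]])
c_true = np.array([1.0, 2.0])
c_pred = np.array([3.0, 1.0])                  # the prediction ranks the two decisions incorrectly
print(spo_loss(c_pred, c_true, V), spo_plus_loss(c_pred, c_true, V))   # SPO+ upper-bounds the SPO loss
```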
Three-dimensional spike localization and improved motion correction for Neuropixels recordings
|
https://papers.nips.cc/paper_files/paper/2021/hash/b950ea26ca12daae142bd74dba4427c8-Abstract.html
|
Julien Boussard, Erdem Varol, Hyun Dong Lee, Nishchal Dethe, Liam Paninski
|
https://papers.nips.cc/paper_files/paper/2021/hash/b950ea26ca12daae142bd74dba4427c8-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13315-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b950ea26ca12daae142bd74dba4427c8-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=ohfi44BZPC4
|
https://papers.nips.cc/paper_files/paper/2021/file/b950ea26ca12daae142bd74dba4427c8-Supplemental.pdf
|
Neuropixels (NP) probes are dense linear multi-electrode arrays that have rapidly become essential tools for studying the electrophysiology of large neural populations. Unfortunately, a number of challenges remain in analyzing the large datasets output by these probes. Here we introduce several new methods for extracting useful spiking information from NP probes. First, we use a simple point neuron model, together with a neural-network denoiser, to efficiently map spikes detected on the probe into three-dimensional localizations. Previous methods localized spikes in two dimensions only; we show that the new localization approach is significantly more robust and provides an improved feature set for clustering spikes according to neural identity (``spike sorting''). Next, we apply a Poisson denoising method to the resulting three-dimensional point-cloud representation of the data, and show that the resulting 3D images can be accurately registered over time, leading to improved tracking of time-varying neural activity over the probe, and in turn, crisper estimates of neural clusters over time. The code to reproduce our results and an example Neuropixels dataset are provided in the supplementary material.
| null |
Semi-Supervised Semantic Segmentation via Adaptive Equalization Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/b98249b38337c5088bbc660d8f872d6a-Abstract.html
|
Hanzhe Hu, Fangyun Wei, Han Hu, Qiwei Ye, Jinshi Cui, Liwei Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/b98249b38337c5088bbc660d8f872d6a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13316-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b98249b38337c5088bbc660d8f872d6a-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=5CGPY2VeEGb
|
https://papers.nips.cc/paper_files/paper/2021/file/b98249b38337c5088bbc660d8f872d6a-Supplemental.pdf
|
Due to limited and often imbalanced data, semi-supervised semantic segmentation tends to perform poorly on certain categories, e.g., the tailed categories of the Cityscapes dataset, which exhibits a long-tailed label distribution. Almost all existing approaches neglect this problem and treat categories equally. Popular approaches such as consistency regularization or pseudo-labeling may even harm the learning of under-performing categories, since the predictions or pseudo labels for these categories can be too inaccurate to guide learning on the unlabeled data. In this paper, we look into this problem and propose a novel framework for semi-supervised semantic segmentation, named adaptive equalization learning (AEL). AEL adaptively balances the training of well- and poorly-performing categories, with a confidence bank that dynamically tracks category-wise performance during training. The confidence bank is leveraged as an indicator to tilt training towards under-performing categories, instantiated in three strategies: 1) adaptive Copy-Paste and CutMix data augmentation approaches that give under-performing categories a greater chance of being copied or cut; 2) an adaptive data sampling approach that encourages pixels from under-performing categories to be sampled; 3) a simple yet effective re-weighting method to alleviate the training noise introduced by pseudo-labeling. Experimentally, AEL outperforms the state-of-the-art methods by a large margin on the Cityscapes and Pascal VOC benchmarks under various data partition protocols. Code is available at https://github.com/hzhupku/SemiSeg-AEL.
| null |
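A minimal sketch of the confidence-bank idea described above, assuming per-category confidence is tracked with an exponential moving average and converted into sampling weights that favor under-performing categories. The class `ConfidenceBank`, its update rule, and the temperature parameter are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class ConfidenceBank:
    """Track an exponential moving average of per-category confidence and
    turn it into sampling weights that favor under-performing categories."""

    def __init__(self, num_classes, momentum=0.99):
        self.conf = np.zeros(num_classes)   # unseen classes count as weak
        self.momentum = momentum

    def update(self, class_ids, confidences):
        # confidences: e.g. mean predicted probability (or IoU) per category
        for c, v in zip(class_ids, confidences):
            self.conf[c] = self.momentum * self.conf[c] + (1 - self.momentum) * v

    def sampling_weights(self, temperature=0.5):
        # lower confidence -> larger weight; temperature controls the tilt
        w = (1.0 - self.conf) ** (1.0 / temperature)
        return w / w.sum()

# usage: bias Copy-Paste / CutMix or pixel sampling toward weak categories
bank = ConfidenceBank(num_classes=19)
bank.update([0, 11, 18], [0.95, 0.40, 0.20])
rare_class = np.random.choice(19, p=bank.sampling_weights())
```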
On the Bias-Variance-Cost Tradeoff of Stochastic Optimization
|
https://papers.nips.cc/paper_files/paper/2021/hash/b986700c627db479a4d9460b75de7222-Abstract.html
|
Yifan Hu, Xin Chen, Niao He
|
https://papers.nips.cc/paper_files/paper/2021/hash/b986700c627db479a4d9460b75de7222-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13317-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b986700c627db479a4d9460b75de7222-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=I5f4e3udn2
|
https://papers.nips.cc/paper_files/paper/2021/file/b986700c627db479a4d9460b75de7222-Supplemental.pdf
|
We consider stochastic optimization when one only has access to biased stochastic oracles of the objective, and obtaining stochastic gradients with low biases comes at high costs. This setting captures a variety of optimization paradigms widely used in machine learning, such as conditional stochastic optimization, bilevel optimization, and distributionally robust optimization. We examine a family of multi-level Monte Carlo (MLMC) gradient methods that exploit a delicate trade-off among the bias, the variance, and the oracle cost. We provide a systematic study of their convergences and total computation complexities for strongly convex, convex, and nonconvex objectives, and demonstrate their superiority over the naive biased stochastic gradient method. Moreover, when applied to conditional stochastic optimization, the MLMC gradient methods significantly improve the best-known sample complexity in the literature.
| null |
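As a sketch of the kind of multilevel estimator studied above, a randomized-truncation variant draws a random level and reweights a telescoping gradient difference so that, in expectation, it matches the low-bias (expensive) gradient while mostly paying the cost of cheap levels. The oracle interface `grad_oracle(x, level)`, the maximum level, and the geometric level distribution are assumptions for illustration.

```python
import numpy as np

def rt_mlmc_gradient(x, grad_oracle, max_level=5, decay=0.5):
    """Randomized multilevel gradient estimate (a sketch).

    grad_oracle(x, level) is assumed to return a stochastic gradient whose
    bias shrinks (and cost grows) with `level`, e.g. by averaging 2**level
    inner samples.  The telescoping construction below is unbiased for the
    level `max_level` gradient.
    """
    levels = np.arange(max_level + 1)
    probs = decay ** levels
    probs /= probs.sum()
    l = np.random.choice(levels, p=probs)
    g = grad_oracle(x, l)
    if l > 0:
        g = g - grad_oracle(x, l - 1)   # telescoping difference
    return g / probs[l]

# toy oracle: gradient of 0.5 * ||x||^2 with level-dependent bias and noise
oracle = lambda x, l: x + 0.5 ** l + 0.1 * np.random.randn(*x.shape)
g = rt_mlmc_gradient(np.ones(3), oracle)
```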
Averaging on the Bures-Wasserstein manifold: dimension-free convergence of gradient descent
|
https://papers.nips.cc/paper_files/paper/2021/hash/b9acb4ae6121c941324b2b1d3fac5c30-Abstract.html
|
Jason Altschuler, Sinho Chewi, Patrik R Gerber, Austin Stromme
|
https://papers.nips.cc/paper_files/paper/2021/hash/b9acb4ae6121c941324b2b1d3fac5c30-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13318-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b9acb4ae6121c941324b2b1d3fac5c30-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=YV3uoawS5KK
|
https://papers.nips.cc/paper_files/paper/2021/file/b9acb4ae6121c941324b2b1d3fac5c30-Supplemental.pdf
|
We study first-order optimization algorithms for computing the barycenter of Gaussian distributions with respect to the optimal transport metric. Although the objective is geodesically non-convex, Riemannian gradient descent empirically converges rapidly, in fact faster than off-the-shelf methods such as Euclidean gradient descent and SDP solvers. This stands in stark contrast to the best-known theoretical results, which depend exponentially on the dimension. In this work, we prove new geodesic convexity results which provide stronger control of the iterates, yielding a dimension-free convergence rate. Our techniques also enable the analysis of two related notions of averaging, the entropically-regularized barycenter and the geometric median, providing the first convergence guarantees for these problems.
| null |
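For illustration, Riemannian gradient descent on the Bures-Wasserstein manifold for centered Gaussians can be sketched as below; with step size 1 it reduces to the classical fixed-point iteration for the Gaussian barycenter. This is not the paper's code, and the initialization, step count, and example covariances are arbitrary choices.

```python
import numpy as np
from scipy.linalg import sqrtm

def bw_barycenter(covs, weights, steps=50, eta=1.0):
    """Riemannian gradient descent for the Bures-Wasserstein barycenter of
    centered Gaussians N(0, covs[b]).  With eta = 1 this reduces to the
    classical fixed-point scheme.
    """
    d = covs[0].shape[0]
    S = np.mean(covs, axis=0)                     # simple initialization
    I = np.eye(d)
    for _ in range(steps):
        S_half = np.real(sqrtm(S))
        S_half_inv = np.linalg.inv(S_half)
        # weighted average of optimal-transport maps from S to each measure
        T = sum(w * S_half_inv @ np.real(sqrtm(S_half @ C @ S_half)) @ S_half_inv
                for w, C in zip(weights, covs))
        M = I + eta * (T - I)
        S = M @ S @ M                             # geodesic-style update
    return S

# two isotropic Gaussians; the barycenter covariance lies "between" them
S_bar = bw_barycenter([np.eye(3), 4.0 * np.eye(3)], weights=[0.5, 0.5])
```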
Reinforcement Learning in Newcomblike Environments
|
https://papers.nips.cc/paper_files/paper/2021/hash/b9ed18a301c9f3d183938c451fa183df-Abstract.html
|
James Bell, Linda Linsefors, Caspar Oesterheld, Joar Skalse
|
https://papers.nips.cc/paper_files/paper/2021/hash/b9ed18a301c9f3d183938c451fa183df-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13319-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b9ed18a301c9f3d183938c451fa183df-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=cx2q4cOBnne
|
https://papers.nips.cc/paper_files/paper/2021/file/b9ed18a301c9f3d183938c451fa183df-Supplemental.pdf
|
Newcomblike decision problems have been studied extensively in the decision theory literature, but they have so far been largely absent in the reinforcement learning literature. In this paper we study value-based reinforcement learning algorithms in the Newcomblike setting, and answer some of the fundamental theoretical questions about the behaviour of such algorithms in these environments. We show that a value-based reinforcement learning agent cannot converge to a policy that is not \emph{ratifiable}, i.e., does not only choose actions that are optimal given that policy. This gives us a powerful tool for reasoning about the limit behaviour of agents -- for example, it lets us show that there are Newcomblike environments in which a reinforcement learning agent cannot converge to any optimal policy. We show that a ratifiable policy always exists in our setting, but that there are cases in which a reinforcement learning agent normally cannot converge to it (and hence cannot converge at all). We also prove several results about the possible limit behaviours of agents in cases where they do not converge to any policy.
| null |
Comprehensive Knowledge Distillation with Causal Intervention
|
https://papers.nips.cc/paper_files/paper/2021/hash/b9f35816f460ab999cbc168c4da26ff3-Abstract.html
|
Xiang Deng, Zhongfei Zhang
|
https://papers.nips.cc/paper_files/paper/2021/hash/b9f35816f460ab999cbc168c4da26ff3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13320-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/b9f35816f460ab999cbc168c4da26ff3-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=ch9qlCdrHD7
|
https://papers.nips.cc/paper_files/paper/2021/file/b9f35816f460ab999cbc168c4da26ff3-Supplemental.pdf
|
Knowledge distillation (KD) addresses model compression by distilling knowledge from a large model (teacher) to a smaller one (student). The existing distillation approaches mainly focus on using different criteria to align the sample representations learned by the student and the teacher, but fail to transfer the class representations. Good class representations can benefit sample representation learning by shaping the sample representation distribution. On the other hand, the existing approaches force the student to fully imitate the teacher while ignoring the fact that the teacher is typically not perfect. Although the teacher has learned rich and powerful representations, it also contains non-negligible biased knowledge, usually induced by the context prior (e.g., background) in the training data. To address these two issues, in this paper, we propose comprehensive, interventional distillation (CID), which captures both sample and class representations from the teacher while removing the bias with causal intervention. Different from the existing literature that uses the softened logits of the teacher as the training targets, CID treats the softened logits as the context information of an image, which is further used to remove the biased knowledge based on causal inference. Keeping the good representations while removing the harmful bias enables CID to generalize better on test data and transfer better across datasets than the existing state-of-the-art approaches, as demonstrated by extensive experiments on several benchmark datasets.
| null |
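For context, the softened-logit objective that CID departs from is the standard distillation loss sketched below. This is the conventional baseline formulation only, not the interventional objective of the paper; the temperature `T` and mixing weight `alpha` are the usual illustrative hyperparameters.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Standard softened-logit distillation: KL between temperature-scaled
    teacher and student distributions, mixed with the usual cross-entropy."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return np.mean(alpha * T ** 2 * kl + (1 - alpha) * ce)

# toy batch of 4 examples over 10 classes
s, t = np.random.randn(4, 10), np.random.randn(4, 10)
loss = kd_loss(s, t, labels=np.array([1, 3, 5, 7]))
```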
Reinforcement Learning with Latent Flow
|
https://papers.nips.cc/paper_files/paper/2021/hash/ba3c5fe1d6d6708b5bffaeb6942b7e04-Abstract.html
|
Wenling Shang, Xiaofei Wang, Aravind Srinivas, Aravind Rajeswaran, Yang Gao, Pieter Abbeel, Misha Laskin
|
https://papers.nips.cc/paper_files/paper/2021/hash/ba3c5fe1d6d6708b5bffaeb6942b7e04-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13321-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/ba3c5fe1d6d6708b5bffaeb6942b7e04-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=YadmOcMC9aa
|
https://papers.nips.cc/paper_files/paper/2021/file/ba3c5fe1d6d6708b5bffaeb6942b7e04-Supplemental.zip
|
Temporal information is essential to learning effective policies with Reinforcement Learning (RL). However, current state-of-the-art RL algorithms either assume that such information is given as part of the state space or, when learning from pixels, use the simple heuristic of frame-stacking to implicitly capture temporal information present in the image observations. This heuristic is in contrast to the current paradigm in video classification architectures, which utilize explicit encodings of temporal information through methods such as optical flow and two-stream architectures to achieve state-of-the-art performance. Inspired by leading video classification architectures, we introduce the Flow of Latents for Reinforcement Learning (Flare), a network architecture for RL that explicitly encodes temporal information through latent vector differences. We show that Flare recovers optimal performance in state-based RL without explicit access to the state velocity, solely with positional state information. Flare is the most sample-efficient model-free pixel-based RL algorithm on the DeepMind Control suite when evaluated on the 500k and 1M step benchmarks across 5 challenging control tasks, and, when used with Rainbow DQN, outperforms the competitive baseline on Atari games at the 100M time step benchmark across 8 challenging games.
| null |
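The core idea above, making temporal information explicit via latent differences, can be sketched as follows. The exact fusion used in Flare may differ; the shapes and the simple concatenation scheme here are illustrative assumptions.

```python
import numpy as np

def flare_features(latents):
    """Fuse per-frame latents with their temporal differences.

    latents : (k, d) array -- k stacked frames, each encoded independently
              into a d-dimensional latent vector.
    """
    diffs = latents[1:] - latents[:-1]          # explicit temporal information
    return np.concatenate([latents.reshape(-1), diffs.reshape(-1)])

# example: 3 stacked frames with 50-dimensional latents -> 250-dim feature
z = np.random.randn(3, 50)
features = flare_features(z)   # fed to the downstream actor/critic heads
```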
Understanding How Encoder-Decoder Architectures Attend
|
https://papers.nips.cc/paper_files/paper/2021/hash/ba3c736667394d5082f86f28aef38107-Abstract.html
|
Kyle Aitken, Vinay Ramasesh, Yuan Cao, Niru Maheswaranathan
|
https://papers.nips.cc/paper_files/paper/2021/hash/ba3c736667394d5082f86f28aef38107-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13322-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/ba3c736667394d5082f86f28aef38107-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=503UwCYEe5
|
https://papers.nips.cc/paper_files/paper/2021/file/ba3c736667394d5082f86f28aef38107-Supplemental.pdf
|
Encoder-decoder networks with attention have proven to be a powerful way to solve many sequence-to-sequence tasks. In these networks, attention aligns encoder and decoder states and is often used for visualizing network behavior. However, the mechanisms used by networks to generate appropriate attention matrices are still mysterious. Moreover, how these mechanisms vary depending on the particular architecture used for the encoder and decoder (recurrent, feed-forward, etc.) is also not well understood. In this work, we investigate how encoder-decoder networks solve different sequence-to-sequence tasks. We introduce a way of decomposing hidden states over a sequence into temporal (independent of input) and input-driven (independent of sequence position) components. This reveals how attention matrices are formed: depending on the task requirements, networks rely more heavily on either the temporal or input-driven components. These findings hold across both recurrent and feed-forward architectures despite their differences in forming the temporal components. Overall, our results provide new insight into the inner workings of attention-based encoder-decoder networks.
| null |
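One simple way to realize the described split of hidden states into temporal and input-driven components is an ANOVA-style decomposition over a batch of sequences, averaging over inputs at each position and over positions for each input token. This is only a sketch of the general idea; the exact construction in the paper may differ, and the shapes and vocabulary size are hypothetical.

```python
import numpy as np

def decompose_hidden_states(H, tokens, vocab_size):
    """ANOVA-style split of hidden states into a temporal component
    (position-dependent, input-averaged) and an input-driven component
    (token-dependent, position-averaged).

    H      : (batch, time, dim) hidden states over a batch of sequences
    tokens : (batch, time) integer input tokens
    """
    grand_mean = H.mean(axis=(0, 1))                 # (dim,)
    temporal = H.mean(axis=0) - grand_mean           # (time, dim)
    input_driven = np.zeros((vocab_size, H.shape[-1]))
    for v in range(vocab_size):
        mask = tokens == v
        if mask.any():
            input_driven[v] = H[mask].mean(axis=0) - grand_mean
    residual = H - grand_mean - temporal[None] - input_driven[tokens]
    return temporal, input_driven, residual

# toy example: 8 sequences of length 12 with 32-dim hidden states, vocab of 5
H = np.random.randn(8, 12, 32)
tok = np.random.randint(0, 5, size=(8, 12))
temporal, input_driven, residual = decompose_hidden_states(H, tok, vocab_size=5)
```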
Latent Execution for Neural Program Synthesis Beyond Domain-Specific Languages
|
https://papers.nips.cc/paper_files/paper/2021/hash/ba3c95c2962d3aab2f6e667932daa3c5-Abstract.html
|
Xinyun Chen, Dawn Song, Yuandong Tian
|
https://papers.nips.cc/paper_files/paper/2021/hash/ba3c95c2962d3aab2f6e667932daa3c5-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13323-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/ba3c95c2962d3aab2f6e667932daa3c5-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=_nRSyha2SP
|
https://papers.nips.cc/paper_files/paper/2021/file/ba3c95c2962d3aab2f6e667932daa3c5-Supplemental.pdf
|
Program synthesis from input-output (IO) examples has been a long-standing challenge. While recent works demonstrated limited success on domain-specific languages (DSL), it remains highly challenging to apply them to real-world programming languages, such as C. Due to complicated syntax and token variation, there are three major challenges: (1) unlike many DSLs, programs in languages like C need to be compiled first and are not executed via interpreters; (2) the program search space grows exponentially when the syntax and semantics of the programming language become more complex; and (3) collecting a large-scale dataset of real-world programs is non-trivial. As a first step to address these challenges, we propose LaSynth and show its efficacy in a restricted-C domain (i.e., C code with tens of tokens, with sequential, branching, loop, and simple arithmetic operations but no library calls). More specifically, LaSynth learns a latent representation to approximate the execution of partially generated programs, even if they are incomplete in syntax (addressing (1)). The learned execution significantly improves the performance of next token prediction over existing approaches, facilitating search (addressing (2)). Finally, once trained with randomly generated ground-truth programs and their IO pairs, LaSynth can synthesize more concise programs that resemble human-written code. Furthermore, retraining our model with these synthesized programs yields better performance with fewer samples for both Karel and C program synthesis, indicating the promise of leveraging the learned program synthesizer to improve the dataset quality for input-output program synthesis (addressing (3)). When evaluated on whether the program execution outputs match the IO pairs, LaSynth achieves 55.2% accuracy on generating simple C code with tens of tokens including loops and branches, outperforming existing approaches without executors by around 20%.
| null |
Two steps to risk sensitivity
|
https://papers.nips.cc/paper_files/paper/2021/hash/ba530cdf0a884348613f2aaa3a5ba5e8-Abstract.html
|
Christopher Gagne, Peter Dayan
|
https://papers.nips.cc/paper_files/paper/2021/hash/ba530cdf0a884348613f2aaa3a5ba5e8-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13324-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/ba530cdf0a884348613f2aaa3a5ba5e8-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=HnLDt9v6Q-j
|
https://papers.nips.cc/paper_files/paper/2021/file/ba530cdf0a884348613f2aaa3a5ba5e8-Supplemental.pdf
|
Distributional reinforcement learning (RL) – in which agents learn about all the possible long-term consequences of their actions, and not just the expected value – is of great recent interest. One of the most important affordances of a distributional view is facilitating a modern, measured approach to risk when outcomes are not completely certain. By contrast, psychological and neuroscientific investigations into decision making under risk have utilized a variety of more venerable theoretical models, such as prospect theory, that lack axiomatically desirable properties such as coherence. Here, we consider a particularly relevant risk measure for modeling human and animal planning, called conditional value-at-risk (CVaR), which quantifies worst-case outcomes (e.g., vehicle accidents or predation). We first adopt a conventional distributional approach to CVaR in a sequential setting and reanalyze the choices of human decision-makers in the well-known two-step task, revealing substantial risk aversion that had been lurking under stickiness and perseveration. We then consider a further critical property of risk sensitivity, namely time consistency, showing alternatives to this form of CVaR that enjoy this desirable characteristic. We use simulations to examine settings in which the various forms differ in ways that have implications for human and animal planning and behavior.
| null |
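For reference, a common empirical estimator of the CVaR risk measure discussed above simply averages the worst $\alpha$-fraction of sampled outcomes. The sign convention (low outcomes are bad, as for returns) and the simple sorted-sample estimator below are illustrative choices.

```python
import numpy as np

def empirical_cvar(samples, alpha=0.1):
    """Average of the worst alpha-fraction of outcomes (low = bad), i.e. an
    empirical estimate of E[X | X <= VaR_alpha(X)]."""
    x = np.sort(np.asarray(samples))
    k = max(1, int(np.ceil(alpha * len(x))))
    return x[:k].mean()

returns = np.random.randn(10_000)              # hypothetical return samples
worst_case = empirical_cvar(returns, alpha=0.05)
```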