title | url | authors | detail_url | tags | Bibtex | Paper | Reviews And Public Comment » | Supplemental | abstract | Supplemental Errata
---|---|---|---|---|---|---|---|---|---|---|
Decentralized Q-learning in Zero-sum Markov Games
|
https://papers.nips.cc/paper_files/paper/2021/hash/985e9a46e10005356bbaf194249f6856-Abstract.html
|
Muhammed Sayin, Kaiqing Zhang, David Leslie, Tamer Basar, Asuman Ozdaglar
|
https://papers.nips.cc/paper_files/paper/2021/hash/985e9a46e10005356bbaf194249f6856-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13024-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/985e9a46e10005356bbaf194249f6856-Paper.pdf
|
https://openreview.net/forum?id=nhkbYh30Tl
|
https://papers.nips.cc/paper_files/paper/2021/file/985e9a46e10005356bbaf194249f6856-Supplemental.pdf
|
We study multi-agent reinforcement learning (MARL) in infinite-horizon discounted zero-sum Markov games. We focus on the practical but challenging setting of decentralized MARL, where agents make decisions without coordination by a centralized controller, but only based on their own payoffs and locally executed actions. The agents need not observe the opponent's actions or payoffs, may even be oblivious to the presence of the opponent, and need not be aware of the zero-sum structure of the underlying game, a setting also referred to as radically uncoupled in the literature on learning in games. In this paper, we develop a radically uncoupled Q-learning dynamics that is both rational and convergent: the learning dynamics converges to the best response to the opponent's strategy when the opponent follows an asymptotically stationary strategy; when both agents adopt the learning dynamics, they converge to the Nash equilibrium of the game. The key challenge in this decentralized setting is the non-stationarity of the environment from an agent's perspective, since both her own payoffs and the system evolution depend on the actions of other agents, and each agent adapts her policies simultaneously and independently. To address this issue, we develop a two-timescale learning dynamics where each agent updates her local Q-function and value function estimates concurrently, with the latter happening at a slower timescale.
| null |
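The two-timescale update described in the Decentralized Q-learning abstract above can be illustrated with a minimal sketch. Everything below (the softmax smoothing of the local best response, the step-size schedule, and the tabular interface) is an assumption for illustration, not the authors' exact update rule.

```python
import numpy as np

def two_timescale_step(Q, V, s, a, r, s_next, alpha, beta, gamma=0.99, tau=0.1):
    """One (assumed) two-timescale update for a single agent.

    Q: local Q-table of shape (num_states, num_actions)
    V: local value estimate of shape (num_states,)
    alpha, beta: fast and slow step sizes (beta << alpha for the slower timescale)
    """
    # Fast timescale: local Q-function moves toward r + gamma * V(s').
    Q[s, a] += alpha * (r + gamma * V[s_next] - Q[s, a])

    # Slow timescale: the value estimate tracks a smoothed best response to the
    # local Q-values (softmax with temperature tau stands in for the smoothed
    # best response used in the analysis).
    pi = np.exp(Q[s] / tau) / np.sum(np.exp(Q[s] / tau))
    V[s] += beta * (pi @ Q[s] - V[s])
    return Q, V
```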
Fast Certified Robust Training with Short Warmup
|
https://papers.nips.cc/paper_files/paper/2021/hash/988f9153ac4fd966ea302dd9ab9bae15-Abstract.html
|
Zhouxing Shi, Yihan Wang, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh
|
https://papers.nips.cc/paper_files/paper/2021/hash/988f9153ac4fd966ea302dd9ab9bae15-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13025-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/988f9153ac4fd966ea302dd9ab9bae15-Paper.pdf
|
https://openreview.net/forum?id=Qh-fwFsrEz
|
https://papers.nips.cc/paper_files/paper/2021/file/988f9153ac4fd966ea302dd9ab9bae15-Supplemental.pdf
|
Recently, bound propagation based certified robust training methods have been proposed for training neural networks with certifiable robustness guarantees. Although state-of-the-art (SOTA) methods, including interval bound propagation (IBP) and CROWN-IBP, have per-batch training complexity similar to standard neural network training, they usually use a long warmup schedule with hundreds or thousands of epochs to reach SOTA performance and are thus still costly. In this paper, we identify two important issues in existing methods, namely exploded bounds at initialization and an imbalance in ReLU activation states, and improve IBP training. These two issues make certified training difficult and unstable, which is why long warmup schedules were needed in prior works. To mitigate these issues and conduct faster certified training with a shorter warmup, we propose three improvements based on IBP training: 1) We derive a new weight initialization method for IBP training; 2) We propose to fully add Batch Normalization (BN) to each layer in the model, since we find BN can reduce the imbalance in ReLU activation states; 3) We also design regularization to explicitly tighten certified bounds and balance ReLU activation states during warmup. We are able to obtain 65.03% verified error on CIFAR-10 ($\epsilon=\frac{8}{255}$) and 82.36% verified error on TinyImageNet ($\epsilon=\frac{1}{255}$) using very short training schedules (160 and 80 total epochs, respectively), outperforming the literature SOTA trained with hundreds or thousands of epochs under the same network architecture. The code is available at https://github.com/shizhouxing/Fast-Certified-Robust-Training.
| null |
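For context on the bound-propagation machinery the Fast Certified Robust Training abstract refers to, here is a minimal interval bound propagation (IBP) sketch for an affine layer followed by ReLU. It illustrates only the standard IBP rule, not the paper's initialization, BN placement, or regularizers.

```python
import numpy as np

def ibp_affine(lb, ub, W, b):
    """Propagate elementwise input bounds [lb, ub] through x -> W @ x + b."""
    center, radius = (ub + lb) / 2.0, (ub - lb) / 2.0
    center_out = W @ center + b
    radius_out = np.abs(W) @ radius
    return center_out - radius_out, center_out + radius_out

def ibp_relu(lb, ub):
    """ReLU is monotone, so bounds pass through elementwise."""
    return np.maximum(lb, 0.0), np.maximum(ub, 0.0)

# Example: bounds for one layer under an L_inf perturbation of radius eps.
x, eps = np.array([0.5, -0.2]), 8 / 255
W, b = np.array([[1.0, -2.0], [0.3, 0.7]]), np.zeros(2)
lb, ub = ibp_relu(*ibp_affine(x - eps, x + eps, W, b))
```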
Vector-valued Distance and Gyrocalculus on the Space of Symmetric Positive Definite Matrices
|
https://papers.nips.cc/paper_files/paper/2021/hash/98c39996bf1543e974747a2549b3107c-Abstract.html
|
Federico López, Beatrice Pozzetti, Steve Trettel, Michael Strube, Anna Wienhard
|
https://papers.nips.cc/paper_files/paper/2021/hash/98c39996bf1543e974747a2549b3107c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13026-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/98c39996bf1543e974747a2549b3107c-Paper.pdf
|
https://openreview.net/forum?id=9uXILaIam0
|
https://papers.nips.cc/paper_files/paper/2021/file/98c39996bf1543e974747a2549b3107c-Supplemental.pdf
|
We propose the use of the vector-valued distance to compute distances and extract geometric information from the manifold of symmetric positive definite matrices (SPD), and develop gyrovector calculus, constructing analogs of vector space operations in this curved space. We implement these operations and showcase their versatility in the tasks of knowledge graph completion, item recommendation, and question answering. In experiments, the SPD models outperform their equivalents in Euclidean and hyperbolic space. The vector-valued distance allows us to visualize embeddings, showing that the models learn to disentangle representations of positive samples from negative ones.
| null |
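As a concrete illustration of the vector-valued distance on the SPD manifold described in the entry above, the sketch below computes the sorted log-eigenvalues of $A^{-1/2} B A^{-1/2}$; the NumPy/SciPy implementation and the final norm reduction are illustrative choices, not the paper's code.

```python
import numpy as np
from scipy.linalg import sqrtm, eigvalsh

def vector_valued_distance(A, B):
    """Vector-valued distance between SPD matrices A and B:
    the sorted logarithms of the eigenvalues of A^{-1/2} B A^{-1/2}."""
    A_inv_sqrt = np.linalg.inv(np.real(sqrtm(A)))
    M = A_inv_sqrt @ B @ A_inv_sqrt            # symmetric positive definite
    return np.sort(np.log(eigvalsh(M)))[::-1]

def affine_invariant_distance(A, B):
    # The usual scalar Riemannian distance is the 2-norm of the vector-valued one.
    return np.linalg.norm(vector_valued_distance(A, B))
```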
Improved Transformer for High-Resolution GANs
|
https://papers.nips.cc/paper_files/paper/2021/hash/98dce83da57b0395e163467c9dae521b-Abstract.html
|
Long Zhao, Zizhao Zhang, Ting Chen, Dimitris Metaxas, Han Zhang
|
https://papers.nips.cc/paper_files/paper/2021/hash/98dce83da57b0395e163467c9dae521b-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13027-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/98dce83da57b0395e163467c9dae521b-Paper.pdf
|
https://openreview.net/forum?id=zmbiQmdtg9
|
https://papers.nips.cc/paper_files/paper/2021/file/98dce83da57b0395e163467c9dae521b-Supplemental.pdf
|
Attention-based models, exemplified by the Transformer, can effectively model long-range dependencies, but suffer from the quadratic complexity of the self-attention operation, making them difficult to adopt for high-resolution image generation based on Generative Adversarial Networks (GANs). In this paper, we introduce two key ingredients to the Transformer to address this challenge. First, in low-resolution stages of the generative process, standard global self-attention is replaced with the proposed multi-axis blocked self-attention, which allows efficient mixing of local and global attention. Second, in high-resolution stages, we drop self-attention while only keeping multi-layer perceptrons reminiscent of the implicit neural function. To further improve the performance, we introduce an additional self-modulation component based on cross-attention. The resulting model, denoted HiT, has nearly linear computational complexity with respect to the image size and thus directly scales to synthesizing high-definition images. We show in the experiments that the proposed HiT achieves state-of-the-art FID scores of 30.83 and 2.95 on unconditional ImageNet $128 \times 128$ and FFHQ $256 \times 256$, respectively, with reasonable throughput. We believe the proposed HiT is an important milestone for GAN generators that are completely free of convolutions. Our code is made publicly available at https://github.com/google-research/hit-gan.
| null |
Learning High-Precision Bounding Box for Rotated Object Detection via Kullback-Leibler Divergence
|
https://papers.nips.cc/paper_files/paper/2021/hash/98f13708210194c475687be6106a3b84-Abstract.html
|
Xue Yang, Xiaojiang Yang, Jirui Yang, Qi Ming, Wentao Wang, Qi Tian, Junchi Yan
|
https://papers.nips.cc/paper_files/paper/2021/hash/98f13708210194c475687be6106a3b84-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13028-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/98f13708210194c475687be6106a3b84-Paper.pdf
|
https://openreview.net/forum?id=pmWeMLm411_
|
https://papers.nips.cc/paper_files/paper/2021/file/98f13708210194c475687be6106a3b84-Supplemental.pdf
|
Existing rotated object detectors are mostly inherited from the horizontal detection paradigm, as the latter has evolved into a well-developed area. However, these detectors struggle to perform well in high-precision detection due to the limitations of current regression loss designs, especially for objects with large aspect ratios. Taking the perspective that horizontal detection is a special case of rotated object detection, in this paper we are motivated to change the design of the rotation regression loss from an induction paradigm to a deduction methodology, in terms of the relation between rotated and horizontal detection. We show that one essential challenge is how to modulate the coupled parameters in the rotation regression loss, such that the estimated parameters can influence each other during the dynamic joint optimization in an adaptive and synergetic way. Specifically, we first convert the rotated bounding box into a 2-D Gaussian distribution, and then calculate the Kullback-Leibler Divergence (KLD) between the Gaussian distributions as the regression loss. By analyzing the gradient of each parameter, we show that KLD (and its derivatives) can dynamically adjust the parameter gradients according to the characteristics of the object. For instance, it will adjust the importance (gradient weight) of the angle parameter according to the aspect ratio. This mechanism can be vital for high-precision detection, as a slight angle error would cause a serious accuracy drop for objects with large aspect ratios. More importantly, we prove that KLD is scale invariant. We further show that the KLD loss can be degenerated into the popular Ln-norm loss for horizontal detection. Experimental results on seven datasets using different detectors show its consistent superiority, and the code is available at https://github.com/yangxue0827/RotationDetection.
| null |
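The box-to-Gaussian conversion and KLD regression loss described in the abstract above can be sketched as follows; the covariance parameterization (half-width/half-height as standard deviations) and the absence of the paper's final loss transform are simplifying assumptions.

```python
import numpy as np

def box_to_gaussian(cx, cy, w, h, theta):
    """Rotated box (center, size, angle in radians) -> 2-D Gaussian (mu, Sigma)."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    S = np.diag([(w / 2.0) ** 2, (h / 2.0) ** 2])
    return np.array([cx, cy]), R @ S @ R.T

def gaussian_kld(mu_p, sig_p, mu_q, sig_q):
    """KL( N(mu_p, sig_p) || N(mu_q, sig_q) ) for 2-D Gaussians."""
    sig_q_inv = np.linalg.inv(sig_q)
    diff = (mu_q - mu_p).reshape(2, 1)
    term_trace = np.trace(sig_q_inv @ sig_p)
    term_maha = float(diff.T @ sig_q_inv @ diff)
    term_logdet = np.log(np.linalg.det(sig_q) / np.linalg.det(sig_p))
    return 0.5 * (term_trace + term_maha - 2.0 + term_logdet)
```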
On Locality of Local Explanation Models
|
https://papers.nips.cc/paper_files/paper/2021/hash/995665640dc319973d3173a74a03860c-Abstract.html
|
Sahra Ghalebikesabi, Lucile Ter-Minassian, Karla DiazOrdaz, Chris C Holmes
|
https://papers.nips.cc/paper_files/paper/2021/hash/995665640dc319973d3173a74a03860c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13029-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/995665640dc319973d3173a74a03860c-Paper.pdf
|
https://openreview.net/forum?id=UKoV0-BamX4
|
https://papers.nips.cc/paper_files/paper/2021/file/995665640dc319973d3173a74a03860c-Supplemental.pdf
|
Shapley values provide model-agnostic feature attributions for a model outcome at a particular instance by simulating feature absence under a global population distribution. The use of a global population can lead to potentially misleading results when local model behaviour is of interest. Hence we consider the formulation of neighbourhood reference distributions that improve the local interpretability of Shapley values. By doing so, we find that the Nadaraya-Watson estimator, a well-studied kernel regressor, can be expressed as a self-normalised importance sampling estimator. Empirically, we observe that Neighbourhood Shapley values identify meaningful sparse feature relevance attributions that provide insight into local model behaviour, complementing conventional Shapley analysis. They also increase on-manifold explainability and robustness to the construction of adversarial classifiers.
| null |
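Since the entry above hinges on the Nadaraya-Watson estimator, a minimal kernel-regression sketch is included below; the Gaussian kernel and bandwidth are illustrative choices, and the self-normalized importance-sampling form is visible in the weight normalization.

```python
import numpy as np

def nadaraya_watson(x_query, X, y, bandwidth=1.0):
    """Kernel regression estimate f(x_query) = sum_i w_i * y_i / sum_i w_i."""
    sq_dists = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-sq_dists / (2.0 * bandwidth ** 2))   # unnormalized kernel weights
    return np.sum(w * y) / np.sum(w)                 # self-normalized weighting
```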
FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling
|
https://papers.nips.cc/paper_files/paper/2021/hash/995693c15f439e3d189b06e89d145dd5-Abstract.html
|
Bowen Zhang, Yidong Wang, Wenxin Hou, HAO WU, Jindong Wang, Manabu Okumura, Takahiro Shinozaki
|
https://papers.nips.cc/paper_files/paper/2021/hash/995693c15f439e3d189b06e89d145dd5-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13030-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/995693c15f439e3d189b06e89d145dd5-Paper.pdf
|
https://openreview.net/forum?id=3qMwV98zLIk
|
https://papers.nips.cc/paper_files/paper/2021/file/995693c15f439e3d189b06e89d145dd5-Supplemental.pdf
|
The recently proposed FixMatch achieved state-of-the-art results on most semi-supervised learning (SSL) benchmarks. However, like other modern SSL algorithms, FixMatch uses a pre-defined constant threshold for all classes to select the unlabeled data that contribute to training, thus failing to consider the different learning statuses and learning difficulties of different classes. To address this issue, we propose Curriculum Pseudo Labeling (CPL), a curriculum learning approach to leverage unlabeled data according to the model's learning status. The core of CPL is to flexibly adjust thresholds for different classes at each time step to let informative unlabeled data and their pseudo labels pass. CPL does not introduce additional parameters or computations (forward or backward propagation). We apply CPL to FixMatch and call our improved algorithm FlexMatch. FlexMatch achieves state-of-the-art performance on a variety of SSL benchmarks, with especially strong performance when the labeled data are extremely limited or when the task is challenging. For example, FlexMatch achieves 13.96% and 18.96% error rate reductions over FixMatch on the CIFAR-100 and STL-10 datasets respectively, when there are only 4 labels per class. CPL also significantly boosts the convergence speed, e.g., FlexMatch can use only 1/5 of the training time of FixMatch to achieve even better performance. Furthermore, we show that CPL can be easily adapted to other SSL algorithms and remarkably improve their performance. We open-source our code at https://github.com/TorchSSL/TorchSSL.
| null |
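A minimal sketch of the curriculum-style per-class thresholding described in the FlexMatch entry above; normalizing by the best-learned class and omitting the paper's warm-up handling are assumptions for illustration.

```python
import numpy as np

def flexible_thresholds(pred_probs, base_threshold=0.95):
    """pred_probs: (N, C) model probabilities on unlabeled data.
    Returns a per-class threshold scaled by each class's estimated learning status."""
    conf = pred_probs.max(axis=1)
    pseudo = pred_probs.argmax(axis=1)
    num_classes = pred_probs.shape[1]
    # Learning effect: how many confident pseudo-labels each class has collected.
    sigma = np.array([np.sum((pseudo == c) & (conf > base_threshold))
                      for c in range(num_classes)])
    beta = sigma / max(sigma.max(), 1)          # normalize by the best-learned class
    return beta * base_threshold                # harder classes get lower thresholds
```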
Relative Flatness and Generalization
|
https://papers.nips.cc/paper_files/paper/2021/hash/995f5e03890b029865f402e83a81c29d-Abstract.html
|
Henning Petzka, Michael Kamp, Linara Adilova, Cristian Sminchisescu, Mario Boley
|
https://papers.nips.cc/paper_files/paper/2021/hash/995f5e03890b029865f402e83a81c29d-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13031-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/995f5e03890b029865f402e83a81c29d-Paper.pdf
|
https://openreview.net/forum?id=sygvo7ctb_
|
https://papers.nips.cc/paper_files/paper/2021/file/995f5e03890b029865f402e83a81c29d-Supplemental.pdf
|
Flatness of the loss curve is conjectured to be connected to the generalization ability of machine learning models, in particular neural networks. While it has been empirically observed that flatness measures consistently correlate strongly with generalization, it is still an open theoretical problem why and under which circumstances flatness is connected to generalization, in particular in light of reparameterizations that change certain flatness measures but leave generalization unchanged. We investigate the connection between flatness and generalization by relating it to the interpolation from representative data, deriving notions of representativeness, and feature robustness. The notions allow us to rigorously connect flatness and generalization and to identify conditions under which the connection holds. Moreover, they give rise to a novel, but natural relative flatness measure that correlates strongly with generalization, simplifies to ridge regression for ordinary least squares, and solves the reparameterization issue.
| null |
The Image Local Autoregressive Transformer
|
https://papers.nips.cc/paper_files/paper/2021/hash/9996535e07258a7bbfd8b132435c5962-Abstract.html
|
Chenjie Cao, Yuxin Hong, Xiang Li, Chengrong Wang, Chengming Xu, Yanwei Fu, Xiangyang Xue
|
https://papers.nips.cc/paper_files/paper/2021/hash/9996535e07258a7bbfd8b132435c5962-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13032-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9996535e07258a7bbfd8b132435c5962-Paper.pdf
|
https://openreview.net/forum?id=6mEWjDYJeE-
|
https://papers.nips.cc/paper_files/paper/2021/file/9996535e07258a7bbfd8b132435c5962-Supplemental.pdf
|
Recently, AutoRegressive (AR) models for whole-image generation, empowered by transformers, have achieved comparable or even better performance than Generative Adversarial Networks (GANs). Unfortunately, directly applying such AR models to edit/change local image regions may suffer from the problems of missing global information, slow inference speed, and information leakage of local guidance. To address these limitations, we propose a novel model -- the image Local Autoregressive Transformer (iLAT) -- to better facilitate locally guided image synthesis. Our iLAT learns novel local discrete representations via the newly proposed local autoregressive (LA) transformer with an attention mask and convolution mechanism. Thus iLAT can efficiently synthesize local image regions from key guidance information. Our iLAT is evaluated on various locally guided image synthesis tasks, such as pose-guided person image synthesis and face editing. Both quantitative and qualitative results show the efficacy of our model.
| null |
Towards Multi-Grained Explainability for Graph Neural Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/99bcfcd754a98ce89cb86f73acc04645-Abstract.html
|
Xiang Wang, Yingxin Wu, An Zhang, Xiangnan He, Tat-Seng Chua
|
https://papers.nips.cc/paper_files/paper/2021/hash/99bcfcd754a98ce89cb86f73acc04645-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13033-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/99bcfcd754a98ce89cb86f73acc04645-Paper.pdf
|
https://openreview.net/forum?id=e5vrkfc5aau
| null |
When a graph neural network (GNN) makes a prediction, one raises a question about explainability: “Which fraction of the input graph is most influential to the model’s decision?” Producing an answer requires understanding the model’s inner workings in general and emphasizing the insights on the decision for the instance at hand. Nonetheless, most current approaches focus only on one aspect: (1) local explainability, which explains each instance independently and thus hardly exhibits class-wise patterns; or (2) global explainability, which systematizes the globally important patterns but might be trivial in the local context. This dichotomy greatly limits the flexibility and effectiveness of explainers. A performant paradigm for multi-grained explainability has until now been lacking and is thus a focus of our work. In this work, we exploit the pre-training and fine-tuning idea to develop our explainer and generate multi-grained explanations. Specifically, the pre-training phase accounts for the contrastivity among different classes, so as to highlight the class-wise characteristics from a global view; afterwards, the fine-tuning phase adapts the explanations to the local context. Experiments on both synthetic and real-world datasets show the superiority of our explainer, in terms of AUC on explaining graph classification, over the leading baselines. Our code and datasets are available at https://github.com/Wuyxin/ReFine.
| null |
Behavior From the Void: Unsupervised Active Pre-Training
|
https://papers.nips.cc/paper_files/paper/2021/hash/99bf3d153d4bf67d640051a1af322505-Abstract.html
|
Hao Liu, Pieter Abbeel
|
https://papers.nips.cc/paper_files/paper/2021/hash/99bf3d153d4bf67d640051a1af322505-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13034-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/99bf3d153d4bf67d640051a1af322505-Paper.pdf
|
https://openreview.net/forum?id=fIn4wLS2XzU
|
https://papers.nips.cc/paper_files/paper/2021/file/99bf3d153d4bf67d640051a1af322505-Supplemental.pdf
|
We introduce a new unsupervised pre-training method for reinforcement learning called APT, which stands for Active Pre-Training. APT learns behaviors and representations by actively searching for novel states in reward-free environments. The key novel idea is to explore the environment by maximizing a non-parametric entropy computed in an abstract representation space, which avoids challenging density modeling and consequently allows our approach to scale much better in environments that have high-dimensional observations (e.g., image observations). We empirically evaluate APT by exposing task-specific reward after a long unsupervised pre-training phase. In Atari games, APT achieves human-level performance on 12 games and obtains highly competitive performance compared to canonical fully supervised RL algorithms. On DMControl suite, APT beats all baselines in terms of asymptotic performance and data efficiency and dramatically improves performance on tasks that are extremely difficult to train from scratch.
| null |
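The particle-based entropy bonus described in the APT entry above can be sketched as a k-nearest-neighbor reward in representation space; the constants, neighbor count, and averaging over neighbors are assumptions for illustration.

```python
import numpy as np

def apt_intrinsic_reward(z_batch, k=12, c=1.0):
    """Nonparametric entropy-style reward: each state's reward grows with the
    average distance to its k nearest neighbors in the representation space.
    z_batch: (N, D) encoded observations from the sampled batch."""
    dists = np.linalg.norm(z_batch[:, None, :] - z_batch[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)              # exclude self-distance
    knn = np.sort(dists, axis=1)[:, :k]          # k nearest neighbors per point
    return np.log(c + knn.mean(axis=1))          # larger in sparsely visited regions
```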
Autonomous Reinforcement Learning via Subgoal Curricula
|
https://papers.nips.cc/paper_files/paper/2021/hash/99c83c904d0d64fbef50d919a5c66a80-Abstract.html
|
Archit Sharma, Abhishek Gupta, Sergey Levine, Karol Hausman, Chelsea Finn
|
https://papers.nips.cc/paper_files/paper/2021/hash/99c83c904d0d64fbef50d919a5c66a80-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13035-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/99c83c904d0d64fbef50d919a5c66a80-Paper.pdf
|
https://openreview.net/forum?id=ELU8Bu1Z9w1
|
https://papers.nips.cc/paper_files/paper/2021/file/99c83c904d0d64fbef50d919a5c66a80-Supplemental.pdf
|
Reinforcement learning (RL) promises to enable autonomous acquisition of complex behaviors for diverse agents. However, the success of current reinforcement learning algorithms is predicated on an often under-emphasised requirement -- each trial needs to start from a fixed initial state distribution. Unfortunately, resetting the environment to its initial state after each trial requires a substantial amount of human supervision and extensive instrumentation of the environment, which defeats the goal of autonomous acquisition of complex behaviors. In this work, we propose Value-accelerated Persistent Reinforcement Learning (VaPRL), which generates a curriculum of initial states such that the agent can bootstrap on the success of easier tasks to efficiently learn harder tasks. The agent also learns to reach the initial states proposed by the curriculum, minimizing the reliance on human interventions in the learning process. We observe that VaPRL reduces the interventions required by three orders of magnitude compared to episodic RL, while outperforming prior state-of-the-art methods for reset-free RL both in terms of sample efficiency and asymptotic performance on a variety of simulated robotics problems.
| null |
Statistically and Computationally Efficient Linear Meta-representation Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/99e7e6ce097324aceb45f98299ceb621-Abstract.html
|
Kiran K. Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh
|
https://papers.nips.cc/paper_files/paper/2021/hash/99e7e6ce097324aceb45f98299ceb621-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13036-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/99e7e6ce097324aceb45f98299ceb621-Paper.pdf
|
https://openreview.net/forum?id=48LtSAkxjiX
|
https://papers.nips.cc/paper_files/paper/2021/file/99e7e6ce097324aceb45f98299ceb621-Supplemental.pdf
|
In typical few-shot learning, each task is not equipped with enough data to be learned in isolation. To cope with such data scarcity, meta-representation learning methods train across many related tasks to find a shared (lower-dimensional) representation of the data on which all tasks can be solved accurately. It is hypothesized that any new arriving task can be rapidly trained on this low-dimensional representation using only a few samples. Despite the practical successes of this approach, its statistical and computational properties are less understood. Moreover, the prescribed algorithms in these studies bear little resemblance to those used in practice, or they are computationally intractable. To understand and explain the success of popular meta-representation learning approaches such as ANIL, MetaOptNet, R2D2, and OML, we study an alternating gradient-descent minimization (AltMinGD) method (and its variant, alternating minimization (AltMin)), which underlies the aforementioned methods. For the simple but canonical setting of shared linear representations, we show that AltMinGD achieves nearly-optimal estimation error, requiring only $\Omega(\mathrm{polylog}\,d)$ samples per task. This agrees with the observed efficacy of this algorithm in practical few-shot learning scenarios.
| null |
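A minimal sketch of the alternating scheme analyzed in the entry above for shared linear representations: per-task heads solved in closed form, followed by a gradient step on the shared representation. The step size, the QR re-orthonormalization, and the task interface are illustrative assumptions.

```python
import numpy as np

def altmin_gd_step(U, tasks, lr=0.01):
    """One round of alternating minimization / gradient descent.
    U: (d, k) shared representation; tasks: list of (X_i, y_i) with X_i of shape (n_i, d)."""
    grad = np.zeros_like(U)
    for X, y in tasks:
        Z = X @ U
        w, *_ = np.linalg.lstsq(Z, y, rcond=None)   # per-task head in closed form
        residual = Z @ w - y
        grad += X.T @ np.outer(residual, w)         # grad of 0.5*||X U w - y||^2 w.r.t. U
    U = U - lr * grad / len(tasks)
    U, _ = np.linalg.qr(U)                          # keep the representation well-conditioned
    return U
```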
Decentralized Learning in Online Queuing Systems
|
https://papers.nips.cc/paper_files/paper/2021/hash/99ef04eb612baf0e86671a5109e22154-Abstract.html
|
Flore Sentenac, Etienne Boursier, Vianney Perchet
|
https://papers.nips.cc/paper_files/paper/2021/hash/99ef04eb612baf0e86671a5109e22154-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13037-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/99ef04eb612baf0e86671a5109e22154-Paper.pdf
|
https://openreview.net/forum?id=BuoTowxp-9
|
https://papers.nips.cc/paper_files/paper/2021/file/99ef04eb612baf0e86671a5109e22154-Supplemental.pdf
|
Motivated by packet routing in computer networks, online queuing systems are composed of queues receiving packets at different rates. Repeatedly, they send packets to servers, each of which treats at most one packet at a time. In the centralized case, the number of accumulated packets remains bounded (i.e., the system is stable) as long as the ratio between service rates and arrival rates is larger than $1$. In the decentralized case, individual no-regret strategies ensure stability when this ratio is larger than $2$. Yet, myopically minimizing regret disregards the long-term effects due to the carryover of packets to further rounds. On the other hand, minimizing long-term costs leads to stable Nash equilibria as soon as the ratio exceeds $\frac{e}{e-1}$. Stability with decentralized learning strategies for a ratio below $2$ was a major remaining question. We first argue that for ratios up to $2$, cooperation is required for stability of learning strategies, as selfish minimization of policy regret, a patient notion of regret, might indeed still be unstable in this case. We therefore consider cooperative queues and propose the first decentralized learning algorithm that guarantees stability of the system as long as the ratio of rates is larger than $1$, thus reaching performance comparable to centralized strategies.
| null |
Explainable Semantic Space by Grounding Language to Vision with Cross-Modal Contrastive Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/9a1335ef5ffebb0de9d089c4182e4868-Abstract.html
|
Yizhen Zhang, Minkyu Choi, Kuan Han, Zhongming Liu
|
https://papers.nips.cc/paper_files/paper/2021/hash/9a1335ef5ffebb0de9d089c4182e4868-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13038-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9a1335ef5ffebb0de9d089c4182e4868-Paper.pdf
|
https://openreview.net/forum?id=ljOg2HIBDGH
|
https://papers.nips.cc/paper_files/paper/2021/file/9a1335ef5ffebb0de9d089c4182e4868-Supplemental.pdf
|
In natural language processing, most models try to learn semantic representations merely from texts. The learned representations encode the “distributional semantics” but fail to connect to any knowledge about the physical world. In contrast, humans learn language by grounding concepts in perception and action and the brain encodes “grounded semantics” for cognition. Inspired by this notion and recent work in vision-language learning, we design a two-stream model for grounding language learning in vision. The model includes a VGG-based visual stream and a Bert-based language stream. The two streams merge into a joint representational space. Through cross-modal contrastive learning, the model first learns to align visual and language representations with the MS COCO dataset. The model further learns to retrieve visual objects with language queries through a cross-modal attention module and to infer the visual relations between the retrieved objects through a bilinear operator with the Visual Genome dataset. After training, the model’s language stream is a stand-alone language model capable of embedding concepts in a visually grounded semantic space. This semantic space manifests principal dimensions explainable with human intuition and neurobiological knowledge. Word embeddings in this semantic space are predictive of human-defined norms of semantic features and are segregated into perceptually distinctive clusters. Furthermore, the visually grounded language model also enables compositional language understanding based on visual knowledge and multimodal image search with queries based on images, texts, or their combinations.
| null |
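The cross-modal contrastive alignment stage described in the entry above is, at its core, a symmetric InfoNCE objective over matched image-caption pairs; the sketch below shows that generic objective (the temperature and normalization choices are assumptions), not the paper's full two-stream model.

```python
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of matched (image, caption) embeddings."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature     # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)      # match each image to its caption
    loss_t2i = F.cross_entropy(logits.t(), targets)  # and each caption to its image
    return 0.5 * (loss_i2t + loss_t2i)
```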
BulletTrain: Accelerating Robust Neural Network Training via Boundary Example Mining
|
https://papers.nips.cc/paper_files/paper/2021/hash/9a1756fd0c741126d7bbd4b692ccbd91-Abstract.html
|
Weizhe Hua, Yichi Zhang, Chuan Guo, Zhiru Zhang, G. Edward Suh
|
https://papers.nips.cc/paper_files/paper/2021/hash/9a1756fd0c741126d7bbd4b692ccbd91-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13039-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9a1756fd0c741126d7bbd4b692ccbd91-Paper.pdf
|
https://openreview.net/forum?id=eAPrmf2g8f2
|
https://papers.nips.cc/paper_files/paper/2021/file/9a1756fd0c741126d7bbd4b692ccbd91-Supplemental.zip
|
Neural network robustness has become a central topic in machine learning in recent years. Most training algorithms that improve the model's robustness to adversarial and common corruptions also introduce a large computational overhead, requiring as many as ten times the number of forward and backward passes in order to converge. To combat this inefficiency, we propose BulletTrain, a boundary example mining technique to drastically reduce the computational cost of robust training. Our key observation is that only a small fraction of examples are beneficial for improving robustness. BulletTrain dynamically predicts these important examples and optimizes robust training algorithms to focus on the important examples. We apply our technique to several existing robust training algorithms and achieve a 2.2x speed-up for TRADES and MART on CIFAR-10 and a 1.7x speed-up for AugMix on CIFAR-10-C and CIFAR-100-C without any reduction in clean and robust accuracy.
| null |
Neural Distance Embeddings for Biological Sequences
|
https://papers.nips.cc/paper_files/paper/2021/hash/9a1de01f893e0d2551ecbb7ce4dc963e-Abstract.html
|
Gabriele Corso, Zhitao Ying, Michal Pándy, Petar Veličković, Jure Leskovec, Pietro Liò
|
https://papers.nips.cc/paper_files/paper/2021/hash/9a1de01f893e0d2551ecbb7ce4dc963e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13040-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9a1de01f893e0d2551ecbb7ce4dc963e-Paper.pdf
|
https://openreview.net/forum?id=fClMl0pAIhd
|
https://papers.nips.cc/paper_files/paper/2021/file/9a1de01f893e0d2551ecbb7ce4dc963e-Supplemental.pdf
|
The development of data-dependent heuristics and representations for biological sequences that reflect their evolutionary distance is critical for large-scale biological research. However, popular machine learning approaches, based on continuous Euclidean spaces, have struggled with the discrete combinatorial formulation of the edit distance that models evolution and the hierarchical relationship that characterises real-world datasets. We present Neural Distance Embeddings (NeuroSEED), a general framework to embed sequences in geometric vector spaces, and illustrate the effectiveness of the hyperbolic space that captures the hierarchical structure and provides an average 38% reduction in embedding RMSE against the best competing geometry. The capacity of the framework and the significance of these improvements are then demonstrated devising supervised and unsupervised NeuroSEED approaches to multiple core tasks in bioinformatics. Benchmarked with common baselines, the proposed approaches display significant accuracy and/or runtime improvements on real-world datasets. As an example for hierarchical clustering, the proposed pretrained and from-scratch methods match the quality of competing baselines with 30x and 15x runtime reduction, respectively.
| null |
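A minimal sketch of the NeuroSEED training signal from the entry above: embed two sequences and regress the embedding distance onto their edit distance. The encoder interface, the Euclidean (rather than hyperbolic) distance, and the squared-error loss are simplifying assumptions.

```python
import torch

def neuroseed_loss(encoder, seq_a, seq_b, edit_dist, scale=1.0):
    """encoder: maps a batch of tokenized sequences to embeddings of shape (B, D).
    edit_dist: (B,) true edit distances between the paired sequences."""
    z_a, z_b = encoder(seq_a), encoder(seq_b)
    pred = scale * torch.norm(z_a - z_b, dim=-1)   # predicted distance in embedding space
    return torch.mean((pred - edit_dist) ** 2)     # regress onto the true edit distance
```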
Fitting summary statistics of neural data with a differentiable spiking network simulator
|
https://papers.nips.cc/paper_files/paper/2021/hash/9a32ff36c65e8ba30915a21b7bd76506-Abstract.html
|
Guillaume Bellec, Shuqi Wang, Alireza Modirshanechi, Johanni Brea, Wulfram Gerstner
|
https://papers.nips.cc/paper_files/paper/2021/hash/9a32ff36c65e8ba30915a21b7bd76506-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13041-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9a32ff36c65e8ba30915a21b7bd76506-Paper.pdf
|
https://openreview.net/forum?id=9DEAT9pDiN
|
https://papers.nips.cc/paper_files/paper/2021/file/9a32ff36c65e8ba30915a21b7bd76506-Supplemental.pdf
|
Fitting network models to neural activity is an important tool in neuroscience. A popular approach is to model a brain area with a probabilistic recurrent spiking network whose parameters maximize the likelihood of the recorded activity. Although this is widely used, we show that the resulting model does not produce realistic neural activity. To correct for this, we suggest augmenting the log-likelihood with terms that measure the dissimilarity between simulated and recorded activity. This dissimilarity is defined via summary statistics commonly used in neuroscience, and the optimization is efficient because it relies on back-propagation through the stochastically simulated spike trains. We analyze this method theoretically and show empirically that it generates more realistic activity statistics. We find that it improves upon other fitting algorithms for spiking network models like GLMs (Generalized Linear Models) which do not usually rely on back-propagation. This new fitting algorithm also enables the consideration of hidden neurons, which is otherwise notoriously hard, and we show that it can be crucial when trying to infer the network connectivity from spike recordings.
| null |
PerSim: Data-Efficient Offline Reinforcement Learning with Heterogeneous Agents via Personalized Simulators
|
https://papers.nips.cc/paper_files/paper/2021/hash/9a3f263a5e5f63006098a05cd7491997-Abstract.html
|
Anish Agarwal, Abdullah Alomar, Varkey Alumootil, Devavrat Shah, Dennis Shen, Zhi Xu, Cindy Yang
|
https://papers.nips.cc/paper_files/paper/2021/hash/9a3f263a5e5f63006098a05cd7491997-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13042-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9a3f263a5e5f63006098a05cd7491997-Paper.pdf
|
https://openreview.net/forum?id=v2w7CVZGHeA
|
https://papers.nips.cc/paper_files/paper/2021/file/9a3f263a5e5f63006098a05cd7491997-Supplemental.pdf
|
We consider offline reinforcement learning (RL) with heterogeneous agents under severe data scarcity, i.e., we only observe a single historical trajectory for every agent under an unknown, potentially sub-optimal policy. We find that the performance of state-of-the-art offline and model-based RL methods degrades significantly given such limited data availability, even for commonly perceived "solved" benchmark settings such as "MountainCar" and "CartPole". To address this challenge, we propose PerSim, a model-based offline RL approach which first learns a personalized simulator for each agent by collectively using the historical trajectories across all agents, prior to learning a policy. We do so by positing that the transition dynamics across agents can be represented as a latent function of latent factors associated with agents, states, and actions; subsequently, we theoretically establish that this function is well-approximated by a "low-rank" decomposition of separable agent, state, and action latent functions. This representation suggests a simple, regularized neural network architecture to effectively learn the transition dynamics per agent, even with scarce, offline data. We perform extensive experiments across several benchmark environments and RL methods. The consistent improvement of our approach, measured in terms of both state dynamics prediction and eventual reward, confirms the efficacy of our framework in leveraging limited historical data to simultaneously learn personalized policies across agents.
| null |
All Tokens Matter: Token Labeling for Training Better Vision Transformers
|
https://papers.nips.cc/paper_files/paper/2021/hash/9a49a25d845a483fae4be7e341368e36-Abstract.html
|
Zi-Hang Jiang, Qibin Hou, Li Yuan, Daquan Zhou, Yujun Shi, Xiaojie Jin, Anran Wang, Jiashi Feng
|
https://papers.nips.cc/paper_files/paper/2021/hash/9a49a25d845a483fae4be7e341368e36-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13044-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9a49a25d845a483fae4be7e341368e36-Paper.pdf
|
https://openreview.net/forum?id=2vubO341F_E
|
https://papers.nips.cc/paper_files/paper/2021/file/9a49a25d845a483fae4be7e341368e36-Supplemental.pdf
|
In this paper, we present token labeling---a new training objective for training high-performance vision transformers (ViTs). Different from the standard training objective of ViTs that computes the classification loss on an additional trainable class token, our proposed objective takes advantage of all the image patch tokens to compute the training loss in a dense manner. Specifically, token labeling reformulates the image classification problem into multiple token-level recognition problems and assigns each patch token an individual, location-specific supervision signal generated by a machine annotator. Experiments show that token labeling can clearly and consistently improve the performance of various ViT models across a wide spectrum. Taking a vision transformer with 26M learnable parameters as an example, with token labeling the model can achieve 84.4% Top-1 accuracy on ImageNet. The result can be further increased to 86.4% by slightly scaling the model size up to 150M, delivering the smallest model among previous models (250M+) reaching 86%. We also show that token labeling can clearly improve the generalization capability of the pretrained models on downstream tasks with dense prediction, such as semantic segmentation. Our code and model are publicly available at https://github.com/zihangJiang/TokenLabeling.
| null |
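The dense objective described in the token labeling entry above can be sketched as a class-token loss plus an auxiliary per-patch loss against location-specific soft labels from a machine annotator; the soft-target cross-entropy and the weighting factor are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def token_labeling_loss(cls_logits, patch_logits, labels, patch_soft_labels, beta=0.5):
    """cls_logits: (B, C); patch_logits: (B, N, C);
    labels: (B,) image-level labels; patch_soft_labels: (B, N, C) machine-annotator targets."""
    cls_loss = F.cross_entropy(cls_logits, labels)
    # Soft-target cross-entropy averaged over all patch tokens.
    log_probs = F.log_softmax(patch_logits, dim=-1)
    patch_loss = -(patch_soft_labels * log_probs).sum(dim=-1).mean()
    return cls_loss + beta * patch_loss
```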
Partition and Code: learning how to compress graphs
|
https://papers.nips.cc/paper_files/paper/2021/hash/9a4d6e8685bd057e4f68930bd7c8ecc0-Abstract.html
|
Giorgos Bouritsas, Andreas Loukas, Nikolaos Karalias, Michael Bronstein
|
https://papers.nips.cc/paper_files/paper/2021/hash/9a4d6e8685bd057e4f68930bd7c8ecc0-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13045-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9a4d6e8685bd057e4f68930bd7c8ecc0-Paper.pdf
|
https://openreview.net/forum?id=qL_juuU4P3Y
|
https://papers.nips.cc/paper_files/paper/2021/file/9a4d6e8685bd057e4f68930bd7c8ecc0-Supplemental.pdf
|
Can we use machine learning to compress graph data? The absence of ordering in graphs poses a significant challenge to conventional compression algorithms, limiting their attainable gains as well as their ability to discover relevant patterns. On the other hand, most graph compression approaches rely on domain-dependent handcrafted representations and cannot adapt to different underlying graph distributions. This work aims to establish the necessary principles a lossless graph compression method should follow to approach the entropy storage lower bound. Instead of making rigid assumptions about the graph distribution, we formulate the compressor as a probabilistic model that can be learned from data and generalise to unseen instances. Our “Partition and Code” framework entails three steps: first, a partitioning algorithm decomposes the graph into subgraphs, then these are mapped to the elements of a small dictionary on which we learn a probability distribution, and finally, an entropy encoder translates the representation into bits. All the components (partitioning, dictionary and distribution) are parametric and can be trained with gradient descent. We theoretically compare the compression quality of several graph encodings and prove, under mild conditions, that PnC achieves compression gains that grow either linearly or quadratically with the number of vertices. Empirically, PnC yields significant compression improvements on diverse real-world networks.
| null |
Knowledge-inspired 3D Scene Graph Prediction in Point Cloud
|
https://papers.nips.cc/paper_files/paper/2021/hash/9a555403384fc12f931656dea910e334-Abstract.html
|
Shoulong Zhang, Shuai Li, Aimin Hao, Hong Qin
|
https://papers.nips.cc/paper_files/paper/2021/hash/9a555403384fc12f931656dea910e334-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13046-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9a555403384fc12f931656dea910e334-Paper.pdf
|
https://openreview.net/forum?id=OLyhLK2eQP
|
https://papers.nips.cc/paper_files/paper/2021/file/9a555403384fc12f931656dea910e334-Supplemental.pdf
|
Prior knowledge integration helps identify semantic entities and their relationships in a graphical representation; however, its meaningful abstraction and intervention remain elusive. This paper advocates a knowledge-inspired 3D scene graph prediction method based solely on point clouds. At the mathematical modeling level, we formulate the task as two sub-problems: knowledge learning and scene graph prediction with the learned prior knowledge. Unlike conventional methods that learn knowledge embeddings and regular patterns from encoded visual information, we propose to suppress the misunderstandings caused by appearance similarities and other perceptual confusion. At the network design level, we devise a graph auto-encoder to automatically extract class-dependent representations and topological patterns from the one-hot class labels and their intrinsic graphical structures, so that the prior knowledge can avoid perceptual errors and noise. We further devise a scene graph prediction model to predict credible relationship triplets by incorporating the related prototype knowledge with perceptual information. Comprehensive experiments confirm that our method can successfully learn representative knowledge embeddings, and the obtained prior knowledge can effectively enhance the accuracy of relationship predictions. Our thorough evaluations indicate that the new method can achieve state-of-the-art performance compared with other scene graph prediction methods.
| null |
Online Variational Filtering and Parameter Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/9a6a1aaafe73c572b7374828b03a1881-Abstract.html
|
Andrew Campbell, Yuyang Shi, Thomas Rainforth, Arnaud Doucet
|
https://papers.nips.cc/paper_files/paper/2021/hash/9a6a1aaafe73c572b7374828b03a1881-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13047-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9a6a1aaafe73c572b7374828b03a1881-Paper.pdf
|
https://openreview.net/forum?id=et2st4Jqhc
|
https://papers.nips.cc/paper_files/paper/2021/file/9a6a1aaafe73c572b7374828b03a1881-Supplemental.pdf
|
We present a variational method for online state estimation and parameter learning in state-space models (SSMs), a ubiquitous class of latent variable models for sequential data. As per standard batch variational techniques, we use stochastic gradients to simultaneously optimize a lower bound on the log evidence with respect to both model parameters and a variational approximation of the states' posterior distribution. However, unlike existing approaches, our method is able to operate in an entirely online manner, such that historic observations do not require revisitation after being incorporated and the cost of updates at each time step remains constant, despite the growing dimensionality of the joint posterior distribution of the states. This is achieved by utilizing backward decompositions of this joint posterior distribution and of its variational approximation, combined with Bellman-type recursions for the evidence lower bound and its gradients. We demonstrate the performance of this methodology across several examples, including high-dimensional SSMs and sequential Variational Auto-Encoders.
| null |
Heavy Ball Neural Ordinary Differential Equations
|
https://papers.nips.cc/paper_files/paper/2021/hash/9a86d531e19ec6f5937ad1373bb118bd-Abstract.html
|
Hedi Xia, Vai Suliafu, Hangjie Ji, Tan Nguyen, Andrea Bertozzi, Stanley Osher, Bao Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/9a86d531e19ec6f5937ad1373bb118bd-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13048-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9a86d531e19ec6f5937ad1373bb118bd-Paper.pdf
|
https://openreview.net/forum?id=fYLfs9yrtMQ
|
https://papers.nips.cc/paper_files/paper/2021/file/9a86d531e19ec6f5937ad1373bb118bd-Supplemental.zip
|
We propose heavy ball neural ordinary differential equations (HBNODEs), leveraging the continuous limit of the classical momentum accelerated gradient descent, to improve neural ODEs (NODEs) training and inference. HBNODEs have two properties that imply practical advantages over NODEs: (i) The adjoint state of an HBNODE also satisfies an HBNODE, accelerating both forward and backward ODE solvers, thus significantly reducing the number of function evaluations (NFEs) and improving the utility of the trained models. (ii) The spectrum of HBNODEs is well structured, enabling effective learning of long-term dependencies from complex sequential data. We verify the advantages of HBNODEs over NODEs on benchmark tasks, including image classification, learning complex dynamics, and sequential modeling. Our method requires remarkably fewer forward and backward NFEs, is more accurate, and learns long-term dependencies more effectively than the other ODE-based neural network models. Code is available at \url{https://github.com/hedixia/HeavyBallNODE}.
| null |
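A minimal sketch of the heavy-ball ODE behind the HBNODE entry above, integrated with explicit Euler: the second-order system $h'' + \gamma h' = f(h, t)$ rewritten in position/momentum form. The vector field f, damping gamma, and fixed step size are illustrative; the actual method would use an adaptive ODE solver with the adjoint trick.

```python
import numpy as np

def heavy_ball_ode_rollout(f, x0, gamma=0.5, t0=0.0, t1=1.0, steps=100):
    """Integrate  dx/dt = m,  dm/dt = f(x, t) - gamma * m  with explicit Euler.
    f: callable (x, t) -> array with the same shape as x."""
    x = np.array(x0, dtype=float)
    m = np.zeros_like(x)
    dt = (t1 - t0) / steps
    for i in range(steps):
        t = t0 + i * dt
        x, m = x + dt * m, m + dt * (f(x, t) - gamma * m)
    return x
```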
Structure learning in polynomial time: Greedy algorithms, Bregman information, and exponential families
|
https://papers.nips.cc/paper_files/paper/2021/hash/9ab8a8a9349eb1dd73ce155ce64c80fa-Abstract.html
|
Goutham Rajendran, Bohdan Kivva, Ming Gao, Bryon Aragam
|
https://papers.nips.cc/paper_files/paper/2021/hash/9ab8a8a9349eb1dd73ce155ce64c80fa-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13049-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9ab8a8a9349eb1dd73ce155ce64c80fa-Paper.pdf
|
https://openreview.net/forum?id=Fv0DPhwB6o9
|
https://papers.nips.cc/paper_files/paper/2021/file/9ab8a8a9349eb1dd73ce155ce64c80fa-Supplemental.pdf
|
Greedy algorithms have long been a workhorse for learning graphical models, and more broadly for learning statistical models with sparse structure. In the context of learning directed acyclic graphs, greedy algorithms are popular despite their worst-case exponential runtime. In practice, however, they are very efficient. We provide new insight into this phenomenon by studying a general greedy score-based algorithm for learning DAGs. Unlike edge-greedy algorithms such as the popular GES and hill-climbing algorithms, our approach is vertex-greedy and requires at most a polynomial number of score evaluations. We then show how recent polynomial-time algorithms for learning DAG models are a special case of this algorithm, thereby illustrating how these order-based algorithms can be rigorously interpreted as score-based algorithms. This observation suggests new score functions and optimality conditions based on the duality between Bregman divergences and exponential families, which we explore in detail. Explicit sample and computational complexity bounds are derived. Finally, we provide extensive experiments suggesting that this algorithm indeed optimizes the score in a variety of settings.
| null |
On the Sample Complexity of Learning under Geometric Stability
|
https://papers.nips.cc/paper_files/paper/2021/hash/9ac5a6d86e8924182271bd820acbce0e-Abstract.html
|
Alberto Bietti, Luca Venturi, Joan Bruna
|
https://papers.nips.cc/paper_files/paper/2021/hash/9ac5a6d86e8924182271bd820acbce0e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13050-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9ac5a6d86e8924182271bd820acbce0e-Paper.pdf
|
https://openreview.net/forum?id=vlf0zTKa5Lh
|
https://papers.nips.cc/paper_files/paper/2021/file/9ac5a6d86e8924182271bd820acbce0e-Supplemental.pdf
|
Many supervised learning problems involve high-dimensional data such as images, text, or graphs. In order to make efficient use of data, it is often useful to leverage certain geometric priors in the problem at hand, such as invariance to translations, permutation subgroups, or stability to small deformations. We study the sample complexity of learning problems where the target function presents such invariance and stability properties, by considering spherical harmonic decompositions of such functions on the sphere. We provide non-parametric rates of convergence for kernel methods, and show improvements in sample complexity by a factor equal to the size of the group when using an invariant kernel over the group, compared to the corresponding non-invariant kernel. These improvements are valid when the sample size is large enough, with an asymptotic behavior that depends on spectral properties of the group. Finally, these gains are extended beyond invariance groups to also cover geometric stability to small deformations, modeled here as subsets (not necessarily subgroups) of permutations.
| null |
SIMILAR: Submodular Information Measures Based Active Learning In Realistic Scenarios
|
https://papers.nips.cc/paper_files/paper/2021/hash/9af08cda54faea9adf40a201794183cf-Abstract.html
|
Suraj Kothawade, Nathan Beck, Krishnateja Killamsetty, Rishabh Iyer
|
https://papers.nips.cc/paper_files/paper/2021/hash/9af08cda54faea9adf40a201794183cf-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13051-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9af08cda54faea9adf40a201794183cf-Paper.pdf
|
https://openreview.net/forum?id=VGDFaLNFFk
|
https://papers.nips.cc/paper_files/paper/2021/file/9af08cda54faea9adf40a201794183cf-Supplemental.pdf
|
Active learning has proven to be useful for minimizing labeling costs by selecting the most informative samples. However, existing active learning methods do not work well in realistic scenarios such as imbalanced or rare classes, out-of-distribution data in the unlabeled set, and redundancy. In this work, we propose SIMILAR (Submodular Information Measures based actIve LeARning), a unified active learning framework using the recently proposed submodular information measures (SIM) as acquisition functions. We argue that SIMILAR not only works in standard active learning but also easily extends to the realistic settings considered above, acting as a one-stop solution for active learning that is scalable to large real-world datasets. Empirically, we show that SIMILAR significantly outperforms existing active learning algorithms by as much as ~5%-18% in the case of rare classes and ~5%-10% in the case of out-of-distribution data on several image classification tasks such as CIFAR-10, MNIST, and ImageNet.
| null |
Monte Carlo Tree Search With Iteratively Refining State Abstractions
|
https://papers.nips.cc/paper_files/paper/2021/hash/9b0ead00a217ea2c12e06a72eec4923f-Abstract.html
|
Samuel Sokota, Caleb Y Ho, Zaheen Ahmad, J. Zico Kolter
|
https://papers.nips.cc/paper_files/paper/2021/hash/9b0ead00a217ea2c12e06a72eec4923f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13052-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9b0ead00a217ea2c12e06a72eec4923f-Paper.pdf
|
https://openreview.net/forum?id=0qnPBmvJSaf
|
https://papers.nips.cc/paper_files/paper/2021/file/9b0ead00a217ea2c12e06a72eec4923f-Supplemental.zip
|
Decision-time planning is the process of constructing a transient, local policy with the intent of using it to make the immediate decision. Monte Carlo tree search (MCTS), which has been leveraged to great success in Go, chess, shogi, Hex, Atari, and other settings, is perhaps the most celebrated decision-time planning algorithm. Unfortunately, in its original form, MCTS can degenerate to one-step search in domains with stochasticity. Progressive widening is one way to ameliorate this issue, but we argue that it possesses undesirable properties for some settings. In this work, we present a method, called abstraction refining, for extending MCTS to stochastic environments which, unlike progressive widening, leverages the geometry of the state space. We argue that leveraging the geometry of the space can offer advantages. To support this claim, we present a series of experimental examples in which abstraction refining outperforms progressive widening, given equal simulation budgets.
| null |
Flattening Sharpness for Dynamic Gradient Projection Memory Benefits Continual Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/9b16759a62899465ab21e2e79d2ef75c-Abstract.html
|
Danruo DENG, Guangyong Chen, Jianye Hao, Qiong Wang, Pheng-Ann Heng
|
https://papers.nips.cc/paper_files/paper/2021/hash/9b16759a62899465ab21e2e79d2ef75c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13053-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9b16759a62899465ab21e2e79d2ef75c-Paper.pdf
|
https://openreview.net/forum?id=q1eCa1kMfDd
|
https://papers.nips.cc/paper_files/paper/2021/file/9b16759a62899465ab21e2e79d2ef75c-Supplemental.pdf
|
Backpropagation networks are notably susceptible to catastrophic forgetting, where networks tend to forget previously learned skills upon learning new ones. To address this 'sensitivity-stability' dilemma, most previous efforts have been devoted to minimizing the empirical risk with different parameter regularization terms and episodic memory, while rarely exploring the use of the weight loss landscape. In this paper, we investigate the relationship between the weight loss landscape and sensitivity-stability in the continual learning scenario, based on which we propose a novel method, Flattening Sharpness for Dynamic Gradient Projection Memory (FS-DGPM). In particular, we introduce a soft weight to represent the importance of each basis representing past tasks in GPM, which can be adaptively learned during the learning process, so that less important bases can be dynamically released to improve the sensitivity of new skill learning. We further introduce Flattening Sharpness (FS) to reduce the generalization gap by explicitly regulating the flatness of the weight loss landscape of all seen tasks. As demonstrated empirically, our proposed method consistently outperforms baselines, with a superior ability to learn new skills while alleviating forgetting effectively.
| null |
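As background for the GPM component that FS-DGPM builds on, below is a minimal sketch of projecting a new-task gradient onto the orthogonal complement of the subspace spanned by stored bases of past tasks; the soft importance weights and the sharpness-flattening term of FS-DGPM are not shown, and the tensor sizes are illustrative.

```python
import torch

def project_out_past_tasks(grad: torch.Tensor, bases: torch.Tensor) -> torch.Tensor:
    """Remove the component of `grad` lying in span(bases).

    grad:  (d,) flattened gradient for the current task.
    bases: (d, k) orthonormal basis vectors representing past tasks (as in GPM).
    """
    return grad - bases @ (bases.T @ grad)

d, k = 8, 3
bases, _ = torch.linalg.qr(torch.randn(d, k))   # orthonormal columns
grad = torch.randn(d)
g_proj = project_out_past_tasks(grad, bases)
print(torch.allclose(bases.T @ g_proj, torch.zeros(k), atol=1e-5))  # True: no interference with past tasks
```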
Taxonomizing local versus global structure in neural network loss landscapes
|
https://papers.nips.cc/paper_files/paper/2021/hash/9b72e31dac81715466cd580a448cf823-Abstract.html
|
Yaoqing Yang, Liam Hodgkinson, Ryan Theisen, Joe Zou, Joseph E. Gonzalez, Kannan Ramchandran, Michael W. Mahoney
|
https://papers.nips.cc/paper_files/paper/2021/hash/9b72e31dac81715466cd580a448cf823-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13054-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9b72e31dac81715466cd580a448cf823-Paper.pdf
|
https://openreview.net/forum?id=P6bUrLREcne
| null |
Viewing neural network models in terms of their loss landscapes has a long history in the statistical mechanics approach to learning, and in recent years it has received attention within machine learning proper. Among other things, local metrics (such as the smoothness of the loss landscape) have been shown to correlate with global properties of the model (such as good generalization performance). Here, we perform a detailed empirical analysis of the loss landscape structure of thousands of neural network models, systematically varying learning tasks, model architectures, and/or quantity/quality of data. By considering a range of metrics that attempt to capture different aspects of the loss landscape, we demonstrate that the best test accuracy is obtained when: the loss landscape is globally well-connected; ensembles of trained models are more similar to each other; and models converge to locally smooth regions. We also show that globally poorly-connected landscapes can arise when models are small or when they are trained to lower quality data; and that, if the loss landscape is globally poorly-connected, then training to zero loss can actually lead to worse test accuracy. Our detailed empirical results shed light on phases of learning (and consequent double descent behavior), fundamental versus incidental determinants of good generalization, the role of load-like and temperature-like parameters in the learning process, different influences on the loss landscape from model and data, and the relationships between local and global metrics, all topics of recent interest.
| null |
Learning Models for Actionable Recourse
|
https://papers.nips.cc/paper_files/paper/2021/hash/9b82909c30456ac902e14526e63081d4-Abstract.html
|
Alexis Ross, Himabindu Lakkaraju, Osbert Bastani
|
https://papers.nips.cc/paper_files/paper/2021/hash/9b82909c30456ac902e14526e63081d4-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13055-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9b82909c30456ac902e14526e63081d4-Paper.pdf
|
https://openreview.net/forum?id=JZK9uP4Fev
|
https://papers.nips.cc/paper_files/paper/2021/file/9b82909c30456ac902e14526e63081d4-Supplemental.pdf
|
As machine learning models are increasingly deployed in high-stakes domains such as legal and financial decision-making, there has been growing interest in post-hoc methods for generating counterfactual explanations. Such explanations provide individuals adversely impacted by predicted outcomes (e.g., an applicant denied a loan) with recourse---i.e., a description of how they can change their features to obtain a positive outcome. We propose a novel algorithm that leverages adversarial training and PAC confidence sets to learn models that theoretically guarantee recourse to affected individuals with high probability without sacrificing accuracy. We demonstrate the efficacy of our approach via extensive experiments on real data.
| null |
Efficient and Accurate Gradients for Neural SDEs
|
https://papers.nips.cc/paper_files/paper/2021/hash/9ba196c7a6e89eafd0954de80fc1b224-Abstract.html
|
Patrick Kidger, James Foster, Xuechen (Chen) Li, Terry Lyons
|
https://papers.nips.cc/paper_files/paper/2021/hash/9ba196c7a6e89eafd0954de80fc1b224-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13056-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9ba196c7a6e89eafd0954de80fc1b224-Paper.pdf
|
https://openreview.net/forum?id=b2bkE0Qq8Ya
|
https://papers.nips.cc/paper_files/paper/2021/file/9ba196c7a6e89eafd0954de80fc1b224-Supplemental.pdf
|
Neural SDEs combine many of the best qualities of both RNNs and SDEs, and as such are a natural choice for modelling many types of temporal dynamics. They offer memory efficiency, high-capacity function approximation, and strong priors on model space. Neural SDEs may be trained as VAEs or as GANs; in either case it is necessary to backpropagate through the SDE solve. In particular this may be done by constructing a backwards-in-time SDE whose solution is the desired parameter gradients. However, this has previously suffered from severe speed and accuracy issues, due to high computational complexity, numerical errors in the SDE solve, and the cost of reconstructing Brownian motion. Here, we make several technical innovations to overcome these issues. First, we introduce the \textit{reversible Heun method}: a new SDE solver that is algebraically reversible -- which reduces numerical gradient errors to almost zero, improving several test metrics by substantial margins over state-of-the-art. Moreover it requires half as many function evaluations as comparable solvers, giving up to a $1.98\times$ speedup. Next, we introduce the \textit{Brownian interval}. This is a new and computationally efficient way of exactly sampling \textit{and reconstructing} Brownian motion; this is in contrast to previous reconstruction techniques that are both approximate and relatively slow. This gives up to a $10.6\times$ speed improvement over previous techniques. After that, when specifically training Neural SDEs as GANs (Kidger et al. 2021), we demonstrate how SDE-GANs may be trained through careful weight clipping and choice of activation function. This reduces computational cost (giving up to a $1.87\times$ speedup), and removes the truncation errors of the double adjoint required for gradient penalty, substantially improving several test metrics. Altogether these techniques offer substantial improvements over the state-of-the-art, with respect to both training speed and with respect to classification, prediction, and MMD test metrics. We have contributed implementations of all of our techniques to the \texttt{torchsde} library to help facilitate their adoption.
| null |
EIGNN: Efficient Infinite-Depth Graph Neural Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/9bd5ee6fe55aaeb673025dbcb8f939c1-Abstract.html
|
Juncheng Liu, Kenji Kawaguchi, Bryan Hooi, Yiwei Wang, Xiaokui Xiao
|
https://papers.nips.cc/paper_files/paper/2021/hash/9bd5ee6fe55aaeb673025dbcb8f939c1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13057-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9bd5ee6fe55aaeb673025dbcb8f939c1-Paper.pdf
|
https://openreview.net/forum?id=blzTEKKRIcV
|
https://papers.nips.cc/paper_files/paper/2021/file/9bd5ee6fe55aaeb673025dbcb8f939c1-Supplemental.pdf
|
Graph neural networks (GNNs) are widely used for modelling graph-structured data in numerous applications. However, with their inherently finite aggregation layers, existing GNN models may not be able to effectively capture long-range dependencies in the underlying graphs. Motivated by this limitation, we propose a GNN model with infinite depth, which we call Efficient Infinite-Depth Graph Neural Networks (EIGNN), to efficiently capture very long-range dependencies. We theoretically derive a closed-form solution of EIGNN which makes training an infinite-depth GNN model tractable. We then further show that we can achieve more efficient computation for training EIGNN by using eigendecomposition. The empirical results of comprehensive experiments on synthetic and real-world datasets show that EIGNN has a better ability to capture long-range dependencies than recent baselines, and consistently achieves state-of-the-art performance. Furthermore, we show that our model is also more robust against both noise and adversarial perturbations on node features.
| null |
Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms
|
https://papers.nips.cc/paper_files/paper/2021/hash/9bdb8b1faffa4b3d41779bb495d79fb9-Abstract.html
|
Alexander Camuto, George Deligiannidis, Murat A. Erdogdu, Mert Gurbuzbalaban, Umut Simsekli, Lingjiong Zhu
|
https://papers.nips.cc/paper_files/paper/2021/hash/9bdb8b1faffa4b3d41779bb495d79fb9-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13058-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9bdb8b1faffa4b3d41779bb495d79fb9-Paper.pdf
|
https://openreview.net/forum?id=WlkzLjxpYe
|
https://papers.nips.cc/paper_files/paper/2021/file/9bdb8b1faffa4b3d41779bb495d79fb9-Supplemental.pdf
|
Understanding generalization in deep learning has been one of the major challenges in statistical learning theory over the last decade. While recent work has illustrated that the dataset and the training algorithm must be taken into account in order to obtain meaningful generalization bounds, it is still theoretically not clear which properties of the data and the algorithm determine the generalization performance. In this study, we approach this problem from a dynamical systems theory perspective and represent stochastic optimization algorithms as \emph{random iterated function systems} (IFS). Well studied in the dynamical systems literature, under mild assumptions, such IFSs can be shown to be ergodic with an invariant measure that is often supported on sets with a \emph{fractal structure}. As our main contribution, we prove that the generalization error of a stochastic optimization algorithm can be bounded based on the `complexity' of the fractal structure that underlies its invariant measure. Then, by leveraging results from dynamical systems theory, we show that the generalization error can be explicitly linked to the choice of the algorithm (e.g., stochastic gradient descent -- SGD), algorithm hyperparameters (e.g., step-size, batch-size), and the geometry of the problem (e.g., Hessian of the loss). We further specialize our results to specific problems (e.g., linear/logistic regression, one hidden-layered neural networks) and algorithms (e.g., SGD and preconditioned variants), and obtain analytical estimates for our bound. For modern neural networks, we develop an efficient algorithm to compute the developed bound and support our theory with various experiments on neural networks.
| null |
An Infinite-Feature Extension for Bayesian ReLU Nets That Fixes Their Asymptotic Overconfidence
|
https://papers.nips.cc/paper_files/paper/2021/hash/9be40cee5b0eee1462c82c6964087ff9-Abstract.html
|
Agustinus Kristiadi, Matthias Hein, Philipp Hennig
|
https://papers.nips.cc/paper_files/paper/2021/hash/9be40cee5b0eee1462c82c6964087ff9-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13059-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9be40cee5b0eee1462c82c6964087ff9-Paper.pdf
|
https://openreview.net/forum?id=J-pFhOiGVn7
| null |
A Bayesian treatment can mitigate overconfidence in ReLU nets around the training data. But far away from them, ReLU Bayesian neural networks (BNNs) can still underestimate uncertainty and thus be asymptotically overconfident. This issue arises since the output variance of a BNN with finitely many features is quadratic in the distance from the data region. Meanwhile, Bayesian linear models with ReLU features converge, in the infinite-width limit, to a particular Gaussian process (GP) with a variance that grows cubically so that no asymptotic overconfidence can occur. While this may seem of mostly theoretical interest, in this work, we show that it can be used in practice to the benefit of BNNs. We extend finite ReLU BNNs with infinite ReLU features via the GP and show that the resulting model is asymptotically maximally uncertain far away from the data while the BNNs' predictive power is unaffected near the data. Although the resulting model approximates a full GP posterior, thanks to its structure, it can be applied post-hoc to any pre-trained ReLU BNN at a low cost.
| null |
Bandit Phase Retrieval
|
https://papers.nips.cc/paper_files/paper/2021/hash/9c36b930df0e0e8b05d4e1fcb4cdef27-Abstract.html
|
Tor Lattimore, Botao Hao
|
https://papers.nips.cc/paper_files/paper/2021/hash/9c36b930df0e0e8b05d4e1fcb4cdef27-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13060-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9c36b930df0e0e8b05d4e1fcb4cdef27-Paper.pdf
|
https://openreview.net/forum?id=fThfMoV7Ri
|
https://papers.nips.cc/paper_files/paper/2021/file/9c36b930df0e0e8b05d4e1fcb4cdef27-Supplemental.pdf
|
We study a bandit version of phase retrieval where the learner chooses actions $(A_t)_{t=1}^n$ in the $d$-dimensional unit ball and the expected reward is $\langle A_t, \theta_\star \rangle^2$ with $\theta_\star \in \mathbb R^d$ an unknown parameter vector. We prove an upper bound on the minimax cumulative regret in this problem of $\smash{\tilde \Theta(d \sqrt{n})}$, which matches known lower bounds up to logarithmic factors and improves on the best known upper bound by a factor of $\smash{\sqrt{d}}$. We also show that the minimax simple regret is $\smash{\tilde \Theta(d / \sqrt{n})}$ and that this is only achievable by an adaptive algorithm. Our analysis shows that an apparently convincing heuristic for guessing lower bounds can be misleading and that uniform bounds on the information ratio for information-directed sampling (Russo and Van Roy, 2014) are not sufficient for optimal regret.
| null |
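A minimal simulation of the reward model described in this abstract (quadratic reward $\langle A_t, \theta_\star\rangle^2$ plus observation noise); the exploration shown is plain uniform sampling on the sphere for illustration only, not the adaptive algorithm analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 10_000
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)        # unknown unit-norm parameter

def play(action: np.ndarray) -> float:
    """Expected reward is <a, theta_star>^2; add Gaussian observation noise."""
    return float((action @ theta_star) ** 2 + rng.normal(scale=0.1))

# Naive uniform exploration over the unit sphere.
actions = rng.normal(size=(n, d))
actions /= np.linalg.norm(actions, axis=1, keepdims=True)
rewards = np.array([play(a) for a in actions])
print("mean reward under uniform play:", rewards.mean())   # ~ 1/d for random directions
print("reward of best sampled action :", rewards.max())    # ~ 1 for actions aligned with theta_star
```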
Lower Bounds on Metropolized Sampling Methods for Well-Conditioned Distributions
|
https://papers.nips.cc/paper_files/paper/2021/hash/9c4e6233c6d5ff637e7984152a3531d5-Abstract.html
|
Yin Tat Lee, Ruoqi Shen, Kevin Tian
|
https://papers.nips.cc/paper_files/paper/2021/hash/9c4e6233c6d5ff637e7984152a3531d5-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13061-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9c4e6233c6d5ff637e7984152a3531d5-Paper.pdf
|
https://openreview.net/forum?id=napaTaDQ0lY
|
https://papers.nips.cc/paper_files/paper/2021/file/9c4e6233c6d5ff637e7984152a3531d5-Supplemental.pdf
|
We give lower bounds on the performance of two of the most popular sampling methods in practice, the Metropolis-adjusted Langevin algorithm (MALA) and multi-step Hamiltonian Monte Carlo (HMC) with a leapfrog integrator, when applied to well-conditioned distributions. Our main result is a nearly-tight lower bound of $\widetilde{\Omega}(\kappa d)$ on the mixing time of MALA from an exponentially warm start, matching a line of algorithmic results \cite{DwivediCW018, ChenDWY19, LeeST20a} up to logarithmic factors and answering an open question of \cite{ChewiLACGR20}. We also show that a polynomial dependence on dimension is necessary for the relaxation time of HMC under any number of leapfrog steps, and bound the gains achievable by changing the step count. Our HMC analysis draws upon a novel connection between leapfrog integration and Chebyshev polynomials, which may be of independent interest.
| null |
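For readers unfamiliar with the sampler being lower-bounded above, here is a minimal MALA step for a target density $\pi \propto e^{-f}$: a Langevin proposal followed by a Metropolis accept/reject. The step size `h` and the toy Gaussian target are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mala_step(x, grad_f, f, h):
    """One Metropolis-adjusted Langevin step targeting pi(x) proportional to exp(-f(x))."""
    prop = x - h * grad_f(x) + np.sqrt(2 * h) * rng.normal(size=x.shape)

    def log_q(y, x):  # log density of the Gaussian Langevin proposal q(y | x)
        return -np.sum((y - x + h * grad_f(x)) ** 2) / (4 * h)

    log_accept = -f(prop) + f(x) + log_q(x, prop) - log_q(prop, x)
    return prop if np.log(rng.uniform()) < log_accept else x

# Example: well-conditioned Gaussian target, f(x) = 0.5 * ||x||^2.
f = lambda x: 0.5 * np.sum(x ** 2)
grad_f = lambda x: x
x = np.ones(10)
for _ in range(5_000):
    x = mala_step(x, grad_f, f, h=0.1)
print("final sample (should look like N(0, I)):", np.round(x, 2))
```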
Taming Communication and Sample Complexities in Decentralized Policy Evaluation for Cooperative Multi-Agent Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/9c51a13764ca629f439f6accbb4ec413-Abstract.html
|
Xin Zhang, Zhuqing Liu, Jia Liu, Zhengyuan Zhu, Songtao Lu
|
https://papers.nips.cc/paper_files/paper/2021/hash/9c51a13764ca629f439f6accbb4ec413-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13062-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9c51a13764ca629f439f6accbb4ec413-Paper.pdf
|
https://openreview.net/forum?id=D5APl1Yixnc
|
https://papers.nips.cc/paper_files/paper/2021/file/9c51a13764ca629f439f6accbb4ec413-Supplemental.pdf
|
Cooperative multi-agent reinforcement learning (MARL) has received increasing attention in recent years and has found many scientific and engineering applications. However, a key challenge arising from many cooperative MARL algorithm designs (e.g., the actor-critic framework) is the policy evaluation problem, which can only be conducted in a {\em decentralized} fashion. In this paper, we focus on decentralized MARL policy evaluation with nonlinear function approximation, which is often seen in deep MARL. We first show that the empirical decentralized MARL policy evaluation problem can be reformulated as a decentralized nonconvex-strongly-concave minimax saddle point problem. We then develop a decentralized gradient-based descent ascent algorithm called GT-GDA that enjoys a convergence rate of $\mathcal{O}(1/T)$. To further reduce the sample complexity, we propose two decentralized stochastic optimization algorithms called GT-SRVR and GT-SRVRI, which enhance GT-GDA by variance reduction techniques. We show that all algorithms enjoy an $\mathcal{O}(1/T)$ convergence rate to a stationary point of the reformulated minimax problem. Moreover, the fast convergence rates of GT-SRVR and GT-SRVRI imply $\mathcal{O}(\epsilon^{-2})$ communication complexity and $\mathcal{O}(m\sqrt{n}\epsilon^{-2})$ sample complexity, where $m$ is the number of agents and $n$ is the length of trajectories. To our knowledge, this paper is the first work that achieves both $\mathcal{O}(\epsilon^{-2})$ sample complexity and $\mathcal{O}(\epsilon^{-2})$ communication complexity in decentralized policy evaluation for cooperative MARL. Our extensive experiments also corroborate the theoretical performance of our proposed decentralized policy evaluation algorithms.
| null |
Federated Graph Classification over Non-IID Graphs
|
https://papers.nips.cc/paper_files/paper/2021/hash/9c6947bd95ae487c81d4e19d3ed8cd6f-Abstract.html
|
Han Xie, Jing Ma, Li Xiong, Carl Yang
|
https://papers.nips.cc/paper_files/paper/2021/hash/9c6947bd95ae487c81d4e19d3ed8cd6f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13063-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9c6947bd95ae487c81d4e19d3ed8cd6f-Paper.pdf
|
https://openreview.net/forum?id=yJqcM36Qvnu
|
https://papers.nips.cc/paper_files/paper/2021/file/9c6947bd95ae487c81d4e19d3ed8cd6f-Supplemental.pdf
|
Federated learning has emerged as an important paradigm for training machine learning models in different domains. For graph-level tasks such as graph classification, graphs can also be regarded as a special type of data samples, which can be collected and stored in separate local systems. Similar to other domains, multiple local systems, each holding a small set of graphs, may benefit from collaboratively training a powerful graph mining model, such as the popular graph neural networks (GNNs). To provide more motivation towards such endeavors, we analyze real-world graphs from different domains to confirm that they indeed share certain graph properties that are statistically significant compared with random graphs. However, we also find that different sets of graphs, even from the same domain or same dataset, are non-IID regarding both graph structures and node features. To handle this, we propose a graph clustered federated learning (GCFL) framework that dynamically finds clusters of local systems based on the gradients of GNNs, and theoretically justify that such clusters can reduce the structure and feature heterogeneity among graphs owned by the local systems. Moreover, we observe the gradients of GNNs to be rather fluctuating in GCFL which impedes high-quality clustering, and design a gradient sequence-based clustering mechanism based on dynamic time warping (GCFL+). Extensive experimental results and in-depth analysis demonstrate the effectiveness of our proposed frameworks.
| null |
SubTab: Subsetting Features of Tabular Data for Self-Supervised Representation Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/9c8661befae6dbcd08304dbf4dcaf0db-Abstract.html
|
Talip Ucar, Ehsan Hajiramezanali, Lindsay Edwards
|
https://papers.nips.cc/paper_files/paper/2021/hash/9c8661befae6dbcd08304dbf4dcaf0db-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13064-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9c8661befae6dbcd08304dbf4dcaf0db-Paper.pdf
|
https://openreview.net/forum?id=vrhNQ7aYSdr
|
https://papers.nips.cc/paper_files/paper/2021/file/9c8661befae6dbcd08304dbf4dcaf0db-Supplemental.pdf
|
Self-supervised learning has been shown to be very effective in learning useful representations, and yet much of the success is achieved in data types such as images, audio, and text. The success is mainly enabled by taking advantage of spatial, temporal, or semantic structure in the data through augmentation. However, such structure may not exist in tabular datasets commonly used in fields such as healthcare, making it difficult to design an effective augmentation method, and hindering similar progress in the tabular data setting. In this paper, we introduce a new framework, Subsetting features of Tabular data (SubTab), that turns the task of learning from tabular data into a multi-view representation learning problem by dividing the input features into multiple subsets. We argue that reconstructing the data from a subset of its features rather than its corrupted version in an autoencoder setting can better capture its underlying latent representation. In this framework, the joint representation can be expressed as the aggregate of latent variables of the subsets at test time, which we refer to as collaborative inference. Our experiments show that SubTab achieves state-of-the-art (SOTA) performance of 98.31% on MNIST in the tabular setting, on par with CNN-based SOTA models, and surpasses existing baselines on three other real-world datasets by a significant margin.
| null |
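A minimal sketch of the core idea described in the SubTab abstract: split the feature columns into subsets, train an autoencoder to reconstruct the full feature vector from each subset, and aggregate the subset latents at test time ("collaborative inference"). The dimensions, network, and training loop here are illustrative, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

n_features, n_subsets, latent_dim = 12, 3, 8
subset_size = n_features // n_subsets

encoder = nn.Sequential(nn.Linear(subset_size, 32), nn.ReLU(), nn.Linear(32, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, n_features))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

x = torch.randn(256, n_features)                      # a batch of tabular rows (toy data)
subsets = x.split(subset_size, dim=1)                 # column-wise feature subsets

for step in range(200):                               # reconstruct the full row from each subset
    loss = sum(nn.functional.mse_loss(decoder(encoder(s)), x) for s in subsets)
    opt.zero_grad(); loss.backward(); opt.step()

# Collaborative inference: the joint representation aggregates the subset latents.
with torch.no_grad():
    z = torch.stack([encoder(s) for s in subsets]).mean(dim=0)
print(z.shape)  # torch.Size([256, 8])
```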
Convergence Rates of Stochastic Gradient Descent under Infinite Noise Variance
|
https://papers.nips.cc/paper_files/paper/2021/hash/9cdf26568d166bc6793ef8da5afa0846-Abstract.html
|
Hongjian Wang, Mert Gurbuzbalaban, Lingjiong Zhu, Umut Simsekli, Murat A. Erdogdu
|
https://papers.nips.cc/paper_files/paper/2021/hash/9cdf26568d166bc6793ef8da5afa0846-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13065-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9cdf26568d166bc6793ef8da5afa0846-Paper.pdf
|
https://openreview.net/forum?id=yxHPRAqCqn
|
https://papers.nips.cc/paper_files/paper/2021/file/9cdf26568d166bc6793ef8da5afa0846-Supplemental.pdf
|
Recent studies have provided both empirical and theoretical evidence illustrating that heavy tails can emerge in stochastic gradient descent (SGD) in various scenarios. Such heavy tails potentially result in iterates with diverging variance, which hinders the use of conventional convergence analysis techniques that rely on the existence of the second-order moments. In this paper, we provide convergence guarantees for SGD under a state-dependent and heavy-tailed noise with a potentially infinite variance, for a class of strongly convex objectives. In the case where the $p$-th moment of the noise exists for some $p\in [1,2)$, we first identify a condition on the Hessian, coined `$p$-positive (semi-)definiteness', that leads to an interesting interpolation between the positive semi-definite cone ($p=2$) and the cone of diagonally dominant matrices with non-negative diagonal entries ($p=1$). Under this condition, we provide a convergence rate for the distance to the global optimum in $L^p$. Furthermore, we provide a generalized central limit theorem, which shows that the properly scaled Polyak-Ruppert averaging converges weakly to a multivariate $\alpha$-stable random vector. Our results indicate that even under heavy-tailed noise with infinite variance, SGD can converge to the global optimum without requiring any modification to either the loss function or the algorithm itself, as is typically required in robust statistics. We demonstrate the implications of our results over misspecified models in the presence of heavy-tailed data.
| null |
Conflict-Averse Gradient Descent for Multi-task learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/9d27fdf2477ffbff837d73ef7ae23db9-Abstract.html
|
Bo Liu, Xingchao Liu, Xiaojie Jin, Peter Stone, Qiang Liu
|
https://papers.nips.cc/paper_files/paper/2021/hash/9d27fdf2477ffbff837d73ef7ae23db9-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13066-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9d27fdf2477ffbff837d73ef7ae23db9-Paper.pdf
|
https://openreview.net/forum?id=_61Qh8tULj_
|
https://papers.nips.cc/paper_files/paper/2021/file/9d27fdf2477ffbff837d73ef7ae23db9-Supplemental.pdf
|
The goal of multi-task learning is to enable more efficient learning than single task learning by sharing model structures for a diverse set of tasks. A standard multi-task learning objective is to minimize the average loss across all tasks. While straightforward, using this objective often results in much worse final performance for each task than learning them independently. A major challenge in optimizing a multi-task model is the conflicting gradients, where gradients of different task objectives are not well aligned so that following the average gradient direction can be detrimental to specific tasks' performance. Previous work has proposed several heuristics to manipulate the task gradients for mitigating this problem, but most of them lack a convergence guarantee and/or could converge to any Pareto-stationary point. In this paper, we introduce Conflict-Averse Gradient descent (CAGrad), which minimizes the average loss function while leveraging the worst local improvement of individual tasks to regularize the algorithm trajectory. CAGrad balances the objectives automatically and still provably converges to a minimum over the average loss. It includes the regular gradient descent (GD) and the multiple gradient descent algorithm (MGDA) in the multi-objective optimization (MOO) literature as special cases. On a series of challenging multi-task supervised learning and reinforcement learning tasks, CAGrad achieves improved performance over prior state-of-the-art multi-objective gradient manipulation methods.
| null |
Amortized Synthesis of Constrained Configurations Using a Differentiable Surrogate
|
https://papers.nips.cc/paper_files/paper/2021/hash/9d38e6eab92b2aeb0a83b570188d5a1a-Abstract.html
|
Xingyuan Sun, Tianju Xue, Szymon Rusinkiewicz, Ryan P. Adams
|
https://papers.nips.cc/paper_files/paper/2021/hash/9d38e6eab92b2aeb0a83b570188d5a1a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13067-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9d38e6eab92b2aeb0a83b570188d5a1a-Paper.pdf
|
https://openreview.net/forum?id=wdIDt--oLmV
|
https://papers.nips.cc/paper_files/paper/2021/file/9d38e6eab92b2aeb0a83b570188d5a1a-Supplemental.pdf
|
In design, fabrication, and control problems, we are often faced with the task of synthesis, in which we must generate an object or configuration that satisfies a set of constraints while maximizing one or more objective functions. The synthesis problem is typically characterized by a physical process in which many different realizations may achieve the goal. This many-to-one map presents challenges to the supervised learning of feed-forward synthesis, as the set of viable designs may have a complex structure. In addition, the non-differentiable nature of many physical simulations prevents efficient direct optimization. We address both of these problems with a two-stage neural network architecture that we may consider to be an autoencoder. We first learn the decoder: a differentiable surrogate that approximates the many-to-one physical realization process. We then learn the encoder, which maps from goal to design, while using the fixed decoder to evaluate the quality of the realization. We evaluate the approach on two case studies: extruder path planning in additive manufacturing and constrained soft robot inverse kinematics. We compare our approach to direct optimization of the design using the learned surrogate, and to supervised learning of the synthesis problem. We find that our approach produces higher quality solutions than supervised learning, while being competitive in quality with direct optimization, at a greatly reduced computational cost.
| null |
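A minimal sketch of the two-stage scheme described in the abstract above: first fit a differentiable surrogate (the "decoder") of the many-to-one design-to-realization process on simulated pairs, then train a synthesis network (the "encoder") from goal to design through the frozen surrogate. The 1-D "physics" function and network sizes are stand-ins for the paper's simulators, not their actual setups.

```python
import torch
import torch.nn as nn

def physics(design: torch.Tensor) -> torch.Tensor:
    """Toy many-to-one, non-invertible realization process (stand-in for a simulator)."""
    return torch.sin(design).sum(dim=1, keepdim=True)

# Stage 1: learn the decoder, a differentiable surrogate of the physics.
decoder = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))
opt_d = torch.optim.Adam(decoder.parameters(), lr=1e-3)
for _ in range(500):
    designs = torch.rand(128, 4) * 6 - 3
    loss = nn.functional.mse_loss(decoder(designs), physics(designs))
    opt_d.zero_grad(); loss.backward(); opt_d.step()

# Stage 2: learn the encoder goal -> design, judged by the frozen surrogate.
for p in decoder.parameters():
    p.requires_grad_(False)
encoder = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 4))
opt_e = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for _ in range(500):
    goals = torch.rand(128, 1) * 4 - 2
    loss = nn.functional.mse_loss(decoder(encoder(goals)), goals)
    opt_e.zero_grad(); loss.backward(); opt_e.step()

goal = torch.tensor([[1.0]])
print(float(nn.functional.mse_loss(decoder(encoder(goal)), goal)))  # small if synthesis succeeded
```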
Efficient First-Order Contextual Bandits: Prediction, Allocation, and Triangular Discrimination
|
https://papers.nips.cc/paper_files/paper/2021/hash/9d684c589d67031a627ad33d59db65e5-Abstract.html
|
Dylan J. Foster, Akshay Krishnamurthy
|
https://papers.nips.cc/paper_files/paper/2021/hash/9d684c589d67031a627ad33d59db65e5-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13068-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9d684c589d67031a627ad33d59db65e5-Paper.pdf
|
https://openreview.net/forum?id=3qYgdGj9Svt
|
https://papers.nips.cc/paper_files/paper/2021/file/9d684c589d67031a627ad33d59db65e5-Supplemental.pdf
|
A recurring theme in statistical learning, online learning, and beyond is that faster convergence rates are possible for problems with low noise, often quantified by the performance of the best hypothesis; such results are known as first-order or small-loss guarantees. While first-order guarantees are relatively well understood in statistical and online learning, adapting to low noise in contextual bandits (and more broadly, decision making) presents major algorithmic challenges. In a COLT 2017 open problem, Agarwal, Krishnamurthy, Langford, Luo, and Schapire asked whether first-order guarantees are even possible for contextual bandits and---if so---whether they can be attained by efficient algorithms. We give a resolution to this question by providing an optimal and efficient reduction from contextual bandits to online regression with the logarithmic (or, cross-entropy) loss. Our algorithm is simple and practical, readily accommodates rich function classes, and requires no distributional assumptions beyond realizability. In a large-scale empirical evaluation, we find that our approach typically outperforms comparable non-first-order methods. On the technical side, we show that the logarithmic loss and an information-theoretic quantity called the triangular discrimination play a fundamental role in obtaining first-order guarantees, and we combine this observation with new refinements to the regression oracle reduction framework of Foster and Rakhlin (2020). The use of triangular discrimination yields novel results even for the classical statistical learning model, and we anticipate that it will find broader use.
| null |
Distributed Estimation with Multiple Samples per User: Sharp Rates and Phase Transition
|
https://papers.nips.cc/paper_files/paper/2021/hash/9d740bd0f36aaa312c8d504e28c42163-Abstract.html
|
Jayadev Acharya, Clement Canonne, Yuhan Liu, Ziteng Sun, Himanshu Tyagi
|
https://papers.nips.cc/paper_files/paper/2021/hash/9d740bd0f36aaa312c8d504e28c42163-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13069-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9d740bd0f36aaa312c8d504e28c42163-Paper.pdf
|
https://openreview.net/forum?id=GfVeFihyLRe
|
https://papers.nips.cc/paper_files/paper/2021/file/9d740bd0f36aaa312c8d504e28c42163-Supplemental.pdf
|
We obtain tight minimax rates for the problem of distributed estimation of discrete distributions under communication constraints, where $n$ users observing $m $ samples each can broadcast only $\ell$ bits. Our main result is a tight characterization (up to logarithmic factors) of the error rate as a function of $m$, $\ell$, the domain size, and the number of users under most regimes of interest. While previous work focused on the setting where each user only holds one sample, we show that as $m$ grows the $\ell_1$ error rate gets reduced by a factor of $\sqrt{m}$ for small $m$. However, for large $m$ we observe an interesting phase transition: the dependence of the error rate on the communication constraint $\ell$ changes from $1/\sqrt{2^{\ell}}$ to $1/\sqrt{\ell}$.
| null |
Revisiting Deep Learning Models for Tabular Data
|
https://papers.nips.cc/paper_files/paper/2021/hash/9d86d83f925f2149e9edb0ac3b49229c-Abstract.html
|
Yury Gorishniy, Ivan Rubachev, Valentin Khrulkov, Artem Babenko
|
https://papers.nips.cc/paper_files/paper/2021/hash/9d86d83f925f2149e9edb0ac3b49229c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13070-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9d86d83f925f2149e9edb0ac3b49229c-Paper.pdf
|
https://openreview.net/forum?id=i_Q1yrOegLY
|
https://papers.nips.cc/paper_files/paper/2021/file/9d86d83f925f2149e9edb0ac3b49229c-Supplemental.pdf
|
The existing literature on deep learning for tabular data proposes a wide range of novel architectures and reports competitive results on various datasets. However, the proposed models are usually not properly compared to each other, and existing works often use different benchmarks and experiment protocols. As a result, it is unclear for both researchers and practitioners what models perform best. Additionally, the field still lacks effective baselines, that is, easy-to-use models that provide competitive performance across different problems. In this work, we perform an overview of the main families of DL architectures for tabular data and raise the bar of baselines in tabular DL by identifying two simple and powerful deep architectures. The first one is a ResNet-like architecture which turns out to be a strong baseline that is often missing in prior works. The second model is our simple adaptation of the Transformer architecture for tabular data, which outperforms other solutions on most tasks. Both models are compared to many existing architectures on a diverse set of tasks under the same training and tuning protocols. We also compare the best DL models with Gradient Boosted Decision Trees and conclude that there is still no universally superior solution. The source code is available at https://github.com/yandex-research/rtdl.
| null |
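A minimal sketch of a ResNet-like block for tabular inputs in the spirit of the baseline highlighted in the abstract above; the layer sizes, normalization choice, and dropout rates are illustrative and not the authors' configuration (their reference implementation lives in the linked rtdl repository).

```python
import torch
import torch.nn as nn

class TabularResNetBlock(nn.Module):
    """Residual block: x + Dropout(Linear(Dropout(ReLU(Linear(Norm(x))))))."""
    def __init__(self, d: int, d_hidden: int, dropout: float = 0.1):
        super().__init__()
        self.norm = nn.BatchNorm1d(d)
        self.lin1 = nn.Linear(d, d_hidden)
        self.lin2 = nn.Linear(d_hidden, d)
        self.drop = nn.Dropout(dropout)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.drop(torch.relu(self.lin1(self.norm(x))))
        return x + self.drop(self.lin2(h))

model = nn.Sequential(
    nn.Linear(20, 64),                       # embed raw numeric features
    TabularResNetBlock(64, 128),
    TabularResNetBlock(64, 128),
    nn.BatchNorm1d(64), nn.ReLU(),
    nn.Linear(64, 1),                        # regression / binary-classification head
)
print(model(torch.randn(32, 20)).shape)      # torch.Size([32, 1])
```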
Backdoor Attack with Imperceptible Input and Latent Modification
|
https://papers.nips.cc/paper_files/paper/2021/hash/9d99197e2ebf03fc388d09f1e94af89b-Abstract.html
|
Khoa Doan, Yingjie Lao, Ping Li
|
https://papers.nips.cc/paper_files/paper/2021/hash/9d99197e2ebf03fc388d09f1e94af89b-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13071-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9d99197e2ebf03fc388d09f1e94af89b-Paper.pdf
|
https://openreview.net/forum?id=2j_cut38wv
|
https://papers.nips.cc/paper_files/paper/2021/file/9d99197e2ebf03fc388d09f1e94af89b-Supplemental.pdf
|
Recent studies have shown that deep neural networks (DNN) are vulnerable to various adversarial attacks. In particular, an adversary can inject a stealthy backdoor into a model such that the compromised model will behave normally without the presence of the trigger. Techniques for generating backdoor images that are visually imperceptible from clean images have also been developed recently, which further enhance the stealthiness of the backdoor attacks from the input space. Along with the development of attacks, defense against backdoor attacks is also evolving. Many existing countermeasures found that backdoors tend to leave tangible footprints in the latent or feature space, which can be utilized to mitigate backdoor attacks. In this paper, we extend the concept of imperceptible backdoor from the input space to the latent representation, which significantly improves the effectiveness against the existing defense mechanisms, especially those relying on the distinguishability between clean inputs and backdoor inputs in latent space. In the proposed framework, the trigger function will learn to manipulate the input by injecting imperceptible input noise while matching the latent representations of the clean and manipulated inputs via a Wasserstein-based regularization of the corresponding empirical distributions. We formulate such an objective as a non-convex and constrained optimization problem and solve the problem with an efficient stochastic alternating optimization procedure. We name the proposed backdoor attack as Wasserstein Backdoor (WB), which achieves a high attack success rate while being stealthy from both the input and latent spaces, as tested in several benchmark datasets, including MNIST, CIFAR10, GTSRB, and TinyImagenet.
| null |
SOPE: Spectrum of Off-Policy Estimators
|
https://papers.nips.cc/paper_files/paper/2021/hash/9dd16e049becf4d5087c90a83fea403b-Abstract.html
|
Christina Yuan, Yash Chandak, Stephen Giguere, Philip S. Thomas, Scott Niekum
|
https://papers.nips.cc/paper_files/paper/2021/hash/9dd16e049becf4d5087c90a83fea403b-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13072-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9dd16e049becf4d5087c90a83fea403b-Paper.pdf
|
https://openreview.net/forum?id=Mfi0LZmFB5a
|
https://papers.nips.cc/paper_files/paper/2021/file/9dd16e049becf4d5087c90a83fea403b-Supplemental.pdf
|
Many sequential decision making problems are high-stakes and require off-policy evaluation (OPE) of a new policy using historical data collected using some other policy. One of the most common OPE techniques that provides unbiased estimates is trajectory based importance sampling (IS). However, due to the high variance of trajectory IS estimates, importance sampling methods based on state-action visitation distributions (SIS) have recently been adopted. Unfortunately, while SIS often provides lower variance estimates for long horizons, estimating the state-action distribution ratios can be challenging and lead to biased estimates. In this paper, we present a new perspective on this bias-variance trade-off and show the existence of a spectrum of estimators whose endpoints are SIS and IS. Additionally, we also establish a spectrum for doubly-robust and weighted version of these estimators. We provide empirical evidence that estimators in this spectrum can be used to trade-off between the bias and variance of IS and SIS and can achieve lower mean-squared error than both IS and SIS.
| null |
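For orientation, the two endpoint estimators that the SOPE spectrum interpolates between can be written in standard off-policy evaluation notation (behaviour policy $\pi_b$, evaluation policy $\pi_e$, horizon $H$, discount $\gamma$, and state-action visitation distributions $d^{\pi}$). This is textbook material, not the paper's new spectrum, which is defined in the paper itself.

```latex
\[
\hat{v}_{\mathrm{IS}} \;=\; \frac{1}{n}\sum_{i=1}^{n}
  \Bigl(\prod_{t=0}^{H-1}\frac{\pi_e(a_t^i\mid s_t^i)}{\pi_b(a_t^i\mid s_t^i)}\Bigr)
  \sum_{t=0}^{H-1}\gamma^t r_t^i ,
\qquad
\hat{v}_{\mathrm{SIS}} \;=\; \frac{1}{n}\sum_{i=1}^{n}\sum_{t=0}^{H-1}
  \gamma^t\,\frac{d^{\pi_e}(s_t^i,a_t^i)}{d^{\pi_b}(s_t^i,a_t^i)}\, r_t^i .
\]
```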
Label-Imbalanced and Group-Sensitive Classification under Overparameterization
|
https://papers.nips.cc/paper_files/paper/2021/hash/9dfcf16f0adbc5e2a55ef02db36bac7f-Abstract.html
|
Ganesh Ramachandra Kini, Orestis Paraskevas, Samet Oymak, Christos Thrampoulidis
|
https://papers.nips.cc/paper_files/paper/2021/hash/9dfcf16f0adbc5e2a55ef02db36bac7f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13073-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9dfcf16f0adbc5e2a55ef02db36bac7f-Paper.pdf
|
https://openreview.net/forum?id=UZm2IQhgIyB
|
https://papers.nips.cc/paper_files/paper/2021/file/9dfcf16f0adbc5e2a55ef02db36bac7f-Supplemental.pdf
|
The goal in label-imbalanced and group-sensitive classification is to optimize relevant metrics such as balanced error and equal opportunity. Classical methods, such as weighted cross-entropy, fail when training deep nets to the terminal phase of training (TPT), that is, training beyond zero training error. This observation has motivated a recent flurry of activity in developing heuristic alternatives following the intuitive mechanism of promoting a larger margin for minorities. In contrast to previous heuristics, we follow a principled analysis explaining how different loss adjustments affect margins. First, we prove that for all linear classifiers trained in TPT, it is necessary to introduce multiplicative, rather than additive, logit adjustments so that the interclass margins change appropriately. To show this, we discover a connection of the multiplicative CE modification to cost-sensitive support-vector machines. Perhaps counterintuitively, we also find that, at the start of training, the same multiplicative weights can actually harm the minority classes. Thus, while additive adjustments are ineffective in the TPT, we show that they can speed up convergence by countering the initial negative effect of the multiplicative weights. Motivated by these findings, we formulate the vector-scaling (VS) loss, which captures existing techniques as special cases. Moreover, we introduce a natural extension of the VS-loss to group-sensitive classification, thus treating the two common types of imbalances (label/group) in a unifying way. Importantly, our experiments on state-of-the-art datasets are fully consistent with our theoretical insights and confirm the superior performance of our algorithms. Finally, for imbalanced Gaussian-mixtures data, we perform a generalization analysis, revealing tradeoffs between balanced / standard error and equal opportunity.
| null |
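A hedged sketch of the loss family described above, as we understand the vector-scaling construction: per-class multiplicative logit adjustments $\Delta_c$ and additive adjustments $\iota_c$ enter a cross-entropy as below, with additive-only and multiplicative-only choices recovering familiar special cases. The specific parameterization of $\Delta$ and $\iota$ in terms of class frequencies is given in the paper and is not reproduced here.

```latex
\[
\ell_{\mathrm{VS}}(y,\mathbf{z}) \;=\;
  -\log\frac{\exp\!\left(\Delta_y z_y + \iota_y\right)}
            {\sum_{c}\exp\!\left(\Delta_c z_c + \iota_c\right)},
\qquad
\Delta_c > 0 \ \text{(multiplicative)},\quad \iota_c \in \mathbb{R} \ \text{(additive)} .
\]
```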
Neural Program Generation Modulo Static Analysis
|
https://papers.nips.cc/paper_files/paper/2021/hash/9e1a36515d6704d7eb7a30d783400e5d-Abstract.html
|
Rohan Mukherjee, Yeming Wen, Dipak Chaudhari, Thomas Reps, Swarat Chaudhuri, Christopher Jermaine
|
https://papers.nips.cc/paper_files/paper/2021/hash/9e1a36515d6704d7eb7a30d783400e5d-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13074-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9e1a36515d6704d7eb7a30d783400e5d-Paper.pdf
|
https://openreview.net/forum?id=yaksQCYcRs
|
https://papers.nips.cc/paper_files/paper/2021/file/9e1a36515d6704d7eb7a30d783400e5d-Supplemental.pdf
|
State-of-the-art neural models of source code tend to be evaluated on the generation of individual expressions and lines of code, and commonly fail on long-horizon tasks such as the generation of entire method bodies. We propose to address this deficiency using weak supervision from a static program analyzer. Our neurosymbolic method allows a deep generative model to symbolically compute, using calls to a static analysis tool, long-distance semantic relationships in the code that it has already generated. During training, the model observes these relationships and learns to generate programs conditioned on them. We apply our approach to the problem of generating entire Java methods given the remainder of the class that contains the method. Our experiments show that the approach substantially outperforms a state-of-the-art transformer and a model that explicitly tries to learn program semantics on this task, both in terms of producing programs free of basic semantic errors and in terms of syntactically matching the ground truth.
| null |
Unfolding Taylor's Approximations for Image Restoration
|
https://papers.nips.cc/paper_files/paper/2021/hash/9e3cfc48eccf81a0d57663e129aef3cb-Abstract.html
|
Man Zhou, Xueyang Fu, Zeyu Xiao, Gang Yang, Aiping Liu, Zhiwei Xiong
|
https://papers.nips.cc/paper_files/paper/2021/hash/9e3cfc48eccf81a0d57663e129aef3cb-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13075-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9e3cfc48eccf81a0d57663e129aef3cb-Paper.pdf
|
https://openreview.net/forum?id=8vwDIC9pEb
|
https://papers.nips.cc/paper_files/paper/2021/file/9e3cfc48eccf81a0d57663e129aef3cb-Supplemental.zip
|
Deep learning provides a new avenue for image restoration, which demands a delicate balance between fine-grained details and high-level contextualized information when recovering the latent clear image. In practice, however, existing methods empirically construct encapsulated end-to-end mapping networks without examining their underlying rationale, and neglect the intrinsic prior knowledge of the restoration task. To solve the above problems, inspired by Taylor’s Approximations, we unfold Taylor’s Formula to construct a novel framework for image restoration. We find that the main part and the derivative part of Taylor’s Approximations play the same roles as the two competing goals of high-level contextualized information and spatial details of image restoration, respectively. Specifically, our framework consists of two steps, which are correspondingly responsible for the mapping and derivative functions. The former first learns the high-level contextualized information and the latter combines it with the degraded input to progressively recover local high-order spatial details. Our proposed framework is orthogonal to existing methods and thus can be easily integrated with them for further improvement, and extensive experiments demonstrate the effectiveness and scalability of our proposed framework.
| null |
Metropolis-Hastings Data Augmentation for Graph Neural Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/9e7ba617ad9e69b39bd0c29335b79629-Abstract.html
|
Hyeonjin Park, Seunghun Lee, Sihyeon Kim, Jinyoung Park, Jisu Jeong, Kyung-Min Kim, Jung-Woo Ha, Hyunwoo J. Kim
|
https://papers.nips.cc/paper_files/paper/2021/hash/9e7ba617ad9e69b39bd0c29335b79629-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13076-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9e7ba617ad9e69b39bd0c29335b79629-Paper.pdf
|
https://openreview.net/forum?id=c3u5qyZawqh
|
https://papers.nips.cc/paper_files/paper/2021/file/9e7ba617ad9e69b39bd0c29335b79629-Supplemental.pdf
|
Graph Neural Networks (GNNs) often suffer from weak generalization due to sparsely labeled data, despite their promising results on various graph-based tasks. Data augmentation is a prevalent remedy to improve the generalization ability of models in many domains. However, due to the non-Euclidean nature of the data space and the dependencies between samples, designing effective augmentation on graphs is challenging. In this paper, we propose a novel framework, Metropolis-Hastings Data Augmentation (MH-Aug), that draws augmented graphs from an explicit target distribution for semi-supervised learning. MH-Aug produces a sequence of augmented graphs from the target distribution, which enables flexible control of the strength and diversity of augmentation. Since direct sampling from the complex target distribution is challenging, we adopt the Metropolis-Hastings algorithm to obtain the augmented samples. We also propose a simple and effective semi-supervised learning strategy with generated samples from MH-Aug. Our extensive experiments demonstrate that MH-Aug can generate a sequence of samples according to the target distribution that significantly improves the performance of GNNs.
| null |
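For readers unfamiliar with the accept/reject step that MH-Aug reuses, here is the generic Metropolis-Hastings test in log space; `log_p_*` would be the (unnormalized) log target density of an augmented graph and `log_q_*` the proposal log densities. The graph-specific target and proposal distributions are defined in the paper and are not reproduced here.

```python
import math
import random

def mh_accept(log_p_cur, log_p_prop, log_q_cur_given_prop, log_q_prop_given_cur) -> bool:
    """Accept the proposal g' with probability min(1, P(g')q(g|g') / (P(g)q(g'|g)))."""
    log_alpha = (log_p_prop + log_q_cur_given_prop) - (log_p_cur + log_q_prop_given_cur)
    return math.log(random.random() + 1e-300) < min(0.0, log_alpha)

# Toy usage with a symmetric proposal (the q terms cancel):
accepted = mh_accept(log_p_cur=-3.0, log_p_prop=-2.5,
                     log_q_cur_given_prop=0.0, log_q_prop_given_cur=0.0)
print(accepted)
```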
Strategic Behavior is Bliss: Iterative Voting Improves Social Welfare
|
https://papers.nips.cc/paper_files/paper/2021/hash/9edcc1391c208ba0b503fe9a22574251-Abstract.html
|
Joshua Kavner, Lirong Xia
|
https://papers.nips.cc/paper_files/paper/2021/hash/9edcc1391c208ba0b503fe9a22574251-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13077-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9edcc1391c208ba0b503fe9a22574251-Paper.pdf
|
https://openreview.net/forum?id=MrAN2U5EPZZ
|
https://papers.nips.cc/paper_files/paper/2021/file/9edcc1391c208ba0b503fe9a22574251-Supplemental.pdf
|
Recent work in iterative voting has defined the additive dynamic price of anarchy (ADPoA) as the difference in social welfare between the truthful and worst-case equilibrium profiles resulting from repeated strategic manipulations. While iterative plurality has been shown to only return alternatives with at most one less initial votes than the truthful winner, it is less understood how agents' welfare changes in equilibrium. To this end, we differentiate agents' utility from their manipulation mechanism and determine iterative plurality's ADPoA in the worst- and average-cases. We first prove that the worst-case ADPoA is linear in the number of agents. To overcome this negative result, we study the average-case ADPoA and prove that equilibrium winners have a constant order welfare advantage over the truthful winner in expectation. Our positive results illustrate the prospect for social welfare to increase due to strategic manipulation.
| null |
Agnostic Reinforcement Learning with Low-Rank MDPs and Rich Observations
|
https://papers.nips.cc/paper_files/paper/2021/hash/9eed867b73ab1eab60583c9d4a789b1b-Abstract.html
|
Ayush Sekhari, Christoph Dann, Mehryar Mohri, Yishay Mansour, Karthik Sridharan
|
https://papers.nips.cc/paper_files/paper/2021/hash/9eed867b73ab1eab60583c9d4a789b1b-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13078-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9eed867b73ab1eab60583c9d4a789b1b-Paper.pdf
|
https://openreview.net/forum?id=ZAOrF0mYSYU
|
https://papers.nips.cc/paper_files/paper/2021/file/9eed867b73ab1eab60583c9d4a789b1b-Supplemental.pdf
|
There have been many recent advances on provably efficient Reinforcement Learning (RL) in problems with rich observation spaces. However, all these works share a strong realizability assumption about the optimal value function of the true MDP. Such realizability assumptions are often too strong to hold in practice. In this work, we consider the more realistic setting of agnostic RL with rich observation spaces and a fixed class of policies $\Pi$ that may not contain any near-optimal policy. We provide an algorithm for this setting whose error is bounded in terms of the rank $d$ of the underlying MDP. Specifically, our algorithm enjoys a sample complexity bound of $\widetilde{O}\left((H^{4d} K^{3d} \log |\Pi|)/\epsilon^2\right)$ where $H$ is the length of episodes, $K$ is the number of actions and $\epsilon>0$ is the desired sub-optimality. We also provide a nearly matching lower bound for this agnostic setting that shows that the exponential dependence on rank is unavoidable, without further assumptions.
| null |
Functional Regularization for Reinforcement Learning via Learned Fourier Features
|
https://papers.nips.cc/paper_files/paper/2021/hash/9f0609b9d45dd55bed75f892cf095fcf-Abstract.html
|
Alexander Li, Deepak Pathak
|
https://papers.nips.cc/paper_files/paper/2021/hash/9f0609b9d45dd55bed75f892cf095fcf-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13079-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9f0609b9d45dd55bed75f892cf095fcf-Paper.pdf
|
https://openreview.net/forum?id=uTqvj8i3xv
|
https://papers.nips.cc/paper_files/paper/2021/file/9f0609b9d45dd55bed75f892cf095fcf-Supplemental.pdf
|
We propose a simple architecture for deep reinforcement learning by embedding inputs into a learned Fourier basis and show that it improves the sample efficiency of both state-based and image-based RL. We perform infinite-width analysis of our architecture using the Neural Tangent Kernel and theoretically show that tuning the initial variance of the Fourier basis is equivalent to functional regularization of the learned deep network. That is, these learned Fourier features allow for adjusting the degree to which networks underfit or overfit different frequencies in the training data, and hence provide a controlled mechanism to improve the stability and performance of RL optimization. Empirically, this allows us to prioritize learning low-frequency functions and speed up learning by reducing networks' susceptibility to noise in the optimization process, such as during Bellman updates. Experiments on standard state-based and image-based RL benchmarks show clear benefits of our architecture over the baselines.
| null |
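A minimal sketch of embedding inputs into a learned Fourier basis, as described in the abstract above: project the state through a trainable matrix B and apply sin/cos, with the initialization scale of B acting as the knob the paper relates to functional regularization. The layer sizes and scale here are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class LearnedFourierFeatures(nn.Module):
    """phi(x) = [sin(Bx), cos(Bx)] with a trainable projection matrix B."""
    def __init__(self, in_dim: int, n_features: int, init_scale: float = 1.0):
        super().__init__()
        self.B = nn.Parameter(init_scale * torch.randn(in_dim, n_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        proj = x @ self.B
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

# Drop the embedding in front of an ordinary value/policy MLP.
state_dim = 8
net = nn.Sequential(
    LearnedFourierFeatures(state_dim, 64, init_scale=0.5),  # smaller scale -> bias toward low frequencies
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 1),
)
print(net(torch.randn(4, state_dim)).shape)  # torch.Size([4, 1])
```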
Adaptive First-Order Methods Revisited: Convex Minimization without Lipschitz Requirements
|
https://papers.nips.cc/paper_files/paper/2021/hash/9f16b57bdd4400066a83cd8eaa151c41-Abstract.html
|
Kimon Antonakopoulos, Panayotis Mertikopoulos
|
https://papers.nips.cc/paper_files/paper/2021/hash/9f16b57bdd4400066a83cd8eaa151c41-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13080-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9f16b57bdd4400066a83cd8eaa151c41-Paper.pdf
|
https://openreview.net/forum?id=wTLc2HcWLIM
|
https://papers.nips.cc/paper_files/paper/2021/file/9f16b57bdd4400066a83cd8eaa151c41-Supplemental.pdf
|
We propose a new family of adaptive first-order methods for a class of convex minimization problems that may fail to be Lipschitz continuous or smooth in the standard sense. Specifically, motivated by a recent flurry of activity on non-Lipschitz (NoLips) optimization, we consider problems that are continuous or smooth relative to a reference Bregman function – as opposed to a global, ambient norm (Euclidean or otherwise). These conditions encompass a wide range of problems with singular objective, such as Fisher markets, Poisson tomography, D-design, and the like. In this setting, the application of existing order-optimal adaptive methods – like UnixGrad or AcceleGrad – is not possible, especially in the presence of randomness and uncertainty. The proposed method, adaptive mirror descent (AdaMir), aims to close this gap by concurrently achieving min-max optimal rates in problems that are relatively continuous or smooth, including stochastic ones.
| null |
Adapting to function difficulty and growth conditions in private optimization
|
https://papers.nips.cc/paper_files/paper/2021/hash/9f820adf84bf8a1c259f464ba89ea11f-Abstract.html
|
Hilal Asi, Daniel Levy, John C. Duchi
|
https://papers.nips.cc/paper_files/paper/2021/hash/9f820adf84bf8a1c259f464ba89ea11f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13081-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9f820adf84bf8a1c259f464ba89ea11f-Paper.pdf
|
https://openreview.net/forum?id=HbaQ4FEh-6
|
https://papers.nips.cc/paper_files/paper/2021/file/9f820adf84bf8a1c259f464ba89ea11f-Supplemental.pdf
|
We develop algorithms for private stochastic convex optimization that adapt to the hardness of the specific function we wish to optimize. While previous work provide worst-case bounds for arbitrary convex functions, it is often the case that the function at hand belongs to a smaller class that enjoys faster rates. Concretely, we show that for functions exhibiting $\kappa$-growth around the optimum, i.e., $f(x) \ge f(x^\star) + \lambda \kappa^{-1} \|x-x^\star\|_2^\kappa$ for $\kappa > 1$, our algorithms improve upon the standard ${\sqrt{d}}/{n\varepsilon}$ privacy rate to the faster $({\sqrt{d}}/{n\varepsilon})^{\tfrac{\kappa}{\kappa - 1}}$. Crucially, they achieve these rates without knowledge of the growth constant $\kappa$ of the function. Our algorithms build upon the inverse sensitivity mechanism, which adapts to instance difficulty [2], and recent localization techniques in private optimization [25]. We complement our algorithms with matching lower bounds for these function classes and demonstrate that our adaptive algorithm is simultaneously (minimax) optimal over all $\kappa \ge 1+c$ whenever $c = \Theta(1)$.
| null |
Support Recovery of Sparse Signals from a Mixture of Linear Measurements
|
https://papers.nips.cc/paper_files/paper/2021/hash/9f8785c7f9b578bec2c09e616568d270-Abstract.html
|
Soumyabrata Pal, Arya Mazumdar, Venkata Gandikota
|
https://papers.nips.cc/paper_files/paper/2021/hash/9f8785c7f9b578bec2c09e616568d270-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13082-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9f8785c7f9b578bec2c09e616568d270-Paper.pdf
|
https://openreview.net/forum?id=-ioMuxJ6ud9
|
https://papers.nips.cc/paper_files/paper/2021/file/9f8785c7f9b578bec2c09e616568d270-Supplemental.pdf
|
Recovery of support of a sparse vector from simple measurements is a widely studied problem, considered under the frameworks of compressed sensing, 1-bit compressed sensing, and more general single index models. We consider generalizations of this problem: mixtures of linear regressions, and mixtures of linear classifiers, where the goal is to recover supports of multiple sparse vectors using only a small number of possibly noisy linear, and 1-bit measurements respectively. The key challenge is that the measurements from different vectors are randomly mixed. Both of these problems have also received attention recently. In mixtures of linear classifiers, an observation corresponds to the side of the queried hyperplane a random unknown vector lies in; whereas in mixtures of linear regressions we observe the projection of a random unknown vector on the queried hyperplane. The primary step in recovering the unknown vectors from the mixture is to first identify the support of all the individual component vectors. In this work, we study the number of measurements sufficient for recovering the supports of all the component vectors in a mixture in both these models. We provide algorithms that use a number of measurements polynomial in $k, \log n$ and quasi-polynomial in $\ell$, to recover the support of all the $\ell$ unknown vectors in the mixture with high probability when each individual component is a $k$-sparse $n$-dimensional vector.
| null |
Stochastic Gradient Descent-Ascent and Consensus Optimization for Smooth Games: Convergence Analysis under Expected Co-coercivity
|
https://papers.nips.cc/paper_files/paper/2021/hash/9f96f36b7aae3b1ff847c26ac94c604e-Abstract.html
|
Nicolas Loizou, Hugo Berard, Gauthier Gidel, Ioannis Mitliagkas, Simon Lacoste-Julien
|
https://papers.nips.cc/paper_files/paper/2021/hash/9f96f36b7aae3b1ff847c26ac94c604e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13083-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9f96f36b7aae3b1ff847c26ac94c604e-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=9_CTZ_xdQk
|
https://papers.nips.cc/paper_files/paper/2021/file/9f96f36b7aae3b1ff847c26ac94c604e-Supplemental.pdf
|
Two of the most prominent algorithms for solving unconstrained smooth games are the classical stochastic gradient descent-ascent (SGDA) and the recently introduced stochastic consensus optimization (SCO) [Mescheder et al., 2017]. SGDA is known to converge to a stationary point for specific classes of games, but current convergence analyses require a bounded variance assumption. SCO is used successfully for solving large-scale adversarial problems, but its convergence guarantees are limited to its deterministic variant. In this work, we introduce the expected co-coercivity condition, explain its benefits, and provide the first last-iterate convergence guarantees of SGDA and SCO under this condition for solving a class of stochastic variational inequality problems that are potentially non-monotone. We prove linear convergence of both methods to a neighborhood of the solution when they use constant step-size, and we propose insightful stepsize-switching rules to guarantee convergence to the exact solution. In addition, our convergence guarantees hold under the arbitrary sampling paradigm, and as such, we give insights into the complexity of minibatching.
| null |
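The abstract above analyzes the classical stochastic gradient descent-ascent (SGDA) update with constant step-size. As orientation only, here is a minimal sketch of that generic update for a smooth min-max problem $\min_x \max_y f(x,y)$; the toy quadratic objective, noise level, and step size below are illustrative assumptions, and the sketch does not implement the paper's expected co-coercivity analysis or its step-size-switching rules.

    import numpy as np

    def sgda(x, y, stochastic_grads, step_size=0.05, iters=2000, rng=None):
        """Constant step-size stochastic gradient descent-ascent for min_x max_y f(x, y).
        stochastic_grads(x, y, rng) returns unbiased estimates of (grad_x f, grad_y f)."""
        if rng is None:
            rng = np.random.default_rng(0)
        for _ in range(iters):
            g_x, g_y = stochastic_grads(x, y, rng)
            x = x - step_size * g_x  # descent step on the min player
            y = y + step_size * g_y  # ascent step on the max player
        return x, y

    # Toy strongly-convex-strongly-concave saddle: f(x, y) = 0.5*x**2 - 0.5*y**2 + x*y
    def toy_grads(x, y, rng):
        noise = rng.normal(scale=0.1, size=2)  # additive gradient noise
        return x + y + noise[0], x - y + noise[1]

    print(sgda(1.0, 1.0, toy_grads))

With a constant step size the last iterate only settles in a noise-dependent neighborhood of the saddle point $(0, 0)$, which is exactly the regime the paper's step-size-switching rules are designed to improve.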
Tighter Expected Generalization Error Bounds via Wasserstein Distance
|
https://papers.nips.cc/paper_files/paper/2021/hash/9f975093da0252e2c0ae181d74c90dc6-Abstract.html
|
Borja Rodríguez Gálvez, German Bassi, Ragnar Thobaben, Mikael Skoglund
|
https://papers.nips.cc/paper_files/paper/2021/hash/9f975093da0252e2c0ae181d74c90dc6-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13084-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9f975093da0252e2c0ae181d74c90dc6-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=xJYek6zantM
|
https://papers.nips.cc/paper_files/paper/2021/file/9f975093da0252e2c0ae181d74c90dc6-Supplemental.pdf
|
This work presents several expected generalization error bounds based on the Wasserstein distance. More specifically, it introduces full-dataset, single-letter, and random-subset bounds, and their analogues in the randomized subsample setting from Steinke and Zakynthinou [1]. Moreover, when the loss function is bounded and the geometry of the space is ignored by the choice of the metric in the Wasserstein distance, these bounds recover from below (and thus, are tighter than) current bounds based on the relative entropy. In particular, they generate new, non-vacuous bounds based on the relative entropy. Therefore, these results can be seen as a bridge between works that account for the geometry of the hypothesis space and those based on the relative entropy, which is agnostic to such geometry. Furthermore, it is shown how to produce various new bounds based on different information measures (e.g., the lautum information or several $f$-divergences) based on these bounds and how to derive similar bounds with respect to the backward channel using the presented proof techniques.
| null |
Unifying Width-Reduced Methods for Quasi-Self-Concordant Optimization
|
https://papers.nips.cc/paper_files/paper/2021/hash/9f9e8cba3700df6a947a8cf91035ab84-Abstract.html
|
Deeksha Adil, Brian Bullins, Sushant Sachdeva
|
https://papers.nips.cc/paper_files/paper/2021/hash/9f9e8cba3700df6a947a8cf91035ab84-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13085-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9f9e8cba3700df6a947a8cf91035ab84-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=8n0eirHTH7I
|
https://papers.nips.cc/paper_files/paper/2021/file/9f9e8cba3700df6a947a8cf91035ab84-Supplemental.pdf
|
We provide several algorithms for constrained optimization of a large class of convex problems, including softmax, $\ell_p$ regression, and logistic regression. Central to our approach is the notion of width reduction, a technique which has proven immensely useful in the context of maximum flow [Christiano et al., STOC'11] and, more recently, $\ell_p$ regression [Adil et al., SODA'19], in terms of improving the iteration complexity from $O(m^{1/2})$ to $\tilde{O}(m^{1/3})$, where $m$ is the number of rows of the design matrix, and where each iteration amounts to a linear system solve. However, a considerable drawback is that these methods require both problem-specific potentials and individually tailored analyses. As our main contribution, we initiate a new direction of study by presenting the first \emph{unified} approach to achieving $m^{1/3}$-type rates. Notably, our method goes beyond these previously considered problems to more broadly capture \emph{quasi-self-concordant} losses, a class which has recently generated much interest and includes the well-studied problem of logistic regression, among others. In order to do so, we develop a unified width reduction method for carefully handling these losses based on a more general set of potentials. Additionally, we directly achieve $m^{1/3}$-type rates in the constrained setting without the need for any explicit acceleration schemes, thus naturally complementing recent work based on a ball-oracle approach [Carmon et al., NeurIPS'20].
| null |
Bridging the Imitation Gap by Adaptive Insubordination
|
https://papers.nips.cc/paper_files/paper/2021/hash/9fc664916bce863561527f06a96f5ff3-Abstract.html
|
Luca Weihs, Unnat Jain, Iou-Jen Liu, Jordi Salvador, Svetlana Lazebnik, Aniruddha Kembhavi, Alex Schwing
|
https://papers.nips.cc/paper_files/paper/2021/hash/9fc664916bce863561527f06a96f5ff3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13086-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9fc664916bce863561527f06a96f5ff3-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=Wlx0DqiUTD_
|
https://papers.nips.cc/paper_files/paper/2021/file/9fc664916bce863561527f06a96f5ff3-Supplemental.pdf
|
In practice, imitation learning is preferred over pure reinforcement learning whenever it is possible to design a teaching agent to provide expert supervision. However, we show that when the teaching agent makes decisions with access to privileged information that is unavailable to the student, this information is marginalized during imitation learning, resulting in an "imitation gap" and, potentially, poor results. Prior work bridges this gap via a progression from imitation learning to reinforcement learning. While often successful, gradual progression fails for tasks that require frequent switches between exploration and memorization. To better address these tasks and alleviate the imitation gap we propose 'Adaptive Insubordination' (ADVISOR). ADVISOR dynamically weights imitation and reward-based reinforcement learning losses during training, enabling on-the-fly switching between imitation and exploration. On a suite of challenging tasks set within gridworlds, multi-agent particle environments, and high-fidelity 3D simulators, we show that on-the-fly switching with ADVISOR outperforms pure imitation, pure reinforcement learning, as well as their sequential and parallel combinations.
| null |
Adversarial Robustness with Non-uniform Perturbations
|
https://papers.nips.cc/paper_files/paper/2021/hash/9fd98f856d3ca2086168f264a117ed7c-Abstract.html
|
Ecenaz Erdemir, Jeffrey Bickford, Luca Melis, Sergul Aydore
|
https://papers.nips.cc/paper_files/paper/2021/hash/9fd98f856d3ca2086168f264a117ed7c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13087-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9fd98f856d3ca2086168f264a117ed7c-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=cQLkLAQgZ5I
|
https://papers.nips.cc/paper_files/paper/2021/file/9fd98f856d3ca2086168f264a117ed7c-Supplemental.pdf
|
Robustness of machine learning models is critical for security related applications, where real-world adversaries are uniquely focused on evading neural network based detectors. Prior work mainly focuses on crafting adversarial examples (AEs) with small uniform norm-bounded perturbations across features to maintain the requirement of imperceptibility. However, uniform perturbations do not result in realistic AEs in domains such as malware, finance, and social networks. For these types of applications, features typically have some semantically meaningful dependencies. The key idea of our proposed approach is to enable non-uniform perturbations that can adequately represent these feature dependencies during adversarial training. We propose using characteristics of the empirical data distribution, both on correlations between the features and on the importance of the features themselves. Using experimental datasets for malware classification, credit risk prediction, and spam detection, we show that our approach is more robust to real-world attacks. Finally, we present robustness certification utilizing non-uniform perturbation bounds, and show that non-uniform bounds achieve better certification.
| null |
Container: Context Aggregation Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/9fe77ac7060e716f2d42631d156825c0-Abstract.html
|
peng gao, Jiasen Lu, hongsheng Li, Roozbeh Mottaghi, Aniruddha Kembhavi
|
https://papers.nips.cc/paper_files/paper/2021/hash/9fe77ac7060e716f2d42631d156825c0-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13088-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/9fe77ac7060e716f2d42631d156825c0-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=J9Rc5P4xjT
|
https://papers.nips.cc/paper_files/paper/2021/file/9fe77ac7060e716f2d42631d156825c0-Supplemental.zip
|
Convolutional neural networks (CNNs) are ubiquitous in computer vision, with a myriad of effective and efficient variations. Recently, Transformers -- originally introduced in natural language processing -- have been increasingly adopted in computer vision. While early adopters continued to employ CNN backbones, the latest networks are end-to-end CNN-free Transformer solutions. A recent surprising finding now shows that a simple MLP based solution without any traditional convolutional or Transformer components can produce effective visual representations. While CNNs, Transformers and MLP-Mixers may be considered as completely disparate architectures, we provide a unified view showing that they are in fact special cases of a more general method to aggregate spatial context in a neural network stack. We present Container (CONText AggregatIon NEtwoRk), a general-purpose building block for multi-head context aggregation that can exploit long-range interactions \emph{a la} Transformers while still exploiting the inductive bias of the local convolution operation leading to faster convergence speeds, often seen in CNNs. Our Container architecture achieves 82.7% Top-1 accuracy on ImageNet using 22M parameters, +2.8 improvement compared with DeiT-Small, and can converge to 79.9% Top-1 accuracy in just 200 epochs. In contrast to Transformer-based methods that do not scale well to downstream tasks that rely on larger input image resolutions, our efficient network, named Container-Light, can be employed in object detection and instance segmentation networks such as DETR, RetinaNet and Mask-RCNN to obtain an impressive detection mAP of 38.9, 43.8, 45.1 and mask mAP of 41.3, providing large improvements of 6.6, 7.3, 6.9 and 6.6 pts respectively, compared to a ResNet-50 backbone with a comparable compute and parameter size. Our method also achieves promising results on self-supervised learning compared to DeiT on the DINO framework. Code is released at https://github.com/allenai/container.
| null |
ConE: Cone Embeddings for Multi-Hop Reasoning over Knowledge Graphs
|
https://papers.nips.cc/paper_files/paper/2021/hash/a0160709701140704575d499c997b6ca-Abstract.html
|
Zhanqiu Zhang, Jie Wang, Jiajun Chen, Shuiwang Ji, Feng Wu
|
https://papers.nips.cc/paper_files/paper/2021/hash/a0160709701140704575d499c997b6ca-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13089-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a0160709701140704575d499c997b6ca-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=Twf_XYunk5j
|
https://papers.nips.cc/paper_files/paper/2021/file/a0160709701140704575d499c997b6ca-Supplemental.pdf
|
Query embedding (QE)---which aims to embed entities and first-order logical (FOL) queries in low-dimensional spaces---has shown great power in multi-hop reasoning over knowledge graphs. Recently, embedding entities and queries with geometric shapes has become a promising direction, as geometric shapes can naturally represent answer sets of queries and logical relationships among them. However, existing geometry-based models have difficulty in modeling queries with negation, which significantly limits their applicability. To address this challenge, we propose a novel query embedding model, namely \textbf{Con}e \textbf{E}mbeddings (ConE), which is the first geometry-based QE model that can handle all the FOL operations, including conjunction, disjunction, and negation. Specifically, ConE represents entities and queries as Cartesian products of two-dimensional cones, where the intersection and union of cones naturally model the conjunction and disjunction operations. By further noticing that the closure of the complement of cones remains cones, we design geometric complement operators in the embedding space for the negation operations. Experiments demonstrate that ConE significantly outperforms existing state-of-the-art methods on benchmark datasets.
| null |
Federated Hyperparameter Tuning: Challenges, Baselines, and Connections to Weight-Sharing
|
https://papers.nips.cc/paper_files/paper/2021/hash/a0205b87490c847182672e8d371e9948-Abstract.html
|
Mikhail Khodak, Renbo Tu, Tian Li, Liam Li, Maria-Florina F. Balcan, Virginia Smith, Ameet Talwalkar
|
https://papers.nips.cc/paper_files/paper/2021/hash/a0205b87490c847182672e8d371e9948-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13090-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a0205b87490c847182672e8d371e9948-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=p99rWde9fVJ
|
https://papers.nips.cc/paper_files/paper/2021/file/a0205b87490c847182672e8d371e9948-Supplemental.pdf
|
Tuning hyperparameters is a crucial but arduous part of the machine learning pipeline. Hyperparameter optimization is even more challenging in federated learning, where models are learned over a distributed network of heterogeneous devices; here, the need to keep data on device and perform local training makes it difficult to efficiently train and evaluate configurations. In this work, we investigate the problem of federated hyperparameter tuning. We first identify key challenges and show how standard approaches may be adapted to form baselines for the federated setting. Then, by making a novel connection to the neural architecture search technique of weight-sharing, we introduce a new method, FedEx, to accelerate federated hyperparameter tuning that is applicable to widely-used federated optimization methods such as FedAvg and recent variants. Theoretically, we show that a FedEx variant correctly tunes the on-device learning rate in the setting of online convex optimization across devices. Empirically, we show that FedEx can outperform natural baselines for federated hyperparameter tuning by several percentage points on the Shakespeare, FEMNIST, and CIFAR-10 benchmarks—obtaining higher accuracy using the same training budget.
| null |
Training for the Future: A Simple Gradient Interpolation Loss to Generalize Along Time
|
https://papers.nips.cc/paper_files/paper/2021/hash/a02ef8389f6d40f84b50504613117f88-Abstract.html
|
Anshul Nasery, Soumyadeep Thakur, Vihari Piratla, Abir De, Sunita Sarawagi
|
https://papers.nips.cc/paper_files/paper/2021/hash/a02ef8389f6d40f84b50504613117f88-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13091-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a02ef8389f6d40f84b50504613117f88-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=U7SBcmRf65
|
https://papers.nips.cc/paper_files/paper/2021/file/a02ef8389f6d40f84b50504613117f88-Supplemental.pdf
|
In several real-world applications, machine learning models are deployed to make predictions on data whose distribution changes gradually along time, leading to a drift between the train and test distributions. Such models are often re-trained on new data periodically, and they hence need to generalize to data not too far into the future. In this context, there is much prior work on enhancing temporal generalization, e.g., continuous transportation of past data, kernel smoothed time-sensitive parameters and more recently, adversarial learning of time-invariant features. However, these methods share several limitations, e.g., poor scalability, training instability, and dependence on unlabeled data from the future. Responding to the above limitations, we propose a simple method that starts with a model with time-sensitive parameters but regularizes its temporal complexity using a Gradient Interpolation (GI) loss. GI allows the decision boundary to change along time and can still prevent overfitting to the limited training time snapshots by allowing task-specific control over changes along time. We compare our method to existing baselines on multiple real-world datasets, which show that GI outperforms more complicated generative and adversarial approaches on the one hand, and simpler gradient regularization methods on the other.
| null |
Agent Modelling under Partial Observability for Deep Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/a03caec56cd82478bf197475b48c05f9-Abstract.html
|
Georgios Papoudakis, Filippos Christianos, Stefano Albrecht
|
https://papers.nips.cc/paper_files/paper/2021/hash/a03caec56cd82478bf197475b48c05f9-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13092-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a03caec56cd82478bf197475b48c05f9-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=QcwJmp1sTnk
|
https://papers.nips.cc/paper_files/paper/2021/file/a03caec56cd82478bf197475b48c05f9-Supplemental.pdf
|
Modelling the behaviours of other agents is essential for understanding how agents interact and making effective decisions. Existing methods for agent modelling commonly assume knowledge of the local observations and chosen actions of the modelled agents during execution. To eliminate this assumption, we extract representations from the local information of the controlled agent using encoder-decoder architectures. Using the observations and actions of the modelled agents during training, our models learn to extract representations about the modelled agents conditioned only on the local observations of the controlled agent. The representations are used to augment the controlled agent's decision policy which is trained via deep reinforcement learning; thus, during execution, the policy does not require access to other agents' information. We provide a comprehensive evaluation and ablation studies in cooperative, competitive and mixed multi-agent environments, showing that our method achieves significantly higher returns than baseline methods which do not use the learned representations.
| null |
Leveraging Distribution Alignment via Stein Path for Cross-Domain Cold-Start Recommendation
|
https://papers.nips.cc/paper_files/paper/2021/hash/a0443c8c8c3372d662e9173c18faaa2c-Abstract.html
|
Weiming Liu, Jiajie Su, Chaochao Chen, Xiaolin Zheng
|
https://papers.nips.cc/paper_files/paper/2021/hash/a0443c8c8c3372d662e9173c18faaa2c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13093-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a0443c8c8c3372d662e9173c18faaa2c-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=dYGFRxCf7P
|
https://papers.nips.cc/paper_files/paper/2021/file/a0443c8c8c3372d662e9173c18faaa2c-Supplemental.pdf
|
Cross-Domain Recommendation (CDR) has been popularly studied to utilize different domain knowledge to solve the cold-start problem in recommender systems. In this paper, we focus on the Cross-Domain Cold-Start Recommendation (CDCSR) problem. That is, how to leverage the information from a source domain, where items are 'warm', to improve the recommendation performance of a target domain, where items are 'cold'. Unfortunately, previous approaches on cold-start and CDR cannot reduce the latent embedding discrepancy across domains efficiently and lead to model degradation. To address this issue, we propose DisAlign, a cross-domain recommendation framework for the CDCSR problem, which utilizes both rating and auxiliary representations from the source domain to improve the recommendation performance of the target domain. Specifically, we first propose Stein path alignment for aligning the latent embedding distributions across domains, and then further propose its improved version, i.e., proxy Stein path, which can reduce the operation consumption and improve efficiency. Our empirical study on Douban and Amazon datasets demonstrates that DisAlign significantly outperforms the state-of-the-art models under the CDCSR setting.
| null |
Conservative Offline Distributional Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/a05d886123a54de3ca4b0985b718fb9b-Abstract.html
|
Yecheng Ma, Dinesh Jayaraman, Osbert Bastani
|
https://papers.nips.cc/paper_files/paper/2021/hash/a05d886123a54de3ca4b0985b718fb9b-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13094-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a05d886123a54de3ca4b0985b718fb9b-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=Z2vksUFuVst
|
https://papers.nips.cc/paper_files/paper/2021/file/a05d886123a54de3ca4b0985b718fb9b-Supplemental.pdf
|
Many reinforcement learning (RL) problems in practice are offline, learning purely from observational data. A key challenge is how to ensure the learned policy is safe, which requires quantifying the risk associated with different actions. In the online setting, distributional RL algorithms do so by learning the distribution over returns (i.e., cumulative rewards) instead of the expected return; beyond quantifying risk, they have also been shown to learn better representations for planning. We propose Conservative Offline Distributional Actor Critic (CODAC), an offline RL algorithm suitable for both risk-neutral and risk-averse domains. CODAC adapts distributional RL to the offline setting by penalizing the predicted quantiles of the return for out-of-distribution actions. We prove that CODAC learns a conservative return distribution---in particular, for finite MDPs, CODAC converges to a uniform lower bound on the quantiles of the return distribution; our proof relies on a novel analysis of the distributional Bellman operator. In our experiments, on two challenging robot navigation tasks, CODAC successfully learns risk-averse policies using offline data collected purely from risk-neutral agents. Furthermore, CODAC is state-of-the-art on the D4RL MuJoCo benchmark in terms of both expected and risk-sensitive performance.
| null |
Separation Results between Fixed-Kernel and Feature-Learning Probability Metrics
|
https://papers.nips.cc/paper_files/paper/2021/hash/a081c174f5913958ba8c6443bacffcb9-Abstract.html
|
Carles Domingo i Enrich, Youssef Mroueh
|
https://papers.nips.cc/paper_files/paper/2021/hash/a081c174f5913958ba8c6443bacffcb9-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13095-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a081c174f5913958ba8c6443bacffcb9-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=WKxmP7bcFvt
|
https://papers.nips.cc/paper_files/paper/2021/file/a081c174f5913958ba8c6443bacffcb9-Supplemental.zip
|
Several works in implicit and explicit generative modeling empirically observed that feature-learning discriminators outperform fixed-kernel discriminators in terms of the sample quality of the models. We provide separation results between probability metrics with fixed-kernel and feature-learning discriminators using the function classes $\mathcal{F}_2$ and $\mathcal{F}_1$ respectively, which were developed to study overparametrized two-layer neural networks. In particular, we construct pairs of distributions over hyper-spheres that cannot be discriminated by fixed kernel $(\mathcal{F}_2)$ integral probability metric (IPM) and Stein discrepancy (SD) in high dimensions, but that can be discriminated by their feature learning ($\mathcal{F}_1$) counterparts. To further study the separation, we provide links between the $\mathcal{F}_1$ and $\mathcal{F}_2$ IPMs and sliced Wasserstein distances. Our work suggests that fixed-kernel discriminators perform worse than their feature learning counterparts because their corresponding metrics are weaker.
| null |
Risk Minimization from Adaptively Collected Data: Guarantees for Supervised and Policy Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/a0ae15571eb4a97ac1c34a114f1bb179-Abstract.html
|
Aurelien Bibaut, Nathan Kallus, Maria Dimakopoulou, Antoine Chambaz, Mark van der Laan
|
https://papers.nips.cc/paper_files/paper/2021/hash/a0ae15571eb4a97ac1c34a114f1bb179-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13096-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a0ae15571eb4a97ac1c34a114f1bb179-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=2FDhSA_yxY
| null |
Empirical risk minimization (ERM) is the workhorse of machine learning, whether for classification and regression or for off-policy policy learning, but its model-agnostic guarantees can fail when we use adaptively collected data, such as the result of running a contextual bandit algorithm. We study a generic importance sampling weighted ERM algorithm for using adaptively collected data to minimize the average of a loss function over a hypothesis class and provide first-of-their-kind generalization guarantees and fast convergence rates. Our results are based on a new maximal inequality that carefully leverages the importance sampling structure to obtain rates with good dependence on the exploration rate in the data. For regression, we provide fast rates that leverage the strong convexity of squared-error loss. For policy learning, we provide regret guarantees that close an open gap in the existing literature whenever exploration decays to zero, as is the case for bandit-collected data. An empirical investigation validates our theory.
| null |
Bayesian Optimization with High-Dimensional Outputs
|
https://papers.nips.cc/paper_files/paper/2021/hash/a0d3973ad100ad83a64c304bb58677dd-Abstract.html
|
Wesley J. Maddox, Maximilian Balandat, Andrew G. Wilson, Eytan Bakshy
|
https://papers.nips.cc/paper_files/paper/2021/hash/a0d3973ad100ad83a64c304bb58677dd-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13097-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a0d3973ad100ad83a64c304bb58677dd-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=vDo__0UwFNo
|
https://papers.nips.cc/paper_files/paper/2021/file/a0d3973ad100ad83a64c304bb58677dd-Supplemental.pdf
|
Bayesian optimization is a sample-efficient black-box optimization procedure that is typically applied to a small number of independent objectives. However, in practice we often wish to optimize objectives defined over many correlated outcomes (or “tasks”). For example, scientists may want to optimize the coverage of a cell tower network across a dense grid of locations. Similarly, engineers may seek to balance the performance of a robot across dozens of different environments via constrained or robust optimization. However, the Gaussian Process (GP) models typically used as probabilistic surrogates for multi-task Bayesian optimization scale poorly with the number of outcomes, greatly limiting applicability. We devise an efficient technique for exact multi-task GP sampling that combines exploiting Kronecker structure in the covariance matrices with Matheron’s identity, allowing us to perform Bayesian optimization using exact multi-task GP models with tens of thousands of correlated outputs. In doing so, we achieve substantial improvements in sample efficiency compared to existing approaches that model solely the outcome metrics. We demonstrate how this unlocks a new class of applications for Bayesian optimization across a range of tasks in science and engineering, including optimizing interference patterns of an optical interferometer with 65,000 outputs.
| null |
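The method above relies on Matheron's identity for drawing exact GP posterior samples. For readers unfamiliar with that identity, the sketch below shows it in the plain single-output case; the RBF kernel, noise level, and data are illustrative assumptions, and the Kronecker-structured multi-task version with many correlated outputs, which is the paper's actual contribution, is not reproduced here.

    import numpy as np

    def rbf(a, b, lengthscale=0.2):
        # Squared-exponential kernel on 1-D inputs
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale ** 2)

    def matheron_posterior_sample(x_train, y_train, x_test, noise_var=1e-2, rng=None):
        """One exact GP posterior sample via Matheron's rule:
        f_post(x*) = f_prior(x*) + K(x*, X) (K(X, X) + s^2 I)^{-1} (y - f_prior(X) - eps),
        where (f_prior, eps) is a joint draw from the prior and the noise."""
        if rng is None:
            rng = np.random.default_rng(0)
        n = len(x_train)
        x_all = np.concatenate([x_train, x_test])
        K_all = rbf(x_all, x_all) + 1e-9 * np.eye(len(x_all))  # jitter for stability
        f_prior = rng.multivariate_normal(np.zeros(len(x_all)), K_all)
        eps = rng.normal(scale=np.sqrt(noise_var), size=n)
        K_nn = K_all[:n, :n] + noise_var * np.eye(n)
        K_mn = K_all[n:, :n]
        return f_prior[n:] + K_mn @ np.linalg.solve(K_nn, y_train - f_prior[:n] - eps)

    x_tr = np.linspace(0.0, 1.0, 10)
    sample = matheron_posterior_sample(x_tr, np.sin(4 * x_tr), np.linspace(0.0, 1.0, 50))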
Finding Optimal Tangent Points for Reducing Distortions of Hard-label Attacks
|
https://papers.nips.cc/paper_files/paper/2021/hash/a113c1ecd3cace2237256f4c712f61b5-Abstract.html
|
Chen Ma, Xiangyu Guo, Li Chen, Jun-Hai Yong, Yisen Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/a113c1ecd3cace2237256f4c712f61b5-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13098-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a113c1ecd3cace2237256f4c712f61b5-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=g0wang64Zjd
|
https://papers.nips.cc/paper_files/paper/2021/file/a113c1ecd3cace2237256f4c712f61b5-Supplemental.pdf
|
One major problem in black-box adversarial attacks is the high query complexity in the hard-label attack setting, where only the top-1 predicted label is available. In this paper, we propose a novel geometric-based approach called Tangent Attack (TA), which identifies an optimal tangent point of a virtual hemisphere located on the decision boundary to reduce the distortion of the attack. Assuming the decision boundary is locally flat, we theoretically prove that the minimum $\ell_2$ distortion can be obtained by reaching the decision boundary along the tangent line passing through such tangent point in each iteration. To improve the robustness of our method, we further propose a generalized method which replaces the hemisphere with a semi-ellipsoid to adapt to curved decision boundaries. Our approach is free of pre-training. Extensive experiments conducted on the ImageNet and CIFAR-10 datasets demonstrate that our approach can consume only a small number of queries to achieve low-magnitude distortion. The implementation source code is released online.
| null |
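A small geometric aside on the construction described above (a generic fact about tangent points, not a restatement of the paper's theorem): if the virtual ball has radius $r$ and is centered at a point $c$ on the locally flat decision boundary, and $x$ is the original example outside the ball, then any tangent point $t$ satisfies
\[
\|x - t\|_2 = \sqrt{\|x - c\|_2^2 - r^2}
\]
by the Pythagorean theorem, since the radius $c - t$ is orthogonal to the tangent line at $t$; reaching the decision boundary along the tangent line through that point is what yields the minimum $\ell_2$ distortion in the abstract's locally flat analysis.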
Scalable Diverse Model Selection for Accessible Transfer Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/a1140a3d0df1c81e24ae954d935e8926-Abstract.html
|
Daniel Bolya, Rohit Mittapalli, Judy Hoffman
|
https://papers.nips.cc/paper_files/paper/2021/hash/a1140a3d0df1c81e24ae954d935e8926-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13099-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a1140a3d0df1c81e24ae954d935e8926-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=bhEAWsS9-Sb
|
https://papers.nips.cc/paper_files/paper/2021/file/a1140a3d0df1c81e24ae954d935e8926-Supplemental.pdf
|
With the preponderance of pretrained deep learning models available off-the-shelf from model banks today, finding the best weights to fine-tune to your use-case can be a daunting task. Several methods have recently been proposed to find good models for transfer learning, but they either don't scale well to large model banks or don't perform well on the diversity of off-the-shelf models. Ideally the question we want to answer is, "given some data and a source model, can you quickly predict the model's accuracy after fine-tuning?" In this paper, we formalize this setting as "Scalable Diverse Model Selection" and propose several benchmarks for evaluating on this task. We find that existing model selection and transferability estimation methods perform poorly here and analyze why this is the case. We then introduce simple techniques to improve the performance and speed of these algorithms. Finally, we iterate on existing methods to create PARC, which outperforms all other methods on diverse model selection. We have released the benchmarks and method code in the hope of inspiring future work in model selection for accessible transfer learning.
| null |
Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering
|
https://papers.nips.cc/paper_files/paper/2021/hash/a11ce019e96a4c60832eadd755a17a58-Abstract.html
|
Vincent Sitzmann, Semon Rezchikov, Bill Freeman, Josh Tenenbaum, Fredo Durand
|
https://papers.nips.cc/paper_files/paper/2021/hash/a11ce019e96a4c60832eadd755a17a58-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13100-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a11ce019e96a4c60832eadd755a17a58-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=q0h6av9Vi8
|
https://papers.nips.cc/paper_files/paper/2021/file/a11ce019e96a4c60832eadd755a17a58-Supplemental.zip
|
Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence. Emerging 3D-structured neural scene representations are a promising approach to 3D scene understanding. In this work, we propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field parameterized via a neural implicit representation. Rendering a ray from an LFN requires only a single network evaluation, as opposed to hundreds of evaluations per ray for ray-marching or volumetric based renderers in 3D-structured neural scene representations. In the setting of simple scenes, we leverage meta-learning to learn a prior over LFNs that enables multi-view consistent light field reconstruction from as little as a single image observation. This results in dramatic reductions in time and memory complexity, and enables real-time rendering. The cost of storing a 360-degree light field via an LFN is two orders of magnitude lower than conventional methods such as the Lumigraph. Utilizing the analytical differentiability of neural implicit representations and a novel parameterization of light space, we further demonstrate the extraction of sparse depth maps from LFNs.
| null |
ViSER: Video-Specific Surface Embeddings for Articulated 3D Shape Reconstruction
|
https://papers.nips.cc/paper_files/paper/2021/hash/a11f9e533f28593768ebf87075ab34f2-Abstract.html
|
Gengshan Yang, Deqing Sun, Varun Jampani, Daniel Vlasic, Forrester Cole, Ce Liu, Deva Ramanan
|
https://papers.nips.cc/paper_files/paper/2021/hash/a11f9e533f28593768ebf87075ab34f2-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13101-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a11f9e533f28593768ebf87075ab34f2-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=-JJy-Hw8TFB
|
https://papers.nips.cc/paper_files/paper/2021/file/a11f9e533f28593768ebf87075ab34f2-Supplemental.pdf
|
We introduce ViSER, a method for recovering articulated 3D shapes and dense 3D trajectories from monocular videos. Previous work on high-quality reconstruction of dynamic 3D shapes typically relies on multiple camera views, strong category-specific priors, or 2D keypoint supervision. We show that none of these are required if one can reliably estimate long-range correspondences in a video, making use of only 2D object masks and two-frame optical flow as inputs. ViSER infers correspondences by matching 2D pixels to a canonical, deformable 3D mesh via video-specific surface embeddings that capture the pixel appearance of each surface point. These embeddings behave as a continuous set of keypoint descriptors defined over the mesh surface, which can be used to establish dense long-range correspondences across pixels. The surface embeddings are implemented as coordinate-based MLPs that are fit to each video via self-supervised losses. Experimental results show that ViSER compares favorably against prior work on challenging videos of humans with loose clothing and unusual poses as well as animal videos from DAVIS and YTVOS. Project page: viser-shape.github.io.
| null |
Understanding the Effect of Stochasticity in Policy Optimization
|
https://papers.nips.cc/paper_files/paper/2021/hash/a12f69495f41bb3b637ba1b6238884d6-Abstract.html
|
Jincheng Mei, Bo Dai, Chenjun Xiao, Csaba Szepesvari, Dale Schuurmans
|
https://papers.nips.cc/paper_files/paper/2021/hash/a12f69495f41bb3b637ba1b6238884d6-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13102-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a12f69495f41bb3b637ba1b6238884d6-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=oAog3W9w6R
|
https://papers.nips.cc/paper_files/paper/2021/file/a12f69495f41bb3b637ba1b6238884d6-Supplemental.pdf
|
We study the effect of stochasticity in on-policy policy optimization, and make the following four contributions. \emph{First}, we show that the preferability of optimization methods depends critically on whether stochastic versus exact gradients are used. In particular, unlike the true gradient setting, geometric information cannot be easily exploited in the stochastic case for accelerating policy optimization without detrimental consequences or impractical assumptions. \emph{Second}, to explain these findings we introduce the concept of committal rate for stochastic policy optimization, and show that this can serve as a criterion for determining almost sure convergence to global optimality. \emph{Third}, we show that in the absence of external oracle information, which allows an algorithm to determine the difference between optimal and sub-optimal actions given only on-policy samples, there is an inherent trade-off between exploiting geometry to accelerate convergence versus achieving optimality almost surely. That is, an uninformed algorithm either converges to a globally optimal policy with probability $1$ but at a rate no better than $O(1/t)$, or it achieves faster than $O(1/t)$ convergence but then must fail to converge to the globally optimal policy with some positive probability. \emph{Finally}, we use the committal rate theory to explain why practical policy optimization methods are sensitive to random initialization, then develop an ensemble method that can be guaranteed to achieve near-optimal solutions with high probability.
| null |
Fine-Grained Zero-Shot Learning with DNA as Side Information
|
https://papers.nips.cc/paper_files/paper/2021/hash/a18630ab1c3b9f14454cf70dc7114834-Abstract.html
|
Sarkhan Badirli, Zeynep Akata, George Mohler, Christine Picard, Mehmet M Dundar
|
https://papers.nips.cc/paper_files/paper/2021/hash/a18630ab1c3b9f14454cf70dc7114834-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13103-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a18630ab1c3b9f14454cf70dc7114834-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=RqAzAoL8BER
|
https://papers.nips.cc/paper_files/paper/2021/file/a18630ab1c3b9f14454cf70dc7114834-Supplemental.zip
|
The fine-grained zero-shot learning task requires some form of side-information to transfer discriminative information from seen to unseen classes. As manually annotated visual attributes are extremely costly and often impractical to obtain for a large number of classes, in this study we use DNA as side information for the first time for fine-grained zero-shot classification of species. Mitochondrial DNA plays an important role as a genetic marker in evolutionary biology and has been used to achieve near-perfect accuracy in species classification of living organisms. We implement a simple hierarchical Bayesian model that uses DNA information to establish the hierarchy in the image space and employs local priors to define surrogate classes for unseen ones. On the benchmark CUB dataset we show that DNA can be equally promising, yet in general a more accessible alternative than word vectors as side information. This is especially important as obtaining robust word representations for fine-grained species names is not a practicable goal when information about these species in free-form text is limited. On a newly compiled fine-grained insect dataset that uses DNA information from over a thousand species we show that the Bayesian approach outperforms state-of-the-art by a wide margin.
| null |
Optimal Underdamped Langevin MCMC Method
|
https://papers.nips.cc/paper_files/paper/2021/hash/a18aa23ee676d7f5ffb34cf16df3e08c-Abstract.html
|
Zhengmian Hu, Feihu Huang, Heng Huang
|
https://papers.nips.cc/paper_files/paper/2021/hash/a18aa23ee676d7f5ffb34cf16df3e08c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13104-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a18aa23ee676d7f5ffb34cf16df3e08c-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=6wuE1-G4pu6
|
https://papers.nips.cc/paper_files/paper/2021/file/a18aa23ee676d7f5ffb34cf16df3e08c-Supplemental.pdf
|
In the paper, we study the underdamped Langevin diffusion (ULD) with strongly-convex potential consisting of finite summation of $N$ smooth components, and propose an efficient discretization method, which requires $O(N+d^\frac{1}{3}N^\frac{2}{3}/\varepsilon^\frac{2}{3})$ gradient evaluations to achieve $\varepsilon$-error (in $\sqrt{\mathbb{E}{\lVert{\cdot}\rVert_2^2}}$ distance) for approximating $d$-dimensional ULD. Moreover, we prove a lower bound of gradient complexity as $\Omega(N+d^\frac{1}{3}N^\frac{2}{3}/\varepsilon^\frac{2}{3})$, which indicates that our method is optimal in dependence of $N$, $\varepsilon$, and $d$. In particular, we apply our method to sample the strongly-log-concave distribution and obtain gradient complexity better than all existing gradient based sampling algorithms. Experimental results on both synthetic and real-world data show that our new method consistently outperforms the existing ULD approaches.
| null |
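For context on the diffusion being discretized above, one standard textbook parameterization of the underdamped Langevin diffusion (constants may differ from the paper's) is
\[
\mathrm{d}x_t = v_t\,\mathrm{d}t, \qquad
\mathrm{d}v_t = -\gamma v_t\,\mathrm{d}t - u\,\nabla f(x_t)\,\mathrm{d}t + \sqrt{2\gamma u}\,\mathrm{d}B_t,
\]
with potential $f$, friction $\gamma$, inverse mass $u$, and standard Brownian motion $B_t$; when $f$ is a finite sum of $N$ components, each discretization step must estimate $\nabla f$ from those components, which is what the gradient-complexity bounds in the abstract count.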
Scheduling jobs with stochastic holding costs
|
https://papers.nips.cc/paper_files/paper/2021/hash/a19744e268754fb0148b017647355b7b-Abstract.html
|
Dabeen Lee, Milan Vojnovic
|
https://papers.nips.cc/paper_files/paper/2021/hash/a19744e268754fb0148b017647355b7b-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13105-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a19744e268754fb0148b017647355b7b-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=B_jLF98u13
| null |
This paper proposes a learning and scheduling algorithm to minimize the expected cumulative holding cost incurred by jobs, where statistical parameters defining their individual holding costs are unknown a priori. In each time slot, the server can process a job while receiving the realized random holding costs of the jobs remaining in the system. Our algorithm is a learning-based variant of the $c\mu$ rule for scheduling: it starts with a preemption period of fixed length which serves as a learning phase, and after accumulating enough data about individual jobs, it switches to nonpreemptive scheduling mode. The algorithm is designed to handle instances with large or small gaps in jobs' parameters and achieves near-optimal performance guarantees. The performance of our algorithm is captured by its regret, where the benchmark is the minimum possible cost attained when the statistical parameters of jobs are fully known. We prove upper bounds on the regret of our algorithm, and we derive a regret lower bound that nearly matches the proposed upper bounds. Our numerical results demonstrate the effectiveness of our algorithm and show that our theoretical regret analysis is nearly tight.
| null |
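The benchmark behind the algorithm above is the classical $c\mu$ rule with known parameters. Purely for context, a minimal sketch of that full-information rule is given below; the job tuples are illustrative assumptions, and the paper's contribution, the learning-based variant that estimates the unknown parameters during a preemption phase, is not shown here.

    def c_mu_order(jobs):
        """Classical c-mu rule: with known holding-cost rates c_i and service
        rates mu_i, serve jobs in decreasing order of the index c_i * mu_i."""
        return sorted(jobs, key=lambda job: job[1] * job[2], reverse=True)

    # (job id, holding-cost rate c, service rate mu)
    jobs = [("A", 2.0, 0.5), ("B", 1.0, 2.0), ("C", 3.0, 0.4)]
    print([job_id for job_id, _, _ in c_mu_order(jobs)])  # ['B', 'C', 'A']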
REMIPS: Physically Consistent 3D Reconstruction of Multiple Interacting People under Weak Supervision
|
https://papers.nips.cc/paper_files/paper/2021/hash/a1a2c3fed88e9b3ba5bc3625c074a04e-Abstract.html
|
Mihai Fieraru, Mihai Zanfir, Teodor Szente, Eduard Bazavan, Vlad Olaru, Cristian Sminchisescu
|
https://papers.nips.cc/paper_files/paper/2021/hash/a1a2c3fed88e9b3ba5bc3625c074a04e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13106-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a1a2c3fed88e9b3ba5bc3625c074a04e-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=-AV3AKwgiG
|
https://papers.nips.cc/paper_files/paper/2021/file/a1a2c3fed88e9b3ba5bc3625c074a04e-Supplemental.zip
|
The three-dimensional reconstruction of multiple interacting humans given a monocular image is crucial for the general task of scene understanding, as capturing the subtleties of interaction is often the very reason for taking a picture. Current 3D human reconstruction methods either treat each person independently, ignoring most of the context, or reconstruct people jointly, but cannot recover interactions correctly when people are in close proximity. In this work, we introduce \textbf{REMIPS}, a model for 3D \underline{Re}construction of \underline{M}ultiple \underline{I}nteracting \underline{P}eople under Weak \underline{S}upervision. \textbf{REMIPS} can reconstruct a variable number of people directly from monocular images. At the core of our methodology stands a novel transformer network that combines unordered person tokens (one for each detected human) with positional-encoded tokens from image features patches. We introduce a novel unified model for self- and interpenetration-collisions based on a mesh approximation computed by applying decimation operators. We rely on self-supervised losses for flexibility and generalisation in-the-wild and incorporate self-contact and interaction-contact losses directly into the learning process. With \textbf{REMIPS}, we report state-of-the-art quantitative results on common benchmarks even in cases where no 3D supervision is used. Additionally, qualitative visual results show that our reconstructions are plausible in terms of pose and shape and coherent for challenging images, collected in-the-wild, where people are often interacting.
| null |
Differentiable Annealed Importance Sampling and the Perils of Gradient Noise
|
https://papers.nips.cc/paper_files/paper/2021/hash/a1a609f1ac109d0be28d8ae112db1bbb-Abstract.html
|
Guodong Zhang, Kyle Hsu, Jianing Li, Chelsea Finn, Roger B. Grosse
|
https://papers.nips.cc/paper_files/paper/2021/hash/a1a609f1ac109d0be28d8ae112db1bbb-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13107-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a1a609f1ac109d0be28d8ae112db1bbb-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=6rqjgrL7Lq
|
https://papers.nips.cc/paper_files/paper/2021/file/a1a609f1ac109d0be28d8ae112db1bbb-Supplemental.pdf
|
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation, but are not fully differentiable due to the use of Metropolis-Hastings correction steps. Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective using gradient-based methods. To this end, we propose Differentiable AIS (DAIS), a variant of AIS which ensures differentiability by abandoning the Metropolis-Hastings corrections. As a further advantage, DAIS allows for mini-batch gradients. We provide a detailed convergence analysis for Bayesian linear regression which goes beyond previous analyses by explicitly accounting for the sampler not having reached equilibrium. Using this analysis, we prove that DAIS is consistent in the full-batch setting and provide a sublinear convergence rate. Furthermore, motivated by the problem of learning from large-scale datasets, we study a stochastic variant of DAIS that uses mini-batch gradients. Surprisingly, stochastic DAIS can be arbitrarily bad due to a fundamental incompatibility between the goals of last-iterate convergence to the posterior and elimination of the accumulated stochastic error. This is in stark contrast with other settings such as gradient-based optimization and Langevin dynamics, where the effect of gradient noise can be washed out by taking smaller steps. This indicates that annealing-based marginal likelihood estimation with stochastic gradients may require new ideas.
| null |
PSD Representations for Effective Probability Models
|
https://papers.nips.cc/paper_files/paper/2021/hash/a1b63b36ba67b15d2f47da55cdb8018d-Abstract.html
|
Alessandro Rudi, Carlo Ciliberto
|
https://papers.nips.cc/paper_files/paper/2021/hash/a1b63b36ba67b15d2f47da55cdb8018d-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13108-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a1b63b36ba67b15d2f47da55cdb8018d-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=HyQskgZwXO
|
https://papers.nips.cc/paper_files/paper/2021/file/a1b63b36ba67b15d2f47da55cdb8018d-Supplemental.pdf
|
Finding a good way to model probability densities is key to probabilistic inference. An ideal model should be able to concisely approximate any probability while being also compatible with two main operations: multiplications of two models (product rule) and marginalization with respect to a subset of the random variables (sum rule). In this work, we show that a recently proposed class of positive semi-definite (PSD) models for non-negative functions is particularly suited to this end. In particular, we characterize both approximation and generalization capabilities of PSD models, showing that they enjoy strong theoretical guarantees. Moreover, we show that we can perform efficiently both sum and product rule in closed form via matrix operations, enjoying the same versatility of mixture models. Our results open the way to applications of PSD models to density estimation, decision theory, and inference.
| null |
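For concreteness, one common way such non-negative PSD models are written (the paper may use an equivalent kernelized form) is
\[
f(x) \;=\; \phi(x)^\top A\,\phi(x) \;\ge\; 0 \quad \text{for every } x, \qquad A \succeq 0,
\]
where $\phi$ is a feature map; non-negativity is automatic because $f$ is a quadratic form with a positive semi-definite matrix, and, as the abstract notes, products and marginalizations of such models can be carried out in closed form through matrix operations.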
Exploiting a Zoo of Checkpoints for Unseen Tasks
|
https://papers.nips.cc/paper_files/paper/2021/hash/a1c3ae6c49a89d92aef2d423dadb477f-Abstract.html
|
Jiaji Huang, Qiang Qiu, Kenneth Church
|
https://papers.nips.cc/paper_files/paper/2021/hash/a1c3ae6c49a89d92aef2d423dadb477f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13109-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a1c3ae6c49a89d92aef2d423dadb477f-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=nlR7LzSArtK
|
https://papers.nips.cc/paper_files/paper/2021/file/a1c3ae6c49a89d92aef2d423dadb477f-Supplemental.pdf
|
There are so many models in the literature that it is difficult for practitioners to decide which combinations are likely to be effective for a new task. This paper attempts to address this question by capturing relationships among checkpoints published on the web. We model the space of tasks as a Gaussian process. The covariance can be estimated from checkpoints and unlabeled probing data. With the Gaussian process, we can identify representative checkpoints by a maximum mutual information criterion. This objective is submodular. A greedy method identifies representatives that are likely to "cover" the task space. These representatives generalize to new tasks with superior performance. Empirical evidence is provided for applications from both computational linguistics as well as computer vision.
| null |
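The selection step described above greedily maximizes a submodular mutual information criterion under a Gaussian process over tasks. As a rough illustration of that style of greedy selection (not the paper's exact estimator, and with an arbitrary synthetic covariance standing in for the one estimated from checkpoints and probing data):

    import numpy as np

    def gaussian_mi(cov, subset):
        """Mutual information I(A; V \\ A) for a joint Gaussian with covariance `cov`:
        0.5 * (logdet Sigma_AA + logdet Sigma_BB - logdet Sigma)."""
        A = sorted(subset)
        B = [i for i in range(cov.shape[0]) if i not in subset]
        if not A or not B:
            return 0.0

        def logdet(idx):
            return np.linalg.slogdet(cov[np.ix_(idx, idx)])[1]

        return 0.5 * (logdet(A) + logdet(B) - np.linalg.slogdet(cov)[1])

    def greedy_representatives(cov, k):
        """Greedily add the checkpoint whose inclusion most increases the MI objective."""
        selected = set()
        for _ in range(k):
            gains = {i: gaussian_mi(cov, selected | {i})
                     for i in range(cov.shape[0]) if i not in selected}
            selected.add(max(gains, key=gains.get))
        return sorted(selected)

    rng = np.random.default_rng(0)
    M = rng.normal(size=(6, 6))          # 6 hypothetical checkpoints
    cov = M @ M.T + 6.0 * np.eye(6)      # synthetic task-space covariance
    print(greedy_representatives(cov, 2))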
Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach
|
https://papers.nips.cc/paper_files/paper/2021/hash/a1c5aff9679455a233086e26b72b9a06-Abstract.html
|
Qitian Wu, Chenxiao Yang, Junchi Yan
|
https://papers.nips.cc/paper_files/paper/2021/hash/a1c5aff9679455a233086e26b72b9a06-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13110-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a1c5aff9679455a233086e26b72b9a06-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=l7ULU2q6mvY
|
https://papers.nips.cc/paper_files/paper/2021/file/a1c5aff9679455a233086e26b72b9a06-Supplemental.pdf
|
We target open-world feature extrapolation problem where the feature space of input data goes through expansion and a model trained on partially observed features needs to handle new features in test data without further retraining. The problem is of much significance for dealing with features incrementally collected from different fields. To this end, we propose a new learning paradigm with graph representation and learning. Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model takes features as input and outputs predicted labels; 2) a graph neural network as an upper model learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data. Based on our framework, we design two training strategies, a self-supervised approach and an inductive learning approach, to endow the model with extrapolation ability and alleviate feature-level over-fitting. We also provide theoretical analysis on the generalization error on test data with new features, which dissects the impact of training features and algorithms on generalization performance. Our experiments over several classification datasets and large-scale advertisement click prediction datasets demonstrate that our model can produce effective embeddings for unseen features and significantly outperforms baseline methods that adopt KNN and local aggregation.
| null |
Adversarial Teacher-Student Representation Learning for Domain Generalization
|
https://papers.nips.cc/paper_files/paper/2021/hash/a2137a2ae8e39b5002a3f8909ecb88fe-Abstract.html
|
Fu-En Yang, Yuan-Chia Cheng, Zu-Yun Shiau, Yu-Chiang Frank Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/a2137a2ae8e39b5002a3f8909ecb88fe-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13111-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a2137a2ae8e39b5002a3f8909ecb88fe-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=gKyyBfMM4Y
|
https://papers.nips.cc/paper_files/paper/2021/file/a2137a2ae8e39b5002a3f8909ecb88fe-Supplemental.pdf
|
Domain generalization (DG) aims to transfer the learning task from a single or multiple source domains to unseen target domains. To extract and leverage the information which exhibits sufficient generalization ability, we propose a simple yet effective approach of Adversarial Teacher-Student Representation Learning, with the goal of deriving the domain generalizable representations via generating and exploring out-of-source data distributions. Our proposed framework advances Teacher-Student learning in an adversarial learning manner, which alternates between knowledge-distillation based representation learning and novel-domain data augmentation. The former progressively updates the teacher network for deriving domain-generalizable representations, while the latter synthesizes data out-of-source yet plausible distributions. Extensive image classification experiments on benchmark datasets in multiple and single source DG settings confirm that, our model exhibits sufficient generalization ability and performs favorably against state-of-the-art DG methods.
| null |
Stochastic bandits with groups of similar arms.
|
https://papers.nips.cc/paper_files/paper/2021/hash/a22c0238589078fb10b606ab62015744-Abstract.html
|
Fabien Pesquerel, Hassan SABER, Odalric-Ambrym Maillard
|
https://papers.nips.cc/paper_files/paper/2021/hash/a22c0238589078fb10b606ab62015744-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13112-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a22c0238589078fb10b606ab62015744-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=RcIorZrz88d
|
https://papers.nips.cc/paper_files/paper/2021/file/a22c0238589078fb10b606ab62015744-Supplemental.pdf
|
We consider a variant of the stochastic multi-armed bandit problem where arms are known to be organized into different groups having the same mean. The groups are unknown but a lower bound $q$ on their size is known. This situation typically appears when each arm can be described with a list of categorical attributes, and the (unknown) mean reward function only depends on a subset of them, the others being redundant. In this case, $q$ is linked naturally to the number of attributes considered redundant, and the number of categories of each attribute. For this structured problem of practical relevance, we first derive the asymptotic regret lower bound and corresponding constrained optimization problem. They reveal that the achievable regret can be substantially reduced when compared to the unstructured setup, possibly by a factor $q$. However, solving the exact constrained optimization problem involves a combinatorial problem. We introduce a lower-bound inspired strategy involving a computationally efficient relaxation that is based on a sorting mechanism. We further prove it achieves a lower bound close to the optimal one up to a controlled factor, and achieves an asymptotic regret $q$ times smaller than the unstructured one. We believe this shows it is a valuable strategy for the practitioner. Last, we illustrate the performance of the considered strategy on numerical experiments involving a large number of arms.
| null |
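To make the grouped-arms setting above concrete, here is a minimal toy simulation: arms share means in hidden groups of size q, and a structure-oblivious UCB1 learner is run as a baseline. This illustrates the problem setup only, not the paper's lower-bound-inspired strategy; all names and constants are chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Structured instance: 12 Bernoulli arms fall into 4 hidden groups of size q = 3
# that share a mean. Only the lower bound q would be known to a learner.
q = 3
group_means = rng.uniform(0.2, 0.8, size=4)
arm_means = np.repeat(group_means, q)
rng.shuffle(arm_means)                       # the grouping is not revealed

def pull(arm):
    """Draw one Bernoulli reward from the chosen arm."""
    return float(rng.random() < arm_means[arm])

# Structure-oblivious UCB1 baseline.
T = 5000
counts = np.zeros(len(arm_means))
sums = np.zeros(len(arm_means))
for t in range(T):
    if t < len(arm_means):
        arm = t                              # initialisation: pull each arm once
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    counts[arm] += 1
    sums[arm] += pull(arm)

regret = T * arm_means.max() - sums.sum()
print(f"UCB1 (ignores the group structure) incurred ~{regret:.0f} regret over {T} rounds")
```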
Tracking Without Re-recognition in Humans and Machines
|
https://papers.nips.cc/paper_files/paper/2021/hash/a2557a7b2e94197ff767970b67041697-Abstract.html
|
Drew Linsley, Girik Malik, Junkyung Kim, Lakshmi Narasimhan Govindarajan, Ennio Mingolla, Thomas Serre
|
https://papers.nips.cc/paper_files/paper/2021/hash/a2557a7b2e94197ff767970b67041697-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13113-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a2557a7b2e94197ff767970b67041697-Paper.pdf
|
https://openreview.net/forum?id=fbAHHm_jyo2
|
https://papers.nips.cc/paper_files/paper/2021/file/a2557a7b2e94197ff767970b67041697-Supplemental.pdf
|
Imagine trying to track one particular fruitfly in a swarm of hundreds. Higher biological visual systems have evolved to track moving objects by relying on both their appearance and their motion trajectories. We investigate if state-of-the-art spatiotemporal deep neural networks are capable of the same. For this, we introduce PathTracker, a synthetic visual challenge that asks human observers and machines to track a target object in the midst of identical-looking "distractor" objects. While humans effortlessly learn PathTracker and generalize to systematic variations in task design, deep networks struggle. To address this limitation, we identify and model circuit mechanisms in biological brains that are implicated in tracking objects based on motion cues. When instantiated as a recurrent network, our circuit model learns to solve PathTracker with a robust visual strategy that rivals human performance and explains a significant proportion of their decision-making on the challenge. We also show that the success of this circuit model extends to object tracking in natural videos. Adding it to a transformer-based architecture for object tracking builds tolerance to visual nuisances that affect object appearance, establishing the new state of the art on the large-scale TrackingNet challenge. Our work highlights the importance of understanding human vision to improve computer vision.
| null |
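As a rough picture of the kind of stimulus the abstract describes, the sketch below generates a toy sequence of identical dots performing random walks, with the target cued only at the first frame. It is a simplified stand-in, not the actual PathTracker generator.

```python
import numpy as np

rng = np.random.default_rng(0)

# N identical dots perform random walks; the target is distinguishable only by
# its cue at frame 0 (here, simply its index). The task would be to report the
# target's final position despite the identical-looking distractors.
n_dots, n_frames = 12, 64
positions = np.zeros((n_frames, n_dots, 2))
positions[0] = rng.uniform(0, 32, size=(n_dots, 2))
for t in range(1, n_frames):
    positions[t] = positions[t - 1] + rng.normal(0, 0.8, size=(n_dots, 2))

target = 0                                   # cued at frame 0, identical thereafter
print("target start:", positions[0, target].round(2))
print("target end:  ", positions[-1, target].round(2))
```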
Rethinking conditional GAN training: An approach using geometrically structured latent manifolds
|
https://papers.nips.cc/paper_files/paper/2021/hash/a267f936e54d7c10a2bb70dbe6ad7a89-Abstract.html
|
Sameera Ramasinghe, Moshiur Farazi, Salman H Khan, Nick Barnes, Stephen Gould
|
https://papers.nips.cc/paper_files/paper/2021/hash/a267f936e54d7c10a2bb70dbe6ad7a89-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13114-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a267f936e54d7c10a2bb70dbe6ad7a89-Paper.pdf
|
https://openreview.net/forum?id=Hox8lKfr82L
|
https://papers.nips.cc/paper_files/paper/2021/file/a267f936e54d7c10a2bb70dbe6ad7a89-Supplemental.pdf
|
Conditional GANs (cGANs), in their rudimentary form, suffer from critical drawbacks such as a lack of diversity in generated outputs and distortion between the latent and output manifolds. Although efforts have been made to improve results, they can suffer from unpleasant side effects such as a topology mismatch between the latent and output spaces. In contrast, we tackle this problem from a geometrical perspective and propose a novel training mechanism that increases both the diversity and the visual quality of a vanilla cGAN by systematically encouraging a bi-Lipschitz mapping between the latent and output manifolds. We validate the efficacy of our solution on a baseline cGAN (i.e., Pix2Pix) that lacks diversity, and show that by only modifying its training mechanism (i.e., with our proposed Pix2Pix-Geo), one can achieve more diverse and realistic outputs on a broad set of image-to-image translation tasks.
| null |
How to transfer algorithmic reasoning knowledge to learn new algorithms?
|
https://papers.nips.cc/paper_files/paper/2021/hash/a2802cade04644083dcde1c8c483ed9a-Abstract.html
|
Louis-Pascal Xhonneux, Andreea-Ioana Deac, Petar Veličković, Jian Tang
|
https://papers.nips.cc/paper_files/paper/2021/hash/a2802cade04644083dcde1c8c483ed9a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13115-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a2802cade04644083dcde1c8c483ed9a-Paper.pdf
|
https://openreview.net/forum?id=q2JWz371le
|
https://papers.nips.cc/paper_files/paper/2021/file/a2802cade04644083dcde1c8c483ed9a-Supplemental.zip
|
Learning to execute algorithms is a fundamental problem that has been widely studied. Prior work (Veličković et al., 2019) has shown that, to enable systematic generalisation on graph algorithms, it is critical to have access to the intermediate steps of the program/algorithm. In many reasoning tasks where algorithmic-style reasoning is important, we only have access to input and output examples. Thus, inspired by the success of pre-training on similar tasks or data in Natural Language Processing (NLP) and Computer Vision, we set out to study how we can transfer algorithmic reasoning knowledge. Specifically, we investigate how we can use algorithms for which we have access to the execution trace to learn to solve similar tasks for which we do not. We investigate two major classes of graph algorithms: parallel algorithms such as breadth-first search and Bellman-Ford, and sequential greedy algorithms such as Prim's and Dijkstra's. Due to the fundamental differences between algorithmic reasoning knowledge and feature extractors such as those used in Computer Vision or NLP, we hypothesise that standard transfer techniques will not be sufficient to achieve systematic generalisation. To investigate this empirically, we create a dataset including 9 algorithms and 3 different graph types. We validate this hypothesis empirically and show how multi-task learning can instead be used to achieve the transfer of algorithmic reasoning knowledge.
| null |
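The execution traces mentioned above can be illustrated in a few lines: the snippet below records the intermediate reachability states of breadth-first search on a small graph, the kind of step-by-step supervision that transfer between algorithms relies on. The graph and trace format are illustrative choices, not the paper's dataset format.

```python
from collections import deque

# Small undirected graph given as an adjacency list.
adjacency = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}

def bfs_trace(adj, source):
    """Run BFS and record the visited set after each node is processed."""
    visited = {source}
    frontier = deque([source])
    trace = [sorted(visited)]
    while frontier:
        node = frontier.popleft()
        for nbr in adj[node]:
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(nbr)
        trace.append(sorted(visited))
    return trace

for step, state in enumerate(bfs_trace(adjacency, source=0)):
    print(f"step {step}: reachable = {state}")
```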
Fast Axiomatic Attribution for Neural Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/a284df1155ec3e67286080500df36a9a-Abstract.html
|
Robin Hesse, Simone Schaub-Meyer, Stefan Roth
|
https://papers.nips.cc/paper_files/paper/2021/hash/a284df1155ec3e67286080500df36a9a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13116-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a284df1155ec3e67286080500df36a9a-Paper.pdf
|
https://openreview.net/forum?id=16Pv9PFDJB8
|
https://papers.nips.cc/paper_files/paper/2021/file/a284df1155ec3e67286080500df36a9a-Supplemental.pdf
|
Mitigating the dependence on spurious correlations present in the training dataset is a quickly emerging and important topic of deep learning. Recent approaches incorporate priors on the feature attribution of a deep neural network (DNN) into the training process to reduce the dependence on unwanted features. However, until now one needed to trade off high-quality attributions, satisfying desirable axioms, against the time required to compute them. This in turn led to either long training times or ineffective attribution priors. In this work, we break this trade-off by considering a special class of efficiently axiomatically attributable DNNs for which an axiomatic feature attribution can be computed with only a single forward/backward pass. We formally prove that nonnegatively homogeneous DNNs, here termed $\mathcal{X}$-DNNs, are efficiently axiomatically attributable and show that they can be effortlessly constructed from a wide range of regular DNNs by simply removing the bias term of each layer. Various experiments demonstrate the advantages of $\mathcal{X}$-DNNs, beating state-of-the-art generic attribution methods on regular DNNs for training with attribution priors.
| null |
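A minimal sketch of why removing biases enables a one-pass attribution: a bias-free ReLU network is nonnegatively homogeneous, so by Euler's theorem the input-times-gradient attribution sums exactly to the network output. The toy network below is an illustration of this property, not the paper's implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A bias-free ReLU MLP: with all biases removed, f(a*x) = a*f(x) for a >= 0,
# i.e. the network is nonnegatively homogeneous of degree 1.
net = nn.Sequential(
    nn.Linear(8, 16, bias=False), nn.ReLU(),
    nn.Linear(16, 16, bias=False), nn.ReLU(),
    nn.Linear(16, 1, bias=False),
)

x = torch.randn(1, 8, requires_grad=True)
y = net(x).squeeze()
y.backward()                      # one forward and one backward pass

# Input-times-gradient attribution. For a homogeneous network, Euler's theorem
# gives x . grad f(x) = f(x), so the attribution already sums to the output.
attribution = (x * x.grad).detach()
print(attribution)
print(float(attribution.sum()), float(y))   # the two values should match
```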
OSOA: One-Shot Online Adaptation of Deep Generative Models for Lossless Compression
|
https://papers.nips.cc/paper_files/paper/2021/hash/a2915ad0d57ca8c644f99f9c3f20a918-Abstract.html
|
Chen Zhang, Shifeng Zhang, Fabio Maria Carlucci, Zhenguo Li
|
https://papers.nips.cc/paper_files/paper/2021/hash/a2915ad0d57ca8c644f99f9c3f20a918-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13117-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a2915ad0d57ca8c644f99f9c3f20a918-Paper.pdf
|
https://openreview.net/forum?id=Me-tuhUjhKK
|
https://papers.nips.cc/paper_files/paper/2021/file/a2915ad0d57ca8c644f99f9c3f20a918-Supplemental.pdf
|
Explicit deep generative models (DGMs), e.g., VAEs and Normalizing Flows, have been shown to offer an effective data modelling alternative for lossless compression. However, DGMs themselves normally require large storage space and thus undermine the advantage brought by accurate data density estimation. To eliminate the requirement of saving separate models for different target datasets, we propose a novel setting that starts from a pretrained deep generative model and compresses the data batches while adapting the model with a dynamical system for only one epoch. We formalise this setting as One-Shot Online Adaptation (OSOA) of DGMs for lossless compression and propose a vanilla algorithm under this setting. Experimental results show that vanilla OSOA can save significant time versus training bespoke models and space versus using one model for all targets. With the same number of adaptation steps or the same adaptation time, vanilla OSOA is shown to exhibit better space efficiency, e.g., $47\%$ less space, than fine-tuning the pretrained model and saving the fine-tuned model. Moreover, we showcase the potential of OSOA and motivate more sophisticated OSOA algorithms by showing further space or time efficiency with multiple updates per batch and early stopping.
| null |
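The encode-then-adapt loop can be illustrated with a toy model: a "pretrained" Gaussian coder is applied to batches from a shifted target distribution, coding each batch with the current parameters before taking a single online update on it. Codelength is approximated by the negative log-likelihood in bits; this is a stand-in for the deep generative models in the paper, not their algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def nll_bits(x, mu, sigma=1.0):
    """Gaussian negative log-likelihood of a batch, in bits (a codelength proxy)."""
    nats = 0.5 * ((x - mu) / sigma) ** 2 + 0.5 * np.log(2 * np.pi * sigma**2)
    return nats.sum() / np.log(2)

# "Pretrained" model: N(0, 1).  Target data: 50 batches drawn from N(2, 1).
target = rng.normal(2.0, 1.0, size=(50, 64))
mu, lr = 0.0, 0.1

static_bits, osoa_bits = 0.0, 0.0
for batch in target:
    static_bits += nll_bits(batch, 0.0)      # fixed pretrained model
    osoa_bits += nll_bits(batch, mu)         # encode with the current state ...
    mu += lr * (batch.mean() - mu)           # ... then one online adaptation step

print(f"fixed model: {static_bits:.0f} bits, online-adapted: {osoa_bits:.0f} bits")
```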
Compressive Visual Representations
|
https://papers.nips.cc/paper_files/paper/2021/hash/a29a5ba2cb7bdeabba22de8c83321b46-Abstract.html
|
Kuang-Huei Lee, Anurag Arnab, Sergio Guadarrama, John Canny, Ian Fischer
|
https://papers.nips.cc/paper_files/paper/2021/hash/a29a5ba2cb7bdeabba22de8c83321b46-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13118-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a29a5ba2cb7bdeabba22de8c83321b46-Paper.pdf
|
https://openreview.net/forum?id=ZYX1ff6H0Bs
|
https://papers.nips.cc/paper_files/paper/2021/file/a29a5ba2cb7bdeabba22de8c83321b46-Supplemental.pdf
|
Learning effective visual representations that generalize well without human supervision is a fundamental problem for applying machine learning to a wide variety of tasks. Recently, two families of self-supervised methods, contrastive learning and latent bootstrapping, exemplified by SimCLR and BYOL respectively, have made significant progress. In this work, we hypothesize that adding explicit information compression to these algorithms yields better and more robust representations. We verify this by developing SimCLR and BYOL formulations compatible with the Conditional Entropy Bottleneck (CEB) objective, allowing us to both measure and control the amount of compression in the learned representation and to observe its impact on downstream tasks. Furthermore, we explore the relationship between Lipschitz continuity and compression, showing a tractable lower bound on the Lipschitz constant of the encoders we learn. As Lipschitz continuity is closely related to robustness, this provides a new explanation for why compressed models are more robust. Our experiments confirm that adding compression to SimCLR and BYOL significantly improves linear evaluation accuracies and model robustness across a wide range of domain shifts. In particular, the compressed version of BYOL achieves 76.0% Top-1 linear evaluation accuracy on ImageNet with ResNet-50, and 78.8% with ResNet-50 2x.
| null |
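For context, the sketch below shows a simplified cross-view InfoNCE loss of the kind SimCLR optimizes; the paper's contribution is the Conditional Entropy Bottleneck term added on top of such objectives, which is not reproduced here. The batch size, embedding width, and temperature are arbitrary.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def info_nce(z1, z2, temperature=0.1):
    """Simplified cross-view InfoNCE: positives are matching rows of the two views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature          # cosine similarities between views
    labels = torch.arange(z1.size(0))         # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Stand-in encoder outputs for two augmented views of the same 128-image batch.
z1, z2 = torch.randn(128, 64), torch.randn(128, 64)
print(float(info_nce(z1, z2)))
```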
Multi-Armed Bandits with Bounded Arm-Memory: Near-Optimal Guarantees for Best-Arm Identification and Regret Minimization
|
https://papers.nips.cc/paper_files/paper/2021/hash/a2f04745390fd6897d09772b2cd1f581-Abstract.html
|
Arnab Maiti, Vishakha Patil, Arindam Khan
|
https://papers.nips.cc/paper_files/paper/2021/hash/a2f04745390fd6897d09772b2cd1f581-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13119-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a2f04745390fd6897d09772b2cd1f581-Paper.pdf
|
https://openreview.net/forum?id=sKWgT8WppC3
|
https://papers.nips.cc/paper_files/paper/2021/file/a2f04745390fd6897d09772b2cd1f581-Supplemental.pdf
|
We study the stochastic multi-armed bandit problem under bounded arm-memory. In this setting, the arms arrive in a stream, and the number of arms that can be stored in memory at any time is bounded. The decision-maker can only pull arms that are present in memory. We address the problem from the perspective of two standard objectives: 1) regret minimization, and 2) best-arm identification. For regret minimization, we settle an important open question by showing an almost tight guarantee. We show $\Omega(T^{2/3})$ cumulative regret in expectation for single-pass algorithms with an arm-memory size of $(n-1)$, where $n$ is the number of arms. For best-arm identification, we provide an $(\varepsilon, \delta)$-PAC algorithm with an arm-memory size of $O(\log^* n)$ and an optimal sample complexity of $O(\frac{n}{\varepsilon^2}\cdot \log(\frac{1}{\delta}))$.
| null |
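A toy illustration of the bounded arm-memory constraint: arms arrive in a stream and only a champion and the current newcomer are ever stored. The elimination rule below is deliberately naive and is not the paper's $(\varepsilon, \delta)$-PAC algorithm; the stream length and sample budget are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stream of 200 Bernoulli arms; at most two arms (champion + newcomer) in memory.
arm_means = rng.uniform(0.1, 0.9, size=200)
budget_per_arm = 300

champion = 0
champion_est = rng.binomial(budget_per_arm, arm_means[0]) / budget_per_arm
for arm in range(1, len(arm_means)):
    challenger_est = rng.binomial(budget_per_arm, arm_means[arm]) / budget_per_arm
    if challenger_est > champion_est:        # keep the empirically better arm only
        champion, champion_est = arm, challenger_est

print(f"identified arm {champion} (true mean {arm_means[champion]:.3f}, "
      f"best true mean {arm_means.max():.3f})")
```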
Grounding inductive biases in natural images: invariance stems from variations in data
|
https://papers.nips.cc/paper_files/paper/2021/hash/a2fe8c05877ec786290dd1450c3385cd-Abstract.html
|
Diane Bouchacourt, Mark Ibrahim, Ari Morcos
|
https://papers.nips.cc/paper_files/paper/2021/hash/a2fe8c05877ec786290dd1450c3385cd-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13120-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a2fe8c05877ec786290dd1450c3385cd-Paper.pdf
|
https://openreview.net/forum?id=p7GujbewmRY
|
https://papers.nips.cc/paper_files/paper/2021/file/a2fe8c05877ec786290dd1450c3385cd-Supplemental.pdf
|
To perform well on unseen and potentially out-of-distribution samples, it is desirable for machine learning models to have a predictable response with respect to transformations affecting the factors of variation of the input. Here, we study the relative importance of several types of inductive biases towards such predictable behavior: the choice of data, their augmentations, and model architectures. Invariance is commonly achieved through hand-engineered data augmentation, but do standard data augmentations address transformations that explain variations in real data? While prior work has focused on synthetic data, we attempt here to characterize the factors of variation in a real dataset, ImageNet, and study the invariance of both standard residual networks and the recently proposed vision transformer with respect to changes in these factors. We show standard augmentation relies on a precise combination of translation and scale, with translation recapturing most of the performance improvement---despite the (approximate) translation invariance built into convolutional architectures such as residual networks. In fact, we found that scale and translation invariance were similar across residual networks and vision transformer models despite their markedly different architectural inductive biases. We show the training data itself is the main source of invariance, and that data augmentation only further increases the learned invariances. Notably, the invariances learned during training align with the ImageNet factors of variation we found. Finally, we find that the main factors of variation in ImageNet mostly relate to appearance and are specific to each class.
| null |
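A bare-bones version of the kind of invariance measurement described above: compare an encoder's features on images and on translated copies. The randomly initialized CNN and noise images below are placeholders for the trained ResNets/ViTs and ImageNet factors of variation studied in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Small encoder with global average pooling; random weights for illustration.
encoder = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

images = torch.randn(16, 3, 64, 64)
shifted = torch.roll(images, shifts=(8, 8), dims=(2, 3))   # 8-pixel translation

with torch.no_grad():
    z, z_shift = encoder(images), encoder(shifted)

# Invariance proxy: how similar are the features before and after the shift?
invariance = F.cosine_similarity(z, z_shift).mean()
print(f"mean cosine similarity under an 8-pixel translation: {invariance:.3f}")
```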
Directed Graph Contrastive Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/a3048e47310d6efaa4b1eaf55227bc92-Abstract.html
|
Zekun Tong, Yuxuan Liang, Henghui Ding, Yongxing Dai, Xinke Li, Changhu Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/a3048e47310d6efaa4b1eaf55227bc92-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13121-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a3048e47310d6efaa4b1eaf55227bc92-Paper.pdf
|
https://openreview.net/forum?id=s6JD_xBS31
|
https://papers.nips.cc/paper_files/paper/2021/file/a3048e47310d6efaa4b1eaf55227bc92-Supplemental.pdf
|
Graph Contrastive Learning (GCL) has emerged to learn generalizable representations from contrastive views. However, it is still in its infancy, with two concerns: 1) changing the graph structure through data augmentation to generate contrastive views may mislead the message passing scheme, as such changes discard intrinsic graph structural information, especially the directional structure in directed graphs; 2) since GCL usually uses predefined contrastive views with hand-picked parameters, it does not take full advantage of the contrastive information provided by data augmentation, resulting in incomplete structural information for model learning. In this paper, we design a directed graph data augmentation method called Laplacian perturbation and theoretically analyze how it provides contrastive information without changing the directed graph structure. Moreover, we present a directed graph contrastive learning framework that dynamically learns from all possible contrastive views generated by Laplacian perturbation. We then train it using multi-task curriculum learning to progressively learn from multiple easy-to-difficult contrastive views. We empirically show that our model can retain more structural features of directed graphs than other GCL models because of its ability to provide complete contrastive information. Experiments on various benchmarks show that our approach outperforms state-of-the-art methods.
| null |
Space-time Mixing Attention for Video Transformer
|
https://papers.nips.cc/paper_files/paper/2021/hash/a34bacf839b923770b2c360eefa26748-Abstract.html
|
Adrian Bulat, Juan Manuel Perez Rua, Swathikiran Sudhakaran, Brais Martinez, Georgios Tzimiropoulos
|
https://papers.nips.cc/paper_files/paper/2021/hash/a34bacf839b923770b2c360eefa26748-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13122-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a34bacf839b923770b2c360eefa26748-Paper.pdf
|
https://openreview.net/forum?id=QgX15Mdi1E_
|
https://papers.nips.cc/paper_files/paper/2021/file/a34bacf839b923770b2c360eefa26748-Supplemental.pdf
|
This paper is on video recognition using Transformers. Very recent attempts in this area have demonstrated promising results in terms of recognition accuracy, yet they have also been shown to induce, in many cases, significant computational overhead due to the additional modelling of the temporal information. In this work, we propose a Video Transformer model whose complexity scales linearly with the number of frames in the video sequence and hence induces no overhead compared to an image-based Transformer model. To achieve this, our model makes two approximations to the full space-time attention used in Video Transformers: (a) it restricts time attention to a local temporal window and capitalizes on the Transformer's depth to obtain full temporal coverage of the video sequence; (b) it uses efficient space-time mixing to attend jointly to spatial and temporal locations without inducing any additional cost on top of a spatial-only attention model. We also show how to integrate two very lightweight mechanisms for global temporal-only attention, which provide additional accuracy improvements at minimal computational cost. We demonstrate that our model produces very high recognition accuracy on the most popular video recognition datasets while at the same time being significantly more efficient than other Video Transformer models.
| null |
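The "local temporal window" approximation can be sketched as an attention mask: with tokens laid out as T frames of S spatial tokens, each token may attend only to tokens within one frame of its own. The sizes and random projections below are arbitrary; this is not the paper's full space-time mixing scheme.

```python
import torch

torch.manual_seed(0)

# T frames of S spatial tokens each, embedding width d.
T, S, d = 8, 16, 32
x = torch.randn(T * S, d)
q, k, v = x @ torch.randn(d, d), x @ torch.randn(d, d), x @ torch.randn(d, d)

# Each token may attend only to tokens whose frame index differs by at most 1.
frame_idx = torch.arange(T).repeat_interleave(S)
allowed = (frame_idx[:, None] - frame_idx[None, :]).abs() <= 1

scores = (q @ k.T) / d**0.5
scores = scores.masked_fill(~allowed, float("-inf"))
out = torch.softmax(scores, dim=-1) @ v

print(out.shape)                 # (T*S, d)
print(allowed.float().mean())    # fraction of token pairs allowed to interact
```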
Particle Dual Averaging: Optimization of Mean Field Neural Network with Global Convergence Rate Analysis
|
https://papers.nips.cc/paper_files/paper/2021/hash/a34e1ddbb4d329167f50992ba59fe45a-Abstract.html
|
Atsushi Nitanda, Denny Wu, Taiji Suzuki
|
https://papers.nips.cc/paper_files/paper/2021/hash/a34e1ddbb4d329167f50992ba59fe45a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13123-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a34e1ddbb4d329167f50992ba59fe45a-Paper.pdf
|
https://openreview.net/forum?id=jbcRU9dkxs
|
https://papers.nips.cc/paper_files/paper/2021/file/a34e1ddbb4d329167f50992ba59fe45a-Supplemental.pdf
|
We propose the particle dual averaging (PDA) method, which generalizes the dual averaging method in convex optimization to optimization over probability distributions with a quantitative runtime guarantee. The algorithm consists of an inner loop and an outer loop: the inner loop utilizes the Langevin algorithm to approximately solve for a stationary distribution, which is then optimized in the outer loop. The method can be interpreted as an extension of the Langevin algorithm that naturally handles nonlinear functionals on the probability space. An important application of the proposed method is the optimization of neural networks in the mean-field regime, which is theoretically attractive due to the presence of nonlinear feature learning, but for which quantitative convergence rates can be challenging to obtain. By adapting finite-dimensional convex optimization theory to the space of measures, we not only establish global convergence of PDA for two-layer mean-field neural networks under more general settings and with a simpler analysis, but also provide a quantitative polynomial runtime guarantee. Our theoretical results are supported by numerical simulations on neural networks of reasonable size.
| null |
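The inner-loop building block named in the abstract, Langevin dynamics targeting a Gibbs distribution, can be checked in isolation on a toy potential; the outer dual-averaging loop that redefines the potential from averaged loss linearizations is not shown here. Potential, step size, and particle count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unadjusted Langevin dynamics targeting exp(-U(x)) with U(x) = (x - 1)^2 / 2,
# whose stationary distribution is N(1, 1).
def grad_U(x):
    return x - 1.0

particles = rng.normal(0.0, 1.0, size=5000)
step = 0.01
for _ in range(2000):
    noise = rng.normal(size=particles.size)
    particles += -step * grad_U(particles) + np.sqrt(2.0 * step) * noise

print(f"empirical mean {particles.mean():.2f}, var {particles.var():.2f}  (target: 1, 1)")
```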
Learning Tree Interpretation from Object Representation for Deep Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/a35fe7f7fe8217b4369a0af4244d1fca-Abstract.html
|
Guiliang Liu, Xiangyu Sun, Oliver Schulte, Pascal Poupart
|
https://papers.nips.cc/paper_files/paper/2021/hash/a35fe7f7fe8217b4369a0af4244d1fca-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/13124-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/a35fe7f7fe8217b4369a0af4244d1fca-Paper.pdf
|
https://openreview.net/forum?id=jb5fp_wQGHU
|
https://papers.nips.cc/paper_files/paper/2021/file/a35fe7f7fe8217b4369a0af4244d1fca-Supplemental.pdf
|
Interpreting Deep Reinforcement Learning (DRL) models is important for enhancing trust and complying with transparency regulations. Existing methods typically explain a DRL model by visualizing the importance of low-level input features with super-pixels, attention maps, or saliency maps. Our approach provides an interpretation based on high-level latent object features derived from a disentangled representation. We propose a Represent And Mimic (RAMi) framework for training 1) an identifiable latent representation to capture the independent factors of variation for the objects and 2) a mimic tree that extracts the causal impact of the latent features on DRL action values. To jointly optimize both the fidelity and the simplicity of a mimic tree, we derive a novel Minimum Description Length (MDL) objective based on the Information Bottleneck (IB) principle. Based on this objective, we describe a Monte Carlo Regression Tree Search (MCRTS) algorithm that explores different splits to find the IB-optimal mimic tree. Experiments show that our mimic tree achieves strong approximation performance with significantly fewer nodes than baseline models. We demonstrate the interpretability of our mimic tree by showing latent traversals, decision rules, causal impacts, and human evaluation results.
| null |
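As a rough stand-in for the mimic-tree step, the snippet below fits a small regression tree to reproduce synthetic action values from latent features and reports its fidelity and size. It uses plain CART with a node budget, not the MDL objective or Monte Carlo Regression Tree Search of the paper; the latent features and Q-function are synthetic.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Synthetic "latent object features" and a black-box Q-function to be mimicked.
latents = rng.uniform(-1, 1, size=(5000, 4))
q_values = 2.0 * latents[:, 0] - (latents[:, 1] > 0) * 1.5 + 0.1 * rng.normal(size=5000)

# Fit a small tree under a node budget and check how faithfully it mimics Q.
mimic = DecisionTreeRegressor(max_leaf_nodes=8, random_state=0)
mimic.fit(latents, q_values)

print(f"fidelity (R^2): {mimic.score(latents, q_values):.3f}")
print(f"tree size: {mimic.tree_.node_count} nodes")
```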