title | url | authors | detail_url | tags | Bibtex | Paper | Reviews And Public Comment » | Supplemental | abstract | Supplemental Errata |
---|---|---|---|---|---|---|---|---|---|---|
Rethinking Space-Time Networks with Improved Memory Coverage for Efficient Video Object Segmentation
|
https://papers.nips.cc/paper_files/paper/2021/hash/61b4a64be663682e8cb037d9719ad8cd-Abstract.html
|
Ho Kei Cheng, Yu-Wing Tai, Chi-Keung Tang
|
https://papers.nips.cc/paper_files/paper/2021/hash/61b4a64be663682e8cb037d9719ad8cd-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12524-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/61b4a64be663682e8cb037d9719ad8cd-Paper.pdf
|
https://openreview.net/forum?id=vllRjSTWcLs
|
https://papers.nips.cc/paper_files/paper/2021/file/61b4a64be663682e8cb037d9719ad8cd-Supplemental.pdf
|
This paper presents a simple yet effective approach to modeling space-time correspondences in the context of video object segmentation. Unlike most existing approaches, we establish correspondences directly between frames without re-encoding the mask features for every object, leading to a highly efficient and robust framework. With the correspondences, every node in the current query frame is inferred by aggregating features from the past in an associative fashion. We cast the aggregation process as a voting problem and find that the existing inner-product affinity leads to poor use of memory with a small (fixed) subset of memory nodes dominating the votes, regardless of the query. In light of this phenomenon, we propose using the negative squared Euclidean distance instead to compute the affinities. We validated that every memory node now has a chance to contribute, and experimentally showed that such diversified voting is beneficial to both memory efficiency and inference accuracy. The synergy of correspondence networks and diversified voting works exceedingly well, achieving new state-of-the-art results on both DAVIS and YouTubeVOS datasets while running significantly faster at 20+ FPS for multiple objects without bells and whistles.
| null |
Sparse Spiking Gradient Descent
|
https://papers.nips.cc/paper_files/paper/2021/hash/61f2585b0ebcf1f532c4d1ec9a7d51aa-Abstract.html
|
Nicolas Perez-Nieves, Dan Goodman
|
https://papers.nips.cc/paper_files/paper/2021/hash/61f2585b0ebcf1f532c4d1ec9a7d51aa-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12525-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/61f2585b0ebcf1f532c4d1ec9a7d51aa-Paper.pdf
|
https://openreview.net/forum?id=aLE2sEtMNXv
|
https://papers.nips.cc/paper_files/paper/2021/file/61f2585b0ebcf1f532c4d1ec9a7d51aa-Supplemental.pdf
|
There is an increasing interest in emulating Spiking Neural Networks (SNNs) on neuromorphic computing devices due to their low energy consumption. Recent advances have allowed training SNNs to a point where they start to compete with traditional Artificial Neural Networks (ANNs) in terms of accuracy, while at the same time being energy efficient when run on neuromorphic hardware. However, the process of training SNNs is still based on dense tensor operations originally developed for ANNs, which do not leverage the spatiotemporally sparse nature of SNNs. We present here the first sparse SNN backpropagation algorithm which achieves the same or better accuracy as current state-of-the-art methods while being significantly faster and more memory efficient. We show the effectiveness of our method on real datasets of varying complexity (Fashion-MNIST, Neuromorphic-MNIST and Spiking Heidelberg Digits), achieving a speedup in the backward pass of up to $150$x and being $85\%$ more memory efficient, without losing accuracy.
| null |
Rethinking Calibration of Deep Neural Networks: Do Not Be Afraid of Overconfidence
|
https://papers.nips.cc/paper_files/paper/2021/hash/61f3a6dbc9120ea78ef75544826c814e-Abstract.html
|
Deng-Bao Wang, Lei Feng, Min-Ling Zhang
|
https://papers.nips.cc/paper_files/paper/2021/hash/61f3a6dbc9120ea78ef75544826c814e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12526-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/61f3a6dbc9120ea78ef75544826c814e-Paper.pdf
|
https://openreview.net/forum?id=NJS8kp15zzH
|
https://papers.nips.cc/paper_files/paper/2021/file/61f3a6dbc9120ea78ef75544826c814e-Supplemental.pdf
|
Capturing accurate uncertainty quantification of the prediction from deep neural networks is important in many real-world decision-making applications. A reliable predictor is expected to be accurate when it is confident about its predictions and to indicate high uncertainty when it is likely to be inaccurate. However, modern neural networks have been found to be poorly calibrated, primarily in the direction of overconfidence. In recent years, there has been a surge of research on model calibration that leverages implicit or explicit regularization techniques during training, which obtain good calibration by avoiding overconfident outputs. In our study, we empirically found that although the predictions obtained from these regularized models are better calibrated, they are less calibratable, namely, it is harder to further calibrate their predictions with post-hoc calibration methods like temperature scaling and histogram binning. We conduct a series of empirical studies showing that overconfidence may not hurt final calibration performance if post-hoc calibration is allowed; rather, the penalty on confident outputs compresses the room for potential improvement in the post-hoc calibration phase. Our experimental findings point to a new direction for improving the calibration of DNNs by considering main training and post-hoc calibration as a unified framework.
| null |
Towards Efficient and Effective Adversarial Training
|
https://papers.nips.cc/paper_files/paper/2021/hash/62889e73828c756c961c5a6d6c01a463-Abstract.html
|
Gaurang Sriramanan, Sravanti Addepalli, Arya Baburaj, Venkatesh Babu R
|
https://papers.nips.cc/paper_files/paper/2021/hash/62889e73828c756c961c5a6d6c01a463-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12527-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/62889e73828c756c961c5a6d6c01a463-Paper.pdf
|
https://openreview.net/forum?id=kuK2VARZGnI
|
https://papers.nips.cc/paper_files/paper/2021/file/62889e73828c756c961c5a6d6c01a463-Supplemental.pdf
|
The vulnerability of Deep Neural Networks to adversarial attacks has spurred immense interest towards improving their robustness. However, present state-of-the-art adversarial defenses involve the use of 10-step adversaries during training, which renders them computationally infeasible for application to large-scale datasets. While the recent single-step defenses show promising direction, their robustness is not on par with multi-step training methods. In this work, we bridge this performance gap by introducing a novel Nuclear-Norm regularizer on network predictions to enforce function smoothing in the vicinity of data samples. While prior works consider each data sample independently, the proposed regularizer uses the joint statistics of adversarial samples across a training minibatch to enhance optimization during both attack generation and training, obtaining state-of-the-art results amongst efficient defenses. We achieve further gains by incorporating exponential averaging of network weights over training iterations. We finally introduce a Hybrid training approach that combines the effectiveness of a two-step variant of the proposed defense with the efficiency of a single-step defense. We demonstrate superior results when compared to multi-step defenses such as TRADES and PGD-AT as well, at a significantly lower computational cost.
| null |
Intriguing Properties of Contrastive Losses
|
https://papers.nips.cc/paper_files/paper/2021/hash/628f16b29939d1b060af49f66ae0f7f8-Abstract.html
|
Ting Chen, Calvin Luo, Lala Li
|
https://papers.nips.cc/paper_files/paper/2021/hash/628f16b29939d1b060af49f66ae0f7f8-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12528-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/628f16b29939d1b060af49f66ae0f7f8-Paper.pdf
|
https://openreview.net/forum?id=rYhBGWYm6AU
|
https://papers.nips.cc/paper_files/paper/2021/file/628f16b29939d1b060af49f66ae0f7f8-Supplemental.pdf
|
We study three intriguing properties of contrastive learning. First, we generalize the standard contrastive loss to a broader family of losses, and we find that various instantiations of the generalized loss perform similarly under the presence of a multi-layer non-linear projection head. Second, we study if instance-based contrastive learning (with a global image representation) can learn well on images with multiple objects present. We find that meaningful hierarchical local features can be learned despite the fact that these objectives operate on global instance-level features. Finally, we study the phenomenon of feature suppression among competing features shared across augmented views, such as "color distribution" vs "object class". We construct datasets with explicit and controllable competing features, and show that, for contrastive learning, a few bits of easy-to-learn shared features can suppress, and even fully prevent, the learning of other sets of competing features. In scenarios where there are multiple objects in an image, the dominant object would suppress the learning of smaller objects. Existing contrastive learning methods critically rely on data augmentation to favor certain sets of features over others, and could suffer from learning saturation for scenarios where existing augmentations cannot fully address the feature suppression. This poses open challenges to existing contrastive learning techniques.
| null |
Detecting Moments and Highlights in Videos via Natural Language Queries
|
https://papers.nips.cc/paper_files/paper/2021/hash/62e0973455fd26eb03e91d5741a4a3bb-Abstract.html
|
Jie Lei, Tamara L Berg, Mohit Bansal
|
https://papers.nips.cc/paper_files/paper/2021/hash/62e0973455fd26eb03e91d5741a4a3bb-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12529-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/62e0973455fd26eb03e91d5741a4a3bb-Paper.pdf
|
https://openreview.net/forum?id=tfBBt_q4nHT
|
https://papers.nips.cc/paper_files/paper/2021/file/62e0973455fd26eb03e91d5741a4a3bb-Supplemental.pdf
|
Detecting customized moments and highlights from videos given natural language (NL) user queries is an important but under-studied topic. One of the challenges in pursuing this direction is the lack of annotated data. To address this issue, we present the Query-based Video Highlights (QVHighlights) dataset. It consists of over 10,000 YouTube videos, covering a wide range of topics, from everyday activities and travel in lifestyle vlog videos to social and political activities in news videos. Each video in the dataset is annotated with: (1) a human-written free-form NL query, (2) relevant moments in the video w.r.t. the query, and (3) five-point scale saliency scores for all query-relevant clips. This comprehensive annotation enables us to develop and evaluate systems that detect relevant moments as well as salient highlights for diverse, flexible user queries. We also present a strong baseline for this task, Moment-DETR, a transformer encoder-decoder model that views moment retrieval as a direct set prediction problem, taking extracted video and query representations as inputs and predicting moment coordinates and saliency scores end-to-end. While our model does not utilize any human prior, we show that it performs competitively when compared to well-engineered architectures. With weakly supervised pretraining using ASR captions, Moment-DETR substantially outperforms previous methods. Lastly, we present several ablations and visualizations of Moment-DETR. Data and code are publicly available at https://github.com/jayleicn/moment_detr.
| null |
Stochastic optimization under time drift: iterate averaging, step-decay schedules, and high probability guarantees
|
https://papers.nips.cc/paper_files/paper/2021/hash/62e7f2e090fe150ef8deb4466fdc81b3-Abstract.html
|
Joshua Cutler, Dmitriy Drusvyatskiy, Zaid Harchaoui
|
https://papers.nips.cc/paper_files/paper/2021/hash/62e7f2e090fe150ef8deb4466fdc81b3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12530-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/62e7f2e090fe150ef8deb4466fdc81b3-Paper.pdf
|
https://openreview.net/forum?id=w5j80GVGFsr
|
https://papers.nips.cc/paper_files/paper/2021/file/62e7f2e090fe150ef8deb4466fdc81b3-Supplemental.pdf
|
We consider the problem of minimizing a convex function that is evolving in time according to unknown and possibly stochastic dynamics. Such problems abound in the machine learning and signal processing literature, under the names of concept drift and stochastic tracking. We provide novel non-asymptotic convergence guarantees for stochastic algorithms with iterate averaging, focusing on bounds valid both in expectation and with high probability. Notably, we show that the tracking efficiency of the proximal stochastic gradient method depends only logarithmically on the initialization quality when equipped with a step-decay schedule.
| null |
Learning Stable Deep Dynamics Models for Partially Observed or Delayed Dynamical Systems
|
https://papers.nips.cc/paper_files/paper/2021/hash/6332a8f62e3a9d5831724f2ffe55cae0-Abstract.html
|
Andreas Schlaginhaufen, Philippe Wenk, Andreas Krause, Florian Dorfler
|
https://papers.nips.cc/paper_files/paper/2021/hash/6332a8f62e3a9d5831724f2ffe55cae0-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12531-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6332a8f62e3a9d5831724f2ffe55cae0-Paper.pdf
|
https://openreview.net/forum?id=u8HmtBBSVJS
|
https://papers.nips.cc/paper_files/paper/2021/file/6332a8f62e3a9d5831724f2ffe55cae0-Supplemental.pdf
|
Learning how complex dynamical systems evolve over time is a key challenge in system identification. For safety critical systems, it is often crucial that the learned model is guaranteed to converge to some equilibrium point. To this end, neural ODEs regularized with neural Lyapunov functions are a promising approach when states are fully observed. For practical applications however, {\em partial observations} are the norm. As we will demonstrate, initialization of unobserved augmented states can become a key problem for neural ODEs. To alleviate this issue, we propose to augment the system's state with its history. Inspired by state augmentation in discrete-time systems, we thus obtain {\em neural delay differential equations}. Based on classical time delay stability analysis, we then show how to ensure stability of the learned models, and theoretically analyze our approach. Our experiments demonstrate its applicability to stable system identification of partially observed systems and learning a stabilizing feedback policy in delayed feedback control.
| null |
An Uncertainty Principle is a Price of Privacy-Preserving Microdata
|
https://papers.nips.cc/paper_files/paper/2021/hash/639d79cc857a6c76c2723b7e014fccb0-Abstract.html
|
John Abowd, Robert Ashmead, Ryan Cumings-Menon, Simson Garfinkel, Daniel Kifer, Philip Leclerc, William Sexton, Ashley Simpson, Christine Task, Pavel Zhuravlev
|
https://papers.nips.cc/paper_files/paper/2021/hash/639d79cc857a6c76c2723b7e014fccb0-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12532-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/639d79cc857a6c76c2723b7e014fccb0-Paper.pdf
|
https://openreview.net/forum?id=6tGP5Z-QbMb
| null |
Privacy-protected microdata are often the desired output of a differentially private algorithm since microdata is familiar and convenient for downstream users. However, there is a statistical price for this kind of convenience. We show that an uncertainty principle governs the trade-off between accuracy for a population of interest (``sum query'') vs. accuracy for its component sub-populations (``point queries''). Compared to differentially private query answering systems that are not required to produce microdata, accuracy can degrade by a logarithmic factor. For example, in the case of pure differential privacy, without the microdata requirement, one can provide noisy answers to the sum query and all point queries while guaranteeing that each answer has squared error $O(1/\epsilon^2)$. With the microdata requirement, one must choose between allowing an additional $\log^2(d)$ factor ($d$ is the number of point queries) for some point queries or allowing an extra $O(d^2)$ factor for the sum query. We present lower bounds for pure, approximate, and concentrated differential privacy. We propose mitigation strategies and create a collection of benchmark datasets that can be used for public study of this problem.
| null |
Fairness in Ranking under Uncertainty
|
https://papers.nips.cc/paper_files/paper/2021/hash/63c3ddcc7b23daa1e42dc41f9a44a873-Abstract.html
|
Ashudeep Singh, David Kempe, Thorsten Joachims
|
https://papers.nips.cc/paper_files/paper/2021/hash/63c3ddcc7b23daa1e42dc41f9a44a873-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12533-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/63c3ddcc7b23daa1e42dc41f9a44a873-Paper.pdf
|
https://openreview.net/forum?id=7wunGXQoC27
|
https://papers.nips.cc/paper_files/paper/2021/file/63c3ddcc7b23daa1e42dc41f9a44a873-Supplemental.pdf
|
Fairness has emerged as an important consideration in algorithmic decision making. Unfairness occurs when an agent with higher merit obtains a worse outcome than an agent with lower merit. Our central point is that a primary cause of unfairness is uncertainty. A principal or algorithm making decisions never has access to the agents' true merit, and instead uses proxy features that only imperfectly predict merit (e.g., GPA, star ratings, recommendation letters). None of these ever fully capture an agent's merit; yet existing approaches have mostly been defining fairness notions directly based on observed features and outcomes. Our primary point is that it is more principled to acknowledge and model the uncertainty explicitly. The role of observed features is to give rise to a posterior distribution of the agents' merits. We use this viewpoint to define a notion of approximate fairness in ranking. We call an algorithm $\phi$-fair (for $\phi \in [0,1]$) if it has the following property for all agents $x$ and all $k$: if agent $x$ is among the top $k$ agents with respect to merit with probability at least $\rho$ (according to the posterior merit distribution), then the algorithm places the agent among the top $k$ agents in its ranking with probability at least $\phi \rho$. We show how to compute rankings that optimally trade off approximate fairness against utility to the principal. In addition to the theoretical characterization, we present an empirical analysis of the potential impact of the approach in simulation studies. For real-world validation, we applied the approach in the context of a paper recommendation system that we built and fielded at the KDD 2020 conference.
| null |
Generalized Proximal Policy Optimization with Sample Reuse
|
https://papers.nips.cc/paper_files/paper/2021/hash/63c4b1baf3b4460fa9936b1a20919bec-Abstract.html
|
James Queeney, Yannis Paschalidis, Christos G Cassandras
|
https://papers.nips.cc/paper_files/paper/2021/hash/63c4b1baf3b4460fa9936b1a20919bec-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12534-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/63c4b1baf3b4460fa9936b1a20919bec-Paper.pdf
|
https://openreview.net/forum?id=in_RVSTqYxK
|
https://papers.nips.cc/paper_files/paper/2021/file/63c4b1baf3b4460fa9936b1a20919bec-Supplemental.pdf
|
In real-world decision making tasks, it is critical for data-driven reinforcement learning methods to be both stable and sample efficient. On-policy methods typically generate reliable policy improvement throughout training, while off-policy methods make more efficient use of data through sample reuse. In this work, we combine the theoretically supported stability benefits of on-policy algorithms with the sample efficiency of off-policy algorithms. We develop policy improvement guarantees that are suitable for the off-policy setting, and connect these bounds to the clipping mechanism used in Proximal Policy Optimization. This motivates an off-policy version of the popular algorithm that we call Generalized Proximal Policy Optimization with Sample Reuse. We demonstrate both theoretically and empirically that our algorithm delivers improved performance by effectively balancing the competing goals of stability and sample efficiency.
| null |
Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data
|
https://papers.nips.cc/paper_files/paper/2021/hash/63dc7ed1010d3c3b8269faf0ba7491d4-Abstract.html
|
Gongfan Fang, Yifan Bao, Jie Song, Xinchao Wang, Donglin Xie, Chengchao Shen, Mingli Song
|
https://papers.nips.cc/paper_files/paper/2021/hash/63dc7ed1010d3c3b8269faf0ba7491d4-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12535-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/63dc7ed1010d3c3b8269faf0ba7491d4-Paper.pdf
|
https://openreview.net/forum?id=lU1tFeUyBTI
|
https://papers.nips.cc/paper_files/paper/2021/file/63dc7ed1010d3c3b8269faf0ba7491d4-Supplemental.pdf
|
Knowledge distillation~(KD) aims to craft a compact student model that imitates the behavior of a pre-trained teacher in a target domain. Prior KD approaches, despite their gratifying results, have largely relied on the premise that \emph{in-domain} data is available to carry out the knowledge transfer. Such an assumption, unfortunately, in many cases violates the practical setting, since the original training data or even the data domain is often unreachable due to privacy or copyright reasons. In this paper, we attempt to tackle an ambitious task, termed as \emph{out-of-domain} knowledge distillation~(OOD-KD), which allows us to conduct KD using only OOD data that can be readily obtained at a very low cost. Admittedly, OOD-KD is by nature a highly challenging task due to the agnostic domain gap. To this end, we introduce a handy yet surprisingly efficacious approach, dubbed as~\textit{MosaicKD}. The key insight behind MosaicKD is that samples from various domains share common local patterns, even though their global semantics may vary significantly; these shared local patterns, in turn, can be re-assembled, analogous to mosaic tiling, to approximate the in-domain data and further alleviate the domain discrepancy. In MosaicKD, this is achieved through a four-player min-max game, in which a generator, a discriminator, and a student network are collectively trained in an adversarial manner, partially under the guidance of a pre-trained teacher. We validate MosaicKD over classification and semantic segmentation tasks across various benchmarks, and demonstrate that it yields results much superior to the state-of-the-art counterparts on OOD data. Our code is available at \url{https://github.com/zju-vipa/MosaicKD}.
| null |
Batch Active Learning at Scale
|
https://papers.nips.cc/paper_files/paper/2021/hash/64254db8396e404d9223914a0bd355d2-Abstract.html
|
Gui Citovsky, Giulia DeSalvo, Claudio Gentile, Lazaros Karydas, Anand Rajagopalan, Afshin Rostamizadeh, Sanjiv Kumar
|
https://papers.nips.cc/paper_files/paper/2021/hash/64254db8396e404d9223914a0bd355d2-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12536-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/64254db8396e404d9223914a0bd355d2-Paper.pdf
|
https://openreview.net/forum?id=zzdf0CirJM4
|
https://papers.nips.cc/paper_files/paper/2021/file/64254db8396e404d9223914a0bd355d2-Supplemental.pdf
|
The ability to train complex and highly effective models often requires an abundance of training data, which can easily become a bottleneck in cost, time, and computational resources. Batch active learning, which adaptively issues batched queries to a labeling oracle, is a common approach for addressing this problem. The practical benefits of batch sampling come with the downside of less adaptivity and the risk of sampling redundant examples within a batch -- a risk that grows with the batch size. In this work, we analyze an efficient active learning algorithm, which focuses on the large batch setting. In particular, we show that our sampling method, which combines notions of uncertainty and diversity, easily scales to batch sizes (100K-1M) several orders of magnitude larger than used in previous studies and provides significant improvements in model training efficiency compared to recent baselines. Finally, we provide an initial theoretical analysis, proving label complexity guarantees for a related sampling method, which we show is approximately equivalent to our sampling method in specific settings.
| null |
Joint Semantic Mining for Weakly Supervised RGB-D Salient Object Detection
|
https://papers.nips.cc/paper_files/paper/2021/hash/642e92efb79421734881b53e1e1b18b6-Abstract.html
|
Jingjing Li, Wei Ji, Qi Bi, Cheng Yan, Miao Zhang, Yongri Piao, Huchuan Lu, Li cheng
|
https://papers.nips.cc/paper_files/paper/2021/hash/642e92efb79421734881b53e1e1b18b6-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12537-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/642e92efb79421734881b53e1e1b18b6-Paper.pdf
|
https://openreview.net/forum?id=mv-1sL8FMN5
|
https://papers.nips.cc/paper_files/paper/2021/file/642e92efb79421734881b53e1e1b18b6-Supplemental.pdf
|
Training saliency detection models with weak supervision, e.g., image-level tags or captions, is appealing as it removes the costly demand of per-pixel annotations. Despite the rapid progress of RGB-D saliency detection in the fully-supervised setting, it remains an unexplored territory when only weak supervision signals are available. This paper is set to tackle the problem of weakly-supervised RGB-D salient object detection. The key insight in this effort is the idea of maintaining per-pixel pseudo-labels with iterative refinements by reconciling the multimodal input signals in our joint semantic mining (JSM). Considering the large variations in the raw depth map and the lack of explicit pixel-level supervisions, we propose spatial semantic modeling (SSM) to capture saliency-specific depth cues from the raw depth and produce depth-refined pseudo-labels. Moreover, tags and captions are incorporated via a fill-in-the-blank training in our textual semantic modeling (TSM) to estimate the confidences of competing pseudo-labels. At test time, our model involves only a light-weight sub-network of the training pipeline, i.e., it requires only an RGB image as input, thus allowing efficient inference. Extensive evaluations demonstrate the effectiveness of our approach under the weakly-supervised setting. Importantly, our method could also be adapted to work in both fully-supervised and unsupervised paradigms. In each of these scenarios, our approach attains superior performance compared to the state-of-the-art dedicated methods. As a by-product, a CapS dataset is constructed by augmenting the existing benchmark training set with additional image tags and captions.
| null |
Not All Images are Worth 16x16 Words: Dynamic Transformers for Efficient Image Recognition
|
https://papers.nips.cc/paper_files/paper/2021/hash/64517d8435994992e682b3e4aa0a0661-Abstract.html
|
Yulin Wang, Rui Huang, Shiji Song, Zeyi Huang, Gao Huang
|
https://papers.nips.cc/paper_files/paper/2021/hash/64517d8435994992e682b3e4aa0a0661-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12538-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/64517d8435994992e682b3e4aa0a0661-Paper.pdf
|
https://openreview.net/forum?id=M0J1c3PqwKZ
|
https://papers.nips.cc/paper_files/paper/2021/file/64517d8435994992e682b3e4aa0a0661-Supplemental.pdf
|
Vision Transformers (ViT) have achieved remarkable success in large-scale image recognition. They split every 2D image into a fixed number of patches, each of which is treated as a token. Generally, representing an image with more tokens would lead to higher prediction accuracy, while it also results in drastically increased computational cost. To achieve a decent trade-off between accuracy and speed, the number of tokens is empirically set to 16x16 or 14x14. In this paper, we argue that every image has its own characteristics, and ideally the token number should be conditioned on each individual input. In fact, we have observed that there exist a considerable number of “easy” images which can be accurately predicted with a mere number of 4x4 tokens, while only a small fraction of “hard” ones need a finer representation. Inspired by this phenomenon, we propose a Dynamic Transformer to automatically configure a proper number of tokens for each input image. This is achieved by cascading multiple Transformers with increasing numbers of tokens, which are sequentially activated in an adaptive fashion at test time, i.e., the inference is terminated once a sufficiently confident prediction is produced. We further design efficient feature reuse and relationship reuse mechanisms across different components of the Dynamic Transformer to reduce redundant computations. Extensive empirical results on ImageNet, CIFAR-10, and CIFAR-100 demonstrate that our method significantly outperforms the competitive baselines in terms of both theoretical computational efficiency and practical inference speed. Code and pre-trained models (based on PyTorch and MindSpore) are available at https://github.com/blackfeather-wang/Dynamic-Vision-Transformer and https://github.com/blackfeather-wang/Dynamic-Vision-Transformer-MindSpore.
| null |
Contrastive Learning for Neural Topic Model
|
https://papers.nips.cc/paper_files/paper/2021/hash/6467c327eaf8940b4dd07a08c63c5e85-Abstract.html
|
Thong Nguyen, Anh Tuan Luu
|
https://papers.nips.cc/paper_files/paper/2021/hash/6467c327eaf8940b4dd07a08c63c5e85-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12539-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6467c327eaf8940b4dd07a08c63c5e85-Paper.pdf
|
https://openreview.net/forum?id=NEgqO9yB7e
|
https://papers.nips.cc/paper_files/paper/2021/file/6467c327eaf8940b4dd07a08c63c5e85-Supplemental.pdf
|
Recent empirical studies show that adversarial topic models (ATM) can successfully capture semantic patterns of the document by differentiating a document from another dissimilar sample. However, utilizing that discriminative-generative architecture has two important drawbacks: (1) the architecture does not relate similar documents, which share the same document-word distribution of salient words; (2) it restricts the ability to integrate external information, such as sentiments of the document, which has been shown to benefit the training of neural topic models. To address those issues, we revisit the adversarial topic architecture from the viewpoint of mathematical analysis, propose a novel approach to re-formulate the discriminative goal as an optimization problem, and design a novel sampling method which facilitates the integration of external variables. The reformulation encourages the model to incorporate the relations among similar samples and enforces the constraint on the similarity among dissimilar ones; while the sampling method, which is based on the internal input and reconstructed output, helps inform the model of salient words contributing to the main topic. Experimental results show that our framework outperforms other state-of-the-art neural topic models in three common benchmark datasets that belong to various domains, vocabulary sizes, and document lengths in terms of topic coherence.
| null |
Learning in two-player zero-sum partially observable Markov games with perfect recall
|
https://papers.nips.cc/paper_files/paper/2021/hash/646c9941d7fb1bc793a7929328ae3f2f-Abstract.html
|
Tadashi Kozuno, Pierre Ménard, Remi Munos, Michal Valko
|
https://papers.nips.cc/paper_files/paper/2021/hash/646c9941d7fb1bc793a7929328ae3f2f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12540-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/646c9941d7fb1bc793a7929328ae3f2f-Paper.pdf
|
https://openreview.net/forum?id=1LLemKrsgQp
|
https://papers.nips.cc/paper_files/paper/2021/file/646c9941d7fb1bc793a7929328ae3f2f-Supplemental.pdf
|
We study the problem of learning a Nash equilibrium (NE) in an extensive game with imperfect information (EGII) through self-play. Precisely, we focus on two-player, zero-sum, episodic, tabular EGII under the \textit{perfect-recall} assumption where the only feedback is realizations of the game (bandit feedback). In particular the \textit{dynamics of the EGII is not known}---we can only access it by sampling or interacting with a game simulator. For this learning setting, we provide the Implicit Exploration Online Mirror Descent (IXOMD) algorithm. It is a model-free algorithm with a high-probability bound on convergence rate to the NE of order $1/\sqrt{T}$ where~$T$ is the number of played games. Moreover IXOMD is computationally efficient as it needs to perform the updates only along the sampled trajectory.
| null |
A Geometric Structure of Acceleration and Its Role in Making Gradients Small Fast
|
https://papers.nips.cc/paper_files/paper/2021/hash/647c722bf90a49140184672e0d3723e3-Abstract.html
|
Jongmin Lee, Chanwoo Park, Ernest Ryu
|
https://papers.nips.cc/paper_files/paper/2021/hash/647c722bf90a49140184672e0d3723e3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12541-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/647c722bf90a49140184672e0d3723e3-Paper.pdf
|
https://openreview.net/forum?id=tTeJejS8vte
|
https://papers.nips.cc/paper_files/paper/2021/file/647c722bf90a49140184672e0d3723e3-Supplemental.pdf
|
Since Nesterov's seminal 1983 work, many accelerated first-order optimization methods have been proposed, but their analyses lack a common unifying structure. In this work, we identify a geometric structure satisfied by a wide range of first-order accelerated methods. Using this geometric insight, we present several novel generalizations of accelerated methods. Most interesting among them is a method that reduces the squared gradient norm with an $\mathcal{O}(1/K^4)$ rate in the prox-grad setup, faster than the $\mathcal{O}(1/K^3)$ rates of Nesterov's FGM or Kim and Fessler's FPGM-m.
| null |
ATISS: Autoregressive Transformers for Indoor Scene Synthesis
|
https://papers.nips.cc/paper_files/paper/2021/hash/64986d86a17424eeac96b08a6d519059-Abstract.html
|
Despoina Paschalidou, Amlan Kar, Maria Shugrina, Karsten Kreis, Andreas Geiger, Sanja Fidler
|
https://papers.nips.cc/paper_files/paper/2021/hash/64986d86a17424eeac96b08a6d519059-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12542-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/64986d86a17424eeac96b08a6d519059-Paper.pdf
|
https://openreview.net/forum?id=MtvKv_BDVV
|
https://papers.nips.cc/paper_files/paper/2021/file/64986d86a17424eeac96b08a6d519059-Supplemental.pdf
|
The ability to synthesize realistic and diverse indoor furniture layouts automatically or based on partial input unlocks many applications, from better interactive 3D tools to data synthesis for training and simulation. In this paper, we present ATISS, a novel autoregressive transformer architecture for creating diverse and plausible synthetic indoor environments, given only the room type and its floor plan. In contrast to prior work, which poses scene synthesis as sequence generation, our model generates rooms as unordered sets of objects. We argue that this formulation is more natural, as it makes ATISS generally useful beyond fully automatic room layout synthesis. For example, the same trained model can be used in interactive applications for general scene completion, partial room re-arrangement with any objects specified by the user, as well as object suggestions for any partial room. To enable this, our model leverages the permutation equivariance of the transformer when conditioning on the partial scene, and is trained to be permutation-invariant across object orderings. Our model is trained end-to-end as an autoregressive generative model using only labeled 3D bounding boxes as supervision. Evaluations on four room types in the 3D-FRONT dataset demonstrate that our model consistently generates plausible room layouts that are more realistic than existing methods. In addition, it has fewer parameters, is simpler to implement and train, and runs up to 8 times faster than existing methods.
| null |
Generalized Depthwise-Separable Convolutions for Adversarially Robust and Efficient Neural Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/649adc59afdef2a8b9e943f94a04b02f-Abstract.html
|
Hassan Dbouk, Naresh Shanbhag
|
https://papers.nips.cc/paper_files/paper/2021/hash/649adc59afdef2a8b9e943f94a04b02f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12543-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/649adc59afdef2a8b9e943f94a04b02f-Paper.pdf
|
https://openreview.net/forum?id=bXTxva_xx6r
| null |
Despite their tremendous successes, convolutional neural networks (CNNs) incur high computational/storage costs and are vulnerable to adversarial perturbations. Recent works on robust model compression address these challenges by combining model compression techniques with adversarial training. But these methods are unable to improve throughput (frames-per-second) on real-life hardware while simultaneously preserving robustness to adversarial perturbations. To overcome this problem, we propose the method of Generalized Depthwise-Separable (GDWS) convolution - an efficient, universal, post-training approximation of a standard 2D convolution. GDWS dramatically improves the throughput of a standard pre-trained network on real-life hardware while preserving its robustness. Lastly, GDWS is scalable to large problem sizes since it operates on pre-trained models and doesn't require any additional training. We establish the optimality of GDWS as a 2D convolution approximator and present exact algorithms for constructing optimal GDWS convolutions under complexity and error constraints. We demonstrate the effectiveness of GDWS via extensive experiments on CIFAR-10, SVHN, and ImageNet datasets. Our code can be found at https://github.com/hsndbk4/GDWS.
| null |
A Provably Efficient Model-Free Posterior Sampling Method for Episodic Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/649d45bf179296e31731adfd4df25588-Abstract.html
|
Christoph Dann, Mehryar Mohri, Tong Zhang, Julian Zimmert
|
https://papers.nips.cc/paper_files/paper/2021/hash/649d45bf179296e31731adfd4df25588-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12544-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/649d45bf179296e31731adfd4df25588-Paper.pdf
|
https://openreview.net/forum?id=Ib6VSrtZcu9
|
https://papers.nips.cc/paper_files/paper/2021/file/649d45bf179296e31731adfd4df25588-Supplemental.pdf
|
Thompson Sampling is one of the most effective methods for contextual bandits and has been generalized to posterior sampling for certain MDP settings. However, existing posterior sampling methods for reinforcement learning are limited by being model-based or lack worst-case theoretical guarantees beyond linear MDPs. This paper proposes a new model-free formulation of posterior sampling that applies to more general episodic reinforcement learning problems with theoretical guarantees. We introduce novel proof techniques to show that under suitable conditions, the worst-case regret of our posterior sampling method matches the best known results of optimization based methods. In the linear MDP setting with dimension, the regret of our algorithm scales linearly with the dimension as compared to a quadratic dependence of the existing posterior sampling-based exploration algorithms.
| null |
Fast Federated Learning in the Presence of Arbitrary Device Unavailability
|
https://papers.nips.cc/paper_files/paper/2021/hash/64be20f6dd1dd46adf110cf871e3ed35-Abstract.html
|
Xinran Gu, Kaixuan Huang, Jingzhao Zhang, Longbo Huang
|
https://papers.nips.cc/paper_files/paper/2021/hash/64be20f6dd1dd46adf110cf871e3ed35-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12545-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/64be20f6dd1dd46adf110cf871e3ed35-Paper.pdf
|
https://openreview.net/forum?id=1_gaHBaRYt
|
https://papers.nips.cc/paper_files/paper/2021/file/64be20f6dd1dd46adf110cf871e3ed35-Supplemental.pdf
|
Federated learning (FL) coordinates with numerous heterogeneous devices to collaboratively train a shared model while preserving user privacy. Despite its multiple advantages, FL faces new challenges. One challenge arises when devices drop out of the training process. In this case, the convergence of popular FL algorithms such as FedAvg is severely influenced by the straggling devices. To tackle this challenge, we study federated learning algorithms in the presence of arbitrary device unavailability and propose an algorithm named Memory-augmented Impatient Federated Averaging (MIFA). Our algorithm efficiently avoids excessive latency induced by inactive devices, and corrects the gradient bias using the memorized latest updates from them. We prove that MIFA achieves minimax optimal convergence rates on non-i.i.d. data for both strongly convex and non-convex smooth functions. We also provide an explicit characterization of the improvement over baseline algorithms through a case study, and validate the results by numerical experiments on real-world datasets.
| null |
On The Structure of Parametric Tournaments with Application to Ranking from Pairwise Comparisons
|
https://papers.nips.cc/paper_files/paper/2021/hash/64dafb11e52edd3cd840bf24e56ddce6-Abstract.html
|
Vishnu Veerathu, Arun Rajkumar
|
https://papers.nips.cc/paper_files/paper/2021/hash/64dafb11e52edd3cd840bf24e56ddce6-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12546-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/64dafb11e52edd3cd840bf24e56ddce6-Paper.pdf
|
https://openreview.net/forum?id=nqutwR1WDBY
|
https://papers.nips.cc/paper_files/paper/2021/file/64dafb11e52edd3cd840bf24e56ddce6-Supplemental.pdf
|
We consider the classical problem of finding the minimum feedback arc set on tournaments (MFAST). The problem is NP-hard in general and we study it for important classes of tournaments that arise naturally in the problem of learning to rank from pairwise comparisons. Specifically, we consider tournament classes that arise out of parametric preference matrices that can lead to cyclic preference relations. We investigate their structural properties via forbidden sub-tournament configurations. Towards this, we introduce \emph{Tournament Dimension} - a combinatorial parameter that characterizes the size of a forbidden configuration for rank $r$ tournament classes, i.e., classes that arise out of pairwise preference matrices which lead to rank $r$ skew-symmetric matrices under a suitable link function. Our main result is a polynomial-time algorithm - \texttt{Rank2Rank} - that solves the MFAST problem for the rank $2$ tournament class. This is achieved via a geometric characterization that relies on our explicit construction of a forbidden configuration for this class. Building on our understanding of the rank-$2$ tournament class, we propose a very general and flexible parametric pairwise preference model called the local-global model which subsumes the popular Bradley-Terry-Luce/Thurstone classes to capture locally cyclic as well as globally acyclic preference relations. We develop a polynomial-time algorithm - \texttt{BlockRank2Rank} - to solve the MFAST problem on the associated Block-Rank $2$ tournament class. As an application, we study the problem of learning to rank from pairwise comparisons under the proposed local-global preference model. Exploiting our structural characterization, we propose \texttt{PairwiseBlockRank} - a pairwise ranking algorithm for this class. We show sample complexity bounds of \texttt{PairwiseBlockRank} to learn a good ranking under the proposed model. Finally, we conduct experiments on synthetic and real-world datasets to show the efficacy of the proposed algorithm.
| null |
SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers
|
https://papers.nips.cc/paper_files/paper/2021/hash/64f1f27bf1b4ec22924fd0acb550c235-Abstract.html
|
Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo
|
https://papers.nips.cc/paper_files/paper/2021/hash/64f1f27bf1b4ec22924fd0acb550c235-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12547-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/64f1f27bf1b4ec22924fd0acb550c235-Paper.pdf
|
https://openreview.net/forum?id=OG18MI5TRL
|
https://papers.nips.cc/paper_files/paper/2021/file/64f1f27bf1b4ec22924fd0acb550c235-Supplemental.pdf
|
We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with lightweight multilayer perceptron (MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a novel hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding, thereby avoiding the interpolation of positional codes which leads to decreased performance when the testing resolution differs from training. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from different layers, and thus combines both local attention and global attention to render powerful representations. We show that this simple and lightweight design is the key to efficient segmentation on Transformers. We scale our approach up to obtain a series of models from SegFormer-B0 to SegFormer-B5, which reach much better performance and efficiency than previous counterparts. For example, SegFormer-B4 achieves 50.3% mIoU on ADE20K with 64M parameters, being 5x smaller and 2.2% better than the previous best method. Our best model, SegFormer-B5, achieves 84.0% mIoU on the Cityscapes validation set and shows excellent zero-shot robustness on Cityscapes-C.
| null |
Fairness via Representation Neutralization
|
https://papers.nips.cc/paper_files/paper/2021/hash/64ff7983a47d331b13a81156e2f4d29d-Abstract.html
|
Mengnan Du, Subhabrata Mukherjee, Guanchu Wang, Ruixiang Tang, Ahmed Awadallah, Xia Hu
|
https://papers.nips.cc/paper_files/paper/2021/hash/64ff7983a47d331b13a81156e2f4d29d-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12548-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/64ff7983a47d331b13a81156e2f4d29d-Paper.pdf
|
https://openreview.net/forum?id=nHRGW_wETLQ
|
https://papers.nips.cc/paper_files/paper/2021/file/64ff7983a47d331b13a81156e2f4d29d-Supplemental.pdf
|
Existing bias mitigation methods for DNN models primarily work on learning debiased encoders. This process not only requires a lot of instance-level annotations for sensitive attributes, it also does not guarantee that all fairness-sensitive information has been removed from the encoder. To address these limitations, we explore the following research question: Can we reduce the discrimination of DNN models by only debiasing the classification head, even with biased representations as inputs? To this end, we propose a new mitigation technique, namely, Representation Neutralization for Fairness (RNF) that achieves fairness by debiasing only the task-specific classification head of DNN models. Specifically, we leverage samples with the same ground-truth label but different sensitive attributes, and use their neutralized representations to train the classification head of the DNN model. The key idea of RNF is to discourage the classification head from capturing spurious correlation between fairness-sensitive information in encoder representations and specific class labels. To address low-resource settings with no access to sensitive attribute annotations, we leverage a bias-amplified model to generate proxy annotations for sensitive attributes. Experimental results over several benchmark datasets demonstrate our RNF framework to effectively reduce discrimination of DNN models with minimal degradation in task-specific performance.
| null |
Residual Relaxation for Multi-view Representation Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/6516c28727509c3db6280ae16254e916-Abstract.html
|
Yifei Wang, Zhengyang Geng, Feng Jiang, Chuming Li, Yisen Wang, Jiansheng Yang, Zhouchen Lin
|
https://papers.nips.cc/paper_files/paper/2021/hash/6516c28727509c3db6280ae16254e916-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12549-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6516c28727509c3db6280ae16254e916-Paper.pdf
|
https://openreview.net/forum?id=rEBScZF6G70
|
https://papers.nips.cc/paper_files/paper/2021/file/6516c28727509c3db6280ae16254e916-Supplemental.pdf
|
Multi-view methods learn representations by aligning multiple views of the same image and their performance largely depends on the choice of data augmentation. In this paper, we notice that some other useful augmentations, such as image rotation, are harmful for multi-view methods because they cause a semantic shift that is too large to be aligned well. This observation motivates us to relax the exact alignment objective to better cultivate stronger augmentations. Taking image rotation as a case study, we develop a generic approach, Pretext-aware Residual Relaxation (Prelax), that relaxes the exact alignment by allowing an adaptive residual vector between different views and encoding the semantic shift through pretext-aware learning. Extensive experiments on different backbones show that our method can not only improve multi-view methods with existing augmentations, but also benefit from stronger image augmentations like rotation.
| null |
Do Vision Transformers See Like Convolutional Neural Networks?
|
https://papers.nips.cc/paper_files/paper/2021/hash/652cf38361a209088302ba2b8b7f51e0-Abstract.html
|
Maithra Raghu, Thomas Unterthiner, Simon Kornblith, Chiyuan Zhang, Alexey Dosovitskiy
|
https://papers.nips.cc/paper_files/paper/2021/hash/652cf38361a209088302ba2b8b7f51e0-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12550-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/652cf38361a209088302ba2b8b7f51e0-Paper.pdf
|
https://openreview.net/forum?id=R-616EWWKF5
|
https://papers.nips.cc/paper_files/paper/2021/file/652cf38361a209088302ba2b8b7f51e0-Supplemental.pdf
|
Convolutional neural networks (CNNs) have so far been the de-facto model for visual data. Recent work has shown that (Vision) Transformer models (ViT) can achieve comparable or even superior performance on image classification tasks. This raises a central question: how are Vision Transformers solving these tasks? Are they acting like convolutional networks, or learning entirely different visual representations? Analyzing the internal representation structure of ViTs and CNNs on image classification benchmarks, we find striking differences between the two architectures, such as ViT having more uniform representations across all layers. We explore how these differences arise, finding crucial roles played by self-attention, which enables early aggregation of global information, and ViT residual connections, which strongly propagate features from lower to higher layers. We study the ramifications for spatial localization, demonstrating ViTs successfully preserve input spatial information, with noticeable effects from different classification methods. Finally, we study the effect of (pretraining) dataset scale on intermediate features and transfer learning, and conclude with a discussion on connections to new architectures such as the MLP-Mixer.
| null |
Optimization-Based Algebraic Multigrid Coarsening Using Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/6531b32f8d02fece98ff36a64a7c8260-Abstract.html
|
Ali Taghibakhshi, Scott MacLachlan, Luke Olson, Matthew West
|
https://papers.nips.cc/paper_files/paper/2021/hash/6531b32f8d02fece98ff36a64a7c8260-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12551-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6531b32f8d02fece98ff36a64a7c8260-Paper.pdf
|
https://openreview.net/forum?id=WcY6S6PDuly
| null |
Large sparse linear systems of equations are ubiquitous in science and engineering, such as those arising from discretizations of partial differential equations. Algebraic multigrid (AMG) methods are one of the most common methods of solving such linear systems, with an extensive body of underlying mathematical theory. A system of linear equations defines a graph on the set of unknowns and each level of a multigrid solver requires the selection of an appropriate coarse graph along with restriction and interpolation operators that map to and from the coarse representation. The efficiency of the multigrid solver depends critically on this selection and many selection methods have been developed over the years. Recently, it has been demonstrated that it is possible to directly learn the AMG interpolation and restriction operators, given a coarse graph selection. In this paper, we consider the complementary problem of learning to coarsen graphs for a multigrid solver, a necessary step in developing fully learnable AMG methods. We propose a method using a reinforcement learning (RL) agent based on graph neural networks (GNNs), which can learn to perform graph coarsening on small planar training graphs and then be applied to unstructured large planar graphs, assuming bounded node degree. We demonstrate that this method can produce better coarse graphs than existing algorithms, even as the graph size increases and other properties of the graph are varied. We also propose an efficient inference procedure for performing graph coarsening that results in linear time complexity in graph size.
| null |
Delayed Propagation Transformer: A Universal Computation Engine towards Practical Control in Cyber-Physical Systems
|
https://papers.nips.cc/paper_files/paper/2021/hash/654516d1b4df6917094de807156adc14-Abstract.html
|
Wenqing Zheng, Qiangqiang Guo, Hao Yang, Peihao Wang, Zhangyang Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/654516d1b4df6917094de807156adc14-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12552-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/654516d1b4df6917094de807156adc14-Paper.pdf
|
https://openreview.net/forum?id=DJ6fmWG4qvW
|
https://papers.nips.cc/paper_files/paper/2021/file/654516d1b4df6917094de807156adc14-Supplemental.zip
|
Multi-agent control is a central theme in the Cyber-Physical Systems (CPS). However, current control methods either receive non-Markovian states due to insufficient sensing and decentralized design, or suffer from poor convergence. This paper presents the Delayed Propagation Transformer (DePT), a new transformer-based model that specializes in the global modeling of CPS while taking into account the immutable constraints from the physical world. DePT induces a cone-shaped spatial-temporal attention prior, which injects the information propagation and aggregation principles and enables a global view. With physical constraint inductive bias baked into its design, our DePT is ready to plug and play for a broad class of multi-agent systems. The experimental results on one of the most challenging CPS -- network-scale traffic signal control system in the open world -- show that our model outperformed the state-of-the-art expert methods on synthetic and real-world datasets. Our codes are released at: https://github.com/VITA-Group/DePT.
| null |
Explaining Latent Representations with a Corpus of Examples
|
https://papers.nips.cc/paper_files/paper/2021/hash/65658fde58ab3c2b6e5132a39fae7cb9-Abstract.html
|
Jonathan Crabbe, Zhaozhi Qian, Fergus Imrie, Mihaela van der Schaar
|
https://papers.nips.cc/paper_files/paper/2021/hash/65658fde58ab3c2b6e5132a39fae7cb9-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12553-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/65658fde58ab3c2b6e5132a39fae7cb9-Paper.pdf
|
https://openreview.net/forum?id=PIcuKeiWvj-
|
https://papers.nips.cc/paper_files/paper/2021/file/65658fde58ab3c2b6e5132a39fae7cb9-Supplemental.pdf
|
Modern machine learning models are complicated. Most of them rely on convoluted latent representations of their input to issue a prediction. To achieve greater transparency than a black-box that connects inputs to predictions, it is necessary to gain a deeper understanding of these latent representations. To that aim, we propose SimplEx: a user-centred method that provides example-based explanations with reference to a freely selected set of examples, called the corpus. SimplEx uses the corpus to improve the user’s understanding of the latent space with post-hoc explanations answering two questions: (1) Which corpus examples explain the prediction issued for a given test example? (2) What features of these corpus examples are relevant for the model to relate them to the test example? SimplEx provides an answer by reconstructing the test latent representation as a mixture of corpus latent representations. Further, we propose a novel approach, the integrated Jacobian, that allows SimplEx to make explicit the contribution of each corpus feature in the mixture. Through experiments on tasks ranging from mortality prediction to image classification, we demonstrate that these decompositions are robust and accurate. With illustrative use cases in medicine, we show that SimplEx empowers the user by highlighting relevant patterns in the corpus that explain model representations. Moreover, we demonstrate how the freedom in choosing the corpus allows the user to have personalized explanations in terms of examples that are meaningful for them.
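The core decomposition described in this abstract, reconstructing a test latent as a convex mixture of corpus latents, can be sketched in a few lines. This is a minimal NumPy sketch under assumed conventions (softmax parameterisation, step size, iteration count); the integrated-Jacobian feature attribution is omitted.

```python
import numpy as np

def corpus_mixture_weights(H_corpus, h_test, steps=2000, lr=0.5):
    """Fit simplex weights w so that H_corpus.T @ w approximates h_test.

    H_corpus: (n_corpus, d) corpus latents; h_test: (d,) test latent.
    Softmax-parameterised gradient descent keeps w >= 0 and sum(w) == 1.
    """
    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    z = np.zeros(H_corpus.shape[0])
    for _ in range(steps):
        w = softmax(z)
        residual = h_test - H_corpus.T @ w          # (d,)
        grad_w = -2.0 * H_corpus @ residual          # gradient of ||residual||^2 w.r.t. w
        grad_z = w * (grad_w - w @ grad_w)           # chain rule through the softmax
        z -= lr * grad_z
    return softmax(z)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H = rng.normal(size=(50, 8))                     # toy corpus latents
    h = 0.6 * H[3] + 0.4 * H[17]                     # test latent inside the corpus hull
    w = corpus_mixture_weights(H, h)
    print(np.argsort(w)[-2:], float(np.linalg.norm(h - H.T @ w)))
```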
| null |
Explaining heterogeneity in medial entorhinal cortex with task-driven neural networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/656f0dbf9392657eed7feefc486781fb-Abstract.html
|
Aran Nayebi, Alexander Attinger, Malcolm Campbell, Kiah Hardcastle, Isabel Low, Caitlin S Mallory, Gabriel Mel, Ben Sorscher, Alex H Williams, Surya Ganguli, Lisa Giocomo, Dan Yamins
|
https://papers.nips.cc/paper_files/paper/2021/hash/656f0dbf9392657eed7feefc486781fb-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12554-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/656f0dbf9392657eed7feefc486781fb-Paper.pdf
|
https://openreview.net/forum?id=_vypaVMDs51
|
https://papers.nips.cc/paper_files/paper/2021/file/656f0dbf9392657eed7feefc486781fb-Supplemental.pdf
|
Medial entorhinal cortex (MEC) supports a wide range of navigational and memory related behaviors. Well-known experimental results have revealed specialized cell types in MEC --- e.g. grid, border, and head-direction cells --- whose highly stereotypical response profiles are suggestive of the role they might play in supporting MEC functionality. However, the majority of MEC neurons do not exhibit stereotypical firing patterns. How should the response profiles of these more "heterogeneous" cells be described, and how do they contribute to behavior? In this work, we took a computational approach to addressing these questions. We first performed a statistical analysis that shows that heterogeneous MEC cells are just as reliable in their response patterns as the more stereotypical cell types, suggesting that they have a coherent functional role. Next, we evaluated a spectrum of candidate models in terms of their ability to describe the response profiles of both stereotypical and heterogeneous MEC cells. We found that recently developed task-optimized neural network models are substantially better than traditional grid cell-centric models at matching most MEC neuronal response profiles --- including those of grid cells themselves --- despite not being explicitly trained for this purpose. Specific choices of network architecture (such as gated nonlinearities and an explicit intermediate place cell representation) have an important effect on the ability of the model to generalize to novel scenarios, with the best of these models closely approaching the noise ceiling of the data itself. We then performed in silico experiments on this model to address questions involving the relative functional relevance of various cell types, finding that heterogeneous cells are likely to be just as involved in downstream functional outcomes (such as path integration) as grid and border cells. Finally, inspired by recent data showing that, going beyond their spatial response selectivity, MEC cells are also responsive to non-spatial rewards, we introduce a new MEC model that performs reward-modulated path integration. We find that this unified model matches neural recordings across all variable-reward conditions. Taken together, our results point toward a conceptually principled goal-driven modeling approach for moving future experimental and computational efforts beyond overly-simplistic single-cell stereotypes.
| null |
Beyond Smoothness: Incorporating Low-Rank Analysis into Nonparametric Density Estimation
|
https://papers.nips.cc/paper_files/paper/2021/hash/6591d327f6f731e589b0e869adadf940-Abstract.html
|
Robert A. Vandermeulen, Antoine Ledent
|
https://papers.nips.cc/paper_files/paper/2021/hash/6591d327f6f731e589b0e869adadf940-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12555-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6591d327f6f731e589b0e869adadf940-Paper.pdf
|
https://openreview.net/forum?id=uholDBWSVP
|
https://papers.nips.cc/paper_files/paper/2021/file/6591d327f6f731e589b0e869adadf940-Supplemental.zip
|
The construction and theoretical analysis of the most popular universally consistent nonparametric density estimators hinge on one functional property: smoothness. In this paper we investigate the theoretical implications of incorporating a multi-view latent variable model, a type of low-rank model, into nonparametric density estimation. To do this we perform extensive analysis on histogram-style estimators that integrate a multi-view model. Our analysis culminates in showing that there exists a universally consistent histogram-style estimator that converges to any multi-view model with a finite number of Lipschitz continuous components at a rate of $\widetilde{O}(1/\sqrt[3]{n})$ in $L^1$ error. In contrast, the standard histogram estimator can converge at a rate slower than $1/\sqrt[d]{n}$ on the same class of densities. We also introduce a new nonparametric latent variable model based on the Tucker decomposition. A rudimentary implementation of our estimators experimentally demonstrates a considerable performance improvement over the standard histogram estimator. We also provide a thorough analysis of the sample complexity of our Tucker decomposition-based model and a variety of other results. Thus, our paper provides solid theoretical foundations for extending low-rank techniques to the nonparametric setting.
| null |
Multi-View Representation Learning via Total Correlation Objective
|
https://papers.nips.cc/paper_files/paper/2021/hash/65a99bb7a3115fdede20da98b08a370f-Abstract.html
|
HyeongJoo Hwang, Geon-Hyeong Kim, Seunghoon Hong, Kee-Eung Kim
|
https://papers.nips.cc/paper_files/paper/2021/hash/65a99bb7a3115fdede20da98b08a370f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12556-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/65a99bb7a3115fdede20da98b08a370f-Paper.pdf
|
https://openreview.net/forum?id=SV4NhqUoO8
|
https://papers.nips.cc/paper_files/paper/2021/file/65a99bb7a3115fdede20da98b08a370f-Supplemental.pdf
|
Multi-View Representation Learning (MVRL) aims to discover a shared representation of observations from different views with the complex underlying correlation. In this paper, we propose a variational approach which casts MVRL as maximizing the amount of total correlation reduced by the representation, aiming to learn a shared latent representation that is informative yet succinct to capture the correlation among multiple views. To this end, we introduce a tractable surrogate objective function under the proposed framework, which allows our method to fuse and calibrate the observations in the representation space. From the information-theoretic perspective, we show that our framework subsumes existing multi-view generative models. Lastly, we show that our approach straightforwardly extends to the Partial MVRL (PMVRL) setting, where the observations are missing without any regular pattern. We demonstrate the effectiveness of our approach in the multi-view translation and classification tasks, outperforming strong baseline methods.
| null |
FACMAC: Factored Multi-Agent Centralised Policy Gradients
|
https://papers.nips.cc/paper_files/paper/2021/hash/65b9eea6e1cc6bb9f0cd2a47751a186f-Abstract.html
|
Bei Peng, Tabish Rashid, Christian Schroeder de Witt, Pierre-Alexandre Kamienny, Philip Torr, Wendelin Boehmer, Shimon Whiteson
|
https://papers.nips.cc/paper_files/paper/2021/hash/65b9eea6e1cc6bb9f0cd2a47751a186f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12557-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/65b9eea6e1cc6bb9f0cd2a47751a186f-Paper.pdf
|
https://openreview.net/forum?id=wZYWwJvkneF
| null |
We propose FACtored Multi-Agent Centralised policy gradients (FACMAC), a new method for cooperative multi-agent reinforcement learning in both discrete and continuous action spaces. Like MADDPG, a popular multi-agent actor-critic method, our approach uses deep deterministic policy gradients to learn policies. However, FACMAC learns a centralised but factored critic, which combines per-agent utilities into the joint action-value function via a non-linear monotonic function, as in QMIX, a popular multi-agent $Q$-learning algorithm. However, unlike QMIX, there are no inherent constraints on factoring the critic. We thus also employ a nonmonotonic factorisation and empirically demonstrate that its increased representational capacity allows it to solve some tasks that cannot be solved with monolithic, or monotonically factored critics. In addition, FACMAC uses a centralised policy gradient estimator that optimises over the entire joint action space, rather than optimising over each agent's action space separately as in MADDPG. This allows for more coordinated policy changes and fully reaps the benefits of a centralised critic. We evaluate FACMAC on variants of the multi-agent particle environments, a novel multi-agent MuJoCo benchmark, and a challenging set of StarCraft II micromanagement tasks. Empirical results demonstrate FACMAC's superior performance over MADDPG and other baselines on all three domains.
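The monotonic mixing mentioned in this abstract (per-agent utilities combined by a mixing network whose weights are kept non-negative, in the style of QMIX) can be sketched roughly as below. Layer sizes and the hypernetwork layout are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MonotonicMixer(nn.Module):
    """Mixes per-agent utilities into a joint value, monotonic in each utility.

    Monotonicity is enforced by taking the absolute value of hypernetwork
    outputs used as mixing weights (QMIX-style); biases are unconstrained.
    """
    def __init__(self, n_agents, state_dim, embed_dim=32):
        super().__init__()
        self.n_agents, self.embed_dim = n_agents, embed_dim
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(nn.Linear(state_dim, embed_dim),
                                      nn.ReLU(), nn.Linear(embed_dim, 1))

    def forward(self, agent_utils, state):
        # agent_utils: (batch, n_agents), state: (batch, state_dim)
        b = agent_utils.size(0)
        w1 = torch.abs(self.hyper_w1(state)).view(b, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(b, 1, self.embed_dim)
        hidden = torch.relu(torch.bmm(agent_utils.unsqueeze(1), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(b, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(b, 1, 1)
        return (torch.bmm(hidden, w2) + b2).view(b, 1)

if __name__ == "__main__":
    mixer = MonotonicMixer(n_agents=3, state_dim=10)
    q_joint = mixer(torch.randn(4, 3), torch.randn(4, 10))
    print(q_joint.shape)  # torch.Size([4, 1])
```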
| null |
EDGE: Explaining Deep Reinforcement Learning Policies
|
https://papers.nips.cc/paper_files/paper/2021/hash/65c89f5a9501a04c073b354f03791b1f-Abstract.html
|
Wenbo Guo, Xian Wu, Usmann Khan, Xinyu Xing
|
https://papers.nips.cc/paper_files/paper/2021/hash/65c89f5a9501a04c073b354f03791b1f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12558-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/65c89f5a9501a04c073b354f03791b1f-Paper.pdf
|
https://openreview.net/forum?id=Wp3we5kv6P
|
https://papers.nips.cc/paper_files/paper/2021/file/65c89f5a9501a04c073b354f03791b1f-Supplemental.pdf
|
With the rapid development of deep reinforcement learning (DRL) techniques, there is an increasing need to understand and interpret DRL policies. While recent research has developed explanation methods to interpret how an agent determines its moves, they cannot capture the importance of actions/states to a game's final result. In this work, we propose a novel self-explainable model that augments a Gaussian process with a customized kernel function and an interpretable predictor. Together with the proposed model, we also develop a parameter learning procedure that leverages inducing points and variational inference to improve learning efficiency. Using our proposed model, we can predict an agent's final rewards from its game episodes and extract time step importance within episodes as strategy-level explanations for that agent. Through experiments on Atari and MuJoCo games, we verify the explanation fidelity of our method and demonstrate how to employ interpretation to understand agent behavior, discover policy vulnerabilities, remediate policy errors, and even defend against adversarial attacks.
| null |
Learning to Assimilate in Chaotic Dynamical Systems
|
https://papers.nips.cc/paper_files/paper/2021/hash/65cc2c8205a05d7379fa3a6386f710e1-Abstract.html
|
Michael McCabe, Jed Brown
|
https://papers.nips.cc/paper_files/paper/2021/hash/65cc2c8205a05d7379fa3a6386f710e1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12559-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/65cc2c8205a05d7379fa3a6386f710e1-Paper.pdf
|
https://openreview.net/forum?id=ctusEbqyLwO
|
https://papers.nips.cc/paper_files/paper/2021/file/65cc2c8205a05d7379fa3a6386f710e1-Supplemental.pdf
|
The accuracy of simulation-based forecasting in chaotic systems is heavily dependent on high-quality estimates of the system state at the beginning of the forecast. Data assimilation methods are used to infer these initial conditions by systematically combining noisy, incomplete observations and numerical models of system dynamics to produce highly effective estimation schemes. We introduce a self-supervised framework, which we call \textit{amortized assimilation}, for learning to assimilate in dynamical systems. Amortized assimilation combines deep learning-based denoising with differentiable simulation, using independent neural networks to assimilate specific observation types while connecting the gradient flow between these sub-tasks with differentiable simulation and shared recurrent memory. This hybrid architecture admits a self-supervised training objective which is minimized by an unbiased estimator of the true system state even in the presence of only noisy training data. Numerical experiments across several chaotic benchmark systems highlight the improved effectiveness of our approach compared to widely-used data assimilation methods.
| null |
Object-aware Contrastive Learning for Debiased Scene Representation
|
https://papers.nips.cc/paper_files/paper/2021/hash/65d2ea03425887a717c435081cfc5dbb-Abstract.html
|
Sangwoo Mo, Hyunwoo Kang, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin
|
https://papers.nips.cc/paper_files/paper/2021/hash/65d2ea03425887a717c435081cfc5dbb-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12560-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/65d2ea03425887a717c435081cfc5dbb-Paper.pdf
|
https://openreview.net/forum?id=t4485RO6O8P
|
https://papers.nips.cc/paper_files/paper/2021/file/65d2ea03425887a717c435081cfc5dbb-Supplemental.pdf
|
Contrastive self-supervised learning has shown impressive results in learning visual representations from unlabeled images by enforcing invariance against different data augmentations. However, the learned representations are often contextually biased to the spurious scene correlations of different objects or object and background, which may harm their generalization on downstream tasks. To tackle the issue, we develop a novel object-aware contrastive learning framework that first (a) localizes objects in a self-supervised manner and then (b) debiases scene correlations via appropriate data augmentations considering the inferred object locations. For (a), we propose the contrastive class activation map (ContraCAM), which finds the most discriminative regions (e.g., objects) in the image compared to the other images using the contrastively trained models. We further improve the ContraCAM to detect multiple objects and entire shapes via an iterative refinement procedure. For (b), we introduce two data augmentations based on ContraCAM, object-aware random crop and background mixup, which reduce contextual and background biases during contrastive self-supervised learning, respectively. Our experiments demonstrate the effectiveness of our representation learning framework, particularly when trained on multi-object images or evaluated on background (and distribution) shifted images. Code is available at https://github.com/alinlab/object-aware-contrastive.
| null |
Evaluating Efficient Performance Estimators of Neural Architectures
|
https://papers.nips.cc/paper_files/paper/2021/hash/65d90fc6d307590b14e9e1800d4e8eab-Abstract.html
|
Xuefei Ning, Changcheng Tang, Wenshuo Li, Zixuan Zhou, Shuang Liang, Huazhong Yang, Yu Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/65d90fc6d307590b14e9e1800d4e8eab-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12561-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/65d90fc6d307590b14e9e1800d4e8eab-Paper.pdf
|
https://openreview.net/forum?id=Esd7tGH3Spl
|
https://papers.nips.cc/paper_files/paper/2021/file/65d90fc6d307590b14e9e1800d4e8eab-Supplemental.pdf
|
Conducting efficient performance estimations of neural architectures is a major challenge in neural architecture search (NAS). To reduce the architecture training costs in NAS, one-shot estimators (OSEs) amortize the architecture training costs by sharing the parameters of one supernet between all architectures. Recently, zero-shot estimators (ZSEs) that involve no training are proposed to further reduce the architecture evaluation cost. Despite the high efficiency of these estimators, the quality of such estimations has not been thoroughly studied. In this paper, we conduct an extensive and organized assessment of OSEs and ZSEs on five NAS benchmarks: NAS-Bench-101/201/301, and NDS ResNet/ResNeXt-A. Specifically, we employ a set of NAS-oriented criteria to study the behavior of OSEs and ZSEs, and reveal their biases and variances. After analyzing how and why the OSE estimations are unsatisfying, we explore how to mitigate the correlation gap of OSEs from three perspectives. Through our analysis, we give out suggestions for future application and development of efficient architecture performance estimators. Furthermore, the analysis framework proposed in our work could be utilized in future research to give a more comprehensive understanding of newly designed architecture performance estimators. The code is available at https://github.com/walkerning/aw_nas.
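Among the simplest NAS-oriented criteria used in assessments of this kind are ranking correlation between an estimator's scores and the true accuracies, and precision among the top-ranked architectures. The sketch below is a generic illustration of those two criteria with made-up data, not the paper's full evaluation protocol.

```python
import numpy as np
from scipy import stats

def estimator_quality(proxy_scores, true_accuracies, top_fraction=0.1):
    """Two common criteria: Kendall's tau over all architectures, and
    precision@top-k (how many of the proxy's top picks are truly top)."""
    tau, _ = stats.kendalltau(proxy_scores, true_accuracies)
    k = max(1, int(len(proxy_scores) * top_fraction))
    top_proxy = set(np.argsort(proxy_scores)[-k:])
    top_true = set(np.argsort(true_accuracies)[-k:])
    return tau, len(top_proxy & top_true) / k

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    acc = rng.uniform(0.85, 0.95, size=200)            # hypothetical true accuracies
    proxy = acc + rng.normal(scale=0.02, size=200)      # noisy one-shot-style estimates
    print(estimator_quality(proxy, acc))
```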
| null |
A-NeRF: Articulated Neural Radiance Fields for Learning Human Shape, Appearance, and Pose
|
https://papers.nips.cc/paper_files/paper/2021/hash/65fc9fb4897a89789352e211ca2d398f-Abstract.html
|
Shih-Yang Su, Frank Yu, Michael Zollhoefer, Helge Rhodin
|
https://papers.nips.cc/paper_files/paper/2021/hash/65fc9fb4897a89789352e211ca2d398f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12562-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/65fc9fb4897a89789352e211ca2d398f-Paper.pdf
|
https://openreview.net/forum?id=lwwEh0OM61b
|
https://papers.nips.cc/paper_files/paper/2021/file/65fc9fb4897a89789352e211ca2d398f-Supplemental.zip
|
While deep learning reshaped the classical motion capture pipeline with feed-forward networks, generative models are required to recover fine alignment via iterative refinement. Unfortunately, the existing models are usually hand-crafted or learned in controlled conditions, only applicable to limited domains. We propose a method to learn a generative neural body model from unlabelled monocular videos by extending Neural Radiance Fields (NeRFs). We equip them with a skeleton to apply to time-varying and articulated motion. A key insight is that implicit models require the inverse of the forward kinematics used in explicit surface models. Our reparameterization defines spatial latent variables relative to the pose of body parts and thereby overcomes ill-posed inverse operations with an overparameterization. This enables learning volumetric body shape and appearance from scratch while jointly refining the articulated pose; all without ground truth labels for appearance, pose, or 3D shape on the input videos. When used for novel-view-synthesis and motion capture, our neural model improves accuracy on diverse datasets.
| null |
Differential Privacy Over Riemannian Manifolds
|
https://papers.nips.cc/paper_files/paper/2021/hash/6600e06fe9350b62c1e343504d4a7b86-Abstract.html
|
Matthew Reimherr, Karthik Bharath, Carlos Soto
|
https://papers.nips.cc/paper_files/paper/2021/hash/6600e06fe9350b62c1e343504d4a7b86-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12563-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6600e06fe9350b62c1e343504d4a7b86-Paper.pdf
|
https://openreview.net/forum?id=6oyeQ-1c_91
|
https://papers.nips.cc/paper_files/paper/2021/file/6600e06fe9350b62c1e343504d4a7b86-Supplemental.pdf
|
In this work we consider the problem of releasing a differentially private statistical summary that resides on a Riemannian manifold. We present an extension of the Laplace or K-norm mechanism that utilizes intrinsic distances and volumes on the manifold. We also consider in detail the specific case where the summary is the Fr\'echet mean of data residing on a manifold. We demonstrate that our mechanism is rate optimal and depends only on the dimension of the manifold, not on the dimension of any ambient space, while also showing how ignoring the manifold structure can decrease the utility of the sanitized summary. We illustrate our framework in two examples of particular interest in statistics: the space of symmetric positive definite matrices, which is used for covariance matrices, and the sphere, which can be used as a space for modeling discrete distributions.
| null |
How can classical multidimensional scaling go wrong?
|
https://papers.nips.cc/paper_files/paper/2021/hash/66121d1f782d29b62a286909165517bc-Abstract.html
|
Rishi Sonthalia, Greg Van Buskirk, Benjamin Raichel, Anna Gilbert
|
https://papers.nips.cc/paper_files/paper/2021/hash/66121d1f782d29b62a286909165517bc-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12564-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/66121d1f782d29b62a286909165517bc-Paper.pdf
|
https://openreview.net/forum?id=0jHeZ7-ehGr
|
https://papers.nips.cc/paper_files/paper/2021/file/66121d1f782d29b62a286909165517bc-Supplemental.zip
|
Given a matrix $D$ describing the pairwise dissimilarities of a data set, a common task is to embed the data points into Euclidean space. The classical multidimensional scaling (cMDS) algorithm is a widespread method to do this. However, theoretical analysis of the robustness of the algorithm and an in-depth analysis of its performance on non-Euclidean metrics is lacking. In this paper, we derive a formula, based on the eigenvalues of a matrix obtained from $D$, for the Frobenius norm of the difference between $D$ and the metric $D_{\text{cmds}}$ returned by cMDS. This error analysis leads us to the conclusion that when the derived matrix has a significant number of negative eigenvalues, then $\|D-D_{\text{cmds}}\|_F$, after initially decreasing, will eventually increase as we increase the dimension. Hence, counterintuitively, the quality of the embedding degrades as we increase the dimension. We empirically verify that the Frobenius norm increases as we increase the dimension for a variety of non-Euclidean metrics. We also show on several benchmark datasets that this degradation in the embedding results in the classification accuracy of both simple (e.g., 1-nearest neighbor) and complex (e.g., multi-layer neural nets) classifiers decreasing as we increase the embedding dimension. Finally, our analysis leads us to a new efficiently computable algorithm that returns a matrix $D_l$ that is at least as close to the original distances as $D_t$ (the Euclidean metric closest in $\ell_2$ distance). While $D_l$ is not metric, when given as input to cMDS instead of $D$, it empirically results in solutions whose distance to $D$ does not increase when we increase the dimension and the classification accuracy degrades less than the cMDS solution.
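For concreteness, classical MDS and the error $\|D - D_{\text{cmds}}\|_F$ studied above can be computed in a few lines of NumPy. The sketch below embeds a dissimilarity matrix at several dimensions and reports the Frobenius error; the toy non-Euclidean input (a power of the Euclidean metric) is an assumption chosen only for illustration.

```python
import numpy as np

def classical_mds(D, dim):
    """Classical MDS: double-center the squared dissimilarities, keep the top
    `dim` non-negative eigenpairs, and return the embedding coordinates."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    evals, evecs = np.linalg.eigh(B)
    order = np.argsort(evals)[::-1][:dim]
    lam = np.clip(evals[order], 0.0, None)        # drop negative eigenvalues
    return evecs[:, order] * np.sqrt(lam)

def frobenius_error(D, X):
    D_cmds = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.linalg.norm(D - D_cmds)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(60, 5))
    # Snowflake-style metric (Euclidean distance to the power 0.5): non-Euclidean.
    D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) ** 0.5
    for dim in (2, 5, 10, 30):
        print(dim, round(float(frobenius_error(D, classical_mds(D, dim))), 4))
```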
| null |
Modeling Heterogeneous Hierarchies with Relation-specific Hyperbolic Cones
|
https://papers.nips.cc/paper_files/paper/2021/hash/662a2e96162905620397b19c9d249781-Abstract.html
|
Yushi Bai, Zhitao Ying, Hongyu Ren, Jure Leskovec
|
https://papers.nips.cc/paper_files/paper/2021/hash/662a2e96162905620397b19c9d249781-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12565-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/662a2e96162905620397b19c9d249781-Paper.pdf
|
https://openreview.net/forum?id=chuGnZMuye
|
https://papers.nips.cc/paper_files/paper/2021/file/662a2e96162905620397b19c9d249781-Supplemental.pdf
|
Hierarchical relations are prevalent and indispensable for organizing human knowledge captured by a knowledge graph (KG). The key property of hierarchical relations is that they induce a partial ordering over the entities, which needs to be modeled in order to allow for hierarchical reasoning. However, current KG embeddings can model only a single global hierarchy (single global partial ordering) and fail to model multiple heterogeneous hierarchies that exist in a single KG. Here we present ConE (Cone Embedding), a KG embedding model that is able to simultaneously model multiple hierarchical as well as non-hierarchical relations in a knowledge graph. ConE embeds entities into hyperbolic cones and models relations as transformations between the cones. In particular, ConE uses cone containment constraints in different subspaces of the hyperbolic embedding space to capture multiple heterogeneous hierarchies. Experiments on standard knowledge graph benchmarks show that ConE obtains state-of-the-art performance on hierarchical reasoning tasks as well as knowledge graph completion task on hierarchical graphs. In particular, our approach yields new state-of-the-art Hits@1 of 45.3% on WN18RR and 16.1% on DDB14 (0.231 MRR). As for hierarchical reasoning task, our approach outperforms previous best results by an average of 20% across the three datasets.
| null |
Non-asymptotic Error Bounds for Bidirectional GANs
|
https://papers.nips.cc/paper_files/paper/2021/hash/66be31e4c40d676991f2405aaecc6934-Abstract.html
|
Shiao Liu, Yunfei Yang, Jian Huang, Yuling Jiao, Yang Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/66be31e4c40d676991f2405aaecc6934-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12566-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/66be31e4c40d676991f2405aaecc6934-Paper.pdf
|
https://openreview.net/forum?id=Ifo8sa57U2f
|
https://papers.nips.cc/paper_files/paper/2021/file/66be31e4c40d676991f2405aaecc6934-Supplemental.pdf
|
We derive nearly sharp bounds for the bidirectional GAN (BiGAN) estimation error under the Dudley distance between the latent joint distribution and the data joint distribution with appropriately specified architecture of the neural networks used in the model. To the best of our knowledge, this is the first theoretical guarantee for the bidirectional GAN learning approach. An appealing feature of our results is that they do not assume the reference and the data distributions to have the same dimensions or these distributions to have bounded support. These assumptions are commonly assumed in the existing convergence analysis of the unidirectional GANs but may not be satisfied in practice. Our results are also applicable to the Wasserstein bidirectional GAN if the target distribution is assumed to have a bounded support. To prove these results, we construct neural network functions that push forward an empirical distribution to another arbitrary empirical distribution on a possibly different-dimensional space. We also develop a novel decomposition of the integral probability metric for the error analysis of bidirectional GANs. These basic theoretical results are of independent interest and can be applied to other related learning problems.
| null |
Confidence-Aware Imitation Learning from Demonstrations with Varying Optimality
|
https://papers.nips.cc/paper_files/paper/2021/hash/670e8a43b246801ca1eaca97b3e19189-Abstract.html
|
Songyuan Zhang, ZHANGJIE CAO, Dorsa Sadigh, Yanan Sui
|
https://papers.nips.cc/paper_files/paper/2021/hash/670e8a43b246801ca1eaca97b3e19189-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12567-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/670e8a43b246801ca1eaca97b3e19189-Paper.pdf
|
https://openreview.net/forum?id=RcfJUrZzhoL
|
https://papers.nips.cc/paper_files/paper/2021/file/670e8a43b246801ca1eaca97b3e19189-Supplemental.pdf
|
Most existing imitation learning approaches assume the demonstrations are drawn from experts who are optimal, but relaxing this assumption enables us to use a wider range of data. Standard imitation learning may learn a suboptimal policy from demonstrations with varying optimality. Prior works use confidence scores or rankings to capture beneficial information from demonstrations with varying optimality, but they suffer from many limitations, e.g., manually annotated confidence scores or high average optimality of demonstrations. In this paper, we propose a general framework to learn from demonstrations with varying optimality that jointly learns the confidence score and a well-performing policy. Our approach, Confidence-Aware Imitation Learning (CAIL), learns a well-performing policy from confidence-reweighted demonstrations, while using an outer loss to track the performance of our model and to learn the confidence. We provide theoretical guarantees on the convergence of CAIL and evaluate its performance in both simulated and real robot experiments. Our results show that CAIL significantly outperforms other imitation learning methods from demonstrations with varying optimality. We further show that even without access to any optimal demonstrations, CAIL can still learn a successful policy, and outperforms prior work.
| null |
Answering Complex Causal Queries With the Maximum Causal Set Effect
|
https://papers.nips.cc/paper_files/paper/2021/hash/670f0c94cc5271fe6017eeffa642b7d3-Abstract.html
|
Zachary Markovich
|
https://papers.nips.cc/paper_files/paper/2021/hash/670f0c94cc5271fe6017eeffa642b7d3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12568-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/670f0c94cc5271fe6017eeffa642b7d3-Paper.pdf
|
https://openreview.net/forum?id=9B0JMeySlZM
|
https://papers.nips.cc/paper_files/paper/2021/file/670f0c94cc5271fe6017eeffa642b7d3-Supplemental.pdf
|
The standard tools of causal inference have been developed to answer simple causal queries which can be easily formalized as a small number of statistical estimands in the context of a particular structural causal model (SCM); however, scientific theories often make diffuse predictions about a large number of causal variables. This article proposes a framework for parameterizing such complex causal queries as the maximum difference in causal effects associated with two sets of causal variables that have a researcher specified probability of occurring. We term this estimand the Maximum Causal Set Effect (MCSE) and develop an estimator for it that is asymptotically consistent and conservative in finite samples under assumptions that are standard in the causal inference literature. This estimator is also asymptotically normal and amenable to the non-parametric bootstrap, facilitating classical statistical inference about this novel estimand. We compare this estimator to more common latent variable approaches and find that it can uncover larger causal effects in both real world and simulated data.
| null |
Identifiability in inverse reinforcement learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/671f0311e2754fcdd37f70a8550379bc-Abstract.html
|
Haoyang Cao, Samuel Cohen, Lukasz Szpruch
|
https://papers.nips.cc/paper_files/paper/2021/hash/671f0311e2754fcdd37f70a8550379bc-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12569-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/671f0311e2754fcdd37f70a8550379bc-Paper.pdf
|
https://openreview.net/forum?id=VtlGqVzja48
|
https://papers.nips.cc/paper_files/paper/2021/file/671f0311e2754fcdd37f70a8550379bc-Supplemental.pdf
|
Inverse reinforcement learning attempts to reconstruct the reward function in a Markov decision problem, using observations of agent actions. As already observed in Russell [1998] the problem is ill-posed, and the reward function is not identifiable, even under the presence of perfect information about optimal behavior. We provide a resolution to this non-identifiability for problems with entropy regularization. For a given environment, we fully characterize the reward functions leading to a given policy and demonstrate that, given demonstrations of actions for the same reward under two distinct discount factors, or under sufficiently different environments, the unobserved reward can be recovered up to a constant. We also give general necessary and sufficient conditions for reconstruction of time-homogeneous rewards on finite horizons, and for action-independent rewards, generalizing recent results of Kim et al. [2021] and Fu et al. [2018].
| null |
A Probabilistic State Space Model for Joint Inference from Differential Equations and Data
|
https://papers.nips.cc/paper_files/paper/2021/hash/6734fa703f6633ab896eecbdfad8953a-Abstract.html
|
Jonathan Schmidt, Nicholas Krämer, Philipp Hennig
|
https://papers.nips.cc/paper_files/paper/2021/hash/6734fa703f6633ab896eecbdfad8953a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12570-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6734fa703f6633ab896eecbdfad8953a-Paper.pdf
|
https://openreview.net/forum?id=7e4FLufwij
|
https://papers.nips.cc/paper_files/paper/2021/file/6734fa703f6633ab896eecbdfad8953a-Supplemental.pdf
|
Mechanistic models with differential equations are a key component of scientific applications of machine learning. Inference in such models is usually computationally demanding because it involves repeatedly solving the differential equation. The main problem here is that the numerical solver is hard to combine with standard inference techniques. Recent work in probabilistic numerics has developed a new class of solvers for ordinary differential equations (ODEs) that phrase the solution process directly in terms of Bayesian filtering. We here show that this allows such methods to be combined very directly, with conceptual and numerical ease, with latent force models in the ODE itself. It then becomes possible to perform approximate Bayesian inference on the latent force as well as the ODE solution in a single, linear complexity pass of an extended Kalman filter / smoother — that is, at the cost of computing a single ODE solution. We demonstrate the expressiveness and performance of the algorithm by training, among others, a non-parametric SIRD model on data from the COVID-19 outbreak.
| null |
On Plasticity, Invariance, and Mutually Frozen Weights in Sequential Task Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/6738fc33dd0b3906cd3626397cd247a7-Abstract.html
|
Julian Zilly, Alessandro Achille, Andrea Censi, Emilio Frazzoli
|
https://papers.nips.cc/paper_files/paper/2021/hash/6738fc33dd0b3906cd3626397cd247a7-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12571-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6738fc33dd0b3906cd3626397cd247a7-Paper.pdf
|
https://openreview.net/forum?id=Y10GtvGEgR
|
https://papers.nips.cc/paper_files/paper/2021/file/6738fc33dd0b3906cd3626397cd247a7-Supplemental.pdf
|
Plastic neural networks have the ability to adapt to new tasks. However, in a continual learning setting, the configuration of parameters learned in previous tasks can severely reduce the adaptability to future tasks. In particular, we show that, when using weight decay, weights in successive layers of a deep network may become "mutually frozen". This has a double effect: on the one hand, it makes the network updates more invariant to nuisance factors, providing a useful bias for future tasks. On the other hand, it can prevent the network from learning new tasks that require significantly different features. In this context, we find that the local input sensitivity of a deep model is correlated with its ability to adapt, thus leading to an intriguing trade-off between adaptability and invariance when training a deep model more than once. We then show that a simple intervention that "resets" the mutually frozen connections can improve transfer learning on a variety of visual classification tasks. The efficacy of "resetting" itself depends on the size of the target dataset and the difference of the pre-training and target domains, allowing us to achieve state-of-the-art results on some datasets.
| null |
Provably Efficient Black-Box Action Poisoning Attacks Against Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/678004486c119599ed7d199f47da043a-Abstract.html
|
Guanlin Liu, Lifeng LAI
|
https://papers.nips.cc/paper_files/paper/2021/hash/678004486c119599ed7d199f47da043a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12572-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/678004486c119599ed7d199f47da043a-Paper.pdf
|
https://openreview.net/forum?id=jdIR6KF-uFW
|
https://papers.nips.cc/paper_files/paper/2021/file/678004486c119599ed7d199f47da043a-Supplemental.pdf
|
Due to the broad range of applications of reinforcement learning (RL), understanding the effects of adversarial attacks against RL models is essential for their safe application. Prior theoretical works on adversarial attacks against RL mainly focus on either reward poisoning attacks or environment poisoning attacks. In this paper, we introduce a new class of attacks named action poisoning attacks, where an adversary can change the action signal selected by the agent. Compared with existing attack models, the attacker’s ability in the proposed action poisoning attack model is more restricted, which brings some design challenges. We study the action poisoning attack in both white-box and black-box settings. We introduce an adaptive attack scheme called LCB-H, which works for most RL agents in the black-box setting. We prove that the LCB-H attack can force any efficient RL agent, whose dynamic regret scales sublinearly with the total number of steps taken, to choose actions according to a policy selected by the attacker very frequently, with only sublinear cost. In addition, we apply the LCB-H attack against a very popular model-free RL algorithm: UCB-H. We show that, even in the black-box setting, by spending only logarithmic cost, the proposed LCB-H attack scheme can force the UCB-H agent to choose actions according to the policy selected by the attacker very frequently.
| null |
Fast Approximation of the Sliced-Wasserstein Distance Using Concentration of Random Projections
|
https://papers.nips.cc/paper_files/paper/2021/hash/6786f3c62fbf9021694f6e51cc07fe3c-Abstract.html
|
Kimia Nadjahi, Alain Durmus, Pierre E Jacob, Roland Badeau, Umut Simsekli
|
https://papers.nips.cc/paper_files/paper/2021/hash/6786f3c62fbf9021694f6e51cc07fe3c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12573-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6786f3c62fbf9021694f6e51cc07fe3c-Paper.pdf
|
https://openreview.net/forum?id=oa1AMhWKrS
|
https://papers.nips.cc/paper_files/paper/2021/file/6786f3c62fbf9021694f6e51cc07fe3c-Supplemental.zip
|
The Sliced-Wasserstein distance (SW) is being increasingly used in machine learning applications as an alternative to the Wasserstein distance and offers significant computational and statistical benefits. Since it is defined as an expectation over random projections, SW is commonly approximated by Monte Carlo. We adopt a new perspective to approximate SW by making use of the concentration of measure phenomenon: under mild assumptions, one-dimensional projections of a high-dimensional random vector are approximately Gaussian. Based on this observation, we develop a simple deterministic approximation for SW. Our method does not require sampling a number of random projections, and is therefore both accurate and easy to use compared to the usual Monte Carlo approximation. We derive non-asymptotic guarantees for our approach, and show that the approximation error goes to zero as the dimension increases, under a weak dependence condition on the data distribution. We validate our theoretical findings on synthetic datasets, and illustrate the proposed approximation on a generative modeling problem.
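For reference, the usual Monte Carlo estimator that the deterministic approximation above is designed to replace can be sketched as follows; it is the standard baseline, not the paper's method, and the sample sizes and projection count are illustrative assumptions.

```python
import numpy as np

def sliced_wasserstein_mc(X, Y, n_projections=200, seed=0):
    """Monte Carlo estimate of the squared 2-Sliced-Wasserstein distance between
    two empirical distributions X and Y, both of shape (n, d) with equal n."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    thetas = rng.normal(size=(n_projections, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)   # uniform directions on the sphere
    sw2 = 0.0
    for theta in thetas:
        x_proj = np.sort(X @ theta)
        y_proj = np.sort(Y @ theta)
        sw2 += np.mean((x_proj - y_proj) ** 2)                # 1D W2^2 via sorted samples
    return sw2 / n_projections

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 50))
    Y = rng.normal(loc=0.1, size=(500, 50))
    print(sliced_wasserstein_mc(X, Y))
```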
| null |
Causal Navigation by Continuous-time Neural Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/67ba02d73c54f0b83c05507b7fb7267f-Abstract.html
|
Charles Vorbach, Ramin Hasani, Alexander Amini, Mathias Lechner, Daniela Rus
|
https://papers.nips.cc/paper_files/paper/2021/hash/67ba02d73c54f0b83c05507b7fb7267f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12574-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/67ba02d73c54f0b83c05507b7fb7267f-Paper.pdf
|
https://openreview.net/forum?id=ckVbQs5zD7_
|
https://papers.nips.cc/paper_files/paper/2021/file/67ba02d73c54f0b83c05507b7fb7267f-Supplemental.pdf
|
Imitation learning enables high-fidelity, vision-based learning of policies within rich, photorealistic environments. However, such techniques often rely on traditional discrete-time neural models and face difficulties in generalizing to domain shifts by failing to account for the causal relationships between the agent and the environment. In this paper, we propose a theoretical and experimental framework for learning causal representations using continuous-time neural networks, specifically over their discrete-time counterparts. We evaluate our method in the context of visual-control learning of drones over a series of complex tasks, ranging from short- and long-term navigation, to chasing static and dynamic objects through photorealistic environments. Our results demonstrate that causal continuous-time deep models can perform robust navigation tasks, where advanced recurrent models fail. These models learn complex causal control representations directly from raw visual inputs and scale to solve a variety of tasks using imitation learning.
| null |
Global Convergence of Online Optimization for Nonlinear Model Predictive Control
|
https://papers.nips.cc/paper_files/paper/2021/hash/67d16d00201083a2b118dd5128dd6f59-Abstract.html
|
Sen Na
|
https://papers.nips.cc/paper_files/paper/2021/hash/67d16d00201083a2b118dd5128dd6f59-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12575-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/67d16d00201083a2b118dd5128dd6f59-Paper.pdf
|
https://openreview.net/forum?id=XI72RT3hnnF
|
https://papers.nips.cc/paper_files/paper/2021/file/67d16d00201083a2b118dd5128dd6f59-Supplemental.pdf
|
We study a real-time iteration (RTI) scheme for solving the online optimization problems that arise in nonlinear optimal control. The proposed RTI scheme modifies the existing RTI-based model predictive control (MPC) algorithm by selecting the stepsize of each Newton step at each sampling time using a differentiable exact augmented Lagrangian. The scheme can adaptively select the penalty parameters of the augmented Lagrangian on the fly, which are shown to be stabilized after a certain number of time periods. We prove under generic assumptions that, by involving stepsize selection instead of always taking a full Newton step (as most existing RTI schemes do), the scheme converges globally: for any initial point, the KKT residuals of the subproblems converge to zero. A key step is to show that the augmented Lagrangian keeps decreasing as the horizon moves forward. We demonstrate the global convergence behavior of the proposed RTI scheme in a numerical experiment.
| null |
Argmax Flows and Multinomial Diffusion: Learning Categorical Distributions
|
https://papers.nips.cc/paper_files/paper/2021/hash/67d96d458abdef21792e6d8e590244e7-Abstract.html
|
Emiel Hoogeboom, Didrik Nielsen, Priyank Jaini, Patrick Forré, Max Welling
|
https://papers.nips.cc/paper_files/paper/2021/hash/67d96d458abdef21792e6d8e590244e7-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12576-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/67d96d458abdef21792e6d8e590244e7-Paper.pdf
|
https://openreview.net/forum?id=6nbpPqUCIi7
|
https://papers.nips.cc/paper_files/paper/2021/file/67d96d458abdef21792e6d8e590244e7-Supplemental.pdf
|
Generative flows and diffusion models have been predominantly trained on ordinal data, for example natural images. This paper introduces two extensions of flows and diffusion for categorical data such as language or image segmentation: Argmax Flows and Multinomial Diffusion. Argmax Flows are defined by a composition of a continuous distribution (such as a normalizing flow), and an argmax function. To optimize this model, we learn a probabilistic inverse for the argmax that lifts the categorical data to a continuous space. Multinomial Diffusion gradually adds categorical noise in a diffusion process, for which the generative denoising process is learned. We demonstrate that our method outperforms existing dequantization approaches on text modelling and modelling on image segmentation maps in log-likelihood.
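A minimal sketch of the categorical forward noising process described here: each step keeps the current category with probability $1-\beta_t$ and otherwise resamples uniformly over the $K$ classes, i.e. $q(x_t \mid x_{t-1}) = \mathrm{Cat}\big((1-\beta_t)\,x_{t-1} + \beta_t/K\big)$. The noise schedule, sizes, and NumPy sampling loop below are illustrative assumptions; the learned denoising model is omitted.

```python
import numpy as np

def multinomial_forward_step(x_onehot, beta_t, rng):
    """One forward noising step for categorical data:
    q(x_t | x_{t-1}) = Cat((1 - beta_t) * x_{t-1} + beta_t / K).
    x_onehot: (n, K) one-hot rows."""
    n, K = x_onehot.shape
    probs = (1.0 - beta_t) * x_onehot + beta_t / K
    samples = np.array([rng.choice(K, p=p) for p in probs])
    return np.eye(K)[samples]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K, T = 5, 50
    x = np.eye(K)[rng.integers(K, size=8)]           # toy categorical "data"
    betas = np.linspace(1e-3, 0.2, T)                # illustrative noise schedule
    for beta in betas:
        x = multinomial_forward_step(x, beta, rng)
    print(x.argmax(axis=1))                          # close to uniform noise after T steps
```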
| null |
Learning with User-Level Privacy
|
https://papers.nips.cc/paper_files/paper/2021/hash/67e235e7f2fa8800d8375409b566e6b6-Abstract.html
|
Daniel Levy, Ziteng Sun, Kareem Amin, Satyen Kale, Alex Kulesza, Mehryar Mohri, Ananda Theertha Suresh
|
https://papers.nips.cc/paper_files/paper/2021/hash/67e235e7f2fa8800d8375409b566e6b6-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12577-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/67e235e7f2fa8800d8375409b566e6b6-Paper.pdf
|
https://openreview.net/forum?id=G1jmxFOtY_
|
https://papers.nips.cc/paper_files/paper/2021/file/67e235e7f2fa8800d8375409b566e6b6-Supplemental.pdf
|
We propose and analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints. Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution ($m \ge 1$ samples), providing more stringent but more realistic protection against information leaks. We show that for high-dimensional mean estimation, empirical risk minimization with smooth losses, stochastic convex optimization, and learning hypothesis classes with finite metric entropy, the privacy cost decreases as $O(1/\sqrt{m})$ as users provide more samples. In contrast, when increasing the number of users $n$, the privacy cost decreases at a faster $O(1/n)$ rate. We complement these results with lower bounds showing the minimax optimality of our algorithms for mean estimation and stochastic convex optimization. Our algorithms rely on novel techniques for private mean estimation in arbitrary dimension with error scaling as the concentration radius $\tau$ of the distribution rather than the entire range.
| null |
Don’t Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence
|
https://papers.nips.cc/paper_files/paper/2021/hash/67ed94744426295f96268f4ac1881b46-Abstract.html
|
Tianshi Cao, Alex Bie, Arash Vahdat, Sanja Fidler, Karsten Kreis
|
https://papers.nips.cc/paper_files/paper/2021/hash/67ed94744426295f96268f4ac1881b46-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12578-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/67ed94744426295f96268f4ac1881b46-Paper.pdf
|
https://openreview.net/forum?id=waWmZSw0mn
|
https://papers.nips.cc/paper_files/paper/2021/file/67ed94744426295f96268f4ac1881b46-Supplemental.pdf
|
Although machine learning models trained on massive data have led to breakthroughs in several areas, their deployment in privacy-sensitive domains remains limited due to restricted access to data. Generative models trained with privacy constraints on private data can sidestep this challenge, providing indirect access to private data instead. We propose DP-Sinkhorn, a novel optimal transport-based generative method for learning data distributions from private data with differential privacy. DP-Sinkhorn minimizes the Sinkhorn divergence, a computationally efficient approximation to the exact optimal transport distance, between the model and data in a differentially private manner and uses a novel technique for controlling the bias-variance trade-off of gradient estimates. Unlike existing approaches for training differentially private generative models, which are mostly based on generative adversarial networks, we do not rely on adversarial objectives, which are notoriously difficult to optimize, especially in the presence of noise imposed by privacy constraints. Hence, DP-Sinkhorn is easy to train and deploy. Experimentally, we improve upon the state-of-the-art on multiple image modeling benchmarks and show differentially private synthesis of informative RGB images.
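The computational building block named in this abstract, the entropically regularised optimal transport cost computed by Sinkhorn iterations, can be sketched as below. This is the generic Sinkhorn routine only; the divergence's debiasing terms, the generator, and all differential-privacy machinery are omitted, and the regularisation strength and point-cloud sizes are illustrative assumptions.

```python
import numpy as np

def sinkhorn_cost(X, Y, eps=0.1, n_iters=200):
    """Entropically regularised OT cost between two empirical point clouds
    with uniform weights, via standard Sinkhorn scaling iterations."""
    C = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)   # squared-Euclidean cost
    K = np.exp(-C / eps)
    a = np.full(X.shape[0], 1.0 / X.shape[0])
    b = np.full(Y.shape[0], 1.0 / Y.shape[0])
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]                              # transport plan
    return float(np.sum(P * C))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(sinkhorn_cost(rng.normal(size=(64, 2)), rng.normal(loc=0.5, size=(64, 2))))
```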
| null |
Keeping Your Eye on the Ball: Trajectory Attention in Video Transformers
|
https://papers.nips.cc/paper_files/paper/2021/hash/67f7fb873eaf29526a11a9b7ac33bfac-Abstract.html
|
Mandela Patrick, Dylan Campbell, Yuki Asano, Ishan Misra, Florian Metze, Christoph Feichtenhofer, Andrea Vedaldi, João F. Henriques
|
https://papers.nips.cc/paper_files/paper/2021/hash/67f7fb873eaf29526a11a9b7ac33bfac-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12579-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/67f7fb873eaf29526a11a9b7ac33bfac-Paper.pdf
|
https://openreview.net/forum?id=mfQxdSMWOF
|
https://papers.nips.cc/paper_files/paper/2021/file/67f7fb873eaf29526a11a9b7ac33bfac-Supplemental.pdf
|
In video transformers, the time dimension is often treated in the same way as the two spatial dimensions. However, in a scene where objects or the camera may move, a physical point imaged at one location in frame $t$ may be entirely unrelated to what is found at that location in frame $t+k$. These temporal correspondences should be modeled to facilitate learning about dynamic scenes. To this end, we propose a new drop-in block for video transformers - trajectory attention - that aggregates information along implicitly determined motion paths. We additionally propose a new method to address the quadratic dependence of computation and memory on the input size, which is particularly important for high resolution or long videos. While these ideas are useful in a range of settings, we apply them to the specific task of video action recognition with a transformer model and obtain state-of-the-art results on the Kinetics, Something-Something V2, and Epic-Kitchens datasets.
| null |
Variational Bayesian Optimistic Sampling
|
https://papers.nips.cc/paper_files/paper/2021/hash/680390c55bbd9ce416d1d69a9ab4760d-Abstract.html
|
Brendan O'Donoghue, Tor Lattimore
|
https://papers.nips.cc/paper_files/paper/2021/hash/680390c55bbd9ce416d1d69a9ab4760d-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12580-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/680390c55bbd9ce416d1d69a9ab4760d-Paper.pdf
|
https://openreview.net/forum?id=NtivXxYNhjc
|
https://papers.nips.cc/paper_files/paper/2021/file/680390c55bbd9ce416d1d69a9ab4760d-Supplemental.pdf
|
We consider online sequential decision problems where an agent must balance exploration and exploitation. We derive a set of Bayesian `optimistic' policies which, in the stochastic multi-armed bandit case, includes the Thompson sampling policy. We provide a new analysis showing that any algorithm producing policies in the optimistic set enjoys $\tilde O(\sqrt{AT})$ Bayesian regret for a problem with $A$ actions after $T$ rounds. We extend the regret analysis for optimistic policies to bilinear saddle-point problems which include zero-sum matrix games and constrained bandits as special cases. In this case we show that Thompson sampling can produce policies outside of the optimistic set and suffer linear regret in some instances. Finding a policy inside the optimistic set amounts to solving a convex optimization problem and we call the resulting algorithm `variational Bayesian optimistic sampling' (VBOS). The procedure works for any posteriors, i.e., it does not require the posterior to have any special properties, such as log-concavity, unimodality, or smoothness. The variational view of the problem has many useful properties, including the ability to tune the exploration-exploitation tradeoff, add regularization, incorporate constraints, and linearly parameterize the policy.
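Thompson sampling, named above as the bandit-case member of the optimistic set, has a standard Beta-Bernoulli form that is easy to sketch; the code below is that textbook version only, not the VBOS procedure, and the arm means and horizon are made up for illustration.

```python
import numpy as np

def thompson_bernoulli(true_means, T=5000, seed=0):
    """Standard Beta-Bernoulli Thompson sampling on a toy bandit; returns cumulative regret."""
    rng = np.random.default_rng(seed)
    A = len(true_means)
    alpha, beta = np.ones(A), np.ones(A)              # Beta(1, 1) priors per arm
    regret = 0.0
    for _ in range(T):
        arm = int(np.argmax(rng.beta(alpha, beta)))   # one posterior draw per arm, pick the best
        reward = rng.random() < true_means[arm]
        alpha[arm] += reward
        beta[arm] += 1 - reward
        regret += max(true_means) - true_means[arm]
    return regret

if __name__ == "__main__":
    print(thompson_bernoulli([0.3, 0.5, 0.55]))
```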
| null |
Cross-modal Domain Adaptation for Cost-Efficient Visual Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/68264bdb65b97eeae6788aa3348e553c-Abstract.html
|
Xiong-Hui Chen, Shengyi Jiang, Feng Xu, Zongzhang Zhang, Yang Yu
|
https://papers.nips.cc/paper_files/paper/2021/hash/68264bdb65b97eeae6788aa3348e553c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12581-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/68264bdb65b97eeae6788aa3348e553c-Paper.pdf
|
https://openreview.net/forum?id=VLQV2vqjLf3
|
https://papers.nips.cc/paper_files/paper/2021/file/68264bdb65b97eeae6788aa3348e553c-Supplemental.pdf
|
In visual-input sim-to-real scenarios, a promising direction for overcoming the reality gap between images rendered in simulators and those from the real world is domain adaptation, i.e., learning an aligned representation space between simulators and the real world, then training and deploying policies in that aligned representation. Previous methods focus on same-modal domain adaptation. However, those methods require building and running simulators that render high-quality images, which can be difficult and costly. In this paper, we consider a more cost-efficient setting of visual-input sim-to-real where only low-dimensional states are simulated. We first point out that the objective of learning mapping functions in previous methods that align the representation spaces is ill-posed and prone to yield an incorrect mapping. When the mapping crosses modalities, previous methods are even more likely to fail. Our algorithm, Cross-mOdal Domain Adaptation with Sequential structure (CODAS), mitigates the ill-posedness by utilizing the sequential nature of the data sampling process in RL tasks. Experiments on MuJoCo and Hand Manipulation Suite tasks show that the agents deployed with our method achieve performance similar to that in the source domain, while those deployed with previous methods designed for same-modal domain adaptation suffer a larger performance gap.
| null |
D2C: Diffusion-Decoding Models for Few-Shot Conditional Generation
|
https://papers.nips.cc/paper_files/paper/2021/hash/682e0e796084e163c5ca053dd8573b0c-Abstract.html
|
Abhishek Sinha, Jiaming Song, Chenlin Meng, Stefano Ermon
|
https://papers.nips.cc/paper_files/paper/2021/hash/682e0e796084e163c5ca053dd8573b0c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12582-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/682e0e796084e163c5ca053dd8573b0c-Paper.pdf
|
https://openreview.net/forum?id=4vUZPUKZsr5
|
https://papers.nips.cc/paper_files/paper/2021/file/682e0e796084e163c5ca053dd8573b0c-Supplemental.pdf
|
Conditional generative models of high-dimensional images have many applications, but supervision signals from conditions to images can be expensive to acquire. This paper describes Diffusion-Decoding models with Contrastive representations (D2C), a paradigm for training unconditional variational autoencoders (VAE) for few-shot conditional image generation. D2C uses a learned diffusion-based prior over the latent representations to improve generation and contrastive self-supervised learning to improve representation quality. D2C can adapt to novel generation tasks, conditioned on labels or manipulation constraints, by learning from as few as 100 labeled examples. On conditional generation from new labels, D2C achieves superior performance over state-of-the-art VAEs and diffusion models. On conditional image manipulation, D2C generations are two orders of magnitude faster to produce over StyleGAN2 ones and are preferred by 50% - 60% of the human evaluators in a double-blind study. We release our code at https://github.com/jiamings/d2c.
| null |
Continual Auxiliary Task Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/68331ff0427b551b68e911eebe35233b-Abstract.html
|
Matthew McLeod, Chunlok Lo, Matthew Schlegel, Andrew Jacobsen, Raksha Kumaraswamy, Martha White, Adam White
|
https://papers.nips.cc/paper_files/paper/2021/hash/68331ff0427b551b68e911eebe35233b-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12583-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/68331ff0427b551b68e911eebe35233b-Paper.pdf
|
https://openreview.net/forum?id=EpL9IFAMa3
|
https://papers.nips.cc/paper_files/paper/2021/file/68331ff0427b551b68e911eebe35233b-Supplemental.pdf
|
Learning auxiliary tasks, such as multiple predictions about the world, can provide many benefits to reinforcement learning systems. A variety of off-policy learning algorithms have been developed to learn such predictions, but as yet there is little work on how to adapt the behavior to gather useful data for those off-policy predictions. In this work, we investigate a reinforcement learning system designed to learn a collection of auxiliary tasks, with a behavior policy learning to take actions to improve those auxiliary predictions. We highlight the inherent non-stationarity in this continual auxiliary task learning problem, for both prediction learners and the behavior learner. We develop an algorithm based on successor features that facilitates tracking under non-stationary rewards, and prove the separation into learning successor features and rewards provides convergence rate improvements. We conduct an in-depth study into the resulting multi-prediction learning system.
| null |
Constrained Two-step Look-Ahead Bayesian Optimization
|
https://papers.nips.cc/paper_files/paper/2021/hash/685217557383cd194b4f10ae4b39eebf-Abstract.html
|
Yunxiang Zhang, Xiangyu Zhang, Peter Frazier
|
https://papers.nips.cc/paper_files/paper/2021/hash/685217557383cd194b4f10ae4b39eebf-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12584-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/685217557383cd194b4f10ae4b39eebf-Paper.pdf
|
https://openreview.net/forum?id=LhBigohtN1R
| null |
Recent advances in computationally efficient non-myopic Bayesian optimization offer improved query efficiency over traditional myopic methods like expected improvement, with only a modest increase in computational cost. These advances have been largely limited to unconstrained BO methods with only a few exceptions which require heavy computation. For instance, one existing multi-step lookahead constrained BO method (Lam & Willcox, 2017) relies on computationally expensive, unreliable brute-force derivative-free optimization of a Monte Carlo rollout acquisition function. Methods that use the reparameterization trick for more efficient derivative-based optimization of non-myopic acquisition functions in the unconstrained setting, like sample average approximation and infinitesimal perturbation analysis, do not extend: constraints introduce discontinuities in the sampled acquisition function surface. Moreover, we argue here that being non-myopic is even more important in constrained problems because fear of violating constraints pushes myopic methods away from sampling the boundary between feasible and infeasible regions, slowing the discovery of optimal solutions with tight constraints. In this paper, we propose a computationally efficient two-step lookahead constrained Bayesian optimization acquisition function (2-OPT-C) supporting both sequential and batch settings. To enable fast acquisition function optimization, we develop a novel likelihood ratio-based unbiased estimator of the gradient of the two-step optimal acquisition function that does not use the reparameterization trick. In numerical experiments, 2-OPT-C typically improves query efficiency by 2x or more over previous methods, and in some cases by 10x or more.
| null |
Learning with Labeling Induced Abstentions
|
https://papers.nips.cc/paper_files/paper/2021/hash/689041c2baed0f6d91050495d632d6e0-Abstract.html
|
Kareem Amin, Giulia DeSalvo, Afshin Rostamizadeh
|
https://papers.nips.cc/paper_files/paper/2021/hash/689041c2baed0f6d91050495d632d6e0-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12585-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/689041c2baed0f6d91050495d632d6e0-Paper.pdf
|
https://openreview.net/forum?id=-1OkHh56c2m
|
https://papers.nips.cc/paper_files/paper/2021/file/689041c2baed0f6d91050495d632d6e0-Supplemental.pdf
|
Consider a setting where we wish to automate an expensive task with a machine learning algorithm using a limited labeling resource. In such settings, examples routed for labeling are often out of scope for the machine learning algorithm. For example, in a spam detection setting, human reviewers not only provide labeled data but are such high-quality detectors of spam that examples routed to them no longer require machine evaluation. As a consequence, the distribution of examples routed to the machine is intimately tied to the process generating labels. We introduce a formalization of this setting, and give an algorithm that simultaneously learns a model and decides when to request a label by leveraging ideas from both the abstention and active learning literatures. We prove an upper bound on the algorithm's label complexity and a matching lower bound for any algorithm in this setting. We conduct a thorough set of experiments including an ablation study to test different components of our algorithm. We demonstrate the effectiveness of an efficient version of our algorithm over margin sampling on a variety of datasets.
| null |
SQALER: Scaling Question Answering by Decoupling Multi-Hop and Logical Reasoning
|
https://papers.nips.cc/paper_files/paper/2021/hash/68bd22864919297c8c8a8c32378e89b4-Abstract.html
|
Mattia Atzeni, Jasmina Bogojeska, Andreas Loukas
|
https://papers.nips.cc/paper_files/paper/2021/hash/68bd22864919297c8c8a8c32378e89b4-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12586-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/68bd22864919297c8c8a8c32378e89b4-Paper.pdf
|
https://openreview.net/forum?id=2CQQ_C1i0b
|
https://papers.nips.cc/paper_files/paper/2021/file/68bd22864919297c8c8a8c32378e89b4-Supplemental.pdf
|
State-of-the-art approaches to reasoning and question answering over knowledge graphs (KGs) usually scale with the number of edges and can only be applied effectively on small instance-dependent subgraphs. In this paper, we address this issue by showing that multi-hop and more complex logical reasoning can be accomplished separately without losing expressive power. Motivated by this insight, we propose an approach to multi-hop reasoning that scales linearly with the number of relation types in the graph, which is usually significantly smaller than the number of edges or nodes. This produces a set of candidate solutions that can be provably refined to recover the solution to the original problem. Our experiments on knowledge-based question answering show that our approach solves the multi-hop MetaQA dataset, achieves a new state-of-the-art on the more challenging WebQuestionsSP, is orders of magnitude more scalable than competitive approaches, and can achieve compositional generalization out of the training distribution.
| null |
Out-of-Distribution Generalization in Kernel Regression
|
https://papers.nips.cc/paper_files/paper/2021/hash/691dcb1d65f31967a874d18383b9da75-Abstract.html
|
Abdulkadir Canatar, Blake Bordelon, Cengiz Pehlevan
|
https://papers.nips.cc/paper_files/paper/2021/hash/691dcb1d65f31967a874d18383b9da75-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12587-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/691dcb1d65f31967a874d18383b9da75-Paper.pdf
|
https://openreview.net/forum?id=-h6Ldc0MO-
|
https://papers.nips.cc/paper_files/paper/2021/file/691dcb1d65f31967a874d18383b9da75-Supplemental.pdf
|
In real-world applications, the data-generating process for training a machine learning model often differs from what the model encounters in the test stage. Understanding how and whether machine learning models generalize under such distributional shifts has been a theoretical challenge. Here, we study generalization in kernel regression when the training and test distributions are different using methods from statistical physics. Using the replica method, we derive an analytical formula for the out-of-distribution generalization error applicable to any kernel and real datasets. We identify an overlap matrix that quantifies the mismatch between distributions for a given kernel as a key determinant of generalization performance under distribution shift. Using our analytical expressions we elucidate various generalization phenomena including possible improvement in generalization when there is a mismatch. We develop procedures for optimizing training and test distributions for a given data budget to find best and worst case generalizations under the shift. We present applications of our theory to real and synthetic datasets and for many kernels. We compare results of our theory applied to Neural Tangent Kernel with simulations of wide networks and show agreement. We analyze linear regression in further depth.
| null |
FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective
|
https://papers.nips.cc/paper_files/paper/2021/hash/692baebec3bb4b53d7ebc3b9fabac31b-Abstract.html
|
Jingwei Sun, Ang Li, Louis DiValentin, Amin Hassanzadeh, Yiran Chen, Hai Li
|
https://papers.nips.cc/paper_files/paper/2021/hash/692baebec3bb4b53d7ebc3b9fabac31b-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12588-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/692baebec3bb4b53d7ebc3b9fabac31b-Paper.pdf
|
https://openreview.net/forum?id=96uH8HeGb9G
|
https://papers.nips.cc/paper_files/paper/2021/file/692baebec3bb4b53d7ebc3b9fabac31b-Supplemental.pdf
|
Federated learning (FL) is a popular distributed learning framework that trains a global model through iterative communications between a central server and edge devices. Recent works have demonstrated that FL is vulnerable to model poisoning attacks. Several server-based defense approaches (e.g. robust aggregation) have been proposed to mitigate such attacks. However, we empirically show that under extremely strong attacks, these defensive methods fail to guarantee the robustness of FL. More importantly, we observe that as long as the global model is polluted, the impact of attacks on the global model will remain in subsequent rounds even if there are no subsequent attacks. In this work, we propose a client-based defense, named White Blood Cell for Federated Learning (FL-WBC), which can mitigate model poisoning attacks that have already polluted the global model. The key idea of FL-WBC is to identify the parameter space where the long-lasting attack effect on parameters resides and perturb that space during local training. Furthermore, we derive a certified robustness guarantee against model poisoning attacks and a convergence guarantee to FedAvg after applying our FL-WBC. We conduct experiments on Fashion-MNIST and CIFAR10 to evaluate the defense against state-of-the-art model poisoning attacks. The results demonstrate that our method can effectively mitigate the impact of model poisoning attacks on the global model within 5 communication rounds with nearly no accuracy drop under both IID and Non-IID settings. Our defense is also complementary to existing server-based robust aggregation approaches and can further improve the robustness of FL under extremely strong attacks.
| null |
Chebyshev-Cantelli PAC-Bayes-Bennett Inequality for the Weighted Majority Vote
|
https://papers.nips.cc/paper_files/paper/2021/hash/69386f6bb1dfed68692a24c8686939b9-Abstract.html
|
Yi-Shan Wu, Andres Masegosa, Stephan Lorenzen, Christian Igel, Yevgeny Seldin
|
https://papers.nips.cc/paper_files/paper/2021/hash/69386f6bb1dfed68692a24c8686939b9-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12589-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/69386f6bb1dfed68692a24c8686939b9-Paper.pdf
|
https://openreview.net/forum?id=HbTzvugzOp
|
https://papers.nips.cc/paper_files/paper/2021/file/69386f6bb1dfed68692a24c8686939b9-Supplemental.pdf
|
We present a new second-order oracle bound for the expected risk of a weighted majority vote. The bound is based on a novel parametric form of the Chebyshev-Cantelli inequality (a.k.a. one-sided Chebyshev’s), which is amenable to efficient minimization. The new form resolves the optimization challenge faced by prior oracle bounds based on the Chebyshev-Cantelli inequality, the C-bounds [Germain et al., 2015], and, at the same time, it improves on the oracle bound based on second order Markov’s inequality introduced by Masegosa et al. [2020]. We also derive a new concentration of measure inequality, which we name PAC-Bayes-Bennett, since it combines PAC-Bayesian bounding with Bennett’s inequality. We use it for empirical estimation of the oracle bound. The PAC-Bayes-Bennett inequality improves on the PAC-Bayes-Bernstein inequality of Seldin et al. [2012]. We provide an empirical evaluation demonstrating that the new bounds can improve on the work of Masegosa et al. [2020]. Both the parametric form of the Chebyshev-Cantelli inequality and the PAC-Bayes-Bennett inequality may be of independent interest for the study of concentration of measure in other domains.
| null |
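As background for the record above (Wu et al.), a minimal LaTeX note on the one-sided Chebyshev (Cantelli) inequality on which the bound builds; the parametric form actually used in the paper differs, so treat this as an illustration only.

```latex
% Cantelli's (one-sided Chebyshev) inequality: for a random variable X with
% mean \mu and variance \sigma^2, and any t > 0,
\[
  \Pr\bigl[X - \mu \ge t\bigr] \;\le\; \frac{\sigma^2}{\sigma^2 + t^2}.
\]
% Applied to the margin of a weighted majority vote (the vote errs only when the
% margin drops below zero), a bound of this form yields a second-order oracle
% bound controlled by the mean and variance of the margin.
```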
A Multi-Implicit Neural Representation for Fonts
|
https://papers.nips.cc/paper_files/paper/2021/hash/6948bd44c91acd2b54ecdd1b132f10fb-Abstract.html
|
Pradyumna Reddy, Zhifei Zhang, Zhaowen Wang, Matthew Fisher, Hailin Jin, Niloy Mitra
|
https://papers.nips.cc/paper_files/paper/2021/hash/6948bd44c91acd2b54ecdd1b132f10fb-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12590-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6948bd44c91acd2b54ecdd1b132f10fb-Paper.pdf
|
https://openreview.net/forum?id=59mdmZJV6IG
|
https://papers.nips.cc/paper_files/paper/2021/file/6948bd44c91acd2b54ecdd1b132f10fb-Supplemental.pdf
|
Fonts are ubiquitous across documents and come in a variety of styles. They are either represented in a native vector format or rasterized to produce fixed resolution images. In the first case, the non-standard representation prevents benefiting from the latest network architectures for neural representations; while, in the latter case, the rasterized representation, when encoded via networks, results in loss of data fidelity, as font-specific discontinuities like edges and corners are difficult to represent using neural networks. Based on the observation that complex fonts can be represented by a superposition of a set of simpler occupancy functions, we introduce multi-implicits to represent fonts as a permutation-invariant set of learned implicit functions, without losing features (e.g., edges and corners). However, while multi-implicits locally preserve font features, obtaining supervision in the form of ground truth multi-channel signals is a problem in itself. Instead, we show how to train such a representation with only local supervision, while the proposed neural architecture directly finds globally consistent multi-implicits for font families. We extensively evaluate the proposed representation for various tasks including reconstruction, interpolation, and synthesis to demonstrate clear advantages over existing alternatives. Additionally, the representation naturally enables glyph completion, wherein a single characteristic font is used to synthesize a whole font family in the target style.
| null |
OctField: Hierarchical Implicit Functions for 3D Modeling
|
https://papers.nips.cc/paper_files/paper/2021/hash/698d51a19d8a121ce581499d7b701668-Abstract.html
|
Jia-Heng Tang, Weikai Chen, jie Yang, Bo Wang, Songrun Liu, Bo Yang, Lin Gao
|
https://papers.nips.cc/paper_files/paper/2021/hash/698d51a19d8a121ce581499d7b701668-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12591-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/698d51a19d8a121ce581499d7b701668-Paper.pdf
|
https://openreview.net/forum?id=zvTBIFQ43Sd
|
https://papers.nips.cc/paper_files/paper/2021/file/698d51a19d8a121ce581499d7b701668-Supplemental.pdf
|
Recent advances in localized implicit functions have enabled neural implicit representation to be scalable to large scenes. However, the regular subdivision of 3D space employed by these approaches fails to take into account the sparsity of the surface occupancy and the varying granularities of geometric details. As a result, its memory footprint grows cubically with the input volume, leading to a prohibitive computational cost even at a moderately dense decomposition. In this work, we present a learnable hierarchical implicit representation for 3D surfaces, coded OctField, that allows high-precision encoding of intricate surfaces with low memory and computational budget. The key to our approach is an adaptive decomposition of 3D scenes that only distributes local implicit functions around the surface of interest. We achieve this goal by introducing a hierarchical octree structure to adaptively subdivide the 3D space according to the surface occupancy and the richness of part geometry. As the octree is discrete and non-differentiable, we further propose a novel hierarchical network that models the subdivision of octree cells as a probabilistic process and recursively encodes and decodes both octree structure and surface geometry in a differentiable manner. We demonstrate the value of OctField for a range of shape modeling and reconstruction tasks, showing superiority over alternative approaches.
| null |
The Inductive Bias of Quantum Kernels
|
https://papers.nips.cc/paper_files/paper/2021/hash/69adc1e107f7f7d035d7baf04342e1ca-Abstract.html
|
Jonas Kübler, Simon Buchholz, Bernhard Schölkopf
|
https://papers.nips.cc/paper_files/paper/2021/hash/69adc1e107f7f7d035d7baf04342e1ca-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12592-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/69adc1e107f7f7d035d7baf04342e1ca-Paper.pdf
|
https://openreview.net/forum?id=iNqrOCPRmYQ
|
https://papers.nips.cc/paper_files/paper/2021/file/69adc1e107f7f7d035d7baf04342e1ca-Supplemental.pdf
|
It has been hypothesized that quantum computers may lend themselves well to applications in machine learning. In the present work, we analyze function classes defined via quantum kernels. Quantum computers offer the possibility to efficiently compute inner products of exponentially large density operators that are classically hard to compute. However, having an exponentially large feature space renders the problem of generalization hard. Furthermore, being able to evaluate inner products in high dimensional spaces efficiently by itself does not guarantee a quantum advantage, as already classically tractable kernels can correspond to high- or infinite-dimensional reproducing kernel Hilbert spaces (RKHS). We analyze the spectral properties of quantum kernels and find that we can expect an advantage if their RKHS is low dimensional and contains functions that are hard to compute classically. If the target function is known to lie in this class, this implies a quantum advantage, as the quantum computer can encode this inductive bias, whereas there is no classically efficient way to constrain the function class in the same way. However, we show that finding suitable quantum kernels is not easy because the kernel evaluation might require exponentially many measurements. In conclusion, our message is a somewhat sobering one: we conjecture that quantum machine learning models can offer speed-ups only if we manage to encode knowledge about the problem at hand into quantum circuits, while encoding the same bias into a classical model would be hard. These situations may plausibly occur when learning on data generated by a quantum process, however, they appear to be harder to come by for classical datasets.
| null |
An Exponential Improvement on the Memorization Capacity of Deep Threshold Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/69dd2eff9b6a421d5ce262b093bdab23-Abstract.html
|
Shashank Rajput, Kartik Sreenivasan, Dimitris Papailiopoulos, Amin Karbasi
|
https://papers.nips.cc/paper_files/paper/2021/hash/69dd2eff9b6a421d5ce262b093bdab23-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12593-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/69dd2eff9b6a421d5ce262b093bdab23-Paper.pdf
|
https://openreview.net/forum?id=dFRbxGpNWw5
|
https://papers.nips.cc/paper_files/paper/2021/file/69dd2eff9b6a421d5ce262b093bdab23-Supplemental.pdf
|
It is well known that modern deep neural networks are powerful enough to memorize datasets even when the labels have been randomized. Recently, Vershynin (2020) settled a long-standing question by Baum (1988), proving that deep threshold networks can memorize $n$ points in $d$ dimensions using $\widetilde{\mathcal{O}}(e^{1/\delta^2}+\sqrt{n})$ neurons and $\widetilde{\mathcal{O}}(e^{1/\delta^2}(d+\sqrt{n})+n)$ weights, where $\delta$ is the minimum distance between the points. In this work, we improve the dependence on $\delta$ from exponential to almost linear, proving that $\widetilde{\mathcal{O}}(\frac{1}{\delta}+\sqrt{n})$ neurons and $\widetilde{\mathcal{O}}(\frac{d}{\delta}+n)$ weights are sufficient. Our construction uses Gaussian random weights only in the first layer, while all the subsequent layers use binary or integer weights. We also prove new lower bounds by connecting memorization in neural networks to the purely geometric problem of separating $n$ points on a sphere using hyperplanes.
| null |
Pretraining Representations for Data-Efficient Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/69eba34671b3ef1ef38ee85caae6b2a1-Abstract.html
|
Max Schwarzer, Nitarshan Rajkumar, Michael Noukhovitch, Ankesh Anand, Laurent Charlin, R Devon Hjelm, Philip Bachman, Aaron C. Courville
|
https://papers.nips.cc/paper_files/paper/2021/hash/69eba34671b3ef1ef38ee85caae6b2a1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12594-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/69eba34671b3ef1ef38ee85caae6b2a1-Paper.pdf
|
https://openreview.net/forum?id=XpSAvlvnMa
|
https://papers.nips.cc/paper_files/paper/2021/file/69eba34671b3ef1ef38ee85caae6b2a1-Supplemental.zip
|
Data efficiency is a key challenge for deep reinforcement learning. We address this problem by using unlabeled data to pretrain an encoder which is then finetuned on a small amount of task-specific data. To encourage learning representations which capture diverse aspects of the underlying MDP, we employ a combination of latent dynamics modelling and unsupervised goal-conditioned RL. When limited to 100k steps of interaction on Atari games (equivalent to two hours of human experience), our approach significantly surpasses prior work combining offline representation pretraining with task-specific finetuning, and compares favourably with other pretraining methods that require orders of magnitude more data. Our approach shows particular promise when combined with larger models as well as more diverse, task-aligned observational data -- approaching human-level performance and data-efficiency on Atari in our best setting.
| null |
Universal Approximation Using Well-Conditioned Normalizing Flows
|
https://papers.nips.cc/paper_files/paper/2021/hash/69ec5030f78a9b735402d133317bf5f6-Abstract.html
|
Holden Lee, Chirag Pabbaraju, Anish Prasad Sevekari, Andrej Risteski
|
https://papers.nips.cc/paper_files/paper/2021/hash/69ec5030f78a9b735402d133317bf5f6-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12595-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/69ec5030f78a9b735402d133317bf5f6-Paper.pdf
|
https://openreview.net/forum?id=qLpJ0VWRuWk
|
https://papers.nips.cc/paper_files/paper/2021/file/69ec5030f78a9b735402d133317bf5f6-Supplemental.pdf
|
Normalizing flows are a widely used class of latent-variable generative models with a tractable likelihood. Affine-coupling models [Dinh et al., 2014, 2016] are a particularly common type of normalizing flows, for which the Jacobian of the latent-to-observable-variable transformation is triangular, allowing the likelihood to be computed in linear time. Despite the widespread usage of affine couplings, the special structure of the architecture makes understanding their representational power challenging. The question of universal approximation was only recently resolved by three parallel papers [Huang et al., 2020, Zhang et al., 2020, Koehler et al., 2020] – who showed reasonably regular distributions can be approximated arbitrarily well using affine couplings – albeit with networks with a nearly-singular Jacobian. As ill-conditioned Jacobians are an obstacle for likelihood-based training, the fundamental question remains: which distributions can be approximated using well-conditioned affine coupling flows? In this paper, we show that any log-concave distribution can be approximated using well-conditioned affine-coupling flows. In terms of proof techniques, we uncover and leverage deep connections between affine coupling architectures, underdamped Langevin dynamics (a stochastic differential equation often used to sample from Gibbs measures) and Hénon maps (a structured dynamical system that appears in the study of symplectic diffeomorphisms). In terms of informing practice, we approximate a padded version of the input distribution with iid Gaussians – a strategy which Koehler et al. [2020] empirically observed to result in better-conditioned flows, but had hitherto no theoretical grounding. Our proof can thus be seen as providing theoretical evidence for the benefits of Gaussian padding when training normalizing flows.
| null |
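To make the affine-coupling structure discussed in the record above (Lee et al.) concrete, here is a minimal NumPy sketch of a coupling layer whose Jacobian is triangular, so the log-determinant is computed in linear time. The toy linear conditioner, the bounded log-scale, and all shapes are illustrative assumptions, not the construction from the paper.

```python
import numpy as np

def conditioner(x_a, out_dim, rng):
    """Toy stand-in for a neural network producing scale and shift from x_a."""
    # A fixed random linear map keeps the example self-contained.
    W = rng.standard_normal((x_a.shape[-1], 2 * out_dim)) * 0.1
    h = x_a @ W
    log_scale, shift = h[..., :out_dim], h[..., out_dim:]
    return np.tanh(log_scale), shift  # bounded log-scale keeps the map well-conditioned

def affine_coupling_forward(x, rng):
    """Map x -> y with a triangular Jacobian; log|det J| is just the sum of log-scales."""
    d = x.shape[-1] // 2
    x_a, x_b = x[..., :d], x[..., d:]
    log_scale, shift = conditioner(x_a, d, rng)
    y_b = x_b * np.exp(log_scale) + shift     # only the second half is transformed
    y = np.concatenate([x_a, y_b], axis=-1)   # the first half passes through unchanged
    log_det = log_scale.sum(axis=-1)          # linear-time log-determinant
    return y, log_det

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 6))
y, log_det = affine_coupling_forward(x, rng)
print(y.shape, log_det.shape)  # (4, 6) (4,)
```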
On the Validity of Modeling SGD with Stochastic Differential Equations (SDEs)
|
https://papers.nips.cc/paper_files/paper/2021/hash/69f62956429865909921fa916d61c1f8-Abstract.html
|
Zhiyuan Li, Sadhika Malladi, Sanjeev Arora
|
https://papers.nips.cc/paper_files/paper/2021/hash/69f62956429865909921fa916d61c1f8-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12596-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/69f62956429865909921fa916d61c1f8-Paper.pdf
|
https://openreview.net/forum?id=goEdyJ_nVQI
|
https://papers.nips.cc/paper_files/paper/2021/file/69f62956429865909921fa916d61c1f8-Supplemental.pdf
|
It is generally recognized that finite learning rate (LR), in contrast to infinitesimal LR, is important for good generalization in real-life deep nets. Most attempted explanations propose approximating finite-LR SGD with Itô Stochastic Differential Equations (SDEs), but formal justification for this approximation (e.g., Li et al., 2019) only applies to SGD with tiny LR. Experimental verification of the approximation appears computationally infeasible. The current paper clarifies the picture with the following contributions: (a) An efficient simulation algorithm SVAG that provably converges to the conventionally used Itô SDE approximation. (b) A theoretically motivated testable necessary condition for the SDE approximation and its most famous implication, the linear scaling rule (Goyal et al., 2017), to hold. (c) Experiments using this simulation to demonstrate that the previously proposed SDE approximation can meaningfully capture the training and generalization properties of common deep nets.
| null |
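For the record above (Li et al.), a schematic LaTeX statement of the SDE approximation and the linear scaling rule as they are usually written in this literature, with loss $L$, learning rate $\eta$, batch size $B$, and gradient-noise covariance $\Sigma$; the precise formulation analyzed in the paper may differ.

```latex
% SGD with learning rate \eta is commonly approximated by the It\^o SDE
\[
  \mathrm{d}X_t \;=\; -\nabla L(X_t)\,\mathrm{d}t
  \;+\; \sqrt{\eta\,\Sigma(X_t)}\;\mathrm{d}W_t ,
\]
% where \Sigma is the covariance of the minibatch gradient noise, which scales
% roughly like 1/B for batch size B. The diffusion term therefore depends on
% \eta and B only through the ratio \eta/B, which is the usual reading of the
% linear scaling rule: keeping \eta/B constant leaves the approximating
% dynamics (approximately) unchanged.
```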
Proportional Participatory Budgeting with Additive Utilities
|
https://papers.nips.cc/paper_files/paper/2021/hash/69f8ea31de0c00502b2ae571fbab1f95-Abstract.html
|
Grzegorz Pierczyński, Piotr Skowron, Dominik Peters
|
https://papers.nips.cc/paper_files/paper/2021/hash/69f8ea31de0c00502b2ae571fbab1f95-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12597-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/69f8ea31de0c00502b2ae571fbab1f95-Paper.pdf
|
https://openreview.net/forum?id=5rm0b_fsNZ
|
https://papers.nips.cc/paper_files/paper/2021/file/69f8ea31de0c00502b2ae571fbab1f95-Supplemental.pdf
|
We study voting rules for participatory budgeting, where a group of voters collectively decides which projects should be funded using a common budget. We allow the projects to have arbitrary costs, and the voters to have arbitrary additive valuations over the projects. We formulate two axioms that guarantee proportional representation to groups of voters with common interests. To the best of our knowledge, all known rules for participatory budgeting do not satisfy either of the two axioms; in addition we show that the most prominent proportional rule for committee elections, Proportional Approval Voting, cannot be adapted to arbitrary costs nor to additive valuations so that it would satisfy our axioms of proportionality. We construct a simple and attractive voting rule that satisfies one of our axioms (for arbitrary costs and arbitrary additive valuations), and that can be evaluated in polynomial time. We prove that our other stronger axiom is also satisfiable, though by a computationally more expensive and less natural voting rule.
| null |
Disentangling the Roles of Curation, Data-Augmentation and the Prior in the Cold Posterior Effect
|
https://papers.nips.cc/paper_files/paper/2021/hash/6a12d7ebc27cae44623468302c47ad74-Abstract.html
|
Lorenzo Noci, Kevin Roth, Gregor Bachmann, Sebastian Nowozin, Thomas Hofmann
|
https://papers.nips.cc/paper_files/paper/2021/hash/6a12d7ebc27cae44623468302c47ad74-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12598-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6a12d7ebc27cae44623468302c47ad74-Paper.pdf
|
https://openreview.net/forum?id=H6y7EAf7s4P
|
https://papers.nips.cc/paper_files/paper/2021/file/6a12d7ebc27cae44623468302c47ad74-Supplemental.pdf
|
The “cold posterior effect” (CPE) in Bayesian deep learning describes the disturbing observation that the predictive performance of Bayesian neural networks can be significantly improved if the Bayes posterior is artificially sharpened using a temperature parameter T < 1. The CPE is problematic in theory and practice, and since the effect was identified, many researchers have proposed hypotheses to explain the phenomenon. However, despite this intensive research effort the effect remains poorly understood. In this work we provide novel and nuanced evidence relevant to existing explanations for the cold posterior effect, disentangling three hypotheses: 1. The dataset curation hypothesis of Aitchison (2020): we show empirically that the CPE does not arise in a real curated data set but can be produced in a controlled experiment with varying curation strength. 2. The data augmentation hypothesis of Izmailov et al. (2021) and Fortuin et al. (2021): we show empirically that data augmentation is sufficient but not necessary for the CPE to be present. 3. The bad prior hypothesis of Wenzel et al. (2020): we use a simple experiment evaluating the relative importance of the prior and the likelihood, strongly linking the CPE to the prior. Our results demonstrate how the CPE can arise in isolation from synthetic curation, data augmentation, and bad priors. Cold posteriors observed “in the wild” are therefore unlikely to arise from a single simple cause; as a result, we do not expect a simple “fix” for cold posteriors.
| null |
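For reference, the tempered ("cold") posterior that the CPE in the record above refers to is conventionally written as below; this is the standard form used in this line of work rather than anything specific to the paper's experiments.

```latex
% Tempered posterior with temperature T; the cold posterior effect is the
% observation that predictive performance often improves for T < 1, i.e. when
% the Bayes posterior (T = 1) is artificially sharpened.
\[
  p_T(\theta \mid \mathcal{D}) \;\propto\;
  \bigl(p(\mathcal{D}\mid\theta)\,p(\theta)\bigr)^{1/T}.
\]
```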
Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot?
|
https://papers.nips.cc/paper_files/paper/2021/hash/6a130f1dc6f0c829f874e92e5458dced-Abstract.html
|
Xiaolong Ma, Geng Yuan, Xuan Shen, Tianlong Chen, Xuxi Chen, Xiaohan Chen, Ning Liu, Minghai Qin, Sijia Liu, Zhangyang Wang, Yanzhi Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/6a130f1dc6f0c829f874e92e5458dced-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12599-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6a130f1dc6f0c829f874e92e5458dced-Paper.pdf
|
https://openreview.net/forum?id=WL7pr00_fnJ
|
https://papers.nips.cc/paper_files/paper/2021/file/6a130f1dc6f0c829f874e92e5458dced-Supplemental.pdf
|
There have been long-standing controversies and inconsistencies over the experiment setup and criteria for identifying the "winning ticket" in the literature. To reconcile these, we revisit the definition of the lottery ticket hypothesis, with comprehensive and more rigorous conditions. Under our new definition, we show concrete evidence to clarify whether the winning ticket exists across the major DNN architectures and/or applications. Through extensive experiments, we perform quantitative analysis on the correlations between winning tickets and various experimental factors, and empirically study the patterns of our observations. We find that the key training hyperparameters, such as learning rate and training epochs, as well as the architecture characteristics such as capacities and residual connections, are all highly correlated with whether and when the winning tickets can be identified. Based on our analysis, we summarize a guideline for parameter settings with regard to specific architecture characteristics, which we hope will catalyze research progress on the lottery ticket hypothesis. Our codes are publicly available at: https://github.com/boone891214/sanity-check-LTH.
| null |
Collaborative Causal Discovery with Atomic Interventions
|
https://papers.nips.cc/paper_files/paper/2021/hash/6a1a681b16826ba2e48fedb229db3b65-Abstract.html
|
Raghavendra Addanki, Shiva Kasiviswanathan
|
https://papers.nips.cc/paper_files/paper/2021/hash/6a1a681b16826ba2e48fedb229db3b65-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12600-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6a1a681b16826ba2e48fedb229db3b65-Paper.pdf
|
https://openreview.net/forum?id=35wwc2nc1a4
|
https://papers.nips.cc/paper_files/paper/2021/file/6a1a681b16826ba2e48fedb229db3b65-Supplemental.pdf
|
We introduce a new Collaborative Causal Discovery problem, through which we model a common scenario in which we have multiple independent entities, each with their own causal graph, and the goal is to simultaneously learn all these causal graphs. We study this problem without the causal sufficiency assumption, using Maximal Ancestral Graphs (MAGs) to model the causal graphs, and assuming that we have the ability to actively perform independent single-vertex (or atomic) interventions on the entities. If the $M$ underlying (unknown) causal graphs of the entities satisfy a natural notion of clustering, we give algorithms that leverage this property and recover all the causal graphs using a number of atomic interventions per entity that is roughly logarithmic in $M$. This is significantly fewer than the $n$ atomic interventions per entity required to learn each causal graph separately, where $n$ is the number of observable nodes in the causal graph. We complement our results with a lower bound and discuss various extensions of our collaborative setting.
| null |
Towards optimally abstaining from prediction with OOD test examples
|
https://papers.nips.cc/paper_files/paper/2021/hash/6a26c75d6a576c94654bfc4dda548c72-Abstract.html
|
Adam Kalai, Varun Kanade
|
https://papers.nips.cc/paper_files/paper/2021/hash/6a26c75d6a576c94654bfc4dda548c72-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12601-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6a26c75d6a576c94654bfc4dda548c72-Paper.pdf
|
https://openreview.net/forum?id=P9_gOq5w7Eb
|
https://papers.nips.cc/paper_files/paper/2021/file/6a26c75d6a576c94654bfc4dda548c72-Supplemental.pdf
|
A common challenge across all areas of machine learning is that training data is not distributed like test data, due to natural shifts or adversarial examples; such examples are referred to as out-of-distribution (OOD) test examples. We consider a model where one may abstain from predicting, at a fixed cost. In particular, our transductive abstention algorithm takes labeled training examples and unlabeled test examples as input, and provides predictions with optimal prediction loss guarantees. The loss bounds match standard generalization bounds when test examples are i.i.d. from the training distribution, but add an additional term that is the cost of abstaining times the statistical distance between the train and test distribution (or the fraction of adversarial examples). For linear regression, we give a polynomial-time algorithm based on Celis-Dennis-Tapia optimization algorithms. For binary classification, we show how to efficiently implement it using a proper agnostic learner (i.e., an Empirical Risk Minimizer) for the class of interest. Our work builds on recent work of Goldwasser, Kalais, and Montasser (2020) who gave error and abstention guarantees for transductive binary classification.
| null |
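A schematic rendering of the kind of guarantee described in the record above (Kalai & Kanade): a standard generalization term plus the abstention cost times the statistical distance between train and test distributions. The constants, the complexity term, and the exact distance used in the paper are not reproduced here; total variation is only one possible instantiation.

```latex
% Schematic transductive abstention guarantee: predictor h, abstention cost c,
% training distribution P, test distribution Q,
\[
  \mathrm{loss}_{Q}(h) \;\lesssim\;
  \underbrace{\widehat{\mathrm{err}}_{P}(h) + \text{complexity term}}_{\text{standard i.i.d.\ bound}}
  \;+\; c \cdot d_{\mathrm{TV}}(P, Q),
\]
% so when Q = P the extra term vanishes and the usual generalization bound is
% recovered; under shift, the penalty scales with how much of the test mass is
% out of distribution.
```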
TokenLearner: Adaptive Space-Time Tokenization for Videos
|
https://papers.nips.cc/paper_files/paper/2021/hash/6a30e32e56fce5cf381895dfe6ca7b6f-Abstract.html
|
Michael Ryoo, AJ Piergiovanni, Anurag Arnab, Mostafa Dehghani, Anelia Angelova
|
https://papers.nips.cc/paper_files/paper/2021/hash/6a30e32e56fce5cf381895dfe6ca7b6f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12602-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6a30e32e56fce5cf381895dfe6ca7b6f-Paper.pdf
|
https://openreview.net/forum?id=z-l1kpDXs88
|
https://papers.nips.cc/paper_files/paper/2021/file/6a30e32e56fce5cf381895dfe6ca7b6f-Supplemental.pdf
|
In this paper, we introduce a novel visual representation learning approach which relies on a handful of adaptively learned tokens, and which is applicable to both image and video understanding tasks. Instead of relying on hand-designed splitting strategies to obtain visual tokens and processing a large number of densely sampled patches for attention, our approach learns to mine important tokens in visual data. This results in efficiently and effectively finding a few important visual tokens and enables modeling of pairwise attention between such tokens, over a longer temporal horizon for videos, or over the spatial content in image frames. Our experiments demonstrate strong performance on several challenging benchmarks for video recognition tasks. Importantly, due to our tokens being adaptive, we accomplish competitive results at significantly reduced computational cost. We establish new state-of-the-art results on multiple video datasets, including Kinetics-400, Kinetics-600, Charades, and AViD.
| null |
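A minimal NumPy sketch of the adaptive tokenization idea in the record above (TokenLearner): a few spatial attention maps are learned and used to pool a dense grid of features into a handful of tokens. The single linear attention head, the softmax normalization, and the shapes are illustrative assumptions, not the published architecture.

```python
import numpy as np

def token_learner(features, W_attn):
    """Pool an (H*W, C) grid of features into S learned tokens.

    features: array of shape (N, C)  -- N = H*W flattened spatial positions
    W_attn:   array of shape (C, S)  -- one learned attention map per token
    returns:  array of shape (S, C)
    """
    logits = features @ W_attn                     # (N, S) spatial logits per token
    attn = np.exp(logits - logits.max(axis=0))     # softmax over spatial positions
    attn = attn / attn.sum(axis=0, keepdims=True)  # each token's map sums to 1
    tokens = attn.T @ features                     # (S, C) weighted spatial pooling
    return tokens

rng = np.random.default_rng(0)
feats = rng.standard_normal((14 * 14, 64))   # e.g. a 14x14 feature map with 64 channels
W = rng.standard_normal((64, 8)) * 0.1       # learn 8 tokens
print(token_learner(feats, W).shape)         # (8, 64)
```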
Learning in Multi-Stage Decentralized Matching Markets
|
https://papers.nips.cc/paper_files/paper/2021/hash/6a571fe98a2ba453e84923b447d79cff-Abstract.html
|
Xiaowu Dai, Michael Jordan
|
https://papers.nips.cc/paper_files/paper/2021/hash/6a571fe98a2ba453e84923b447d79cff-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12603-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6a571fe98a2ba453e84923b447d79cff-Paper.pdf
|
https://openreview.net/forum?id=Q2R6noQ3tn5
|
https://papers.nips.cc/paper_files/paper/2021/file/6a571fe98a2ba453e84923b447d79cff-Supplemental.pdf
|
Matching markets are often organized in a multi-stage and decentralized manner. Moreover, participants in real-world matching markets often have uncertain preferences. This article develops a framework for learning optimal strategies in such settings, based on a nonparametric statistical approach and variational analysis. We propose an efficient algorithm, built upon concepts of "lower uncertainty bound" and "calibrated decentralized matching," for maximizing the participants' expected payoff. We show that there exists a welfare-versus-fairness trade-off that is characterized by the uncertainty level of acceptance. Participants will strategically act in favor of a low uncertainty level to reduce competition and increase expected payoff. We prove that participants can be better off with multi-stage matching compared to single-stage matching. We demonstrate aspects of the theoretical predictions through simulations and an experiment using real data from college admissions.
| null |
Non-asymptotic convergence bounds for Wasserstein approximation using point clouds
|
https://papers.nips.cc/paper_files/paper/2021/hash/6a61d423d02a1c56250dc23ae7ff12f3-Abstract.html
|
Quentin Mérigot, Filippo Santambrogio, Clément SARRAZIN
|
https://papers.nips.cc/paper_files/paper/2021/hash/6a61d423d02a1c56250dc23ae7ff12f3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12604-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6a61d423d02a1c56250dc23ae7ff12f3-Paper.pdf
|
https://openreview.net/forum?id=_6j_jQiYB2c
| null |
Several issues in machine learning and inverse problems require generating discrete data, as if sampled from a model probability distribution. A common way to do so relies on the construction of a uniform probability distribution over a set of $N$ points which minimizes the Wasserstein distance to the model distribution. This minimization problem, where the unknowns are the positions of the atoms, is non-convex. Yet, in most cases, a suitably adjusted version of Lloyd's algorithm in which Voronoi cells are replaced by Power cells leads to configurations with small Wasserstein error. This is surprising because, again, of the non-convex nature of the problem, which moreover admits spurious critical points. We provide explicit upper bounds for the convergence speed of this Lloyd-type algorithm, starting from a cloud of points sufficiently far from each other. This already works after one step of the iteration procedure, and similar bounds can be deduced for the corresponding gradient descent. These bounds naturally lead to a sort of Polyak-Łojasiewicz inequality for the Wasserstein distance cost, with an error term depending on the distances between Dirac masses in the discrete distribution.
| null |
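A minimal sketch of the Lloyd-type iteration discussed in the record above (Mérigot et al.), with two simplifying assumptions: ordinary Voronoi (nearest-neighbor) cells and samples drawn from the model distribution stand in for the semi-discrete optimal-transport (power-cell) computation used in the paper. Each iteration moves every atom to the barycenter of the samples assigned to it.

```python
import numpy as np

def lloyd_step(atoms, samples):
    """One Lloyd iteration: assign samples to the nearest atom, move atoms to barycenters."""
    # Pairwise squared distances between samples (M, d) and atoms (N, d).
    d2 = ((samples[:, None, :] - atoms[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)                       # nearest-atom (Voronoi) assignment
    new_atoms = atoms.copy()
    for j in range(atoms.shape[0]):
        members = samples[assign == j]
        if len(members) > 0:                         # leave empty cells where they are
            new_atoms[j] = members.mean(axis=0)      # barycenter of the cell
    return new_atoms

rng = np.random.default_rng(0)
model_samples = rng.normal(size=(5000, 2))           # stand-in for the model distribution
atoms = rng.normal(size=(50, 2))                     # initial point cloud (uniform weights)
for _ in range(20):
    atoms = lloyd_step(atoms, model_samples)
print(atoms.shape)                                   # (50, 2)
```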
Understanding Interlocking Dynamics of Cooperative Rationalization
|
https://papers.nips.cc/paper_files/paper/2021/hash/6a711a119a8a7a9f877b5f379bfe9ea2-Abstract.html
|
Mo Yu, Yang Zhang, Shiyu Chang, Tommi Jaakkola
|
https://papers.nips.cc/paper_files/paper/2021/hash/6a711a119a8a7a9f877b5f379bfe9ea2-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12605-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6a711a119a8a7a9f877b5f379bfe9ea2-Paper.pdf
|
https://openreview.net/forum?id=1dq2MVDXot-
|
https://papers.nips.cc/paper_files/paper/2021/file/6a711a119a8a7a9f877b5f379bfe9ea2-Supplemental.pdf
|
Selective rationalization explains the prediction of complex neural networks by finding a small subset of the input that is sufficient to predict the neural model output. The selection mechanism is commonly integrated into the model itself by specifying a two-component cascaded system consisting of a rationale generator, which makes a binary selection of the input features (which is the rationale), and a predictor, which predicts the output based only on the selected features. The components are trained jointly to optimize prediction performance. In this paper, we reveal a major problem with such a cooperative rationalization paradigm --- model interlocking. Interlocking arises when the predictor overfits to the features selected by the generator, thus reinforcing the generator's selection even if the selected rationales are sub-optimal. The fundamental cause of the interlocking problem is that the rationalization objective to be minimized is concave with respect to the generator’s selection policy. We propose a new rationalization framework, called A2R, which introduces a third component into the architecture, a predictor driven by soft attention as opposed to selection. The generator now realizes both soft and hard attention over the features and these are fed into the two different predictors. While the generator still seeks to support the original predictor performance, it also minimizes a gap between the two predictors. As we will show theoretically, since the attention-based predictor exhibits a better convexity property, A2R can overcome the concavity barrier. Our experiments on two synthetic benchmarks and two real datasets demonstrate that A2R can significantly alleviate the interlocking problem and find explanations that better align with human judgments.
| null |
Adversarial Robustness without Adversarial Training: A Teacher-Guided Curriculum Learning Approach
|
https://papers.nips.cc/paper_files/paper/2021/hash/6a971e08a01e6676d0f1a6e0dacbbd67-Abstract.html
|
Anindya Sarkar, Anirban Sarkar, Sowrya Gali, Vineeth N Balasubramanian
|
https://papers.nips.cc/paper_files/paper/2021/hash/6a971e08a01e6676d0f1a6e0dacbbd67-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12606-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6a971e08a01e6676d0f1a6e0dacbbd67-Paper.pdf
|
https://openreview.net/forum?id=MqCzSKCQ1QB
|
https://papers.nips.cc/paper_files/paper/2021/file/6a971e08a01e6676d0f1a6e0dacbbd67-Supplemental.pdf
|
Current SOTA adversarially robust models are mostly based on adversarial training (AT) and differ only by some regularizers either at the inner maximization or outer minimization steps. Because the inner maximization step is iterative in nature, these methods take a long time to train. We propose a non-iterative method that enforces the following ideas during training. Attribution maps are more aligned to the actual object in the image for adversarially robust models compared to naturally trained models. Also, the allowed set of pixels to perturb an image (that changes the model decision) should be restricted to the object pixels only, which reduces the attack strength by limiting the attack space. Our method achieves significant performance gains with a little extra effort (10-20%) over existing AT models and outperforms all other methods in terms of adversarial as well as natural accuracy. We have performed extensive experimentation with CIFAR-10, CIFAR-100, and TinyImageNet datasets and reported results against many popular strong adversarial attacks to prove the effectiveness of our method.
| null |
Tactical Optimism and Pessimism for Deep Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/6abcc8f24321d1eb8c95855eab78ee95-Abstract.html
|
Ted Moskovitz, Jack Parker-Holder, Aldo Pacchiano, Michael Arbel, Michael Jordan
|
https://papers.nips.cc/paper_files/paper/2021/hash/6abcc8f24321d1eb8c95855eab78ee95-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12607-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6abcc8f24321d1eb8c95855eab78ee95-Paper.pdf
|
https://openreview.net/forum?id=a4WgjcLeZIn
|
https://papers.nips.cc/paper_files/paper/2021/file/6abcc8f24321d1eb8c95855eab78ee95-Supplemental.pdf
|
In recent years, deep off-policy actor-critic algorithms have become a dominant approach to reinforcement learning for continuous control. One of the primary drivers of this improved performance is the use of pessimistic value updates to address function approximation errors, which previously led to disappointing performance. However, a direct consequence of pessimism is reduced exploration, running counter to theoretical support for the efficacy of optimism in the face of uncertainty. So which approach is best? In this work, we show that the most effective degree of optimism can vary both across tasks and over the course of learning. Inspired by this insight, we introduce a novel deep actor-critic framework, Tactical Optimistic and Pessimistic (TOP) estimation, which switches between optimistic and pessimistic value learning online. This is achieved by formulating the selection as a multi-arm bandit problem. We show in a series of continuous control tasks that TOP outperforms existing methods which rely on a fixed degree of optimism, setting a new state of the art in challenging pixel-based environments. Since our changes are simple to implement, we believe these insights can easily be incorporated into a multitude of off-policy algorithms.
| null |
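The record above (TOP) frames the choice between optimistic and pessimistic value targets as a multi-arm bandit problem; below is a minimal, generic sketch of such an outer loop using a UCB bandit over a small set of optimism levels. The arm set, the reward signal, and the UCB update are illustrative assumptions and not the bandit algorithm from the paper.

```python
import numpy as np

class OptimismBandit:
    """UCB-style bandit over a discrete set of optimism coefficients."""

    def __init__(self, arms=(-1.0, 0.0, 1.0), exploration=1.0):
        self.arms = list(arms)            # degree of optimism used in the value targets
        self.counts = np.zeros(len(arms))
        self.means = np.zeros(len(arms))
        self.c = exploration

    def select(self):
        total = self.counts.sum()
        if total < len(self.arms):                       # play each arm once first
            self.idx = int(total)
        else:
            ucb = self.means + self.c * np.sqrt(np.log(total) / self.counts)
            self.idx = int(np.argmax(ucb))
        return self.arms[self.idx]

    def update(self, reward):
        self.counts[self.idx] += 1
        n = self.counts[self.idx]
        self.means[self.idx] += (reward - self.means[self.idx]) / n  # running average

rng = np.random.default_rng(0)
bandit = OptimismBandit()
for episode in range(200):
    beta = bandit.select()                               # optimism level for this round
    # ... run the off-policy actor-critic learner with this beta ...
    episode_return = -beta ** 2 + rng.normal(scale=0.1)  # toy feedback favouring beta = 0
    bandit.update(episode_return)
print(bandit.arms[int(np.argmax(bandit.means))])          # arm the bandit currently prefers
```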
Towards Hyperparameter-free Policy Selection for Offline Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/6add07cf50424b14fdf649da87843d01-Abstract.html
|
Siyuan Zhang, Nan Jiang
|
https://papers.nips.cc/paper_files/paper/2021/hash/6add07cf50424b14fdf649da87843d01-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12608-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6add07cf50424b14fdf649da87843d01-Paper.pdf
|
https://openreview.net/forum?id=9RFGrW9z9te
|
https://papers.nips.cc/paper_files/paper/2021/file/6add07cf50424b14fdf649da87843d01-Supplemental.pdf
|
How to select between policies and value functions produced by different training algorithms in offline reinforcement learning (RL)---which is crucial for hyperparameter tuning---is an important open question. Existing approaches based on off-policy evaluation (OPE) often require additional function approximation and hence hyperparameters, creating a chicken-and-egg situation. In this paper, we design hyperparameter-free algorithms for policy selection based on BVFT [XJ21], a recent theoretical advance in value-function selection, and demonstrate their effectiveness in discrete-action benchmarks such as Atari. To address performance degradation due to poor critics in continuous-action domains, we further combine BVFT with OPE to get the best of both worlds, and obtain a hyperparameter-tuning method for $Q$-function based OPE with theoretical guarantees as a side product.
| null |
FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout
|
https://papers.nips.cc/paper_files/paper/2021/hash/6aed000af86a084f9cb0264161e29dd3-Abstract.html
|
Samuel Horváth, Stefanos Laskaridis, Mario Almeida, Ilias Leontiadis, Stylianos Venieris, Nicholas Lane
|
https://papers.nips.cc/paper_files/paper/2021/hash/6aed000af86a084f9cb0264161e29dd3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12609-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6aed000af86a084f9cb0264161e29dd3-Paper.pdf
|
https://openreview.net/forum?id=4fLr7H5D_eT
|
https://papers.nips.cc/paper_files/paper/2021/file/6aed000af86a084f9cb0264161e29dd3-Supplemental.pdf
|
Federated Learning (FL) has been gaining significant traction across different ML tasks, ranging from vision to keyboard predictions. In large-scale deployments, client heterogeneity is a fact and constitutes a primary problem for fairness, training performance and accuracy. Although significant efforts have been made to tackle statistical data heterogeneity, the diversity in the processing capabilities and network bandwidth of clients, termed system heterogeneity, has remained largely unexplored. Current solutions either disregard a large portion of available devices or set a uniform limit on the model's capacity, restricted by the least capable participants. In this work, we introduce Ordered Dropout, a mechanism that achieves an ordered, nested representation of knowledge in Neural Networks and enables the extraction of lower footprint submodels without the need for retraining. We further show that for linear maps our Ordered Dropout is equivalent to SVD. We employ this technique, along with a self-distillation methodology, in the realm of FL in a framework called FjORD. FjORD alleviates the problem of client system heterogeneity by tailoring the model width to the client's capabilities. Extensive evaluation on both CNNs and RNNs across diverse modalities shows that FjORD consistently leads to significant performance gains over state-of-the-art baselines while maintaining its nested structure.
| null |
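A minimal sketch of the Ordered Dropout mechanism described in the record above (FjORD): instead of dropping random units, a width fraction p is chosen and only the first ceil(p * d) units are kept, so smaller submodels are nested prefixes of the full model. The candidate width fractions and shapes are illustrative assumptions.

```python
import numpy as np

def ordered_dropout(h, p):
    """Keep only the first ceil(p * d) channels of activations h; zero the rest."""
    d = h.shape[-1]
    keep = int(np.ceil(p * d))
    mask = np.zeros(d)
    mask[:keep] = 1.0            # nested: a larger p strictly extends a smaller p
    return h * mask

rng = np.random.default_rng(0)
widths = [0.25, 0.5, 0.75, 1.0]      # candidate width fractions (per-client capability)
h = rng.standard_normal((2, 8))      # a batch of activations with 8 channels

p = rng.choice(widths)               # each client trains at (up to) its own width
print(p, ordered_dropout(h, p))
```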
Optimal Uniform OPE and Model-based Offline Reinforcement Learning in Time-Homogeneous, Reward-Free and Task-Agnostic Settings
|
https://papers.nips.cc/paper_files/paper/2021/hash/6b3c49bdba5be0d322334e30c459f8bd-Abstract.html
|
Ming Yin, Yu-Xiang Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/6b3c49bdba5be0d322334e30c459f8bd-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12610-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6b3c49bdba5be0d322334e30c459f8bd-Paper.pdf
|
https://openreview.net/forum?id=yMf3SLah5-y
|
https://papers.nips.cc/paper_files/paper/2021/file/6b3c49bdba5be0d322334e30c459f8bd-Supplemental.pdf
|
This work studies the statistical limits of uniform convergence for offline policy evaluation (OPE) problems with model-based methods (for episodic MDP) and provides a unified framework towards optimal learning for several well-motivated offline tasks. Uniform OPE $\sup_\Pi|Q^\pi-\hat{Q}^\pi|<\epsilon$ is a stronger measure than the point-wise OPE and ensures offline learning when $\Pi$ contains all policies (the global class). In this paper, we establish an $\Omega(H^2 S/d_m\epsilon^2)$ lower bound (over model-based family) for the global uniform OPE and our main result establishes an upper bound of $\tilde{O}(H^2/d_m\epsilon^2)$ for the \emph{local} uniform convergence that applies to all \emph{near-empirically optimal} policies for the MDPs with \emph{stationary} transition. Here $d_m$ is the minimal marginal state-action probability. Critically, the highlight in achieving the optimal rate $\tilde{O}(H^2/d_m\epsilon^2)$ is our design of \emph{singleton absorbing MDP}, which is a new sharp analysis tool that works with the model-based approach. We generalize such a model-based framework to the new settings: offline task-agnostic and the offline reward-free with optimal complexity $\tilde{O}(H^2\log(K)/d_m\epsilon^2)$ ($K$ is the number of tasks) and $\tilde{O}(H^2S/d_m\epsilon^2)$ respectively. These results provide a unified solution for simultaneously solving different offline RL problems.
| null |
MixSeq: Connecting Macroscopic Time Series Forecasting with Microscopic Time Series Data
|
https://papers.nips.cc/paper_files/paper/2021/hash/6b5754d737784b51ec5075c0dc437bf0-Abstract.html
|
Zhibo Zhu, Ziqi Liu, Ge Jin, Zhiqiang Zhang, Lei Chen, Jun Zhou, Jianyong Zhou
|
https://papers.nips.cc/paper_files/paper/2021/hash/6b5754d737784b51ec5075c0dc437bf0-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12611-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6b5754d737784b51ec5075c0dc437bf0-Paper.pdf
|
https://openreview.net/forum?id=VeZQA9KdjMK
|
https://papers.nips.cc/paper_files/paper/2021/file/6b5754d737784b51ec5075c0dc437bf0-Supplemental.pdf
|
Time series forecasting is widely used in business intelligence, e.g., to forecast stock market prices and sales, and to help analyze data trends. Most time series of interest are macroscopic time series that are aggregated from microscopic data. However, little work has studied forecasting macroscopic time series by leveraging data at the microscopic level, rather than directly modeling the macroscopic series. In this paper, we assume that the microscopic time series follow some unknown mixture probabilistic distributions. We theoretically show that as we identify the ground truth latent mixture components, the estimation of time series from each component could be improved because of lower variance, thus benefitting the estimation of macroscopic time series as well. Inspired by the power of Seq2seq and its variants on the modeling of time series data, we propose Mixture of Seq2seq (MixSeq), an end-to-end mixture model to cluster microscopic time series, where all the components come from a family of Seq2seq models parameterized by different parameters. Extensive experiments on both synthetic and real-world data show the superiority of our approach.
| null |
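The variance argument sketched in the record above (MixSeq) can be made concrete with the law of total variance; this is a generic identity, not the paper's specific theorem.

```latex
% Law of total variance for a microscopic series X with latent mixture component Z:
\[
  \operatorname{Var}(X) \;=\;
  \mathbb{E}\bigl[\operatorname{Var}(X \mid Z)\bigr]
  \;+\; \operatorname{Var}\bigl(\mathbb{E}[X \mid Z]\bigr)
  \;\ge\; \mathbb{E}\bigl[\operatorname{Var}(X \mid Z)\bigr],
\]
% so, conditioned on the correct component, each per-cluster forecaster faces lower
% variance than a single model fit to the unconditioned mixture.
```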
Pareto Domain Adaptation
|
https://papers.nips.cc/paper_files/paper/2021/hash/6ba3af5d7b2790e73f0de32e5c8c1798-Abstract.html
|
fangrui lv, Jian Liang, Kaixiong Gong, Shuang Li, Chi Harold Liu, Han Li, Di Liu, Guoren Wang
|
https://papers.nips.cc/paper_files/paper/2021/hash/6ba3af5d7b2790e73f0de32e5c8c1798-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12612-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6ba3af5d7b2790e73f0de32e5c8c1798-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=frgb7FsKWs3
|
https://papers.nips.cc/paper_files/paper/2021/file/6ba3af5d7b2790e73f0de32e5c8c1798-Supplemental.pdf
|
Domain adaptation (DA) attempts to transfer knowledge from a labeled source domain to an unlabeled target domain that follows a different distribution from the source. To achieve this, DA methods include a source classification objective to extract the source knowledge and a domain alignment objective to diminish the domain shift, ensuring knowledge transfer. Typically, previous DA methods adopt weight hyper-parameters to linearly combine the training objectives into an overall objective. However, the gradient directions of these objectives may conflict with each other due to domain shift. Under such circumstances, the linear optimization scheme might decrease the overall objective value at the expense of damaging one of the training objectives, leading to restricted solutions. In this paper, we rethink the optimization scheme for DA from a gradient-based perspective. We propose a Pareto Domain Adaptation (ParetoDA) approach to control the overall optimization direction, aiming to cooperatively optimize all training objectives. Specifically, to reach a desirable solution on the target domain, we design a surrogate loss mimicking target classification. To improve target-prediction accuracy and thereby support the mimicking, we propose a target-prediction refining mechanism that exploits domain labels via Bayes’ theorem. On the other hand, since prior knowledge of weighting schemes for the objectives is often unavailable to guide optimization toward the optimal solution on the target domain, we propose a dynamic preference mechanism that dynamically guides our cooperative optimization using the gradient of the surrogate loss on a held-out unlabeled target dataset. Our theoretical analyses show that the held-out data can guide the optimization without being over-fitted. Extensive experiments on image classification and semantic segmentation benchmarks demonstrate the effectiveness of ParetoDA.
| null |
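For orientation on the gradient-based view in the entry above, the following is a generic min-norm combination of two conflicting gradients (the two-objective case of MGDA), not ParetoDA's dynamic preference mechanism; the negative of the returned vector decreases both objectives to first order whenever such a common descent direction exists.

```python
import torch

def min_norm_direction(g1, g2):
    """Closed-form min-norm element of the segment between two gradients
    (the two-task case of MGDA). Stepping along the negative of this vector
    does not increase either objective to first order (it may be zero if the
    gradients are exactly opposed)."""
    diff = g1 - g2
    alpha = torch.dot(g2 - g1, g2) / diff.dot(diff).clamp_min(1e-12)
    alpha = alpha.clamp(0.0, 1.0)
    return alpha * g1 + (1 - alpha) * g2

# Toy usage with conflicting gradients.
g_cls = torch.tensor([1.0, 0.2])     # e.g. source-classification gradient
g_align = torch.tensor([-0.5, 1.0])  # e.g. domain-alignment gradient
print(min_norm_direction(g_cls, g_align))
```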
Divergence Frontiers for Generative Models: Sample Complexity, Quantization Effects, and Frontier Integrals
|
https://papers.nips.cc/paper_files/paper/2021/hash/6bf733bb7f81e866306e9b5f012419cb-Abstract.html
|
Lang Liu, Krishna Pillutla, Sean Welleck, Sewoong Oh, Yejin Choi, Zaid Harchaoui
|
https://papers.nips.cc/paper_files/paper/2021/hash/6bf733bb7f81e866306e9b5f012419cb-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12613-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6bf733bb7f81e866306e9b5f012419cb-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=Z_J5bCb4Rra
|
https://papers.nips.cc/paper_files/paper/2021/file/6bf733bb7f81e866306e9b5f012419cb-Supplemental.pdf
|
The spectacular success of deep generative models calls for quantitative tools to measure their statistical performance. Divergence frontiers have recently been proposed as an evaluation framework for generative models, due to their ability to measure the quality-diversity trade-off inherent to deep generative modeling. We establish non-asymptotic bounds on the sample complexity of divergence frontiers. We also introduce frontier integrals which provide summary statistics of divergence frontiers. We show how smoothed estimators such as Good-Turing or Krichevsky-Trofimov can overcome the missing mass problem and lead to faster rates of convergence. We illustrate the theoretical results with numerical examples from natural language processing and computer vision.
| null |
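A small numpy sketch related to the entry above: Krichevsky-Trofimov (add-1/2) smoothing of empirical counts and a divergence frontier traced by mixing the two smoothed distributions. The exact frontier parameterization used in the paper may differ; this only shows the shape of the computation.

```python
import numpy as np

def kt_estimate(counts):
    """Krichevsky-Trofimov add-1/2 smoothing of an empirical distribution."""
    counts = np.asarray(counts, dtype=float)
    return (counts + 0.5) / (counts.sum() + 0.5 * len(counts))

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def divergence_frontier(p, q, num=21):
    """Trace (KL(r || q), KL(r || p)) for mixtures r = lam*p + (1-lam)*q,
    one way to summarise the quality/diversity trade-off between a model
    distribution p and a data distribution q on a shared support."""
    pts = []
    for lam in np.linspace(0.01, 0.99, num):
        r = lam * p + (1 - lam) * q
        pts.append((kl(r, q), kl(r, p)))
    return pts

p = kt_estimate([40, 5, 3, 0, 2])   # e.g. smoothed model counts
q = kt_estimate([20, 20, 5, 4, 1])  # e.g. smoothed data counts
print(divergence_frontier(p, q, num=5))
```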
Consistency Regularization for Variational Auto-Encoders
|
https://papers.nips.cc/paper_files/paper/2021/hash/6c19e0a6da12dc02239312f151072ddd-Abstract.html
|
Samarth Sinha, Adji Bousso Dieng
|
https://papers.nips.cc/paper_files/paper/2021/hash/6c19e0a6da12dc02239312f151072ddd-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12614-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6c19e0a6da12dc02239312f151072ddd-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=djbC2A4uTHP
|
https://papers.nips.cc/paper_files/paper/2021/file/6c19e0a6da12dc02239312f151072ddd-Supplemental.zip
|
Variational Auto-Encoders (VAEs) are a powerful approach to unsupervised learning. They enable scalable approximate posterior inference in latent-variable models using variational inference. A VAE posits a variational family parameterized by a deep neural network---called an encoder---that takes data as input. This encoder is shared across all observations, which amortizes the cost of inference. However, the encoder of a VAE has the undesirable property that it maps a given observation and a semantics-preserving transformation of it to different latent representations. This "inconsistency" of the encoder lowers the quality of the learned representations, especially for downstream tasks, and also negatively affects generalization. In this paper, we propose a regularization method to enforce consistency in VAEs. The idea is to minimize the Kullback-Leibler (KL) divergence between the variational distribution when conditioning on the observation and the variational distribution when conditioning on a random semantics-preserving transformation of that observation. This regularization is applicable to any VAE. In our experiments, we applied it to four different VAE variants on several benchmark datasets and found that it consistently improves the quality of the learned representations and also leads to better generalization. In particular, when applied to the Nouveau VAE (NVAE), our regularization method yields state-of-the-art performance on MNIST, CIFAR-10, and CELEBA. We also applied our method to 3D data and found it learns representations of superior quality, as measured by accuracy on a downstream classification task. Finally, we show that our method can even outperform the triplet loss, an advanced and popular contrastive-learning-based method for representation learning.
| null |
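A minimal PyTorch sketch of the consistency regularizer described in the entry above, assuming a diagonal-Gaussian encoder that returns (mu, logvar) and an arbitrary semantics-preserving transform; the direction of the KL term is a choice made here for illustration.

```python
import torch

def gaussian_kl(mu1, logvar1, mu2, logvar2):
    """KL( N(mu1, var1) || N(mu2, var2) ) for diagonal Gaussians,
    summed over latent dimensions and averaged over the batch."""
    var1, var2 = logvar1.exp(), logvar2.exp()
    kl = 0.5 * (logvar2 - logvar1 + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)
    return kl.sum(dim=-1).mean()

def consistency_loss(encoder, x, transform):
    """Penalise disagreement between q(z|x) and q(z|t(x)).
    `encoder` returns (mu, logvar); `transform` is any semantics-preserving
    augmentation (both are placeholders here)."""
    mu_x, logvar_x = encoder(x)
    mu_t, logvar_t = encoder(transform(x))
    return gaussian_kl(mu_t, logvar_t, mu_x, logvar_x)
```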
Score-based Generative Neural Networks for Large-Scale Optimal Transport
|
https://papers.nips.cc/paper_files/paper/2021/hash/6c2e49911b68d315555d5b3eb0dd45bf-Abstract.html
|
Max Daniels, Tyler Maunu, Paul Hand
|
https://papers.nips.cc/paper_files/paper/2021/hash/6c2e49911b68d315555d5b3eb0dd45bf-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12615-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6c2e49911b68d315555d5b3eb0dd45bf-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=PPzV1H4atM4
|
https://papers.nips.cc/paper_files/paper/2021/file/6c2e49911b68d315555d5b3eb0dd45bf-Supplemental.pdf
|
We consider the fundamental problem of sampling the optimal transport coupling between given source and target distributions. In certain cases, the optimal transport plan takes the form of a one-to-one mapping from the source support to the target support, but learning or even approximating such a map is computationally challenging for large and high-dimensional datasets due to the high cost of linear programming routines and an intrinsic curse of dimensionality. We study instead the Sinkhorn problem, a regularized form of optimal transport whose solutions are couplings between the source and the target distribution. We introduce a novel framework for learning the Sinkhorn coupling between two distributions in the form of a score-based generative model. Conditioned on source data, our procedure iterates Langevin Dynamics to sample target data according to the regularized optimal coupling. Key to this approach is a neural network parametrization of the Sinkhorn problem, and we prove convergence of gradient descent with respect to network parameters in this formulation. We demonstrate its empirical success on a variety of large scale optimal transport tasks.
| null |
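A minimal sketch of the sampling loop in the entry above: unadjusted Langevin dynamics driven by a conditional score; here the score is analytic for a toy Gaussian conditional, whereas in the paper it would come from a learned network parameterizing the Sinkhorn coupling.

```python
import torch

def langevin_sample(score_fn, x_init, step=1e-2, n_steps=500):
    """Unadjusted Langevin dynamics: x <- x + step * score(x) + sqrt(2*step) * noise.
    `score_fn` stands in for a learned conditional score given source samples."""
    x = x_init.clone()
    for _ in range(n_steps):
        x = x + step * score_fn(x) + (2 * step) ** 0.5 * torch.randn_like(x)
    return x

# Toy conditional target: y | x_src ~ N(x_src + 1, 0.25 I), so the score is analytic.
x_src = torch.zeros(128, 2)
score = lambda y: -(y - (x_src + 1.0)) / 0.25
samples = langevin_sample(score, torch.randn(128, 2))
print(samples.mean(dim=0))  # should be close to (1, 1)
```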
Interactive Label Cleaning with Example-based Explanations
|
https://papers.nips.cc/paper_files/paper/2021/hash/6c349155b122aa8ad5c877007e05f24f-Abstract.html
|
Stefano Teso, Andrea Bontempelli, Fausto Giunchiglia, Andrea Passerini
|
https://papers.nips.cc/paper_files/paper/2021/hash/6c349155b122aa8ad5c877007e05f24f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12616-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6c349155b122aa8ad5c877007e05f24f-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=T6m9bNI7C__
|
https://papers.nips.cc/paper_files/paper/2021/file/6c349155b122aa8ad5c877007e05f24f-Supplemental.pdf
|
We tackle sequential learning under label noise in applications where a human supervisor can be queried to relabel suspicious examples. Existing approaches are flawed in that they only relabel incoming examples that look "suspicious" to the model. As a consequence, mislabeled examples that elude (or don't undergo) this cleaning step end up tainting the training data and the model, with no further chance of being cleaned. We propose CINCER, a novel approach that cleans both new and past data by identifying \emph{pairs of mutually incompatible examples}. Whenever it detects a suspicious example, CINCER identifies a counter-example in the training set that - according to the model - is maximally incompatible with the suspicious example, and asks the annotator to relabel either or both examples, resolving the possible inconsistency. The counter-examples are chosen to be maximally incompatible, so as to serve as \emph{explanations} of the model's suspicion, and highly influential, so as to convey as much information as possible if relabeled. CINCER achieves this by leveraging an efficient and robust approximation of influence functions based on the Fisher information matrix (FIM). Our extensive empirical evaluation shows that clarifying the reasons behind the model's suspicions by cleaning the counter-examples helps in acquiring substantially better data and models, especially when paired with our FIM approximation.
| null |
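A heavily simplified stand-in for the counter-example selection in the entry above: rank training examples by a diagonal empirical-Fisher influence-style score against the suspicious example. The exact FIM approximation and incompatibility criterion of CINCER differ; the helper names and the damping term are assumptions.

```python
import torch

def per_example_grad(model, loss_fn, x, y):
    """Flattened gradient of the loss on a single (batched) example."""
    model.zero_grad()
    loss_fn(model(x), y).backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()
                      if p.grad is not None])

def most_influential(model, loss_fn, suspicious, train_set, damping=1e-3):
    """Rank training examples by g_suspicious^T diag(F)^{-1} g_candidate,
    with diag(F) the diagonal empirical Fisher, and return the top index.
    `suspicious` is an (x, y) pair; `train_set` is a list of (x, y) pairs."""
    g_s = per_example_grad(model, loss_fn, *suspicious)
    fim_diag = torch.zeros_like(g_s)
    grads = []
    for x, y in train_set:
        g = per_example_grad(model, loss_fn, x, y)
        grads.append(g)
        fim_diag += g * g
    fim_diag = fim_diag / len(train_set) + damping
    scores = torch.stack([(g_s * g / fim_diag).sum() for g in grads])
    return int(scores.argmax())
```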
Gradient Descent on Two-layer Nets: Margin Maximization and Simplicity Bias
|
https://papers.nips.cc/paper_files/paper/2021/hash/6c351da15b5e8a743a21ee96a86e25df-Abstract.html
|
Kaifeng Lyu, Zhiyuan Li, Runzhe Wang, Sanjeev Arora
|
https://papers.nips.cc/paper_files/paper/2021/hash/6c351da15b5e8a743a21ee96a86e25df-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12617-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6c351da15b5e8a743a21ee96a86e25df-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=Aa5oPXc_1IV
|
https://papers.nips.cc/paper_files/paper/2021/file/6c351da15b5e8a743a21ee96a86e25df-Supplemental.pdf
|
The generalization mystery of overparametrized deep nets has motivated efforts to understand how gradient descent (GD) converges to low-loss solutions that generalize well. Real-life neural networks are initialized from small random values and trained with cross-entropy loss for classification (unlike the "lazy" or "NTK" regime of training where analysis was more successful), and a recent sequence of results (Lyu and Li, 2020; Chizat and Bach, 2020; Ji and Telgarsky, 2020) provide theoretical evidence that GD may converge to the "max-margin" solution with zero loss, which presumably generalizes well. However, the global optimality of margin is proved only in some settings where neural nets are infinitely or exponentially wide. The current paper is able to establish this global optimality for two-layer Leaky ReLU nets trained with gradient flow on linearly separable and symmetric data, regardless of the width. The analysis also gives some theoretical justification for recent empirical findings (Kalimeris et al., 2019) on the so-called simplicity bias of GD towards linear or other "simple" classes of solutions, especially early in training. On the pessimistic side, the paper suggests that such results are fragile. A simple data manipulation can make gradient flow converge to a linear classifier with suboptimal margin.
| null |
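For reference, the quantity whose global optimality is discussed in the entry above is the normalized margin of a 2-homogeneous two-layer network, following the standard definition used in this line of work (Lyu and Li, 2020); this is a hedged summary for orientation, not a new result.

```latex
% Normalized margin of a two-layer Leaky ReLU network
%   f(x;\theta) = \sum_k a_k \, \sigma(w_k^\top x),
% which is 2-homogeneous in \theta = (a, W):
\bar{\gamma}(\theta) \;=\; \frac{\min_i \, y_i \, f(x_i;\theta)}{\lVert \theta \rVert_2^{2}}
% Gradient flow on the logistic loss converges in direction to a KKT point of the
% margin-maximization problem; the entry above concerns when that point is globally optimal.
```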
Glance-and-Gaze Vision Transformer
|
https://papers.nips.cc/paper_files/paper/2021/hash/6c524f9d5d7027454a783c841250ba71-Abstract.html
|
Qihang Yu, Yingda Xia, Yutong Bai, Yongyi Lu, Alan L. Yuille, Wei Shen
|
https://papers.nips.cc/paper_files/paper/2021/hash/6c524f9d5d7027454a783c841250ba71-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12618-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6c524f9d5d7027454a783c841250ba71-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=GitDcBlcg78
|
https://papers.nips.cc/paper_files/paper/2021/file/6c524f9d5d7027454a783c841250ba71-Supplemental.zip
|
Recently, a series of vision Transformers has emerged, showing superior performance with more compact model sizes than conventional convolutional neural networks, thanks to the strong ability of Transformers to model long-range dependencies. However, the advantages of vision Transformers come at a price: self-attention, the core component of the Transformer, has quadratic complexity in the input sequence length. This leads to a dramatic increase in computation and memory cost as the sequence length grows, making it difficult to apply Transformers to vision tasks that require dense predictions on high-resolution feature maps. In this paper, we propose a new vision Transformer, named Glance-and-Gaze Transformer (GG-Transformer), to address the aforementioned issues. It is motivated by the Glance-and-Gaze behavior of human beings when recognizing objects in natural scenes, and is able to efficiently model both long-range dependencies and local context. In GG-Transformer, the Glance and Gaze behaviors are realized by two parallel branches: the Glance branch performs self-attention on adaptively-dilated partitions of the input, which yields linear complexity while still enjoying a global receptive field; the Gaze branch is implemented by a simple depth-wise convolutional layer, which supplements the features obtained by the Glance mechanism with local image context. We empirically demonstrate that our method achieves consistently superior performance over previous state-of-the-art Transformers on various vision tasks and benchmarks.
| null |
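A simplified PyTorch sketch of the two branches described in the entry above: self-attention over dilated partitions (Glance) and a depth-wise convolution (Gaze). The fixed dilation, head count, and fusion by addition are illustrative assumptions, not the paper's full design.

```python
import torch
import torch.nn as nn

class GlanceGazeBlock(nn.Module):
    """Glance: tokens with the same (i mod r, j mod r) form one partition, so each
    partition spans the whole image at stride r and attention inside it is global
    yet linear in the total token count. Gaze: a depth-wise conv adds local context."""
    def __init__(self, dim, heads=4, dilation=2):
        super().__init__()
        self.r = dilation
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gaze = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)

    def forward(self, x):                      # x: (B, H, W, C), H and W divisible by r
        B, H, W, C = x.shape
        r = self.r
        # Glance branch: gather dilated partitions and self-attend within each.
        g = x.view(B, H // r, r, W // r, r, C).permute(0, 2, 4, 1, 3, 5)
        g = g.reshape(B * r * r, (H // r) * (W // r), C)
        g, _ = self.attn(g, g, g)
        g = g.reshape(B, r, r, H // r, W // r, C).permute(0, 3, 1, 4, 2, 5)
        g = g.reshape(B, H, W, C)
        # Gaze branch: depth-wise convolution restores local image context.
        z = self.gaze(x.permute(0, 3, 1, 2)).permute(0, 2, 3, 1)
        return g + z

block = GlanceGazeBlock(dim=32)
print(block(torch.randn(2, 8, 8, 32)).shape)   # torch.Size([2, 8, 8, 32])
```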
Stochastic $L^\natural$-convex Function Minimization
|
https://papers.nips.cc/paper_files/paper/2021/hash/6c81c83c4bd0b58850495f603ab45a93-Abstract.html
|
Haixiang Zhang, Zeyu Zheng, Javad Lavaei
|
https://papers.nips.cc/paper_files/paper/2021/hash/6c81c83c4bd0b58850495f603ab45a93-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12619-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6c81c83c4bd0b58850495f603ab45a93-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=0hggPuM2b2
|
https://papers.nips.cc/paper_files/paper/2021/file/6c81c83c4bd0b58850495f603ab45a93-Supplemental.pdf
|
We study an extension of the stochastic submodular minimization problem, namely, the stochastic $L^\natural$-convex minimization problem. We develop the first polynomial-time algorithms that return a near-optimal solution with high probability. We design a novel truncation operation to further reduce the computational complexity of the proposed algorithms. When applied to a stochastic submodular function, the computational complexity of the proposed algorithms is lower than that of the existing stochastic submodular minimization algorithms. In addition, we provide a strongly polynomial approximation algorithm whose execution does not require any prior knowledge of the objective function other than its $L^\natural$-convexity. A lower bound on the computational complexity required to achieve a high-probability error bound is also derived. Numerical experiments are conducted to demonstrate the efficiency predicted by our theoretical findings.
| null |
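For readers unfamiliar with the function class in the entry above, a standard characterization of $L^\natural$-convexity (discrete midpoint convexity) is sketched below for orientation; the paper should be consulted for its precise assumptions.

```latex
% Discrete midpoint convexity, a standard characterization of L^\natural-convexity
% (Murota, Discrete Convex Analysis): for all integer points x, y in the domain,
f(x) + f(y) \;\ge\; f\!\left(\left\lceil \tfrac{x+y}{2} \right\rceil\right) + f\!\left(\left\lfloor \tfrac{x+y}{2} \right\rfloor\right),
% with the ceiling and floor taken component-wise. On \{0,1\}^n this class coincides
% with submodular set functions, which is how submodular minimization embeds into this setting.
```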
Self-Supervised GANs with Label Augmentation
|
https://papers.nips.cc/paper_files/paper/2021/hash/6cb5da3513bd26085ee3fad631ebb37a-Abstract.html
|
Liang Hou, Huawei Shen, Qi Cao, Xueqi Cheng
|
https://papers.nips.cc/paper_files/paper/2021/hash/6cb5da3513bd26085ee3fad631ebb37a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12620-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6cb5da3513bd26085ee3fad631ebb37a-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=MT0pTKLyzkT
|
https://papers.nips.cc/paper_files/paper/2021/file/6cb5da3513bd26085ee3fad631ebb37a-Supplemental.pdf
|
Recently, transformation-based self-supervised learning has been applied to generative adversarial networks (GANs) to mitigate catastrophic forgetting in the discriminator by introducing a stationary learning environment. However, the separate self-supervised tasks in existing self-supervised GANs pursue a goal that is inconsistent with generative modeling, because their self-supervised classifiers are agnostic to the generator distribution. To address this problem, we propose a novel self-supervised GAN that unifies the GAN task with the self-supervised task by augmenting the GAN labels (real or fake) via self-supervision of data transformation. Specifically, the original discriminator and the self-supervised classifier are unified into a label-augmented discriminator that predicts the augmented labels, so as to be aware of both the generator distribution and the data distribution under every transformation, and then provides the discrepancy between them to optimize the generator. Theoretically, we prove that the optimal generator could converge to replicate the real data distribution. Empirically, we show that the proposed method significantly outperforms previous self-supervised and data-augmentation GANs on both generative modeling and representation learning across benchmark datasets.
| null |
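A minimal PyTorch sketch of the label augmentation described in the entry above, assuming four rotation transformations and a discriminator with 2 x 4 output logits; the actual architecture and training schedule are not reproduced.

```python
import torch
import torch.nn.functional as F

def augmented_targets(batch_size, num_rot=4, real=True, device="cpu"):
    """Joint (real/fake, rotation) labels: classes 0..3 for real x {0,90,180,270},
    classes 4..7 for fake x {0,90,180,270}. Returns (labels, rotation indices)."""
    rot = torch.randint(0, num_rot, (batch_size,), device=device)
    labels = rot if real else rot + num_rot
    return labels, rot

def discriminator_loss(disc, x_real, x_fake):
    """Single cross-entropy over the augmented label space, sketching how the GAN
    task and the rotation self-supervision task are unified.
    `disc` is assumed to output 2 * num_rot logits."""
    y_real, rot_r = augmented_targets(x_real.size(0), real=True, device=x_real.device)
    y_fake, rot_f = augmented_targets(x_fake.size(0), real=False, device=x_fake.device)
    x_real_rot = torch.stack([torch.rot90(img, int(k), dims=(-2, -1))
                              for img, k in zip(x_real, rot_r)])
    x_fake_rot = torch.stack([torch.rot90(img, int(k), dims=(-2, -1))
                              for img, k in zip(x_fake, rot_f)])
    logits = disc(torch.cat([x_real_rot, x_fake_rot]))
    return F.cross_entropy(logits, torch.cat([y_real, y_fake]))
```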
Shape As Points: A Differentiable Poisson Solver
|
https://papers.nips.cc/paper_files/paper/2021/hash/6cd9313ed34ef58bad3fdd504355e72c-Abstract.html
|
Songyou Peng, Chiyu Jiang, Yiyi Liao, Michael Niemeyer, Marc Pollefeys, Andreas Geiger
|
https://papers.nips.cc/paper_files/paper/2021/hash/6cd9313ed34ef58bad3fdd504355e72c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12621-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6cd9313ed34ef58bad3fdd504355e72c-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=Ecuu521mPpG
|
https://papers.nips.cc/paper_files/paper/2021/file/6cd9313ed34ef58bad3fdd504355e72c-Supplemental.pdf
|
In recent years, neural implicit representations have gained popularity in 3D reconstruction due to their expressiveness and flexibility. However, their implicit nature results in slow inference times and requires careful initialization. In this paper, we revisit the classic yet ubiquitous point cloud representation and introduce a differentiable point-to-mesh layer using a differentiable formulation of Poisson Surface Reconstruction (PSR) that allows for a GPU-accelerated fast solution of the indicator function given an oriented point cloud. The differentiable PSR layer allows us to efficiently and differentiably bridge the explicit 3D point representation with the 3D mesh via the implicit indicator field, enabling end-to-end optimization of surface reconstruction metrics such as Chamfer distance. This duality between points and meshes hence allows us to represent shapes as oriented point clouds, which are explicit, lightweight and expressive. Compared to neural implicit representations, our Shape-As-Points (SAP) model is more interpretable, lightweight, and accelerates inference time by one order of magnitude. Compared to other explicit representations such as points, patches, and meshes, SAP produces topology-agnostic, watertight manifold surfaces. We demonstrate the effectiveness of SAP on the task of surface reconstruction from unoriented point clouds and learning-based reconstruction.
| null |
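A toy 2D analogue of the Poisson step in the entry above: solve laplacian(chi) = div(v) spectrally on a periodic grid, where v stands in for the rasterized oriented point normals and chi for the indicator field. The paper's 3D, batched, differentiable implementation is not reproduced; this only illustrates the FFT-based solve.

```python
import numpy as np

def poisson_indicator_2d(v0, v1):
    """Spectral solve of laplacian(chi) = div(v) on a periodic 2D grid.
    v0, v1: (N, N) components of the vector field along array axes 0 and 1."""
    N = v0.shape[0]
    freq = 2 * np.pi * np.fft.fftfreq(N)
    k0, k1 = np.meshgrid(freq, freq, indexing="ij")
    k2 = k0 ** 2 + k1 ** 2
    k2[0, 0] = 1.0                                   # avoid 0/0 at the mean mode
    div_hat = 1j * (k0 * np.fft.fft2(v0) + k1 * np.fft.fft2(v1))
    chi_hat = -div_hat / k2
    chi_hat[0, 0] = 0.0                              # fix the free constant (zero-mean chi)
    return np.real(np.fft.ifft2(chi_hat))

# Smoke test: normals of a disc-shaped indicator roughly reproduce it (up to smoothing).
N = 64
i0, i1 = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
chi_true = (((i0 - N / 2) ** 2 + (i1 - N / 2) ** 2) < (N / 4) ** 2).astype(float)
v0, v1 = np.gradient(chi_true)
chi = poisson_indicator_2d(v0, v1)
print(np.corrcoef(chi.ravel(), chi_true.ravel())[0, 1])  # should be close to 1
```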
Outcome-Driven Reinforcement Learning via Variational Inference
|
https://papers.nips.cc/paper_files/paper/2021/hash/6cdd60ea0045eb7a6ec44c54d29ed402-Abstract.html
|
Tim G. J. Rudner, Vitchyr Pong, Rowan McAllister, Yarin Gal, Sergey Levine
|
https://papers.nips.cc/paper_files/paper/2021/hash/6cdd60ea0045eb7a6ec44c54d29ed402-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12622-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6cdd60ea0045eb7a6ec44c54d29ed402-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=4bzanicqvy8
|
https://papers.nips.cc/paper_files/paper/2021/file/6cdd60ea0045eb7a6ec44c54d29ed402-Supplemental.pdf
|
While reinforcement learning algorithms provide automated acquisition of optimal policies, practical application of such methods requires a number of design decisions, such as manually designing reward functions that not only define the task, but also provide sufficient shaping to accomplish it. In this paper, we view reinforcement learning as inferring policies that achieve desired outcomes, rather than as a problem of maximizing rewards. To solve this inference problem, we establish a novel variational inference formulation that allows us to derive a well-shaped reward function which can be learned directly from environment interactions. From the corresponding variational objective, we also derive a new probabilistic Bellman backup operator and use it to develop an off-policy algorithm to solve goal-directed tasks. We empirically demonstrate that this method eliminates the need to hand-craft reward functions for a suite of diverse manipulation and locomotion tasks and leads to effective goal-directed behaviors.
| null |
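For orientation on the inference view in the entry above, the generic variational lower bound for achieving a desired outcome is sketched below; the paper derives its own outcome-driven variant, reward function, and Bellman backup from a bound of this form.

```latex
% Generic variational lower bound for inferring a policy that achieves a desired
% outcome e (e.g. reaching a goal state), with q(tau) the trajectory distribution
% induced by the learned policy and p(tau) a prior (dynamics-consistent) distribution:
\log p(e) \;\ge\; \mathbb{E}_{q(\tau)}\!\left[\log p(e \mid \tau)\right] \;-\; \mathrm{KL}\!\left(q(\tau)\,\|\,p(\tau)\right)
% The first term rewards trajectories likely to produce the outcome; the KL term
% keeps the learned trajectory distribution consistent with the dynamics prior.
```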
Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/6ce8d8f3b038f737cefcdafcf3752452-Abstract.html
|
Yonggan Fu, Qixuan Yu, Yang Zhang, Shang Wu, Xu Ouyang, David Cox, Yingyan Lin
|
https://papers.nips.cc/paper_files/paper/2021/hash/6ce8d8f3b038f737cefcdafcf3752452-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12623-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6ce8d8f3b038f737cefcdafcf3752452-Paper.pdf
|
https://papers.nips.cchttps://openreview.net/forum?id=98zhe-xzviq
|
https://papers.nips.cc/paper_files/paper/2021/file/6ce8d8f3b038f737cefcdafcf3752452-Supplemental.pdf
|
Deep Neural Networks (DNNs) are known to be vulnerable to adversarial attacks, i.e., an imperceptible perturbation to the input can mislead DNNs trained on clean images into making erroneous predictions. To tackle this, adversarial training, which augments the training set with adversarial samples generated on the fly, is currently the most effective defense method. \textbf{Interestingly, we discover for the first time that there exist subnetworks with inborn robustness, matching or surpassing the robust accuracy of adversarially trained networks with comparable model sizes, within randomly initialized networks without any model training}, indicating that adversarial training on model weights is not indispensable for adversarial robustness. We name such subnetworks Robust Scratch Tickets (RSTs), which are also by nature efficient. Distinct from the popular lottery ticket hypothesis, neither the original dense networks nor the identified RSTs need to be trained. To validate and understand this fascinating finding, we further conduct extensive experiments to study the existence and properties of RSTs under different models, datasets, sparsity patterns, and attacks, drawing insights regarding the relationship between DNNs’ robustness and their initialization/overparameterization. Furthermore, we identify the poor adversarial transferability between RSTs of different sparsity ratios drawn from the same randomly initialized dense network, and propose a Random RST Switch (R2S) technique, which randomly switches between different RSTs, as a novel defense method built on top of RSTs. We believe our findings about RSTs have opened up a new perspective to study model robustness and extend the lottery ticket hypothesis.
| null |
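A minimal PyTorch sketch in the spirit of the entry above: keep the randomly initialized weights frozen and train only per-weight scores that select a subnetwork, using a straight-through top-k mask (as in the edge-popup algorithm this line of work builds on); this is not necessarily the paper's exact search procedure or the R2S defense.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMask(torch.autograd.Function):
    """Straight-through top-k mask: forward keeps the highest-scored weights,
    backward passes gradients to the scores unchanged."""
    @staticmethod
    def forward(ctx, scores, sparsity):
        k = int((1 - sparsity) * scores.numel())
        mask = torch.zeros_like(scores)
        idx = scores.flatten().topk(k).indices
        mask.view(-1)[idx] = 1.0
        return mask
    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None

class ScratchLinear(nn.Module):
    """Linear layer whose random weights stay frozen; only per-weight scores are
    trained to pick a subnetwork of the given sparsity."""
    def __init__(self, in_f, out_f, sparsity=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_f, in_f) / in_f ** 0.5,
                                   requires_grad=False)
        self.scores = nn.Parameter(torch.rand(out_f, in_f))
        self.sparsity = sparsity
    def forward(self, x):
        mask = TopKMask.apply(self.scores, self.sparsity)
        return F.linear(x, self.weight * mask)

layer = ScratchLinear(16, 4)
layer(torch.randn(8, 16)).sum().backward()       # gradients flow to the scores only
print(layer.scores.grad is not None, layer.weight.grad)  # True None
```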