title | url | authors | detail_url | tags | Bibtex | Paper | Reviews And Public Comment » | Supplemental | abstract | Supplemental Errata
---|---|---|---|---|---|---|---|---|---|---|
Scalable Quasi-Bayesian Inference for Instrumental Variable Regression
|
https://papers.nips.cc/paper_files/paper/2021/hash/56a3107cad6611c8337ee36d178ca129-Abstract.html
|
Ziyu Wang, Yuhao Zhou, Tongzheng Ren, Jun Zhu
|
https://papers.nips.cc/paper_files/paper/2021/hash/56a3107cad6611c8337ee36d178ca129-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12424-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/56a3107cad6611c8337ee36d178ca129-Paper.pdf
|
https://openreview.net/forum?id=ngiUsn5EIt
|
https://papers.nips.cc/paper_files/paper/2021/file/56a3107cad6611c8337ee36d178ca129-Supplemental.pdf
|
Recent years have witnessed an upsurge of interest in employing flexible machine learning models for instrumental variable (IV) regression, but the development of uncertainty quantification methodology is still lacking. In this work we present a scalable quasi-Bayesian procedure for IV regression, building upon the recently developed kernelized IV models. Contrary to Bayesian modeling for IV, our approach does not require additional assumptions on the data generating process, and leads to a scalable approximate inference algorithm with time cost comparable to the corresponding point estimation methods. Our algorithm can be further extended to work with neural network models. We analyze the theoretical properties of the proposed quasi-posterior, and demonstrate through empirical evaluation the competitive performance of our method.
| null |
Kernel Identification Through Transformers
|
https://papers.nips.cc/paper_files/paper/2021/hash/56c3b2c6ea3a83aaeeff35eeb45d700d-Abstract.html
|
Fergus Simpson, Ian Davies, Vidhi Lalchand, Alessandro Vullo, Nicolas Durrande, Carl Edward Rasmussen
|
https://papers.nips.cc/paper_files/paper/2021/hash/56c3b2c6ea3a83aaeeff35eeb45d700d-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12425-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/56c3b2c6ea3a83aaeeff35eeb45d700d-Paper.pdf
|
https://openreview.net/forum?id=B0rmtp9q6-_
|
https://papers.nips.cc/paper_files/paper/2021/file/56c3b2c6ea3a83aaeeff35eeb45d700d-Supplemental.pdf
|
Kernel selection plays a central role in determining the performance of Gaussian Process (GP) models, as the chosen kernel determines both the inductive biases and prior support of functions under the GP prior. This work addresses the challenge of constructing custom kernel functions for high-dimensional GP regression models. Drawing inspiration from recent progress in deep learning, we introduce a novel approach named KITT: Kernel Identification Through Transformers. KITT exploits a transformer-based architecture to generate kernel recommendations in under 0.1 seconds, which is several orders of magnitude faster than conventional kernel search algorithms. We train our model using synthetic data generated from priors over a vocabulary of known kernels. By exploiting the nature of the self-attention mechanism, KITT is able to process datasets with inputs of arbitrary dimension. We demonstrate that kernels chosen by KITT yield strong performance over a diverse collection of regression benchmarks.
| null |
Curriculum Design for Teaching via Demonstrations: Theory and Applications
|
https://papers.nips.cc/paper_files/paper/2021/hash/56c51a39a7c77d8084838cc920585bd0-Abstract.html
|
Gaurav Yengera, Rati Devidze, Parameswaran Kamalaruban, Adish Singla
|
https://papers.nips.cc/paper_files/paper/2021/hash/56c51a39a7c77d8084838cc920585bd0-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12426-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/56c51a39a7c77d8084838cc920585bd0-Paper.pdf
|
https://openreview.net/forum?id=1J21t9pd1AU
|
https://papers.nips.cc/paper_files/paper/2021/file/56c51a39a7c77d8084838cc920585bd0-Supplemental.pdf
|
We consider the problem of teaching via demonstrations in sequential decision-making settings. In particular, we study how to design a personalized curriculum over demonstrations to speed up the learner's convergence. We provide a unified curriculum strategy for two popular learner models: Maximum Causal Entropy Inverse Reinforcement Learning (MaxEnt-IRL) and Cross-Entropy Behavioral Cloning (CrossEnt-BC). Our unified strategy induces a ranking over demonstrations based on a notion of difficulty scores computed w.r.t. the teacher's optimal policy and the learner's current policy. Compared to the state of the art, our strategy doesn't require access to the learner's internal dynamics and still enjoys similar convergence guarantees under mild technical conditions. Furthermore, we adapt our curriculum strategy to the setting where no teacher agent is present using task-specific difficulty scores. Experiments on a synthetic car driving environment and navigation-based environments demonstrate the effectiveness of our curriculum strategy.
| null |
Revenue maximization via machine learning with noisy data
|
https://papers.nips.cc/paper_files/paper/2021/hash/56d33021e640f5d64a611a71b5dc30a3-Abstract.html
|
Ellen Vitercik, Tom Yan
|
https://papers.nips.cc/paper_files/paper/2021/hash/56d33021e640f5d64a611a71b5dc30a3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12427-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/56d33021e640f5d64a611a71b5dc30a3-Paper.pdf
|
https://openreview.net/forum?id=cksOcsjnXh
|
https://papers.nips.cc/paper_files/paper/2021/file/56d33021e640f5d64a611a71b5dc30a3-Supplemental.pdf
|
Increasingly, copious amounts of consumer data are used to learn high-revenue mechanisms via machine learning. Existing research on mechanism design via machine learning assumes that there is a distribution over the buyers' values for the items for sale and that the learning algorithm's input is a training set sampled from this distribution. This setup makes the strong assumption that no noise is introduced during data collection. In order to help place mechanism design via machine learning on firm foundations, we investigate the extent to which this learning process is robust to noise. Optimizing revenue using noisy data is challenging because revenue functions are extremely volatile: an infinitesimal change in the buyers' values can cause a steep drop in revenue. Nonetheless, we provide guarantees when arbitrarily correlated noise is added to the training set; we only require that the noise has bounded magnitude or is sub-Gaussian. We conclude with an application of our guarantees to multi-task mechanism design, where there are multiple distributions over buyers' values and the goal is to learn a high-revenue mechanism per distribution. To our knowledge, we are the first to study mechanism design via machine learning with noisy data as well as multi-task mechanism design.
| null |
Exploiting Data Sparsity in Secure Cross-Platform Social Recommendation
|
https://papers.nips.cc/paper_files/paper/2021/hash/56db57b4db0a6fcb7f9e0c0b504f6472-Abstract.html
|
Jinming Cui, Chaochao Chen, Lingjuan Lyu, Carl Yang, Wang Li
|
https://papers.nips.cc/paper_files/paper/2021/hash/56db57b4db0a6fcb7f9e0c0b504f6472-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12428-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/56db57b4db0a6fcb7f9e0c0b504f6472-Paper.pdf
|
https://openreview.net/forum?id=1Sn5TeUi1pb
|
https://papers.nips.cc/paper_files/paper/2021/file/56db57b4db0a6fcb7f9e0c0b504f6472-Supplemental.pdf
|
Social recommendation has shown promising improvements over traditional systems since it leverages social correlation data as an additional input. Most existing work assumes that all data are available to the recommendation platform. However, in practice, user-item interaction data (e.g., rating) and user-user social data are usually generated by different platforms, both of which contain sensitive information. Therefore, "How to perform secure and efficient social recommendation across different platforms, where the data are highly sparse in nature" remains an important challenge. In this work, we bring secure computation techniques into social recommendation, and propose S3Rec, a sparsity-aware secure cross-platform social recommendation framework. As a result, our model can not only improve the recommendation performance of the rating platform by incorporating the sparse social data on the social platform, but also protect data privacy of both platforms. Moreover, to further improve model training efficiency, we propose two secure sparse matrix multiplication protocols based on homomorphic encryption and private information retrieval. Our experiments on two benchmark datasets demonstrate the effectiveness of S3Rec.
| null |
Parallelizing Thompson Sampling
|
https://papers.nips.cc/paper_files/paper/2021/hash/56f0b515214a7ec9f08a4bbf9a56f7ba-Abstract.html
|
Amin Karbasi, Vahab Mirrokni, Mohammad Shadravan
|
https://papers.nips.cc/paper_files/paper/2021/hash/56f0b515214a7ec9f08a4bbf9a56f7ba-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12429-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/56f0b515214a7ec9f08a4bbf9a56f7ba-Paper.pdf
|
https://openreview.net/forum?id=rdMQrE-loT5
|
https://papers.nips.cc/paper_files/paper/2021/file/56f0b515214a7ec9f08a4bbf9a56f7ba-Supplemental.pdf
|
How can we make use of information parallelism in online decision-making problems while efficiently balancing the exploration-exploitation trade-off? In this paper, we introduce a batch Thompson Sampling framework for two canonical online decision-making problems with partial feedback, namely, stochastic multi-armed bandits and linear contextual bandits. Over a time horizon $T$, our batch Thompson Sampling policy achieves the same (asymptotic) regret bound as a fully sequential one while carrying out only $O(\log T)$ batch queries. To achieve this exponential reduction, i.e., reducing the number of interactions from $T$ to $O(\log T)$, our batch policy dynamically determines the duration of each batch in order to balance the exploration-exploitation trade-off. We also demonstrate experimentally that dynamic batch allocation outperforms natural baselines.
| null |
Dynamic Causal Bayesian Optimization
|
https://papers.nips.cc/paper_files/paper/2021/hash/577bcc914f9e55d5e4e4f82f9f00e7d4-Abstract.html
|
Virginia Aglietti, Neil Dhir, Javier González, Theodoros Damoulas
|
https://papers.nips.cc/paper_files/paper/2021/hash/577bcc914f9e55d5e4e4f82f9f00e7d4-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12430-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/577bcc914f9e55d5e4e4f82f9f00e7d4-Paper.pdf
|
https://openreview.net/forum?id=VhMwt_GhDy9
|
https://papers.nips.cc/paper_files/paper/2021/file/577bcc914f9e55d5e4e4f82f9f00e7d4-Supplemental.pdf
|
We study the problem of performing a sequence of optimal interventions in a dynamic causal system where both the target variable of interest, and the inputs, evolve over time. This problem arises in a variety of domains including healthcare, operational research and policy design. Our approach, which we call Dynamic Causal Bayesian Optimisation (DCBO), brings together ideas from decision making, causal inference and Gaussian process (GP) emulation. DCBO is useful in scenarios where the causal effects are changing over time. Indeed, at every time step, DCBO identifies a local optimal intervention by integrating both observational and past interventional data collected from the system. We give theoretical results detailing how one can transfer interventional information across time steps and define a dynamic causal GP model which can be used to find optimal interventions in practice. Finally, we demonstrate how DCBO identifies optimal interventions faster than competing approaches in multiple settings and applications.
| null |
Local Differential Privacy for Regret Minimization in Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/580760fb5def6e2ca8eaf601236d5b08-Abstract.html
|
Evrard Garcelon, Vianney Perchet, Ciara Pike-Burke, Matteo Pirotta
|
https://papers.nips.cc/paper_files/paper/2021/hash/580760fb5def6e2ca8eaf601236d5b08-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12431-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/580760fb5def6e2ca8eaf601236d5b08-Paper.pdf
|
https://openreview.net/forum?id=6irNdUxsyl
|
https://papers.nips.cc/paper_files/paper/2021/file/580760fb5def6e2ca8eaf601236d5b08-Supplemental.pdf
|
Reinforcement learning algorithms are widely used in domains where it is desirable to provide a personalized service. In these domains it is common that user data contains sensitive information that needs to be protected from third parties. Motivated by this, we study privacy in the context of finite-horizon Markov Decision Processes (MDPs) by requiring information to be obfuscated on the user side. We formulate this notion of privacy for RL by leveraging the local differential privacy (LDP) framework. We establish a lower bound for regret minimization in finite-horizon MDPs with LDP guarantees which shows that guaranteeing privacy has a multiplicative effect on the regret. This result shows that while LDP is an appealing notion of privacy, it makes the learning problem significantly more complex. Finally, we present an optimistic algorithm that simultaneously satisfies $\varepsilon$-LDP requirements, and achieves $\sqrt{K}/\varepsilon$ regret in any finite-horizon MDP after $K$ episodes, matching the lower bound dependency on the number of episodes $K$.
| null |
Emergent Discrete Communication in Semantic Spaces
|
https://papers.nips.cc/paper_files/paper/2021/hash/5812f92450ccaf17275500841c70924a-Abstract.html
|
Mycal Tucker, Huao Li, Siddharth Agrawal, Dana Hughes, Katia Sycara, Michael Lewis, Julie A Shah
|
https://papers.nips.cc/paper_files/paper/2021/hash/5812f92450ccaf17275500841c70924a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12432-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5812f92450ccaf17275500841c70924a-Paper.pdf
|
https://openreview.net/forum?id=hsqZ5v8PFyQ
|
https://papers.nips.cc/paper_files/paper/2021/file/5812f92450ccaf17275500841c70924a-Supplemental.pdf
|
Neural agents trained in reinforcement learning settings can learn to communicate among themselves via discrete tokens, accomplishing as a team what agents would be unable to do alone. However, the current standard of using one-hot vectors as discrete communication tokens prevents agents from acquiring more desirable aspects of communication such as zero-shot understanding. Inspired by word embedding techniques from natural language processing, we propose neural agent architectures that enable agents to communicate via discrete tokens derived from a learned, continuous space. We show in a decision theoretic framework that our technique optimizes communication over a wide range of scenarios, whereas one-hot tokens are only optimal under restrictive assumptions. In self-play experiments, we validate that our trained agents learn to cluster tokens in semantically-meaningful ways, allowing them to communicate in noisy environments where other techniques fail. Lastly, we demonstrate both that agents using our method can effectively respond to novel human communication and that humans can understand unlabeled emergent agent communication, outperforming the use of one-hot communication.
| null |
Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity
|
https://papers.nips.cc/paper_files/paper/2021/hash/58182b82110146887c02dbd78719e3d5-Abstract.html
|
Ran Liu, Mehdi Azabou, Max Dabagia, Chi-Heng Lin, Mohammad Gheshlaghi Azar, Keith Hengen, Michal Valko, Eva Dyer
|
https://papers.nips.cc/paper_files/paper/2021/hash/58182b82110146887c02dbd78719e3d5-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12433-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/58182b82110146887c02dbd78719e3d5-Paper.pdf
|
https://openreview.net/forum?id=ZRPRjfAF3yd
|
https://papers.nips.cc/paper_files/paper/2021/file/58182b82110146887c02dbd78719e3d5-Supplemental.pdf
|
Meaningful and simplified representations of neural activity can yield insights into how and what information is being processed within a neural circuit. However, without labels, finding representations that reveal the link between the brain and behavior can be challenging. Here, we introduce a novel unsupervised approach for learning disentangled representations of neural activity called Swap-VAE. Our approach combines a generative modeling framework with an instance-specific alignment loss that tries to maximize the representational similarity between transformed views of the input (brain state). These transformed (or augmented) views are created by dropping out neurons and jittering samples in time, which intuitively should lead the network to a representation that maintains both temporal consistency and invariance to the specific neurons used to represent the neural state. Through evaluations on both synthetic data and neural recordings from hundreds of neurons in different primate brains, we show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
| null |
Equivariant Manifold Flows
|
https://papers.nips.cc/paper_files/paper/2021/hash/581b41df0cd50ace849e061ef74827fc-Abstract.html
|
Isay Katsman, Aaron Lou, Derek Lim, Qingxuan Jiang, Ser Nam Lim, Christopher M. De Sa
|
https://papers.nips.cc/paper_files/paper/2021/hash/581b41df0cd50ace849e061ef74827fc-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12434-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/581b41df0cd50ace849e061ef74827fc-Paper.pdf
|
https://openreview.net/forum?id=lzZX7E713nJ
|
https://papers.nips.cc/paper_files/paper/2021/file/581b41df0cd50ace849e061ef74827fc-Supplemental.pdf
|
Tractably modelling distributions over manifolds has long been an important goal in the natural sciences. Recent work has focused on developing general machine learning models to learn such distributions. However, for many applications these distributions must respect manifold symmetries—a trait which most previous models disregard. In this paper, we lay the theoretical foundations for learning symmetry-invariant distributions on arbitrary manifolds via equivariant manifold flows. We demonstrate the utility of our approach by learning quantum field theory-motivated invariant SU(n) densities and by correcting meteor impact dataset bias.
| null |
Scalable Bayesian GPFA with automatic relevance determination and discrete noise models
|
https://papers.nips.cc/paper_files/paper/2021/hash/58238e9ae2dd305d79c2ebc8c1883422-Abstract.html
|
Kristopher Jensen, Ta-Chu Kao, Jasmine Stone, Guillaume Hennequin
|
https://papers.nips.cc/paper_files/paper/2021/hash/58238e9ae2dd305d79c2ebc8c1883422-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12435-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/58238e9ae2dd305d79c2ebc8c1883422-Paper.pdf
|
https://openreview.net/forum?id=_IvXbsw3Zvu
|
https://papers.nips.cc/paper_files/paper/2021/file/58238e9ae2dd305d79c2ebc8c1883422-Supplemental.pdf
|
Latent variable models are ubiquitous in the exploratory analysis of neural population recordings, where they allow researchers to summarize the activity of large populations of neurons in lower dimensional ‘latent’ spaces. Existing methods can generally be categorized into (i) Bayesian methods that facilitate flexible incorporation of prior knowledge and uncertainty estimation, but which typically do not scale to large datasets; and (ii) highly parameterized methods without explicit priors that scale better but often struggle in the low-data regime. Here, we bridge this gap by developing a fully Bayesian yet scalable version of Gaussian process factor analysis (bGPFA), which models neural data as arising from a set of inferred latent processes with a prior that encourages smoothness over time. Additionally, bGPFA uses automatic relevance determination to infer the dimensionality of neural activity directly from the training data during optimization. To enable the analysis of continuous recordings without trial structure, we introduce a novel variational inference strategy that scales near-linearly in time and also allows for non-Gaussian noise models appropriate for electrophysiological recordings. We apply bGPFA to continuous recordings spanning 30 minutes with over 14 million data points from primate motor and somatosensory cortices during a self-paced reaching task. We show that neural activity progresses from an initial state at target onset to a reach-specific preparatory state well before movement onset. The distance between these initial and preparatory latent states is predictive of reaction times across reaches, suggesting that such preparatory dynamics have behavioral relevance despite the lack of externally imposed delay periods. Additionally, bGPFA discovers latent processes that evolve over slow timescales on the order of several seconds and contain complementary information about reaction time. These timescales are longer than those revealed by methods which focus on individual movement epochs and may reflect fluctuations in e.g. task engagement.
| null |
Recurrence along Depth: Deep Convolutional Neural Networks with Recurrent Layer Aggregation
|
https://papers.nips.cc/paper_files/paper/2021/hash/582967e09f1b30ca2539968da0a174fa-Abstract.html
|
Jingyu Zhao, Yanwen Fang, Guodong Li
|
https://papers.nips.cc/paper_files/paper/2021/hash/582967e09f1b30ca2539968da0a174fa-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12436-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/582967e09f1b30ca2539968da0a174fa-Paper.pdf
|
https://openreview.net/forum?id=LDuzgy4iOXr
|
https://papers.nips.cc/paper_files/paper/2021/file/582967e09f1b30ca2539968da0a174fa-Supplemental.pdf
|
This paper introduces a concept of layer aggregation to describe how information from previous layers can be reused to better extract features at the current layer. While DenseNet is a typical example of the layer aggregation mechanism, its redundancy has been commonly criticized in the literature. This motivates us to propose a very lightweight module, called recurrent layer aggregation (RLA), by making use of the sequential structure of layers in a deep CNN. Our RLA module is compatible with many mainstream deep CNNs, including ResNets, Xception and MobileNetV2, and its effectiveness is verified by our extensive experiments on image classification, object detection and instance segmentation tasks. Specifically, improvements can be uniformly observed on CIFAR, ImageNet and MS COCO datasets, and the corresponding RLA-Nets can surprisingly boost performance by 2-3% on the object detection task. This evidences the power of our RLA module in helping mainstream CNNs better learn structural information in images.
| null |
Independent Prototype Propagation for Zero-Shot Compositionality
|
https://papers.nips.cc/paper_files/paper/2021/hash/584b98aac2dddf59ee2cf19ca4ccb75e-Abstract.html
|
Frank Ruis, Gertjan Burghouts, Doina Bucur
|
https://papers.nips.cc/paper_files/paper/2021/hash/584b98aac2dddf59ee2cf19ca4ccb75e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12437-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/584b98aac2dddf59ee2cf19ca4ccb75e-Paper.pdf
|
https://openreview.net/forum?id=9Oolof9tfnD
|
https://papers.nips.cc/paper_files/paper/2021/file/584b98aac2dddf59ee2cf19ca4ccb75e-Supplemental.pdf
|
Humans are good at compositional zero-shot reasoning; someone who has never seen a zebra before could nevertheless recognize one when we tell them it looks like a horse with black and white stripes. Machine learning systems, on the other hand, usually leverage spurious correlations in the training data, and while such correlations can help recognize objects in context, they hurt generalization. To be able to deal with underspecified datasets while still leveraging contextual clues during classification, we propose ProtoProp, a novel prototype propagation graph method. First we learn prototypical representations of objects (e.g., zebra) that are independent w.r.t. their attribute labels (e.g., stripes) and vice versa. Next we propagate the independent prototypes through a compositional graph, to learn compositional prototypes of novel attribute-object combinations that reflect the dependencies of the target distribution. The method does not rely on any external data, such as class hierarchy graphs or pretrained word embeddings. We evaluate our approach on AO-Clevr, a synthetic and strongly visual dataset with clean labels, UT-Zappos, a noisy real-world dataset of fine-grained shoe types, and C-GQA, a large-scale object detection dataset modified for compositional zero-shot learning. We show that in the generalized compositional zero-shot setting we outperform state-of-the-art results, and through ablations we show the importance of each part of the method and their contribution to the final results. The code is available on github.
| null |
Universal Graph Convolutional Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/5857d68cd9280bc98d079fa912fd6740-Abstract.html
|
Di Jin, Zhizhi Yu, Cuiying Huo, Rui Wang, Xiao Wang, Dongxiao He, Jiawei Han
|
https://papers.nips.cc/paper_files/paper/2021/hash/5857d68cd9280bc98d079fa912fd6740-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12438-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5857d68cd9280bc98d079fa912fd6740-Paper.pdf
|
https://openreview.net/forum?id=MSXDyfli9vy
| null |
Graph Convolutional Networks (GCNs), aiming to obtain the representation of a node by aggregating its neighbors, have demonstrated great power in tackling various analytics tasks on graph (network) data. The remarkable performance of GCNs typically relies on the homophily assumption of networks, but such an assumption cannot always be satisfied, since heterophily and randomness are also widespread in real-world networks. This gives rise to one fundamental question: should networks with different structural properties adopt different propagation mechanisms? In this paper, we first conduct an experimental investigation. Surprisingly, we discover that there are actually segmentation rules for the propagation mechanism, i.e., 1-hop, 2-hop and $k$-nearest neighbor ($k$NN) neighbors are more suitable as neighborhoods for networks with complete homophily, complete heterophily and randomness, respectively. However, real-world networks are complex, and may present diverse structural properties, e.g., a network dominated by homophily may contain a small amount of randomness. So can we reasonably utilize these segmentation rules to design a universal propagation mechanism independent of the network structural assumption? To tackle this challenge, we develop a new universal GCN framework, namely U-GCN. It first introduces a multi-type convolution to extract information from 1-hop, 2-hop and $k$NN networks simultaneously, and then designs a discriminative aggregation to sufficiently fuse them according to the given learning objectives. Extensive experiments demonstrate the superiority of U-GCN over state-of-the-art methods. The code and data are available at https://github.com/jindi-tju.
| null |
Adversarial Feature Desensitization
|
https://papers.nips.cc/paper_files/paper/2021/hash/587b7b833034299fdd5f4b10e7dc9fca-Abstract.html
|
Pouya Bashivan, Reza Bayat, Adam Ibrahim, Kartik Ahuja, Mojtaba Faramarzi, Touraj Laleh, Blake Richards, Irina Rish
|
https://papers.nips.cc/paper_files/paper/2021/hash/587b7b833034299fdd5f4b10e7dc9fca-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12439-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/587b7b833034299fdd5f4b10e7dc9fca-Paper.pdf
|
https://openreview.net/forum?id=4e_Yvt47kh
| null |
Neural networks are known to be vulnerable to adversarial attacks -- slight but carefully constructed perturbations of the inputs which can drastically impair the network's performance. Many defense methods have been proposed for improving robustness of deep networks by training them on adversarially perturbed inputs. However, these models often remain vulnerable to new types of attacks not seen during training, and even to slightly stronger versions of previously seen attacks. In this work, we propose a novel approach to adversarial robustness, which builds upon the insights from the domain adaptation field. Our method, called Adversarial Feature Desensitization (AFD), aims at learning features that are invariant towards adversarial perturbations of the inputs. This is achieved through a game where we learn features that are both predictive and robust (insensitive to adversarial attacks), i.e. cannot be used to discriminate between natural and adversarial data. Empirical results on several benchmarks demonstrate the effectiveness of the proposed approach against a wide range of attack types and attack strengths. Our code is available at https://github.com/BashivanLab/afd.
| null |
Few-Shot Data-Driven Algorithms for Low Rank Approximation
|
https://papers.nips.cc/paper_files/paper/2021/hash/588da7a73a2e919a23cb9a419c4c6d44-Abstract.html
|
Piotr Indyk, Tal Wagner, David Woodruff
|
https://papers.nips.cc/paper_files/paper/2021/hash/588da7a73a2e919a23cb9a419c4c6d44-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12440-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/588da7a73a2e919a23cb9a419c4c6d44-Paper.pdf
|
https://openreview.net/forum?id=6dUJPrPPUau
|
https://papers.nips.cc/paper_files/paper/2021/file/588da7a73a2e919a23cb9a419c4c6d44-Supplemental.pdf
|
Recently, data-driven and learning-based algorithms for low rank matrix approximation were shown to outperform classical data-oblivious algorithms by wide margins in terms of accuracy. Those algorithms are based on the optimization of sparse sketching matrices, which lead to large savings in time and memory during testing. However, they require long training times on a large amount of existing data, and rely on access to specialized hardware and software. In this work, we develop new data-driven low rank approximation algorithms with better computational efficiency in the training phase, alleviating these drawbacks. Furthermore, our methods are interpretable: while previous algorithms choose the sketching matrix either at random or by black-box learning, we show that it can be set (or initialized) to clearly interpretable values extracted from the dataset. Our experiments show that our algorithms, either by themselves or in combination with previous methods, achieve significant empirical advantage over previous work, improving training times by up to an order of magnitude toward achieving the same target accuracy.
| null |
Neural-PIL: Neural Pre-Integrated Lighting for Reflectance Decomposition
|
https://papers.nips.cc/paper_files/paper/2021/hash/58ae749f25eded36f486bc85feb3f0ab-Abstract.html
|
Mark Boss, Varun Jampani, Raphael Braun, Ce Liu, Jonathan Barron, Hendrik PA Lensch
|
https://papers.nips.cc/paper_files/paper/2021/hash/58ae749f25eded36f486bc85feb3f0ab-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12441-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/58ae749f25eded36f486bc85feb3f0ab-Paper.pdf
|
https://openreview.net/forum?id=fATZNtA1-V0
|
https://papers.nips.cc/paper_files/paper/2021/file/58ae749f25eded36f486bc85feb3f0ab-Supplemental.pdf
|
Decomposing a scene into its shape, reflectance and illumination is a fundamental problem in computer vision and graphics. Neural approaches such as NeRF have achieved remarkable success in view synthesis, but do not explicitly perform decomposition and instead operate exclusively on radiance (the product of reflectance and illumination). Extensions to NeRF, such as NeRD, can perform decomposition but struggle to accurately recover detailed illumination, thereby significantly limiting realism. We propose a novel reflectance decomposition network that can estimate shape, BRDF, and per-image illumination given a set of object images captured under varying illumination. Our key technique is a novel illumination integration network called Neural-PIL that replaces a costly illumination integral operation in the rendering with a simple network query. In addition, we also learn deep low-dimensional priors on BRDF and illumination representations using novel smooth manifold auto-encoders. Our decompositions can result in considerably better BRDF and light estimates enabling more accurate novel view-synthesis and relighting compared to prior art. Project page: https://markboss.me/publication/2021-neural-pil/
| null |
Asymptotics of the Bootstrap via Stability with Applications to Inference with Model Selection
|
https://papers.nips.cc/paper_files/paper/2021/hash/58b7483ba899e0ce4d97ac5eecf6fa99-Abstract.html
|
Morgane Austern, Vasilis Syrgkanis
|
https://papers.nips.cc/paper_files/paper/2021/hash/58b7483ba899e0ce4d97ac5eecf6fa99-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12442-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/58b7483ba899e0ce4d97ac5eecf6fa99-Paper.pdf
|
https://openreview.net/forum?id=5JPPOluv-bp
|
https://papers.nips.cc/paper_files/paper/2021/file/58b7483ba899e0ce4d97ac5eecf6fa99-Supplemental.pdf
|
One of the most commonly used methods for forming confidence intervals is the empirical bootstrap, which is especially expedient when the limiting distribution of the estimator is unknown. However, despite its ubiquitous role in machine learning, its theoretical properties are still not well understood. Recent developments in probability have provided new tools to study the bootstrap method. However, they have been applied only to specific applications and contexts, and it is unclear whether these techniques are applicable to the understanding of the consistency of the bootstrap in machine learning pipelines. In this paper, we derive general stability conditions under which the empirical bootstrap estimator is consistent and quantify the speed of convergence. Moreover, we propose alternative ways to use the bootstrap method to build confidence intervals with coverage guarantees. Finally, we illustrate the generality and tightness of our results by examples of interest for machine learning including for two-sample kernel tests after kernel selection and the empirical risk of stacked estimators.
| null |
Dynamic influence maximization
|
https://papers.nips.cc/paper_files/paper/2021/hash/58ec72df0caca51df569d0b497c33805-Abstract.html
|
Binghui Peng
|
https://papers.nips.cc/paper_files/paper/2021/hash/58ec72df0caca51df569d0b497c33805-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12443-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/58ec72df0caca51df569d0b497c33805-Paper.pdf
|
https://openreview.net/forum?id=7RIYO406DB-
|
https://papers.nips.cc/paper_files/paper/2021/file/58ec72df0caca51df569d0b497c33805-Supplemental.pdf
|
We initiate a systematic study on {\em dynamic influence maximization} (DIM). In the DIM problem, one maintains a seed set $S$ of at most $k$ nodes in a dynamically evolving social network, with the goal of maximizing the expected influence spread while minimizing the amortized updating cost. We consider two evolution models. In the {\em incremental model}, where the social network grows over time and one only introduces new users and establishes new social links, we design an algorithm that achieves a $(1-1/e-\epsilon)$-approximation to the optimal solution and has $k \cdot\mathsf{poly}(\log n, \epsilon^{-1})$ amortized running time, which matches the state-of-the-art offline algorithm with only poly-logarithmic overhead. In the {\em fully dynamic model}, where users join and leave and influence propagation is strengthened or weakened in real time, we prove that under the Strong Exponential Time Hypothesis (SETH), no algorithm can achieve a $2^{-(\log n)^{1-o(1)}}$-approximation unless the amortized running time is $n^{1-o(1)}$. On the technical side, we exploit novel adaptive sampling approaches that reduce DIM to the dynamic MAX-$k$ coverage problem, and design an efficient $(1-1/e-\epsilon)$-approximation algorithm for it. Our lower bound leverages the recently developed distributed PCP framework.
| null |
Risk Monotonicity in Statistical Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/5907c88df2965e500c98e948dfae20c0-Abstract.html
|
Zakaria Mhammedi
|
https://papers.nips.cc/paper_files/paper/2021/hash/5907c88df2965e500c98e948dfae20c0-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12444-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5907c88df2965e500c98e948dfae20c0-Paper.pdf
|
https://openreview.net/forum?id=z5-chidgZU3
|
https://papers.nips.cc/paper_files/paper/2021/file/5907c88df2965e500c98e948dfae20c0-Supplemental.pdf
|
Acquisition of data is a difficult task in many applications of machine learning, and it is only natural that one hopes and expects the population risk to decrease (better performance) monotonically with increasing data points. It turns out, somewhat surprisingly, that this is not the case even for the most standard algorithms that minimize the empirical risk. Non-monotonic behavior of the risk and instability in training have manifested in the popular deep learning paradigm under the description of double descent. These problems highlight the current lack of understanding of learning algorithms and generalization. It is, therefore, crucial to pursue this concern and provide a characterization of such behavior. In this paper, we derive the first consistent and risk-monotonic (in high probability) algorithms for a general statistical learning setting under weak assumptions, consequently answering some questions posed by Viering et al. (2019) on how to avoid non-monotonic behavior of risk curves. We further show that risk monotonicity need not necessarily come at the price of worse excess risk rates. To achieve this, we derive new empirical Bernstein-like concentration inequalities of independent interest that hold for certain non-i.i.d.~processes such as Martingale Difference Sequences.
| null |
Information is Power: Intrinsic Control via Information Capture
|
https://papers.nips.cc/paper_files/paper/2021/hash/59112692262234e3fad47fa8eabf03a4-Abstract.html
|
Nicholas Rhinehart, Jenny Wang, Glen Berseth, John Co-Reyes, Danijar Hafner, Chelsea Finn, Sergey Levine
|
https://papers.nips.cc/paper_files/paper/2021/hash/59112692262234e3fad47fa8eabf03a4-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12445-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/59112692262234e3fad47fa8eabf03a4-Paper.pdf
|
https://openreview.net/forum?id=MO76tBOz9RL
|
https://papers.nips.cc/paper_files/paper/2021/file/59112692262234e3fad47fa8eabf03a4-Supplemental.zip
|
Humans and animals explore their environment and acquire useful skills even in the absence of clear goals, exhibiting intrinsic motivation. The study of intrinsic motivation in artificial agents is concerned with the following question: what is a good general-purpose objective for an agent? We study this question in dynamic partially-observed environments, and argue that a compact and general learning objective is to minimize the entropy of the agent's state visitation estimated using a latent state-space model. This objective induces an agent to both gather information about its environment, corresponding to reducing uncertainty, and to gain control over its environment, corresponding to reducing the unpredictability of future world states. We instantiate this approach as a deep reinforcement learning agent equipped with a deep variational Bayes filter. We find that our agent learns to discover, represent, and exercise control of dynamic objects in a variety of partially-observed environments sensed with visual observations without extrinsic reward.
| null |
Extracting Deformation-Aware Local Features by Learning to Deform
|
https://papers.nips.cc/paper_files/paper/2021/hash/5934c1ec0cd31e12bd9084d106bc2e32-Abstract.html
|
Guilherme Potje, Renato Martins, Felipe Chamone, Erickson Nascimento
|
https://papers.nips.cc/paper_files/paper/2021/hash/5934c1ec0cd31e12bd9084d106bc2e32-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12446-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5934c1ec0cd31e12bd9084d106bc2e32-Paper.pdf
|
https://openreview.net/forum?id=IQdzjJtUGFl
|
https://papers.nips.cc/paper_files/paper/2021/file/5934c1ec0cd31e12bd9084d106bc2e32-Supplemental.pdf
|
Despite the advances in extracting local features achieved by handcrafted and learning-based descriptors, they are still limited by the lack of invariance to non-rigid transformations. In this paper, we present a new approach to compute features from still images that are robust to non-rigid deformations to circumvent the problem of matching deformable surfaces and objects. Our deformation-aware local descriptor, named DEAL, leverages a polar sampling and a spatial transformer warping to provide invariance to rotation, scale, and image deformations. We train the model architecture end-to-end by applying isometric non-rigid deformations to objects in a simulated environment as guidance to provide highly discriminative local features. The experiments show that our method outperforms state-of-the-art handcrafted, learning-based image, and RGB-D descriptors in different datasets with both real and realistic synthetic deformable objects in still images. The source code and trained model of the descriptor are publicly available at https://www.verlab.dcc.ufmg.br/descriptors/neurips2021.
| null |
Object-Centric Representation Learning with Generative Spatial-Temporal Factorization
|
https://papers.nips.cc/paper_files/paper/2021/hash/593906af0d138e69f49d251d3e7cbed0-Abstract.html
|
Nanbo Li, Muhammad Ahmed Raza, Wenbin Hu, Zhaole Sun, Robert Fisher
|
https://papers.nips.cc/paper_files/paper/2021/hash/593906af0d138e69f49d251d3e7cbed0-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12447-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/593906af0d138e69f49d251d3e7cbed0-Paper.pdf
|
https://openreview.net/forum?id=cA8Yp87yTiR
|
https://papers.nips.cc/paper_files/paper/2021/file/593906af0d138e69f49d251d3e7cbed0-Supplemental.zip
|
Learning object-centric scene representations is essential for attaining structural understanding and abstraction of complex scenes. Yet, as current approaches for unsupervised object-centric representation learning are built upon either a stationary observer assumption or a static scene assumption, they often: i) suffer from single-view spatial ambiguities, or ii) infer incorrect or inaccurate object representations from dynamic scenes. To address this, we propose Dynamics-aware Multi-Object Network (DyMON), a method that broadens the scope of multi-view object-centric representation learning to dynamic scenes. We train DyMON on multi-view-dynamic-scene data and show that DyMON learns---without supervision---to factorize the entangled effects of observer motions and scene object dynamics from a sequence of observations, and constructs scene object spatial representations suitable for rendering at arbitrary times (querying across time) and from arbitrary viewpoints (querying across space). We also show that the factorized scene representations (w.r.t. objects) support querying about a single object by space and time independently.
| null |
Learning to Simulate Self-driven Particles System with Coordinated Policy Optimization
|
https://papers.nips.cc/paper_files/paper/2021/hash/594ca7adb3277c51a998252e2d4c906e-Abstract.html
|
Zhenghao Peng, Quanyi Li, Ka Ming Hui, Chunxiao Liu, Bolei Zhou
|
https://papers.nips.cc/paper_files/paper/2021/hash/594ca7adb3277c51a998252e2d4c906e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12448-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/594ca7adb3277c51a998252e2d4c906e-Paper.pdf
|
https://openreview.net/forum?id=Yc4AdP1M9kT
|
https://papers.nips.cc/paper_files/paper/2021/file/594ca7adb3277c51a998252e2d4c906e-Supplemental.pdf
|
Self-Driven Particles (SDP) describe a category of multi-agent systems common in everyday life, such as flocking birds and traffic flows. In an SDP system, each agent pursues its own goal and constantly changes its cooperative or competitive behaviors with its nearby agents. Manually designing the controllers for such SDP systems is time-consuming, while the resulting emergent behaviors are often neither realistic nor generalizable. Thus the realistic simulation of SDP systems remains challenging. Reinforcement learning provides an appealing alternative for automating the development of the controller for SDP. However, previous multi-agent reinforcement learning (MARL) methods define the agents to be teammates or enemies beforehand, which fail to capture the essence of SDP where the role of each agent varies to be cooperative or competitive even within one episode. To simulate SDP with MARL, a key challenge is to coordinate agents' behaviors while still maximizing individual objectives. Taking traffic simulation as the testing bed, in this work we develop a novel MARL method called Coordinated Policy Optimization (CoPO), which incorporates social psychology principles to learn neural controllers for SDP. Experiments show that the proposed method can achieve superior performance compared to MARL baselines in various metrics. Notably, the trained vehicles exhibit complex and diverse social behaviors that improve performance and safety of the population as a whole. Demo video and source code are available at: https://decisionforce.github.io/CoPO/
| null |
Gradient-based Hyperparameter Optimization Over Long Horizons
|
https://papers.nips.cc/paper_files/paper/2021/hash/596dedf4498e258e4bdc9fd70df9a859-Abstract.html
|
Paul Micaelli, Amos J. Storkey
|
https://papers.nips.cc/paper_files/paper/2021/hash/596dedf4498e258e4bdc9fd70df9a859-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12449-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/596dedf4498e258e4bdc9fd70df9a859-Paper.pdf
|
https://openreview.net/forum?id=6x8tcREIL2W
|
https://papers.nips.cc/paper_files/paper/2021/file/596dedf4498e258e4bdc9fd70df9a859-Supplemental.pdf
|
Gradient-based hyperparameter optimization has earned a widespread popularity in the context of few-shot meta-learning, but remains broadly impractical for tasks with long horizons (many gradient steps), due to memory scaling and gradient degradation issues. A common workaround is to learn hyperparameters online, but this introduces greediness which comes with a significant performance drop. We propose forward-mode differentiation with sharing (FDS), a simple and efficient algorithm which tackles memory scaling issues with forward-mode differentiation, and gradient degradation issues by sharing hyperparameters that are contiguous in time. We provide theoretical guarantees about the noise reduction properties of our algorithm, and demonstrate its efficiency empirically by differentiating through $\sim 10^4$ gradient steps of unrolled optimization. We consider large hyperparameter search ranges on CIFAR-10 where we significantly outperform greedy gradient-based alternatives, while achieving $\times 20$ speedups compared to the state-of-the-art black-box methods.
| null |
Stochastic Bias-Reduced Gradient Methods
|
https://papers.nips.cc/paper_files/paper/2021/hash/597c7b407a02cc0a92167e7a371eca25-Abstract.html
|
Hilal Asi, Yair Carmon, Arun Jambulapati, Yujia Jin, Aaron Sidford
|
https://papers.nips.cc/paper_files/paper/2021/hash/597c7b407a02cc0a92167e7a371eca25-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12450-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/597c7b407a02cc0a92167e7a371eca25-Paper.pdf
|
https://openreview.net/forum?id=Gm-0H9DZALK
|
https://papers.nips.cc/paper_files/paper/2021/file/597c7b407a02cc0a92167e7a371eca25-Supplemental.pdf
|
We develop a new primitive for stochastic optimization: a low-bias, low-cost estimator of the minimizer $x_\star$ of any Lipschitz strongly-convex function $f$. In particular, we use a multilevel Monte-Carlo approach due to Blanchet and Glynn to turn any optimal stochastic gradient method into an estimator of $x_\star$ with bias $\delta$, variance $O(\log(1/\delta))$, and an expected sampling cost of $O(\log(1/\delta))$ stochastic gradient evaluations. As an immediate consequence, we obtain cheap and nearly unbiased gradient estimators for the Moreau envelope of any Lipschitz convex function. We demonstrate the potential of our estimator through four applications. First, we develop a method for minimizing the maximum of $N$ functions, improving on recent results and matching a lower bound up to logarithmic factors. Second and third, we recover state-of-the-art rates for projection-efficient and gradient-efficient optimization using simple algorithms with a transparent analysis. Finally, we show that an improved version of our estimator would yield a nearly linear-time, optimal-utility, differentially-private non-smooth stochastic optimization method.
| null |
The Causal-Neural Connection: Expressiveness, Learnability, and Inference
|
https://papers.nips.cc/paper_files/paper/2021/hash/5989add1703e4b0480f75e2390739f34-Abstract.html
|
Kevin Xia, Kai-Zhan Lee, Yoshua Bengio, Elias Bareinboim
|
https://papers.nips.cc/paper_files/paper/2021/hash/5989add1703e4b0480f75e2390739f34-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12451-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5989add1703e4b0480f75e2390739f34-Paper.pdf
|
https://openreview.net/forum?id=hGmrNwR8qQP
| null |
One of the central elements of any causal inference is an object called structural causal model (SCM), which represents a collection of mechanisms and exogenous sources of random variation of the system under investigation (Pearl, 2000). An important property of many kinds of neural networks is universal approximability: the ability to approximate any function to arbitrary precision. Given this property, one may be tempted to surmise that a collection of neural nets is capable of learning any SCM by training on data generated by that SCM. In this paper, we show this is not the case by disentangling the notions of expressivity and learnability. Specifically, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020), which describes the limits of what can be learned from data, still holds for neural models. For instance, an arbitrarily complex and expressive neural net is unable to predict the effects of interventions given observational data alone. Given this result, we introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences. Building on this new class of models, we focus on solving two canonical tasks found in the literature known as causal identification and estimation. Leveraging the neural toolbox, we develop an algorithm that is both sufficient and necessary to determine whether a causal effect can be learned from data (i.e., causal identifiability); it then estimates the effect whenever identifiability holds (causal estimation). Simulations corroborate the proposed approach.
| null |
Validation Free and Replication Robust Volume-based Data Valuation
|
https://papers.nips.cc/paper_files/paper/2021/hash/59a3adea76fadcb6dd9e54c96fc155d1-Abstract.html
|
Xinyi Xu, Zhaoxuan Wu, Chuan Sheng Foo, Bryan Kian Hsiang Low
|
https://papers.nips.cc/paper_files/paper/2021/hash/59a3adea76fadcb6dd9e54c96fc155d1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12452-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/59a3adea76fadcb6dd9e54c96fc155d1-Paper.pdf
|
https://openreview.net/forum?id=YLzoHUlf_k
|
https://papers.nips.cc/paper_files/paper/2021/file/59a3adea76fadcb6dd9e54c96fc155d1-Supplemental.pdf
|
Data valuation arises as a non-trivial challenge in real-world use cases such as collaborative machine learning, federated learning, trusted data sharing, and data marketplaces. The value of data is often associated with the learning performance (e.g., validation accuracy) of a model trained on the data, which introduces a close coupling between data valuation and validation. However, a validation set may not be available in practice and it can be challenging for the data providers to reach an agreement on the choice of the validation set. Another practical issue is that of data replication: Given the value of some data points, a dishonest data provider may replicate these data points to exploit the valuation for a larger reward/payment. We observe that the diversity of the data points is an inherent property of a dataset that is independent of validation. We formalize diversity via the volume of the data matrix (i.e., the determinant of its left Gram matrix), which allows us to establish a formal connection between the diversity of data and learning performance without requiring validation. Furthermore, we propose a robust volume measure with a theoretical guarantee on the replication robustness by following the intuition that copying the same data points does not increase the diversity of data. We perform extensive experiments to demonstrate its consistency in valuation and practical advantages over existing baselines and show that our method is model- and task-agnostic and can be flexibly adapted to handle various neural networks.
| null |
Implicit Finite-Horizon Approximation and Efficient Optimal Algorithms for Stochastic Shortest Path
|
https://papers.nips.cc/paper_files/paper/2021/hash/59b1deff341edb0b76ace57820cef237-Abstract.html
|
Liyu Chen, Mehdi Jafarnia-Jahromi, Rahul Jain, Haipeng Luo
|
https://papers.nips.cc/paper_files/paper/2021/hash/59b1deff341edb0b76ace57820cef237-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12453-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/59b1deff341edb0b76ace57820cef237-Paper.pdf
|
https://openreview.net/forum?id=mo-EcqhZU2R
|
https://papers.nips.cc/paper_files/paper/2021/file/59b1deff341edb0b76ace57820cef237-Supplemental.zip
|
We introduce a generic template for developing regret minimization algorithms in the Stochastic Shortest Path (SSP) model, which achieves minimax optimal regret as long as certain properties are ensured. The key of our analysis is a new technique called implicit finite-horizon approximation, which approximates the SSP model by a finite-horizon counterpart only in the analysis without explicit implementation. Using this template, we develop two new algorithms: the first one is model-free (the first in the literature to our knowledge) and minimax optimal under strictly positive costs; the second one is model-based and minimax optimal even with zero-cost state-action pairs, matching the best existing result from [Tarbouriech et al., 2021b]. Importantly, both algorithms admit highly sparse updates, making them computationally more efficient than all existing algorithms. Moreover, both can be made completely parameter-free.
| null |
A Separation Result Between Data-oblivious and Data-aware Poisoning Attacks
|
https://papers.nips.cc/paper_files/paper/2021/hash/5a499f6e26313e19bd4049009bbed5bd-Abstract.html
|
Samuel Deng, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Abhradeep Guha Thakurta
|
https://papers.nips.cc/paper_files/paper/2021/hash/5a499f6e26313e19bd4049009bbed5bd-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12454-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5a499f6e26313e19bd4049009bbed5bd-Paper.pdf
|
https://openreview.net/forum?id=A1Y8cGB9w72
|
https://papers.nips.cc/paper_files/paper/2021/file/5a499f6e26313e19bd4049009bbed5bd-Supplemental.pdf
|
Poisoning attacks have emerged as a significant security threat to machine learning algorithms. It has been demonstrated that adversaries who make small changes to the training set, such as adding specially crafted data points, can hurt the performance of the output model. Most of these attacks require the full knowledge of training data. This leaves open the possibility of achieving the same attack results using poisoning attacks that do not have the full knowledge of the clean training set. In this work, we initiate a theoretical study of the problem above. Specifically, for the case of feature selection with LASSO, we show that \emph{full information} adversaries (that craft poisoning examples based on the rest of the training data) are provably much more devastating compared to the optimal attacker that is \emph{oblivious} to the training set yet has access to the distribution of the data. Our separation result shows that the two settings of data-aware and data-oblivious are fundamentally different and we cannot hope to achieve the same attack or defense results in these scenarios.
| null |
Deep Learning Through the Lens of Example Difficulty
|
https://papers.nips.cc/paper_files/paper/2021/hash/5a4b25aaed25c2ee1b74de72dc03c14e-Abstract.html
|
Robert Baldock, Hartmut Maennel, Behnam Neyshabur
|
https://papers.nips.cc/paper_files/paper/2021/hash/5a4b25aaed25c2ee1b74de72dc03c14e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12455-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5a4b25aaed25c2ee1b74de72dc03c14e-Paper.pdf
|
https://openreview.net/forum?id=fmgYOUahK9
| null |
Existing work on understanding deep learning often employs measures that compress all data-dependent information into a few numbers. In this work, we adopt a perspective based on the role of individual examples. We introduce a measure of the computational difficulty of making a prediction for a given input: the (effective) prediction depth. Our extensive investigation reveals surprising yet simple relationships between the prediction depth of a given input and the model’s uncertainty, confidence, accuracy and speed of learning for that data point. We further categorize difficult examples into three interpretable groups, demonstrate how these groups are processed differently inside deep models and showcase how this understanding allows us to improve prediction accuracy. Insights from our study lead to a coherent view of a number of separately reported phenomena in the literature: early layers generalize while later layers memorize; early layers converge faster and networks learn easy data and simple functions first.
| null |
R-Drop: Regularized Dropout for Neural Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/5a66b9200f29ac3fa0ae244cc2a51b39-Abstract.html
|
xiaobo liang, Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, Min Zhang, Tie-Yan Liu
|
https://papers.nips.cc/paper_files/paper/2021/hash/5a66b9200f29ac3fa0ae244cc2a51b39-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12456-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5a66b9200f29ac3fa0ae244cc2a51b39-Paper.pdf
|
https://openreview.net/forum?id=bw5Arp3O3eY
|
https://papers.nips.cc/paper_files/paper/2021/file/5a66b9200f29ac3fa0ae244cc2a51b39-Supplemental.pdf
|
Dropout is a powerful and widely used technique to regularize the training of deep neural networks. Though dropout is effective and performs well, the randomness it introduces causes a non-negligible inconsistency between training and inference. In this paper, we introduce a simple consistency training strategy to regularize dropout, namely R-Drop, which forces the output distributions of the different sub-models generated by dropout to be consistent with each other. Specifically, for each training sample, R-Drop minimizes the bidirectional KL-divergence between the output distributions of two sub-models sampled by dropout. Theoretical analysis reveals that R-Drop reduces the above inconsistency. Experiments on $\bf{5}$ widely used deep learning tasks ($\bf{18}$ datasets in total), including neural machine translation, abstractive summarization, language understanding, language modeling, and image classification, show that R-Drop is universally effective. In particular, it yields substantial improvements when applied to fine-tune large-scale pre-trained models, e.g., ViT, RoBERTa-large, and BART, and achieves state-of-the-art (SOTA) performances with the vanilla Transformer model on WMT14 English$\to$German translation ($\bf{30.91}$ BLEU) and WMT14 English$\to$French translation ($\bf{43.95}$ BLEU), even surpassing models trained with extra large-scale data and expert-designed advanced variants of Transformer models. Our code is available at https://github.com/dropreg/R-Drop.
| null |
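The R-Drop abstract above fully describes the core training objective; the following is a minimal PyTorch-style sketch of that bidirectional-KL consistency loss (the weighting coefficient `alpha` and the generic classifier `model` are illustrative assumptions, not the paper's exact setup).

```python
import torch.nn.functional as F

def r_drop_loss(model, x, y, alpha=1.0):
    # Two forward passes of the same batch; dropout makes them two sub-models.
    logits1, logits2 = model(x), model(x)
    # Task loss averaged over the two passes.
    ce = 0.5 * (F.cross_entropy(logits1, y) + F.cross_entropy(logits2, y))
    # Bidirectional KL-divergence between the two output distributions.
    logp1 = F.log_softmax(logits1, dim=-1)
    logp2 = F.log_softmax(logits2, dim=-1)
    kl = 0.5 * (F.kl_div(logp1, logp2, reduction="batchmean", log_target=True)
                + F.kl_div(logp2, logp1, reduction="batchmean", log_target=True))
    return ce + alpha * kl
```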
Diversity Enhanced Active Learning with Strictly Proper Scoring Rules
|
https://papers.nips.cc/paper_files/paper/2021/hash/5a7b238ba0f6502e5d6be14424b20ded-Abstract.html
|
Wei Tan, Lan Du, Wray Buntine
|
https://papers.nips.cc/paper_files/paper/2021/hash/5a7b238ba0f6502e5d6be14424b20ded-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12457-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5a7b238ba0f6502e5d6be14424b20ded-Paper.pdf
|
https://openreview.net/forum?id=S74dteBBSVO
|
https://papers.nips.cc/paper_files/paper/2021/file/5a7b238ba0f6502e5d6be14424b20ded-Supplemental.pdf
|
We study acquisition functions for active learning (AL) for text classification. The Expected Loss Reduction (ELR) method focuses on a Bayesian estimate of the reduction in classification error, recently updated with Mean Objective Cost of Uncertainty (MOCU). We convert the ELR framework to estimate the increase in (strictly proper) scores like log probability or negative mean square error, which we call Bayesian Estimate of Mean Proper Scores (BEMPS). We also prove convergence results borrowing techniques used with MOCU. In order to allow better experimentation with the new acquisition functions, we develop a complementary batch AL algorithm, which encourages diversity in the vector of expected changes in scores for unlabelled data. To allow high performance text classifiers, we combine ensembling and dynamic validation set construction on pretrained language models. Extensive experimental evaluation then explores how these different acquisition functions perform. The results show that the use of mean square error and log probability with BEMPS yields robust acquisition functions, which consistently outperform the others tested.
| null |
SSUL: Semantic Segmentation with Unknown Label for Exemplar-based Class-Incremental Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/5a9542c773018268fc6271f7afeea969-Abstract.html
|
Sungmin Cha, beomyoung kim, YoungJoon Yoo, Taesup Moon
|
https://papers.nips.cc/paper_files/paper/2021/hash/5a9542c773018268fc6271f7afeea969-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12458-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5a9542c773018268fc6271f7afeea969-Paper.pdf
|
https://openreview.net/forum?id=8tgchc2XhD
|
https://papers.nips.cc/paper_files/paper/2021/file/5a9542c773018268fc6271f7afeea969-Supplemental.pdf
|
We consider a class-incremental semantic segmentation (CISS) problem. While some recently proposed algorithms utilized variants of the knowledge distillation (KD) technique to tackle the problem, they only partially addressed the key additional challenges in CISS that cause catastrophic forgetting; \textit{i.e.}, the semantic drift of the background class and the multi-label prediction issue. To better address these challenges, we propose a new method, dubbed SSUL-M (Semantic Segmentation with Unknown Label with Memory), by carefully combining several techniques tailored for semantic segmentation. More specifically, we make three main contributions; (1) modeling \textit{unknown} class within the background class to help learning future classes (help plasticity), (2) \textit{freezing} backbone network and past classifiers with binary cross-entropy loss and pseudo-labeling to overcome catastrophic forgetting (help stability), and (3) utilizing \textit{tiny exemplar memory} for the first time in CISS to improve \textit{both} plasticity and stability. As a result, we show our method achieves significantly better performance than the recent state-of-the-art baselines on the standard benchmark datasets. Furthermore, we justify our contributions with thorough and extensive ablation analyses and discuss different natures of the CISS problem compared to the standard class-incremental learning for classification. The official code is available at https://github.com/clovaai/SSUL.
| null |
Lower and Upper Bounds on the Pseudo-Dimension of Tensor Network Models
|
https://papers.nips.cc/paper_files/paper/2021/hash/5a9d8bf5b7a4b35f3110dde8673bdda2-Abstract.html
|
Behnoush Khavari, Guillaume Rabusseau
|
https://papers.nips.cc/paper_files/paper/2021/hash/5a9d8bf5b7a4b35f3110dde8673bdda2-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12459-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5a9d8bf5b7a4b35f3110dde8673bdda2-Paper.pdf
|
https://openreview.net/forum?id=16r0qOLv_2i
|
https://papers.nips.cc/paper_files/paper/2021/file/5a9d8bf5b7a4b35f3110dde8673bdda2-Supplemental.pdf
|
Tensor network methods have been a key ingredient of advances in condensed matter physics and have recently sparked interest in the machine learning community for their ability to compactly represent very high-dimensional objects. Tensor network methods can for example be used to efficiently learn linear models in exponentially large feature spaces [Stoudenmire and Schwab, 2016]. In this work, we derive upper and lower bounds on the VC dimension and pseudo-dimension of a large class of tensor network models for classification, regression and completion. Our upper bounds hold for linear models parameterized by arbitrary tensor network structures, and we derive lower bounds for common tensor decomposition models~(CP, Tensor Train, Tensor Ring and Tucker) showing the tightness of our general upper bound. These results are used to derive a generalization bound which can be applied to classification with low rank matrices as well as linear classifiers based on any of the commonly used tensor decomposition models. As a corollary of our results, we obtain a bound on the VC dimension of the matrix product state classifier introduced in [Stoudenmire and Schwab, 2016] as a function of the so-called bond dimension~(i.e. tensor train rank), which answers an open problem listed by Cirac, Garre-Rubio and Pérez-García in [Cirac et al., 2019].
| null |
What Makes Multi-Modal Learning Better than Single (Provably)
|
https://papers.nips.cc/paper_files/paper/2021/hash/5aa3405a3f865c10f420a4a7b55cbff3-Abstract.html
|
Yu Huang, Chenzhuang Du, Zihui Xue, Xuanyao Chen, Hang Zhao, Longbo Huang
|
https://papers.nips.cc/paper_files/paper/2021/hash/5aa3405a3f865c10f420a4a7b55cbff3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12460-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5aa3405a3f865c10f420a4a7b55cbff3-Paper.pdf
|
https://openreview.net/forum?id=UlSjqPEkI1V
|
https://papers.nips.cc/paper_files/paper/2021/file/5aa3405a3f865c10f420a4a7b55cbff3-Supplemental.pdf
|
The world provides us with data of multiple modalities. Intuitively, models fusing data from different modalities outperform their uni-modal counterparts, since more information is aggregated. Recently, joining the success of deep learning, there is an influential line of work on deep multi-modal learning, which has remarkable empirical results on various applications. However, theoretical justifications in this field are notably lacking. Can multi-modal learning provably perform better than uni-modal? In this paper, we answer this question under one of the most popular multi-modal fusion frameworks, which first encodes features from different modalities into a common latent space and seamlessly maps the latent representations into the task space. We prove that learning with multiple modalities achieves a smaller population risk than only using its subset of modalities. The main intuition is that the former has a more accurate estimate of the latent space representation. To the best of our knowledge, this is the first theoretical treatment to capture important qualitative phenomena observed in real multi-modal applications from the generalization perspective. Combined with experimental results, we show that multi-modal learning does possess an appealing formal guarantee.
| null |
Quantifying and Improving Transferability in Domain Generalization
|
https://papers.nips.cc/paper_files/paper/2021/hash/5adaacd4531b78ff8b5cedfe3f4d5212-Abstract.html
|
Guojun Zhang, Han Zhao, Yaoliang Yu, Pascal Poupart
|
https://papers.nips.cc/paper_files/paper/2021/hash/5adaacd4531b78ff8b5cedfe3f4d5212-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12461-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5adaacd4531b78ff8b5cedfe3f4d5212-Paper.pdf
|
https://openreview.net/forum?id=SQqKl8I6xD8
|
https://papers.nips.cc/paper_files/paper/2021/file/5adaacd4531b78ff8b5cedfe3f4d5212-Supplemental.pdf
|
Out-of-distribution generalization is one of the key challenges when transferring a model from the lab to the real world. Existing efforts mostly focus on building invariant features among source and target domains. Based on invariant features, a high-performing classifier on source domains could hopefully behave equally well on a target domain. In other words, we hope the invariant features to be \emph{transferable}. However, in practice, there are no perfectly transferable features, and some algorithms seem to learn ``more transferable'' features than others. How can we understand and quantify such \emph{transferability}? In this paper, we formally define transferability that one can quantify and compute in domain generalization. We point out the difference and connection with common discrepancy measures between domains, such as total variation and Wasserstein distance. We then prove that our transferability can be estimated with enough samples and give a new upper bound for the target error based on our transferability. Empirically, we evaluate the transferability of the feature embeddings learned by existing algorithms for domain generalization. Surprisingly, we find that many algorithms are not quite learning transferable features, although few could still survive. In light of this, we propose a new algorithm for learning transferable features and test it over various benchmark datasets, including RotatedMNIST, PACS, Office-Home and WILDS-FMoW. Experimental results show that the proposed algorithm achieves consistent improvement over many state-of-the-art algorithms, corroborating our theoretical findings.
| null |
Beyond Pinball Loss: Quantile Methods for Calibrated Uncertainty Quantification
|
https://papers.nips.cc/paper_files/paper/2021/hash/5b168fdba5ee5ea262cc2d4c0b457697-Abstract.html
|
Youngseog Chung, Willie Neiswanger, Ian Char, Jeff Schneider
|
https://papers.nips.cc/paper_files/paper/2021/hash/5b168fdba5ee5ea262cc2d4c0b457697-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12462-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5b168fdba5ee5ea262cc2d4c0b457697-Paper.pdf
|
https://openreview.net/forum?id=QbVza2PKM7T
|
https://papers.nips.cc/paper_files/paper/2021/file/5b168fdba5ee5ea262cc2d4c0b457697-Supplemental.pdf
|
Among the many ways of quantifying uncertainty in a regression setting, specifying the full quantile function is attractive, as quantiles are amenable to interpretation and evaluation. A model that predicts the true conditional quantiles for each input, at all quantile levels, presents a correct and efficient representation of the underlying uncertainty. To achieve this, many current quantile-based methods focus on optimizing the pinball loss. However, this loss restricts the scope of applicable regression models, limits the ability to target many desirable properties (e.g. calibration, sharpness, centered intervals), and may produce poor conditional quantiles. In this work, we develop new quantile methods that address these shortcomings. In particular, we propose methods that can apply to any class of regression model, select an explicit balance between calibration and sharpness, optimize for calibration of centered intervals, and produce more accurate conditional quantiles. We provide a thorough experimental evaluation of our methods, which includes a high dimensional uncertainty quantification task in nuclear fusion.
| null |
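For readers unfamiliar with the pinball loss that the abstract above argues is too restrictive on its own, here is a minimal NumPy sketch of the standard check loss at quantile level q:

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    # Standard pinball (check) loss at quantile level q in (0, 1):
    # under-predictions are weighted by q, over-predictions by 1 - q.
    diff = y_true - y_pred
    return float(np.mean(np.maximum(q * diff, (q - 1.0) * diff)))
```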
Dynamic Inference with Neural Interpreters
|
https://papers.nips.cc/paper_files/paper/2021/hash/5b4e9aa703d0bfa11041debaa2d1b633-Abstract.html
|
Nasim Rahaman, Muhammad Waleed Gondal, Shruti Joshi, Peter Gehler, Yoshua Bengio, Francesco Locatello, Bernhard Schölkopf
|
https://papers.nips.cc/paper_files/paper/2021/hash/5b4e9aa703d0bfa11041debaa2d1b633-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12463-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5b4e9aa703d0bfa11041debaa2d1b633-Paper.pdf
|
https://openreview.net/forum?id=IUjt25DtqC4
|
https://papers.nips.cc/paper_files/paper/2021/file/5b4e9aa703d0bfa11041debaa2d1b633-Supplemental.pdf
|
Modern neural network architectures can leverage large amounts of data to generalize well within the training distribution. However, they are less capable of systematic generalization to data drawn from unseen but related distributions, a feat that is hypothesized to require compositional reasoning and reuse of knowledge. In this work, we present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules, which we call functions. Inputs to the model are routed through a sequence of functions in a way that is end-to-end learned. The proposed architecture can flexibly compose computation along width and depth, and lends itself well to capacity extension after training. To demonstrate the versatility of Neural Interpreters, we evaluate it in two distinct settings: image classification and visual abstract reasoning on Raven Progressive Matrices. In the former, we show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferrable to a new task in a sample efficient manner. In the latter, we find that Neural Interpreters are competitive with respect to the state-of-the-art in terms of systematic generalization.
| null |
Leveraging Recursive Gumbel-Max Trick for Approximate Inference in Combinatorial Spaces
|
https://papers.nips.cc/paper_files/paper/2021/hash/5b658d2a925565f0755e035597f8d22f-Abstract.html
|
Kirill Struminsky, Artyom Gadetsky, Denis Rakitin, Danil Karpushkin, Dmitry P. Vetrov
|
https://papers.nips.cc/paper_files/paper/2021/hash/5b658d2a925565f0755e035597f8d22f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12464-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5b658d2a925565f0755e035597f8d22f-Paper.pdf
|
https://openreview.net/forum?id=V2nQ_he-go_
| null |
Structured latent variables allow incorporating meaningful prior knowledge into deep learning models. However, learning with such variables remains challenging because of their discrete nature. Nowadays, the standard learning approach is to define a latent variable as a perturbed algorithm output and to use a differentiable surrogate for training. In general, the surrogate puts additional constraints on the model and inevitably leads to biased gradients. To alleviate these shortcomings, we extend the Gumbel-Max trick to define distributions over structured domains. We avoid the differentiable surrogates by leveraging the score function estimators for optimization. In particular, we highlight a family of recursive algorithms with a common feature we call stochastic invariant. The feature allows us to construct reliable gradient estimates and control variates without additional constraints on the model. In our experiments, we consider various structured latent variable models and achieve results competitive with relaxation-based counterparts.
| null |
Hamiltonian Dynamics with Non-Newtonian Momentum for Rapid Sampling
|
https://papers.nips.cc/paper_files/paper/2021/hash/5b970a1d9be0fd100063fd6cd688b73e-Abstract.html
|
Greg Ver Steeg, Aram Galstyan
|
https://papers.nips.cc/paper_files/paper/2021/hash/5b970a1d9be0fd100063fd6cd688b73e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12465-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5b970a1d9be0fd100063fd6cd688b73e-Paper.pdf
|
https://openreview.net/forum?id=lo8A3c27IOl
|
https://papers.nips.cc/paper_files/paper/2021/file/5b970a1d9be0fd100063fd6cd688b73e-Supplemental.pdf
|
Sampling from an unnormalized probability distribution is a fundamental problem in machine learning with applications including Bayesian modeling, latent factor inference, and energy-based model training. After decades of research, variations of MCMC remain the default approach to sampling despite slow convergence. Auxiliary neural models can learn to speed up MCMC, but the overhead for training the extra model can be prohibitive. We propose a fundamentally different approach to this problem via a new Hamiltonian dynamics with a non-Newtonian momentum. In contrast to MCMC approaches like Hamiltonian Monte Carlo, no stochastic step is required. Instead, the proposed deterministic dynamics in an extended state space exactly sample the target distribution, specified by an energy function, under an assumption of ergodicity. Alternatively, the dynamics can be interpreted as a normalizing flow that samples a specified energy model without training. The proposed Energy Sampling Hamiltonian (ESH) dynamics have a simple form that can be solved with existing ODE solvers, but we derive a specialized solver that exhibits much better performance. ESH dynamics converge faster than their MCMC competitors enabling faster, more stable training of neural network energy models.
| null |
Dynamic Normalization and Relay for Video Action Recognition
|
https://papers.nips.cc/paper_files/paper/2021/hash/5bd529d5b07b647a8863cf71e98d651a-Abstract.html
|
Dongqi Cai, Anbang Yao, Yurong Chen
|
https://papers.nips.cc/paper_files/paper/2021/hash/5bd529d5b07b647a8863cf71e98d651a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12466-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5bd529d5b07b647a8863cf71e98d651a-Paper.pdf
|
https://openreview.net/forum?id=AWadl8GeCEG
|
https://papers.nips.cc/paper_files/paper/2021/file/5bd529d5b07b647a8863cf71e98d651a-Supplemental.pdf
|
Convolutional Neural Networks (CNNs) have been the dominant model for video action recognition. Due to the huge memory and compute demand, popular action recognition networks need to be trained with small batch sizes, which makes learning discriminative spatial-temporal representations for videos become a challenging problem. In this paper, we present Dynamic Normalization and Relay (DNR), an improved normalization design, to augment the spatial-temporal representation learning of any deep action recognition model, adapting to small batch size training settings. We observe that state-of-the-art action recognition networks usually apply the same normalization parameters to all video data, and ignore the dependencies of the estimated normalization parameters between neighboring frames (at the same layer) and between neighboring layers (with all frames of a video clip). Inspired by this, DNR introduces two dynamic normalization relay modules to explore the potentials of cross-temporal and cross-layer feature distribution dependencies for estimating accurate layer-wise normalization parameters. These two DNR modules are instantiated as a light-weight recurrent structure conditioned on the current input features, and the normalization parameters estimated from the neighboring frames based features at the same layer or from the whole video clip based features at the preceding layers. We first plug DNR into prevailing 2D CNN backbones and test its performance on public action recognition datasets including Kinetics and Something-Something. Experimental results show that DNR brings large performance improvements to the baselines, achieving over 4.4% absolute margins in top-1 accuracy without training bells and whistles. More experiments on 3D backbones and several latest 2D spatial-temporal networks further validate its effectiveness. Code will be available at https://github.com/caidonkey/dnr.
| null |
Robust Visual Reasoning via Language Guided Neural Module Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/5bd53571b97884635d13910db49626bc-Abstract.html
|
Arjun Akula, Varun Jampani, Soravit Changpinyo, Song-Chun Zhu
|
https://papers.nips.cc/paper_files/paper/2021/hash/5bd53571b97884635d13910db49626bc-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12467-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5bd53571b97884635d13910db49626bc-Paper.pdf
|
https://openreview.net/forum?id=T1f0YKPP_K
|
https://papers.nips.cc/paper_files/paper/2021/file/5bd53571b97884635d13910db49626bc-Supplemental.pdf
|
Neural module networks (NMN) are a popular approach for solving multi-modal tasks such as visual question answering (VQA) and visual referring expression recognition (REF). A key limitation in prior implementations of NMN is that the neural modules do not effectively capture the association between the visual input and the relevant neighbourhood context of the textual input. This limits their generalizability. For instance, NMN fail to understand new concepts such as "yellow sphere to the left" even when it is a combination of known concepts from the training data: "blue sphere", "yellow cube", and "metallic cube to the left". In this paper, we address this limitation by introducing a language-guided adaptive convolution layer (LG-Conv) into NMN, in which the filter weights of convolutions are explicitly multiplied with a spatially varying language-guided kernel. Our model allows the neural module to adaptively co-attend over potential objects of interest from the visual and textual inputs. Extensive experiments on VQA and REF tasks demonstrate the effectiveness of our approach. Additionally, we propose a new challenging out-of-distribution test split for the REF task, which we call C3-Ref+, for explicitly evaluating the NMN’s ability to generalize well to adversarial perturbations and unseen combinations of known concepts. Experiments on C3-Ref+ further demonstrate the generalization capabilities of our approach.
| null |
True Few-Shot Learning with Language Models
|
https://papers.nips.cc/paper_files/paper/2021/hash/5c04925674920eb58467fb52ce4ef728-Abstract.html
|
Ethan Perez, Douwe Kiela, Kyunghyun Cho
|
https://papers.nips.cc/paper_files/paper/2021/hash/5c04925674920eb58467fb52ce4ef728-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12468-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5c04925674920eb58467fb52ce4ef728-Paper.pdf
|
https://openreview.net/forum?id=ShnM-rRh4T
|
https://papers.nips.cc/paper_files/paper/2021/file/5c04925674920eb58467fb52ce4ef728-Supplemental.pdf
|
Pretrained language models (LMs) perform well on many tasks even when learning from a few examples, but prior work uses many held-out examples to tune various aspects of learning, such as hyperparameters, training objectives, and natural language templates ("prompts"). Here, we evaluate the few-shot ability of LMs when such held-out examples are unavailable, a setting we call true few-shot learning. We test two model selection criteria, cross-validation and minimum description length, for choosing LM prompts and hyperparameters in the true few-shot setting. On average, both marginally outperform random selection and greatly underperform selection based on held-out examples. Moreover, selection criteria often prefer models that perform significantly worse than randomly-selected ones. We find similar results even when taking into account our uncertainty in a model's true performance during selection, as well as when varying the amount of computation and number of examples used for selection. Overall, our findings suggest that prior work significantly overestimated the true few-shot ability of LMs given the difficulty of few-shot model selection.
| null |
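A minimal sketch of one of the two selection criteria named in the abstract above, cross-validation over only the few labeled examples; `eval_fn`, which scores a held-out example given a prompt and the remaining examples, is a hypothetical stand-in for querying the language model.

```python
import numpy as np

def choose_prompt_cv(prompts, examples, eval_fn, n_folds=4):
    # True few-shot prompt selection: every fold of the few labeled examples is
    # held out in turn and scored under each candidate prompt; no extra
    # validation set is assumed.
    folds = np.array_split(np.arange(len(examples)), n_folds)
    best_prompt, best_score = None, -np.inf
    for prompt in prompts:
        scores = []
        for fold in folds:
            held_out = set(fold.tolist())
            train = [ex for i, ex in enumerate(examples) if i not in held_out]
            scores.extend(eval_fn(prompt, train, examples[i]) for i in fold)
        if scores and np.mean(scores) > best_score:
            best_prompt, best_score = prompt, float(np.mean(scores))
    return best_prompt
```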
Selective Sampling for Online Best-arm Identification
|
https://papers.nips.cc/paper_files/paper/2021/hash/5c333c4ffd55c7a3576e6a614d81af82-Abstract.html
|
Romain Camilleri, Zhihan Xiong, Maryam Fazel, Lalit Jain, Kevin G. Jamieson
|
https://papers.nips.cc/paper_files/paper/2021/hash/5c333c4ffd55c7a3576e6a614d81af82-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12469-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5c333c4ffd55c7a3576e6a614d81af82-Paper.pdf
|
https://openreview.net/forum?id=aoXERVeC7cC
|
https://papers.nips.cc/paper_files/paper/2021/file/5c333c4ffd55c7a3576e6a614d81af82-Supplemental.pdf
|
This work considers the problem of selective-sampling for best-arm identification. Given a set of potential options $\mathcal{Z}\subset\mathbb{R}^d$, a learner aims to compute with probability greater than $1-\delta$, $\arg\max_{z\in \mathcal{Z}} z^{\top}\theta_{\ast}$ where $\theta_{\ast}$ is unknown. At each time step, a potential measurement $x_t\in \mathcal{X}\subset\mathbb{R}^d$ is drawn IID and the learner can either choose to take the measurement, in which case they observe a noisy measurement of $x^{\top}\theta_{\ast}$, or to abstain from taking the measurement and wait for a potentially more informative point to arrive in the stream. Hence the learner faces a fundamental trade-off between the number of labeled samples they take and when they have collected enough evidence to declare the best arm and stop sampling. The main results of this work precisely characterize this trade-off between labeled samples and stopping time and provide an algorithm that nearly-optimally achieves the minimal label complexity given a desired stopping time. In addition, we show that the optimal decision rule has a simple geometric form based on deciding whether a point is in an ellipse or not. Finally, our framework is general enough to capture binary classification improving upon previous works.
| null |
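The abstract above notes that the optimal take-or-abstain rule reduces to an ellipsoid membership test; a minimal sketch of that geometric check follows (the matrix `A` defining the ellipsoid is algorithm-specific and simply treated as given here).

```python
import numpy as np

def in_ellipsoid(x, A, threshold=1.0):
    # Membership test for the ellipsoid {x : x^T A x <= threshold}, with A
    # symmetric positive definite. The abstract only states that the optimal
    # decision rule has this form; which side triggers a measurement is left
    # to the algorithm and not asserted here.
    return float(x @ A @ x) <= threshold
```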
Multi-task Learning of Order-Consistent Causal Graphs
|
https://papers.nips.cc/paper_files/paper/2021/hash/5c3a3b139a11689e0bc55abd95e20e39-Abstract.html
|
Xinshi Chen, Haoran Sun, Caleb Ellington, Eric Xing, Le Song
|
https://papers.nips.cc/paper_files/paper/2021/hash/5c3a3b139a11689e0bc55abd95e20e39-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12470-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5c3a3b139a11689e0bc55abd95e20e39-Paper.pdf
|
https://openreview.net/forum?id=zweDnxxWRe
|
https://papers.nips.cc/paper_files/paper/2021/file/5c3a3b139a11689e0bc55abd95e20e39-Supplemental.pdf
|
We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs), where the involved graph structures share a consistent causal order and sparse unions of supports. Under the multi-task learning setting, we propose a $l_1/l_2$-regularized maximum likelihood estimator (MLE) for learning $K$ linear structural equation models. We theoretically show that the joint estimator, by leveraging data across related tasks, can achieve a better sample complexity for recovering the causal order (or topological order) than separate estimations. Moreover, the joint estimator is able to recover non-identifiable DAGs, by estimating them together with some identifiable DAGs. Lastly, our analysis also shows the consistency of union support recovery of the structures. To allow practical implementation, we design a continuous optimization problem whose optimizer is the same as the joint estimator and can be approximated efficiently by an iterative algorithm. We validate the theoretical analysis and the effectiveness of the joint estimator in experiments.
| null |
Learning to Iteratively Solve Routing Problems with Dual-Aspect Collaborative Transformer
|
https://papers.nips.cc/paper_files/paper/2021/hash/5c53292c032b6cb8510041c54274e65f-Abstract.html
|
Yining Ma, Jingwen Li, Zhiguang Cao, Wen Song, Le Zhang, Zhenghua Chen, Jing Tang
|
https://papers.nips.cc/paper_files/paper/2021/hash/5c53292c032b6cb8510041c54274e65f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12471-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5c53292c032b6cb8510041c54274e65f-Paper.pdf
|
https://openreview.net/forum?id=TmLqkYn71gV
| null |
Recently, Transformer has become a prevailing deep architecture for solving vehicle routing problems (VRPs). However, it is less effective in learning improvement models for VRP because its positional encoding (PE) method is not suitable in representing VRP solutions. This paper presents a novel Dual-Aspect Collaborative Transformer (DACT) to learn embeddings for the node and positional features separately, instead of fusing them together as done in existing ones, so as to avoid potential noises and incompatible correlations. Moreover, the positional features are embedded through a novel cyclic positional encoding (CPE) method to allow Transformer to effectively capture the circularity and symmetry of VRP solutions (i.e., cyclic sequences). We train DACT using Proximal Policy Optimization and design a curriculum learning strategy for better sample efficiency. We apply DACT to solve the traveling salesman problem (TSP) and capacitated vehicle routing problem (CVRP). Results show that our DACT outperforms existing Transformer based improvement models, and exhibits much better generalization performance across different problem sizes on synthetic and benchmark instances, respectively.
| null |
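A minimal sketch of the circularity idea behind a cyclic positional encoding for a tour of length n: sinusoids whose periods divide the tour length, so that position n-1 is encoded as adjacent to position 0. This illustrates the property the abstract above highlights, not the paper's exact CPE formula.

```python
import numpy as np

def cyclic_positional_encoding(n, d):
    # n: tour length, d: embedding dimension (assumed even).
    # Each pair of dimensions uses an integer wave number k, so the encoding
    # is exactly periodic over the n positions and respects the tour's cycle.
    pos = np.arange(n)[:, None]              # shape (n, 1)
    k = np.arange(1, d // 2 + 1)[None, :]    # shape (1, d // 2)
    angle = 2.0 * np.pi * pos * k / n
    return np.concatenate([np.sin(angle), np.cos(angle)], axis=-1)  # (n, d)
```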
Learning interaction rules from multi-animal trajectories via augmented behavioral models
|
https://papers.nips.cc/paper_files/paper/2021/hash/5c572eca050594c7bc3c36e7e8ab9550-Abstract.html
|
Keisuke Fujii, Naoya Takeishi, Kazushi Tsutsui, Emyo Fujioka, Nozomi Nishiumi, Ryoya Tanaka, Mika Fukushiro, Kaoru Ide, Hiroyoshi Kohno, Ken Yoda, Susumu Takahashi, Shizuko Hiryu, Yoshinobu Kawahara
|
https://papers.nips.cc/paper_files/paper/2021/hash/5c572eca050594c7bc3c36e7e8ab9550-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12472-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5c572eca050594c7bc3c36e7e8ab9550-Paper.pdf
|
https://openreview.net/forum?id=aMxdmZTH8b
|
https://papers.nips.cc/paper_files/paper/2021/file/5c572eca050594c7bc3c36e7e8ab9550-Supplemental.pdf
|
Extracting the interaction rules of biological agents from movement sequences poses challenges in various domains. Granger causality is a practical framework for analyzing the interactions from observed time-series data; however, this framework ignores the structures and assumptions of the generative process in animal behaviors, which may lead to interpretational problems and sometimes erroneous assessments of causality. In this paper, we propose a new framework for learning Granger causality from multi-animal trajectories via augmented theory-based behavioral models with interpretable data-driven models. We adopt an approach for augmenting incomplete multi-agent behavioral models described by time-varying dynamical systems with neural networks. For efficient and interpretable learning, our model leverages theory-based architectures separating navigation and motion processes, and the theory-guided regularization for reliable behavioral modeling. This can provide interpretable signs of Granger-causal effects over time, i.e., when specific others cause the approach or separation. In experiments using synthetic datasets, our method achieved better performance than various baselines. We then analyzed multi-animal datasets of mice, flies, birds, and bats, which verified our method and obtained novel biological insights.
| null |
Differentiable Synthesis of Program Architectures
|
https://papers.nips.cc/paper_files/paper/2021/hash/5c5a93a042235058b1ef7b0ac1e11b67-Abstract.html
|
Guofeng Cui, He Zhu
|
https://papers.nips.cc/paper_files/paper/2021/hash/5c5a93a042235058b1ef7b0ac1e11b67-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12473-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5c5a93a042235058b1ef7b0ac1e11b67-Paper.pdf
|
https://openreview.net/forum?id=ivXd1iOKx9M
|
https://papers.nips.cc/paper_files/paper/2021/file/5c5a93a042235058b1ef7b0ac1e11b67-Supplemental.pdf
|
Differentiable programs have recently attracted much interest due to their interpretability, compositionality, and their efficiency in leveraging differentiable training. However, synthesizing differentiable programs requires optimizing over a combinatorial, rapidly exploding space of program architectures. Despite the development of effective pruning heuristics, previous works essentially enumerate the discrete search space of program architectures, which is inefficient. We propose to encode program architecture search as learning the probability distribution over all possible program derivations induced by a context-free grammar. This allows the search algorithm to efficiently prune away unlikely program derivations to synthesize optimal program architectures. To this end, an efficient gradient-descent based method is developed to conduct program architecture search in a continuous relaxation of the discrete space of grammar rules. Experimental results on four sequence classification tasks demonstrate that our program synthesizer excels in discovering program architectures that lead to differentiable programs with higher F1 scores, while being more efficient than state-of-the-art program synthesis methods.
| null |
Make Sure You're Unsure: A Framework for Verifying Probabilistic Specifications
|
https://papers.nips.cc/paper_files/paper/2021/hash/5c5bc7df3d37b2a7ea29e1b47b2bd4ab-Abstract.html
|
Leonard Berrada, Sumanth Dathathri, Krishnamurthy Dvijotham, Robert Stanforth, Rudy R. Bunel, Jonathan Uesato, Sven Gowal, M. Pawan Kumar
|
https://papers.nips.cc/paper_files/paper/2021/hash/5c5bc7df3d37b2a7ea29e1b47b2bd4ab-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12474-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5c5bc7df3d37b2a7ea29e1b47b2bd4ab-Paper.pdf
|
https://openreview.net/forum?id=oAxm0Wz7Bv
|
https://papers.nips.cc/paper_files/paper/2021/file/5c5bc7df3d37b2a7ea29e1b47b2bd4ab-Supplemental.pdf
|
Most real world applications require dealing with stochasticity like sensor noise or predictive uncertainty, where formal specifications of desired behavior are inherently probabilistic. Despite the promise of formal verification in ensuring the reliability of neural networks, progress in the direction of probabilistic specifications has been limited. In this direction, we first introduce a general formulation of probabilistic specifications for neural networks, which captures both probabilistic networks (e.g., Bayesian neural networks, MC-Dropout networks) and uncertain inputs (distributions over inputs arising from sensor noise or other perturbations). We then propose a general technique to verify such specifications by generalizing the notion of Lagrangian duality, replacing standard Lagrangian multipliers with "functional multipliers" that can be arbitrary functions of the activations at a given layer. We show that an optimal choice of functional multipliers leads to exact verification (i.e., sound and complete verification), and for specific forms of multipliers, we develop tractable practical verification algorithms. We empirically validate our algorithms by applying them to Bayesian Neural Networks (BNNs) and MC Dropout Networks, and certifying properties such as adversarial robustness and robust detection of out-of-distribution (OOD) data. On these tasks we are able to provide significantly stronger guarantees when compared to prior work -- for instance, for a VGG-64 MC-Dropout CNN trained on CIFAR-10 in a verification-agnostic manner, we improve the certified AUC (a verified lower bound on the true AUC) for robust OOD detection (on CIFAR-100) from $0 \% \rightarrow 29\%$. Similarly, for a BNN trained on MNIST, we improve on the $\ell_\infty$ robust accuracy from $60.2 \% \rightarrow 74.6\%$. Further, on a novel specification -- distributionally robust OOD detection -- we improve on the certified AUC from $5\% \rightarrow 23\%$.
| null |
Oracle-Efficient Regret Minimization in Factored MDPs with Unknown Structure
|
https://papers.nips.cc/paper_files/paper/2021/hash/5c936263f3428a40227908d5a3847c0b-Abstract.html
|
Aviv Rosenberg, Yishay Mansour
|
https://papers.nips.cc/paper_files/paper/2021/hash/5c936263f3428a40227908d5a3847c0b-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12475-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5c936263f3428a40227908d5a3847c0b-Paper.pdf
|
https://openreview.net/forum?id=BS4SiQ3U9t6
|
https://papers.nips.cc/paper_files/paper/2021/file/5c936263f3428a40227908d5a3847c0b-Supplemental.pdf
|
We study regret minimization in non-episodic factored Markov decision processes (FMDPs), where all existing algorithms make the strong assumption that the factored structure of the FMDP is known to the learner in advance. In this paper, we provide the first algorithm that learns the structure of the FMDP while minimizing the regret. Our algorithm is based on the optimism in face of uncertainty principle, combined with a simple statistical method for structure learning, and can be implemented efficiently given oracle-access to an FMDP planner. Moreover, we give a variant of our algorithm that remains efficient even when the oracle is limited to non-factored actions, which is the case with almost all existing approximate planners. Finally, we leverage our techniques to prove a novel lower bound for the known structure case, closing the gap to the regret bound of Chen et al. [2021].
| null |
Linear-Time Probabilistic Solution of Boundary Value Problems
|
https://papers.nips.cc/paper_files/paper/2021/hash/5ca3e9b122f61f8f06494c97b1afccf3-Abstract.html
|
Nicholas Krämer, Philipp Hennig
|
https://papers.nips.cc/paper_files/paper/2021/hash/5ca3e9b122f61f8f06494c97b1afccf3-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12476-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf
|
https://openreview.net/forum?id=U9NNzquYEHC
|
https://papers.nips.cc/paper_files/paper/2021/file/5ca3e9b122f61f8f06494c97b1afccf3-Supplemental.pdf
|
We propose a fast algorithm for the probabilistic solution of boundary value problems (BVPs), which are ordinary differential equations subject to boundary conditions. In contrast to previous work, we introduce a Gauss-Markov prior and tailor it specifically to BVPs, which allows computing a posterior distribution over the solution in linear time, at a quality and cost comparable to that of well-established, non-probabilistic methods. Our model further delivers uncertainty quantification, mesh refinement, and hyperparameter adaptation. We demonstrate how these practical considerations positively impact the efficiency of the scheme. Altogether, this results in a practically usable probabilistic BVP solver that is (in contrast to non-probabilistic algorithms) natively compatible with other parts of the statistical modelling tool-chain.
| null |
Lifelong Domain Adaptation via Consolidated Internal Distribution
|
https://papers.nips.cc/paper_files/paper/2021/hash/5caf41d62364d5b41a893adc1a9dd5d4-Abstract.html
|
Mohammad Rostami
|
https://papers.nips.cc/paper_files/paper/2021/hash/5caf41d62364d5b41a893adc1a9dd5d4-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12477-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5caf41d62364d5b41a893adc1a9dd5d4-Paper.pdf
|
https://openreview.net/forum?id=lpW-UP8VKcg
|
https://papers.nips.cc/paper_files/paper/2021/file/5caf41d62364d5b41a893adc1a9dd5d4-Supplemental.pdf
|
We develop an algorithm to address unsupervised domain adaptation (UDA) in continual learning (CL) settings. The goal is to update a model continually to learn distributional shifts across sequentially arriving tasks with unlabeled data while retaining the knowledge about the past learned tasks. Existing UDA algorithms address the challenge of domain shift, but they require simultaneous access to the datasets of the source and the target domains. On the other hand, existing works on CL can handle tasks with labeled data. Our solution is based on consolidating the learned internal distribution for improved model generalization on new domains and benefitting from experience replay to overcome catastrophic forgetting.
| null |
Counterbalancing Learning and Strategic Incentives in Allocation Markets
|
https://papers.nips.cc/paper_files/paper/2021/hash/5cc3749a6e56ef6d656735dff9176074-Abstract.html
|
Jamie Kang, Faidra Monachou, Moran Koren, Itai Ashlagi
|
https://papers.nips.cc/paper_files/paper/2021/hash/5cc3749a6e56ef6d656735dff9176074-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12478-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5cc3749a6e56ef6d656735dff9176074-Paper.pdf
|
https://openreview.net/forum?id=vFKvKIwcHw9
|
https://papers.nips.cc/paper_files/paper/2021/file/5cc3749a6e56ef6d656735dff9176074-Supplemental.pdf
|
Motivated by the high discard rate of donated organs in the United States, we study an allocation problem in the presence of learning and strategic incentives. We consider a setting where a benevolent social planner decides whether and how to allocate a single indivisible object to a queue of strategic agents. The object has a common true quality, good or bad, which is ex-ante unknown to everyone. Each agent holds an informative, yet noisy, private signal about the quality. To make a correct allocation decision the planner attempts to learn the object quality by truthfully eliciting agents' signals. Under the commonly applied sequential offering mechanism, we show that learning is hampered by the presence of strategic incentives as herding may emerge. This can result in incorrect allocation and welfare loss. To overcome these issues, we propose a novel class of incentive-compatible mechanisms. Our mechanism involves a batch-by-batch, dynamic voting process using a majority rule. We prove that the proposed voting mechanisms improve the probability of correct allocation whenever agents are sufficiently well informed. Particularly, we show that such an improvement can be achieved via a simple greedy algorithm. We quantify the improvement using simulations.
| null |
Controlling Neural Networks with Rule Representations
|
https://papers.nips.cc/paper_files/paper/2021/hash/5cd5058bca53951ffa7801bcdf421651-Abstract.html
|
Sungyong Seo, Sercan Arik, Jinsung Yoon, Xiang Zhang, Kihyuk Sohn, Tomas Pfister
|
https://papers.nips.cc/paper_files/paper/2021/hash/5cd5058bca53951ffa7801bcdf421651-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12479-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5cd5058bca53951ffa7801bcdf421651-Paper.pdf
|
https://openreview.net/forum?id=owQmPJ9q9u
|
https://papers.nips.cc/paper_files/paper/2021/file/5cd5058bca53951ffa7801bcdf421651-Supplemental.zip
|
We propose a novel training method that integrates rules into deep learning, in a way the strengths of the rules are controllable at inference. Deep Neural Networks with Controllable Rule Representations (DeepCTRL) incorporates a rule encoder into the model coupled with a rule-based objective, enabling a shared representation for decision making. DeepCTRL is agnostic to data type and model architecture. It can be applied to any kind of rule defined for inputs and outputs. The key aspect of DeepCTRL is that it does not require retraining to adapt the rule strength -- at inference, the user can adjust it based on the desired operation point on accuracy vs. rule verification ratio. In real-world domains where incorporating rules is critical -- such as Physics, Retail and Healthcare -- we show the effectiveness of DeepCTRL in teaching rules for deep learning. DeepCTRL improves the trust and reliability of the trained models by significantly increasing their rule verification ratio, while also providing accuracy gains at downstream tasks. Additionally, DeepCTRL enables novel use cases such as hypothesis testing of the rules on data samples, and unsupervised adaptation based on shared rules between datasets.
| null |
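A heavily simplified sketch of the controllable rule strength described in the DeepCTRL abstract above: data and rule representations are combined with a coefficient that the user can change at inference, with no retraining. The convex-combination form and all names here are illustrative assumptions rather than the paper's exact architecture.

```python
def controllable_forward(data_encoder, rule_encoder, head, x, rule_strength):
    # rule_strength in [0, 1]: 0 ignores the rule representation, 1 relies on
    # it fully; adjusting it at inference trades task accuracy against the
    # rule verification ratio without retraining.
    z = rule_strength * rule_encoder(x) + (1.0 - rule_strength) * data_encoder(x)
    return head(z)
```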
Making the most of your day: online learning for optimal allocation of time
|
https://papers.nips.cc/paper_files/paper/2021/hash/5d2c2cee8ab0b9a36bd1ed7196bd6c4a-Abstract.html
|
Etienne Boursier, Tristan Garrec, Vianney Perchet, Marco Scarsini
|
https://papers.nips.cc/paper_files/paper/2021/hash/5d2c2cee8ab0b9a36bd1ed7196bd6c4a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12480-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5d2c2cee8ab0b9a36bd1ed7196bd6c4a-Paper.pdf
|
https://openreview.net/forum?id=AlvGTwr_t0S
|
https://papers.nips.cc/paper_files/paper/2021/file/5d2c2cee8ab0b9a36bd1ed7196bd6c4a-Supplemental.pdf
|
We study online learning for optimal allocation when the resource to be allocated is time. An agent receives task proposals sequentially according to a Poisson process and can either accept or reject a proposed task. If she accepts the proposal, she is busy for the duration of the task and obtains a reward that depends on the task duration. If she rejects it, she remains on hold until a new task proposal arrives. We study the regret incurred by the agent first when she knows her reward function but does not know the distribution of the task duration, and then when she does not know her reward function, either. Faster rates are finally obtained by adding structural assumptions on the distribution of rides or on the reward function. This natural setting bears similarities with contextual (one-armed) bandits, but with the crucial difference that the normalized reward associated to a context depends on the whole distribution of contexts.
| null |
Federated Reconstruction: Partially Local Federated Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/5d44a2b0d85aa1a4dd3f218be6422c66-Abstract.html
|
Karan Singhal, Hakim Sidahmed, Zachary Garrett, Shanshan Wu, John Rush, Sushant Prakash
|
https://papers.nips.cc/paper_files/paper/2021/hash/5d44a2b0d85aa1a4dd3f218be6422c66-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12481-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5d44a2b0d85aa1a4dd3f218be6422c66-Paper.pdf
|
https://openreview.net/forum?id=Hk0olMdZOIU
|
https://papers.nips.cc/paper_files/paper/2021/file/5d44a2b0d85aa1a4dd3f218be6422c66-Supplemental.zip
|
Personalization methods in federated learning aim to balance the benefits of federated and local training for data availability, communication cost, and robustness to client heterogeneity. Approaches that require clients to communicate all model parameters can be undesirable due to privacy and communication constraints. Other approaches require always-available or stateful clients, impractical in large-scale cross-device settings. We introduce Federated Reconstruction, the first model-agnostic framework for partially local federated learning suitable for training and inference at scale. We motivate the framework via a connection to model-agnostic meta learning, empirically demonstrate its performance over existing approaches for collaborative filtering and next word prediction, and release an open-source library for evaluating approaches in this setting. We also describe the successful deployment of this approach at scale for federated collaborative filtering in a mobile keyboard application.
| null |
Optimal prediction of Markov chains with and without spectral gap
|
https://papers.nips.cc/paper_files/paper/2021/hash/5d69dc892ba6e79fda0c6a1e286f24c5-Abstract.html
|
Yanjun Han, Soham Jana, Yihong Wu
|
https://papers.nips.cc/paper_files/paper/2021/hash/5d69dc892ba6e79fda0c6a1e286f24c5-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12482-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5d69dc892ba6e79fda0c6a1e286f24c5-Paper.pdf
|
https://openreview.net/forum?id=dHc1p5eoecb
|
https://papers.nips.cc/paper_files/paper/2021/file/5d69dc892ba6e79fda0c6a1e286f24c5-Supplemental.pdf
|
We study the following learning problem with dependent data: Given a trajectory of length $n$ from a stationary Markov chain with $k$ states, the goal is to predict the distribution of the next state. For $3 \leq k \leq O(\sqrt{n})$, the optimal prediction risk in the Kullback-Leibler divergence is shown to be $\Theta(\frac{k^2}{n}\log \frac{n}{k^2})$, in contrast to the optimal rate of $\Theta(\frac{\log \log n}{n})$ for $k=2$ previously shown in Falahatgar et al in 2016. These nonparametric rates can be attributed to the memory in the data, as the spectral gap of the Markov chain can be arbitrarily small. To quantify the memory effect, we study irreducible reversible chains with a prescribed spectral gap. In addition to characterizing the optimal prediction risk for two states, we show that, as long as the spectral gap is not excessively small, the prediction risk in the Markov model is $O(\frac{k^2}{n})$, which coincides with that of an iid model with the same number of parameters.
| null |
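As a concrete reference point for the prediction problem in the abstract above, here is the naive add-alpha (Laplace-smoothed) estimate of the next-state distribution from a single trajectory; this is only the obvious baseline, not the minimax-optimal predictor whose risk the paper characterizes.

```python
import numpy as np

def predict_next_state(trajectory, k, alpha=1.0):
    # trajectory: sequence of states in {0, ..., k-1}; returns an estimate of
    # the distribution of the next state given the current (last) state, using
    # add-alpha smoothing of the observed transition counts out of that state.
    counts = np.full(k, alpha)
    last = trajectory[-1]
    for s, t in zip(trajectory[:-1], trajectory[1:]):
        if s == last:
            counts[t] += 1
    return counts / counts.sum()
```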
Subquadratic Overparameterization for Shallow Neural Networks
|
https://papers.nips.cc/paper_files/paper/2021/hash/5d9e4a04afb9f3608ccc76c1ffa7573e-Abstract.html
|
ChaeHwan Song, Ali Ramezani-Kebrya, Thomas Pethick, Armin Eftekhari, Volkan Cevher
|
https://papers.nips.cc/paper_files/paper/2021/hash/5d9e4a04afb9f3608ccc76c1ffa7573e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12483-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5d9e4a04afb9f3608ccc76c1ffa7573e-Paper.pdf
|
https://openreview.net/forum?id=NhbFhfM960
|
https://papers.nips.cc/paper_files/paper/2021/file/5d9e4a04afb9f3608ccc76c1ffa7573e-Supplemental.pdf
|
Overparameterization refers to the important phenomenon where the width of a neural network is chosen such that learning algorithms can provably attain zero loss in nonconvex training. The existing theory establishes such global convergence using various initialization strategies, training modifications, and width scalings. In particular, the state-of-the-art results require the width to scale quadratically with the number of training data under standard initialization strategies used in practice for best generalization performance. In contrast, the most recent results obtain linear scaling either with requiring initializations that lead to the "lazy-training", or training only a single layer. In this work, we provide an analytical framework that allows us to adopt standard initialization strategies, possibly avoid lazy training, and train all layers simultaneously in basic shallow neural networks while attaining a desirable subquadratic scaling on the network width. We achieve the desiderata via Polyak-Lojasiewicz condition, smoothness, and standard assumptions on data, and use tools from random matrix theory.
| null |
Continuous Doubly Constrained Batch Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/5da713a690c067105aeb2fae32403405-Abstract.html
|
Rasool Fakoor, Jonas W. Mueller, Kavosh Asadi, Pratik Chaudhari, Alexander J. Smola
|
https://papers.nips.cc/paper_files/paper/2021/hash/5da713a690c067105aeb2fae32403405-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12484-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5da713a690c067105aeb2fae32403405-Paper.pdf
|
https://openreview.net/forum?id=O8uSRrmTeSQ
|
https://papers.nips.cc/paper_files/paper/2021/file/5da713a690c067105aeb2fae32403405-Supplemental.pdf
|
Reliant on too many experiments to learn good actions, current Reinforcement Learning (RL) algorithms have limited applicability in real-world settings, which can be too expensive to allow exploration. We propose an algorithm for batch RL, where effective policies are learned using only a fixed offline dataset instead of online interactions with the environment. The limited data in batch RL produces inherent uncertainty in value estimates of states/actions that were insufficiently represented in the training data. This leads to particularly severe extrapolation when our candidate policies diverge from one that generated the data. We propose to mitigate this issue via two straightforward penalties: a policy-constraint to reduce this divergence and a value-constraint that discourages overly optimistic estimates. Over a comprehensive set of $32$ continuous-action batch RL benchmarks, our approach compares favorably to state-of-the-art methods, regardless of how the offline data were collected.
| null |
Bridging Explicit and Implicit Deep Generative Models via Neural Stein Estimators
|
https://papers.nips.cc/paper_files/paper/2021/hash/5db60c98209913790e4fcce4597ee37c-Abstract.html
|
Qitian Wu, Rui Gao, Hongyuan Zha
|
https://papers.nips.cc/paper_files/paper/2021/hash/5db60c98209913790e4fcce4597ee37c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12485-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5db60c98209913790e4fcce4597ee37c-Paper.pdf
|
https://openreview.net/forum?id=wHxnK7Ucogy
|
https://papers.nips.cc/paper_files/paper/2021/file/5db60c98209913790e4fcce4597ee37c-Supplemental.pdf
|
There are two types of deep generative models: explicit and implicit. The former defines an explicit density form that allows likelihood inference, while the latter targets a flexible transformation from random noise to generated samples. While the two classes of generative models have shown great power in many applications, both of them, when used alone, suffer from respective limitations and drawbacks. To take full advantage of both models and enable mutual compensation, we propose a novel joint training framework that bridges an explicit (unnormalized) density estimator and an implicit sample generator via Stein discrepancy. We show that our method 1) induces novel mutual regularization via kernel Sobolev norm penalization and Moreau-Yosida regularization, and 2) stabilizes the training dynamics. Empirically, we demonstrate that the proposed method helps the density estimator identify data modes more accurately and guides the generator to output higher-quality samples, compared with training either model alone. The new approach also shows promising results when the training samples are contaminated or limited.
| null |
Score-based Generative Modeling in Latent Space
|
https://papers.nips.cc/paper_files/paper/2021/hash/5dca4c6b9e244d24a30b4c45601d9720-Abstract.html
|
Arash Vahdat, Karsten Kreis, Jan Kautz
|
https://papers.nips.cc/paper_files/paper/2021/hash/5dca4c6b9e244d24a30b4c45601d9720-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12486-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5dca4c6b9e244d24a30b4c45601d9720-Paper.pdf
|
https://openreview.net/forum?id=P9TYG0j-wtG
|
https://papers.nips.cc/paper_files/paper/2021/file/5dca4c6b9e244d24a30b4c45601d9720-Supplemental.pdf
|
Score-based generative models (SGMs) have recently demonstrated impressive results in terms of both sample quality and distribution coverage. However, they are usually applied directly in data space and often require thousands of network evaluations for sampling. Here, we propose the Latent Score-based Generative Model (LSGM), a novel approach that trains SGMs in a latent space, relying on the variational autoencoder framework. Moving from data to latent space allows us to train more expressive generative models, apply SGMs to non-continuous data, and learn smoother SGMs in a smaller space, resulting in fewer network evaluations and faster sampling. To enable training LSGMs end-to-end in a scalable and stable manner, we (i) introduce a new score-matching objective suitable to the LSGM setting, (ii) propose a novel parameterization of the score function that allows SGM to focus on the mismatch of the target distribution with respect to a simple Normal one, and (iii) analytically derive multiple techniques for variance reduction of the training objective. LSGM obtains a state-of-the-art FID score of 2.10 on CIFAR-10, outperforming all existing generative results on this dataset. On CelebA-HQ-256, LSGM is on a par with previous SGMs in sample quality while outperforming them in sampling time by two orders of magnitude. In modeling binary images, LSGM achieves state-of-the-art likelihood on the binarized OMNIGLOT dataset.
| null |
Deep Conditional Gaussian Mixture Model for Constrained Clustering
|
https://papers.nips.cc/paper_files/paper/2021/hash/5dd9db5e033da9c6fb5ba83c7a7ebea9-Abstract.html
|
Laura Manduchi, Kieran Chin-Cheong, Holger Michel, Sven Wellmann, Julia Vogt
|
https://papers.nips.cc/paper_files/paper/2021/hash/5dd9db5e033da9c6fb5ba83c7a7ebea9-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12487-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5dd9db5e033da9c6fb5ba83c7a7ebea9-Paper.pdf
|
https://openreview.net/forum?id=Blq2djlaP9U
| null |
Constrained clustering has gained significant attention in the field of machine learning as it can leverage prior information on a growing amount of only partially labeled data. Following recent advances in deep generative models, we propose a novel framework for constrained clustering that is intuitive, interpretable, and can be trained efficiently in the framework of stochastic gradient variational inference. By explicitly integrating domain knowledge in the form of probabilistic relations, our proposed model (DC-GMM) uncovers the underlying distribution of data conditioned on prior clustering preferences, expressed as \textit{pairwise constraints}. These constraints guide the clustering process towards a desirable partition of the data by indicating which samples should or should not belong to the same cluster. We provide extensive experiments to demonstrate that DC-GMM shows superior clustering performance and robustness compared to state-of-the-art deep constrained clustering methods on a wide range of data sets. We further demonstrate the usefulness of our approach on two challenging real-world applications.
| null |
Bootstrap Your Object Detector via Mixed Training
|
https://papers.nips.cc/paper_files/paper/2021/hash/5e15fb59326e7a9c3d6558ca74621683-Abstract.html
|
Mengde Xu, Zheng Zhang, Fangyun Wei, Yutong Lin, Yue Cao, Stephen Lin, Han Hu, Xiang Bai
|
https://papers.nips.cc/paper_files/paper/2021/hash/5e15fb59326e7a9c3d6558ca74621683-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12488-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5e15fb59326e7a9c3d6558ca74621683-Paper.pdf
|
https://openreview.net/forum?id=B9yXBaZDUxp
| null |
We introduce MixTraining, a new training paradigm for object detection that can improve the performance of existing detectors for free. MixTraining enhances data augmentation by utilizing augmentations of different strengths while excluding the strong augmentations of certain training samples that may be detrimental to training. In addition, it addresses localization noise and missing labels in human annotations by incorporating pseudo boxes that can compensate for these errors. Both of these MixTraining capabilities are made possible through bootstrapping on the detector, which can be used to predict the difficulty of training on a strong augmentation, as well as to generate reliable pseudo boxes thanks to the robustness of neural networks to labeling error. MixTraining is found to bring consistent improvements across various detectors on the COCO dataset. In particular, the performance of Faster R-CNN~\cite{ren2015faster} with a ResNet-50~\cite{he2016deep} backbone is improved from 41.7 mAP to 44.0 mAP, and the accuracy of Cascade-RCNN~\cite{cai2018cascade} with a Swin-Small~\cite{liu2021swin} backbone is raised from 50.9 mAP to 52.8 mAP.
| null |
Tensor decompositions of higher-order correlations by nonlinear Hebbian plasticity
|
https://papers.nips.cc/paper_files/paper/2021/hash/5e34a2b4c23f4de585fb09a7f546f527-Abstract.html
|
Gabriel Ocker, Michael Buice
|
https://papers.nips.cc/paper_files/paper/2021/hash/5e34a2b4c23f4de585fb09a7f546f527-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12489-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5e34a2b4c23f4de585fb09a7f546f527-Paper.pdf
|
https://openreview.net/forum?id=8v_4EVifBqX
|
https://papers.nips.cc/paper_files/paper/2021/file/5e34a2b4c23f4de585fb09a7f546f527-Supplemental.pdf
|
Biological synaptic plasticity exhibits nonlinearities that are not accounted for by classic Hebbian learning rules. Here, we introduce a simple family of generalized nonlinear Hebbian learning rules. We study the computations implemented by their dynamics in the simple setting of a neuron receiving feedforward inputs. These nonlinear Hebbian rules allow a neuron to learn tensor decompositions of its higher-order input correlations. The particular input correlation decomposed and the form of the decomposition depend on the location of nonlinearities in the plasticity rule. For simple, biologically motivated parameters, the neuron learns eigenvectors of higher-order input correlation tensors. We prove that tensor eigenvectors are attractors and determine their basins of attraction. We calculate the volume of those basins, showing that the dominant eigenvector has the largest basin of attraction. We then study arbitrary learning rules and find that any learning rule that admits a finite Taylor expansion into the neural input and output also has stable equilibria at generalized eigenvectors of higher-order input correlation tensors. Nonlinearities in synaptic plasticity thus allow a neuron to encode higher-order input correlations in a simple fashion.
| null |
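A hedged sketch of the kind of generalized nonlinear Hebbian dynamics described in the entry above; the specific nonlinearity, learning rate, and toy data below are illustrative stand-ins rather than the paper's rule family or parameters. With a linear nonlinearity the update reduces to an Oja-like rule that converges to the top eigenvector of the input covariance.

import numpy as np

def nonlinear_hebbian(inputs, f=lambda y: y**3, eta=1e-3, seed=0):
    """Illustrative nonlinear Hebbian update for one feedforward neuron.

    w <- w + eta * f(w . x) * x, followed by renormalization to unit length.
    f(y) = y gives an Oja-like rule; the cubic default is a toy stand-in for
    the nonlinearities discussed in the paper.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(size=inputs.shape[1])
    w /= np.linalg.norm(w)
    for x in inputs:
        y = w @ x
        w = w + eta * f(y) * x
        w /= np.linalg.norm(w)
    return w

# Toy data with one dominant input direction.
rng = np.random.default_rng(1)
u = np.array([1.0, 0.0, 0.0])
X = rng.normal(size=(20000, 3)) + 2.0 * rng.normal(size=(20000, 1)) * u
print(nonlinear_hebbian(X))  # final weight vector, close to +/- u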
Online Adaptation to Label Distribution Shift
|
https://papers.nips.cc/paper_files/paper/2021/hash/5e6bd7a6970cd4325e587f02667f7f73-Abstract.html
|
Ruihan Wu, Chuan Guo, Yi Su, Kilian Q. Weinberger
|
https://papers.nips.cc/paper_files/paper/2021/hash/5e6bd7a6970cd4325e587f02667f7f73-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12490-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5e6bd7a6970cd4325e587f02667f7f73-Paper.pdf
|
https://openreview.net/forum?id=6h14cMLgb5q
| null |
Machine learning models often encounter distribution shifts when deployed in the real world. In this paper, we focus on adaptation to label distribution shift in the online setting, where the test-time label distribution is continually changing and the model must dynamically adapt to it without observing the true label. This setting is common in many real world scenarios such as medical diagnosis, where disease prevalences can vary substantially at different times of the year. Leveraging a novel analysis, we show that the lack of true label does not hinder estimation of the expected test loss, which enables the reduction of online label shift adaptation to conventional online learning. Informed by this observation, we propose adaptation algorithms inspired by classical online learning techniques such as Follow The Leader (FTL) and Online Gradient Descent (OGD) and derive their regret bounds. We empirically verify our findings under both simulated and real world label distribution shifts and show that OGD is particularly effective and robust to a variety of challenging label shift scenarios.
| null |
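A hedged sketch of the online-gradient-descent flavor of adaptation described in the entry above: a reweighting vector over label marginals is kept on the probability simplex and updated by projected gradient steps on an estimated loss. The gradient oracle, step size, and toy target below are generic illustrations; the paper's unbiased test-loss estimate without true labels is not reproduced.

import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex (Duchi et al., 2008)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def ogd_label_marginal(grad_fn, k, T, eta=0.1):
    """Projected online gradient descent on an estimated label marginal q.

    grad_fn(t, q) returns an (estimated) gradient of the round-t loss at q;
    it is left abstract because the paper's label-free loss estimate is not
    reproduced in this sketch.
    """
    q = np.full(k, 1.0 / k)
    history = [q.copy()]
    for t in range(T):
        q = project_to_simplex(q - eta * grad_fn(t, q))
        history.append(q.copy())
    return history

# Toy usage: a synthetic gradient pulling q toward a slowly drifting target marginal.
target = lambda t: project_to_simplex(np.array([1.0 + 0.01 * t, 1.0, 1.0]))
hist = ogd_label_marginal(lambda t, q: 2 * (q - target(t)), k=3, T=100)
print(hist[-1])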
One Explanation is Not Enough: Structured Attention Graphs for Image Classification
|
https://papers.nips.cc/paper_files/paper/2021/hash/5e751896e527c862bf67251a474b3819-Abstract.html
|
Vivswan Shitole, Fuxin Li, Minsuk Kahng, Prasad Tadepalli, Alan Fern
|
https://papers.nips.cc/paper_files/paper/2021/hash/5e751896e527c862bf67251a474b3819-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12491-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5e751896e527c862bf67251a474b3819-Paper.pdf
|
https://openreview.net/forum?id=ZYU8HEjL7KQ
| null |
Attention maps are popular tools for explaining the decisions of convolutional neural networks (CNNs) for image classification. Typically, for each image of interest, a single attention map is produced, which assigns weights to pixels based on their importance to the classification. We argue that a single attention map provides an incomplete understanding since there are often many other maps that explain a classification equally well. In this paper, we propose to utilize a beam search algorithm to systematically search for multiple explanations for each image. Results show that there are indeed multiple relatively localized explanations for many images. However, naively showing multiple explanations to users can be overwhelming and does not reveal their common and distinct structures. We introduce structured attention graphs (SAGs), which compactly represent sets of attention maps for an image by visualizing how different combinations of image regions impact the confidence of a classifier. An approach to computing a compact and representative SAG for visualization is proposed via diverse sampling. We conduct a user study comparing the use of SAGs to traditional attention maps for answering comparative counterfactual questions about image classifications. Our results show that the users are significantly more accurate when presented with SAGs compared to standard attention map baselines.
| null |
Integrating Expert ODEs into Neural ODEs: Pharmacology and Disease Progression
|
https://papers.nips.cc/paper_files/paper/2021/hash/5ea1649a31336092c05438df996a3e59-Abstract.html
|
Zhaozhi Qian, William Zame, Lucas Fleuren, Paul Elbers, Mihaela van der Schaar
|
https://papers.nips.cc/paper_files/paper/2021/hash/5ea1649a31336092c05438df996a3e59-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12492-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5ea1649a31336092c05438df996a3e59-Paper.pdf
|
https://openreview.net/forum?id=tDqef76wFaO
| null |
Modeling a system's temporal behaviour in reaction to external stimuli is a fundamental problem in many areas. Pure Machine Learning (ML) approaches often fail in the small sample regime and cannot provide actionable insights beyond predictions. A promising modification has been to incorporate expert domain knowledge into ML models. The application we consider is predicting the patient health status and disease progression over time, where a wealth of domain knowledge is available from pharmacology. Pharmacological models describe the dynamics of carefully-chosen medically meaningful variables in terms of systems of Ordinary Differential Equations (ODEs). However, these models only describe a limited collection of variables, and these variables are often not observable in clinical environments. To close this gap, we propose the latent hybridisation model (LHM) that integrates a system of expert-designed ODEs with machine-learned Neural ODEs to fully describe the dynamics of the system and to link the expert and latent variables to observable quantities. We evaluated LHM on synthetic data as well as real-world intensive care data of COVID-19 patients. LHM consistently outperforms previous works, especially when few training samples are available such as at the beginning of the pandemic.
| null |
Shifted Chunk Transformer for Spatio-Temporal Representational Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/5edc4f7dce28c711afc6265b4f99bf57-Abstract.html
|
Xuefan Zha, Wentao Zhu, Lv Xun, Sen Yang, Ji Liu
|
https://papers.nips.cc/paper_files/paper/2021/hash/5edc4f7dce28c711afc6265b4f99bf57-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12493-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5edc4f7dce28c711afc6265b4f99bf57-Paper.pdf
|
https://openreview.net/forum?id=fDSDkiiXHzj
|
https://papers.nips.cc/paper_files/paper/2021/file/5edc4f7dce28c711afc6265b4f99bf57-Supplemental.pdf
|
Spatio-temporal representational learning has been widely adopted in various fields such as action recognition, video object segmentation, and action anticipation. Previous spatio-temporal representational learning approaches primarily employ ConvNets or sequential models, e.g., LSTM, to learn the intra-frame and inter-frame features. Recently, Transformer models have successfully dominated the study of natural language processing (NLP), image classification, etc. However, the pure-Transformer based spatio-temporal learning can be prohibitively costly on memory and computation to extract fine-grained features from a tiny patch. To tackle the training difficulty and enhance the spatio-temporal learning, we construct a shifted chunk Transformer with pure self-attention blocks. Leveraging the recent efficient Transformer design in NLP, this shifted chunk Transformer can learn hierarchical spatio-temporal features from a local tiny patch to a global video clip. Our shifted self-attention can also effectively model complicated inter-frame variances. Furthermore, we build a clip encoder based on Transformer to model long-term temporal dependencies. We conduct thorough ablation studies to validate each component and hyper-parameters in our shifted chunk Transformer, and it outperforms previous state-of-the-art approaches on Kinetics-400, Kinetics-600, UCF101, and HMDB51.
| null |
Faster proximal algorithms for matrix optimization using Jacobi-based eigenvalue methods
|
https://papers.nips.cc/paper_files/paper/2021/hash/5ef78f63ba22e7dfb2fa44613311b932-Abstract.html
|
Hamza Fawzi, Harry Goulbourne
|
https://papers.nips.cc/paper_files/paper/2021/hash/5ef78f63ba22e7dfb2fa44613311b932-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12494-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5ef78f63ba22e7dfb2fa44613311b932-Paper.pdf
|
https://openreview.net/forum?id=dfMekDuTbNH
|
https://papers.nips.cc/paper_files/paper/2021/file/5ef78f63ba22e7dfb2fa44613311b932-Supplemental.pdf
|
We consider proximal splitting algorithms for convex optimization problems over matrices. A significant computational bottleneck in many of these algorithms is the need to compute a full eigenvalue or singular value decomposition at each iteration for the evaluation of a proximal operator. In this paper we propose to use an old and surprisingly simple method due to Jacobi to compute these eigenvalue and singular value decompositions, and we demonstrate that it can lead to substantial gains in terms of computation time compared to standard approaches. We rely on three essential properties of this method: (a) its ability to exploit an approximate decomposition as an initial point, which in the case of iterative optimization algorithms can be obtained from the previous iterate; (b) its parallel nature, which makes it a great fit for hardware accelerators such as GPUs, now common in machine learning; and (c) its simple termination criterion, which allows us to trade off accuracy with computation time. We demonstrate the efficacy of this approach on a variety of algorithms and problems, and show that, on a GPU, we can obtain 5 to 10x speed-ups in the evaluation of proximal operators compared to standard CPU or GPU linear algebra routines. Our findings are supported by new theoretical results providing guarantees on the approximation quality of proximal operators obtained using approximate eigenvalue or singular value decompositions.
| null |
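A hedged NumPy sketch of the classical cyclic Jacobi eigenvalue iteration referred to in the entry above: plane rotations repeatedly zero out off-diagonal entries of a symmetric matrix, and the iteration can be warm-started from an approximate eigenbasis such as the one from the previous proximal step. The tolerances and the warm-start interface are illustrative; the paper's GPU-parallel variant and accuracy/time trade-offs are not reproduced.

import numpy as np

def jacobi_eigh(A, V0=None, tol=1e-10, max_sweeps=30):
    """Cyclic Jacobi method for a symmetric matrix A.

    Returns (eigenvalues, eigenvectors). V0 is an optional approximate
    eigenbasis used as a warm start: the iteration runs on V0.T @ A @ V0,
    which is nearly diagonal when V0 is good, so few rotations are needed.
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    V = np.eye(n) if V0 is None else np.array(V0, dtype=float)
    if V0 is not None:
        A = V.T @ A @ V  # warm start: work in the approximate eigenbasis
    for _ in range(max_sweeps):
        off = np.sqrt(np.sum(np.tril(A, -1) ** 2))
        if off < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < tol:
                    continue
                # plane rotation chosen to zero out A[p, q]
                tau = (A[q, q] - A[p, p]) / (2.0 * A[p, q])
                t = np.sign(tau) / (abs(tau) + np.sqrt(1.0 + tau * tau)) if tau != 0 else 1.0
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = t * c
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J
                V = V @ J
    return np.diag(A), V

# Toy usage: warm-start from the exact eigenbasis of a nearby matrix.
rng = np.random.default_rng(0)
M = rng.normal(size=(6, 6))
S = M + M.T
_, V_prev = np.linalg.eigh(S)
evals, evecs = jacobi_eigh(S + 1e-3 * np.eye(6), V0=V_prev)
print(np.sort(evals))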
Decrypting Cryptic Crosswords: Semantically Complex Wordplay Puzzles as a Target for NLP
|
https://papers.nips.cc/paper_files/paper/2021/hash/5f1d3986fae10ed2994d14ecd89892d7-Abstract.html
|
Josh Rozner, Christopher Potts, Kyle Mahowald
|
https://papers.nips.cc/paper_files/paper/2021/hash/5f1d3986fae10ed2994d14ecd89892d7-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12495-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5f1d3986fae10ed2994d14ecd89892d7-Paper.pdf
|
https://openreview.net/forum?id=Ah5CMODl52
|
https://papers.nips.cc/paper_files/paper/2021/file/5f1d3986fae10ed2994d14ecd89892d7-Supplemental.pdf
|
Cryptic crosswords, the dominant crossword variety in the UK, are a promising target for advancing NLP systems that seek to process semantically complex, highly compositional language. Cryptic clues read like fluent natural language but are adversarially composed of two parts: a definition and a wordplay cipher requiring character-level manipulations. Expert humans use creative intelligence to solve cryptics, flexibly combining linguistic, world, and domain knowledge. In this paper, we make two main contributions. First, we present a dataset of cryptic clues as a challenging new benchmark for NLP systems that seek to process compositional language in more creative, human-like ways. After showing that three non-neural approaches and T5, a state-of-the-art neural language model, do not achieve good performance, we make our second main contribution: a novel curriculum approach, in which the model is first fine-tuned on related tasks such as unscrambling words. We also introduce a challenging data split, examine the meta-linguistic capabilities of subword-tokenized models, and investigate model systematicity by perturbing the wordplay part of clues, showing that T5 exhibits behavior partially consistent with human solving strategies. Although our curricular approach considerably improves on the T5 baseline, our best-performing model still fails to generalize to the extent that humans can. Thus, cryptic crosswords remain an unsolved challenge for NLP systems and a potential source of future innovation.
| null |
An Improved Analysis of Gradient Tracking for Decentralized Machine Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/5f25fbe144e4a81a1b0080b6c1032778-Abstract.html
|
Anastasiia Koloskova, Tao Lin, Sebastian U. Stich
|
https://papers.nips.cc/paper_files/paper/2021/hash/5f25fbe144e4a81a1b0080b6c1032778-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12496-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5f25fbe144e4a81a1b0080b6c1032778-Paper.pdf
|
https://openreview.net/forum?id=CmI7NqBR4Ua
|
https://papers.nips.cc/paper_files/paper/2021/file/5f25fbe144e4a81a1b0080b6c1032778-Supplemental.pdf
|
We consider decentralized machine learning over a network where the training data is distributed across $n$ agents, each of which can compute stochastic model updates on its local data. The agents' common goal is to find a model that minimizes the average of all local loss functions. While gradient tracking (GT) algorithms can overcome a key challenge, namely accounting for differences between workers' local data distributions, the known convergence rates for GT algorithms are not optimal with respect to their dependence on the mixing parameter $p$ (related to the spectral gap of the connectivity matrix). We provide a tighter analysis of the GT method in the stochastic strongly convex, convex and non-convex settings. We improve the dependency on $p$ from $\mathcal{O}(p^{-2})$ to $\mathcal{O}(p^{-1}c^{-1})$ in the noiseless case and from $\mathcal{O}(p^{-3/2})$ to $\mathcal{O}(p^{-1/2}c^{-1})$ in the general stochastic case, where $c \geq p$ is related to the negative eigenvalues of the connectivity matrix (and is a constant in most practical applications). This improvement was possible due to a new proof technique which could be of independent interest.
| null |
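A hedged sketch of the standard gradient tracking update analyzed in the entry above: each agent mixes iterates with its neighbors through a doubly stochastic gossip matrix W and maintains a tracker of the average gradient. This is a textbook adapt-then-combine form of GT on a toy quadratic; the step size, topology, and the exact variant studied in the paper may differ.

import numpy as np

def gradient_tracking(grad_fns, W, x0, eta=0.1, iters=200):
    """Decentralized gradient tracking for min_x (1/n) sum_i f_i(x).

    grad_fns: list of per-agent gradient functions.
    W: doubly stochastic mixing (gossip) matrix of the communication graph.
    x0: (n, d) array of initial local iterates.
    """
    n, d = x0.shape
    X = x0.copy()
    G = np.stack([grad_fns[i](X[i]) for i in range(n)])  # local gradients
    Y = G.copy()                                          # gradient trackers
    for _ in range(iters):
        X = W @ (X - eta * Y)                             # mix corrected iterates
        G_new = np.stack([grad_fns[i](X[i]) for i in range(n)])
        Y = W @ Y + G_new - G                             # track the average gradient
        G = G_new
    return X

# Toy usage: 4 agents on a ring, each with f_i(x) = 0.5 * ||x - b_i||^2.
n, d = 4, 3
rng = np.random.default_rng(0)
b = rng.normal(size=(n, d))
grads = [(lambda x, bi=b[i]: x - bi) for i in range(n)]
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
X = gradient_tracking(grads, W, np.zeros((n, d)))
print(np.abs(X - b.mean(axis=0)).max())  # small: agents reach consensus near the average minimizer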
Entropic Desired Dynamics for Intrinsic Control
|
https://papers.nips.cc/paper_files/paper/2021/hash/5f7f02b7e4ade23430f345f954c938c1-Abstract.html
|
Steven Hansen, Guillaume Desjardins, Kate Baumli, David Warde-Farley, Nicolas Heess, Simon Osindero, Volodymyr Mnih
|
https://papers.nips.cc/paper_files/paper/2021/hash/5f7f02b7e4ade23430f345f954c938c1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12497-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5f7f02b7e4ade23430f345f954c938c1-Paper.pdf
|
https://openreview.net/forum?id=Juk1LKbFvd
|
https://papers.nips.cc/paper_files/paper/2021/file/5f7f02b7e4ade23430f345f954c938c1-Supplemental.pdf
|
An agent might be said, informally, to have mastery of its environment when it has maximised the effective number of states it can reliably reach. In practice, this often means maximizing the number of latent codes that can be discriminated from future states under some short time horizon (e.g. \cite{eysenbach2018diversity}). By situating these latent codes in a globally consistent coordinate system, we show that agents can reliably reach more states in the long term while still optimizing a local objective. A simple instantiation of this idea, \textbf{E}ntropic \textbf{D}esired \textbf{D}ynamics for \textbf{I}ntrinsic \textbf{C}on\textbf{T}rol (EDDICT), assumes fixed additive latent dynamics, which results in tractable learning and an interpretable latent space. Compared to prior methods, EDDICT's globally consistent codes allow it to be far more exploratory, as demonstrated by improved state coverage and increased unsupervised performance on hard exploration games such as Montezuma's Revenge.
| null |
Exploring Cross-Video and Cross-Modality Signals for Weakly-Supervised Audio-Visual Video Parsing
|
https://papers.nips.cc/paper_files/paper/2021/hash/5f93f983524def3dca464469d2cf9f3e-Abstract.html
|
Yan-Bo Lin, Hung-Yu Tseng, Hsin-Ying Lee, Yen-Yu Lin, Ming-Hsuan Yang
|
https://papers.nips.cc/paper_files/paper/2021/hash/5f93f983524def3dca464469d2cf9f3e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12498-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5f93f983524def3dca464469d2cf9f3e-Paper.pdf
|
https://openreview.net/forum?id=V5V1vGrI2z
| null |
The audio-visual video parsing task aims to temporally parse a video into audio or visual event categories. However, it is labor intensive to temporally annotate audio and visual events and thus hampers the learning of a parsing model. To this end, we propose to explore additional cross-video and cross-modality supervisory signals to facilitate weakly-supervised audio-visual video parsing. The proposed method exploits both the common and diverse event semantics across videos to identify audio or visual events. In addition, our method explores event co-occurrence across audio, visual, and audio-visual streams. We leverage the explored cross-modality co-occurrence to localize segments of target events while excluding irrelevant ones. The discovered supervisory signals across different videos and modalities can greatly facilitate the training with only video-level annotations. Quantitative and qualitative results demonstrate that the proposed method performs favorably against existing methods on weakly-supervised audio-visual video parsing.
| null |
Littlestone Classes are Privately Online Learnable
|
https://papers.nips.cc/paper_files/paper/2021/hash/5fbb4eb0e7c2cedf731ec7c18e344141-Abstract.html
|
Noah Golowich, Roi Livni
|
https://papers.nips.cc/paper_files/paper/2021/hash/5fbb4eb0e7c2cedf731ec7c18e344141-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12499-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5fbb4eb0e7c2cedf731ec7c18e344141-Paper.pdf
|
https://openreview.net/forum?id=4bKbEP9b65v
|
https://papers.nips.cc/paper_files/paper/2021/file/5fbb4eb0e7c2cedf731ec7c18e344141-Supplemental.pdf
|
We consider the problem of online classification under a privacy constraint. In this setting a learner observes sequentially a stream of labelled examples $(x_t, y_t)$, for $1 \leq t \leq T$, and returns at each iteration $t$ a hypothesis $h_t$ which is used to predict the label of each new example $x_t$. The learner's performance is measured by her regret against a known hypothesis class $\mathcal{H}$. We require that the algorithm satisfies the following privacy constraint: the sequence $h_1, \ldots, h_T$ of hypotheses output by the algorithm needs to be an $(\epsilon, \delta)$-differentially private function of the whole input sequence $(x_1, y_1), \ldots, (x_T, y_T)$. We provide the first non-trivial regret bound for the realizable setting. Specifically, we show that if the class $\mathcal{H}$ has constant Littlestone dimension then, given an oblivious sequence of labelled examples, there is a private learner that makes in expectation at most $O(\log T)$ mistakes -- comparable to the optimal mistake bound in the non-private case, up to a logarithmic factor. Moreover, for general values of the Littlestone dimension $d$, the same mistake bound holds but with a doubly-exponential in $d$ factor. A recent line of work has demonstrated a strong connection between classes that are online learnable and those that are differentially-private learnable. Our results strengthen this connection and show that an online learning algorithm can in fact be directly privatized (in the realizable setting). We also discuss an adaptive setting and provide a sublinear regret bound of $O(\sqrt{T})$.
| null |
Dual Parameterization of Sparse Variational Gaussian Processes
|
https://papers.nips.cc/paper_files/paper/2021/hash/5fcc629edc0cfa360016263112fe8058-Abstract.html
|
Vincent ADAM, Paul Chang, Mohammad Emtiyaz Khan, Arno Solin
|
https://papers.nips.cc/paper_files/paper/2021/hash/5fcc629edc0cfa360016263112fe8058-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12500-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5fcc629edc0cfa360016263112fe8058-Paper.pdf
|
https://openreview.net/forum?id=b-88mXTMg4J
|
https://papers.nips.cc/paper_files/paper/2021/file/5fcc629edc0cfa360016263112fe8058-Supplemental.pdf
|
Sparse variational Gaussian process (SVGP) methods are a common choice for non-conjugate Gaussian process inference because of their computational benefits. In this paper, we improve their computational efficiency by using a dual parameterization where each data example is assigned dual parameters, similarly to site parameters used in expectation propagation. Our dual parameterization speeds up inference using natural gradient descent, and provides a tighter evidence lower bound for hyperparameter learning. The approach has the same memory cost as the current SVGP methods, but it is faster and more accurate.
| null |
Learning to dehaze with polarization
|
https://papers.nips.cc/paper_files/paper/2021/hash/5fd0b37cd7dbbb00f97ba6ce92bf5add-Abstract.html
|
Chu Zhou, Minggui Teng, Yufei Han, Chao Xu, Boxin Shi
|
https://papers.nips.cc/paper_files/paper/2021/hash/5fd0b37cd7dbbb00f97ba6ce92bf5add-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12501-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5fd0b37cd7dbbb00f97ba6ce92bf5add-Paper.pdf
|
https://openreview.net/forum?id=Ua9Vi0QqwD4
|
https://papers.nips.cc/paper_files/paper/2021/file/5fd0b37cd7dbbb00f97ba6ce92bf5add-Supplemental.pdf
|
Haze, a common kind of bad weather caused by atmospheric scattering, decreases the visibility of scenes and degrades the performance of computer vision algorithms. Single-image dehazing methods have shown their effectiveness in a large variety of scenes; however, they are based on handcrafted priors or learned features, which do not generalize well to real-world images. Polarization information can be used to relieve the ill-posedness of dehazing; however, real-world images are still challenging since existing polarization-based methods usually assume that the transmitted light is not significantly polarized, and they require specific clues to estimate the necessary physical parameters. In this paper, we propose a generalized physical formation model of hazy images and a robust polarization-based dehazing pipeline without the above assumption or requirement, along with a neural network tailored to the pipeline. Experimental results show that our approach achieves state-of-the-art performance on both synthetic data and real-world hazy images.
| null |
Conservative Data Sharing for Multi-Task Offline Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2021/hash/5fd2c06f558321eff612bbbe455f6fbd-Abstract.html
|
Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Sergey Levine, Chelsea Finn
|
https://papers.nips.cc/paper_files/paper/2021/hash/5fd2c06f558321eff612bbbe455f6fbd-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12502-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5fd2c06f558321eff612bbbe455f6fbd-Paper.pdf
|
https://openreview.net/forum?id=YxxzNLfXBz
|
https://papers.nips.cc/paper_files/paper/2021/file/5fd2c06f558321eff612bbbe455f6fbd-Supplemental.zip
|
Offline reinforcement learning (RL) algorithms have shown promising results in domains where abundant pre-collected data is available. However, prior methods focus on solving individual problems from scratch with an offline dataset without considering how an offline RL agent can acquire multiple skills. We argue that a natural use case of offline RL is in settings where we can pool large amounts of data collected in various scenarios for solving different tasks, and utilize all of this data to learn behaviors for all the tasks more effectively rather than training each one in isolation. However, sharing data across all tasks in multi-task offline RL performs surprisingly poorly in practice. Through empirical analysis, we find that sharing data can actually exacerbate the distributional shift between the learned policy and the dataset, which in turn can lead to divergence of the learned policy and poor performance. To address this challenge, we develop a simple technique for data sharing in multi-task offline RL that routes data based on the improvement over the task-specific data. We call this approach conservative data sharing (CDS), and it can be applied with multiple single-task offline RL methods. On a range of challenging multi-task locomotion, navigation, and vision-based robotic manipulation problems, CDS achieves the best or comparable performance compared to prior offline multi-task RL methods and previous data sharing approaches.
| null |
Universal Rate-Distortion-Perception Representations for Lossy Compression
|
https://papers.nips.cc/paper_files/paper/2021/hash/5fde40544cff0001484ecae2466ce96e-Abstract.html
|
George Zhang, Jingjing Qian, Jun Chen, Ashish Khisti
|
https://papers.nips.cc/paper_files/paper/2021/hash/5fde40544cff0001484ecae2466ce96e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12503-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5fde40544cff0001484ecae2466ce96e-Paper.pdf
|
https://openreview.net/forum?id=_wdgJCH-Jf
|
https://papers.nips.cc/paper_files/paper/2021/file/5fde40544cff0001484ecae2466ce96e-Supplemental.zip
|
In the context of lossy compression, Blau \& Michaeli (2019) adopt a mathematical notion of perceptual quality and define the information rate-distortion-perception function, generalizing the classical rate-distortion tradeoff. We consider the notion of universal representations in which one may fix an encoder and vary the decoder to achieve any point within a collection of distortion and perception constraints. We prove that the corresponding information-theoretic universal rate-distortion-perception function is operationally achievable in an approximate sense. Under MSE distortion, we show that the entire distortion-perception tradeoff of a Gaussian source can be achieved by a single encoder of the same rate asymptotically. We then characterize the achievable distortion-perception region for a fixed representation in the case of arbitrary distributions, and identify conditions under which the aforementioned results continue to hold approximately. This motivates the study of practical constructions that are approximately universal across the RDP tradeoff, thereby alleviating the need to design a new encoder for each objective. We provide experimental results on MNIST and SVHN suggesting that on image compression tasks, the operational tradeoffs achieved by machine learning models with a fixed encoder suffer only a small penalty when compared to their variable encoder counterparts.
| null |
What’s a good imputation to predict with missing values?
|
https://papers.nips.cc/paper_files/paper/2021/hash/5fe8fdc79ce292c39c5f209d734b7206-Abstract.html
|
Marine Le Morvan, Julie Josse, Erwan Scornet, Gael Varoquaux
|
https://papers.nips.cc/paper_files/paper/2021/hash/5fe8fdc79ce292c39c5f209d734b7206-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12504-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5fe8fdc79ce292c39c5f209d734b7206-Paper.pdf
|
https://openreview.net/forum?id=usxt30HpW66
|
https://papers.nips.cc/paper_files/paper/2021/file/5fe8fdc79ce292c39c5f209d734b7206-Supplemental.pdf
|
How to learn a good predictor on data with missing values? Most efforts focus on first imputing as well as possible and second learning on the completed data to predict the outcome. Yet, this widespread practice has no theoretical grounding. Here we show that for almost all imputation functions, an impute-then-regress procedure with a powerful learner is Bayes optimal. This result holds for all missing-values mechanisms, in contrast with the classic statistical results that require missing-at-random settings to use imputation in probabilistic modeling. Moreover, it implies that perfect conditional imputation is not needed for good prediction asymptotically. In fact, we show that on perfectly imputed data the best regression function will generally be discontinuous, which makes it hard to learn. Crafting instead the imputation so as to leave the regression function unchanged simply shifts the problem to learning discontinuous imputations. Rather, we suggest that it is easier to learn imputation and regression jointly. We propose such a procedure, adapting NeuMiss, a neural network capturing the conditional links across observed and unobserved variables whatever the missing-value pattern. Our experiments confirm that joint imputation and regression through NeuMiss is better than various two step procedures in a finite-sample regime.
| null |
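A hedged sketch of the two-step impute-then-regress baseline discussed in the entry above, built from standard scikit-learn components; the constant-imputation choice, the gradient-boosting learner, and the synthetic missingness are illustrative, and the jointly learned NeuMiss-style architecture the paper advocates is not reproduced here.

import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy data: linear outcome, values missing completely at random.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=2000)
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.3] = np.nan

X_tr, X_te, y_tr, y_te = train_test_split(X_missing, y, random_state=0)

# Impute-then-regress: constant imputation followed by a powerful learner.
# add_indicator=True appends the missingness mask as extra features.
model = make_pipeline(
    SimpleImputer(strategy="constant", fill_value=0.0, add_indicator=True),
    HistGradientBoostingRegressor(random_state=0),
)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))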
Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification
|
https://papers.nips.cc/paper_files/paper/2021/hash/5ffaa9f5182c2a36843f438bb1fdbdea-Abstract.html
|
Ben Eysenbach, Sergey Levine, Russ R. Salakhutdinov
|
https://papers.nips.cc/paper_files/paper/2021/hash/5ffaa9f5182c2a36843f438bb1fdbdea-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12505-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/5ffaa9f5182c2a36843f438bb1fdbdea-Paper.pdf
|
https://openreview.net/forum?id=VXeoK3fJZhW
|
https://papers.nips.cc/paper_files/paper/2021/file/5ffaa9f5182c2a36843f438bb1fdbdea-Supplemental.pdf
|
Reinforcement learning (RL) algorithms assume that users specify tasks by manually writing down a reward function. However, this process can be laborious and demands considerable technical expertise. Can we devise RL algorithms that instead enable users to specify tasks simply by providing examples of successful outcomes? In this paper, we derive a control algorithm that maximizes the future probability of these successful outcome examples. Prior work has approached similar problems with a two-stage process, first learning a reward function and then optimizing this reward function using another reinforcement learning algorithm. In contrast, our method directly learns a value function from transitions and successful outcomes, without learning this intermediate reward function. Our method therefore requires fewer hyperparameters to tune and lines of code to debug. We show that our method satisfies a new data-driven Bellman equation, where examples take the place of the typical reward function term. Experiments show that our approach outperforms prior methods that learn explicit reward functions.
| null |
Hierarchical Skills for Efficient Exploration
|
https://papers.nips.cc/paper_files/paper/2021/hash/60106888f8977b71e1f15db7bc9a88d1-Abstract.html
|
Jonas Gehring, Gabriel Synnaeve, Andreas Krause, Nicolas Usunier
|
https://papers.nips.cc/paper_files/paper/2021/hash/60106888f8977b71e1f15db7bc9a88d1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12506-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/60106888f8977b71e1f15db7bc9a88d1-Paper.pdf
|
https://openreview.net/forum?id=NbaEmFm2mUW
|
https://papers.nips.cc/paper_files/paper/2021/file/60106888f8977b71e1f15db7bc9a88d1-Supplemental.pdf
|
In reinforcement learning, pre-trained low-level skills have the potential to greatly facilitate exploration. However, prior knowledge of the downstream task is required to strike the right balance between generality (fine-grained control) and specificity (faster learning) in skill design. In previous work on continuous control, the sensitivity of methods to this trade-off has not been addressed explicitly, as locomotion provides a suitable prior for navigation tasks, which have been of foremost interest. In this work, we analyze this trade-off for low-level policy pre-training with a new benchmark suite of diverse, sparse-reward tasks for bipedal robots. We alleviate the need for prior knowledge by proposing a hierarchical skill learning framework that acquires skills of varying complexity in an unsupervised manner. For utilization on downstream tasks, we present a three-layered hierarchical learning algorithm to automatically trade off between general and specific skills as required by the respective task. In our experiments, we show that our approach performs this trade-off effectively and achieves better results than current state-of-the-art methods for end-to-end hierarchical reinforcement learning and unsupervised skill discovery.
| null |
Evidential Softmax for Sparse Multimodal Distributions in Deep Generative Models
|
https://papers.nips.cc/paper_files/paper/2021/hash/60243f9b1ac2dba11ff8131c8f4431e0-Abstract.html
|
Phil Chen, Mikhal Itkina, Ransalu Senanayake, Mykel J Kochenderfer
|
https://papers.nips.cc/paper_files/paper/2021/hash/60243f9b1ac2dba11ff8131c8f4431e0-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12507-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/60243f9b1ac2dba11ff8131c8f4431e0-Paper.pdf
|
https://openreview.net/forum?id=rqjfa49ODLE
|
https://papers.nips.cc/paper_files/paper/2021/file/60243f9b1ac2dba11ff8131c8f4431e0-Supplemental.pdf
|
Many applications of generative models rely on the marginalization of their high-dimensional output probability distributions. Normalization functions that yield sparse probability distributions can make exact marginalization more computationally tractable. However, sparse normalization functions usually require alternative loss functions for training since the log-likelihood is undefined for sparse probability distributions. Furthermore, many sparse normalization functions often collapse the multimodality of distributions. In this work, we present ev-softmax, a sparse normalization function that preserves the multimodality of probability distributions. We derive its properties, including its gradient in closed-form, and introduce a continuous family of approximations to ev-softmax that have full support and can be trained with probabilistic loss functions such as negative log-likelihood and Kullback-Leibler divergence. We evaluate our method on a variety of generative models, including variational autoencoders and auto-regressive architectures. Our method outperforms existing dense and sparse normalization techniques in distributional accuracy. We demonstrate that ev-softmax successfully reduces the dimensionality of probability distributions while maintaining multimodality.
| null |
Submodular + Concave
|
https://papers.nips.cc/paper_files/paper/2021/hash/602443a3d6907117d8b4a308844e963e-Abstract.html
|
Siddharth Mitra, Moran Feldman, Amin Karbasi
|
https://papers.nips.cc/paper_files/paper/2021/hash/602443a3d6907117d8b4a308844e963e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12508-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/602443a3d6907117d8b4a308844e963e-Paper.pdf
|
https://openreview.net/forum?id=OQLCPvYnMOv
|
https://papers.nips.cc/paper_files/paper/2021/file/602443a3d6907117d8b4a308844e963e-Supplemental.zip
|
It has been well established that first-order optimization methods can converge to the maximal objective value of concave functions and provide constant factor approximation guarantees for (non-convex/non-concave) continuous submodular functions. In this work, we initiate the study of the maximization of functions of the form $F(x) = G(x) + C(x)$ over a solvable convex body $P$, where $G$ is a smooth DR-submodular function and $C$ is a smooth concave function. This class of functions is a strict extension of both concave and continuous DR-submodular functions for which no theoretical guarantee is known. We provide a suite of Frank-Wolfe style algorithms, which, depending on the nature of the objective function (i.e., if $G$ and $C$ are monotone or not, and non-negative or not) and on the nature of the set $P$ (i.e., whether it is downward closed or not), provide $1-1/e$, $1/e$, or $1/2$ approximation guarantees. We then use our algorithms to get a framework to smoothly interpolate between choosing a diverse set of elements from a given ground set (corresponding to the mode of a determinantal point process) and choosing a clustered set of elements (corresponding to the maxima of a suitable concave function). Additionally, we apply our algorithms to various functions in the above class (DR-submodular + concave) in both constrained and unconstrained settings, and show that our algorithms consistently outperform natural baselines.
| null |
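A hedged sketch of the Frank-Wolfe template on which the algorithms in the entry above are built: each iteration calls a linear maximization oracle over the feasible body and moves toward its output. This is the vanilla variant on a box constraint with a toy concave objective; the paper's DR-submodular plus concave variants use different step schedules and carry the stated approximation guarantees.

import numpy as np

def frank_wolfe(grad_fn, lmo, x0, iters=100):
    """Vanilla Frank-Wolfe ascent: x_{t+1} = x_t + gamma_t * (lmo(grad) - x_t).

    grad_fn: gradient of the smooth objective.
    lmo: linear maximization oracle returning argmax_{v in P} <v, g>.
    """
    x = x0.copy()
    for t in range(iters):
        g = grad_fn(x)
        v = lmo(g)
        gamma = 2.0 / (t + 2.0)  # standard step-size schedule
        x = x + gamma * (v - x)
    return x

# Toy usage: maximize a concave quadratic over the box [0, 1]^d.
d = 5
target = np.linspace(0.2, 0.9, d)
grad = lambda x: -(x - target)             # gradient of -0.5 * ||x - target||^2
lmo_box = lambda g: (g > 0).astype(float)  # vertex of [0, 1]^d maximizing <v, g>
x_star = frank_wolfe(grad, lmo_box, np.zeros(d))
print(x_star)                              # approaches `target`, which lies inside the box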
DeepGEM: Generalized Expectation-Maximization for Blind Inversion
|
https://papers.nips.cc/paper_files/paper/2021/hash/606c90a06173d69682feb83037a68fec-Abstract.html
|
Angela Gao, Jorge Castellanos, Yisong Yue, Zachary Ross, Katherine Bouman
|
https://papers.nips.cc/paper_files/paper/2021/hash/606c90a06173d69682feb83037a68fec-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12509-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/606c90a06173d69682feb83037a68fec-Paper.pdf
|
https://openreview.net/forum?id=GERI2kZ84V
|
https://papers.nips.cc/paper_files/paper/2021/file/606c90a06173d69682feb83037a68fec-Supplemental.pdf
|
Typically, inversion algorithms assume that a forward model, which relates a source to its resulting measurements, is known and fixed. Using collected indirect measurements and the forward model, the goal becomes to recover the source. When the forward model is unknown, or imperfect, artifacts due to model mismatch occur in the recovery of the source. In this paper, we study the problem of blind inversion: solving an inverse problem with unknown or imperfect knowledge of the forward model parameters. We propose DeepGEM, a variational Expectation-Maximization (EM) framework that can be used to solve for the unknown parameters of the forward model in an unsupervised manner. DeepGEM makes use of a normalizing flow generative network to efficiently capture complex posterior distributions, which leads to more accurate evaluation of the source's posterior distribution used in EM. We showcase the effectiveness of our DeepGEM approach by achieving strong performance on the challenging problem of blind seismic tomography, where we significantly outperform the standard method used in seismology. We also demonstrate the generality of DeepGEM by applying it to a simple case of blind deconvolution.
| null |
Learning to Generate Visual Questions with Noisy Supervision
|
https://papers.nips.cc/paper_files/paper/2021/hash/60792d855cd8a912a97711f91a1f155c-Abstract.html
|
Shen Kai, Lingfei Wu, Siliang Tang, Yueting Zhuang, zhen he, Zhuoye Ding, Yun Xiao, Bo Long
|
https://papers.nips.cc/paper_files/paper/2021/hash/60792d855cd8a912a97711f91a1f155c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12510-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/60792d855cd8a912a97711f91a1f155c-Paper.pdf
|
https://openreview.net/forum?id=LMotP3zsq_d
| null |
The task of visual question generation (VQG) aims to generate human-like natural questions from an image and potentially other side information (e.g., answer type or the answer itself). Existing works often suffer from the severe one-image-to-many-questions mapping problem, which generates uninformative and non-referential questions. Recent work has demonstrated that by leveraging double visual and answer hints, a model can faithfully generate much better quality questions. However, visual hints are not naturally available. Although a simple rule-based similarity matching method has been proposed to obtain candidate visual hints, in practice these hints can be very noisy and thus restrict the quality of generated questions. In this paper, we present a novel learning approach for double-hints based VQG, which can be cast as a weakly supervised learning problem with noise. The key rationale is that the salient visual regions of interest can be viewed as a constraint to improve the generation procedure for producing high-quality questions. As a result, given the predicted salient visual regions of interest, we can focus on estimating the probability of being ground-truth questions, which in turn implicitly measures the quality of predicted visual hints. Experimental results on two benchmark datasets show that our proposed method outperforms the state-of-the-art approaches by a large margin on a variety of metrics, including both automatic machine metrics and human evaluation.
| null |
Pure Exploration in Kernel and Neural Bandits
|
https://papers.nips.cc/paper_files/paper/2021/hash/6084e82a08cb979cf75ae28aed37ecd4-Abstract.html
|
Yinglun Zhu, Dongruo Zhou, Ruoxi Jiang, Quanquan Gu, Rebecca Willett, Robert Nowak
|
https://papers.nips.cc/paper_files/paper/2021/hash/6084e82a08cb979cf75ae28aed37ecd4-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12511-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6084e82a08cb979cf75ae28aed37ecd4-Paper.pdf
|
https://openreview.net/forum?id=X_jSy6seRj
|
https://papers.nips.cc/paper_files/paper/2021/file/6084e82a08cb979cf75ae28aed37ecd4-Supplemental.pdf
|
We study pure exploration in bandits, where the dimension of the feature representation can be much larger than the number of arms. To overcome the curse of dimensionality, we propose to adaptively embed the feature representation of each arm into a lower-dimensional space and carefully deal with the induced model misspecifications. Our approach is conceptually very different from existing works that can either only handle low-dimensional linear bandits or passively deal with model misspecifications. We showcase the application of our approach to two pure exploration settings that were previously under-studied: (1) the reward function belongs to a possibly infinite-dimensional Reproducing Kernel Hilbert Space, and (2) the reward function is nonlinear and can be approximated by neural networks. Our main results provide sample complexity guarantees that only depend on the effective dimension of the feature spaces in the kernel or neural representations. Extensive experiments conducted on both synthetic and real-world datasets demonstrate the efficacy of our methods.
| null |
Numerical Composition of Differential Privacy
|
https://papers.nips.cc/paper_files/paper/2021/hash/6097d8f3714205740f30debe1166744e-Abstract.html
|
Sivakanth Gopi, Yin Tat Lee, Lukas Wutschitz
|
https://papers.nips.cc/paper_files/paper/2021/hash/6097d8f3714205740f30debe1166744e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12512-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6097d8f3714205740f30debe1166744e-Paper.pdf
|
https://openreview.net/forum?id=GSXEx6iYd0
|
https://papers.nips.cc/paper_files/paper/2021/file/6097d8f3714205740f30debe1166744e-Supplemental.pdf
|
We give a fast algorithm to compose privacy guarantees of differentially private (DP) algorithms to arbitrary accuracy. Our method is based on the notion of privacy loss random variables to quantify the privacy loss of DP algorithms. The running time and memory needed for our algorithm to approximate the privacy curve of a DP algorithm composed with itself $k$ times is $\tilde{O}(\sqrt{k})$. This improves over the best prior method by Koskela et al. (2020) which requires $\tilde{\Omega}(k^{1.5})$ running time. We demonstrate the utility of our algorithm by accurately computing the privacy loss of DP-SGD algorithm of Abadi et al. (2016) and showing that our algorithm speeds up the privacy computations by a few orders of magnitude compared to prior work, while maintaining similar accuracy.
| null |
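A hedged numerical sketch of the privacy-loss-random-variable viewpoint in the entry above: when the privacy loss of one mechanism is discretized on a uniform grid, self-composition corresponds to convolving that distribution with itself, and delta(eps) can be read off as E[(1 - exp(eps - L))_+]. The Gaussian-mechanism grid, truncation, and FFT convolution below are crude illustrative choices, not the paper's accuracy-controlled algorithm.

import numpy as np
from scipy.signal import fftconvolve
from scipy.stats import norm

def gaussian_pld(sigma, dl=1e-3, tail=10.0):
    """Crude discretization of the privacy loss distribution of the Gaussian
    mechanism with sensitivity 1 and noise sigma: L ~ N(mu, 2*mu), mu = 1/(2*sigma^2)."""
    mu = 1.0 / (2.0 * sigma ** 2)
    std = np.sqrt(2.0 * mu)
    grid = np.arange(mu - tail * std, mu + tail * std, dl)
    pmf = norm.pdf(grid, loc=mu, scale=std) * dl
    return grid, pmf / pmf.sum()

def delta_after_composition(sigma, k, eps, dl=1e-3):
    """Approximate delta(eps) of k-fold self-composition: convolve the discretized
    privacy loss distribution k times, then take E[(1 - exp(eps - L))_+]."""
    grid, pmf = gaussian_pld(sigma, dl)
    comp = pmf.copy()
    for _ in range(k - 1):
        comp = np.maximum(fftconvolve(comp, pmf), 0.0)  # composition = convolution of losses
    losses = k * grid[0] + dl * np.arange(comp.size)    # support after k convolutions
    return float(np.sum(comp * np.maximum(0.0, 1.0 - np.exp(eps - losses))))

# Toy usage: 30-fold composition of a Gaussian mechanism with sigma = 5.
print(delta_after_composition(sigma=5.0, k=30, eps=1.0))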
Coresets for Classification – Simplified and Strengthened
|
https://papers.nips.cc/paper_files/paper/2021/hash/6098ed616e715171f0dabad60a8e5197-Abstract.html
|
Tung Mai, Cameron Musco, Anup Rao
|
https://papers.nips.cc/paper_files/paper/2021/hash/6098ed616e715171f0dabad60a8e5197-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12513-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6098ed616e715171f0dabad60a8e5197-Paper.pdf
|
https://openreview.net/forum?id=qz0MLeaTP1C
|
https://papers.nips.cc/paper_files/paper/2021/file/6098ed616e715171f0dabad60a8e5197-Supplemental.pdf
|
We give relative error coresets for training linear classifiers with a broad class of loss functions, including the logistic loss and hinge loss. Our construction achieves $(1\pm \epsilon)$ relative error with $\tilde O(d \cdot \mu_y(X)^2/\epsilon^2)$ points, where $\mu_y(X)$ is a natural complexity measure of the data matrix $X \in \mathbb{R}^{n \times d}$ and label vector $y \in \{-1,1\}^n$, introduced by Munteanu et al. (2018). Our result is based on subsampling data points with probabilities proportional to their $\ell_1$ Lewis weights. It significantly improves on existing theoretical bounds and performs well in practice, outperforming uniform subsampling along with other importance sampling methods. Our sampling distribution does not depend on the labels, so it can be used for active learning. It also does not depend on the specific loss function, so a single coreset can be used in multiple training scenarios.
| null |
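A hedged sketch of the importance-subsampling template in the entry above: rows are sampled with probability proportional to per-point scores and reweighted by inverse sampling probabilities so that the weighted subsample estimates the full loss in expectation. For simplicity the scores below are l2 leverage scores from a QR factorization, used here only as a stand-in for the l1 Lewis weights of the paper.

import numpy as np

def leverage_scores(X):
    """l2 leverage scores via a thin QR factorization (stand-in for l1 Lewis weights)."""
    Q, _ = np.linalg.qr(X)
    return np.sum(Q ** 2, axis=1)

def importance_coreset(X, y, m, seed=0):
    """Sample m points with probability proportional to their scores and return
    (X_sub, y_sub, weights) so that weighted subsample losses estimate the full loss."""
    rng = np.random.default_rng(seed)
    scores = leverage_scores(X)
    p = scores / scores.sum()
    idx = rng.choice(len(X), size=m, replace=True, p=p)
    weights = 1.0 / (m * p[idx])
    return X[idx], y[idx], weights

# Toy usage: logistic loss on the coreset vs. the full data at a fixed parameter.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 10))
w = rng.normal(size=10)
y = np.where(X @ w + 0.5 * rng.normal(size=5000) > 0, 1.0, -1.0)
theta = rng.normal(size=10)
full = np.mean(np.log1p(np.exp(-y * (X @ theta))))
Xs, ys, wts = importance_coreset(X, y, m=500)
sub = np.sum(wts * np.log1p(np.exp(-ys * (Xs @ theta)))) / len(X)
print(full, sub)  # the two mean losses should be close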
Sequential Algorithms for Testing Closeness of Distributions
|
https://papers.nips.cc/paper_files/paper/2021/hash/609c5e5089a9aa967232aba2a4d03114-Abstract.html
|
Aadil Oufkir, Omar Fawzi, Nicolas Flammarion, Aurélien Garivier
|
https://papers.nips.cc/paper_files/paper/2021/hash/609c5e5089a9aa967232aba2a4d03114-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12514-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/609c5e5089a9aa967232aba2a4d03114-Paper.pdf
|
https://openreview.net/forum?id=UDe_F-4EeHd
|
https://papers.nips.cc/paper_files/paper/2021/file/609c5e5089a9aa967232aba2a4d03114-Supplemental.pdf
|
What advantage do sequential procedures provide over batch algorithms for testing properties of unknown distributions? Focusing on the problem of testing whether two distributions $\mathcal{D}_1$ and $\mathcal{D}_2$ on $\{1,\dots, n\}$ are equal or $\epsilon$-far, we give several answers to this question. We show that for a small alphabet size $n$, there is a sequential algorithm that outperforms any batch algorithm by a factor of at least $4$ in terms of sample complexity. For a general alphabet size $n$, we give a sequential algorithm that uses no more samples than its batch counterpart, and possibly fewer if the actual distance between $\mathcal{D}_1$ and $\mathcal{D}_2$ is larger than $\epsilon$. As a corollary, letting $\epsilon$ go to $0$, we obtain a sequential algorithm for testing closeness (with no a priori bound on the distance between $\mathcal{D}_1$ and $\mathcal{D}_2$) with a sample complexity $\tilde{\mathcal{O}}(\frac{n^{2/3}}{TV(\mathcal{D}_1, \mathcal{D}_2)^{4/3}})$: this improves over the $\tilde{\mathcal{O}}(\frac{n/\log n}{TV(\mathcal{D}_1, \mathcal{D}_2)^{2} })$ tester of [Daskalakis and Kawase 2017] and is optimal up to multiplicative constants. We also establish limitations of sequential algorithms for the problem of testing closeness: they can improve the worst case number of samples by at most a constant factor.
| null |
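The general sequential idea can be phrased as a wrapper around any batch tester: draw ever larger batches and stop as soon as a confident verdict is reached, so easy instances (large actual distance) terminate early. The skeleton below only illustrates that wrapper, assuming a hypothetical `batch_test` callable and a doubling sample schedule; it is not the authors' algorithm and makes no claim about matching their sample-complexity bounds.

```python
def sequential_closeness(sample1, sample2, eps, batch_test, delta=0.05, max_stages=20):
    """Doubling-schedule wrapper around a (hypothetical) batch closeness tester.
    sample1/sample2: callables m -> m i.i.d. samples from each distribution.
    batch_test: callable (xs, ys, eps, delta) -> "equal" | "far" | "undecided".
    The failure probability is split geometrically across stages so the
    overall error stays below delta."""
    m = 64
    for stage in range(max_stages):
        xs, ys = sample1(m), sample2(m)
        verdict = batch_test(xs, ys, eps, delta / 2 ** (stage + 1))
        if verdict in ("equal", "far"):
            return verdict, m          # batch size at the deciding stage
        m *= 2
    return "undecided", m
```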
Overlapping Spaces for Compact Graph Representations
|
https://papers.nips.cc/paper_files/paper/2021/hash/60b2149f6bafd1cc9d505496f09160ba-Abstract.html
|
Kirill Shevkunov, Liudmila Prokhorenkova
|
https://papers.nips.cc/paper_files/paper/2021/hash/60b2149f6bafd1cc9d505496f09160ba-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12515-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/60b2149f6bafd1cc9d505496f09160ba-Paper.pdf
|
https://openreview.net/forum?id=FFtcBBVIg1T
|
https://papers.nips.cc/paper_files/paper/2021/file/60b2149f6bafd1cc9d505496f09160ba-Supplemental.pdf
|
Various non-trivial spaces are becoming popular for embedding structured data such as graphs, texts, or images. Following spherical and hyperbolic spaces, more general product spaces have been proposed. However, searching for the best configuration of a product space is a resource-intensive procedure, which reduces the practical applicability of the idea. We generalize the concept of product space and introduce an overlapping space that does not have the configuration search problem. The main idea is to allow subsets of coordinates to be shared between spaces of different types (Euclidean, hyperbolic, spherical). As a result, we often need fewer coordinates to store the objects. Additionally, we propose an optimization algorithm that automatically learns the optimal configuration. Our experiments confirm that overlapping spaces outperform the competitors in graph embedding tasks with different evaluation metrics. We also perform an empirical analysis in a realistic information retrieval setup, where we compare all spaces by incorporating them into DSSM. In this case, the proposed overlapping space consistently achieves nearly optimal results without any configuration tuning. This allows for reducing training time, which can be essential in large-scale applications.
| null |
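As a rough illustration of how coordinates can be shared across geometries, the snippet below computes a combined distance in which the Euclidean, spherical, and hyperbolic parts read from index sets that are allowed to overlap. The index sets, the quadratic combination rule, and the absence of learned signature weights are all assumptions for illustration; the paper's actual construction and training procedure differ.

```python
import numpy as np

def overlapping_distance(u, v, idx_euc, idx_sph, idx_hyp):
    """Distance built from three component geometries whose coordinate index
    sets may overlap, so a single coordinate can serve several spaces.
    Coordinates selected by idx_hyp are assumed to lie inside the unit ball."""
    # Euclidean part
    d_e = np.linalg.norm(u[idx_euc] - v[idx_euc])
    # Spherical part: angle between the selected (normalized) sub-vectors
    a, b = u[idx_sph], v[idx_sph]
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    d_s = np.arccos(np.clip(cos, -1.0, 1.0))
    # Hyperbolic part: Poincare-ball distance on the selected sub-vectors
    x, y = u[idx_hyp], v[idx_hyp]
    num = 2.0 * np.sum((x - y) ** 2)
    den = (1.0 - np.sum(x ** 2)) * (1.0 - np.sum(y ** 2)) + 1e-12
    d_h = np.arccosh(1.0 + num / den)
    return np.sqrt(d_e ** 2 + d_s ** 2 + d_h ** 2)
```

Sharing coordinates this way is what lets an overlapping space represent the same objects with fewer total dimensions than a plain product space.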
Hyperparameter Tuning is All You Need for LISTA
|
https://papers.nips.cc/paper_files/paper/2021/hash/60c97bef031ec312b512c08565c1868e-Abstract.html
|
Xiaohan Chen, Jialin Liu, Zhangyang Wang, Wotao Yin
|
https://papers.nips.cc/paper_files/paper/2021/hash/60c97bef031ec312b512c08565c1868e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12516-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/60c97bef031ec312b512c08565c1868e-Paper.pdf
|
https://openreview.net/forum?id=81Erd42Wimi
|
https://papers.nips.cc/paper_files/paper/2021/file/60c97bef031ec312b512c08565c1868e-Supplemental.pdf
|
The Learned Iterative Shrinkage-Thresholding Algorithm (LISTA) introduces the concept of unrolling an iterative algorithm and training it like a neural network. It has had great success on sparse recovery. In this paper, we show that adding momentum to intermediate variables in the LISTA network achieves a better convergence rate and, in particular, the network with instance-optimal parameters is superlinearly convergent. Moreover, our new theoretical results lead to a practical approach of automatically and adaptively calculating the parameters of a LISTA network layer based on its previous layers. Perhaps most surprisingly, such an adaptive-parameter procedure reduces the training of LISTA to tuning only three hyperparameters from data: a new record in the context of recent advances in trimming down LISTA complexity. We call this new ultra-lightweight network HyperLISTA. Compared to state-of-the-art LISTA models, HyperLISTA achieves almost the same performance on seen data distributions and performs better when tested on unseen distributions (specifically, those with different sparsity levels and nonzero magnitudes). Code is available: https://github.com/VITA-Group/HyperLISTA.
| null |
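To make the momentum idea concrete, here is a minimal unrolled iteration in plain NumPy: each layer performs a gradient step on the least-squares data term, soft-thresholds, and adds a momentum term on the intermediate iterate. The per-layer step sizes, thresholds, and momentum coefficients stand in for the small set of hyperparameters mentioned in the abstract; the analytic formulas HyperLISTA uses to derive layer parameters from earlier layers are in the paper and are not reproduced here.

```python
import numpy as np

def soft_threshold(x, theta):
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_ista_momentum(A, b, gammas, thetas, betas):
    """One forward pass of an unrolled ISTA-style network with momentum on the
    intermediate variable; gammas/thetas/betas hold one value per layer."""
    x_prev = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for gamma, theta, beta in zip(gammas, thetas, betas):
        v = x + beta * (x - x_prev)        # momentum on the intermediate iterate
        grad = A.T @ (A @ v - b)           # gradient of 0.5 * ||A v - b||^2
        x_prev, x = x, soft_threshold(v - gamma * grad, theta)
    return x
```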
Foundations of Symbolic Languages for Model Interpretability
|
https://papers.nips.cc/paper_files/paper/2021/hash/60cb558c40e4f18479664069d9642d5a-Abstract.html
|
Marcelo Arenas, Daniel Báez, Pablo Barceló, Jorge Pérez, Bernardo Subercaseaux
|
https://papers.nips.cc/paper_files/paper/2021/hash/60cb558c40e4f18479664069d9642d5a-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12517-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/60cb558c40e4f18479664069d9642d5a-Paper.pdf
|
https://openreview.net/forum?id=Jyxmk4wUoQV
|
https://papers.nips.cc/paper_files/paper/2021/file/60cb558c40e4f18479664069d9642d5a-Supplemental.pdf
|
Several queries and scores have recently been proposed to explain individual predictions over ML models. Examples include queries based on “anchors”, which are parts of an instance that are sufficient to justify its classification, and “feature-perturbation” scores such as SHAP. Given the need for flexible, reliable, and easy-to-apply interpretability methods for ML models, we foresee the need for developing declarative languages to naturally specify different explainability queries. We do this in a principled way by rooting such a language in a logic called FOIL, which allows for expressing many simple but important explainability queries, and might serve as a core for more expressive interpretability languages. We study the computational complexity of FOIL queries over two classes of ML models often deemed to be easily interpretable: decision trees and more general decision diagrams. Since the number of possible inputs for an ML model is exponential in its dimension, tractability of the FOIL evaluation problem is delicate but can be achieved by either restricting the structure of the models, or the fragment of FOIL being evaluated. We also present a prototype implementation of FOIL wrapped in a high-level declarative language and perform experiments showing that such a language can be used in practice.
| null |
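One of the simplest queries such a language must support is an anchor-style "sufficient reason" check: do the fixed features alone already determine the prediction? The brute-force check below over binary features gives only a naive reference semantics for that single query (exponential in the number of free features); the tractable evaluation procedures for restricted model classes and FOIL fragments are the subject of the paper.

```python
from itertools import product

def is_sufficient_reason(predict, fixed, n_features):
    """Return True iff every completion of the partially fixed binary instance
    receives the same label. `predict` is any classifier over {0,1}^n_features;
    `fixed` maps feature indices to their pinned values."""
    free = [i for i in range(n_features) if i not in fixed]
    labels = set()
    for bits in product([0, 1], repeat=len(free)):
        x = [0] * n_features
        for i, v in fixed.items():
            x[i] = v
        for i, v in zip(free, bits):
            x[i] = v
        labels.add(predict(x))
        if len(labels) > 1:
            return False
    return True
```

For a decision tree or decision diagram, `predict` would simply be its evaluation function; restricting the model class or the query fragment is precisely what makes checks like this tractable without enumerating all completions.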
Bridging Offline Reinforcement Learning and Imitation Learning: A Tale of Pessimism
|
https://papers.nips.cc/paper_files/paper/2021/hash/60ce36723c17bbac504f2ef4c8a46995-Abstract.html
|
Paria Rashidinejad, Banghua Zhu, Cong Ma, Jiantao Jiao, Stuart Russell
|
https://papers.nips.cc/paper_files/paper/2021/hash/60ce36723c17bbac504f2ef4c8a46995-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12518-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/60ce36723c17bbac504f2ef4c8a46995-Paper.pdf
|
https://openreview.net/forum?id=G8A_Nl0yim6
|
https://papers.nips.cc/paper_files/paper/2021/file/60ce36723c17bbac504f2ef4c8a46995-Supplemental.pdf
|
Offline (or batch) reinforcement learning (RL) algorithms seek to learn an optimal policy from a fixed dataset without active data collection. Based on the composition of the offline dataset, two main methods are used: imitation learning which is suitable for expert datasets, and vanilla offline RL which often requires uniform coverage datasets. From a practical standpoint, datasets often deviate from these two extremes and the exact data composition is usually unknown. To bridge this gap, we present a new offline RL framework that smoothly interpolates between the two extremes of data composition, hence unifying imitation learning and vanilla offline RL. The new framework is centered around a weak version of the concentrability coefficient that measures the deviation of the behavior policy from the expert policy alone. Under this new framework, we ask: can one develop an algorithm that achieves a minimax optimal rate adaptive to unknown data composition? To address this question, we consider a lower confidence bound (LCB) algorithm developed based on pessimism in the face of uncertainty in offline RL. We study finite-sample properties of LCB as well as information-theoretic limits in multi-armed bandits, contextual bandits, and Markov decision processes (MDPs). Our analysis reveals surprising facts about optimality rates. In particular, in both contextual bandits and RL, LCB achieves a faster rate of $1/N$ for nearly-expert datasets compared to the usual rate of $1/\sqrt{N}$ in offline RL, where $N$ is the batch dataset sample size. In contextual bandits with at least two contexts, we prove that LCB is adaptively optimal for the entire data composition range, achieving a smooth transition from imitation learning to offline RL. We further show that LCB is almost adaptively optimal in MDPs.
| null |
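In the multi-armed bandit special case, the pessimism principle reduces to a few lines: estimate each arm's mean from the batch and subtract a confidence-width penalty before picking the best arm. The Hoeffding-style bonus below is a standard choice assumed for illustration; the paper's analysis pins down the constants and the resulting $1/N$ versus $1/\sqrt{N}$ rates.

```python
import numpy as np

def lcb_arm(rewards_by_arm, delta=0.1):
    """Pessimistic arm choice from a fixed batch of per-arm reward samples
    (rewards assumed to lie in [0, 1])."""
    n_arms = len(rewards_by_arm)
    lcbs = []
    for r in rewards_by_arm:
        r = np.asarray(r, dtype=float)
        if r.size == 0:
            lcbs.append(-np.inf)           # never trust an arm with no data
        else:
            bonus = np.sqrt(np.log(2 * n_arms / delta) / (2 * r.size))
            lcbs.append(r.mean() - bonus)  # lower confidence bound on the mean
    return int(np.argmax(lcbs))
```

The same penalty automatically interpolates between the two regimes in the abstract: on a nearly-expert batch the best arm dominates the data and is selected almost immediately, while on a uniform-coverage batch the rule behaves like a standard pessimistic offline RL estimator.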
Impression learning: Online representation learning with synaptic plasticity
|
https://papers.nips.cc/paper_files/paper/2021/hash/615299acbbac3e21302bbc435091ad9f-Abstract.html
|
Colin Bredenberg, Benjamin Lyo, Eero Simoncelli, Cristina Savin
|
https://papers.nips.cc/paper_files/paper/2021/hash/615299acbbac3e21302bbc435091ad9f-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12519-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/615299acbbac3e21302bbc435091ad9f-Paper.pdf
|
https://openreview.net/forum?id=MAorPaLqam_
|
https://papers.nips.cc/paper_files/paper/2021/file/615299acbbac3e21302bbc435091ad9f-Supplemental.pdf
|
Understanding how the brain constructs statistical models of the sensory world remains a longstanding challenge for computational neuroscience. Here, we derive an unsupervised local synaptic plasticity rule that trains neural circuits to infer latent structure from sensory stimuli via a novel loss function for approximate online Bayesian inference. The learning algorithm is driven by a local error signal computed between two factors that jointly contribute to neural activity: stimulus drive and internal predictions --- the network's 'impression' of the stimulus. Physiologically, we associate these two components with the basal and apical dendrites of pyramidal neurons, respectively. We show that learning can be implemented online, is capable of capturing temporal dependencies in continuous input streams, and generalizes to hierarchical architectures. Furthermore, we demonstrate both analytically and empirically that the algorithm is more data-efficient than a three-factor plasticity alternative, enabling it to learn statistics of high-dimensional, naturalistic inputs. Overall, the model provides a bridge from mechanistic accounts of synaptic plasticity to algorithmic descriptions of unsupervised probabilistic learning and inference.
| null |
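A toy version of the local error signal can be written directly: the neuron's input is split into a stimulus-driven part and an internally predicted part, and the prediction weights move to shrink the mismatch. The specific mixing of the two components and the Hebbian-style outer-product update below are illustrative assumptions, not the derived plasticity rule from the paper.

```python
import numpy as np

def impression_step(W_pred, r_prev, stimulus_drive, lr=1e-3):
    """One local update: compare the internal prediction (apical-like input)
    with the stimulus drive (basal-like input) and correct the prediction
    weights using the resulting local error signal."""
    prediction = W_pred @ r_prev                    # internal prediction from past activity
    error = stimulus_drive - prediction             # mismatch, available locally at the neuron
    W_pred = W_pred + lr * np.outer(error, r_prev)  # Hebbian-style correction of the prediction
    activity = 0.5 * (stimulus_drive + prediction)  # activity mixes both contributing factors
    return W_pred, activity, error
```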
How Well do Feature Visualizations Support Causal Understanding of CNN Activations?
|
https://papers.nips.cc/paper_files/paper/2021/hash/618faa1728eb2ef6e3733645273ab145-Abstract.html
|
Roland S. Zimmermann, Judy Borowski, Robert Geirhos, Matthias Bethge, Thomas Wallis, Wieland Brendel
|
https://papers.nips.cc/paper_files/paper/2021/hash/618faa1728eb2ef6e3733645273ab145-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12520-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/618faa1728eb2ef6e3733645273ab145-Paper.pdf
|
https://openreview.net/forum?id=vLPqnPf9k0
|
https://papers.nips.cc/paper_files/paper/2021/file/618faa1728eb2ef6e3733645273ab145-Supplemental.zip
|
A precise understanding of why units in an artificial network respond to certain stimuli would constitute a big step towards explainable artificial intelligence. One widely used approach towards this goal is to visualize unit responses via activation maximization. These feature visualizations are purported to provide humans with precise information about the image features that cause a unit to be activated - an advantage over other alternatives like strongly activating dataset samples. If humans indeed gain causal insight from visualizations, this should enable them to predict the effect of an intervention, such as how occluding a certain patch of the image (say, a dog's head) changes a unit's activation. Here, we test this hypothesis by asking humans to decide which of two square occlusions causes a larger change to a unit's activation. Both a large-scale crowdsourced experiment and measurements with experts show that on average the extremely activating feature visualizations by Olah et al. (2017) indeed help humans on this task ($68 \pm 4$% accuracy; baseline performance without any visualizations is $60 \pm 3$%). However, they do not provide any substantial advantage over other visualizations (such as e.g. dataset samples), which yield similar performance ($66 \pm 3$% to $67 \pm 3$% accuracy). Taken together, we propose an objective psychophysical task to quantify the benefit of unit-level interpretability methods for humans, and find no evidence that a widely-used feature visualization method provides humans with better "causal understanding" of unit activations than simple alternative visualizations.
| null |
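The intervention task itself is easy to state in code: occlude one of two candidate patches, re-run the unit, and see which occlusion moves the activation more. `unit_activation` below is a hypothetical callable standing in for a forward pass up to the unit of interest; the patch layout and the zero fill value are arbitrary choices for illustration.

```python
import numpy as np

def occlusion_effect(unit_activation, image, patch, fill=0.0):
    """Absolute change in a unit's activation when one square patch is occluded.
    `patch` is (row, col, size); `unit_activation` maps an image array to a scalar."""
    r, c, s = patch
    occluded = np.array(image, copy=True)
    occluded[r:r + s, c:c + s, ...] = fill
    return abs(unit_activation(occluded) - unit_activation(image))

def which_patch_matters_more(unit_activation, image, patch_a, patch_b):
    """The two-alternative forced choice posed to participants in the study."""
    da = occlusion_effect(unit_activation, image, patch_a)
    db = occlusion_effect(unit_activation, image, patch_b)
    return "A" if da > db else "B"
```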
Fixes That Fail: Self-Defeating Improvements in Machine-Learning Systems
|
https://papers.nips.cc/paper_files/paper/2021/hash/619427579e7b067421f6aa89d4a8990c-Abstract.html
|
Ruihan Wu, Chuan Guo, Awni Hannun, Laurens van der Maaten
|
https://papers.nips.cc/paper_files/paper/2021/hash/619427579e7b067421f6aa89d4a8990c-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12521-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/619427579e7b067421f6aa89d4a8990c-Paper.pdf
|
https://openreview.net/forum?id=xZvuqfT6Otj
|
https://papers.nips.cc/paper_files/paper/2021/file/619427579e7b067421f6aa89d4a8990c-Supplemental.zip
|
Machine-learning systems such as self-driving cars or virtual assistants are composed of a large number of machine-learning models that recognize image content, transcribe speech, analyze natural language, infer preferences, rank options, etc. Models in these systems are often developed and trained independently, which raises an obvious concern: Can improving a machine-learning model make the overall system worse? We answer this question affirmatively by showing that improving a model can deteriorate the performance of downstream models, even after those downstream models are retrained. Such self-defeating improvements are the result of entanglement between the models in the system. We perform an error decomposition of systems with multiple machine-learning models, which sheds light on the types of errors that can lead to self-defeating improvements. We also present the results of experiments which show that self-defeating improvements emerge in a realistic stereo-based detection system for cars and pedestrians.
| null |
Coarse-to-fine Animal Pose and Shape Estimation
|
https://papers.nips.cc/paper_files/paper/2021/hash/6195f47dcff14b8f242aa333cdb2703e-Abstract.html
|
Chen Li, Gim Hee Lee
|
https://papers.nips.cc/paper_files/paper/2021/hash/6195f47dcff14b8f242aa333cdb2703e-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12522-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/6195f47dcff14b8f242aa333cdb2703e-Paper.pdf
|
https://openreview.net/forum?id=HxuQiq1SnyS
|
https://papers.nips.cc/paper_files/paper/2021/file/6195f47dcff14b8f242aa333cdb2703e-Supplemental.pdf
|
Most existing animal pose and shape estimation approaches reconstruct animal meshes with a parametric SMAL model. This is because the low-dimensional pose and shape parameters of the SMAL model make it easier for deep networks to learn the high-dimensional animal meshes. However, the SMAL model is learned from scans of toy animals with limited pose and shape variations, and thus may not be able to represent highly varying real animals well. This may result in poor fits of the estimated meshes to the 2D evidence, e.g., 2D keypoints or silhouettes. To mitigate this problem, we propose a coarse-to-fine approach to reconstruct 3D animal mesh from a single image. The coarse estimation stage first estimates the pose, shape and translation parameters of the SMAL model. The estimated meshes are then used as a starting point by a graph convolutional network (GCN) to predict a per-vertex deformation in the refinement stage. This combination of SMAL-based and vertex-based representations benefits from both parametric and non-parametric representations. We design our mesh refinement GCN (MRGCN) as an encoder-decoder structure with hierarchical feature representations to overcome the limited receptive field of traditional GCNs. Moreover, we observe that the global image feature used by existing animal mesh reconstruction works is unable to capture detailed shape information for mesh refinement. We thus introduce a local feature extractor to retrieve a vertex-level feature and use it together with the global feature as the input of the MRGCN. We test our approach on the StanfordExtra dataset and achieve state-of-the-art results. Furthermore, we test the generalization capacity of our approach on the Animal Pose and BADJA datasets. Our code is available at the project website.
| null |
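The two-stage structure can be summarized as a forward pass: regress SMAL parameters from the image feature, decode a coarse mesh, gather per-vertex local features, and let a refinement network predict a residual deformation. Every callable in the sketch is a hypothetical stand-in for a learned module; only the data flow mirrors the description above.

```python
def coarse_to_fine_mesh(image_feat, regress_params, smal_decode,
                        sample_local_feats, refine_gcn):
    """Coarse SMAL fit followed by per-vertex refinement (all modules hypothetical)."""
    pose, shape, trans = regress_params(image_feat)             # coarse stage: low-dim parameters
    coarse_verts = smal_decode(pose, shape, trans)              # SMAL mesh as the starting point
    local_feats = sample_local_feats(coarse_verts)              # vertex-level image features
    delta = refine_gcn(coarse_verts, local_feats, image_feat)   # per-vertex deformation
    return coarse_verts + delta                                 # refined mesh vertices
```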
Meta-Learning Sparse Implicit Neural Representations
|
https://papers.nips.cc/paper_files/paper/2021/hash/61b1fb3f59e28c67f3925f3c79be81a1-Abstract.html
|
Jaeho Lee, Jihoon Tack, Namhoon Lee, Jinwoo Shin
|
https://papers.nips.cc/paper_files/paper/2021/hash/61b1fb3f59e28c67f3925f3c79be81a1-Abstract.html
|
NIPS 2021
|
https://papers.nips.cc/paper_files/paper/12523-/bibtex
|
https://papers.nips.cc/paper_files/paper/2021/file/61b1fb3f59e28c67f3925f3c79be81a1-Paper.pdf
|
https://openreview.net/forum?id=Tn0PnRY877g
|
https://papers.nips.cc/paper_files/paper/2021/file/61b1fb3f59e28c67f3925f3c79be81a1-Supplemental.pdf
|
Implicit neural representations are a promising new avenue for representing general signals by learning a continuous function that, parameterized as a neural network, maps the domain of a signal to its codomain; the mapping from spatial coordinates of an image to its pixel values, for example. Being capable of conveying fine detail in a high-dimensional signal regardless of its domain, implicit neural representations offer many advantages over conventional discrete representations. However, the current approach is difficult to scale to a large number of signals or a data set, since learning a neural representation---which is parameter-heavy by itself---for each signal individually requires a lot of memory and computation. To address this issue, we propose to leverage a meta-learning approach in combination with network compression under a sparsity constraint, such that it renders a well-initialized sparse parameterization that evolves quickly to represent a set of unseen signals in the subsequent training. We empirically demonstrate that meta-learned sparse neural representations achieve a much smaller loss than dense meta-learned models with the same number of parameters, when trained to fit each signal using the same number of optimization steps.
| null |
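The combination of a meta-learned initialization and a fixed sparsity pattern boils down to a masked inner loop: only the surviving weights are adapted for a handful of steps on each new signal. The dictionary layout and the hypothetical `grad_fn` below are assumptions for illustration; how the mask and initialization are obtained during meta-training is specified in the paper.

```python
import numpy as np

def adapt_sparse_inr(init_params, masks, grad_fn, coords, signal, lr=1e-2, steps=5):
    """Few-step adaptation of a sparse implicit representation to one signal.
    `init_params` and `masks` are dicts of NumPy arrays with matching keys;
    `grad_fn(params, coords, signal)` is a hypothetical callable returning the
    gradient of the reconstruction loss for each parameter tensor."""
    params = {k: v.copy() for k, v in init_params.items()}
    for _ in range(steps):
        grads = grad_fn(params, coords, signal)
        for k in params:
            params[k] -= lr * masks[k] * grads[k]   # pruned entries (mask 0) stay frozen
    return params
```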