Dataset fields (in the order they appear in each record below): title, url, authors, detail_url, tags (single value: NIPS 2021), Bibtex, Paper, Reviews And Public Comment », Supplemental, abstract, Supplemental Errata (single value: null).
The future is log-Gaussian: ResNets and their infinite-depth-and-width limit at initialization
https://papers.nips.cc/paper_files/paper/2021/hash/412758d043dd247bddea07c7ec558c31-Abstract.html
Mufan Li, Mihai Nica, Dan Roy
https://papers.nips.cc/paper_files/paper/2021/hash/412758d043dd247bddea07c7ec558c31-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12224-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/412758d043dd247bddea07c7ec558c31-Paper.pdf
https://openreview.net/forum?id=-h99IwQN-f
https://papers.nips.cc/paper_files/paper/2021/file/412758d043dd247bddea07c7ec558c31-Supplemental.zip
Theoretical results show that neural networks can be approximated by Gaussian processes in the infinite-width limit. However, for fully connected networks, it has been previously shown that for any fixed network width, $n$, the Gaussian approximation gets worse as the network depth, $d$, increases. Given that modern networks are deep, this raises the question of how well modern architectures, like ResNets, are captured by the infinite-width limit. To provide a better approximation, we study ReLU ResNets in the infinite-depth-and-width limit, where \emph{both} depth and width tend to infinity as their ratio, $d/n$, remains constant. In contrast to the Gaussian infinite-width limit, we show theoretically that the network exhibits log-Gaussian behaviour at initialization in the infinite-depth-and-width limit, with parameters depending on the ratio $d/n$. Using Monte Carlo simulations, we demonstrate that even basic properties of standard ResNet architectures are poorly captured by the Gaussian limit, but remarkably well captured by our log-Gaussian limit. Moreover, our analysis reveals that ReLU ResNets at initialization are hypoactivated: fewer than half of the ReLUs are activated. Additionally, we calculate the interlayer correlations, which have the effect of exponentially increasing the variance of the network output. Based on our analysis, we introduce \emph{Balanced ResNets}, a simple architecture modification, which eliminates hypoactivation and interlayer correlations and is more amenable to theoretical analysis.
null
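A minimal Monte Carlo sketch of the kind of experiment the abstract above describes, assuming a toy residual update x ← x + W·ReLU(x)/√d with He-style Gaussian weights; the paper's exact architecture, scaling, and statistics may differ. If the log-Gaussian picture holds, the log squared output norm should look approximately normal across random initializations.

```python
# Hedged sketch (not the authors' code): simulate a toy ReLU ResNet at
# initialization and inspect the distribution of the log squared output norm.
import numpy as np

def log_sq_norm_at_init(n=32, d=128, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    x = rng.standard_normal(n)
    for _ in range(d):
        w = rng.standard_normal((n, n)) * np.sqrt(2.0 / n)   # He-style Gaussian init (assumed)
        x = x + w @ np.maximum(x, 0.0) / np.sqrt(d)          # one residual block (assumed scaling)
    return np.log(np.sum(x ** 2))

samples = np.array([log_sq_norm_at_init() for _ in range(500)])
# Under the log-Gaussian limit, `samples` should be approximately Gaussian.
print("mean:", samples.mean(), "std:", samples.std())
```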
Grammar-Based Grounded Lexicon Learning
https://papers.nips.cc/paper_files/paper/2021/hash/4158f6d19559955bae372bb00f6204e4-Abstract.html
Jiayuan Mao, Freda Shi, Jiajun Wu, Roger Levy, Josh Tenenbaum
https://papers.nips.cc/paper_files/paper/2021/hash/4158f6d19559955bae372bb00f6204e4-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12225-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4158f6d19559955bae372bb00f6204e4-Paper.pdf
https://openreview.net/forum?id=VJQMp5xu24
https://papers.nips.cc/paper_files/paper/2021/file/4158f6d19559955bae372bb00f6204e4-Supplemental.pdf
We present Grammar-Based Grounded Lexicon Learning (G2L2), a lexicalist approach toward learning a compositional and grounded meaning representation of language from grounded data, such as paired images and texts. At the core of G2L2 is a collection of lexicon entries, which map each word to a tuple of a syntactic type and a neuro-symbolic semantic program. For example, the word shiny has a syntactic type of adjective; its neuro-symbolic semantic program has the symbolic form $\lambda x.\textit{filter}(x, \textbf{SHINY})$, where the concept SHINY is associated with a neural network embedding, which will be used to classify shiny objects. Given an input sentence, G2L2 first looks up the lexicon entries associated with each token. It then derives the meaning of the sentence as an executable neuro-symbolic program by composing lexical meanings based on syntax. The recovered meaning programs can be executed on grounded inputs. To facilitate learning in an exponentially-growing compositional space, we introduce a joint parsing and expected execution algorithm, which does local marginalization over derivations to reduce the training time. We evaluate G2L2 on two domains: visual reasoning and language-driven navigation. Results show that G2L2 can generalize from small amounts of data to novel compositions of words.
null
Distributed Deep Learning In Open Collaborations
https://papers.nips.cc/paper_files/paper/2021/hash/41a60377ba920919939d83326ebee5a1-Abstract.html
Michael Diskin, Alexey Bukhtiyarov, Max Ryabinin, Lucile Saulnier, quentin lhoest, Anton Sinitsin, Dmitry Popov, Dmitry V. Pyrkin, Maxim Kashirin, Alexander Borzunov, Albert Villanova del Moral, Denis Mazur, Ilia Kobelev, Yacine Jernite, Thomas Wolf, Gennady Pekhimenko
https://papers.nips.cc/paper_files/paper/2021/hash/41a60377ba920919939d83326ebee5a1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12226-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/41a60377ba920919939d83326ebee5a1-Paper.pdf
https://openreview.net/forum?id=FYHktcK-7v
https://papers.nips.cc/paper_files/paper/2021/file/41a60377ba920919939d83326ebee5a1-Supplemental.pdf
Modern deep learning applications require increasingly more compute to train state-of-the-art models. To address this demand, large corporations and institutions use dedicated High-Performance Computing clusters, whose construction and maintenance are both environmentally costly and well beyond the budget of most organizations. As a result, some research directions become the exclusive domain of a few large industrial and even fewer academic actors. To alleviate this disparity, smaller groups may pool their computational resources and run collaborative experiments that benefit all participants. This paradigm, known as grid- or volunteer computing, has seen successful applications in numerous scientific areas. However, using this approach for machine learning is difficult due to high latency, asymmetric bandwidth, and several challenges unique to volunteer computing. In this work, we carefully analyze these constraints and propose a novel algorithmic framework designed specifically for collaborative training. We demonstrate the effectiveness of our approach for SwAV and ALBERT pretraining in realistic conditions and achieve performance comparable to traditional setups at a fraction of the cost. Finally, we provide a detailed report of successful collaborative language model pretraining with nearly 50 participants.
null
Neural Ensemble Search for Uncertainty Estimation and Dataset Shift
https://papers.nips.cc/paper_files/paper/2021/hash/41a6fd31aa2e75c3c6d427db3d17ea80-Abstract.html
Sheheryar Zaidi, Arber Zela, Thomas Elsken, Chris C Holmes, Frank Hutter, Yee Teh
https://papers.nips.cc/paper_files/paper/2021/hash/41a6fd31aa2e75c3c6d427db3d17ea80-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12227-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/41a6fd31aa2e75c3c6d427db3d17ea80-Paper.pdf
https://openreview.net/forum?id=HiYDAwAGWud
https://papers.nips.cc/paper_files/paper/2021/file/41a6fd31aa2e75c3c6d427db3d17ea80-Supplemental.pdf
Ensembles of neural networks achieve superior performance compared to standalone networks in terms of accuracy, uncertainty calibration and robustness to dataset shift. Deep ensembles, a state-of-the-art method for uncertainty estimation, only ensemble random initializations of a fixed architecture. Instead, we propose two methods for automatically constructing ensembles with varying architectures, which implicitly trade off individual architectures’ strengths against the ensemble’s diversity and exploit architectural variation as a source of diversity. On a variety of classification tasks and modern architecture search spaces, we show that the resulting ensembles outperform deep ensembles not only in terms of accuracy but also in terms of uncertainty calibration and robustness to dataset shift. Our further analysis and ablation studies provide evidence of higher ensemble diversity due to architectural variation, resulting in ensembles that can outperform deep ensembles even when their average base learners are weaker. To foster reproducibility, our code is available at https://github.com/automl/nes
null
Finding Bipartite Components in Hypergraphs
https://papers.nips.cc/paper_files/paper/2021/hash/41bacf567aefc61b3076c74d8925128f-Abstract.html
Peter Macgregor, He Sun
https://papers.nips.cc/paper_files/paper/2021/hash/41bacf567aefc61b3076c74d8925128f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12228-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/41bacf567aefc61b3076c74d8925128f-Paper.pdf
https://openreview.net/forum?id=fhDSTihtiB6
https://papers.nips.cc/paper_files/paper/2021/file/41bacf567aefc61b3076c74d8925128f-Supplemental.pdf
Hypergraphs are important objects to model ternary or higher-order relations of objects, and have a number of applications in analysing many complex datasets occurring in practice. In this work we study a new heat diffusion process in hypergraphs, and employ this process to design a polynomial-time algorithm that approximately finds bipartite components in a hypergraph. We theoretically prove the performance of our proposed algorithm, and compare it against the previous state-of-the-art through extensive experimental analysis on both synthetic and real-world datasets. We find that our new algorithm consistently and significantly outperforms the previous state-of-the-art across a wide range of hypergraphs.
null
Hit and Lead Discovery with Explorative RL and Fragment-based Molecule Generation
https://papers.nips.cc/paper_files/paper/2021/hash/41da609c519d77b29be442f8c1105647-Abstract.html
Soojung Yang, Doyeong Hwang, Seul Lee, Seongok Ryu, Sung Ju Hwang
https://papers.nips.cc/paper_files/paper/2021/hash/41da609c519d77b29be442f8c1105647-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12229-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/41da609c519d77b29be442f8c1105647-Paper.pdf
https://openreview.net/forum?id=onyYGbBJ2Mh
https://papers.nips.cc/paper_files/paper/2021/file/41da609c519d77b29be442f8c1105647-Supplemental.pdf
Recently, utilizing reinforcement learning (RL) to generate molecules with desired properties has been highlighted as a promising strategy for drug design. A molecular docking program -- a physical simulation that estimates protein-small molecule binding affinity -- can be an ideal reward scoring function for RL, as it is a straightforward proxy of the therapeutic potential. Still, two imminent challenges exist for this task. First, the models often fail to generate chemically realistic and pharmacochemically acceptable molecules. Second, the docking score optimization is a difficult exploration problem that involves many local optima and a less smooth surface with respect to molecular structure. To tackle these challenges, we propose a novel RL framework that generates pharmacochemically acceptable molecules with large docking scores. Our method -- Fragment-based generative RL with Explorative Experience replay for Drug design (FREED) -- constrains the generated molecules to a realistic and qualified chemical space and effectively explores the space to find drugs by coupling our fragment-based generation method and a novel error-prioritized experience replay (PER). We also show that our model performs well on both de novo and scaffold-based schemes. Our model produces molecules of higher quality compared to existing methods while achieving state-of-the-art performance on two of three targets in terms of the docking scores of the generated molecules. We further show with ablation studies that our method, predictive error-PER (FREED(PE)), significantly improves the model performance.
null
Proxy Convexity: A Unified Framework for the Analysis of Neural Networks Trained by Gradient Descent
https://papers.nips.cc/paper_files/paper/2021/hash/42299f06ee419aa5d9d07798b56779e2-Abstract.html
Spencer Frei, Quanquan Gu
https://papers.nips.cc/paper_files/paper/2021/hash/42299f06ee419aa5d9d07798b56779e2-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12230-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/42299f06ee419aa5d9d07798b56779e2-Paper.pdf
https://openreview.net/forum?id=NVpGLJUuPx5
null
Although the optimization objectives for learning neural networks are highly non-convex, gradient-based methods have been wildly successful at learning neural networks in practice. This juxtaposition has led to a number of recent studies on provable guarantees for neural networks trained by gradient descent. Unfortunately, the techniques in these works are often highly specific to the particular setup in each problem, making it difficult to generalize across different settings. To address this drawback in the literature, we propose a unified non-convex optimization framework for the analysis of neural network training. We introduce the notions of proxy convexity and proxy Polyak-Lojasiewicz (PL) inequalities, which are satisfied if the original objective function induces a proxy objective function that is implicitly minimized when using gradient methods. We show that stochastic gradient descent (SGD) on objectives satisfying proxy convexity or the proxy PL inequality leads to efficient guarantees for proxy objective functions. We further show that many existing guarantees for neural networks trained by gradient descent can be unified through proxy convexity and proxy PL inequalities.
null
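To make the idea in the abstract above concrete, here is one plausible way a proxy PL inequality could be stated; this is a hedged reading of the abstract, not the paper's exact definition, and the constant and proxy objective are assumptions for illustration.

```latex
% Hedged sketch: loss F, proxy objective G, gradient methods run on F.
% The paper's precise definitions of proxy convexity and proxy PL may differ.
\[
\|\nabla F(w)\|^2 \;\ge\; c\,\bigl(G(w) - \inf_{v} G(v)\bigr)
\qquad \text{for some } c > 0 \text{ and some proxy objective } G,
\]
% so gradient steps that shrink \(\|\nabla F\|\) implicitly drive the proxy
% suboptimality \(G(w) - \inf G\) to zero, even though F itself need not be convex.
```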
Covariance-Aware Private Mean Estimation Without Private Covariance Estimation
https://papers.nips.cc/paper_files/paper/2021/hash/42778ef0b5805a96f9511e20b5611fce-Abstract.html
Gavin Brown, Marco Gaboardi, Adam Smith, Jonathan Ullman, Lydia Zakynthinou
https://papers.nips.cc/paper_files/paper/2021/hash/42778ef0b5805a96f9511e20b5611fce-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12231-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/42778ef0b5805a96f9511e20b5611fce-Paper.pdf
https://openreview.net/forum?id=INBO6h9gtG
https://papers.nips.cc/paper_files/paper/2021/file/42778ef0b5805a96f9511e20b5611fce-Supplemental.pdf
We present two sample-efficient differentially private mean estimators for $d$-dimensional (sub)Gaussian distributions with unknown covariance. Informally, given $n \gtrsim d/\alpha^2$ samples from such a distribution with mean $\mu$ and covariance $\Sigma$, our estimators output $\tilde\mu$ such that $\| \tilde\mu - \mu \|_{\Sigma} \leq \alpha$, where $\| \cdot \|_{\Sigma}$ is the \emph{Mahalanobis distance}. All previous estimators with the same guarantee either require strong a priori bounds on the covariance matrix or require $\Omega(d^{3/2})$ samples. Each of our estimators is based on a simple, general approach to designing differentially private mechanisms, but with novel technical steps to make the estimator private and sample-efficient. Our first estimator samples a point with approximately maximum Tukey depth using the exponential mechanism, but restricted to the set of points of large Tukey depth. Proving that this mechanism is private requires a novel analysis. Our second estimator perturbs the empirical mean of the data set with noise calibrated to the empirical covariance. Only the mean is released, however; the covariance is only used internally. Its sample complexity guarantees hold more generally for subgaussian distributions, albeit with a slightly worse dependence on the privacy parameter. For both estimators, careful preprocessing of the data is required to satisfy differential privacy.
null
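The following sketch illustrates only the high-level idea behind the second estimator in the abstract above: release the empirical mean perturbed with noise shaped by the empirical covariance, so the error is small in Mahalanobis distance. It is not a differentially private mechanism as written; the paper requires careful preprocessing and a calibrated noise scale for a formal guarantee, and `noise_scale` below is a hypothetical knob.

```python
# Illustrative, NOT private as written: covariance-shaped noisy mean release.
import numpy as np

def covariance_aware_noisy_mean(X, noise_scale, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    n, d = X.shape
    mu_hat = X.mean(axis=0)
    sigma_hat = np.cov(X, rowvar=False)                  # empirical covariance, used internally only
    L = np.linalg.cholesky(sigma_hat + 1e-9 * np.eye(d)) # small jitter for numerical stability
    z = rng.standard_normal(d)
    return mu_hat + (noise_scale / n) * (L @ z)          # noise aligned with the data's shape

# Usage: X = rng.multivariate_normal(mu, Sigma, size=n); est = covariance_aware_noisy_mean(X, 1.0)
```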
Label consistency in overfitted generalized $k$-means
https://papers.nips.cc/paper_files/paper/2021/hash/427e3427c5f38a41bb9cb26525b22fba-Abstract.html
Linfan Zhang, Arash Amini
https://papers.nips.cc/paper_files/paper/2021/hash/427e3427c5f38a41bb9cb26525b22fba-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12232-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/427e3427c5f38a41bb9cb26525b22fba-Paper.pdf
https://openreview.net/forum?id=_FPtOcc0ygy
https://papers.nips.cc/paper_files/paper/2021/file/427e3427c5f38a41bb9cb26525b22fba-Supplemental.pdf
We provide theoretical guarantees for label consistency in generalized $k$-means problems, with an emphasis on the overfitted case where the number of clusters used by the algorithm is more than the ground truth. We provide conditions under which the estimated labels are close to a refinement of the true cluster labels. We consider both exact and approximate recovery of the labels. Our results hold for any constant-factor approximation to the $k$-means problem. The results are also model-free and only based on bounds on the maximum or average distance of the data points to the true cluster centers. These centers themselves are loosely defined and can be taken to be any set of points for which the aforementioned distances can be controlled. We show the usefulness of the results with applications to some manifold clustering problems.
null
Open-set Label Noise Can Improve Robustness Against Inherent Label Noise
https://papers.nips.cc/paper_files/paper/2021/hash/428fca9bc1921c25c5121f9da7815cde-Abstract.html
Hongxin Wei, Lue Tao, RENCHUNZI XIE, Bo An
https://papers.nips.cc/paper_files/paper/2021/hash/428fca9bc1921c25c5121f9da7815cde-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12233-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/428fca9bc1921c25c5121f9da7815cde-Paper.pdf
https://openreview.net/forum?id=sK6CtXIgKcp
https://papers.nips.cc/paper_files/paper/2021/file/428fca9bc1921c25c5121f9da7815cde-Supplemental.pdf
Learning with noisy labels is a practically challenging problem in weakly supervised learning. In the existing literature, open-set noises are always considered to be poisonous for generalization, similar to closed-set noises. In this paper, we empirically show that open-set noisy labels can be non-toxic and even benefit the robustness against inherent noisy labels. Inspired by the observations, we propose a simple yet effective regularization by introducing Open-set samples with Dynamic Noisy Labels (ODNL) into training. With ODNL, the extra capacity of the neural network can be largely consumed in a way that does not interfere with learning patterns from clean data. Through the lens of SGD noise, we show that the noises induced by our method are random-direction, conflict-free and biased, which may help the model converge to a flat minimum with superior stability and enforce the model to produce conservative predictions on Out-of-Distribution instances. Extensive experimental results on benchmark datasets with various types of noisy labels demonstrate that the proposed method not only enhances the performance of many existing robust algorithms but also achieves significant improvement on Out-of-Distribution detection tasks even in the label noise setting.
null
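A hedged reading of ODNL as described in the abstract above, not the authors' exact implementation: alongside each labeled batch, feed a batch of open-set (out-of-distribution) images whose labels are re-drawn uniformly at random every iteration, and add their loss as a regularizer. The weighting `lam` and the training-step structure are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def odnl_step(model, optimizer, x, y, x_open, num_classes, lam=1.0):
    # Dynamic noisy labels: resampled uniformly at random at every step.
    y_dyn = torch.randint(0, num_classes, (x_open.shape[0],), device=x_open.device)
    loss = F.cross_entropy(model(x), y) + lam * F.cross_entropy(model(x_open), y_dyn)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```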
The Complexity of Sparse Tensor PCA
https://papers.nips.cc/paper_files/paper/2021/hash/42a6845a557bef704ad8ac9cb4461d43-Abstract.html
Davin Choo, Tommaso d'Orsi
https://papers.nips.cc/paper_files/paper/2021/hash/42a6845a557bef704ad8ac9cb4461d43-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12234-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/42a6845a557bef704ad8ac9cb4461d43-Paper.pdf
https://openreview.net/forum?id=FKCTeO1fsvH
https://papers.nips.cc/paper_files/paper/2021/file/42a6845a557bef704ad8ac9cb4461d43-Supplemental.pdf
We study the problem of sparse tensor principal component analysis: given a tensor $\pmb Y = \pmb W + \lambda x^{\otimes p}$ with $\pmb W \in \otimes^p \mathbb{R}^n$ having i.i.d. Gaussian entries, the goal is to recover the $k$-sparse unit vector $x \in \mathbb{R}^n$. The model captures both sparse PCA (in its Wigner form) and tensor PCA. For the highly sparse regime of $k \leq \sqrt{n}$, we present a family of algorithms that smoothly interpolates between a simple polynomial-time algorithm and the exponential-time exhaustive search algorithm. For any $1 \leq t \leq k$, our algorithms recover the sparse vector for signal-to-noise ratio $\lambda \geq \tilde{\mathcal{O}} (\sqrt{t} \cdot (k/t)^{p/2})$ in time $\tilde{\mathcal{O}}(n^{p+t})$, capturing the state-of-the-art guarantees for the matrix settings (in both the polynomial-time and sub-exponential time regimes). Our results naturally extend to the case of $r$ distinct $k$-sparse signals with disjoint supports, with guarantees that are independent of the number of spikes. Even in the restricted case of sparse PCA, known algorithms only recover the sparse vectors for $\lambda \geq \tilde{\mathcal{O}}(k \cdot r)$ while our algorithms require $\lambda \geq \tilde{\mathcal{O}}(k)$. Finally, by analyzing the low-degree likelihood ratio, we complement these algorithmic results with rigorous evidence illustrating the trade-offs between signal-to-noise ratio and running time. This lower bound captures the known lower bounds for both sparse PCA and tensor PCA. In this general model, we observe a more intricate three-way trade-off between the number of samples $n$, the sparsity $k$, and the tensor power $p$.
null
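A short sketch of the planted model from the abstract above, $\pmb Y = \pmb W + \lambda x^{\otimes p}$ with a $k$-sparse unit vector $x$; the parameter values are illustrative and the recovery algorithms from the paper are not shown.

```python
import numpy as np
from functools import reduce

def sparse_tensor_pca_instance(n=30, p=3, k=5, lam=10.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    support = rng.choice(n, size=k, replace=False)
    x = np.zeros(n)
    x[support] = rng.standard_normal(k)
    x /= np.linalg.norm(x)                        # k-sparse unit vector
    W = rng.standard_normal((n,) * p)             # order-p tensor with i.i.d. Gaussian entries
    signal = reduce(np.multiply.outer, [x] * p)   # rank-one spike x^{otimes p}
    return W + lam * signal, x

Y, x_true = sparse_tensor_pca_instance()
```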
Learning to Elect
https://papers.nips.cc/paper_files/paper/2021/hash/42d6c7d61481d1c21bd1635f59edae05-Abstract.html
Cem Anil, Xuchan Bao
https://papers.nips.cc/paper_files/paper/2021/hash/42d6c7d61481d1c21bd1635f59edae05-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12235-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/42d6c7d61481d1c21bd1635f59edae05-Paper.pdf
https://openreview.net/forum?id=7HQiArc-sKf
https://papers.nips.cc/paper_files/paper/2021/file/42d6c7d61481d1c21bd1635f59edae05-Supplemental.pdf
Voting systems have a wide range of applications including recommender systems, web search, product design and elections. Limited by the lack of general-purpose analytical tools, it is difficult to hand-engineer desirable voting rules for each use case. For this reason, it is appealing to automatically discover voting rules geared towards each scenario. In this paper, we show that set-input neural network architectures such as Set Transformers, fully-connected graph networks and DeepSets are both theoretically and empirically well-suited for learning voting rules. In particular, we show that these network models can not only mimic a number of existing voting rules to compelling accuracy --- both position-based (such as Plurality and Borda) and comparison-based (such as Kemeny, Copeland and Maximin) --- but also discover near-optimal voting rules that maximize different social welfare functions. Furthermore, the learned voting rules generalize well to different voter utility distributions and election sizes unseen during training.
null
KALE Flow: A Relaxed KL Gradient Flow for Probabilities with Disjoint Support
https://papers.nips.cc/paper_files/paper/2021/hash/433a6ea5429d6d75f0be9bf9da26e24c-Abstract.html
Pierre Glaser, Michael Arbel, Arthur Gretton
https://papers.nips.cc/paper_files/paper/2021/hash/433a6ea5429d6d75f0be9bf9da26e24c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12236-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/433a6ea5429d6d75f0be9bf9da26e24c-Paper.pdf
https://openreview.net/forum?id=ZBeCVICs1Ua
https://papers.nips.cc/paper_files/paper/2021/file/433a6ea5429d6d75f0be9bf9da26e24c-Supplemental.pdf
We study the gradient flow for a relaxed approximation to the Kullback-Leibler (KL) divergence between a moving source and a fixed target distribution. This approximation, termed the KALE (KL approximate lower-bound estimator), solves a regularized version of the Fenchel dual problem defining the KL over a restricted class of functions. When using a Reproducing Kernel Hilbert Space (RKHS) to define the function class, we show that the KALE continuously interpolates between the KL and the Maximum Mean Discrepancy (MMD). Like the MMD and other Integral Probability Metrics, the KALE remains well defined for mutually singular distributions. Nonetheless, the KALE inherits from the limiting KL a greater sensitivity to mismatch in the support of the distributions, compared with the MMD. These two properties make the KALE gradient flow particularly well suited when the target distribution is supported on a low-dimensional manifold. Under an assumption of sufficient smoothness of the trajectories, we show the global convergence of the KALE flow. We propose a particle implementation of the flow given initial samples from the source and the target distribution, which we use to empirically confirm the KALE's properties.
null
When Is Generalizable Reinforcement Learning Tractable?
https://papers.nips.cc/paper_files/paper/2021/hash/437d46a857214c997956eaf0e3b21a55-Abstract.html
Dhruv Malik, Yuanzhi Li, Pradeep Ravikumar
https://papers.nips.cc/paper_files/paper/2021/hash/437d46a857214c997956eaf0e3b21a55-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12237-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/437d46a857214c997956eaf0e3b21a55-Paper.pdf
https://openreview.net/forum?id=lHvy0DLYWm
https://papers.nips.cc/paper_files/paper/2021/file/437d46a857214c997956eaf0e3b21a55-Supplemental.pdf
Agents trained by reinforcement learning (RL) often fail to generalize beyond the environment they were trained in, even when presented with new scenarios that seem similar to the training environment. We study the query complexity required to train RL agents that generalize to multiple environments. Intuitively, tractable generalization is only possible when the environments are similar or close in some sense. To capture this, we introduce Weak Proximity, a natural structural condition that requires the environments to have highly similar transition and reward functions and share a policy providing optimal value. Despite such shared structure, we prove that tractable generalization is impossible in the worst case. This holds even when each individual environment can be efficiently solved to obtain an optimal linear policy, and when the agent possesses a generative model. Our lower bound applies to the more complex task of representation learning for efficient generalization to multiple environments. On the positive side, we introduce Strong Proximity, a strengthened condition which we prove is sufficient for efficient generalization.
null
Relational Self-Attention: What's Missing in Attention for Video Understanding
https://papers.nips.cc/paper_files/paper/2021/hash/4392e631da381761421d5e1e0c3de25f-Abstract.html
Manjin Kim, Heeseung Kwon, CHUNYU WANG, Suha Kwak, Minsu Cho
https://papers.nips.cc/paper_files/paper/2021/hash/4392e631da381761421d5e1e0c3de25f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12238-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4392e631da381761421d5e1e0c3de25f-Paper.pdf
https://openreview.net/forum?id=DLKakJ2W-In
https://papers.nips.cc/paper_files/paper/2021/file/4392e631da381761421d5e1e0c3de25f-Supplemental.pdf
Convolution has been arguably the most important feature transform for modern neural networks, leading to the advance of deep learning. Recent emergence of Transformer networks, which replace convolution layers with self-attention blocks, has revealed the limitation of stationary convolution kernels and opened the door to the era of dynamic feature transforms. The existing dynamic transforms, including self-attention, however, are all limited for video understanding where correspondence relations in space and time, i.e., motion information, are crucial for effective representation. In this work, we introduce a relational feature transform, dubbed the relational self-attention (RSA), that leverages rich structures of spatio-temporal relations in videos by dynamically generating relational kernels and aggregating relational contexts. Our experiments and ablation studies show that the RSA network substantially outperforms convolution and self-attention counterparts, achieving the state of the art on the standard motion-centric benchmarks for video action recognition, such as Something-Something-V1&V2, Diving48, and FineGym.
null
Towards Enabling Meta-Learning from Target Models
https://papers.nips.cc/paper_files/paper/2021/hash/43baa6762fa81bb43b39c62553b2970d-Abstract.html
Su Lu, Han-Jia Ye, Le Gan, De-Chuan Zhan
https://papers.nips.cc/paper_files/paper/2021/hash/43baa6762fa81bb43b39c62553b2970d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12239-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/43baa6762fa81bb43b39c62553b2970d-Paper.pdf
https://openreview.net/forum?id=Htnjc4kHsNF
https://papers.nips.cc/paper_files/paper/2021/file/43baa6762fa81bb43b39c62553b2970d-Supplemental.pdf
Meta-learning can extract an inductive bias from previous learning experience and assist the training of new tasks. It is often realized through optimizing a meta-model with the evaluation loss of task-specific solvers. Most existing algorithms sample non-overlapping $\mathit{support}$ sets and $\mathit{query}$ sets to train and evaluate the solvers respectively for simplicity ($\mathcal{S}$/$\mathcal{Q}$ protocol). Different from the $\mathcal{S}$/$\mathcal{Q}$ protocol, we can also evaluate a task-specific solver by comparing it to a target model $\mathcal{T}$, which is the optimal model for this task or a model that behaves well enough on this task ($\mathcal{S}$/$\mathcal{T}$ protocol). Although under-explored, the $\mathcal{S}$/$\mathcal{T}$ protocol has unique advantages such as offering more informative supervision, but it is computationally expensive. This paper looks into this special evaluation method and takes a step towards putting it into practice. We find that with a small ratio of tasks armed with target models, classic meta-learning algorithms can be improved a lot without consuming many resources. We empirically verify the effectiveness of the $\mathcal{S}$/$\mathcal{T}$ protocol in a typical application of meta-learning, $\mathit{i.e.}$, few-shot learning. In detail, after constructing target models by fine-tuning the pre-trained network on those hard tasks, we match the task-specific solvers and target models via knowledge distillation.
null
A Near-Optimal Algorithm for Debiasing Trained Machine Learning Models
https://papers.nips.cc/paper_files/paper/2021/hash/43c656628a4a479e108ed86f7a28a010-Abstract.html
Ibrahim M. Alabdulmohsin, Mario Lucic
https://papers.nips.cc/paper_files/paper/2021/hash/43c656628a4a479e108ed86f7a28a010-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12240-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/43c656628a4a479e108ed86f7a28a010-Paper.pdf
https://openreview.net/forum?id=H5TBqNFPKSJ
https://papers.nips.cc/paper_files/paper/2021/file/43c656628a4a479e108ed86f7a28a010-Supplemental.pdf
We present a scalable post-processing algorithm for debiasing trained models, including deep neural networks (DNNs), which we prove to be near-optimal by bounding its excess Bayes risk. We empirically validate its advantages on standard benchmark datasets across both classical algorithms as well as modern DNN architectures and demonstrate that it outperforms previous post-processing methods while performing on par with in-processing. In addition, we show that the proposed algorithm is particularly effective for models trained at scale where post-processing is a natural and practical choice.
null
GENESIS-V2: Inferring Unordered Object Representations without Iterative Refinement
https://papers.nips.cc/paper_files/paper/2021/hash/43ec517d68b6edd3015b3edc9a11367b-Abstract.html
Martin Engelcke, Oiwi Parker Jones, Ingmar Posner
https://papers.nips.cc/paper_files/paper/2021/hash/43ec517d68b6edd3015b3edc9a11367b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12241-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/43ec517d68b6edd3015b3edc9a11367b-Paper.pdf
https://openreview.net/forum?id=ws4BkjI1l-q
null
Advances in unsupervised learning of object-representations have culminated in the development of a broad range of methods for unsupervised object segmentation and interpretable object-centric scene generation. These methods, however, are limited to simulated and real-world datasets with limited visual complexity. Moreover, object representations are often inferred using RNNs which do not scale well to large images or iterative refinement which avoids imposing an unnatural ordering on objects in an image but requires the a priori initialisation of a fixed number of object representations. In contrast to established paradigms, this work proposes an embedding-based approach in which embeddings of pixels are clustered in a differentiable fashion using a stochastic stick-breaking process. Similar to iterative refinement, this clustering procedure also leads to randomly ordered object representations, but without the need of initialising a fixed number of clusters a priori. This is used to develop a new model, GENESIS-v2, which can infer a variable number of object representations without using RNNs or iterative refinement. We show that GENESIS-v2 performs strongly in comparison to recent baselines in terms of unsupervised image segmentation and object-centric scene generation on established synthetic datasets as well as more complex real-world datasets.
null
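An illustrative differentiable stick-breaking over pixel embeddings, as a reading of the clustering idea in the abstract above; GENESIS-v2's actual procedure differs in details, and the seed-selection rule and similarity kernel here are assumptions.

```python
import torch

def stick_breaking_masks(emb, max_slots=6, sigma=1.0):
    # emb: (num_pixels, dim) pixel embeddings.
    scope = torch.ones(emb.shape[0])                       # unexplained mass per pixel
    masks = []
    for _ in range(max_slots - 1):
        seed = torch.multinomial(scope + 1e-8, 1)          # stochastic seed pixel (assumed rule)
        d2 = ((emb - emb[seed]) ** 2).sum(-1)
        alpha = torch.exp(-d2 / (2 * sigma ** 2))          # soft similarity to the seed
        masks.append(scope * alpha)                        # this slot claims part of the scope
        scope = scope * (1 - alpha)                        # remaining scope for later slots
    masks.append(scope)                                    # last slot takes the remainder
    return torch.stack(masks)                              # (max_slots, num_pixels)
```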
How Data Augmentation affects Optimization for Linear Regression
https://papers.nips.cc/paper_files/paper/2021/hash/442b548e816f05640dec68f497ca38ac-Abstract.html
Boris Hanin, Yi Sun
https://papers.nips.cc/paper_files/paper/2021/hash/442b548e816f05640dec68f497ca38ac-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12242-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/442b548e816f05640dec68f497ca38ac-Paper.pdf
https://openreview.net/forum?id=wRFj6EKvpl
https://papers.nips.cc/paper_files/paper/2021/file/442b548e816f05640dec68f497ca38ac-Supplemental.pdf
Though data augmentation has rapidly emerged as a key tool for optimization in modern machine learning, a clear picture of how augmentation schedules affect optimization and interact with optimization hyperparameters such as learning rate is nascent. In the spirit of classical convex optimization and recent work on implicit bias, the present work analyzes the effect of augmentation on optimization in the simple convex setting of linear regression with MSE loss. We find joint schedules for learning rate and data augmentation scheme under which augmented gradient descent provably converges and characterize the resulting minimum. Our results apply to arbitrary augmentation schemes, revealing complex interactions between learning rates and augmentations even in the convex setting. Our approach interprets augmented (S)GD as a stochastic optimization method for a time-varying sequence of proxy losses. This gives a unified way to analyze learning rate, batch size, and augmentations ranging from additive noise to random projections. From this perspective, our results, which also give rates of convergence, can be viewed as Monro-Robbins type conditions for augmented (S)GD.
null
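A minimal sketch of augmented gradient descent for linear regression with MSE loss, using additive-noise augmentation as in the abstract above. The learning-rate and noise schedules are illustrative; the paper's joint schedules and convergence conditions cover more general augmentations (e.g., random projections).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

w = np.zeros(d)
for t in range(500):
    lr = 0.1 / (1 + 0.01 * t)                         # decaying learning-rate schedule (assumed)
    noise = 0.5 / (1 + 0.05 * t)                      # decaying augmentation strength (assumed)
    X_aug = X + noise * rng.standard_normal(X.shape)  # fresh additive-noise augmentation each step
    grad = X_aug.T @ (X_aug @ w - y) / n              # gradient of the per-step proxy MSE loss
    w -= lr * grad
```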
An Exact Characterization of the Generalization Error for the Gibbs Algorithm
https://papers.nips.cc/paper_files/paper/2021/hash/445e24b5f22cacb9d51a837c10e91a3f-Abstract.html
Gholamali Aminian, Yuheng Bu, Laura Toni, Miguel Rodrigues, Gregory Wornell
https://papers.nips.cc/paper_files/paper/2021/hash/445e24b5f22cacb9d51a837c10e91a3f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12243-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/445e24b5f22cacb9d51a837c10e91a3f-Paper.pdf
https://openreview.net/forum?id=XnIYa2OG2sr
https://papers.nips.cc/paper_files/paper/2021/file/445e24b5f22cacb9d51a837c10e91a3f-Supplemental.pdf
Various approaches have been developed to upper bound the generalization error of a supervised learning algorithm. However, existing bounds are often loose and lack guarantees of tightness. As a result, they may fail to characterize the exact generalization ability of a learning algorithm. Our main contribution is an exact characterization of the expected generalization error of the well-known Gibbs algorithm (a.k.a. Gibbs posterior) using symmetrized KL information between the input training samples and the output hypothesis. Our result can be applied to tighten existing expected generalization error and PAC-Bayesian bounds. Our approach is versatile, as it also characterizes the generalization error of the Gibbs algorithm with a data-dependent regularizer and that of the Gibbs algorithm in the asymptotic regime, where it converges to the empirical risk minimization algorithm. Of particular relevance, our results highlight the role the symmetrized KL information plays in controlling the generalization error of the Gibbs algorithm.
null
Subgaussian and Differentiable Importance Sampling for Off-Policy Evaluation and Learning
https://papers.nips.cc/paper_files/paper/2021/hash/4476b929e30dd0c4e8bdbcc82c6ba23a-Abstract.html
Alberto Maria Metelli, Alessio Russo, Marcello Restelli
https://papers.nips.cc/paper_files/paper/2021/hash/4476b929e30dd0c4e8bdbcc82c6ba23a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12244-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4476b929e30dd0c4e8bdbcc82c6ba23a-Paper.pdf
https://openreview.net/forum?id=_8vCV7AxPZ
https://papers.nips.cc/paper_files/paper/2021/file/4476b929e30dd0c4e8bdbcc82c6ba23a-Supplemental.pdf
Importance Sampling (IS) is a widely used building block for a large variety of off-policy estimation and learning algorithms. However, empirical and theoretical studies have progressively shown that vanilla IS leads to poor estimations whenever the behavioral and target policies are too dissimilar. In this paper, we analyze the theoretical properties of the IS estimator by deriving a novel anticoncentration bound that formalizes the intuition behind its undesired behavior. Then, we propose a new class of IS transformations, based on the notion of power mean. To the best of our knowledge, the resulting estimator is the first to achieve, under certain conditions, two key properties: (i) it displays a subgaussian concentration rate; (ii) it preserves the differentiability in the target distribution. Finally, we provide numerical simulations on both synthetic examples and contextual bandits, in comparison with off-policy evaluation and learning baselines.
null
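A sketch of off-policy evaluation with importance sampling plus a simple weight-shrinkage transformation, to illustrate the kind of IS correction the abstract above discusses. The harmonic-style correction below is one illustrative member of a power-mean family and is not claimed to be the paper's exact estimator; the densities and test functions in the usage example are assumptions.

```python
import numpy as np

def is_estimate(x, f, p_target, p_behav, lam=0.0):
    w = p_target(x) / p_behav(x)            # vanilla importance weights
    w_tilde = w / ((1 - lam) + lam * w)     # lam=0: vanilla IS; lam>0: shrinks large weights
    return np.mean(w_tilde * f(x))

# Usage: behaviour N(0,1), target N(0.5,1), estimate E_target[X].
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=10_000)
gauss_pdf = lambda m: (lambda z: np.exp(-(z - m) ** 2 / 2) / np.sqrt(2 * np.pi))
print(is_estimate(x, lambda z: z, gauss_pdf(0.5), gauss_pdf(0.0), lam=0.1))
```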
Rethinking gradient sparsification as total error minimization
https://papers.nips.cc/paper_files/paper/2021/hash/447b0408b80078338810051bb38b177f-Abstract.html
Atal Sahu, Aritra Dutta, Ahmed M. Abdelmoniem, Trambak Banerjee, Marco Canini, Panos Kalnis
https://papers.nips.cc/paper_files/paper/2021/hash/447b0408b80078338810051bb38b177f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12245-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/447b0408b80078338810051bb38b177f-Paper.pdf
https://openreview.net/forum?id=XL9DWRG7mJn
https://papers.nips.cc/paper_files/paper/2021/file/447b0408b80078338810051bb38b177f-Supplemental.pdf
Gradient compression is a widely-established remedy to tackle the communication bottleneck in distributed training of large deep neural networks (DNNs). Under the error-feedback framework, Top-$k$ sparsification, sometimes with $k$ as little as 0.1% of the gradient size, enables training to the same model quality as the uncompressed case for a similar iteration count. From the optimization perspective, we find that Top-$k$ is the communication-optimal sparsifier given a per-iteration $k$ element budget. We argue that to further the benefits of gradient sparsification, especially for DNNs, a different perspective is necessary — one that moves from per-iteration optimality to consider optimality for the entire training. We identify that the total error — the sum of the compression errors for all iterations — encapsulates sparsification throughout training. Then, we propose a communication complexity model that minimizes the total error under a communication budget for the entire training. We find that the hard-threshold sparsifier, a variant of the Top-$k$ sparsifier with $k$ determined by a constant hard-threshold, is the optimal sparsifier for this model. Motivated by this, we provide convex and non-convex convergence analyses for the hard-threshold sparsifier with error-feedback. We show that hard-threshold has the same asymptotic convergence and linear speedup property as SGD in both cases, and, unlike the Top-$k$ sparsifier, is unaffected by data heterogeneity. Our diverse experiments on various DNNs and a logistic regression model demonstrate that the hard-threshold sparsifier is more communication-efficient than Top-$k$.
null
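A small sketch contrasting the two sparsifiers discussed in the abstract above: Top-k keeps the k largest-magnitude entries per iteration, while hard-threshold keeps every entry whose magnitude exceeds a fixed threshold (so the number of transmitted entries varies across iterations), with error feedback accumulating whatever was dropped. The threshold/k values and step structure are illustrative.

```python
import numpy as np

def top_k(g, k):
    out = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]   # indices of the k largest magnitudes
    out[idx] = g[idx]
    return out

def hard_threshold(g, tau):
    return np.where(np.abs(g) >= tau, g, 0.0)   # variable number of kept entries

def sparsified_step(grad, error, sparsify):
    corrected = grad + error                    # error feedback: add the carried-over residual
    sent = sparsify(corrected)                  # what gets communicated this iteration
    return sent, corrected - sent               # new residual carried to the next iteration
```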
Approximate optimization of convex functions with outlier noise
https://papers.nips.cc/paper_files/paper/2021/hash/44b422a6d1df1d47db5d50a8d0aaca5d-Abstract.html
Anindya De, Sanjeev Khanna, Huan Li, MohammadHesam NikpeySalekde
https://papers.nips.cc/paper_files/paper/2021/hash/44b422a6d1df1d47db5d50a8d0aaca5d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12246-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/44b422a6d1df1d47db5d50a8d0aaca5d-Paper.pdf
https://openreview.net/forum?id=r6Khc1Lq9z1
https://papers.nips.cc/paper_files/paper/2021/file/44b422a6d1df1d47db5d50a8d0aaca5d-Supplemental.pdf
We study the problem of minimizing a convex function given by a zeroth order oracle that is possibly corrupted by {\em outlier noise}. Specifically, we assume the function values at some points of the domain are corrupted arbitrarily by an adversary, with the only restriction being that the total volume of corrupted points is bounded. The goal then is to find a point close to the function's minimizer using access to the corrupted oracle. We first prove a lower bound result showing that, somewhat surprisingly, one cannot hope to approximate the minimizer {\em nearly as well} as one might expect, even if one is allowed {\em an unbounded number} of queries to the oracle. Complementing this negative result, we then develop an efficient algorithm that outputs a point close to the minimizer of the convex function, where the specific distance matches {\em exactly}, up to constant factors, the distance bound shown in our lower bound result.
null
Fair Classification with Adversarial Perturbations
https://papers.nips.cc/paper_files/paper/2021/hash/44e207aecc63505eb828d442de03f2e9-Abstract.html
L. Elisa Celis, Anay Mehrotra, Nisheeth Vishnoi
https://papers.nips.cc/paper_files/paper/2021/hash/44e207aecc63505eb828d442de03f2e9-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12247-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/44e207aecc63505eb828d442de03f2e9-Paper.pdf
https://openreview.net/forum?id=LEqVjnffcWo
https://papers.nips.cc/paper_files/paper/2021/file/44e207aecc63505eb828d442de03f2e9-Supplemental.pdf
We study fair classification in the presence of an omniscient adversary that, given an $\eta$, is allowed to choose an arbitrary $\eta$-fraction of the training samples and arbitrarily perturb their protected attributes. The motivation comes from settings in which protected attributes can be incorrect due to strategic misreporting, malicious actors, or errors in imputation; and prior approaches that make stochastic or independence assumptions on errors may not satisfy their guarantees in this adversarial setting. Our main contribution is an optimization framework to learn fair classifiers in this adversarial setting that comes with provable guarantees on accuracy and fairness. Our framework works with multiple and non-binary protected attributes, is designed for the large class of linear-fractional fairness metrics, and can also handle perturbations besides protected attributes. We prove near-tightness of our framework's guarantees for natural hypothesis classes: no algorithm can have significantly better accuracy and any algorithm with better fairness must have lower accuracy. Empirically, we evaluate the classifiers produced by our framework for statistical rate on real-world and synthetic datasets for a family of adversaries.
null
Distributed Saddle-Point Problems Under Data Similarity
https://papers.nips.cc/paper_files/paper/2021/hash/44e65d3e9bc2f88b2b3d566de51a5381-Abstract.html
Aleksandr Beznosikov, Gesualdo Scutari, Alexander Rogozin, Alexander Gasnikov
https://papers.nips.cc/paper_files/paper/2021/hash/44e65d3e9bc2f88b2b3d566de51a5381-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12248-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/44e65d3e9bc2f88b2b3d566de51a5381-Paper.pdf
https://openreview.net/forum?id=dJcUhDVu1G
https://papers.nips.cc/paper_files/paper/2021/file/44e65d3e9bc2f88b2b3d566de51a5381-Supplemental.pdf
We study solution methods for (strongly-)convex-(strongly)-concave Saddle-Point Problems (SPPs) over two types of networks: master/workers (thus centralized) architectures and mesh (thus decentralized) networks. The local functions at each node are assumed to be \textit{similar}, due to statistical data similarity or otherwise. We establish lower complexity bounds for a fairly general class of algorithms solving the SPP. We show that a given suboptimality $\varepsilon>0$ is achieved over master/workers networks in $\Omega\big(\Delta\cdot \delta/\mu\cdot \log (1/\varepsilon)\big)$ rounds of communications, where $\delta>0$ measures the degree of similarity of the local functions, $\mu$ is their strong convexity constant, and $\Delta$ is the diameter of the network. The lower communication complexity bound over mesh networks reads $\Omega\big(1/{\sqrt{\rho}} \cdot {\delta}/{\mu}\cdot\log (1/\varepsilon)\big)$, where $\rho$ is the (normalized) eigengap of the gossip matrix used for the communication between neighbouring nodes. We then propose algorithms matching the lower bounds over either type of network (up to log-factors). We assess the effectiveness of the proposed algorithms on a robust regression problem.
null
Combining Latent Space and Structured Kernels for Bayesian Optimization over Combinatorial Spaces
https://papers.nips.cc/paper_files/paper/2021/hash/44e76e99b5e194377e955b13fb12f630-Abstract.html
Aryan Deshwal, Jana Doppa
https://papers.nips.cc/paper_files/paper/2021/hash/44e76e99b5e194377e955b13fb12f630-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12249-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/44e76e99b5e194377e955b13fb12f630-Paper.pdf
https://openreview.net/forum?id=fxHzZlo4dxe
https://papers.nips.cc/paper_files/paper/2021/file/44e76e99b5e194377e955b13fb12f630-Supplemental.zip
We consider the problem of optimizing combinatorial spaces (e.g., sequences, trees, and graphs) using expensive black-box function evaluations. For example, optimizing molecules for drug design using physical lab experiments. Bayesian optimization (BO) is an efficient framework for solving such problems by intelligently selecting the inputs with high utility guided by a learned surrogate model. A recent BO approach for combinatorial spaces is through a reduction to BO over continuous spaces by learning a latent representation of structures using deep generative models (DGMs). The selected input from the continuous space is decoded into a discrete structure for performing function evaluation. However, the surrogate model over the latent space only uses the information learned by the DGM, which may not have the desired inductive bias to approximate the target black-box function. To overcome this drawback, this paper proposes a principled approach referred to as LADDER. The key idea is to define a novel structure-coupled kernel that explicitly integrates the structural information from decoded structures with the learned latent space representation for better surrogate modeling. Our experiments on real-world benchmarks show that LADDER significantly improves over the latent-space BO method, and performs better than or similar to state-of-the-art methods.
null
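An illustrative structure-coupled kernel in the spirit of the abstract above: combine a kernel on the learned latent code with a kernel on the decoded discrete structure. The product combination and both component kernels (an RBF on latent codes and a Jaccard-style overlap on substructure features) are assumptions for illustration, not LADDER's exact construction.

```python
import numpy as np

def latent_kernel(z1, z2, lengthscale=1.0):
    # RBF kernel on the DGM latent codes (assumed choice).
    return np.exp(-np.sum((z1 - z2) ** 2) / (2 * lengthscale ** 2))

def structure_kernel(s1, s2):
    # Jaccard overlap on sets of substructure features from the decoded structure (assumed choice).
    a, b = set(s1), set(s2)
    return len(a & b) / max(len(a | b), 1)

def structure_coupled_kernel(z1, s1, z2, s2):
    return latent_kernel(z1, z2) * structure_kernel(s1, s2)
```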
Gradual Domain Adaptation without Indexed Intermediate Domains
https://papers.nips.cc/paper_files/paper/2021/hash/45017f6511f91be700fda3d118034994-Abstract.html
Hong-You Chen, Wei-Lun Chao
https://papers.nips.cc/paper_files/paper/2021/hash/45017f6511f91be700fda3d118034994-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12250-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/45017f6511f91be700fda3d118034994-Paper.pdf
https://openreview.net/forum?id=jZ6FlEB78CG
https://papers.nips.cc/paper_files/paper/2021/file/45017f6511f91be700fda3d118034994-Supplemental.pdf
The effectiveness of unsupervised domain adaptation degrades when there is a large discrepancy between the source and target domains. Gradual domain adaption (GDA) is one promising way to mitigate such an issue, by leveraging additional unlabeled data that gradually shift from the source to the target. Through sequentially adapting the model along the "indexed" intermediate domains, GDA substantially improves the overall adaptation performance. In practice, however, the extra unlabeled data may not be separated into intermediate domains and indexed properly, limiting the applicability of GDA. In this paper, we investigate how to discover the sequence of intermediate domains when it is not already available. Concretely, we propose a coarse-to-fine framework, which starts with a coarse domain discovery step via progressive domain discriminator training. This coarse domain sequence then undergoes a fine indexing step via a novel cycle-consistency loss, which encourages the next intermediate domain to preserve sufficient discriminative knowledge of the current intermediate domain. The resulting domain sequence can then be used by a GDA algorithm. On benchmark data sets of GDA, we show that our approach, which we name Intermediate DOmain Labeler (IDOL), can lead to comparable or even better adaptation performance compared to the pre-defined domain sequence, making GDA more applicable and robust to the quality of domain sequences. Codes are available at https://github.com/hongyouc/IDOL.
null
K-level Reasoning for Zero-Shot Coordination in Hanabi
https://papers.nips.cc/paper_files/paper/2021/hash/4547dff5fd7604f18c8ee32cf3da41d7-Abstract.html
Brandon Cui, Hengyuan Hu, Luis Pineda, Jakob Foerster
https://papers.nips.cc/paper_files/paper/2021/hash/4547dff5fd7604f18c8ee32cf3da41d7-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12251-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4547dff5fd7604f18c8ee32cf3da41d7-Paper.pdf
https://openreview.net/forum?id=Nb03vOtUfz
https://papers.nips.cc/paper_files/paper/2021/file/4547dff5fd7604f18c8ee32cf3da41d7-Supplemental.pdf
The standard problem setting in cooperative multi-agent settings is \emph{self-play} (SP), where the goal is to train a \emph{team} of agents that works well together. However, optimal SP policies commonly contain arbitrary conventions (``handshakes'') and are not compatible with other, independently trained agents or humans. This latter desideratum was recently formalized by \cite{Hu2020-OtherPlay} as the \emph{zero-shot coordination} (ZSC) setting and partially addressed with their \emph{Other-Play} (OP) algorithm, which showed improved ZSC and human-AI performance in the card game Hanabi. OP assumes access to the symmetries of the environment and prevents agents from breaking these in a mutually \emph{incompatible} way during training. However, as the authors point out, discovering symmetries for a given environment is a computationally hard problem. Instead, we show that through a simple adaptation of k-level reasoning (KLR) \cite{Costa-Gomes2006-K-level}, synchronously training all levels, we can obtain competitive ZSC and ad-hoc teamplay performance in Hanabi, including when paired with a human-like proxy bot. We also introduce a new method, synchronous-k-level reasoning with a best response (SyKLRBR), which further improves performance over our synchronous KLR by co-training a best response.
null
Learning Markov State Abstractions for Deep Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2021/hash/454cecc4829279e64d624cd8a8c9ddf1-Abstract.html
Cameron Allen, Neev Parikh, Omer Gottesman, George Konidaris
https://papers.nips.cc/paper_files/paper/2021/hash/454cecc4829279e64d624cd8a8c9ddf1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12252-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/454cecc4829279e64d624cd8a8c9ddf1-Paper.pdf
https://openreview.net/forum?id=jVzGglbNuW5
https://papers.nips.cc/paper_files/paper/2021/file/454cecc4829279e64d624cd8a8c9ddf1-Supplemental.pdf
A fundamental assumption of reinforcement learning in Markov decision processes (MDPs) is that the relevant decision process is, in fact, Markov. However, when MDPs have rich observations, agents typically learn by way of an abstract state representation, and such representations are not guaranteed to preserve the Markov property. We introduce a novel set of conditions and prove that they are sufficient for learning a Markov abstract state representation. We then describe a practical training procedure that combines inverse model estimation and temporal contrastive learning to learn an abstraction that approximately satisfies these conditions. Our novel training objective is compatible with both online and offline training: it does not require a reward signal, but agents can capitalize on reward information when available. We empirically evaluate our approach on a visual gridworld domain and a set of continuous control benchmarks. Our approach learns representations that capture the underlying structure of the domain and lead to improved sample efficiency over state-of-the-art deep reinforcement learning with visual features---often matching or exceeding the performance achieved with hand-designed compact state information.
null
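A hedged sketch of the two training signals named in the abstract above: an inverse model (predict the action from consecutive abstract states) plus a temporal contrastive term (distinguish real consecutive pairs from shuffled ones). The network interfaces, the discriminator, and the weighting `beta` are illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def markov_abstraction_loss(phi, inverse_model, discriminator, obs, next_obs, actions, beta=1.0):
    z, z_next = phi(obs), phi(next_obs)
    # Inverse-model term: infer which action connects z to z_next.
    inv_loss = F.cross_entropy(inverse_model(torch.cat([z, z_next], dim=-1)), actions)
    # Temporal contrastive term: real (z, z_next) pairs vs. pairs with shuffled z_next.
    z_fake = z_next[torch.randperm(z_next.shape[0], device=z_next.device)]
    logits_real = discriminator(torch.cat([z, z_next], dim=-1))
    logits_fake = discriminator(torch.cat([z, z_fake], dim=-1))
    ctr_loss = (F.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real))
                + F.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake)))
    return inv_loss + beta * ctr_loss
```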
Towards Deeper Deep Reinforcement Learning with Spectral Normalization
https://papers.nips.cc/paper_files/paper/2021/hash/4588e674d3f0faf985047d4c3f13ed0d-Abstract.html
Nils Bjorck, Carla P. Gomes, Kilian Q. Weinberger
https://papers.nips.cc/paper_files/paper/2021/hash/4588e674d3f0faf985047d4c3f13ed0d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12253-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4588e674d3f0faf985047d4c3f13ed0d-Paper.pdf
https://openreview.net/forum?id=PesaDDyvSk
https://papers.nips.cc/paper_files/paper/2021/file/4588e674d3f0faf985047d4c3f13ed0d-Supplemental.pdf
In computer vision and natural language processing, innovations in model architecture that increase model capacity have reliably translated into gains in performance. In stark contrast with this trend, state-of-the-art reinforcement learning (RL) algorithms often use small MLPs, and gains in performance typically originate from algorithmic innovations. It is natural to hypothesize that small datasets in RL necessitate simple models to avoid overfitting; however, this hypothesis is untested. In this paper we investigate how RL agents are affected by exchanging the small MLPs with larger modern networks with skip connections and normalization, focusing specifically on actor-critic algorithms. We empirically verify that naively adopting such architectures leads to instabilities and poor performance, likely contributing to the popularity of simple models in practice. However, we show that dataset size is not the limiting factor, and instead argue that instability from taking gradients through the critic is the culprit. We demonstrate that spectral normalization (SN) can mitigate this issue and enable stable training with large modern architectures. After smoothing with SN, larger models yield significant performance improvements --- suggesting that more ``easy'' gains may be had by focusing on model architectures in addition to algorithmic innovations.
null
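A minimal sketch of applying spectral normalization to a critic network, the stabilization technique the abstract above argues for when scaling up actor-critic architectures. Which layers to normalize and the network shape are illustrative choices, not the paper's exact configuration.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

def make_critic(obs_dim, act_dim, hidden=1024):
    return nn.Sequential(
        spectral_norm(nn.Linear(obs_dim + act_dim, hidden)),  # constrain the layer's spectral norm
        nn.ReLU(),
        spectral_norm(nn.Linear(hidden, hidden)),
        nn.ReLU(),
        nn.Linear(hidden, 1),                                  # output layer left unnormalized here
    )
```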
Functionally Regionalized Knowledge Transfer for Low-resource Drug Discovery
https://papers.nips.cc/paper_files/paper/2021/hash/459a4ddcb586f24efd9395aa7662bc7c-Abstract.html
Huaxiu Yao, Ying Wei, Long-Kai Huang, Ding Xue, Junzhou Huang, Zhenhui (Jessie) Li
https://papers.nips.cc/paper_files/paper/2021/hash/459a4ddcb586f24efd9395aa7662bc7c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12254-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/459a4ddcb586f24efd9395aa7662bc7c-Paper.pdf
https://openreview.net/forum?id=Dti5bw14YZF
https://papers.nips.cc/paper_files/paper/2021/file/459a4ddcb586f24efd9395aa7662bc7c-Supplemental.pdf
Recently, there has been a surge of interest in employing machine learning approaches to expedite the drug discovery process, where virtual screening for hit discovery and ADMET prediction for lead optimization play essential roles. One of the main obstacles to the wide success of machine learning approaches in these two tasks is that the number of compounds labeled with activities or ADMET properties is too small to build an effective predictive model. This paper seeks to remedy the problem by transferring knowledge from previous assays, namely in-vivo experiments conducted by different laboratories and against various target proteins. To accommodate these wildly different assays and capture the similarity between them, we propose a functionally regionalized meta-learning algorithm, FRML, for such knowledge transfer. FRML constructs the predictive model with layers of neural sub-networks, or so-called functional regions. Building on this, FRML shares an initialization for the weights of the predictive model across all assays, while customizing it to each assay with a region localization network that chooses the pertinent regions. The compositionality of the model improves generalization to varied and even out-of-distribution tasks. Empirical results on both virtual screening and ADMET prediction validate the superiority of FRML over state-of-the-art baselines, while also providing interpretability of assay relationships.
null
Memory-Efficient Approximation Algorithms for Max-k-Cut and Correlation Clustering
https://papers.nips.cc/paper_files/paper/2021/hash/45c166d697d65080d54501403b433256-Abstract.html
Nimita Shinde, Vishnu Narayanan, James Saunderson
https://papers.nips.cc/paper_files/paper/2021/hash/45c166d697d65080d54501403b433256-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12255-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/45c166d697d65080d54501403b433256-Paper.pdf
https://openreview.net/forum?id=90EVPQ7uCV
https://papers.nips.cc/paper_files/paper/2021/file/45c166d697d65080d54501403b433256-Supplemental.pdf
Max-k-Cut and correlation clustering are fundamental graph partitioning problems. For a graph $G=(V,E)$ with $n$ vertices, the methods with the best approximation guarantees for Max-k-Cut and the Max-Agree variant of correlation clustering involve solving SDPs with $\mathcal{O}(n^2)$ constraints and variables. Large-scale instances of SDPs, thus, present a memory bottleneck. In this paper, we develop simple polynomial-time Gaussian sampling-based algorithms for these two problems that use $\mathcal{O}(n+|E|)$ memory and nearly achieve the best existing approximation guarantees. For dense graphs arriving in a stream, we eliminate the dependence on $|E|$ in the storage complexity at the cost of a slightly worse approximation ratio by combining our approach with sparsification.
null
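As a hedged illustration of the Gaussian-sampling flavor of such rounding schemes (not the paper's memory-efficient algorithm), the sketch below applies classical Gaussian-projection rounding for Max-k-Cut to given vertex embeddings; the embeddings, toy graph, and function names are illustrative assumptions.

```python
# Hedged sketch: classical Gaussian-projection rounding for Max-k-Cut, given vertex embeddings.
# This is not the paper's O(n+|E|)-memory algorithm; it only illustrates the Gaussian sampling step
# that SDP-rounding approaches of this kind rely on. `embeddings` (one vector per vertex) is assumed given.
import numpy as np

def gaussian_round_max_k_cut(embeddings, k, rng=np.random.default_rng(0)):
    """Assign each vertex to one of k parts via argmax of its inner product with k Gaussian vectors."""
    d = embeddings.shape[1]
    g = rng.standard_normal((k, d))              # k independent Gaussian directions
    return np.argmax(embeddings @ g.T, axis=1)   # part label per vertex

def cut_value(edges, labels):
    """Number of edges whose endpoints fall in different parts (the Max-k-Cut objective)."""
    return sum(1 for u, v in edges if labels[u] != labels[v])

# Toy usage on a 4-cycle with random unit embeddings (illustrative only).
rng = np.random.default_rng(1)
emb = rng.standard_normal((4, 3))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
labels = gaussian_round_max_k_cut(emb, k=3)
print(cut_value([(0, 1), (1, 2), (2, 3), (3, 0)], labels))
```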
Panoptic 3D Scene Reconstruction From a Single RGB Image
https://papers.nips.cc/paper_files/paper/2021/hash/46031b3d04dc90994ca317a7c55c4289-Abstract.html
Manuel Dahnert, Ji Hou, Matthias Niessner, Angela Dai
https://papers.nips.cc/paper_files/paper/2021/hash/46031b3d04dc90994ca317a7c55c4289-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12256-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/46031b3d04dc90994ca317a7c55c4289-Paper.pdf
https://openreview.net/forum?id=BwzggTWi8bM
https://papers.nips.cc/paper_files/paper/2021/file/46031b3d04dc90994ca317a7c55c4289-Supplemental.zip
Richly segmented 3D scene reconstructions are an integral basis for many high-level scene understanding tasks, such as robotics, motion planning, or augmented reality. Existing works in 3D perception from a single RGB image tend to focus on geometric reconstruction only, or on geometric reconstruction with semantic segmentation or instance segmentation. Inspired by 2D panoptic segmentation, we propose to unify the tasks of geometric reconstruction, 3D semantic segmentation, and 3D instance segmentation into the task of panoptic 3D scene reconstruction -- from a single RGB image, predicting the complete geometric reconstruction of the scene in the camera frustum of the image, along with semantic and instance segmentations. We propose a new approach for holistic 3D scene understanding from a single RGB image which learns to lift and propagate 2D features from an input image to a 3D volumetric scene representation. Our panoptic 3D reconstruction metric evaluates both geometric reconstruction quality and panoptic segmentation. Our experiments demonstrate that our approach for panoptic 3D scene reconstruction outperforms alternative approaches for this task.
null
Measuring Generalization with Optimal Transport
https://papers.nips.cc/paper_files/paper/2021/hash/4607f7fff0dce694258e1c637512aa9d-Abstract.html
Ching-Yao Chuang, Youssef Mroueh, Kristjan Greenewald, Antonio Torralba, Stefanie Jegelka
https://papers.nips.cc/paper_files/paper/2021/hash/4607f7fff0dce694258e1c637512aa9d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12257-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4607f7fff0dce694258e1c637512aa9d-Paper.pdf
https://openreview.net/forum?id=Yt89iqqswiM
https://papers.nips.cc/paper_files/paper/2021/file/4607f7fff0dce694258e1c637512aa9d-Supplemental.pdf
Understanding the generalization of deep neural networks is one of the most important tasks in deep learning. Although much progress has been made, theoretical error bounds still often behave disparately from empirical observations. In this work, we develop margin-based generalization bounds, where the margins are normalized with optimal transport costs between independent random subsets sampled from the training distribution. In particular, the optimal transport cost can be interpreted as a generalization of variance which captures the structural properties of the learned feature space. Our bounds robustly predict the generalization error, given training data and network parameters, on large scale datasets. Theoretically, we demonstrate that the concentration and separation of features play crucial roles in generalization, supporting empirical results in the literature.
null
Uniform Concentration Bounds toward a Unified Framework for Robust Clustering
https://papers.nips.cc/paper_files/paper/2021/hash/460b491b917d4185ed1f5be97229721a-Abstract.html
Debolina Paul, Saptarshi Chakraborty, Swagatam Das, Jason Xu
https://papers.nips.cc/paper_files/paper/2021/hash/460b491b917d4185ed1f5be97229721a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12258-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/460b491b917d4185ed1f5be97229721a-Paper.pdf
https://openreview.net/forum?id=cSVl6MtPIEX
https://papers.nips.cc/paper_files/paper/2021/file/460b491b917d4185ed1f5be97229721a-Supplemental.pdf
Recent advances in center-based clustering continue to improve upon the drawbacks of Lloyd's celebrated $k$-means algorithm over $60$ years after its introduction. Various methods seek to address poor local minima, sensitivity to outliers, and data that are not well-suited to Euclidean measures of fit, but many are supported largely empirically. Moreover, combining such approaches in a piecemeal manner can result in ad hoc methods, and the limited theoretical results supporting each individual contribution may no longer hold. Toward addressing these issues in a principled way, this paper proposes a cohesive robust framework for center-based clustering under a general class of dissimilarity measures. In particular, we present a rigorous theoretical treatment within a Median-of-Means (MoM) estimation framework, showing that it subsumes several popular $k$-means variants. In addition to unifying existing methods, we derive uniform concentration bounds that complete their analyses, and bridge these results to the MoM framework via Dudley's chaining arguments. Importantly, we require no assumptions on the distribution of the outlying observations, nor on the number of observations $n$ relative to the number of features $p$. We establish strong consistency and an error rate of $O(n^{-1/2})$ under mild conditions, surpassing the best-known results in the literature. The methods are thoroughly validated empirically on real and synthetic datasets.
null
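A minimal sketch, assuming squared Euclidean dissimilarity, of how a Median-of-Means objective can replace the empirical mean of per-point losses in center-based clustering; the block count, the Lloyd-style update on the median block, and all names are illustrative and not the paper's estimator.

```python
# Hedged sketch of the Median-of-Means (MoM) idea behind robust center-based clustering.
# Not the paper's method: it only shows how a MoM objective blunts the influence of outliers
# relative to a plain empirical mean of per-point losses.
import numpy as np

def mom_kmeans_objective(X, centers, n_blocks=25, rng=np.random.default_rng(0)):
    """Median over blocks of the block-averaged min squared distance to the nearest center."""
    losses = np.min(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    idx = rng.permutation(len(X))
    blocks = np.array_split(idx, n_blocks)
    block_means = np.array([losses[b].mean() for b in blocks])
    median_block = blocks[np.argsort(block_means)[len(block_means) // 2]]
    return np.median(block_means), median_block

def mom_lloyd_step(X, centers, n_blocks=25):
    """One Lloyd-style update computed on the median block only (a simple robust surrogate)."""
    _, median_block = mom_kmeans_objective(X, centers, n_blocks)
    Xb = X[median_block]
    assign = np.argmin(((Xb[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    return np.array([Xb[assign == j].mean(axis=0) if np.any(assign == j) else centers[j]
                     for j in range(len(centers))])

# Toy usage: two clean clusters plus 10 gross outliers; with 25 blocks a majority of blocks are clean,
# so the MoM objective stays near the clean within-cluster loss despite the outliers.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (95, 2)), rng.normal(8, 1, (95, 2)), rng.normal(60, 1, (10, 2))])
centers = np.array([[0.0, 0.0], [8.0, 8.0]])
obj, _ = mom_kmeans_objective(X, centers)
print(round(obj, 2))
```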
Learning Signal-Agnostic Manifolds of Neural Fields
https://papers.nips.cc/paper_files/paper/2021/hash/4639475d6782a08c1e964f9a4329a254-Abstract.html
Yilun Du, Katie Collins, Josh Tenenbaum, Vincent Sitzmann
https://papers.nips.cc/paper_files/paper/2021/hash/4639475d6782a08c1e964f9a4329a254-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12259-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4639475d6782a08c1e964f9a4329a254-Paper.pdf
https://openreview.net/forum?id=3rjYr0K-OGC
https://papers.nips.cc/paper_files/paper/2021/file/4639475d6782a08c1e964f9a4329a254-Supplemental.pdf
Deep neural networks have been used widely to learn the latent structure of datasets, across modalities such as images, shapes, and audio signals. However, existing models are generally modality-dependent, requiring custom architectures and objectives to process different classes of signals. We leverage neural fields to capture the underlying structure in image, shape, audio and cross-modal audiovisual domains in a modality-independent manner. We cast our task as one of learning a manifold, where we aim to infer a low-dimensional, locally linear subspace in which our data resides. By enforcing coverage of the manifold, local linearity, and local isometry, our model -- dubbed GEM -- learns to capture the underlying structure of datasets across modalities. We can then travel along linear regions of our manifold to obtain perceptually consistent interpolations between samples, and can further use GEM to recover points on our manifold and glean not only diverse completions of input images, but cross-modal hallucinations of audio or image signals. Finally, we show that by walking across the underlying manifold of GEM, we may generate new samples in our signal domains.
null
Low-dimensional Structure in the Space of Language Representations is Reflected in Brain Responses
https://papers.nips.cc/paper_files/paper/2021/hash/464074179972cbbd75a39abc6954cd12-Abstract.html
Richard Antonello, Javier S. Turek, Vy Vo, Alexander Huth
https://papers.nips.cc/paper_files/paper/2021/hash/464074179972cbbd75a39abc6954cd12-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12260-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/464074179972cbbd75a39abc6954cd12-Paper.pdf
https://openreview.net/forum?id=UYI6Sk_3Nox
https://papers.nips.cc/paper_files/paper/2021/file/464074179972cbbd75a39abc6954cd12-Supplemental.pdf
How related are the representations learned by neural language models, translation models, and language tagging tasks? We answer this question by adapting an encoder-decoder transfer learning method from computer vision to investigate the structure among 100 different feature spaces extracted from hidden representations of various networks trained on language tasks. This method reveals a low-dimensional structure where language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings. We call this low-dimensional structure a language representation embedding because it encodes the relationships between representations needed to process language for a variety of NLP tasks. We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI. Additionally, we find that the principal dimension of this structure can be used to create a metric which highlights the brain's natural language processing hierarchy. This suggests that the embedding captures some part of the brain's natural language representation structure.
null
On the Suboptimality of Thompson Sampling in High Dimensions
https://papers.nips.cc/paper_files/paper/2021/hash/46489c17893dfdcf028883202cefd6d1-Abstract.html
Raymond Zhang, Richard Combes
https://papers.nips.cc/paper_files/paper/2021/hash/46489c17893dfdcf028883202cefd6d1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12261-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/46489c17893dfdcf028883202cefd6d1-Paper.pdf
https://openreview.net/forum?id=sjLs5OXcL7j
https://papers.nips.cc/paper_files/paper/2021/file/46489c17893dfdcf028883202cefd6d1-Supplemental.pdf
In this paper we consider Thompson Sampling for combinatorial semi-bandits. We demonstrate that, perhaps surprisingly, Thompson Sampling is sub-optimal for this problem in the sense that its regret scales exponentially in the ambient dimension, and its minimax regret scales almost linearly. This phenomenon occurs under a wide variety of assumptions, including both non-linear and linear reward functions in the Bernoulli distribution setting. We also show that adding a fixed amount of forced exploration to Thompson Sampling does not alleviate the problem. We complement our theoretical results with numerical results and show that in practice Thompson Sampling can indeed perform very poorly in some high-dimensional situations.
null
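For concreteness, here is a hedged sketch of the baseline algorithm the abstract analyzes: Thompson Sampling with Beta-Bernoulli posteriors for an m-subset semi-bandit with a linear reward. The arm means, subset size, and horizon are illustrative assumptions, not the paper's lower-bound construction.

```python
# Hedged sketch: Thompson Sampling for a Bernoulli semi-bandit where the action is any m-subset of
# n arms and each chosen arm's reward is observed (semi-bandit feedback).
import numpy as np

def thompson_semi_bandit(mu, m, horizon, rng=np.random.default_rng(0)):
    n = len(mu)
    a, b = np.ones(n), np.ones(n)          # Beta(1,1) priors per arm
    regret = 0.0
    best = np.sort(mu)[-m:].sum()          # value of the best m-subset
    for _ in range(horizon):
        theta = rng.beta(a, b)             # one posterior sample per arm
        action = np.argsort(theta)[-m:]    # play the m arms with the largest samples
        rewards = rng.random(m) < mu[action]
        a[action] += rewards               # conjugate Beta-Bernoulli updates
        b[action] += 1 - rewards
        regret += best - mu[action].sum()
    return regret

# Illustrative run on 20 arms with linearly spaced means.
print(thompson_semi_bandit(mu=np.linspace(0.1, 0.9, 20), m=5, horizon=2000))
```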
Learning Debiased and Disentangled Representations for Semantic Segmentation
https://papers.nips.cc/paper_files/paper/2021/hash/465636eb4a7ff4b267f3b765d07a02da-Abstract.html
Sanghyeok Chu, Dongwan Kim, Bohyung Han
https://papers.nips.cc/paper_files/paper/2021/hash/465636eb4a7ff4b267f3b765d07a02da-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12262-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/465636eb4a7ff4b267f3b765d07a02da-Paper.pdf
https://openreview.net/forum?id=sUFdZqWeMM
https://papers.nips.cc/paper_files/paper/2021/file/465636eb4a7ff4b267f3b765d07a02da-Supplemental.pdf
Deep neural networks are susceptible to learning biased models with entangled feature representations, which may lead to subpar performance on various downstream tasks. This is particularly true for under-represented classes, where a lack of diversity in the data exacerbates the tendency. This limitation has been addressed mostly in classification tasks, but there has been little study of the additional challenges that arise in more complex dense prediction problems, including semantic segmentation. To this end, we propose a model-agnostic and stochastic training scheme for semantic segmentation, which facilitates the learning of debiased and disentangled representations. For each class, we first extract class-specific information from the highly entangled feature map. Then, information related to a randomly sampled class is suppressed by a feature selection process in the feature space. By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes, and the model is able to learn more debiased and disentangled feature representations. Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks, with especially notable performance gains on under-represented classes.
null
Diversity Matters When Learning From Ensembles
https://papers.nips.cc/paper_files/paper/2021/hash/466473650870501e3600d9a1b4ee5d44-Abstract.html
Giung Nam, Jongmin Yoon, Yoonho Lee, Juho Lee
https://papers.nips.cc/paper_files/paper/2021/hash/466473650870501e3600d9a1b4ee5d44-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12263-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/466473650870501e3600d9a1b4ee5d44-Paper.pdf
https://openreview.net/forum?id=f_eOQN87eXc
https://papers.nips.cc/paper_files/paper/2021/file/466473650870501e3600d9a1b4ee5d44-Supplemental.pdf
Deep ensembles excel in large-scale image classification tasks both in terms of prediction accuracy and calibration. Despite being simple to train, the computation and memory cost of deep ensembles limits their practicability. While some recent works propose to distill an ensemble model into a single model to reduce such costs, there is still a performance gap between the ensemble and distilled models. We propose a simple approach for reducing this gap, i.e., making the distilled performance close to the full ensemble. Our key assumption is that a distilled model should absorb as much function diversity inside the ensemble as possible. We first empirically show that the typical distillation procedure does not effectively transfer such diversity, especially for complex models that achieve near-zero training error. To fix this, we propose a perturbation strategy for distillation that reveals diversity by seeking inputs for which ensemble member outputs disagree. We empirically show that a model distilled with such perturbed samples indeed exhibits enhanced diversity, leading to improved performance.
null
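Below is a hypothetical sketch of the disagreement-seeking perturbation idea described above, using the variance of ensemble member logits as the disagreement measure; the measure, step sizes, and iteration count are assumptions rather than the authors' exact procedure.

```python
# Hedged sketch: nudge inputs toward regions where ensemble members disagree, then use those inputs
# for distillation. Not the paper's exact method; the disagreement measure and constants are illustrative.
import torch
import torch.nn as nn

def disagreement_perturb(x, ensemble, steps=5, step_size=0.01):
    """Gradient ascent on the variance of ensemble member outputs with respect to the input."""
    x = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        logits = torch.stack([m(x) for m in ensemble])   # (members, batch, classes)
        disagreement = logits.var(dim=0).mean()          # how strongly the members disagree on x
        grad, = torch.autograd.grad(disagreement, x)
        x = (x + step_size * grad.sign()).detach().requires_grad_(True)
    return x.detach()

# Toy ensemble of three linear "teachers"; distillation would then match the student to their averaged
# predictions on these perturbed inputs in addition to the original training samples.
teachers = [nn.Linear(8, 4) for _ in range(3)]
x_perturbed = disagreement_perturb(torch.randn(16, 8), teachers)
print(x_perturbed.shape)   # torch.Size([16, 8])
```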
Locally Valid and Discriminative Prediction Intervals for Deep Learning Models
https://papers.nips.cc/paper_files/paper/2021/hash/46c7cb50b373877fb2f8d5c4517bb969-Abstract.html
Zhen Lin, Shubhendu Trivedi, Jimeng Sun
https://papers.nips.cc/paper_files/paper/2021/hash/46c7cb50b373877fb2f8d5c4517bb969-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12264-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/46c7cb50b373877fb2f8d5c4517bb969-Paper.pdf
https://openreview.net/forum?id=xfDXF0I_bt
https://papers.nips.cc/paper_files/paper/2021/file/46c7cb50b373877fb2f8d5c4517bb969-Supplemental.zip
Crucial for building trust in deep learning models for critical real-world applications is efficient and theoretically sound uncertainty quantification, a task that continues to be challenging. Useful uncertainty information is expected to have two key properties: It should be valid (guaranteeing coverage) and discriminative (more uncertain when the expected risk is high). Moreover, when combined with deep learning (DL) methods, it should be scalable and affect the DL model performance minimally. Most existing Bayesian methods lack frequentist coverage guarantees and usually affect model performance. The few available frequentist methods are rarely discriminative and/or violate coverage guarantees due to unrealistic assumptions. Moreover, many methods are expensive or require substantial modifications to the base neural network. Building upon recent advances in conformal prediction [13, 33] and leveraging the classical idea of kernel regression, we propose Locally Valid and Discriminative prediction intervals (LVD), a simple, efficient, and lightweight method to construct discriminative prediction intervals (PIs) for almost any DL model. With no assumptions on the data distribution, such PIs also offer finite-sample local coverage guarantees (contrasted to the simpler marginal coverage). We empirically verify, using diverse datasets, that besides being the only locally valid method for DL, LVD also exceeds or matches the performance (including coverage rate and prediction accuracy) of existing uncertainty quantification methods, while offering additional benefits in scalability and flexibility.
null
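The following is a loose sketch, under stated assumptions, of the two ingredients the abstract names: split-conformal residuals combined with kernel weighting in a feature space. It is not LVD's construction and carries none of its guarantees; the feature map, bandwidth, and quantile handling are illustrative.

```python
# Loose sketch of locally weighted conformal-style prediction intervals. Not LVD's exact construction
# or its finite-sample coverage guarantee; all names and constants are illustrative assumptions.
import numpy as np

def kernel_weighted_interval(phi_test, y_hat_test, phi_cal, resid_cal, alpha=0.1, bandwidth=1.0):
    """Prediction interval from a kernel-weighted quantile of calibration absolute residuals.

    phi_cal: calibration features; resid_cal: |y - y_hat| on the calibration set.
    """
    d2 = ((phi_cal - phi_test) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))          # kernel weights: nearby calibration points count more
    w = w / w.sum()
    order = np.argsort(resid_cal)                      # sort residuals, accumulate their weights
    cum = np.cumsum(w[order])
    q = resid_cal[order][np.searchsorted(cum, 1 - alpha)]  # weighted (1 - alpha) quantile
    return y_hat_test - q, y_hat_test + q

# Illustrative usage on synthetic calibration data.
rng = np.random.default_rng(0)
phi_cal, resid_cal = rng.standard_normal((500, 4)), np.abs(rng.standard_normal(500))
print(kernel_weighted_interval(phi_cal[0], 2.0, phi_cal, resid_cal))
```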
Personalized Federated Learning With Gaussian Processes
https://papers.nips.cc/paper_files/paper/2021/hash/46d0671dd4117ea366031f87f3aa0093-Abstract.html
Idan Achituve, Aviv Shamsian, Aviv Navon, Gal Chechik, Ethan Fetaya
https://papers.nips.cc/paper_files/paper/2021/hash/46d0671dd4117ea366031f87f3aa0093-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12265-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/46d0671dd4117ea366031f87f3aa0093-Paper.pdf
https://openreview.net/forum?id=byCQ9Uu4PD
https://papers.nips.cc/paper_files/paper/2021/file/46d0671dd4117ea366031f87f3aa0093-Supplemental.pdf
Federated learning aims to learn a global model that performs well on client devices with limited cross-client communication. Personalized federated learning (PFL) further extends this setup to handle data heterogeneity between clients by learning personalized models. A key challenge in this setting is to learn effectively across clients even though each client has unique data that is often limited in size. Here we present pFedGP, a solution to PFL that is based on Gaussian processes (GPs) with deep kernel learning. GPs are highly expressive models that work well in the low data regime due to their Bayesian nature. However, applying GPs to PFL raises multiple challenges. Mainly, GPs' performance depends heavily on access to a good kernel function, and learning a kernel requires a large training set. Therefore, we propose learning a shared kernel function across all clients, parameterized by a neural network, with a personal GP classifier for each client. We further extend pFedGP to include inducing points using two novel methods; the first helps to improve generalization in the low data regime and the second reduces the computational cost. We derive a PAC-Bayes generalization bound on novel clients and empirically show that it gives non-vacuous guarantees. Extensive experiments on standard PFL benchmarks with CIFAR-10, CIFAR-100, and CINIC-10, and on a new setup of learning under input noise, show that pFedGP achieves well-calibrated predictions while significantly outperforming baseline methods, reaching up to 21% in accuracy gain.
null
Risk Bounds for Over-parameterized Maximum Margin Classification on Sub-Gaussian Mixtures
https://papers.nips.cc/paper_files/paper/2021/hash/46e0eae7d5217c79c3ef6b4c212b8c6f-Abstract.html
Yuan Cao, Quanquan Gu, Mikhail Belkin
https://papers.nips.cc/paper_files/paper/2021/hash/46e0eae7d5217c79c3ef6b4c212b8c6f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12266-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/46e0eae7d5217c79c3ef6b4c212b8c6f-Paper.pdf
https://openreview.net/forum?id=ChWy1anEuow
https://papers.nips.cc/paper_files/paper/2021/file/46e0eae7d5217c79c3ef6b4c212b8c6f-Supplemental.pdf
Modern machine learning systems such as deep neural networks are often highly over-parameterized so that they can fit the noisy training data exactly, yet they can still achieve small test errors in practice. In this paper, we study this "benign overfitting" phenomenon of the maximum margin classifier for linear classification problems. Specifically, we consider data generated from sub-Gaussian mixtures, and provide a tight risk bound for the maximum margin linear classifier in the over-parameterized setting. Our results precisely characterize the condition under which benign overfitting can occur in linear classification problems, and improve on previous work. They also have direct implications for over-parameterized logistic regression.
null
Implicit SVD for Graph Representation Learning
https://papers.nips.cc/paper_files/paper/2021/hash/46fc943ecd56441056a560ba37d0b9e8-Abstract.html
Sami Abu-El-Haija, Hesham Mostafa, Marcel Nassar, Valentino Crespi, Greg Ver Steeg, Aram Galstyan
https://papers.nips.cc/paper_files/paper/2021/hash/46fc943ecd56441056a560ba37d0b9e8-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12267-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/46fc943ecd56441056a560ba37d0b9e8-Paper.pdf
https://openreview.net/forum?id=9Jsop0faZtU
https://papers.nips.cc/paper_files/paper/2021/file/46fc943ecd56441056a560ba37d0b9e8-Supplemental.pdf
Recent improvements in the performance of state-of-the-art (SOTA) methods for Graph Representational Learning (GRL) have come at the cost of significant computational resource requirements for training, e.g., for calculating gradients via backprop over many data epochs. Meanwhile, Singular Value Decomposition (SVD) can find closed-form solutions to convex problems, using merely a handful of epochs. In this paper, we make GRL more computationally tractable for those with modest hardware. We design a framework that computes SVD of *implicitly* defined matrices, and apply this framework to several GRL tasks. For each task, we derive a first-order approximation of a SOTA model, where we design an (expensive-to-store) matrix $\mathbf{M}$ and train the model, in closed form, via SVD of $\mathbf{M}$, without calculating entries of $\mathbf{M}$. By converging to a unique point in one step, and without calculating gradients, our models show competitive empirical test performance over various graphs such as article citation and biological interaction networks. More importantly, SVD can initialize a deeper model that is architected to be non-linear almost everywhere, yet behaves linearly when its parameters reside on the hyperplane onto which SVD initializes them. The deeper model can then be fine-tuned within only a few epochs. Overall, our algorithm trains hundreds of times faster than state-of-the-art methods, while remaining competitive in empirical test performance. We open-source our implementation at: https://github.com/samihaija/isvd
null
Offline Model-based Adaptable Policy Learning
https://papers.nips.cc/paper_files/paper/2021/hash/470e7a4f017a5476afb7eeb3f8b96f9b-Abstract.html
Xiong-Hui Chen, Yang Yu, Qingyang Li, Fan-Ming Luo, Zhiwei Qin, Wenjie Shang, Jieping Ye
https://papers.nips.cc/paper_files/paper/2021/hash/470e7a4f017a5476afb7eeb3f8b96f9b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12268-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/470e7a4f017a5476afb7eeb3f8b96f9b-Paper.pdf
https://openreview.net/forum?id=lrdXc17jm6
https://papers.nips.cc/paper_files/paper/2021/file/470e7a4f017a5476afb7eeb3f8b96f9b-Supplemental.pdf
In reinforcement learning, a promising direction for avoiding the costs of online trial and error is learning from an offline dataset. Current offline reinforcement learning methods commonly learn in a policy space constrained to in-support regions of the offline dataset, in order to ensure the robustness of the resulting policies. Such constraints, however, also limit the potential of the resulting policies. In this paper, to release the potential of offline policy learning, we investigate decision-making in out-of-support regions directly and propose offline Model-based Adaptable Policy LEarning (MAPLE). With this approach, instead of learning only in in-support regions, we learn an adaptable policy that can adapt its behavior in out-of-support regions when deployed. We conduct experiments on MuJoCo control tasks with offline datasets. The results show that the proposed method can make robust decisions in out-of-support regions and achieve better performance than SOTA algorithms.
null
Multilingual Pre-training with Universal Dependency Learning
https://papers.nips.cc/paper_files/paper/2021/hash/473803f0f2ebd77d83ee60daaa61f381-Abstract.html
Kailai Sun, Zuchao Li, Hai Zhao
https://papers.nips.cc/paper_files/paper/2021/hash/473803f0f2ebd77d83ee60daaa61f381-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12269-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/473803f0f2ebd77d83ee60daaa61f381-Paper.pdf
https://openreview.net/forum?id=34kP--v0qVT
https://papers.nips.cc/paper_files/paper/2021/file/473803f0f2ebd77d83ee60daaa61f381-Supplemental.pdf
Pre-trained language models (PrLMs) dominate downstream natural language processing tasks, and multilingual PrLMs take advantage of language universality to alleviate the issue of limited resources for low-resource languages. Despite these successes, the performance of multilingual PrLMs is still unsatisfactory when they focus only on plain text and ignore obvious universal linguistic structure cues. Existing PrLMs have shown that monolingual linguistic structure knowledge may bring about better performance. Thus we propose a novel multilingual PrLM that supports both explicit universal dependency parsing and implicit language modeling. Syntax, in the form of universal dependency parses, serves not only as a pre-training objective but also as a learned representation in our model, which brings unprecedented PrLM interpretability and convenience in downstream task use. Our model outperforms two popular multilingual PrLMs, multilingual-BERT and XLM-R, on cross-lingual natural language understanding (NLU) benchmarks and linguistic structure parsing datasets, demonstrating the effectiveness and stronger cross-lingual modeling capabilities of our approach.
null
Parameter-free HE-friendly Logistic Regression
https://papers.nips.cc/paper_files/paper/2021/hash/477bdb55b231264bb53a7942fd84254d-Abstract.html
Junyoung Byun, Woojin Lee, Jaewook Lee
https://papers.nips.cc/paper_files/paper/2021/hash/477bdb55b231264bb53a7942fd84254d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12270-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/477bdb55b231264bb53a7942fd84254d-Paper.pdf
https://openreview.net/forum?id=0DBYkHfkZlk
https://papers.nips.cc/paper_files/paper/2021/file/477bdb55b231264bb53a7942fd84254d-Supplemental.pdf
Privacy in machine learning has been widely recognized as an essential ethical and legal issue, because the data used for machine learning may contain sensitive information. Homomorphic encryption has recently attracted attention as a key solution for preserving privacy in machine learning applications. However, current approaches to training machine learning models on encrypted data have relied heavily on hyperparameter selection, which should be avoided owing to the extreme difficulty of conducting validation on encrypted data. In this study, we propose an effective privacy-preserving logistic regression method that is free from approximation of the sigmoid function and from hyperparameter selection. In our framework, a logistic regression model can be transformed into the corresponding ridge regression for the logit function. We provide a theoretical background for our framework by suggesting a new generalization error bound on the encrypted data. Experiments on various real-world data show that our framework achieves better classification results while reducing latency by $\sim68\%$, compared to the previous models.
null
Active clustering for labeling training data
https://papers.nips.cc/paper_files/paper/2021/hash/47841cc9e552bd5c40164db7073b817b-Abstract.html
Quentin Lutz, Elie de Panafieu, Maya Stein, Alex Scott
https://papers.nips.cc/paper_files/paper/2021/hash/47841cc9e552bd5c40164db7073b817b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12271-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/47841cc9e552bd5c40164db7073b817b-Paper.pdf
https://openreview.net/forum?id=EocGDCLaw-d
https://papers.nips.cc/paper_files/paper/2021/file/47841cc9e552bd5c40164db7073b817b-Supplemental.pdf
Gathering training data is a key step of any supervised learning task, and it is both critical and expensive. Critical, because the quantity and quality of the training data have a high impact on the performance of the learned function. Expensive, because most practical cases rely on humans-in-the-loop to label the data. The process of determining the correct labels is much more expensive than comparing two items to see whether they belong to the same class. Thus motivated, we propose a setting for training data gathering where the human experts perform the comparatively cheap task of answering pairwise queries, and the computer groups the items into classes (which can be labeled cheaply at the very end of the process). Given the items, we consider two random models for the classes: one where the set partition they form is drawn uniformly, and one where each item chooses its class independently following a fixed distribution. In the first model, we characterize the algorithms that minimize the average number of queries required to cluster the items and analyze their complexity. In the second model, we analyze a specific algorithm family, conjecture that it reaches the minimum average number of queries, and compare its performance to a random approach. We also propose solutions to handle errors or inconsistencies in the experts' answers.
null
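A minimal sketch of the query model described above: a noiseless oracle answers cheap "same class?" questions, and union-find groups the items so that each discovered class needs only one label at the end. The naive query order and function names are illustrative, not the paper's query-optimal strategies.

```python
# Hedged sketch: clustering items with pairwise "same class?" queries and union-find.
# The oracle is assumed noiseless and the query order is naive (illustrative only).
def cluster_with_pairwise_queries(items, same_class):
    parent = list(range(len(items)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    clusters = []                            # one representative index per discovered class
    queries = 0
    for i in range(len(items)):
        for rep in clusters:
            queries += 1
            if same_class(items[i], items[rep]):   # cheap pairwise comparison by the expert
                parent[find(i)] = find(rep)
                break
        else:
            clusters.append(i)               # new class discovered; label it once at the end
    return [find(i) for i in range(len(items))], queries

# Toy oracle: items are in the same class iff they have the same parity.
labels, n_queries = cluster_with_pairwise_queries([1, 5, 2, 6, 1], lambda a, b: (a % 2) == (b % 2))
print(labels, n_queries)
```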
Exploring Social Posterior Collapse in Variational Autoencoder for Interaction Modeling
https://papers.nips.cc/paper_files/paper/2021/hash/47951a40efc0d2f7da8ff1ecbfde80f4-Abstract.html
Chen Tang, Wei Zhan, Masayoshi Tomizuka
https://papers.nips.cc/paper_files/paper/2021/hash/47951a40efc0d2f7da8ff1ecbfde80f4-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12272-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/47951a40efc0d2f7da8ff1ecbfde80f4-Paper.pdf
https://openreview.net/forum?id=iNUKmzaL-M5
https://papers.nips.cc/paper_files/paper/2021/file/47951a40efc0d2f7da8ff1ecbfde80f4-Supplemental.pdf
Multi-agent behavior modeling and trajectory forecasting are crucial for the safe navigation of autonomous agents in interactive scenarios. Variational Autoencoders (VAEs) have been widely applied in multi-agent interaction modeling to generate diverse behavior and learn a low-dimensional representation for interacting systems. However, the existing literature has not formally discussed whether a VAE-based model can properly encode interaction into its latent space. In this work, we argue that one of the typical formulations of VAEs in multi-agent modeling suffers from an issue we refer to as social posterior collapse, i.e., the model is prone to ignoring historical social context when predicting the future trajectory of an agent. This can cause significant prediction errors and poor generalization performance. We analyze the reason behind this under-explored phenomenon and propose several measures to tackle it. Afterward, we implement the proposed framework and experiment on real-world datasets for multi-agent trajectory prediction. In particular, we propose a novel sparse graph attention message-passing (sparse-GAMP) layer, which helps us detect social posterior collapse in our experiments. In the experiments, we verify that social posterior collapse indeed occurs. Also, the proposed measures are effective in alleviating the issue. As a result, the model attains better generalization performance when historical social context is informative for prediction.
null
Ensembling Graph Predictions for AMR Parsing
https://papers.nips.cc/paper_files/paper/2021/hash/479b4864e55e12e0fb411eadb115c095-Abstract.html
Thanh Lam Hoang, Gabriele Picco, Yufang Hou, Young-Suk Lee, Lam Nguyen, Dzung Phan, Vanessa Lopez, Ramon Fernandez Astudillo
https://papers.nips.cc/paper_files/paper/2021/hash/479b4864e55e12e0fb411eadb115c095-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12273-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/479b4864e55e12e0fb411eadb115c095-Paper.pdf
https://openreview.net/forum?id=lmm2W2ICtjk
https://papers.nips.cc/paper_files/paper/2021/file/479b4864e55e12e0fb411eadb115c095-Supplemental.pdf
In many machine learning tasks, models are trained to predict structured data such as graphs. For example, in natural language processing, it is very common to parse texts into dependency trees or abstract meaning representation (AMR) graphs. On the other hand, ensemble methods combine predictions from multiple models to create a new one that is more robust and accurate than individual predictions. In the literature, many ensembling techniques have been proposed for classification or regression problems; however, ensemble graph prediction has not been studied thoroughly. In this work, we formalize this problem as mining the largest graph that is the most supported by a collection of graph predictions. As the problem is NP-Hard, we propose an efficient heuristic algorithm to approximate the optimal solution. To validate our approach, we carried out experiments on AMR parsing. The experimental results demonstrate that the proposed approach can combine the strength of state-of-the-art AMR parsers to create new predictions that are more accurate than any individual model on five standard benchmark datasets.
null
On the interplay between data structure and loss function in classification problems
https://papers.nips.cc/paper_files/paper/2021/hash/47a5feca4ce02883a5643e295c7ce6cd-Abstract.html
Stéphane d'Ascoli, Marylou Gabrié, Levent Sagun, Giulio Biroli
https://papers.nips.cc/paper_files/paper/2021/hash/47a5feca4ce02883a5643e295c7ce6cd-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12274-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/47a5feca4ce02883a5643e295c7ce6cd-Paper.pdf
https://openreview.net/forum?id=ZYJ1r6sStU
https://papers.nips.cc/paper_files/paper/2021/file/47a5feca4ce02883a5643e295c7ce6cd-Supplemental.pdf
One of the central features of modern machine learning models, including deep neural networks, is their generalization ability on structured data in the over-parametrized regime. In this work, we consider an analytically solvable setup to investigate how properties of data impact learning in classification problems, and compare the results obtained for quadratic loss and logistic loss. Using methods from statistical physics, we obtain a precise asymptotic expression for the train and test errors of random feature models trained on a simple model of structured data. The input covariance is built from independent blocks, allowing us to tune the saliency of low-dimensional structures and their alignment with respect to the target function. Our results show in particular that in the over-parametrized regime, the impact of data structure on both train and test error curves is greater for logistic loss than for mean-squared loss: the easier the task, the wider the gap in performance between the two losses, to the advantage of the logistic loss. Numerical experiments on MNIST and CIFAR10 confirm our insights.
null
Near-optimal Offline and Streaming Algorithms for Learning Non-Linear Dynamical Systems
https://papers.nips.cc/paper_files/paper/2021/hash/47a658229eb2368a99f1d032c8848542-Abstract.html
Suhas Kowshik, Dheeraj Nagaraj, Prateek Jain, Praneeth Netrapalli
https://papers.nips.cc/paper_files/paper/2021/hash/47a658229eb2368a99f1d032c8848542-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12275-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/47a658229eb2368a99f1d032c8848542-Paper.pdf
https://openreview.net/forum?id=B83B16bWvuI
https://papers.nips.cc/paper_files/paper/2021/file/47a658229eb2368a99f1d032c8848542-Supplemental.pdf
We consider the setting of vector-valued non-linear dynamical systems $X_{t+1} = \phi(A^{*} X_t) + \eta_t$, where $\eta_t$ is unbiased noise and $\phi : \mathbb{R} \to \mathbb{R}$ is a known link function that satisfies a certain {\em expansivity property}. The goal is to learn $A^{*}$ from a single trajectory $X_1,\cdots , X_T$ of {\em dependent or correlated} samples. While the problem is well-studied in the linear case, where $\phi$ is the identity, with optimal error rates even for non-mixing systems, existing results in the non-linear case hold only for mixing systems. In this work, we improve existing results for learning nonlinear systems in a number of ways: a) we provide the first offline algorithm that can learn non-linear dynamical systems without the mixing assumption, b) we significantly improve upon the sample complexity of existing results for mixing systems, c) in the much harder one-pass, streaming setting we study an SGD with Reverse Experience Replay (SGD-RER) method, and demonstrate that for mixing systems, it achieves the same sample complexity as our offline algorithm, d) we justify the expansivity assumption by showing that for the popular ReLU link function --- a non-expansive but easy-to-learn link function with i.i.d. samples --- any method would require exponentially many samples (with respect to the dimension of $X_t$) from the dynamical system. We validate our results via simulations and demonstrate that a naive application of SGD can be highly sub-optimal. Indeed, our work demonstrates that for correlated data, specialized methods designed for the dependency structure in data can significantly outperform standard SGD based methods.
null
Mixture Proportion Estimation and PU Learning: A Modern Approach
https://papers.nips.cc/paper_files/paper/2021/hash/47b4f1bfdf6d298682e610ad74b37dca-Abstract.html
Saurabh Garg, Yifan Wu, Alexander J. Smola, Sivaraman Balakrishnan, Zachary Lipton
https://papers.nips.cc/paper_files/paper/2021/hash/47b4f1bfdf6d298682e610ad74b37dca-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12276-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/47b4f1bfdf6d298682e610ad74b37dca-Paper.pdf
https://openreview.net/forum?id=bJz3cFePTna
https://papers.nips.cc/paper_files/paper/2021/file/47b4f1bfdf6d298682e610ad74b37dca-Supplemental.pdf
Given only positive examples and unlabeled examples (from both positive and negative classes), we might hope nevertheless to estimate an accurate positive-versus-negative classifier. Formally, this task is broken down into two subtasks: (i) Mixture Proportion Estimation (MPE)---determining the fraction of positive examples in the unlabeled data; and (ii) PU-learning---given such an estimate, learning the desired positive-versus-negative classifier. Unfortunately, classical methods for both problems break down in high-dimensional settings. Meanwhile, recently proposed heuristics lack theoretical coherence and depend precariously on hyperparameter tuning. In this paper, we propose two simple techniques: Best Bin Estimation (BBE) (for MPE); and Conditional Value Ignoring Risk (CVIR), a simple objective for PU-learning. Both methods dominate previous approaches empirically, and for BBE, we establish formal guarantees that hold whenever we can train a model to cleanly separate out a small subset of positive examples. Our final algorithm, (TED)$^n$, alternates between the two procedures, significantly improving both our mixture proportion estimator and classifier.
null
Escape saddle points by a simple gradient-descent based algorithm
https://papers.nips.cc/paper_files/paper/2021/hash/47bd8ac1becf213f155a82244b4a696a-Abstract.html
Chenyi Zhang, Tongyang Li
https://papers.nips.cc/paper_files/paper/2021/hash/47bd8ac1becf213f155a82244b4a696a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12277-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/47bd8ac1becf213f155a82244b4a696a-Paper.pdf
https://openreview.net/forum?id=lEf52hTHq0Q
https://papers.nips.cc/paper_files/paper/2021/file/47bd8ac1becf213f155a82244b4a696a-Supplemental.zip
Escaping saddle points is a central research topic in nonconvex optimization. In this paper, we propose a simple gradient-based algorithm such that for a smooth function $f\colon\mathbb{R}^n\to\mathbb{R}$, it outputs an $\epsilon$-approximate second-order stationary point in $\tilde{O}(\log n/\epsilon^{1.75})$ iterations. Compared to the previous state-of-the-art algorithms by Jin et al. with $\tilde{O}(\log^4 n/\epsilon^{2})$ or $\tilde{O}(\log^6 n/\epsilon^{1.75})$ iterations, our algorithm is polynomially better in terms of $\log n$ and matches their complexities in terms of $1/\epsilon$. For the stochastic setting, our algorithm outputs an $\epsilon$-approximate second-order stationary point in $\tilde{O}(\log^{2} n/\epsilon^{4})$ iterations. Technically, our main contribution is an idea of implementing a robust Hessian power method using only gradients, which can find negative curvature near saddle points and achieve the polynomial speedup in $\log n$ compared to the perturbed gradient descent methods. Finally, we also perform numerical experiments that support our results.
null
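As a hedged illustration of the gradient-only negative-curvature idea mentioned in the abstract, the sketch below approximates Hessian-vector products with finite differences of gradients and runs a shifted power iteration to expose a descent direction near a saddle. The constants, the shift (assumed to upper-bound the largest Hessian eigenvalue), and the toy objective are assumptions, not the paper's tuned algorithm.

```python
# Hedged sketch: finding a negative-curvature escape direction using only gradient evaluations.
import numpy as np

def hvp(grad_fn, x, v, r=1e-5):
    """Finite-difference Hessian-vector product using only gradient evaluations."""
    return (grad_fn(x + r * v) - grad_fn(x - r * v)) / (2 * r)

def escape_step(grad_fn, x, shift=3.0, power_iters=50, eta=0.05, rng=np.random.default_rng(0)):
    g = grad_fn(x)
    if np.linalg.norm(g) > 1e-3:                   # far from stationarity: plain gradient descent
        return x - eta * g
    v = rng.standard_normal(x.shape)               # shifted power iteration on (shift*I - H)
    v /= np.linalg.norm(v)
    for _ in range(power_iters):
        v = shift * v - hvp(grad_fn, x, v)         # amplifies the most-negative-curvature direction
        v /= np.linalg.norm(v)
    if v @ hvp(grad_fn, x, v) < 0:                 # genuine negative curvature found: step along it
        return x + eta * v
    return x - eta * g

# Toy saddle f(x, y) = x^2 - y^2, whose gradient is (2x, -2y); the origin is a strict saddle point.
grad = lambda z: np.array([2.0 * z[0], -2.0 * z[1]])
print(escape_step(grad, np.array([1e-6, 1e-6])))
```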
AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/48000647b315f6f00f913caa757a70b3-Abstract.html
Alexandra Peste, Eugenia Iofinova, Adrian Vladu, Dan Alistarh
https://papers.nips.cc/paper_files/paper/2021/hash/48000647b315f6f00f913caa757a70b3-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12278-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/48000647b315f6f00f913caa757a70b3-Paper.pdf
https://openreview.net/forum?id=T3_AJr9-R5g
https://papers.nips.cc/paper_files/paper/2021/file/48000647b315f6f00f913caa757a70b3-Supplemental.pdf
The increasing computational requirements of deep neural networks (DNNs) have led to significant interest in obtaining DNN models that are sparse, yet accurate. Recent work has investigated the even harder case of sparse training, where the DNN weights are, as much as possible, already sparse to reduce computational costs during training. Existing sparse training methods are often empirical and can have lower accuracy relative to the dense baseline. In this paper, we present a general approach called Alternating Compressed/DeCompressed (AC/DC) training of DNNs, demonstrate convergence for a variant of the algorithm, and show that AC/DC outperforms existing sparse training methods in accuracy at similar computational budgets; at high sparsity levels, AC/DC even outperforms existing methods that rely on accurate pre-trained dense models. An important property of AC/DC is that it allows co-training of dense and sparse models, yielding accurate sparse-dense model pairs at the end of the training process. This is useful in practice, where compressed variants may be desirable for deployment in resource-constrained settings without re-doing the entire training flow, and it also provides us with insights into the accuracy gap between dense and compressed models.
null
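A minimal sketch, on a plain linear-regression model, of the alternating compressed/decompressed schedule described above; the phase lengths, sparsity level, and magnitude-pruning rule are illustrative assumptions, and the paper's method applies to deep networks with its own schedule.

```python
# Hedged sketch of alternating dense and sparse (magnitude-pruned) training phases.
import numpy as np

def topk_mask(w, sparsity):
    """Keep the (1 - sparsity) fraction of weights with the largest magnitude."""
    k = max(1, int(round((1 - sparsity) * w.size)))
    thresh = np.sort(np.abs(w).ravel())[-k]
    return (np.abs(w) >= thresh).astype(w.dtype)

def acdc_train(X, y, sparsity=0.8, phases=6, steps_per_phase=200, lr=0.01, rng=np.random.default_rng(0)):
    w = rng.standard_normal(X.shape[1]) * 0.01
    mask = np.ones_like(w)
    for phase in range(phases):
        compressed = phase % 2 == 1                        # alternate dense and compressed phases
        mask = topk_mask(w, sparsity) if compressed else np.ones_like(w)
        w *= mask                                          # prune at the start of a compressed phase
        for _ in range(steps_per_phase):
            grad = 2 * X.T @ (X @ w - y) / len(y)          # mean-squared-error gradient
            w -= lr * grad * mask                          # pruned weights stay frozen at zero
    return w, mask                                         # final sparse weights and their support

# Toy problem with a 4-sparse ground truth; the final compressed phase recovers its support.
X = np.random.default_rng(1).standard_normal((200, 20))
w_true = np.zeros(20); w_true[:4] = [3.0, -2.0, 1.5, 0.5]
w_hat, mask = acdc_train(X, X @ w_true)
print(int(mask.sum()), np.round(w_hat[:4], 2))
```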
HyperSPNs: Compact and Expressive Probabilistic Circuits
https://papers.nips.cc/paper_files/paper/2021/hash/481fbfa59da2581098e841b7afc122f1-Abstract.html
Andy Shih, Dorsa Sadigh, Stefano Ermon
https://papers.nips.cc/paper_files/paper/2021/hash/481fbfa59da2581098e841b7afc122f1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12279-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/481fbfa59da2581098e841b7afc122f1-Paper.pdf
https://openreview.net/forum?id=31NfehDva-h
https://papers.nips.cc/paper_files/paper/2021/file/481fbfa59da2581098e841b7afc122f1-Supplemental.pdf
Probabilistic circuits (PCs) are a family of generative models which allow for the computation of exact likelihoods and marginals of the probability distributions they represent. PCs are both expressive and tractable, and serve as popular choices for discrete density estimation tasks. However, large PCs are susceptible to overfitting, and only a few regularization strategies (e.g., dropout, weight-decay) have been explored. We propose HyperSPNs: a new paradigm of generating the mixture weights of large PCs using a small-scale neural network. Our framework can be viewed as a soft weight-sharing strategy, which combines the greater expressiveness of large models with the better generalization and memory-footprint properties of small models. We show the merits of our regularization strategy on two state-of-the-art PC families introduced in recent literature -- RAT-SPNs and EiNETs -- and demonstrate generalization improvements in both models on a suite of density estimation benchmarks in both discrete and continuous domains.
null
Scaling Vision with Sparse Mixture of Experts
https://papers.nips.cc/paper_files/paper/2021/hash/48237d9f2dea8c74c2a72126cf63d933-Abstract.html
Carlos Riquelme, Joan Puigcerver, Basil Mustafa, Maxim Neumann, Rodolphe Jenatton, André Susano Pinto, Daniel Keysers, Neil Houlsby
https://papers.nips.cc/paper_files/paper/2021/hash/48237d9f2dea8c74c2a72126cf63d933-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12280-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/48237d9f2dea8c74c2a72126cf63d933-Paper.pdf
https://openreview.net/forum?id=NGPmH3vbAA_
null
Sparsely-gated Mixture of Experts networks (MoEs) have demonstrated excellent scalability in Natural Language Processing. In Computer Vision, however, almost all performant networks are "dense", that is, every input is processed by every parameter. We present a Vision MoE (V-MoE), a sparse version of the Vision Transformer, that is scalable and competitive with the largest dense networks. When applied to image recognition, V-MoE matches the performance of state-of-the-art networks, while requiring as little as half of the compute at inference time. Further, we propose an extension to the routing algorithm that can prioritize subsets of each input across the entire batch, leading to adaptive per-image compute. This allows V-MoE to trade-off performance and compute smoothly at test-time. Finally, we demonstrate the potential of V-MoE to scale vision models, and train a 15B parameter model that attains 90.35% on ImageNet.
null
Two-sided fairness in rankings via Lorenz dominance
https://papers.nips.cc/paper_files/paper/2021/hash/48259990138bc03361556fb3f94c5d45-Abstract.html
Virginie Do, Sam Corbett-Davies, Jamal Atif, Nicolas Usunier
https://papers.nips.cc/paper_files/paper/2021/hash/48259990138bc03361556fb3f94c5d45-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12281-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/48259990138bc03361556fb3f94c5d45-Paper.pdf
https://openreview.net/forum?id=uPWdkoZHgba
https://papers.nips.cc/paper_files/paper/2021/file/48259990138bc03361556fb3f94c5d45-Supplemental.pdf
We consider the problem of generating rankings that are fair towards both users and item producers in recommender systems. We address both usual recommendation (e.g., of music or movies) and reciprocal recommendation (e.g., dating). Following concepts of distributive justice in welfare economics, our notion of fairness aims at increasing the utility of the worse-off individuals, which we formalize using the criterion of Lorenz efficiency. It guarantees that rankings are Pareto efficient, and that they maximally redistribute utility from better-off to worse-off, at a given level of overall utility. We propose to generate rankings by maximizing concave welfare functions, and develop an efficient inference procedure based on the Frank-Wolfe algorithm. We prove that unlike existing approaches based on fairness constraints, our approach always produces fair rankings. Our experiments also show that it increases the utility of the worse-off at lower costs in terms of overall utility.
null
Stability & Generalisation of Gradient Descent for Shallow Neural Networks without the Neural Tangent Kernel
https://papers.nips.cc/paper_files/paper/2021/hash/483101a6bc4e6c46a86222eb65fbcb6a-Abstract.html
Dominic Richards, Ilja Kuzborskij
https://papers.nips.cc/paper_files/paper/2021/hash/483101a6bc4e6c46a86222eb65fbcb6a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12282-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/483101a6bc4e6c46a86222eb65fbcb6a-Paper.pdf
https://openreview.net/forum?id=JOOsoL_J6Fc
https://papers.nips.cc/paper_files/paper/2021/file/483101a6bc4e6c46a86222eb65fbcb6a-Supplemental.pdf
We revisit on-average algorithmic stability of Gradient Descent (GD) for training overparameterised shallow neural networks and prove new generalisation and excess risk bounds without the Neural Tangent Kernel (NTK) or Polyak-Łojasiewicz (PL) assumptions. In particular, we show oracle-type bounds which reveal that the generalisation and excess risk of GD are controlled by an interpolating network with the shortest GD path from initialisation (in a sense, an interpolating network with the smallest relative norm). While this was known for kernelised interpolants, our proof applies directly to networks trained by GD without intermediate kernelisation. At the same time, by relaxing the oracle inequalities developed here we recover existing NTK-based risk bounds in a straightforward way, which demonstrates that our analysis is tighter. Finally, unlike most of the NTK-based analyses, we focus on regression with label noise and show that GD with early stopping is consistent.
null
Adversarial Intrinsic Motivation for Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2021/hash/486c0401c56bf7ec2daa9eba58907da9-Abstract.html
Ishan Durugkar, Mauricio Tec, Scott Niekum, Peter Stone
https://papers.nips.cc/paper_files/paper/2021/hash/486c0401c56bf7ec2daa9eba58907da9-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12283-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/486c0401c56bf7ec2daa9eba58907da9-Paper.pdf
https://openreview.net/forum?id=GYr3qnFKgU
https://papers.nips.cc/paper_files/paper/2021/file/486c0401c56bf7ec2daa9eba58907da9-Supplemental.pdf
Learning with an objective to minimize the mismatch with a reference distribution has been shown to be useful for generative modeling and imitation learning. In this paper, we investigate whether one such objective, the Wasserstein-1 distance between a policy's state visitation distribution and a target distribution, can be utilized effectively for reinforcement learning (RL) tasks. Specifically, this paper focuses on goal-conditioned reinforcement learning where the idealized (unachievable) target distribution has full measure at the goal. This paper introduces a quasimetric specific to Markov Decision Processes (MDPs) and uses this quasimetric to estimate the above Wasserstein-1 distance. It further shows that the policy that minimizes this Wasserstein-1 distance is the policy that reaches the goal in as few steps as possible. Our approach, termed Adversarial Intrinsic Motivation (AIM), estimates this Wasserstein-1 distance through its dual objective and uses it to compute a supplemental reward function. Our experiments show that this reward function changes smoothly with respect to transitions in the MDP and directs the agent's exploration to find the goal efficiently. Additionally, we combine AIM with Hindsight Experience Replay (HER) and show that the resulting algorithm accelerates learning significantly on several simulated robotics tasks when compared to other rewards that encourage exploration or accelerate learning.
null
Machine Learning for Variance Reduction in Online Experiments
https://papers.nips.cc/paper_files/paper/2021/hash/488b084119a1c7a4950f00706ec7ea16-Abstract.html
Yongyi Guo, Dominic Coey, Mikael Konutgan, Wenting Li, Chris Schoener, Matt Goldman
https://papers.nips.cc/paper_files/paper/2021/hash/488b084119a1c7a4950f00706ec7ea16-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12284-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/488b084119a1c7a4950f00706ec7ea16-Paper.pdf
https://openreview.net/forum?id=w5fW0TNWPyc
https://papers.nips.cc/paper_files/paper/2021/file/488b084119a1c7a4950f00706ec7ea16-Supplemental.pdf
We consider the problem of variance reduction in randomized controlled trials, through the use of covariates correlated with the outcome but independent of the treatment. We propose a machine learning regression-adjusted treatment effect estimator, which we call MLRATE. MLRATE uses machine learning predictors of the outcome to reduce estimator variance. It employs cross-fitting to avoid overfitting biases, and we prove consistency and asymptotic normality under general conditions. MLRATE is robust to poor predictions from the machine learning step: if the predictions are uncorrelated with the outcomes, the estimator performs asymptotically no worse than the standard difference-in-means estimator, while if predictions are highly correlated with outcomes, the efficiency gains are large. In A/A tests, for a set of 48 outcome metrics commonly monitored in Facebook experiments, the estimator has over $70\%$ lower variance than the simple difference-in-means estimator, and about $19\%$ lower variance than the common univariate procedure which adjusts only for pre-experiment values of the outcome.
null
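For illustration, here is a hedged sketch of a regression-adjusted treatment effect estimator with cross-fitting in the spirit of the abstract above; the ridge predictor, fold assignment, and simulated data are assumptions, not the paper's production setup or exact estimator.

```python
# Hedged sketch: cross-fitted outcome predictions from pre-treatment covariates, used as an
# additional regressor alongside treatment to reduce the variance of the effect estimate.
import numpy as np

def cross_fit_predictions(X, y, n_folds=5, reg=1.0):
    """Out-of-fold ridge predictions of the outcome, so the adjustment is not overfit."""
    n, d = X.shape
    folds = np.arange(n) % n_folds
    g = np.empty(n)
    for f in range(n_folds):
        tr, te = folds != f, folds == f
        beta = np.linalg.solve(X[tr].T @ X[tr] + reg * np.eye(d), X[tr].T @ y[tr])
        g[te] = X[te] @ beta
    return g

def regression_adjusted_ate(y, t, g):
    """OLS of y on [1, t, centered g, t * centered g]; the coefficient on t is the effect estimate."""
    gc = g - g.mean()
    Z = np.column_stack([np.ones_like(y), t, gc, t * gc])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return coef[1]

# Simulated experiment with true effect 0.3; the adjusted estimate should land close to it.
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 3))
t = rng.integers(0, 2, 5000)
y = 0.3 * t + X @ np.array([1.0, -1.0, 0.5]) + rng.standard_normal(5000)
print(regression_adjusted_ate(y, t, cross_fit_predictions(X, y)))
```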
L2ight: Enabling On-Chip Learning for Optical Neural Networks via Efficient in-situ Subspace Optimization
https://papers.nips.cc/paper_files/paper/2021/hash/48aedb8880cab8c45637abc7493ecddd-Abstract.html
Jiaqi Gu, Hanqing Zhu, Chenghao Feng, Zixuan Jiang, Ray Chen, David Pan
https://papers.nips.cc/paper_files/paper/2021/hash/48aedb8880cab8c45637abc7493ecddd-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12285-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/48aedb8880cab8c45637abc7493ecddd-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=RF7AA89cfzl
https://papers.nips.cc/paper_files/paper/2021/file/48aedb8880cab8c45637abc7493ecddd-Supplemental.pdf
Silicon-photonics-based optical neural networks (ONNs) are a promising hardware platform that could represent a paradigm shift in efficient AI, thanks to their CMOS compatibility, flexibility, ultra-low execution latency, and high energy efficiency. In-situ training on online-programmable photonic chips is appealing but still faces challenging issues in on-chip implementability, scalability, and efficiency. In this work, we propose a closed-loop ONN on-chip learning framework, L2ight, to enable scalable ONN mapping and efficient in-situ learning. L2ight adopts a three-stage learning flow that first calibrates the complicated photonic circuit states under challenging physical constraints, then performs photonic core mapping via combined analytical solving and zeroth-order optimization. A subspace learning procedure with multi-level sparsity is integrated into L2ight to enable in-situ gradient evaluation and fast adaptation, unleashing the power of optics for real on-chip intelligence. Extensive experiments demonstrate that L2ight outperforms prior ONN training protocols with three orders of magnitude higher scalability and over 30x better efficiency when benchmarked on various models and learning tasks. This synergistic framework is the first scalable on-chip learning solution that pushes this emerging field from intractable to scalable, and further to efficient, for next-generation self-learnable photonic neural chips. From a co-design perspective, L2ight also provides essential insights for hardware-restricted unitary subspace optimization and efficient sparse training. We open-source our framework at the link.
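Since the mapping stage relies in part on zeroth-order optimization, here is a generic SPSA-style zeroth-order update of the kind usable when only black-box measurements of an on-chip loss are available; the quadratic `chip_loss` and all sizes are toy stand-ins, not the paper's photonic model:

```python
# Generic zeroth-order (SPSA-style) update of the kind in-situ photonic training
# can rely on when only black-box measurements of the chip's loss are available.
# The quadratic "chip_loss" is a stand-in for a real photonic measurement.
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(size=32)

def chip_loss(phases):
    return float(np.sum((phases - target) ** 2))         # black-box measurement

phases = np.zeros(32)
lr, eps = 0.05, 1e-2
for _ in range(500):
    delta = rng.choice([-1.0, 1.0], size=phases.shape)    # random perturbation
    g_est = (chip_loss(phases + eps * delta) -
             chip_loss(phases - eps * delta)) / (2 * eps) * delta
    phases -= lr * g_est

print("final loss:", chip_loss(phases))
```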
null
Towards Gradient-based Bilevel Optimization with Non-convex Followers and Beyond
https://papers.nips.cc/paper_files/paper/2021/hash/48bea99c85bcbaaba618ba10a6f69e44-Abstract.html
Risheng Liu, Yaohua Liu, Shangzhi Zeng, Jin Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/48bea99c85bcbaaba618ba10a6f69e44-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12286-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/48bea99c85bcbaaba618ba10a6f69e44-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=b83ibRX55T
https://papers.nips.cc/paper_files/paper/2021/file/48bea99c85bcbaaba618ba10a6f69e44-Supplemental.pdf
In recent years, Bi-Level Optimization (BLO) techniques have received extensive attention from both the learning and vision communities. A variety of BLO models arising in complex and practical tasks have a non-convex follower structure (i.e., they lack Lower-Level Convexity, LLC for short). However, this challenging class of BLOs lacks both efficient solution strategies and solid theoretical guarantees. In this work, we propose a new algorithmic framework, named Initialization Auxiliary and Pessimistic Trajectory Truncated Gradient Method (IAPTT-GM), to partially address these issues. In particular, by introducing an auxiliary variable as initialization to guide the optimization dynamics and designing a pessimistic trajectory truncation operation, we construct a reliable approximate version of the original BLO in the absence of the LLC hypothesis. Our theoretical investigations establish the convergence of solutions returned by IAPTT-GM towards those of the original BLO without LLC. As an additional bonus, we also theoretically justify the quality of IAPTT-GM embedded with Nesterov's accelerated dynamics under LLC. The experimental results confirm both the convergence of our algorithm without LLC and the theoretical findings under LLC.
null
Multi-Facet Clustering Variational Autoencoders
https://papers.nips.cc/paper_files/paper/2021/hash/48cb136b65a69e8c2aa22913a0d91b2f-Abstract.html
Fabian Falck, Haoting Zhang, Matthew Willetts, George Nicholson, Christopher Yau, Chris C Holmes
https://papers.nips.cc/paper_files/paper/2021/hash/48cb136b65a69e8c2aa22913a0d91b2f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12287-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/48cb136b65a69e8c2aa22913a0d91b2f-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=JbqW3KmmE6
https://papers.nips.cc/paper_files/paper/2021/file/48cb136b65a69e8c2aa22913a0d91b2f-Supplemental.pdf
Work in deep clustering focuses on finding a single partition of data. However, high-dimensional data, such as images, typically feature multiple interesting characteristics one could cluster over. For example, images of objects against a background could be clustered over the shape of the object and separately by the colour of the background. In this paper, we introduce Multi-Facet Clustering Variational Autoencoders (MFCVAE), a novel class of variational autoencoders with a hierarchy of latent variables, each with a Mixture-of-Gaussians prior, that learns multiple clusterings simultaneously, and is trained fully unsupervised and end-to-end. MFCVAE uses a progressively-trained ladder architecture which leads to highly stable performance. We provide novel theoretical results for optimising the ELBO analytically with respect to the categorical variational posterior distribution, correcting earlier influential theoretical work. On image benchmarks, we demonstrate that our approach separates out and clusters over different aspects of the data in a disentangled manner. We also show other advantages of our model: the compositionality of its latent space and that it provides controlled generation of samples.
null
Synthetic Design: An Optimization Approach to Experimental Design with Synthetic Controls
https://papers.nips.cc/paper_files/paper/2021/hash/48d23e87eb98cc2227b5a8c33fa00680-Abstract.html
Nick Doudchenko, Khashayar Khosravi, Jean Pouget-Abadie, Sébastien Lahaie, Miles Lubin, Vahab Mirrokni, Jann Spiess, guido imbens
https://papers.nips.cc/paper_files/paper/2021/hash/48d23e87eb98cc2227b5a8c33fa00680-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12288-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/48d23e87eb98cc2227b5a8c33fa00680-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=lS_rOGT9lfG
https://papers.nips.cc/paper_files/paper/2021/file/48d23e87eb98cc2227b5a8c33fa00680-Supplemental.pdf
We investigate the optimal design of experimental studies that have pre-treatment outcome data available. The average treatment effect is estimated as the difference between the weighted average outcomes of the treated and control units. A number of commonly used approaches fit this formulation, including the difference-in-means estimator and a variety of synthetic-control techniques. We propose several methods for choosing the set of treated units in conjunction with the weights. Observing the NP-hardness of the problem, we introduce a mixed-integer programming formulation which selects both the treatment and control sets and unit weightings. We prove that these proposed approaches lead to qualitatively different experimental units being selected for treatment. We use simulations based on publicly available data from the US Bureau of Labor Statistics that show improvements in terms of mean squared error and statistical power when compared to simple and commonly used alternatives such as randomized trials.
null
Ranking Policy Decisions
https://papers.nips.cc/paper_files/paper/2021/hash/48db71587df6c7c442e5b76cc723169a-Abstract.html
Hadrien Pouget, Hana Chockler, Youcheng Sun, Daniel Kroening
https://papers.nips.cc/paper_files/paper/2021/hash/48db71587df6c7c442e5b76cc723169a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12289-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/48db71587df6c7c442e5b76cc723169a-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=GNO4e26auiF
https://papers.nips.cc/paper_files/paper/2021/file/48db71587df6c7c442e5b76cc723169a-Supplemental.zip
Policies trained via Reinforcement Learning (RL) without human intervention are often needlessly complex, making them difficult to analyse and interpret. In a run with $n$ time steps, a policy will make $n$ decisions on actions to take; we conjecture that only a small subset of these decisions delivers value over selecting a simple default action. Given a trained policy, we propose a novel black-box method based on statistical fault localisation that ranks the states of the environment according to the importance of decisions made in those states. We argue that among other things, the ranked list of states can help explain and understand the policy. As the ranking method is statistical, a direct evaluation of its quality is hard. As a proxy for quality, we use the ranking to create new, simpler policies from the original ones by pruning decisions identified as unimportant (that is, replacing them by default actions) and measuring the impact on performance. Our experimental results on a diverse set of standard benchmarks demonstrate that pruned policies can perform on a level comparable to the original policies. We show that naive approaches for ranking policies, e.g. ranking based on the frequency of visiting a state, do not result in high-performing pruned policies. To the best of our knowledge, there are no similar techniques for ranking RL policies' decisions.
null
Searching the Search Space of Vision Transformer
https://papers.nips.cc/paper_files/paper/2021/hash/48e95c45c8217961bf6cd7696d80d238-Abstract.html
Minghao Chen, Kan Wu, Bolin Ni, Houwen Peng, Bei Liu, Jianlong Fu, Hongyang Chao, Haibin Ling
https://papers.nips.cc/paper_files/paper/2021/hash/48e95c45c8217961bf6cd7696d80d238-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12290-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/48e95c45c8217961bf6cd7696d80d238-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=AVS8CamBecS
https://papers.nips.cc/paper_files/paper/2021/file/48e95c45c8217961bf6cd7696d80d238-Supplemental.pdf
Vision Transformers have shown great visual representation power on substantial vision tasks such as recognition and detection, and have thus attracted fast-growing efforts to manually design more effective architectures. In this paper, we propose to use neural architecture search to automate this process, by searching not only the architecture but also the search space. The central idea is to gradually evolve different search dimensions guided by their E-T Error, computed using a weight-sharing supernet. Moreover, we provide design guidelines for general vision transformers with extensive analysis of the space-searching process, which could promote the understanding of vision transformers. Remarkably, the searched models, named S3 (short for Searching the Search Space), drawn from the searched space achieve superior performance to recently proposed models such as Swin, DeiT and ViT when evaluated on ImageNet. The effectiveness of S3 is also illustrated on object detection, semantic segmentation and visual question answering, demonstrating its generality to downstream vision and vision-language tasks. Code and models will be available at https://github.com/microsoft/Cream.
null
Relative stability toward diffeomorphisms indicates performance in deep nets
https://papers.nips.cc/paper_files/paper/2021/hash/497476fe61816251905e8baafdf54c23-Abstract.html
Leonardo Petrini, Alessandro Favero, Mario Geiger, Matthieu Wyart
https://papers.nips.cc/paper_files/paper/2021/hash/497476fe61816251905e8baafdf54c23-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12291-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/497476fe61816251905e8baafdf54c23-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=RSc-kfiLMNn
https://papers.nips.cc/paper_files/paper/2021/file/497476fe61816251905e8baafdf54c23-Supplemental.pdf
Understanding why deep nets can classify data in large dimensions remains a challenge. It has been proposed that they do so by becoming stable to diffeomorphisms, yet existing empirical measurements suggest that this is often not the case. We revisit this question by defining a maximum-entropy distribution on diffeomorphisms, which allows us to study typical diffeomorphisms of a given norm. We confirm that stability toward diffeomorphisms does not strongly correlate with performance on benchmark data sets of images. By contrast, we find that the stability toward diffeomorphisms relative to that of generic transformations, $R_f$, correlates remarkably with the test error $\epsilon_t$. It is of order unity at initialization but decreases by several orders of magnitude during training for state-of-the-art architectures. For CIFAR10 and 15 known architectures, we find $\epsilon_t\approx 0.2\sqrt{R_f}$, suggesting that obtaining a small $R_f$ is important to achieve good performance. We study how $R_f$ depends on the size of the training set and compare it to a simple model of invariant learning.
null
Raw Nav-merge Seismic Data to Subsurface Properties with MLP based Multi-Modal Information Unscrambler
https://papers.nips.cc/paper_files/paper/2021/hash/498f2c21688f6451d9f5fd09d53edda7-Abstract.html
Aditya Desai, Zhaozhuo Xu, Menal Gupta, Anu Chandran, Antoine Vial-Aussavy, Anshumali Shrivastava
https://papers.nips.cc/paper_files/paper/2021/hash/498f2c21688f6451d9f5fd09d53edda7-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12292-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/498f2c21688f6451d9f5fd09d53edda7-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=HLalhDvDwrQ
https://papers.nips.cc/paper_files/paper/2021/file/498f2c21688f6451d9f5fd09d53edda7-Supplemental.pdf
Traditional seismic inversion (SI) maps hundreds of terabytes of raw field data to subsurface properties measured in gigabytes. This inversion process is expensive, requiring over a year of human and computational effort. Recently, data-driven approaches equipped with deep learning (DL) have been envisioned to improve SI efficiency. However, these improvements are restricted to data of highly reduced scale and complexity. To extend these approaches to real-scale seismic data, researchers need to process raw nav-merge seismic data into an image and perform convolution. We argue that this convolution-based way of doing SI is not only computationally expensive but also conceptually problematic. Seismic data is not naturally an image and need not be processed as one. In this work, we go beyond convolution and propose a novel SI method. We address the scalability of SI by proposing a new auxiliary learning paradigm for SI (Aux-SI). This paradigm breaks SI into local inversion tasks, each of which predicts a small chunk of subsurface properties using the surrounding seismic data. Aux-SI combines these local predictions to obtain the entire subsurface model. However, even this local inversion is still challenging due to (1) high-dimensional, spatially irregular, multi-modal seismic data, and (2) the lack of a concrete spatial mapping (or alignment) between subsurface properties and raw data. To handle these challenges, we propose an all-MLP architecture, the Multi-Modal Information Unscrambler (MMI-Unscrambler), that unscrambles seismic information by ingesting all available multi-modal data. Experiments show that MMI-Unscrambler outperforms both SOTA U-Net and Transformer models on simulation data. We also scale MMI-Unscrambler to raw-field nav-merge data from the Gulf of Mexico and obtain a geologically sound velocity model with an SSIM score of 0.8. To the best of our knowledge, this is the first successful demonstration of a DL approach to SI on real, large-scale, and complicated raw field data.
null
Inverse Problems Leveraging Pre-trained Contrastive Representations
https://papers.nips.cc/paper_files/paper/2021/hash/498f940d9b933c529b06aa96d18f7eda-Abstract.html
Sriram Ravula, Georgios Smyrnis, Matt Jordan, Alexandros G. Dimakis
https://papers.nips.cc/paper_files/paper/2021/hash/498f940d9b933c529b06aa96d18f7eda-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12293-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/498f940d9b933c529b06aa96d18f7eda-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=HCOdL3dWab
https://papers.nips.cc/paper_files/paper/2021/file/498f940d9b933c529b06aa96d18f7eda-Supplemental.zip
We study a new family of inverse problems for recovering representations of corrupted data. We assume access to a pre-trained representation learning network R(x) that operates on clean images, like CLIP. The problem is to recover the representation of an image R(x), if we are only given a corrupted version A(x), for some known forward operator A. We propose a supervised inversion method that uses a contrastive objective to obtain excellent representations for highly corrupted images. Using a linear probe on our robust representations, we achieve a higher accuracy than end-to-end supervised baselines when classifying images with various types of distortions, including blurring, additive noise, and random pixel masking. We evaluate on a subset of ImageNet and observe that our method is robust to varying levels of distortion. Our method outperforms end-to-end baselines even with a fraction of the labeled data in a wide range of forward operators.
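A hedged sketch of the supervised inversion idea: train an encoder on corrupted inputs so its output matches the frozen clean representation under an InfoNCE-style contrastive loss; the tiny networks, the random-masking operator, and the random "images" below are illustrative stand-ins:

```python
# Sketch of the supervised-inversion idea: train an encoder on corrupted inputs
# so its output matches the frozen clean representation R(x) under an InfoNCE
# (contrastive) objective. Networks, corruption, and data are toy stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

R = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))   # frozen stand-in for CLIP
for p in R.parameters():
    p.requires_grad_(False)

inverter = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256),
                         nn.ReLU(), nn.Linear(256, 128))
opt = torch.optim.Adam(inverter.parameters(), lr=1e-3)

def corrupt(x):                      # known forward operator A: random pixel masking
    return x * (torch.rand_like(x) > 0.5)

for _ in range(100):
    x = torch.randn(64, 3, 32, 32)                # toy batch of "clean images"
    z_clean = F.normalize(R(x), dim=-1)
    z_corr = F.normalize(inverter(corrupt(x)), dim=-1)
    logits = z_corr @ z_clean.t() / 0.07          # similarity to all targets
    labels = torch.arange(x.size(0))              # matching pair is the positive
    loss = F.cross_entropy(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()
```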
null
The Unbalanced Gromov Wasserstein Distance: Conic Formulation and Relaxation
https://papers.nips.cc/paper_files/paper/2021/hash/4990974d150d0de5e6e15a1454fe6b0f-Abstract.html
Thibault Sejourne, Francois-Xavier Vialard, Gabriel Peyré
https://papers.nips.cc/paper_files/paper/2021/hash/4990974d150d0de5e6e15a1454fe6b0f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12294-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4990974d150d0de5e6e15a1454fe6b0f-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=3-GCM92yaB3
https://papers.nips.cc/paper_files/paper/2021/file/4990974d150d0de5e6e15a1454fe6b0f-Supplemental.pdf
Comparing metric measure spaces (i.e., metric spaces endowed with a probability distribution) is at the heart of many machine learning problems. The most popular distance between such metric measure spaces is the Gromov-Wasserstein (GW) distance, which is the solution of a quadratic assignment problem. The GW distance is however limited to the comparison of metric measure spaces endowed with a \emph{probability} distribution. To alleviate this issue, we introduce two Unbalanced Gromov-Wasserstein formulations: a distance and a more tractable upper-bounding relaxation. They both allow the comparison of metric spaces equipped with arbitrary positive measures up to isometries. The first formulation is a positive and definite divergence based on a relaxation of the mass conservation constraint using a novel type of quadratically-homogeneous divergence. This divergence works hand in hand with the entropic regularization approach, which is popular for solving large-scale optimal transport problems. We show that the underlying non-convex optimization problem can be efficiently tackled using a highly parallelizable and GPU-friendly iterative scheme. The second formulation is a distance between mm-spaces up to isometries based on a conic lifting. Lastly, we provide numerical experiments on synthetic and domain adaptation data with a Positive-Unlabeled learning task to highlight the salient features of the unbalanced divergence and its potential applications in ML.
null
Diffusion Models Beat GANs on Image Synthesis
https://papers.nips.cc/paper_files/paper/2021/hash/49ad23d1ec9fa4bd8d77d02681df5cfa-Abstract.html
Prafulla Dhariwal, Alexander Nichol
https://papers.nips.cc/paper_files/paper/2021/hash/49ad23d1ec9fa4bd8d77d02681df5cfa-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12295-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=AAWuCvzaVt
https://papers.nips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Supplemental.pdf
We show that diffusion models can achieve image sample quality superior to the current state-of-the-art generative models. We achieve this on unconditional image synthesis by finding a better architecture through a series of ablations. For conditional image synthesis, we further improve sample quality with classifier guidance: a simple, compute-efficient method for trading off diversity for fidelity using gradients from a classifier. We achieve an FID of 2.97 on ImageNet 128$\times$128, 4.59 on ImageNet 256$\times$256, and 7.72 on ImageNet 512$\times$512, and we match BigGAN-deep even with as few as 25 forward passes per sample, all while maintaining better coverage of the distribution. Finally, we find that classifier guidance combines well with upsampling diffusion models, further improving FID to 3.94 on ImageNet 256$\times$256 and 3.85 on ImageNet 512$\times$512.
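A small sketch of one classifier-guided reverse step, shifting the predicted mean by the scaled classifier gradient $s \, \Sigma \, \nabla_x \log p(y \mid x_t)$; the diffusion mean predictor and the classifier below are untrained stand-ins, and the noise schedule is collapsed to a single constant:

```python
# One guided reverse-diffusion step: shift the model's predicted mean by
# s * Sigma * grad_x log p(y | x_t), the classifier-guidance rule. The
# diffusion model and classifier here are untrained stand-ins.
import torch
import torch.nn as nn

D = 16
diffusion_mean = nn.Linear(D, D)        # stand-in for the model's mu_theta(x_t)
classifier = nn.Linear(D, 10)           # stand-in noisy-image classifier

def guided_step(x_t, y, guidance_scale=2.0, sigma2=0.01):
    x_t = x_t.detach().requires_grad_(True)
    log_prob = torch.log_softmax(classifier(x_t), dim=-1)[torch.arange(len(y)), y]
    grad = torch.autograd.grad(log_prob.sum(), x_t)[0]    # d log p(y|x_t) / dx_t
    mu = diffusion_mean(x_t)
    guided_mu = mu + guidance_scale * sigma2 * grad       # shifted mean
    return guided_mu + (sigma2 ** 0.5) * torch.randn_like(mu)

x = torch.randn(4, D)
y = torch.tensor([0, 1, 2, 3])
x_prev = guided_step(x, y)
```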
null
Learning MDPs from Features: Predict-Then-Optimize for Sequential Decision Making by Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2021/hash/49e863b146f3b5470ee222ee84669b1c-Abstract.html
Kai Wang, Sanket Shah, Haipeng Chen, Andrew Perrault, Finale Doshi-Velez, Milind Tambe
https://papers.nips.cc/paper_files/paper/2021/hash/49e863b146f3b5470ee222ee84669b1c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12296-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/49e863b146f3b5470ee222ee84669b1c-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=-mGv2KxQ43D
https://papers.nips.cc/paper_files/paper/2021/file/49e863b146f3b5470ee222ee84669b1c-Supplemental.pdf
In the predict-then-optimize framework, the objective is to train a predictive model, mapping from environment features to parameters of an optimization problem, which maximizes decision quality when the optimization is subsequently solved. Recent work on decision-focused learning shows that embedding the optimization problem in the training pipeline can improve decision quality and help generalize better to unseen tasks compared to relying on an intermediate loss function for evaluating prediction quality. We study the predict-then-optimize framework in the context of sequential decision problems (formulated as MDPs) that are solved via reinforcement learning. In particular, we are given environment features and a set of trajectories from training MDPs, which we use to train a predictive model that generalizes to unseen test MDPs without trajectories. Two significant computational challenges arise in applying decision-focused learning to MDPs: (i) large state and action spaces make it infeasible for existing techniques to differentiate through MDP problems, and (ii) the high-dimensional policy space, as parameterized by a neural network, makes differentiating through a policy expensive. We resolve the first challenge by sampling provably unbiased derivatives to approximate and differentiate through optimality conditions, and the second challenge by using a low-rank approximation to the high-dimensional sample-based derivatives. We implement both Bellman-based and policy gradient-based decision-focused learning on three different MDP problems with missing parameters, and show that decision-focused learning performs better in generalization to unseen tasks.
null
A Closer Look at the Worst-case Behavior of Multi-armed Bandit Algorithms
https://papers.nips.cc/paper_files/paper/2021/hash/49ef08ad6e7f26d7f200e1b2b9e6e4ac-Abstract.html
Anand Kalvit, Assaf Zeevi
https://papers.nips.cc/paper_files/paper/2021/hash/49ef08ad6e7f26d7f200e1b2b9e6e4ac-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12297-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/49ef08ad6e7f26d7f200e1b2b9e6e4ac-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=rC3zu-OqnII
https://papers.nips.cc/paper_files/paper/2021/file/49ef08ad6e7f26d7f200e1b2b9e6e4ac-Supplemental.pdf
One of the key drivers of complexity in the classical (stochastic) multi-armed bandit (MAB) problem is the difference between mean rewards in the top two arms, also known as the instance gap. The celebrated Upper Confidence Bound (UCB) policy is among the simplest optimism-based MAB algorithms that naturally adapts to this gap: for a horizon of play $n$, it achieves optimal $O(\log n)$ regret in instances with "large" gaps, and a near-optimal $O(\sqrt{n \log n})$ minimax regret when the gap can be arbitrarily "small." This paper provides new results on the arm-sampling behavior of UCB, leading to several important insights. Among these, it is shown that arm-sampling rates under UCB are asymptotically deterministic, regardless of the problem complexity. This discovery facilitates new sharp asymptotics and a novel alternative proof for the $O(\sqrt{n \log n})$ minimax regret of UCB. Furthermore, the paper also provides the first complete process-level characterization of the MAB problem in the conventional diffusion scaling. Among other things, the "small" gap worst-case lens adopted in this paper also reveals profound distinctions between the behavior of UCB and Thompson Sampling, such as an "incomplete learning" phenomenon characteristic of the latter.
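A short UCB1 simulation on a two-armed instance with a small gap, tracking the arm-sampling rates whose asymptotically deterministic behavior the paper analyzes; the confidence-radius constant and the gap value are illustrative choices:

```python
# Small simulation of UCB1 on a two-armed Bernoulli instance with a "small" gap,
# tracking the fraction of pulls given to each arm and the pseudo-regret.
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.5, 0.5 - 0.02])        # tiny instance gap
n_rounds = 20000
counts = np.zeros(2)
sums = np.zeros(2)

for t in range(1, n_rounds + 1):
    if t <= 2:
        arm = t - 1                         # pull each arm once to initialize
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    sums[arm] += rng.binomial(1, means[arm])
    counts[arm] += 1

print("arm-sampling rates:", counts / n_rounds)
print("pseudo-regret:", (means.max() - means) @ counts)
```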
null
SAPE: Spatially-Adaptive Progressive Encoding for Neural Optimization
https://papers.nips.cc/paper_files/paper/2021/hash/4a06d868d044c50af0cf9bc82d2fc19f-Abstract.html
Amir Hertz, Or Perel, Raja Giryes, Olga Sorkine-hornung, Daniel Cohen-or
https://papers.nips.cc/paper_files/paper/2021/hash/4a06d868d044c50af0cf9bc82d2fc19f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12298-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4a06d868d044c50af0cf9bc82d2fc19f-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=_wPmKqEMxss
null
Multilayer perceptrons (MLPs) are known to struggle to learn high-frequency functions, and in particular functions spanning wide frequency bands. We present a progressive mapping scheme for input signals of MLP networks, enabling them to better fit a wide range of frequencies without sacrificing training stability or requiring any domain-specific preprocessing. We introduce Spatially Adaptive Progressive Encoding (SAPE) layers, which gradually unmask signal components with increasing frequencies as a function of time and space. The progressive exposure of frequencies is monitored by a feedback loop throughout the neural optimization process, allowing changes to propagate at different rates among local spatial portions of the signal space. We demonstrate the advantage of our method on a variety of domains and applications: regression of low-dimensional signals and images, representation learning of occupancy networks, and a geometric task of mesh transfer between 3D shapes.
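A rough numpy sketch of spatially adaptive progressive Fourier encoding: each frequency band is soft-masked per input location, and the masks open gradually over time; the simple ramp used as the progress signal below stands in for the paper's feedback loop:

```python
# Sketch of spatially adaptive progressive Fourier encoding: each frequency band
# is soft-masked per input location, and masks open gradually during training.
# The global ramp used as the progress signal is a stand-in for SAPE's feedback.
import numpy as np

num_bands = 8

def progressive_encoding(x, alpha):
    """x: (N, 1) coordinates in [0, 1]; alpha: (N, 1) per-location progress."""
    feats, masks = [], []
    for k in range(num_bands):
        w = 2.0 ** k * np.pi
        feats.append(np.concatenate([np.sin(w * x), np.cos(w * x)], axis=1))
        # Band k becomes visible once local progress passes k; clamp to [0, 1].
        m = np.clip(alpha - k, 0.0, 1.0)
        masks.append(np.concatenate([m, m], axis=1))
    return np.concatenate(feats, axis=1) * np.concatenate(masks, axis=1)

x = np.linspace(0, 1, 5)[:, None]
for step in [0, 100, 400]:
    alpha = np.full_like(x, step / 100.0)        # stand-in for the feedback signal
    enc = progressive_encoding(x, alpha)
    print(f"step {step}: encoding shape {enc.shape}, "
          f"open bands ~ {min(step / 100.0, num_bands):.1f}")
```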
null
A Biased Graph Neural Network Sampler with Near-Optimal Regret
https://papers.nips.cc/paper_files/paper/2021/hash/4a08142c38dbe374195d41c04562d9f8-Abstract.html
Qingru Zhang, David Wipf, Quan Gan, Le Song
https://papers.nips.cc/paper_files/paper/2021/hash/4a08142c38dbe374195d41c04562d9f8-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12299-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4a08142c38dbe374195d41c04562d9f8-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=qpdc7sCpbi
https://papers.nips.cc/paper_files/paper/2021/file/4a08142c38dbe374195d41c04562d9f8-Supplemental.pdf
Graph neural networks (GNNs) have recently emerged as a vehicle for applying deep network architectures to graph and relational data. However, given the increasing size of industrial datasets, in many practical situations the message passing computations required for sharing information across GNN layers are no longer scalable. Although various sampling methods have been introduced to approximate full-graph training within a tractable budget, there remain unresolved complications such as high variance and limited theoretical guarantees. To address these issues, we build upon existing work and treat GNN neighbor sampling as a multi-armed bandit problem, but with a newly designed reward function that introduces some degree of bias in order to reduce variance and avoid unstable, possibly unbounded payouts. Unlike prior bandit-GNN use cases, the resulting policy leads to near-optimal regret while accounting for the GNN training dynamics introduced by SGD. From a practical standpoint, this translates into lower-variance estimates and competitive or superior test accuracy across several benchmarks.
null
Equilibrium Refinement for the Age of Machines: The One-Sided Quasi-Perfect Equilibrium
https://papers.nips.cc/paper_files/paper/2021/hash/4a3050ae2c77da4f9c90e2e58e8e520f-Abstract.html
Gabriele Farina, Tuomas Sandholm
https://papers.nips.cc/paper_files/paper/2021/hash/4a3050ae2c77da4f9c90e2e58e8e520f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12300-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4a3050ae2c77da4f9c90e2e58e8e520f-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=iVL-2vJsy4e
https://papers.nips.cc/paper_files/paper/2021/file/4a3050ae2c77da4f9c90e2e58e8e520f-Supplemental.pdf
In two-player zero-sum extensive-form games, Nash equilibrium prescribes optimal strategies against perfectly rational opponents. However, it does not guarantee rational play in parts of the game tree that can only be reached by the players making mistakes. This can be problematic when operationalizing equilibria in the real world among imperfect players. Trembling-hand refinements are a sound remedy to this issue, and are subsets of Nash equilibria that are designed to handle the possibility that any of the players may make mistakes. In this paper, we initiate the study of equilibrium refinements for settings where one of the players is perfectly rational (the ``machine'') and the other may make mistakes. As we show, this endeavor has many pitfalls: many intuitively appealing approaches to refinement fail in various ways. On the positive side, we introduce a modification of the classical quasi-perfect equilibrium (QPE) refinement, which we call the one-sided quasi-perfect equilibrium. Unlike QPE, one-sided QPE only accounts for mistakes from one player and assumes that no mistakes will be made by the machine. We present experiments on standard benchmark games and an endgame from the famous man-machine match where the AI Libratus was the first to beat top human specialist professionals in heads-up no-limit Texas hold'em poker. We show that one-sided QPE can be computed more efficiently than all known prior refinements, paving the way to wider adoption of Nash equilibrium refinements in settings with perfectly rational machines (or humans perfectly actuating machine-generated strategies) that interact with players prone to mistakes. We also show that one-sided QPE tends to play better than a Nash equilibrium strategy against imperfect opponents.
null
Interpreting Representation Quality of DNNs for 3D Point Cloud Processing
https://papers.nips.cc/paper_files/paper/2021/hash/4a3e00961a08879c34f91ca0070ea2f5-Abstract.html
Wen Shen, Qihan Ren, Dongrui Liu, Quanshi Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/4a3e00961a08879c34f91ca0070ea2f5-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12301-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4a3e00961a08879c34f91ca0070ea2f5-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=QMG2bzvk5HV
https://papers.nips.cc/paper_files/paper/2021/file/4a3e00961a08879c34f91ca0070ea2f5-Supplemental.pdf
In this paper, we evaluate the quality of knowledge representations encoded in deep neural networks (DNNs) for 3D point cloud processing. We propose a method to disentangle the overall model vulnerability into sensitivity to rotation, translation, scale, and local 3D structures. We also propose metrics to evaluate the spatial smoothness of the encoded 3D structures and the representation complexity of the DNN. Based on this analysis, experiments expose representation problems in classic DNNs and explain the utility of adversarial training. The code will be released when this paper is accepted.
null
How Fine-Tuning Allows for Effective Meta-Learning
https://papers.nips.cc/paper_files/paper/2021/hash/4a533591763dfa743a13affab1a85793-Abstract.html
Kurtland Chua, Qi Lei, Jason D. Lee
https://papers.nips.cc/paper_files/paper/2021/hash/4a533591763dfa743a13affab1a85793-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12302-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4a533591763dfa743a13affab1a85793-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=-KGLlWv6kIc
https://papers.nips.cc/paper_files/paper/2021/file/4a533591763dfa743a13affab1a85793-Supplemental.zip
Representation learning has served as a key tool for meta-learning, enabling rapid learning of new tasks. Recent works like MAML learn task-specific representations by finding an initial representation requiring minimal per-task adaptation (i.e. a fine-tuning-based objective). We present a theoretical framework for analyzing a MAML-like algorithm, assuming all available tasks require approximately the same representation. We then provide risk bounds on predictors found by fine-tuning via gradient descent, demonstrating that the method provably leverages the shared structure. We illustrate these bounds in the logistic regression and neural network settings. In contrast, we establish settings where learning one representation for all tasks (i.e. using a "frozen representation" objective) fails. Notably, any such algorithm cannot outperform directly learning the target task with no other information, in the worst case. This separation underscores the benefit of fine-tuning-based over “frozen representation” objectives in few-shot learning.
null
Cooperative Stochastic Bandits with Asynchronous Agents and Constrained Feedback
https://papers.nips.cc/paper_files/paper/2021/hash/4a5876b450b45371f6cfe5047ac8cd45-Abstract.html
Lin Yang, Yu-Zhen Janice Chen, Stephen Pasteris, Mohammad Hajiesmaili, John C. S. Lui, Don Towsley
https://papers.nips.cc/paper_files/paper/2021/hash/4a5876b450b45371f6cfe5047ac8cd45-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12303-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4a5876b450b45371f6cfe5047ac8cd45-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=IEniJ8TiV1
https://papers.nips.cc/paper_files/paper/2021/file/4a5876b450b45371f6cfe5047ac8cd45-Supplemental.pdf
This paper studies a cooperative multi-armed bandit problem with $M$ agents cooperating to solve the same instance of a $K$-armed stochastic bandit problem, with the goal of maximizing the cumulative reward of the agents. The agents are heterogeneous in (i) their limited access to a local subset of arms; and (ii) their decision-making rounds, i.e., agents are asynchronous with different decision-making gaps. The goal is to find the globally optimal arm, and agents are able to pull any arm; however, they observe the reward only when the selected arm is local. The challenge is a tradeoff for agents between pulling a local arm with the possibility of observing the feedback, or relying on the observations of other agents, which might occur at different rates. Naive extensions of traditional algorithms lead to arbitrarily poor regret as a function of the aggregate action frequency of any $\textit{suboptimal}$ arm located at slow agents. We resolve this issue by proposing a novel two-stage learning algorithm, called the $\texttt{CO-LCB}$ algorithm, whose regret is a function of the aggregate action frequency of agents containing the $\textit{optimal}$ arm. We also show that the regret of $\texttt{CO-LCB}$ matches the regret lower bound up to a small factor.
null
Multiple Descent: Design Your Own Generalization Curve
https://papers.nips.cc/paper_files/paper/2021/hash/4ae67a7dd7e491f8fb6f9ea0cf25dfdb-Abstract.html
Lin Chen, Yifei Min, Mikhail Belkin, Amin Karbasi
https://papers.nips.cc/paper_files/paper/2021/hash/4ae67a7dd7e491f8fb6f9ea0cf25dfdb-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12304-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4ae67a7dd7e491f8fb6f9ea0cf25dfdb-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=rh0vIXw6i33
https://papers.nips.cc/paper_files/paper/2021/file/4ae67a7dd7e491f8fb6f9ea0cf25dfdb-Supplemental.pdf
This paper explores the generalization loss of linear regression in variably parameterized families of models, both under-parameterized and over-parameterized. We show that the generalization curve can have an arbitrary number of peaks, and moreover, the locations of those peaks can be explicitly controlled. Our results highlight the fact that both the classical U-shaped generalization curve and the recently observed double descent curve are not intrinsic properties of the model family. Instead, their emergence is due to the interaction between the properties of the data and the inductive biases of learning algorithms.
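A quick numpy experiment reproducing the familiar double-descent shape for min-norm least squares, with test error peaking near the interpolation threshold $p = n$; this is the standard setting the paper generalizes, not the paper's specific constructions:

```python
# Double descent for min-norm least squares: test error of the pseudoinverse
# solution peaks near p = n (the interpolation threshold) as the number of
# features p is varied. Illustrates the kind of generalization curve the paper
# shows can be shaped almost arbitrarily.
import numpy as np

rng = np.random.default_rng(0)
n, d_total, n_test, noise = 40, 200, 1000, 0.5
w_star = rng.normal(size=d_total) / np.sqrt(d_total)
X, Xte = rng.normal(size=(n, d_total)), rng.normal(size=(n_test, d_total))
y = X @ w_star + noise * rng.normal(size=n)
yte = Xte @ w_star + noise * rng.normal(size=n_test)

for p in [5, 20, 35, 40, 45, 80, 200]:
    w_hat = np.linalg.pinv(X[:, :p]) @ y           # min-norm least squares
    err = np.mean((Xte[:, :p] @ w_hat - yte) ** 2)
    print(f"p = {p:3d}  test MSE = {err:8.3f}")
```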
null
On Empirical Risk Minimization with Dependent and Heavy-Tailed Data
https://papers.nips.cc/paper_files/paper/2021/hash/4afa19649ae378da31a423bcd78a97c8-Abstract.html
Abhishek Roy, Krishnakumar Balasubramanian, Murat A. Erdogdu
https://papers.nips.cc/paper_files/paper/2021/hash/4afa19649ae378da31a423bcd78a97c8-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12305-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4afa19649ae378da31a423bcd78a97c8-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=Tzkev89HeLZ
https://papers.nips.cc/paper_files/paper/2021/file/4afa19649ae378da31a423bcd78a97c8-Supplemental.pdf
In this work, we establish risk bounds for Empirical Risk Minimization (ERM) with both dependent and heavy-tailed data-generating processes. We do so by extending the seminal works~\cite{pmlr-v35-mendelson14, mendelson2018learning} on the analysis of ERM with heavy-tailed but independent and identically distributed observations, to the strictly stationary exponentially $\beta$-mixing case. We allow for the interaction between the noise and inputs to be even polynomially heavy-tailed, which covers a significantly large class of heavy-tailed models beyond what is analyzed in the learning theory literature. We illustrate our theoretical results by obtaining rates of convergence for high-dimensional linear regression with dependent and heavy-tailed data.
null
Gone Fishing: Neural Active Learning with Fisher Embeddings
https://papers.nips.cc/paper_files/paper/2021/hash/4afe044911ed2c247005912512ace23b-Abstract.html
Jordan Ash, Surbhi Goel, Akshay Krishnamurthy, Sham Kakade
https://papers.nips.cc/paper_files/paper/2021/hash/4afe044911ed2c247005912512ace23b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12306-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4afe044911ed2c247005912512ace23b-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=DHnThtAyoPj
https://papers.nips.cc/paper_files/paper/2021/file/4afe044911ed2c247005912512ace23b-Supplemental.zip
There is an increasing need for effective active learning algorithms that are compatible with deep neural networks. This paper motivates and revisits a classic, Fisher-based active selection objective, and proposes BAIT, a practical, tractable, and high-performing algorithm that makes it viable for use with neural models. BAIT draws inspiration from the theoretical analysis of maximum likelihood estimators (MLE) for parametric models. It selects batches of samples by optimizing a bound on the MLE error in terms of the Fisher information, which we show can be implemented efficiently at scale by exploiting linear-algebraic structure especially amenable to execution on modern hardware. Our experiments demonstrate that BAIT outperforms the previous state of the art on both classification and regression problems, and is flexible enough to be used with a variety of model architectures.
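A greedy sketch of a Fisher-trace selection objective in the spirit of BAIT, operating on assumed last-layer embeddings: each new point is chosen to most reduce tr((lambda*I + sum_selected x x^T)^{-1} F_pool); the sizes, embeddings, and regularizer are illustrative, and the paper's actual algorithm uses a more scalable implementation:

```python
# Greedy sketch of a Fisher-style batch selection rule: with (last-layer)
# embeddings x_i, pick points whose addition most reduces
# tr((lambda*I + sum_selected x x^T)^{-1} * F_pool), where F_pool is the pooled
# Fisher information. Embeddings and sizes are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)
N, d, batch = 300, 16, 10
E = rng.normal(size=(N, d))                      # candidate embeddings
F_pool = E.T @ E / N                             # pooled Fisher (up to scaling)

selected, A = [], 1e-2 * np.eye(d)
for _ in range(batch):
    best, best_obj = None, np.inf
    for i in range(N):
        if i in selected:
            continue
        Ai = A + np.outer(E[i], E[i])
        obj = np.trace(np.linalg.solve(Ai, F_pool))
        if obj < best_obj:
            best, best_obj = i, obj
    selected.append(best)
    A += np.outer(E[best], E[best])

print("selected indices:", selected)
```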
null
On Riemannian Optimization over Positive Definite Matrices with the Bures-Wasserstein Geometry
https://papers.nips.cc/paper_files/paper/2021/hash/4b04b0dcd2ade339a3d7ce13252a29d4-Abstract.html
Andi Han, Bamdev Mishra, Pratik Kumar Jawanpuria, Junbin Gao
https://papers.nips.cc/paper_files/paper/2021/hash/4b04b0dcd2ade339a3d7ce13252a29d4-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12307-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4b04b0dcd2ade339a3d7ce13252a29d4-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=ZCHxGFmc62a
null
In this paper, we comparatively analyze the Bures-Wasserstein (BW) geometry with the popular Affine-Invariant (AI) geometry for Riemannian optimization on the symmetric positive definite (SPD) matrix manifold. Our study begins with an observation that the BW metric has a linear dependence on SPD matrices in contrast to the quadratic dependence of the AI metric. We build on this to show that the BW metric is a more suitable and robust choice for several Riemannian optimization problems over ill-conditioned SPD matrices. We show that the BW geometry has a non-negative curvature, which further improves convergence rates of algorithms over the non-positively curved AI geometry. Finally, we verify that several popular cost functions, which are known to be geodesic convex under the AI geometry, are also geodesic convex under the BW geometry. Extensive experiments on various applications support our findings.
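For concreteness, the two closed-form distances being compared, evaluated on a pair of ill-conditioned SPD matrices; the formulas are standard, and the random matrices are a toy example:

```python
# Comparing the Bures-Wasserstein and Affine-Invariant distances between two
# SPD matrices using their standard closed forms.
import numpy as np
from scipy.linalg import sqrtm, logm

rng = np.random.default_rng(0)

def random_spd(d, cond=1e3):
    Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    eigs = np.geomspace(1.0, cond, d)             # ill-conditioned spectrum
    return Q @ np.diag(eigs) @ Q.T

A, B = random_spd(5), random_spd(5)

# Bures-Wasserstein: d^2 = tr(A) + tr(B) - 2 tr((A^{1/2} B A^{1/2})^{1/2})
A_half = np.real(sqrtm(A))
bw2 = np.trace(A) + np.trace(B) - 2 * np.trace(np.real(sqrtm(A_half @ B @ A_half)))

# Affine-Invariant: d = || log(A^{-1/2} B A^{-1/2}) ||_F
A_inv_half = np.linalg.inv(A_half)
ai = np.linalg.norm(np.real(logm(A_inv_half @ B @ A_inv_half)), "fro")

print("BW distance:", np.sqrt(max(bw2, 0.0)), " AI distance:", ai)
```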
null
Refining Language Models with Compositional Explanations
https://papers.nips.cc/paper_files/paper/2021/hash/4b26dc4663ccf960c8538d595d0a1d3a-Abstract.html
Huihan Yao, Ying Chen, Qinyuan Ye, Xisen Jin, Xiang Ren
https://papers.nips.cc/paper_files/paper/2021/hash/4b26dc4663ccf960c8538d595d0a1d3a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12308-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4b26dc4663ccf960c8538d595d0a1d3a-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=dkw9OQMn1t
https://papers.nips.cc/paper_files/paper/2021/file/4b26dc4663ccf960c8538d595d0a1d3a-Supplemental.pdf
Pre-trained language models have been successful on text classification tasks, but are prone to learning spurious correlations from biased datasets, and are thus vulnerable when making inferences in a new domain. Prior work reveals such spurious patterns via post-hoc explanation algorithms which compute the importance of input features. Further, the model is regularized to align the importance scores with human knowledge, so that the unintended model behaviors are eliminated. However, such a regularization technique lacks flexibility and coverage, since only importance scores towards a pre-defined list of features are adjusted, while more complex human knowledge such as feature interaction and pattern generalization can hardly be incorporated. In this work, we propose to refine a learned language model for a target domain by collecting human-provided compositional explanations regarding observed biases. By parsing these explanations into executable logic rules, the human-specified refinement advice from a small set of explanations can be generalized to more training examples. We additionally introduce a regularization term allowing adjustments for both importance and interaction of features to better rectify model behavior. We demonstrate the effectiveness of the proposed approach on two text classification tasks by showing improved performance in target domain as well as improved model fairness after refinement.
null
Going Beyond Linear RL: Sample Efficient Neural Function Approximation
https://papers.nips.cc/paper_files/paper/2021/hash/4b4edc2630fe75800ddc29a7b4070add-Abstract.html
Baihe Huang, Kaixuan Huang, Sham Kakade, Jason D. Lee, Qi Lei, Runzhe Wang, Jiaqi Yang
https://papers.nips.cc/paper_files/paper/2021/hash/4b4edc2630fe75800ddc29a7b4070add-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12309-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4b4edc2630fe75800ddc29a7b4070add-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=9S7jZvhS7SP
https://papers.nips.cc/paper_files/paper/2021/file/4b4edc2630fe75800ddc29a7b4070add-Supplemental.pdf
Deep Reinforcement Learning (RL) powered by neural net approximation of the Q function has had enormous empirical success. While the theory of RL has traditionally focused on linear function approximation (or eluder dimension) approaches, little is known about nonlinear RL with neural net approximations of the Q functions. This is the focus of this work, where we study function approximation with two-layer neural networks (considering both ReLU and polynomial activation functions). Our first result is a computationally and statistically efficient algorithm in the generative model setting under completeness for two-layer neural networks. Our second result considers this setting but under only realizability of the neural net function class. Here, assuming deterministic dynamics, the sample complexity scales linearly in the algebraic dimension. In all cases, our results significantly improve upon what can be attained with linear (or eluder dimension) methods.
null
Scalable Neural Data Server: A Data Recommender for Transfer Learning
https://papers.nips.cc/paper_files/paper/2021/hash/4b55df75e2e804bab559aa885be40310-Abstract.html
Tianshi Cao, Sasha (Alexandre) Doubov, David Acuna, Sanja Fidler
https://papers.nips.cc/paper_files/paper/2021/hash/4b55df75e2e804bab559aa885be40310-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12310-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4b55df75e2e804bab559aa885be40310-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=NEQYGJr1qL3
https://papers.nips.cc/paper_files/paper/2021/file/4b55df75e2e804bab559aa885be40310-Supplemental.pdf
Absence of large-scale labeled data in the practitioner's target domain can be a bottleneck to applying machine learning algorithms in practice. Transfer learning is a popular strategy for leveraging additional data to improve the downstream performance, but finding the most relevant data to transfer from can be challenging. Neural Data Server (NDS), a search engine that recommends relevant data for a given downstream task, has been previously proposed to address this problem (Yan et al., 2020). NDS uses a mixture of experts trained on data sources to estimate similarity between each source and the downstream task. Thus, the computational cost to each user grows with the number of sources and requires an expensive training step for each data provider. To address these issues, we propose Scalable Neural Data Server (SNDS), a large-scale search engine that can theoretically index thousands of datasets to serve relevant ML data to end users. SNDS trains the mixture of experts on intermediary datasets during initialization, and represents both data sources and downstream tasks by their proximity to the intermediary datasets. As such, computational cost incurred by users of SNDS remains fixed as new datasets are added to the server, without pre-training for the data providers. We validate SNDS on a plethora of real-world tasks and find that data recommended by SNDS improves downstream task performance over baselines. We also demonstrate the scalability of our system by demonstrating its ability to select relevant data for transfer outside of the natural image setting.
null
What can linearized neural networks actually say about generalization?
https://papers.nips.cc/paper_files/paper/2021/hash/4b5deb9a14d66ab0acc3b8a2360cde7c-Abstract.html
Guillermo Ortiz-Jimenez, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
https://papers.nips.cc/paper_files/paper/2021/hash/4b5deb9a14d66ab0acc3b8a2360cde7c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12311-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4b5deb9a14d66ab0acc3b8a2360cde7c-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=KLS346_Asf
https://papers.nips.cc/paper_files/paper/2021/file/4b5deb9a14d66ab0acc3b8a2360cde7c-Supplemental.pdf
For certain infinitely-wide neural networks, the neural tangent kernel (NTK) theory fully characterizes generalization, but for the networks used in practice, the empirical NTK only provides a rough first-order approximation. Still, a growing body of work keeps leveraging this approximation to successfully analyze important deep learning phenomena and design algorithms for new applications. In our work, we provide strong empirical evidence to determine the practical validity of such approximation by conducting a systematic comparison of the behavior of different neural networks and their linear approximations on different tasks. We show that the linear approximations can indeed rank the learning complexity of certain tasks for neural networks, even when they achieve very different performances. However, in contrast to what was previously reported, we discover that neural networks do not always perform better than their kernel approximations, and reveal that the performance gap heavily depends on architecture, dataset size and training task. We discover that networks overfit to these tasks mostly due to the evolution of their kernel during training, thus, revealing a new type of implicit bias.
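A small numpy illustration of the empirical NTK as the Gram matrix of parameter gradients, K(x, x') = <df(x)/dθ, df(x')/dθ>, for a one-hidden-layer ReLU network; the network and inputs are toy stand-ins for the architectures studied in the paper:

```python
# Computing the empirical NTK of a tiny one-hidden-layer ReLU network by hand:
# K(x, x') = <df(x)/dtheta, df(x')/dtheta>, the first-order object whose
# predictive validity the paper examines.
import numpy as np

rng = np.random.default_rng(0)
d, m = 3, 64                                      # input dim, hidden width
W1 = rng.normal(size=(m, d)) / np.sqrt(d)
w2 = rng.normal(size=m) / np.sqrt(m)

def param_gradient(x):
    pre = W1 @ x
    act = np.maximum(pre, 0.0)
    dW1 = np.outer(w2 * (pre > 0), x)             # df/dW1
    dw2 = act                                     # df/dw2
    return np.concatenate([dW1.ravel(), dw2])

X = rng.normal(size=(5, d))
J = np.stack([param_gradient(x) for x in X])      # Jacobian, one row per input
K = J @ J.T                                       # empirical NTK Gram matrix
print(np.round(K, 3))
```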
null
CATs: Cost Aggregation Transformers for Visual Correspondence
https://papers.nips.cc/paper_files/paper/2021/hash/4b6538a44a1dfdc2b83477cd76dee98e-Abstract.html
Seokju Cho, Sunghwan Hong, Sangryul Jeon, Yunsung Lee, Kwanghoon Sohn, Seungryong Kim
https://papers.nips.cc/paper_files/paper/2021/hash/4b6538a44a1dfdc2b83477cd76dee98e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12312-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4b6538a44a1dfdc2b83477cd76dee98e-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=eVuMspr9cu5
https://papers.nips.cc/paper_files/paper/2021/file/4b6538a44a1dfdc2b83477cd76dee98e-Supplemental.pdf
We propose a novel cost aggregation network, called Cost Aggregation Transformers (CATs), to find dense correspondences between semantically similar images under the additional challenges posed by large intra-class appearance and geometric variations. Cost aggregation is a highly important process in matching tasks, and matching accuracy depends on the quality of its output. Compared to hand-crafted or CNN-based methods for cost aggregation, which either lack robustness to severe deformations or inherit the limited receptive fields of CNNs and thus fail to discriminate incorrect matches, CATs explore global consensus among the initial correlation map with the help of several architectural designs that allow us to fully leverage the self-attention mechanism. Specifically, we include appearance affinity modeling to aid the cost aggregation process and disambiguate the noisy initial correlation maps, and propose multi-level aggregation to efficiently capture different semantics from hierarchical feature representations. We then combine these with a swapping self-attention technique and residual connections, not only to enforce consistent matching but also to ease the learning process, which we find results in an apparent performance boost. We conduct experiments to demonstrate the effectiveness of the proposed model over the latest methods and provide extensive ablation studies. Code and trained models are available at https://sunghwanhong.github.io/CATs/.
null
Asynchronous Stochastic Optimization Robust to Arbitrary Delays
https://papers.nips.cc/paper_files/paper/2021/hash/4b85256c4881edb6c0776df5d81f6236-Abstract.html
Alon Cohen, Amit Daniely, Yoel Drori, Tomer Koren, Mariano Schain
https://papers.nips.cc/paper_files/paper/2021/hash/4b85256c4881edb6c0776df5d81f6236-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12313-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4b85256c4881edb6c0776df5d81f6236-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=5tSmnxXb0cx
null
We consider the problem of stochastic optimization with delayed gradients in which, at each time step $t$, the algorithm makes an update using a stale stochastic gradient from step $t - d_t$ for some arbitrary delay $d_t$. This setting abstracts asynchronous distributed optimization where a central server receives gradient updates computed by worker machines. These machines can experience computation and communication loads that might vary significantly over time. In the general non-convex smooth optimization setting, we give a simple and efficient algorithm that requires $O( \sigma^2/\epsilon^4 + \tau/\epsilon^2 )$ steps for finding an $\epsilon$-stationary point $x$. Here, $\tau$ is the \emph{average} delay $\frac{1}{T}\sum_{t=1}^T d_t$ and $\sigma^2$ is the variance of the stochastic gradients. This improves over previous work, which showed that stochastic gradient descent achieves the same rate but with respect to the \emph{maximal} delay $\max_{t} d_t$, which can be significantly larger than the average delay, especially in heterogeneous distributed systems. Our experiments demonstrate the efficacy and robustness of our algorithm in cases where the delay distribution is skewed or heavy-tailed.
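A toy simulation of SGD with arbitrary delays: the update at step t applies stochastic gradients that were computed d_t steps earlier; the quadratic objective and the heavy-tailed delay law are illustrative, not the paper's algorithm or assumptions:

```python
# Toy simulation of SGD with arbitrary (here heavy-tailed) gradient delays on a
# smooth objective: gradients are computed at the current iterate but only
# applied once they "arrive" d_t steps later, i.e. updates use stale gradients.
import numpy as np

rng = np.random.default_rng(0)
d = 20
x = np.ones(d)
lr, sigma = 0.05, 0.1

def stochastic_grad(x):
    return x + sigma * rng.normal(size=d)         # grad of 0.5*||x||^2 plus noise

buffer = []                                       # (arrival_step, gradient)
for t in range(2000):
    delay = int(rng.pareto(2.0))                  # skewed / heavy-tailed delays
    buffer.append((t + delay, stochastic_grad(x)))
    arrived = [g for (s, g) in buffer if s <= t]
    buffer = [(s, g) for (s, g) in buffer if s > t]
    for g in arrived:                             # apply whatever has arrived
        x = x - lr * g

print("final ||x||:", np.linalg.norm(x))
```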
null
Consistent Non-Parametric Methods for Maximizing Robustness
https://papers.nips.cc/paper_files/paper/2021/hash/4bb236de7787ceedafdff83bb8ea4710-Abstract.html
Robi Bhattacharjee, Kamalika Chaudhuri
https://papers.nips.cc/paper_files/paper/2021/hash/4bb236de7787ceedafdff83bb8ea4710-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12314-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4bb236de7787ceedafdff83bb8ea4710-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=vAMh-dcNMcR
https://papers.nips.cc/paper_files/paper/2021/file/4bb236de7787ceedafdff83bb8ea4710-Supplemental.pdf
Learning classifiers that are robust to adversarial examples has received a great deal of recent attention. A major drawback of the standard robust learning framework is the imposition of an artificial robustness radius $r$ that applies to all inputs and ignores the fact that data may be highly heterogeneous. In particular, it is plausible that robustness regions should be larger in some regions of the data and smaller in others. In this paper, we address this limitation by proposing a new limit classifier, called the neighborhood optimal classifier, that extends the Bayes optimal classifier outside its support by using the label of the closest in-support point. We argue that this classifier maximizes the size of its robustness regions subject to the constraint of having accuracy equal to the Bayes optimal. We then present sufficient conditions under which general non-parametric methods that can be represented as weight functions converge towards this limit object, and show that both nearest neighbors and kernel classifiers (under certain assumptions) suffice.
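A minimal sketch of the neighborhood optimal classifier's defining rule: predict with the Bayes rule inside the support and with the label of the closest in-support point outside it; the data, support, and Bayes rule here are toy assumptions:

```python
# Sketch of the "neighborhood optimal" rule: inside the support, predict with
# the (here known) Bayes rule; outside, copy the label of the closest
# in-support point. Data, support, and Bayes rule are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)
support = rng.uniform(-1, 1, size=(500, 2))               # in-support points
bayes = (support[:, 0] + support[:, 1] > 0).astype(int)   # toy Bayes labels

def neighborhood_optimal(query):
    inside = np.all(np.abs(query) <= 1.0)
    if inside:
        return int(query[0] + query[1] > 0)        # Bayes-optimal inside support
    dists = np.linalg.norm(support - query, axis=1)
    return int(bayes[np.argmin(dists)])            # nearest in-support label

print(neighborhood_optimal(np.array([0.2, 0.3])))  # in-support query
print(neighborhood_optimal(np.array([3.0, -0.5]))) # out-of-support query
```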
null
Generalizable Multi-linear Attention Network
https://papers.nips.cc/paper_files/paper/2021/hash/4bbdcc0e821637155ac4217bdab70d2e-Abstract.html
Tao Jin, Zhou Zhao
https://papers.nips.cc/paper_files/paper/2021/hash/4bbdcc0e821637155ac4217bdab70d2e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12315-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4bbdcc0e821637155ac4217bdab70d2e-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=dZ33IBX-uRm
https://papers.nips.cc/paper_files/paper/2021/file/4bbdcc0e821637155ac4217bdab70d2e-Supplemental.pdf
The majority of existing multimodal sequential learning methods focus on how to obtain effective representations and ignore the importance of multimodal fusion. The bilinear attention network (BAN) is a commonly used fusion method, which leverages tensor operations to associate the features of different modalities. However, BAN has poor compatibility with more modalities, since the computational complexity of the attention map increases exponentially with the number of modalities. Motivated by this concern, we propose a new method called the generalizable multi-linear attention network (MAN), which can associate as many modalities as possible in linear complexity with hierarchical approximation decomposition (HAD). Besides, considering the fact that softmax attention kernels cannot be decomposed as linear operations directly, we adopt the addition random features (ARF) mechanism to approximate the non-linear softmax functions, with sufficient theoretical analysis. We conduct extensive experiments on four datasets covering three tasks (multimodal sentiment analysis, multimodal speaker traits recognition, and video retrieval); the experimental results show that MAN achieves competitive results compared with the state-of-the-art methods, showcasing the effectiveness of the approximation decomposition and the addition random features mechanism.
null
Labeling Trick: A Theory of Using Graph Neural Networks for Multi-Node Representation Learning
https://papers.nips.cc/paper_files/paper/2021/hash/4be49c79f233b4f4070794825c323733-Abstract.html
Muhan Zhang, Pan Li, Yinglong Xia, Kai Wang, Long Jin
https://papers.nips.cc/paper_files/paper/2021/hash/4be49c79f233b4f4070794825c323733-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12316-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4be49c79f233b4f4070794825c323733-Paper.pdf
https://openreview.net/forum?id=Hcr9mgBG6ds
https://papers.nips.cc/paper_files/paper/2021/file/4be49c79f233b4f4070794825c323733-Supplemental.pdf
In this paper, we provide a theory of using graph neural networks (GNNs) for multi-node representation learning, where we are interested in learning a representation for a set of more than one node, such as a link. GNNs are designed to learn single-node representations. When we want to learn a representation of a node set involving multiple nodes, a common practice in previous works is to directly aggregate the single-node representations obtained by a GNN into a joint node set representation. In this paper, we show a fundamental limitation of such an approach, namely the inability to capture the dependence between nodes in the node set, and argue that directly aggregating individual node representations does not lead to an effective joint representation for multiple nodes. We then observe that several previous successful methods for multi-node representation learning, including SEAL, Distance Encoding, and ID-GNN, all use node labeling: they first label nodes in the graph according to their relationships with the target node set before applying a GNN, and then aggregate the node representations obtained in the labeled graph into a node set representation. By investigating their inner mechanisms, we unify these node labeling techniques into a single, most general form, the labeling trick. We prove that, with the labeling trick, a sufficiently expressive GNN learns the most expressive node set representations and thus in principle solves any joint learning task over node sets. Experiments on an important two-node representation learning task, link prediction, verify our theory. Our work explains the superior performance of previous node-labeling-based methods and establishes a theoretical foundation for using GNNs in multi-node representation learning.
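A minimal sketch of the simplest (zero-one) instance of the labeling trick described above: mark the target nodes with an indicator feature, run a GNN on the labeled graph, then aggregate only the target nodes. The message-passing model `gnn_forward` is a placeholder, not the authors' implementation.

import numpy as np

def apply_labeling_trick(node_feats, target_set):
    """Append a 0/1 indicator marking the nodes whose joint representation we want."""
    labels = np.zeros((node_feats.shape[0], 1))
    labels[list(target_set)] = 1.0
    return np.concatenate([node_feats, labels], axis=1)

def node_set_representation(adj, node_feats, target_set, gnn_forward):
    labeled_feats = apply_labeling_trick(node_feats, target_set)
    node_reprs = gnn_forward(adj, labeled_feats)      # run the GNN on the *labeled* graph
    return node_reprs[list(target_set)].sum(axis=0)   # aggregate only the target nodes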
null
SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients
https://papers.nips.cc/paper_files/paper/2021/hash/4be5a36cbaca8ab9d2066debfe4e65c1-Abstract.html
Feihu Huang, Junyi Li, Heng Huang
https://papers.nips.cc/paper_files/paper/2021/hash/4be5a36cbaca8ab9d2066debfe4e65c1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12317-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4be5a36cbaca8ab9d2066debfe4e65c1-Paper.pdf
https://openreview.net/forum?id=nFdJSm9dy83
https://papers.nips.cc/paper_files/paper/2021/file/4be5a36cbaca8ab9d2066debfe4e65c1-Supplemental.pdf
Adaptive gradient methods have shown excellent performance on many machine learning problems. Although multiple adaptive gradient methods have been studied recently, they mainly focus on either empirical or theoretical aspects and often work only for specific problems with specific adaptive learning rates. It is therefore desirable to design a universal framework for practical adaptive gradient algorithms with theoretical guarantees for general problems. To fill this gap, we propose a faster and universal framework of adaptive gradients (i.e., SUPER-ADAM) by introducing a universal adaptive matrix that includes most existing adaptive gradient forms. Moreover, our framework can flexibly integrate momentum and variance-reduction techniques. In particular, our novel framework provides convergence analysis support for adaptive gradient methods in the nonconvex setting. Theoretically, we prove that our SUPER-ADAM algorithm achieves the best-known gradient (i.e., stochastic first-order oracle (SFO)) complexity of $\tilde{O}(\epsilon^{-3})$ for finding an $\epsilon$-stationary point of nonconvex optimization, which matches the lower bound for stochastic smooth nonconvex optimization. In numerical experiments, we employ various deep learning tasks to validate that our algorithm consistently outperforms existing adaptive algorithms. Code is available at https://github.com/LIJUNYI95/SuperAdam
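A sketch of the generic template the abstract describes: a momentum-style gradient estimate preconditioned by a pluggable (here diagonal) adaptive matrix. The exact SUPER-ADAM update rules are in the paper and repository; `adam_style_diag` is just one illustrative choice of adaptive matrix.

import numpy as np

def adam_style_diag(v, grad, beta2=0.999, eps=1e-8):
    v = beta2 * v + (1 - beta2) * grad**2
    return v, np.sqrt(v) + eps               # diagonal of the adaptive matrix H_t

def adaptive_step(x, m, v, grad, lr=1e-3, beta1=0.9):
    m = beta1 * m + (1 - beta1) * grad        # momentum estimate of the gradient
    v, H_diag = adam_style_diag(v, grad)      # swap in any other adaptive matrix here
    return x - lr * m / H_diag, m, v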
null
General Nonlinearities in SO(2)-Equivariant CNNs
https://papers.nips.cc/paper_files/paper/2021/hash/4bfbd52f4e8466dc12aaf30b7e057b66-Abstract.html
Daniel Franzen, Michael Wand
https://papers.nips.cc/paper_files/paper/2021/hash/4bfbd52f4e8466dc12aaf30b7e057b66-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12318-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4bfbd52f4e8466dc12aaf30b7e057b66-Paper.pdf
https://openreview.net/forum?id=PFBHMlpaWY
https://papers.nips.cc/paper_files/paper/2021/file/4bfbd52f4e8466dc12aaf30b7e057b66-Supplemental.zip
Invariance under symmetry is an important problem in machine learning. Our paper looks specifically at equivariant neural networks where transformations of inputs yield homomorphic transformations of outputs. Here, steerable CNNs have emerged as the standard solution. An inherent problem of steerable representations is that general nonlinear layers break equivariance, thus restricting architectural choices. Our paper applies harmonic distortion analysis to illuminate the effect of nonlinearities on Fourier representations of SO(2). We develop a novel FFT-based algorithm for computing representations of non-linearly transformed activations while maintaining band-limitation. It yields exact equivariance for polynomial (approximations of) nonlinearities, as well as approximate solutions with tunable accuracy for general functions. We apply the approach to build a fully E(3)-equivariant network for sampled 3D surface data. In experiments with 2D and 3D data, we obtain results that compare favorably to the state-of-the-art in terms of accuracy while permitting continuous symmetry and exact equivariance.
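A sketch of the FFT-based recipe described above, assuming a real-valued scalar feature on SO(2) stored as Fourier coefficients c_k for k = -B..B (so c_{-k} = conj(c_k)): oversample the signal on the circle, apply the pointwise nonlinearity in "angle space", transform back, and re-impose the band limit. Function and argument names are ours, not the authors'.

import numpy as np

def apply_nonlinearity_so2(coeffs, nonlin=lambda x: np.maximum(x, 0.0), oversample=4):
    B = (len(coeffs) - 1) // 2                      # band limit
    N = oversample * (2 * B + 1)                    # number of sample points on the circle
    spectrum = np.zeros(N, dtype=complex)
    for k in range(-B, B + 1):
        spectrum[k % N] = coeffs[k + B]
    samples = N * np.fft.ifft(spectrum)             # f(2*pi*n/N), real up to round-off
    transformed = nonlin(samples.real)              # pointwise nonlinearity on the samples
    new_spec = np.fft.fft(transformed) / N
    return np.array([new_spec[k % N] for k in range(-B, B + 1)])   # truncate to band limit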
null
Denoising Normalizing Flow
https://papers.nips.cc/paper_files/paper/2021/hash/4c07fe24771249c343e70c32289c1192-Abstract.html
Christian Horvat, Jean-Pascal Pfister
https://papers.nips.cc/paper_files/paper/2021/hash/4c07fe24771249c343e70c32289c1192-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12319-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4c07fe24771249c343e70c32289c1192-Paper.pdf
https://openreview.net/forum?id=F-H4oe3MXXI
https://papers.nips.cc/paper_files/paper/2021/file/4c07fe24771249c343e70c32289c1192-Supplemental.pdf
Normalizing flows (NFs) are expressive and tractable density estimation methods whenever the support of the density is diffeomorphic to the entire data space. However, real-world data sets typically live on (or very close to) low-dimensional manifolds, which challenges the applicability of standard NFs to real-world problems. Here we propose a novel method, called the Denoising Normalizing Flow (DNF), that estimates the density on the low-dimensional manifold while also learning the manifold itself. The DNF works in three steps. First, it inflates the manifold, making it diffeomorphic to the entire data space. Second, it learns an NF on the inflated manifold. Finally, it learns a denoising mapping, similarly to denoising autoencoders. The DNF relies on a single cost function and does not require alternating between a density estimation phase and a manifold learning phase, as is the case with other recent methods. Furthermore, we show that the DNF can learn meaningful low-dimensional representations from naturalistic images as well as generate high-quality samples.
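A toy illustration of the "inflation" idea only: data on a 1-D circle embedded in 2-D is convolved with Gaussian noise so its support becomes full-dimensional; a flow can then be fit to the inflated density and a denoiser maps samples back toward the manifold. This is a sketch of the first step under our own toy setup, not the authors' training pipeline.

import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=5000)
on_manifold = np.stack([np.cos(theta), np.sin(theta)], axis=1)               # support is 1-D
inflated = on_manifold + 0.05 * rng.standard_normal(on_manifold.shape)       # support is 2-D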
null
Attention over Learned Object Embeddings Enables Complex Visual Reasoning
https://papers.nips.cc/paper_files/paper/2021/hash/4c26774d852f62440fc746ea4cdd57f6-Abstract.html
David Ding, Felix Hill, Adam Santoro, Malcolm Reynolds, Matt Botvinick
https://papers.nips.cc/paper_files/paper/2021/hash/4c26774d852f62440fc746ea4cdd57f6-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12320-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4c26774d852f62440fc746ea4cdd57f6-Paper.pdf
https://openreview.net/forum?id=lHmhW2zmVN
https://papers.nips.cc/paper_files/paper/2021/file/4c26774d852f62440fc746ea4cdd57f6-Supplemental.pdf
Neural networks have achieved success in a wide array of perceptual tasks but often fail at tasks involving both perception and higher-level reasoning. On these more challenging tasks, bespoke approaches (such as modular symbolic components, independent dynamics models or semantic parsers) targeted towards that specific type of task have typically performed better. The downside to these targeted approaches, however, is that they can be more brittle than general-purpose neural networks, requiring significant modification or even redesign according to the particular task at hand. Here, we propose a more general neural-network-based approach to dynamic visual reasoning problems that obtains state-of-the-art performance on three different domains, in each case outperforming bespoke modular approaches tailored specifically to the task. Our method relies on learned object-centric representations, self-attention and self-supervised dynamics learning, and all three elements together are required for strong performance to emerge. The success of this combination suggests that there may be no need to trade off flexibility for performance on problems involving spatio-temporal or causal-style reasoning. With the right soft biases and learning objectives in a neural network we may be able to attain the best of both worlds.
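A minimal sketch of the central ingredient named above: self-attention applied over a set of per-frame object embeddings (slots). The slot extractor, dynamics model, and training objective are omitted; this only shows attention over object-centric representations, with shapes chosen by us.

import torch
import torch.nn as nn

num_objects, embed_dim = 8, 64
attn = nn.MultiheadAttention(embed_dim, num_heads=4, batch_first=True)

object_slots = torch.randn(2, num_objects, embed_dim)        # (batch, objects, features)
attended, weights = attn(object_slots, object_slots, object_slots)
print(attended.shape, weights.shape)   # torch.Size([2, 8, 64]) torch.Size([2, 8, 8])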
null
Differentially Private Federated Bayesian Optimization with Distributed Exploration
https://papers.nips.cc/paper_files/paper/2021/hash/4c27cea8526af8cfee3be5e183ac9605-Abstract.html
Zhongxiang Dai, Bryan Kian Hsiang Low, Patrick Jaillet
https://papers.nips.cc/paper_files/paper/2021/hash/4c27cea8526af8cfee3be5e183ac9605-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12321-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4c27cea8526af8cfee3be5e183ac9605-Paper.pdf
https://openreview.net/forum?id=aj8x18_Te9
null
Bayesian optimization (BO) has recently been extended to the federated learning (FL) setting by the federated Thompson sampling (FTS) algorithm, which has promising applications such as federated hyperparameter tuning. However, FTS is not equipped with a rigorous privacy guarantee, which is an important consideration in FL. Recent works have incorporated differential privacy (DP) into the training of deep neural networks through a general framework for adding DP to iterative algorithms. Following this general DP framework, our work here integrates DP into FTS to preserve user-level privacy. We also leverage the ability of this general DP framework to handle different parameter vectors, as well as the technique of local modeling for BO, to further improve the utility of our algorithm through distributed exploration (DE). The resulting differentially private FTS with DE (DP-FTS-DE) algorithm is endowed with theoretical guarantees for both privacy and utility and is amenable to interesting theoretical insights about the privacy-utility trade-off. We also use real-world experiments to show that DP-FTS-DE achieves high utility (competitive performance) with a strong privacy guarantee (small privacy loss) and induces a trade-off between privacy and utility.
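The abstract builds on a general recipe for adding DP to iterative algorithms; the standard primitive underlying such recipes is the Gaussian mechanism sketched here (clip a vector's norm, then add calibrated Gaussian noise). The exact DP-FTS-DE construction is in the paper; this is only the generic building block, with names chosen by us.

import numpy as np

def gaussian_mechanism(vec, clip_norm, noise_multiplier, rng):
    norm = np.linalg.norm(vec)
    clipped = vec * min(1.0, clip_norm / max(norm, 1e-12))   # bound the sensitivity
    noise = rng.standard_normal(vec.shape) * noise_multiplier * clip_norm
    return clipped + noise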
null
Differentiable Learning Under Triage
https://papers.nips.cc/paper_files/paper/2021/hash/4c4c937b67cc8d785cea1e42ccea185c-Abstract.html
Nastaran Okati, Abir De, Manuel Rodriguez
https://papers.nips.cc/paper_files/paper/2021/hash/4c4c937b67cc8d785cea1e42ccea185c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12322-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4c4c937b67cc8d785cea1e42ccea185c-Paper.pdf
https://openreview.net/forum?id=bdA60x7yG0T
https://papers.nips.cc/paper_files/paper/2021/file/4c4c937b67cc8d785cea1e42ccea185c-Supplemental.pdf
Multiple lines of evidence suggest that predictive models may benefit from algorithmic triage. Under algorithmic triage, a predictive model does not predict all instances but instead defers some of them to human experts. However, the interplay between the prediction accuracy of the model and the human experts under algorithmic triage is not well understood. In this work, we start by formally characterizing under which circumstances a predictive model may benefit from algorithmic triage. In doing so, we also demonstrate that models trained for full automation may be suboptimal under triage. Then, given any model and desired level of triage, we show that the optimal triage policy is a deterministic threshold rule: triage decisions are made by thresholding the difference between the model's and the human experts' errors on a per-instance level. Building upon these results, we introduce a practical gradient-based algorithm that is guaranteed to find a sequence of predictive models and triage policies of increasing performance. Experiments on a wide variety of supervised learning tasks using synthetic and real data from two important applications---content moderation and scientific discovery---illustrate our theoretical results and show that the models and triage policies provided by our gradient-based algorithm outperform those provided by several competitive baselines.
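A sketch of the threshold rule described above, assuming per-instance error estimates for the model and for the human expert are available (both hypothetical inputs here): defer exactly those instances where the model's error exceeds the human's by the most, up to the desired triage budget.

import numpy as np

def triage_policy(model_errors, human_errors, triage_fraction):
    advantage = model_errors - human_errors        # how much deferring would help, per instance
    budget = int(np.ceil(triage_fraction * len(advantage)))
    defer_idx = np.argsort(-advantage)[:budget]    # threshold = the budget-th largest gap
    defer = np.zeros(len(advantage), dtype=bool)
    defer[defer_idx] = True
    return defer                                   # True -> send the instance to the human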
null
ROI Maximization in Stochastic Online Decision-Making
https://papers.nips.cc/paper_files/paper/2021/hash/4c4ea5258ef3fb3fb1fc48fee9b4408c-Abstract.html
Nicolò Cesa-Bianchi, Tom Cesari, Yishay Mansour, Vianney Perchet
https://papers.nips.cc/paper_files/paper/2021/hash/4c4ea5258ef3fb3fb1fc48fee9b4408c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12323-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/4c4ea5258ef3fb3fb1fc48fee9b4408c-Paper.pdf
https://openreview.net/forum?id=euUgX7XM9j
https://papers.nips.cc/paper_files/paper/2021/file/4c4ea5258ef3fb3fb1fc48fee9b4408c-Supplemental.pdf
We introduce a novel theoretical framework for Return On Investment (ROI) maximization in repeated decision-making. Our setting is motivated by the use case of companies that regularly receive proposals for technological innovations and want to quickly decide whether they are worth implementing. We design an algorithm for learning ROI-maximizing decision-making policies over a sequence of innovation proposals. Our algorithm provably converges to an optimal policy in class $\Pi$ at a rate of order $\min\big\{1/(N\Delta^2), N^{-1/3}\big\}$, where $N$ is the number of innovations and $\Delta$ is the suboptimality gap in $\Pi$. A significant hurdle of our formulation, which sets it apart from other online learning problems such as bandits, is that running a policy does not provide an unbiased estimate of its performance.
null