Dataset columns:
- input: string (lengths 286 to 19k)
- output: string (lengths 1 to 15.8k)
- metadata: dict
- _instance_id: string (lengths 15 to 62)
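For orientation, here is a minimal sketch of how rows with this schema could be read back programmatically. It assumes the dump has been exported as a JSON Lines file with one record per line carrying the four fields above; the file name and the JSONL layout are illustrative assumptions, not something specified by the dump itself.

```python
import json

# Minimal sketch of reading records with the schema above
# (input: str, output: str, metadata: dict, _instance_id: str).
# The file name "scitldr_aic_train.jsonl" and the JSON Lines layout
# are assumptions for illustration, not specified by this dump.
with open("scitldr_aic_train.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        paper_text = record["input"]          # prompt plus abstract/intro/conclusion
        tldr = record["output"]               # one-sentence gold summary
        meta = record["metadata"]             # e.g. domains, task_family
        instance_id = record["_instance_id"]  # e.g. "scitldr_aic:train:438"
        print(f"{instance_id}: {tldr[:80]}")
```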
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: In this paper, we introduce Random Path Generative Adversarial Network (RPGAN) --- an alternative scheme of GANs that can serve as a tool for generative model analysis. While the latent space of a typical GAN consists of input vectors, randomly sampled from the standard Gaussian distribution, the latent space of RPGAN consists of random paths in a generator network. As we show, this design allows associating different layers of the generator with different regions of the latent space, making them naturally interpretable. With experiments on standard benchmarks, we demonstrate that RPGAN reveals several interesting insights about the roles that different layers play in the image generation process. Aside from interpretability, the RPGAN model also provides competitive generation quality and allows efficient incremental learning on new data. Nowadays, deep generative models are an active research direction in the machine learning community. The dominant methods for generative modeling, such as Generative Adversarial Networks (GANs), are currently able to produce diverse photorealistic images (Brock et al., 2019; Karras et al., 2019). These methods are not only popular among academics, but are also a crucial component in a wide range of applications, including image editing (Isola et al., 2017), super-resolution (Ledig et al., 2017), video generation (Wang et al., 2018), and many others. Along with practical importance, a key benefit of accurate generative models is a more complete understanding of the internal structure of the data. Insights about the data generation process can result both in the development of new machine learning techniques and in advances in industrial applications. However, most state-of-the-art generative models employ deep multi-layer architectures, which are difficult to interpret or explain. While many works investigate the interpretability of discriminative models (Zeiler & Fergus, 2014; Simonyan et al., 2013; Mahendran & Vedaldi, 2015), only a few (Chen et al., 2016; Bau et al., 2019) address the understanding of generative ones. In this work, we propose the Random Path GAN (RPGAN) --- an alternative design of GANs that allows natural interpretability of the generator network. In traditional GAN generators, the stochastic component that influences individual samples is a noisy input vector, typically sampled from the standard Gaussian distribution. In contrast, RPGAN generators instead use stochastic routing during the forward pass as their source of stochasticity. In a nutshell, the RPGAN generator contains several instances of each of its layers. For each sample, only one random instance of each layer is activated during generation. The training of RPGAN can then be performed in the same adversarial manner as in traditional GANs. In the sections below, we show how RPGAN allows us to understand the factors of variation captured by a particular layer and reveals several interesting findings about the image generation process, e.g., that different layers are "responsible for" coloring or object location. As a practical advantage, RPGANs can be efficiently updated with new data via the simple addition of new instances to the bucket, avoiding re-training the full model from scratch.
Finally, we observe that RPGANs allow the construction of generative models without nonlinearities, which can significantly speed up the generation process for fully-connected layers. In summary, the main contributions of our paper are the following: • We introduce RPGAN -GAN with an alternative source of stochasticity, based on random routing. While being close to traditional GANs in terms of generation quality, RPGAN allows natural interpretability and efficient model updates with new data. • With extensive experiments on standard benchmarks we reveal several insights about the image generation process. Many of our insights confirm and extend recent findings from Bau et al. (2019) . Note, that our scheme is more general compared to the technique from Bau et al. (2019) as RPGAN does not require labeled datasets or pretrained segmentation models. • We open-source the PyTorch implementation of RPGAN with common generator architectures 1 . The rest of this paper is organized as follows. In Section 2 we review relevant ideas from prior art. The proposed Random Path GAN design is described in Section 3 and experimentally evaluated in Section 4. Section 5 concludes the paper and discusses possible directions for future work. In this paper, we address the interpretability of generative models. In particular, we have introduced RPGAN, an alternative design of generative adversarial networks, which allows natural interpretation of different generator layers via using random routing as a source of stochasticity. With experiments on several datasets, we provide evidence that different layers are responsible for the different factors of variation in generated images, which is consistent with findings from previous work. As a possible direction of future research, one can use the RPGAN analysis to construct efficient models, e.g., via identification of redundant parts of the generator for pruning or inference speedup. If the number of blocks is too low, the resulting latent space appears to have insufficient cardinality to cover the dataset. On the other hand, a too high number of blocks results in a difficult training procedure and also fails.
We introduce an alternative GAN design based on random routes in generator, which can serve as a tool for generative models interpretability.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:438
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Deep artificial neural networks can achieve an extremely small difference between training and test accuracies on identically distributed training and test sets, which is a standard measure of generalization. However, the training and test sets may not be sufficiently representative of the empirical sample set, which consists of real-world input samples. When samples are drawn from an underrepresented or unrepresented subset during inference, the gap between the training and inference accuracies can be significant. To address this problem, we first reformulate a classification algorithm as a procedure for searching for a source code that maps input features to classes. We then derive a necessary and sufficient condition for generalization using a universal cognitive similarity metric, namely information distance, based on Kolmogorov complexity. Using this condition, we formulate an optimization problem to learn a more general classification function. To achieve this end, we extend the input features by concatenating encodings of them, and then train the classifier on the extended features. As an illustration of this idea, we focus on image classification, where we use channel codes on the input features as a systematic way to improve the degree to which the training and test sets are representative of the empirical sample set. To showcase our theoretical findings, considering that corrupted or perturbed input features belong to the empirical sample set, but typically not to the training and test sets, we demonstrate through extensive systematic experiments that, as a result of learning a more general classification function, a model trained on encoded input features is significantly more robust to common corruptions, e.g., Gaussian and shot noise, as well as adversarial perturbations, e.g., those found via projected gradient descent, than the model trained on uncoded input features. Generalization error in deep learning is typically defined as the difference between training and test errors measured on identically distributed training and test sets. This traditional approach fails to take into account how representative these sets are of the empirical sample set from which input samples are drawn at inference time. When the training and test sets are not sufficiently representative of the empirical sample set, the difference between training and inference errors can be significant, thus rendering the learned classification function ineffective. The lack of the latter kind of generalization results in unreliable decisions, raising questions about how robust, fair, and safe a learned classification function is (Varshney & Alemzadeh, 2017) . A natural question then arises: is there a necessary and sufficient condition ensuring that deep learning classifiers generalize in this broader sense? If so, how can this condition be satisfied in a real-world setting? To answer these questions, we draw on algorithmic information theory, which proposes a complexity measure, Kolmogorov complexity, as the absolute information content of any object, e.g., a computer program, function, or set. 
After deriving a necessary and sufficient condition for generalization using the information distance (Bennett et al., 1998) , which is a universal cognitive similarity metric based on Kolmogorov complexity, and formulating an optimization problem for generalization, we turn our attention to coding theory in order to learn a more general classification function by extending the input features to a classifier with systematically generated encodings of the original features. We presented a theoretical and experimental framework for defining and understanding generalization in deep learning, defined as the difference between training and inference errors. The theoretical findings and experimental results show that a learned classification function must be sufficiently complex for a classification task in order to be closer to the true classification function. Another insight from this study is that concatenating encodings of input features to the original input features helps to achieve generalization in deep learning by enabling the classifier to learn relations between features not captured by the original inputs. Experiments demonstrate that a model trained on arbitrarily encoded input features is more robust to common corruptions and adversarial perturbations and that using more encodings may be beneficial to minimize the generalization error. Designing input codes to help a DNN learn a more general classification function with a minimum number of encodings is an intriguing research direction to achieve reliability in machine learning.
We present a theoretical and experimental framework for defining, understanding, and achieving generalization, and as a result robustness, in deep learning by drawing on algorithmic information theory and coding theory.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:439
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: For understanding generic documents, information like font sizes, column layout, and generally the positioning of words may carry semantic information that is crucial for solving a downstream document intelligence task. Our novel BERTgrid, which is based on Chargrid by Katti et al. (2018), represents a document as a grid of contextualized word piece embedding vectors, thereby making its spatial structure and semantics accessible to the processing neural network. The contextualized embedding vectors are retrieved from a BERT language model. We use BERTgrid in combination with a fully convolutional network on a semantic instance segmentation task for extracting fields from invoices. We demonstrate its performance on tabulated line item and document header field extraction. Documents often come in a variety of layouts and formats. For instance, a single document may contain isolated text boxes, tabular arrangements, multiple columns, and different font sizes. This layout can carry crucial semantic information. In classical natural language processing (NLP), however, the layout information is completely discarded as the document text is simply a sequence of words. Without access to the layout, a downstream task such as extraction of tabulated data can become much harder -and in some cases impossible to solve -since the necessary serialization may lead to severe information loss. Instead of working on the textual level, it is possible to directly apply methods from computer vision (CV) (e.g. Ren et al. (2015) ) to work on the raw document pixel level which naturally retains the two-dimensional (2D) document structure. However, this is impractical, as a machine learning model would first need to learn textual information from the raw pixel data followed by the semantics. Recent approaches have designed a hybrid between NLP and CV methods for document intelligence: Chargrid (Katti et al. (2018) ), followed more recently by CUTIE (Zhao et al. (2019) ), construct a 2D grid of characters or words from a document and feed it into a neural model, thereby preserving the spatial arrangement of the document. The symbols in the original document are embedded in some vector space, yielding a rank-3 tensor (width, height, embedding). Both papers report significant benefits of using such a grid approach over purely sequential 1D input representations, especially for semantically understanding tabulated or otherwise spatially arranged text like line items. With our contribution BERTgrid, we incorporate contextualized embedding into the grid document representation. More specifically, we use a BERT language model (Devlin et al. (2019) ) pre-trained on a large pool of unlabeled documents from the target domain to compute contextualized feature vectors for every word piece in a document. We demonstrate the effectiveness of BERTgrid on an invoice information extraction task from document tables and headers. We compare our results to Chargrid and find significant improvements from 61.76% ± 0.72 to 65.48% ± 0.58 on an invoice dataset previously described in Katti et al. (2018) . Tab. 1 shows the results in terms of the evaluation measure for different input representations. All results are averaged over four randomly initialized training runs. Katti et al. (2018); Zhao et al. 
(2019) have shown that grid-based approaches like [Chargrid] or [Wordgrid] outperform conventional sequential models as well as purely image-based methods, so we use [Chargrid] as our baseline, with 61.76% ± 0.72. We assume the performance of BERTgrid stems from (i) embedding on the word-piece level and (ii) contextualization. Rather than learning to represent words first, the network directly gets access to semantically meaningful word(-piece)-level information. For instance, words such as avenue, street, and drive are very different when embedded on the character level, but will be mapped to approximately the same embedding vector. We observe that both [C+Wordgrid] and [C+BERTgrid] converge faster than [Chargrid] which supports this statement. During language model pre-training on the large, unlabeled dataset, knowledge about the language of invoices is distilled into the BERT model parameters. Compared to simpler, non-contextualized embedding methods such as word2vec, it has sufficient capacity to capture complex dependencies. This distilled knowledge is made accessible via the BERTgrid representation and eases the downstream task significantly. We acknowledge the BERT model has only access to S, not D. Future work could use 2D positional encodings to preserve the layout structure also during language model pre-training and inference.
Grid-based document representation with contextualized embedding vectors for documents with 2D layouts
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:44
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Many approaches to causal discovery are limited by their inability to discriminate between Markov equivalent graphs given only observational data. We formulate causal discovery as a marginal likelihood based Bayesian model selection problem. We adopt a parameterization based on the notion of the independence of causal mechanisms which renders Markov equivalent graphs distinguishable. We complement this with an empirical Bayesian approach to setting priors so that the actual underlying causal graph is assigned a higher marginal likelihood than its alternatives. Adopting a Bayesian approach also allows for straightforward modeling of unobserved confounding variables, for which we provide a variational algorithm to approximate the marginal likelihood, since this desirable feat renders the computation of the marginal likelihood intractable. We believe that the Bayesian approach to causal discovery both allows the rich methodology of Bayesian inference to be used in various difficult aspects of this problem and provides a unifying framework to causal discovery research. We demonstrate promising results in experiments conducted on real data, supporting our modeling approach and our inference methodology. Causal networks (CNs) are special Bayesian networks where all edges reflect causal relations (Pearl, 2009 ). The aim of causal structure learning is identifying the CN underlying the observed data. In this paper, we focus on the problem of scoring causal graphs using marginal likelihood in a way that identifies the unique causal generative graph. Succeeding to do so is very valuable, since once the correct CN is selected, various causal inference tasks such as estimating causal effects or examining confounder distributions becomes straightforward in a Bayesian framework. A central challenge in such an attempt, however, is adopting a prior selection policy that not only allows discriminating between Markov equivalent graphs but also assigns higher marginal likelihood score to the actual underlying CN. The key notion underlying our solution to first part of this challenge is the widely accepted principle of independence of the cause-effect mechanisms (Janzing et al., 2012) , that is, the natural mechanisms that generate the cause and the effect (based on cause) must be independent of each other. We embody this assumption by assuming the mutual independence of the parameters pertaining to cause and effect distributions in a Bayesian model, a line of reasoning that is natural to this modeling perspective, where parameters are modeled as random variables (Spiegelhalter et al., 1993; Heckerman et al., 1995; Geiger et al., 1997; Blei et al., 2003) . By assigning independent priors to the cause and effect variables, we render them statistically independent. Critically, this assignment of independent priors also breaks the likelihood equivalence between Markov equivalent graphs. This is contrast to other ways of selecting independent priors such as the BDeu prior, which leads to assigning equal marginal likelihood to Markov equivalent graphs (Heckerman et al., 1995) . As mentioned above, though breaking likelihood equivalence does not necessarily lead to assigning a higher marginal likelihood to the actual underlying CN, it is a prerequisite for doing so 1 . 
The second part of the problem is adapting a prior selection policy that leads to assigning a higher marginal likelihood to the actual CN compared to its alternatives. In this work, we use an empirical Bayesian approach in selecting the hyperparameters of the independent priors described above, as we learn the priors that lead to assigning higher marginal likelihood to the actual CN from labeled data. The current approach is in the intersection of various other approaches in the literature, thereby combining many of their respective advantages (Spirtes and Zhang, 2016; Glymour et al., 2019) . It is based on the notion of mechanism independence similar to Janzing et al. (2012) ; Zhang et al. (2015) , does not assume causal sufficiency similar to Silva et al. (2006) ; Shimizu et al. (2009) ; Janzing et al. ( , 2012 ; Zhang et al. (2015) ; Schölkopf et al. (2016) , can theoretically work on arbitrary graph structures that possibly include latent variables similar to Spirtes et al. (1993) , and can discriminate between Markov equivalent structures similar to Shimizu et al. (2006) ; Zhang and Hyvärinen (2008); Hoyer et al. (2009); Janzing et al. (2012); Zhang et al. (2015) . Our approach diverges from other Bayesian methods (Stegle et al., 2010; Shimizu and Bollen, 2014; Zhang et al., 2016) in various dimensions such as by being able to distinguish between Markov equivalent causal graphs, using marginal likelihood (or approximations thereof) instead of surrogate scores such as BIC, or being able to model non-linear relationships. In Section 2, we introduce an example model for continuous observations and latent categorical confounders. To approximate the marginal likelihood in graphs which include latent confounders, we present a variational inference algorithm in Section 3. After testing our approach on various real data sets in Section 4, we present our conclusions and further avenues of research in Section 5. Overall, we show that Bayesian model selection is a promising framework that can facilitate causal research significantly both through conceptual unification and increased performance. Given that Bayesian modeling is agnostic to specific variable types, conditional distributions, and to approximate inference methodology, the value of a successful Bayesian modeling approach for causal research is immense. Though our empirical Bayesian approach to setting priors can be useful in various contexts (e.g. in data sets where only some of the bivariate causal directions are known), finding other principled ways of assigning (or integrating out) priors that do not require labeled data is an important direction for future research. Conducting causal discovery with different variable types, and/or different distributions would also be beneficial for demonstrating current approach's viability in various contexts.
We cast causal structure discovery as a Bayesian model selection in a way that allows us to discriminate between Markov equivalent graphs to identify the unique causal graph.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:440
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: The goal of compressed sensing is to learn a structured signal $x$ from a limited number of noisy linear measurements $y \approx Ax$. In traditional compressed sensing, ``structure'' is represented by sparsity in some known basis. Inspired by the success of deep learning in modeling images, recent work starting with~\cite{BDJP17} has instead considered structure to come from a generative model $G: \R^k \to \R^n$. We present two results establishing the difficulty of this latter task, showing that existing bounds are tight. First, we provide a lower bound matching the~\cite{BDJP17} upper bound for compressed sensing from $L$-Lipschitz generative models $G$. In particular, there exists such a function that requires roughly $\Omega(k \log L)$ linear measurements for sparse recovery to be possible. This holds even for the more relaxed goal of \emph{nonuniform} recovery. Second, we show that generative models generalize sparsity as a representation of structure. In particular, we construct a ReLU-based neural network $G: \R^{2k} \to \R^n$ with $O(1)$ layers and $O(kn)$ activations per layer, such that the range of $G$ contains all $k$-sparse vectors. In compressed sensing, one would like to learn a structured signal x ∈ R^n from a limited number of linear measurements y ≈ Ax. This is motivated by two observations: first, there are many situations where linear measurements are easy, in settings as varied as streaming algorithms, single-pixel cameras, genetic testing, and MRIs. Second, the unknown signals x being observed are structured or "compressible": although x lies in R^n, it would take far fewer than n words to describe x. In such a situation, one can hope to estimate x well from a number of linear measurements that is closer to the size of the compressed representation of x than to its ambient dimension n. In order to do compressed sensing, you need a formal notion of how signals are expected to be structured. The classic answer is to use sparsity. Given linear measurements y = Ax of an arbitrary vector x ∈ R^n, one can hope to recover an estimate x* of x satisfying ‖x* − x‖ ≤ C · min_{k-sparse x'} ‖x − x'‖ (1) for some constant C and norm ‖·‖. In this paper, we will focus on the ℓ_2 norm and achieving the guarantee with 3/4 probability. Thus, if x is well-approximated by a k-sparse vector x', it should be accurately recovered. Classic results such as [CRT06] show that (1) is achievable when A consists of m = O(k log(n/k)) independent Gaussian linear measurements. This bound is tight, and in fact no distribution of matrices with fewer rows can achieve this guarantee in either ℓ_1 or ℓ_2 [DIPW10]. Although compressed sensing has had success, sparsity is a limited notion of structure. Can we learn a richer model of signal structure from data, and use this to perform recovery? In recent years, deep convolutional neural networks have had great success in producing rich models for representing the manifold of images, notably with generative adversarial networks (GANs) [GPAM+14] and variational autoencoders (VAEs) [KW14]. These methods produce generative models G : R^k → R^n that allow approximate sampling from the distribution of images. So a natural question is whether these generative models can be used for compressed sensing.
In [BJPD17] it was shown how to use generative models to achieve a guarantee analogous to (1): for any L-Lipschitz G : R^k → R^n, one can achieve ‖x* − x‖ ≤ C · min_{z ∈ B^k(r)} ‖x − G(z)‖ + δ (2) where r, δ > 0 are parameters, B^k(r) denotes the radius-r ℓ_2 ball in R^k, and Lipschitzness is defined with respect to the ℓ_2 norms, using only m = O(k log(Lr/δ)) measurements. Thus, the recovered vector is almost as good as the nearest point in the range of the generative model, rather than in the set of k-sparse vectors. We will refer to the problem of achieving the guarantee in (2) as "function-sparse recovery". Our main theorem is that the [BJPD17] result is tight: for any setting of parameters n, k, L, r, δ, there exists an L-Lipschitz function G : R^k → R^n such that any algorithm achieving (2) with 3/4 probability must have Ω(min(k log(Lr/δ), n)) linear measurements. Notably, the additive error δ that was unnecessary in sparse recovery is necessary for general Lipschitz generative model recovery. A concurrent paper [LS19] proves a lower bound for a restricted version of (2). They show a lower bound when the vector x lies in the image of G and for a particular value of δ. Our results, in comparison, apply to the most general version of the problem and are proven using a simpler communication complexity technique. The second result in this paper is to directly relate the two notions of structure: sparsity and generative models. We produce a simple Lipschitz neural network G_sp : R^{2k} → R^n, with ReLU activations, 2 hidden layers, and maximum width O(kn), so that the range of G_sp contains all k-sparse vectors. A second result of [BJPD17] is that for ReLU-based neural networks, one can avoid the additive δ term and achieve a different result from (2): using O(kd log W) measurements, if d is the depth and W is the maximum number of activations per layer. Applying this result to our sparsity-producing network G_sp implies, with O(k log n) measurements, recovery achieving the standard sparsity guarantee (1). So the generative-model representation of structure really is more powerful than sparsity.
Lower bound for compressed sensing w/ generative models that matches known upper bounds
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:441
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: We hypothesize that end-to-end neural image captioning systems work seemingly well because they exploit and learn ‘distributional similarity’ in a multimodal feature space, by mapping a test image to similar training images in this space and generating a caption from the same space. To validate our hypothesis, we focus on the ‘image’ side of image captioning, and vary the input image representation but keep the RNN text generation model of a CNN-RNN constant. We propose a sparse bag-of-objects vector as an interpretable representation to investigate our distributional similarity hypothesis. We found that image captioning models (i) are capable of separating structure from noisy input representations; (ii) experience virtually no significant performance loss when a high dimensional representation is compressed to a lower dimensional space; (iii) cluster images with similar visual and linguistic information together; (iv) are heavily reliant on test sets with a similar distribution as the training set; (v) repeatedly generate the same captions by matching images and ‘retrieving’ a caption in the joint visual-textual space. Our experiments all point to one fact: that our distributional similarity hypothesis holds. We conclude that, regardless of the image representation, image captioning systems seem to match images and generate captions in a learned joint image-text semantic subspace. Image description generation, or image captioning (IC), is the task of automatically generating a textual description for a given image. The generated text is expected to describe, in a single sentence, what is visually depicted in the image, for example the entities/objects present in the image, their attributes, the actions/activities performed, entity/object interactions (including quantification), the location/scene, etc. (e.g. "a man riding a bike on the street").Significant progress has been made with end-to-end approaches to tackling this problem, where large-scale parallel image-description datasets such as Flickr30k BID41 and MSCOCO BID4 are used to train a CNN-RNN based neural network IC system BID36 BID17 . Such systems have demonstrated impressive performance in the COCO captioning challenge 1 according to automatic metrics, seemingly even surpassing human performance in many instances (e.g. CIDEr score > 1.0 vs. human's 0.85) BID3 . However, in reality, the performance of end-to-end systems is still far from satisfactory according to metrics based on human judgement 2 . Thus, despite the progress, this task is currently far from being a solved problem.In this paper, we challenge the common assumption that end-to-end IC systems are able to achieve strong performance because they have learned to 'understand' and infer semantic information from visual representations, i.e. they can for example deduce that "a boy is playing football" purely by learning directly from mid-level image features and the corresponding textual descriptions in an implicit manner, without explicitly modeling the presence of boy, ball, green field, etc. in the image. 
It is believed that the IC system has managed to infer that the phrase green field is associated with some 'green-like' area in the image and is thus generated in the output description, or that the word boy is generated because of some CNN activations corresponding to a young person. However, there seems to be no concrete evidence that this is the case. Instead, we hypothesize that the apparently strong performance of end-to-end systems is attributed to the fact that they are exploiting the distributional similarity in the multimodal feature space. To our best knowledge, our paper gives the first empirical analysis on visual representations for the task of image captioning.What we mean by 'distributional similarity' is that IC systems essentially attempt to match images from the training set that is most similar to a test image, and generate a caption from the most similar training instances (or generate a 'novel' description from a combination of training instances, for example by 'averaging' the descriptions). Previous work has alluded to this observation BID16 BID36 , but it has not been thoroughly investigated. This phenomena could also be in part attributed to the fact that the datasets are repetitive and simplistic, with an almost constant and predictable linguistic structure BID18 BID7 BID36 .In this paper we investigate the hypothesis of distributional similarity in IC by focusing on the image side of image captioning. Most previous work has concentrated on the text side of image captioning, e.g. by optimizing the language modelling capabilities of the RNN BID27 BID19 to improve its performance on automatic metrics. While there have been efforts on improving IC by utilizing or modeling images more effectively, for example by using attention over mid-level image features and high-level object proposals BID1 , in this work we are specifically interested in interpretability and we focus on using a simpler (and faster) model for empirical evaluation. We explore the basic yet effective CNN-RNN model BID17 , and investigate the representational contributions while keeping the RNN generator constant. More advanced models can be considered specific variants of BID17 .It is worth noting that we are interested in demonstrating the phenomenon of distributional similarity in IC, rather than achieving or improving state-of-the-art performance, As such, we do not resort to fine-tuning or extensive hyperparameter optimization or ensembles. Therefore, our model is not comparable to state-of-the-art models such as BID36 , which optimize IC by fine-tuning the image representations, exploring beam size, scheduled sampling, and using ensemble models. Instead, we vary only the image representation to demonstrate that end-to-end IC systems utilize distributional similarity on the image side to generate captions, regardless of the image representation used.Our main contributions are:• An IC experiment where we vary the input image representation but keep the RNN text generation model constant (Section 3). This experiment demonstrates that regardless of the image representation (a continuous image embedding or a sparse, low-dimensional vector), end-to-end IC systems seem to utilize a visual-semantic subspace for IC.• The introduction of a simple, sparse bag-of-objects representation that contains information about the presence of objects in the images. 
We use this as a tool to investigate the contribution of images in the image captioning framework.• The introduction of pseudo-random vectors derived from object-level representations as a means to evaluate IC systems. Our results show that end-to-end models in this framework are remarkably capable of separating structure from noisy input representations.• An experiment where IC models are conditioned on image representations factorized and compresssed to a lower dimensional space (Section 4.1). We show that high dimensional image embeddings that are factorized to a lower dimensional representation and used as input to an IC model result in virtually no significant loss in performance, further strengthening our claim that IC models perform similarity matching rather than image understanding.• An analysis of different image representations and their transformed representations (Sections 4.2 and 4.3). We visualize the initial visual subspace and the learned joint visual semantic subspace and observe that the visual semantic subspace has learned to cluster images with similar visual and linguistic information together, further validating our claims of distributional similarity.• An experiment where the IC model is tested on an out-of-domain dataset (Section 4.4), which has a slightly different image distribution. We observe that models, including the state-of-the-art models, show a better performance on test sets that have a similar distribution as the training. However, their performance deteriorates when the distributions are slightly different.• An analysis on the uniqueness of captions generated by IC models using different image representations (Section 4.5) . We hypothesize that the captions are often repeated as they are usually generated by matching images in the joint space and retrieving a relevant caption. Our experiments validate this claim.Overall, the study suggests that regardless of the representation used, end-to-end IC models implicitly learn and exploit multimodal similarity spaces rather than performing actual image understanding.This study is in line with the recent work that explore understanding of deep learning models and the representational interpretations BID23 BID32 BID30 and works that have tried to delve into the image captioning task BID7 BID36 . To the best of our knowledge, ours is the first work that investigates IC focusing specifically on image representations and their effects. We hypothesized that IC systems essentially exploit a distributional similarity space to 'generate' image captions, by attempting to match a test image to similar training image(s) and generate an image caption from these similar images. Our study focused on the image side of image captioning:We varied the image representations while keeping the text generation component of an end-toend CNN-RNN model constant. We found that regardless of the image representation, end-to-end IC systems seem to match images and generate captions in a visual-semantic subspace for IC. 
We conclude that: • A sparse, low-dimensional bag-of-objects representation can be used as a tool to investigate the contribution of images in IC; we demonstrated that such a vector is sufficient for generating good image captions; • End-to-end IC models are remarkably capable of separating structure from noisy input representations, as demonstrated by pseudo-random vectors; • End-to-end IC models suffer virtually no significant loss in performance when a high dimensional representation is factorized to a lower dimensional space; • End-to-end IC models have learned a joint visual-textual semantic subspace by clustering images with similar visual and linguistic information together; • End-to-end IC models rely on test sets with a distribution similar to that of the training set for generating good captions; • End-to-end IC models repeatedly generate the same captions by matching images in the joint visual-textual space and 'retrieving' a caption in the learned joint space. All the observations above strengthen our distributional similarity hypothesis - that end-to-end IC performs image matching and generates captions for a test image from similar image(s) from the training set - rather than performing actual image understanding. Our findings provide novel insights into what end-to-end IC systems are actually doing, which previous work only suggests or hints at without concretely demonstrating the distributional similarity hypothesis. We believe our findings are important for the IC community to further advance image captioning in a more informed manner. [Figure 5: Example outputs from our system with different representations; the sub-captions indicate the annotation along with the frequency in braces. We also show the CIDEr score and the difference in CIDEr score relative to the Bag of Objects representation. Panel (c): Bag of objects: person (1), tie (1).] A ANALYSIS ON GENERATED CAPTIONS. Here, we provide a qualitative analysis of the different image representations presented and gain some insights into how they contribute to the IC task. The Bag of Objects representation led to a strong performance in IC despite being extremely sparse and low-dimensional (80 dimensions). Analyzing the test split, we found that each vector consists of only 2.86 non-zero entries on average (standard deviation 1.8, median 2). Thus, with the minimal information being provided to the generator RNN, we find it surprising that it is able to perform so well. We compare the output of the remaining models against the Bag of Objects representation by investigating what each representation adds to or subtracts from this simple, yet strong model. We start by selecting images (from the test split) annotated with the exact same Bag of Objects representation - which should result in the same caption. For our qualitative analysis, several sets of one to three MSCOCO categories were manually chosen. For each set, images were selected such that there is exactly one instance of each category in the set and zero for others. We then shortlisted images where the captions generated by the Bag of Objects model produced the five highest and five lowest CIDEr scores (ten images per set). We then compare the captions sampled for each of the other representations. Figure 5 shows some example outputs from this analysis. In Figure 5a, Bag of Objects achieved a high CIDEr score despite only being given "bird" as input, mainly by 'guessing' that the bird will be perching/sitting on a branch.
The object-based Softmax (VGG and ResNet) models gave an even more accurate description as "owl" is the top-1 prediction of both representations (96% confidence for VGG, 77% for ResNet). Places365 predicted "swamp" and "forest". The Penultimate features on the other hand struggled with representing the images correctly. In Figure 5b , Bag of Objects struggled with lack of information (only "airplane" is given), the Softmax features mainly predicted "chainlink fence", Places365 predicted "kennel" (hence the dog description), and it most likely that Penultimate has captured the fence-like features in the image rather than the plane. In Figure 5c , the Softmax features generally managed to generate a caption describing a woman despite not explicitly containing the 'woman' category. This is because other correlated categories were predicted, such as "mask", "wig", "perfume", "hairspray" and in the case of Places365 "beauty salon" and "dressing room".
This paper presents an empirical analysis on the role of different types of image representations and probes the properties of these representations for the task of image captioning.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:442
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: We present a sequence-to-action parsing approach for the natural language to SQL task that incrementally fills the slots of a SQL query with feasible actions from a pre-defined inventory. To account for the fact that typically there are multiple correct SQL queries with the same or very similar semantics, we draw inspiration from syntactic parsing techniques and propose to train our sequence-to-action models with non-deterministic oracles. We evaluate our models on the WikiSQL dataset and achieve an execution accuracy of 83.7% on the test set, a 2.1% absolute improvement over the models trained with traditional static oracles assuming a single correct target SQL query. When further combined with the execution-guided decoding strategy, our model sets a new state-of-the-art performance at an execution accuracy of 87.1%. Many mission-critical applications in health care, financial markets, and business process management store their information in relational databases BID10 BID22 BID16 . Users access that information using a query language such as SQL. Although expressive and powerful, SQL is difficult to master for non-technical users. Even for an expert, writing SQL queries can be challenging as it requires knowing the exact schema of the database and the roles of various entities in the query. Hence, a long-standing goal has been to allow users to interact with the database through natural language BID0 BID24 .The key to achieving this goal is understanding the semantics of the natural language statements and mapping them to the intended SQL. This problem, also known as NL2SQL, was previously understudied largely due to the availability of annotation. Without paired natural language statement and SQL query, a weak supervision approach may be adopted which reduces supervision from annotated SQL queries to answers BID19 . This is a more difficult learning problem. Therefore only with recent release of a number of large-scale annotated NL2SQL datasets BID36 BID6 , we start to see a surge of interest in solving this problem.Existing NL2SQL approaches largely fall into two categories: sequence-to-sequence style neural "machine translation " systems BID36 BID5 and sets of modularized models with each predicting a specific part of the SQL queries BID32 BID34 . The former class suffer from the requirement of labeling a single ground truth query while multiple semantically equivalent queries exist for each intent. For example , as noticed by BID36 , the ordering of filtering conditions in a query does not affect execution but affects generation. To account for this, techniques such as reinforcement learning have been used on top of those sequenceto-sequence models. The second class of models employ a sequence-to-set approach: they first predict table columns present in the query and then independently predict the rest for each column. This avoids the ordering issue, but makes it harder to leverage inter-dependencies among conditions.In this work, we develop a sequence-to-action parsing approach (Section 3) for the NL2SQL problem. It incrementally fills the slots of a SQL query with actions from an inventory designed for this task. 
Taking inspiration from training oracles in incremental syntactic parsing BID8, we further propose to use non-deterministic oracles (Section 4) for training the incremental parsers. These oracles permit multiple correct action continuations from a partial parse and are thus able to account for logical form variations. Our model combines the advantage of a sequence-to-sequence model, which captures inter-dependencies within the sequence of predictions, with that of a modularized model, which avoids any standardized linearization of the logical forms. [Figure 1: Our running example. The natural language question "What is the height of Willis Tower in Chicago?" maps to the SQL query SELECT `Height (ft)` WHERE Name = "Willis Tower" AND Location = "Chicago". The input is a natural language question and a table schema, and the output is an executable SQL query. Table contents are shown in the figure, but unknown to our models.] We evaluate our models on the WikiSQL dataset and observe a performance improvement of 2.1% when comparing non-deterministic oracles with traditional static oracles. We further combine our approach with the execution-guided decoding strategy and achieve a new state-of-the-art performance with 87.1% test execution accuracy. Experiments on a filtered ATIS dataset additionally confirm that our models can be applied to other NL2SQL datasets. In this paper, we introduce a sequence-to-action incremental parsing approach for the NL2SQL task. With the observation that multiple SQL queries can have the same or very similar semantics corresponding to a given natural language question, we propose to use non-deterministic oracles during training. On the WikiSQL dataset, our model trained with the non-deterministic oracles achieves an execution accuracy of 83.7%, which is 2.3% higher than the current state of the art. We also discuss using execution-guided decoding in combination with our model. This leads to a further improvement of 3.4%, achieving a new state-of-the-art 87.1% execution accuracy on the test set. To the best of our knowledge, our work is the first to use non-deterministic oracles for training incremental semantic parsers. Designing such non-deterministic oracles requires identification of multiple correct transition sequences for a given training instance, and an algorithm that decides the possible continuations for any intermediate state that will lead to one of the desired terminal states. We have shown promising results for the WikiSQL and filtered ATIS datasets, and it would be interesting to extend our work to other more complex NL2SQL tasks and to other semantic parsing domains.
We design incremental sequence-to-action parsers for text-to-SQL task and achieve SOTA results. We further improve by using non-deterministic oracles to allow multiple correct action sequences.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:443
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: We propose Efficient Neural Architecture Search (ENAS), a faster and less expensive approach to automated model design than previous methods. In ENAS, a controller learns to discover neural network architectures by searching for an optimal path within a larger model. The controller is trained with policy gradient to select a path that maximizes the expected reward on the validation set. Meanwhile, the model corresponding to the selected path is trained to minimize the cross entropy loss. On the Penn Treebank dataset, ENAS can discover a novel architecture that achieves a test perplexity of 57.8, which is state-of-the-art among automatic model design methods on Penn Treebank. On the CIFAR-10 dataset, ENAS can design novel architectures that achieve a test error of 2.89%, close to the 2.65% achieved by standard NAS (Zoph et al., 2017). Most importantly, our experiments show that ENAS is more than 10x faster and 100x less resource-demanding than NAS. Neural architecture search (NAS) has been applied successfully to design model architectures for image classification and language modeling BID0 BID3 BID6. NAS, however, is computationally expensive and time consuming: for example, standard NAS uses 450 GPUs and trains for 3-4 days. Meanwhile, using fewer resources tends to produce less compelling results BID31 BID0. The main computational bottleneck of NAS is the training of each child model to convergence to measure its accuracy. We believe that it is very inefficient and wasteful to train every child model and then throw away all the trained weights even though the child models have much in common. [Caption of FIG0: The graph represents the entire search space while the red arrows define a model in the search space, which is decided by a controller. Here we assume that node 1 is the input to the model whereas nodes 3, 5, and 6 are the outputs of the model.] The goal of this work is to remove this inefficiency by enabling more sharing between the child models. This idea is similar to the concept of weight inheritance in neuro-evolution (e.g., BID33). To understand our method, we first need to understand the standard NAS. In standard NAS BID0, an RNN controller is trained by policy gradient to search for a good architecture, which is basically a computational graph. Our observation is that all of the graphs that NAS has iterated over can be viewed as sub-graphs of a larger graph. In other words, we can represent the space of these graphs as a single directed acyclic graph (DAG). As illustrated in FIG0, a neural network architecture can be found by taking a subset of edges in this DAG. This design is advantageous because it enables sharing parameters among all architectures in the search space. Neural Architecture Search (NAS) is an important advance that allows faster architecture design for neural networks. However, the computational expense of NAS prevents it from being widely adopted. In this paper, we presented ENAS, an alternative method to NAS that requires three orders of magnitude less resources × time. The key insight of our method is to share parameters across child models during architecture search. This insight is implemented by having NAS search for a path within a larger model.
We demonstrate empirically that the method works well on both the CIFAR-10 and Penn Treebank datasets. The shared parameters ω between different recurrent cells thus consist of all the connection-specific weight matrices, including the W^(h)_{ℓ,j}, for every layer ℓ and candidate connection j. The controller decides the connection j and the activation function f for each ℓ ∈ {2, 3, ..., N}. The layers that are never selected by any subsequent layers are averaged and sent to a softmax head, or to higher recurrent layers. As in the case of convolutional models, to stabilize the training of ω, we add a batch normalization layer after the average of the layers that are not selected. B Details for CIFAR-10 Search Spaces. B.1 Details on Search Space 1: Channels. We use a block size of S = 32, resulting in C/S = 256/32 = 8 blocks per branch per layer. Each branch configuration has its own embedding and softmax head. To elaborate, this means that a time step in the controller RNN that predicts the configuration for any branch should have a softmax matrix of size H × (2^{C/S} − 1), where H = 64 is the hidden dimension of the RNN, and 2^{C/S} − 1 = 255 is the number of possible binary masks for that branch. Each branch also has an embedding matrix of size (2^{C/S} − 1) × H, from which the row corresponding to the sampled binary mask is selected and sent to the next time step. Layers 4 and 8 of our 12-layer network are max pooling layers with a kernel size of 2 × 2 and a stride of 2, and reduce each spatial dimension of the layers' outputs by a factor of 2. Within each group of 3 layers where the spatial dimensions of the layers remain constant, we connect each layer to all layers before it BID17.
An approach that speeds up neural architecture search by 10x, whilst using 100x less computing resource.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:444
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Nowadays, deep neural networks (DNNs) have become the main instrument for machine learning tasks within a wide range of domains, including vision, NLP, and speech. Meanwhile, in an important case of heterogenous tabular data, the advantage of DNNs over shallow counterparts remains questionable. In particular, there is no sufficient evidence that deep learning machinery allows constructing methods that outperform gradient boosting decision trees (GBDT), which are often the top choice for tabular problems. In this paper, we introduce Neural Oblivious Decision Ensembles (NODE), a new deep learning architecture, designed to work with any tabular data. In a nutshell, the proposed NODE architecture generalizes ensembles of oblivious decision trees, but benefits from both end-to-end gradient-based optimization and the power of multi-layer hierarchical representation learning. With an extensive experimental comparison to the leading GBDT packages on a large number of tabular datasets, we demonstrate the advantage of the proposed NODE architecture, which outperforms the competitors on most of the tasks. We open-source the PyTorch implementation of NODE and believe that it will become a universal framework for machine learning on tabular data. The recent rise of deep neural networks (DNN) resulted in a substantial breakthrough for a large number of machine learning tasks in computer vision, natural language processing, speech recognition, reinforcement learning (Goodfellow et al., 2016) . Both gradient-based optimization via backpropagation (Rumelhart et al., 1985) and hierarchical representation learning appear to be crucial in increasing the performance of machine learning for these problems by a large margin. While the superiority of deep architectures in these domains is undoubtful, machine learning for tabular data still did not fully benefit from the DNN power. Namely, the state-of-the-art performance in problems with tabular heterogeneous data is often achieved by "shallow" models, such as gradient boosted decision trees (GBDT) (Friedman, 2001; Chen & Guestrin, 2016; Ke et al., 2017; Prokhorenkova et al., 2018) . While the importance of deep learning on tabular data is recognized by the ML community, and many works address this problem (Zhou & Feng, 2017; Miller et al., 2017; Lay et al., 2018; Feng et al., 2018; Ke et al., 2018) , the proposed DNN approaches do not consistently outperform the state-of-the-art shallow models by a notable margin. In particular, to the best of our knowledge, there is still no universal DNN approach that was shown to systematically outperform the leading GBDT packages (e.g., XGBoost (Chen & Guestrin, 2016) ). As additional evidence, a large number of Kaggle ML competitions with tabular data are still won by the shallow GBDT methods (Harasymiv, 2015) . Overall, at the moment, there is no dominant deep learning solution for tabular data problems, and we aim to reduce this gap by our paper. We introduce Neural Oblivious Decision Ensembles (NODE), a new DNN architecture, designed to work with tabular problems. The NODE architecture is partially inspired by the recent CatBoost package (Prokhorenkova et al., 2018) , which was shown to provide state-of-the-art performance on a large number of tabular datasets. 
In a nutshell, CatBoost performs gradient boosting on oblivious decision trees (decision tables) (Kohavi, 1994; Lou & Obukhov, 2017) , which makes inference very efficient, and the method is quite resistant to overfitting. In its essence, the proposed NODE architecture generalizes CatBoost, making the splitting feature choice and decision tree routing differentiable. As a result, the NODE architecture is fully differentiable and could be incorporated in any computational graph of existing DL packages, such as TensorFlow or PyTorch. Furthermore, NODE allows constructing multi-layer architectures, which resembles "deep" GBDT that is trained end-to-end, which was never proposed before. Besides the usage of oblivious decision tables, another important design choice is the recent entmax transformation (Peters et al., 2019) , which effectively performs a "soft" splitting feature choice in decision trees inside the NODE architecture. As discussed in the following sections, these design choices are critical to obtain state-of-the-art performance. In a large number of experiments, we compare the proposed approach with the leading GBDT implementations with tuned hyperparameters and demonstrate that NODE outperforms competitors consistently on most of the datasets. Overall, the main contributions of our paper can be summarized as follows: 1. We introduce a new DNN architecture for machine learning on tabular data. To the best of our knowledge, our method is the first successful example of deep architectures that substantially outperforms leading GBDT packages on tabular data. 2. Via an extensive experimental evaluation on a large number of datasets, we show that the proposed NODE architecture outperforms existing GBDT implementations. 3. The PyTorch implementation of NODE is available online 1 . The rest of the paper is organized as follows. In Section 2 we review prior work relevant to our method. The proposed Neural Oblivious Decision Ensembles architecture is described in Section 3 and experimentally evaluated in Section 4. Section 5 concludes the paper. In this paper, we introduce a new DNN architecture for deep learning on heterogeneous tabular data. The architecture is differentiable deep GBDTs, trained end-to-end via backpropagation. In extensive experiments, we demonstrate the advantages of our architecture over existing competitors with the default and tuned hyperparameters. A promising research direction is incorporating the NODE layer into complex pipelines trained via back-propagation. For instance, in multi-modal problems, the NODE layer could be employed as a way to incorporate the tabular data, as CNNs are currently used for images, or RNNs are used for sequences. library to optimize Catboost, XGBoost, and FCNN hyperparameters. For each method, we perform 50 steps of Tree-structured Parzen Estimator (TPE) optimization algorithm. As a final configuration, we choose the set of hyperparameters, corresponding to the smallest loss on the validation set.
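To make the architecture described above concrete, here is a minimal PyTorch sketch of one layer of "soft" oblivious decision trees. It is an illustration, not the authors' NODE implementation: the sizes and initializations are placeholders, and plain softmax/sigmoid stand in for the entmax/entmoid transformations the paper uses for the splitting-feature choice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftObliviousTreeLayer(nn.Module):
    """Illustrative layer of differentiable oblivious decision trees.

    Softmax/sigmoid stand in for the entmax-style transformations described
    in the text; shapes and initializations are placeholders.
    """

    def __init__(self, in_features, num_trees=8, depth=3):
        super().__init__()
        self.depth = depth
        # Soft feature-selection logits: one feature choice per (tree, level).
        self.feature_logits = nn.Parameter(0.01 * torch.randn(num_trees, depth, in_features))
        # One threshold and log-temperature per (tree, level).
        self.thresholds = nn.Parameter(torch.zeros(num_trees, depth))
        self.log_temp = nn.Parameter(torch.zeros(num_trees, depth))
        # One scalar response per leaf (2**depth leaves per tree).
        self.leaf_values = nn.Parameter(0.01 * torch.randn(num_trees, 2 ** depth))

    def forward(self, x):  # x: (batch, in_features)
        # "Soft" choice of the splitting feature at every level of every tree.
        feat_weights = F.softmax(self.feature_logits, dim=-1)
        chosen = torch.einsum('bf,tdf->btd', x, feat_weights)      # (batch, trees, depth)
        # Soft binary split at every level of the (oblivious) tree.
        right = torch.sigmoid((chosen - self.thresholds) * torch.exp(-self.log_temp))
        probs = torch.stack([1.0 - right, right], dim=-1)           # (batch, trees, depth, 2)
        # Probability of reaching each leaf = product of split decisions on its path.
        leaf_prob = probs[:, :, 0, :]
        for d in range(1, self.depth):
            outer = torch.einsum('btl,bts->btls', leaf_prob, probs[:, :, d, :])
            leaf_prob = outer.reshape(x.shape[0], probs.shape[1], -1)
        # Each tree outputs the expected leaf response; the layer returns one value per tree.
        return torch.einsum('btl,tl->bt', leaf_prob, self.leaf_values)
```

Stacking several such layers and passing their outputs forward together with the original features is, in spirit, what makes the architecture "deep" in the sense discussed above.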
We propose a new DNN architecture for deep learning on tabular data.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:445
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Person re-identification (re-ID) aims at identifying the same persons' images across different cameras. However, domain diversities between different datasets pose an evident challenge for adapting the re-ID model trained on one dataset to another one. State-of-the-art unsupervised domain adaptation methods for person re-ID transferred the learned knowledge from the source domain by optimizing with pseudo labels created by clustering algorithms on the target domain. Although they achieved state-of-the-art performances, the inevitable label noise caused by the clustering procedure was ignored. Such noisy pseudo labels substantially hinder the model's capability of further improving feature representations on the target domain. In order to mitigate the effects of noisy pseudo labels, we propose to softly refine the pseudo labels in the target domain with an unsupervised framework, Mutual Mean-Teaching (MMT), which learns better features from the target domain via off-line refined hard pseudo labels and on-line refined soft pseudo labels in an alternative training manner. In addition, the common practice is to adopt both the classification loss and the triplet loss jointly for achieving optimal performances in person re-ID models. However, the conventional triplet loss cannot work with softly refined labels. To solve this problem, a novel soft softmax-triplet loss is proposed to support learning with soft pseudo triplet labels for achieving the optimal domain adaptation performance. The proposed MMT framework achieves considerable improvements of 14.4%, 18.2%, 13.1% and 16.4% mAP on the Market-to-Duke, Duke-to-Market, Market-to-MSMT and Duke-to-MSMT unsupervised domain adaptation tasks. In this work, we propose an unsupervised Mutual Mean-Teaching (MMT) framework to tackle the problem of noisy pseudo labels in clustering-based unsupervised domain adaptation methods for person re-ID. The key is to conduct pseudo label refinery to better model inter-sample relations in the target domain by optimizing with the off-line refined hard pseudo labels and on-line refined soft pseudo labels in a collaborative training manner. Moreover, a novel soft softmax-triplet loss is proposed to support learning with softly refined triplet labels for optimal performances. Our method significantly outperforms all existing person re-ID methods on the domain adaptation task with up to 18.2% improvements. Two temporal average models are introduced in our proposed MMT framework to provide more complementary soft labels and avoid training error amplification. Such average models are more de-coupled by ensembling the past parameters and provide more independent predictions, which is ignored by previous methods with a peer-teaching strategy (Han et al., 2018; Zhang et al., 2018b). Although we have verified the effectiveness of this design in Table 2 by removing the temporal average model, denoted as "Baseline+MMT-500 (w/o E[θ])", we would like to visualize the training process by plotting the KL divergence between peer networks' predictions for further comparison.
As illustrated in Figure 3, the predictions by the two temporal average models ("Proposed MMT-500") always keep a larger distance than the predictions by two ordinary networks ("Proposed MMT-500 (w/o E[θ])"), which indicates that the temporal average models could prevent the two networks in our MMT from converging to each other too quickly under the collaborative training strategy. We utilize weighting factors of λ^t_tri = 0.8 and λ^t_id = 0.5 in all our experiments, obtained by tuning on the Duke-to-Market task with the IBN-ResNet-50 backbone and 500 pseudo identities. To further analyse the impact of different λ^t_tri and λ^t_id on different tasks, we conduct comparison experiments by varying the value of one parameter and keeping the others fixed. Our MMT framework is robust and insensitive to different parameters except when the hard classification loss is eliminated with λ^t_id = 1.0. The weighting factor of the hard and soft triplet losses λ^t_tri. In Figure 4 (a-b), we investigate the effect of the weighting factor λ^t_tri in equation 9, where the weight for the soft softmax-triplet loss is λ^t_tri and the weight for the hard triplet loss is (1 − λ^t_tri). We test our proposed MMT-500 with both ResNet-50 and IBN-ResNet-50 backbones when λ^t_tri varies over 0.0, 0.3, 0.5, 0.8 and 1.0. Specifically, the soft softmax-triplet loss is removed from the final training objective (equation 9) when λ^t_tri is equal to 0.0, and the hard triplet loss is eliminated when λ^t_tri is set to 1.0. We observe
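As a concrete illustration of the temporal average models discussed above, the sketch below shows the usual exponential-moving-average parameter update that produces an averaged ("mean teacher") network from an ordinary one. The smoothing coefficient alpha is illustrative, and this is not the authors' code.

```python
import torch

def update_temporal_average(avg_net, net, alpha=0.999):
    """One step of the temporal-average update: the averaged network ensembles
    past parameters, giving the de-coupled predictions whose KL divergence is
    plotted in the comparison described above. alpha is a placeholder value."""
    with torch.no_grad():
        for p_avg, p in zip(avg_net.parameters(), net.parameters()):
            p_avg.mul_(alpha).add_(p, alpha=1.0 - alpha)
```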
A framework that conducts online refinement of pseudo labels with a novel soft softmax-triplet loss for unsupervised domain adaptation on person re-identification.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:446
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: We present the first end-to-end verifier of audio classifiers. Compared to existing methods, our approach enables analysis of both, the entire audio processing stage as well as recurrent neural network architectures (e.g., LSTM). The audio processing is verified using novel convex relaxations tailored to feature extraction operations used in audio (e.g., Fast Fourier Transform) while recurrent architectures are certified via a novel binary relaxation for the recurrent unit update. We show the verifier scales to large networks while computing significantly tighter bounds than existing methods for common audio classification benchmarks: on the challenging Google Speech Commands dataset we certify 95% more inputs than the interval approximation (only prior scalable method), for a perturbation of -90dB. Recent advances in deep learning have enabled replacement of traditional voice recognition systems with a single neural network trained from data (Graves et al., 2013; Hannun et al., 2014; Amodei et al., 2016) . Wide adoption of these networks in consumer devices poses a threat to their safety when exposed to a malicious adversary. Indeed, it was recently shown that an adversary can inject noise unrecognizable to a human and force the network to misclassify (Szegedy et al., 2013; Goodfellow et al., 2014; Zhang et al., 2017; Carlini & Wagner, 2018; Carlini et al., 2016; Qin et al., 2019; Neekhara et al., 2019; Yang et al., 2019; Esmaeilpour et al., 2019) , exposing a serious security flaw. Ideally, when deploying an automated speech recognition system we would like to guarantee that the system is robust against noise injected by an adversary. There has been substantial recent work on certifying robustness of computer vision models (Katz et al., 2017; Ehlers, 2017; Ruan et al., 2018; Tjeng et al., 2019; Anderson et al., 2018; Wong et al., 2018; Raghunathan et al., 2018; Dvijotham et al., 2019; Weng et al., 2018; Zhang et al., 2018; Salman et al., 2019; Gehr et al., 2018; Singh et al., 2018; 2019a; Wang et al., 2018; Singh et al., 2019b) . However, the audio domain poses unique challenges not addressed by prior certification work for vision. Differences between audio and vision models Concretely, while an input to a vision model is a raw image, audio models typically come with a complex preprocessing stage (that involves non-trivial non-linear operations such as logarithm) which extracts relevant features from the signal. Additionally, audio systems typically use recurrent architectures (Chiu et al., 2017) which computer vision verifiers do not handle as they focus on fully-connected, convolutional and residual architectures. This work We address both of these challenges and propose an end-to-end verification method for neural network based audio classifiers and an implementation of this method in a system called DAC (Deep Audio Certifier). Our threat model assumes an attacker can introduce a noise-based perturbation to the raw audio input signal. The goal then is to certify that, for any signal that the attacker can produce, the neural network classifies the signal to the correct label. We perform verification of this property using the framework of abstract interpretation (Gehr et al., 2018) . 
At a high level, the idea is to maintain an abstraction capturing all possible behaviors of both the audio processing stage and the neural network. The flow of DAC is shown in Fig. 1 where all abstractions are dark blue shapes. Here, all possible signals an attacker can obtain are captured using an abstraction s (i) (a convex relaxation). This abstraction is then propagated through the audio processing stage (shown in green boxes). The key components of this step are abstract transformers. For each audio processing operation (e.g. FFT) we create an abstract transformer which receives an abstraction representing an approximation of all possible inputs to the operation and outputs a new abstraction which approximates all possible outputs of the operation. The result of the audio processing stage is the abstraction x (i) . The shape x (i) is then used as input to the recurrent LSTM unit (light blue) which maintains an abstraction of a hidden state h (i−1) . LSTM consists of multiple operations and we create a custom abstract transformer for each of those. The result of the transformers in LSTM is a new hidden state h (i) . If this was the last frame in the signal (meaning i = T ), then hidden state h (T ) is passed through the fully connected layer of the neural network and, again using the abstract transformer, the final abstract shape a is obtained at the output (at the right of Fig. 1 ). Finally, to certify the property we check if each concrete output in the abstraction a classifies to the correct label (this is typically easy). If this is true, the output of the network is correct for all inputs that the attacker can create. Related work on RNN certification The work of (Ko et al., 2019) proposes the POPQORN verifier for recurrent neural networks (RNN). We note that POPQORN does not handle the audio preprocessing pipeline. Even though POPQORN cannot directly verify audio classifiers, their approximations for LSTM non-linearities can be integrated in DAC. This results in ≈ 200× slowdown with small decrease in the volume of the approximation. The massive slowdown makes their approximations unsuitable for certifying audio classifiers. In contrast, using our custom abstract transformers for LSTM non-linearities, DAC can precisely certify end-to-end robustness of challenging audio classifiers in few minutes. Our main contributions are: 1. A novel and efficient method to certify robustness of neural network audio classifiers to noise-based perturbations. The method is based on new abstract transformers which handle non-linear operations used in both audio processing and recurrent architectures. 2. An implementation of both verification and provably robust training in a system called DAC. We evaluated DAC on common audio classification benchmarks, showing it scales to realistic networks and is far more precise (97% to 2%) than the next best scalable method. We presented the first verifier for certifying audio classifiers. The key idea was to create abstract transformers for non-linear operations used in the audio processing stage and the recurrent network. These transformers compute an optimal (area-wise) approximation under assumptions representable in the underlying convex relaxation and enable sound handling of the entire pipeline. Our evaluation shows that DAC is practically effective and achieves high verification rates on different datasets. by the smaller volume under the each plane. Then for any x, y, f (x, y 1 ) < f (x, y 2 ) and f (x 1 , y) < f (x 2 , y). 
Thus, since z^u_x is independent of y, it is sufficient to show z We can easily know that f(x, u_y) is concave for x ≥ 0 and convex for x ≤ 0 from the second derivative of f. (a) Consider the case of u_x > 0. Let x_0 be the x coordinate of the crossing of f(x, u_y) and . Again, by convexity of . Again, by convexity of With analogous steps, z^l_y can be shown to lie under the curve. Choosing the plane with the larger volume underneath it allows us to minimize the expected difference between the true curve and the lower-bound plane under the randomly chosen domain. The proof of the upper bounds follows the same steps as the first case. z^u_x in this case is exactly the same as before, but since f(x, y) goes below 0 when y < 0, z^u_y has to anchor at (l_x, l_y) instead of (u_x, l_y), since f(l_x, l_y) ≥ f(u_x, l_y) and f is convex in the region. The proof steps do not differ much from the previous proofs. Again, the proof for the lower bound is similar to before, but note that z^l_x needs to choose the maximum of the two slopes. This is due to the sign of the values. Since f(u_x, l_y) < 0 is the minimum in the region and it grows as x gets smaller, both D_i f(u_x, l_y) and (f(u_x, l_y) − f(l_x, l_y))/(u_x − l_x) are less than zero.
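To illustrate how bounds can flow through an audio preprocessing stage, here is a toy interval ("box") propagation through two elementwise operations of the kind that appear in such pipelines. DAC itself uses tighter convex relaxations than these interval bounds, so this is only a baseline sketch with illustrative function names.

```python
import numpy as np

def interval_square(lo, hi):
    """Elementwise bounds for x**2 on [lo, hi] (e.g., a power-spectrum step)."""
    lo2, hi2 = lo ** 2, hi ** 2
    # If the interval straddles zero, the minimum of x**2 is 0.
    lower = np.where((lo <= 0.0) & (hi >= 0.0), 0.0, np.minimum(lo2, hi2))
    return lower, np.maximum(lo2, hi2)

def interval_log(lo, hi, eps=1e-9):
    """Elementwise bounds for log(x + eps) on nonnegative inputs
    (e.g., a log-energy step); log is monotone, so bounds map through."""
    return np.log(lo + eps), np.log(hi + eps)
```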
We present the first approach to certify robustness of neural networks against noise-based perturbations in the audio domain.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:447
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Since deep neural networks are over-parameterized, they can memorize noisy examples. We address such memorizing issue in the presence of annotation noise. From the fact that deep neural networks cannot generalize neighborhoods of the features acquired via memorization, we hypothesize that noisy examples do not consistently incur small losses on the network under a certain perturbation. Based on this, we propose a novel training method called Learning with Ensemble Consensus (LEC) that prevents overfitting noisy examples by eliminating them using the consensus of an ensemble of perturbed networks. One of the proposed LECs, LTEC outperforms the current state-of-the-art methods on noisy MNIST, CIFAR-10, and CIFAR-100 in an efficient manner. Deep neural networks (DNNs) have shown excellent performance (Krizhevsky et al., 2012; He et al., 2016) on visual recognition datasets (Deng et al., 2009) . However, it is difficult to obtain highquality labeled datasets in practice (Wang et al., 2018a) . Even worse, DNNs could not generalize the training data in the presence of noisy examples . Therefore, there is an increasing demand for robust training methods. In general, DNNs optimized with SGD first generalize clean examples under label noise . Based on this, recent studies consider examples that incur small losses on the network that does not overfit noisy examples as being clean (Han et al., 2018; Shen & Sanghavi, 2019) . However, such small-loss examples may be corrupted, particularly under a high level of noise. Hence, choosing safe examples from the noisy dataset with small-loss criteria may be impractical. To address this, we find the method of screening out noisy examples among small-loss examples by focusing on well-known observations: (i) noisy examples are learned via memorization rather than via generalization and (ii) under a certain perturbation, network predictions for memorized features easily fluctuate, while those for generalized features do not. Based on these two observations, we hypothesize that out of small-loss examples, training losses of noisy examples would increase by injecting certain perturbation to network parameters, while those of clean examples would not. This suggests that examples that consistently incur small losses under multiple perturbations can be considered as being clean. Since this idea comes from an artifact of SGD optimization, it can be applied to any architecture optimized with SGD. In this work, we introduce a method of perturbing parameters to filter noisy examples out of smallloss examples. By embedding the filtering into training, we propose a new robust training scheme termed learning with ensemble consensus (LEC). In LEC, the network is first trained on the entire training set for a while and then trained on the intersection of small-loss examples of the ensemble of perturbed networks. We present three LECs with different perturbations and evaluate their effectiveness on three benchmark datasets with random label noise (Goldberger & Ben-Reuven, 2016; Ma et al., 2018) , open-set noise (Wang et al., 2018b) , and semantic noise. The proposed LEC outperforms existing robust training methods by efficiently removing noisy examples from training batches. Generalization of DNNs. 
Although DNNs are over-parameterized, they have impressive generalization ability (Krizhevsky et al., 2012; He et al., 2016). Some studies argue that gradient-based optimization plays an important role in regularizing DNNs (Neyshabur et al., 2014). Prior work shows that DNNs optimized with gradient-based methods generalize clean examples in the early stage of training. Since mislabeling reduces the correlation with other training examples, it is likely that noisy examples are learned via memorization. Therefore, we analyze the difference between generalized and memorized features to discriminate clean and noisy examples. Training DNNs with noisy datasets. Label noise issues can be addressed by reducing the negative impact of noisy examples. One direction is to train with a modified loss function based on the noise distribution. Most studies in this direction estimate the noise distribution prior to training, since it is not accessible in general (Sukhbaatar et al., 2014; Goldberger & Ben-Reuven, 2016; Patrini et al., 2017; Hendrycks et al., 2018). Another direction is to train with modified labels using the current model prediction (Reed et al., 2014; Ma et al., 2018). Aside from these directions, recent work suggests exploiting small-loss examples (Jiang et al., 2017; Han et al., 2018; Yu et al., 2019; Shen & Sanghavi, 2019) based on the generalization ability of DNNs. However, it is still hard to find clean examples by relying on training losses alone. This study presents a simple method to overcome this problem of small-loss criteria. This work presents a method of generating and using an ensemble for robust training. We explore three simple perturbation methods to generate the ensemble and then develop a way of identifying noisy examples through ensemble consensus on small-loss examples. Along with the growing attention to the use of small-loss examples for robust training, we expect that our ensemble method will be useful for such training methods.
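The consensus filtering step described above (keep only examples that are small-loss for every perturbed network) can be sketched in a few lines. The keep ratio and the loss-matrix layout are illustrative, not the authors' exact implementation.

```python
import numpy as np

def ensemble_consensus(batch_losses, keep_ratio=0.7):
    """Return indices treated as clean: the intersection of the small-loss sets
    of every perturbed network in the ensemble.

    batch_losses: array-like of shape (num_networks, batch_size) with
    per-example losses; keep_ratio is a placeholder hyperparameter.
    """
    batch_losses = np.asarray(batch_losses, dtype=float)
    num_keep = int(keep_ratio * batch_losses.shape[1])
    consensus = None
    for losses in batch_losses:
        small_loss = set(np.argsort(losses)[:num_keep].tolist())
        consensus = small_loss if consensus is None else consensus & small_loss
    return sorted(consensus)
```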
This work presents a method of generating and using ensembles effectively to identify noisy examples in the presence of annotation noise.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:448
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Recovering sparse conditional independence graphs from data is a fundamental problem in machine learning with wide applications. A popular formulation of the problem is an $\ell_1$ regularized maximum likelihood estimation. Many convex optimization algorithms have been designed to solve this formulation to recover the graph structure. Recently, there is a surge of interest to learn algorithms directly based on data, and in this case, learn to map empirical covariance to the sparse precision matrix. However, it is a challenging task in this case, since the symmetric positive definiteness (SPD) and sparsity of the matrix are not easy to enforce in learned algorithms, and a direct mapping from data to precision matrix may contain many parameters. We propose a deep learning architecture, GLAD, which uses an Alternating Minimization (AM) algorithm as our model inductive bias, and learns the model parameters via supervised learning. We show that GLAD learns a very compact and effective model for recovering sparse graphs from data. Recovering sparse conditional independence graphs from data is a fundamental problem in high dimensional statistics and time series analysis, and it has found applications in diverse areas. In computational biology, a sparse graph structure between gene expression data may be used to understand gene regulatory networks; in finance, a sparse graph structure between financial timeseries may be used to understand the relationship between different financial assets. A popular formulation of the problem is an 1 regularization log-determinant estimation of the precision matrix. Based on this convex formulation, many algorithms have been designed to solve this problem efficiently, and one can formally prove that under a list of conditions, the solution of the optimization problem is guaranteed to recover the graph structure with high probability. However, convex optimization based approaches have their own limitations. The hyperparameters, such as the regularization parameters and learning rate, may depend on unknown constants, and need to be tuned carefully to achieve the recovery results. Furthermore, the formulation uses a single regularization parameter for all entries in the precision matrix, which may not be optimal. It is intuitive that one may obtain better recovery results by allowing the regularization parameters to vary across the entries in the precision matrix. However, such flexibility will lead to a quadratic increase in the number of hyperparameters, but it is hard for traditional approaches to search over a large number of hyperparameters. Thus, a new paradigm may be needed for designing more effective sparse recovery algorithms. Recently, there has been a surge of interest in a new paradigm of algorithm design, where algorithms are augmented with learning modules trained directly with data, rather than prescribing every step of the algorithms. This is meaningful because very often a family of optimization problems needs to be solved again and again, similar in structures but different in data. A data-driven algorithm may be able to leverage this distribution of problem instances, and learn an algorithm which performs better than traditional convex formulation. 
In our case, the sparse graph recovery problem may also need to be solved again and again, where the underlying graphs are different but have similar degree distributions, magnitudes of the precision matrix entries, etc. For instance, gene regulatory networks may be rewiring depending on the time and conditions, and we want to estimate them from gene In our experiments, we show that the AM architecture provides a very good inductive bias, allowing the model to learn a very effective sparse graph recovery algorithm with a small amount of training data. In all cases, the learned algorithm can recover sparse graph structures with far fewer data points from a new problem, and it also works well in recovering gene regulatory networks based on realistic gene expression data generators. Related works. Belilovsky et al. (2017) consider a CNN-based architecture that directly maps empirical covariance matrices to estimated graph structures. Previous works have parameterized optimization algorithms as recurrent neural networks or policies in reinforcement learning. For instance, Andrychowicz et al. (2016) considered directly parameterizing an optimization algorithm as an RNN-based framework for learning to learn. Li & Malik (2016) approach the problem of automating algorithm design from a reinforcement learning perspective and represent any particular optimization algorithm as a policy. Khalil et al. (2017) learn combinatorial optimization over graphs via deep Q-learning. These works did not consider the structures of our sparse graph recovery problem. Another interesting line of work develops deep neural networks based on unfolding an iterative algorithm (Gregor & LeCun, 2010). A follow-up work developed ALISTA, which is based on unrolling the Iterative Shrinkage Thresholding Algorithm (ISTA). Sun et al. (2016) developed 'ADMM-Net', which was also designed for compressive sensing of MRI data. Though these seminal works were primarily developed for compressive sensing applications, they alluded to the general theme of using unrolled algorithms as inductive biases. We thus identify a suitable unrolled algorithm and leverage its inductive bias to solve the sparse graph recovery problem. We presented a novel neural network, GLAD, for the sparse graph recovery problem based on an unrolled Alternating Minimization algorithm. We theoretically prove the linear convergence of the AM algorithm as well as empirically show that learning can further improve the sparse graph recovery. The learned GLAD model is able to push the sample complexity limits, thereby highlighting the potential of using algorithms as inductive biases for deep learning architectures. Further development of theory is needed to fully understand and realize the potential of this new direction. Alternating Minimization is performing Setting the gradient of the objective function with respect to Θ to zero, we have Setting the gradient of the objective function with respect to Z to zero, we have where Solving the above two equations, we obtain: where B LINEAR CONVERGENCE RATE ANALYSIS m, where ρ is the ℓ_1 penalty, d is the dimension of the problem and m is the number of samples, the Alternating Minimization algorithm has a linear convergence rate for the optimization objective defined in (6). The k-th iteration of the AM algorithm satisfies, where 0 < C_λ < 1 is a constant depending on λ. We will reuse the following notations in the appendix: The update rules for Alternating Minimization are: Assumptions: With reference to the theory developed in Rothman et al.
(2008), we make the following assumptions about the true model. (O_P(·) is used to denote bounded in probability.) We now proceed towards the proof: Lemma 2. For any x, y, k ∈ R, k > 0, x ≠ y, Proof. where Λ_max(X) is the largest eigenvalue of X in absolute value. Proof. First we factorize X using the eigendecomposition X = Q_X D_X Q_X^T, where Q_X and D_X are an orthogonal matrix and a diagonal matrix, respectively. Then we have, Similarly, the above equation holds for Y. Therefore, where we define Q := Q_Y Q_X. Similarly, we have, Then the i-th entry on the diagonal of ji. Using the fact that D_X and D_Y are diagonal, we have, The last step makes use of Similarly, using (42), we have, Assuming ‖X − Y‖_F > 0 (otherwise (37) trivially holds), using (52) and (50), we have, Using Lemma (2), we have, Therefore, Lemma 4. Under Assumption (2), the output of the k-th and where 0 < C_λ < 1 is a constant depending on λ. Proof. The first part is easy to show, if we observe that in the second update step of AM (8), η_{ρ/λ} is a contraction under the metric d(X, Y) = ‖X − Y‖_F. Therefore we have, Next we will prove the second part. To simplify notation, we let A(X) = XX + 4λI. Using the first update step of AM (7), we have, where The last derivation step makes use of the triangle inequality. Using Lemma (3), we have, Therefore where Λ_max(X) is the largest eigenvalue of X in absolute value. The rest is to show that both Λ_max(Y_λ) and Λ_max(Y_{k+1}) are bounded using Assumption (2). For Λ_max(Y_{k+1}), we have, Combining (62) and (68), we have, Therefore, Continuing with (73), we have, Since Z_λ is the minimizer of a strongly convex function, its norm is bounded. And we also have Therefore both Λ_max(Y_λ) and Λ_max(Y_{k+1}) are bounded in (70), i.e. 0 < C_λ < 1 is a constant only depending on λ. m, where ρ is the ℓ_1 penalty, d is the dimension of the problem and m is the number of samples, the Alternating Minimization algorithm has a linear convergence rate for the optimization objective defined in (6). The k-th iteration of the AM algorithm satisfies, where 0 < C_λ < 1 is a constant depending on λ. Proof. (1) Error between Θ_λ and Θ_G. Combining the following two equations: Note that by the optimality condition, ∇_z f(Θ_λ, Z_λ, ρ, λ) = 0, we have the fixed point equation λ and we have: Since G is σ_G-strongly convex, where σ_G is independent of the sample covariance matrix Σ* as the Hessian of G is independent of Σ*. Therefore, Proof. (2) Error between Θ_G and Θ*. Corollary 5 (Theorem 1 of Rothman et al. (2008)). Let Θ_G be the minimizer for the optimization C EXPERIMENTAL DETAILS This section contains the detailed settings used in the experimental evaluation section.
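For reference, the optimization problem and the alternating scheme that the unrolled architecture is built on can be stated compactly as follows. This is a hedged reconstruction consistent with the quantities that appear in the appendix above (the soft-thresholding operator η_{ρ/λ} and the term A(X) = XX + 4λI); the paper's own parameterization may differ in constants.

```latex
% l1-regularized log-determinant estimation and its AM splitting (reconstruction):
\hat{\Theta} = \arg\min_{\Theta \succ 0} \; -\log\det\Theta + \mathrm{tr}(\widehat{\Sigma}\,\Theta) + \rho\,\|\Theta\|_1 ,
\qquad
f(\Theta, Z) = -\log\det\Theta + \mathrm{tr}(\widehat{\Sigma}\,\Theta) + \rho\,\|Z\|_1 + \tfrac{\lambda}{2}\,\|\Theta - Z\|_F^2 .
% AM alternates a closed-form Theta-update (via the matrix square root of
% A(X) = XX + 4\lambda I with X = \widehat{\Sigma} - \lambda Z) with elementwise soft-thresholding:
\Theta^{k+1} = \arg\min_{\Theta \succ 0} f(\Theta, Z^{k}),
\qquad
Z^{k+1} = \eta_{\rho/\lambda}\!\left(\Theta^{k+1}\right),
\qquad
\eta_{a}(x) = \operatorname{sign}(x)\,\max(|x| - a,\, 0).
```

GLAD then unrolls a fixed number of such iterations and learns the iteration-specific hyperparameters from data, which is what allows the effective regularization to vary as motivated above.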
A data-driven learning algorithm based on unrolling the Alternating Minimization optimization for sparse graph recovery.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:449
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Deep reinforcement learning (RL) policies are known to be vulnerable to adversarial perturbations to their observations, similar to adversarial examples for classifiers. However, an attacker is not usually able to directly modify another agent's observations. This might lead one to wonder: is it possible to attack an RL agent simply by choosing an adversarial policy acting in a multi-agent environment so as to create natural observations that are adversarial? We demonstrate the existence of adversarial policies in zero-sum games between simulated humanoid robots with proprioceptive observations, against state-of-the-art victims trained via self-play to be robust to opponents. The adversarial policies reliably win against the victims but generate seemingly random and uncoordinated behavior. We find that these policies are more successful in high-dimensional environments, and induce substantially different activations in the victim policy network than when the victim plays against a normal opponent. Videos are available at https://attackingrl.github.io. The discovery of adversarial examples for image classifiers prompted a new field of research into adversarial attacks and defenses (Szegedy et al., 2014) . Recent work has shown that deep RL policies are also vulnerable to adversarial perturbations of image observations Kos and Song, 2017) . However, real-world RL agents inhabit natural environments populated by other agents, including humans, who can only modify observations through their actions. We explore whether it's possible to attack a victim policy by building an adversarial policy that takes actions in a shared environment, inducing natural observations which have adversarial effects on the victim. RL has been applied in settings as varied as autonomous driving (Dosovitskiy et al., 2017) , negotiation (Lewis et al., 2017) and automated trading (Noonan, 2017) . In domains such as these, an attacker cannot usually directly modify the victim policy's input. For example, in autonomous driving pedestrians and other drivers can take actions in the world that affect the camera image, but only in a physically realistic fashion. They cannot add noise to arbitrary pixels, or make a building disappear. Similarly, in financial trading an attacker can send orders to an exchange which will appear in the victim's market data feed, but the attacker cannot modify observations of a third party's orders. Contributions. Our paper makes three key contributions. First, we have proposed a novel threat model of natural adversarial observations produced by an adversarial policy taking actions in a shared environment. Second, we demonstrate that adversarial policies exist in a range of zero-sum simulated robotics games against state-of-the-art victims trained via self-play to be robust to adversaries. Third, we verify the adversarial policies win by confusing the victim, not by learning a generally strong policy. Specifically, we find the adversary induces highly off-distribution activations in the victim, and that victim performance increases when it is blind to the adversary's position. We repeated the hyperparameter sweep for fine-tuning victim policies for the defence experiments, but obtained similar results. 
For simplicity, we therefore chose to use the same hyperparameters throughout. We used a mixture of in-house and cloud infrastructure to perform these experiments. It takes around 8 hours to train an adversary for a single victim using 4 cores of an Intel Xeon Platinum 8000 (Skylake) processor.
Deep RL policies can be attacked by other agents taking actions so as to create natural observations that are adversarial.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:45
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Counterfactual regret minimization (CFR) is a fundamental and effective technique for solving Imperfect Information Games (IIG). However, the original CFR algorithm only works for discrete states and action spaces, and the resulting strategy is maintained as a tabular representation. Such tabular representation limits the method from being directly applied to large games. In this paper, we propose a double neural representation for the IIGs, where one neural network represents the cumulative regret, and the other represents the average strategy. Such neural representations allow us to avoid manual game abstraction and carry out end-to-end optimization. To make the learning efficient, we also developed several novel techniques including a robust sampling method and a mini-batch Monte Carlo Counterfactual Regret Minimization (MCCFR) method, which may be of independent interests. Empirically, on games tractable to tabular approaches, neural strategies trained with our algorithm converge comparably to their tabular counterparts, and significantly outperform those based on deep reinforcement learning. On extremely large games with billions of decision nodes, our approach achieved strong performance while using hundreds of times less memory than the tabular CFR. On head-to-head matches of hands-up no-limit texas hold'em, our neural agent beat the strong agent ABS-CFR by $9.8\pm4.1$ chips per game. It's a successful application of neural CFR in large games. While significant advance has been made in addressing large perfect information games, such as Go (Silver et al., 2016) , solving imperfect information games remains a challenging task. For Imperfect Information Games (IIG), a player has only partial knowledge about her opponents before making a decision, so that she has to reason under the uncertainty about her opponents' information while exploiting the opponents' uncertainty about herself. Thus, IIGs provide more realistic modeling than perfect information games for many real-world applications, such as trading, traffic routing, and politics. Nash equilibrium is a typical solution concept for a two-player perfect-recall IIG. One of the most effective approaches is CFR (Zinkevich et al., 2007) , which minimizes the overall counterfactual regret so that the average strategies converge to a Nash equilibrium. However the original CFR only works for discrete states and action spaces, and the resulting strategy is maintained as a tabular representation. Such tabular representation limits the method from being directly applied to large games. To tackle this challenge, one can simplify the game by grouping similar states together to solve the simplified (abstracted) game approximately via tabular CFR (Zinkevich et al., 2007; Lanctot et al., 2009) . Constructing an effective abstraction, however, demands rich domain knowledge and its solution may be a coarse approximation of true equilibrium. Function approximation can be used to replace the tabular representation. Waugh et al. (2015) combines regression tree function approximation with CFR based on handcrafted features, which is called Regression CFR (RCFR). However, since RCFR uses full traversals of the game tree, it is still impractical for large games. Moravcik et al. 
(2017) propose a seminal approach, DeepStack, which uses fully connected neural networks to represent players' counterfactual values; however, tabular CFR was still used in the subgame solving. Jin et al. (2017) use deep reinforcement learning to solve the regret minimization problem in single-agent settings, which is different from two-player perfect-recall IIGs. To learn an approximate Nash equilibrium for IIGs in an end-to-end manner, Heinrich et al. (2015) and Heinrich & Silver (2016) propose eXtensive-form Fictitious Play (XFP) and Neural Fictitious Self-Play (NFSP), respectively, based on deep reinforcement learning. In an NFSP model, the neural strategies are updated by selecting the best responses to their opponents' average strategies. These approaches are advantageous in the sense that they do not rely on abstracting the game, and accordingly their strategies can improve continuously with more optimization iterations. However, fictitious play empirically converges much more slowly than CFR-based approaches. Srinivasan et al. (2018) use actor-critic policy optimization methods to minimize regret and achieve performance comparable to NFSP. Thus it remains an open question whether a purely neural, end-to-end approach can achieve performance comparable to tabular CFR-based approaches. In this paper, we settle this open question by designing a double neural counterfactual regret minimization (DNCFR) algorithm. To obtain a neural representation, we model the imperfect information game using a novel recurrent neural network with attention. Furthermore, in order to improve the convergence of the neural algorithm, we also developed a new sampling technique which converges much more efficiently than outcome sampling, while being more memory-efficient than external sampling. In the experiments, we conducted a set of ablation studies related to each novelty. The experiments showed that DNCFR converged to results comparable to those produced by its tabular counterpart while performing much better than NFSP. In addition, we tested DNCFR on an extremely large game, heads-up no-limit Texas Hold'em (HUNL). The experiments showed that DNCFR with only a small number of parameters achieved a strong neural strategy and beat ABS-CFR. h ∈ H denotes a possible history (or state), which consists of each player's hidden variable and the actions taken by all players including chance. The empty sequence ∅ is a member of H. h_j ⊑ h denotes that h_j is a prefix of h. Z ⊆ H denotes the terminal histories, and any member z ∈ Z is not a prefix of any other sequence. A(h) = {a : ha ∈ H} is the set of available actions after a non-terminal history h ∈ H \ Z. A player function P assigns a member of N ∪ {c} to each non-terminal history, where c is the chance player (we set c = −1). P(h) is the player who takes an action after history h. For each player i, imperfect information is denoted by an information set (infoset) I_i. All states h ∈ I_i are indistinguishable to i. I_i refers to the set of infosets of i. The utility function u_i(z) defines the payoff of i at state z. See Appendix B.1 for more details. Solving IIGs via function approximation methods is an important and challenging problem. Neural Fictitious Self-Play (NFSP) (Heinrich & Silver, 2016) is a function approximation method based on deep reinforcement learning, which is a prior leading method for solving IIGs. However, fictitious play empirically converges more slowly than CFR-based approaches in many settings. Recently, Lockhart et al.
(2019) propose a new framework to directly optimize the final policy against worst-case opponents. However, the authors consider only small games. Regression CFR (RCFR) (Waugh et al., 2015) is a function approximation method based on CFR. However, RCFR needs to traverse the full game tree. Such traversal is intractable in large games. In addition, RCFR uses hand-crafted features and regression tree to estimate cumulative regret rather than learning features from data. Deep learning empirically performs better than regression tree in many areas, such as the Transformer and BERT in natural language models (Ashish Vaswani, 2017; Jacob Devlin, 2018) . In the past year, concurrent works deep CFR (DCFR) (Brown et al., 2018) and single deep CFR (SD-CFR) (Steinberger, 2019) have been proposed to address this problem via deep learning. DCFR, SDCFR, RCFR and our DNCFR are based on the framework of counterfactual regret minimization. However, there are many differences in several important aspects, which are listed as follows. (1) We represent the extensive-form game by recurrent neural network. The proposed LSTM with attention performs better than fully connected network (see details in Section 3.2). (2) DNCFR updates the cumulative regret only based on the additionally collected samples in current iteration rather than using the samples in a big reservoir (see details in Section 3.3.1). (3) It's important to use squared-loss for the average strategies rather than log loss. Because the log loss is based on the big reservoir samples up to T -th iteration, it is very memory-expensive (see details in Section 3.3.2). (4) Another important aspect to make deep learning model work is that we divide regret by √ T and renormalize the regret, because the cumulative regret can grow unboundedly (see details in Section 3.3.1). (5) Also, DNCFR collects data by an efficiently unbiased mini-batch robust sampling method, which may be of independent interests to the IIG communities (see details in Section 4). There are also big differences in the experimental evaluations. In our method, we conduct a set of ablation studies in various settings. We believe that our ablation studies are informative and could have a significant impact on these kinds of algorithms. Also, we evaluate DNCFR on extremely large games while RCFR and SDCFR are only evaluated on small toy games. We proposed a novel double neural counterfactual regret minimization approach to solve large IIGs by combining many novel techniques, such as recurrent neural representation, attention, robust sampling, and mini-batch MCCFR. We conduct a set of ablation studies and the results show that these techniques may be of independent interests. This is a successful application of applying deep learning into large IIG. We believe DNCFR and other related neural methods open up a promising direction for future work. A GAME RULES
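Since the discussion above centers on turning cumulative counterfactual regrets into strategies, a small sketch of the standard regret-matching rule may help fix ideas. The array layout is illustrative, and this is not the authors' code; DNCFR represents the cumulative regrets with its first neural network instead of a table.

```python
import numpy as np

def regret_matching(cum_regret):
    """Map the cumulative counterfactual regrets of one infoset to the current
    strategy over its actions (the core CFR update referenced above)."""
    cum_regret = np.asarray(cum_regret, dtype=float)
    positive = np.maximum(cum_regret, 0.0)
    total = positive.sum()
    if total > 0.0:
        return positive / total
    # If no action has positive regret, fall back to the uniform strategy.
    return np.full_like(cum_regret, 1.0 / cum_regret.size)
```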
We proposed a double neural framework to solve large-scale imperfect information games.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:450
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: We present the first verification that a neural network for perception tasks produces a correct output within a specified tolerance for every input of interest. We define correctness relative to a specification which identifies 1) a state space consisting of all relevant states of the world and 2) an observation process that produces neural network inputs from the states of the world. Tiling the state and input spaces with a finite number of tiles, obtaining ground truth bounds from the state tiles and network output bounds from the input tiles, then comparing the ground truth and network output bounds delivers an upper bound on the network output error for any input of interest. Results from two case studies highlight the ability of our technique to deliver tight error bounds for all inputs of interest and show how the error bounds vary over the state and input spaces. Neural networks are now recognized as powerful function approximators with impressive performance across a wide range of applications, especially perception tasks (e.g. vision, speech recognition). Current techniques, however, provide no correctness guarantees on such neural perception systemsthere is currently no way to verify that a neural network provides correct outputs (within a specified tolerance) for all inputs of interest. The closest the field has come is robustness verification, which aims to verify if the network prediction is stable for all inputs in some neighborhood around a selected input point. But robustness verification does not verify for all inputs of interest -it only verifies around local regions. Besides, it does not guarantee that the output, even if stable, is actually correct -there is no specification that defines the correct output for any input except for the manually-labeled center point of each region. We present the first correctness verification of neural networks for perception -the first verification that a neural network produces a correct output within a specified tolerance for every input of interest. Neural networks are often used to predict some property of the world given an observation such as an image or audio recording. We therefore define correctness relative to a specification which identifies 1) a state space consisting of all relevant states of the world and 2) an observation process that produces neural network inputs from the states of the world. Then the inputs of interest are all inputs that can be observed from the state space via the observation process. We define the set of inputs of interest as the feasible input space. Because the quantity of interest that the network predicts is some property of the state of the world, the state defines the ground truth output (and therefore defines the correct output for each input to the neural network). We present Tiler, the algorithm for correctness verification of neural networks. Evaluating the correctness of the network on a single state is straightforward -use the observation process to obtain the possible inputs for that state, use the neural network to obtain the possible outputs, then compare the outputs to the ground truth from the state. To do correctness verification, we generalize this idea to work with tiled state and input spaces. 
We cover the state and input spaces with a finite number of tiles: each state tile comprises a set of states; each input tile is the image of the corresponding state tile under the observation process. The state tiles provide ground truth bounds for the corresponding input tiles. We use recently developed techniques from the robustness verification literature to obtain network output bounds for each input tile (Xiang et al., 2018; Gehr et al., 2018; Weng et al., 2018; Bastani et al., 2016; Lomuscio and Maganti, 2017; Tjeng et al., 2019) . A comparison of the ground truth and output bounds delivers an error upper bound for that region of the state space. The error bounds for all the tiles jointly provide the correctness verification result. We present two case studies. The first involves a world with a (idealized) fixed road and a camera that can vary its horizontal offset and viewing angle with respect to the centerline of the road (Section 5). The state of the world is therefore characterized by the offset δ and the viewing angle θ. A neural network takes the camera image as input and predicts the offset and the viewing angle. The state space includes the δ and θ of interest. The observation process is the camera imaging process, which maps camera positions to images. This state space and the camera imaging process provide the specification. The feasible input space is the set of camera images that can be observed from all camera positions of interest. For each image, the camera positions of all the states that can produce the image give the possible ground truths. We tile the state space using a grid on (δ, θ). Each state tile gives a bound on the ground truth of δ and θ. We then apply the observation process to project each state tile into the image space. We compute a bounding box for each input tile and apply techniques from robustness verification (Tjeng et al., 2019) to obtain neural network output bounds for each input tile. Comparing the ground truth bounds and the network output bounds gives upper bounds on network prediction error for each tile. We verify that our trained neural network provides good accuracy across the majority of the state space of interest and bound the maximum error the network will ever produce on any feasible input. The second case study verifies a neural network that classifies a LiDAR measurement of a sign in an (idealized) scene into one of three shapes (Section 6). The state space includes the position of the LiDAR sensor and the shape of the sign. We tile the state space, project each tile into the input space via the LiDAR observation process, and again apply techniques from robustness verification to verify the network, including identifying regions of the input space where the network may deliver an incorrect classification. The techniques presented in this paper work with specifications provided by the combination of a state space of the world and an observation process that converts states into neural network inputs. Results from the case studies highlight how well the approach works for a state space characterized by several attributes and a camera imaging or LiDAR measurement observation process. We anticipate that the technique will also work well for other problems that have a low dimensional state space (but potentially a high dimensional input space). For higher dimensional state spaces, the framework makes it possible to systematically target specific regions of the input space to verify. 
Potential applications include targeted verification, directed testing, and the identification of illegal inputs on which the network is not expected to work.
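The tiling loop described above can be summarized in a short Python sketch. The helpers `observation_process` and `network_output_bounds` are placeholders for the camera/LiDAR model and the robustness-verification step, so this shows only the bookkeeping, not the paper's implementation.

```python
import numpy as np

def tiler_error_bounds(state_tiles, observation_process, network_output_bounds):
    """For each state tile, compare ground-truth bounds with verified network
    output bounds and record the worst-case prediction error on that tile."""
    results = []
    for gt_lo, gt_hi in state_tiles:                 # e.g., ranges of (offset, angle) as arrays
        # Input tile: bounding box of all inputs observable from this state tile.
        x_lo, x_hi = observation_process(gt_lo, gt_hi)
        # Network output bounds over the input tile (robustness-verification step).
        y_lo, y_hi = network_output_bounds(x_lo, x_hi)
        # Upper bound on |prediction - ground truth| anywhere in the tile.
        err = np.maximum(np.abs(y_hi - gt_lo), np.abs(y_lo - gt_hi)).max()
        results.append(((gt_lo, gt_hi), float(err)))
    return results
```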
We present the first verification that a neural network for perception tasks produces a correct output within a specified tolerance for every input of interest.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:451
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Deep generative models have achieved remarkable progress in recent years. Despite this progress, quantitative evaluation and comparison of generative models remains as one of the important challenges. One of the most popular metrics for evaluating generative models is the log-likelihood. While the direct computation of log-likelihood can be intractable, it has been recently shown that the log-likelihood of some of the most interesting generative models such as variational autoencoders (VAE) or generative adversarial networks (GAN) can be efficiently estimated using annealed importance sampling (AIS). In this work, we argue that the log-likelihood metric by itself cannot represent all the different performance characteristics of generative models, and propose to use rate distortion curves to evaluate and compare deep generative models. We show that we can approximate the entire rate distortion curve using one single run of AIS for roughly the same computational cost as a single log-likelihood estimate. We evaluate lossy compression rates of different deep generative models such as VAEs, GANs (and its variants) and adversarial autoencoders (AAE) on MNIST and CIFAR10, and arrive at a number of insights not obtainable from log-likelihoods alone. Generative models of images represent one of the most exciting areas of rapid progress of AI (Brock et al., 2019; Karras et al., 2018b; a) . However, evaluating the performance of generative models remains a significant challenge. Many of the most successful models, most notably Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) , are implicit generative models for which computation of log-likelihoods is intractable or even undefined. Evaluation typically focuses on metrics such as the Inception score (Salimans et al., 2016) or the Fréchet Inception Distance (FID) score (Heusel et al., 2017) , which do not have nearly the same degree of theoretical underpinning as likelihood-based metrics. Log-likelihoods are one of the most important measures of generative models. Their utility is evidenced by the fact that likelihoods (or equivalent metrics such as perplexity or bits-per-dimension) are reported in nearly all cases where it's convenient to compute them. Unfortunately, computation of log-likelihoods for implicit generative models remains a difficult problem. Furthermore, log-likelihoods have important conceptual limitations. For continuous inputs in the image domain, the metric is often dominated by the fine-grained distribution over pixels rather than the high-level structure. For models with low-dimensional support, one needs to assign an observation model, such as (rather arbitrary) isotropic Gaussian noise (Wu et al., 2016) . Lossless compression metrics for GANs often give absurdly large bits-per-dimension (e.g. 10 14 ) which fails to reflect the true performance of the model (Grover et al., 2018; Danihelka et al., 2017) . See Theis et al. (2015) for more discussion of limitations of likelihood-based evaluation. Typically, one is not interested in describing the pixels of an image directly, and it would be sufficient to generate images close to the true data distribution in some metric such as Euclidean distance. 
For this reason, there has been much interest in Wasserstein distance as a criterion for generative models, since the measure exploits precisely this metric structure (Gulrajani et al., 2017; Salimans et al., 2018). However, Wasserstein distance remains difficult to approximate, and hence it is not routinely used to evaluate generative models. We aim to achieve the best of both worlds by measuring lossy compression rates of deep generative models. In particular, we aim to estimate the rate distortion function, which measures the number of bits required to match a distribution to within a given distortion. Like Wasserstein distance, it can exploit the metric structure of the observation space, but like log-likelihoods, it connects to the rich literature of probabilistic and information-theoretic analysis of generative models. By focusing on different parts of the rate distortion curve, one can achieve different tradeoffs between the description length and the fidelity of reconstruction, thereby fixing the problem whereby lossless compression focuses on the details at the expense of high-level structure. It has the further advantage that the distortion metric need not have a probabilistic interpretation; hence, one is free to use more perceptually valid distortion metrics such as structural similarity (SSIM) (Wang et al., 2004) or distances between hidden representations of a convolutional network (Huang et al., 2018). Algorithmically, computing rate distortion functions raises similar challenges to estimating log-likelihoods. We show that the rate distortion curve can be computed by finding the normalizing constants of a family of unnormalized probability distributions over the noise variables z. Interestingly, when the distortion metric is squared error, these distributions correspond to the posterior distributions of z for Gaussian observation models with different variances; hence, the rate distortion analysis generalizes the evaluation of log-likelihoods with Gaussian observation models. Annealed Importance Sampling (AIS) (Neal, 2001) is currently the most effective general-purpose method for estimating log-likelihoods of implicit generative models, and was used by Wu et al. (2016) to compare log-likelihoods of a variety of implicit generative models. The algorithm is based on gradually interpolating between a tractable initial distribution and an intractable target distribution. We show that when AIS is used to estimate log-likelihoods under a Gaussian observation model, the sequence of intermediate distributions corresponds to precisely the distributions needed to compute the rate distortion curve. Since AIS maintains a stochastic lower bound on the normalizing constants of these distributions, it automatically produces an upper bound on the entire rate distortion curve. Furthermore, the tightness of the bound can be validated on simulated data using bidirectional Monte Carlo (BDMC) (Grosse et al., 2015; Wu et al., 2016). Hence, we can approximate the entire rate distortion curve for roughly the same computational cost as a single log-likelihood estimate. We use our rate distortion approximations to study a variety of variational autoencoders (VAEs) (Kingma & Welling, 2013), GANs and adversarial autoencoders (AAE) (Makhzani et al., 2015), and arrive at a number of insights not obtainable from log-likelihoods alone.
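To make the connection above concrete, the snippet below is a minimal sketch (not the authors' code) of the family of unnormalized distributions over the noise variables z that AIS would target under a squared-error distortion; the decoder, the standard normal prior, and the beta schedule are placeholder assumptions of ours.

```python
import torch

def log_unnormalized_density(z, x, decoder, beta):
    """log of the unnormalized target p(z) * exp(-beta * ||x - G(z)||^2 / 2).

    With beta = 1/sigma^2 this is, up to a constant, the posterior over z under a
    Gaussian observation model with variance sigma^2, i.e. one member of the family
    of distributions whose normalizing constants trace out the rate-distortion
    curve for squared-error distortion.
    """
    log_prior = -0.5 * (z ** 2).sum(dim=1)                      # standard normal prior, up to a constant
    distortion = ((x - decoder(z)) ** 2).flatten(1).sum(dim=1)  # squared-error distortion
    return log_prior - 0.5 * beta * distortion

# A geometric schedule of inverse variances: each beta defines one intermediate
# AIS distribution, and hence one point of the rate-distortion upper bound.
betas = torch.logspace(-4, 4, steps=100)
```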
For instance, we observe that VAEs and GANs have different rate distortion tradeoffs: while VAEs with larger code size can generally achieve better lossless compression rates, their performance drops at lossy compression in the low-rate regime. Conversely, expanding the capacity of GANs appears to bring substantial reductions in distortion at the high-rate regime without any corresponding deterioration in quality in the low-rate regime. We find that increasing the capacity of GANs by increasing the code size (width) has a qualitatively different effect on the rate distortion tradeoffs than increasing the depth. We also find that different GAN variants with the same code size achieve nearly identical RD curves, and that the code size dominates the performance differences between GANs. In this work, we studied rate distortion approximations for evaluating different generative models such as VAEs, GANs and AAEs. We showed that rate distortion curves provide more insights about the model than the log-likelihood alone while requiring roughly the same computational cost. For instance, we observed that while VAEs with larger code size can generally achieve better lossless compression rates, their performance drops at lossy compression in the low-rate regime. Conversely, expanding the capacity of GANs appears to bring substantial reductions in distortion at the high-rate regime without any corresponding deterioration in quality in the low-rate regime. This may help explain the success of large GAN architectures (Brock et al., 2019; Karras et al., 2018a;b). We also discovered that increasing the capacity of GANs by increasing the code size (width) has a very different effect than increasing the depth. The former extends the rate distortion curves leftwards, while the latter pushes the curves down. We also found that different GAN variants with the same code size have nearly identical rate distortion curves, and that the code size dominates the algorithmic differences of GANs. Overall, lossy compression yields a richer and more complete picture of the distribution modeling performance of generative models. The ability to quantitatively measure performance tradeoffs should lead to algorithmic insights which can improve these models.
We study rate distortion approximations for evaluating deep generative models, and show that rate distortion curves provide more insights about the model than the log-likelihood alone while requiring roughly the same computational cost.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:452
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Although reinforcement learning methods can achieve impressive results in simulation, the real world presents two major challenges: generating samples is exceedingly expensive, and unexpected perturbations or unseen situations cause proficient but specialized policies to fail at test time. Given that it is impractical to train separate policies to accommodate all situations the agent may see in the real world, this work proposes to learn how to quickly and effectively adapt online to new tasks. To enable sample-efficient learning, we consider learning online adaptation in the context of model-based reinforcement learning. Our approach uses meta-learning to train a dynamics model prior such that, when combined with recent data, this prior can be rapidly adapted to the local context. Our experiments demonstrate online adaptation for continuous control tasks on both simulated and real-world agents. We first show simulated agents adapting their behavior online to novel terrains, crippled body parts, and highly-dynamic environments. We also illustrate the importance of incorporating online adaptation into autonomous agents that operate in the real world by applying our method to a real dynamic legged millirobot: We demonstrate the agent's learned ability to quickly adapt online to a missing leg, adjust to novel terrains and slopes, account for miscalibration or errors in pose estimation, and compensate for pulling payloads. Both model-based and model-free reinforcement learning (RL) methods generally operate in one of two regimes: all training is performed in advance, producing a model or policy that can be used at test-time to make decisions in settings that approximately match those seen during training; or, training is performed online (e.g., as in the case of online temporal-difference learning), in which case the agent can slowly modify its behavior as it interacts with the environment. However, in both of these cases, dynamic changes such as failure of a robot's components, encountering a new terrain, environmental factors such as lighting and wind, or other unexpected perturbations, can cause the agent to fail. In contrast, humans can rapidly adapt their behavior to unseen physical perturbations and changes in their dynamics BID6: adults can learn to walk on crutches in just a few seconds, people can adapt almost instantaneously to picking up an object that is unexpectedly heavy, and children that can walk on carpet and grass can quickly figure out how to walk on ice without having to relearn how to walk. How is this possible? If an agent has encountered a large number of perturbations in the past, it can in principle use that experience to learn how to adapt. In this work, we propose a meta-learning approach for learning online adaptation. Motivated by the ability to tackle real-world applications, we specifically develop a model-based meta-reinforcement learning algorithm. In this setting, data for updating the model is readily available at every timestep in the form of recent experiences. But more crucially, the meta-training process for training such an adaptive model can be much more sample efficient than model-free meta-RL approaches BID11 BID55.
Further, our approach foregoes the episodic framework on which model-free meta-RL approaches rely, where tasks are pre-defined to be different rewards or environments, and tasks exist at the trajectory level only. Instead, our method considers each timestep to potentially be a new "task," where any detail or setting could have changed at any timestep. This view induces a more general meta-RL problem setting by allowing the notion of a task to represent anything from existing in a different part of the state space, to experiencing disturbances, or attempting to achieve a new goal. Learning to adapt a model alleviates a central challenge of model-based reinforcement learning: the problem of acquiring a global model that is accurate throughout the entire state space. Furthermore, even if it were practical to train a globally accurate dynamics model, the dynamics inherently change as a function of uncontrollable and often unobservable environmental factors, such as those mentioned above. If we have a model that can adapt online, it need not be perfect everywhere a priori. This property has previously been exploited by adaptive control methods BID2 BID45 BID38; but scaling such methods to complex tasks and nonlinear systems is exceptionally difficult. Even when working with deep neural networks, which have been used to model complex nonlinear systems BID21, it is exceptionally difficult to enable adaptation, since such models typically require large amounts of data and many gradient steps to learn effectively. By specifically training a neural network model to require only a small amount of experience to adapt, we can enable effective online adaptation in complex environments while putting less pressure on needing a perfect global model. The primary contribution of our work is an efficient meta-reinforcement learning approach that achieves online adaptation in dynamic environments. To the best of the authors' knowledge, this is the first meta-reinforcement learning algorithm to be applied in a real robotic system. Our algorithm efficiently trains a global model that is capable of using its recent experiences to quickly adapt, achieving fast online adaptation in dynamic environments. We evaluate two versions of our approach, recurrence-based adaptive learner (ReBAL) and gradient-based adaptive learner (GrBAL), on stochastic and simulated continuous control tasks with complex contact dynamics (Fig. 2). In our experiments, we show a quadrupedal "ant" adapting to the failure of different legs, as well as a "half-cheetah" robot adapting to the failure of different joints, navigating terrains with different slopes, and walking on floating platforms of varying buoyancy. Our model-based meta RL method attains substantial improvement over prior approaches, including standard model-based methods, online model-adaptive methods, model-free methods, and prior meta-reinforcement learning methods, when trained with similar amounts of data. In all experiments, meta-training across multiple tasks is sample efficient, using only the equivalent of 1.5-3 hours of real-world experience, roughly 10× less than what model-free methods require to learn a single task. Finally, we demonstrate GrBAL on a real dynamic legged millirobot (see Fig. 2).
To highlight not only the sample efficiency of our meta model-based reinforcement learning approach, but also the importance of fast online adaptation in the real world, we show the agent's learned ability to adapt online to tasks such as a missing leg, novel terrains and slopes, miscalibration or errors in pose estimation, and new payloads to be pulled. In this work, we present an approach for model-based meta-RL that enables fast, online adaptation of large and expressive models in dynamic environments. We show that meta-learning a model for online adaptation results in a method that is able to adapt to unseen situations or sudden and drastic changes in the environment, and is also sample efficient to train. We provide two instantiations of our approach (ReBAL and GrBAL), and we provide a comparison with other prior methods on a range of continuous control tasks. Finally, we show that (compared to model-free meta-RL approaches), our approach is practical for real-world applications, and that this capability to adapt quickly is particularly important under complex real-world dynamics.
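As a concrete illustration of the gradient-based variant (GrBAL) described above, the sketch below shows one simplified meta-training step: adapt the dynamics model on recent transitions with a single gradient step, then evaluate the adapted model on the following transitions. The functional model interface, data layout, and single inner step are simplifying assumptions on our part, not the authors' implementation.

```python
import torch

def adapt_and_evaluate(dyn_model, params, recent, future, inner_lr=0.01):
    """One GrBAL-style meta-training step (simplified).

    `dyn_model(s, a, params)` is assumed to be a functional forward pass that
    predicts the next state; `recent` and `future` are (states, actions,
    next_states) tuples of tensors.
    """
    s, a, s_next = recent
    inner_loss = ((dyn_model(s, a, params) - s_next) ** 2).mean()
    # Adapt the prior parameters with one gradient step on the recent data.
    grads = torch.autograd.grad(inner_loss, params, create_graph=True)
    adapted = [p - inner_lr * g for p, g in zip(params, grads)]
    # Score the adapted model on the next few timesteps; backpropagating through
    # this loss trains the prior to be quickly adaptable.
    s, a, s_next = future
    return ((dyn_model(s, a, adapted) - s_next) ** 2).mean()
```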
A model-based meta-RL algorithm that enables a real robot to adapt online in dynamic environments
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:453
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Model-free deep reinforcement learning approaches have shown superhuman performance in simulated environments (e.g., Atari games, Go, etc). During training, these approaches often implicitly construct a latent space that contains key information for decision making. In this paper, we learn a forward model on this latent space and apply it to model-based planning in a miniature Real-time Strategy game with incomplete information (MiniRTS). We first show that the latent space constructed from existing actor-critic models contains relevant information of the game, and design a training procedure to learn forward models. We also show that our learned forward model can predict meaningful future states and is usable for latent space Monte-Carlo Tree Search (MCTS), in terms of win rates against rule-based agents. Model-free deep reinforcement learning (DRL) approaches (e.g., deep Q-learning BID14, DDPG BID12, A3C BID16, etc) have been applied extensively in many simulated environments with complete information and relatively simple game dynamics (e.g., Atari games, Go, Doom, etc). The learned agent, which acts reactively based on the current game situation, can even achieve superhuman performance. However, for complicated environments, planning ahead (or "predicting the future") before making an actual decision is important. Such a planning procedure requires a forward model that estimates the next state s_{t+1} given the current state s_t and action a_t, which is in general non-trivial to construct and estimate from the high-dimensional raw input. For partially observable environments (e.g., Real-time Strategy Games like StarCraft), constructing a forward model is more difficult even with perfect domain knowledge of the game, due to the deliberate concealing of information and the additional requirement to capture the belief of the unknown for the agent. A natural question now arises. Could we borrow the success of model-free approaches to learn a forward model? Note that in model-free approaches, a single shared network (called "trunk") is often used to extract features from the input game situation to obtain a latent representation. From the latent space, multiple reinforcement learning quantities (Q-function, value function V, advantage function A, etc) are predicted via simple linear transformations and used for decision making. Strong performance of these approaches indicates that the learned latent space must have captured key ingredients of the input situation and remains low-dimensional. Therefore, it is an excellent candidate for the state representation of a forward model. In this paper, we study whether it is possible to use the latent space learned by model-free approaches to construct forward models. We use MiniRTS, an efficient and simple two-player Real-time Strategy (RTS) game. MiniRTS captures the basic dynamics of its kind: the agent builds units (workers and troops) that consume resources, gathers resources, explores regions out of sight ("fog of war"), defends against the enemy's attacks, and invades the enemy's base. This is an incomplete-information game, because the agent can only see within its sight, and does not know the actions of its opponent by default.
Rather than unit based control as in ; ; ], the agent uses 9 discrete actions to control the overall strategy (e.g., build a particular kind of troops, attack or defend). Our contributions are three-fold: First, we propose to study the relationship between the latent space learned by model-free approaches and the state representation of forward models. Very few works (e.g., DARLA BID10, DQN BID15) in model-free RL study these properties in depth, let alone using the latent state in model-based approaches for incomplete information games. To our knowledge, ours is one of the first works to explore such directions. Second, we improve the performance of the model-based agent in MiniRTS through input feature design and show that the latent space learned from actor-critic models BID16 can reconstruct critical information of the game, e.g., the Hit Points of the base and the available resources. Finally, we propose novel algorithms that learn a forward model that maps a latent state h_t to its future counterpart h_t' (t' > t) with reduced drifting. Such a forward model enables us to use model-based planning such as Monte-Carlo Tree Search (MCTS) in incomplete information games. We show positive performance (8% higher than random planning) in terms of win rates against rule-based agents. The latent space learned by model-free reinforcement learning encodes important information for an agent to make sensible decisions to maximize the reward in a complicated simulated environment. In this paper, we verify the power of the latent space of a successfully trained model-free agent, and propose several methods to learn forward models on this space, in a real-time strategy game with incomplete information. Despite the problem being extremely hard, we learn forward models that make it possible to use planning approaches such as Monte Carlo Tree Search, and show consistently positive gains over baselines. A lot of future work follows. As a first step, although we show that it is possible to learn a forward model for incomplete information Real-time Strategy games to enable model-based planning in the latent space, it remains an open problem how to improve its performance. It is possible that even when a good forward model is learned, the value function is not good enough for Monte-Carlo Tree Search, e.g., because it puts too much focus on the on-policy trajectory. Also, in this paper we use 9 predefined global actions for the game. How to automatically learn global actions from the exponentially large space of unit-based commands is still a challenging issue to solve.
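A rough sketch of the kind of forward model discussed above, learned on the frozen latent space of an actor-critic agent; the network sizes, the one-hot action encoding, and the plain MSE objective are our assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class LatentForwardModel(nn.Module):
    """Predicts the next latent state h_{t+1} from (h_t, a_t).

    The latent states are assumed to come from the frozen trunk of a pretrained
    actor-critic agent; `latent_dim` is a placeholder, and the 9 actions mirror
    the global strategy actions mentioned above.
    """
    def __init__(self, latent_dim=256, num_actions=9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + num_actions, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, h, a_onehot):
        return self.net(torch.cat([h, a_onehot], dim=-1))

def forward_model_loss(model, h_t, a_onehot, h_next):
    # Regress toward the trunk's latent state at the next timestep.
    return ((model(h_t, a_onehot) - h_next) ** 2).mean()
```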
The paper analyzes the latent space learned by model-free approaches in a miniature incomplete information game, trains a forward model in the latent space and apply it to Monte-Carlo Tree Search, yielding positive performance.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:454
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Several state of the art convolutional networks rely on inter-connecting different layers to ease the flow of information and gradient between their input and output layers. These techniques have enabled practitioners to successfully train deep convolutional networks with hundreds of layers. Particularly, a novel way of interconnecting layers was introduced as the Dense Convolutional Network (DenseNet) and has achieved state of the art performance on relevant image recognition tasks. Despite their notable empirical success, their theoretical understanding is still limited. In this work, we address this problem by analyzing the effect of layer interconnection on the overall expressive power of a convolutional network. In particular, the connections used in DenseNet are compared with other types of inter-layer connectivity. We carry out a tensor analysis of the expressive power of inter-connections on convolutional arithmetic circuits (ConvACs) and relate our results to standard convolutional networks. The analysis leads to performance bounds and practical guidelines for design of ConvACs. The generalization of these results is discussed for other kinds of convolutional networks via generalized tensor decompositions. Recently, densely connected networks such as FractalNet BID8, ResNet BID6, and DenseNet BID7 have obtained state of the art performance on large problems where very deep network configurations are used. Adding dense connections between different layers of a network virtually shortens its depth, thus allowing a better flow of information and gradient through the network. This makes the training of very deep models possible. Models with these types of connections have been successfully trained with hundreds of layers. More specifically, DenseNets have achieved state of the art performance on the CIFAR-10, CIFAR-100, SVHN, and ImageNet datasets, using models up to a thousand layers deep. Nevertheless, whether these connections provide a fundamental enhancement of the expressive power of a network, or just improve the training of the model, is still an open question. In BID7, DenseNet models with 3 times fewer parameters than their counterparts (ResNets) were able to achieve the same performance on the ImageNet challenge. Moreover, a theoretical understanding of why the connections used by DenseNets lead to better performance compared with FractalNets or ResNets is still pending. Despite the popularity of these models, there are few theoretical frameworks explaining the power of these models and providing insights into their performance. In , the authors considered convolutional networks with linear activations and product pooling layers, called convolutional arithmetic circuits (ConvACs), and argued for the expressiveness of deep networks using a tensor-based analysis. This analysis has been extended to rectifier-based convolutional networks via generalization of the tensor product. In , it was shown that ConvACs enjoy a greater expressive power than rectifier-based models despite the popularity of rectifier-based networks in practice. Indeed, the empirical relevance of ConvACs was demonstrated through an architecture called SimNets.
In addition, the generative ConvAC of BID11 achieved state of the art performance in classification of images with missing pixels. These results served as motivation for the works of ; ; BID9 ; BID10, where different aspects of ConvACs were studied from a theoretical perspective. In , the inductive bias introduced by pooling geometries was studied. Later, BID9 makes use of the quantum entanglement measure to analyze the inductive bias introduced by the correlations among the channels of ConvACs. Moreover, BID10 generalizes the convolutional layer of ConvACs by allowing overlapping receptive fields, in other words permitting stride values lower than the convolution patch size. These locally overlapping connections led to an enhancement of the expressive capacity of ConvACs. The notion of inter-layer connectivity for ConvACs was addressed by in the context of sequential data processing, such as audio- and text-related tasks. In that work, the expressive capabilities of interconnecting processing blocks from a sequence were studied. Nevertheless, these types of interconnections are related to the sequential nature of the problem and different from the ones used in ResNet, FractalNet and DenseNet. In this work, we extend the tensor analysis framework of to obtain insightful knowledge about the effect of dense connections, of the kind used in DenseNets, FractalNet and ResNet, on the expressiveness of deep ConvACs. We study the expressive capabilities provided by different types of dense connections. Moreover, from these results we derive performance bounds and practical guidelines for selection of the hyperparameters of a deep ConvAC, such as layer widths and the topology of dense connections. These results serve as a first step toward understanding dense connectivity in rectifier networks as well, since they can be further extended to include rectified linear units, in the same spirit as the generalization of the tensor products done by . The remainder of this paper is organized as follows. In Section 2, we introduce the notation and basic concepts from tensor algebra. In Section 3, we present the tensor representation of ConvACs as introduced by , and later in Section 4, we obtain tensor representations for densely connected ConvACs. In Section 5, performance bounds and design guidelines are derived for densely connected ConvACs.
We analyze the expressive power of the connections used in DenseNets via tensor decompositions.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:455
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: We consider the following central question in the field of Deep Reinforcement Learning (DRL): How can we use implicit human feedback to accelerate and optimize the training of a DRL algorithm? State-of-the-art methods require any human feedback to be provided explicitly, requiring the active participation of humans (e.g., expert labeling, demonstrations, etc.). In this work, we investigate an alternative paradigm, where non-expert humans are silently observing (and assessing) the agent interacting with the environment. The human's intrinsic reactions to the agent's behavior are sensed as implicit feedback by placing electrodes on the human scalp and monitoring what are known as event-related electric potentials. The implicit feedback is then used to augment the agent's learning in the RL tasks. We develop a system to obtain and accurately decode the implicit human feedback (specifically error-related event potentials) for state-action pairs in an Atari-type environment. As a baseline contribution, we demonstrate the feasibility of capturing error-potentials of a human observer watching an agent learning to play several different Atari-games using an electroencephalogram (EEG) cap, and then decoding the signals appropriately and using them as an auxiliary reward function to a DRL algorithm with the intent of accelerating its learning of the game. Building atop the baseline, we then make the following novel contributions in our work: (i) We argue that the definition of error-potentials is generalizable across different environments; specifically we show that error-potentials of an observer can be learned for a specific game, and the definition used as-is for another game without requiring re-learning of the error-potentials. (ii) We propose two different frameworks to combine recent advances in DRL into the error-potential based feedback system in a sample-efficient manner, allowing humans to provide implicit feedback while training in the loop, or prior to the training of the RL agent. (iii) Finally, we scale the implicit human feedback (via ErrP) based RL to reasonably complex environments (games) and demonstrate the significance of our approach through synthetic and real user experiments. Deep Reinforcement Learning (DRL) algorithms have now beaten human experts in Go (Silver et al., 2017), taught robots to become parkour masters , and enabled truly autonomous vehicles (Wang et al., 2018). However, current state-of-the-art RL agents equipped with deep neural networks are inherently complex, difficult and time-intensive to train. Particularly in complex environments with sparse reward functions (e.g., maze navigation), the DRL agents need an inordinate amount of interaction with the environment to learn the optimal policy. Human participation can potentially help DRL algorithms by accelerating their training and reducing the learning costs without compromising final performance. This potential has inspired several research efforts where an alternative (or supplementary) feedback signal is obtained from the human participant (Knox, 2012). Such approaches, despite being highly effective, severely burden the human-in-the-loop, demanding either expert demonstrations (Ross et al., 2011) or explicit feedback (Christiano et al., 2017).
In this paper, we investigate an alternative paradigm that substantially increases the richness of the reward functions, while not severely burdening the human-in-the-loop. We study the use of electroencephalogram (EEG) based brain waves of the human-in-the-loop to generate the reward functions that can be used by the DRL algorithms. Such a model will benefit from the natural rich activity of a powerful sensor (the human brain), but at the same time not burden the human if the activity being relied upon is intrinsic. This paradigm is inspired by a high-level error-processing system in humans that generates error-related potential/negativity (ErrP or ERN) (Scheffers et al., 1996). When a human recognizes an error made by an agent, the elicited ErrP can be captured through EEG to inform the agent about the sub-optimality of the action taken in the particular state. As a baseline contribution, we demonstrate the feasibility of capturing error-potentials of a human observer watching an agent learning to play several different Atari-games, and then decoding the signals appropriately and using them as an auxiliary reward function to a DRL algorithm. We show that a full-access approach that obtains feedback on every state-action pair while the RL agent is learning can significantly speed up the training convergence of the RL agent. We contend that while obtaining such implicit human feedback through EEG is less burdensome, it is still a time-intensive task for the subject and the experimenter alike. This, combined with the noisy EEG signals and stochasticity in inferring error-potentials, raises significant challenges in terms of the practicality of the solution. In this context, we first argue that the definition of ErrPs is generalizable across different environments. We show that ErrPs of an observer can be learned for a specific game, and the definition used as-is for another game without requiring re-learning of the ErrP. This is notably different from previous approaches (Chavarriaga & Millán, 2010; Salazar-Gomez et al., 2017), where the labeled ErrPs are obtained in the same environment (where the RL task is performed). For any new and unseen environment, it does not require the human to go through the training phase again, and assumes no prior knowledge about the optimal state-action pairs of the environment. We present two different frameworks to combine recent advances in DRL into the implicit human feedback mechanism (via ErrP) in a practical, sample-efficient manner. This reduces the cost of human supervision sufficiently, allowing the DRL systems to train. Relying on Active Learning (AL) methods, our first framework allows humans to provide implicit feedback in the loop, while an RL agent is being trained. An uncertainty-based acquisition function is modeled to select the sample state-action pairs for querying the implicit human feedback. However, as the first framework always requires a human in the loop, our second framework allows humans to provide their feedback implicitly before the agent starts training. Based on the human feedback obtained during pre-training, a quality (Q) function is learned over these imperfect demonstrations to provide the supplementary reward to the RL agent. We present results from real ErrP experiments to evaluate the acceleration in learning, and sample efficiency, in both frameworks. In summary, the novel contributions of our work are: 1.
We demonstrate the generalizability of error-potentials over various Atari-like environments (discrete grid-based navigation games, studied in this work), enabling the estimation of implicit human feedback in new and unseen environments. 2. We propose two different frameworks to combine recent advances in DRL into an ErrP-based feedback system in a practical, sample-efficient manner. The first framework allows humans to provide implicit feedback while training in the loop. Taking advantage of recent approaches in learning from imperfect demonstrations, in the second framework, the implicit human feedback is obtained prior to the training of the RL agent. 3. We scale the implicit human feedback (via ErrP) based RL to reasonably complex environments and demonstrate the significance of our approach through synthetic and real user experiments. Daniel et al. (2015); El Asri et al. (2016); Wang et al. (2016) studied RL from human rankings or ratings; however, these rely on explicit human feedback and assume that the feedback is noiseless. Demonstrations have been commonly used to improve the efficiency of RL (Kim et al., 2013; Chemali & Lazaric, 2015; Piot et al., 2014), and a common paradigm is to initialize RL algorithms with a good policy or Q function (Nair et al., 2018; Hester et al., 2018; Gao et al., 2018). In this work, we rely on implicit feedback from non-expert humans (via ErrPs), which is inherently noisy. (Chavarriaga & Millán, 2010; Iturrate et al., 2010; Salazar-Gomez et al., 2017) demonstrate the benefit of ErrPs in a very simple setting (i.e., very small state-space), and use ErrP-based feedback as the only reward. Moreover, in all of these works, the ErrP decoder is trained on a similar game (or robotic task), essentially using the knowledge that is supposed to be unknown in the RL task. In our work, we use labeled ErrP examples from very simple and known environments to train the ErrP decoder, and combine them with recent advances in DRL in a sample-efficient manner for reasonably complex environments. Consider a Markov Decision Process (MDP) M as a tuple ⟨X, A, P, P_0, R, γ⟩, with state space X, action space A, transition kernel P, initial state distribution P_0, reward function R, and discount factor 0 ≤ γ ≤ 1. Here the random variable Z(s, a) denotes the accumulated discounted future rewards starting from state s and action a. We first demonstrate the feasibility of capturing error-potentials of a human observer watching an agent learning to play several different Atari-games, and then decoding the signals appropriately and using them as an auxiliary reward function to a DRL algorithm. Then we argue that the definition of ErrPs is generalizable across different environments. In the ideal approach, we validate the augmentation effect of ErrP labels on RL algorithms by the full access method. Then, in the practical approach, we propose two augmentation frameworks for the RL agent, applicable to different situations. The first is to integrate the human into the training loop of the RL agent based on active learning, while the second is to learn a reward function from imperfect demonstrations labeled by ErrP. The demonstration of the generalizability of error-potentials is limited to the environments presented in the paper. We have considered reasonably complex, discrete grid-based navigation games. The validation of the generalization to a variety of Atari and Robotic environments is the subject of future work.
We also plan to test our framework of integrating implicit human feedback (via ErrPs) over robotic environments, and test the generalization capability of error-potentials between virtual and physical worlds. As future work, we plan to investigate how machines can be assisted in RL by using intrinsic EEG-based cooperation among humans and machines. The raw EEG signals are bandpass filtered in [0.5, 40] Hz. Epochs of 800ms were extracted relative to a 200ms pre-stimulus baseline, and were subjected to spatial filtering. In spatial filtering, prototype responses of each class, i.e., "correct" and "erroneous", are computed by averaging all training trials in the corresponding classes ("xDAWN Spatial Filter"; Rivet et al., 2009; Barachant & Congedo, 2014). "xDAWN filtering" projects the EEG signals from sensor space (i.e., electrode space) to the source space (i.e., a low-dimensional space constituted by the actual neuronal ensembles in the brain firing coherently). The covariance matrix of each epoch is computed, and concatenated with the prototype responses of the class. Further, dimensionality reduction is achieved by selecting relevant channels through backward elimination. The filtered signals are projected to the tangent space for feature extraction. The obtained feature vector is first normalized (using the L1 norm) and fed to a regularized regression model. A threshold value is selected for the final decision by maximizing accuracy offline on the training set. We present the algorithm to decode the ErrP signals in Algorithm 2. Algorithm 2 (Riemannian geometry based ErrP classification): Input: raw EEG signals. (1) Pre-process the raw EEG signals; (2) spatial filtering: xDAWN spatial filter (nfilter); (3) electrode selection: ElectrodeSelect(nelec, metric='riemann'); (4) tangent space projection: TangentSpace(metric="logeuclid"), then normalize using the L1 norm; (5) regression: ElasticNet; (6) select the decision threshold by maximizing accuracy.
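A software sketch of this decoding pipeline, assuming the pyriemann and scikit-learn libraries; the dummy data shapes, the ElasticNet regularization strength, and the omission of the backward-elimination electrode-selection step are our simplifications, not the authors' exact setup.

```python
import numpy as np
from pyriemann.estimation import XdawnCovariances
from pyriemann.tangentspace import TangentSpace
from sklearn.linear_model import ElasticNet
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer

# Dummy stand-ins for band-pass filtered, baseline-corrected epochs:
# (n_epochs, n_channels, n_times), with binary labels (1 = "erroneous" feedback).
rng = np.random.default_rng(0)
X_train = rng.standard_normal((40, 32, 160))
y_train = rng.integers(0, 2, size=40)

clf = make_pipeline(
    XdawnCovariances(nfilter=4),        # xDAWN prototypes + epoch covariance features
    TangentSpace(metric="logeuclid"),   # project covariances to the tangent space
    Normalizer(norm="l1"),              # L1-normalize the feature vector
    ElasticNet(alpha=1e-2),             # regularized regression score
)
clf.fit(X_train, y_train)
scores = clf.predict(X_train)           # threshold on these scores for the final decision
# (The backward-elimination electrode-selection step from Algorithm 2 is omitted here.)
```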
We use implicit human feedback (via error-potentials, EEG) to accelerate and optimize the training of a DRL algorithm, in a practical manner.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:456
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Deep learning has demonstrated abilities to learn complex structures, but it can be restricted by available data. Recently, Consensus Networks (CNs) were proposed to alleviate data sparsity by utilizing features from multiple modalities, but they too have been limited by the size of labeled data. In this paper, we extend CN to Transductive Consensus Networks (TCNs), suitable for semi-supervised learning. In TCNs, different modalities of input are compressed into latent representations, which we encourage to become indistinguishable during iterative adversarial training. To understand TCN's two mechanisms, consensus and classification, we put forward its three variants in ablation studies on these mechanisms. To further investigate TCN models, we treat the latent representations as probability distributions and measure their similarities as the negative relative Jensen-Shannon divergences. We show that a consensus state beneficial for classification desires a stable but imperfect similarity between the representations. Overall, TCNs outperform or align with the best benchmark algorithms given 20 to 200 labeled samples on the Bank Marketing and the DementiaBank datasets. Deep learning has demonstrated impressive capacities to learn complicated structures from massive data sets. However, acquiring sufficient labeled data can be expensive or difficult (e.g., for specific pathological populations BID10). Transductive learning (a set of semi-supervised algorithms) uses intrinsic structures among unlabeled data to boost classifier performance. In the real world, data can spread across multiple modalities (e.g., visual, acoustic, and text) in typical tasks, although many existing transductive algorithms do not exploit the structure across these modalities. Co-training [3] and tri-training BID23 use one classifier per modality to supervise each other, but they can only apply to two and three modalities respectively. Recently, Consensus Networks (CNs) BID24 incorporated the idea of co-training. Not limited by the number of modalities, CNs showed promising results on detecting cognitive impairments from multi-modal datasets of speech. A consensus network contains several interpreters (one per modality), a discriminator, and a classifier. The interpreters try to produce low-dimensional representations of input data that are indistinguishable by the discriminator. The classifier makes predictions based on these representation vectors. Despite promising results, CN is limited by the amount of available training data. This motivates our extension into semi-supervised learning with our Transductive Consensus Network (TCN). TCNs operate in two mechanisms: as consensus or classifier. The consensus mechanism urges the modality representations to resemble each other (trained on the whole dataset without using labels), and the classifier mechanism optimizes the networks to retain information useful for classification (trained on the labeled dataset). To illustrate the importance of these two mechanisms in an ablation study, we also put forward its three variants: TCN-embed, TCN-svm, and TCN-AE in §3.
By this ablation study, we show that both mechanisms should function together via iterative training. To further reveal the mechanisms of TCN, we formulate in §3.5 the similarity between latent representations using negative Jensen-Shannon divergences. By monitoring their similarities, we show that a meaningful consensus state prefers representations to have suboptimal similarities. In experiments (§4), we compare TCN to its three variants, TCN's multimodal supervised learning counterpart (CN), and several other semi-supervised learning benchmark algorithms on two datasets: Bank Marketing (from the UCI repository) and DementiaBank (a dataset of pathological speech in multiple modalities). On both datasets, the F-scores of TCN align with the best benchmark models when there are more labeled data available, and outperform benchmarks (including tri-training) given as few as 20 labeled points. In this paper, we present Transductive Consensus Networks (TCNs) that extend consensus networks with semi-supervised learning. We identify two mechanisms in which TCNs function, i.e., the consensus and classifier mechanisms. With three TCN variants in an ablation study, we show the importance of both mechanisms. Moreover, by treating the representations as probability distributions and defining their similarity as negative relative JS divergences, we show that although the consensus mechanism urges high similarities, a good consensus state might not need perfect similarities between modality representations. In the future, several avenues may be considered. To start with, building consensus networks using other types of neural networks may be considered. In addition, more exploration could be done to find a more explainable metric to describe the extent of agreement. Currently, we use −
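For concreteness, a small sketch of a negative Jensen-Shannon similarity between two modality representations; normalizing the latent vectors with a softmax is our assumption, and the paper's exact "relative" JS definition may differ.

```python
import torch
import torch.nn.functional as F

def neg_js_divergence(h_a, h_b, eps=1e-8):
    """Negative Jensen-Shannon divergence between two modality representations.

    Each latent vector is first turned into a probability distribution (here via
    softmax); higher (less negative) values mean the representations are more similar.
    """
    p = F.softmax(h_a, dim=-1)
    q = F.softmax(h_b, dim=-1)
    m = 0.5 * (p + q)
    kl_pm = (p * (torch.log(p + eps) - torch.log(m + eps))).sum(-1)
    kl_qm = (q * (torch.log(q + eps) - torch.log(m + eps))).sum(-1)
    return -0.5 * (kl_pm + kl_qm)
```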
TCN for multimodal semi-supervised learning + ablation study of its mechanisms + interpretations of latent representations
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:457
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Several first order stochastic optimization methods commonly used in the Euclidean domain such as stochastic gradient descent (SGD), accelerated gradient descent or variance reduced methods have already been adapted to certain Riemannian settings. However, some of the most popular of these optimization tools - namely Adam, Adagrad and the more recent Amsgrad - remain to be generalized to Riemannian manifolds. We discuss the difficulty of generalizing such adaptive schemes to the most agnostic Riemannian setting, and then provide algorithms and convergence proofs for geodesically convex objectives in the particular case of a product of Riemannian manifolds, in which adaptivity is implemented across manifolds in the Cartesian product. Our generalization is tight in the sense that choosing the Euclidean space as the Riemannian manifold yields the same algorithms and regret bounds as those that were already known for the standard algorithms. Experimentally, we show faster convergence, and convergence to a lower train loss value, for Riemannian adaptive methods over their corresponding baselines on the realistic task of embedding the WordNet taxonomy in the Poincaré ball. Developing powerful stochastic gradient-based optimization algorithms is of major importance for a variety of application domains. In particular, for computational efficiency, it is common to opt for a first order method when the number of parameters to be optimized is large enough. Such cases have recently become ubiquitous in engineering and computational sciences, from the optimization of deep neural networks to learning embeddings over large vocabularies. This new need resulted in the development of empirically very successful first order methods such as ADAGRAD BID5, ADADELTA BID29, ADAM BID9 or its recent update AMSGRAD BID18. Note that these algorithms are designed to optimize parameters living in a Euclidean space R^n, which has often been considered as the default geometry to be used for continuous variables. However, a recent line of work has been concerned with the optimization of parameters lying on a Riemannian manifold, a more general setting allowing non-Euclidean geometries. This family of algorithms has already found numerous applications, including for instance solving Lyapunov equations BID27, matrix factorization BID23, geometric programming BID22, dictionary learning BID2 or hyperbolic taxonomy embedding BID15 BID6 BID4 BID14. A few first order stochastic methods have already been generalized to this setting (see section 6), the seminal one being Riemannian stochastic gradient descent (RSGD) BID1, along with new methods for their convergence analysis in the geodesically convex case. However, the above mentioned empirically successful adaptive methods, together with their convergence analysis, remain to find their respective Riemannian counterparts. Indeed, the adaptivity of these algorithms can be thought of as assigning one learning rate per coordinate of the parameter vector. However, on a Riemannian manifold, one is generally not given an intrinsic coordinate system, rendering meaningless the notions of sparsity or coordinate-wise updates. Our contributions.
In this work we (i) explain why generalizing these adaptive schemes to the most agnostic Riemannian setting in an intrinsic manner is compromised, and (ii) propose generalizations of the algorithms together with their convergence analysis in the particular case of a product of manifolds where each manifold represents one "coordinate" of the adaptive scheme. Finally, we (iii) empirically support our claims on the realistic task of hyperbolic taxonomy embedding. Our initial motivation. The particular application that motivated us in developing Riemannian versions of ADAGRAD and ADAM was the learning of symbolic embeddings in non-Euclidean spaces. As an example, the GloVe algorithm (BID17), an unsupervised method for learning Euclidean word embeddings capturing semantic/syntactic relationships, benefits significantly from optimizing with ADAGRAD compared to using SGD, presumably because different words are sampled at different frequencies. Hence the absence of Riemannian adaptive algorithms could constitute a significant obstacle to the development of competitive optimization-based Riemannian embedding methods. In particular, we believe that the recent rise of embedding methods in hyperbolic spaces could benefit from such developments BID15 BID6 BID4 BID28. Driven by recent work in learning non-Euclidean embeddings for symbolic data, we propose to generalize popular adaptive optimization tools (e.g. ADAM, AMSGRAD, ADAGRAD) to Cartesian products of Riemannian manifolds in a principled and intrinsic manner. We derive convergence rates that are similar to those of the corresponding Euclidean models. Experimentally we show that our methods outperform popular non-adaptive methods such as RSGD on the realistic task of hyperbolic word taxonomy embedding. [Appendix proof fragment: a regret-bound derivation using geodesic convexity of the losses f_t, the Cauchy-Schwarz and Young inequalities, and a telescoping summation; its displayed equations were lost in extraction and are not recoverable here.]
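To complement the description above, the sketch below illustrates one simplified Riemannian Adam-style step on a product of manifolds, with adaptivity kept per manifold factor as described in the text. The `Manifold` interface (egrad2rgrad, inner, expmap, transport) is hypothetical, and bias correction and the AMSGrad max-trick are omitted, so this is not the paper's exact update.

```python
def riemannian_adam_step(params, grads, state, manifolds, lr=1e-3,
                         beta1=0.9, beta2=0.999, eps=1e-8):
    """Simplified Riemannian Adam on a product of manifolds (one factor per index).

    Hypothetical Manifold interface: egrad2rgrad, inner, expmap, transport.
    """
    for i, (x, g, man) in enumerate(zip(params, grads, manifolds)):
        rg = man.egrad2rgrad(x, g)                            # Riemannian gradient at x
        m, v = state.get(i, (0.0 * rg, 0.0))
        m = beta1 * m + (1.0 - beta1) * rg                    # first moment: a tangent vector at x
        v = beta2 * v + (1.0 - beta2) * man.inner(x, rg, rg)  # second moment: one scalar per factor
        new_x = man.expmap(x, -lr * m / (v ** 0.5 + eps))     # move along the manifold
        m = man.transport(x, new_x, m)                        # carry the momentum to the new point
        params[i] = new_x
        state[i] = (m, v)
    return params, state
```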
Adapting Adam, Amsgrad, Adagrad to Riemannian manifolds.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:458
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: We study the problem of defending deep neural network approaches for image classification from physically realizable attacks. First, we demonstrate that the two most scalable and effective methods for learning robust models, adversarial training with PGD attacks and randomized smoothing, exhibit very limited effectiveness against three of the highest profile physical attacks. Next, we propose a new abstract adversarial model, rectangular occlusion attacks, in which an adversary places a small adversarially crafted rectangle in an image, and develop two approaches for efficiently computing the resulting adversarial examples. Finally, we demonstrate that adversarial training using our new attack yields image classification models that exhibit high robustness against the physically realizable attacks we study, offering the first effective generic defense against such attacks. State-of-the-art effectiveness of deep neural networks has made them the technique of choice in a variety of fields, including computer vision (He et al., 2016), natural language processing (Sutskever et al., 2014), and speech recognition (Hinton et al., 2012). However, there have been a myriad of demonstrations showing that deep neural networks can be easily fooled by carefully perturbing pixels in an image through what have become known as adversarial example attacks (Szegedy et al., 2014; Goodfellow et al., 2015; Carlini & Wagner, 2017b; Vorobeychik & Kantarcioglu, 2018). In response, a large literature has emerged on defending deep neural networks against adversarial examples, typically either by proposing techniques for learning more robust neural network models (Wong & Kolter, 2018; Wong et al., 2018; Raghunathan et al., 2018b; Cohen et al., 2019; Madry et al., 2018), or by detecting adversarial inputs (Metzen et al., 2017; Xu et al., 2018). Particularly concerning, however, have been a number of demonstrations that implement adversarial perturbations directly in physical objects that are subsequently captured by a camera, and then fed through the deep neural network classifier (Boloor et al., 2019; Eykholt et al., 2018; Athalye et al., 2018b; Brown et al., 2018). Among the most significant of such physical attacks on deep neural networks are three that we specifically consider here: 1) the attack which fools face recognition by using adversarially designed eyeglass frames (Sharif et al., 2016), 2) the attack which fools stop sign classification by adding adversarially crafted stickers (Eykholt et al., 2018), and 3) the universal adversarial patch attack, which causes targeted misclassification of any object with the adversarially designed sticker (patch) (Brown et al., 2018). Oddly, while considerable attention has been devoted to defending against adversarial perturbation attacks in the digital space, there are no effective methods specifically to defend against such physical attacks. Our first contribution is an empirical evaluation of the effectiveness of conventional approaches to robust ML against two physically realizable attacks: the eyeglass frame attack on face recognition (Sharif et al., 2016) and the sticker attack on stop signs (Eykholt et al., 2018).
Specifically, we study the performance of adversarial training and randomized smoothing against these attacks, and show that both have limited effectiveness in this context (quite ineffective in some settings, and somewhat more effective, but still not highly robust, in others), despite showing moderate effectiveness against l_∞ and l_2 attacks, respectively. Our second contribution is a novel abstract attack model which more directly captures the nature of common physically realizable attacks than the conventional l_p-based models. Specifically, we consider a simple class of rectangular occlusion attacks in which the attacker places a rectangular sticker onto an image, with both the location and the content of the sticker adversarially chosen. We develop several algorithms for computing such adversarial occlusions, and use adversarial training to obtain neural network models that are robust to these (a code sketch of this attack appears below). We then experimentally demonstrate that our proposed approach is significantly more robust against physical attacks on deep neural networks than adversarial training and randomized smoothing methods that leverage l_p-based attack models. Related Work. While many approaches for defending deep learning in vision applications have been proposed, robust learning methods have been particularly promising, since alternatives are often defeated soon after being proposed (Madry et al., 2018; Raghunathan et al., 2018a; Wong & Kolter, 2018; Vorobeychik & Kantarcioglu, 2018). The standard solution approach for this problem is an adaptation of Stochastic Gradient Descent (SGD) where gradients are either with respect to the loss at the optimal adversarial perturbation for each i (or approximation thereof, such as using heuristic local search (Goodfellow et al., 2015; Madry et al., 2018) or a convex over-approximation (Raghunathan et al., 2018b; Wang et al., 2018)), or with respect to the dual of the convex relaxation of the attacker maximization problem (Raghunathan et al., 2018a; Wong & Kolter, 2018; Wong et al., 2018). Despite these advances, adversarial training à la Madry et al. (2018) remains the most practically effective method for hardening neural networks against adversarial examples with l_∞-norm perturbation constraints. Recently, randomized smoothing emerged as another class of techniques for obtaining robustness (Lecuyer et al., 2019; Cohen et al., 2019), with the strongest results in the context of l_2-norm attacks. In addition to training neural networks that are robust by construction, a number of methods study the problem of detecting adversarial examples (Metzen et al., 2017; Xu et al., 2018), with mixed results (Carlini & Wagner, 2017a). Of particular interest is recent work on detecting physical adversarial examples (Chou et al., 2018). However, detection is inherently weaker than robustness, which is our goal, as even perfect detection does not resolve the question of how to make decisions on adversarial examples. Finally, our work is in the spirit of other recent efforts that characterize robustness of neural networks to physically realistic perturbations, such as translations, rotations, blurring, and contrast (Engstrom et al., 2019; Hendrycks & Dietterich, 2019). There are two possible reasons why conventional robust ML methods perform poorly against physical attacks: 1) adversarial models involving l_p-bounded perturbations are too hard to enable effective robust learning, and 2) the conventional attack model is too much of a mismatch for realistic physical attacks.
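A simplified sketch of the rectangular occlusion attack mentioned above: exhaustively search locations for a gray rectangle, then run a few signed-gradient steps on the rectangle's content at the best location. The rectangle size, stride, gray initialization, and two-stage heuristic are our reading of the text, not necessarily the authors' exact algorithm.

```python
import torch

def rectangular_occlusion_attack(model, x, y, h=30, w=30, stride=10, steps=20, lr=0.1):
    """Place an adversarial h-by-w rectangle on image x of shape [1, C, H, W]."""
    loss_fn = torch.nn.CrossEntropyLoss()
    # Stage 1: pick the location where a gray rectangle hurts the classifier most.
    best_loss, best_ij = -float("inf"), (0, 0)
    for i in range(0, x.shape[2] - h + 1, stride):
        for j in range(0, x.shape[3] - w + 1, stride):
            x_occ = x.clone()
            x_occ[:, :, i:i + h, j:j + w] = 0.5
            loss = loss_fn(model(x_occ), y).item()
            if loss > best_loss:
                best_loss, best_ij = loss, (i, j)
    # Stage 2: optimize the rectangle's content at the chosen location (PGD-like).
    i, j = best_ij
    patch = torch.full((1, x.shape[1], h, w), 0.5, requires_grad=True)
    for _ in range(steps):
        x_adv = x.clone()
        x_adv[:, :, i:i + h, j:j + w] = patch
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, patch)
        patch = (patch + lr * grad.sign()).clamp(0, 1).detach().requires_grad_(True)
    x_adv = x.clone()
    x_adv[:, :, i:i + h, j:j + w] = patch.detach()
    return x_adv
```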
In Appendix B, we present evidence supporting the latter. Specifically, we find that conventional robust ML models exhibit much higher robustness when faced with the l_p-bounded attacks they are trained to be robust to. As we have shown, conventional methods for making deep learning approaches for image classification robust to physically realizable attacks tend to be relatively ineffective. In contrast, a new threat model we proposed, rectangular occlusion attacks (ROA), coupled with adversarial training, achieves high robustness against several prominent examples of physical attacks. While we explored a number of variations of ROA attacks as a means to achieve robustness against physical attacks, numerous questions remain. For example, can we develop effective methods to certify robustness against ROA, and are the resulting approaches as effective in practice as our method based on a combination of heuristically computed attacks and adversarial training? Are there other types of occlusions that are more effective? Answers to these and related questions may prove a promising path towards practical robustness of deep learning when deployed for downstream applications of computer vision such as autonomous driving and face recognition. The VGGFace dataset (Parkhi et al., 2015) is a benchmark for face recognition, containing 2622 subjects with 2.6 million images in total. We chose ten subjects: A. J. Buckley, A. R. Rahman, Aamir Khan, Aaron Staton, Aaron Tveit, Aaron Yoo, Abbie Cornish, Abel Ferrara, Abigail Breslin, and Abigail Spencer, and subselected face images pertaining only to these individuals. Since approximately half of the images cannot be downloaded, our final dataset contains 300-500 images for each subject. We used the standard crop-and-resize method to process the data to be 224 × 224 pixels, and split the dataset into training, validation, and test sets according to a 7:2:1 ratio for each subject. In total, the data set has 3178 images in the training set, 922 images in the validation set, and 470 images in the test set. We use the VGGFace convolutional neural network (Parkhi et al., 2015) model, a variant of the VGG16 model containing 5 convolutional layer blocks and 3 fully connected layers. We make use of standard transfer learning as we only classify 10 subjects, keeping the convolutional layers the same as in the VGGFace structure, but changing the fully connected layers to be 1024 → 1024 → 10 instead of 4096 → 4096 → 2622. Specifically, in our Pytorch implementation, we convert the images from RGB to BGR channel order and subtract the mean value [129.1863, 104.7624, 93.5940] in order to use the pretrained weights from VGG-Face on the convolutional layers. We set the batch size to be 64 and use the Pytorch built-in Adam optimizer with an initial learning rate of 10^-4 and default parameters in Pytorch. We drop the learning rate by 0.1 every 10 epochs. Additionally, we used validation set accuracy to keep track of model performance and to choose a model in case of overfitting. After 30 epochs of training, the model obtains 98.94% accuracy on test data.
Defending Against Physically Realizable Attacks on Image Classification
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:459
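To make the transfer-learning recipe in the appendix details above concrete, here is a minimal PyTorch sketch of that setup. The BGR mean, the 1024 → 1024 → 10 head, the Adam learning rate, and the step decay are taken from the text; the module and function names (FaceClassifier, preprocess, train) and the feat_dim argument are illustrative placeholders rather than the authors' implementation.

```python
# Illustrative sketch of the VGGFace transfer-learning setup described above.
import torch
import torch.nn as nn

BGR_MEAN = torch.tensor([129.1863, 104.7624, 93.5940]).view(1, 3, 1, 1)

def preprocess(rgb_batch):
    """Convert NCHW RGB images in [0, 255] to mean-subtracted BGR."""
    bgr = rgb_batch[:, [2, 1, 0], :, :]          # RGB -> BGR channel order
    return bgr - BGR_MEAN.to(rgb_batch.device)   # subtract per-channel mean

class FaceClassifier(nn.Module):
    def __init__(self, conv_backbone, feat_dim, num_classes=10):
        super().__init__()
        self.features = conv_backbone                # pretrained VGGFace conv blocks
        self.classifier = nn.Sequential(             # replaces the 4096 -> 4096 -> 2622 head
            nn.Linear(feat_dim, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, num_classes),
        )

    def forward(self, x):
        h = self.features(preprocess(x))
        return self.classifier(torch.flatten(h, 1))

def train(model, loader, epochs=30):
    """Adam at 1e-4, learning rate dropped by 0.1 every 10 epochs, as in the text."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
        sched.step()
```

In the paper's pipeline this classifier is subsequently hardened with adversarial training on rectangular occlusion attacks; that step is not sketched here.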
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: GloVe and Skip-gram word embedding methods learn word vectors by decomposing a denoised matrix of word co-occurrences into a product of low-rank matrices. In this work, we propose an iterative algorithm for computing word vectors based on modeling word co-occurrence matrices with Generalized Low Rank Models. Our algorithm generalizes both Skip-gram and GloVe as well as giving rise to other embedding methods based on the specified co-occurrence matrix, distribution of co-occurrences, and the number of iterations in the iterative algorithm. For example, using a Tweedie distribution with one iteration results in GloVe and using a Multinomial distribution with full-convergence mode results in Skip-gram. Experimental results demonstrate that multiple iterations of our algorithm improve results over the GloVe method on the Google word analogy similarity task. Word embeddings are low dimensional vector representations of words or phrases. They are applied to word analogy tasks and used as feature vectors in numerous tasks within natural language processing, computational linguistics, and machine learning. They are constructed by various methods which rely on the distributional hypothesis popularized by Firth: "words are characterized by the company they keep" BID9. Two seminal methodological approaches to finding word embeddings are Skip-gram [Mikolov et al., 2013a] and GloVe [Pennington et al., 2014]. Both methods input a corpus D, process it into a word co-occurrence matrix X, then output word vectors with some dimension d. Skip-gram processes a corpus with w words into a count co-occurrence matrix X ∈ R^{w×w}, where x_ij is the number of times word w_i appears in the same context as the word w_j. Here, two words being in the same context means that they're within l_c tokens of each other. Define this co-occurrence matrix to be the count co-occurrence matrix. Next, Skip-gram [Pennington et al., 2014] finds a low-rank factorization related to X by solving an objective, problem (1), over matrices U and V, where u_i^T is the i-th row of U, and then defines the word vectors to be the rows of the estimate Û. GloVe processes a corpus with w words into a harmonic co-occurrence matrix X ∈ R^{w×w}, where x_ij is the harmonic sum of the number of tokens between words w_i and w_j over each co-occurrence. That is, x_ij = Σ_{p1 < p2, |p1 − p2| ≤ l_c, D(p1) = w_i, D(p2) = w_j} 1/|p1 − p2|. GloVe minimizes the weighted least-squares objective, problem (2), Σ_ij h(x_ij) (u_i^T v_j + a_i + b_j − log x_ij)^2, where a_i and b_j are bias terms, h(x_ij) = (min{x_ij, x_max})^0.75 is the weight, and x_max is some prespecified cutoff. GloVe then defines the estimated word vectors to be the rows of (1/2)Û + (1/2)V̂. In both Skip-gram and GloVe, a matrix of co-occurrences X is introduced by processing the corpus, and an objective function is introduced to find a low rank factorization related to the co-occurrences X. In this paper, we derive the objective functions from a model-based perspective. We introduce an iterative algorithm, and show that problem (1) results from running the iterative algorithm on full-convergence mode for a Multinomial model and problem (2) is one step of the iterative algorithm for a Tweedie model. This algorithm additionally allows us to introduce methods to "fill in the gaps" between Skip-gram and GloVe and to introduce altogether new methods for finding word vectors. We present a general model-based methodology for finding word vectors from a corpus.
This methodology involves choosing the distribution of a chosen co-occurrence matrix to be an exponential dispersion family and choosing the number of iterations to run our algorithm. In Table 1, we see that our methodology unifies the dominant word embedding methods available in the literature and provides new and improved methods. We introduce an extension of Skip-gram that is stopped before full convergence, analogously to GloVe, and an extension of GloVe beyond one iteration. Experimental results on a small corpus demonstrate that our method improves upon GloVe and Skip-gram on the Google word analogy similarity task. It is our hope that this methodology can lead to the development of better, more statistically sound word embeddings and consequently improve results on many other downstream tasks.
We present a novel iterative algorithm based on generalized low rank models for computing and interpreting word embedding models.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:46
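The count and harmonic co-occurrence matrices described in the example above can be built directly from a token sequence. The sketch below (plain numpy, illustrative names) follows the definitions given in the text, with the 1/distance term implementing the harmonic sum and glove_weight implementing h(x) as stated there.

```python
import numpy as np

def cooccurrence_matrices(tokens, vocab, window=5):
    """tokens: list of word ids indexed by vocab (dict word -> index); window = l_c."""
    w = len(vocab)
    counts = np.zeros((w, w))      # Skip-gram count co-occurrence matrix
    harmonic = np.zeros((w, w))    # GloVe harmonic co-occurrence matrix
    for p1 in range(len(tokens)):
        for p2 in range(p1 + 1, min(p1 + window + 1, len(tokens))):
            i, j = tokens[p1], tokens[p2]
            counts[i, j] += 1.0
            counts[j, i] += 1.0
            harmonic[i, j] += 1.0 / (p2 - p1)   # harmonic weighting by token distance
            harmonic[j, i] += 1.0 / (p2 - p1)
    return counts, harmonic

def glove_weight(x, x_max=100.0, alpha=0.75):
    """GloVe weighting h(x) = (min(x, x_max)) ** alpha, as given in the text."""
    return np.minimum(x, x_max) ** alpha
```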
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Continual learning is the problem of sequentially learning new tasks or knowledge while protecting previously acquired knowledge. However, catastrophic forgetting poses a grand challenge for neural networks performing such learning process. Thus, neural networks that are deployed in the real world often struggle in scenarios where the data distribution is non-stationary (concept drift), imbalanced, or not always fully available, i.e., rare edge cases. We propose a Differentiable Hebbian Consolidation model which is composed of a Differentiable Hebbian Plasticity (DHP) Softmax layer that adds a rapid learning plastic component (compressed episodic memory) to the fixed (slow changing) parameters of the softmax output layer; enabling learned representations to be retained for a longer timescale. We demonstrate the flexibility of our method by integrating well-known task-specific synaptic consolidation methods to penalize changes in the slow weights that are important for each target task. We evaluate our approach on the Permuted MNIST, Split MNIST and Vision Datasets Mixture benchmarks, and introduce an imbalanced variant of Permuted MNIST --- a dataset that combines the challenges of class imbalance and concept drift. Our proposed model requires no additional hyperparameters and outperforms comparable baselines by reducing forgetting. A key aspect of human intelligence is the ability to continually adapt and learn in dynamic environments, a characteristic which is challenging to embed into artificial intelligence. Recent advances in machine learning (ML) have shown tremendous improvements in various problems, by learning to solve one complex task very well, through extensive training on large datasets with millions of training examples or more. However, most of the ML models that are used during deployment in the real-world are exposed to non-stationarity where the distributions of acquired data changes over time. Therefore, after learning is complete, and these models are further trained with new data, responding to distributional changes, performance degrades with respect to the original data. This phenomenon known as catastrophic forgetting or catastrophic interference (McCloskey & Cohen, 1989; French, 1999 ) presents a crucial problem for deep neural networks (DNNs) that are tasked with continual learning (Ring, 1994) , also called lifelong learning (Thrun & Mitchell, 1995; Thrun, 1998) . In continual learning, the goal is to adapt and learn consecutive tasks without forgetting how to perform well on previously learned tasks, enabling models that are scalable and efficient over long timescales. In most supervised learning methods, DNN architectures require independent and identically distributed (iid) samples from a stationary training distribution. However, for ML systems in realworld applications that require continual learning, the iid assumption is easily violated when: (1) There is concept drift in the training data distribution. (2) There are imbalanced class distributions and concept drift occuring simultaneously. (3) Data representing all scenarios in which the learner is expected to perform are not initially available. 
In such situations, learning systems face the "stability-plasticity dilemma" which is a well-known problem for artificial and biological neural networks (Carpenter & Grossberg, 1987; Abraham & Robins, 2005) . This presents a continual learning challenge for an ML system where the model needs to provide a balance between its plasticity (to integrate new knowledge) and stability (to preserve existing knowledge). In biological neural networks, synaptic plasticity has been argued to play an important role in learning and memory (Howland & Wang, 2008; Takeuchi et al., 2013; Bailey et al., 2015) and two major theories have been proposed to explain a human's ability to perform continual learning. The first theory is inspired by synaptic consolidation in the mammalian neocortex (Benna & Fusi, 2016) where a subset of synapses are rendered less plastic and therefore preserved for a longer timescale. The general idea for this approach is to consolidate and preserve synaptic parameters that are considered important for the previously learned tasks. This is normally achieved through task-specific updates of synaptic weights in a neural network. The second is the complementary learning system (CLS) theory (McClelland et al., 1995; Kumaran et al., 2016) , which suggests that humans extract highlevel structural information and store it in different brain areas while retaining episodic memories. Recent work on differentiable plasticity has shown that neural networks with "fast weights" that leverage Hebbian learning rules (Hebb, 1949) can be trained end-to-end through backpropagation and stochastic gradient descent (SGD) to optimize the standard "slow weights", as well as also the amount of plasticity in each synaptic connection (Miconi, 2016; Miconi et al., 2018) . These works use slow weights to refer to the weights normally used to train vanilla neural networks, which are updated slowly and are often associated with long-term memory. The fast weights represent the weights that are superimposed on the slow weights and change quickly from one time step to the next based on input representations. These fast weights behave as a form of short-term memory that enable "reactivation" of long-term memory traces in the slow weights. Miconi et al. (2018) showed that simple plastic networks with learned plasticity outperform networks with uniform plasticity on various problems. Moreover, there have been several approaches proposed recently for overcoming the catastrophic forgetting problem in fixed-capacity models by dynamically adjusting the plasticity of each synapse based on its importance for retaining past memories (Parisi et al., 2019) . Here, we extend the work on differentiable plasticity to the task-incremental continual learning setting (van de Ven & Tolias, 2019) , where tasks arrive in a batch-like fashion, and have clear boundaries. We develop a Differentiable Hebbian Consolidation 1 model that is capable of adapting quickly to changing environments as well as consolidating previous knowledge by selectively adjusting the plasticity of synapses. We modify the traditional softmax layer and propose to augment the slow weights in the final fully-connected (FC) layer (softmax output layer) with a set of plastic weights implemented using Differentiable Hebbian Plasticity (DHP). 
Furthermore, we demonstrate the flexibility of our model by combining it with recent task-specific synaptic consolidation based approaches to overcoming catastrophic forgetting such as elastic weight consolidation (Kirkpatrick et al., 2017; Schwarz et al., 2018) , synaptic intelligence (Zenke et al., 2017b) and memory aware synapses (Aljundi et al., 2018) . Our model unifies core concepts from Hebbian plasticity, synaptic consolidation and CLS theory to enable rapid adaptation to new unseen data, while consolidating synapses and leveraging compressed episodic memories in the softmax layer to remember previous knowledge and mitigate catastrophic forgetting. We test our proposed method on established benchmark problems including the Permuted MNIST (Goodfellow et al., 2013) , Split MNIST (Zenke et al., 2017b) and Vision Datasets Mixture (Ritter et al., 2018) benchmarks. We also introduce the Imbalanced Permuted MNIST problem and show that plastic networks with task-specific synaptic consolidation methods outperform networks with uniform plasticity.
Hebbian plastic weights can behave as a compressed episodic memory storage in neural networks and with the combination of task-specific synaptic consolidation can improve the ability to alleviate catastrophic forgetting in continual learning.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:460
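The example above augments the slow weights of the softmax output layer with fast Hebbian plastic weights. The sketch below shows one generic differentiable-plasticity-style formulation of that idea (logits computed from W + alpha * Hebb, with a decaying Hebbian trace); it is an assumption-laden illustration in the spirit of the cited work, not the paper's exact DHP Softmax update or its synaptic-consolidation penalty.

```python
import torch
import torch.nn as nn

class PlasticSoftmaxLayer(nn.Module):
    """Softmax output layer with slow weights plus a fast Hebbian component.

    Illustrative sketch only: logits = h (W + alpha * Hebb)^T, with the Hebbian
    trace updated from pre/post activity and decayed by eta.
    """
    def __init__(self, in_dim, num_classes, eta=0.1):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(num_classes, in_dim))      # slow weights (SGD)
        self.alpha = nn.Parameter(0.01 * torch.randn(num_classes, in_dim))  # learned plasticity coefficients
        self.eta = eta
        self.register_buffer("hebb", torch.zeros(num_classes, in_dim))      # fast weights / episodic trace

    def forward(self, h):
        logits = h @ (self.W + self.alpha * self.hebb).t()
        post = torch.softmax(logits, dim=1)
        # Hebbian trace: decay plus batch-averaged outer product of post- and pre-synaptic activity.
        with torch.no_grad():
            self.hebb = (1 - self.eta) * self.hebb + self.eta * (post.t() @ h) / h.shape[0]
        return logits
```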
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: In order to choose a neural network architecture that will be effective for a particular modeling problem, one must understand the limitations imposed by each of the potential options. These limitations are typically described in terms of information theoretic bounds, or by comparing the relative complexity needed to approximate example functions between different architectures. In this paper, we examine the topological constraints that the architecture of a neural network imposes on the level sets of all the functions that it is able to approximate. This approach is novel for both the nature of the limitations and the fact that they are independent of network depth for a broad family of activation functions. Neural networks have become the model of choice in a variety of machine learning applications, due to their flexibility and generality. However, selecting network architectures and other hyperparameters is typically a matter of trial and error. To make the choice of neural network architecture more straightforward, we need to understand the limits of each architecture, both in terms of what kinds of functions any given network architecture can approximate and how those limitations impact its ability to learn functions within those limits.A number of papers (3; 6; 11; 13) have shown that neural networks with a single hidden layer are a universal approximator, i.e. that they can approximate any continuous function on a compact domain to arbitrary accuracy if the hidden layer is allowed to have an arbitrarily high dimension. In practice, however, the neural networks that have proved most effective tend to have a large number of relatively low-dimensional hidden layers. This raises the question of whether neural networks with an arbitrary number of hidden layers of bounded dimension are also a universal approximator.In this paper we demonstrate a fairly general limitation on functions that can be approximated with the L ∞ norm on compact subsets of a Euclidean input space by layered, fully-connected feedforward neural networks of arbitrary depth and activation functions from a broad family including sigmoids and ReLus, but with layer widths bounded by the dimension of the input space. By a layered network, we mean that hidden nodes are grouped into successive layers and each node is only connected to nodes in the previous layer and the next layer. The constraints on the functions are defined in terms of topological properties of the level sets in the input space.This analysis is not meant to suggest that deep networks are worse than shallow networks, but rather to better understand how and why they will perform differently on different data sets. In fact, these limitations may be part of the reason deep nets have proven more effective on datasets whose structures are compatible with these limitations.By a level set, we mean the set of all points in the input space that the model maps to a given value in the output space. For classification models, a level set is just a decision boundary for a particular cutoff. 
For regression problems, level sets don't have a common interpretation.The main result of the paper, Theorem 1, states that the deep, skinny neural network architectures described above cannot approximate any function with a level set that is bounded in the input space. This can be rephrased as saying that for every function that can be approximated, every level set must be unbounded, extending off to infinity.While a number of recent papers have made impressive progress in understanding the limitations of different neural network architectures, this result is notable because it is independent of the number of layers in the network, and because the limitations are defined in terms of a very simple topological property. Topological tools have recently been employed to study the properties of data sets within the field known as Topological Data Analysis (9), but this paper exploits topological ideas to examine the topology of the models themselves. By demonstrating topological constraints on a widely used family of models, we suggest that there is further potential to apply topological ideas to understand the strengths and weaknesses of algorithms and methodologies across machine learning.After discussing the context and related work in Section 2, we introduce the basic definitions and notation in Section 3, then state the main Theorem and outline the proof in Section 4. The detailed proof is presented in Sections 5 and 6. We present experimental results that demonstrate the constraints in Section 7, then in Section 8 we present conclusions from this work. In this paper, we describe topological limitations on the types of functions that can be approximated by deep, skinny neural networks, independent of the number of hidden layers. We prove the result using standard set theoretic topology, then present examples that visually demonstrate the result.
This paper proves that skinny neural networks cannot approximate certain functions, no matter how deep they are.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:461
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Program verification offers a framework for ensuring program correctness and therefore systematically eliminating different classes of bugs. Inferring loop invariants is one of the main challenges behind automated verification of real-world programs which often contain many loops. In this paper, we present Continuous Logic Network (CLN), a novel neural architecture for automatically learning loop invariants directly from program execution traces. Unlike existing neural networks, CLNs can learn precise and explicit representations of formulas in Satisfiability Modulo Theories (SMT) for loop invariants from program execution traces. We develop a new sound and complete semantic mapping for assigning SMT formulas to continuous truth values that allows CLNs to be trained efficiently. We use CLNs to implement a new inference system for loop invariants, CLN2INV, that significantly outperforms existing approaches on the popular Code2Inv dataset. CLN2INV is the first tool to solve all 124 theoretically solvable problems in the Code2Inv dataset. Moreover, CLN2INV takes only 1.1 second on average for each problem, which is 40 times faster than existing approaches. We further demonstrate that CLN2INV can even learn 12 significantly more complex loop invariants than the ones required for the Code2Inv dataset. Program verification offers a principled approach for systematically eliminating different classes of bugs and proving the correctness of programs. However, as programs have become increasingly complex, real-world program verification often requires prohibitively expensive manual effort (Wilcox et al., 2015; Gu et al., 2016; Chajed et al., 2019) . Recent efforts have focused on automating the program verification process, but automated verification of general programs with unbounded loops remains an open problem (Nelson et al., 2017; . Verifying programs with loops requires determining loop invariants, which captures the effect of the loop on the program state irrespective of the actual number of loop iterations. Automatically inferring correct loop invariants is a challenging problem that is undecidable in general and difficult to solve in practice (Blass & Gurevich, 2001; Furia et al., 2014) . Existing approaches use stochastic search (Sharma & Aiken, 2016) , heurstics-based search (Galeotti et al., 2015) , PAC learning based on counter examples (Padhi & Millstein, 2017) , or reinforcement learning (Si et al., 2018) . However, these approaches often struggle to learn complex, real-world loop invariants. In this paper, we introduce a new approach to learning loop invariants by modeling the loop behavior from program execution traces using a new type of neural architecture. We note that inferring loop invariants can be posed as learning formulas in Satisfiability Modulo Theories (SMT) (Biere et al., 2009 ) over program variables collected from program execution traces (Nguyen et al., 2017) . In principle, Neural networks seem well suited to this task because they can act as universal function approximators and have been successfully applied in various domains that require modeling of arbitrary functions (Hornik et al., 1989; Goodfellow et al., 2016) . However, loop invariants must be represented as explicit SMT formulas to be usable for program verification. 
Unfortunately, existing methods for extracting logical rules from general neural architectures lack sufficient precision (Augasta & Kathirvalavakumar, 2012) , while inductive logic learning lacks sufficient expressiveness for use in verification (Evans & Grefenstette, 2018) . We address this issue by developing a novel neural architecture, Continuous Logic Network (CLN), which is able to efficiently learn explicit and precise representations of SMT formulas by using continuous truth values. Unlike existing neural architectures, CLNs can represent a learned SMT formula explicitly in its structure and thus allow us to precisely extract the exact formula from a trained model. In order to train CLNs, we introduce a new semantic mapping for SMT formulas to continuous truth values. Our semantic mapping builds on BL, or basic fuzzy logic (Hájek, 2013) , to support general SMT formulas in a continuous logic setting. We further prove that our semantic model is sound (i.e., truth assignments for the formulas are consistent with their discrete counterparts) and complete (i.e., all formulas can be represented) with regard to the discrete SMT formula space. These properties allow CLNs to represent any quantifier-free SMT formula operating on mixed integer-real arithmetic as an end-to-end differentiable series of operations. We use CLNs to implement a new inference system for loop invariants, CLN2INV, that significantly outperforms state-of-the-art tools on the Code2Inv dataset by solving all 124 theoretically solvable problems in the dataset. This is 20 problems more than LoopInvGen, the winner of the SyGus 2018 competition loop invariant track (Padhi & Millstein, 2017) . Moreover, CLN2INV finds invariants for each program in 1.1 second on average, more than 40 times faster than LoopInvGen. We also demonstrate that CLN2INV is able to learn complex, real-world loop invariants with combinations of conjunctions and disjunctions of multivariable constraints. Our main contributions are: • We introduce a new semantic mapping for assigning continuous truth values to SMT formulas that is theoretically grounded and enables learning formulas through backpropagation. We further prove that our semantic model is sound and complete. • We develop a novel neural architecture, Continuous Logic Networks (CLNs), that to the best of our knowledge is the first to efficiently learn precise and explicit SMT formulas by construction. • We use CLNs to implement a new loop invariant inference system, CLN2INV, that is the first to solve all 124 theoretically solvable problems in the Code2Inv dataset, 20 more than the existing methods. CLN2INV is able to find invariants for each problem in 1.1 second on average, 40× faster than existing systems. • We further show CLN2INV is able to learn 12 more complex loop invariants than the ones present in the Code2Inv dataset with combinations of multivariable constraints. Related Work. Traditionally, loop invariant learning relies on stochastic or heuristics-guided search (Sharma & Aiken, 2016; Galeotti et al., 2015) . Other approaches like NumInv analyze traces and discover conjunctions of equalities by solving a system of linear equations (Sharma et al., 2013; Nguyen et al., 2017) . LoopInvGen uses PAC learning of CNF using counter-examples (Padhi et al., 2016; Padhi & Millstein, 2017) . By contrast, Code2Inv learns to guess loop invariants using reinforcement learning with recurrent and graph neural networks (Si et al., 2018) . 
However, these approaches struggle to learn complex invariants. Unlike these works, CLN2INV efficiently learns complex invariants directly from execution traces. There is extensive work on PAC learning of boolean formulas, but learning precise formulas requires a prohibitively large number of samples (Kearns et al., 1994). Several recent works use differentiable logic to learn boolean logic formulas from noisy data (Kimmig et al., 2012; Evans & Grefenstette, 2018; Payani & Fekri, 2019) or to improve adversarial robustness by applying logical rules to training (Fischer et al., 2019). By contrast, our work learns precise SMT formulas directly by construction, allowing us to learn richer predicates with compact representation in a noiseless setting. A variety of numerical relaxations have been applied to SAT and SMT solving. Application-specific approximations using methods such as interval overapproximation and slack variables have been developed for different classes of SMT (Eggers et al., 2008; Nuzzo et al., 2010). More recent work has applied recurrent and graph neural networks to Circuit SAT problems and unsat core detection (Amizadeh et al., 2019; Selsam et al., 2019; Selsam & Bjørner, 2019). FastSMT uses embeddings from natural language processing like skip-gram and bag-of-words to represent formulas for search strategy optimization (Balunovic et al., 2018). Unlike these approaches, we relax the SMT semantics directly to generate a differentiable representation of SMT. We develop a novel neural architecture that explicitly and precisely learns SMT formulas by construction. We achieve this by introducing a new sound and complete semantic mapping for SMT that enables learning formulas through backpropagation. We use CLNs to implement a loop invariant inference system, CLN2INV, that is the first to solve all theoretically solvable problems in the Code2Inv benchmark and takes only 1.1 seconds on average. We believe that the CLN architecture will also be beneficial for other domains that require learning SMT formulas. A CONTINUOUS PREDICATES: Figure 5 shows examples of shifted sigmoids for S(>), S(≥), and S(=). Combining these results, for any t-norm we have 0 ⊗ 1 = 0, 1 ⊗ 1 = 1, and 1 ⊗ 0 = 0. Putting it all together, we have f(t, u; B, ·) ⊗ g(t, u; B, ·) = 1 if t = u, and 0 if t ≠ u, which concludes the proof.
We introduce the Continuous Logic Network (CLN), a novel neural architecture for automatically learning loop invariants and general SMT formulas.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:462
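The appendix fragment in the example above relaxes SMT predicates to continuous truth values with shifted sigmoids and combines them with a t-norm. The sketch below illustrates that idea for >, >=, and = using the product t-norm; the parameter names B and eps and the particular shifts are assumptions, not the paper's exact semantic mapping.

```python
import numpy as np

def S_gt(t, u, B=10.0, eps=0.5):
    """Continuous truth value for t > u: a sigmoid shifted by eps (assumed form)."""
    return 1.0 / (1.0 + np.exp(-B * (t - u - eps)))

def S_ge(t, u, B=10.0, eps=0.5):
    """Continuous truth value for t >= u: sigmoid shifted the other way."""
    return 1.0 / (1.0 + np.exp(-B * (t - u + eps)))

def t_norm(a, b):
    """Product t-norm for conjunction; satisfies 0*1 = 0, 1*1 = 1, 1*0 = 0."""
    return a * b

def S_eq(t, u, B=10.0, eps=0.5):
    """Equality as (t >= u) AND (u >= t); sharpens toward 1{t == u} as B grows."""
    return t_norm(S_ge(t, u, B, eps), S_ge(u, t, B, eps))

# As B increases (and eps shrinks appropriately), S_eq approaches 1 when t == u
# and 0 otherwise, mirroring the limiting argument quoted above.
print(S_eq(3.0, 3.0), S_eq(3.0, 4.0))
```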
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Single cell RNA sequencing (scRNAseq) technology enables quantifying gene expression profiles by individual cells within cancer. Dimension reduction methods have been commonly used for cell clustering analysis and visualization of the data. Current dimension reduction methods tend to overly eliminate the expression variations that correspond to less dominant characteristics, such that we fail to find the homogeneous properties of cancer development. In this paper, we propose a new clustering analysis method for scRNAseq data, namely BBSC, via implementing a binarization of the gene expression profile into on/off frequency changes with a Boolean matrix factorization. The low-rank representation of the expression matrix recovered by BBSC increases the resolution in identifying distinct cell types or functions. Application of BBSC on two cancer scRNAseq datasets successfully discovered both homogeneous and heterogeneous cancer cell clusters. Further findings showed potential in preventing cancer progression. Cancer, the biggest deadly threat to humans, has been a huge puzzle since its determination in 1775. From once being considered contagious to today's cancer immunotherapy, modern medicine continues to evolve in tackling this problem (Dougan et al., 2019). And yet, this is not enough to make a huge difference: 1,762,450 people were diagnosed with cancer and 606,880 died in 2018 (Siegel et al., 2019). The development of single cell RNA sequencing (scRNA-seq), which measures each single cell in cancer tissue with over 20,000 dimensions of genes (features), has pictured the hologram of cancer and its micro-environment with high resolution (Picelli et al., 2014; Puram et al., 2017; Tirosh et al., 2016). As illustrated in Figure 1A, the classic analysis pipeline takes a linear (PCA) or non-linear (t-SNE) dimension reduction of the high dimensional input data, by which loadings of the top bases are further used for cell clustering and visualization (Tirosh et al., 2016). Figure 1: Classic analysis pipeline for scRNA-seq data and melanoma example. Cancer cell heterogeneity hampers therapeutic development. We use the melanoma dataset as an example. Cells in scRNA-seq data always come with multiple crossed conditions, such as type of cancer, origin of patient, and cell type. By analyzing the melanoma scRNA-seq data with the classic pipeline, we differentiated the cell type of each cell in its cancer microenvironment (CME) (figure 1B). All cell types other than cancer cells are constituted by multiple patients (figure 1C), validating the accuracy of the classic pipeline in cell type identification. For cancer cells, however, each patient forms a distinct cluster (highlighted in shadow), suggesting confounding patient-wise heterogeneity. A similar phenomenon also exists in breast cancer and head and neck cancer. On the other hand, with medicine being an investment-heavy industry, the uniqueness of each cancer patient contradicts its general principle. In addition, f follows a beta distribution accounting for the collective effect of the probability to shift the expression from off to on (k_on) and from on to off (k_off). y denotes the true expression of gene i inside cell j and x is the observation of y with Gaussian error.
A recent study revealed that, regulated by enhancers, the burst frequency f is the major facilitator of the cell-type-specific gene expression landscape (Larsson et al., 2019). Though f and k_size cannot be precisely fitted from our observed data, since y follows a Poisson distribution of the pure product of k_size and f, we could still capture the most significant frequency changes across different cells. That is, we could infer whether f is above or equal to zero, corresponding to expression/no-expression of the gene, from our observed data. Accounting for this property, we thus propose the following approximate gene expression bi-state models, where F denotes a latent binary matrix of f, which is considered as a low-rank representation of k different cell types, generated by the Boolean product of two binary matrices A and B plus a Boolean flipping error E. Y denotes the true quantitative expression level generated from F, and X is considered as a measure of Y with i.i.d. Gaussian error. Here our approach approximates Y by the Hadamard (element-wise) product between X and Â_{n×k} ⊗ B̂_{k×m}, i.e., Ŷ = X ⊙ (Â_{n×k} ⊗ B̂_{k×m}), where Â_{n×k} and B̂_{k×m} are the estimates of A_{n×k} and B_{k×m}. Bi-state and Boolean matrix factorization for scRNA-seq data (BBSC). In sight of this, we developed a novel scRNA-seq pattern mining and analysis pipeline, namely BBSC (Figure 2), by implementing a data binarization process for the inference of ON/OFF bi-state expression patterns. In addition, we propose a fast binary matrix factorization (BMF) method, namely PFAST, adapted to the large scale of scRNA-seq data. BBSC can be easily integrated with the classic dimension-reduction-based analysis procedure. Application of BBSC on scRNA-seq data from head and neck cancer and melanoma successfully revealed cancer homogeneity and hence increased the sensitivity in identifying subtypes of cells. In addition, cancer cell clusters expressing epithelial-mesenchymal transition (EMT) markers were specifically identified by BBSC in the head and neck cancer study; these clusters consist of cancer cells from different patient samples, suggesting that heterogeneous cancer cells may adopt a similar strategy in the metastasis process. We summarize our contributions as follows: • We constructed a scRNA-seq analysis pipeline, BBSC, for retrieving cancer homogeneity properties. BBSC is by far the first analysis pipeline accounting for the fundamental interplay between cell type and gene expression in the analysis of scRNA-seq data. • As a major component of the BBSC pipeline, we proposed a fast and efficient BMF algorithm, PFAST, adapted to the large scale of scRNA-seq data. • In the analysis of the head and neck cancer data, BBSC identified that cancer cells may adopt similar strategies in metastasis. This finding could be applied to prevent cancer progression. Enabled by the development of single cell technology, we can now observe complicated biological processes like cancer with unprecedented resolution. However, the classic analysis pipeline fails to deliver detailed information: 1) it does not reveal common characteristics of cancer cells across different cancer patients; 2) even though it separates functional cells, it fails to reveal intra-cluster heterogeneity. To solve the above problems, we have developed the BBSC analysis pipeline. Rooted in casting the frequency change in gene expression, we have applied BMF in the feature selection process, which avoids adding new expensive and potentially noisy information. We have applied a tailored binarizing process for each dataset.
Moreover, to deal with large-scale tall matrices like scRNAseq data, we have developed a fast and efficient algorithm called PFAST. Beyond its speed in handling large-scale data, it shows high accuracy compared with state-of-the-art BMF algorithms. We have applied BBSC to two high quality cancer studies, head and neck cancer and melanoma. In both datasets, BBSC shatters the big clusters into several sub-clusters and provides a gateway to analyzing intra-cluster heterogeneity. Moreover, BBSC manages to find common cancer cell sub-clusters in both datasets, and decreases the patient-wise heterogeneity that has hindered cancer therapeutic development. We next justified the biological meaning of the BBSC-derived sub-clusters by looking into the cancer sub-clusters in head and neck cancer. By analyzing their detailed expression profiles, we find that the common clusters are in the EMT transition process, indicating that these cancer cells play an important part in cancer metastasis, while patient-specific clusters are in the early EMT process, indicating that these cells are still in the original cancer micro-environment. These findings first justify the biological importance of BBSC-derived sub-clusters. Second, they bring insightful ideas for clinical application. We can now hypothesize that when cancer cells seek metastasis, they will transform into similar states that are common across different patients. The characteristics of the common clusters may serve as targets for preventing cancer metastasis. Furthermore, we validate that the heterogeneity of cancer comes from the original cancer tissue. BBSC also shows promising results in deciphering this kind of heterogeneity. In particular, in the head and neck cancer study, BBSC distinctly divides cancer cells from the same patient into two sub-clusters. Due to our limited expertise in cancer biology, we did not look closely into this property. However, we believe it could bring insightful ideas about the cause of cancer-origin heterogeneity. Overall, BBSC is an efficient and valuable analysis platform for scRNAseq or other single cell data. It is capable of bringing insightful knowledge for a detailed understanding of complicated biological processes.
Our finding shed lights in preventing cancer progression
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:463
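The example above binarizes expression into on/off states, factorizes the binary matrix with a Boolean matrix factorization, and reconstructs expression as a Hadamard product of X with the Boolean low-rank reconstruction. The sketch below illustrates that flow with a simple threshold binarization and a toy greedy rank-k BMF; it is not the PFAST algorithm, whose details are not given in the excerpt, and the threshold and heuristics are placeholders.

```python
import numpy as np

def binarize(X, threshold=0.0):
    """On/off binarization of an expression matrix (genes x cells)."""
    return (X > threshold).astype(int)

def greedy_bmf(F, k):
    """Tiny greedy Boolean matrix factorization sketch: F ~ A (Boolean product) B.
    Illustrative only; this is not the paper's PFAST algorithm."""
    n, m = F.shape
    A = np.zeros((n, k), dtype=int)
    B = np.zeros((k, m), dtype=int)
    residual = F.copy()
    for r in range(k):
        seed = residual.sum(axis=1).argmax()        # densest remaining row seeds the factor
        B[r] = residual[seed]
        overlap = residual @ B[r]                   # how much of the pattern each row shares
        A[:, r] = (overlap >= max(1, B[r].sum() // 2)).astype(int)
        residual = np.clip(residual - np.outer(A[:, r], B[r]), 0, 1)
    return A, B

X = np.abs(np.random.randn(100, 60))   # toy genes-x-cells expression matrix
F = binarize(X, threshold=0.5)
A, B = greedy_bmf(F, k=5)
F_hat = (A @ B > 0).astype(int)        # Boolean product of the two binary factors
Y_hat = X * F_hat                      # Hadamard product with X, as described above
```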
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: While normalizing flows have led to significant advances in modeling high-dimensional continuous distributions, their applicability to discrete distributions remains unknown. In this paper, we show that flows can in fact be extended to discrete events---and under a simple change-of-variables formula not requiring log-determinant-Jacobian computations. Discrete flows have numerous applications. We display proofs of concept under 2 flow architectures: discrete autoregressive flows enable bidirectionality, allowing for example tokens in text to depend on both left-to-right and right-to-left contexts in an exact language model; and discrete bipartite flows (i.e., with layer structure from RealNVP) enable parallel generation such as exact nonautoregressive text modeling. There have been many recent advances in normalizing flows, a technique for constructing high-dimensional continuous distributions from invertible transformations of simple distributions BID22 BID25 BID23 . Applications for high-dimensional continuous distributions are widespread: these include latent variable models with expressive posterior approximations BID22 BID20 BID12 , parallel image generation BID6 BID11 , parallel speech synthesis BID19 , and general-purpose density estimation BID18 .Normalizing flows are based on the change-of-variables formula, which derives a density given an invertible function applied to continuous events. There have not been analogous advances for discrete distributions, where flows are typically thought to not be applicable. Instead, most research for discrete data has focused on building either latent-variable models with approximate inference BID2 , or increasingly sophisticated autoregressive models that assume a fixed ordering of the data BID0 BID26 . In this paper , we present an alternative for flexible modeling of discrete sequences by extending continuous normalizing flows to the discrete setting. We demonstrate proofs of concept of discrete flows with two architectures:1. Discrete autoregressive flows enable multiple levels of autoregressivity. For example, one can design a bidirectional language model of text where each token depends on both left-to-right and right-to-left contexts while maintaining an exact likelihood and sampling.2. Discrete bipartite flows (i.e. , with flow structure similar to RealNVP BID6 ) enable flexible models with parallel generation. For example, one can design nonautoregressive text models which maintain an exact likelihood for training and evaluation.
We extend autoregressive flows and RealNVP to discrete data.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:464
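The discrete change-of-variables claim in the example above can be seen with a very small example: any bijection on {0, ..., K−1} (here a modular shift) transports a categorical distribution with no log-determinant-Jacobian term, since probability mass is only relabeled. The shift-only parameterization below is a simplified illustration, not necessarily the paper's exact autoregressive or bipartite layer.

```python
import numpy as np

K = 8                       # category / vocabulary size
rng = np.random.default_rng(0)

p_base = rng.dirichlet(np.ones(K))   # base distribution over {0, ..., K-1}
mu = 3                               # would be a learned integer shift in a real model

def forward(x):             # y = (x + mu) mod K  -- a bijection on {0, ..., K-1}
    return (x + mu) % K

def inverse(y):
    return (y - mu) % K

def p_flow(y):
    # Discrete change of variables: p_y(y) = p_x(f^{-1}(y)), no Jacobian term.
    return p_base[inverse(y)]

# Sanity check: the transformed distribution still sums to one.
assert np.isclose(sum(p_flow(y) for y in range(K)), 1.0)
```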
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: We present a Deep Neural Network with Spike Assisted Feature Extraction (SAFE-DNN) to improve robustness of classification under stochastic perturbation of inputs. The proposed network augments a DNN with unsupervised learning of low-level features using spiking neuron network (SNN) with Spike-Time-Dependent-Plasticity (STDP). The complete network learns to ignore local perturbation while performing global feature detection and classification. The experimental results on CIFAR-10 and ImageNet subset demonstrate improved noise robustness for multiple DNN architectures without sacrificing accuracy on clean images. There is a growing interest in deploying DNNs in autonomous systems interacting with physical world such as autonomous vehicles and robotics. It is important that an autonomous systems make reliable classifications even with noisy data. However, in a deep convolutional neural networks (CNN) trained using stochastic gradient descent (SGD), pixel level perturbation can cause kernels to generate incorrect feature maps. Such errors can propagate through network and degrade the classification accuracy (Nazaré et al. (2017) ; Luo & Yang (2014) ). Approaches for improving robustness of a DNN to pixel perturbation can be broadly divided into two complementary categories. First, many research efforts have developed image de-noising (or filtering) networks that can pre-process an image before classification, but at the expense of additional latency in the processing pipeline (Ronneberger et al. (2015) ; Na et al. (2019) ; Xie et al. (2012) ; Zhussip & Chun (2018) ; Soltanayev & Chun (2018) ; Zhang et al. (2017) ). De-noising is an effective approach to improve accuracy under noise but can degrade accuracy for clean images (Na et al. (2019) ). Moreover, de-noising networks trained on a certain noise type do not perform well if the a different noise structure is experienced during inference (Zhussip & Chun (2018) ). Advanced de-noising networks are capable of generalizing to multiple levels of a type of noise and effective for different noise types (Zhussip & Chun (2018) ; Soltanayev & Chun (2018) ; Zhang et al. (2017) ). But high complexity of these network makes them less suitable for real-time applications and lightweight platforms with limited computational and memory resources. An orthogonal approach is to develop a classification network that is inherently robust to input perturbations. Example approaches include training with noisy data, introducing noise to network parameters during training, and using pixel level regularization (Milyaev & Laptev (2017) ; Nazaré et al. (2017) ; Luo & Yang (2014) ; Na et al. (2018) ; Long et al. (2019) ). These approaches do not change the processing pipeline or increase computational and memory demand during inference. However, training-based approaches to design robust DNNs also degrade classification accuracy for clean images, and more importantly, are effective only when noise structure (and magnitude) during training and inference closely match. Therefore, a new class of DNN architecture is necessary for autonomous system that is inherently resilient to input perturbations of different type and magnitude without requiring training on noisy data, as well as computationally efficient. 
Towards this end, this paper proposes a new class of DNN architecture that integrates features extracted via unsupervised neuro-inspired learning and supervised training. Neuro-inspired learning, in particular a spiking neural network (SNN) with spike-timing-dependent plasticity (STDP), is an alternative and unsupervised approach to learning features in input data (Hebb et al. (1950); (2019)). However, the classification accuracy of a STDP-learned SNN for complex datasets is much lower than that of a DNN. The fundamental premise of this paper is that augmenting the feature space of a supervised (trained) DNN with features extracted by an SNN via STDP-based learning increases robustness of the DNN to input perturbations. We argue that stochastic gradient descent (SGD) based back-propagation in a DNN enables global learning between low-level pixel-to-pixel interactions and high-level detection and classification. On the other hand, STDP performs unsupervised local learning and extracts low-level features under spatial correlation. By integrating features from global (supervised training) and local (STDP) learning, the hybrid network "learns to ignore" locally uncorrelated perturbations (noise) in pixels while extracting the correct feature representation from the overall image. Consequently, hybridization of SGD and STDP enables robust image classification under noisy input while preserving the accuracy of the baseline DNN for clean images. We present a hybrid network architecture, referred to as Spike Assisted Feature Extraction based Deep Neural Network (SAFE-DNN), to establish the preceding premise. We develop an integrated learning/training methodology to couple the features extracted via neuro-inspired learning and supervised training. In particular, this paper makes the following contributions: • We present a SAFE-DNN architecture (Figure 1) that couples STDP-based robust learning of local features with SGD-based supervised training. This is achieved by integrating a spiking convolutional module within a DNN pipeline. • We present a novel frequency-dependent stochastic STDP learning rule for the spiking convolutional module, demonstrating local competitive learning of low-level features. The proposed learning method makes the features extracted by the spiking convolutional module robust to local perturbations in the input image. • We develop a methodology to transform the STDP-based spiking convolution to an equivalent CNN. This is achieved by using a novel special neuron activation unit (SAU), a non-spiking activation function, that facilitates integration of the SNN-extracted features within the DNN, thereby creating a single fully-trainable deep network. The supervised (SGD-based) training is performed in that deep network after freezing the STDP-learnt weights in the spiking CNN module. We present implementations of SAFE-DNN based on different deep networks including MobileNet, ResNet and DenseNet (Sandler et al. (2018), He et al. (2015), Huang et al. (2016)) to show the versatility of our network architecture. Experiments are conducted on CIFAR-10 and an ImageNet subset considering different types of noise, including Gaussian, Wald, Poisson, Salt&Pepper, and adversarial noise, demonstrating robust classification under input noise.
Unlike training-based approaches, SAFE-DNN shows improved accuracy for a wide range of noise structure and magnitude without requiring any prior knowledge of the perturbation during training and inference and does not degrade the accuracy for clean images (even shows marginal improvement in many cases). SAFE-DNN complements, and can be integrated with, de-noising networks for input pre-processing. However, unlike de-noising networks, the SAFE-DNN has negligible computation and memory overhead, and does not introduce new stages in the processing pipeline. Hence, SAFE-DNN is an attractive architecture for resource-constrained autonomous platforms with real-time processing. We note that, SAFE-DNN differs from deep SNNs that convert a pre-trained DNN to SNN (Sengupta et al. (2019) , Hu et al. (2018) ). Such networks function as a spiking network during inference to reduce energy; however, the learning is still based on supervision and back-propagation. In contrast, SAFE-DNN hybridizes STDP and SGD during learning but creates a single hybrid network operating as a DNN during inference. In this paper we present SAFE-DNN as a deep learning architecture that integrates spiking convolutional network with STDP based learning into a conventional DNN for robust low level feature extraction. The experimental results show that SAFE-DNN improves robustness to different input perturbations without any prior knowledge of the noise during training/inference. SAFE-DNN is compatible with various DNN designs and incurs negligible computation/memory overhead. Hence, it is an attractive candidate for real-time autonomous systems operating in noisy environment.
A noise robust deep learning architecture.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:465
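The example above relies on STDP to learn the spiking convolutional module's weights; its specific frequency-dependent stochastic rule is not spelled out in the excerpt, so the sketch below shows only a generic pair-based STDP update, purely to illustrate the kind of local, unsupervised weight change involved. All constants and names are illustrative.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Generic pair-based STDP: potentiate when the pre-synaptic spike precedes
    the post-synaptic spike, depress otherwise. Not the paper's rule."""
    dt = t_post - t_pre
    if dt >= 0:
        dw = a_plus * np.exp(-dt / tau)      # pre before post -> strengthen
    else:
        dw = -a_minus * np.exp(dt / tau)     # post before pre -> weaken
    return float(np.clip(w + dw, w_min, w_max))

# Example: a synapse repeatedly seeing pre-before-post pairings drifts upward.
w = 0.5
for _ in range(50):
    w = stdp_update(w, t_pre=10.0, t_post=15.0)
print(w)
```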
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Neural embeddings have been used with great success in Natural Language Processing (NLP) where they provide compact representations that encapsulate word similarity and attain state-of-the-art performance in a range of linguistic tasks. The success of neural embeddings has prompted significant amounts of research into applications in domains other than language. One such domain is graph-structured data, where embeddings of vertices can be learned that encapsulate vertex similarity and improve performance on tasks including edge prediction and vertex labelling. For both NLP and graph-based tasks, embeddings in high-dimensional Euclidean spaces have been learned. However, recent work has shown that the appropriate isometric space for embedding complex networks is not the flat Euclidean space, but a negatively curved hyperbolic space. We present a new concept that exploits these recent insights and propose learning neural embeddings of graphs in hyperbolic space. We provide experimental evidence that hyperbolic embeddings significantly outperform Euclidean embeddings on vertex classification tasks for several real-world public datasets. Embeddings are used to represent complex high-dimensional data in lower-dimensional continuous spaces BID28 BID3 . Embedded representations provide three principal benefits over sparse schemes: They encapsulate similarity, are compact, and perform better as inputs to machine learning models BID29 . These benefits are particularly important for graph-structured data where the native representation is the adjacency matrix, which is typically a sparse matrix of connection weights.Neural embedding models are a flavour of embedding where the embedded representation corresponds to a subset of the connection weights in a neural network (see FIG2 ), which are learned through backpropagation. Neural embedding models have been shown to improve performance on many tasks across multiple domains, including word analogies (Mikolov et al., 2013a; BID20 , machine translation BID31 ), document comparison (Kusner et al., 2015 , missing edge prediction BID12 , vertex attribution BID26 , product recommendations BID10 BID1 , customer value prediction BID14 BID6 and item categorisation BID2 . In all cases, the embeddings are learned without labels (unsupervised) from a sequence of tokens. Previous work on neural embedding models has either either explicitly or implicitly (by using the Euclidean dot product) assumed that the embedding space is Euclidean. However, recent work in the field of complex networks has found that many interesting networks, particularly those with a scale-free structure such as the Internet BID30 BID5 or academic citations BID8 BID7 can be well described with a geometry which is non-Euclidean, such as hyperbolic geometry. Even more recently the problem of mapping graphs and datasets to a low-dimensional hyperbolic space has been addressed in BID24 and BID4 . Here we use a neural embedding approach based on the Skipgram architecture to find hyperbolic embeddings.There are two reasons why embedding complex networks in hyperbolic geometry can be expected to perform better than Euclidean geometry. The first is that complex networks exhibit a hierarchical structure. 
Hyperbolic geometry provides a continuous analogue of tree-like graphs, and even infinite trees have nearly isometric embeddings in hyperbolic space BID11 . The second property is that complex networks have power-law degree distributions, resulting in high-degree hub vertices. FIG1 caption: All tiles are of constant area in hyperbolic space, but shrink to zero area at the boundary of the disk in Euclidean space. (c) Hub-and-spokes graph. It is impossible to embed this graph in two-dimensional Euclidean space and preserve the properties that (1) all spokes are the same distance from the hub, (2) all spokes are the same distance from each other, and (3) the distance between spokes along the circumference is more than twice the distance to the hub. In hyperbolic space such embeddings exist. FIG1 shows a simple hub-and-spoke graph where each spoke is a distance R from the hub and 2R from each other. For an embedding in two-dimensional Euclidean space it is impossible to reproduce this geometry for more than two spokes. However, in hyperbolic space, large numbers of spokes that satisfy these geometrical constraints can be embedded because the circumference of a circle expands exponentially rather than polynomially with the radius. The starting point for our model is the celebrated Skipgram architecture (Mikolov et al., 2013a; b) shown in FIG2 . Skipgram is a shallow neural network with three layers: (1) An input projection layer that maps from a one-hot-encoded token to a distributed representation, (2) a hidden layer, and (3) an output softmax layer. Skipgram is trained on a sequence of words that is decomposed into (input word, context word)-pairs. The model uses two separate vector representations, one for the input words and another for the context words, with the input representation comprising the learned embedding. The (input word, context word)-pairs are generated by running a fixed length sliding window over a word sequence. Words are initially randomly allocated to vectors within the two vector spaces. Then, for each training word pair, the vector representations of the observed input and context words are pushed towards each other and away from all other words (see FIG2 ). The model can be extended to network structured data using random walks to create sequences of vertices. Vertices are then treated exactly analogously to words in the NLP formulation. This was originally proposed as DeepWalk BID26 . Extensions varying the nature of the random walks have been explored in LINE BID32 and Node2vec BID12 . Contribution: In this paper, we introduce the new concept of neural embeddings in hyperbolic space. We formulate backpropagation in hyperbolic space and show that using the natural geometry of complex networks improves performance in vertex classification tasks across multiple networks. At the same time, BID24 independently proposed a hyperbolic embedding algorithm that has similarities to ours. The key differences are that BID24 try to fit the hyperbolic distance between nodes using Cartesian coordinates in the Poincaré disk, whereas we use a modified cosine distance in a spherical hyperbolic coordinate system. Our approach does not require a numerical constraint to prevent points from 'falling off' the edge of the disk and becoming infinitely distant from the others.
We learn neural embeddings of graphs in hyperbolic instead of Euclidean space
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:466
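The DeepWalk-style pipeline described in the graph-embedding record above (uniform random walks over the graph, then (input word, context word)-pairs from a fixed-length sliding window) can be made concrete with a short sketch. This is a minimal illustration in plain Python under my own naming, not the authors' hyperbolic implementation; random_walks, skipgram_pairs and the toy graph are assumptions for illustration.

import random

def random_walks(adj, num_walks=10, walk_len=40, seed=0):
    """Uniform random walks over a graph given as {vertex: [neighbours]}."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in adj:
            walk = [start]
            while len(walk) < walk_len and adj[walk[-1]]:
                walk.append(rng.choice(adj[walk[-1]]))
            walks.append(walk)
    return walks

def skipgram_pairs(walks, window=5):
    """(input vertex, context vertex) pairs from a fixed-length sliding window."""
    pairs = []
    for walk in walks:
        for i, u in enumerate(walk):
            for j in range(max(0, i - window), min(len(walk), i + window + 1)):
                if j != i:
                    pairs.append((u, walk[j]))
    return pairs

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
pairs = skipgram_pairs(random_walks(adj))

The resulting pairs would then be fed to a Skipgram-style model, with similarity measured in whichever geometry (Euclidean or hyperbolic) is chosen.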
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: The International Competition on Knowledge Engineering for Planning and Scheduling (ICKEPS) plays a pivotal role in fostering the development of new Knowledge Engineering (KE) tools, and in emphasising the importance of principled approaches for all the different KE aspects that are needed for the successful long-term use of planning in real-world applications. In this paper, as an exercise in synthesis and for the sake of stimulating thoughts and discussion, we review the format of previous ICKEPS, to suggest alternative formats for future competitions, ideally to motivate someone to step up and organise the next ones. The International Competition on Knowledge Engineering for Planning and Scheduling (ICKEPS) has been running since 2005 as an almost biennial event promoting the development and importance of the use of knowledge engineering (KE) methods and techniques within this area. The aim of the competition series is to foster developments in the knowledge-based and domain modelling aspects of Automated Planning, to accelerate knowledge engineering research, to encourage the creation and sharing of prototype tools and software platforms that promise more rapid, accessible, and effective ways to construct reliable and efficient Automated Planning systems.The latest competition took place in 2016 1 BID3 , which aimed at on-site domain modelling, and highlighted a number of major issues. Most teams did not use any of the existing KE tools, and thus relied only on their expertise. Second, existing tools do not effectively support cooperation, which is needed to cope with the growing complexity of planning applications. Finally, and more worryingly, the number of participants of ICKEPS is still not very large, especially when compared with the latest edition of the International Planning Competition: this suggests that the planning community underestimates the importance of knowledge engineering, despite of its enormous impact on applicability of domain-independent planning in real-world scenarios. Accidental complexity issues BID2 , for instance, can prevent the exploitation of automated planning approaches in complex scenarios, and even an unfortunate ordering of elements in the domain model can adversely affect the performance of planning engines BID7 .Given the pivotal role played by ICKEPS in promoting the importance of principled KE approaches and tools, we believe it is important to evolve and adapt its format in order to attract and engage a larger number of participants. In this paper, we review the format of past competitions, in order to highlight weaknesses and strengths both from organisers' and participants' perspective. Building on top of this analysis, we suggest some alternative formats that may help future ICKEPS organisers in performing their tasks.It should be noted, though, that the aim of this paper is twofold: to review formats and suggest improvements to ICKEPS, and -more importantly-to make a call for action for organising future competitions focused on KE aspects of planning and scheduling. Concluding this paper, we believe that there is a strong need to organise the ICKEPS competitions in order to increase awareness of KE techniques, tool and issues in the ICAPS and general AI communities. 
The success of future ICKEPS competitions (e.g. a considerable increase in the number of participants) can, in consequence, influence the domain-independent AI planning field by making it accessible for use (by planning non-experts) in various application domains. To give some motivation and inspiration for future ICKEPS competitions, in this paper we provided a review of the format of past ICKEPS competitions and suggested two possible new formats that, we believe, can attract more participants and avoid an excessive burden on organisers. We believe that the paper initiates a fruitful discussion about the format of future ICKEPS competitions, as well as motivating potential organisers to step up and organise the next competition(s).
Ideas for future ICKEPS
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:467
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: We show that generating English Wikipedia articles can be approached as a multi- document summarization of source documents. We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article. For the abstractive model, we introduce a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoder- decoder architectures used in sequence transduction. We show that this model can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles. When given reference documents, we show it can extract relevant factual information as reflected in perplexity, ROUGE scores and human evaluations. The sequence-to-sequence framework has demonstrated success in natural-language sequence transduction tasks such as machine translation. More recently, neural techniques have been applied to do single-document, abstractive (paraphrasing) text summarization of news articles BID15 , BID9 ). In this prior work, the input to supervised models ranged from the first sentence to the entire text of an article, and they are trained end-to-end to predict reference summaries. Doing this end-to-end requires a significant number of parallel article-summary pairs since language understanding is a pre-requisite to generate fluent summaries.In contrast, we consider the task of multi-document summarization, where the input is a collection of related documents from which a summary is distilled. Prior work has focused on extractive summarization, which select sentences or phrases from the input to form the summaries, rather than generating new text. There has been limited application of abstractive neural methods and one possible reason is the paucity of large, labeled datasets.In this work, we consider English Wikipedia as a supervised machine learning task for multidocument summarization where the input is comprised of a Wikipedia topic (title of article) and a collection of non-Wikipedia reference documents, and the target is the Wikipedia article text. We describe the first attempt to abstractively generate the first section, or lead, of Wikipedia articles conditioned on reference text. In addition to running strong baseline models on the task, we modify the Transformer architecture BID18 to only consist of a decoder, which performs better in the case of longer input sequences compared to recurrent neural network (RNN) and Transformer encoder-decoder models. Finally we show our modeling improvements allow us to generate entire Wikipedia articles. In FIG3 , we show the predictions from three different models (using tf-idf extraction, and the combined corpus) along with the Wikipedia ground truth. As the perplexity decreases we see improvements in the model outputs, in terms of fluency, factual accuracy, and narrative complexity. In particular, the T-DMCA model offers a respectable alternative to the Wikipedia version and is more succinct, while mentioning key facts, such as where the law firm was located, when and how it was formed, and the rise and fall of the firm.In manual inspection of model outputs, we noticed an unexpected side-effect: models learn to translate names from English into multiple languages, e.g. Rohit Viswanath into Hindi (see FIG4 ). 
Although we did not do a systematic evaluation of the translations, we found they are often correct, and often they are not found in the Wikipedia article itself. We also verified that in general the translation is not merely copied from the source, such as example cases where the target language is the incorrect one (e.g. translation of an English name into Ukrainian). We have shown that generating Wikipedia can be approached as a multi-document summarization problem with a large, parallel dataset, and demonstrated a two-stage extractive-abstractive framework for carrying it out. The coarse extraction method used in the first stage appears to have a significant effect on final performance, suggesting further research on improving it would be fruitful. We introduce a new, decoder-only sequence transduction model for the abstractive stage, capable of handling very long input-output examples. This model significantly outperforms traditional encoderdecoder architectures on long sequences, allowing us to condition on many reference documents and to generate coherent and informative Wikipedia articles.
We generate Wikipedia articles abstractively conditioned on source document text.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:468
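The record above mentions a coarse extractive stage ("tf-idf extraction") that selects salient reference text before the abstractive, decoder-only model is run. Below is a hedged sketch of one plausible reading of that stage, ranking reference paragraphs against the article title by tf-idf cosine similarity; scikit-learn is assumed, and rank_paragraphs and the toy inputs are mine rather than the paper's exact procedure.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_paragraphs(title, paragraphs, top_k=5):
    """Score reference paragraphs against the article title with tf-idf."""
    vec = TfidfVectorizer(stop_words="english")
    mat = vec.fit_transform([title] + paragraphs)
    scores = cosine_similarity(mat[0], mat[1:]).ravel()
    order = scores.argsort()[::-1][:top_k]
    return [paragraphs[i] for i in order]

refs = ["The firm was founded in 1994 in New York.",
        "Unrelated text about gardening.",
        "It grew quickly before dissolving in 2003."]
print(rank_paragraphs("History of the law firm", refs, top_k=2))

The top-ranked paragraphs would then form the input sequence to the abstractive model.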
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Abstract Stochastic gradient descent (SGD) and Adam are commonly used to optimize deep neural networks, but choosing one usually means making tradeoffs between speed, accuracy and stability. Here we present an intuition for why the tradeoffs exist as well as a method for unifying the two in a continuous way. This makes it possible to control the way models are trained in much greater detail. We show that for default parameters, the new algorithm equals or outperforms SGD and Adam across a range of models for image classification tasks and outperforms SGD for language modeling tasks. One of the most common methods of training neural networks is stochastic gradient descent (SGD) (Bottou et al. (2016) ). SGD has strong theoretical guarantees, including convergence in locally non-convex optimization problems (Lee et al. (2016) ). It also shows improved generalization and stability when compared to other optimization algorithms (Smith & Le (2018) ). There have been various efforts in improving the speed and generalization of SGD. One popular modification is to use an adaptive gradient (Duchi et al. (2011) ), which scales the gradient step size to be larger in directions with consistently small gradients. Adam, an implementation that combines SGD with momentum and an adaptive step size inversely proportional to the RMS gradient, has been particularly successful at speeding up training and solving particular problems (Kingma & Ba (2014) ). However, at other problems it pays a penalty in worse generalization (Wilson et al. (2017) ; Keskar & Socher (2017) ), and it requires additional modifications to achieve a convergence guarantee (Reddi et al. (2018) ; Li & Orabona (2018) ). Here we develop an intuition for adaptive gradient methods that allows us to unify Adam with SGD in a natural way. The new optimizer, SoftAdam, descends in a direction that mixes the SGD with Adam update steps. As such, it should be able to achieve equal or better optimization results across a variety of problems. In this paper, we have motivated and demonstrated a new optimization algorithm that naturally unifies SGD and Adam. We have focused our empirical results on the default hyper-parameter setting, η = 1, and predetermined learning schedules. With these parameters, the algorithm was shown to produce optimization that is better than or equal to SGD and Adam on image classification tasks. It also performed significantly better than SGD on language modeling tasks. Together with finding the optimal values for η, we expect a better understanding of the learning schedule to bring light to the way in which the adaptive gradient methods improve convergence. SoftAdam now also makes it possible to create a learning schedule on η, which may be another fruitful avenue of research, expanding on the work of Ward et al. (2018) . Better understanding of how adaptive gradients improve the convergence of practical machine learning models during training will enable larger models to be trained to more accurately in less time. This paper provides a useful intuition for how that occurs and provides a new algorithm that can be used to improve performance across a diverse set of problems. 
# State initialization
if len(state) == 0:
    state["step"] = 0
    # Exponential moving average of gradient values
    state["exp_avg"] = torch.zeros_like(p.data)
    # Exponential moving average of
    # squared gradient values
    state["exp_avg_sq"] = torch.zeros_like(p.data)
exp_avg, exp_avg_sq = (
    state["exp_avg"],
    state["exp_avg_sq"],
)
beta1, beta2 = group["betas"]
state["step"] += 1
beta2_hat = min(beta2, 1.0 - 1.0 / (state["step"]))
r_beta = (1 - beta2) / (1 - beta2_hat)
eta_hat2 = (group["eta"] * group["eta"] * r_beta)
# Decay the first and second moment with the
# running average coefficient
exp_avg.mul(beta1).add  # argument list and the remaining update steps are elided in the excerpt
return loss
An algorithm for unifying SGD and Adam and empirical study of its performance
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:469
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Deterministic models are approximations of reality that are often easier to build and interpret than stochastic alternatives. Unfortunately, as nature is capricious, observational data can never be fully explained by deterministic models in practice. Observation and process noise need to be added to adapt deterministic models to behave stochastically, such that they are capable of explaining and extrapolating from noisy data. Adding process noise to deterministic simulators can induce a failure in the simulator resulting in no return value for certain inputs -- a property we describe as ``brittle.'' We investigate and address the wasted computation that arises from these failures, and the effect of such failures on downstream inference tasks. We show that performing inference in this space can be viewed as rejection sampling, and train a conditional normalizing flow as a proposal over noise values such that there is a low probability that the simulator crashes, increasing computational efficiency and inference fidelity for a fixed sample budget when used as the proposal in an approximate inference algorithm. In order to compensate for epistemic uncertainty due to modelling approximations and unmodeled aleatoric uncertainty, deterministic simulators are often "converted" to "stochastic" simulators by randomly perturbing the state at each time step. In practice, models adapted in this way often provide better inferences (Møller et al., 2011; Saarinen et al., 2008; Lv et al., 2008; Pimblott and LaVerne, 1990; Renard et al., 2013) . State-independent white noise with heuristically tuned variance is often used to perturb the state (Adhikari and Agrawal, 2013; Brockwell and Davis, 2016; Fox, 1997; Reddy and Clinton, 2016; Du and Sam, 2006; Allen, 2017; Mbalawata et al., 2013) . However, naively adding noise to the state will, in many applications, render the perturbed input state "invalid," inducing failure (Razavi et al., 2019; Lucas et al., 2013; Sheikholeslami et al., 2019) . These failures waste computational resources and reduce sample diversity, worsening inference performance. Examples of failure modes include ordinary differential equation (ODE) solvers not converging to the required tolerance in the allocated time, or, the state crossing into an unhandled configuration, such as solid bodies overlapping. Establishing the cause of failure is non-trivial and hence, the simulation artifact can be sensitive to seemingly inconsequential alterations to the state -a property we describe as "brittle." The principal contribution of this paper is a technique for minimizing this failure rate. We proceed by first framing sampling from brittle simulators as rejection sampling. We then eliminate rejections by learning the state-dependent density over perturbations that do not induce failure, using conditional autoregressive flows (Papamakarios et al., 2017) . Doing so renders the joint distribution unchanged and retains the interpretability afforded by the simulator, but improves sample efficiency. We show that using the learned proposal increases the fidelity of the inference results attainable on a range of examples. In this paper we have tackled reducing simulator failures caused by naively perturbing the input state. 
We achieve this by defining these simulators as rejection samplers and learning a conditional autoregressive flow to estimate the state-dependent proposal distribution conditioned on acceptance. We show that using this learned proposal reduces the variance of inference results when used as the proposal in a subsequent approximate inference scheme. This work has readily transferable practical contributions in the scientific community where naively modified simulation platforms are widely deployed.
We learn a conditional autoregressive flow to propose perturbations that don't induce simulator failure, improving inference performance.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:47
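The record above frames sampling from a naively perturbed ("brittle") simulator as rejection sampling: a draw only counts if the simulator does not crash on the perturbed state. The sketch below illustrates that framing with a toy failure condition; fragile_step, perturbed_step and the Gaussian proposal are stand-ins of mine. The paper's contribution is to replace the Gaussian with a learned, state-conditional normalizing-flow proposal so that far fewer draws are rejected.

import numpy as np

def fragile_step(state, noise):
    """Stand-in for a deterministic simulator step that fails on 'invalid' states."""
    new_state = state + noise
    if np.any(new_state < 0.0):           # e.g. solid bodies overlapping
        raise RuntimeError("simulator failure")
    return new_state

def perturbed_step(state, propose, max_tries=100):
    """Sampling from the naively perturbed simulator is rejection sampling:
    keep drawing noise until the simulator accepts the perturbed state."""
    for _ in range(max_tries):
        try:
            return fragile_step(state, propose(state))
        except RuntimeError:
            continue                       # the wasted computation the paper targets
    raise RuntimeError("no accepted sample")

gaussian = lambda s: np.random.normal(0.0, 0.5, size=s.shape)
state = np.array([1.0, 2.0])
print(perturbed_step(state, gaussian))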
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: The use of imitation learning to learn a single policy for a complex task that has multiple modes or hierarchical structure can be challenging. In fact, previous work has shown that when the modes are known, learning separate policies for each mode or sub-task can greatly improve the performance of imitation learning. In this work, we discover the interaction between sub-tasks from their resulting state-action trajectory sequences using a directed graphical model. We propose a new algorithm based on the generative adversarial imitation learning framework which automatically learns sub-task policies from unsegmented demonstrations. Our approach maximizes the directed information flow in the graphical model between sub-task latent variables and their generated trajectories. We also show how our approach connects with the existing Options framework, which is commonly used to learn hierarchical policies. Complex human activities can often be broken down into various simpler sub-activities or sub-tasks that can serve as the basic building blocks for completing a variety of complicated tasks. For instance, when driving a car, a driver may perform several simpler sub-tasks such as driving straight in a lane, changing lanes, executing a turn and braking, in different orders and for varying times depending on the source, destination, traffic conditions etc. Using imitation learning to learn a single monolithic policy to represent a structured activity can be challenging as it does not make explicit the sub-structure between the parts within the activity. In this work, we develop an imitation learning framework that can learn a policy for each of these sub-tasks given unsegmented activity demonstrations and also learn a macro-policy which dictates switching from one sub-task policy to another. Learning sub-task specific policies has the benefit of shared learning. Each such sub-task policy also needs to specialize over a restricted state space, thus making the learning problem easier.Previous works in imitation learning BID16 BID7 focus on learning each sub-task specific policy using segmented expert demonstrations by modeling the variability in each sub-task policy using a latent variable. This latent variable is inferred by enforcing high mutual information between the latent variable and expert demonstrations. This information theoretic perspective is equivalent to the graphical model shown in FIG0 (Left), where the node c represents the latent variable. However, since learning sub-task policies requires isolated demonstrations for each sub-task, this setup is difficult to scale to many real world scenarios where providing such segmented trajectories is cumbersome. Further, this setup does not learn a macro-policy to combine the learned sub-task policies in meaningful ways to achieve different tasks.In our work, we aim to learn each sub-task policy directly from unsegmented activity demonstrations. For example, given a task consisting of three sub-tasks -A, B and C, we wish to learn a policy to complete sub-task A, learn when to transition from A to B, finish sub-task B and so on. To achieve this we use a causal graphical model, which can be represented as a Dynamic Bayesian Network as GAIL Li et al. (2017) . Right: Causal model in this work. 
The latent code causes the policy to produce a trajectory. The current trajectory, and latent code produce the next latent code shown in FIG0 (Right). The nodes c t denote latent variables which indicate the currently active sub-task and the nodes τ t denote the state-action pair at time t. We consider as given, a set of expert demonstrations, each of which is represented by τ = {τ 1 , · · · , τ T } and has a corresponding sequence of latent factors c = {c 1 , · · · , c T −1 }. The sub-activity at time t dictates what state-action pair was generated at time t. The previous sub-task and the current state together cause the selection of the next sub-task.As we will discuss in Section 3, extending the use of mutual information to learn sub-task policies from unsegmented demonstrations is problematic, as it requires learning the macro-policy as a conditional probability distribution which depends on the unobserved future. This unobserved future is unknown during earlier points of interaction ( FIG0 ). To alleviate this, in our work we aim to force the policy to generate trajectories that maximize the directed information or causal information BID17 flow from trajectories to latent factors of variation within the trajectories instead of mutual information. Using directed information requires us to learn a causally conditioned probability distribution BID12 which depends only on the observed past while allowing the unobserved future to be sequentially revealed. Further, since there exists feedback in our causal graphical model i.e., information flows from the latent variables to trajectories and vice versa, directed information also provides a better upper bound on this information flow between the latent variables and expert trajectories than does the conventional mutual information BID17 BID12 .We also draw connections with existing work on learning sub-task policies using imitation learning with the options framework BID27 BID3 . We show that our work, while derived using the information theoretic perspective of maximizing directed information, bears a close resemblance to applying the options framework in a generative adversarial imitation setting. Thus , our approach combines the benefits of learning hierarchical policies using the options framework with the robustness of generative adversarial imitation learning, helping overcome problems such as compounding errors that plague behaviour cloning.In summary, the main contributions of our work include:• We extend existing generative adversarial imitation learning frameworks to allow for learning of sub-task specific policies by maximizing directed information in a causal graph of subactivity latent variables and observed trajectory variables.• We draw connections between previous works on imitation learning with sub-task policies using options and show that our proposed approach can also be seen as option learning in a generative adversarial setting.• We show through experiments on both discrete and continuous state-action spaces, the ability of our approach to segment expert demonstrations into meaningful sub-tasks and combine sub-task specific policies to perform the desired task.2 RELATED WORK Learning separate sub-task policies can help improve the performance of imitation learning when the demonstrated task is complex and has a hierarchical structure. In this work, we present an algorithm that infers these latent sub-task policies directly from given unstructured and unlabelled expert demonstrations. 
We model the problem of imitation learning as a directed graph with sub-task latent variables and observed trajectory variables. We use the notion of directed information in a generative adversarial imitation learning framework to learn sub-task and macro policies. We further show theoretical connections with the options literature as used in hierarchical reinforcement and imitation learning. We evaluate our method on both discrete and continuous environments. Our experiments show that our method is able to segment the expert demonstrations into different sub-tasks, learn sub-task specific policies and also learn a macro-policy that can combines these sub-task. TAB3 : Experiment settings for all the different environments for both DirectedInfo-GAIL and VAE-pretraining step respectively. Thus, by maximizing directed information instead of mutual information, we can learn a posterior distribution over the next latent factor c given the latent factors discovered up to now and the trajectory followed up to now, thereby removing the dependence on the future trajectory. In practice, we do not consider the H(c) term. This gives us the objective, DISPLAYFORM0 In practice, we fix q from the VAE pre-training and only minimize over the policy π in equation 4. BID24 to train our policy network with = 0.2. For the VAE pre-training step we set the VAE learning rate also to 3e −4 . For the Gumbel-Softmax distribution we set an initial temperature τ = 5.0. The temperature is annealed using using an exponential decay with the following schedule τ = max(0.1, exp −kt ), where k = 3e − 3 and t is the current epoch.
Learning Hierarchical Policies from Unsegmented Demonstrations using Directed Information
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:470
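The causal model in the record above generates a trajectory by letting the current latent sub-task code and state produce an action, and then selecting the next code from the previous code and the new state. A minimal PyTorch rollout sketch of that generative process follows; the linear policy and code-posterior networks, the dimensions, and the toy environment are placeholders of mine, not the paper's architecture.

import torch

state_dim, act_dim, num_codes = 4, 2, 3
policy = torch.nn.Linear(state_dim + num_codes, act_dim)            # pi(a | s, c)
code_posterior = torch.nn.Linear(state_dim + num_codes, num_codes)  # q(c_t | s_t, c_{t-1})

def rollout(env_step, s0, c0, horizon=10):
    s, c = s0, c0
    traj = []
    for _ in range(horizon):
        c_onehot = torch.nn.functional.one_hot(c, num_codes).float()
        a = policy(torch.cat([s, c_onehot]))
        traj.append((s, c.item(), a))
        s = env_step(s, a)
        # previous sub-task code and current state select the next sub-task
        logits = code_posterior(torch.cat([s, c_onehot]))
        c = torch.distributions.Categorical(logits=logits).sample()
    return traj

toy_env = lambda s, a: s + 0.1 * torch.randn_like(s)
traj = rollout(toy_env, torch.zeros(state_dim), torch.tensor(0))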
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: The Convolutional Neural Network (CNN) has been successfully applied in many fields during recent decades; however it lacks the ability to utilize prior domain knowledge when dealing with many realistic problems. We present a framework called Geometric Operator Convolutional Neural Network (GO-CNN) that uses domain knowledge, wherein the kernel of the first convolutional layer is replaced with a kernel generated by a geometric operator function. This framework integrates many conventional geometric operators, which allows it to adapt to a diverse range of problems. Under certain conditions, we theoretically analyze the convergence and the bound of the generalization errors between GO-CNNs and common CNNs. Although the geometric operator convolution kernels have fewer trainable parameters than common convolution kernels, the experimental results indicate that GO-CNN performs more accurately than common CNN on CIFAR-10/100. Furthermore, GO-CNN reduces dependence on the amount of training examples and enhances adversarial stability. Convolutional Neural Networks have been successfully applied in many fields during recent decades, but the theoretical understanding of the deep neural network is still in the preliminary stages. Although CNNs have strong expressive abilities, they have two clear deficiencies. First, as complex functional mappings, CNNs, like black boxes, cannot take full advantage of domain knowledge and prior information. Second, when little data is available for a certain task, CNNs' generalization ability weakens. This is due to overfitting, which may occur due to the large number of parameters and the large model size. Stemming from these two defects, a great deal of research has been done to modify CNNs BID7 Wang et al., 2018; Sarwar et al., 2017) .Before CNNs were applied, traditional geometric operators had developed quite well. Each geometric operator represents the precipitation of domain knowledge and prior information. For example, the Sobel operator (Works) is a discrete difference operator, which can extract image edge information for edge detection. The Schmid operator (Schmid, 2001 ) is an isotropic circular operator, which extracts texture information from images for face recognition. The Histogram of Oriented Gradients (HOG) BID8 ) is a statistic operator of gradient direction, which extracts edge direction distributions from images for pedestrian detection and other uses.Many computer vision tasks require domain knowledge and prior information. For example, in BID2 , the texture information from the image is used for an auxiliary diagnosis of a fracture. Geometric operators can make use of domain knowledge and prior information, but cannot automatically change parameter values by learning from data. Convolutional Neural Networks have strong data expression abilities and learning abilities, but they struggle to make use of domain knowledge. For better data learning, we have combined the two. It is natural to directly use geometric operators for pre-processing, and then classify the data through a Convolutional Neural Network (Yao et al., 2016) . However, this method uses human experience to select geometric operator parameter values, and then carries out the Convolutional Neural Network learning separately. 
This method is a kind of two-stage technique, and without reducing parameter redundancy in a Convolutional Neural Network, it is difficult to achieve global optimization. The method proposed in this paper directly constructs geometric operator convolution and then integrates geometric operator convolution into a Convolutional Neural Network to form a new framework -the Geometric Operator Convolutional Neural Network. This method achieves global optimizations and utilizes the properties of geometric operators.In summary, the contributions of this work are as follows:• This framework can integrates many conventional geometric operators, which reveals its broad customization capabilities when handling diverse problems.• In theory, the same approximation accuracy and generalization error bounds are achieved when geometric operators meet certain conditions.• The Geometric Operator Convolutional Neural Network not only reduces the redundancy of the parameters, but also reduces the dependence on the amount of the training samples.• The Geometric Operator Convolutional Neural Network enhances adversarial stability. In this paper, we present a novel framework named the Geometric Operator Convolution Neural Network, where the kernel in the first convolutional layer is replaced with kernels generated by geometric operator functions. This new network boasts several contributions. Firstly, the GO-CNN is customizable for diverse situations. Secondly, there is a theoretical guarantee in the learning framework of the GO-CNN. Thirdly, the GO-CNN reduces the dependence on training samples. Lastly, the GO-CNN enhances adversarial stability. In the future, we can explore a more appropriate geometric operator convolution block.
Traditional image processing algorithms are combined with Convolutional Neural Networks,a new neural network.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:471
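The central construction in the record above is a first convolutional layer whose kernels are generated by a geometric operator function with a few trainable parameters, rather than learned as free weights. The sketch below uses a Gabor-like generating function, which is an assumption chosen for illustration (the paper's examples include the Sobel, Schmid and HOG operators); the class name and hyper-parameters are mine.

import math
import torch

class GeometricOperatorConv(torch.nn.Module):
    """First conv layer whose kernels are generated from a few trainable
    parameters of a Gabor-like operator instead of free kernel weights."""
    def __init__(self, out_channels=8, ksize=7):
        super().__init__()
        self.sigma = torch.nn.Parameter(torch.rand(out_channels) + 1.0)
        self.theta = torch.nn.Parameter(torch.rand(out_channels) * math.pi)
        self.freq = torch.nn.Parameter(torch.rand(out_channels) + 0.5)
        self.pad = ksize // 2
        r = torch.arange(ksize) - ksize // 2
        self.register_buffer("yy", r.view(-1, 1).repeat(1, ksize).float())
        self.register_buffer("xx", r.view(1, -1).repeat(ksize, 1).float())

    def kernels(self):
        s = self.sigma.view(-1, 1, 1)
        t = self.theta.view(-1, 1, 1)
        f = self.freq.view(-1, 1, 1)
        xr = self.xx * torch.cos(t) + self.yy * torch.sin(t)
        env = torch.exp(-(self.xx ** 2 + self.yy ** 2) / (2 * s ** 2))
        return (env * torch.cos(f * xr)).unsqueeze(1)    # (out, 1, k, k)

    def forward(self, x):                                # x: (N, 1, H, W)
        return torch.nn.functional.conv2d(x, self.kernels(), padding=self.pad)

layer = GeometricOperatorConv()
feats = layer(torch.randn(2, 1, 32, 32))                 # (2, 8, 32, 32)

Only sigma, theta and freq are trained, which mirrors the reduction in first-layer parameters that the record reports.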
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Determinantal point processes (DPPs) is an effective tool to deliver diversity on multiple machine learning and computer vision tasks. Under deep learning framework, DPP is typically optimized via approximation, which is not straightforward and has some conflict with diversity requirement. We note, however, there has been no deep learning paradigms to optimize DPP directly since it involves matrix inversion which may result in highly computational instability. This fact greatly hinders the wide use of DPP on some specific objectives where DPP serves as a term to measure the feature diversity. In this paper, we devise a simple but effective algorithm to address this issue to optimize DPP term directly expressed with L-ensemble in spectral domain over gram matrix, which is more flexible than learning on parametric kernels. By further taking into account some geometric constraints, our algorithm seeks to generate valid sub-gradients of DPP term in case when the DPP gram matrix is not invertible (no gradients exist in this case). In this sense, our algorithm can be easily incorporated with multiple deep learning tasks. Experiments show the effectiveness of our algorithm, indicating promising performance for practical learning problems. Diversity is desired in multiple machine learning and computer vision tasks (e.g., image hashing (Chen et al., 2017; Carreira-Perpinán & Raziperchikolaei, 2016) , descriptor learning , metric learning (Mishchuk et al., 2017) and video summarization (Sharghi et al., 2018; Liu et al., 2017) ), in which sub-sampled points or learned features need to spread out through a specific bounded space. Originated from quantum physics, determinantal point processes (DPP) have shown its power in delivering such properties Kulesza & Taskar, 2011b) . Compared with other diversity-oriented techniques (e.g., entropy (Zadeh et al., 2017) and orthogonality ), DPP shows its superiority as it incorporates only one single metric and delivers genuine diversity on any bounded space Affandi et al., 2013; Gillenwater et al., 2012) . Therefore, DPP has been utilized in a large body of diversity-oriented tasks. In general, sample points from a DPP tend to distribute diversely within a bounded space A . Given a positive semi-definite kernel function κ : A × A → R, the probability of a discrete point set X ⊂ A under a DPP with kernel function κ can be characterized as: where L is a |X | × |X | matrix with entry L ij = κ(x i , x j ) and x i , x j ∈ X . L is called L-ensemble. Note that A is a continuous space, whereas X is finite. In the Hilbert space associated with κ, larger determinant implies larger spanned volume, thus the mapped points tend not to be similar or linearly dependent. DPP can be viewed from two perspectives: sampling and learning. A comprehensive introduction to mathematical fundamentals of DPP for sampling from a discrete space can be found in . Based on this, a line of works has been proposed (Kulesza & Taskar, 2011a; Kang, 2013; Hennig & Garnett, 2016) . In this paper, we concentrate on learning DPPs. In learning of DPP, the term det(L) is typically treated as a singleton diversity measurement and is extended to learning paradigms on continuous space (Chao et al., 2015; Kulesza & Taskar, 2010; Affandi et al., 2014) . 
There are generally two lines of strategies to learn DPPs: Approximation. This type of methods is to convert DPP into a simpler format which can ease and stabilize the computation. low-rank approximation proves powerful in easing the computational burden (Gartrell et al., 2017) , in which the gram matrix is factorized as L = BB where B ∈ n×m with m n. This decomposition can also reduce the complexity which is originally a cubic time of |L|. Kulesza & Taskar (2011b) explicitly expressed the kernel with κ(x, y) = σ 1 σ 2 δ(x ) δ(y ), where σ measures the intrinsic quality of the feature and δ(·) is function mapping input x to a feature space. In this sense, the pairwise similarity is calculated in Euclidean feature space with cosine distance. Elfeki et al. (2019) suggest approximating a given distribution by approximating the eigenvalues of the corresponding DPP. As such , the computation can be eased and become stable. Following this, DPP is also applied on some visual tasks, such as video summarization (Sharghi et al., 2018) , ranking (Liu et al., 2017) and image classification (Xie et al., 2017) . It can be noted that the approximation is not straightforward for DPP, thus cannot fully deliver the diversity property (e.g. resulting in rank-deficiency). Direct optimization. While the aforementioned methods optimize DPP with specific approximation, a series of efforts also seek to optimize the DPP term directly (Gillenwater et al., 2014; Mariet & Sra, 2015; Bardenet & Titsias, 2015) . In this setting, the whole gram matrix L corresponding to the pairwise similarity among features is updated directly, which allows accommodating more flexible feature mapping functions rather than an approximation. Gillenwater et al. (2014) proposed an Expectation-Maximization algorithm to update marginal kernel DPP K = L(L + I) −1 , together with a baseline K-Ascent derived from projected gradient ascent (Levitin & Polyak, 1966) . Mariet & Sra (2015) extended DPP from a fixed-point perspective and Bardenet & Titsias (2015) proposed to optimize DPP upon a lower bound in variational inference fashion. A key problem of such line of works is that the computation is not differentiable, making it difficult to be used in deep learning frameworks. To the best of our knowledge, there is no previous method incorporating DPP as a feature-level diversity metric in deep learning. A key difficulty in doing so is that the calculation of the gradient of det(L) involves matrix inversion, which can be unstable and inaccurate in GPUs. Though KAscent seems to be a naive rule, it still needs explicit matrix inversion in the first step before the projection procedure. This fact greatly hinders the tight integration of DPP with deep networks. Some alternative methods seek to reach diversity under more constrained settings. For example, resorted to a global pairwise orthogonality constraint in hyper-sphere and Zadeh et al. (2017) employed statistical moments to measure the diversity. However, compared with DPP, such measurements are unable to fully characterize diversity in an arbitrary bounded space. In this paper, rather than providing more efficient DPP solvers, we concentrate on delivering a feasible feature-level DPP integration under the deep learning framework. To this end, we revisit the spectral decomposition of DPP and propose a sub-gradient generation method which can be tightly integrated with deep learning. 
Our method differs from both approximation and direct optimization by introducing a "differentiable direct optimization" procedure, and thus can produce genuinely diverse features in a continuous bounded space. Our method is stable and scalable to relatively large datasets with a specific mini-batch sampling strategy, which is verified by several experiments on various tasks. Notations: Bold lower case x and bold upper case K represent a vector and a matrix, respectively. det(·) and Tr(·) calculate the determinant and trace of a matrix, respectively. A ⊗ B is the element-wise product of matrices A and B. |X | and |x| measure the cardinality of a finite set X and the L2 length of a vector x, respectively. ⟨x, y⟩ calculates the inner product of the two vectors. x = diag(X) transforms a diagonal matrix X into its vector form x, and vice versa. We abbreviate "positive semi-definite" and "positive definite" as PSD and PD, respectively. R denotes the real numbers. In this paper, we investigated the problem of learning diverse features via a determinantal point process under a deep learning framework. To overcome the instability in computing the gradient, which involves the matrix inverse, we developed an efficient and reliable procedure called proper spectral sub-gradient generation. The generated proper sub-gradient can replace the true gradient and performs well in applications. We also considered how to constrain the features into a bounded space, since in this way one can make the behavior of the network more predictable. To this end, we further incorporated Wasserstein GAN into our framework. Together, DPP+WGAN showed significant performance on both some common criteria and feature space utility. A APPENDIX
We proposed a specific back-propagation method via proper spectral sub-gradient to integrate determinantal point process to deep learning framework.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:472
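The record above optimises the L-ensemble term det(L_X) directly, which is numerically fragile because its gradient involves a matrix inverse. Below is a small sketch of the DPP log-probability, log det(L_X) - log det(L + I), computed from eigenvalues with a floor so the backward pass stays finite near singular gram matrices; the clamping rule is a stand-in of mine and not the paper's exact "proper spectral sub-gradient" construction.

import torch

def dpp_log_prob(L, idx, eps=1e-6):
    """log P(X) = log det(L_X) - log det(L + I) for an L-ensemble DPP.
    Determinants come from eigenvalues, clamped away from zero so gradients
    remain finite when L_X is (numerically) singular."""
    L_X = L[idx][:, idx]
    ev_sub = torch.linalg.eigvalsh(L_X).clamp_min(eps)
    ev_all = torch.linalg.eigvalsh(L).clamp_min(0.0)
    return ev_sub.log().sum() - torch.log1p(ev_all).sum()

# Gram matrix from unit-normalised features, so L is PSD by construction.
raw = torch.randn(6, 16, requires_grad=True)
feats = torch.nn.functional.normalize(raw, dim=1)
L = feats @ feats.t()
loss = -dpp_log_prob(L, torch.tensor([0, 2, 3]))
loss.backward()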
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: The quality of a machine translation system depends largely on the availability of sizable parallel corpora. For the recently popular Neural Machine Translation (NMT) framework, data sparsity problem can become even more severe. With large amount of tunable parameters, the NMT model may overfit to the existing language pairs while failing to understand the general diversity in language. In this paper, we advocate to broadcast every sentence pair as two groups of similar sentences to incorporate more diversity in language expressions, which we name as parallel cluster. Then we define a more general cluster-to-cluster correspondence score and train our model to maximize this score. Since direct maximization is difficult, we derive its lower-bound as our surrogate objective, which is found to generalize point-point Maximum Likelihood Estimation (MLE) and point-to-cluster Reward Augmented Maximum Likelihood (RAML) algorithms as special cases. Based on this novel objective function, we delineate four potential systems to realize our cluster-to-cluster framework and test their performances in three recognized translation tasks, each task with forward and reverse translation directions. In each of the six experiments, our proposed four parallel systems have consistently proved to outperform the MLE baseline, RL (Reinforcement Learning) and RAML systems significantly. Finally, we have performed case study to empirically analyze the strength of the cluster-to-cluster NMT framework. Recently, an encode-decoder neural architecture has surged and gained its popularity in machine translation. In this framework, the encoder builds up a representation of the source sentence and the decoder uses its previous RNN hidden state and attention mechanism to generate target translation. In order to better memorize the input information, an attention mechanism has been exploited to further boost its performance. In order to train the attentive encoder-decoder architecture, Maximum Likelihood Estimation (MLE) algorithm has been widely used, which aims at maximizing the point-to-point (one sentence to one sentence) log-likelihood of data pairs in a given dataset. However, this algorithm has severely suffered from data sparsity problem, or in other word, maximizing only likelihood the existing language pairs might make the model blind to all the non-existing similar sentence pairs. Thus, the large neural model might overfit to certain prototypes existing in the training set while failing to generalize more unseen but similar scenarios in test time.hurting its semantic meaning. 2) Model-Centroid Augmentation (RL), and BID13 leverage model-generated candidates as pseudo training samples, which are weighted with rewards to enhance the model learning. By exploring self-generated candidates, the model is able to understand the diversity in the output space. In pseudo-learning algorithms, both RAML and RL can be interpreted as broadcasting a target ground truth as a cluster of analogues while leaving the source input untouched, which though helps the model understand target diversity, fails to capture the input diversity. 
In order to explore both sides' diversity, we advocate a novel and general cluster-to-cluster framework of pseudo learning, which first broadcasts both source and target sentence as clusters and then train the model to comprehend their correspondence, as described in FIG0 .In this paper, we first introduce the concept of parallel cluster, then design the cluster-to-cluster correspondence score as our optimization objective, based on which, we derive its lower bound KL-divergence as our surrogate objective for model training. In order to realize our proposed framework, we design four parallel systems and apply them to three recognized machine translation tasks with both forward and reverse translation directions, these four systems have all demonstrated their advantages over the existing competing algorithms in six translation tasks. In the appendices, we draw samples from the parallel clusters and further analyze their properties to verify our motivation.The contributions of our paper can be summarized as follows: 1) We are the first to propose the concept of cluster-to-cluster framework, which provides a novel perspective to current sequence-tosequence learning problems. 2) We delineate the framework and arrive in a novel KL-divergence loss function and generalizes several existing algorithms as special cases, which provides a highlevel understanding about the previous algorithms.2 RELATED LITERATURE In this paper, we propose a cluster-to-cluster learning framework and incorporate this concept into neural machine translation. Our designed systems have proved to be efficient in helping current NMT model to generalize in both source and target sides. In the cluster-to-cluster framework, the cooperation of four agents can augment valuable samples and alleviate data sparsity, and achieve significant improvement compared with strong baseline systems. We believe the concept of clusterto-cluster learning can be applicable to a wide range of natural language or computer vision tasks, which will be explored in the future. Appendices A SYSTEM-DESIGN Sequence to sequence problem (machine translation) can be considered to produce an output sequence Y = (y 1 , y 2 , . . . , y T ), y t ∈ A given an input X. Given input-target pairs (X, Y * ), the generated sequence Y on test is evaluated with task-specific score R(Y, Y * ). Recurrent neural networks have been widely used in sequence to sequence prediction tasks. As proposed in and , the basic idea is to first encode the input sequence as a variablelength feature vectors, then apply attention mechanism to compute weighted average over the input vectors and summarize a context vector, with which, previous hidden states and previous label are fed into the decoder RNN to predict the next state and its label. In our approach, attention-based encoder-decoder is leveraged for both the translation and cluster models, shown as: DISPLAYFORM0 A.1 RL NMT In order to train our RL system as well as adaptive cluster, we need to define a task-level reward as driving signal. Instead of directly applying BLEU or other evaluation metric, we advocate to use a surrogate n-gram match interpolation, as shown as: DISPLAYFORM1 where N n denotes the number of n-gram match between Y and Y * . In order to alleviate sequencereward sparseness, we further split it as a series of local reward to drive model's policy search at every time step. Formally, we write the step-wise reward r(y t |y 1:t−1 , Y * ) as following. 
(22) where N (Y,Ỹ ) represents the occurrence of n-gramỸ in sequence Y , specifically, if a certain nsequence y t−n+1:t appears in reference and it's not repeating more than needed, then we assign a corresponding matching score to y t , the policy gradient is described as: DISPLAYFORM2 DISPLAYFORM3 A.2 RAML NMT In order to sample from the intractable payoff distribution for system-A/B as well as our implemented RAML system, we adopt stratified sampling technique described in . Given a sentence Y * , we first sample an edit distance m, and then randomly select m positions to replace the original labels. For each sentence, we randomly sample four candidates to perform RAML training. DISPLAYFORM4 B MATHEMATICAL ANALYSIS We optimize the model parameters of our cluster-to-cluster models by minimizing the lower-bound KL-divergence instead of maximizing the original correspondence score, to characterize the difference between the two objective function, we analyze the relationships between these two functions below: DISPLAYFORM5 which can be further written as: DISPLAYFORM6 therefore, we can derive: DISPLAYFORM7 Since both cluster and translation confidence score c(Y |Y * , X * ) and w(Y |X, X * ) require computing the marginalized probability p(Y |X * ) known to be intractable for variable-length sequences, here we adopt different mechanisms to approximate them. In system-A and C, we simplify DISPLAYFORM8 pη(Y |X * ) . In system-B and D, since Y is broadcast through the translation system, the marginalized probabilityp(Y |X * ) is close to one, we discard this factor and approximate c(Y |Y DISPLAYFORM9
We invent a novel cluster-to-cluster framework for NMT training, which can better understand the both source and target language diversity.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:473
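The appendix of the record above defines a surrogate task-level reward from n-gram matches between a sampled translation and the reference. Since the exact interpolation formula is elided (DISPLAYFORM1), the sketch below uses clipped n-gram match rates averaged uniformly over n, which is an assumption of mine rather than the paper's weighting.

from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_match_reward(cand, ref, max_n=4):
    """Clipped n-gram matches between candidate and reference, averaged over n.
    The paper interpolates the per-n counts N_n; uniform weights are assumed here
    because the exact formula is not given in the excerpt."""
    score = 0.0
    for n in range(1, max_n + 1):
        c, r = ngrams(cand, n), ngrams(ref, n)
        matches = sum(min(cnt, r[g]) for g, cnt in c.items())
        total = max(1, len(cand) - n + 1)
        score += matches / total
    return score / max_n

print(ngram_match_reward("the cat sat on the mat".split(),
                         "the cat sat on a mat".split()))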
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: The capability of making interpretable and self-explanatory decisions is essential for developing responsible machine learning systems. In this work, we study the learning to explain the problem in the scope of inductive logic programming (ILP). We propose Neural Logic Inductive Learning (NLIL), an efficient differentiable ILP framework that learns first-order logic rules that can explain the patterns in the data. In experiments, compared with the state-of-the-art models, we find NLIL is able to search for rules that are x10 times longer while remaining x3 times faster. We also show that NLIL can scale to large image datasets, i.e. Visual Genome, with 1M entities. In this work, we propose Neural Logic Inductive Learning, a differentiable ILP framework that learns explanatory rules from data. We demonstrate that NLIL can scale to very large datasets while being able to search over complex and expressive rules. More importantly, we show that a scalable ILP method is effective in explaining decisions of supervised models, which provides an alternative perspective for inspecting the decision process of machine learning systems.
An efficient differentiable ILP model that learns first-order logic rules that can explain the data.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:474
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Neural network-based classifiers parallel or exceed human-level accuracy on many common tasks and are used in practical systems. Yet, neural networks are susceptible to adversarial examples, carefully perturbed inputs that cause networks to misbehave in arbitrarily chosen ways. When generated with standard methods, these examples do not consistently fool a classifier in the physical world due to a combination of viewpoint shifts, camera noise, and other natural transformations. Adversarial examples generated using standard techniques require complete control over direct input to the classifier, which is impossible in many real-world systems. We introduce the first method for constructing real-world 3D objects that consistently fool a neural network across a wide distribution of angles and viewpoints. We present a general-purpose algorithm for generating adversarial examples that are robust across any chosen distribution of transformations. We demonstrate its application in two dimensions, producing adversarial images that are robust to noise, distortion, and affine transformation. Finally, we apply the algorithm to produce arbitrary physical 3D-printed adversarial objects, demonstrating that our approach works end-to-end in the real world. Our results show that adversarial examples are a practical concern for real-world systems. The existence of adversarial examples for neural networks has until now been largely a theoretical concern. While minute, carefully-crafted perturbations can cause targeted misclassification in a neural network, adversarial examples produced using standard techniques lose adversariality when directly translated to the physical world as they are captured over varying viewpoints and affected by natural phenomena such as lighting and camera noise. This suggests that practical systems may not be at risk because adversarial examples generated using standard techniques are not robust in the physical world. We show that neural network-based classifiers are vulnerable to physical-world adversarial examples. We introduce a new algorithm for reliably producing physical 3D objects that are adversarial over a distribution of viewpoints. FIG0 shows an example of an adversarial object constructed using our approach, where a 3D-printed turtle is consistently classified as rifle by an ImageNet classifier. In this paper, we demonstrate the efficacy and generality of our method, demonstrating conclusively that adversarial examples are a concern in real-world systems. The results and quantative analysis in this section demonstrate the efficacy of EOT and confirm the existence of physical adversarial examples. Here, we perform a qualitative analysis of the results:Modeling Perception. The EOT algorithm as presented in Section 2 presents a general method to construct adversarial examples over a chosen perceptual distribution, but notably gives no guarantees for observations of the image outside of the chosen distribution. In constructing physical-world adversarial objects, we use a crude, high-variance approximation of the rendering and capture process, and this succeeds in ensuring robustness to a diverse set of environments; see, for example, FIG5 , which shows the same adversarial turtle in vastly different environments. 
In specialized domains, however, a domain expert may opt to model the perceptual distribution precisely in order to better constrain the search space. (Footnote 1: Although the viewpoints were not selected in any way and were simply the result of walking around the objects, moving them up/down, etc., we hesitate to call them "random" since they were not in fact generated numerically or sampled from a concrete distribution, in contrast with the rendered 3D examples.) Our work shows that adversarial examples pose a practical threat to systems using neural network-based image classifiers. By introducing EOT, a general-purpose algorithm for creating robust adversarial examples under any chosen distribution, and modeling 3D rendering and printing within the framework of EOT, we succeed in fabricating three-dimensional adversarial objects. With access only to low-cost commercially available 3D printing technology, we successfully print physical adversarial objects that are strongly classified as a chosen target class over a variety of angles, viewpoints, and lighting conditions by a standard ImageNet classifier.
We introduce a new method for synthesizing adversarial examples robust in the physical world and use it to fabricate the first 3D adversarial objects.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:475
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Representations of sets are challenging to learn because operations on sets should be permutation-invariant. To this end, we propose a Permutation-Optimisation module that learns how to permute a set end-to-end. The permuted set can be further processed to learn a permutation-invariant representation of that set, avoiding a bottleneck in traditional set models. We demonstrate our model's ability to learn permutations and set representations with either explicit or implicit supervision on four datasets, on which we achieve state-of-the-art results: number sorting, image mosaics, classification from image mosaics, and visual question answering. Consider a task where each input sample is a set of feature vectors with each feature vector describing an object in an image (for example: person, table, cat). Because there is no a priori ordering of these objects, it is important that the model is invariant to the order that the elements appear in the set. However, this puts restrictions on what can be learned efficiently. The typical approach is to compose elementwise operations with permutation-invariant reduction operations, such as summing (Zaheer et al., 2017) or taking the maximum (Qi et al., 2017) over the whole set. Since the reduction operator compresses a set of any size down to a single descriptor, this can be a significant bottleneck in what information about the set can be represented efficiently (Qi et al., 2017; Le & Duan, 2018; Murphy et al., 2019) .We take an alternative approach based on an idea explored in Vinyals et al. (2015a) , where they find that some permutations of sets allow for easier learning on a task than others. They do this by ordering the set elements in some predetermined way and feeding the resulting sequence into a recurrent neural network. For instance, it makes sense that if the task is to output the top-n numbers from a set of numbers, it is useful if the input is already sorted in descending order before being fed into an RNN. This approach leverages the representational capabilities of traditional sequential models such as LSTMs, but requires some prior knowledge of what order might be useful.Our idea is to learn such a permutation purely from data without requiring a priori knowledge (section 2). The key aspect is to turn a set into a sequence in a way that is both permutation-invariant, as well as differentiable so that it is learnable. Our main contribution is a Permutation-Optimisation (PO) module that satisfies these requirements: it optimises a permutation in the forward pass of a neural network using pairwise comparisons. By feeding the resulting sequence into a traditional model such as an LSTM, we can learn a flexible, permutation-invariant representation of the set while avoiding the bottleneck that a simple reduction operator would introduce. Techniques used in our model may also be applicable to other set problems where permutation-invariance is desired, building on the literature of approaches to dealing with permutation-invariance (section 3).In four different experiments, we show improvements over existing methods (section 4). The former two tasks measure the ability to learn a particular permutation as target: number sorting and image mosaics. 
We achieve state-of-the-art performance with our model, which shows that our method is suitable for representing permutations in general. The latter two tasks test whether a model can learn to solve a task that requires it to come up with a suitable permutation implicitly: classification from image mosaics and visual question answering. We provide no supervision of what the permutation should be; the model has to learn by itself what permutation is most useful for the task at hand. (Figure caption: in the ordering cost C, elements of X are compared to each other; gradients are applied to unnormalised permutations, which are then normalised to proper permutations.) Here, our model also beats the existing models and we improve the performance of a state-of-the-art model in VQA with it. This shows that our PO module is able to learn good permutation-invariant representations of sets using our approach. In this paper, we discussed our Permutation-Optimisation module to learn permutations of sets using an optimisation-based approach. In various experiments, we verified the merit of our approach for learning permutations and, from them, set representations. We think that the optimisation-based approach to processing sets is currently underappreciated and hope that the techniques and results in this paper will inspire new algorithms for processing sets in a permutation-invariant manner. Of course, there is plenty of work to be done. For example, we have only explored one possible function for the total cost; different functions capturing different properties may be used. The main drawback of our approach is the cubic time complexity in the set size compared to the quadratic complexity of Mena et al., which limits our model to tasks where the number of elements is relatively small. While this is acceptable on the real-world dataset that we used - VQA with up to 100 object proposals per image - with only a 30% increase in computation time, our method does not scale to the much larger set sizes encountered in domains such as point cloud classification. Improvements in the optimisation algorithm may improve this situation, perhaps through a divide-and-conquer approach. We believe that going beyond tensors as basic data structures is important for enabling higher-level reasoning. As a fundamental mathematical object, sets are a natural step forward from tensors for modelling unordered collections. The property of permutation invariance lends itself to greater abstraction by allowing data that has no obvious ordering to be processed, and we took a step towards this by learning an ordering that existing neural networks are able to take advantage of.
Learn how to permute a set, then encode permuted set with RNN to obtain a set representation.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:476
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: The physical design of a robot and the policy that controls its motion are inherently coupled. However, existing approaches largely ignore this coupling, instead choosing to alternate between separate design and control phases, which requires expert intuition throughout and risks convergence to suboptimal designs. In this work, we propose a method that jointly optimizes over the physical design of a robot and the corresponding control policy in a model-free fashion, without any need for expert supervision. Given an arbitrary robot morphology, our method maintains a distribution over the design parameters and uses reinforcement learning to train a neural network controller. Throughout training, we refine the robot distribution to maximize the expected reward. This results in an assignment to the robot parameters and neural network policy that are jointly optimal. We evaluate our approach in the context of legged locomotion, and demonstrate that it discovers novel robot designs and walking gaits for several different morphologies, achieving performance comparable to or better than that of hand-crafted designs. An agent's ability to navigate through and interact with its environment depends not just on its skill at planning and controlling its motion, but also on its physical design. Different physical designs are inherently better suited to different tasks and environments. By making appropriate choices during fabrication, mechanical elements can be designed to improve robustness to non-idealities such as errors in perception, delays in actuation, etc., and indeed, make control problem an easier one to solve. At the same time, robots that take different forms may find completely different control strategies to be optimal to complete the same task. Therefore, the physical and computational design of an agent are inherently coupled, and must ideally be jointly optimized if the robot is to successfully complete a task in a particular environment.Consider the development of a legged robot for locomotion. Variations in physical design will require changes to the joint torques in order to preserve a particular locomotion behavior (e.g., a heavier torso requires greater torque at the ankle), and will likely result in completely different walking gaits, even when the morphology is preserved. In fact, some changes to design may render locomotion impossible for the target operating environment (e.g., a robot with long feet may be unable to locomote up an incline). Meanwhile, careful choice of bipedal design enables passive walking BID20 BID9 BID4 . It is therefore beneficial to not simply consider the robot's design or gait to be fixed, but to optimize both jointly for the target environment and task. Similar co-design can be beneficial in other settings-for example for the control policy and physical characteristics of digits in robotic grippers for grasping.While a robot's physical design and the corresponding control policy are inherently coupled, most existing methods ignore this coupling, instead choosing to alternate between separate design and control phases. 
Existing approaches that jointly reason over design and control BID7 BID12 BID46 assume knowledge of an accurate model of the robot dynamics and require expert supervision (e.g., to provide a suitable initial design and guide the optimization process). However, these restrictive assumptions limit their applicability to a handful of specific settings, and often yield solutions heavily influenced by expert intuition. In this work, we seek a general approach-one that can optimize a robot's physical characteristics jointly with controllers of a desired complexity (Fig. 1), that can be applied to general tasks in some given environment, and that can explore the joint search space of physical design and computational control in a purely data-driven way, without a model of the robot dynamics and independent of the biases of expert intuition. (Figure 1 caption: Our algorithm learns a robot's physical design jointly with the control policy. Here we show the learned designs evolving over time for the Hopper (top left), the Walker2d (top right) and the Ant (bottom), each with the default Roboschool design for comparison. Scale is fixed for each robot. Note that these designs correspond to modes of the distribution over robot designs that our algorithm maintains during training.) We develop this approach in the context of determining the physical parameters of an articulated agent-the lengths and thicknesses of each limb in a given morphology-through joint training with a neural network for control, with the objective of achieving locomotion. Our method maintains a distribution over these physical parameters, and simultaneously trains the parameters of this distribution with those of a neural network controller, using deep reinforcement learning. In this way, we pursue a design distribution and control policy that are jointly optimal for the given task and environment. Experimental results show that starting from random initializations, our approach is able to find novel designs and walking gaits that match or exceed the performance of manually designed agents. To the best of our knowledge, our method is the first to successfully carry out such a joint optimization of design and control in a completely model-free manner. We proposed what is, to the best of our knowledge, the first model-free algorithm that jointly optimizes over the physical design of a robot and the corresponding control policy, without any need for expert supervision. Given an arbitrary morphology, our robot maintains a distribution over the robot design parameters and learns these parameters together with a neural network controller using policy gradient-based reinforcement learning. This results in an assignment to the policy over robot parameters and the control policy that are jointly optimal. We evaluated our approach on a series of different legged robot morphologies, demonstrating that it results in novel robot designs and walking gaits, achieving performance that either matches or exceeds that of manually defined designs. Our findings suggest several avenues for future work. The most direct is extending the current approach to find optimized designs for uneven terrain, the presence of obstacles, changes in slope, variations in friction, etc. We are also interested in extending our framework to relax the assumption that the morphology is pre-defined.
Finally, we are investigating applications to different types of agents and design spaces beyond legged robots (e.g., end-effectors), and exploring appropriate stochastic parameterization for such designs.
Use deep reinforcement learning to design the physical attributes of a robot jointly with a control policy.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:477
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Deep learning enables training of large and flexible function approximators from scratch at the cost of large amounts of data. Applications of neural networks often consider learning in the context of a single task. However, in many scenarios what we hope to learn is not just a single task, but a model that can be used to solve multiple different tasks. Such multi-task learning settings have the potential to improve data efficiency and generalization by sharing data and representations across tasks. However, in some challenging multi-task learning settings, particularly in reinforcement learning, it is very difficult to learn a single model that can solve all the tasks while realizing data efficiency and performance benefits. Learning each of the tasks independently from scratch can actually perform better in such settings, but it does not benefit from the representation sharing that multi-task learning can potentially provide. In this work, we develop an approach that endows a single model with the ability to represent both extremes: joint training and independent training. To this end, we introduce matrix-interleaving (Mint), a modification to standard neural network models that projects the activations for each task into a different learned subspace, represented by a per-task and per-layer matrix. By learning these matrices jointly with the other model parameters, the optimizer itself can decide how much to share representations between tasks. On three challenging multi-task supervised learning and reinforcement learning problems with varying degrees of shared task structure, we find that this model consistently matches or outperforms joint training and independent training, combining the best elements of both. While deep learning has enabled remarkable levels of generalization through the use of function approximators, this comes at the cost of large amounts of data, which remains a critical challenge in deploying deep learning to a number of domains. When combined with deep networks, multitask learning offers the promise of building more powerful representations using less data per task, leading to greater performance and data efficiency. However, multi-task deep learning has also posed considerable challenges. Numerous works have observed that joint training on multiple tasks can actually decrease task performance due to the negative influence of other tasks (Parisotto et al., 2015; Rusu et al., 2016a) . Indeed, training networks entirely independently on each task has remained a strong approach, to the point that multiple multi-task methods have first trained models independently before using them to train a multi-tasking model (Parisotto et al., 2015; Rusu et al., 2016a; Ghosh et al., 2017; Teh et al., 2017; . Moreover, our experiments in Section 6 indicate that three recently proposed methods for multi-task learning are all surpassed by training models independently per task. However, training independent models will only work well when provided enough data per task, and precludes potential positive data-efficiency gains from multi-task learning, only providing protection against negative transfer. 
Further, while a number of works have successfully shared parameters, finding an architecture with the appropriate level of parameter sharing for a given problem domain can require a considerable amount of manual engineering. In this work, we aim to develop a multi-task learning method that can perform well both when tasks share very little and when they share a large amount of structure. To address this problem, we consider how a single neural network model can represent two extremes: independent models, when optimization challenges prevail, or a single model with shared weights, when sharing is beneficial. Further, we would like such a model to be able to represent intermediate levels of model sharing, when applicable. One option for performing independent training within a single model is to put separate networks with independent weights into a single model, using the task ID to select which network prediction to output. However, this prevents any sharing. An alternative approach is to condition the model on the task ID, through various conditioning approaches, including additive and multiplicative approaches such as FiLM (Perez et al., 2018). In fact, point-wise multiplicative conditioning, as proposed in FiLM, can indeed represent separate networks by selecting which parts of the network are used for different tasks, as can a number of other approaches in multi-task learning (Rosenbaum et al., 2017; 2019; Fernando et al., 2017). Yet, these approaches still require an optimization over shared parameters in order to select which parameters are used for each task. These shared parameters can introduce significant optimization challenges. We instead consider how to allow a model to perform optimization on only shared parameters, only disjoint parameters, or any combination thereof. We can achieve this by simply interleaving learned per-task matrices at each layer of a jointly-trained neural network. When optimization over shared parameters is ineffective, the model can still represent a full neural network per task using only the per-task matrices, resulting in independent training; while using identical per-task matrices results in standard joint training. Intermediately, a mix of shared and per-task parameters may be used. In effect, by incorporating these matrices into the network, the optimizer itself can automatically and dynamically modulate the degree to which a representation is shared between tasks, depending on the problem domain and the optimization progress, and can do so without having to optimize shared parameters. The primary contribution of this paper is a simple yet effective approach for multi-task learning that can represent and smoothly interpolate between independent training and joint training, via matrix interleaving (Mint). We describe how we can implement Mint in deep multi-task models and show its effectiveness in improving data efficiency and generalization in multi-task settings while providing intuition about the reasons why this architecture performs so well. Further, we show that the model can be extended to goal-conditioned reinforcement learning in a straightforward manner by allowing the model to generate the interleaved matrices conditioned on task information such as the goal. We evaluate Mint on sets of tasks with both high and low levels of shared structure and find that it performs well in both settings, performing comparably to or outperforming both joint training and independent training, effectively combining the best elements of both.
Further, in comparison to previous methods that use multiplicative interactions for continual learning (Cheung et al., 2019) and for general conditioning (Perez et al., 2018), Mint is better able to separate tasks by avoiding the need to optimize over shared parameters and can empirically produce substantially better performance on a range of challenging multi-task problems. Finally, Mint also outperforms state-of-the-art approaches for multi-task learning while being significantly simpler to implement. Simultaneous optimization of multiple, potentially unrelated tasks can prove challenging for deep neural networks. Recent multi-task learning architectures attempt to mitigate this issue by providing alternative pathways for information to flow through a neural network for each task. In this paper, we introduce a new multi-task learning module, Mint, which provides theoretical guarantees of universal approximation even for multi-task settings with no shared structure. We conjecture that this property, not shared by similar multi-task architectures, enables Mint to outperform other multi-task approaches on a variety of reinforcement learning benchmarks. We also observe that Mint is able to match or improve upon the performance of independent training. While Mint exhibits strong performance gains over previous methods, one potential limitation is that the task matrices may introduce a significant number of parameters, particularly as the number of tasks increases. As discussed, this can be alleviated for problem domains with many tasks, by learning a single neural network that produces the matrices and biases conditioned on the task descriptor. Further, in our experiments, we find that Mint-based networks can outperform prior methods while using comparable or fewer parameters. In summary, Mint is a simple, yet effective approach for deep multi-task learning. Its implementation requires minimal modifications over standard deep networks. As a result, we expect it to be straightforward for future work to build upon or use Mint for more effective multi-task learning in deep networks. Appendix A (Proof of Theorem 1), Lemma 1: For a given α_i, applying Mint to y^(l−1) can express an arbitrary affine transformation at layer l for each task.
We propose an approach that endows a single model with the ability to represent both extremes: joint training and independent training, which leads to effective multi-task learning.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:478
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Training agents to operate in one environment often yields overfitted models that are unable to generalize to the changes in that environment. However, due to the numerous variations that can occur in the real-world, the agent is often required to be robust in order to be useful. This has not been the case for agents trained with reinforcement learning (RL) algorithms. In this paper, we investigate the overfitting of RL agents to the training environments in visual navigation tasks. Our experiments show that deep RL agents can overfit even when trained on multiple environments simultaneously. We propose a regularization method which combines RL with supervised learning methods by adding a term to the RL objective that would encourage the invariance of a policy to variations in the observations that ought not to affect the action taken. The results of this method, called invariance regularization, show an improvement in the generalization of policies to environments not seen during training. Learning control policies from high-dimensional sensory input has been gaining more traction lately due to the popularity of deep reinforcement learning (DRL) Mnih et al. (2015) ; ; Zhang et al. (2018b) ; Rakelly et al. (2019) , which enables learning the perception and control modules simultaneously. However, most of the work done in RL chooses to evaluate the learned policies in the same environment in which training occurred Cobbe et al. (2018) . Using the same environments to train and test agents does not give any insight into the generalization abilities of the learned policy. There could be a number of changes in the environment at test time that would degrade the agent's performance. Variations could appear in the visual aspects that determine the agent's observation, the physical structure that determines the agent's state and even some aspects that are related to the agent's goal (Figure 1 ). For example, different observations of the same room are encountered at different times of the day (different lighting conditions). New obstacles could be present. Levels of a game could be different, yet playing a few levels should often be enough to figure out how to play the rest. Such variations might result in a new environment where the control model that defined the training environment has changed. A robust policy should generalize from its experience and perform the same skills in the presence of these variations. DRL agents have been notorious for overfitting to their training environments Cobbe et al. (2018) . An agent could have drastically different performance on testing environments even if it manages to maximize the reward during training Zhang et al. (2018a) . Supervised learning algorithms have been shown to have some generalization guarantees when adding proper regularization Mohri et al. (2018) . However, these guarantees are weakened in reinforcement learning algorithms where the source of the data is not i.i.d.. In order to make use of the progress of DRL algorithms in practice we need policies that are robust to possible changes in the sensory inputs, surrounding structure and even some aspects of the task. In this paper we study the notion of generalization that is appropriate for visual navigation control policies that are learned with DRL. 
We present: (1) a study of the generalization of visual control policies to certain changes in the underlying dynamical system; (2) an alternative training method that combines DRL with supervised learning, thus using DRL to learn a controller while leveraging the generalization properties of supervised learning. In our experiments we use the VizDoom platform Kempka et al. (2016) which is easily customizable and enables the generation of numerous variants of a given environment. We present a study of the generalization capabilities of visual navigation agents trained with deep reinforcement learning algorithms. We formalize what it means to generalize in the context of a POMDP. We find that the tendency of RL agents to overfit even when exposed to large training sets is quite visible. We show that using domain randomization with RL, without adding invariant features to the input such as the depth maps, is not enough to generalize. In the second part, we proposed Invariance Regularization (IR), a method that attempts to regularize the RL model with a supervised learning loss. It improves the generalization success and displays stable performance across different seeds. In this work, we focused our experimentation on generalization to changes in the input observation. However, it is also interesting to generalize the learned skills to different architectural designs of the environment, just as one wishes to generalize to different levels of the game as proposed in the retro competition Nichol et al. (2018). Another avenue of future work is to explore the appropriate transformation function T of the observations. One might consider an adaptive form of T learned with data augmentation Cubuk et al. (2018) or adversarial examples Goodfellow et al. (2015). The first part consists of training RL on the observations of the original training environment, while the second part can be seen as a supervised learning objective on the transformed observations, as shown in Algorithm 1. The first step trains RL on one environment and then uses the actions that the trained policy would have taken in that environment to tune the model with supervised learning on the textured environments. In the reported experiments using the split version, the model is trained with one iteration of the algorithm. Therefore, the training process has two stages: train RL, then train with a supervised learning setup, without iterating between both.
We propose a regularization term that, when added to the reinforcement learning objective, allows the policy to maximize the reward and simultaneously learn to be invariant to the irrelevant changes within the input.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:479
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Multi-hop question answering requires models to gather information from different parts of a text to answer a question. Most current approaches learn to address this task in an end-to-end way with neural networks, without maintaining an explicit representation of the reasoning process. We propose a method to extract a discrete reasoning chain over the text, which consists of a series of sentences leading to the answer. We then feed the extracted chains to a BERT-based QA model to do final answer prediction. Critically, we do not rely on gold annotated chains or "supporting facts": at training time, we derive pseudogold reasoning chains using heuristics based on named entity recognition and coreference resolution. Nor do we rely on these annotations at test time, as our model learns to extract chains from raw text alone. We test our approach on two recently proposed large multi-hop question answering datasets: WikiHop and HotpotQA, and achieve state-of-the-art performance on WikiHop and strong performance on HotpotQA. Our analysis shows the properties of chains that are crucial for high performance: in particular, modeling extraction sequentially is important, as is dealing with each candidate sentence in a context-aware way. Furthermore, human evaluation shows that our extracted chains allow humans to give answers with high confidence, indicating that these are a strong intermediate abstraction for this task.
We improve answering of questions that require multi-hop reasoning by extracting an intermediate chain of sentences.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:48
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Though visual information has been introduced for enhancing neural machine translation (NMT), its effectiveness strongly relies on the availability of large amounts of bilingual parallel sentence pairs with manual image annotations. In this paper, we present a universal visual representation learned over the monolingual corpora with image annotations, which overcomes the lack of large-scale bilingual sentence-image pairs, thereby extending image applicability in NMT. In detail, a group of images with similar topics to the source sentence will be retrieved from a light topic-image lookup table learned over the existing sentence-image pairs, and then is encoded as image representations by a pre-trained ResNet. An attention layer with a gated weighting is to fuse the visual information and text information as input to the decoder for predicting target translations. In particular, the proposed method enables the visual information to be integrated into large-scale text-only NMT in addition to the multimodel NMT. Experiments on four widely used translation datasets, including the WMT'16 English-to-Romanian, WMT'14 English-to-German, WMT'14 English-to-French, and Multi30K, show that the proposed approach achieves significant improvements over strong baselines. Visual information has been introduced for neural machine translation in some previous studies (NMT) Barrault et al., 2018; Ive et al., 2019) though the contribution of images is still an open question (Elliott, 2018; Caglayan et al., 2019) . Typically, each bilingual (or multilingual) parallel sentence pair is annotated manually by one image describing the content of this sentence pair. The bilingual parallel corpora with manual image annotations are used to train a multimodel NMT model by an end-to-end framework, and results are reported on a specific data set, Multi30K . One strong point of the multimodel NMT model is the ability to use visual information to improve the quality of the target translation. However, the effectiveness heavily relies on the availability of bilingual parallel sentence pairs with manual image annotations, which hinders the image applicability to the NMT. As a result, the visual information is only applied to the translation task over a small and specific multimodel data set Multi30K , but not to large-scale text-only NMT (Bahdanau et al., 2014; Gehring et al., 2017; Vaswani et al., 2017) and low-resource text-only NMT (Fadaee et al., 2017; Lample et al., 2018; . In addition, because of the high cost of annotation, the content of one bilingual parallel sentence pair is only represented by a single image, which is weak in capturing the diversity of visual information. The current situation of introducing visual information results in a bottleneck in the multimodel NMT, and is not feasible for text-only NMT and low-resource NMT. In this paper, we present a universal visual representation (VR) method 1 relying only on image-monolingual annotations instead of the existing approach that depends on image-bilingual annotations, thus breaking the bottleneck of using visual information in NMT. In detail, we transform the existing sentence-image pairs into topic-image lookup table from a small-scale multimodel data set Multi30K. 
During the training and decoding process, a group of images with similar topic to the source sentence will be retrieved from the topic-image lookup table learned by the term frequency-inverse document frequency, and thus is encoded as image representations by a pretrained ResNet (He et al., 2016) . A simple and effective attention layer is then designed to fuse the image representations and the original source sentence representations as input to the decoder for predicting target translations. In particular, the proposed approach can be easily integrated into the text-only NMT model without annotating large-scale bilingual parallel corpora. The proposed method was evaluated on four widely-used translation datasets, including the WMT'16 Englishto-Romanian, WMT'14 English-to-German, WMT'14 English-to-French, and Multi30K which are standard corpora for NMT and multi-modal machine translation (MMT) evaluation. Experiments and analysis show effectiveness. In summary, our contributions are primarily three-fold: 1. We present a universal visual representation method that overcomes the shortcomings of the bilingual (or multilingual) parallel data with manual image annotations for MMT. 2. The proposed method enables the text-only NMT to use the multimodality of visual information without annotating the existing large scale bilingual parallel data. 3. Experiments on different scales of translation tasks verified the effectiveness and generality of the proposed approach. This work presents a universal visual representation method for neural machine translation relying on monolingual image annotations, which breaks the restraint of heavy dependency on bilingual sentence-image pairs in the current multimodal NMT setting. In particular, this method enables visual information to be applied to large-scale text-only NMT through a topic-image lookup. We hope this work sheds some light for future MMT research. In the future, we will try to adopt the proposed method to other tasks.
This work proposed a universal visual representation for neural machine translation (NMT) using retrieved images with similar topics to source sentence, extending image applicability in NMT.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:480
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: This paper introduces a novel framework for learning algorithms to solve online combinatorial optimization problems. Towards this goal, we introduce a number of key ideas from traditional algorithms and complexity theory. First, we draw a new connection between primal-dual methods and reinforcement learning. Next, we introduce the concept of adversarial distributions (universal and high-entropy training sets), which are distributions that encourage the learner to find algorithms that work well in the worst case. We test our new ideas on a number of optimization problem such as the AdWords problem, the online knapsack problem, and the secretary problem. Our results indicate that the models have learned behaviours that are consistent with the traditional optimal algorithms for these problems. Machine learning has led to dramatic improvements in our capabilities to solve problems previously considered intractable. Besides the obvious empirical evidence of success, there has also been a strong parallel effort in the theory of ML which aims to explain why, when, and how ML techniques work.Our goal in this paper is to explore whether machine learning can be used to learn algorithms for classic combinatorial optimization problems. We will define this question more specifically by connecting to three concepts from traditional algorithms and complexity theory. In this work, we introduced several ideas from traditional algorithmic thinking to train neural networks to solve online optimization problems. In the problems that we consider, our results show that RL was able to find key characteristics of the optimal "pen-and-paper" algorithms. However, in some instances (such as in the knapsack and secretary problem), we saw that some state augmentation was needed in order for the learner to more adequately recover the optimal algorithms. In this work, we took a step towards that by having the RL environment encode that state in a form usable by the agent. In future work, we plan to remove the state augmentation from the RL environment and force the agent to learn the state augmentation as part of the training process. FIG3 compares the agent's learned algorithm with the optimal algorithm in the binary setting. FIG3 plots the threshold for the agent's learned algorithm in the value setting with changing distributions. Observe that both have learned a threshold at around 1/e.
By combining ideas from traditional algorithms design and reinforcement learning, we introduce a novel framework for learning algorithms that solve online combinatorial optimization problems.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:481
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Despite their popularity and successes, deep neural networks are poorly understood theoretically and treated as 'black box' systems. Using a functional view of these networks gives us a useful new lens with which to understand them. This allows us to theoretically or experimentally probe properties of these networks, including the effect of standard initializations, the value of depth, the underlying loss surface, and the origins of generalization. One key result is that generalization results from smoothness of the functional approximation, combined with a flat initial approximation. This smoothness increases with the number of units, explaining why massively overparameterized networks continue to generalize well. Deep neural networks, trained via gradient descent, have revolutionized the field of machine learning. Despite their widespread adoption, theoretical understanding of fundamental properties of deep learning -the true value of depth, the root cause of implicit regularization, and the seemingly 'unreasonable' generalization achieved by overparameterized networks -remains mysterious. Empirically, it is known that depth is critical to the success of deep learning. Theoretically, it has been proven that maximum expressivity grows exponentially with depth, with a smaller number of trainable parameters (Raghu et al., 2017; Poole et al., 2016). This theoretical capacity may not be used, as recently shown explicitly by (Hanin & Rolnick, 2019). Instead, the number of regions within a trained network is proportional to the total number of hidden units, regardless of depth. Clearly deep networks perform better, but what is the value of depth if not in increasing expressivity? Another major factor leading to the success and widespread adoption of deep learning has been its surprisingly high generalization performance (Zhang et al., 2016). In contrast to other machine learning techniques, continuing to add parameters to a deep network (beyond zero training loss) tends to improve generalization performance. This is true even for networks that are massively overparameterized, wherein according to traditional ML theory they should (over)fit all the training data (Neyshabur et al., 2015). How does training deep networks with excess capacity lead to generalization? And how can it be that this generalization error decreases with overparameterization? We believe that taking a functional view affords us a new, useful lens with which to explore and understand these issues. In particular, we focus on shallow and deep fully connected univariate ReLU networks, whose parameters will always result in a Continuous Piecewise Linear (CPWL) approximation to the target function. We provide theoretical results for shallow networks, with experiments showing that these qualitative results hold in deeper nets. Our approach is related to previous work from (Savarese et al., 2019; Arora et al., 2019; Frankle & Carbin, 2018) in that we wish to characterize parameterization and generalization. We differ from these other works by using small widths, rather than massively overparameterized or infinite, and by using a functional parameterization to measure properties such as smoothness.
Other prior works such as (Serra et al., 2017; Arora et al., 2016; Montufar et al., 2014) attempt to provide theoretical upper or lower bounds to the number of induced pieces in ReLU networks, whereas we are more interested in the empirical number of pieces in example tasks. Interestingly, (Serra et al., 2017) also takes a functional view, but is not interested in training and generalization as we are. Previous work (Advani & Saxe, 2017) has hinted at the importance of small norm initialization, but the functional perspective allows us to prove generalization properties in shallow networks.
A functional approach reveals that flat initialization, preserved by gradient descent, leads to generalization ability.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:482
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: It is well-known that deeper neural networks are harder to train than shallower ones. In this short paper, we use the (full) eigenvalue spectrum of the Hessian to explore how the loss landscape changes as the network gets deeper, and as residual connections are added to the architecture. Computing a series of quantitative measures on the Hessian spectrum, we show that the Hessian eigenvalue distribution in deeper networks has substantially heavier tails (equivalently, more outlier eigenvalues), which makes the network harder to optimize with first-order methods. We show that adding residual connections mitigates this effect substantially, suggesting a mechanism by which residual connections improve training. Practical experience in deep learning suggests that the increased capacity that comes with deeper models can significantly improve their predictive performance. It has also been observed that as the network becomes deeper, training becomes harder. In convolutional neural networks (CNNs), residual connections BID5 are used to alleviate this problem. Various explanations are provided for this phenomenon: BID6 suggests that residual connections reduce the flatness of the landscape, whereas BID3 questions this premise, noting that the extremal eigenvalues of the loss Hessian are much larger when residual connections are present: large Hessian eigenvalues indicate that the curvature of the loss is much sharper, and less flat. In a different line of work, BID0 observes that the gradients with respect to inputs in deeper networks decorrelate with depth, and suggests that residual connections reduce the 'shattering' of the gradients. In this paper, we explore the interaction between depth and the loss geometry. We first establish that gradient explosion or vanishing is not responsible for the slowing down of training, as is commonly believed. Searching for an alternative explanation, we study the Hessian eigenvalue density (using the tools introduced in BID3 to obtain estimates of the eigenvalue histogram or density). The classical theory of strongly convex optimization tells us that optimization is slow when the spectrum simultaneously contains very small and very large eigenvalues (i.e., the optimization rate is dependent on κ = λ_max / λ_min). Following this intuition, we focus on examining the relative spread of the Hessian eigenvalues. In particular, we quantify the extent of the large outliers by computing some scale-invariant classical statistics of the Hessian eigenvalues, namely the skewness and kurtosis. Finally, we observe that in comparable models with residual connections, the magnitude of these outliers is substantially mitigated. In BID3, it is hypothesised that batch normalization suppresses large outlier eigenvalues, thereby speeding up training; in this paper, we present evidence that residual connections speed up training through essentially the same channel. Throughout, the dataset of interest is CIFAR-10; we describe the specific model architectures used in Appendix A. In this paper, we have presented qualitative and quantitative evidence that depth increases outlier eigenvalues in the Hessian, and that residual connections mitigate this.
We believe that this touches upon some of the fundamental dynamics of optimizing neural networks, and that any theoretical explanation of residual connections needs to explain this.
Network depth increases outlier eigenvalues in the Hessian. Residual connections mitigate this.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:483
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: In the context of optimization, a gradient of a neural network indicates the amount a specific weight should change with respect to the loss. Therefore, small gradients indicate a good value of the weight that requires no change and can be kept frozen during training. This paper provides an experimental study on the importance of neural network weights, and to what extent they need to be updated. We wish to show that starting from the third epoch, freezing weights which have no informative gradient and are less likely to be changed during training results in a very slight drop in the overall accuracy (and sometimes better). We experiment on the MNIST, CIFAR10 and Flickr8k datasets using several architectures (VGG19, ResNet-110 and DenseNet-121). On CIFAR10, we show that freezing 80% of the VGG19 network parameters from the third epoch onwards results in a 0.24% drop in accuracy, while freezing 50% of ResNet-110 parameters results in a 0.9% drop in accuracy and finally freezing 70% of DenseNet-121 parameters results in a 0.57% drop in accuracy. Furthermore, to experiment with real-life applications, we train an image captioning model with an attention mechanism on the Flickr8k dataset using LSTM networks, freezing 60% of the parameters from the third epoch onwards, resulting in a better BLEU-4 score than the fully trained model. Our source code can be found in the appendix. The immense success of deep neural networks that we have been witnessing since the deep learning revolution is surprising. A large variety of vision and language applications ranging from image classification, object detection, image synthesis, image super-resolution, image captioning, language modeling, etc., have proved that neural networks possess a powerful capability of learning very complex data. However, training these networks to perform as expected is very time-consuming and requires powerful graphics processing units (GPUs). A recently published open-source project by NVIDIA 1 claimed that training a generative adversarial network (GAN) took more than 6 days on 8 Tesla V100 GPUs. However, we argue that a lot of parameters involved during training are important to update only during the first few epochs (in our experiments, the first two epochs only), and can be frozen for the rest of the training epochs. The backpropagation algorithm is the base algorithm used to optimize deep neural networks. For each weight, a gradient is computed with respect to the loss which indicates the amount a weight should change. Large gradients correspond to a large change that will occur in the weight, while small ones (near to zero) indicate that the weight is nearly optimized and does not need much change. In particular, if a gradient for a particular weight is zero or close to zero, this means that it has either reached its optimal solution, or it is stuck at a saddle point. The former means that the weight has a good value and is less likely to change throughout the training and can be kept frozen. In this paper, we wish to show the redundancy of weights in a neural network that have no influence and can be kept frozen during training. In particular, we demonstrate that fully training a model with all its weights is required for the first two epochs only.
To justify this, we propose an experimental technique named Partial Backpropagation, which freezes weights that have gradients very near to zero and are less likely to change, with the rest of the weights trained normally. This induces a very slight drop in accuracy (and no harm in accuracy for lesser freezing). An overview of our experimental technique is shown in Figure 1. Note that in Figure 1(b), the red weights are frozen and not removed or zeroed out. We can further visualize the histogram of gradients across the network layers to have a better understanding of their distributions. In Figure 2, we visualize the distribution of gradients from several layers in a VGG19 convolutional network (Simonyan & Zisserman, 2015). In particular, we visualize the gradients of layers 3, 7, 10 and 13 after training for 2 epochs. We can see a large number of gradients with values very near to zero, suggesting that a lot of weights in these layers have already been optimized and are less likely to change throughout the training. We provided an experimental study on the importance of neural network weights, and to what extent they need to be updated. Through our experiments, we emphasized the number of redundant parameters that carry no informative gradient, which, if frozen from the third epoch onwards, only slightly affect (and sometimes do not affect) the overall accuracy of the model. To support this claim, we ran experiments on the MNIST and CIFAR10 datasets using several CNN architectures (VGG19, ResNet-110 and DenseNet-121), as well as the Flickr8k dataset using an image captioning architecture composed of LSTM networks with an attention mechanism. Our experiments successfully support the central claim of this paper.
An experimental paper demonstrating the number of redundant weights that can be frozen from the third epoch onwards, with only a very slight drop in accuracy.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:484
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Like humans, deep networks learn better when samples are organized and introduced in a meaningful order or curriculum. While conventional approaches to curriculum learning emphasize the difficulty of samples as the core incremental strategy, it forces networks to learn from small subsets of data while introducing pre-computation overheads. In this work, we propose Learning with Incremental Labels and Adaptive Compensation (LILAC), which introduces a novel approach to curriculum learning. LILAC emphasizes incrementally learning labels instead of incrementally learning difficult samples. It works in two distinct phases: first, in the incremental label introduction phase, we unmask ground-truth labels in fixed increments during training, to improve the starting point from which networks learn. In the adaptive compensation phase, we compensate for failed predictions by adaptively altering the target vector to a smoother distribution. We evaluate LILAC against the closest comparable methods in batch and curriculum learning and label smoothing, across three standard image benchmarks, CIFAR-10, CIFAR-100, and STL-10. We show that our method outperforms batch learning with higher mean recognition accuracy as well as lower standard deviation in performance consistently across all benchmarks. We further extend LILAC to state-of-the-art performance across CIFAR-10 using simple data augmentation while exhibiting label order invariance among other important properties. Deep networks have seen rich applications in high-dimensional problems characterized by a large number of labels and a high volume of samples. However, successfully training deep networks to solve problems under such conditions is mystifyingly hard (Erhan et al. (2009) ; Larochelle et al. (2007) ). The go-to solution in most cases is Stochastic Gradient Descent with mini-batches (simple batch learning) and its derivatives. While offering a standardized solution, simple batch learning often fails to find solutions that are simultaneously stable, highly generalizable and scalable to large systems (Das et al. (2016) ; Keskar et al. (2016) ; Goyal et al. (2017) ; You et al. (2017) ). This is a by-product of how mini-batches are constructed. For example, the uniform prior assumption over datasets emphasizes equal contributions from each data point regardless of the underlying distribution; small batch sizes help achieve more generalizable solutions, but do not scale as well to vast computational resources as large mini-batches. It is hard to construct a solution that is a perfect compromise between all cases. Two lines of work, curriculum learning and label smoothing, offer alternative strategies to improve learning in deep networks. Curriculum learning, inspired by strategies used for humans (Skinner (1958) ; Avrahami et al. (1997) ), works by gradually increasing the conceptual difficulty of samples used to train deep networks ; Florensa et al. (2017) ; Graves et al. (2017) ). This has been shown to improve performance on corrupted (Jiang et al. (2017) ) and small datasets (Fan et al. (2018) ). More recently, deep networks have been used to categorize samples (Weinshall et al. 
(2018) ) and variations on the pace with which these samples were shown to deep networks were analyzed in-depth (Hacohen & Weinshall (2019) ). To the best of our knowledge, previous works assumed that samples cover a broad spectrum of difficulty and hence need to be categorized and presented in a specific order. This introduces computational overheads e.g. pre-computing the relative difficulty of samples, and also reduces the effective amount of data from which a model can learn in early epochs. Further, curriculum learning approaches have not been shown to compete with simple training strategies at the top end of performance in image benchmarks. A complementary approach to obtaining generalizable solutions is to avoid over-fitting or getting stuck in local minima. In this regard, label smoothing offers an important solution that is invariant to the underlying architecture. Early works like Xie et al. (2016) replace ground-truth labels with noise while Reed et al. (2014) uses other models' outputs to prevent over-fitting. This idea was extended in Bagherinezhad et al. (2018) to an iterative method which uses logits obtained from previously trained versions of the same deep network. While Miyato et al. (2015) use local distributional smoothness, based on the robustness of a model's distribution around a data point, to regularize outcomes, Pereyra et al. (2017) penalized highly confident outputs directly. Closest in spirit to our work is the label smoothing method defined in Szegedy et al. (2016) , which offers an alternative target distribution for all training samples with no extra data augmentation. In general, label smoothing is applied to all examples regardless of how it affects the network's understanding of them. Further, in methods which use other models to provide logits/labels, often the parent network used to provide those labels is trained using an alternate objective function or needs to be fully re-trained on the current dataset, both of which introduce additional computation. In this work, we propose LILAC, Learning with Incremental Labels and Adaptive Compensation, which emphasizes a label-based curriculum and adaptive compensation, to improve upon previous methods and obtain highly accurate and stable solutions. LILAC is conceived as a method to learn strong embeddings by using the recursive training strategy of incremental learning alongside the use of unlabelled/wrongly-labelled data as hard negative examples. It works in two key phases, 1) incremental label introduction and 2) adaptive compensation. In the first phase, we incrementally introduce groups of labels in the training process. Data, corresponding to labels not yet introduced to the model, use a single fake label selected from within the dataset. Once a network has been trained for a fixed number of epochs with this setup, an additional set of ground-truth labels is introduced to the network and the training process continues. In recursively revealing labels, LILAC allows the model sufficient time to develop a strong understanding of each class by contrasting against a large and diverse set of negative examples. Once all ground-truth labels are revealed the adaptive compensation phase of training is initiated. This phase mirrors conventional batch learning, except we adaptively replace the target one-hot vector of incorrectly classified samples with a softer distribution. 
Thus, we avoid adjusting labels across the entire dataset, like previous methods, while elevating the stability and average performance of the model. Further, instead of being pre-computed by an alternative model, these softer distributions are generated on-the-fly from the outputs of the model being trained. We apply LILAC to three standard image benchmarks and compare its performance to the strongest known baselines. While incremental and continual learning work on evolving data distributions with the addition of memory constraints ((Rebuffi et al., 2017; Castro et al., 2018) and derivative works), knowledge distillation ((Rolnick et al., 2018) and similar works) or other requirements, this work is a departure into using negative mining and focused training to improve learning on a fully available dataset. In incremental/continual learning works, the amount of data used to retrain the network is often small compared to the original dataset, while LILAC fully uses the entire dataset, distinguished by Seen and Unseen labels. Thus, it avoids data-deficient learning. Further, works like Bucher et al. (2016); Li et al. (2013); Wang & Gupta (2015) emphasize the importance of hard negative mining, both in size and diversity, in improving learning. Although the original formulation of negative mining was based on imbalanced data, recent object detection works have highlighted its importance in contrasting and improving learning in neural networks. To summarize, our main contributions in LILAC are as follows: • we introduce a new take on curriculum learning by incrementally learning labels as opposed to samples, • our method adaptively compensates incorrectly labelled samples by softening their target distribution, which improves performance and removes external computational overheads, • we improve average recognition accuracy and decrease the standard deviation of performance across several image classification benchmarks compared to batch learning, a property not shared by other curriculum learning and label smoothing methods. In the incremental phase, we initially replace the ground-truth labels of several classes with a constant held-out label. Gradually, over the course of several fixed intervals of training, we reveal the true labels. Within a fixed interval of training, we keep constant two sets of data: "Seen", whose ground-truth labels are known, and "Unseen", whose labels are replaced by a fake value. When training, mini-batches are uniformly sampled from the entire training set, but the instances from "Unseen" classes use the held-out label. (Figure 1 caption: Illustration of the evolution of data partitions in the incremental label introduction phase for a four-label dataset. In the first incremental step, only one label is used for training while the remaining data use label 4. A short period of training is performed with this fixed setup, where data from U is uniformly sampled to match the number of samples from S, in every mini-batch. The final incremental step depicted is equivalent to batch learning since all the labels are available to the network. Once all the ground-truth labels are revealed we begin the adaptive compensation phase described in Sec. 2.2.) By the end of the final interval, we reveal all ground-truth labels. We now describe the incremental phase in more detail. At the beginning of the incremental label introduction phase, we virtually partition data into two mutually exclusive sets, S : Seen and U : Unseen, as shown in Fig. 1 .
Data samples in S use their ground-truth labels as target values while those in U use a designated unseen label, which is held constant throughout the entire training process. LILAC assumes a random ordering of labels, Or(M ), where M denotes the total number of labels in the dataset. Within this ordering, the number of labels and corresponding data initially placed in S is defined by the variable b. The remaining labels, M − b, are initially placed in U and incrementally revealed in intervals of m labels, a hyper-parameter defined by the user. Training in the incremental phase happens at fixed intervals of E epochs each. Within a fixed interval, the virtual data partition is held constant. Every mini-batch of data is sampled uniformly from the entire original dataset and within each mini-batch, labels are obtained based on their placement in S or U. Then the number of samples from U is reduced or augmented, using a uniform prior, to match the number of samples from S. This is done to ensure no unfair skew in predictions towards U since all data points use the same designated label. Finally, the curated mini-batches of data are used to train the neural network. At the end of each fixed interval, we reveal another set of m groundtruth labels and move samples of those classes from U to S after which the entire data curation and training process is repeated for the next interval. In this work, we proposed LILAC which rethinks curriculum learning based on incrementally learning labels instead of samples. This approach helps kick-start the learning process from a substantially better starting point while making the learned embedding space amenable to adaptive negative logit compensation. Both these techniques combine well in LILAC to show the highest performance on CIFAR-10 for simple data augmentations while easily outperforming batch and curriculum learning and label smoothing on comparable network architectures. The next step in unlocking the full potential of this setup is to extend this setup to include a confidence measure on the predictions of network so that it can handle the effects of dropout or partial inputs. In further expanding LILAC's ability to handle partial inputs, we aim to explore its effect on standard incremental learning (memory constrained) while also extending it applicability to more complex neural network architectures. A LILAC: ALGORITHM Table 8 : The table captures the effect of varying the number of epochs used for the fixed training intervals in the incremental label introduction phase. Across CIFAR-10 there is an obvious peak after which the mean value decreases. However, in STL-10 there seems to be a consistent increase, with the assumption of minor noise. Finally, in CIFAR-100 there isn't a clear pattern. From the results in Table 8 , we observe that the choice of E is dependent on the dataset. There isn't an explicit pattern that can be used to select the value of E without trial runs. Further, the available run-time is an important constraint when select E from a range of values since both m and E affect it.
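To make the two phases described above concrete, here is a minimal sketch of the label handling they imply. The function names, the batch-balancing details, and the fixed smoothing amount `eps` are illustrative assumptions; the excerpt does not give the exact form of the softened targets, which the paper generates on-the-fly from the model's own outputs.

```python
# Illustrative sketch of LILAC's two phases (NumPy); names and details assumed.
import numpy as np

def lilac_targets(y, seen_labels, heldout_label):
    """Incremental phase: keep ground truth for 'Seen' classes, map 'Unseen' classes to one held-out label."""
    y = np.asarray(y)
    return np.where(np.isin(y, list(seen_labels)), y, heldout_label)

def balance_batch(batch_idx, y, seen_labels, rng):
    """Match the number of 'Unseen' samples to the number of 'Seen' samples in a mini-batch (uniform prior)."""
    batch_idx = np.asarray(batch_idx)
    is_seen = np.isin(y[batch_idx], list(seen_labels))
    seen, unseen = batch_idx[is_seen], batch_idx[~is_seen]
    if len(seen) == 0 or len(unseen) == 0:
        return batch_idx
    unseen = rng.choice(unseen, size=len(seen), replace=len(unseen) < len(seen))
    return np.concatenate([seen, unseen])

def adaptive_targets(probs, y, num_classes, eps=0.1):
    """Adaptive compensation: soften the one-hot target only for currently misclassified samples."""
    one_hot = np.eye(num_classes)[y]
    wrong = probs.argmax(axis=1) != y
    # assumed fixed smoothing form; the paper derives the soft targets from the model's own outputs
    soft = (1.0 - eps) * one_hot + eps / num_classes
    return np.where(wrong[:, None], soft, one_hot)
```

In a training loop, `lilac_targets` and `balance_batch` would be used during the incremental intervals (moving labels from U to S every E epochs), and `adaptive_targets` would replace the one-hot targets once all labels have been revealed.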
A novel approach to curriculum learning by incrementally learning labels and adaptively smoothing labels for misclassified samples, which boosts average performance and decreases standard deviation.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:485
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Words in natural language follow a Zipfian distribution whereby some words are frequent but most are rare. Learning representations for words in the ``long tail'' of this distribution requires enormous amounts of data. Representations of rare words trained directly on end tasks are usually poor, requiring us to pre-train embeddings on external data, or treat all rare words as out-of-vocabulary words with a unique representation. We provide a method for predicting embeddings of rare words on the fly from small amounts of auxiliary data with a network trained end-to-end for the downstream task. We show that this improves results against baselines where embeddings are trained on the end task for reading comprehension, recognizing textual entailment and language modeling. Natural language yields a Zipfian distribution BID28 which tells us that a core set of words (at the head of the distribution) are frequent and ubiquitous, while a significantly larger number (in the long tail) are rare. Learning representations for rare words is a well-known challenge of natural language understanding, since the standard end-to-end supervised learning methods require many occurrences of each word to generalize well.The typical remedy to the rare word problem is to learn embeddings for some proportion of the head of the distribution, possibly shifted towards the domain-specific vocabulary of the dataset or task at hand, and to treat all other words as out-of-vocabulary (OOV), replacing them with an unknown word "UNK" token with a shared embedding. This essentially heuristic solution is inelegant, as words from technical domains, names of people, places, institutions, and so on will lack a specific representation unless sufficient data are available to justify their inclusion in the vocabulary. This forces model designers to rely on overly large vocabularies, as observed by BID17 BID22 , which are parametrically expensive, or to employ vocabulary selection strategies BID16 . In both cases, we face the issue that words in the tail of the Zipfian distribution will typically still be too rare to learn good representations for through standard embedding methods. Some models, such as in the work of BID13 , have sought to deal with the open vocabulary problem by obtaining representations of words from characters. This is successful at capturing the semantics of morphological derivations (e.g. "running" from "run") but puts significant pressure on the encoder to capture semantic distinctions amongst syntactically similar but semantically unrelated words (e.g. "run" vs. "rung"). Additionally, nothing about the spelling of named entities, e.g. "The Beatles", tells you anything about their semantics (namely that they are a rock band).In this paper we propose a new method for computing embeddings "on the fly", which jointly addresses the large vocabulary problem and the paucity of data for learning representations in the long tail of the Zipfian distribution. This method, which we illustrate in FIG0 , can be summarized as follows: instead of directly learning separate representations for all words in a potentially unbounded vocabulary, we train a network to predict the representations of words based on auxiliary data. 
Such auxiliary data need only satisfy the general requirement that it describe some aspect of the semantics of the word for which a representation is needed. Examples of such data could be dictionary definitions, Wikipedia infoboxes, linguistic descriptions of named entities obtained from Wikipedia articles, or something as simple as the spelling of a word. We will refer to the content of auxiliary data as "definitions" throughout the paper, regardless of the source. Several sources of auxiliary data can be used simultaneously as input to a neural network that will compute a combined representation.These representations can then be used for out-of-vocabulary words, or combined with withinvocabulary word embeddings directly trained on the task of interest or pretrained from an external data source BID18 BID20 . Crucially , the auxiliary data encoders are trained jointly with the objective, ensuring the preservation of semantic alignment with representations of within-vocabulary words. In the present paper, we will focus on a subset of these approaches and auxiliary data sources, restricting ourselves to producing out-of-vocabulary words embeddings from dictionary data, spelling, or both.The obvious use case for our method would be datasets and tasks where there are many rare terms such as technical writing or bio/medical text BID6 . On such datasets , attempting to learn global vectors-for example GloVe embeddings BID20 -from external data, would only provide coverage for common words and would be unlikely to be exposed to sufficient (or any) examples of domain-specific technical terms to learn good enough representations. However, there are no (or significantly fewer) established neural network-based baselines on these tasks, which makes it harder to validate baseline results. Instead, we present results on a trio of well-established tasks, namely reading comprehension, recognizing textual entailment, and a variant on language modelling. For each task, we compare baseline models with embeddings trained directly only on the task objective to those same models with our on the fly embedding method. Additionally, we report results for the same models with pretrained GLoVe vectors as input which we do not update. We aim to show how the gap in results between the baseline and the data-rich GLoVe-based models can be partially but substantially closed merely through the introduction of relatively small amounts of auxiliary definitions. Quantitative results show that auxiliary data improves performance. Qualitative evaluation indicates our method allows models to draw and exploit connections defined in auxiliary data, along the lines of synonymy and semantic relatedness. We showed how different sources of auxiliary information, such as the spelling and a dictionary of definitions can be used to produce on the fly useful embeddings for rare words. While it was known before that adding the spelling information to the model is helpful, it is often hard or not possible to infer the meaning directly from the characters, as confirmed by our entailment recognition experiments. Our more general approach offers endless possibilities of adding other data sources and learning end-to-end to extract the relevant bits of information from them. Our experiments with a dictionary of definitions show the feasibility of the approach, as we report improvements over using just the spelling on question answering and semantic entailment classification tasks. 
Our qualitative investigations on the question answering data confirms our intuition on where the improvement comes from. It is also clear from them that adding more auxiliary data would help, and that it would probably be also useful to add definitions not just for words, but also for phrases (see "Mark Twain" from Section 4.1). We are planning to add more data sources (e.g. first sentences from Wikipedia articles) and better use the available ones (WordNet has definitions of phrasal verbs like "come across") in our future work.An important question that we did not touch in this paper is how to deal with rare words in the auxiliary information, such as dictionary definitions. Based on our qualitative investigations (see the example with "arrow" and "weapon" in Section 4.1), we believe that better handling rare words in the auxiliary information could substantially improve the proposed method. It would be natural to use on the fly embeddings similarly to the ones that we produce for words from the input, but the straight-forward approach of computing them on request would be very computation and memory hungry. One would furthermore have to resolve cyclical dependencies, which are unfortunately common in dictionary data (when e.g. "entertainment" is defined using "diverting" and "diverting" is defined using "entertainment"). In our future work we want to investigate asynchronous training of on the fly embeddings and the main model. In this paper, we have shown that introducing relatively small amounts of auxiliary data and a method for computing embeddings on the fly using that data bridges the gap between data-poor setups, where embeddings need to be learned directly from the end task, and data-rich setups, where embeddings can be pretrained and sufficient external data exists to ensure in-domain lexical coverage.A large representative corpus to pretrain word embeddings is not always available and our method is applicable when one has access only to limited auxiliary data. Learning end-to-end from auxiliary sources can be extremely data efficient when these sources represent compressed relevant information about the word, as dictionary definitions do. A related desirable aspect of our approach is that it may partially return the control over what a language processing system does into the hands of engineers or even users: when dissatisfied with the output, they may edit or add auxiliary information to the system to make it perform as desired. Furthermore, domain adaptation with our method could be carried out simply by using other sources of auxiliary knowledge, for example definitions of domain-specific technical terms in order to understand medical texts. Overall, the aforementioned properties of our method make it a promising alternative to the existing approaches to handling rare words.
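As a rough illustration of computing an embedding "on the fly" from auxiliary data, the sketch below mean-pools the embeddings of a definition's words and projects the result into the word-embedding space. The architecture, names, and pooling choice are assumptions (the paper also uses spelling as an auxiliary source); the key point is that the encoder is trained jointly with the downstream task loss.

```python
# Hypothetical definition encoder (PyTorch); architecture and names assumed.
import torch
import torch.nn as nn

class DefinitionEncoder(nn.Module):
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)  # embeddings of in-vocabulary words
        self.proj = nn.Linear(dim, dim)

    def forward(self, definition_ids):  # (batch, definition_length) word ids of the definition
        pooled = self.word_emb(definition_ids).mean(dim=1)
        return self.proj(pooled)        # on-the-fly embedding for the defined (rare) word

# Usage idea: for an out-of-vocabulary token, feed its dictionary definition
# (and/or spelling, encoded separately) through the encoder and use the result
# wherever the model would normally look up a trained word embedding.
```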
We propose a method to deal with rare words by computing their embedding from definitions.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:486
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: The capability of reliably detecting out-of-distribution samples is one of the key factors in deploying a good classifier, as the test distribution always does not match with the training distribution in most real-world applications. In this work, we propose a deep generative classifier which is effective to detect out-of-distribution samples as well as classify in-distribution samples, by integrating the concept of Gaussian discriminant analysis into deep neural networks. Unlike the discriminative (or softmax) classifier that only focuses on the decision boundary partitioning its latent space into multiple regions, our generative classifier aims to explicitly model class-conditional distributions as separable Gaussian distributions. Thereby, we can define the confidence score by the distance between a test sample and the center of each distribution. Our empirical evaluation on multi-class images and tabular data demonstrate that the generative classifier achieves the best performances in distinguishing out-of-distribution samples, and also it can be generalized well for various types of deep neural networks. Out-of-distribution (OOD) detection, also known as novelty detection, refers to the task of identifying the samples that differ in some respect from the training samples. Recently, deep neural networks (DNNs) turned out to show unpredictable behaviors in case of mismatch between the training and testing data distributions; for example, they tend to make high confidence prediction for the samples that are drawn from OOD or belong to unseen classes (Szegedy et al., 2014; Moosavi-Dezfooli et al., 2017) . For this reason, accurately measuring the distributional uncertainty (Malinin & Gales, 2018) of DNNs becomes one of the important challenges in many real-world applications where we can hardly control the testing data distribution. Several recent studies have tried to simply detect OOD samples using the confidence score defined by softmax probability (Hendrycks & Gimpel, 2017; Liang et al., 2018) or Mahalanobis distance from class means (Lee et al., 2018) , and they showed promising results even without re-training the model. However, all of them employ the DNNs designed for a discriminative (or softmax) classifier, which has limited power to locate OOD samples distinguishable with in-distribution (ID) samples in their latent space. To be specific, the softmax classifier is optimized to learn the discriminative latent space where the training samples are aligned along their corresponding class weight vectors, maximizing the softmax probability for the target classes. As pointed out in (Hendrycks & Gimpel, 2017) , OOD samples are more likely to have small values of the softmax probability for all known classes, which means that their latent vectors get closer to the origin. As a result, there could be a large overlap between two sets of ID and OOD samples in the latent space (Figure 1 ), which eventually reduces the gap between their confidence scores and degrades the performance as well. In addition, most of existing confidence scores adopt additional calibration techniques Hinton et al., 2015) to enhance the reliability of the detection, but they include several hyperparameters whose optimal values vary depending on the testing data distribution. 
In this situation, they utilized a small portion of each test set (containing both ID and OOD samples) for validation, and reported the results evaluated on the rest by using the optimal hyperparameter values for each test case. Considering the motivation of OOD detection that prior knowledge of test distributions is not available before we encounter them, such process of tuning the hyperparameters for each test case is not practical when deploying the DNNs in practice. In this paper, we propose a novel objective to train DNNs with a generative (or distance) classifier which is capable of effectively identifying OOD test samples. The main difference of our deep generative classifier is to learn separable class-conditional distributions in the latent space, by explicitly modeling them as a DNN layer. The generative classifier places OOD samples further apart from the distributions of all given classes, without utilizing OOD samples for its validation. Thus, based on the Euclidean distance between a test sample and the centers of the obtained class-conditional distributions, we can calculate how likely and how confidently the sample belongs to each class. This can be interpreted as a multi-class extension of unsupervised anomaly detection (Ruff et al., 2018) , and Gaussian discriminant analysis provides the theoretical background for incorporating the generative classifier into the DNNs. Our extensive experiments on images and tabular data demonstrate that the proposed classifier distinguishes OOD samples more accurately than the state-of-the-art method, while maintaining the classification accuracy for ID samples. This paper introduces a deep learning objective to learn the multi-class generative classifier, by fusing the concept of Gaussian discriminant analysis with DNNs. Unlike the conventional softmax classifier, our generative (or distance) classifier learns the class-conditional distributions to be separated from each other and follow the Gaussian distribution at the same time, thus it is able to effectively distinguish OOD samples from ID samples. We empirically show that our confidence score beats other competing methods in detecting both OOD tabular data and OOD images, and also the distance classifier can be easily combined with various types of DNNs to further improve their performances.
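A minimal sketch of the distance-based confidence score described above, under the simplifying assumption that the class-conditional Gaussians share an isotropic covariance so that Euclidean distance to each class center suffices; names are illustrative.

```python
# Distance-based classification and confidence in the latent space (NumPy sketch).
import numpy as np

def class_centers(latents, labels, num_classes):
    """Centers of the class-conditional distributions in the latent space."""
    return np.stack([latents[labels == c].mean(axis=0) for c in range(num_classes)])

def predict_with_confidence(z, centers):
    d = np.linalg.norm(z[:, None, :] - centers[None, :, :], axis=-1)  # (batch, num_classes)
    pred = d.argmin(axis=1)          # classify by the nearest class center
    confidence = -d.min(axis=1)      # far from every center => low confidence => likely OOD
    return pred, confidence
```

A threshold on this confidence score would then separate in-distribution from out-of-distribution test samples.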
This paper proposes a deep generative classifier which is effective to detect out-of-distribution samples as well as classify in-distribution samples, by integrating the concept of Gaussian discriminant analysis into deep neural networks.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:487
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: One of the most prevalent symptoms among the elderly population, dementia, can be detected by classifiers trained on linguistic features extracted from narrative transcripts. However, these linguistic features are impacted in a similar but different fashion by the normal aging process. Aging is therefore a confounding factor, whose effects have been hard for machine learning classifiers to isolate. In this paper, we show that deep neural network (DNN) classifiers can infer ages from linguistic features, which is an entanglement that could lead to unfairness across age groups. We show this problem is caused by undesired activations of v-structures in causality diagrams, and it could be addressed with fair representation learning. We build neural network classifiers that learn low-dimensional representations reflecting the impacts of dementia yet discarding the effects of age. To evaluate these classifiers, we specify a model-agnostic score $\Delta_{eo}^{(N)}$ measuring how classifier results are disentangled from age. Our best models outperform baseline neural network classifiers in disentanglement, while compromising accuracy by as little as 2.56\% and 2.25\% on DementiaBank and the Famous People dataset respectively. One in three seniors die of Alzheimer's and other types of dementia in the United States (Association, 2018) . Although its causes are not yet fully understood, dementia impacts people's cognitive abilities in a detectable manner. This includes different syntactic distributions in narrative descriptions BID28 , more pausing BID29 , higher levels of difficulty in recalling stories BID21 , and impaired memory generally BID20 . Fortunately, linguistic features can be used to train classifiers to detect various cognitive impairments. For example, BID8 detected primary progressive aphasia with up to 100% accuracy, and classified subtypes of primary progressive aphasia with up to 79% accuracy on a set of 40 participants using lexical-syntactic and acoustic features. BID7 classified dementia from control participants with 82% accuracy on narrative speech.However, dementia is not the only factor causing such detectable changes in linguistic features of speech. Aging also impairs cognitive abilities BID11 , but in subtly different ways from dementia. For example, aging inhibits fluid cognitive abilities (e.g., cognitive processing speed) much more than the consolidated abilities (e.g., those related to cumulative skills and memories) BID4 . In other words, the detected changes of linguistic features, including more pauses and decreased short-term memories, could attribute to just normal aging process instead of dementia. Unfortunately, due to the high correlation between dementia and aging, it can be difficult to disentangle symptoms are caused by dementia or aging BID24 . Age is therefore a confounding factor in detecting dementia.The effects of confounding factors are hard for traditional machine learning algorithms to isolate, and this is largely due to sampling biases in the data. 
For example, some algorithms predict higher risk of criminal recidivism for people with darker skin colors BID15 , others identify images of smiling Asians as blinking BID19 , and GloVe word embeddings can project European-American names significantly closer to the words like 'pleasant' than African-American names BID3 . It is preferable for classifiers to make decisions without biasing too heavily on demographic factors, and therefore to isolate the effects of confounding factors. However, as we will show in Experiments, traditional neural network classifiers bias on age to infer dementia; this can lead to otherwise avoidable false positives and false negatives that are especially important to avoid in the medical domain. Graphically, if both age A and dementia D cause changes in a feature X, the result is a v-structure BID17 A → X ← D which is activated upon observing X. In other words, the confounder A affects P (D|X) if we train the classifier in traditional ways, which is to collect data points {(X, D) (i) } and to learn an inference model P (D|X) approximating the affected P (D|X).Traditionally , there are several ways to eliminate the effects of confounding factors A.Controlling A gives a posterior distribution P (D|X, A)P (A). This is unfortunately unrealistic for small, imbalanced clinical datasets, in which sparsity may require stratification. However, the stratified distributions P (D|X, A) can be far from a meaningful representation of the real world (as we will show, e.g., in FIG3 ). Moreover, a discrepancy in the sizes of age groups can skew the age prior P (A), which would seriously inhibit the generalizability of a classifier.Controlling X Conducting a randomized control trial (RCT) on X removes all causal paths leading "towards" the variable X, which gives a de-confounded dataset P (D|do(X)) according to the notation in BID27 . However, RCTs on X are even less practical because simultaneously controlling multiple features produces exponential number of scenarios, and doing this to more than 400 features require far more data points than any available dataset.Pre-adjusting X according to a pre-trained model X = f (A) per feature could also approximately generate the dataset P (D|do(X)). However, such a model should consider participant differences, otherwise interpolating using a fixed age A would give exactly the same features for everybody. The participant differences , however, are best characterized via X, which are the values you want to predict.To overcome the various problems with these methods, we let our classifiers be aware of cognitive impairments while actively filtering out any information related to aging. This is a fair representation learning framework that protects age as a "sensitive attribute".Fair representation learning frameworks can be used to train classifiers to equally consider the subjects with different sensitive attributes. A sensitive attribute (or "protected attribute ") can be race, age, or other variables whose impact should be ignored. In the framework proposed by BID32 , classifiers were penalized for the differences in classification probabilities among different demographic groups. After training, the classifiers produced better demographic similarities while compromising only a little overall accuracy. To push the fair representation learning idea further , adversarial training can be incorporated. BID9 introduced generative adversarial networks, in which a generator and a discriminator are iteratively optimized against each other. 
Incorporating adversarial training, BID22 proposed a framework to learn a latent representation of data in order to limit its adversary's ability to classify based on the sensitive attributes.However, these approaches to fair representation learning only handle binary attributes. E.g., BID22 binarized age. To apply to cognitive impairments detection, we want to represent age on a continuous scale (with some granularity if necessary). We formulate a fairness metric for evaluating the ability of a classifier to isolate a continuous-valued attribute. We also propose four models that compress high-dimensional feature vectors into low-dimensional representations which encrypt age from an adversary. We show empirically that our models achieve better fairness metrics than baseline deep neural network classifiers, while compromising accuracies by as little as 2.56% and 2.25% on our two empirical datasets, respectively. We evaluate the performances of our four proposed neural networks against the DNN baseline. As an additional ablation study, two variants of age-indep-entropy are also evaluated. TAB1 : Evaluation results of our representation learning models. The "age-indep" prefix are replaced with "*" in model names. age-indep-simple and age-indep-autoencoder have better disentanglement scores, while the rest two models could have better accuracy.Accuracy The fair representation learning models compromise accuracy, in comparison to DNN baselines. This confirms that part of the classification power of DNNs come from biasing with regards to age. On DementiaBank, the age-indep-autoencoder reduces accuracy the least (only 2.56% in comparison to the DNN baseline). On the Famous People data, age-indep-consensus and age-indep-entropy models compromise accuracies by only 2.25% and 2.75% respectively, which are not statistically different from the DNN baseline 7 .Disentanglement In comparison to DNN baselines, our fair representation learning models improve disentanglement/fairness 8 , the improvements are mostly significant when measured by the two-group scores ∆eo . Also, the five-group scores ∆eo are less stable for both datasets, and the scores in the Famous People have higher variances than in DementiaBank. Following is an explanation . DementiaBank has ∼400 data samples. In 5-fold cross validation , each of the five age groups has only ∼16 samples during evaluation. Famous People data contains ∼250 samples, which increases the variance. When the number of groups, N of ∆ (N ) eo , is kept small (e.g., ∼100 samples per label per group, as in DementiaBank N = 2), the fairness metrics are stable. Here, we identify the problem of entangling age in the detection of cognitive impairments. After explaining this problem with causality diagrams, we formulate it into a fair representation learning task, and propose a fairness score to measure the extent of disentanglement. We put forward four fair representation learning models that learn low-dimensional representations of data samples containing as little age information as possible. Our best model improves upon the DNN baseline in our fairness metrics, while compromising as little accuracy as 2.56% (on DementiaBank) and 2.25% (on the Famous People dataset).7 p = 0.20, 0.16 on 38-DoF one-tailed t-tests, respectively. 8 On DementiaBank, p = 0.01 and 0.03 for age-indep-simple and age-indep-entropy on ∆ (2) eo respectively; these are significant. 
p = 0.08 and 0.09 on age-indep-autoencoder and age-indep-consensus-net on $\Delta_{eo}^{(2)}$ respectively; these are marginally significant. However, these differences are not as significant on $\Delta_{eo}^{(5)}$. Proof of Theorem. For each of the age groups: $|p_a - p| + |n_a - n| \le \max\{|p_a - 0| + |n_a - 0|,\ |p_a - 0.5| + |n_a - 0.5|\} \le \max\{0.5, 1\} = 1$. Summing up the $N_a$ age groups results in our upper bound $N_a$ for non-trivial classifiers.
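The four proposed models are not spelled out in this excerpt; as one generic illustration of learning a low-dimensional representation that predicts dementia while hiding a continuous age attribute, the sketch below uses a gradient-reversal adversary that regresses age from the code. Layer sizes, names, and the choice of adversary are assumptions.

```python
# Adversarial disentanglement sketch (PyTorch); all sizes and names assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad  # flip the gradient so the encoder learns to remove age information

encoder = nn.Sequential(nn.Linear(400, 32), nn.ReLU())  # ~400 linguistic features -> low-dim code
dementia_head = nn.Linear(32, 2)
age_head = nn.Linear(32, 1)                             # adversary regresses age on a continuous scale
params = list(encoder.parameters()) + list(dementia_head.parameters()) + list(age_head.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

def train_step(x, y_dementia, age):
    z = encoder(x)
    loss_cls = F.cross_entropy(dementia_head(z), y_dementia)
    loss_adv = F.mse_loss(age_head(GradReverse.apply(z)).squeeze(-1), age)
    loss = loss_cls + loss_adv
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```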
Show that age confounds cognitive impairment detection + solve with fair representation learning + propose metrics and models.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:488
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Much of the focus in the design of deep neural networks had been on improving accuracy, leading to more powerful yet highly complex network architectures that are difficult to deploy in practical scenarios. As a result, there has been a recent interest in the design of quantitative metrics for evaluating deep neural networks that accounts for more than just model accuracy as the sole indicator of network performance. In this study, we continue the conversation towards universal metrics for evaluating the performance of deep neural networks for practical on-device edge usage by introducing NetScore, a new metric designed specifically to provide a quantitative assessment of the balance between accuracy, computational complexity, and network architecture complexity of a deep neural network. In what is one of the largest comparative analysis between deep neural networks in literature, the NetScore metric, the top-1 accuracy metric, and the popular information density metric were compared across a diverse set of 60 different deep convolutional neural networks for image classification on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC 2012) dataset. The evaluation results across these three metrics for this diverse set of networks are presented in this study to act as a reference guide for practitioners in the field.
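The abstract does not state NetScore's functional form; purely as an illustration of a score that rewards accuracy while penalizing computational and architectural complexity, one could imagine a log-scaled ratio such as the sketch below. The exponents, units, and scaling are assumptions here, not the paper's published definition.

```python
# Hypothetical balance score in the spirit of the description above.
import math

def netscore_like(top1_accuracy, params_millions, macs_millions,
                  alpha=2.0, beta=0.5, gamma=0.5):
    """Higher is better: reward accuracy, penalize parameter count and compute (all choices assumed)."""
    return 20.0 * math.log10(top1_accuracy ** alpha /
                             (params_millions ** beta * macs_millions ** gamma))
```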
We introduce NetScore, a new metric designed to provide a quantitative assessment of the balance between accuracy, computational complexity, and network architecture complexity of a deep neural network.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:489
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Normalizing constant (also called partition function, Bayesian evidence, or marginal likelihood) is one of the central goals of Bayesian inference, yet most of the existing methods are both expensive and inaccurate. Here we develop a new approach, starting from posterior samples obtained with a standard Markov Chain Monte Carlo (MCMC). We apply a novel Normalizing Flow (NF) approach to obtain an analytic density estimator from these samples, followed by Optimal Bridge Sampling (OBS) to obtain the normalizing constant. We compare our method which we call Gaussianized Bridge Sampling (GBS) to existing methods such as Nested Sampling (NS) and Annealed Importance Sampling (AIS) on several examples, showing our method is both significantly faster and substantially more accurate than these methods, and comes with a reliable error estimation. Normalizing constant, also called partition function, Bayesian evidence, or marginal likelihood, is the central object of Bayesian methodology. Despite its importance, existing methods are both inaccurate and slow, and may require specialized tuning. One such method is Annealed Importance Sampling (AIS), and its alternative, Reverse AIS (RAIS), which can give stochastic lower and upper bounds to the normalizing constant, bracketing the true value (Neal, 2001; Grosse et al., 2015) . However, as the tempered distribution may vary substantially with temperature, it can be expensive to obtain good samples at each temperature, which can lead to poor estimates (Murray et al., 2006) . Nested sampling (NS) is another popular alternative (Skilling, 2004; Handley et al., 2015) , which can be significantly more expensive than standard sampling methods in higher dimensions but, as we show, can also lead to very inaccurate estimates. Moreover, there is no simple way to know how accurate the estimate is. Here we develop a new approach to the problem, combining Normalizing Flow (NF) density estimators with Optimal Bridge Sampling (OBS). In a typical Bayesian inference application, we first obtain posterior samples using one of the standard Markov Chain Monte Carlo (MCMC) methods. In our approach we use these samples to derive the normalizing constant with relatively few additional likelihood evaluations required, making the additional cost of normalizing constant estimation small compared to posterior sampling. All of our calculations are run on standard CPU platforms, and will be available in the BayesFast Python package. We present a new method to estimate the normalizing constant (Bayesian evidence) in the context of Bayesian analysis. Our starting point are the samples from the posterior using standard MCMC based methods, and we assume that these have converged to the correct probability distribution. In our approach we combine OBS with INT, a novel NF based density estimator, showing on several high dimensional examples that our method outperforms other approaches in terms of accuracy and computational cost, and provides a reliable error estimate.
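As a sketch of how the two stages described above fit together, the snippet below runs the standard iterative optimal bridge sampling estimator, assuming a fitted, normalized density estimator q (e.g., the normalizing flow trained on the MCMC samples) that can be both evaluated and sampled. Function names and the fixed-point initialization are illustrative; a practical implementation would work in log space for numerical stability.

```python
# Optimal bridge sampling fixed-point iteration (NumPy sketch).
import numpy as np

def optimal_bridge_sampling(logp_at_post, logq_at_post, logp_at_prop, logq_at_prop, iters=100):
    """logp_*: unnormalized log posterior; logq_*: log density of q;
    evaluated at posterior (MCMC) samples and at samples drawn from q, respectively."""
    l1 = np.exp(logp_at_post - logq_at_post)   # ratios at posterior samples
    l2 = np.exp(logp_at_prop - logq_at_prop)   # ratios at proposal samples
    n1, n2 = len(l1), len(l2)
    s1, s2 = n1 / (n1 + n2), n2 / (n1 + n2)
    z = 1.0
    for _ in range(iters):                     # fixed-point iteration for the evidence Z
        z = np.mean(l2 / (s1 * l2 + s2 * z)) / np.mean(1.0 / (s1 * l1 + s2 * z))
    return z
```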
We develop a new method for normalization constant (Bayesian evidence) estimation using Optimal Bridge Sampling and a novel Normalizing Flow, which is shown to outperform existing methods in terms of accuracy and computational time.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:49
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Deep Neural Networks (DNNs) are vulnerable to adversarial attacks, especially white-box targeted attacks. This paper studies the problem of how aggressive white-box targeted attacks can be to go beyond widely used Top-1 attacks. We propose to learn ordered Top-k attacks (k>=1), which enforce the Top-k predicted labels of an adversarial example to be the k (randomly) selected and ordered labels (the ground-truth label is exclusive). Two methods are presented. First, we extend the vanilla Carlini-Wagner (C&W) method and use it as a strong baseline. Second, we present an adversarial distillation framework consisting of two components: (i) Computing an adversarial probability distribution for any given ordered Top-$k$ targeted labels. (ii) Learning adversarial examples by minimizing the Kullback-Leibler (KL) divergence between the adversarial distribution and the predicted distribution, together with the perturbation energy penalty. In computing adversarial distributions, we explore how to leverage label semantic similarities, leading to knowledge-oriented attacks. In experiments, we test Top-k (k=1,2,5,10) attacks in the ImageNet-1000 val dataset using two popular DNNs trained with the clean ImageNet-1000 train dataset, ResNet-50 and DenseNet-121. Overall, the adversarial distillation approach obtains the best results, especially by large margin when computation budget is limited.. It reduces the perturbation energy consistently with the same attack success rate on all the four k's, and improve the attack success rate by large margin against the modified C&W method for k=10. Despite the recent dramatic progress, deep neural networks (DNNs) (LeCun et al., 1998; Krizhevsky et al., 2012; He et al., 2016; Szegedy et al., 2016) trained for visual recognition tasks (e.g., image classification) can be easily fooled by so-called adversarial attacks which utilize visually imperceptible, carefully-crafted perturbations to cause networks to misclassify inputs in arbitrarily chosen ways in the close set of labels used in training (Nguyen et al., 2015; Szegedy et al., 2014; Athalye & Sutskever, 2017; Carlini & Wagner, 2016) , even with one-pixel attacks (Su et al., 2017) . The existence of adversarial attacks hinders the deployment of DNNs-based visual recognition systems in a wide range of applications such as autonomous driving and smart medical diagnosis in the long-run. In this paper, we are interested in learning visually-imperceptible targeted attacks under the whitebox setting in image classification tasks. In the literature, most methods address targeted attacks in the Top-1 manner, in which an adversarial attack is said to be successful if a randomly selected label (not the ground-truth label) is predicted as the Top-1 label with the added perturbation satisfying to be visually-imperceptible. One question arises, • The "robustness" of an attack method itself : How far is the attack method able to push the underlying ground-truth label in the prediction of the learned adversarial examples? Table 1 shows the evaluation results of the "robustness" of different attack methods. 
Table 1 (ResNet-50 (He et al., 2016); please see Sec. 4 for detail of experimental settings). The Top-3 to Top-100 columns give the proportion of GT labels appearing in the Top-k of the prediction (smaller is better); the last column gives the average rank of the GT labels (larger is better).

| Method | ASR | Top-3 | Top-5 | Top-10 | Top-50 | Top-100 | Avg. rank of GT labels |
|---|---|---|---|---|---|---|---|
| C&W 9×30 (Carlini & Wagner, 2016) | 99.9 | 36.9 | 50.5 | 66.3 | 90.0 | 95.1 | 20.4 |
| C&W 9×1000 (Carlini & Wagner, 2016) | 100 | 71.9 | 87.0 | 96.1 | 99.9 | 100 | 2.6 |
| FGSM (Goodfellow et al., 2015) | 80.7 | 25.5 | 37.8 | 52.8 | 81.2 | 89.2 | 44.2 |
| PGD10 (Madry et al., 2018) | 100 | 3.3 | 6.7 | 12 | 34.7 | 43.9 | 306.5 |
| MIFGSM10 (Dong et al., 2018) | 99.9 | 0.7 | 1.9 | 6.0 | 22.5 | 32.3 | 404.4 |

(Figure 1 caption, partial: ... ResNet-50. AD is better than the modified C&W method (CW*). The thickness represents the ℓ2 energy (thinner is better). Please see Sec. 4 for detail of experimental settings.)

The widely used C&W method (Carlini & Wagner, 2016) does not push the GT labels very far, especially when a smaller perturbation energy is targeted using a larger search range (e.g., the average rank of the GT label is 2.6 for C&W 9×1000). Consider Top-5: if the ground-truth labels of adversarial examples still largely appear in the Top-5 of the prediction, we may be over-confident about the 100% ASR, especially when some downstream modules may rely on Top-5 predictions in their decision making. In contrast, the three untargeted attack approaches are much better at pushing the GT labels, since they usually move against the GT label explicitly in the optimization, but their perturbation energies are usually much larger. As we shall show, more "robust" attack methods can be developed by harnessing the advantages of the two types of attack methods. In addition, the targeted Top-1 attack setting could limit the flexibility of attacks, and may lead to less rich perturbations. To facilitate explicit control of targeted attacks and enable more "robust" attack methods, one natural solution, which is the focus of this paper, is to develop ordered Top-k targeted attacks, which enforce the Top-k predicted labels of an adversarial example to be the k (randomly) selected and ordered labels (k ≥ 1, the GT label is exclusive). In this paper, we present two methods of learning ordered Top-k attacks. The basic idea is to design proper adversarial objective functions that result in imperceptible perturbations for any test image through iterative gradient-based back-propagation. First, we extend the vanilla Carlini-Wagner (C&W) method (Carlini & Wagner, 2016) and use it as a strong baseline. Second, we present an adversarial distillation (AD) framework consisting of two components: (i) Computing an adversarial probability distribution for any given ordered Top-k targeted labels. (ii) Learning adversarial examples by minimizing the Kullback-Leibler (KL) divergence between the adversarial distribution and the predicted distribution, together with the perturbation energy penalty. The proposed AD framework can be viewed as applying the network distillation frameworks (Hinton et al., 2015; Bucila et al., 2006; Papernot et al., 2016) for "the bad" induced by target adversarial distributions. To compute a proper adversarial distribution for any given ordered Top-k targeted labels, the AD framework is motivated by two aspects: (i) The difference between the objective functions used by the C&W method and the three untargeted attack methods (Table 1), respectively. The former maximizes the margin of the logits between the target and the runner-up (either GT or not), while the latter maximizes the cross-entropy between the prediction probabilities (softmax of logits) and the one-hot distribution of the ground-truth.
(ii) The label smoothing methods Pereyra et al., 2017) , which are often used to improve the performance of DNNs by addressing the over-confidence issue in the one-hot vector encoding of labels. More specifically, we explore how to leverage label semantic similarities in computing "smoothed" adversarial distributions, leading to knowledge-oriented attacks. We measure label semantic similarities using the cosine distance between some off-the-shelf word2vec embedding of labels such as the pretrained Glove embedding (Pennington et al., 2014) . Along this direction, another question of interest is further investigated: Are all Top-k targets equally challenging for an attack approach? In experiments, we test Top-k (k = 1, 2, 5, 10) in the ImageNet-1000 (Russakovsky et al., 2015) val dataset using two popular DNNs trained with clean ImageNet-1000 train dataset, ResNet-50 (He et al., 2016) and DenseNet-121 (Huang et al., 2017) respectively. Overall, the adversarial distillation approach obtains the best results. It reduces the perturbation energy consistently with the same attack success rate on all the four k's, and improve the attack success rate by large margin against the modified C&W method for k = 10 (see Fig. 1 ). We observe that Top-k targets that are distant from the GT label in terms of either label semantic distance or prediction scores of clean images are actually more difficulty to attack. In summary, not only can ordered Top-k attacks improve the "robustness" of attacks, but also they provide insights on how aggressive adversarial attacks can be (under affordable optimization budgets). Our Contributions. This paper makes three main contributions to the field of learning adversarial attacks: (i) The problem in study is novel. Learning ordered Top-k adversarial attacks is an important problem that reflects the robustness of attacks themselves, but has not been addressed in the literature. (ii) The proposed adversarial distillation framework is effective, especially when k is large (such as k = 5, 10). (iii) The proposed knowledge-oriented adversarial distillation is novel. It worth exploring the existing distillation framework for a novel problem (ordered Top-k adversarial attacks) with some novel modifications (knowledge-oriented target distributions as "teachers"). This paper proposes to extend the traditional Top-1 targeted attack setting to the ordered Top-k setting (k ≥ 1) under the white-box attack protocol. The ordered Top-k targeted attacks can improve the robustness of attacks themselves. To our knowledge, it is the first work studying this ordered Top-k attacks. To learn the ordered Top-k attacks, we present a conceptually simple yet effective adversarial distillation framework motivated by network distillation. We also develop a modified C&W method as the strong baseline for the ordered Top-k targeted attacks. In experiments, the proposed method is tested in ImageNet-1000 using two popular DNNs, ResNet-50 and DenseNet-121, with consistently better results obtained. We investigate the effectiveness of label semantic knowledge in designing the adversarial distribution for distilling the ordered Top-k targeted attacks. Discussions. We have shown that the proposed AD method is generally applicable to learn ordered Top-k attacks. 
But, we note that the two components of the AD framework are in their simplest forms in this paper, and need to be more thoroughly studied: designing more informative adversarial distributions to guide the optimization to learn adversarial examples better and faster, and investigating loss functions other than KL divergence such as the Jensen-Shannon (JS) divergence or the Earth-Mover distance. On the other hand, we observed that the proposed AD method is more effective when computation budget is limited (e.g., using the 9 × 30 search scheme). This leads to the theoretically and computationally interesting question whether different attack methods all will work comparably well if the computation budget is not limited. Of course, in practice, we prefer more powerful ones when only limited computation budget is allowed. Furthermore, we observed that both the modified C&W method and the AD method largely do not work in learning Top-k (k ≥ 20) attacks with the two search schema (9 × 30 and 9 × 1000). We are working on addressing the aforementioned issues to test the Top-k (k ≥ 20) cases, thus providing a thorough empirical answer to the question: how aggressive can adversarial attacks be?
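To illustrate the adversarial-distillation objective described in this record, the sketch below builds a target distribution for an ordered Top-k label list and computes the KL-plus-energy loss. The geometric-decay construction of the adversarial distribution (and the omission of label-semantic smoothing) is an assumption, not the paper's exact recipe.

```python
# Adversarial distillation loss for ordered Top-k attacks (PyTorch sketch).
import torch
import torch.nn.functional as F

def ordered_topk_distribution(ordered_targets, num_classes, decay=0.5, floor=1e-4):
    p = torch.full((num_classes,), floor)
    for rank, cls in enumerate(ordered_targets):  # earlier (higher-ranked) targets get more mass
        p[cls] = decay ** rank
    return p / p.sum()

def ad_loss(model, x, delta, p_adv, energy_weight=1e-2):
    log_pred = F.log_softmax(model(x + delta), dim=-1)
    kl = F.kl_div(log_pred, p_adv.expand_as(log_pred), reduction="batchmean")  # KL(p_adv || prediction)
    return kl + energy_weight * delta.pow(2).sum()  # plus the perturbation energy penalty
```

The perturbation `delta` would then be updated by gradient descent on this loss, as in the iterative back-propagation procedure described above.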
ordered Top-k adversarial attacks
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:490
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Neural message passing algorithms for semi-supervised classification on graphs have recently achieved great success. However, for classifying a node these methods only consider nodes that are a few propagation steps away and the size of this utilized neighborhood is hard to extend. In this paper, we use the relationship between graph convolutional networks (GCN) and PageRank to derive an improved propagation scheme based on personalized PageRank. We utilize this propagation procedure to construct a simple model, personalized propagation of neural predictions (PPNP), and its fast approximation, APPNP. Our model's training time is on par or faster and its number of parameters on par or lower than previous models. It leverages a large, adjustable neighborhood for classification and can be easily combined with any neural network. We show that this model outperforms several recently proposed methods for semi-supervised classification in the most thorough study done so far for GCN-like models. Our implementation is available online. Graphs are ubiquitous in the real world and its description through scientific models. They are used to study the spread of information, to optimize delivery, to recommend new books, to suggest friends, or to find a party's potential voters. Deep learning approaches have achieved great success on many important graph problems such as link prediction BID15 , graph classification BID12 BID31 BID13 and semi-supervised node classification BID43 BID21 .There are many approaches for leveraging deep learning algorithms on graphs. Node embedding methods use random walks or matrix factorization to directly train individual node embeddings, often without using node features and usually in an unsupervised manner, i.e. without leveraging node classes BID33 BID40 BID30 BID15 BID35 . Many other approaches use both graph structure and node features in a supervised setting. Examples for these include spectral graph convolutional neural networks BID6 BID11 , message passing (or neighbor aggregation) algorithms BID19 BID21 BID16 BID34 BID28 BID13 , and neighbor aggregation via recurrent neural networks BID36 BID24 BID10 . Among these categories, the class of message passing algorithms has garnered particular attention recently due to its flexibility and good performance.Several works have been aimed at improving the basic neighborhood aggregation scheme by using attention mechanisms BID19 BID16 BID41 , random walks BID0 BID44 , edge features BID19 BID13 BID37 and making it more scalable on large graphs BID44 . However, all of these methods only use the information of a very limited neighborhood for each node. A larger neighborhood would be desirable to provide the model with more information, especially for nodes in the periphery or in a sparsely labelled setting.Increasing the size of the neighborhood used by these algorithms, i.e. their range, is not trivial since neighborhood aggregation in this scheme is essentially a type of Laplacian smoothing and too many layers lead to oversmoothing . BID42 highlighted the same problem by establishing a relationship between the message passing algorithm termed Graph Convolutional Network (GCN) by BID21 and a random walk. 
Using this relationship we see that GCN converges to this random walk's limit distribution as the number of layers increases. The limit distribution is a property of the graph as a whole and does not take the random walk's starting (root) node into account. As such it is unsuited to describe the root node's neighborhood. Hence, GCN's performance necessarily deteriorates for a high number of layers (or aggregation/propagation steps). To solve this issue, in this paper, we first highlight the inherent connection between the limit distribution and PageRank BID32 . We then propose an algorithm that utilizes a propagation scheme derived from personalized PageRank instead. This algorithm adds a chance of teleporting back to the root node, which ensures that the PageRank score encodes the local neighborhood for every root node BID32 . The teleport probability allows us to balance the needs of preserving locality (i.e. staying close to the root node to avoid oversmoothing) and leveraging the information from a large neighborhood. We show that this propagation scheme permits the use of far more (in fact, infinitely many) propagation steps without leading to oversmoothing. Moreover, while propagation and classification are inherently intertwined in message passing, our proposed algorithm separates the neural network from the propagation scheme. This allows us to achieve a much higher range without changing the neural network, whereas in the message passing scheme every additional propagation step would require an additional layer. It also permits the independent development of the propagation algorithm and the neural network generating predictions from node features. That is, we can combine any state-of-the-art prediction method with our propagation scheme. We even found that adding our propagation scheme during inference significantly improves the accuracy of networks that were trained without using any graph information. Our model achieves state-of-the-art results while requiring fewer parameters and less training time compared to most competing models, with a computational complexity that is linear in the number of edges. We show these results in the most thorough study (including significance testing) of message passing models using graphs with text-based features that has been done so far. In this paper we have introduced personalized propagation of neural predictions (PPNP) and its fast approximation, APPNP. We derived this model by considering the relationship between GCN and PageRank and extending it to personalized PageRank. This simple model decouples prediction and propagation and solves the limited range problem inherent in many message passing models without introducing any additional parameters. It uses the information from a large, adjustable (via the teleport probability α) neighborhood for classifying each node. The model is computationally efficient and outperforms several state-of-the-art methods for semi-supervised classification on multiple graphs in the most thorough study which has been done for GCN-like models so far. For future work it would be interesting to combine PPNP with more complex neural networks used e.g. in computer vision or natural language processing. Furthermore, faster or incremental approximations of personalized PageRank BID2 BID3 BID25 and more sophisticated propagation schemes would also benefit the method. A EXISTENCE OF Π_PPR: The matrix Π_PPR = α(I_n − (1 − α)Â)^(−1) exists iff the determinant det(I_n − (1 − α)Â) ≠ 0, which is the case iff det(Â − (1/(1 − α)) I_n) ≠ 0, i.e.
iff 1/(1 − α) is not an eigenvalue of Â. This value is always larger than 1 since the teleport probability α ∈ (0, 1]. Furthermore, the symmetrically normalized matrix Â has the same eigenvalues as the row-stochastic matrix Ã_rw. This can be shown by multiplying the eigenvalue equation Âv = λv. The largest eigenvalue of a row-stochastic matrix is 1, as can be proven using the Gershgorin circle theorem. Hence, all eigenvalues of Â are at most 1, so 1/(1 − α) cannot be an eigenvalue of Â and Π_PPR exists. B CONVERGENCE OF APPNP: APPNP uses the iterative equation Z^(k+1) = (1 − α)ÂZ^(k) + αH. After the k-th propagation step, the resulting predictions are Z^(k) = ((1 − α)^k Â^k + α Σ_{i=0..k−1} (1 − α)^i Â^i) H. If we take the limit k → ∞ the left term tends to 0 and the right term becomes a geometric series. The series converges since α ∈ (0, 1] and Â is symmetrically normalized and therefore det(Â) ≤ 1, resulting in Z^(∞) = α(I_n − (1 − α)Â)^(−1) H = Π_PPR H. The sampling procedure is illustrated in FIG3 . The data is first split into a visible and a test set. For the visible set 1500 nodes were sampled for the citation graphs and 5000 for MICROSOFT ACADEMIC. The test set contains all remaining nodes. We use three different label sets in each experiment: a training set of 20 nodes per class, an early stopping set of 500 nodes and either a validation or test set. The validation set contains the remaining nodes of the visible set. We use 20 random seeds for determining the splits. These seeds are drawn once and fixed across runs to facilitate comparisons. We use one set of seeds for the validation splits and a different set for the test splits. Each experiment is run with 5 random initializations on each data split, leading to a total of 100 runs per experiment. The early stopping criterion uses a patience of p = 100 and an (unreachably high) maximum of n = 10 000 epochs. The patience is reset whenever the accuracy increases or the loss decreases on the early stopping set. We choose the parameter set achieving the highest accuracy and break ties by selecting the lowest loss on this set. This criterion was inspired by GAT BID41 . We used TensorFlow (Martín BID26 ) for all experiments except bootstrapped feature propagation. All uncertainties and confidence intervals correspond to a confidence level of 95 % and were calculated by bootstrapping with 1000 samples. We use the Adam optimizer with a learning rate of l = 0.01 and cross-entropy loss for all models BID20 . Weights are initialized as described in BID14 . The feature matrix is L1-normalized per row.
Personalized propagation of neural predictions (PPNP) improves graph neural networks by separating them into prediction and propagation via personalized PageRank.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:491
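Editor's note: the record above describes the PPNP/APPNP propagation scheme in prose and reconstructed equations. The NumPy sketch below is illustrative only and is not the authors' code; the function names, the toy graph, and the default α = 0.1 are assumptions made here. It assumes the symmetrically normalized adjacency matrix Â and the per-node predictions H (from any feature-only classifier) are already computed.

```python
import numpy as np

def ppnp_exact(A_hat, H, alpha=0.1):
    """Exact PPNP: Z = alpha * (I - (1 - alpha) * A_hat)^-1 @ H."""
    n = A_hat.shape[0]
    pi_ppr = alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * A_hat)
    return pi_ppr @ H

def appnp(A_hat, H, alpha=0.1, num_steps=10):
    """APPNP: power iteration Z <- (1 - alpha) * A_hat @ Z + alpha * H."""
    Z = H.copy()
    for _ in range(num_steps):
        Z = (1.0 - alpha) * (A_hat @ Z) + alpha * H
    return Z

if __name__ == "__main__":
    # Toy 4-node chain graph, symmetrically normalized with self-loops.
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    A_tilde = A + np.eye(4)                    # add self-loops
    d = A_tilde.sum(1)
    A_hat = A_tilde / np.sqrt(np.outer(d, d))  # D^-1/2 (A + I) D^-1/2
    H = np.random.randn(4, 3)                  # per-node class logits
    print(np.abs(ppnp_exact(A_hat, H) - appnp(A_hat, H, num_steps=50)).max())
```

The sketch mirrors the separation the paper emphasizes: H can come from any neural network trained on node features alone, and the personalized-PageRank propagation is applied on top of it, either exactly (matrix inverse) or approximately via a few power-iteration steps.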
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Most of recent work in cross-lingual word embeddings is severely Anglocentric. The vast majority of lexicon induction evaluation dictionaries are between English and another language, and the English embedding space is selected by default as the hub when learning in a multilingual setting. With this work, however, we challenge these practices. First, we show that the choice of hub language can significantly impact downstream lexicon induction performance. Second, we both expand the current evaluation dictionary collection to include all language pairs using triangulation, and also create new dictionaries for under-represented languages. Evaluating established methods over all these language pairs sheds light into their suitability and presents new challenges for the field. Finally, in our analysis we identify general guidelines for strong cross-lingual embeddings baselines, based on more than just Anglocentric experiments. Continuous distributional vectors for representing words (embeddings) (Turian et al., 2010) have become ubiquitous in modern, neural NLP. Cross-lingual representations (Mikolov et al., 2013) additionally represent words from various languages in a shared continuous space, which in turn can be used for Bilingual Lexicon Induction (BLI). BLI is often the first step towards several downstream tasks such as Part-Of-Speech (POS) tagging (Zhang et al., 2016) , parsing (Ammar et al., 2016) , document classification (Klementiev et al., 2012) , and machine translation (Irvine and CallisonBurch, 2013; Artetxe et al., 2018b; Lample et al., 2018) . Often, such shared representations are learned with a two-step process, whether under bilingual or multilingual settings (hereinafter BWE and MWE, respectively) . First, monolingual word embeddings are learned over large swaths of text; such pre-trained word embeddings, in fact, are available for several languages and are widely used, like the fastText Wikipedia vectors (Grave et al., 2018) . Second, a mapping between the languages is learned, in one of three ways: in a supervised manner if dictionaries or parallel data are available to be used for supervision (Zou et al., 2013) , under minimal supervision e.g. using only identical strings (Smith et al., 2017) , or even in a completely unsupervised fashion (Zhang et al., 2017; Conneau et al., 2018) . Both in bilingual and multilingual settings, it is common that one of the language embedding spaces is the target to which all other languages get aligned to (hereinafter "the hub"). We outline the details in Section 2. Despite all the recent progress in learning cross-lingual embeddings, we identify a major shortcoming to previous work: it is by and large English-centric. Notably, most MWE approaches essentially select English as the hub during training by default, aligning all other language spaces to the English one. We argue and empirically show, however, that English is a poor hub language choice. In BWE settings, on the other hand, it is fairly uncommon to denote which of the two languages is the hub (often this is implied to be the target language). However, we experimentally find that this choice can greatly impact downstream performance, especially when aligning distant languages. This Anglocentricity is even more evident at the evaluation stage. 
The lexica most commonly used for evaluation are the MUSE lexica (Conneau et al., 2018) which cover 45 languages, but with translations only from and into English. Even still, alternative evaluation dictionaries are also very English-and European-centric: Dinu and Baroni (2014) report results on English-Italian, Artetxe et al. (2017) on English-German and English-Finnish, Zhang et al. (2017) on Spanish-English and Italian-English, and Artetxe et al. (2018a) between English and Italian, German, Finnish, Spanish, and Turkish. We argue that cross-lingual word embedding mapping methods should look beyond English for their evaluation benchmarks because, compared to all others, English is a language with disproportionately large available data and relatively poor inflectional morphology e.g., it lacks case, gender, and complex verbal inflection systems (Aronoff and Fudeman, 2011) . These two factors allow for an overly easy evaluation setting which does not necessarily generalize to other language pairs. In light of this, equal focus should instead be devoted to evaluation over more diverse language pairs that also include morphologically rich and low-resource languages. With this work, we attempt to address these shortcomings, providing the following contributions: • We show that the choice of the hub when evaluating on diverse language pairs can lead to significantly different performance (e.g., by more than 10 percentage points for BWE over distant languages). We also show that often English is a suboptimal hub for MWE. • We identify some general guidelines for choosing a hub language which could lead to stronger baselines; less isometry between the hub and source and target embedding spaces mildly correlates with performance, as does typological distance (a measure of language similarity based on language family membership trees). For distant languages, multilingual systems should in most cases be preferred over bilingual ones. • We provide resources for training and evaluation on non-Anglocentric language pairs. We outline a simple triangulation method with which we extend the MUSE dictionaries to an additional 2352 lexicons covering 49 languages, and we present results on a subset of them. We also create new evaluation lexica for under-resourced languages using Azerbaijani, Belarusian, and Galician as our test cases. We additionally provide recipes for creating such dictionaries for any language pair with available parallel data. With this work we challenge the standard practices in learning cross-lingual word embeddings. We empirically showed that the choice of the hub language is an important parameter that affects lexicon induction performance in both bilingual (between distant languages) and multilingual settings. More importantly, we hope that by providing new dictionaries and baseline results on several language pairs, we will stir the community towards evaluating all methods in challenging scenarios that include under-represented language pairs. Towards this end, our analysis provides insights and general directions for stronger baselines for non-Anglocentric cross-lingual word embeddings. A Does evaluation directionality matter? We also explored whether there are significant differences between the evaluated quality of aligned spaces, when computed on both directions (src-trg and trg-src). 
We find that the evaluation direction indeed matters a lot, when the languages of the evaluation pair are very distant, in terms of morphological complexity and data availability (which affects the quality of the original embeddings). A prominent example, from our European-languages experiment, are evaluation pairs involving Az or Be. When evaluating on the Az-XX and Be-XX dictionaries, the word translation P@1 is more than 20 percentage points higher than when evaluating on the opposite direction (XX-Az or XX-Be). For example, Es-Az has a mere P@1 of 9.9, while Az-Es achieves a P@1 of 44.9. This observation holds even between very related languages (cf. Ru-Be: 12.8, Be-Ru: 41.1 and Tr-Az: 8.4, Az-Tr: 32.0), which supports our hypothesis that this difference is also due to the quality of the pre-trained embeddings. It is important to note that such directionality differences are not observed when evaluating distant pairs with presumably high-quality pre-trained embeddings e.g. Tr-Sk or Tr-Es; the P@1 for both directions is very close.
The choice of the hub (target) language affects the quality of cross-lingual embeddings, which shouldn't be evaluated only on English-centric dictionaries.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:492
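Editor's note: the record above mentions extending the English-centric MUSE lexica to new language pairs by triangulation through English. The sketch below is an illustrative reconstruction of that idea, not the authors' code; the dictionary representation and the example entries are assumptions.

```python
from collections import defaultdict

def triangulate(src2en, en2tgt):
    """Compose a src->tgt lexicon through English as the pivot language.

    src2en: dict mapping source words to sets of English translations.
    en2tgt: dict mapping English words to sets of target translations.
    """
    src2tgt = defaultdict(set)
    for src_word, en_words in src2en.items():
        for en_word in en_words:
            for tgt_word in en2tgt.get(en_word, ()):
                src2tgt[src_word].add(tgt_word)
    return dict(src2tgt)

# Tiny illustrative example (hypothetical entries).
de2en = {"hund": {"dog"}, "katze": {"cat"}}
en2es = {"dog": {"perro"}, "cat": {"gato"}}
print(triangulate(de2en, en2es))  # {'hund': {'perro'}, 'katze': {'gato'}}
```

In practice, pivoting through polysemous English words can introduce spurious pairs, so some filtering of the triangulated entries is usually needed.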
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Interpreting generative adversarial network (GAN) training as approximate divergence minimization has been theoretically insightful, has spurred discussion, and has lead to theoretically and practically interesting extensions such as f-GANs and Wasserstein GANs. For both classic GANs and f-GANs, there is an original variant of training and a "non-saturating" variant which uses an alternative form of generator update. The original variant is theoretically easier to study, but the alternative variant frequently performs better and is recommended for use in practice. The alternative generator update is often regarded as a simple modification to deal with optimization issues, and it appears to be a common misconception that the two variants minimize the same divergence. In this short note we derive the divergences approximately minimized by the original and alternative variants of GAN and f-GAN training. This highlights important differences between the two variants. For example, we show that the alternative variant of KL-GAN training actually minimizes the reverse KL divergence, and that the alternative variant of conventional GAN training minimizes a "softened" version of the reverse KL. We hope these results may help to clarify some of the theoretical discussion surrounding the divergence minimization view of GAN training.
Typical GAN training doesn't optimize Jensen-Shannon, but something like a reverse KL divergence.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:493
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: REINFORCE can be used to train models in structured prediction settings to directly optimize the test-time objective. However, the common case of sampling one prediction per datapoint (input) is data-inefficient. We show that by drawing multiple samples (predictions) per datapoint, we can learn with significantly less data, as we freely obtain a REINFORCE baseline to reduce variance. Additionally we derive a REINFORCE estimator with baseline, based on sampling without replacement. Combined with a recent technique to sample sequences without replacement using Stochastic Beam Search, this improves the training procedure for a sequence model that predicts the solution to the Travelling Salesman Problem. REINFORCE (Williams, 1992 ) is a well known policy optimization algorithm that learns directly from experience. Variants of it have been used to train models for a wide range of structured prediction tasks, such as Neural Machine Translation BID12 BID0 , Image Captioning (Vinyals et al., 2015b) and predicting solutions (tours) for the Travelling Salesman Problem (TSP) BID1 BID6 . As opposed to maximum likelihood (supervised) learning, the appeal of using REINFORCE for structured prediction is that it directly optimizes the test-time performance.When using REINFORCE, often for each datapoint (e.g. a sentence, image or TSP instance) only a single sample/prediction (e.g. a translation, caption or tour) is used to construct a gradient estimate. From a classic Reinforcement Learning (RL) point of view, this makes sense, as we may not be able to evaluate multiple sampled actions for a state (datapoint). However, from a data point of view, this is inefficient if we can actually evaluate multiple samples, such as in a structured prediction setting. Reinforcement Learning with multiple samples/predictions for a single datapoint has been used before (e.g. BID14 ; ), but we use the samples as counterfactual information by constructing a (local, for a single datapoint) REINFORCE baseline. A similar idea was applied for variational inference by BID10 .Many structured prediction tasks can be formulated in terms of sequence modelling, which is the focus of this paper. In most sequence modelling tasks, the objective is a deterministic function of the predicted sequence. As a result , duplicate sampled sequences are uninformative and therefore do not improve the quality of the gradient estimate. To solve this problem, we propose to use sampling without replacement to construct a better gradient estimate. This is inspired by recent work by BID7 , who introduce Stochastic Beam Search as a method to sample sequences without replacement, and use this to construct a (normalized) importance-weighted estimator for (sentence level) BLEU score. We extend this idea to estimate policy gradients using REINFORCE, and we show how to use the same set of samples (without replacement) to construct a baseline. This way we can leverage sampling without replacement to improve training of sequence models.In our experiment, we consider the TSP and show that using REINFORCE with multiple samples is beneficial compared to single sample REINFORCE, both computationally and in terms of data-efficiency. 
Additionally, for a sample size of 4 − 8 samples per datapoint, sampling without replacement results in slightly faster learning. In this paper, we have derived REINFORCE estimators based on drawing multiple samples, with and without replacement, and evaluated the effectiveness of the proposed estimators in a structured prediction setting: the prediction of tours for the TSP. The derived estimators yield results comparable to recent results using REINFORCE with a strong greedy rollout baseline, at greater data-efficiency and computational efficiency.These estimators are especially well suited for structured prediction settings, where the domain is too large to compute exact gradients, but we are able to take multiple samples for the same datapoint, and the objective is a deterministic function of the sampled prediction. We hope the proposed estimators have potential to be used to improve training efficiency in more structured prediction settings, for example in the context of Neural Machine Translation or Image Captioning, where depending on the entropy of the model, sampling without replacement may yield a beneficial improvement.
We show that by drawing multiple samples (predictions) per input (datapoint), we can learn with less data as we freely obtain a REINFORCE baseline.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:494
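Editor's note: a minimal PyTorch sketch of the multi-sample REINFORCE estimator with a leave-one-out baseline described in the record above. It is illustrative only: the policy, the sampler, and the reward (e.g. negative tour length for TSP) are placeholders, and the importance weighting needed for the without-replacement variant based on Stochastic Beam Search is omitted.

```python
import torch

def reinforce_loss_multi_sample(log_probs, rewards):
    """REINFORCE surrogate loss with a leave-one-out baseline.

    log_probs: tensor [batch, k], log-probability of each sampled prediction.
    rewards:   tensor [batch, k], reward (e.g. negative tour length) per sample.
    Returns a scalar whose gradient is the REINFORCE estimator with baseline.
    """
    k = rewards.size(1)
    # Baseline for sample i: mean reward of the other k - 1 samples.
    baseline = (rewards.sum(dim=1, keepdim=True) - rewards) / (k - 1)
    advantage = (rewards - baseline).detach()
    # Minimize the negative surrogate so SGD ascends the expected reward.
    return -(advantage * log_probs).mean()

# Usage with a hypothetical batch of 2 datapoints and k = 4 samples each.
log_probs = torch.randn(2, 4, requires_grad=True)
rewards = torch.randn(2, 4)
loss = reinforce_loss_multi_sample(log_probs, rewards)
loss.backward()
```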
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Reinforcement learning (RL) is a powerful technique to train an agent to perform a task. However, an agent that is trained using RL is only capable of achieving the single task that is specified via its reward function. Such an approach does not scale well to settings in which an agent needs to perform a diverse set of tasks, such as navigating to varying positions in a room or moving objects to varying locations. Instead, we propose a method that allows an agent to automatically discover the range of tasks that it is capable of performing in its environment. We use a generator network to propose tasks for the agent to try to achieve, each task being specified as reaching a certain parametrized subset of the state-space. The generator network is optimized using adversarial training to produce tasks that are always at the appropriate level of difficulty for the agent. Our method thus automatically produces a curriculum of tasks for the agent to learn. We show that, by using this framework, an agent can efficiently and automatically learn to perform a wide set of tasks without requiring any prior knowledge of its environment (Videos and code available at: https://sites.google.com/view/goalgeneration4rl). Our method can also learn to achieve tasks with sparse rewards, which pose significant challenges for traditional RL methods. Reinforcement learning (RL) can be used to train an agent to perform a task by optimizing a reward function. Recently, a number of impressive results have been demonstrated by training agents using RL: such agents have been trained to defeat a champion Go player BID16 , to outperform humans in 49 Atari games (Guo et al., 2016; Mnih et al., 2015) , and to perform a variety of difficult robotics tasks (Lillicrap et al., 2015; BID18 . In each of the above cases, the agent is trained to optimize a single reward function in order to learn to perform a single task. However, there are many real-world environments in which a robot will need to be able to perform not a single task but a diverse set of tasks, such as navigating to varying positions in a room or moving objects to varying locations. We consider the problem of maximizing the average success rate of our agent over all possible goals, where success is defined as the probability of successfully reaching each goal by the current policy.In order to efficiently maximize this objective, the algorithm must intelligently choose which goals to focus on at every training stage: goals should be at the appropriate level of difficulty for the current policy. To do so, our algorithm allows an agent to generate its own reward functions, defined with respect to target subsets of the state space, called goals. We generate such goals using a Goal Generative Adversarial Network (Goal GAN), a variation of to the GANs introduced by Goodfellow et al. (2014) . A goal discriminator is trained to evaluate whether a goal is at the appropriate level of difficulty for the current policy, and a goal generator is trained to generate goals that meet this criteria. We show that such a framework allows an agent to quickly learn a policy that reaches all feasible goals in its environment, with no prior knowledge about the environment or the tasks being performed. 
Our method automatically creates a curriculum, in which, at each step, the generator generates goals that are only slightly more difficult than the goals that the agent already knows how to achieve.In summary, our main contribution is a method for automatic curriculum generation that considerably improves the sample efficiency of learning to reach all feasible goals in the environment.Learning to reach multiple goals is useful for multi-task settings such as navigation or manipulation, in which we want the agent to perform a wide range of tasks. Our method also naturally handles sparse reward functions, without needing to manually modify the reward function for every task, based on prior task knowledge. Instead, our method dynamically modifies the probability distribution from which goals are sampled to ensure that the generated goals are always at the appropriate difficulty level, until the agent learns to reach all goals within the feasible goal space. We propose a new paradigm in RL where the objective is to train a single policy to succeed on a variety of goals, under sparse rewards. To solve this problem we develop a method for automatic curriculum generation that dynamically adapts to the current performance of the agent. The curriculum is obtained without any prior knowledge of the environment or of the tasks being performed. We use generative adversarial training to automatically generate goals for our policy that are always at the appropriate level of difficulty (i.e. not too hard and not too easy). In the future we want to combine our goal-proposing strategy with recent multi-goal approaches like HER BID1 ) that could greatly benefit from better ways to select the next goal to train on. Another promising line of research is to build hierarchy on top of the multi-task policy that we obtain with our method by training a higher-level policy that outputs the goal for the lower level multi-task policy (like in Heess et al. (2016 ) or in Florensa et al. (2017a ). The hierarchy could also be introduced by replacing our current feed-forward neural network policy by an architecture that learns to build implicit plans (Mnih et al., 2016; BID18 , or by leveraging expert demonstrations to extract sub-goals BID23 , although none of these approaches tackles yet the multi-task learning problem formulated in this work.
We efficiently solve multi-task problems with an automatic curriculum generation algorithm based on a generative model that tracks the learning agent's performance.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:495
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: A wide range of defenses have been proposed to harden neural networks against adversarial attacks. However, a pattern has emerged in which the majority of adversarial defenses are quickly broken by new attacks. Given the lack of success at generating robust defenses, we are led to ask a fundamental question: Are adversarial attacks inevitable? This paper analyzes adversarial examples from a theoretical perspective, and identifies fundamental bounds on the susceptibility of a classifier to adversarial attacks. We show that, for certain classes of problems, adversarial examples are inescapable. Using experiments, we explore the implications of theoretical guarantees for real-world problems and discuss how factors such as dimensionality and image complexity limit a classifier's robustness against adversarial examples. A number of adversarial attacks on neural networks have been recently proposed. To counter these attacks, a number of authors have proposed a range of defenses. However, these defenses are often quickly broken by new and revised attacks. Given the lack of success at generating robust defenses, we are led to ask a fundamental question: Are adversarial attacks inevitable?In this paper, we identify a broad class of problems for which adversarial examples cannot be avoided. We also derive fundamental limits on the susceptibility of a classifier to adversarial attacks that depend on properties of the data distribution as well as the dimensionality of the dataset.Adversarial examples occur when a small perturbation to an image changes its class label. There are different ways of measuring what it means for a perturbation to be "small"; as such, our analysis considers a range of different norms. While the ∞ -norm is commonly used, adversarial examples can be crafted in any p -norm (see FIG0 ). We will see that the choice of norm can have a dramatic effect on the strength of theoretical guarantees for the existence of adversarial examples. Our analysis also extends to the 0 -norm, which yields "sparse" adversarial examples that only perturb a small subset of image pixels FIG2 ). BID19 on Resnet50 , along with the distance between the base image and the adversarial example, and the top class label. There are a number of ways to escape the guarantees of adversarial examples made by Theorems 1-4. One potential escape is for the class density functions to take on extremely large values (i.e., exponentially large U c ); the dependence of U c on n is addressed separately in Section 8.Unbounded density functions and low-dimensional data manifolds In practice, image datasets might lie on low-dimensional manifolds within the cube, and the support of these distributions could have measure zero, making the density function infinite (i.e., U c = ∞). The arguments above are still relevant (at least in theory) in this case; we can expand the data manifold by adding a uniform random noise to each image pixel of magnitude at most 1 . The expanded dataset has positive volume. Then, adversarial examples of this expanded dataset can be crafted with perturbations of size 2 . This method of expanding the manifold before crafting adversarial examples is often used in practice. 
BID39 proposed adding a small perturbation to step off the image manifold before crafting adversarial examples. This strategy is also used during adversarial training BID19 .Adding a "don't know" class The analysis above assumes the classifier assigns a label to every point in the cube. If a classifier has the ability to say "I don't know," rather than assign a label to every input, then the region of the cube that is assigned class labels might be very small, and adversarial examples could be escaped even if the other assumptions of Theorem 4 are satisfied. In this case, it would still be easy for the adversary to degrade classifier performance by perturbing images into the "don't know" class.Feature squeezing If decreasing the dimensionality of data does not lead to substantially increased values for U c (we see in Section 8 that this is a reasonable assumption) or loss in accuracy (a stronger assumption), measuring data in lower dimensions could increase robustness. This can be done via an auto-encoder BID22 BID30 , JPEG encoding BID9 , or quantization BID43 .Computational hardness It may be computationally hard to craft adversarial examples because of local flatness of the classification function, obscurity of the classifier function, or other computational difficulties. Computational hardness could prevent adversarial attacks in practice, even if adversarial examples still exist.
This paper identifies classes of problems for which adversarial examples are inescapable, and derives fundamental bounds on the susceptibility of any classifier to adversarial examples.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:496
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: For computer vision applications, prior works have shown the efficacy of reducing numeric precision of model parameters (network weights) in deep neural networks. Activation maps, however, occupy a large memory footprint during both the training and inference step when using mini-batches of inputs. One way to reduce this large memory footprint is to reduce the precision of activations. However, past works have shown that reducing the precision of activations hurts model accuracy. We study schemes to train networks from scratch using reduced-precision activations without hurting accuracy. We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network. As a result, one can significantly improve the execution efficiency (e.g. reduce dynamic memory footprint, memory band- width and computational energy) and speed up the training and inference process with appropriate hardware support. We call our scheme WRPN -- wide reduced-precision networks. We report results and show that WRPN scheme is better than previously reported accuracies on ILSVRC-12 dataset while being computationally less expensive compared to previously reported reduced-precision networks. A promising approach to lower the compute and memory requirements of convolutional deeplearning workloads is through the use of low numeric precision algorithms. Operating in lower precision mode reduces computation as well as data movement and storage requirements. Due to such efficiency benefits, there are many existing works which propose low-precision deep neural networks (DNNs) BID27 BID12 BID14 BID6 ; BID24 , even down to 2-bit ternary mode BID29 BID11 BID25 and 1-bit binary mode BID28 BID2 BID16 BID23 . However, majority of existing works in low-precision DNNs sacrifice accuracy over the baseline full-precision networks. Further, most prior works target reducing the precision of the model parameters (network weights). This primarily benefits the inference step only when batch sizes are small.We observe that activation maps (neuron outputs) occupy more memory compared to the model parameters for batch sizes typical during training. This observation holds even during inference when batch size is around eight or more. Based on this observation, we study schemes for training and inference using low-precision DNNs where we reduce the precision of activation maps as well as the model parameters without sacrificing network accuracy.To improve both execution efficiency and accuracy of low-precision networks, we reduce both the precision of activation maps and model parameters and increase the number of filter maps in a layer. We call networks using this scheme wide reduced-precision networks (WRPN) and find that this scheme compensates or surpasses the accuracy of the baseline full-precision network. Although the number of raw compute operations increases as we increase the number of filter maps in a layer, the compute bits required per operation is now a fraction of what is required when using full-precision operations (e.g. 
going from FP32 AlexNet to 4-bits precision and doubling the number of filters increases the number of compute operations by 4x, but each operation is 8x more efficient than FP32). WRPN offers better accuracies, while being computationally less expensive compared to previously reported reduced-precision networks. We report results on AlexNet BID10 , batch-normalized Inception BID8 , and ResNet-34 BID7 on the ILSVRC-12 (Russakovsky et al., 2015) dataset. We find 4-bits to be sufficient for training deep and wide models while achieving similar or better accuracy than the baseline network. With 4-bit activations and 2-bit weights, we find the accuracy to be at par with the baseline full-precision network. Making the networks wider and operating with 1-bit precision, we close the accuracy gap between previously reported binary networks and show state-of-the-art results for ResNet-34 (69.85% top-1 with 2x wide) and AlexNet (48.04% top-1 with 1.3x wide). To the best of our knowledge, our reported accuracies with binary networks and 4-bit precision are the highest to date. Our reduced-precision quantization scheme is hardware friendly, allowing for efficient hardware implementations. To this end, we evaluate the efficiency benefits of low-precision operations (4-bits down to 1-bit) on a Titan X GPU, an Arria-10 FPGA and an ASIC. We see that the FPGA and ASIC can deliver significant efficiency gains over FP32 operations (6.5x to 100x), while the GPU cannot take advantage of very low-precision operations. While most prior works proposing reduced-precision networks work with low-precision weights (e.g. work in BID2 BID29 BID28 BID25 ; BID11 ; BID23 ), we find that activation maps occupy a larger memory footprint when using mini-batches of inputs. Using mini-batches of inputs is typical in training of DNNs and cloud-based batched inference BID9 . FIG0 shows the memory footprint of activation maps and filter maps as the batch size changes for 4 different networks (including AlexNet and Inception-Resnet-v2 BID22 ) during the training and inference steps.
We believe ours to be the first work to study the interplay between layer width and precision: with widening, the number of neurons in a layer increases; yet with reduced precision, we control overfitting and regularization. We motivate this work with our observation that full-precision activations contribute significantly more to the memory footprint than full-precision weight parameters when using mini-batch sizes common during training and cloud-based inference; furthermore, by reducing the precision of both activations and weights the compute complexity is greatly reduced (40% of baseline for 2-bit weights and 4-bit activations). The WRPN quantization scheme and computation on low-precision activations and weights is hardware friendly, making it viable for deeply-embedded system deployments as well as for cloud-based training and inference servers with compute fabrics for low precision. We compare Titan X GPU, Arria-10 FPGA and ASIC implementations using WRPN and show our scheme increases performance and energy-efficiency for iso-accuracy across each. Overall, reducing the precision allows custom-designed compute units and lower buffering requirements to provide significant improvement in throughput.
Lowering precision (to 4-bits, 2-bits and even binary) and widening the filter banks gives as accurate network as those obtained with FP32 weights and activations.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:497
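Editor's note: the record above trains networks with reduced-precision weights and activations. The sketch below shows a generic k-bit uniform quantizer with a straight-through estimator, a common way to train such networks; it is an assumption-laden illustration rather than the exact WRPN quantization scheme, and the widening of the filter banks is not shown.

```python
import torch

class QuantizeSTE(torch.autograd.Function):
    """Round to k-bit levels in [0, 1]; pass gradients straight through."""

    @staticmethod
    def forward(ctx, x, k):
        levels = 2 ** k - 1
        return torch.round(x * levels) / levels

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None  # straight-through estimator

def quantize_activations(a, k=4):
    # Clip activations to [0, 1] before k-bit quantization.
    return QuantizeSTE.apply(a.clamp(0.0, 1.0), k)

def quantize_weights(w, k=2):
    # Map weights to [0, 1], quantize, then map back to [-1, 1].
    x = (w.clamp(-1.0, 1.0) + 1.0) / 2.0
    return 2.0 * QuantizeSTE.apply(x, k) - 1.0

a = torch.rand(8, requires_grad=True)
q = quantize_activations(a)
q.sum().backward()  # gradients flow thanks to the STE
print(a.grad)
```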
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: We investigate methods for semi-supervised learning (SSL) of a neural linear-chain conditional random field (CRF) for Named Entity Recognition (NER) by treating the tagger as the amortized variational posterior in a generative model of text given tags. We first illustrate how to incorporate a CRF in a VAE, enabling end-to-end training on semi-supervised data. We then investigate a series of increasingly complex deep generative models of tokens given tags enabled by end-to-end optimization, comparing the proposed models against supervised and strong CRF SSL baselines on the Ontonotes5 NER dataset. We find that our best proposed model consistently improves performance by $\approx 1\%$ F1 in low- and moderate-resource regimes and easily addresses degenerate model behavior in a more difficult, partially supervised setting. Named entity recognition (NER) is a critical subtask of many domain-specific natural language understanding tasks in NLP, such as information extraction, entity linking, semantic parsing, and question answering. State-of-the-art models treat NER as a tagging problem (Lample et al., 2016; Ma & Hovy, 2016; Strubell et al., 2017; Akbik et al., 2018) , and while they have become quite accurate on benchmark datasets in recent years (Lample et al., 2016; Ma & Hovy, 2016; Strubell et al., 2017; Akbik et al., 2018; Devlin et al., 2018) , utilizing them for new tasks is still expensive, requiring a large corpus of exhaustively annotated sentences (Snow et al., 2008) . This problem has been largely addressed by extensive pretraining of high-capacity sentence encoders on massive-scale language modeling tasks Devlin et al., 2018; Howard & Ruder, 2018; Radford et al., 2019; Liu et al., 2019b) , but it is natural to ask if we can squeeze more signal from our unlabeled data. Latent-variable generative models of sentences are a natural approach to this problem: by treating the tags for unlabeled data as latent variables, we can appeal to the principle of maximum marginal likelihood (Berger, 1985; Bishop, 2006) and learn a generative model on both labeled and unlabeled data. For models of practical interest, however, this presents multiple challenges: learning and prediction both require an intractable marginalization over the latent variables and the specification of the generative model can imply a posterior family that may not be as performant as the current state-of-the-art discriminative models. We address these challenges using a semi-supervised Variational Autoencoder (VAE) (Kingma et al., 2014) , treating a neural tagging CRF as the approximate posterior. We address the issue of optimization through discrete latent tag sequences by utilizing a differentiable relaxation of the Perturb-and-MAP algorithm (Papandreou & Yuille, 2011; Mensch & Blondel, 2018; Corro & Titov, 2018) , allowing for end-to-end optimization via backpropagation (Rumelhart et al., 1988) and SGD (Robbins & Monro, 1951) . Armed with this learning approach, we no longer need to restrict the generative model family (as in Ammar et al. (2014) ; Zhang et al. (2017) ), and explore the use of rich deep generative models of text given tag sequences for improving NER performance. 
We also demonstrate how to use the VAE framework to learn in a realistic annotation scenario where we only observe a biased subset of the named entity tags. Our contributions can be summarized as follows: 1. We address the problem of semi-supervised learning (SSL) for NER by treating a neural CRF as the amortized approximate posterior in a discrete structured VAE. To the best of our knowledge, we are the first to utilize VAEs for NER. 2. We explore several variants of increasingly complex deep generative models of text given tags with the goal of improving tagging performance. We find that a joint tag-encoding Transformer (Vaswani et al., 2017) architecture leads to an ≈ 1% improvement in F1 score over supervised and strong CRF SSL baselines. 3. We demonstrate that the proposed approach elegantly corrects for degenerate model performance in a more difficult partially supervised regime where sentences are not exhaustively annotated and again find improved performance. 4. Finally, we show the utility of our method in realistic low-and high-resource scenarios, varying the amount of unlabeled data. The resulting high-resource model is competitive with state-of-the-art results and, to the best of our knowledge, achieves the highest reported F1 score (88.4%) for models that do not use additional labeled data or gazetteers. We proposed a novel generative model for semi-supervised learning in NER. By treating a neural CRF as the amortized variational posterior in the generative model and taking relaxed differentiable samples, we were able to utilize a transformer architecture in the generative model to condition on more context and provide appreciable performance gains over supervised and strong baselines on both semi-supervised and partially-supervised datasets. We also found that inclusion of powerful pretrained autoregressive language modeling states had neglible or negative effects while using a pretrained bidirectional encoder offers significant performance gains. Future work includes the use of larger in-domain unlabeled corpora and the inclusion of latent-variable CRFs in more interesting joint semi-supervised models of annotations, such as relation extraction and entity linking. Gumbel, 1954) and τ ≥ 0 be the temperature: We know from Papandreou & Yuille (2011) that the MAP sequence from this perturbed distribution is a sample from the unperturbed distribution. Coupled with the property that the zero temperature limit of the Gibbs distribution is the MAP state (Wainwright et al., 2008) , it immediately follows that the zero temperature limit of the perturbedq is a sample from q: ⇒ lim τ →0q where q φ (y|x; τ ) is the tempered but unperturbed q φ and "one-hot" is a function that converts elements of Y N to a one-hot vector representation. Thus we can use the temperature τ to anneal the perturbed joint distributionq φ (y|x; τ ) to a sample from the unperturbed distribution,ỹ ∼ q φ . When τ > 0,q φ (y|x; τ ) is differentiable and can be used for end-to-end optimization by allowing us to approximate the expectation with a relaxed single-sample Monte Carlo estimate: where we have modified log p θ (x|y) to accept the simplex representations of y 1:N fromq φ instead of discrete elements, which has the effect of log p θ (x|y) computing a weighted combination of its input vector representations for y ∈ Y similarly to an attention mechanism or the annotation function in Kim et al. (2017) (see Equation 7.) This can be thought of as a generalization of the Gumbel-softmax trick from Jang et al. 
(2016); Maddison et al. (2016) to structured joint distributions. The statements in (8-10) also imply something of practical interest: we can compute (1) the argmax (Viterbi decoding) and its differentiable relaxation; (2) a sample and its differentiable relaxation; (3) the partition function; and (4) the marginal tag distributions, all using the same sum-product algorithm implementation, controlled by the temperature and the presence of noise. We have detailed the algorithm in Appendix B.
We embed a CRF in a VAE of tokens and NER tags for semi-supervised learning and show improvements in low-resource settings.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:498
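Editor's note: the appendix excerpt above generalizes the Gumbel-softmax trick to structured CRF distributions. For reference, the standard unstructured relaxation it builds on can be sketched as below; the structured perturb-and-MAP version in the paper additionally runs a relaxed Viterbi/sum-product pass over the tag chain, which is not shown here.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=1.0, hard=False):
    """Relaxed one-hot sample from a categorical distribution.

    As tau -> 0 the relaxed sample approaches a one-hot sample from
    softmax(logits); for tau > 0 it stays differentiable w.r.t. logits.
    """
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    y_soft = F.softmax((logits + gumbel) / tau, dim=-1)
    if hard:
        # Discretize in the forward pass, keep soft gradients (straight-through).
        index = y_soft.argmax(dim=-1, keepdim=True)
        y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
        return y_hard - y_soft.detach() + y_soft
    return y_soft

logits = torch.randn(3, 5, requires_grad=True)  # e.g. 3 tokens, 5 tags
sample = gumbel_softmax_sample(logits, tau=0.5)
sample.sum().backward()
```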
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: To make deep neural networks feasible in resource-constrained environments (such as mobile devices), it is beneficial to quantize models by using low-precision weights. One common technique for quantizing neural networks is the straight-through gradient method, which enables back-propagation through the quantization mapping. Despite its empirical success, little is understood about why the straight-through gradient method works. Building upon a novel observation that the straight-through gradient method is in fact identical to the well-known Nesterov’s dual-averaging algorithm on a quantization constrained optimization problem, we propose a more principled alternative approach, called ProxQuant , that formulates quantized network training as a regularized learning problem instead and optimizes it via the prox-gradient method. ProxQuant does back-propagation on the underlying full-precision vector and applies an efficient prox-operator in between stochastic gradient steps to encourage quantizedness. For quantizing ResNets and LSTMs, ProxQuant outperforms state-of-the-art results on binary quantization and is on par with state-of-the-art on multi-bit quantization. For binary quantization, our analysis shows both theoretically and experimentally that ProxQuant is more stable than the straight-through gradient method (i.e. BinaryConnect), challenging the indispensability of the straight-through gradient method and providing a powerful alternative.
A principled framework for model quantization using the proximal gradient method.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:499
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Modern deep neural networks have a large number of weights, which makes them difficult to deploy on computation-constrained devices such as mobile phones. One common approach to reduce the model size and computational cost is to use low-rank factorization to approximate a weight matrix. However, performing standard low-rank factorization with a small rank can hurt the model expressiveness and significantly decrease the performance. In this work, we propose to use a mixture of multiple low-rank factorizations to model a large weight matrix, and the mixture coefficients are computed dynamically depending on its input. We demonstrate the effectiveness of the proposed approach on both language modeling and image classification tasks. Experiments show that our method not only improves the computation efficiency but also maintains (sometimes outperforms) the accuracy of its full-rank counterparts. Modern neural networks usually contain millions of parameters BID4 BID8 , and they are difficult to deploy on mobile devices with limited computation resources. To solve this problem, model compression techniques have been proposed in recent years. Low-rank factorization is a popular way of reducing the matrix size. It has been extensively explored in the literature BID5 BID6 BID3 BID10 . Mathematically, a large weight matrix W ∈ R^(m×n) is factorized into two small rank-d matrices U ∈ R^(m×d), V ∈ R^(n×d) with W = U V^T. Since both U and V are dense, no sparsity support is required from specialized hardware. It naturally fits the general-purpose, off-the-shelf CPUs and GPUs. To significantly reduce the model size and computation, the rank d in the low-rank factorization needs to be small. However, a small rank can limit the expressiveness of the model BID9 and lead to worse performance. To understand the limitations, given an n-dim feature vector h, we observe that V^T h ∈ R^d is a linear projection from a high-dimensional space (n dims) to a low-dimensional space (d dims). This can lead to a significant loss of information. The conflict between the rank d and the model expressiveness prevents us from obtaining a model that is both compact and accurate. To address the dilemma, we propose to increase the expressiveness by learning an adaptive, input-dependent factorization, rather than performing a fixed factorization of a weight matrix. To do so, we use a mixture of multiple low-rank factorizations. The mixing weights are computed based on the input. This creates an adaptive linear projection from a high-dimensional space to a low-dimensional space. Compared to the conventional low-rank factorization, the proposed approach can significantly improve its performance while only introducing a small additional cost: W h ≈ Σ_k π_k(h) U_k V_k^T h = Σ_k π_k(h) U_k z_k, where z can be treated as the middle layer. Techniques like pooling can be applied to compute π to make it efficient.
A simple modification to low-rank factorization that improves performances (in both image and language tasks) while still being compact.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:5
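Editor's note: one possible reading of the adaptive mixture-of-low-rank layer described in the record above, as a PyTorch sketch. The gating network, the number of components K, and the initialization are assumptions made here; the paper additionally suggests pooling to compute π cheaply, which is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureLowRankLinear(nn.Module):
    """y = sum_k pi_k(h) * U_k (V_k^T h), with input-dependent mixture weights."""

    def __init__(self, in_dim, out_dim, rank, num_components):
        super().__init__()
        self.V = nn.Parameter(torch.randn(num_components, in_dim, rank) * 0.02)
        self.U = nn.Parameter(torch.randn(num_components, rank, out_dim) * 0.02)
        self.gate = nn.Linear(in_dim, num_components)  # computes pi from the input

    def forward(self, h):                            # h: [batch, in_dim]
        pi = F.softmax(self.gate(h), dim=-1)         # [batch, K]
        z = torch.einsum('bi,kir->bkr', h, self.V)   # middle layer, [batch, K, rank]
        y = torch.einsum('bkr,kro->bko', z, self.U)  # [batch, K, out_dim]
        return (pi.unsqueeze(-1) * y).sum(dim=1)     # mix the K projections

layer = MixtureLowRankLinear(in_dim=256, out_dim=128, rank=8, num_components=4)
out = layer(torch.randn(32, 256))
print(out.shape)  # torch.Size([32, 128])
```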
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: We present a large-scale empirical study of catastrophic forgetting (CF) in modern Deep Neural Network (DNN) models that perform sequential (or: incremental) learning. A new experimental protocol is proposed that takes into account typical constraints encountered in application scenarios. As the investigation is empirical, we evaluate CF behavior on the hitherto largest number of visual classification datasets, from each of which we construct a representative number of Sequential Learning Tasks (SLTs) in close alignment to previous works on CF. Our results clearly indicate that there is no model that avoids CF for all investigated datasets and SLTs under application conditions. We conclude with a discussion of potential solutions and workarounds to CF, notably for the EWC and IMM models. This article is in the context of sequential or incremental learning in Deep Neural Networks (DNNs). Essentially, this means that a DNN is not trained once, on a single task D, but successively on two or more sub-tasks D 1 , . . . , D n , one after another. Learning tasks of this type, which we term Sequential Learning Tasks (SLTs) (see FIG0 ), are potentially very common in real-world applications. They occur wherever DNNs need to update their capabilities on-site and over time: gesture recognition, network traffic analysis, or face and object recognition in mobile robots. In such scenarios, neural networks have long been known to suffer from a problem termed "catastrophic forgetting"(CF) (e.g., BID7 ) which denotes the abrupt and near-complete loss of knowledge from previous subtasks D 1 , . . . , D k−1 after only a few training iterations on the current sub-task D k (see FIG0 compared to FIG0 ). We focus on SLTs from the visual domain with two sub-tasks each, as DNNs show pronounced CF behavior even when only two sub-tasks are involved. The sequential learning tasks used in this study only have two sub-tasks: D1 and D2. During training (white background) and re-training (gray background), test accuracy is measured on D1 (blue, ), D2 (green, ) and D1 ∪ D2 (red, ). The blue curve allows to determine the presence of CF by simple visual inspection: if there is significant degradation w.r.t. the red curve, then CF has occurred. DISPLAYFORM0 The field of incremental learning is large, e.g., BID20 and BID8 . Recent systematic comparisons between different DNN approaches to avoid CF are performed in, e.g., BID23 or . Principal recent approaches to avoid CF include ensemble methods BID22 BID6 , dual-memory systems BID24 BID11 BID21 BID9 and regularization approaches. Whereas BID10 suggest Dropout for alleviating CF, the EWC method BID14 proposes to add a term to the energy function that protects weights that are important for the previous sub-task (s) . Importance is determined by approximating the Fisher information matrix of the DNN. A related approach is pursued by the Incremental Moment Matching technique (IMM) (see ), where weights from DNNs trained on a current and a past sub-tasks are "merged" using the Fisher information matrix. 
Other regularization-oriented approaches are proposed in BID2 ; BID25 and BID13 which focus on enforcing sparsity of neural activities by lateral interactions within a layer.Number of tested datasets In general, most methods referenced here are evaluated only on a few datasets, usually on MNIST BID16 and various derivations thereof (permutation, rotation, class separation). Some studies make limited use of CIFAR10, SVHN, the Amazon sentiment analysis problem, and non-visual problems such as data from Q-learning of Atari games. A largescale evaluation on a huge number of qualitatively different datasets is still missing 1 . Model selection and prescience Model selection (i.e., selecting DNN topology and hyperparameters) is addressed in some approaches BID10 but on the basis of a "prescient" evaluation where the best model is selected after all tasks have been processed, an approach which is replicated in BID14 . This amounts to a knowledge of future sub-tasks which is problematic in applications. Most approaches ignore model selection BID25 BID2 BID13 , and thus implicitly violate causality. Storage of data from previous sub-tasks From a technical point of view, DNNs can be retrained without storing training data from previous sub-tasks, which is done in BID10 and BID25 . For regularization approaches, however, there are regularization parameters that control the retention of previous knowledge, and thus must be chosen with care. In BID14 , this is λ, whereas two such quantities occur in : the "balancing" parameter α and the regularization parameter λ for L2-transfer. The only study where regularization parameters are obtained through cross-validation (which is avoided in other studies) is BID2 (for λ SN I and λ Ω ) but this requires to store all previous training data.This review shows that enormous progress has been made, but that there are shortcomings tied to applied scenarios which need to be addressed. We will formalize this in Sec. 1.2 and propose an evaluation strategy that takes these formal constraints into account when testing CF in DNNs. The original contributions of our work can be summarized as follows:• We propose a training and evaluation paradigm for incremental learning in DNNs that enforces typical application constraints, see Sec. 1.2. The importance of such an applicationoriented paradigm is underlined by the fact that taking application constraints into account leads to radically different conclusions about CF than those obtained by other recent studies on CF (see Sec. 1.1).• We investigate the incremental learning capacity of various DNN approaches (Dropout, LWTA, EWC and IMM) using the largest number of qualitatively different classification datasets so far described. We find that all investigated models are afflicted by catastrophic forgetting, or else in violation of application constraints and discuss potential workarounds.• We establish that the "permuted" type of SLTs (e.g., "permuted MNIST") should be used with caution when testing for CF.• We do not propose a method for avoiding CF in this article. This is because avoiding CF requires a consensus on how to actually measure this effect: our novel contribution is a proposal how to do just that. The primary conclusion from the results in Sec. 4 is that CF still represents a major problem when training DNNs. This is particularly true if DNN training happens under application constraints as outlined in Sec. 1.2. 
Some of these constraints may be relaxed depending on the concrete application: if some prior knowledge about future sub-task exists, it can be used to simplify model selection and improve results. If sufficient resources are available, a subset of previously seen data may be kept in memory and thus allow a "best" type evaluation/stopping criterion for re-training, see Alg. 1.Our evaluation approach is similar to , and we adopt some measures for CF proposed there. A difference is the setting of up to 10 sub-tasks, whereas we consider only two of them since we focus less on the degree but mainly on presence or absence of CF. Although comparable both in the number of tested models and benchmarks, BID23 uses a different evaluation methodology imposing softer constraints than ours, which is strongly focused on application scenarios. This is, to our mind, the reason why those results differ significantly from ours and underscores the need for a consensus of how to measure CF.In general application scenarios without prior knowledge or extra resources, however, an essential conclusion we draw from Sec. 4 is that model selection must form an integral part of training a DNN on SLTs. Thus, a wrong choice of hyper-parameters based on D 1 can be disastrous for the remaining sub-tasks, which is why application scenarios require DNN variants that do not have extreme dependencies on hyper-parameters such as layer number and layer sizes.Lastly, our findings indicate workarounds that would make EWC or IMM practicable in at least some application scenarios. If model selection is addressed, a small subset of D 1 may be kept in memory for both methods: to determine optimal values of α for IMM and to determine when to stop re-training for EWC. FIG7 shows that small changes to α do not dramatically impact final accuracy for IMM, and FIG4 indicates that accuracy loss as a function of re-training time is gradual in most cases for EWC. The inaccuracies introduced by using only a subset of D 1 would therefore not be very large for both algorithms.To conclude, this study shows that the consideration of applied scenarios significantly changes the procedures to determine CF behavior, as well as the conclusions as to its presence in latestgeneration DNN models. We propose and implement such a procedure, and as a consequence claim that CF is still very much of a problem for DNNs. More research, either on generic solutions, or on workarounds for specific situations, needs to be conducted before the CF problem can be said to be solved. A minor but important conclusion is that results obtained on permutation-type SLTs should be treated with caution in future studies on CF.
We check DNN models for catastrophic forgetting using a new evaluation scheme that reflects typical application conditions, with surprising results.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:50
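To make the protocol in the entry above concrete: the sketch below is my own toy illustration, not the paper's code — the synthetic data, the incremental learner, and the training lengths are arbitrary assumptions. It trains on sub-task D1, re-trains on D2 only, and flags catastrophic forgetting when accuracy on D1 collapses.

```python
# Illustrative sketch (not the paper's code): measuring catastrophic forgetting
# on a two-sub-task SLT with a simple incremental learner on synthetic data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)

def make_task(shift):
    # Hypothetical toy sub-task: two Gaussian blobs, shifted in input space.
    X0 = rng.randn(500, 20) + shift
    X1 = rng.randn(500, 20) + shift + 2.0
    return np.vstack([X0, X1]), np.array([0] * 500 + [1] * 500)

(X1_tr, y1_tr), (X2_tr, y2_tr) = make_task(0.0), make_task(4.0)   # D1, D2

clf = SGDClassifier(random_state=0)
clf.partial_fit(X1_tr, y1_tr, classes=[0, 1])       # train on D1
acc_d1_before = clf.score(X1_tr, y1_tr)

for _ in range(20):                                  # re-train on D2 only
    clf.partial_fit(X2_tr, y2_tr)
acc_d1_after = clf.score(X1_tr, y1_tr)

# CF is flagged when accuracy on D1 collapses after re-training on D2.
print(f"D1 accuracy before/after re-training on D2: {acc_d1_before:.2f} / {acc_d1_after:.2f}")
```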
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: The vulnerabilities of deep neural networks against adversarial examples have become a significant concern for deploying these models in sensitive domains. Devising a definitive defense against such attacks is proven to be challenging, and the methods relying on detecting adversarial samples are only valid when the attacker is oblivious to the detection mechanism. In this paper, we consider the adversarial detection problem under the robust optimization framework. We partition the input space into subspaces and train adversarial robust subspace detectors using asymmetrical adversarial training (AAT). The integration of the classifier and detectors presents a detection mechanism that provides a performance guarantee to the adversary it considered. We demonstrate that AAT promotes the learning of class-conditional distributions, which further gives rise to generative detection/classification approaches that are both robust and more interpretable. We provide comprehensive evaluations of the above methods, and demonstrate their competitive performances and compelling properties on adversarial detection and robust classification problems. Deep neural networks have become the staple of modern machine learning pipelines, achieving stateof-the-art performance on extremely difficult tasks in various applications such as computer vision (He et al., 2016) , speech recognition (Amodei et al., 2016) , machine translation (Vaswani et al., 2017) , robotics (Levine et al., 2016) , and biomedical image analysis (Shen et al., 2017) . Despite their outstanding performance, these networks are shown to be vulnerable against various types of adversarial attacks, including evasion attacks (aka, inference or perturbation attacks) (Szegedy et al., 2013; Goodfellow et al., 2014b; Carlini & Wagner, 2017b; Su et al., 2019) and poisoning attacks (Liu et al., 2017; Shafahi et al., 2018) . These vulnerabilities in deep neural networks hinder their deployment in sensitive domains including, but not limited to, health care, finances, autonomous driving, and defense-related applications and have become a major security concern. Due to the mentioned vulnerabilities, there has been a recent surge toward designing defense mechanisms against adversarial attacks (Gu & Rigazio, 2014; Jin et al., 2015; Papernot et al., 2016b; Bastani et al., 2016; Madry et al., 2017; Sinha et al., 2018) , which has in turn motivated the design of stronger attacks that defeat the proposed defenses (Goodfellow et al., 2014b; Kurakin et al., 2016b; a; Carlini & Wagner, 2017b; Xiao et al., 2018; Athalye et al., 2018; Chen et al., 2018; He et al., 2018) . Besides, the proposed defenses have been shown to be limited and often not effective and easy to overcome (Athalye et al., 2018) . Alternatively, a large body of work has focused on detection of adversarial examples (Bhagoji et al., 2017; Feinman et al., 2017; Gong et al., 2017; Grosse et al., 2017; Metzen et al., 2017; Hendrycks & Gimpel, 2017; Li & Li, 2017; Xu et al., 2017; Pang et al., 2018; Roth et al., 2019; Bahat et al., 2019; Ma et al., 2018; Zheng & Hong, 2018; Tian et al., 2018) . While training robust classifiers focuses on maintaining performance in presence of adversarial examples, adversarial detection only cares for detecting these examples. 
The majority of the current detection mechanisms focus on non-adaptive threats, for which the attacks are not specifically tuned/tailored to bypass the detection mechanism, and the attacker is oblivious to the detection mechanism. In fact, Carlini & Wagner (2017a) and Athalye et al. (2018) showed that the detection methods presented in (Bhagoji et al., 2017; Feinman et al., 2017; Gong et al., 2017; Grosse et al., 2017; Metzen et al., 2017; Hendrycks & Gimpel, 2017; Li & Li, 2017; Ma et al., 2018) , are significantly less effective than their claimed performances under adaptive attacks. The current solutions are mostly heuristic approaches that cannot provide performance guarantees to the adversary they considered. In this paper, we are interested in detection mechanisms for adversarial examples that can withstand adaptive attacks. Unlike previous approaches that assume adversarial and natural samples coming from different distributions, thus rely on using a single classifier to distinguish between them, we instead partition the input space into subspaces based on the classification system's output and perform adversarial/natural sample classification in these subspaces. Importantly, the mentioned partitions allow us to drop the adversarial constrain and employ a novel asymmetrical adversarial training (AAT) objective to train robust binary classifiers in the subspaces. Figure 1 demonstrates our idea of space partitioning and robust detector training. Our qualitative results show that AAT supports detectors to learn class-conditional distributions, which further motivates generative detection/classification solutions that are both robust and interpretable. Our specific contributions are: • We develop adversarial example detection techniques that provide performance guarantees to norm constrained adversaries. Empirically, our best models improve previous state-ofthe-art mean L 2 distortion from 3.68 to 4.47 on the MNIST dataset, and from 1.1 to 1.5 on the CIFAR10 dataset. • We study powerful and versatile generative classification models derived from our detection framework and demonstrate their competitive performances over discriminative robust classifiers. While defense mechanisms based on ordinary adversarial training are vulnerable to unrecognizable inputs (e.g., rubbish examples), inputs that cause confident predictions of our models have human-understandable semantic meanings. • We demonstrate that AAT not only induces robustness as ordinary adversarial training methods do, but also promotes the learning of class-conditional distributions. Intuitively, the learning mechanism is similar to that of GANs, but the objective doesn't learn a fixed generator. On 1D and 2D benchmarking datasets we show this flexibility allows us to precisely control the data generation process such that the detector could be pushed to a good approximation of the underlying density function. (In case of GANs at the global optimum the discriminator converges to a degenerated uniform solution.) Our image generation results on CIFAR10 and ImageNet rival that of state-of-the-art GANs.
A new generative modeling technique based on asymmetrical adversarial training, and its applications to adversarial example detection and robust classification
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:500
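The asymmetrical adversarial training described in the entry above hinges on an inner maximization that perturbs inputs toward a detector's positive output. The sketch below is one plausible minimal reading of that idea, not the authors' objective: `pgd_attack`, the network sizes, and the way perturbed negatives are mixed into the loss are all my assumptions.

```python
# Illustrative sketch (not the authors' code): a standard L-inf PGD inner step used
# to adversarially perturb one side of a binary detector's training data. How the
# perturbed negatives are combined with natural positives (the "asymmetry") follows
# my reading of the abstract and may differ from the paper's exact objective.
import torch
import torch.nn.functional as F

def pgd_attack(detector, x, target, eps=0.3, alpha=0.03, steps=10):
    """Perturb x within an L-inf ball so the detector's logit moves toward `target`."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.binary_cross_entropy_with_logits(detector(x_adv).squeeze(-1), target)
        grad, = torch.autograd.grad(loss, x_adv)
        # Descending the BCE toward `target` pushes the detector's logit toward it.
        x_adv = (x_adv - alpha * grad.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

# Hypothetical training step for a class-k detector:
# pos = natural inputs assigned to class k (label 1);
# neg = inputs from other classes, adversarially pushed toward the detector's
#       positive output (label 0) -- only this side is perturbed.
detector = torch.nn.Sequential(torch.nn.Linear(784, 128), torch.nn.ReLU(), torch.nn.Linear(128, 1))
pos, neg = torch.rand(8, 784), torch.rand(8, 784)
neg_adv = pgd_attack(detector, neg, target=torch.ones(8))
logits = detector(torch.cat([pos, neg_adv])).squeeze(-1)
labels = torch.cat([torch.ones(8), torch.zeros(8)])
loss = F.binary_cross_entropy_with_logits(logits, labels)
loss.backward()
```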
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Exploration is a key component of successful reinforcement learning, but optimal approaches are computationally intractable, so researchers have focused on hand-designing mechanisms based on exploration bonuses and intrinsic reward, some inspired by curious behavior in natural systems. In this work, we propose a strategy for encoding curiosity algorithms as programs in a domain-specific language and searching, during a meta-learning phase, for algorithms that enable RL agents to perform well in new domains. Our rich language of programs, which can combine neural networks with other building blocks including nearest-neighbor modules and can choose its own loss functions, enables the expression of highly generalizable programs that perform well in domains as disparate as grid navigation with image input, acrobot, lunar lander, ant and hopper. To make this approach feasible, we develop several pruning techniques, including learning to predict a program's success based on its syntactic properties. We demonstrate the effectiveness of the approach empirically, finding curiosity strategies that are similar to those in published literature, as well as novel strategies that are competitive with them and generalize well. Figure 1: Our RL agent is augmented with a curiosity module, obtained by meta-learning over a complex space of programs, which computes a pseudo-reward r at every time step. When an agent is learning to behave online, via reinforcement learning (RL), it is critical that it both explores its domain and exploits its rewards effectively. In very simple problems, it is possible to solve the problem optimally, using techniques of Bayesian decision theory (Ghavamzadeh et al., 2015) . However, these techniques do not scale at all well and are not effectively applicable to the problems addressable by modern deep RL, with large state and action spaces and sparse rewards. This difficulty has left researchers the task of designing good exploration strategies for RL systems in complex environments. One way to think of this problem is in terms of curiosity or intrisic motivation: constructing reward signals that augment or even replace the extrinsic reward from the domain, which induce the RL agent to explore their domain in a way that results in effective longer-term learning and behavior (Pathak et al., 2017; Burda et al., 2018; Oudeyer, 2018) . The primary difficulty with this approach is that researchers are hand-designing these strategies: it is difficult for humans to systematically consider the space of strategies or to tailor strategies for the distribution of environments an agent might be expected to face. We take inspiration from the curious behavior observed in young humans and other animals and hypothesize that curiosity is a mechanism found by evolution that encourages meaningful exploration early in agent's life in order to expose it to experiences that enable it to learn to obtain high rewards over the course of its lifetime. 
We propose to formulate the problem of generating curious behavior as one of meta-learning: an outer loop, operating at "evolutionary" scale will search over a space of algorithms for generating curious behavior by dynamically adapting the agent's reward signal, and the inner loop will perform standard reinforcement learning using the adapted reward signal. This process is illustrated in figure 1; note that the aggregate agent, outlined in gray, has the standard interface of an RL agent. The inner RL algorithm is continually adapting to its input stream of states and rewards, attempting to learn a policy that optimizes the discounted sum of proxy rewards Σ_{k≥0} γ^k r_{t+k}. The outer "evolutionary" search is attempting to find a program for the curiosity module, so to optimize the agent's lifetime return Σ_{t=0}^{T} r_t, or another global objective like the mean performance on the last few trials. Although it is, in principle, possible to discover a complete, integrated algorithm for the entire curious learning agent in the gray box, that is a much more complex search problem that is currently computationally infeasible. We are relying on the assumption that the foundational methods for reinforcement learning, including those based on temporal differencing and policy gradient, are fundamentally sound and can serve as the behavior-learning basis for our agents. It is important to note, though, that the internal RL algorithm in our architecture must be able to tolerate a nonstationary reward signal, which may necessitate minor algorithmic changes or, at least, different hyperparameter values. In this meta-learning setting, our objective is to find a curiosity module that works well given a distribution of environments from which we can sample at meta-learning time. If the environment distribution is relatively low-variance (the tasks are all quite similar) then it might suffice to search over a relatively simple space of curiosity strategies (most trivially, the ε in an ε-greedy exploration strategy). Meta-RL has been widely explored recently, in some cases with a focus on reducing the amount of experience needed by initializing the RL algorithm well (Finn et al., 2017; Clavera et al., 2019) and, in others, for efficient exploration (Duan et al., 2016; Wang et al., 2017) . The environment distributions in these cases have still been relatively low-diversity, mostly limited to variations of the same task, such as exploring different mazes or navigating terrains of different slopes. We would like to discover curiosity mechanisms that can generalize across a much broader distribution of environments, even those with different state and action spaces: from image-based games, to joint-based robotic control tasks. To do that, we perform meta-learning in a rich, combinatorial, open-ended space of programs. This paper makes three novel contributions. We focus on a regime of meta-reinforcement-learning in which the possible environments the agent might face are dramatically disparate and in which the agent's lifetime is very long. This is a substantially different setting than has been addressed in previous work on meta-RL and it requires substantially different techniques for representation and search. We represent meta-learned curiosity strategies in a rich, combinatorial space of programs rather than in a fixed-dimensional numeric parameter space. 
The programs are represented in a domain-specific language (DSL) which includes sophisticated building blocks including neural networks complete with gradient-descent mechanisms, learned objective functions, ensembles, buffers, and other regressors. This language is rich enough to represent many previously reported hand-designed exploration algorithms. We believe that by performing meta-RL in such a rich space of mechanisms, we will be able to discover highly general, fundamental curiosity-based exploration methods. This generality means that a relatively computationally expensive meta-learning process can be amortized over the lifetimes of many agents in a wide variety of environments. We make the search over programs feasible with relatively modest amounts of computation. It is a daunting search problem to find a good solution in a combinatorial space of programs, where evaluating a single potential solution requires running an RL algorithm for up to millions of time steps. We address this problem in multiple ways. By including environments of substantially different difficulty and character, we can evaluate candidate programs first on relatively simple and short-horizon domains: if they don't perform well in those domains, they are pruned early, which saves a significant amount of computation time. In addition, we predict the performance of an algorithm from its structure and operations, thus trying the most promising algorithms early in our search. Finally, we also monitor the learning curve of agents and stop unpromising programs before they reach all T environment steps. We demonstrate the effectiveness of the approach empirically, finding curiosity strategies that are similar to those in published literature, as well as novel strategies that are competitive with them and generalize well. In this work we show that programs are a powerful, succinct, representation for algorithms for generating curious exploration, and these programs can be meta-learned efficiently via active search. Results from this work are two-fold. First, by construction, algorithms resulting from this search will have broad generalization and will thus be a useful default for RL settings, where reliability is key. Second, the algorithm search code will be open-sourced to facilitate further research on exploration algorithms based on new ideas or building blocks, which can be added to the search. In addition, we note that the approach of meta-learning programs instead of network weights may have further applications beyond finding curiosity algorithms, such as meta-learning optimization algorithms or even meta-learning meta-learning algorithms.
Meta-learning curiosity algorithms by searching through a rich space of programs yields novel mechanisms that generalize across very different reinforcement-learning domains.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:501
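The outer-loop/inner-loop structure described in the entry above can be illustrated with a deliberately tiny stand-in: a hand-written "program space" of three curiosity bonuses, an inner tabular Q-learner on a toy chain environment, and an outer search that ranks programs by lifetime extrinsic return. Everything here (environment, hyperparameters, candidate programs) is an assumption for illustration; the paper's DSL, neural building blocks, and pruning machinery are far richer.

```python
# Illustrative sketch (assumptions mine, not the paper's DSL or search code):
# an outer search over simple "curiosity programs" that shape the reward of an
# inner tabular Q-learning agent on a toy chain environment.
import numpy as np

N_STATES, N_ACTIONS, HORIZON = 10, 2, 50

def run_lifetime(curiosity, episodes=30, seed=0):
    rng = np.random.RandomState(seed)
    Q = np.zeros((N_STATES, N_ACTIONS))
    counts = np.zeros(N_STATES)
    lifetime_return = 0.0
    for _ in range(episodes):
        s = 0
        for _ in range(HORIZON):
            a = int(np.argmax(Q[s])) if rng.rand() > 0.1 else rng.randint(N_ACTIONS)
            s_next = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
            extrinsic = 1.0 if s_next == N_STATES - 1 else 0.0
            counts[s_next] += 1
            r = extrinsic + curiosity(s_next, counts)     # proxy reward seen by the agent
            Q[s, a] += 0.5 * (r + 0.95 * Q[s_next].max() - Q[s, a])
            lifetime_return += extrinsic                  # outer objective: true return
            s = s_next
    return lifetime_return

# A tiny "program space": each candidate maps (state, visit counts) to a bonus.
candidates = {
    "no_bonus":    lambda s, c: 0.0,
    "count_based": lambda s, c: 1.0 / np.sqrt(c[s]),
    "constant":    lambda s, c: 0.1,
}
scores = {name: run_lifetime(prog) for name, prog in candidates.items()}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```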
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Many machine learning algorithms represent input data with vector embeddings or discrete codes. When inputs exhibit compositional structure (e.g. objects built from parts or procedures from subroutines), it is natural to ask whether this compositional structure is reflected in the inputs' learned representations. While the assessment of compositionality in languages has received significant attention in linguistics and adjacent fields, the machine learning literature lacks general-purpose tools for producing graded measurements of compositional structure in more general (e.g. vector-valued) representation spaces. We describe a procedure for evaluating compositionality by measuring how well the true representation-producing model can be approximated by a model that explicitly composes a collection of inferred representational primitives. We use the procedure to provide formal and empirical characterizations of compositional structure in a variety of settings, exploring the relationship between compositionality and learning dynamics, human judgments, representational similarity, and generalization. We have introduced a new evaluation method called TRE for generating graded judgments about compositional structure in representation learning problems where the structure of the observations is understood. TRE infers a set of primitive meaning representations that, when composed, approximate the observed representations, then measures the quality of this approximation. We have applied TRE-based analysis to four different problems in representation learning, relating compositionality to learning dynamics, linguistic compositionality, similarity and generalization. Many interesting questions regarding compositionality and representation learning remain open. The most immediate is how to generalize TRE to the setting where oracle derivations are not available; in this case Equation 2 must be solved jointly with an unsupervised grammar induction problem BID25 . Beyond this, it is our hope that this line of research opens up two different kinds of new work: better understanding of existing machine learning models, by providing a new set of tools for understanding their representational capacity; and better understanding of problems, by better understanding the kinds of data distributions and loss functions that give rise to compositional or non-compositional representations of observations.
This paper proposes a simple procedure for evaluating compositional structure in learned representations, and uses the procedure to explore the role of compositionality in four learning problems.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:502
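The TRE procedure described in the entry above — infer primitive representations, compose them explicitly, and score how well the composition approximates the observed representations — can be sketched in a few lines. The additive composition, dimensions, and optimizer settings below are my illustrative assumptions, not the paper's exact setup.

```python
# Illustrative sketch (my construction, not the paper's code): a TRE-style score fits
# primitive embeddings under an explicit composition (here: addition) to the observed
# representations and reports the residual approximation error.
import torch

torch.manual_seed(0)
n_primitives, dim = 5, 16

# Observed representations of composite inputs with known derivations (i, j).
derivations = [(i, j) for i in range(n_primitives) for j in range(n_primitives)]
true_prims = torch.randn(n_primitives, dim)
observed = torch.stack([true_prims[i] + true_prims[j] + 0.01 * torch.randn(dim)
                        for i, j in derivations])

prims = torch.nn.Parameter(torch.zeros(n_primitives, dim))
opt = torch.optim.Adam([prims], lr=0.1)
for _ in range(500):
    opt.zero_grad()
    approx = torch.stack([prims[i] + prims[j] for i, j in derivations])
    err = (approx - observed).pow(2).sum(dim=1).mean()
    err.backward()
    opt.step()

# A lower residual means the representations are closer to compositional
# under the chosen (additive) composition.
print(f"TRE-style residual: {err.item():.4f}")
```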
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: In this paper, we propose an end-to-end deep learning model, called E2Efold, for RNA secondary structure prediction which can effectively take into account the inherent constraints in the problem. The key idea of E2Efold is to directly predict the RNA base-pairing matrix, and use an unrolled constrained programming algorithm as a building block in the architecture to enforce constraints. With comprehensive experiments on benchmark datasets, we demonstrate the superior performance of E2Efold: it predicts significantly better structures compared to previous SOTA (29.7% improvement in some cases in F1 scores and even larger improvement for pseudoknotted structures) and runs as efficient as the fastest algorithms in terms of inference time. Ribonucleic acid (RNA) is a molecule playing essential roles in numerous cellular processes and regulating expression of genes (Crick, 1970) . It consists of an ordered sequence of nucleotides, with each nucleotide containing one of four bases: Adenine (A), Guanine (G), Cytosine (C) and Uracile (U). This sequence of bases can be represented as x := (x_1, . . . , x_L) where x_i ∈ {A, G, C, U}, which is known as the primary structure of RNA. The bases can bond with one another to form a set of base-pairs, which defines the secondary structure. A secondary structure can be represented by a binary matrix A* where A*_{ij} = 1 if the i, j-th bases are paired (Fig 1) . Discovering the secondary structure of RNA is important for understanding functions of RNA since the structure essentially affects the interaction and reaction between RNA and other cellular components. Although secondary structure can be determined by experimental assays (e.g. X-ray diffraction), it is slow, expensive and technically challenging. Therefore, computational prediction of RNA secondary structure becomes an important task in RNA research and is useful in many applications such as drug design (Iorns et al., 2007) . [Fig 2 panels: (i) Nested Structure, (ii) Pseudo-knot.] Research on computational prediction of RNA secondary structure from knowledge of primary structure has been carried out for decades. Most existing methods assume the secondary structure is a result of energy minimization, i.e., A* = argmin_A E_x(A). The energy function is either estimated by physics-based thermodynamic experiments (Lorenz et al., 2011; Markham & Zuker, 2008) or learned from data (Do et al., 2006) . These approaches are faced with a common problem that the search space of all valid secondary structures is exponentially-large with respect to the length L of the sequence. To make the minimization tractable, it is often assumed the base-pairing has a nested structure (Fig 2 left) , and the energy function factorizes pairwisely. With this assumption, dynamic programming (DP) based algorithms can iteratively find the optimal structure for subsequences and thus consider an enormous number of structures in time O(L^3). Although DP-based algorithms have dominated RNA structure prediction, it is notable that they restrict the search space to nested structures, which excludes some valid yet biologically important RNA secondary structures that contain 'pseudoknots', i.e., elements with at least two non-nested base-pairs (Fig 2 right) . 
Pseudoknots make up roughly 1.4% of base-pairs (Mathews & Turner, 2006) , and are overrepresented in functionally important regions (Hajdin et al., 2013; Staple & Butcher, 2005) . Furthermore, pseudoknots are present in around 40% of the RNAs. They also assist folding into 3D structures (Fechter et al., 2001 ) and thus should not be ignored. To predict RNA structures with pseudoknots, energy-based methods need to run more computationally intensive algorithms to decode the structures. In summary, in the presence of more complex structured output (i.e., pseudoknots), it is challenging for energy-based approaches to simultaneously take into account the complex constraints while being efficient. In this paper, we adopt a different viewpoint by assuming that the secondary structure is the output of a feed-forward function, i.e., A * = F θ (x), and propose to learn θ from data in an end-to-end fashion. It avoids the second minimization step needed in energy function based approach, and does not require the output structure to be nested. Furthermore, the feed-forward model can be fitted by directly optimizing the loss that one is interested in. Despite the above advantages of using a feed-forward model, the architecture design is challenging. To be more concrete, in the RNA case, F θ is difficult to design for the following reasons: (i) RNA secondary structure needs to obey certain hard constraints (see details in Section 3), which means certain kinds of pairings cannot occur at all (Steeg, 1993) . Ideally, the output of F θ needs to satisfy these constraints. (ii) The number of RNA data points is limited, so we cannot expect that a naive fully connected network can learn the predictive information and constraints directly from data. Thus, inductive biases need to be encoded into the network architecture. (iii) One may take a two-step approach, where a post-processing step can be carried out to enforce the constraints when F θ predicts an invalid structure. However, in this design, the deep network trained in the first stage is unaware of the post-processing stage, making less effective use of the potential prior knowledge encoded in the constraints. In this paper, we present an end-to-end deep learning solution which integrates the two stages. The first part of the architecture is a transformer-based deep model called Deep Score Network which represents sequence information useful for structure prediction. The second part is a multilayer network called Post-Processing Network which gradually enforces the constraints and restrict the output space. It is designed based on an unrolled algorithm for solving a constrained optimization. These two networks are coupled together and learned jointly in an end-to-end fashion. Therefore, we call our model E2Efold. By using an unrolled algorithm as the inductive bias to design Post-Processing Network, the output space of E2Efold is constrained (see Fig 3 for an illustration), which makes it easier to learn a good model in the case of limited data and also reduces the overfitting issue. Yet, the constraints encoded in E2Efold are flexible enough such that pseudoknots are included in the output space. In summary, E2Efold strikes a nice balance between model biases for learning and expressiveness for valid RNA structures. 
We conduct extensive experiments to compare E2Efold with state-of-the-art (SOTA) methods on several RNA benchmark datasets, showing superior performance of E2Efold including: • being able to predict valid RNA secondary structures including pseudoknots; • running as efficient as the fastest algorithm in terms of inference time; • producing structures that are visually close to the true structure; • better than previous SOTA in terms of F1 score, precision and recall. Although in this paper we focus on RNA secondary structure prediction, which presents an important and concrete problem where E2Efold leads to significant improvements, our method is generic and can be applied to other problems where constraints need to be enforced or prior knowledge is provided. We imagine that our design idea of learning unrolled algorithm to enforce constraints can also be transferred to problems such as protein folding and natural language understanding problems (e.g., building correspondence structure between different parts in a document). We propose a novel DL model, E2Efold, for RNA secondary structure prediction, which incorporates hard constraints in its architecture design. Comprehensive experiments are conducted to show the superior performance of E2Efold, no matter on quantitative criteria, running time, or visualization. Further studies need to be conducted to deal with the RNA types with less samples. Finally, we believe the idea of unrolling constrained programming and pushing gradient through post-processing can be generic and useful for other constrained structured prediction problems. Here we explain the difference between our approach and other works on unrolling optimization problems. First, our view of incorporating constraints to reduce output space and to reduce sample complexity is novel. Previous works (Hershey et al., 2014; Belanger et al., 2017; Ingraham et al., 2018) did not discuss these aspects. The most related work which also integrates constraints is OptNet (Amos & Kolter, 2017) , but its very expensive and can not scale to the RNA problem. Therefore, our proposed approach is a simple and effective one. Second, compared to (Chen et al., 2018; Shrivastava et al., 2019) , our approach has a different purpose of using the algorithm. Their goal is to learn a better algorithm, so they commonly make their architecture more flexible than the original algorithm for the room of improvement. However, we aim at enforcing constraints. To ensure that constraints are nicely incorporated, we keep the original structure of the algorithm and only make the hyperparameters learnable. Finally, although all works consider end-to-end training, none of them can directly optimize the F1 score. We proposed a differentiable loss function to mimic the F1 score/precision/recall, which is effective and also very useful when negative samples are much fewer than positive samples (or the inverse).
A DL model for RNA secondary structure prediction, which uses an unrolled algorithm in the architecture to enforce constraints.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:503
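A minimal illustration of the constraint-enforcement idea in the entry above: start from a predicted score matrix, make it symmetric, mask out pairings that violate hard RNA constraints, and assign at most one partner per base. The greedy assignment below is a simplification standing in for the paper's unrolled optimization; the toy sequence, threshold, and minimum loop length are my assumptions.

```python
# Illustrative sketch (simplified; not the authors' unrolled algorithm): enforce basic
# RNA pairing constraints on a predicted score matrix -- symmetry, canonical pairs
# only, no sharp hairpins, and (greedily) at most one partner per base.
import numpy as np

def constraint_mask(seq, min_loop=4):
    allowed = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    L = len(seq)
    M = np.zeros((L, L))
    for i in range(L):
        for j in range(L):
            if abs(i - j) >= min_loop and (seq[i], seq[j]) in allowed:
                M[i, j] = 1.0
    return M

def post_process(scores, seq, threshold=0.5):
    S = 0.5 * (scores + scores.T) * constraint_mask(seq)   # symmetric + hard constraints
    A = np.zeros_like(S)
    order = np.dstack(np.unravel_index(np.argsort(-S, axis=None), S.shape))[0]
    paired = set()
    for i, j in order:                                      # greedy: one partner per base
        if S[i, j] < threshold:
            break
        if i != j and i not in paired and j not in paired:
            A[i, j] = A[j, i] = 1.0
            paired.update((i, j))
    return A

seq = "GGGAAAUCCC"                                          # hypothetical toy sequence
scores = np.random.RandomState(0).rand(len(seq), len(seq))
print(post_process(scores, seq))
```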
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Learning in recurrent neural networks (RNNs) is most often implemented by gradient descent using backpropagation through time (BPTT), but BPTT does not model accurately how the brain learns. Instead, many experimental results on synaptic plasticity can be summarized as three-factor learning rules involving eligibility traces of the local neural activity and a third factor. We present here eligibility propagation (e-prop), a new factorization of the loss gradients in RNNs that fits the framework of three factor learning rules when derived for biophysical spiking neuron models. When tested on the TIMIT speech recognition benchmark, it is competitive with BPTT both for training artificial LSTM networks and spiking RNNs. Further analysis suggests that the diversity of learning signals and the consideration of slow internal neural dynamics are decisive to the learning efficiency of e-prop. The brain seems to be able to solve tasks such as counting, memorizing and reasoning which require efficient temporal processing capabilities. It is natural to model this with recurrent neural networks (RNNs), but their canonical training algorithm called backpropagation through time (BPTT) does not appear to be compatible with learning mechanisms observed in the brain. There, long-term changes of synaptic efficacies depend on the local neural activity. It was found that the precise timing of the electric pulses (i.e. spikes) emitted by the pre-and post-synaptic neurons matters, and these spike-timing dependent plasticity (STDP) changes can be conditioned or modulated by a third factor that is often thought to be a neuromodulator (see [1, 2] for reviews). Looking closely at the relative timing, the third factor affects the plasticity even if it arrives with a delay. This suggests the existence of local mechanisms that retain traces of the recent neural activity during this temporal gap and they are often referred to as eligibility traces [2] . To verify whether three factor learning rules can implement functional learning algorithms, researchers have simulated how interesting learnt behaviours can emerge from them [1, 3, 4] . The third factor is often considered as a global signal emitted when a reward is received or predicted, and this alone can solve learning tasks of moderate difficulty, even in RNNs [4] . Yet in feed-forward networks, it was already shown that plausible learning algorithms inspired by backpropagation and resulting in neuron-specific learning signals largely outperform the rules based on a global third factor [5, 6, 7] . This suggests that backpropagation provides important details that are not captured by all three factor learning rules. Here we aim at a learning algorithm for RNNs that is general and efficient like BPTT but remains plausible. A major plausibility issue of BPTT is that it requires to propagate errors backwards in time or to store the entire state space trajectory raising questions on how and where this is performed in the brain [8] . We suggest instead a rigorous re-analysis of gradient descent in RNNs that leads to a gradient computation relying on a diversity of learning signals (i.e. neuron-specific third factors) and a few eligibility traces per synapse. We refer to this algorithm as eligibility propagation (e-prop). 
When derived with spiking neurons, e-prop fits under the three factor learning rule framework and is qualitatively compatible with experimental data [2] . To test its learning efficiency, we applied e-prop to artificial Long Short-Term Memory (LSTM) networks [9] , and Long short-term memory Spiking Neural Networks (LSNNs) [10] (spiking RNNs combining short and long realistic time constants). We found that (1) it is competitive with BPTT on the TIMIT speech recognition benchmark, and (2) it can solve nontrivial temporal credit assignment problems with long delays. We are not aware of any comparable achievements with previous three factor learning rules. Real-time recurrent learning (RTRL) [11] computes the same loss gradients as BPTT in an online fashion but requires many more operations. Eventhough the method is online, one may wonder where can it be implemented in the brain if it requires a machinery bigger than the network itself. Recent works [12, 13, 6] have suggested that eligibility traces can be used to approximate RTRL. This was shown to be feasible if the neurons do not have recurrent connections [6] , if the recurrent connections are ignored during learning [12] or if the network dynamics are approximated with a trained estimator [13] . However these algorithms were derived for specific neuron models without long-short term memory, making it harder to tackle challenging RNN benchmark tasks (no machine learning benchmarks were considered in [6, 12] ). Other mathematical methods [14, 15] , have suggested approximations to RTRL which are compatible with complex neuron models. Yet those methods lead to gradient estimates with a high variance [15] or requiring heavier computations when the network becomes large [14, 11] . This issue was solved in e-prop, as the computational and memory costs are the same (up to constant factor) as for running any computation with the RNN. This reduction of the computational load arises from an essential difference between e-prop and RTRL: e-prop computes the same loss gradients but only propagates forward in time the terms that can be computed locally. This provides a new interpretation of eligibility traces that is mathematically grounded and generalizes to a broad class of RNNs. Our empirical results show that such traces are sufficient to approach the performance of BPTT despite a simplification of the non-local learning signal, but we believe that more complex strategies for computing a learning signals can be combined with e-prop to yield even more powerful online algorithms. A separate paper presents one such example to enable one-shot learning in recurrent spiking neural networks [8] .
We present eligibility propagation, an alternative to BPTT that is compatible with experimental data on synaptic plasticity and competes with BPTT on machine learning benchmarks.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:504
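The factorization described in the entry above — a forward-propagated eligibility trace per synapse combined with a per-neuron learning signal — is easiest to see in a toy case. The sketch below uses a leaky linear unit rather than the paper's spiking model; in this deliberately simplified setting (no recurrent hidden-to-hidden weights) the trace-based gradient is exact, whereas full e-prop approximates the learning signal.

```python
# Illustrative sketch (a simplified case, not the paper's spiking model): e-prop-style
# forward gradient accumulation for a leaky unit h_t = a*h_{t-1} + W x_t with readout
# y_t = V h_t and squared error against targets.
import numpy as np

rng = np.random.RandomState(0)
T, n_in, n_h, n_out = 50, 3, 4, 2
alpha = 0.8
W = rng.randn(n_h, n_in) * 0.1          # input weights (what we differentiate)
V = rng.randn(n_out, n_h) * 0.1         # fixed readout for this sketch
x = rng.randn(T, n_in)
y_target = rng.randn(T, n_out)

h = np.zeros(n_h)
trace = np.zeros(n_in)                   # eligibility trace, shared across rows of W
grad_W = np.zeros_like(W)
for t in range(T):
    h = alpha * h + W @ x[t]
    trace = alpha * trace + x[t]         # d h_t[i] / d W[i, j], propagated forward in time
    err = V @ h - y_target[t]
    learning_signal = V.T @ err          # dE_t / dh_t: the per-neuron "third factor"
    grad_W += np.outer(learning_signal, trace)

W -= 0.01 * grad_W                       # one online-computable gradient step
```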
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Recurrent neural networks (RNNs) are an effective representation of control policies for a wide range of reinforcement and imitation learning problems. RNN policies, however, are particularly difficult to explain, understand, and analyze due to their use of continuous-valued memory vectors and observation features. In this paper, we introduce a new technique, Quantized Bottleneck Insertion, to learn finite representations of these vectors and features. The result is a quantized representation of the RNN that can be analyzed to improve our understanding of memory use and general behavior. We present results of this approach on synthetic environments and six Atari games. The resulting finite representations are surprisingly small in some cases, using as few as 3 discrete memory states and 10 observations for a perfect Pong policy. We also show that these finite policy representations lead to improved interpretability. Deep reinforcement learning (RL) and imitation learning (IL) have demonstrated impressive performance across a wide range of applications. Unfortunately, the learned policies are difficult to understand and explain, which limits the degree that they can be trusted and used in high-stakes applications. Such explanations are particularly problematic for policies represented as recurrent neural networks (RNNs) BID16 BID14 , which are increasingly used to achieve state-of-the-art performance BID15 BID21 . This is because RNN policies use internal memory to encode features of the observation history, which are critical to their decision making, but extremely difficult to interpret. In this paper, we take a step towards comprehending and explaining RNN policies by learning more compact memory representations.Explaining RNN memory is challenging due to the typical use of high-dimensional continuous memory vectors that are updated through complex gating networks (e.g. LSTMs, GRUs BID10 BID5 ). We hypothesize that, in many cases, the continuous memory is capturing and updating one or more discrete concepts. If exposed, such concepts could significantly aid explainability. This motivates attempting to quantize the memory and observation representation used by an RNN to more directly capture those concepts. In this case, understanding the memory use can be approached by manipulating and analyzing the quantized system. Of course, not all RNN policies will have compact quantized representations, but many powerful forms of memory usage can be captured in this way.Our main contribution is to introduce an approach for transforming an RNN policy with continuous memory and continuous observations to a finite-state representation known as a Moore Machine. To accomplish this we introduce the idea of Quantized Bottleneck Network (QBN) insertion. QBNs are simply auto-encoders, where the latent representation is quantized. Given a trained RNN, we train QBNs to encode the memory states and observation vectors that are encountered during the RNN operation. We then insert the QBNs into the trained RNN policy in place of the "wires" that propagated the memory and observation vectors. The combination of the RNN and QBN results in a policy represented as a Moore Machine Network (MMN) with quantized memory and observations that is nearly equivalent to the original RNN. 
The MMN can be used directly or fine-tuned to improve on inaccuracies introduced by QBN insertion.While training quantized networks is often considered to be quite challenging, we show that a simple approach works well in the case of QBNs. In particular, we demonstrate that "straight through" gradient estimators as in BID1 BID6 are quite effective.We present experiments in synthetic domains designed to exercise different types of memory use as well as benchmark grammar learning problems. Our approach is able to accurately extract the ground-truth MMNs, providing insight into the RNN memory use. We also did experiments on 6 Atari games using RNNs that achieve state-of-the-art performance. We show that in most cases it is possible to extract near-equivalent MMNs and that the MMNs can be surprisingly small. Further, the extracted MMNs give insights into the memory usage that are not obvious based on just observing the RNN policy in action. For example, we identify games where the RNNs do not use memory in a meaningful way, indicating the RNN is implementing purely reactive control. In contrast, in other games, the RNN does not use observations in a meaningful way, which indicates that the RNN is implementing an open-loop controller.
Extracting a finite state machine from a recurrent neural network via quantization for the purpose of interpretability with experiments on Atari.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:505
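A minimal version of the Quantized Bottleneck Network idea from the entry above: an autoencoder whose latent code is quantized with a straight-through gradient estimator, fitted to recorded memory vectors so the continuous "wire" can later be replaced by discrete codes. Layer sizes, the ternary quantizer, and the training loop are my assumptions, not the paper's architecture.

```python
# Illustrative sketch (my own minimal version, not the paper's code): a QBN whose
# bottleneck is quantized to {-1, 0, +1} with a straight-through estimator, trained
# to reconstruct continuous RNN memory vectors.
import torch
import torch.nn as nn

class TernaryStraightThrough(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return torch.round(torch.clamp(x, -1, 1))      # quantize to {-1, 0, 1}
    @staticmethod
    def backward(ctx, grad_output):
        return grad_output                              # pass gradients straight through

class QBN(nn.Module):
    def __init__(self, dim, bottleneck):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, bottleneck), nn.Tanh())
        self.dec = nn.Sequential(nn.Linear(bottleneck, 64), nn.Tanh(), nn.Linear(64, dim))
    def forward(self, h):
        code = TernaryStraightThrough.apply(self.enc(h))
        return self.dec(code), code

# Hypothetical usage: fit the QBN on memory vectors recorded from a trained RNN policy,
# then splice encoder+decoder into the policy in place of the continuous memory "wire".
qbn = QBN(dim=128, bottleneck=8)
opt = torch.optim.Adam(qbn.parameters(), lr=1e-3)
recorded_h = torch.randn(256, 128)                      # stand-in for recorded RNN states
for _ in range(200):
    recon, code = qbn(recorded_h)
    loss = torch.nn.functional.mse_loss(recon, recorded_h)
    opt.zero_grad(); loss.backward(); opt.step()
```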
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Building upon the recent success of deep reinforcement learning methods, we investigate the possibility of on-policy reinforcement learning improvement by reusing the data from several consecutive policies. On-policy methods bring many benefits, such as ability to evaluate each resulting policy. However, they usually discard all the information about the policies which existed before. In this work, we propose adaptation of the replay buffer concept, borrowed from the off-policy learning setting, to the on-policy algorithms. To achieve this, the proposed algorithm generalises the Q-, value and advantage functions for data from multiple policies. The method uses trust region optimisation, while avoiding some of the common problems of the algorithms such as TRPO or ACKTR: it uses hyperparameters to replace the trust region selection heuristics, as well as the trainable covariance matrix instead of the fixed one. In many cases, the method not only improves the results comparing to the state-of-the-art trust region on-policy learning algorithms such as ACKTR and TRPO, but also with respect to their off-policy counterpart DDPG. The past few years have been marked by active development of reinforcement learning methods. Although the mathematical foundations of reinforcement learning have been known long before BID23 , starting from 2013, the novel deep learning techniques allowed to solve vision based discrete control tasks such as Atari 2600 games BID15 as well as continuous control problems BID12 . Many of the leading state-of-the-art reinforcement learning methods share the actor-critic architecture BID5 . Actorcritic methods separate the actor, providing a policy, and the critic, providing an approximation for the expected discounted cumulative reward or some derived quantities such as advantage functions BID2 . However, despite improvements, state-of-the-art reinforcement learning still suffers from poor sample efficiency and extensive parameterisation. For most real-world applications, in contrast to simulations, there is a need to learn in real time and over a limited training period, while minimising any risk that would cause damage to the actor or the environment.Reinforcement learning algorithms can be divided into two groups: on-policy and off-policy learning. On-policy approaches (e. g., SARSA BID18 , ACKTR BID28 ) evaluate the target policy by assuming that future actions will be chosen according to it, hence the exploration strategy must be incorporated as a part of the policy. Off-policy methods (e. g., Qlearning BID27 , DDPG BID12 ) separate the exploration strategy, which modifies the policy to explore different states, from the target policy.The off-policy methods commonly use the concept of replay buffers to memorise the outcomes of the previous policies and therefore exploit the information accumulated through the previous iterations BID13 . BID15 combined this experience replay mechanism with Deep Q-Networks (DQN), demonstrating end-to-end learning on Atari 2600 games. One limitation of DQN is that it can only operate on discrete action spaces. BID12 proposed an extension of DQN to handle continuous action spaces based on the Deep Deterministic Policy Gradient (DDPG). 
There, exponential smoothing of the target actor and critic weights has been introduced to ensure stability of the rewards and critic predictions over the subsequent iterations. In order to improve the variance of policy gradients, BID20 proposed a Generalised Advantage Function. combined this advantage function learning with a parallelisation of exploration using differently trained actors in their Asynchronous Advantage Actor Critic model (A3C); however, BID26 demonstrated that such parallelisation may also have negative impact on sample efficiency. Although some work has been performed on improvement of exploratory strategies for reinforcement learning BID8 , but it still does not solve the fundamental restriction of inability to evaluate the actual policy, neither it removes the necessity to provide a separate exploratory strategy as a separate part of the method.In contrast to those, state-of-the-art on-policy methods have many attractive properties: they are able to evaluate exactly the resulting policy with no need to provide a separate exploration strategy. However, they suffer from poor sample efficiency, to a larger extent than off-policy reinforcement learning. TRPO method BID19 has introduced trust region policy optimisation to explicitly control the speed of policy evolution of Gaussian policies over time, expressed in a form of Kullback-Leibler divergence, during the training process. Nevertheless, the original TRPO method suffered from poor sample efficiency in comparison to off-policy methods such as DDPG. One way to solve this issue is by replacing the first order gradient descent methods, standard for deep learning, with second order natural gradient (Amari, 1998). BID28 used a Kroneckerfactored Approximate Curvature (K-FAC) optimiser BID14 in their ACKTR method. PPO method proposes a number of modifications to the TRPO scheme, including changing the objective function formulation and clipping the gradients. BID26 proposed another approach in their ACER algorithm: in this method, the target network is still maintained in the off-policy way, similar to DDPG BID12 , while the trust region constraint is built upon the difference between the current and the target network.Related to our approach, recently a group of methods has appeared in an attempt to get the benefits of both groups of methods. BID7 propose interpolated policy gradient, which uses the weighted sum of both stochastic BID24 and deterministic policy gradient BID22 . BID17 propose an off-policy trust region method, Trust-PCL, which exploits off-policy data within the trust regions optimisation framework, while maintaining stability of optimisation by using relative entropy regularisation.While it is a common practice to use replay buffers for the off-policy reinforcement learning, their existing concept is not used in combination with the existing on-policy scenarios, which results in discarding all policies but the last. Furthermore, many on-policy methods, such as TRPO BID19 , rely on stochastic policy gradient BID24 , which is restricted by stationarity assumptions, in a contrast to those based on deterministic policy gradient BID22 , like DDPG BID12 . In this article, we describe a novel reinforcement learning algorithm, allowing the joint use of replay buffers with trust region optimisation and leading to sample efficiency improvement. The contributions of the paper are given as follows:1. a reinforcement learning method, enabling replay buffer concept along with on-policy data; 2. 
theoretical insights into the replay buffer usage within the on-policy setting are discussed; 3. we show that, unlike state-of-the-art methods such as ACKTR BID28 , PPO (Schulman et al., 2017) and TRPO BID19 , a single non-adaptive set of hyperparameters such as the trust region radius is sufficient for achieving better performance on a number of reinforcement learning tasks. As we are committed to making sure the experiments in our paper are repeatable and to further ensure their acceptance by the community, we will release our source code shortly after the publication. The paper combines replay buffers and on-policy data for reinforcement learning. Experimental results on various tasks from the MuJoCo suite BID25 show significant improvements compared to the state of the art. Moreover, we proposed replacing the heuristically calculated trust region parameters with a single fixed hyperparameter, which also reduces the computational expenses, and a trainable diagonal covariance matrix. The proposed approach opens the door to using a combination of replay buffers and trust regions for reinforcement learning problems. While it is formulated for continuous tasks, it is possible to reuse the same ideas for discrete reinforcement learning tasks, such as ATARI games.
We investigate the theoretical and practical evidence of on-policy reinforcement learning improvement by reusing the data from several consecutive policies.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:506
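The combination described in the entry above — a replay buffer holding data from several consecutive policies plus a trust-region-style constraint and a trainable diagonal covariance — is sketched only generically below. This is not the paper's objective; the importance-weighted surrogate, the soft KL penalty, and all constants are stand-in assumptions of mine.

```python
# Illustrative sketch (generic, my assumptions -- not the paper's exact method): an
# importance-weighted policy-gradient update over a replay buffer of data from the
# last few policies, with a fixed-radius soft KL penalty and a trainable diagonal
# covariance for the Gaussian policy.
import torch
from torch.distributions import Normal, kl_divergence

obs_dim, act_dim = 8, 2
mean_net = torch.nn.Linear(obs_dim, act_dim)
log_std = torch.nn.Parameter(torch.zeros(act_dim))          # trainable covariance
opt = torch.optim.Adam(list(mean_net.parameters()) + [log_std], lr=3e-4)

# Replay buffer entries (states, actions, advantages, behaviour policy statistics)
# recorded under the previous few policies; random stand-ins here.
N = 64
states, actions = torch.randn(N, obs_dim), torch.randn(N, act_dim)
advantages = torch.randn(N)
old_mean, old_std = torch.randn(N, act_dim), torch.ones(N, act_dim)
behaviour_logp = Normal(old_mean, old_std).log_prob(actions).sum(-1)

kl_radius, beta = 0.05, 10.0
for _ in range(20):
    dist = Normal(mean_net(states), log_std.exp())
    ratio = (dist.log_prob(actions).sum(-1) - behaviour_logp).exp()
    surrogate = -(ratio * advantages).mean()
    kl = kl_divergence(Normal(old_mean, old_std), dist).sum(-1).mean()
    loss = surrogate + beta * torch.clamp(kl - kl_radius, min=0.0)   # soft trust region
    opt.zero_grad(); loss.backward(); opt.step()
```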
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Convolutional Neural Networks (CNN) have been successful in processing data signals that are uniformly sampled in the spatial domain (e.g., images). However, most data signals do not natively exist on a grid, and in the process of being sampled onto a uniform physical grid suffer significant aliasing error and information loss. Moreover, signals can exist in different topological structures as, for example, points, lines, surfaces and volumes. It has been challenging to analyze signals with mixed topologies (for example, point cloud with surface mesh). To this end, we develop mathematical formulations for Non-Uniform Fourier Transforms (NUFT) to directly, and optimally, sample nonuniform data signals of different topologies defined on a simplex mesh into the spectral domain with no spatial sampling error. The spectral transform is performed in the Euclidean space, which removes the translation ambiguity from works on the graph spectrum. Our representation has four distinct advantages: (1) the process causes no spatial sampling error during initial sampling, (2) the generality of this approach provides a unified framework for using CNNs to analyze signals of mixed topologies, (3) it allows us to leverage state-of-the-art backbone CNN architectures for effective learning without having to design a particular architecture for a particular data structure in an ad-hoc fashion, and (4) the representation allows weighted meshes where each element has a different weight (i.e., texture) indicating local properties. We achieve good results on-par with state-of-the-art for 3D shape retrieval task, and new state-of-the-art for point cloud to surface reconstruction task. We present a unifying and novel geometry representation for utilizing Convolutional Neural Networks (CNNs) on geometries represented on weighted simplex meshes (including textured point clouds, line meshes, polygonal meshes, and tetrahedral meshes) which preserve maximal shape information based on the Fourier transformation. Most methods that leverage CNNs for shape learning preprocess these shapes into uniform-grid based 2D images (rendered multiview images) or 3D images (binary voxel or Signed Distance Function (SDF)). However, rendered 2D images do not preserve the 3D topologies of the original shapes due to occlusions and the loss of the third spatial dimension. Binary voxels and SDF representations under low resolution suffer big aliasing errors and under high resolution become memory inefficient. Loss of information in the input bottlenecks the effectiveness of the downstream learning process. Moreover, it is not clear how a weighted mesh where each element is weighted by a different scalar or vector (i.e., texture) can be represented by binary voxels and SDF. Mesh and graph based CNNs perform learning on the manifold physical space or graph spectrum, but generality across topologies remains challenging.In contrast to methods that operate on uniform sampling based representations such as voxel-based and view-based models, which suffer significant representational errors, we use analytical integration to precisely sample in the spectral domain to avoid sample aliasing errors. Unlike graph spectrum based methods, our method naturally generalize across input data structures of varied topologies. 
Using our representation, CNNs can be directly applied in the corresponding physical domain obtainable by inverse Fast Fourier Transform (FFT) due to the equivalence of the spectral and physical domains. This allows for the use of powerful uniform Cartesian grid based CNN backbone architectures (such as DLA BID40, ResNet (He et al., 2016)) for the learning task on arbitrary geometrical signals. Although the signal is defined on a simplex mesh, it is treated as a signal in the Euclidean space instead of on a graph, differentiating our framework from graph-based spectral learning techniques, which have significant difficulties generalizing across topologies and are unable to utilize state-of-the-art Cartesian CNNs. We evaluate the effectiveness of our shape representation for deep learning tasks with three experiments: a controlled MNIST toy example, the 3D shape retrieval task, and a more challenging 3D point cloud to surface reconstruction task. In a series of evaluations on different tasks, we show the unique advantages of this representation and its good potential for application in a wider range of shape learning problems. We achieve state-of-the-art performance among non-pre-trained models for the shape retrieval task, and beat state-of-the-art models for the surface reconstruction task. The key contributions of our work are as follows: • We develop mathematical formulations for performing Fourier Transforms of signals defined on a simplex mesh, which generalize and extend to all geometries in all dimensions. (Sec. 3) • We analytically show that our approach computes the frequency domain representation precisely, leading to much lower overall representational errors. (Sec. 3) • We empirically show that our representation preserves maximal shape information compared to commonly used binary voxel and SDF representations. (Sec. 4.1) • We show that deep learning models using CNNs in conjunction with our shape representation achieve state-of-the-art performance across a range of shape-learning tasks including shape retrieval (Sec. 4.2) and point to surface reconstruction (Sec. 4.3). Notation: n is the index of the n-th element among a total of N elements; Ω_n^j is the domain of the n-th element of order j; x is the Cartesian space coordinate vector; i is the imaginary number unit. Shape learning involves the learning of a mapping from input geometrical signals to desired output quantities. The representation of geometrical signals is key to the learning process, since on the one hand the representation determines the learning architectures, and, on the other hand, the richness of information preserved by the representation acts as a bottleneck to the downstream learning process. While data representation has not been an open issue for 2D image learning, it is far from being agreed upon in the existing literature for 3D shape learning. The varied shape representations used in 3D machine learning are generally classified as multiview images BID28 BID26 BID2, volumetric voxels BID38 BID14 BID37 BID0, point clouds BID19 BID24 BID36, polygonal meshes BID3 BID34 BID15 BID12, shape primitives BID42 BID39, and hybrid representations (Dai & Nießner, 2018). Our proposed representation is closest to the volumetric voxel representation, since the inverse Fourier Transform of the spectral signal in the physical domain is a uniform grid implicit representation of the shape. However, binary voxel representation suffers from significant aliasing errors during the uniform sampling step in the Cartesian space BID16.
Using boolean values for what are de facto floating point numbers during CNN training is a waste of information processing power. Also, the primitive-in-cell test for binarization requires arbitrary grouping in cases such as having multiple points or planes in the same cell BID32. The Signed Distance Function (SDF) or Truncated Signed Distance Function (TSDF) (Canelhas, 2017) provides localization for the shape boundary, but is still constrained to linear surface localization due to the linear interpolation process for recovering surfaces from grids. Our proposed representation under the Fourier basis can find nonlinear surface boundaries, achieving subgrid-scale accuracy (see FIG1). We present a general representation for multidimensional signals defined on simplicial complexes that is versatile across geometrical deep learning tasks and maximizes the preservation of shape information. We develop a set of mathematical formulations and algorithmic tools to perform the transformations efficiently. Last but not least, we illustrate the effectiveness of the NUFT representation with a well-controlled example (MNIST polygon), a classic 3D task (shape retrieval) and a difficult and mostly unexplored task by deep learning (point to surface reconstruction), achieving new state-of-the-art performance in the last task. In conclusion, we offer an alternative representation for performing CNN based learning on geometrical signals that shows great potential in various 3D tasks, especially tasks involving mixed-topology signals.
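For the simplest input the paper covers — 0-simplices, i.e., a weighted point cloud, where the signal is a sum of weighted Dirac deltas — the spectral samples have a closed form, F(k) = Σ_n w_n exp(−i 2π k·x_n), which can be evaluated exactly on any frequency grid and inverted with a standard FFT. The snippet below is a minimal sketch of that special case only, assuming points normalized to the unit cube and an arbitrary 32³ frequency grid; it is not the authors' implementation, and higher-order simplices (lines, triangles, tetrahedra) require the analytical integrals derived in the paper.

import numpy as np

def nuft_point_cloud(points, weights, res=32):
    """Exact spectral samples of a weighted point cloud (sum of Dirac deltas).

    points:  (N, 3) coordinates, assumed to lie in [0, 1)^3
    weights: (N,) per-point weights ("texture")
    res:     number of frequencies per axis
    Returns a (res, res, res) complex array of Fourier coefficients.
    """
    freqs = np.fft.fftfreq(res) * res                         # integer frequencies
    kx, ky, kz = np.meshgrid(freqs, freqs, freqs, indexing="ij")
    k = np.stack([kx, ky, kz], axis=-1)                       # (res, res, res, 3)
    phase = -2j * np.pi * np.tensordot(k, points.T, axes=1)   # (res, res, res, N)
    return (np.exp(phase) * weights).sum(axis=-1)

# A rasterized view of the shape is recovered with an inverse FFT and can be fed
# to any standard Cartesian-grid 3D CNN backbone.
pts = np.random.rand(100, 3)
w = np.ones(100)
spec = nuft_point_cloud(pts, w)
img = np.real(np.fft.ifftn(spec))                             # (32, 32, 32) grid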
We use non-Euclidean Fourier Transformation of shapes defined by a simplicial complex for deep learning, achieving significantly better results than point-based sampling techniques used in current 3D learning literature.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:507
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Discovering causal structure among a set of variables is a fundamental problem in many empirical sciences. Traditional score-based causal discovery methods rely on various local heuristics to search for a Directed Acyclic Graph (DAG) according to a predefined score function. While these methods, e.g., greedy equivalence search, may have attractive results with infinite samples and certain model assumptions, they are less satisfactory in practice due to finite data and possible violation of assumptions. Motivated by recent advances in neural combinatorial optimization, we propose to use Reinforcement Learning (RL) to search for the DAG with the best scoring. Our encoder-decoder model takes observable data as input and generates graph adjacency matrices that are used to compute rewards. The reward incorporates both the predefined score function and two penalty terms for enforcing acyclicity. In contrast with typical RL applications where the goal is to learn a policy, we use RL as a search strategy and our final output would be the graph, among all graphs generated during training, that achieves the best reward. We conduct experiments on both synthetic and real datasets, and show that the proposed approach not only has an improved search ability but also allows for a flexible score function under the acyclicity constraint. Discovering and understanding causal mechanisms underlying natural phenomena are important to many disciplines of sciences. An effective approach is to conduct controlled randomized experiments, which however is expensive or even impossible in certain fields such as social sciences (Bollen, 1989) and bioinformatics (Opgen-Rhein and Strimmer, 2007). Causal discovery methods that infer causal relationships from passively observable data are hence attractive and have been an important research topic in the past decades (Pearl, 2009; Spirtes et al., 2000; Peters et al., 2017). A major class of such causal discovery methods are score-based, which assign a score S(G), typically computed with the observed data, to each directed graph G and then search over the space of all Directed Acyclic Graphs (DAGs) for the best scoring: min_G S(G), subject to G ∈ DAGs. (1) While there have been well-defined score functions such as the Bayesian Information Criterion (BIC) or Minimum Description Length (MDL) score (Schwarz, 1978; Chickering, 2002) and the Bayesian Gaussian equivalent (BGe) score (Geiger and Heckerman, 1994) for linear-Gaussian models, Problem (1) is generally NP-hard to solve (Chickering, 1996; Chickering et al., 2004), largely due to the combinatorial nature of its acyclicity constraint, with the number of DAGs increasing superexponentially in the number of graph nodes. To tackle this problem, most existing approaches rely on local heuristics to enforce the acyclicity. For example, Greedy Equivalence Search (GES) enforces acyclicity one edge at a time, explicitly checking for the acyclicity constraint when an edge is added. GES is known to find the global minimizer with infinite samples under suitable assumptions (Chickering, 2002; Nandy et al., 2018), but this is not guaranteed in the finite sample regime.
There are also hybrid methods that use constraint-based approaches to reduce the search space before applying score-based methods, e.g., the max-min hill climbing method (Tsamardinos et al., 2006) . However, this methodology lacks a principled way of choosing a problem-specific combination of score functions and search strategies. Recently, Zheng et al. (2018) introduced a smooth characterization for the acyclicity constraint, and Problem (1) can be formulated as a continuous optimization problem w.r.t. the weighted graph adjacency matrix by picking a proper loss function, e.g., the least squares loss. Subsequent works Yu et al. (2019) and Lachapelle et al. (2019) have also adopted the evidence lower bound and the negative log-likelihood as loss functions, respectively, and used Neural Networks (NNs) to model the causal relationships. Note that the loss functions in these methods must be carefully chosen in order to apply continuous optimization methods. Unfortunately, many effective score functions, e.g., the generalized score function proposed by Huang et al. (2018) and the independence based score function given by , either cannot be represented in closed forms or have very complicated equivalent loss functions, and thus cannot be easily combined with this approach. We propose to use Reinforcement Learning (RL) to search for the DAG with the best score according to a predefined score function, as outlined in Figure 1 . The insight is that an RL agent with stochastic policy can determine automatically where to search given the uncertainty information of the learned policy, which gets updated promptly by the stream of reward signals. To apply RL to causal discovery, we use an encoder-decoder NN model to generate directed graphs from the observed data, which are then used to compute rewards consisting of the predefined score function as well as two penalty terms to enforce acyclicity. We resort to policy gradient and stochastic optimization methods to train the weights of the NNs, and our output is the graph that achieves the best reward, among all graphs generated in the training process. Experiments on both synthetic and real datasets show that our approach has a much improved search ability without sacrificing any flexibility in choosing score functions. In particular, the proposed approach using BIC as score function outperforms GES with the same score function on linear non-Gaussian acyclic model (LiNGAM) and linear-Gaussian datasets, and also outperforms recent gradient based methods when the causal relationships are nonlinear.
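As a rough illustration of the reward described above — a predefined score plus penalties that push generated graphs toward acyclicity — the sketch below combines a least-squares BIC for linear-Gaussian models with the trace-exponential acyclicity measure h(A) = tr(e^(A∘A)) − d of Zheng et al. (2018). The penalty weights lam1 and lam2, the indicator-plus-h form of the penalty, and the BIC variant are all assumptions made for this toy example rather than the paper's exact choices.

import numpy as np
from scipy.linalg import expm

def acyclicity(A):
    # h(A) = tr(exp(A * A)) - d, equal to 0 iff the graph is a DAG (Zheng et al., 2018)
    d = A.shape[0]
    return np.trace(expm(A * A)) - d

def bic_score(A, X):
    # BIC for a linear-Gaussian model: regress each node on its parents in A.
    n, d = X.shape
    score = 0.0
    for j in range(d):
        parents = np.flatnonzero(A[:, j])
        if parents.size:
            coef, *_ = np.linalg.lstsq(X[:, parents], X[:, j], rcond=None)
            resid = X[:, j] - X[:, parents] @ coef
        else:
            resid = X[:, j]
        score += n * np.log(resid.var() + 1e-8) + parents.size * np.log(n)
    return score

def reward(A, X, lam1=1.0, lam2=10.0):
    # Higher reward = lower penalized score; the RL agent searches for its maximizer.
    h = acyclicity(A)
    return -(bic_score(A, X) + lam1 * float(h > 1e-8) + lam2 * h)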
We apply reinforcement learning to score-based causal discovery and achieve promising results on both synthetic and real datasets
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:508
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Topic modeling discovers the latent topic probabilities of given text documents. To generate more meaningful topics that better represent the given documents, we propose a universal method which can be used in the data preprocessing stage. The method consists of three steps. First, it generates the words/word-pairs from every single document. Second, it applies a two-way parallel TF-IDF algorithm to the words/word-pairs for semantic filtering. Third, it uses the k-means algorithm to merge the word pairs that have similar semantic meanings. Experiments are carried out on the Open Movie Database (OMDb), the Reuters Dataset and the 20NewsGroup Dataset, with the mean Average Precision score used as the evaluation metric. We compare our results with other state-of-the-art topic models, such as Latent Dirichlet Allocation and traditional Restricted Boltzmann Machines. Our proposed data preprocessing can improve the generated topic accuracy by up to 12.99%. How the number of clusters and the number of word pairs should be adjusted for different types of text documents is also discussed. After the millennium, most collective information has been digitized to form an immense database distributed across the Internet. Among all, text-based knowledge is dominant because of its vast availability and numerous forms of existence. For example, news, articles, or even Twitter posts are various kinds of text documents. For humans, it is difficult to locate one's searching target in the sea of countless texts without a well-defined computational model to organize the information. On the other hand, in this big data era, the e-commerce industry takes huge advantage of machine learning techniques to discover customers' preferences. For example, notifying a customer of the release of "Star Wars: The Last Jedi" if he/she has ever purchased the tickets for "Star Trek Beyond"; recommending a reader "A Brief History of Time" from Stephen Hawking in case there is a "Relativity: The Special and General Theory" from Albert Einstein in the shopping cart on Amazon. The content based recommendation is achieved by analyzing the theme of the items extracted from their text descriptions. Topic modeling is a collection of algorithms that aim to discover and annotate large archives of documents with thematic information BID0. Usually, general topic modeling algorithms do not require any prior annotations or labeling of the documents, while the abstraction is the output of the algorithms. Topic modeling enables us to convert a collection of large documents into a set of topic vectors. Each entry in this concise representation is a probability of the latent topic distribution. By comparing the topic distributions, we can easily calculate the similarity between two different documents BID25. Some topic modeling algorithms are highly frequently used in text-mining BID13, preference recommendation BID27 and computer vision BID28 BID0. Many of the traditional topic models focus on latent semantic analysis with unsupervised learning. Latent Semantic Indexing (LSI) BID11 applies Singular-Value Decomposition (SVD) BID6 to transform the term-document matrix to a lower dimension where semantically similar terms are merged.
It can be used to report the semantic distance between two documents; however, it does not explicitly provide the topic information. The Probabilistic Latent Semantic Analysis (PLSA) BID9 model uses maximum likelihood estimation to extract latent topics and the topic word distribution, while the Latent Dirichlet Allocation (LDA) BID1 model performs iterative sampling and characterization to search for the same information. The availability of many manually categorized online documents, such as Internet Movie Database (IMDb) movie reviews Inc. (1990) and Wikipedia articles, makes the training and testing of topic models possible. All of the existing works are based on the bag-of-words model, where a document is considered as a collection of words. The semantic information of words and the interactions among objects are assumed to be unknown during the model construction. Such a simple representation can be improved by recent research advances in natural language processing and word embedding. In this paper, we will explore the existing knowledge and build a topic model using explicit semantic analysis. The work studies the best data processing and feature extraction algorithms for topic modeling and information retrieval. We investigate how the available semantic knowledge, which can be obtained from language analysis or from an existing dictionary such as WordNet, can assist in the topic modeling. Our main contributions are: • We redesign a new topic model which combines two types of text features as the model input. • We apply a numerical statistic algorithm to determine the key elements for each document dynamically. • We apply a vector quantization method to merge and filter text units based on the semantic meaning. • We significantly improve the accuracy of the prediction using our proposed model. The rest of the paper is structured as follows: In Section 2, we review the existing methods, from which we got our inspirations. This is followed in Section 3 by details about our topic models. Section 4 describes our experimental steps and evaluates the results. Finally, Section 5 concludes this work. In this paper, we proposed a few techniques to process the dataset and optimized the original RBM model. During the dataset processing part, first, we used a semantic dependency parser to extract the word pairs from each sentence of the text document. Then, by applying a two-way parallel TF-IDF processing, we filtered the data at the word level and the word-pair level. Finally, the k-means clustering algorithm helped us merge the similar word pairs and remove the noise from the feature dictionary. We replaced the original word-only RBM model by introducing word pairs. At the end, we showed that proper selection of the K value and the word-pair generation techniques can significantly improve the topic prediction accuracy and the document retrieval performance. With our improvement, experimental results have verified that, compared to the original word-only RBM model, our proposed word/word-pair combined model can improve the mAP score by up to 10.48% on the OMDb dataset, up to 1.11% on the Reuters dataset and up to 12.99% on the 20NewsGroup dataset.
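The excerpt does not spell out the dependency-parser word-pair extraction or the exact two-way parallel TF-IDF variant, so the sketch below substitutes adjacent-word bigrams for parsed word pairs and clusters pairs by their document-occurrence vectors as a crude stand-in for semantic similarity; every modeling choice here (vectorizers, top-1000 cutoff, number of clusters) is an assumption used only to show the shape of the three-step pipeline.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = ["the movie was good", "the plot was bad", "good plot good movie"]

# Steps 1-2: word and word-pair (bigram proxy) features, filtered by TF-IDF mass.
words = TfidfVectorizer(ngram_range=(1, 1))
pairs = TfidfVectorizer(ngram_range=(2, 2))
W, P = words.fit_transform(docs), pairs.fit_transform(docs)
keep_w = np.asarray(W.sum(axis=0)).ravel().argsort()[::-1][:1000]
keep_p = np.asarray(P.sum(axis=0)).ravel().argsort()[::-1][:1000]

# Step 3: merge word pairs with similar usage via k-means over their document vectors.
pair_vecs = P.T.toarray()[keep_p]                 # one row per kept word pair
k = min(2, len(keep_p))
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pair_vecs)
merged_features = ["pair_cluster_%d" % c for c in labels]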
We proposed a universal method which can be used in the data preprocessing stage to generate the more meaningful topic that better represents the given document
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:509
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Federated Learning (FL) refers to learning a high quality global model based on decentralized data storage, without ever copying the raw data. A natural scenario arises with data created on mobile phones by the activity of their users. Given the typical data heterogeneity in such situations, it is natural to ask how can the global model be personalized for every such device, individually. In this work, we point out that the setting of Model Agnostic Meta Learning (MAML), where one optimizes for a fast, gradient-based, few-shot adaptation to a heterogeneous distribution of tasks, has a number of similarities with the objective of personalization for FL. We present FL as a natural source of practical applications for MAML algorithms, and make the following observations. 1) The popular FL algorithm, Federated Averaging, can be interpreted as a meta learning algorithm. 2) Careful fine-tuning can yield a global model with higher accuracy, which is at the same time easier to personalize. However, solely optimizing for the global model accuracy yields a weaker personalization result. 3) A model trained using a standard datacenter optimization method is much harder to personalize, compared to one trained using Federated Averaging, supporting the first claim. These results raise new questions for FL, MAML, and broader ML research. In recent years, the growth of machine learning applications was driven by aggregation of large amounts of data in a datacenter, where a model can be trained using large scale distributed system (Dean et al., 2012; LeCun et al., 2015) . Both the research community and general public are becoming increasingly aware that there is a variety of scenarios where this kind of data collection comes with significant risks, mainly related to notions of privacy and trust. In the presence of user generated data, such as activity on mobile phones, Federated Learning (FL) proposes an alternative approach for training a high quality global model without ever sending raw data to the cloud. The FL system proposed by Google (Bonawitz et al., 2019 ) selects a sample of available devices and sends them a model to be trained. The devices compute an update to the model based on an optimization procedure with locally available data, and the central system aggregates the updates from different devices. Such iteration is repeated many times until the model has converged. The users' training data does not leave their devices. The basic FL algorithm, Federated Averaging (FedAvg) , has been used in production applications, for instance for next word prediction in mobile keyboard (Hard et al., 2018) , which shows that Federated Learning can outperform the best model trained in a datacenter. Successful algorithmic extensions to the central idea include training a differential private model (McMahan et al., 2018) , compression (Konečný et al., 2016b; Caldas et al., 2018a) , secure aggregation (Bonawitz et al., 2017) , and a smaller number of always-participating nodes (Yang et al., 2019) . FL applications generally face non-i.i.d and unbalanced data available to devices, which makes it challenging to ensure good performance across different devices with a FL-trained global model. 
Theoretical guarantees are only available under restrictive assumptions and for convex objectives, cf. Li et al. (2019b) . In this work, we are interested in personalization methods that adapt the model for data available on each device, individually. We refer to a trained global model as the initial model, and the locally adapted model as the personalized model. Existing FL personalization work directly takes a converged initial model and conducts personalization evaluation via gradient descent . However, in this approach, the training and personalization procedures are completely disconnected, which results in potentially suboptimal personalized models. Meta Learning optimizes the performance after adaptation given few-shot adaptation examples on heterogeneous tasks, and has increasing applications in the context of Supervised Learning and Reinforcement Learning. Model Agnostic Meta Learning (MAML) introduced by Finn et al. (2017) is a solely gradient-based Meta Learning algorithm, which runs in two connected stages; metatraining and meta-testing. Meta-training learns a sensitive initial model which can conduct fast adaptation on a range of tasks, and meta-testing adapts the initial model for a particular task. Both tasks for MAML, and clients for FL, are heterogeneous. For each task in MAML and client in FL, existing algorithms use a variant of gradient descent locally, and send an overall update to a coordinator to update the global model. If we present the FL training process as meta-training in the MAML language, and the FL personalization via gradient descent as meta-testing, we show in Section 2 that FedAvg (McMahan et al., 2017) and Reptile (Nichol et al., 2018) , two popular FL and MAML algorithms, are very similar to each other; see also Khodak et al. (2019) . In order to make FL personalization useful in practice, we propose that the following objectives must all be addressed, simultaneously. (1) Improved Personalized Model -for a large majority of the clients. (2) Solid Initial Model -some clients have limited or even no data for personalization. (3) Fast Convergence -reach a high quality model in small number of training rounds. Typically, the MAML algorithms only focus on objective (1); that was the original motivation in Finn et al. (2017) . Existing FL works usually focus on objectives (2) and (3), and take the personalized performance as secondary. This is largely due to the fact that it was not obvious that getting a solid initial model is feasible or practical if devices are available occasionally and with limited resources. In this work, we study these three objectives jointly, and our main contributions are: • We point out the connection between two widely used FL and MAML algorithms, and interpret existing FL algorithm in the light of existing MAML algorithms. • We propose a novel modification of FedAvg, with two stages of training and fine-tuning, for optimizing the three above objectives. • We empirically demonstrate that FedAvg is already a meta learning algorithm, optimizing for personalized performance, as opposed to quality of the global model. Furthermore, we show that the fine tuning stage enables better and more stable personalized performance. • We observe that different global models with the same accuracy, can exhibit very different capacity for personalization. • We highlight that these results challenge the existing objectives in the FL literature, and motivate new problems for the broader Machine Learning research community. 
In this work, we argue that in the context of Federated Learning, the accuracy of the global model after personalization should be of much greater interest than it has been. Investigation of the topic reveals close similarities between the fields of Federated Learning and Model Agnostic Meta Learning, and raises new questions for these areas, as well as for the broader Machine Learning community. Challenges for Federated Learning. Framing papers in the area of Federated Learning (Konečný et al., 2016a; Li et al., 2019a) formulate the objective as training of a shared global model, based on a decentralized data storage where each node / client has access to a non-i.i.d sample from the overall distribution. The objective is identical to one the broader ML community would optimize for, had all the data been available in a centralized location. We argue that in this setting, the primary objective should be the adaptation to the statistical heterogeneity present at different data nodes, and demonstrate that the popular FL algorithm, Federated Averaging, does in fact optimize the personalized performance, and while doing so, also improves the performance of the global model. Experiments we perform demonstrate that the algorithm used to train the model has major influence on its capacity to personalize. Moreover, solely optimizing the accuracy of the global model tends to have a negative impact on its capacity to personalize, which further questions the correctness of the commonly presented objectives of Federated Learning. Challenges for Model Agnostic Meta Learning. The objectives in the Model Agnostic Meta Learning literature are usually only the model performance after adaptation to a given task (Finn et al., 2017). In this work, we present the setting of Federated Learning as a good source of practical applications for MAML algorithms. However, to have impact in FL, these methods need to also consider the performance of the initial model, as in practice there will be many clients without data available for personalization. In addition, the connectivity constraints in a production deployment emphasize the importance of fast convergence in terms of number of communication rounds. We suggest these objectives become the subject of MAML works, in addition to the performance after adaptation, and to consider the datasets with a natural user/client structure being established for Federated Learning (Caldas et al., 2018b) as the source of experiments for supervised learning. Challenges for broader Machine Learning. The empirical evaluation in this work raises a number of questions of relevance to Machine Learning research in general. In particular, Figure 2 clearly shows that models with similar initial accuracy can have very different capacity to personalize to a task of the same type as they were trained on. This observation raises obvious questions for which we currently cannot provide an answer. How does the training algorithm impact the personalization ability of the trained model? Is there something we can measure that will predict the adaptability of the model? Is it something we can directly optimize for, potentially leading to novel optimization methods? These questions can relate to a gap highlighted in Table 2. While the common measures could suggest the global model is overfitting the training data, this is not true of the personalized model. Transfer Learning is another technique for which our result could inspire a novel solution.
It is very common for machine learning practitioners to take a trained model from the research community, replace the final layer with a different output class of interest, and retrain for the new task (Oquab et al., 2014) . We conjecture that the algorithms proposed in the FL and MAML communities, could yield base models for which this kind of domain adaptation would yield better results. Finally, we believe that a systematic analysis of optimization algorithms of the inner-outer structure presented in Algorithm 1 could provide novel insights into the connections between optimization and generalization. Apart from the FL and MAML algorithms, Zhang et al. (2019) recently proposed a method that can be interpreted as outer optimizer in the general algorithm, which improves the stability of a variety of existing optimization methods used as the inner optimizer. A APPENDIX This Appendix contains further details referenced from the main body of the paper. Table 3 summarizes the attempts at fine tuning the model user in main body with different server optimizers. We see that comparing the same client optimizers, Adam consistently provides better and more stable results in terms of initial accuracy. A.2 PER-CLIENT PERSONALIZATION RESULTS Figure 4 visualizes the distribution of initial and personalized accuracies on a per-client basis. Each dot represents a random sample of the test clients used for personalization experiments. Studying this distribution is of great importance, as in practical deployment, degrading a user's experience might incur disproportionate cost, compared to the benefit of comparable improvement. Designing methods that robustly identify the clients below the diagonal line and at least revert to the initial model is worth of future investigation.
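The inner/outer structure referred to above (Algorithm 1 is not reproduced in this excerpt) can be made concrete with a toy sketch: sample clients, run local SGD on each, then move the server model toward the average client update — a FedAvg step that, with server learning rate 1, is exactly a Reptile outer update. The client data, learning rates, and quadratic toy loss below are placeholder assumptions, not the paper's experimental setup.

import numpy as np

rng = np.random.default_rng(0)
clients = [rng.normal(loc=c, size=(50, 2)) for c in (-1.0, 0.0, 2.0)]  # toy client data

def local_sgd(w, data, inner_lr=0.1, steps=5):
    for _ in range(steps):
        batch = data[rng.choice(len(data), 10)]
        grad = w - batch.mean(axis=0)      # gradient of 0.5*||w - x||^2 averaged over the batch
        w = w - inner_lr * grad
    return w

w_global = np.zeros(2)
for rnd in range(100):                      # communication rounds
    sampled = rng.choice(len(clients), 2, replace=False)
    deltas = [local_sgd(w_global.copy(), clients[i]) - w_global for i in sampled]
    # Server step: FedAvg with server learning rate 1.0 coincides with a Reptile outer update.
    w_global = w_global + 1.0 * np.mean(deltas, axis=0)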
Federated Averaging already is a Meta Learning algorithm, while datacenter-trained methods are significantly harder to personalize.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:51
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Multi-task learning has been successful in modeling multiple related tasks with large, carefully curated labeled datasets. By leveraging the relationships among different tasks, multi-task learning framework can improve the performance significantly. However, most of the existing works are under the assumption that the predefined tasks are related to each other. Thus, their applications on real-world are limited, because rare real-world problems are closely related. Besides, the understanding of relationships among tasks has been ignored by most of the current methods. Along this line, we propose a novel multi-task learning framework - Learning To Transfer Via Modelling Multi-level Task Dependency, which constructed attention based dependency relationships among different tasks. At the same time, the dependency relationship can be used to guide what knowledge should be transferred, thus the performance of our model also be improved. To show the effectiveness of our model and the importance of considering multi-level dependency relationship, we conduct experiments on several public datasets, on which we obtain significant improvements over current methods. Multi-task learning (Caruana, 1997) aims to train a single model on multiple related tasks jointly, so that useful knowledge learned from one task can be transferred to enhance the generalization performance of other tasks. Over the last few years, different types of multi-task learning mechanisms (Sener & Koltun, 2018; Guo & Farooq, 2018; Ish, 2016; Lon, 2015) have been proposed and proved better than single-task learning methods from natural language processing (Palmer et al., 2017) and computer vision (Cortes et al., 2015) to chemical study (Ramsundar et al., 2015) . Despite the success of multi-task learning, when applying to 'discrete' data (graph/text), most of the current multi-task learning frameworks (Zamir et al., 2018; Ish, 2016) only leverage the general task dependency with the assumption that the task dependency remains the same for (1) different data samples; and (2) different sub-structures (node/word) in one data sample (graph/text). However, this assumption is not always true in many real-world problems. (1) Different data samples may have different task dependency. For example, when we want to predict the chemical properties of a particular toxic molecule, despite the general task dependency, its representations learned from toxicity prediction tasks should be more significant than the other tasks. (2) Even for the same data sample, different sub-structures may have different task dependency. Take sentence classification as an example. Words like 'good' or 'bad' may transfer more knowledge from sentiment analysis tasks, while words like 'because' or 'so' may transfer more from discourse relation identification tasks. In this work, to accurately learn the task dependency in both general level and data-specific level, we propose a novel framework, 'Learning to Transfer via ModellIng mulTi-level Task dEpeNdency' (L2T-MITTEN). The general task dependency is learned as a parameterized weighted dependency graph. And the data-specific task dependency is learned with the position-wise mutual attention mechanism. 
The two-level task dependency can be used by our framework to improve the performance on multiple tasks. And the objective function of multi-task learning can further enhance the quality of the learned task dependency. By iteratively mutual enhancement, our framework can not only perform better on multiple tasks, but also can extract high-quality dependency structures at different levels, which can reveal some hidden knowledge of the datasets. Another problem is that to transfer task-specific representations between every task pair, the number of transfer functions will grow quadratically as the number of tasks increases, which is unaffordable. To solve this, we develop a universal representation space where all task-specific representations get mapped to and all target tasks can be inferred from. This decomposition method reduces the space complexity from quadratic to linear. We validate our multi-task learning framework extensively on different tasks, including graph classication, node classification, and text classification. Our framework outperforms all the other state-ofthe-art (SOTA) multi-task methods. Besides, we show that L2T-MITTEN can be used as an analytic tool to extract interpretable task dependency structures at different levels on real-world datasets. Our contributions in this work are threefold: • We propose a novel multi-task learning framework to learn to both general task dependency and data-specific task dependency. The learned task dependency structures can be mutually enhanced with the objective function of multi-task learning. • We develop a decomposition method to reduce the space complexity needed by transfer functions from quadratic to linear. • We conduct extensive experiments on different real-world datasets to show the effectiveness of our framework and the importance of modelling multi-level task dependency. We propose L2T-MITTEN, a novel multi-task learning framework that (1) employs the positionwise mutual attention mechanism to learn the multi-level task dependency; (2) transfers the taskspecific representations between tasks with linear space-efficiency; and (3) uses the learned multilevel task dependency to guide the inference. We design three experimental settings where training data is sufficient, imbalanced or deficient, with multiple graph/text datasets. Experimental results demonstrate the superiority of our method against both classical and SOTA baselines. We also show that our framework can be used as an analytical tool to extract the task dependency structures at different levels, which can reveal some hidden knowledge of tasks and of datasets A DATASET SUMMARY Figure 4 , in the Encoder Block, we use several layers of graph convolutional layers (Kipf & Welling, 2016) followed by the layer normalization (Ba et al., 2016) . In the Readout Block, for graph-level task, we use set-to-set (Vinyals et al., 2015) as the global pooling operator to extract the graph-level representation which is later fed to a classifier; while for node-level task, we simply eliminate the global pooling layer and feed the node-level representation directly to the classifier. Figure 4: Graph convolutional networks architecture. Note that in node-level task, the Set2Set layer (global pooling) is eliminated.
We propose a novel multi-task learning framework which extracts multi-view dependency relationships automatically and uses them to guide the knowledge transfer among different tasks.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:510
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: We design simple and quantifiable testing of global translation-invariance in deep learning models trained on the MNIST dataset. Experiments on convolutional and capsule neural networks show that both models have poor performance in dealing with global translation-invariance; however, the performance is improved by using data augmentation. Although the capsule network is better on the MNIST testing dataset, the convolutional neural network generally has better performance on the translation-invariance. Convolutional neural networks (CNN) have achieved state-of-the-art, better-than-human performance on many computer vision tasks BID6; BID2. The deep learning community tends to believe that the success of CNN is mainly due to two key features: reduced computation cost with weight sharing in convolutional layers, and generalization with local invariance in subsampling layers BID7; BID8. Because convolutional layers are 'place-coded' equivariant and max-pooling layers are locally invariant BID1, a CNN has to learn different models for different viewpoints, which needs big data and comes at an expensive cost. A more general model should be able to train on a limited range of viewpoints and get good performance on a much wider range. The capsule network is robust in dealing with different viewpoints BID3 BID9; BID4. Capsules are groups of neurons which include the pose, colour, lighting and deformation of the visual entity. The capsule network aims for 'rate-coded' equivariance, because it's the weights that code viewpoint-invariant knowledge, not the neural activities. Viewpoint changes in a capsule network are linear effects on the pose matrices of the parts and the whole between different capsule layers. However, it is still unclear whether capsule networks are able to generalize for global translation invariance. Visualizing and quantifying the translation-invariance in deep learning models is essential for understanding the architectural choices and helpful for developing general models that are invariant to viewpoint changes. An analysis using translation-sensitivity maps for the MNIST digit dataset has been used to investigate translation invariance in CNN BID5. In this paper, we introduce a simple method to test the performance of global translation-invariance in convolutional and capsule neural network models trained on the MNIST dataset. We introduce a simple GTI testing dataset for deep learning models trained on the MNIST dataset. The goal is to get a better understanding of the ability of CNN and CapsNet to deal with global translational invariance. Although the current version of CapsNet could not handle global translational invariance without data augmentation, we still believe the CapsNet architecture is potentially better than CNN at dealing with global translational invariance, because capsules could be trained to learn all viewpoints no matter whether they receive the information from the centre or the edge. Our testing method is simple and quantifiable, and it is easy to implement for other computer vision datasets by taking a clear and correctly labelled image from each class and applying the translational shifting to cover all possible cases. (Figure 5: GTI dataset accuracy of models trained with CNN and CapsNet with different amounts of random shifting in the MNIST training dataset.)
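One way to build such a GTI test set — an assumption-level sketch rather than the authors' exact construction — is to take one clear, correctly labelled 28×28 exemplar per class and paste it at every possible offset of a larger blank canvas; the canvas size of 40 below is an arbitrary choice.

import numpy as np

def gti_variants(digit28, canvas=40):
    """Every global translation of a 28x28 digit inside a canvas x canvas image."""
    variants = []
    for top in range(canvas - 28 + 1):
        for left in range(canvas - 28 + 1):
            img = np.zeros((canvas, canvas), dtype=digit28.dtype)
            img[top:top + 28, left:left + 28] = digit28
            variants.append(img)
    return np.stack(variants)        # (#offsets, canvas, canvas)

# One exemplar per class is shifted to all positions; a trained model's accuracy
# over these shifted copies quantifies its global translation invariance.
digit = np.random.rand(28, 28)       # placeholder for a real MNIST digit
test_set = gti_variants(digit)       # 169 translated copies for canvas=40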
Testing of global translational invariance in Convolutional and Capsule Networks
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:511
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Gaussian processes are ubiquitous in nature and engineering. A case in point is a class of neural networks in the infinite-width limit, whose priors correspond to Gaussian processes. Here we perturbatively extend this correspondence to finite-width neural networks, yielding non-Gaussian processes as priors. The methodology developed herein allows us to track the flow of preactivation distributions by progressively integrating out random variables from lower to higher layers, reminiscent of renormalization-group flow. We further develop a perturbative prescription to perform Bayesian inference with weakly non-Gaussian priors.
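The infinite-width starting point of the abstract can be illustrated with a small Monte Carlo: for a one-hidden-layer network with standard NNGP weight scaling (an assumed architecture, not necessarily the paper's setup), the prior over the output at a fixed input approaches a Gaussian as the width grows, with the excess kurtosis — a simple proxy for the non-Gaussian corrections the paper computes perturbatively — shrinking roughly like 1/width.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)                         # one fixed input, d = 8

def sample_outputs(width, n_samples=5000):
    # NNGP-style scaling: W1 ~ N(0, 1/d), w2 ~ N(0, 1/width)
    W1 = rng.normal(size=(n_samples, width, x.size)) / np.sqrt(x.size)
    w2 = rng.normal(size=(n_samples, width)) / np.sqrt(width)
    pre = np.tanh(W1 @ x)                      # (n_samples, width)
    return (w2 * pre).sum(axis=1)

for width in (4, 16, 64, 256):
    z = sample_outputs(width)
    kurt = ((z - z.mean()) ** 4).mean() / z.var() ** 2 - 3.0
    print(f"width={width:4d}  excess kurtosis={kurt:+.3f}")   # tends to 0 as width grows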
We develop an analytical method to study Bayesian inference of finite-width neural networks and find that the renormalization-group flow picture naturally emerges.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:512
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Distillation is a method to transfer knowledge from one model to another and often achieves higher accuracy with the same capacity. In this paper, we aim to provide a theoretical understanding of what mainly helps with the distillation. Our answer is "early stopping". Assuming that the teacher network is overparameterized, we argue that the teacher network is essentially harvesting dark knowledge from the data via early stopping. This can be justified by a new concept, Anisotropic Information Retrieval (AIR), which means that the neural network tends to fit the informative information first and the non-informative information (including noise) later. Motivated by the recent development on theoretically analyzing overparameterized neural networks, we can characterize AIR by the eigenspace of the Neural Tangent Kernel (NTK). AIR facilitates a new understanding of distillation. With that, we further utilize distillation to refine noisy labels. We propose a self-distillation algorithm to sequentially distill knowledge from the network in the previous training epoch to avoid memorizing the wrong labels. We also demonstrate, both theoretically and empirically, that self-distillation can benefit from more than just early stopping. Theoretically, we prove convergence of the proposed algorithm to the ground truth labels for randomly initialized overparameterized neural networks in terms of l2 distance, while the previous result was on convergence in 0-1 loss. The theoretical result ensures the learned neural network enjoys a margin on the training data, which leads to better generalization. Empirically, we achieve better testing accuracy and entirely avoid early stopping, which makes the algorithm more user-friendly. Deep learning achieves state-of-the-art results in many tasks in computer vision and natural language processing LeCun et al. (2015). Among these tasks, image classification is considered as one of the fundamental tasks since classification networks are commonly used as base networks for other problems. In order to achieve higher accuracy using a network with similar complexity as the base network, distillation has been proposed, which aims to utilize the prediction of one (teacher) network to guide the training of another (student) network. In Hinton et al. (2015), the authors suggested generating a soft target by a heavy-duty teacher network to guide the training of a light-weighted student network. More interestingly, Furlanello et al. (2018); Bagherinezhad et al. (2018) proposed to train a student network parameterized identically to the teacher network. Surprisingly, the student network significantly outperforms the teacher network. Later, it was suggested by Zagoruyko & Komodakis (2016a); Huang & Wang (2017); Czarnecki et al. (2017) to transfer knowledge of representations, such as attention maps and gradients of the classifier, to help with the training of the student network. In this work, we focus on the distillation utilizing the network outputs Hinton et al. (2015); Furlanello et al. (2018); Yang et al. (2018a); Bagherinezhad et al. (2018); Yang et al. (2018b). To explain the effectiveness of distillation, Hinton et al.
(2015) suggested that instead of the hard labels (i.e one-hot vectors), the soft labels generated by the pre-trained teacher network provide extra information, which is called the "Dark Knowledge". The "Dark knowledge" is the knowledge encoded by the relative probabilities of the incorrect outputs. In Hinton et al. (2015) ; Furlanello et al. (2018) ; Yang et al. (2018a) , the authors pointed out that secondary information, i.e the semantic similarity between different classes, is part of the "Dark Knowledge", and Bagherinezhad et al. (2018) observed that the "Dark Knowledge" can help to refine noisy labels. In this paper, we would like to answer the following question: can we theoretically explain how neural networks learn the Dark Knowledge? Answering this question will help us to understand the regularization effect of distillation. In this work, we assume that the teacher network is overparameterized, which means that it can memorize all the labels via gradient descent training Du et al. (2018b; a) ; Oymak & Soltanolkotabi (2018) ; Allen-Zhu et al. (2018) . In this case, if we train the overparameterized teacher network until convergence, the network's output coincides exactly with the ground truth hard labels. This is because the logits corresponding to the incorrect classes are all zero, and hence no "Dark knowledge" can be extracted. Thus, we claim that the core factor that enables an overparameterized network to learn "Dark knowledge" is early stopping. What's more, Arpit et al. (2017) ; Rahaman et al. (2018) ; Xu et al. (2019) observed that "Dark knowledge" represents the discrepancy of convergence speed of different types of information during the training of the neural network. Neural network tends to fit informative information, such as simple pattern, faster than non-informative and unwanted information such as noise. Similar phenomenon was observed in the inverse scale space theory for image restoration Scherzer & Groetsch (2001) ; Burger et al. (2006) ; Xu & Osher (2007) ; Shi & Osher (2008) . In our paper, we call this effect Anisotropic Information Retrieval (AIR). With the aforementioned interpretation of distillation, We further utilize AIR to refine noisy labels by introducing a new self-distillation algorithm. To extract anisotropic information, we sequentially extract knowledge from the output of the network in the previous epoch to supervise the training in the next epoch. By dynamically adjusting the strength of the supervision, we can theoretically prove that the proposed self-distillation algorithm can recover the correct labels, and empirically the algorithm achieves the state-of-the-art results on Fashion MNIST and CIFAR10. The benefit brought by our theoretical study is twofold. Firstly, the existing approach using large networks ; Zhang & Sabuncu (2018) often requires a validation set to early terminate the network training. However, our analysis shows that our algorithm can sustain long training without overfitting the noise which makes the proposed algorithm more user-friendly. Secondly, our analysis is based on an 2 -loss of the clean labels which enables the algorithm to generate a trained network with a bigger margin and hence generalize better. This paper provided an understanding of distillation using overparameterized neural networks. We observed that such neural networks posses the property of Anisotropic Information Retrieval (AIR), which means the neural network tends to fit the infomrative information (i.e. 
the eigenspaces associated with the largest few eigenvalues of NTK) first and the non-informative information later. Through AIR, we further observed that distillation of the Dark Knowledge is mainly due to early stopping. Based on this new understanding, we proposed a new self-distillation algorithm for noisy label refinery. Both theoretical and empirical justifications of the performance of the new algorithm were provided. Our analysis is based on the assumption that the teacher neural network is overparameterized. When the teacher network is not overparameterized, the network will be biased towards the label even without early stopping. It is still an interesting and unclear problem that whether the bias can provide us with more information. For label refinery, our analysis is mostly based on the symmetric noise setting. We are interested in extending our analysis to the asymmetric setting. A PROOF DETAILS A.1 NEURAL NETWORK PROPERTIES As preliminaries, we first discuss some properties of the neural network. We begin with the jacobian of the one layer neural network x → v φ(W x), the Jacobian matrix with respect to W takes the form First we borrow Lemma 6.6, 6.7, 6.8 from Oymak & Soltanolkotabi (2018) and Theorem 6.7, 6.8 from . T be a data matrix made up of data with unit Euclidean norm. Assuming that λ(X) > 0, the following properties hold. , at random Gaussian initialization W 0 ∼ N (0, 1) k×d , with probability at least 1 − δ, we have T in whichx i corresponds to the center of cluster including x i . What's more, we define the matrix of cluster center C = [c 1 , c 2 , . . . , c K ] T . Assuming that λ(C) > 0, the following properties hold. • , at random Gaussian initialization W 0 ∼ N (0, 1) k×d , with probability at least 1 − δ, we have • range(J(W,X)) ⊂ S + for any parameter matrix W . Then, we gives out the perturbation analysis of the Jacobian matrix. Lemma 3. Let X be a -clusterable data matrix with its center matrixX. For parameter matrices W,W , we have Proof. We bound J(W, X) − J(W ,X) by The first term is bounded by Lemma 1. As to the second term, we bound it by Combining the inequality above, we get Lemma 4. Let X be a -clusterable data matrix with its center matrixX. We assume W 1 , W 2 have a upper bound c √ k. Then for parameter matrices W 1 , W 2 ,W 1 ,W 2 , we have Proof. By the definition of average Jacobian, we have A.2 PROVE OF THE THEOREM First, we introduce the proof idea of our theorem. Our proof of the theorem divides the learning process into two stages. During the first stage, we aim to prove that the neural network will give out the right classification, i.e. the 0-1-loss converges to 0. The proof in this part is modified from . Furthermore, we proved that training 0-1-loss will keep 0 until the second stage starts and the margin at the first stage will larger than 1−2ρ 2 . During the second stage, we prove that the neural networks start to further enlarge the margin and finally the 2 loss starts to converge to zero. (2019) has shown that this dynamic can be illustrated by the average Jacobian. Definition 3. We define the average Jacobian for two parameters W 1 and W 2 and data matrix X as The residualr = f (θ) − y, r = f (θ) − y obey the following equation r = (I − ηC(θ))r In our proof, we project the residual to the following subspace Definition 4. Let {x i } n i=1 be a -clusterable dataset and {x i } n i=1 be the associated cluster centers, that is,x i = c l iff x i is from lth cluster. 
We define the support subspace S_+ as a subspace of dimension K, dictated by the cluster membership as follows. Let Λ_l ⊂ {1, 2, . . . , n} be the set of coordinates i such that x̄_i = c_l. Then S_+ is characterized by these index sets. Definition 5. We define the minimum eigenvalue of a matrix B on a subspace S, σ_min(B, S) = min, where P_S is the projection onto the space S. Recall the generation process of the dataset. Definition 6. (Clusterable Dataset Descriptions) • We assume that {x_i}_{i∈[n]} contains points with unit Euclidean norm and has K clusters. Let n_l be the number of points in the l-th cluster. Assume that the number of data points in each cluster is balanced in the sense that n_l ≥ c_low · n/K for a constant c_low > 0. • For each of the K clusters, we assume that all the input data lie within the Euclidean ball B(c_l, ε), where c_l is the center with unit Euclidean norm and ε > 0 is the radius. • A dataset satisfying the above assumptions is called an ε-clusterable dataset.
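Only the idea of the self-distillation procedure is described in this excerpt, so the sketch below shows a generic version of the label-refinement step: at each epoch, train against a target that blends the previous epoch's prediction with the observed (possibly corrupted) one-hot label, leaning more on the network's own prediction as training proceeds. The blending schedule alpha, the softmax-regression "network", and the synthetic 30%-noise data are placeholder assumptions, not the paper's exact algorithm.

import numpy as np

rng = np.random.default_rng(0)
n, d, c = 600, 20, 3
X = rng.normal(size=(n, d))
y_true = (X @ rng.normal(size=(d, c))).argmax(axis=1)
flip = rng.random(n) < 0.3                         # 30% symmetric label noise
y_noisy = np.where(flip, rng.integers(0, c, n), y_true)
Y_noisy = np.eye(c)[y_noisy]

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

W = np.zeros((d, c))
prev_pred = Y_noisy.copy()
for epoch in range(60):
    alpha = min(0.9, epoch / 40.0)                 # rely more on the model's own predictions over time
    target = alpha * prev_pred + (1.0 - alpha) * Y_noisy   # refined soft label
    for _ in range(30):                            # cross-entropy steps toward the soft target
        P = softmax(X @ W)
        W -= 0.1 * X.T @ (P - target) / n
    prev_pred = softmax(X @ W)

print("accuracy vs. clean labels:", (softmax(X @ W).argmax(1) == y_true).mean())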
theoretically understand the regularization effect of distillation. We show that early stopping is essential in this process. From this perspective, we developed a distillation method for learning with corrupted Label with theoretical guarantees.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:513
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Lifelong learning poses considerable challenges in terms of effectiveness (minimizing prediction errors for all tasks) and overall computational tractability for real-time performance. This paper addresses continuous lifelong multitask learning by jointly re-estimating the inter-task relations (\textit{output} kernel) and the per-task model parameters at each round, assuming data arrives in a streaming fashion. We propose a novel algorithm called \textit{Online Output Kernel Learning Algorithm} (OOKLA) for lifelong learning setting. To avoid the memory explosion, we propose a robust budget-limited versions of the proposed algorithm that efficiently utilize the relationship between the tasks to bound the total number of representative examples in the support set. In addition, we propose a two-stage budgeted scheme for efficiently tackling the task-specific budget constraints in lifelong learning. Our empirical results over three datasets indicate superior AUC performance for OOKLA and its budget-limited cousins over strong baselines. Instead of learning individual models, learning from multiple tasks leverages the relationships among tasks to jointly build better models for each task and thereby improve the transfer of relevant knowledge between the tasks, especially from information-rich tasks to information-poor ones. Unlike traditional multitask learning, where the tasks are presented simultaneously and an entire training set is available to the learner (Caruana (1998)), in lifelong learning the tasks arrives sequentially BID27 ). This paper considers a continuous lifelong learning setting in which both the tasks and the examples of the tasks arrive in an online fashion, without any predetermined order.Following the online setting, particularly from BID24 BID7 , at each round t, the learner receives an example from a task, along with the task identifier and predicts the output label for the example. Subsequently, the learner receives the true label and updates the model(s) as necessary. This process is repeated as we receive additional data from the same or different tasks. Our approach follows an error-driven update rule in which the model for a given task is updated only when the prediction for that task is in error.Lifelong learning poses considerable challenges in terms of effectiveness (minimizing prediction errors for all tasks) and overall computational tractability for real-time performance. A lifelong learning agent must provide an efficient way to learn new tasks faster by utilizing the knowledge learned from the previous tasks and also not forgetting or significantly degrading performance on the old tasks. The goal of a lifelong learner is to minimize errors as compared to the full ideal hindsight learner, which has access to all the training data and no bounds on memory or computation. This paper addresses lifelong multitask learning by jointly re-estimating the inter-task relations from the data and the per-task model parameters at each round, assuming data arrives in a streaming fashion. We define the task relationship matrix as output kernels in Reproducing Kernel Hilbert Space (RKHS) on multitask examples. We propose a novel algorithm called Online Output Kernel Learning Algorithm (OOKLA) for lifelong learning setting. 
For a successful lifelong learning with kernels, we need to address two key challenges: (1) learn the relationships between the tasks (output kernel) efficiently from the data stream and (2) bound the size of the knowledge to avoid memory explosion.The key challenge in learning with a large number of tasks is to adaptively learn the model parameters and the task relationships, which potentially change over time. Without manageability-efficient updates at each round, learning the task relationship matrix automatically may impose a severe computational burden. In other words, we need to make predictions and update the models in an efficient real time manner.We propose simple and quite intuitive update rules for learning the task relationship matrix. When we receive a new example, the algorithm updates the output kernel when the learner made a mistake by computing the similarity between the new example and the set of representative examples (stored in the memory) that belongs to a specific task. If the two examples have similar (different) labels and high similarity, then the relationship between the tasks is increased (decreased) to reflect the positive (negative) correlation and vice versa.To avoid the memory explosion associated with the lifelong learning setting, we propose a robust budget-limited version of the proposed algorithm that efficiently utilizes the relationship between the tasks to bound the total number of representative examples in the support set. In addition, we propose a two-stage budgeted scheme for efficiently tackling the task-specific budget constraints in lifelong learning.It is worth noting that the problem of lifelong multitask learning is closely related to online multitask learning. Although the objectives of both online multitask learning and lifelong learning are similar, one key difference is that the online multitask learning, unlike in the lifelong learning, may require that the number of tasks be specified beforehand. In recent years, online multitask learning has attracted extensive research attention BID0 ; BID10 ; BID16 BID7 ; BID24 BID17 . We evaluate our proposed methods with several state-of-the-art online learning algorithms for multiple tasks. Throughout this paper, we refer to our proposed method as online multitask learning or lifelong learning.There are many useful application areas for lifelong learning, including optimizing financial trading as market conditions evolve, email prioritization with new tasks or preferences emerging, personalized news, and spam filtering, with evolving nature of spam. Consider the latter, where some spam is universal to all users (e.g. financial scams), some messages might be useful to certain affinity groups, but spam to most others (e.g. announcements of meditation classes or other special interest activities), and some may depend on evolving user interests. In spam filtering each user is a "task," and shared interests and dis-interests formulate the inter-task relationship matrix. If we can learn the matrix as well as improving models from specific spam/not-spam decisions, we can perform mass customization of spam filtering, borrowing from spam/not-spam feedback from users with similar preferences. The primary contribution of this paper is precisely the joint learning of inter-task relationships and its use in estimating per-task model parameters in a lifelong learning setting. We proposed a novel lifelong learning algorithm using output kernels. 
The proposed method efficiently learns both the model and the inter-task relationships at each iteration. Our update rules for learning the task relationship matrix, at each iteration, were motivated by the recent work in output kernel learning. In order to handle the memory explosion from an unbounded support set in the lifelong learning setting, we proposed a new budget maintenance scheme that utilizes the task relationship matrix to remove the least-useful (high-confidence) example from the support set. In addition, we proposed a two-stage budget learning scheme based on the intuition that each task only requires a subset of the representative examples in the support set for efficient learning. It provides a competitive and efficient approach to handling a large number of tasks in many real-life applications. The effectiveness of our algorithm is empirically verified over several benchmark datasets, outperforming several competitive baselines both in the unconstrained case and in the budget-limited case, where selective forgetting was required.
a novel approach for online lifelong learning using output kernels.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:514
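The error-driven, similarity-based update of the inter-task (output) kernel described in the row above can be illustrated with a small sketch. The Python below is an assumption-laden illustration rather than the paper's OOKLA algorithm: the RBF input kernel, the learning rate `eta`, the sign-based prediction, the class name `OnlineOutputKernelLearner`, and the FIFO eviction standing in for the budget schemes are all choices made for this example.

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    """Input-space kernel between two examples."""
    return float(np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(z)) ** 2)))

class OnlineOutputKernelLearner:
    """Error-driven online multitask learner with a learned inter-task
    ("output") kernel K and a budgeted support set of stored examples."""

    def __init__(self, num_tasks, eta=0.1, budget=100):
        self.K = np.eye(num_tasks)   # inter-task relationship matrix
        self.support = []            # list of (task_id, x, y) triples
        self.eta = eta
        self.budget = budget

    def predict(self, task, x):
        score = sum(self.K[task, t] * y * rbf(x, z) for t, z, y in self.support)
        return 1.0 if score >= 0 else -1.0

    def update(self, task, x, y):
        if self.predict(task, x) == y:
            return                                 # update only on mistakes
        for t, z, y_z in self.support:             # nudge task relationships up or down
            self.K[task, t] += self.eta * y * y_z * rbf(x, z)
            self.K[t, task] = self.K[task, t]
        self.support.append((task, np.asarray(x, dtype=float), y))
        if len(self.support) > self.budget:        # crude stand-in for the
            self.support.pop(0)                    # paper's budget schemes
```

Feeding a stream of `(task, x, y)` triples to `update` grows the support set only on mistakes, mirroring the error-driven protocol described above.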
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Minecraft is a videogame that offers many interesting challenges for AI systems. In this paper, we focus in construction scenarios where an agent must build a complex structure made of individual blocks. As higher-level objects are formed of lower-level objects, the construction can naturally be modelled as a hierarchical task network. We model a house-construction scenario in classical and HTN planning and compare the advantages and disadvantages of both kinds of models. Minecraft is an open-world computer game, which poses interesting challenges for Artificial Intelligence BID0 BID12 , for example for the evaluation of reinforcement learning techniques BID21 . Previous research on planning in Minecraft focused on models to control an agent in the Minecraft world. Some examples include learning planning models from a textual description of the actions available to the agent and their preconditions and effects BID4 , or HTN models from observing players' actions BID15 . , on the other hand, focused on online goal-reasoning for an agent that has to navigate in the minecraft environment to collect resources and/or craft objects. They introduced several propositional, numeric BID7 and hybrid PDDL+ planning models BID8 .In contrast, we are interested in construction scenarios, where we generate instructions for making a given structure (e.g. a house) that is composed of atomic blocks. Our longterm goal is to design a natural-language system that is able to give instructions to a human user tasked with completing that construction. As a first step, in the present paper we consider planning methods coming up with what we call a construction plan, specifying the sequence of construction steps without taking into account the natural-language and dialogue parts of the problem.For the purpose of construction planning, the Minecraft world can be understood as a Blocksworld domain with a 3D environment. Blocks can be placed at any position having a non-empty adjacent position. However , while obtaining a sequence of "put-block" actions can be sufficient for an AI agent, communicating the plan to a human user requires more structure in order to formulate higher-level instructions like build-row, or build-wall. The objects being constructed (e.g. rows, walls, or an entire house) are naturally organized in a hierarchy where high-level objects are composed of lower-level objects. Therefore, the task of constructing a high-level object naturally translates into a hierarchical planning network (HTN) BID19 BID20 BID22 BID6 .We devise several models in both classical PDDL planning BID5 BID13 ) and hierarchical planning for a simple scenario where a house must be constructed. Our first baseline is a classical planning model that ignores the high-level objects and simply outputs a sequence of place-blocks actions. This is insufficient for our purposes since the resulting sequence of actions can hardly be described in natural language. However, it is a useful baseline to compare the other models. 
We also devise a second classical planning model, where the construction of high-level objects is encoded via auxiliary actions.HTN planning, on the other hand, allows to model the object hierarchy in a straightforward way, where there is a task for building each type of high-level object. The task of constructing each high-level object can be decomposed into tasks that construct its individual parts. Unlike in classical planning , where the PDDL language is supported by most/all planners, HTN planners have their own input language. Therefore, we consider specific models for two individual HTN planners: the PANDA planning system BID3 BID2 and SHOP2 BID14 . We have introduced several models of a construction scenario in the Minecraft game. Our experiments have shown that, even in the simplest construction scenario which is not too challenging from the point of view of the search, current planners may struggle when the size of the world increases. This is a serious limitation in the Minecraft domain, where worlds with millions of blocks are not unrealistic.Lifted planners like SHOP2 perform well. However, it must be noted that they follow a very simple search strategy, which is very effective on our models where any method decomposition always leads to a valid solution. However, it may be less effective when other constraints must be met and/or optimizing quality is required. For example, if some blocks are removed from the ground by the user, then some additional blocks must be placed as auxiliary structure for the main construction. Arguably, this could be easily fixed by changing the model so that whenever a block cannot be placed in a target location, an auxiliary tower of blocks is built beneath the location. However, this increases the burden of writing new scenarios since suitable task decompositions (along with good criteria of when to select each decomposition) have to be designed for all possible situations.This makes the SHOP2 model less robust to unexpected situations that were not anticipated by the domain modeler. PANDA, on the other hand, supports insertion of primitive actions BID9 , allowing the planner to consider placing additional blocks, e.g., to build supporting structures that do not correspond to any task in the HTN. This could help to increase the robustness of the planner in unexpected situations where auxiliary structures that have not been anticipated by the modeler are needed. However, this is currently only supported by the POCL-plan-based search component and considering all possibilities for task insertion significantly slows down the search and it runs out of memory in our scenarios. This may point out new avenues of research on more efficient ways to consider task insertion.In related Minecraft applications, cognitive priming has been suggested as a possible solution to keep the size of the world considered by the planner at bay BID17 . In construction scenarios, however, large parts of the environment can be relevant so incremental grounding approaches may be needed to consider different parts of the scenario at different points in the construction plan.Our models are still a simple prototype and they do not yet capture the whole complexity of the domain. We plan to extend them in different directions in order to capture how hard it is to describe actions or method decompositions in natural language. 
For example, while considering the position of the user is not strictly necessary, the user's visibility may be important, because objects in their field of view are easier to describe in natural language. How to effectively model the field of vision is a challenging topic, which may lead to combinations with external solvers as in the planning modulo theories paradigm BID10. Another interesting extension is to consider how easy it is to express a given action in natural language, for example by reducing the action cost for placing blocks near objects that can be easily referred to. Such objects could be landmarks, e.g., blocks of a different type ("put a stone block next to the blue block"), or just the previously placed block (e.g., "Now, put another stone block on top of it").
We model a house-construction scenario in Minecraft in classical and HTN planning and compare the advantages and disadvantages of both kinds of models.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:515
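The task-decomposition idea in the row above (house into walls, walls into rows, rows into place-block actions) lends itself to a compact sketch. The Python below only illustrates SHOP2-style forward decomposition under the assumptions of this example: the real models are written in PDDL and in the input languages of PANDA and SHOP2, and the `place_block` precondition (a block must rest on the ground or on another block) is a simplification.

```python
def place_block(state, pos):
    """Primitive action: place a block at pos if it sits on the ground (z == 0)
    or on an already placed block directly below; returns the new state."""
    x, y, z = pos
    if z > 0 and (x, y, z - 1) not in state:
        return None                      # precondition violated
    return state | {pos}

def build_row(state, start, length):
    """Compound task: a row decomposes into a sequence of place_block actions."""
    plan, (x0, y0, z0) = [], start
    for i in range(length):
        state = place_block(state, (x0 + i, y0, z0))
        if state is None:
            return None, None
        plan.append(("place_block", (x0 + i, y0, z0)))
    return state, plan

def build_wall(state, start, length, height):
    """Compound task: a wall decomposes into one row per layer, bottom up."""
    plan, (x0, y0, z0) = [], start
    for h in range(height):
        state, row_plan = build_row(state, (x0, y0, z0 + h), length)
        if state is None:
            return None, None
        plan += row_plan
    return state, plan

state, plan = build_wall(set(), start=(0, 0, 0), length=4, height=3)
print(len(plan))   # 12 primitive place_block actions
```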
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Attacks on natural language models are difficult to compare due to their different definitions of what constitutes a successful attack. We present a taxonomy of constraints to categorize these attacks. For each constraint, we present a real-world use case and a way to measure how well generated samples enforce the constraint. We then employ our framework to evaluate two state-of-the art attacks which fool models with synonym substitution. These attacks claim their adversarial perturbations preserve the semantics and syntactical correctness of the inputs, but our analysis shows these constraints are not strongly enforced. For a significant portion of these adversarial examples, a grammar checker detects an increase in errors. Additionally, human studies indicate that many of these adversarial examples diverge in semantic meaning from the input or do not appear to be human-written. Finally, we highlight the need for standardized evaluation of attacks that share constraints. Without shared evaluation metrics, it is up to researchers to set thresholds that determine the trade-off between attack quality and attack success. We recommend well-designed human studies to determine the best threshold to approximate human judgement. Advances in deep learning have led to impressive performance on many tasks, but models still make mistakes. Models are particularly vulernable to adversarial examples, inputs designed to fool models (Szegedy et al., 2014) . Goodfellow et al. (2014) demonstrated that image classification models could be fooled by perturbations indistinguishable to humans. Due to the importance of natural language processing (NLP) tasks, a large body of research has focused on applying the concept of adversarial examples to text, including (Alzantot et al., 2018; Jin et al., 2019; Kuleshov et al., 2018; Zhang et al., 2019; Ebrahimi et al., 2017; Gao et al., 2018; Li et al., 2018; Samanta & Mehta, 2017; Jia & Liang, 2017; Iyyer et al., 2018; Papernot et al., 2016a) . The importance of tasks such as spam and plagiarism detection highlights the need for robust NLP models. However, there are fundamental differences between image and text data. Unlike images, two different sequences of text are never entirely indistinguishable. This raises the question: if indistinguishable perturbations aren't possible, what are adversarial examples in text? We observe that each work from recent literature has a slightly different definition of what constitutes an adversarial example in natural language. Comparing the success rate of two attacks is meaningless if the attacks use different methods to evaluate the same constraints or define different constraints altogether. In this paper, we build on Gilmer et al. (2018) to introduce a taxonomy of constraints specific to adversarial examples in natural language. To the best of our knowledge, our work provides the first comprehensive framework for categorizing and evaluating attack constraints in natural language. We discuss use cases and propose standardized evaluation methods for each of these constraints. We then apply our evaluation methods to the synonym-substitution based attacks of Jin et al. (2019) and Alzantot et al. (2018) . 
These attacks claimed to preserve the syntax and semantics of the original sentence, while remaining non-suspicious to a human interpreter. However, we find that most of their adversarial examples contain additional grammatical errors, and human surveys reveal that many adversarial examples also change the meaning of the sentence and/or do not appear to be written by humans. These results call into question the ubiquity of synonym-based adversarial examples and emphasize the need for more careful evaluation of attack approaches. Lastly, we discuss how previous works rely on arbitrary thresholds to determine the semantic similarity of two sentences. These thresholds can be tuned by the researcher to make their methods seem more successful with little penalty in quantitative metrics. Thus, we highlight the importance of standardized human evaluations to approximate the true threshold value. Any method that introduces a novel approach to measure semantic similarity should support their choice of threshold with defensible human studies. The three main contributions of this paper are: • We formally define and categorize constraints on adversarial examples in text, and introduce evaluation methods for each category. • Using these categorizations and evaluation methods, we quantitatively disprove claims that stateof-the-art synonym-based substitutions preserve semantics and grammatical correctness. • We show the sensitivity of attack success rate to changes in semantic similarity thresholds set by researchers. We assert that perturbations which claim semantic similarity should use standardized human evaluation studies with precise wording to determine an appropriate threshold. We introduced a framework for evaluating fulfillment of attack constraints in natural language. Applying this framework to synonym substitution attacks raised concerns about the semantic preservation, syntactic accuracy, and conspicuity of the adversarial examples they generate. Future work may expand our hierarchy to categorize and evaluate different attack constraints in natural language. Standardized terminology and evaluation metrics will make it easier for defenders to determine which attacks they must protect themselves from-and how. It remains to be seen how robust BERT is when subject to synonym attacks which rigorously preserve semantics and syntax. It is up to future research to determine how prevalent adversarial examples are throughout the broader space of paraphrases.
We present a framework for evaluating adversarial examples in natural language processing and demonstrate that generated adversarial examples are often not semantics-preserving, syntactically correct, or non-suspicious.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:516
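A minimal sketch of the kind of constraint checking discussed in the row above (semantic similarity plus grammaticality) is given below. It is not the paper's evaluation pipeline: the averaged-word-embedding similarity, the caller-supplied `grammar_errors` function, and the default `sim_threshold=0.9` are assumptions, and the hard-coded threshold is exactly the sort of knob the paper argues should be calibrated with human studies.

```python
import numpy as np

def embed(sentence, vectors):
    """Average word vectors; `vectors` is a {word: np.ndarray} lookup supplied
    by the caller (e.g. counter-fitted embeddings)."""
    words = [w for w in sentence.lower().split() if w in vectors]
    return np.mean([vectors[w] for w in words], axis=0) if words else None

def semantic_similarity(original, perturbed, vectors):
    a, b = embed(original, vectors), embed(perturbed, vectors)
    if a is None or b is None:
        return 0.0
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def passes_constraints(original, perturbed, vectors, grammar_errors,
                       sim_threshold=0.9, max_new_errors=0):
    """Accept a candidate adversarial example only if it clears a semantic
    similarity threshold and introduces no new grammatical errors.
    `grammar_errors` is a caller-supplied function (e.g. a wrapper around an
    external grammar checker) returning an error count for a sentence."""
    similar = semantic_similarity(original, perturbed, vectors) >= sim_threshold
    new_errors = grammar_errors(perturbed) - grammar_errors(original)
    return similar and new_errors <= max_new_errors
```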
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: In order for machine learning to be deployed and trusted in many applications, it is crucial to be able to reliably explain why the machine learning algorithm makes certain predictions. For example, if an algorithm classifies a given pathology image to be a malignant tumor, then the doctor may need to know which parts of the image led the algorithm to this classification. How to interpret black-box predictors is thus an important and active area of research. A fundamental question is: how much can we trust the interpretation itself? In this paper, we show that interpretation of deep learning predictions is extremely fragile in the following sense: two perceptively indistinguishable inputs with the same predicted label can be assigned very different}interpretations. We systematically characterize the fragility of the interpretations generated by several widely-used feature-importance interpretation methods (saliency maps, integrated gradient, and DeepLIFT) on ImageNet and CIFAR-10. Our experiments show that even small random perturbation can change the feature importance and new systematic perturbations can lead to dramatically different interpretations without changing the label. We extend these results to show that interpretations based on exemplars (e.g. influence functions) are similarly fragile. Our analysis of the geometry of the Hessian matrix gives insight on why fragility could be a fundamental challenge to the current interpretation approaches. Predictions made by machine learning algorithms play an important role in our everyday lives and can affect decisions in technology, medicine, and even the legal system (Rich, 2015; Obermeyer & Emanuel, 2016) . As the algorithms become increasingly complex, explanations for why an algorithm makes certain decisions are ever more crucial. For example, if an AI system predicts a given pathology image to be malignant, then the doctor would want to know what features in the image led the algorithm to this classification. Similarly, if an algorithm predicts an individual to be a credit risk, then the lender (and the borrower) might want to know why. Therefore having interpretations for why certain predictions are made is critical for establishing trust and transparency between the users and the algorithm (Lipton, 2016) .Having an interpretation is not enough, however. The explanation itself must be robust in order to establish human trust. Take the pathology predictor; an interpretation method might suggest that a particular section in an image is important for the malignant classification (e.g. that section could have high scores in saliency map). The clinician might then focus on that section for investigation, treatment or even look for similar features in other patients. It would be highly disconcerting if in an extremely similar image, visually indistinguishable from the original and also classified as malignant, a very different section is interpreted as being salient for the prediction. Thus, even if the predictor is robust (both images are correctly labeled as malignant), that the interpretation is fragile would still be highly problematic in deployment.Our contributions. 
The fragility of prediction in deep neural networks against adversarial attacks is an active area of research (BID4; Kurakin et al., 2016; Papernot et al., 2016; Moosavi-Dezfooli et al., 2016). In that setting, fragility is exhibited when two perceptively indistinguishable images are assigned different labels by the neural network. In this paper, we extend the definition of fragility to neural network interpretation. More precisely, we define the interpretation of a neural network to be fragile if perceptively indistinguishable images that have the same prediction label by the neural network are given substantially different interpretations. We systematically investigate two classes of interpretation methods: methods that assign importance scores to each feature (this includes simple gradient (Simonyan et al., 2013), DeepLIFT (Shrikumar et al., 2017), and integrated gradient (Sundararajan et al., 2017)), as well as a method that assigns importances to each training example: influence functions (Koh & Liang, 2017). For both classes of interpretations, we show that targeted perturbations can lead to dramatically different interpretations (FIG0). [FIG0 caption: The fragility of feature-importance maps. We generate feature-importance scores, also called saliency maps, using three popular interpretation methods: simple gradient (a), DeepLIFT (b) and integrated gradient (c). The top row shows the original images and their saliency maps, and the bottom row shows the perturbed images (using the center attack with ε = 8, as described in Section 3) and the corresponding saliency maps. In all three images, the predicted label has not changed due to the perturbation; in fact, the network's (SqueezeNet) confidence in the prediction has actually increased. However, the saliency maps of the perturbed images are meaningless.] Our findings highlight the fragility of interpretations of neural networks, which has not been carefully considered in the literature. Fragility directly limits how much we can trust and learn from the interpretations. It also raises a significant new security concern. Especially in medical or economic applications, users often take the interpretation of a prediction as containing causal insight ("this image is a malignant tumor likely because of the section with a high saliency score"). An adversary could minutely manipulate the input to draw attention away from relevant features or onto his/her desired features. Such attacks might be especially hard to detect as the actual labels have not changed. While we focus on image data here because most of the interpretation methods have been motivated by images, the fragility of neural network interpretation could be a much broader problem. Fig. 2 illustrates the intuition that when the decision boundary in the input feature space is complex, as is the case with deep nets, a small perturbation in the input can push the example into a region with very different loss contours. Because the feature importance is closely related to the gradient, which is perpendicular to the loss contours, the importance scores can also be dramatically different. We provide additional analysis of this in Section 5. Related works: To the best of our knowledge, the notion of adversarial examples has not previously been studied in the context of interpretation of neural networks. Adversarial attacks on the input that change the prediction of a network have been actively studied. Szegedy et al.
(2013) demonstrated that it is relatively easy to fool neural networks into making very different predictions for test images that are visually very similar to each other. BID4 introduced the Fast Gradient Sign Method (FGSM) as a one-step prediction attack. This was followed by more effective iterative attacks (Kurakin et al., 2016) Interpretation of neural network predictions is also an active research area. Post-hoc interpretability (Lipton, 2016) is one family of methods that seek to "explain" the prediction without talking about the details of black-box model's hidden mechanisms. These included tools to explain predictions by networks in terms of the features of the test example (Simonyan et al., 2013; Shrikumar et al., 2017; Sundararajan et al., 2017; Zhou et al., 2016) , as well as in terms of contribution of training examples to the prediction at test time (Koh & Liang, 2017) . These interpretations have gained increasing popularity, as they confer a degree of insight to human users of what the neural network might be doing (Lipton, 2016) .Conclusion This paper demonstrates that interpretation of neural networks can be fragile in the specific sense that two similar inputs with the same predicted label can be given very different interpretations. We develop new perturbations to illustrate this fragility and propose evaluation metrics as well as insights on why fragility occurs. Fragility of neural network interpretation is orthogonal to fragility of the prediction-we demonstrate how perturbations can substantially change the interpretation without changing the predicted label. The two types of fragility do arise from similar factors, as we discuss in Section 5. Our focus is on the interpretation method, rather than on the original network, and as such we do not explore how interpretable is the original predictor. There is a separately line of research that tries to design simpler and more interpretable prediction models BID0 .Our main message is that robustness of the interpretation of a prediction is an important and challenging problem, especially as in many applications (e.g. many biomedical and social settings) users are as interested in the interpretation as in the prediction itself. Our results raise concerns on how interpretations of neural networks are sensitive to noise and can be manipulated. Especially in settings where the importance of individual or a small subset of features are interpreted, we show that these importance scores can be sensitive to even random perturbation. More dramatic manipulations of interpretations can be achieved with our targeted perturbations, which raise security concerns. We do not suggest that interpretations are meaningless, just as adversarial attacks on predictions do not imply that neural networks are useless. Interpretation methods do need to be used and evaluated with caution while applied to neural networks, as they can be fooled into identifying features that would not be considered salient by human perception.Our results demonstrate that the interpretations (e.g. saliency maps) are vulnerable to perturbations, but this does not imply that the interpretation methods are broken by the perturbations. This is a subtle but important distinction . Methods such as saliency measure the infinitesimal sensitivity of the neural network at a particular input x. After a perturbation, the input has changed tox = x + δ, and the salency now measures the sensitivity at the perturbed input. 
The saliency correctly captures the infinitesimal sensitivity at the two inputs; it is doing what it is supposed to do. The fact that the two resulting saliency maps are very different is fundamentally due to the network itself being fragile to such perturbations, as we illustrate with Fig. 2. While we focus on image data (ImageNet and CIFAR-10) because these are the standard benchmarks for popular interpretation tools, this fragility issue can be widespread in biomedical, economic and other settings where neural networks are increasingly used. Understanding interpretation fragility in these applications and developing more robust methods are important agendas for research.
Can we trust a neural network's explanation for its prediction? We examine the robustness of several popular notions of interpretability of neural networks including saliency maps and influence functions and design adversarial examples against them.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:517
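The simple-gradient saliency map and one way to quantify how much it changes under a small perturbation can be sketched as follows. This is an illustration under assumptions, not the paper's attack: the two-layer random ReLU network stands in for a trained CNN, the perturbation is random rather than targeted, and `topk_overlap` is just one possible measure of interpretation change.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(64, 100)) / 10.0   # tiny random two-layer "classifier"
W2 = rng.normal(size=(10, 64)) / 8.0     # standing in for a trained CNN

def logits(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

def saliency(x, cls):
    """Simple-gradient saliency: d logit_cls / d x for a ReLU network."""
    mask = (W1 @ x > 0).astype(float)
    return (W2[cls] * mask) @ W1          # chain rule through the ReLU

def topk_overlap(s1, s2, k=10):
    """Fraction of shared coordinates among the k largest-|saliency| features."""
    a = set(np.argsort(-np.abs(s1))[:k].tolist())
    b = set(np.argsort(-np.abs(s2))[:k].tolist())
    return len(a & b) / k

x = rng.normal(size=100)
cls = int(np.argmax(logits(x)))
x_pert = x + 0.01 * rng.normal(size=100)            # small random perturbation
print(int(np.argmax(logits(x_pert))) == cls)        # label usually unchanged
print(topk_overlap(saliency(x, cls), saliency(x_pert, cls)))
```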
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Stochastic AUC maximization has garnered an increasing interest due to better fit to imbalanced data classification. However, existing works are limited to stochastic AUC maximization with a linear predictive model, which restricts its predictive power when dealing with extremely complex data. In this paper, we consider stochastic AUC maximization problem with a deep neural network as the predictive model. Building on the saddle point reformulation of a surrogated loss of AUC, the problem can be cast into a {\it non-convex concave} min-max problem. The main contribution made in this paper is to make stochastic AUC maximization more practical for deep neural networks and big data with theoretical insights as well. In particular, we propose to explore Polyak-\L{}ojasiewicz (PL) condition that has been proved and observed in deep learning, which enables us to develop new stochastic algorithms with even faster convergence rate and more practical step size scheme. An AdaGrad-style algorithm is also analyzed under the PL condition with adaptive convergence rate. Our experimental results demonstrate the effectiveness of the proposed algorithms. Deep learning has been witnessed with tremendous success for various tasks, including computer vision (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2016; Ren et al., 2015) , speech recognition (Hinton et al., 2012; Mohamed et al., 2012; Graves, 2013) , natural language processing (Bahdanau et al., 2014; Sutskever et al., 2014; Devlin et al., 2018) , etc. From an optimization perspective, all of them are solving an empirical risk minimization problem in which the objective function is a surrogate loss of the prediction error made by a deep neural network in comparison with the ground-truth label. For example, for image classification task, the objective function is often chosen as the cross entropy between the probability distribution calculated by forward propagation of a convolutional neural network and the vector encoding true label information (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2016) , where the cross entropy is a surrogate loss of the misclassification rate. However, when the data is imbalanced, this formulation is not reasonable since the data coming from minor class have little effect in this case and the model is almost determined by the data from the majority class. To address this issue, AUC maximization has been proposed as a new learning paradigm (Zhao et al., 2011) . Statistically, AUC (short for Area Under the ROC curve) is defined as the probability that the prediction score of a positive example is higher than that of a negative example (Hanley & McNeil, 1982; 1983) . Compared with misclassification rate and its corresponding surrogate loss, AUC is more suitable for imbalanced data setting (Elkan, 2001) . Several online or stochastic algorithms for time based on a new sampled/received training data. Instead of storing all examples in the memory, Zhao et al. (2011) employ reservoir sampling technique to maintain representative samples in a buffer, based on which their algorithms update the model. To get optimal regret bound, their buffer size needs to be O( √ n), where n is the number of received training examples. Gao et al. 
(2013) design a new algorithm which is not buffer-based. Instead, their algorithm needs to maintain the first-order and second-order statistics of the received data to compute the stochastic gradient, which is prohibitive for high dimensional data. Based on a novel saddle-point reformulation of a surrogate loss of AUC proposed by (Ying et al., 2016) , there are several studies (Ying et al., 2016; Liu et al., 2018; Natole et al., 2018) trying to design stochastic primal-dual algorithms. Ying et al. (2016) employ the classical primal-dual stochastic gradient (Nemirovski et al., 2009 ) and obtain O(1/ √ t) convergence rate. Natole et al. (2018) add a strongly convex regularizer, invoke composite mirror descent (Duchi et al., 2010 ) and achieve O(1/t) convergence rate. Liu et al. (2018) leverage the structure of the formulation, design a multi-stage algorithm and achieve O(1/t) convergence rate without strong convexity assumptions. However, all of them only consider learning a linear model, which results in a convex objective function. Non-Convex Min-max Optimization. Stochastic optimization of non-convex min-max problems have received increasing interests recently (Rafique et al., 2018; Lin et al., 2018; Sanjabi et al., 2018; Lu et al., 2019; Jin et al., 2019) . When the objective function is weakly convex in the primal variable and is concave in the dual variable, Rafique et al. (2018) design a proximal guided algorithm in spirit of the inexact proximal point method (Rockafellar, 1976) , which solves a sequence of convexconcave subproblems constructed by adding a quadratic proximal term in the primal variable with a periodically updated reference point. Due to the potential non-smoothness of objective function, they show the convergence to a nearly-stationary point for the equivalent minimization problem. In the same vein as (Rafique et al., 2018) , Lu et al. (2019) design an algorithm by adopting the block alternating minimization/maximization strategy and show the convergence in terms of the proximal gradient. When the objective is weakly convex and weakly concave, Lin et al. (2018) propose a proximal algorithm which solves a strongly monotone variational inequality in each epoch and establish its convergence to stationary point. Sanjabi et al. (2018) consider non-convex non-concave min-max games where the inner maximization problem satisfies a PL condition, based on which they design a multi-step deterministic gradient descent ascent with convergence to a stationary point. It is notable that our work is different in that (i) we explore the PL condition for the outer minimization problem instead of the inner maximization problem; (ii) we focus on designing stochastic algorithms instead of deterministic algorithms. Leveraging PL Condition for Minimization. PL condition is first introduced by Polyak (Polyak, 1963) , which shows that gradient descent is able to enjoy linear convergence to a global minimum under this condition. Karimi et al. (2016) show that stochastic gradient descent, randomized coordinate descent, greedy coordinate descent are able to converge to a global minimum with faster rates under the PL condition. If the objective function has a finite-sum structure and satisfies PL condition, there are several non-convex SVRG-style algorithms (Reddi et al., 2016; Lei et al., 2017; Nguyen et al., 2017; Zhou et al., 2018; Li & Li, 2018; Wang et al., 2018) , which are guaranteed to converge to a global minimum with a linear convergence rate. 
However, the stochastic algorithms in these works are developed for a minimization problem, and hence are not applicable to the min-max formulation for stochastic AUC maximization. To the best of our knowledge, Liu et al. (2018) is the only work that leverages a condition equivalent to the PL condition (namely, the quadratic growth condition) to develop a stochastic primal-dual algorithm for AUC maximization with a fast rate. However, as mentioned before, their algorithm and analysis rely on the convexity of the objective function, which does not hold for AUC maximization with a deep neural network. Finally, we notice that the PL condition is the key to many recent works in deep learning for showing that there are no spurious local minima or for showing global convergence of gradient descent and stochastic gradient descent methods (Hardt & Ma, 2016; Li & Yuan, 2017; Arora et al., 2018; Allen-Zhu et al., 2018; Du et al., 2018b; a; Li & Liang, 2018; Zou et al., 2018; Zou & Gu, 2019). Using the square loss, it has also been proved that the PL condition holds globally or locally for deep linear residual networks (Hardt & Ma, 2016), deep linear networks, and one-hidden-layer neural networks with Leaky ReLU activation (Charles & Papailiopoulos, 2017; Zhou & Liang, 2017). Several studies (Li & Yuan, 2017; Arora et al., 2018; Allen-Zhu et al., 2018; Du et al., 2018b; Li & Liang, 2018) consider the trajectory of (stochastic) gradient descent when learning neural networks, and their analyses imply the PL condition in a certain form. For example, Du et al. (2018b) show that when the width of a two-layer neural network is sufficiently large, a global optimum would lie in a ball centered at the initial solution, in which the PL condition holds. Allen-Zhu et al. (2018) extend this insight further to overparameterized deep neural networks with ReLU activation, and show that the PL condition holds for a global minimum around a random initial solution. In this paper, we consider the stochastic AUC maximization problem when the predictive model is a deep neural network. By building on the saddle-point reformulation and exploring the Polyak-Łojasiewicz condition in deep learning, we have proposed two algorithms with state-of-the-art complexities for the stochastic AUC maximization problem. We have also demonstrated the efficiency of our proposed algorithms on several benchmark datasets, and the experimental results indicate that our algorithms converge faster than other baselines. One may consider extending the analysis techniques to other problems with the min-max formulation. [Appendix proofs of Lemmas 2-3 and Theorems 2-3: the displayed equations are missing here. The remaining text indicates that the lemmas bound the per-round progress of the primal-dual updates using the unbiasedness of the stochastic gradients, the 2L-Lipschitz continuity of E[h(w; x)|y = -1] - E[h(w; x)|y = 1], and, for the AdaGrad-style variant, Lemma 4 of Duchi et al. (2011) together with a stopping-time argument; the theorems then combine these bounds with the smoothness and (γ⁻¹ − L)-strong convexity of the proximal functions φ_k (via Theorem 2.1.5 of Nesterov, 2013) and the PL property of φ to obtain the stated convergence rates and total iteration complexity.]
The paper designs two algorithms for the stochastic AUC maximization problem with state-of-the-art complexities when using deep neural network as predictive model, which are also verified by empirical studies.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:518
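The saddle-point reformulation of the squared AUC surrogate (Ying et al., 2016) that the row above builds on can be optimized with plain primal-dual stochastic gradient, sketched below under simplifying assumptions: a linear scorer h(x) = w·x replaces the deep network, the positive-class probability `p` is passed in rather than estimated online, and the proximal and AdaGrad-style machinery with PL-based step sizes proposed in the paper is not reproduced.

```python
import numpy as np

def pd_sgd_auc(X, y, p, lr=0.01, epochs=5, seed=0):
    """Primal-dual SGD on the saddle-point form of the squared AUC surrogate:
    min over (w, a, b), max over alpha, of the per-example objective
      (1-p)(h-a)^2 [y=1] + p(h-b)^2 [y=-1]
      + 2(1+alpha)(p*h*[y=-1] - (1-p)*h*[y=1]) - p(1-p)*alpha^2,
    with a linear scorer h = w.x and labels y in {+1, -1}."""
    rng = np.random.default_rng(seed)
    w, a, b, alpha = np.zeros(X.shape[1]), 0.0, 0.0, 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            x, h = X[i], float(np.dot(w, X[i]))
            if y[i] == 1:
                grad_h = 2 * (1 - p) * (h - a) - 2 * (1 + alpha) * (1 - p)
                a += lr * 2 * (1 - p) * (h - a)              # descent in a
                grad_alpha = -2 * (1 - p) * h - 2 * p * (1 - p) * alpha
            else:
                grad_h = 2 * p * (h - b) + 2 * (1 + alpha) * p
                b += lr * 2 * p * (h - b)                    # descent in b
                grad_alpha = 2 * p * h - 2 * p * (1 - p) * alpha
            w -= lr * grad_h * x                             # descent in w
            alpha += lr * grad_alpha                         # ascent in alpha
    return w, a, b, alpha
```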
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Designing rewards for Reinforcement Learning (RL) is challenging because it needs to convey the desired task, be efficient to optimize, and be easy to compute. The latter is particularly problematic when applying RL to robotics, where detecting whether the desired configuration is reached might require considerable supervision and instrumentation. Furthermore, we are often interested in being able to reach a wide range of configurations, hence setting up a different reward every time might be unpractical. Methods like Hindsight Experience Replay (HER) have recently shown promise to learn policies able to reach many goals, without the need of a reward. Unfortunately, without tricks like resetting to points along the trajectory, HER might take a very long time to discover how to reach certain areas of the state-space. In this work we investigate different approaches to incorporate demonstrations to drastically speed up the convergence to a policy able to reach any goal, also surpassing the performance of an agent trained with other Imitation Learning algorithms. Furthermore, our method can be used when only trajectories without expert actions are available, which can leverage kinestetic or third person demonstration. Reinforcement Learning (RL) has shown impressive results in a plethora of simulated tasks, ranging from attaining super-human performance in video-games BID18 BID35 and board-games (Silver et al., 2017) , to learning complex locomotion behaviors BID34 BID4 . Nevertheless, these successes are shyly echoed in real world robotics (Riedmiller et BID36 . This is due to the difficulty of setting up the same learning environment that is enjoyed in simulation. One of the critical assumptions that are hard to obtain in the real world are the access to a reward function. Self-supervised methods have the power to overcome this limitation.A very versatile and reusable form of self-supervision for robotics is to learn how to reach any previously observed state upon demand. This problem can be formulated as training a goal-conditioned policy BID14 BID27 that seeks to obtain the indicator reward of having the observation exactly match the goal. Such a reward does not require any additional instrumentation of the environment beyond the sensors the robot already has. But in practice, this reward is never observed because in continuous spaces like the ones in robotics, the exact same observation is never observed twice. Luckily, if we are using an off-policy RL algorithm BID17 BID11 , we can "relabel" a collected trajectory by replacing its goal by a state actually visited during that trajectory, therefore observing the indicator reward as often as we wish. This method was introduced as Hindsight Experience Replay BID0 or HER.In theory these approaches could learn how to reach any goal, but the breadth-first nature of the algorithm makes that some areas of the space take a long time to be learned BID7 . This is specially challenging when there are bottlenecks between different areas of the statespace, and random motion might not traverse them easily BID5 . Some practical examples of this are pick-and-place, or navigating narrow corridors between rooms, as illustrated in Fig. 5 in appendix depicting the diverse set of environments we work with. 
In both cases a specific state needs to be reached (grasp the object, or enter the corridor) before a whole new area of the space is discovered (placing the object, or visiting the next room). This problem could be addressed by engineering a reward that guides the agent towards the bottlenecks, but this defeats the purpose of trying to learn without direct reward supervision. In this work we study how to leverage a few demonstrations that traverse those bottlenecks to boost the learning of goal-reaching policies.Learning from Demonstrations, or Imitation Learning (IL), is a well-studied field in robotics BID15 BID25 BID2 . In many cases it is easier to obtain a few demonstrations from an expert than to provide a good reward that describes the task. Most of the previous work on IL is centered around trajectory following, or doing a single task. Furthermore it is limited by the performance of the demonstrations, or relies on engineered rewards to improve upon them. In this work we study how IL methods can be extended to the goal-conditioned setting, and show that combined with techniques like HER it can outperform the demonstrator without the need of any additional reward. We also investigate how the different methods degrade when the trajectories of the expert become less optimal, or less abundant. Finally, the method we develop is able to leverage demonstrations that do not include the expert actions. This is very convenient in practical robotics where demonstrations might have been given by a motion planner, by kinestetic demonstrations (moving the agent externally, and not by actually actuating it), or even by another agent. To our knowledge, this is the first framework that can boost goal-conditioned policy learning with only state demonstrations. Hindsight relabeling can be used to learn useful behaviors without any reward supervision for goal-conditioned tasks, but they are inefficient when the state-space is large or includes exploration bottlenecks. In this work we show how only a few demonstrations can be leveraged to improve the convergence speed of these methods. We introduce a novel algorithm, goal-GAIL, that converges faster than HER and to a better final performance than a naive goal-conditioned GAIL. We also study the effect of doing expert relabeling as a type of data augmentation on the provided demonstrations, and demonstrate it improves the performance of our goal-GAIL as well as goal-conditioned Behavioral Cloning. We emphasize that our goal-GAIL method only needs state demonstrations, without using expert actions like other Behavioral Cloning methods. Finally, we show that goal-GAIL is robust to sub-optimalities in the expert behavior.
We tackle goal-conditioned tasks by combining Hindsight Experience Replay and Imitation Learning algorithms, showing faster convergence than the first and higher final performance than the second.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:519
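Hindsight relabeling, which the row above builds on, can be sketched in a few lines. The code below is a simplified illustration, not the paper's goal-GAIL method: transitions are `(state, action, next_state, goal)` tuples, the "future" relabeling strategy with `k` extra goals per transition is assumed, and the exact-match indicator reward stands in for an epsilon-ball test in continuous spaces.

```python
import random

def her_relabel(trajectory, k=4, reward_fn=None):
    """Duplicate each transition with goals replaced by states actually reached
    later in the same trajectory ("future" strategy), so the sparse indicator
    reward is observed often enough to learn from."""
    if reward_fn is None:
        reward_fn = lambda s, g: 1.0 if s == g else 0.0   # exact-match indicator
    out = []
    for t, (s, a, s_next, g) in enumerate(trajectory):
        out.append((s, a, s_next, g, reward_fn(s_next, g)))           # original goal
        future = trajectory[t:]
        for _ in range(k):
            new_g = random.choice(future)[2]      # a state achieved later on
            out.append((s, a, s_next, new_g, reward_fn(s_next, new_g)))
    return out

# Toy 1-D chain: states are integers, the agent happened to reach 3.
traj = [(0, +1, 1, 9), (1, +1, 2, 9), (2, +1, 3, 9)]
relabeled = her_relabel(traj, k=2)
print(sum(r for *_, r in relabeled))   # > 0: hindsight goals yield reward signal
```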
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Memorization of data in deep neural networks has become a subject of significant research interest. In this paper, we link memorization of images in deep convolutional autoencoders to downsampling through strided convolution. To analyze this mechanism in a simpler setting, we train linear convolutional autoencoders and show that linear combinations of training data are stored as eigenvectors in the linear operator corresponding to the network when downsampling is used. On the other hand, networks without downsampling do not memorize training data. We provide further evidence that the same effect happens in nonlinear networks. Moreover, downsampling in nonlinear networks causes the model to not only memorize just linear combinations of images, but individual training images. Since convolutional autoencoder components are building blocks of deep convolutional networks, we envision that our findings will shed light on the important phenomenon of memorization in over-parameterized deep networks. As deep convolutional neural networks (CNNs) become ubiquitous in computer vision due to their applicability and strong performance on a range of tasks BID6 , recent work has begun analyzing the memorization properties of such networks in classification. For example, BID19 show that popular CNNs can achieve almost zero training error on randomly labeled datasets, indicating that CNNs have the capacity to "memorize" large training data sets. BID0 and BID15 build on the experiments from BID19 to better understand and evaluate the extent to which CNNs memorize training data. BID0 show that CNNs, when trained on large datasets, are able to learn patterns from realistic data before memorizing training images. BID15 present experiments on "membership inference" (i.e. determining whether an image was used during training) and conclude that modern architectures are capable of "remember[ing] a large number of images and distinguish [ing] them from unseen images".Although the above methods analyze memorization in the classification setting, they do not provide a mechanism through which memorization of training data occurs. We here present downsampling as one mechanism by which deep CNNs memorize specific training images. We will focus our study on the memorization properties of linear and nonlinear fully convolutional autoencoders. The architectures we use (such as U-Net, BID14 ) are commonly employed in imageto-image tasks, see e.g. BID17 . However, we will use these architectures only in the autoencoding framework. We primarily focus on autoencoders BID1 for the following reasons: (1) components of convolutional autoencoders are building blocks of many CNNs; and (2) layerwise pre-training using autoencoders is a technique to initialize individual layers of CNNs to improve training BID3 , BID4 ). It is important to note that there are many potential solutions to the autoencoding problem when using over-parameterized autoencoders. In particular, in the linear case, these models may range from learning the (full rank) identity function (which has 0 error in the autoencoding task) to low rank solutions where each training example corresponds to an eigenvector with eigenvalue 1. 
Thus, understanding how autoencoders learn is of interest in order to gain insights into how deep CNNs memorize training data.Figures 1a and 1b provide two examples of memorization: A typical U-Net architecture (the same as e.g. used in BID17 for large hole impainting) when trained on a single image "memorizes" the training image in the sense that for any input, the output always contains the training image (even if the input is random noise or an arbitrary white square). This paper provides a mechanism for this phenomenon.The outline is as follows: After introducing some notation in Section 2, we will show in Section 3 that memorization is tightly coupled with downsampling and also occurs in the simpler setting of linear autoencoding CNNs. In the linear setting , the neural network corresponds to matrix multiplication. In Section 4, we show how to extract this matrix representation and we provide our main conjecture, namely that linear combinations of the training images are stored as eigenvectors of this matrix, whose rank is given by the dimension of the span of the training set. We also provide strong evidence for this conjecture on 2 × 2 images.In Section 5, we analyze the eigenvalue decay and show in various examples that using downsampling linear CNNs, linear combinations of the training examples are stored as eigenvectors with eigenvalues close to 1. Finally, we return to the nonlinear setting in Section 6, providing evidence that memorization is an even stronger phenomenon in nonlinear networks, since the actual training images (in contrast to linear combinations of training images) are memorized. We end with a short discussion in Section 7. This paper identified downsampling as a mechanism through which linear CNNs memorize training images. We demonstrated that downsampling convolutional autoencoders memorize training images in both the linear and nonlinear setting. In particular, we showed that it is not just the dimensionality reduction of downsampling that causes these models to learn point maps by demonstrating that a downsampling CNN architecture with the capacity to learn the identity function still prefers the point map. In the linear case, this preference for low-rank over the equally valid high-rank solutions is highly suggestive of similar phenomena observed in problems such as matrix completion (e.g., Gunasekar et al.) .In the non-linear case, memorization in downsampling networks is manifested even more strikingly with nearly arbitrary input images being mapped to output images that are visually identifiable as one of the training images. While the exact mechanism still needs to be explored, this is reminiscent of FastICA in Independent Component Analysis BID10 or more general non-linear eigen-problems BID2 , where every "eigenvector" for certain iterative maps has its own basin of attraction. On the other hand, non-downsampling auto-encoders do not memorize the training data and consistently learn a "high rank" map, similar to the identity map, at least visually.We conjecture that our findings will help to shed light on the strong generalization properties of downsampling networks for image classification and recognition tasks. Indeed , if downsampling networks memorize images or linear combinations of images, when trained on large datasets, they may be capable of learning representations within the space of all realisitic images instead of learning the standard full rank basis.We conclude with a mention of further areas of exploration spurred on by our work. 
We still need to understand why downsampling forces the network to learn low rank solutions even when the network has the capacity to learn the identity. This requires developing a better grasp of optimization and initialization, starting with linear autoencoders and proceeding to the non-linear settings. Finally, we need to explore connections between our conjecture and the manifold hypothesis to better understand the space of realistic images.
We identify downsampling as a mechanism for memorization in convolutional autoencoders.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:52
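Editor's note: the example above describes training a purely linear convolutional autoencoder with downsampling, extracting the matrix it computes, and checking whether training data shows up as eigenvectors with eigenvalues near 1. A minimal sketch of that analysis (not the authors' code; the architecture, image size, and toy training data are illustrative assumptions) might look like this:

# Sketch: extract the matrix of a linear (no-activation) convolutional
# autoencoder with downsampling, then inspect its eigen-decomposition.
# Architecture and sizes are placeholders, not the paper's exact setup.
import torch
import torch.nn as nn
import numpy as np

d = 8  # images are d x d, so the network is a linear map on R^(d*d)
net = nn.Sequential(                                        # purely linear: no nonlinearities, no biases
    nn.Conv2d(1, 4, 3, stride=2, padding=1, bias=False),    # downsampling via stride
    nn.Conv2d(4, 4, 3, stride=2, padding=1, bias=False),
    nn.Upsample(scale_factor=2, mode='nearest'),
    nn.Conv2d(4, 4, 3, padding=1, bias=False),
    nn.Upsample(scale_factor=2, mode='nearest'),
    nn.Conv2d(4, 1, 3, padding=1, bias=False),
)

x_train = torch.randn(2, 1, d, d)              # two toy "training images"
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(2000):                           # plain autoencoding objective
    opt.zero_grad()
    loss = ((net(x_train) - x_train) ** 2).mean()
    loss.backward()
    opt.step()

# The whole network is linear, so it equals some matrix A: recover A column by
# column by pushing the standard basis of R^(d*d) through the network.
with torch.no_grad():
    basis = torch.eye(d * d).reshape(d * d, 1, d, d)
    A = net(basis).reshape(d * d, d * d).T.numpy()

eigvals, eigvecs = np.linalg.eig(A)
order = np.argsort(-np.abs(eigvals))
print("largest |eigenvalues|:", np.abs(eigvals[order][:5]))
# Comparing the top eigenvectors against the span of x_train indicates whether
# (linear combinations of) the training images are stored as eigenvectors.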
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Bayesian neural networks, which both use the negative log-likelihood loss function and average their predictions using a learned posterior over the parameters, have been used successfully across many scientific fields, partly due to their ability to `effortlessly' extract desired representations from many large-scale datasets. However, generalization bounds for this setting is still missing. In this paper, we present a new PAC-Bayesian generalization bound for the negative log-likelihood loss which utilizes the \emph{Herbst Argument} for the log-Sobolev inequality to bound the moment generating function of the learners risk. Deep neural networks are ubiquitous across disciplines and often achieve state of the art results (e.g., Krizhevsky et al. (2012) ; Simonyan & Zisserman (2014) ; He et al. (2016) ). Albeit neural networks are able to encode highly complex input-output relations, in practice, they do not tend to overfit (Zhang et al., 2016) . This tendency to not overfit has been investigated in numerous works on generalization bounds (Langford & Shawe-Taylor, 2002; Langford & Caruana, 2002; Bartlett et al., 2017a; 2019; McAllester, 2003; Germain et al., 2016; Dziugaite & Roy, 2017) . Indeed, many generalization bounds apply to neural networks. However, most of these bounds assume that the loss function is bounded (Bartlett et al., 2017a; Neyshabur et al., 2017; Dziugaite & Roy, 2017) . Unfortunately, this assumption excludes the popular negative log-likelihood (NLL) loss, which is instrumental to Bayesian neural networks that have been used extensively to calibrate model performance and provide uncertainty measures to the model prediction. In this work we introduce a new PAC-Bayesian generalization bound for NLL loss of deep neural networks. Our work utilizes the Herbst argument for the logarithmic-Sobolev inequality (Ledoux, 1999) in order to bound the moment-generating function of the model risk. Broadly, our PACBayesian bound is comprised of two terms: The first term is dominated by the norm of the gradients with respect to the input and it describes the expressivity of the model over the prior distribution. The second term is the KL-divergence between the learned posterior and the prior, and it measures the complexity of the learning process. In contrast, bounds for linear models or bounded loss functions lack the term that corresponds to the expressivity of the model over the prior distribution and therefore are the same when applied to shallow and deep models. We empirically show that our PAC-Bayesian bound is tightest when we learn the mean and variance of each parameter separately, as suggested by Blundell et al. (2015) in the context of Bayesian neural networks (BNNs). We also show that the proposed bound holds different insights regarding model architecture, optimization and prior distribution selection. We demonstrate that such optimization minimizes the gap between risk and the empirical risk compared to the standard Bernoulli dropout and other Bayesian inference approximation while being consistent with the theoretical findings. Additionally, we explore in-distribution and out-of-distribution examples to show that such optimization produces better uncertainty estimates than the baseline. 
PAC-Bayesian bounds for the NLL loss function are intimately related to learning Bayesian inference (Germain et al., 2016) . Recently many works applied various posteriors in Bayesian neural networks. Gal & Ghahramani (2015) ; Gal (2016) introduce a Bayesian inference approximation using Monte Carlo (MC) dropout, which approximates a Gaussian posterior using Bernoulli dropout. Srivastava et al. (2014) introduced Gaussian dropout which effectively creates a Gaussian posterior that couples between the mean and the variance of the learned parameters. Kingma et al. (2015) explored the relation of this posterior to log-uniform priors, while Blundell et al. (2015) suggests to take a full Bayesian perspective and learn separately the mean and the variance of each parameter. Our work uses the bridge between PAC-Bayesian bounds and Bayesian inference, as described by Germain et al. (2016) , to find the optimal prior parameters in PAC-Bayesian setting and apply it in the Bayesian setting. Most of the literature regarding Bayesian modeling involves around a two-step formalism (Bernardo & Smith, 2009) : (1) a prior is specified for the parameters of the deep net; (2) given the training data, the posterior distribution over the parameters is computed and used to quantify predictive uncertainty. Since exact Bayesian inference is computationally intractable for neural networks, approximations are used, including MacKay (1992); Hernández-Lobato & Adams (2015); Hasenclever et al. (2017); Balan et al. (2015) ; Springenberg et al. (2016) . In this study we follow this two-step formalism, particularly we follow a similar approach to Blundell et al. (2015) in which we learn the mean and standard deviation for each parameter of the model using variational Bayesian practice. Our experimental validation emphasizes the importance of learning both the mean and the variance. In the following study we present a new PAC-Bayesian generalization bound for learning a deep net using the NLL loss function. The proof relies on bounding the log-partition function using the squared norm of the gradients with respect to the input. Experimental validation shows that the resulting bound provides insight for better model optimization and prior distribution search. We demonstrate that learning the mean and STD for all parameters together with optimize prior over the parameters leads to better uncertainty estimates over the baselines and makes it harder to overfit.
We derive a new PAC-Bayesian Bound for unbounded loss functions (e.g. Negative Log-Likelihood).
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:520
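Editor's note: the example above repeatedly refers to learning a separate mean and standard deviation for every parameter (as in Blundell et al., 2015) and to an objective combining the NLL with a KL term to the prior. A minimal mean-field Gaussian layer along those lines is sketched below; it is a generic variational-Bayes sketch, not the paper's bound, and in particular the gradient-norm ("expressivity") term of their PAC-Bayesian bound is not implemented here.

# Sketch of a mean-field Gaussian linear layer: each weight has a learned mean
# and standard deviation, sampled with the reparameterization trick, plus a
# closed-form KL to an isotropic Gaussian prior. Sizes and hyperparameters are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianLinear(nn.Module):
    def __init__(self, d_in, d_out, prior_std=0.1):
        super().__init__()
        self.mu = nn.Parameter(0.01 * torch.randn(d_out, d_in))
        self.rho = nn.Parameter(-4.0 * torch.ones(d_out, d_in))  # std = softplus(rho)
        self.prior_std = prior_std

    def forward(self, x):
        std = F.softplus(self.rho)
        w = self.mu + std * torch.randn_like(std)   # reparameterized weight sample
        return x @ w.t()

    def kl(self):
        std = F.softplus(self.rho)
        p = self.prior_std
        # KL( N(mu, std^2) || N(0, p^2) ), summed over all weights
        return (torch.log(p / std) + (std ** 2 + self.mu ** 2) / (2 * p ** 2) - 0.5).sum()

layer = GaussianLinear(20, 3)
x, y = torch.randn(64, 20), torch.randint(0, 3, (64,))
opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    nll = F.cross_entropy(layer(x), y)            # negative log-likelihood term
    loss = nll + layer.kl() / x.shape[0]          # NLL + KL/N, as in variational / PAC-Bayesian objectives
    loss.backward()
    opt.step()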
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Data augmentation techniques, e.g., flipping or cropping, which systematically enlarge the training dataset by explicitly generating more training samples, are effective in improving the generalization performance of deep neural networks. In the supervised setting, a common practice for data augmentation is to assign the same label to all augmented samples of the same source. However, if the augmentation results in large distributional discrepancy among them (e.g., rotations), forcing their label invariance may be too difficult to solve and often hurts the performance. To tackle this challenge, we suggest a simple yet effective idea of learning the joint distribution of the original and self-supervised labels of augmented samples. The joint learning framework is easier to train, and enables an aggregated inference combining the predictions from different augmented samples for improving the performance. Further, to speed up the aggregation process, we also propose a knowledge transfer technique, self-distillation, which transfers the knowledge of augmentation into the model itself. We demonstrate the effectiveness of our data augmentation framework on various fully-supervised settings including the few-shot and imbalanced classification scenarios.
We propose a simple self-supervised data augmentation technique which improves performance of fully-supervised scenarios including few-shot learning and imbalanced classification.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:521
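Editor's note: the abstract above describes learning the joint distribution of the original label and a self-supervised augmentation label (e.g., rotation), plus aggregating predictions over augmented copies at test time. The sketch below illustrates that joint-label idea in a hypothetical rotation setup; the backbone, sizes, and the exact aggregation rule are assumptions and may differ from the paper's.

# Sketch of the "joint label" idea for rotation-based augmentation: instead of
# forcing all rotated copies to share one label, the classifier predicts the
# joint (class, rotation) label; test-time predictions marginalize rotation and
# average over the rotated copies.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes, num_rot = 10, 4
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(16, num_classes * num_rot)       # joint (class, rotation) logits

def rotated_batch(x, y):
    """Stack the 4 rotations of each image with joint labels y*num_rot + r."""
    xs, ys = [], []
    for r in range(num_rot):
        xs.append(torch.rot90(x, r, dims=(2, 3)))
        ys.append(y * num_rot + r)
    return torch.cat(xs), torch.cat(ys)

x, y = torch.randn(8, 3, 32, 32), torch.randint(0, num_classes, (8,))
xr, yr = rotated_batch(x, y)
loss = F.cross_entropy(head(backbone(xr)), yr)    # joint-label training loss

# Aggregated inference: combine the scores of all rotated copies of each image.
with torch.no_grad():
    logits = head(backbone(rotated_batch(x, y)[0]))            # (4*B, classes*rot)
    logits = logits.view(num_rot, x.size(0), num_classes, num_rot)
    class_scores = logits.mean(dim=0).logsumexp(dim=-1)        # average copies, marginalize rotation
    pred = class_scores.argmax(dim=-1)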
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Long short-term memory (LSTM) networks allow to exhibit temporal dynamic behavior with feedback connections and seem a natural choice for learning sequences of 3D meshes. We introduce an approach for dynamic mesh representations as used for numerical simulations of car crashes. To bypass the complication of using 3D meshes, we transform the surface mesh sequences into spectral descriptors that efficiently encode the shape. A two branch LSTM based network architecture is chosen to learn the representations and dynamics of the crash during the simulation. The architecture is based on unsupervised video prediction by an LSTM without any convolutional layer. It uses an encoder LSTM to map an input sequence into a fixed length vector representation. On this representation one decoder LSTM performs the reconstruction of the input sequence, while the other decoder LSTM predicts the future behavior by receiving initial steps of the sequence as seed. The spatio-temporal error behavior of the model is analysed to study how well the model can extrapolate the learned spectral descriptors into the future, that is, how well it has learned to represent the underlying dynamical structural mechanics. Considering that only a few training examples are available, which is the typical case for numerical simulations, the network performs very well. Data driven virtual product design is nowadays an essential tool in the automotive industry saving time and resources during the development process. For a new car model, numerical crash simulations are performed where design parameters are changed to study their effects on physical and functional properties of the car such as firewall intrusion, weight, or cost (Fang et al., 2017) . Since one simulation run takes a couple of hours on a compute cluster, running a large number of simulation is not feasible. Therefore, a system that is able to use a limited dataset and predict new simulations would make the development process faster and more efficient. The rise of deep neural networks (DNNs) in recent years encourages further research and industrial usages. Besides manifold research for autonomous driving, it is natural for the automotive industry to seek and evaluate the possible applications of DNNs also in the product design stages. As an example, we investigate car crash tests, in which for example the plate thickness of certain parts strongly influences the bending behavior of structural beams and as a result also the intrusion of the firewall into the passenger compartment. Here, numerical crash simulations for different variations of such thicknesses are used as a dataset for learning. The aim is to design a system based on a DNN architecture that learns the crash behavior and would be able to imitate the crash dynamics. Car crash simulations are based on a mathematical model of the plastic deformations and other physical and mechanical effects. They are defined on a computing mesh of currently up to three million points and up to a hundred time steps are stored. Each data instance is a simulation run-of pre-selected parts and/or time steps-that is very high dimensional. 
Working with this data directly exasperates any machine learning (ML) method, but a transformation of this data presented in IzaTeran & Garcke (2019) allows to obtain a new representation that uses only a small number of coefficients to represent the high resolution numerical solutions. The transformed representation is employed here to compress the mesh geometries to feature sets suitable for neural networks, while avoiding to directly handle geometries in the machine learning method. This way, a network designed for video prediction and embedding based on a long short-term memory (LSTM) based architecture (Srivastava et al., 2015) can be adapted for mesh data. Since LSTM is a recurrent neural network that allows to exhibit temporal dynamic behavior with feedback connections, it is a natural choice for learning the 3D sequences. The aim is that the network learns the observed crash behavior including translation, rotation, or deformation of the parts in the model. Since the contribution of this paper is using DNNs for analyzing car crash data, the related works are categorized into a group of publications in which DNNs are extended for 3D graphics and one that concerns the use of ML techniques for analyzing car crash simulations. For the latter, one typically uses different embedding techniques to obtain a low dimensional representation for the intrinsic underlying data space and to cluster simulations with similar characteristics together (Bohn et al., 2013; Diez, 2018; Garcke & Iza-Teran, 2015; Iza-Teran & Garcke, 2019; Le Guennec et al., 2018) . The majority of publications about 3D DNN tried to extend CNN for 3D space and focus on description learning and shape correspondence, also known as geometric deep learning, Monti et al., 2017; Litany et al., 2017; Halimi et al., 2018; Maturana & Scherer, 2015; Su et al., 2015; Wang et al., 2017) and some developed CNN filters for unorganized point clouds (Qi et al., 2017a; b) . The very active research is so far very compute resource consuming and there is no extension of ConvLSTM for 3D space to our knowledge, but for prediction one would need an LSTM (or GAN) approach. However, a couple of very recent works introduce new feature sets and architectures for mesh embedding using autoencoders and LSTM (Tan et al., 2018b; Qiao et al., 2018; Tan et al., 2018a) . The feature representation is using local shape deformations obtained by solving an optimization problem at each node and a global optimization for compensating for rotations. They have shown that after training the network, a sequences of 3D shapes as an animation can be generated by doing operations in the latent space. The bidirectional LSTM architecture is shown to outperform autoeconders (Tan et al., 2018a ). An LSTM based learning network has also been proposed in Qiao et al. (2018) , where the obtained feature representation is then taken as the temporal data to be feed into a CNN that takes the features and represents them in a lower dimensional latent space. This information is subsequently feed into the LSTM module. Video frames prediction has been in the center of attention of researchers for a while, but there has been only very few extensions of these works to the 3D case so far. The problem is addressed here by introducing spectral coefficients to encode functions on the geometry together with a two branch LSTM based architecture without any convolutional layer, which has already proven to be feasible for video embedding and future frames prediction. 
The employed LBO basis and the resulting spectral coefficients provide a trade-off between accuracy and required computational resources. We encode the 3D shapes by a set of features using the eigenvectors of the LBO. For empirical evaluation, a dataset is employed from a set of numerical simulations of a car during crash under different design conditions, i.e. plate thickness variations. The appearance of a bifurcation during the crash in the dataset, motivates an error analysis done for both groups to see how good the network performs in the presence of a bifurcation. In both branches, the network is able to perform very good predictions, while we observe different error localisations for reconstruction versus prediction. Moreover, the 2D visualization of the reconstruction branch shows the bifurcation as two clusters. In any case, from a relatively small number of data, the proposed network using spectral coefficients is able to learn complex dynamical structural mechanical behaviors. Future work could go toward scaling the pipeline for learning the crash dynamics of the entire car and larger mesh sizes, which increases the needed computational effort. On the other hand, one might be able to use smaller number of eigenvectors by not simply selecting the first few ones, but those with a large variance in the spectral coefficients of the data set. Furthermore, in practical settings, re-meshing of the parts can take place, here using spectral coefficients can ease this step since one can encode shapes with different vertices number to fixed size feature vectors, as long as the geometry is (approximately) isometric. Still, there is the overall question, if and how a trained network can be evaluated for changed geometries (relevant question for any 3D DNN approach introduced so far) or different crash setups. Moreover, adding design parameters could also improve the accuracy but requires modifications of the networks architecture. For practical applications, as each crash simulation requires hours of heavy computation running computational solvers on a large cluster, a system that is able to learn the representation of experiments with very few training data and generate the predicted simulation results for new design parameters would save much resources. Moreover, the ultimate goal of research along this direction would be a data driven system that receives very little information about the simulation (like design parameters) and output the crash sequences with minimum error. Another application of the current system could be feasibility detectors while running the simulation on the compute cluster. Using the network, one could check if the simulation goes well or if for some reasons it should be terminated. From the current stage of the system, one would be able to generate the parts of the future simulation simply by extrapolating the learned spectral coefficients from a few initial time steps, which are already computed on the cluster, as inputs. If the distance between network predicts and simulation gets very large over the iterations, the simulation can be terminated since it failed the feasibility check. Further, related works such as Qiao et al. (2018) introduce a specific feature set and LSTM autoencoders, where also graph convolution operation is required. This approach could be applied for car crash data under the assumption that the local optimization can still be applied for large deformations as the ones occurring in our applications. 
Further, the resulting features are long vectors, which results in 8 hours of learning on a CPU/GPU system for a data set similar in size to ours, whereas we need 30 minutes. Nevertheless, a comparison of these two approaches will be worthwhile future work. A APPENDIX [Figure: two rows of panels labeled time step 6 through time step 10.]
A two branch LSTM based network architecture learns the representation and dynamics of 3D meshes of numerical crash simulations.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:522
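Editor's note: the example above hinges on turning each mesh time step into a small set of spectral coefficients in a Laplace-Beltrami eigenbasis before feeding the sequence to the two-branch LSTM. A minimal sketch of that spectral encoding is given below; it assumes a symmetric graph Laplacian as a stand-in for the LBO (so the eigenbasis is orthonormal), and the toy ring graph and random signals are placeholders for an actual car part mesh.

# Sketch: project per-vertex signals (e.g. x/y/z coordinates of a mesh at one
# time step) onto the first k Laplacian eigenvectors and reconstruct from the
# coefficients. The resulting (time, k, 3) array is what would be fed to the
# reconstruction/prediction LSTM branches.
import numpy as np

def laplacian_basis(adjacency, k):
    """First k eigenvectors of the symmetric graph Laplacian L = D - A."""
    deg = np.diag(adjacency.sum(axis=1))
    L = deg - adjacency
    eigvals, eigvecs = np.linalg.eigh(L)          # ascending eigenvalues
    return eigvecs[:, :k]                          # (num_vertices, k), orthonormal

def encode(vertex_signal, basis):
    """Spectral coefficients of a per-vertex signal."""
    return basis.T @ vertex_signal                 # (k, 3)

def decode(coeffs, basis):
    """Low-pass reconstruction of the signal from its k coefficients."""
    return basis @ coeffs                          # (num_vertices, 3)

# Toy mesh graph: a ring of 100 vertices; a real crash part has far more.
n, k = 100, 10
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
basis = laplacian_basis(A, k)

sequence = [np.random.randn(n, 3) for _ in range(30)]          # 30 simulated time steps
features = np.stack([encode(x, basis) for x in sequence])      # (30, k, 3) LSTM input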
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: The purpose of an encoding model is to predict brain activity given a stimulus. In this contribution, we attempt at estimating a whole brain encoding model of auditory perception in a naturalistic stimulation setting. We analyze data from an open dataset, in which 16 subjects watched a short movie while their brain activity was being measured using functional MRI. We extracted feature vectors aligned with the timing of the audio from the movie, at different layers of a Deep Neural Network pretrained on the classification of auditory scenes. fMRI data was parcellated using hierarchical clustering on 500 parcels, and encoding models were estimated using a fully connected neural network with one hidden layer, trained to predict the signals for each parcel from the DNN features. Individual encoding models were successfully trained and predicted brain activity on unseen data, in parcels located in the superior temporal lobe, as well as dorsolateral prefrontal regions, which are usually considered as areas involved in auditory and language processing. Taken together, this contribution extends previous attempts on estimating encoding models, by showing the ability to model brain activity using a generic DNN (ie not specifically trained for this purpose) to extract auditory features, suggesting a degree of similarity between internal DNN representations and brain activity in naturalistic settings. One important motivation for incorporating machine learning in neuroscientific discovery is the establishment of predictive models, as opposed to models based on statistical inference [1] . While the latter are unable to generalize to a new dataset, the former aim at sucessful generalization. In particular, encoding models aim at predicting brain activity given a model of the stimulus presented to the subject. A successful model should enable generalization to unseen data, enabling a better understanding of the underlying brain functions. Furthermore, an accurate encoding model could potentially be used to enhance machine learning, by providing an auxiliary source of training data, as recent evidence suggest that actual brain activity can guide machine learning [2] . In this study, we tested whether a pretrained network could be used to estimate encoding models, in the case of naturalistic auditory perception. We were able to train encoding models on individual subjects to predict brain activity using the deepest layers of SoundNet, using less than 20 minutes of fMRI data. The obtained models best predicted the activity in brain areas that are part of a language-related network. However, the current study has the following limitations. First, we extracted features from the auditory part of the stimuli, while the modeled brain activity involves many other brain functions, namely visual perception, as well as higher level cognitive functions such as memory and emotional responses. This probably explains why we obtain R 2 = 0.5 in the best case. Providing a richer stimuli representation using more general purpose feature extractors would probably enable a more complete model of brain activity. Second, we estimated brain parcellations on single subject data using only 20 minutes of MRI, which might not be enough to obtain a reliable set of ROIs [6] . 
Further studies should use either more repetitions on each subject, or attempt to learn parcellations across subjects, after having spatially normalized each individual to a template. Third, we did not find a clear relationship between the spatial extent of our encoding models and the SoundNet layer from which the features were taken. This could be due to the fact that SoundNet was trained independently of the brain data, and was never optimized for encoding models. One possible avenue would be to perform fine-tuning, or to retrain from scratch, in order to optimize the estimation of encoding models. Finally, in our approach we ignored the temporal dynamics of both the feature vectors and the fMRI data, as well as the dependencies between ROIs implied by brain connectivity. In future studies, we will consider the use of recurrent neural networks, as well as graph representation learning [7], in order to tackle those issues.
Feature vectors from SoundNet can predict brain activity of subjects watching a movie in auditory and language related brain regions.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:523
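Editor's note: the example above fits an encoding model from pretrained audio-network features to parcellated fMRI signals and evaluates it on unseen data. The sketch below shows the generic recipe with ridge regression in place of the one-hidden-layer network, and with random arrays standing in for SoundNet features and BOLD time series; all sizes are assumptions.

# Sketch of a per-parcel encoding model: regress parcel-averaged fMRI time
# series on stimulus features aligned to the fMRI timing, then score each
# parcel by R^2 on held-out time points.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

n_tr, n_feat, n_parcels = 600, 1024, 500
features = np.random.randn(n_tr, n_feat)          # one feature vector per fMRI volume (placeholder)
bold = np.random.randn(n_tr, n_parcels)           # parcellated brain signals, same timing (placeholder)

split = int(0.8 * n_tr)                           # hold out the last 20% of time points
model = Ridge(alpha=10.0)
model.fit(features[:split], bold[:split])

pred = model.predict(features[split:])
r2 = r2_score(bold[split:], pred, multioutput='raw_values')    # one R^2 per parcel
best = np.argsort(r2)[::-1][:10]
print("best-predicted parcels:", best)            # with real data, candidates for auditory/language areas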
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: In this paper, we describe the "implicit autoencoder" (IAE), a generative autoencoder in which both the generative path and the recognition path are parametrized by implicit distributions. We use two generative adversarial networks to define the reconstruction and the regularization cost functions of the implicit autoencoder, and derive the learning rules based on maximum-likelihood learning. Using implicit distributions allows us to learn more expressive posterior and conditional likelihood distributions for the autoencoder. Learning an expressive conditional likelihood distribution enables the latent code to only capture the abstract and high-level information of the data, while the remaining information is captured by the implicit conditional likelihood distribution. For example, we show that implicit autoencoders can disentangle the global and local information, and perform deterministic or stochastic reconstructions of the images. We further show that implicit autoencoders can disentangle discrete underlying factors of variation from the continuous factors in an unsupervised fashion, and perform clustering and semi-supervised learning. Deep generative models have achieved remarkable success in recent years. One of the most successful models is the generative adversarial network (GAN) BID7 , which employs a two player min-max game. The generative model, G, samples the noise vector z ∼ p(z) and generates the sample G(z). The discriminator, D(x), is trained to identify whether a point x comes from the data distribution or the model distribution; and the generator is trained to maximally confuse the discriminator. The cost function of GAN is DISPLAYFORM0 GANs can be viewed as a general framework for learning implicit distributions BID18 BID12 . Implicit distributions are probability distributions that are obtained by passing a noise vector through a deterministic function that is parametrized by a neural network. In the probabilistic machine learning problems, implicit distributions trained with the GAN framework can learn distributions that are more expressive than the tractable distributions trained with the maximum-likelihood framework.Variational autoencoders (VAE) BID13 BID20 are another successful generative models that use neural networks to parametrize the posterior and the conditional likelihood distributions. Both networks are jointly trained to maximize a variational lower bound on the data log-likelihood. One of the limitations of VAEs is that they learn factorized distributions for both the posterior and the conditional likelihood distributions. In this paper, we propose the "implicit autoencoder" (IAE) that uses implicit distributions for learning more expressive posterior and conditional likelihood distributions. Learning a more expressive posterior will result in a tighter variational bound; and learning a more expressive conditional likelihood distribution will result in a global vs. local decomposition of information between the prior and the conditional likelihood. 
This enables the latent code to only capture the information that we care about such as the high-level and abstract information, while the remaining low-level information of data is separately captured by the noise vector of the implicit decoder.Implicit distributions have been previously used in learning generative models in works such as adversarial autoencoders (AAE) BID16 , adversarial variational Bayes (AVB) (Mescheder et al., 2017) , ALI (Dumoulin et al., 2016) , BiGAN BID5 and other works such as BID12 BID22 . The global vs. local decomposition of information has also been studied in previous works such as PixelCNN autoencoders (van den Oord et al., 2016) , PixelVAE BID9 , variational lossy autoencoders BID4 , PixelGAN autoencoders BID15 , or other works such as BID2 BID8 BID0 . In the next section, we first propose the IAE and then establish its connections with the related works. In this paper, we proposed the implicit autoencoder, which is a generative autoencoder that uses implicit distributions to learn expressive variational posterior and conditional likelihood distributions. We showed that in IAEs, the information of the data distribution is decomposed between the prior and the conditional likelihood. When using a low dimensional Gaussian distribution for the global code, we showed that the IAE can disentangle high-level and abstract information from the low-level and local statistics. We also showed that by using a categorical latent code, we can learn discrete factors of variation and perform clustering and semi-supervised learning.
We propose a generative autoencoder that can learn expressive posterior and conditional likelihood distributions using implicit distributions, and train the model using a new formulation of the ELBO.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:524
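Editor's note: the example above describes an autoencoder whose reconstruction and regularization costs are both defined adversarially, with implicit (noise-driven) posterior and conditional likelihood. The code below is only a structural sketch of that two-discriminator setup, using one plausible pairing scheme for the reconstruction critic; the paper's exact losses, joint distributions, and architectures are not reproduced, and all layer sizes are assumptions.

# Structural sketch: an encoder and decoder that each take an extra noise
# vector (implicit distributions), one discriminator defining an adversarial
# reconstruction cost and one matching the code distribution to the prior.
import torch
import torch.nn as nn

d_x, d_z, d_n = 784, 16, 32
encoder = nn.Sequential(nn.Linear(d_x + d_n, 256), nn.ReLU(), nn.Linear(256, d_z))
decoder = nn.Sequential(nn.Linear(d_z + d_n, 256), nn.ReLU(), nn.Linear(256, d_x))
disc_rec = nn.Sequential(nn.Linear(2 * d_x, 256), nn.ReLU(), nn.Linear(256, 1))  # reconstruction critic
disc_reg = nn.Sequential(nn.Linear(d_z, 256), nn.ReLU(), nn.Linear(256, 1))      # regularization critic

bce = nn.BCEWithLogitsLoss()
x = torch.rand(64, d_x)

# Implicit posterior and implicit conditional likelihood both consume noise.
z = encoder(torch.cat([x, torch.randn(64, d_n)], dim=1))
x_rec = decoder(torch.cat([z, torch.randn(64, d_n)], dim=1))

# Reconstruction cost: the critic separates (x, x) pairs from (x, x_rec) pairs.
real_pair = torch.cat([x, x], dim=1)
fake_pair = torch.cat([x, x_rec], dim=1)
d_rec_loss = bce(disc_rec(real_pair), torch.ones(64, 1)) + \
             bce(disc_rec(fake_pair.detach()), torch.zeros(64, 1))

# Regularization cost: the critic separates prior samples from posterior samples.
d_reg_loss = bce(disc_reg(torch.randn(64, d_z)), torch.ones(64, 1)) + \
             bce(disc_reg(z.detach()), torch.zeros(64, 1))

# Encoder and decoder are updated to fool both critics (optimizers omitted).
g_loss = bce(disc_rec(fake_pair), torch.ones(64, 1)) + bce(disc_reg(z), torch.ones(64, 1))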
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Strong inductive biases allow children to learn in fast and adaptable ways. Children use the mutual exclusivity (ME) bias to help disambiguate how words map to referents, assuming that if an object has one label then it does not need another. In this paper, we investigate whether or not standard neural architectures have a ME bias, demonstrating that they lack this learning assumption. Moreover, we show that their inductive biases are poorly matched to lifelong learning formulations of classification and translation. We demonstrate that there is a compelling case for designing neural networks that reason by mutual exclusivity, which remains an open challenge. Children are remarkable learners, and thus their inductive biases should interest machine learning researchers. To help learn the meaning of new words efficiently, children use the "mutual exclusivity" (ME) bias -the assumption that once an object has one name, it does not need another (Markman & Wachtel, 1988) (Figure 1 ). In this paper, we examine whether or not standard neural networks demonstrate the mutual exclusivity bias, either as a built-in assumption or as a bias that develops through training. Moreover, we examine common benchmarks in machine translation and object recognition to determine whether or not a maximally efficient learner should use mutual exclusivity. The mutual exclusivity task used in cognitive development research (Markman & Wachtel, 1988) . Children tend to associate the novel word ("dax") with the novel object (right). When children endeavour to learn a new word, they rely on inductive biases to narrow the space of possible meanings. Children learn an average of about 10 new words per day from the age of one until the end of high school (Bloom, 2000) , a feat that requires managing a tractable set of candidate meanings. A typical word learning scenario has many sources of ambiguity and uncertainty, including ambiguity in the mapping between words and referents. Children hear multiple words and see multiple objects within a single scene, often without clear supervisory signals to indicate which word goes with which object (Smith & Yu, 2008) . The mutual exclusivity assumption helps to resolve ambiguity in how words maps to their referents. Markman & Wachtel (1988) examined scenarios like Figure 1 that required children to determine the referent of a novel word. For instance, children who know the meaning of "cup" are presented with two objects, one which is familiar (a cup) and another which is novel (an unusual object). Given these two objects, children are asked to "Show me a dax," where "dax" is a novel nonsense word. Markman and Wachtel found that children tend to pick the novel object rather than the familiar object. Although it is possible that the word "dax" could be another word for referring to cups, children predict that the novel word refers to the novel object -demonstrating a "mutual exclusivity" bias that familiar objects do not need another name. This is only a preference; with enough evidence, children must eventually override this bias to learn hierarchical categories: a Dalmatian can be called a "Dalmatian," a "dog", or a "mammal" (Markman & Wachtel, 1988; Markman, 1989) . 
As an often useful but sometimes misleading cue, the ME bias guides children when learning the words of their native language. It is instructive to compare word learning in children and machines, since word learning is also a widely studied problem in machine learning and artificial intelligence. There has been substantial recent progress in object recognition, much of which is attributed to the success of deep neural networks and the availability of very large datasets (LeCun et al., 2015).
[Figure 2: Evaluating mutual exclusivity in a feedforward (a) and seq2seq (b) neural network. (a) After training on a set of known objects, a novel label ("dax") is presented as a one-hot input vector. The network maps this vector to a one-hot output vector representing the predicted referent, through an intermediate embedding layer and an optional hidden layer (not shown). A representative output vector produced by a trained network is shown, placing almost all of the probability mass on known outputs. (b) A similar setup for mapping sequences of labels to their referents. During the test phase a novel label "dax" is presented and the ME Score at that output position is computed.]
But when only one or a few examples of a novel word are available, deep learning algorithms lack human-like sample efficiency and flexibility (Lake et al., 2017). Insights from cognitive science and cognitive development can help bridge this gap, and ME has been suggested as a psychologically-informed assumption relevant to machine learning. In this paper, we examine standard neural networks to understand if they have an ME bias. Moreover, we analyze whether or not ME is a good assumption in lifelong variants of common translation and object recognition tasks. The results show that standard neural networks fail to reason by mutual exclusivity when trained in a variety of typical settings. The models fail to capture the perfect one-to-one mapping (ME bias) seen in the synthetic data, predicting that new symbols map to familiar outputs in a many-to-many fashion. Although our focus is on neural networks, this characteristic is not unique to this model class. We posit it more generally affects flexible models trained to maximize log-likelihood. In a trained network, the optimal activation value for an unused output node is zero: for any given training example, increasing value of an unused output simply reduces the available probability mass for the target output.
[Table: translation benchmarks used in the lifelong-learning analysis]
Name | Languages | Sentence Pairs | Vocabulary Size
IWSLT'14 (Freitag et al., 2014) | Eng.-Vietnamese | ∼133K | 17K (en), 7K (vi)
WMT'14 | Eng.-German | ∼4.5M | 50K (en), 50K (de)
WMT'15 (Luong & Manning, 2016) | Eng.-Czech | ∼15.8M | 50K (en), 50K (cs)
Using other loss functions could result in different outcomes, but we also did not find that weight decay and entropy regularization of reasonable values could fundamentally alter the use of novel outputs. In the next section, we investigate if the lack of ME could hurt performance on common learning tasks such as machine translation and image classification. Children use the mutual exclusivity (ME) bias to learn the meaning of new words efficiently, yet standard neural networks learn very differently. Our results show that standard deep learning algorithms lack the ability to reason with ME, including feedforward networks and recurrent sequence-to-sequence models trained to maximize log-likelihood with common regularizers.

Beyond simply lacking this bias, these networks learn an anti-ME bias, preferring to map novel inputs to familiar and frequent (rather than unfamiliar) output classes.
[Figure: The plots show the probability that a new input image belongs to an unseen class, P(N|t), as a function of the number of images t seen so far during training (blue), with its standard deviation. This measure is contrasted with the ME score of a neural network classifier trained through a similar run of the dataset (orange).]
Our results also show that these characteristics are poorly matched to more realistic lifelong learning scenarios where novel classes can appear at any point, as demonstrated in the translation and classification experiments presented here. Neural nets may be currently stymied by their lack of ME bias, ignoring a powerful assumption about the structure of learning tasks. Mutual exclusivity is relevant elsewhere in machine learning. Recent work has contrasted the ability of humans and neural networks to learn compositional instructions from just one or a few examples, finding that neural networks lack the ability to generalize systematically (Lake & Baroni, 2018). The authors suggest that people rely on ME in these learning situations, and thus few-shot learning approaches could be improved by utilizing this bias as well. In our analyses, we show that neural networks tend to learn the opposite bias, preferring to map novel inputs to familiar outputs. More generally, ME can be generalized from applying to "novel versus familiar" stimuli to instead handling "rare versus frequent" stimuli (e.g., in translation, rare source words may map to rare target words). The utility of reasoning by ME could be extended to early stages of epoch-based learning too. For example, during epoch-based learning, neural networks take longer to acquire rare stimuli and patterns of exceptions (McClelland & Rogers, 2003), often mishandling these items for many epochs by mapping them to familiar responses. Another direction for future work is studying how the ME bias should interact with hierarchical categorization tasks. We posit that the ME assumption will be increasingly important as learners tackle more continual, lifelong, and large-scale learning challenges (Mitchell et al., 2018). Mutual exclusivity is an open challenge for deep neural networks, but there are promising avenues for progress. The ME bias will not be helpful for every problem, but it is equally clear that the status quo is sub-optimal: models should not have a strong anti-ME bias regardless of the task and dataset demands. Ideally, a model would decide autonomously how strongly to use ME (or not) based on the demands of the task. For instance, in our synthetic example, an ideal learner would discover the one-to-one correspondence and use this perfect ME bias as a meta-strategy. If the dataset has more many-to-one correspondences, it would adopt another meta-strategy. This meta-strategy could even change depending on the stage of learning, yet such an approach is not currently available for training models. Previous cognitive models of word learning have found ways to incorporate the ME bias (Kachergis et al., 2012; McMurray et al., 2012; Frank et al., 2009; Lambert et al., 2005), although in ways that do not generalize to training deep neural networks. While successful in some domains, these models are highly simplified or require built-in mechanisms for implementing ME, making them so far impractical for use in realistic settings.
As outlined above, it would be ideal to acquire a ME bias via meta learning or learning to learn (Allen et al., 2019; Snell et al., 2017) , with the advantage of calibrating the bias to the dataset itself rather than assuming its strength a priori. For example, the meta learning model of Santoro et al. (2016) seems capable of learning an ME bias, although it was not specifically probed in this way. Recent work by Lake (2019) demonstrated that neural nets can learn to reason by ME if trained explicitly to do so, showing these abilities are within the repertoire of modern tools. However acquiring ME is just one step toward the goal proposed here: using ME to facilitate efficient lifelong learning or large-scale classification and translation. In conclusion, standard deep neural networks do not naturally reason by mutual exclusivity, but designing them to do so could lead to faster and more flexible learners. There is a compelling case for building models that learn through mutual exclusivity.
Children use the mutual exclusivity (ME) bias to learn new words, while standard neural nets show the opposite bias, hindering learning in naturalistic scenarios such as lifelong learning.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:525
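Editor's note: the example above repeatedly refers to an "ME score", i.e., how much probability mass a trained classifier places on unused outputs when given a novel input symbol. A minimal sketch of that probe for the feedforward case is below; the vocabulary sizes, embedding width, and training schedule are illustrative assumptions.

# Sketch of a mutual-exclusivity probe: train a small classifier on a
# one-to-one mapping between known symbols and known referents, then present a
# held-out novel symbol and measure the probability mass on referents that were
# never used in training (ME score near 1 = ME bias; near 0 = anti-ME bias).
import torch
import torch.nn as nn
import torch.nn.functional as F

n_known, n_novel = 90, 10
n_symbols = n_refs = n_known + n_novel            # one-to-one vocabulary
model = nn.Sequential(nn.Embedding(n_symbols, 32), nn.Linear(32, n_refs))

known = torch.arange(n_known)                      # symbol i maps to referent i
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = F.cross_entropy(model(known), known)    # train only on known pairs
    loss.backward()
    opt.step()

with torch.no_grad():
    novel = torch.arange(n_known, n_symbols)       # symbols never seen in training
    probs = F.softmax(model(novel), dim=-1)
    me_score = probs[:, n_known:].sum(dim=-1).mean()   # mass on unused referents
print(f"ME score on novel symbols: {me_score:.3f}")    # typically small for standard training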
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Cortical neurons process and integrate information on multiple timescales. In addition, these timescales or temporal receptive fields display functional and hierarchical organization. For instance, areas important for working memory (WM), such as prefrontal cortex, utilize neurons with stable temporal receptive fields and long timescales to support reliable representations of stimuli. Despite of the recent advances in experimental techniques, the underlying mechanisms for the emergence of neuronal timescales long enough to support WM are unclear and challenging to investigate experimentally. Here, we demonstrate that spiking recurrent neural networks (RNNs) designed to perform a WM task reproduce previously observed experimental findings and that these models could be utilized in the future to study how neuronal timescales specific to WM emerge. Previous studies have shown that higher cortical areas such as prefrontal cortex operate on a long timescale, measured as the spike-count autocorrelation decay constant at rest [1] . These long timescales have been hypothesized to be critical for performing working memory (WM) computations [2, 3] , but it is experimentally challenging to probe the underlying circuit mechanisms that lead to stable temporal properties. Recurrent neural network (RNN) models trained to perform WM tasks could be a useful tool if these models also utilize units with long heterogeneous timescales and capture previous experimental findings. However, such RNN models have not yet been identified. In this study, we construct a spiking RNN model to perform a WM task and compare the emerging timescales with the timescales derived from the prefrontal cortex of rhesus monkeys trained to perform similar WM tasks. We show that both macaque prefrontal cortex and the RNN model utilize units/neurons with long timescales during delay period to sustain stimulus information. In addition, the number of units with long timescales was significantly reduced in the RNN model trained to perform a non-WM task, further supporting the idea that neuronal timescales are task-specific and functionally organized. In this study, we employed a spiking RNN model of WM to investigate if the model exhibits and utilizes heterogeneous timescales for prolonged integration of information. We validated the model using an experimental dataset obtained from rhesus monkeys trained on WM tasks: the model and the primate prefrontal cortex both displayed similar heterogeneous neuronal timescales and incorporated units/neurons with long timescales to maintain stimulus information. The timescales from the RNN model trained on a non-WM task (Go-NoGo task) were markedly shorter, since units with long timescales were not required to support the simple computation. Future works include characterizing the network dynamics and the circuit motifs of the DMS RNN model to elucidate connectivity structures required to give rise to the diverse, stable temporal receptive fields specific to WM.
Spiking recurrent neural networks performing a working memory task utilize long heterogeneous timescales, strikingly similar to those observed in prefrontal cortex.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:526
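Editor's note: the example above defines neuronal timescale as the spike-count autocorrelation decay constant. A minimal sketch of that estimate is given below; it uses synthetic Poisson spikes driven by an AR(1) latent rate as a stand-in for real recordings, and pools the autocorrelation over trials and bin pairs, which is a simplification of the standard analysis.

# Sketch: bin spike counts, compute their autocorrelation as a function of
# time lag, and fit an exponential decay exp(-lag/tau) to estimate the
# intrinsic timescale tau of a unit.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
n_trials, n_bins, bin_ms, tau_true = 200, 20, 50, 100.0

# Latent rate follows an AR(1) process with timescale tau_true; spikes are Poisson.
alpha = np.exp(-bin_ms / tau_true)
lat = np.zeros((n_trials, n_bins))
lat[:, 0] = rng.normal(size=n_trials)
for t in range(1, n_bins):
    lat[:, t] = alpha * lat[:, t - 1] + np.sqrt(1 - alpha ** 2) * rng.normal(size=n_trials)
counts = rng.poisson(np.exp(0.5 * lat + 1.0))      # spike counts, shape (trials, bins)

# Autocorrelation of spike counts versus time lag, pooled over trials.
lags, ac = [], []
for lag in range(1, n_bins):
    x = counts[:, :-lag].ravel()
    y = counts[:, lag:].ravel()
    ac.append(np.corrcoef(x, y)[0, 1])
    lags.append(lag * bin_ms)

def exp_decay(t, a, tau, b):
    return a * np.exp(-t / tau) + b

(a, tau, b), _ = curve_fit(exp_decay, np.array(lags, float), np.array(ac),
                           p0=[0.5, 80.0, 0.0], maxfev=10000)
print(f"estimated intrinsic timescale: {tau:.1f} ms (ground truth {tau_true:.0f} ms)")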
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Conventional deep learning classifiers are static in the sense that they are trained on a predefined set of classes and learning to classify a novel class typically requires re-training. In this work, we address the problem of Low-shot network-expansion learning. We introduce a learning framework which enables expanding a pre-trained (base) deep network to classify novel classes when the number of examples for the novel classes is particularly small. We present a simple yet powerful distillation method where the base network is augmented with additional weights to classify the novel classes, while keeping the weights of the base network unchanged. We term this learning hard distillation, since we preserve the response of the network on the old classes to be equal in both the base and the expanded network. We show that since only a small number of weights needs to be trained, the hard distillation excels for low-shot training scenarios. Furthermore, hard distillation avoids detriment to classification performance on the base classes. Finally, we show that low-shot network expansion can be done with a very small memory footprint by using a compact generative model of the base classes training data with only a negligible degradation relative to learning with the full training set. In many real life scenarios, a fast and simple classifier expansion is required to extend the set of classes that a deep network can classify. For example, consider a cleaning robot trained to recognize a number of objects in a certain environment. If the environment is modified with an additional novel object, it is desired to be able to update the classifier by taking only a few images of that object and expand the robot classifier. In such a scenario, the update should be a simple procedure, based on a small collection of images captured in a non-controlled setting. Furthermore, such a low-shot network update should be fast and without access the entire training set of previously learned data. A common solution to classifier expansion is fine-tuning the network BID6 . However fine-tuning requires keeping a large amount of base training data in memory, in addition to collecting sufficient examples of the novel classes. Otherwise, fine-tuning can lead to degradation of the network accuracy on the base classes, also known as catastrophic forgetting BID0 . In striking contrast, for some tasks, humans are capable of instantly learning novel categories. Using one or only a few training examples humans are able to learn a novel class, without compromising previously learned abilities or having access to training examples from all previously learned classes.We consider the classifier expansion problem under the following constraints:1. Low-shot: very few samples of the novel classes are available. 2. No forgetting: preserving classification performance on the base classes. 3. Small memory footprint: no access to the base classes training data.In this work we introduce a low-shot network expansion technique, augmenting the capability of an existing (base) network trained on base classes by training additional parameters that enables to classify novel classes. 
The expansion of the base network with additional parameters is performed in the last layers of the network.To satisfy low-shot along with no-forgetting constraints, we present a hard distillation framework. Distillation in neural networks BID5 is a process for training a target network to imitate another network. A loss function is added to the target network so that its output matches the output of the mimicked network. In standard soft distillation the trained network is allowed to deviate from the mimicked network. Whereas hard distillation enforces that the output of the trained network for base classes matches the output of the mimicked network as a hard constraint. We achieve hard distillation by keeping the weights of the base network intact, and learn only the newly added weights. Network expansion with hard distillation yields a larger network, distilling the knowledge of the base network in addition to augmented capacity to classify novel classes. We show that in the case of low-shot (only 1-15 examples of a novel class), hard distillation outperforms soft distillation. Moreover, since the number of additional parameters in the expanded network is small, the inference time of the new network is nearly identical to the base network.To maintain a small memory footprint, we refrain from saving the entire training set. Instead, we present a compact generative model, consisting of a collection of generative models fitted in the feature space to each of the base classes. We use a Gaussian Mixture Model (GMM) with small number of mixtures, and show it inflicts a minimal degradation in classification accuracy. Sampling from the generative GMM model is fast, reducing the low-shot training time and allowing fast expansion of the network.We define a benchmark for low-shot network expansion. The benchmark is composed of a series of tests of increasing complexity, ranging from simple tasks where base and novel classes are from different domains and to difficult tasks where base and novel classes are from the same domain and shares objective visual similarities. We perform a comprehensive set of experiments on this challenging benchmark, comparing the performance of the proposed to alternative methods.To summarize, the main contributions of the paper are:1. A novel hard-distillation solution to a low-shot classifier expansion problem 2. GMM as a sufficient generative model to represent base classes in a feature space 3. A new benchmark for the low-shot classifier expansion problem 2 RELATED WORKS A common solution to the class-incremental learning problem is to use a Nearest-Neighbors (NN) based classifier in feature space. A significant advantage of a NN-based classifier is that it can be easily extended to classify a novel class, even when only a single example of the class is available (one-shot learning). However NN-based classifiers require keeping in the memory significant amount of training data from the base classes. BID7 proposed to use Nearest Class Mean (NCM) classifier, where each class is represented by a single prototype example which is the mean feature vector of all class examples. One major disadvantage of NCM and NN-based methods is that they are based on a fixed feature representation of the data. 
To overcome this problem BID7 proposed to learn a new distance function in the feature space using metric learning.The ideas of metric learning combined with the NN classifier resonate with recent work by on Matching Networks for one-shot learning, where both feature representation and the distance function are learned end-to-end with attention and memory augmented networks. The problem we consider in this paper is different from the one discussed by . We aim to expand existing deep classifier trained on large dataset to classify novel classes, rather than to create a general mechanism for one-shot learning. BID3 presented an innovative low-shot learning mechanism, where they proposed a Squared Gradient Magnitude regularization technique for an improved fixed feature representation learning designed for low-shot scenarios. They also introduced techniques to hallucinate additional training examples for novel data classes. In contrast, we present a method which aims to maximize performance in low-shot network expansion given a fixed representation, allowing expanding the representation based on novel low-shot data. Furthermore, in our work, we demonstrate the ability to expand the network without storing the entire base classes training data.Recently, BID9 proposed iCaRL -(Incremental Classifier and Representation Learning), to solve the class-incremental learning problem. iCaRL is based on Nearest-Mean-of-Exemplars classifier, similar to the NCM classifier of BID7 . In the iCaRL method, the feature representation is updated and the class means are recomputed from a small stored number of representative examples of the base classes. During the feature representation update, the network parameters are updated by minimizing a combined classification and distillation loss. The iCaRL method was introduced as a class-incremental learning method for large training sets. In Section 4 we discuss its adaptation to low-shot network expansion and compare it to our method. BID11 proposed the Progressive Network for adding new tasks without affecting the performance of old tasks. They propose freezing the parameters that were trained on old tasks and expand the network with a additional layers when training a new task. BID15 proposed the Progressive learning technique which solves the problem of online sequential learning in extreme learning machines paradigm (OS-ELM). The purpose of their work is to incrementally learn the last fully-connected layer of the network. When a sample from a novel class arrives, the last layer is expanded with additional parameters. The Progressive learning solution updates the last layer only sequentially and only works in the ELM framework (does not update internal layers of the network). In another work BID14 proposed an incremental learning technique which augments the base network with additional parameters in last fully connected layer to classify novel classes. Similar to iCaRL, they perform soft distillation by learning all parameters of the network. Instead of keeping historical training data, they propose phantom sampling -hallucinating data from past distribution modeled with Generative Adversarial Networks.In this work we propose a solution that borrows ideas from freeze-and-expand paradigm, improved feature representation learning, network distillation and modeling past data with a generative model. 
We propose to apply expansion to the last fully connected layer of a base network to enable classification on novel classes, and to deeper layers to extend and improve the feature representation. However, in contrast to other methods BID9 ; BID15 , we do not retrain the base network parameters, but only the newly introduced weights of the expansion.Moreover, the extended feature representation is learned from samples of base and novel classes. In contrast to BID3 , where the improved feature representation is learned from simulating low-shot scenarios on the base classes only, before the actual novel data is available. Finally, in order to avoid keeping all historical training data, we use Gaussian Mixture Model of the feature space as a generative model for base classes.
In this paper, we address the problem of low-shot network-expansion learning.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:527
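Editor's note: the example above combines hard distillation (the base network's weights stay frozen and only newly added output weights are trained) with a per-class GMM in feature space as a compact replacement for the stored base training data. The sketch below illustrates that combination on the last layer only; feature dimensions, the GMM size, and the placeholder features are assumptions, not the paper's setup.

# Sketch of hard-distillation network expansion: freeze the base classifier
# head, add trainable weights for the novel classes, and train them on the few
# novel examples plus base-class features replayed from small per-class GMMs.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture

d_feat, n_base, n_novel = 256, 10, 2

# Frozen base classifier head (stands in for the last layer of the base net).
base_head = nn.Linear(d_feat, n_base)
for p in base_head.parameters():
    p.requires_grad = False                        # hard distillation: base logits never change

# Compact generative memory: one small GMM per base class, fitted in feature space.
base_feats = [np.random.randn(500, d_feat) + c for c in range(n_base)]   # placeholder features
gmms = [GaussianMixture(n_components=3).fit(f) for f in base_feats]

# New, trainable weights for the novel classes only.
novel_head = nn.Linear(d_feat, n_novel)
opt = torch.optim.Adam(novel_head.parameters(), lr=1e-3)

novel_x = torch.randn(5 * n_novel, d_feat)          # low-shot: 5 feature vectors per novel class
novel_y = torch.arange(n_base, n_base + n_novel).repeat_interleave(5)

for _ in range(200):
    # Replay base classes from the GMMs instead of stored training images.
    xs, ys = [], []
    for c, gm in enumerate(gmms):
        s, _ = gm.sample(16)
        xs.append(torch.tensor(s, dtype=torch.float32))
        ys.append(torch.full((16,), c, dtype=torch.long))
    x = torch.cat(xs + [novel_x])
    y = torch.cat(ys + [novel_y])

    logits = torch.cat([base_head(x), novel_head(x)], dim=1)   # expanded classifier
    loss = F.cross_entropy(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()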