input | output | metadata | _instance_id
---|---|---|---|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
This paper studies the undesired phenomena of over-sensitivity of representations learned by deep networks to semantically-irrelevant changes in data.
We identify a cause for this shortcoming in the classical Variational Auto-encoder (VAE) objective, the evidence lower bound (ELBO).
We show that the ELBO fails to control the behaviour of the encoder out of the support of the empirical data distribution and this behaviour of the VAE can lead to extreme errors in the learned representation.
This is a key hurdle in the effective use of representations for data-efficient learning and transfer.
To address this problem, we propose to augment the data with specifications that enforce insensitivity of the representation with respect to families of transformations.
To incorporate these specifications, we propose a regularization method that is based on a selection mechanism that creates a fictive data point by explicitly perturbing an observed true data point.
For certain choices of parameters, our formulation naturally leads to the minimization of the entropy regularized Wasserstein distance between representations.
We illustrate our approach on standard datasets and experimentally show that significant improvements in the downstream adversarial accuracy can be achieved by learning robust representations completely in an unsupervised manner, without a reference to a particular downstream task and without a costly supervised adversarial training procedure.
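The entropy-regularized Wasserstein distance mentioned above is typically approximated with Sinkhorn iterations; the sketch below is a minimal NumPy illustration of that computation between two batches of latent codes (function and variable names are ours, not the paper's).

```python
import numpy as np

def sinkhorn_distance(z_a, z_b, eps=0.1, n_iters=200):
    """Entropy-regularized Wasserstein (Sinkhorn) distance between two
    batches of latent codes, using squared Euclidean ground cost."""
    n, m = len(z_a), len(z_b)
    cost = ((z_a[:, None, :] - z_b[None, :, :]) ** 2).sum(-1)  # pairwise cost
    K = np.exp(-cost / eps)                          # Gibbs kernel
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)  # uniform marginals
    u, v = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    for _ in range(n_iters):                         # Sinkhorn fixed-point updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    transport = u[:, None] * K * v[None, :]          # entropic coupling
    return float((transport * cost).sum())

# Toy usage: latent codes of examples and of slightly perturbed copies.
rng = np.random.default_rng(0)
z_clean = rng.normal(size=(64, 16))
z_perturbed = z_clean + 0.05 * rng.normal(size=(64, 16))
print(sinkhorn_distance(z_clean, z_perturbed))
```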
Representation learning is a fundamental problem in Machine learning and holds the promise to enable data-efficient learning and transfer to new tasks.
Researchers working in domains like Computer Vision (Krizhevsky et al., 2012) and Natural Language Processing (Devlin et al., 2018) have already demonstrated the effectiveness of representations and features computed by deep architectures for the solution of other tasks.
A case in point is the example of the FC7 features from the AlexNet image classification architecture that have been used for many other vision problems (Krizhevsky et al., 2012) .
The effectiveness of learned representations has given new impetus to research in representation learning, leading to a lot of work being done on the development of techniques for inducing representations from data having desirable properties like disentanglement and compactness (Burgess et al., 2018; Achille & Soatto, 2017; Bengio, 2013; Locatello et al., 2019) .
Many popular techniques for generating representation are based on the Variational AutoEncoders (VAE) model (Kingma & Welling, 2013; Rezende et al., 2014) .
The use of deep networks as universal function approximators has facilitated very rapid advancements, with samples generated from these models often being indistinguishable from natural data.
While the quality of generated examples can provide significant convincing evidence that a generative model is flexible enough to capture the variability in the data distribution, it is far from a formal guarantee that the representation is fit for other purposes.
In fact, if the actual goal is learning good latent representations, evaluating generative models only based on reconstruction fidelity and subjective quality of typical samples is neither sufficient nor entirely necessary, and can be even misleading.
In this paper, we uncover the problematic failure mode where representations learned by VAEs exhibit over-sensitivity to semantically-irrelevant changes in data.
One example of such problematic behaviour can be seen in Figure 1 .
We identify a cause for this shortcoming in the classical Variational Auto-encoder (VAE) objective, the evidence lower bound (ELBO), that fails to control the behaviour of the encoder out of the support of the empirical data distribution.
We show this behaviour of the VAE can lead to extreme errors in the recovered representation by the encoder and is a key hurdle in the effective use of representations for data-efficient learning and transfer.
To address this problem, we propose to augment the data with properties that enforce insensitivity of the representation with respect to families of transformations.
To incorporate these specifications, we propose a regularization method that is based on a selection mechanism that creates a fictive data point by explicitly perturbing an observed true data point.
For certain choices of parameters, our formulation naturally leads to the minimization of the entropy regularized Wasserstein distance between representations.
We illustrate our approach on standard datasets and experimentally show that significant improvements in the downstream adversarial accuracy can be achieved by learning robust representations completely in an unsupervised manner, without a reference to a particular downstream task and without a costly supervised adversarial training procedure.
Figure 1: An illustration of the intrinsic fragility of VAE representations.
Outputs from a Variational Autoencoder with encoder f and decoder g parametrized by η and θ, respectively, trained on CelebA.
Conditioned on the encoder input X_a = x_a, the decoder output X = g(f(x_a)) = (g ∘ f)(x_a) is shown on the top row.
When the original example is perturbed with a carefully selected vector d such that X_b = X_a + d with ‖d‖ ≤ ε, the output X turns out to be perceptually very different.
Such examples suggest that either the representations Z_a and Z_b are very different (the encoder is not smooth), or the decoder is very sensitive to small changes in the representation (the decoder is not smooth), or both.
We identify the source of the problem primarily as the encoder and propose a practical solution.
It is clear that if learned representations are overly sensitive to irrelevant changes in the input (for example, small changes in the pixels of an image or video, or inaudible frequencies added to an audio signal), models that rely on these representations are naturally susceptible to make incorrect predictions when inputs are changed.
We argue that such specifications about the robustness properties of learned representations can be one of the tractable guiding features in the search for good representations.
Based on these observations, we make the following contributions:
1. We introduce a method for learning robust latent representations by explicitly targeting a structured model that admits the original VAE model as a marginal.
We also show that in the case the target is chosen a pairwise conditional random field with attractive potentials, this choice leads naturally to the Wasserstein divergence between posterior distributions over the latent space.
This insight provides us a flexible class of robustness metrics for controlling representations learned by VAEs.
2. We develop a modification to training algorithms for VAEs to improve robustness of learned representations, using an external selection mechanism for obtaining transformed examples and by enforcing the corresponding representations to be close.
As a particular selection mechanism, we adapt attacks from adversarial supervised learning (Madry et al., 2017) to attacks on the latent representation (a sketch of such a latent-space attack is given after this list).
Using this novel unsupervised training procedure we learn encoders with adjustable robustness properties and show that these are effective at learning representations that perform well across a variety of downstream tasks.
3. We show that alternative models proposed in the literature, in particular the β-VAE model used for explicitly controlling the learned representations, or Wasserstein Generative Adversarial Networks (GANs), can also be interpreted in our framework as variational lower bound maximization.
4. We show empirically using simulation studies on MNIST, color MNIST and CelebA datasets, that models trained using our method learn representations that provide a higher degree of adversarial robustness even without supervised adversarial training.
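As a rough illustration of the selection mechanism in contribution 2 (referenced in the list above), the following PyTorch sketch searches for a small input perturbation that maximally moves the latent code, assuming an encoder that returns a deterministic latent vector; all names are illustrative and this is not the authors' implementation.

```python
import torch

def latent_pgd_perturbation(encoder, x, eps=0.1, step=0.02, n_steps=10):
    """Find a small perturbation d (||d||_inf <= eps) that maximizes the
    distance between the latent codes of x and x + d, as a selection
    mechanism for a 'fictive' data point."""
    z_clean = encoder(x).detach()
    d = torch.zeros_like(x, requires_grad=True)
    for _ in range(n_steps):
        z_pert = encoder(x + d)
        # Squared Euclidean latent divergence; the paper's Wasserstein
        # objective could be substituted here.
        loss = ((z_pert - z_clean) ** 2).sum()
        loss.backward()
        with torch.no_grad():
            d += step * d.grad.sign()   # ascent step (PGD-style)
            d.clamp_(-eps, eps)         # project back onto the eps-ball
            d.grad.zero_()
    return d.detach()

# Toy usage with a linear stand-in encoder.
enc = torch.nn.Linear(784, 32)
x = torch.rand(8, 784)
d = latent_pgd_perturbation(enc, x)
# A regularizer would then penalize the divergence between encoder(x)
# and encoder(x + d) during training.
```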
In this paper, we have introduced a method for improving robustness of latent representations learned by a VAE.
It must be stressed that our goal is not building the most powerful adversarially robust supervised classifier, but obtaining a method for learning generic representations that can be used for several tasks; the tasks can be even unknown at the time of learning the representations.
While the nominal accuracy of an unsupervised approach is expected to be inferior to a supervised training method that is informed by extra label information, we observe that significant improvements in adversarial robustness can be achieved by our approach that forces smooth representations.
|
We propose a method for computing adversarially robust representations in an entirely unsupervised way.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:257
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We propose a novel score-based approach to learning a directed acyclic graph (DAG) from observational data.
We adapt a recently proposed continuous constrained optimization formulation to allow for nonlinear relationships between variables using neural networks.
This extension allows us to model complex interactions while being more global in its search compared to other greedy approaches.
In addition to comparing our method to existing continuous optimization methods, we provide missing empirical comparisons to nonlinear greedy search methods.
On both synthetic and real-world data sets, this new method outperforms current continuous methods on most tasks while being competitive with existing greedy search methods on important metrics for causal inference.
Structure learning and causal inference have many important applications in different areas of science such as genetics [5, 12] , biology [13] and economics [7] .
Bayesian networks (BN), which encode conditional independencies using directed acyclic graphs (DAG), are powerful models which are both interpretable and computationally tractable.
Causal graphical models (CGM) [12] are BNs which support interventional queries like: What will happen if someone external to the system intervene on variable X?
Recent work suggests that causality could partially solve challenges faced by current machine learning systems such as robustness to out-of-distribution samples, adaptability and explainability [8, 6] .
However, structure and causal learning are daunting tasks due to both the combinatorial nature of the space of structures and the question of structure identifiability [12] .
Nevertheless, the known qualities of these graphical models and their promise of improvement for machine intelligence render the quest for structure/causal learning appealing.
The problem of structure learning can be seen as an inverse problem in which the learner tries to infer the causal structure which has generated the observation.
In this work, we propose a novel score-based method [5, 12] for structure learning named GraN-DAG which makes use of a recent reformulation of the original combinatorial problem of finding an optimal DAG into a continuous constrained optimization problem.
In the original method named NOTEARS [18] , the directed graph is encoded as a weighted adjacency matrix W which represents coefficients in a linear structural equation model (SEM) [7] .
To enforce acyclicity, the authors propose a constraint which is both efficiently computable and easily differentiable.
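For reference, the differentiable acyclicity constraint proposed in NOTEARS is usually written as h(W) = tr(e^{W∘W}) − d, which is zero exactly when the weighted adjacency matrix W corresponds to a DAG. A small NumPy/SciPy sketch of that quantity (illustrative, not the GraN-DAG code):

```python
import numpy as np
from scipy.linalg import expm

def notears_acyclicity(W):
    """h(W) = tr(exp(W ∘ W)) - d; equals 0 iff W encodes a DAG.
    W is a d x d weighted adjacency matrix."""
    d = W.shape[0]
    return np.trace(expm(W * W)) - d   # elementwise square keeps entries >= 0

# A DAG (strictly upper-triangular W) satisfies the constraint; a cycle violates it.
W_dag = np.array([[0.0, 1.5, 0.0],
                  [0.0, 0.0, 2.0],
                  [0.0, 0.0, 0.0]])
W_cyc = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [0.5, 0.0, 0.0]])
print(notears_acyclicity(W_dag))   # ~0
print(notears_acyclicity(W_cyc))   # > 0
```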
Most popular score-based methods for DAG learning usually tackle the combinatorial nature of the problem via greedy search procedures relying on multiple heuristics [3, 2, 11] .
Moving toward the continuous paradigm allows one to use gradient-based optimization algorithms instead of hand-designed greedy search algorithms.
Our first contribution is to extend the work of [18] to deal with nonlinear relationships between variables using neural networks (NN) [4] .
GraN-DAG is general enough to deal with a large variety of parametric families of conditional probability distributions.
To adapt the acyclicity constraint to our nonlinear model, we use an argument similar to what is used in [18] and apply it first at the level of neural network paths and then at the level of graph paths.
Our adapted constraint allows us to exploit the full flexibility of NNs.
On both synthetic and real-world tasks, we show GraN-DAG outperforms other approaches which leverage the continuous paradigm, including DAG-GNN [16] , a recent nonlinear extension of [18] independently developed which uses an evidence lower bound as score.
Our second contribution is to provide a missing empirical comparison to existing methods that support nonlinear relationships but tackle the optimization problem in its discrete form using greedy search procedures such as CAM [2] .
We show that GraN-DAG is competitive on the wide range of tasks we considered.
We suppose the natural phenomenon of interest can be described by a random vector X ∈ R^d entailed by an underlying CGM (P_X, G), where P_X is a probability distribution over X and G = (V, E) is a DAG [12] .
Each node i ∈ V corresponds to exactly one variable in the system.
Let π_i^G denote the set of parents of node i in G and let X_{π_i^G} denote the random vector containing the variables corresponding to the parents of i in G. We assume there are no hidden variables.
In a CGM, the distribution P_X is said to be Markov to G, which means we can write the probability density function (pdf) as p(x) = ∏_{i ∈ V} p_i(x_i | x_{π_i^G}).
A CGM can be thought of as a BN in which directed edges are given a causal meaning, allowing it to answer queries regarding interventional distributions [5] .
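The Markov factorization above can be made concrete with a short sketch: given a DAG adjacency matrix and per-node conditional log-densities, the joint log-pdf is the sum of each node's conditional log-density given its parents (a generic illustration; the function names are ours).

```python
import numpy as np

def joint_log_pdf(x, adjacency, cond_log_pdfs):
    """log p(x) = sum_i log p_i(x_i | x_{parents(i)}) for a Bayesian network.

    adjacency[j, i] = 1 means j -> i (j is a parent of i).
    cond_log_pdfs[i] maps (x_i, parent_values) to a log-density.
    """
    total = 0.0
    for i in range(len(x)):
        parents = np.flatnonzero(adjacency[:, i])
        # Each term conditions only on the node's parents in the DAG.
        total += cond_log_pdfs[i](x[i], x[parents])
    return total

# Toy 2-node chain x0 -> x1 with Gaussian conditionals.
log_gauss = lambda v, mu: -0.5 * ((v - mu) ** 2 + np.log(2 * np.pi))
conds = [lambda v, pa: log_gauss(v, 0.0),          # root node
         lambda v, pa: log_gauss(v, 0.8 * pa[0])]  # child given parent
A = np.array([[0, 1], [0, 0]])
print(joint_log_pdf(np.array([0.3, 0.1]), A, conds))
```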
|
We are proposing a new score-based approach to structure/causal learning leveraging neural networks and a recent continuous constrained formulation of this problem.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:258
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We study the problem of designing provably optimal adversarial noise algorithms that induce misclassification in settings where a learner aggregates decisions from multiple classifiers.
Given the demonstrated vulnerability of state-of-the-art models to adversarial examples, recent efforts within the field of robust machine learning have focused on the use of ensemble classifiers as a way of boosting the robustness of individual models.
In this paper, we design provably optimal attacks against a set of classifiers.
We demonstrate how this problem can be framed as finding strategies at equilibrium in a two player, zero sum game between a learner and an adversary and consequently illustrate the need for randomization in adversarial attacks.
The main technical challenge we consider is the design of best response oracles that can be implemented in a Multiplicative Weight Updates framework to find equilibrium strategies in the zero-sum game.
We develop a series of scalable noise generation algorithms for deep neural networks, and show that it outperforms state-of-the-art attacks on various image classification tasks.
Although there are generally no guarantees for deep learning, we show this is a well-principled approach in that it is provably optimal for linear classifiers.
The main insight is a geometric characterization of the decision space that reduces the problem of designing best response oracles to minimizing a quadratic function over a set of convex polytopes.
In this paper, we study adversarial attacks that induce misclassification when a learner has access to multiple classifiers.
One of the most pressing concerns within the field of AI has been the welldemonstrated sensitivity of machine learning algorithms to noise and their general instability.
Seminal work by has shown that adversarial attacks that produce small perturbations can cause data points to be misclassified by state-of-the-art models, including neural networks.
In order to evaluate classifiers' robustness and improve their training, adversarial attacks have become a central focus in machine learning and security BID21 BID17 BID23. Adversarial
attacks induce misclassification by perturbing data points past the decision boundary of a particular class. In the case
of binary linear classifiers, for example, the optimal perturbation is to push points in the direction perpendicular to the separating hyperplane. For non-linear
models there is no general characterization of an optimal perturbation, though attacks designed for linear classifiers tend to generalize well to deep neural networks BID21. Since a learner
may aggregate decisions using multiple classifiers, a recent line of work has focused on designing attacks on an ensemble of different classifiers BID31 BID0 BID13 . In particular,
this line of work shows that an entire set of state-of-the-art classifiers can be fooled by using an adversarial attack on an ensemble classifier that averages the decisions of the classifiers in that set. Given that attacking
an entire set of classifiers is possible, the natural question is then: What is the most effective approach to design attacks on a set of multiple classifiers? The main challenge when
considering attacks on multiple classifiers is that fooling a single model, or even the ensemble classifier (i.e. the model that classifies a data point by averaging individual predictions), provides no guarantees that the learner will fail to classify correctly. Models may have different
decision boundaries, and perturbations that affect one may be ineffective on another. Furthermore, a learner can
randomize over classifiers and avoid deterministic attacks (see Figure 1).
Figure 1: Illustration of why randomization is necessary to compute optimal adversarial attacks. In this example using binary linear
classifiers, there is a single point that is initially classified correctly by two classifiers c1, c2, and a fixed noise budget α in the ℓ2 norm. A naive adversary who chooses a noise
perturbation deterministically will always fail to trick the learner since she can always select the remaining classifier. An optimal adversarial attack in this
scenario consists of randomizing with equal probability amongst both noise vectors. In this paper, we present a principled approach for attacking a set of classifiers which proves to be highly effective. We show that constructing optimal adversarial
attacks against multiple classifiers is equivalent to finding strategies at equilibrium in a zero sum game between a learner and an adversary. It is well known that strategies at equilibrium
in a zero sum game can be obtained by applying the celebrated Multiplicative Weights Update framework, given an oracle that computes a best response to a randomized strategy. The main technical challenge we address pertains
to the characterization and implementation of such oracles. Our main contributions can be summarized as follows:
• We describe the Noise Synthesis FrameWork (henceforth NSFW) for generating adversarial attacks. This framework reduces the problem of designing optimal adversarial attacks for a general set of classifiers to constructing a best response oracle in a two player, zero sum game between a learner and an adversary;
• We show that NSFW is an effective approach for designing adversarial noise that fools neural networks. In particular, applying projected gradient descent on an appropriately chosen loss function as a proxy for a best response oracle achieves performance that significantly improves upon current state-of-the-art attacks (see results in Figure 2);
• We show that applying projected gradient descent on an appropriately chosen loss function is a well-principled approach. We do so by proving that for linear classifiers such an approach yields an optimal adversarial attack if the equivalent game has a pure Nash equilibrium. This result is shown via a geometric characterization of the decision boundary space which reduces the problem of designing optimal attacks to a convex program;
• If the game does not have a pure Nash equilibrium, there is an algorithm for finding an optimal adversarial attack for linear classifiers whose runtime is exponential in the number of classifiers. We show that finding an optimal strategy in this case is NP-hard.
Paper organization. Following a discussion on related work, in Section 2 we
formulate the problem of designing optimal adversarial noise and show how it can be modeled as finding strategies at equilibrium in a two player, zero sum game. Afterwards, we discuss our approach for finding such strategies
using MWU and proxies for best response oracles. In Section 2.1, we justify our approach by proving guarantees
for linear classifiers. Lastly, in Section 3, we present our experiments. Additional related
work. The field of adversarial attacks on machine learning classifiers has
recently received widespread attention from a variety of perspectives BID1 BID9 BID25 BID3 . In particular, a significant amount of effort has been devoted to computing
adversarial examples that induce misclassification across multiple models BID22 BID21 . There has been compelling evidence which empirically demonstrates the effectiveness
of ensembles as a way of both generating and defending against adversarial attacks. For example, BID31 establish the strengths of ensemble training as a defense against
adversarial attacks. Conversely, provide the first set of experiments showing that attacking an ensemble
classifier is an effective way of generating adversarial examples that transfer to the underlying models. Relative to their investigation, our work differs in certain key aspects. Rather than
analyzing adversarial noise from a security perspective and developing methods
for black-box attacks, we approach the problem from a theoretical point of view and introduce a formal characterization of the optimal attack against a set of classifiers. Furthermore, by analyzing noise in the linear setting, we design algorithms for this task
that have strong guarantees of performance. Through our experiments, we demonstrate how these algorithms motivate a natural extension
for noise in deep learning that achieves state-of-the-art results.
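The Multiplicative Weights Update scheme described above can be illustrated on the two-classifier toy example of Figure 1: the adversary maintains a distribution over candidate noise vectors and the learner best-responds each round. The sketch below assumes an explicit payoff matrix and exact best responses, whereas the paper's NSFW uses learned best-response oracles; all names are illustrative.

```python
import numpy as np

def mwu_zero_sum(payoff, n_rounds=500, eta=0.1):
    """Multiplicative Weights Update for the row player (the adversary) in a
    zero-sum game. payoff[i, j] = chance that noise vector i fools the learner
    when the learner picks classifier j. The column player best-responds each
    round; the averaged strategies approximate an equilibrium."""
    n_rows, n_cols = payoff.shape
    w = np.ones(n_rows)
    avg_row, avg_col = np.zeros(n_rows), np.zeros(n_cols)
    for _ in range(n_rounds):
        p = w / w.sum()                 # adversary's current mixed strategy
        j = np.argmin(p @ payoff)       # learner's best response (minimizes damage)
        avg_row += p
        avg_col[j] += 1
        w *= np.exp(eta * payoff[:, j]) # reward noise vectors that fool classifier j
    return avg_row / n_rounds, avg_col / n_rounds

# Figure-1 toy: noise vector i only fools classifier i.
payoff = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
adv, learner = mwu_zero_sum(payoff)
print(adv, learner)   # both approach the uniform (1/2, 1/2) mixture
```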
Designing adversarial attacks when a learner has access to multiple classifiers is a non-trivial problem.
In this paper we introduced NSFW which is a principled approach that is provably optimal on linear classifiers and empirically effective on neural networks.
The main technical crux is in designing best response oracles which we achieve through a geometrical characterization of the optimization landscape.
We believe NSFW can generalize to domains beyond those in this paper.
|
Paper analyzes the problem of designing adversarial attacks against multiple classifiers, introducing algorithms that are optimal for linear classifiers and which provide state-of-the-art results for deep learning.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:259
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Dropout is a simple yet effective technique to improve generalization performance and prevent overfitting in deep neural networks (DNNs).
In this paper, we discuss three novel observations about dropout to better understand the generalization of DNNs with rectified linear unit (ReLU) activations: 1) dropout is a smoothing technique that encourages each local linear model of a DNN to be trained on data points from nearby regions; 2) a constant dropout rate can result in effective neural-deactivation rates that are significantly different for layers with different fractions of activated neurons; and 3) the rescaling factor of dropout causes an inconsistency to occur between the normalization during training and testing conditions when batch normalization is also used.
The above leads to three simple but nontrivial improvements to dropout resulting in our proposed method "Jumpout".
Jumpout samples the dropout rate using a monotone decreasing distribution (such as the right part of a truncated Gaussian), so the local linear model at each data point is trained, with high probability, to work better for data points from nearby than from more distant regions.
Instead of tuning a dropout rate for each layer and applying it to all samples, jumpout moreover adaptively normalizes the dropout rate at each layer and every training sample/batch, so the effective dropout rate applied to the activated neurons is kept the same.
Moreover, we rescale the outputs of jumpout for a better trade-off that keeps both the variance and mean of neurons more consistent between training and test phases, which mitigates the incompatibility between dropout and batch normalization.
Compared to the original dropout, jumpout shows significantly improved performance on CIFAR10, CIFAR100, Fashion- MNIST, STL10, SVHN, ImageNet-1k, etc., while introducing negligible additional memory and computation costs.
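A minimal sketch of the three modifications as they are described in this abstract: a dropout rate sampled from the right half of a truncated Gaussian, normalization of the rate by the fraction of active ReLU units, and inverted-dropout rescaling of the surviving activations. The exact normalization and rescaling here are our reading of the description, not the released jumpout implementation.

```python
import torch

def jumpout_like(h, sigma=0.2, training=True):
    """Dropout variant applied to post-ReLU activations h (illustrative sketch).

    1) The dropout rate is sampled per batch from a half-Gaussian, so small
       perturbations are more probable than large ones.
    2) The rate is divided by the fraction of active (nonzero) units, so that
       roughly the same fraction of units is actually deactivated regardless
       of how many are active in this layer/batch.
    3) Surviving activations are rescaled (inverted dropout) to keep the
       train-time statistics closer to the test-time ones.
    """
    if not training:
        return h
    p = torch.randn(()).abs() * sigma                     # half-Gaussian sample
    p = torch.clamp(p, 0.0, 0.9)
    active_frac = (h > 0).float().mean().clamp(min=1e-6)
    p_eff = torch.clamp(p / active_frac, 0.0, 0.9)        # normalize by active fraction
    mask = (torch.rand_like(h) > p_eff).float()
    return h * mask / (1.0 - p_eff)

# Usage: x = torch.relu(layer(x)); x = jumpout_like(x, training=model.training)
```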
Deep learning has achieved remarkable success on a variety of machine learning tasks BID15 BID14 .
Deep neural networks (DNN), however, are often able to fit the training data perfectly -this can result in the overfitting problem, thereby weakening the generalization performance on unseen data.
Dropout BID17 BID7 is a simple yet effective technique to mitigate such problems by randomly setting the activations of hidden neurons to 0, a strategy that reduces co-adaptation amongst neurons.
Dropout applies to any layer in a DNN without causing significant additional computational overhead. Dropout, however, has several drawbacks.
Firstly, dropout rates, constituting extra hyper-parameters at each layer, need to be tuned to get optimal performance.
Too high a dropout rate can slow the convergence rate of the model, and often hurt final performance.
Too low a rate yields few or no improvements on generalization performance.
Ideally, dropout rates should be tuned separately for each layer and also during various training stages.
In practice, to reduce computation, we often tune a single dropout rate and keep it constant for all dropout layers and throughout the training process. If we treat dropout as a type of perturbation on each training sample, it acts to generalize the DNN to noisy samples having that specific expected amount of perturbation (due to the fixed dropout rate) with high probability.
The fixed rate rules out samples typically having less perturbation, i.e., those potentially more likely to be closer to the original samples and thus potentially more helpful for improving generalization.
Also, when a constant dropout rate is applied to layers and samples having different fractions of activated neurons, the effective dropout rate (i.e., the proportion of the activated neurons that are deactivated by dropout) varies, which might result in too much perturbation for some layers and samples and too little for others. Another deficiency of dropout lies in its incompatibility with batch normalization (BN) BID8 (more empirical evidence of this is shown in Section 3.3).
As dropout randomly shuts down activated neurons, it needs to rescale the undropped neurons to match the original overall activation gain of the layer.
Unfortunately, such rescaling breaks the consistency of the normalization parameters required between training and test phases, and may cause poor behavior when used with BN.
Since BN, and its variants BID0 BID18 BID20 , has become an almost indispensable component of modern DNN architectures to keep the training stable and to accelerate convergence, dropout itself often gets dropped out in the choice between these two non-complementary options and has recently become less popular.
|
Jumpout applies three simple yet effective modifications to dropout, based on novel understandings about the generalization performance of DNN with ReLU in local regions.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:26
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Multiagent systems where the agents interact among themselves and with an stochastic environment can be formalized as stochastic games.
We study a subclass of these games, named Markov potential games (MPGs), that appear often in economic and engineering applications when the agents share some common resource.
We consider MPGs with continuous state-action variables, coupled constraints and nonconvex rewards.
Previous analysis followed a variational approach that is only valid for very simple cases (convex rewards, invertible dynamics, and no coupled constraints); or considered deterministic dynamics and provided open-loop (OL) analysis, studying strategies that consist in predefined action sequences, which are not optimal for stochastic environments.
We present a closed-loop (CL) analysis for MPGs and consider parametric policies that depend on the current state and where agents adapt to stochastic transitions.
We provide easily verifiable, sufficient and necessary conditions for a stochastic game to be an MPG, even for complex parametric functions (e.g., deep neural networks); and show that a closed-loop Nash equilibrium (NE) can be found (or at least approximated) by solving a related optimal control problem (OCP).
This is useful since solving an OCP---which is a single-objective problem---is usually much simpler than solving the original set of coupled OCPs that form the game---which is a multiobjective control problem.
This is a considerable improvement over the previously standard approach for the CL analysis of MPGs, which gives no approximate solution if no NE belongs to the chosen parametric family, and which is practical only for simple parametric forms.
We illustrate the theoretical contributions with an example by applying our approach to a noncooperative communications engineering game.
We then solve the game with a deep reinforcement learning algorithm that learns policies that closely approximates an exact variational NE of the game.
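As a rough, generic illustration of the pipeline this abstract describes (the potential function of a verified MPG becomes the reward of a single-agent OCP over the joint parametric policy, which is then optimized with a policy-gradient method; the paper uses TRPO, this sketch uses plain REINFORCE), with placeholder dynamics, reward, and policy rather than the paper's communications game:

```python
import numpy as np

# Placeholder single-agent OCP: scalar state x, joint action a = (a_1, ..., a_N)
# drawn from Gaussian policies a_k ~ N(w_k * x, 0.1^2), reward = potential function.
N, gamma, lr, horizon = 2, 0.95, 1e-3, 20
w = np.zeros(N)                                  # joint parametric policy

def potential_reward(x, a):                      # stand-in potential function
    return -np.sum((a - 0.5 * x) ** 2)

def step(x, a):                                  # stand-in state-transition dynamics
    return 0.9 * x + 0.1 * np.sum(a)

for _ in range(2000):
    x, grads, rewards = 1.0, [], []
    for t in range(horizon):                     # roll out the joint policy
        mean = w * x
        a = mean + 0.1 * np.random.randn(N)
        grads.append((a - mean) / 0.01 * x)      # grad of log N(a; w*x, 0.1^2) wrt w
        rewards.append(potential_reward(x, a))
        x = step(x, a)
    returns = np.array([sum(gamma ** (k - t) * rewards[k] for k in range(t, horizon))
                        for t in range(horizon)])
    w += lr * sum(g * R for g, R in zip(grads, returns))   # REINFORCE update
```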
In a noncooperative stochastic dynamic game, the agents compete in a time-varying environment, which is characterized by a discrete-time dynamical system equipped with a set of states and a state-transition probability distribution.
Each agent has an instantaneous reward function, which can be stochastic and depends on agents' actions and current system state.
We consider that both the state and action sets are subsets of real vector spaces and subject to coupled constraints, as usually required by engineering applications. A dynamic game starts at some initial state.
Then, the agents take some action and the game moves to another state and gives some reward values to the agents.
This process is repeated at every time step over a (possibly) infinite time horizon.
The aim of each agent is to find the policy that maximizes its expected long term return given other agents' policies.
Thus, a game can be represented as a set of coupled optimal-control-problems (OCPs), which are difficult to solve in general. OCPs are usually analyzed for two cases, namely open-loop (OL) or closed-loop (CL), depending on the information that is available to the agents when making their decisions.
In the OL analysis, the action is a function of time, so that we find an optimal sequence of actions that will be executed in order, without feedback after any action.
In the CL setting, the action is a mapping from the state, usually referred as feedback policy or simply policy, so the agent can adapt its actions based on feedback from the environment (the state transition) at every time step.
For deterministic systems, both OL and CL solutions can be optimal and coincide in value.
But for stochastic systems, an OL strategy consisting in a precomputed sequence of actions cannot adapt to the stochastic dynamics, so it is unlikely to be optimal.
Thus, CL solutions are usually preferred over OL solutions. For dynamic games, the situation is more involved than for OCPs, see, e.g., BID1 .
In an OL dynamic game, agents' actions are functions of time, so that an OL equilibrium can be visualized as a set of state-action trajectories.
In a CL dynamic game, agents' actions depend on the current state variable, so that, at every time step, they have to consider how their opponents would react to deviations from the equilibrium trajectory that they have followed so far, i.e., a CL equilibrium might be visualized as a set of trees of state-action trajectories.
The sets of OL and CL equilibria are generally different even for deterministic dynamic games BID10 BID5. The
CL analysis of dynamic games with continuous variables is challenging and has only been addressed for simple cases. The situation is even more complicated when we consider coupled constraints, since each agent's actions must belong to a set that depends on the other agents' actions. These
games, where the agents interact strategically not only with their rewards but also at the level of the feasible sets, are known as generalized Nash equilibrium problems BID3. There
is a class of games, named Markov potential games (MPGs), for which the OL analysis shows that NE can be found by solving a single OCP; see BID6 BID25 for recent surveys on MPGs. Thus,
the benefit of MPGs is that solving a single OCP is generally simpler than solving a set of coupled OCPs. MPGs
appear often in economics and engineering applications, where multiple agents share a common resource (a raw material, a communication link, a transportation link, an electrical transmission line) or limitations (a common limit on the total pollution in some area). Nevertheless
, to our knowledge, no previous study has provided a practical method for finding CL Nash equilibrium (CL-NE) for continuous MPGs. Indeed, to our knowledge, no previous work has proposed a practical method for finding or approximating CL-NE for any class of Markov games with continuous variables and coupled constraints. State-of-the-art
works on learning CL-NE for general-sum Markov games did not consider coupled constraints and assumed finite state-action sets BID18 BID16. In this work, we
extend previous OL analysis due to BID26 BID23 and tackle the CL analysis of MPGs with coupled constraints. We assume that the
agents' policies lie in a parametric set. This assumption makes
derivations simpler, allowing us to prove that, under some potentiality conditions on the reward functions, a game is an MPG. We also show that, similar
to the OL case, the Nash equilibrium (NE) for the approximate game can be found as an optimal policy of a related OCP. This is a practical approach
for finding or at least approximating NE, since if the parametric family is expressive enough to represent the complexities of the problem under study, we can expect that the parametric solution will approximate an equilibrium of the original MPG well (under mild continuity assumptions, small deviations in the parametric policies should translate to small perturbations in the value functions). We remark that this parametric
policy assumption has been widely used for learning the solution of single-agent OCPs with continuous state-action sets; see, e.g., BID9 Melo and Lopes, 2008; BID17 BID24 BID20 . Here, we show that the same idea
can be extended to MPGs in a principled manner. Moreover, once we have formulated the related OCP, we can apply reinforcement learning techniques to find an optimal solution. Some recent works have applied deep
reinforcement learning (DRL) to cooperative Markov games BID4 BID22 , which are a particular case of MPGs. Our results show that similar approaches
can be used for more general MPGs.
We have extended previous results on MPGs with constrained continuous state-action spaces providing practical conditions and a detailed analysis of Nash equilibrium with parametric policies, showing that a PCL-NE can be found by solving a related OCP.
Having established a relationship between a MPG and an OCP is a significant step for finding an NE, since we can apply standard optimal control and reinforcement learning techniques.
We illustrated the theoretical results by applying TRPO (a well known DRL method) to an example engineering application, obtaining a PCL-NE that yields near optimal results, very close to an exact variational equilibrium.
A EXAMPLE: THE "GREAT FISH WAR" GAME - STANDARD APPROACH
Let us illustrate the standard approach described in Section 3 with a well known resource-sharing game named "the great fish war" due to BID11 .
We follow (González-Sánchez and Hernández-Lerma, 2013, Sec. 4.2).
Example 1.
Let x_i be the stock of fish at time i, in some fishing area.
Suppose there are N countries obtaining reward from fish consumption, so that they aim to solve the following game: DISPLAYFORM0 where x_0 ≥ 0 and 0 < α < 1 are given. In order to solve G_fish, let us express each agent's action as: DISPLAYFORM1 so that the rewards can be also expressed in reduced form, as required by the standard approach: DISPLAYFORM2 Thus, the Euler equations for every agent k ∈ N and all t = 0, . . . , ∞ become: DISPLAYFORM3 Now, the standard method consists in guessing a family of parametric functions that replaces the policy, and checking whether such a parametric policy satisfies (32) for some parameter vector.
Let us try with policies that are linear mappings of the state: DISPLAYFORM4 By replacing (33) in (32), we obtain the following set of equations: DISPLAYFORM5 Fortunately, it turns out that (34) has a solution (which might not be the case for other policy parametrizations), with parameters given by: DISPLAYFORM6 Since 0 < α < 1 and 0 ≤ γ < 1, it is apparent that w_k > 0 and the constraint π_k(x_i) ≥ 0 holds for all x_i ≥ 0.
Moreover, since ∑_{k∈N} w_k < 1, we have that x_{i+1} ≥ 0 for any x_0 ≥ 0.
In addition, since x_i is a resource and the actions must be nonnegative, it follows that lim_{i→∞} x_i = 0 (there is no reason to save some resource).
Therefore, the transversality condition holds.
Since the rewards are concave, the states are non-negative and the linear policies with these coefficients satisfy the Euler and transversality equations, we conclude that they constitute an equilibrium (González-Sánchez and Hernández-Lerma, 2013, Theorem 4.1).
B EXAMPLE: "GREAT FISH WAR" GAME - PROPOSED APPROACH
In this section, we illustrate how to apply the proposed approach with the same "the great fish war" example, obtaining the same results as with the standard approach. Example 2. Consider
"the great fish war" game described in Example 1. In order
to use our approach, we replace the generic policy with the specific policy mapping of our preference. We choose
the linear mapping, π_k(x_i) = w_k x_i, to be able to compare the results with those obtained with the standard approach. Thus, we
have the following game: DISPLAYFORM7 Let us verify conditions FORMULA9 - FORMULA9 . For all
k, j ∈ N we have: DISPLAYFORM8 DISPLAYFORM9 Since conditions FORMULA9 - FORMULA9 hold, we conclude that FORMULA5 is an MPG. By applying
the line integral FORMULA2 , we obtain: DISPLAYFORM10 Now, we can solve OCP (16) with potential function (43). For this particular
problem, it is easy to solve the KKT system in closed form. Introduce a shorthand: DISPLAYFORM11 The Euler-Lagrange equation (62) for this problem becomes: DISPLAYFORM12 The optimality condition (64) with respect to the policy parameter becomes: DISPLAYFORM13 Let us solve for β_i in (46): DISPLAYFORM14 Replacing FORMULA6 and the state-transition dynamics in FORMULA6 , we obtain the following set of equations: DISPLAYFORM15 Hence, the parameters can be obtained as: DISPLAYFORM16 This is exactly the same solution that we obtained in Example 1 with the standard approach. We remark that for the
standard approach, we were able to obtain the policy parameters since we put the correct parametric form of the policy in the Euler equation. If we had used another
parametric family without a linear term, the Euler equations (32) might have no solution and we would have got stuck. In contrast, with our
approach, we could freely choose any other form of the parametric policy, and always solve the KKT system of the approximate game. Broadly speaking, we
can say that the more expressive the parametric family, the more likely that the optimal policy of the original game will be accurately approximated by the optimal solution of the approximate game.
|
We present general closed loop analysis for Markov potential games and show that deep reinforcement learning can be used for learning approximate closed-loop Nash equilibrium.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:260
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We make the following striking observation: fully convolutional VAE models trained on 32x32 ImageNet can generalize well, not just to 64x64 but also to far larger photographs, with no changes to the model.
We use this property, applying fully convolutional models to lossless compression, demonstrating a method to scale the VAE-based 'Bits-Back with ANS' algorithm for lossless compression to large color photographs, and achieving state of the art for compression of full size ImageNet images.
We release Craystack, an open source library for convenient prototyping of lossless compression using probabilistic models, along with full implementations of all of our compression results.
Bits back coding (Wallace, 1990; Hinton & van Camp, 1993 ) is a method for performing lossless compression using a latent variable model.
In an ideal implementation, the method can achieve an expected message length equal to the variational free energy, often referred to as the negative evidence lower bound (ELBO) of the model.
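That identity (expected message length equal to the negative ELBO) directly yields the target compression rate; a small sketch of the implied bookkeeping, assuming the model reports its per-image log-likelihood terms in nats (names and toy numbers are illustrative):

```python
import numpy as np

def bits_back_rate(log_px_given_z, log_pz, log_qz_given_x, n_dims):
    """Ideal bits-back message length in bits per dimension.

    ELBO = E_q[ log p(x|z) + log p(z) - log q(z|x) ]  (in nats);
    the expected message length of an ideal bits-back coder is -ELBO,
    converted to bits by dividing by ln 2.
    """
    elbo_nats = log_px_given_z + log_pz - log_qz_given_x
    return -elbo_nats / (np.log(2) * n_dims)

# Toy numbers for a 32x32x3 image (3072 dimensions): ~2.9 bits/dim.
print(bits_back_rate(log_px_given_z=-6200.0, log_pz=-180.0,
                     log_qz_given_x=-150.0, n_dims=32 * 32 * 3))
```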
Bits back was first introduced to form a theoretical argument for using the ELBO as an objective function for machine learning (Hinton & van Camp, 1993) .
The first implementation of bits back coding (Frey, 1997; Frey & Hinton, 1996) made use of first-infirst-out (FIFO) arithmetic coding (AC) (Witten et al., 1987) .
However, the implementation did not achieve optimal compression, due to an incompatibility between a FIFO coder and bits back coding, and its use was only demonstrated on a small dataset of 8×8 binary images.
In this work we introduce 'Hierarchical Latent Lossless Compression' (HiLLoC).
In our experiments (Section 4), we demonstrate that HiLLoC can be used to compress color images from the ImageNet test set at rates close to the ELBO, outperforming all of the other codecs which we benchmark.
We also demonstrate the speedup, of nearly three orders of magnitude, resulting from vectorization.
We release an open source implementation based on 'Craystack', a Python package which we have written for general prototyping of lossless compression with ANS.
Our experiments demonstrate HiLLoC as a bridge between large scale latent variable models and compression.
To do this we use simple variants of pre-existing VAE models.
Having shown that bits back coding is flexible enough to compress well with large, complex models, we see plenty of work still to be done in searching model structures (i.e. architecture search), optimizing with a trade-off between compression rate, encode/decode time and memory usage.
Particularly pertinent for HiLLoC is latent dimensionality, since compute time and memory usage both scale with this.
Since the model must be stored/transmitted to use HiLLoC, weight compression is also highly relevant.
This is a well-established research area in machine learning (Han et al., 2016; Ullrich et al., 2017) .
Our experiments also demonstrated that one can achieve good performance on a dataset of large images by training on smaller images.
This result is promising, but future work should be done to discover what the best training datasets are for coding generic images.
One question in particular is whether results could be improved by training on larger images and/or images of varying size.
We leave this to future work.
Another related direction for improvement is batch compression of images of different sizes using masking, analogous to how samples of different length may be processed in batches by recurrent neural nets.
Whilst this work has focused on latent variable models, there is also promise in applying state of the art fully observed auto-regressive models to lossless compression.
We look forward to future work investigating the performance of models such as WaveNet (van den Oord et al., 2016) for lossless audio compression as well as PixelCNN++ (Salimans et al., 2017) and the state of the art models in Menick & Kalchbrenner (2019) for images.
Sampling speed for these models, and thus decompression, scales with autoregressive sequence length, and can be very slow.
This could be a serious limitation, particularly in common applications where encoding is performed once but decoding is performed many times.
This effect can be mitigated by using dynamic programming (Le Paine et al., 2016; Ramachandran et al., 2017) , and altering model architecture (Reed et al., 2017) , but on parallel architectures sampling/decompression is still significantly slower than with VAE models.
On the other hand, fully observed models, as well as the flow based models of Hoogeboom et al. (2019) and , do not require bits back coding, and therefore do not have to pay the one-off cost of starting a chain.
Therefore they may be well suited to situations where one or a few i.i.d. samples are to be communicated.
Similar to the way that we use FLIF to code the first images for our experiments, one could initially code images using a fully observed model then switch to a faster latent variable model once a stack of bits has been built up.
We presented HiLLoC, an extension of BB-ANS to hierarchical latent variable models, and show that HiLLoC can perform well with large models.
We open-sourced our implementation, along with the Craystack package for prototyping lossless compression.
We have also explored generalization of large VAE models, and established that fully convolutional VAEs can generalize well to other datasets, including images of very different size to those they were trained on.
We have described how to compress images of arbitrary size with HiLLoC, achieving a compression rate superior to the best available codecs on ImageNet images.
We look forward to future work reuniting machine learning and lossless compression.
|
We scale up lossless compression with latent variables, beating existing approaches on full-size ImageNet images.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:261
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
State of the art computer vision models have been shown to be vulnerable to small adversarial perturbations of the input.
In other words, most images in the data distribution are both correctly classified by the model and are very close to a visually similar misclassified image.
Despite substantial research interest, the cause of the phenomenon is still poorly understood and remains unsolved.
We hypothesize that this counter intuitive behavior is a naturally occurring result of the high dimensional geometry of the data manifold.
As a first step towards exploring this hypothesis, we study a simple synthetic dataset of classifying between two concentric high dimensional spheres.
For this dataset we show a fundamental tradeoff between the amount of test error and the average distance to nearest error.
In particular, we prove that any model which misclassifies a small constant fraction of a sphere will be vulnerable to adversarial perturbations of size $O(1/\sqrt{d})$.
Surprisingly, when we train several different architectures on this dataset, all of their error sets naturally approach this theoretical bound.
As a result of the theory, the vulnerability of neural networks to small adversarial perturbations is a logical consequence of the amount of test error observed.
We hope that our theoretical analysis of this very simple case will point the way forward to explore how the geometry of complex real-world data sets leads to adversarial examples.
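The O(1/√d) behavior claimed above can be illustrated with a quick Monte Carlo experiment: fix an 'error set' of constant measure on the unit sphere (here a spherical cap, with a Gaussian approximation for its height) and watch the distance from correctly classified points to that set shrink as the dimension grows. The construction is our illustration, not the paper's exact setup.

```python
import numpy as np
from scipy.stats import norm

def mean_distance_to_error_cap(d, mu=0.01, n_samples=20000, seed=0):
    """Monte Carlo estimate of the mean distance from points on the unit
    sphere S^{d-1} to an 'error set' of measure ~mu, modeled as the
    spherical cap {x : x_1 >= tau}."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n_samples, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # uniform on the sphere
    tau = norm.ppf(1 - mu) / np.sqrt(d)             # cap height (Gaussian approx.)
    x1 = x[:, 0]
    outside = x1 < tau                              # points not in the error set
    dtheta = np.arccos(np.clip(x1[outside], -1, 1)) - np.arccos(np.clip(tau, -1, 1))
    return float(np.mean(2 * np.sin(dtheta / 2)))   # chord distance to the cap

for d in (50, 200, 800, 3200):
    print(d, mean_distance_to_error_cap(d))         # shrinks roughly like 1/sqrt(d)
```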
There has been substantial work demonstrating that standard image models exhibit the following phenomenon: most randomly chosen images from the data distribution are correctly classified and yet are close to a visually similar nearby image which is incorrectly classified BID22 .
This is often referred to as the phenomenon of adversarial examples.
These adversarially found errors can be constructed to be surprisingly robust, invariant to viewpoint, orientation and scale BID3 .
Despite some theoretical work and many proposed defense strategies BID6 BID18 BID20, the cause of this phenomenon is still poorly understood. There have been several hypotheses proposed regarding the cause of adversarial examples.
We briefly survey some of them here.
One common hypothesis is that neural network classifiers are too linear in various regions of the input space, BID17 .
Another hypothesis is that adversarial examples are off the data manifold BID2 a; BID16 .
BID6 argue that large singular values of internal weight matrices may cause the classifier to be vulnerable to small perturbations of the input. Alongside works endeavoring to explain adversarial examples, others have proposed defenses in order to increase robustness.
Some works increase robustness to small perturbations by changing the non-linearities used BID14 , distilling a large network into a small network BID20 , or using regularization BID6 .
Other works explore detecting adversarial examples using a second statistical model BID7 BID0 BID11 BID19 .
However, many of these methods have been shown to fail BID4 BID7 .
Finally, adversarial training has been shown in many instances to increase robustness BID18 BID15 BID22 .
Despite some progress on increasing robustness to adversarial perturbations, local errors have still been shown to appear for distances just beyond what is adversarially trained for BID21 .
This phenomenon is quite intriguing given that these models are highly accurate on the test set.
We hypothesize that this behavior is a naturally occurring result of the high dimensional nature of the data manifold.
In order to begin to investigate this hypothesis, we define a simple synthetic task of classifying between two concentric high dimensional spheres.
This allows us to study adversarial examples in a setting where the data manifold is well defined mathematically and where we have an analytic characterization of the decision boundary learned by the model.
Even more importantly, we can naturally vary the dimension of the data manifold and study the effect of the input dimension on the geometry of the generalization error of neural networks.
Our experiments and theoretical analysis on this dataset demonstrate the following:
• A similar behavior to that of image models occurs: most randomly chosen points from the data distribution are correctly classified and yet are "close" to an incorrectly classified input. This behavior occurs even when the test error rate is less than 1 in 10 million.
• For this dataset, there is a fundamental tradeoff between the amount of generalization error and the average distance to the nearest error. In particular, we show that any model which misclassifies a small constant fraction of the sphere will be vulnerable to adversarial perturbations of size O(1/√d).
• Neural networks trained on this dataset naturally approach this theoretical optimal tradeoff between the measure of the error set and the average distance to nearest error. This implies that in order to linearly increase the average distance to nearest error, the error rate of the model must decrease exponentially.
• We also show that models trained on this dataset may become extremely accurate even when ignoring a large fraction of the input.
We conclude with a detailed discussion about the connection between adversarial examples for the sphere and those for image models.
After training different neural network architectures on this dataset we observe a similar phenomenon to that of image models -most random points in the data distribution are both correctly classified and are close to a misclassified point.
We then explained this phenomenon for this particular dataset by proving a theoretical tradeoff between the error rate of a model and the average distance to nearest error independently of the model.
We also observed that several different neural network architectures closely match this theoretical bound. Theorem 5.1 is significant because it reduces the question of why models are vulnerable to adversarial examples to the question of why there is a small amount of classification error.
It is unclear if anything like Theorem 5.1 would hold for an image manifold, and future work should investigate whether a similar principle applies.
Our work suggests that even a small amount of classification error may sometimes logically force the existence of many adversarial examples.
This could explain why fixing the adversarial example problem has been so difficult despite substantial research interest.
For example, one recent work uses adversarial training to increase robustness in the L_∞ metric BID18 .
Although this did increase the size, ε, of the perturbation needed to reliably produce an error, local errors still remain for ε larger than those adversarially trained for BID21 . Several
defenses against adversarial examples have been proposed recently which are motivated by the assumption that adversarial examples are off the data manifold BID2 a; BID16 . Our results
challenge whether or not this assumption holds in general. As shown in
section 3 there are local errors both on and off the data manifold. Our results
raise many questions as to whether or not it is possible to completely solve the adversarial example problem without reducing test error to 0. The test error
rate of state of the art image models is non-zero; this implies that a constant fraction of the data manifold is misclassified, and the test error rate is an unbiased estimate of µ(E). Perhaps this alone
is an indication that local adversarial errors exist. The concentric spheres dataset is an extremely simple problem which is unlikely to capture all of the complexities of the geometry of a natural image manifold. Thus we cannot reach
the same conclusions about the nature of adversarial examples for real-world datasets. However, we hope that
the insights gained from this very simple case will point the way forward to explore how complex real-world data sets lead to adversarial examples.
|
We hypothesize that the vulnerability of image models to small adversarial perturbation is a naturally occurring result of the high dimensional geometry of the data manifold. We explore and theoretically prove this hypothesis for a simple synthetic dataset.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:262
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
It has been established that diverse behaviors spanning the controllable subspace of a Markov decision process can be trained by rewarding a policy for being distinguishable from other policies.
However, one limitation of this formulation is the difficulty to generalize beyond the finite set of behaviors being explicitly learned, as may be needed in subsequent tasks.
Successor features provide an appealing solution to this generalization problem, but require defining the reward function as linear in some grounded feature space.
In this paper, we show that these two techniques can be combined, and that each method solves the other's primary limitation.
To do so we introduce Variational Intrinsic Successor FeatuRes (VISR), a novel algorithm which learns controllable features that can be leveraged to provide enhanced generalization and fast task inference through the successor features framework.
We empirically validate VISR on the full Atari suite, in a novel setup wherein the rewards are only exposed briefly after a long unsupervised phase.
Achieving human-level performance on 12 games and beating all baselines, we believe VISR represents a step towards agents that rapidly learn from limited feedback.
Unsupervised learning has played a major role in the recent progress of deep learning.
Some of the earliest work of the present deep learning era posited unsupervised pre-training as a method for overcoming optimization difficulties inherent in contemporary supervised deep neural networks (Hinton et al., 2006; Bengio et al., 2007) .
Since then, modern deep neural networks have enabled a renaissance in generative models, with neural decoders allowing for the training of large scale, highly expressive families of directed models (Goodfellow et al., 2014; Van den Oord et al., 2016) as well as enabling powerful amortized variational inference over latent variables (Kingma and Welling, 2013) .
We have repeatedly seen how representations from unsupervised learning can be leveraged to dramatically improve sample efficiency in a variety of supervised learning domains (Rasmus et al., 2015; Salimans et al., 2016) .
In the reinforcement learning (RL) setting, the coupling between behavior, state visitation, and the algorithmic processes that give rise to behavior complicate the development of "unsupervised" methods.
The generation of behaviors by means other than seeking to maximize an extrinsic reward has long been studied under the psychological auspice of intrinsic motivation (Barto et al., 2004; Barto, 2013; Mohamed and Rezende, 2015) , often with the goal of improved exploration (Şimşek and Barto, 2006; Oudeyer and Kaplan, 2009; Bellemare et al., 2016) .
However, while exploration is classically concerned with the discovery of rewarding states, the acquisition of useful state representations and behavioral skills can also be cast as an unsupervised (i.e. extrinsically unrewarded) learning problem for agents interacting with an environment.
In the traditional supervised learning setting, popular classification benchmarks have been employed (with labels removed) as unsupervised representation learning benchmarks, wherein the acquired representations are evaluated based on their usefulness for some downstream task (most commonly the original classification task with only a fraction of the labels reinstated).
Analogously, we propose removing the rewards from an RL benchmark environment for unsupervised pre-training of an agent, with their subsequent reinstatement testing for dataefficient adaptation.
This setup emulates scenarios where unstructured interaction with the environment, or a closely related environment, is relatively inexpensive to acquire and the agent is expected to perform one or more tasks defined in this environment in the form of rewards.
The current state-of-the-art for RL with unsupervised pre-training comes from a class of algorithms which, independent of reward, maximize the mutual information between latent variable policies and their behavior in terms of state visitation, an objective which we refer to as behavioral mutual information (Mohamed and Rezende, 2015; Gregor et al., 2016; Eysenbach et al., 2018; Warde-Farley et al., 2018) .
These objectives yield policies which exhibit a great deal of diversity in behavior, with variational intrinsic control (Gregor et al., 2016, VIC) and diversity is all you need (Eysenbach et al., 2018, DIAYN) even providing a natural formalism for adapting to the downstream RL problem.
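To make the behavioural-mutual-information objective concrete, here is a minimal sketch of the kind of intrinsic reward these methods optimize: a discriminator q(z | s) is trained to recover the latent skill z from visited states, and log q(z | s) − log p(z) rewards the policy for being distinguishable. The discrete skill space and the network shape are assumptions for illustration; VIC, DIAYN, and the method proposed here each parameterize q and the skill space differently.

```python
import torch
import torch.nn as nn

class SkillDiscriminator(nn.Module):
    """Discriminator q(z | s) over a discrete set of skills (a sketch)."""
    def __init__(self, state_dim, n_skills):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_skills))
        # Uniform prior p(z) over skills.
        self.log_p_z = -torch.log(torch.tensor(float(n_skills)))

    def intrinsic_reward(self, state, skill_id):
        """r_int = log q(z | s) - log p(z): the policy conditioned on skill z
        is rewarded for visiting states that reveal which skill it was given."""
        log_q = torch.log_softmax(self.net(state), dim=-1)
        return log_q.gather(-1, skill_id.unsqueeze(-1)).squeeze(-1) - self.log_p_z
```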
However, both methods suffer from poor generalization and a slow inference process when the reward signal is introduced.
The fundamental problem faced by these methods is the requirement to effectively interpolate between points in the latent behavior space, as the most task-appropriate latent skill likely lies "between" those learnt during the unsupervised period.
The construction of conditional policies which efficiently and effectively generalize to latent codes not encountered during training is an open problem for such methods.
Our main contribution is to address this generalization and slow inference problem by making use of another recent advance in RL, successor features (Barreto et al., 2017) .
Successor features (SF) enable fast transfer learning between tasks that differ only in their reward function, which is assumed to be linear in some features.
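As a rough illustration of why this linear-reward assumption buys fast task inference, the sketch below regresses a task vector w from observed rewards and combines it with successor features ψ to obtain Q-values. The array shapes and the least-squares fit are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def infer_task(phi, rewards):
    """Fast task inference with successor features: the reward is assumed
    linear in features, r ≈ phi @ w, so a few observed rewards suffice to
    regress the task vector w."""
    w, *_ = np.linalg.lstsq(phi, rewards, rcond=None)
    return w

def q_from_successor_features(psi, w):
    """Q(s, a) = psi(s, a) @ w, where psi(s, a) is the expected discounted
    sum of future feature vectors under the current policy."""
    return psi @ w
```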
Prior to this work, the automatic construction of these reward function features was an open research problem.
We show that, despite being previously cast as learning a policy space, behavioral mutual information (BMI) maximization provides a compelling solution to this feature learning problem.
Specifically, we show that the BMI objective can be adapted to learn precisely the features required by SF.
Together, these methods give rise to an algorithm, Variational Intrinsic Successor FeatuRes (VISR), which significantly improves performance in the RL with unsupervised pre-training scenario.
In order to illustrate the efficacy of the proposed method, we augment the popular 57-game Atari suite with such an unsupervised phase.
The use of this well-understood collection of tasks allows us to position our contribution more clearly against the current literature.
VISR achieves human-level performance on 12 games and outperforms all baselines, which includes algorithms that operate in three regimes: strictly unsupervised, supervised with limited data, and both.
Our results suggest that VISR is the first algorithm to achieve notable performance on the full Atari task suite in a setting of few-step RL with unsupervised pre-training, outperforming all baselines and buying performance equivalent to hundreds of millions of interaction steps compared to DQN on some games ( Figure 2c ).
As a suggestion for future investigations, the somewhat underwhelming results for the fully unsupervised version of VISR suggest that there is much room for improvement.
While curiosity-based methods are transient (i.e., asymptotically their intrinsic reward vanishes) and lack a fast adaptation mechanism, they do seem to encourage exploratory behavior slightly more than VISR.
A possible direction for future work would be to use a curiosity-based intrinsic reward inside of VISR, to encourage it to better explore the space of controllable policies.
Another interesting avenue for future investigation would be to combine the approach recently proposed by Ozair et al. (2019) to enforce the policies computed by VISR to be not only distinguishable but also far apart in a given metric space.
By using SFs on features that maximize BMI, we proposed an approach, VISR, that solves two open questions in the literature: how to compute features for the former and how to infer tasks in the latter.
Beyond the concrete method proposed here, we believe bridging the gap between BMI and SFs is an insightful contribution that may inspire other useful methods.
For convenience, we can refer to maximizing F(θ) as minimizing the loss function for parameters θ = (θ_π, θ_q),
where θ_π and θ_q refer to the parameters of the policy π and variational approximation q, respectively.
|
We introduce Variational Intrinsic Successor FeatuRes (VISR), a novel algorithm which learns controllable features that can be leveraged to provide fast task inference through the successor features framework.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:263
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Predicting structured outputs such as semantic segmentation relies on expensive per-pixel annotations to learn strong supervised models like convolutional neural networks.
However, these models trained on one data domain may not generalize well to other domains unequipped with annotations for model finetuning.
To avoid the labor-intensive process of annotation, we develop a domain adaptation method to adapt the source data to the unlabeled target domain.
To this end, we propose to learn discriminative feature representations of patches based on label histograms in the source domain, through the construction of a disentangled space.
With such representations as guidance, we then use an adversarial learning scheme to push the feature representations in target patches to the closer distributions in source ones.
In addition, we show that our framework can integrate a global alignment process with the proposed patch-level alignment and achieve state-of-the-art performance on semantic segmentation.
Extensive ablation studies and experiments are conducted on numerous benchmark datasets with various settings, such as synthetic-to-real and cross-city scenarios.
Recent deep learning-based methods have made significant progress on vision tasks, such as object recognition BID17 and semantic segmentation BID19 , relying on large-scale annotations to supervise the learning process.
However, for a test domain different from the annotated training data, learned models usually do not generalize well.
In such cases, domain adaptation methods have been developed to close the gap between a source domain with annotations and a target domain without labels.
Along this line of research, numerous methods have been developed for image classification BID29 BID8 , but despite recent works on domain adaptation for pixel-level prediction tasks such as semantic segmentation BID14 , there still remains significant room for improvement.
Yet domain adaptation is a crucial need for pixel-level predictions, as the cost to annotate ground truth is prohibitively expensive.
For instance, road-scene images in different cities may have various appearance distributions, while conditions even within the same city may vary significantly over time or weather. Existing state-of-the-art methods use feature-level BID14 or output space adaptation BID31 to align the distributions between the source and target domains using adversarial learning BID11 BID37 .
These approaches usually exploit the global distribution alignment, such as spatial layout, but such global statistics may already differ significantly between two domains due to differences in camera pose or field of view.
Figure 1 illustrates one example, where two images share a similar layout, but the corresponding grids do not match well.
Such misalignment may introduce an incorrect bias during adaptation.
Instead, we consider matching patches that are more likely to be shared across domains regardless of where they are located. One way to utilize patch-level information is to align their distributions through adversarial learning.
However, this is not straightforward since patches may have high variation among each other and there is no guidance for the model to know which patch distributions are close.
Motivated by recent advances in learning disentangled representations BID18 BID24 , we adopt a similar approach by considering label histograms of patches as a factor and learn discriminative representations for patches to relax the high-variation problem among them.
Figure 1: Illustration of the proposed patch-level alignment against the global alignment that considers the spatial relationship between grids. We first learn discriminative representations for source patches (solid symbols) and push a target representation (unfilled symbol) close to the distribution of source ones, regardless of where these patches are located in the image.
Then, we use the learned representations as a bridge to better align patches between source and target domains. Specifically, we utilize two adversarial modules to align both the global and patch-level distributions between two domains, where the global one is based on the output space adaptation BID31 , and the patch-based one is achieved through the proposed alignment by learning discriminative representations.
To guide the learning process, we first use the pixel-level annotations provided in the source domain and extract the label histogram as a patch-level representation.
We then apply K-means clustering to group extracted patch representations into K clusters, whose cluster assignments are then used as the ground truth to train a classifier shared across two domains for transferring a learned discriminative representation of patches from the source to the target domain.
Ideally, given the patches in the target domain, they would be classified into one of K categories.
However, since there is a domain gap, we further use an adversarial loss to push the feature representations of target patches close to the distribution of the source patches in this clustered space (see Figure 1 ).
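The clustering step described above is straightforward to prototype. The sketch below computes normalized label histograms over non-overlapping patches of a source segmentation map and clusters them with K-means to obtain pseudo-labels for the shared patch classifier; the patch size and the number of clusters K are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def patch_label_histograms(label_map, patch, n_classes):
    """Normalized label histogram for every non-overlapping patch of a
    source-domain segmentation map (labels assumed to lie in [0, n_classes))."""
    H, W = label_map.shape
    feats = []
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            hist = np.bincount(label_map[i:i + patch, j:j + patch].ravel(),
                               minlength=n_classes).astype(float)
            feats.append(hist / hist.sum())
    return np.stack(feats)

def cluster_patches(histograms, k=50, seed=0):
    """Cluster source-patch histograms into K groups; the cluster ids then
    serve as pseudo ground truth for a patch classifier shared across domains."""
    return KMeans(n_clusters=k, random_state=seed).fit_predict(histograms)
```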
Note that our representation learning can be viewed as a kind of disentanglement guided by the label histogram, but is different from existing methods that use pre-defined factors such as object pose BID18 .
In experiments, we follow the domain adaptation setting in BID14 and perform pixel-level road-scene image segmentation. We conduct experiments under various settings, including the synthetic-to-real, i.e., GTA5 BID27 /SYNTHIA BID28 to Cityscapes BID5 , and cross-city, i.e., Cityscapes to Oxford RobotCar BID23 , scenarios. In addition, we provide extensive ablation studies to validate each component in the proposed framework. By combining global and patch-level alignments, we show that our approach performs favorably against state-of-the-art methods in terms of accuracy and visual quality. We note that the proposed framework is general and could be applicable to other forms of structured outputs such as depth, which will be studied in our future work.
The contributions of this work are as follows. First, we propose a domain adaptation framework for structured output prediction by utilizing global and patch-level adversarial learning modules. Second, we develop a method to learn discriminative representations guided by the label histogram of patches via clustering and show that these representations help the patch-level alignment. Third, we demonstrate that the proposed adaptation method performs favorably against various baselines and state-of-the-art methods on semantic segmentation.
In this paper, we present a domain adaptation method for structured output via a general framework that combines global and patch-level alignments.
The global alignment is achieved by the output space adaptation, while the patch-level one is performed via learning discriminative representations of patches across domains.
To learn such patch-level representations, we propose to construct a clustered space of the source patches and adopt an adversarial learning scheme to push the target patch distributions closer to the source ones.
We conduct extensive ablation study and experiments to validate the effectiveness of the proposed method under numerous challenges on semantic segmentation, including synthetic-to-real and cross-city scenarios, and show that our approach performs favorably against existing algorithms.
|
A domain adaptation method for structured output via learning patch-level discriminative feature representations
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:264
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Many machine learning algorithms are vulnerable to almost imperceptible perturbations of their inputs.
So far it was unclear how much risk adversarial perturbations carry for the safety of real-world machine learning applications because most methods used to generate such perturbations rely either on detailed model information (gradient-based attacks) or on confidence scores such as class probabilities (score-based attacks), neither of which are available in most real-world scenarios.
In many such cases one currently needs to retreat to transfer-based attacks which rely on cumbersome substitute models, need access to the training data and can be defended against.
Here we emphasise the importance of attacks which solely rely on the final model decision.
Such decision-based attacks are (1) applicable to real-world black-box models such as autonomous cars, (2) need less knowledge and are easier to apply than transfer-based attacks and (3) are more robust to simple defences than gradient- or score-based attacks.
Previous attacks in this category were limited to simple models or simple datasets.
Here we introduce the Boundary Attack, a decision-based attack that starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial.
The attack is conceptually simple, requires close to no hyperparameter tuning, does not rely on substitute models and is competitive with the best gradient-based attacks in standard computer vision tasks like ImageNet.
We apply the attack on two black-box algorithms from Clarifai.com.
The Boundary Attack in particular and the class of decision-based attacks in general open new avenues to study the robustness of machine learning models and raise new questions regarding the safety of deployed machine learning systems.
An implementation of the attack is available as part of Foolbox (https://github.com/bethgelab/foolbox).
Many high-performance machine learning algorithms used in computer vision, speech recognition and other areas are susceptible to minimal changes of their inputs BID26 .
As a concrete example, a modern deep neural network like VGG-19 trained on object recognition might perfectly recognize the main object in an image as a tiger cat, but if the pixel values are only slightly perturbed in a specific way then the prediction of the very same network is drastically altered (e.g. to bus).
These so-called adversarial perturbations are ubiquitous in many machine learning models and are often imperceptible to humans.
Algorithms that seek to find such adversarial perturbations are generally denoted as adversarial attacks.Adversarial perturbations have drawn interest from two different sides.
On the one side, they are worrisome for the integrity and security of deployed machine learning algorithms such as autonomous cars or face recognition systems.
Minimal perturbations on street signs (e.g. turning a stop-sign into a 200 km/h speed limit) or street lights (e.g. turning a red into a green light) can have severe consequences.
On the other hand, adversarial perturbations provide an exciting spotlight on the gap between the sensory information processing in humans and machines and thus provide guidance towards more robust, human-like architectures. Adversarial attacks can be roughly divided into three categories: gradient-based, score-based and transfer-based attacks (cp. Figure 1).
Gradient-based and score-based attacks are often denoted as white-box and oracle attacks respectively, but we try to be as explicit as possible as to what information is being used in each category 1 .
A severe problem affecting attacks in all of these categories is that they are surprisingly straight-forward to defend against:
• Gradient-based attacks. Most existing attacks rely on detailed model information including the gradient of the loss w.r.t. the input. Examples are the Fast-Gradient Sign Method (FGSM), the Basic Iterative Method (BIM) BID11 , DeepFool (Moosavi-Dezfooli et al., 2015) , the Jacobian-based Saliency Map Attack (JSMA) BID20 , Houdini BID5 and the Carlini & Wagner attack BID2 . Defence: A simple way to defend against gradient-based attacks is to mask the gradients, for example by adding non-differentiable elements either implicitly through means like defensive distillation BID21 or saturated non-linearities BID18 , or explicitly through means like non-differentiable classifiers BID15 .
• Score-based attacks. A few attacks are more agnostic and only rely on the predicted scores (e.g. class probabilities or logits) of the model. On a conceptual level these attacks use the predictions to numerically estimate the gradient. This includes black-box variants of JSMA BID17 and of the Carlini & Wagner attack BID4 as well as generator networks that predict adversarials BID8 . Defence: It is straight-forward to severely impede the numerical gradient estimate by adding stochastic elements like dropout into the model. Also, many robust training methods introduce a sharp-edged plateau around samples BID28 which not only masks gradients themselves but also their numerical estimate.
• Transfer-based attacks. Transfer-based attacks do not rely on model information but need information about the training data. This data is used to train a fully observable substitute model from which adversarial perturbations can be synthesized BID22 . They rely on the empirical observation that adversarial examples often transfer between models. If adversarial examples are created on an ensemble of substitute models the success rate on the attacked model can reach up to 100% in certain scenarios BID13 . Defence: A recent defence method against transfer attacks BID28 , which is based on robust training on a dataset augmented by adversarial examples from an ensemble of substitute models, has proven highly successful against basically all attacks in the 2017 Kaggle Competition on Adversarial Attacks.
The fact that many attacks can be
easily averted makes it often extremely difficult to assess whether a model is truly robust or whether the attacks are just too weak, which has led to premature claims of robustness for DNNs BID3 . This motivates us to focus on a category of adversarial attacks that has so far received fairly little attention:
• Decision-based attacks. Direct attacks that solely rely on the final decision of the model (such as the top-1 class label or the transcribed sentence).
The delineation of this category is justified
for the following reasons: First, compared to score-based attacks decision-based attacks are much more relevant in real-world machine learning applications where confidence scores or logits are rarely accessible. At the same time decision-based attacks have
the potential to be much more robust to standard defences like gradient masking, intrinsic stochasticity or robust training than attacks from the other categories. Finally, compared to transfer-based attacks they
need much less information about the model (neither architecture nor training data) and are much simpler to apply. There currently exists no effective decision-based attack that scales to natural datasets such as ImageNet and is applicable to deep neural networks (DNNs). The most relevant prior work is a variant of transfer
attacks in which the training set needed to learn the substitute model is replaced by a synthetic dataset (Papernot et al., 2017b) . This synthetic dataset is generated by the adversary
alongside the training of the substitute; the labels for each synthetic sample are drawn from the black-box model. While this approach works well on datasets for which
the intra-class variability is low (such as MNIST) it has yet to be shown that it scales to more complex natural datasets such as CIFAR or ImageNet. Other decision-based attacks are specific to linear
or convex-inducing classifiers BID6 BID14 BID19 and are not applicable to other machine learning models. The work by BID0 basically stands between transfer
attacks and decision-based attacks in that the substitute model is trained on a dataset for which the labels have been observed from the black-box model. This attack still requires knowledge about the data
distribution on which the black-box model was trained, and so we don't consider it a pure decision-based attack. Finally, some naive attacks such as a line-search along
a random direction away from the original sample can qualify as decision-based attacks but they induce large and very visible perturbations that are orders of magnitude larger than typical gradient-based, score-based or transfer-based attacks.
Throughout the paper we focus on the threat scenario in which the adversary aims to change the decision of a model (either targeted or untargeted) for a particular input sample by inducing a minimal perturbation to the sample. The adversary can observe the final decision of the model for arbitrary inputs and it knows at least one perturbation, however large, for which the perturbed sample is adversarial.
The contributions of this paper are as follows:
• We emphasise decision-based attacks as an important category of adversarial attacks that are highly relevant for real-world applications and important to gauge model robustness.
• We introduce the first effective decision-based attack that scales to complex machine learning models and natural datasets. The Boundary Attack is (1) conceptually surprisingly simple, (2) extremely flexible, (3) requires little hyperparameter tuning and (4) is competitive with the best gradient-based attacks in both targeted and untargeted computer vision scenarios.
• We show that the Boundary Attack is able to break previously suggested defence mechanisms like defensive distillation.
• We demonstrate the practical applicability of the Boundary Attack on two black-box machine learning models for brand and celebrity recognition available on Clarifai.com.
In this paper we emphasised the importance of a mostly neglected category of adversarial attacksdecision-based attacks-that can find adversarial examples in models for which only the final decision can be observed.
We argue that this category is important for three reasons: first, attacks in this class are highly relevant for many real-world deployed machine learning systems like autonomous cars for which the internal decision making process is unobservable.
Second, attacks in this class do not rely on substitute models that are trained on similar data as the model to be attacked, thus making real-world applications much more straight-forward.
Third, attacks in this class have the potential to be much more robust against common deceptions like gradient masking, intrinsic stochasticity or robust training. We also introduced the first effective attack in this category that is applicable to general machine learning algorithms and complex natural datasets: the Boundary Attack.
At its core the Boundary Attack follows the decision boundary between adversarial and non-adversarial samples using a very simple rejection sampling algorithm in conjunction with a simple proposal distribution and a dynamic step-size adjustment inspired by Trust Region methods.
Its basic operating principle, starting from a large perturbation and successively reducing it, inverts the logic of essentially all previous adversarial attacks.
Besides being surprisingly simple, the Boundary Attack is also extremely flexible in terms of the possible adversarial criteria and performs on par with gradient-based attacks on standard computer vision tasks in terms of the size of minimal perturbations. The mere fact that a simple constrained i.i.d. Gaussian distribution can serve as an effective proposal perturbation for each step of the Boundary Attack is surprising and sheds light on the brittle information processing of current computer vision architectures.
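To make the rejection-sampling logic concrete, here is a minimal sketch of a boundary-attack-style loop. It assumes images scaled to [0, 1] and a user-supplied `is_adversarial` oracle that only exposes the model's decision; the step-size adaptation in the actual attack is more elaborate (trust-region inspired) than the crude schedule used here.

```python
import numpy as np

def boundary_attack(is_adversarial, x_orig, x_adv_init, n_steps=5000,
                    delta=0.1, eps=0.1, seed=0):
    """Sketch of a decision-based boundary attack: start from a point that is
    already adversarial and iteratively (i) propose a small orthogonal
    perturbation, (ii) step towards the original image, and (iii) keep the
    proposal only if it is still adversarial."""
    rng = np.random.default_rng(seed)
    x_adv = x_adv_init.copy()
    for _ in range(n_steps):
        direction = x_orig - x_adv
        dist = np.linalg.norm(direction)
        # 1) random proposal, roughly orthogonal to the direction to the original
        perturb = rng.normal(size=x_adv.shape)
        perturb -= (perturb.ravel() @ direction.ravel()) * direction / dist ** 2
        perturb *= delta * dist / np.linalg.norm(perturb)
        candidate = np.clip(x_adv + perturb, 0.0, 1.0)
        # 2) contract towards the original image
        candidate = candidate + eps * (x_orig - candidate)
        # 3) rejection sampling: keep only candidates that stay adversarial
        if is_adversarial(candidate):
            x_adv = candidate
            eps = min(eps * 1.05, 0.5)   # be slightly greedier when succeeding
        else:
            eps *= 0.7                   # shrink the step size otherwise
    return x_adv
```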
Nonetheless, there are many ways in which the Boundary Attack can be made even more effective, in particular by learning a suitable proposal distribution for a given model or by conditioning the proposal distribution on the recent history of successful and unsuccessful proposals. Decision-based attacks will be highly relevant to assess the robustness of machine learning models and to highlight the security risks of closed-source machine learning systems like autonomous cars.
We hope that the Boundary attack will inspire future work in this area.
|
A novel adversarial attack that can directly attack real-world black-box machine learning models without transfer.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:265
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees with only a negligible cost in predictive accuracy.
Our work builds on recent advances in the training of deep networks on user-partitioned data and privacy accounting for stochastic gradient descent.
In particular, we add user-level privacy protection to the federated averaging algorithm, which makes large step updates from user-level data.
Our work demonstrates that given a dataset with a sufficiently large number of users (a requirement easily met by even small internet-scale datasets), achieving differential privacy comes at the cost of increased computation, rather than in decreased utility as in most prior work.
We find that our private LSTM language models are quantitatively and qualitatively similar to un-noised models when trained on a large dataset.
Deep recurrent models like long short-term memory (LSTM) recurrent neural networks (RNNs) have become a standard building block in modern approaches to language modeling, with applications in speech recognition, input decoding for mobile keyboards, and language translation.
Because language usage varies widely by problem domain and dataset, training a language model on data from the right distribution is critical.
For example, a model to aid typing on a mobile keyboard is better served by training data typed in mobile apps rather than from scanned books or transcribed utterances.
However, language data can be uniquely privacy sensitive.
In the case of text typed on a mobile phone, this sensitive information might include passwords, text messages, and search queries.
In general, language data may identify a speaker (explicitly by name or implicitly, for example via a rare or unique phrase) and link that speaker to secret or sensitive information. Ideally, a language model's parameters would encode patterns of language use common to many users without memorizing any individual user's unique input sequences.
However, we know convolutional NNs can memorize arbitrary labelings of the training data BID22 and recurrent language models are also capable of memorizing unique patterns in the training data BID5 .
Recent attacks on neural networks such as those of BID19 underscore the implicit risk.
The main goal of our work is to provide a strong guarantee that the trained model protects the privacy of individuals' data without undue sacrifice in model quality. We are motivated by the problem of training models for next-word prediction in a mobile keyboard, and use this as a running example.
This problem is well suited to the techniques we introduce, as differential privacy may allow for training on data from the true distribution (actual mobile usage) rather than on proxy data from some other source that would produce inferior models.
However, to facilitate reproducibility and comparison to non-private models, our experiments are conducted on a public dataset as is standard in differential privacy research.
The remainder of this paper is structured around the following contributions:
1. We apply differential privacy to model training using the notion of user-adjacent datasets, leading to formal guarantees of user-level privacy, rather than privacy for single examples.
4. In extensive experiments in §3, we offer guidelines for parameter tuning when training complex models with differential privacy guarantees. We show that a small number of experiments can narrow the parameter space into a regime where we pay for privacy not in terms of a loss in utility but in terms of an increased computational cost.
We now introduce a few preliminaries.
Differential privacy (DP) BID10 BID8 BID9 provides a well-tested formalization for the release of information derived from private data.
Applied to machine learning, a differentially private training mechanism allows the public release of model parameters with a strong guarantee: adversaries are severely limited in what they can learn about the original training data based on analyzing the parameters, even when they have access to arbitrary side information.
Formally, it says:
Definition 1 (Differential Privacy). A randomized mechanism M : D → R with a domain D (e.g., possible training datasets) and range R (e.g., all possible trained models) satisfies (ε, δ)-differential privacy if for any two adjacent datasets d, d′ ∈ D and for any subset of outputs S ⊆ R it holds that Pr[M(d) ∈ S] ≤ e^ε Pr[M(d′) ∈ S] + δ.
Most prior work on differentially private machine learning (e.g. BID7 BID4 ; BID0 BID21 BID16 ) deals with example-level privacy: two datasets d and d′ are defined to be adjacent if d′ can be formed by adding or removing a single training example from d.
We remark that while the recent PATE approach of BID16 can be adapted to give user-level privacy, it is not suited for a language model where the number of classes (possible output words) is large. For problems like language modeling, protecting individual examples is insufficient: each typed word makes its own contribution to the RNN's training objective, so one user may contribute many thousands of examples to the training data.
A sensitive word or phrase may be typed several times by an individual user, but it should still be protected.
In this work, we therefore apply the definition of differential privacy to protect whole user histories in the training set.
This user-level privacy is ensured by using an appropriate adjacency relation:
Definition 2 (User-adjacent datasets). Let d and d′ be two datasets of training examples, where each example is associated with a user. Then, d and d′ are adjacent if d′ can be formed by adding or removing all of the examples associated with a single user from d.
Model training that satisfies differential privacy with respect to datasets that are user-adjacent satisfies the intuitive notion of privacy we aim to protect for language modeling: the presence or absence of any specific user's data in the training set has an imperceptible impact on the (distribution over the) parameters of the learned model.
It follows that an adversary looking at the trained model cannot infer whether any specific user's data was used in the training, irrespective of what auxiliary information they may have.
In particular, differential privacy rules out memorization of sensitive information in a strong information theoretic sense.
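As a rough illustration of how such a guarantee is obtained in training, the sketch below shows the core of one round of user-level private federated averaging: each user's model delta is clipped to a fixed L2 norm and Gaussian noise scaled to that clipping bound is added to the average. The exact weighting, user sampling, and the privacy accounting (e.g., the moments accountant) are omitted, and the parameter names are illustrative assumptions.

```python
import numpy as np

def private_federated_round(user_updates, clip_norm, noise_multiplier, rng):
    """One round of user-level differentially private federated averaging
    (a sketch): clip each user's model delta to a fixed L2 norm, average,
    and add Gaussian noise calibrated to the clipping bound."""
    clipped = []
    for delta in user_updates:                       # one delta per sampled user
        norm = np.linalg.norm(delta)
        clipped.append(delta * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Per-user sensitivity of the average is roughly clip_norm / n_users here.
    std = noise_multiplier * clip_norm / len(user_updates)
    return avg + rng.normal(scale=std, size=avg.shape)
```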
In this work, we introduced an algorithm for user-level differentially private training of large neural networks, in particular a complex sequence model for next-word prediction.
We empirically evaluated the algorithm on a realistic dataset and demonstrated that such training is possible at a negligible loss in utility, instead paying a cost in additional computation.
Such private training, combined with federated learning (which leaves the sensitive training data on device rather than centralizing it), shows the possibility of training models with significant privacy guarantees for important real world applications.
Much future work remains, for example designing private algorithms that automate and make adaptive the tuning of the clipping/noise tradeoff, and the application to a wider range of model families and architectures, for example GRUs and character-level models.
Our work also highlights the open direction of reducing the computational overhead of differentially private training of non-convex models.
|
User-level differential privacy for recurrent neural network language models is possible with a sufficiently large dataset.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:266
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Convolutional neural networks (CNNs) are commonly trained using a fixed spatial image size predetermined for a given model.
Although trained on images of a specific size, it is well established that CNNs can be used to evaluate a wide range of image sizes at test time, by adjusting the size of intermediate feature maps.
In this work, we describe and evaluate a novel mixed-size training regime that mixes several image sizes at training time.
We demonstrate that models trained using our method are more resilient to image size changes and generalize well even on small images.
This allows faster inference by using smaller images at test time.
For instance, we obtain a 76.43% top-1 accuracy using ResNet50 with an image size of 160, which matches the accuracy of the baseline model with 2x fewer computations.
Furthermore, for a given image size used at test time, we show this method can be exploited either to accelerate training or to improve the final test accuracy.
For example, we are able to reach a 79.27% accuracy with a model evaluated at a 288 spatial size for a relative improvement of 14% over the baseline.
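The core of the regime described above is simply drawing a different spatial size for each training batch. A minimal sketch follows; the candidate sizes are illustrative assumptions, and the paper additionally varies the batch size along with the image size, which is omitted here.

```python
import random
import torch.nn.functional as F

# Candidate training resolutions; the specific values are illustrative, not
# the paper's exact schedule.
SIZES = [128, 160, 192, 224, 288]

def mixed_size_batch(images, labels):
    """Resample a batch to a randomly chosen spatial size so that the same
    model sees many image sizes during training (the 'MixSize' idea)."""
    size = random.choice(SIZES)
    images = F.interpolate(images, size=(size, size), mode='bilinear',
                           align_corners=False)
    return images, labels
```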
Figure 1: Test accuracy per image size, models trained on specific sizes (ResNet50, ImageNet).
Convolutional neural networks are successfully used to solve various tasks across multiple domains such as visual (Krizhevsky et al., 2012; Ren et al., 2015) , audio (van den Oord et al., 2016) , language (Gehring et al., 2017) and speech (Abdel-Hamid et al., 2014) .
While scale-invariance is considered important for visual representations (Lowe, 1999) , convolutional networks are not scale invariant with respect to the spatial resolution of the image input, as a change in image dimension may lead to a non-linear change of their output.
Even though CNNs are able to achieve state-of-theart results in many tasks and domains, their sensitivity to the image size is an inherent deficiency that limits practical use cases and requires evaluation inputs to match training image size.
For example, Touvron et al. (2019) demonstrated that networks trained on specific image size, perform poorly on other image sizes at evaluation, as shown in Figure 1 .
Several works attempted to achieve scale invariance by modifying the network structure (Xu et al., 2014; Takahashi et al., 2017) .
However, the most common method is to artificially enlarge the dataset using a set of label-preserving transformations also known as "data augmentation" (Krizhevsky et al., 2012; Howard, 2013) .
Several of these transformations scale and crop objects appearing within the data, thus increasing the network's robustness to inputs of different scale.
Although not explicitly trained to handle varying image sizes, CNNs are commonly evaluated on multiple scales post training, such as in the case of detection (Lin et al., 2017; Redmon & Farhadi, 2018) and segmentation (He et al., 2017) tasks.
In these tasks, a network that was pretrained with fixed image size for classification is used as the backbone of a larger model that is expected to adapt to a wide variety of image sizes.
In this work, we will introduce a novel training regime, "MixSize" for convolutional networks that uses stochastic image and batch sizes.
The main contributions of the MixSize regime are:
• Reducing image size sensitivity.
We show that the MixSize training regime can improve model performance on a wide range of sizes used at evaluation.
• Faster inference.
As our mixed-size models can be evaluated at smaller image sizes, we show up to 2× reduction in computations required at inference to reach the same accuracy as the baseline model.
• Faster training vs. high accuracy.
We show that reducing the average image size at training leads to a trade-off between the time required to train the model and its final accuracy.
2 RELATED WORK
|
Training convnets with mixed image size can improve results across multiple sizes at evaluation
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:267
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We propose a simple technique for encouraging generative RNNs to plan ahead.
We train a ``backward'' recurrent network to generate a given sequence in reverse order, and we encourage states of the forward model to predict cotemporal states of the backward model.
The backward network is used only during training, and plays no role during sampling or inference.
We hypothesize that our approach eases modeling of long-term dependencies by implicitly forcing the forward states to hold information about the longer-term future (as contained in the backward states).
We show empirically that our approach achieves 9% relative improvement for a speech recognition task, and achieves significant improvement on a COCO caption generation task.
Recurrent Neural Networks (RNNs) are the basis of state-of-art models for generating sequential data such as text and speech.
RNNs are trained to generate sequences by predicting one output at a time given all previous ones, and excel at the task through their capacity to remember past information well beyond classical n-gram models BID6 BID27 .
More recently, RNNs have also found success when applied to conditional generation tasks such as speech-to-text BID9 , image captioning BID61 and machine translation. RNNs
are usually trained by teacher forcing: at each point in a given sequence, the RNN is optimized to predict the next token given all preceding tokens. This
corresponds to optimizing one-stepahead prediction. As there
is no explicit bias toward planning in the training objective, the model may prefer to focus on the most recent tokens instead of capturing subtle long-term dependencies that could contribute to global coherence. Local correlations
are usually stronger than long-term dependencies and thus end up dominating the learning signal. The consequence is
that samples from RNNs tend to exhibit local coherence but lack meaningful global structure. This difficulty in
capturing long-term dependencies has been noted and discussed in several seminal works (Hochreiter, 1991; BID6 BID27 BID45 .Recent efforts to address
this problem have involved augmenting RNNs with external memory BID14 BID18 BID22 , with unitary or hierarchical architectures BID0 BID51 , or with explicit planning mechanisms BID23 . Parallel efforts aim to prevent
overfitting on strong local correlations by regularizing the states of the network, by applying dropout or penalizing various statistics BID41 BID64 BID15 BID32 BID39 .
Figure 1: The forward and the backward networks predict the sequence s = {x_1, ..., x_4} independently. The penalty matches the forward (or a parametric function of the forward) and the backward hidden states. The forward network receives the gradient signal from the log-likelihood objective as well as L_t between states that predict the same token. The backward network is trained only by maximizing the data log-likelihood. During the evaluation, the part of the network colored with orange is discarded. The cost L_t is either a Euclidean distance or a learned metric ||g(h_t^f) − h_t^b||_2 with an affine transformation g. Best viewed in color.
In this paper, we propose TwinNet, a simple method for regularizing a recurrent neural network that encourages modeling those aspects of the past that are predictive of the long-term future. Succinctly, this is achieved as follows: in parallel to the standard forward RNN, we run a "twin" backward RNN (with no parameter sharing) that predicts the sequence in reverse, and we encourage the hidden state of the forward network to be close to that of the backward network used to predict the same token. Intuitively, this forces the forward network to focus on the past information that is useful for predicting a specific token and that is also present in and useful to the backward network, coming from the future (Fig. 1). In practice, our model introduces a regularization term
to the training loss. This is distinct from other regularization methods that
act on the hidden states either by injecting noise BID32 or by penalizing their norm BID31 BID39 , because we formulate explicit auxiliary targets for the forward hidden states: namely, the backward hidden states. The activation regularizer (AR) proposed by BID39 , which
penalizes the norm of the hidden states, is equivalent to the TwinNet approach with the backward states set to zero. Overall, our model is driven by the intuition (a) that the
backward hidden states contain a summary of the
future of the sequence, and (b) that in order to predict the future more accurately, the
model will have to form a better representation of the past. We demonstrate the effectiveness of the TwinNet approach experimentally
, through several conditional and unconditional generation tasks that include speech recognition, image captioning, language modelling, and sequential image generation. To summarize, the contributions of this work are as follows:
• We introduce a simple method for training generative recurrent networks that regularizes the hidden states of the network to anticipate future states (see Section 2);
• The paper provides extensive evaluation of the proposed model on multiple tasks and concludes that it helps training and regularization for conditioned generation (speech recognition, image captioning) and for the unconditioned case (sequential MNIST, language modelling, see Section 4);
• For deeper analysis we visualize the introduced cost and observe that it negatively correlates with the word frequency (more surprising words have higher cost).
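A minimal sketch of this regularizer is given below, assuming a GRU language model in PyTorch. The exact temporal alignment of forward and backward states and the indexing of each branch's targets are simplified relative to the paper, and gradients from the twin penalty are stopped on the backward states (the backward network is trained only by its own log-likelihood, as described above). The total training loss would combine the likelihood term and the twin penalty with a weighting coefficient.

```python
import torch
import torch.nn as nn

class TwinNetLM(nn.Module):
    """Sketch of the twin-network regularizer: a forward RNN and a separate
    backward RNN (run on the reversed sequence), with an L2 penalty tying the
    forward state to the cotemporal backward state through an affine map g.
    The backward branch plays no role at sampling or evaluation time."""
    def __init__(self, vocab, dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.fwd = nn.GRU(dim, dim, batch_first=True)
        self.bwd = nn.GRU(dim, dim, batch_first=True)
        self.out_f = nn.Linear(dim, vocab)
        self.out_b = nn.Linear(dim, vocab)
        self.g = nn.Linear(dim, dim)            # affine map of forward states

    def forward(self, x, y):                    # x: input tokens, y: targets
        ce = nn.functional.cross_entropy
        h_f, _ = self.fwd(self.emb(x))
        h_b_rev, _ = self.bwd(self.emb(x.flip(1)))
        h_b = h_b_rev.flip(1)                   # time-align backward states
        nll = ce(self.out_f(h_f).transpose(1, 2), y) \
            + ce(self.out_b(h_b).transpose(1, 2), y)
        # Twin penalty: gradient flows to the forward net only (detach backward).
        twin = ((self.g(h_f) - h_b.detach()) ** 2).sum(-1).mean()
        return nll, twin
```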
In this paper, we presented a simple recurrent neural network model that has two separate networks running in opposite directions during training.
Our model is motivated by the fact that states of the forward model should be predictive of the entire future sequence.
This may be hard to obtain by optimizing one-step ahead predictions.
The backward path is discarded during the sampling and evaluation process, which makes the sampling process efficient.
Empirical results show that the proposed method performs well on conditional generation for several tasks.
The analysis reveals an interpretable behaviour of the proposed loss. One of the shortcomings of the proposed approach is that the training process doubles the computation needed for the baseline (due to the backward network training).
However, since the backward network is discarded during sampling, the sampling or inference process has the exact same computation steps as the baseline.
This makes our approach applicable to models that requires expensive sampling steps, such as PixelRNNs BID44 and WaveNet (Oord et al., 2016a) .
One direction for future work is to test whether it could help in conditional speech synthesis using WaveNet. We observed that the proposed approach yields minor improvements when applied to language modelling with Penn Treebank.
We hypothesize that this may be linked to the amount of entropy of the target distribution.
In these high-entropy cases, at any time-step in the sequence, the distribution of backward states may be highly multi-modal (many possible futures may be equally likely for the same past).
One way of overcoming this problem would be to replace the proposed L2 loss (which implicitly assumes a unimodal distribution of the backward states) by a more expressive loss obtained by either employing an inference network BID30 or distribution matching techniques BID17 .
We leave that for future investigation.
|
The paper introduces a method of training generative recurrent networks that helps to plan ahead. We run a second RNN in a reverse direction and make a soft constraint between cotemporal forward and backward states.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:268
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep generative models seek to recover the process with which the observed data was generated.
They may be used to synthesize new samples or to subsequently extract representations.
Successful approaches in the domain of images are driven by several core inductive biases.
However, a bias to account for the compositional way in which humans structure a visual scene in terms of objects has frequently been overlooked.
In this work we propose to structure the generator of a GAN to consider objects and their relations explicitly, and generate images by means of composition.
This provides a way to efficiently learn a more accurate generative model of real-world images, and serves as an initial step towards learning corresponding object representations.
We evaluate our approach on several multi-object image datasets, and find that the generator learns to identify and disentangle information corresponding to different objects at a representational level.
A human study reveals that the resulting generative model is better at generating images that are more faithful to the reference distribution.
Generative modelling approaches to representation learning seek to recover the process with which the observed data was generated.
It is postulated that knowledge about the generative process exposes important factors of variation in the environment (captured in terms of latent variables) that may subsequently be obtained using an appropriate posterior inference procedure.
Therefore, the structure of the generative model is critical in learning corresponding representations. Deep generative models of images rely on the expressiveness of neural networks to learn the generative process directly from data BID11 BID24 BID38 .
Their structure is determined by the inductive bias of the neural network, which steers it to organize its computation in a way that allows salient features to be recovered and ultimately captured in a representation BID6 BID7 BID24 .
Recently, it has been shown that independent factors of variation, such as pose and lighting of human faces, may be recovered in this way BID5 .
A promising but under-explored inductive bias in deep generative models of images is compositionality at the representational level of objects, which accounts for the compositional nature of the visual world and our perception thereof BID3 BID37 . It allows a generative model to describe a scene as a composition of objects (entities), thereby disentangling visual information in the scene that can be processed largely independent of one another. It provides a means to efficiently learn a more accurate generative model of real-world images, and by explicitly considering objects at a representational level, it serves as an important first step in recovering corresponding object representations.
Figure 1: A scene (right) is generated as a composition of objects and background.
In this work we investigate object compositionality for Generative Adversarial Networks (GANs; BID11 ), and present a general mechanism that allows one to incorporate corresponding structure in the generator. Starting from strong independence assumptions about the objects in images, we propose two extensions that provide a means to incorporate dependencies among objects and background. In order to efficiently represent and process multiple objects with neural networks, we must account for the binding problem that arises when superimposing multiple distributed representations BID18 . Following prior work, we consider different representational slots for each object BID13 BID34 , and a relational mechanism that preserves this separation accordingly.
We evaluate our approach on several multi-object image datasets, including three variations of Multi-MNIST, a multi-object variation of CIFAR10, and CLEVR. In particular the latter two mark a significant improvement in terms of complexity, compared to datasets that have been considered in prior work on unconditional multi-object image generation and multi-object representation learning.
In our experiments we find that our generative model learns about the individual objects and the background of a scene, without prior access to this information. By disentangling this information
at a representational level, it generates novel scenes efficiently through composing individual objects and background, as can be seen in Figure 1 . As a quantitative experiment we compare
to a strong baseline of popular GANs (Wasserstein and Non-saturating) with recent state-of-the-art techniques (Spectral Normalization, Gradient Penalty) optimized over multiple runs. A human study reveals that the proposed
generative model outperforms this baseline in generating better images that are more faithful to the reference distribution.
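A minimal sketch of the compositional generation step is shown below: each of K object generators emits an RGBA image whose alpha channel determines where it is pasted over the background. The fixed compositing order and the tensor layout are illustrative assumptions; the relational mechanism and the exact generator architecture in the paper are not shown.

```python
import torch

def compose_scene(object_rgba, background_rgb):
    """Alpha-composite K object generator outputs over a background image.

    object_rgba: list of (B, 4, H, W) tensors, one per object generator,
                 with raw alpha logits in the fourth channel.
    background_rgb: (B, 3, H, W) tensor produced by a background generator.
    """
    scene = background_rgb
    for rgba in object_rgba:                    # fixed layering order (assumed)
        rgb, alpha = rgba[:, :3], torch.sigmoid(rgba[:, 3:4])
        scene = alpha * rgb + (1.0 - alpha) * scene
    return scene
```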
The experimental results confirm that the proposed structure is beneficial in generating images of multiple objects, and is utilized according to our own intuitions.
In order to benefit maximally from this structure it is desirable to be able to accurately estimate the (minimum) number of objects in the environment in advance.
This task is ill-posed as it relies on a precise definition of "object" that is generally not available.
In our experiments on CLEVR we encounter a similar situation in which the number of components does not suffice the potentially large number of objects in the environment.Here we find that it does not render the proposed structure useless, but instead each component considers "primitives" that correspond to multiple objects.One concern is in being able to accurately determine foreground, and background when combining the outputs of the object generators using alpha compositing.
On CLEVR we observe cases in which objects appear to be flying, which is the result of being unable to route the information content of a "foreground" object to the corresponding "foreground" generator as induced by the fixed order in which images are composed.
Although in principle the relational mechanism may account for this distinction, a more explicit mechanism may be preferred BID31 .
We found that the pre-trained Inception embedding is not conclusive in reasoning about the validity of multi-object datasets. Similarly, the discriminator may have difficulties in accurately judging whether images are real or fake without additional structure. Ideally we would have a discriminator evaluate the correctness of each object individually, as well as the image as a whole. The use of a patch discriminator BID20 , together with the alpha channel of each object generator to provide a segmentation, may serve as a starting point in pursuing this direction.
We have argued for the importance of compositionality at the representational level of objects in deep generative models of images, and demonstrated how corresponding structure may be incorporated in the generator of a GAN.
On a benchmark of multi-object datasets we have shown that the proposed generative model learns about individual objects and background in the process of synthesizing samples.
A human study revealed that this leads to a better generative model of images.
We are hopeful that in disentangling information corresponding to different objects at a representational level these may ultimately be recovered.
Hence, we believe that this work is an important contribution towards learning object representations of complex real-world images without any supervision.

A EXPERIMENT RESULTS

The generator and discriminator neural network architectures in all our experiments are based on DCGAN BID35.
|
We propose to structure the generator of a GAN to consider objects and their relations explicitly, and generate images by means of composition
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:269
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Concerns about interpretability, computational resources, and principled inductive priors have motivated efforts to engineer sparse neural models for NLP tasks.
If sparsity is important for NLP, might well-trained neural models naturally become roughly sparse?
Using the Taxi-Euclidean norm to measure sparsity, we find that frequent input words are associated with concentrated or sparse activations, while frequent target words are associated with dispersed activations but concentrated gradients.
We find that gradients associated with function words are more concentrated than the gradients of content words, even controlling for word frequency.
Researchers in NLP have long relied on engineering features to reflect the sparse structures underlying language.
Modern deep learning methods promised to relegate this practice to history, but have not eliminated the interest in sparse modeling for NLP.
Along with concerns about computational resources BID0 BID12 and interpretability BID10 BID21 , human intuitions continue to motivate sparse representations of language.
For example, some work applies assumptions of sparsity to model latent hard categories such as syntactic dependencies BID14 or phonemes BID1 .
BID13 found that a sparse attention mechanism outperformed dense methods on some NLP tasks; BID11 found sparsified versions of LMs that outperform dense originals.
Attempts to engineer sparsity rest on an unstated assumption that it doesn't arise naturally when neural models are learned.
Is this true? Using
a simple measure of sparsity, we analyze how it arises in different layers of a neural language model in relation to word frequency. We show
that the sparsity of a word representation increases with exposure to that word during training. We also
find evidence of syntactic learning: gradient updates in backpropagation depend on whether a word's part of speech is open or closed class, even controlling for word frequency.
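The excerpt does not define the "Taxi-Euclidean norm" precisely; a plausible reading, assumed here, is the ratio of the L1 (taxicab) norm to the L2 (Euclidean) norm of an activation or gradient vector, which is small for concentrated vectors and large for dispersed ones. The sketch below illustrates that proxy only and may differ from the paper's exact measure.

```python
import numpy as np

def taxi_euclidean_ratio(v, eps=1e-12):
    """L1/L2 ratio of a vector: close to 1 for a one-hot (maximally concentrated)
    vector, close to sqrt(len(v)) for a uniform (maximally dispersed) one."""
    v = np.asarray(v, dtype=np.float64)
    return np.abs(v).sum() / (np.linalg.norm(v) + eps)

concentrated = np.zeros(512)
concentrated[7] = 3.0
dispersed = np.ones(512)
print(taxi_euclidean_ratio(concentrated))  # ~1.0   -> sparse / concentrated
print(taxi_euclidean_ratio(dispersed))     # ~22.6  -> dispersed (sqrt(512))
```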
|
We study the natural emergence of sparsity in the activations and gradients for some layers of a dense LSTM language model, over the course of training.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:27
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Current literature in machine learning holds that unaligned, self-interested agents do not learn to use an emergent communication channel.
We introduce a new sender-receiver game to study emergent communication for this spectrum of partially-competitive scenarios and put special care into evaluation.
We find that communication can indeed emerge in partially-competitive scenarios, and we discover three things that are tied to improving it.
First, that selfish communication is proportional to cooperation, and it naturally occurs for situations that are more cooperative than competitive.
Second, that stability and performance are improved by using LOLA (Foerster et al, 2018), especially in more competitive scenarios.
And third, that discrete protocols lend themselves better to learning cooperative communication than continuous ones.
First and foremost, we show evidence against the current notion that selfish agents do not learn to communicate, and we hope our findings encourage more research into communication under partial competition.

The comparison between discrete and continuous communication for both the REINFORCE-deterministic setup as well as 1-step LOLA agents is shown in Figure 4a. We see that though overall continuous communication can achieve the highest information transfer, the gains in performance seem to come mostly from manipulation of the sender by the receiver.
Two examples are shown for REINFORCE agents in Figures 4b and 4c. To find a trend, we plot all 100 hyperparameter runs for b ∈ [3, 6, 9, 12] between continuous and discrete communication using 1-step LOLA agents in Figures 4d, 4e, 4f and 4g.
We find that manipulation is the common result in continuous communication though individual cooperative points can sometimes be found.
In general, continuous communication does not lend itself to cooperative communication under competition.
We have shown three important properties of communication.
First, a game being more cooperative than competitive is sufficient to naturally emerge communication.
Second, we've clarified the distinction between information transfer, communication, and manipulation, providing motivation for a better quantitative metric to measure emergent communication in competitive environments.
Next, we've found that LOLA improves effective selfish communication and, using our metric, we find it does so by improving both agents' performance and stability.
Finally, we've shown that using a discrete communication channel encourages the learning of cooperative communication, in contrast to the continuous communication channel setting, where we find little evidence of cooperation.
In fully-cooperative emergent communication, both agents fully trust each other, so cooperatively learning a protocol is mutually beneficial.
In competitive MARL, the task is using an existing protocol (or action space) to compete with each other.
However, selfish emergent communication combines these two since the inherent competitiveness of using the protocol to win is tempered by the inherent cooperativeness of learning it; without somewhat agreeing to meanings, agents cannot use those meanings to compete (Searcy & Nowicki, 2005; Skyrms & Barrett, 2018) .
Thus, the agents must both learn a protocol and use that protocol simultaneously.
In this way, even while competing, selfish agents emerging a communication protocol must learn to cooperate.
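As a rough illustration of the sender-receiver setup with a discrete channel discussed above, the sketch below runs one REINFORCE-style update on a toy, fully cooperative reconstruction reward. The sizes, the reward, and the absence of LOLA corrections are simplifying assumptions, not the paper's game.

```python
import torch
import torch.nn as nn

# Minimal sender-receiver step with a discrete channel, trained with REINFORCE on
# the sender. All architectures and the reward are illustrative placeholders.
VOCAB, OBS_DIM, HID = 8, 4, 32

sender = nn.Sequential(nn.Linear(OBS_DIM, HID), nn.ReLU(), nn.Linear(HID, VOCAB))
receiver = nn.Sequential(nn.Linear(VOCAB, HID), nn.ReLU(), nn.Linear(HID, OBS_DIM))
opt = torch.optim.Adam(list(sender.parameters()) + list(receiver.parameters()), lr=1e-3)

obs = torch.randn(16, OBS_DIM)                       # sender's private observation
dist = torch.distributions.Categorical(logits=sender(obs))
msg = dist.sample()                                  # discrete message
action = receiver(nn.functional.one_hot(msg, VOCAB).float())

# Toy (fully cooperative) reward: the receiver reconstructs the observation.
reward = -((action - obs) ** 2).mean(dim=1)
receiver_loss = -reward.mean()                                  # differentiable path
sender_loss = -(dist.log_prob(msg) * reward.detach()).mean()    # REINFORCE estimator

opt.zero_grad()
(receiver_loss + sender_loss).backward()
opt.step()
```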
|
We manage to emerge communication with selfish agents, contrary to the current view in ML
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:270
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep neural networks (DNNs) typically have enough capacity to fit random data by brute force even when conventional data-dependent regularizations focusing on the geometry of the features are imposed.
We find out that the reason for this is the inconsistency between the enforced geometry and the standard softmax cross entropy loss.
To resolve this, we propose a new framework for data-dependent DNN regularization, the Geometrically-Regularized-Self-Validating neural Networks (GRSVNet).
During training, the geometry enforced on one batch of features is simultaneously validated on a separate batch using a validation loss consistent with the geometry.
We study a particular case of GRSVNet, the Orthogonal-Low-rank Embedding (OLE)-GRSVNet, which is capable of producing highly discriminative features residing in orthogonal low-rank subspaces.
Numerical experiments show that OLE-GRSVNet outperforms DNNs with conventional regularization when trained on real data.
More importantly, unlike conventional DNNs, OLE-GRSVNet refuses to memorize random data or random labels, suggesting it only learns intrinsic patterns by reducing the memorizing capacity of the baseline DNN.
It remains an open question why DNNs, typically with far more model parameters than training samples, can achieve such small generalization error.
Previous work used various complexity measures from statistical learning theory, such as VC dimension (Vapnik, 1998), Rademacher complexity BID1, and uniform stability BID2 BID10, to provide an upper bound for the generalization error, suggesting that the effective capacity of DNNs, possibly with some regularization techniques, is usually limited.
However, the experiments by Zhang et al. (2017) showed that, even with data-independent regularization, DNNs can perfectly fit the training data when the true labels are replaced by random labels, or when the training data are replaced by Gaussian noise.
This suggests that DNNs with data-independent regularization have enough capacity to "memorize" the training data.
This poses an interesting question for network regularization design: is there a way for DNNs to refuse to (over)fit training samples with random labels, while exhibiting better generalization power than conventional DNNs when trained with true labels?
Such networks are very important because they will extract only intrinsic patterns from the training data instead of memorizing miscellaneous details. One would expect that data-dependent regularizations should be a better choice for reducing the memorizing capacity of DNNs.
Such regularizations are typically enforced by penalizing the standard softmax cross entropy loss with an extra geometric loss which regularizes the feature geometry BID8 Zhu et al., 2018; Wen et al., 2016) .
However, regularizing DNNs with an extra geometric loss has two disadvantages: First, the output of the softmax layer, usually viewed as a probability distribution, is typically inconsistent with the feature geometry enforced by the geometric loss.
Therefore, the geometric loss typically has a small weight to avoid jeopardizing the minimization of the softmax loss.
Second, we find that DNNs with such regularization can still perfectly (over)fit random training samples or random labels.
The reason is that the geometric loss (because of its small weight) is ignored and only the softmax loss is minimized. This suggests that simply penalizing the softmax loss with a geometric loss is not sufficient to regularize DNNs.
Instead, the softmax loss should be replaced by a validation loss that is consistent with the enforced geometry.
More specifically, every training batch B is split into two sub-batches, the geometry batch B g and the validation batch B v .
The geometric loss l g is imposed on the features of B g for them to exhibit a desired geometric structure.
A semi-supervised learning algorithm based on the proposed feature geometry is then used to generate a predicted label distribution for the validation batch, which combined with the true labels defines a validation loss on B v .
The total loss on the training batch B is then defined as the weighted sum l = l g + λl v .
Because the predicted label distribution on B v is based on the enforced geometry, the geometric loss l g can no longer be neglected.
Therefore, l g and l v will be minimized simultaneously, i.e., the geometry is correctly enforced (small l g ) and it can be used to predict validation samples (small l v ).
We call such DNNs Geometrically-Regularized-Self-Validating neural Networks (GRSVNets).
See FIG0 for a visual illustration of the network architecture. GRSVNet is a general architecture because every consistent geometry/validation pair can fit into this framework as long as the loss functions are differentiable.
In this paper, we focus on a particular type of GRSVNet, the Orthogonal-Low-rank-Embedding-GRSVNet (OLE-GRSVNet).
More specifically, we impose the OLE loss (Qiu & Sapiro, 2015) on the geometry batch to produce features residing in orthogonal subspaces, and we use the principal angles between the validation features and those subspaces to define a predicted label distribution on the validation batch.
We prove that the loss function obtains its minimum if and only if the subspaces of different classes spanned by the features in the geometry batch are orthogonal, and the features in the validation batch reside perfectly in the subspaces corresponding to their labels (see FIG0 ).
We show in our experiments that OLE-GRSVNet has better generalization performance when trained on real data, but it refuses to memorize the training samples when given random training data or random labels, which suggests that OLE-GRSVNet effectively learns intrinsic patterns.

Our contributions can be summarized as follows:
• We proposed a general framework, GRSVNet, to effectively impose data-dependent DNN regularization. The core idea is the self-validation of the enforced geometry with a consistent validation loss on a separate batch of features.
• We study a particular case of GRSVNet, OLE-GRSVNet, that can produce highly discriminative features: samples from the same class belong to a low-rank subspace, and the subspaces for different classes are orthogonal.
• OLE-GRSVNet achieves better generalization performance when compared to DNNs with conventional regularizers. And more importantly, unlike conventional DNNs, OLE-GRSVNet refuses to fit the training data (i.e., with a training error close to random guess) when the training data or the training labels are randomly generated. This implies that OLE-GRSVNet never memorizes the training samples, only learns intrinsic patterns.
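A rough sketch of the batch-splitting mechanism is given below, assuming an OLE-style geometric loss (sum of per-class nuclear norms minus the nuclear norm of the whole geometry batch) and a simple subspace-projection score standing in for the paper's principal-angle-based validation prediction; the exact losses, the network Φ, and all names here are illustrative rather than the paper's implementation.

```python
import torch

def ole_geometry_loss(feats, labels, num_classes):
    """OLE-style geometric loss on the geometry batch: sum of per-class nuclear
    norms minus the nuclear norm of the whole batch. By subadditivity of the
    nuclear norm this is >= 0, and it vanishes exactly when the class subspaces
    are mutually orthogonal."""
    total = torch.linalg.matrix_norm(feats, ord='nuc')
    per_class = sum(
        torch.linalg.matrix_norm(feats[labels == c], ord='nuc')
        for c in range(num_classes) if (labels == c).any()
    )
    return per_class - total

def subspace_scores(val_feats, geo_feats, geo_labels, num_classes, k=4):
    """Toy class scores for the validation batch: the fraction of each validation
    feature's norm captured by the top-k singular subspace of each class (the
    cosine of the angle to that subspace). Assumes every class appears in the
    geometry batch; stands in for the paper's principal-angle prediction."""
    scores = []
    for c in range(num_classes):
        U, _, _ = torch.linalg.svd(geo_feats[geo_labels == c].T, full_matrices=False)
        basis = U[:, :k]                                  # orthonormal class basis (D, <=k)
        proj = val_feats @ basis                          # coordinates in the subspace
        scores.append(proj.norm(dim=1) / (val_feats.norm(dim=1) + 1e-8))
    return torch.stack(scores, dim=1)                     # (B_v, num_classes)

# Toy usage with random features; lam plays the role of the weight in l = l_g + lam * l_v.
D, C, lam = 16, 3, 1.0
geo_feats, geo_labels = torch.randn(30, D), torch.randint(0, C, (30,))
val_feats, val_labels = torch.randn(10, D), torch.randint(0, C, (10,))
l_g = ole_geometry_loss(geo_feats, geo_labels, C)
l_v = torch.nn.functional.cross_entropy(subspace_scores(val_feats, geo_feats, geo_labels, C), val_labels)
loss = l_g + lam * l_v
print(float(loss))
```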
We proposed a general framework, GRSVNet, for data-dependent DNN regularization.
The core idea is the self-validation of the enforced geometry on a separate batch using a validation loss consistent with the geometric loss, so that the predicted label distribution has a meaningful geometric interpretation.
In particular, we study a special case of GRSVNet, OLE-GRSVNet, which is capable of producing highly discriminative features: samples from the same class belong to a low-rank subspace, and the subspaces for different classes are orthogonal.
When trained on benchmark datasets with real labels, OLE-GRSVNet achieves better test accuracy when compared to DNNs with different regularizations sharing the same baseline architecture.
More importantly, unlike conventional DNNs, OLE-GRSVNet refuses to memorize and overfit the training data when trained on random labels or random data.
This suggests that OLE-GRSVNet effectively reduces the memorizing capacity of DNNs, and it only extracts intrinsically learnable patterns from the data. Although we provided some intuitive explanation as to why GRSVNet generalizes well on real data and refuses to overfit random data, there are still open questions to be answered.
For example, what is the minimum representational capacity of the baseline DNN (i.e., number of layers and number of units) to make even GRSVNet trainable on random data?
Or is it because of the learning algorithm (SGD) that prevents GRSVNet from learning a decision boundary that is too complicated for random samples?
Moreover, we still have not answered why conventional DNNs, while fully capable of memorizing random data by brute force, typically find generalizable solutions on real data.
These questions will be the focus of our future work.
It suffices to prove the case when K = 2, as the case for larger K can be proved by induction. In order to simplify the notation, we restate the original theorem for K = 2:

Theorem. Let A ∈ R^(N×m) and B ∈ R^(N×n) be matrices of the same row dimensions, and [A, B] ∈ R^(N×(m+n)) be the concatenation of A and B. We have ‖[A, B]‖_* ≤ ‖A‖_* + ‖B‖_*. Moreover, the equality holds if and only if A*B = 0, i.e., the column spaces of A and B are orthogonal.

Proof. The inequality (8) and the sufficient condition for the equality to hold are easy to prove. More specifically, DISPLAYFORM1. Moreover, if A*B = 0, then DISPLAYFORM2, where |A| = (A*A)^(1/2). Therefore, DISPLAYFORM3.

Next, we show the necessary condition for the equality to hold, i.e., DISPLAYFORM4. Let DISPLAYFORM5 be a symmetric positive semidefinite matrix. We have DISPLAYFORM6. Let DISPLAYFORM7 be the orthonormal eigenvectors of |A| and |B|, respectively. Then DISPLAYFORM8. Similarly, DISPLAYFORM9. Suppose that ‖[A, B]‖_* = ‖A‖_* + ‖B‖_*; then DISPLAYFORM10. Therefore, both of the inequalities in this chain must be equalities, and the first one is an equality only if G = 0. This, combined with the last equation in FORMULA2, implies DISPLAYFORM11.

APPENDIX B PROOF OF THEOREM 2

Proof. First, l is defined in equation FORMULA8 as DISPLAYFORM12. The nonnegativity of l_g(Z^g) is guaranteed by Theorem 1. The validation loss l_v(Y^v, Ŷ^v) is also nonnegative since it is the average (over the validation batch) of the cross-entropy losses: DISPLAYFORM13. Therefore l = l_g + λ l_v is also nonnegative.

Next, for a given λ > 0, l(X, Y) obtains its minimum value zero if and only if both l_g(Z^g) and l_v(Y^v, Ŷ^v) are zero.
• By Theorem 1, l_g(Z^g) = 0 if and only if span(Z^g_c) ⊥ span(Z^g_{c'}) for all c ≠ c'.
• According to (19), l_v(Y^v, Ŷ^v) = 0 if and only if ŷ(x) = δ_y for all x ∈ X^v, i.e., for every x ∈ X^v_c, its feature z = Φ(x; θ) belongs to span(Z^g_c).

At last, we want to prove that if λ > 0 and X^v contains at least one sample for each class, then rank(span(Z^g_c)) ≥ 1 for any c ∈ {1, . . . , K}. If not, then there exists c ∈ {1, . . . , K} such that rank(span(Z^g_c)) = 0. Let x ∈ X^v be a validation datum belonging to class y = c. The predicted probability of x belonging to class c is defined in (3): DISPLAYFORM14. Thus we have DISPLAYFORM15.
we propose a new framework for data-dependent DNN regularization that can prevent DNNs from overfitting random data or random labels.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:271
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
End-to-end automatic speech recognition (ASR) commonly transcribes audio signals into sequences of characters while its performance is evaluated by measuring the word-error rate (WER).
This suggests that predicting sequences of words directly may be helpful instead.
However, training with word-level supervision can be more difficult due to the sparsity of examples per label class.
In this paper we analyze an end-to-end ASR model that combines a word-and-character representation in a multi-task learning (MTL) framework.
We show that it improves on the WER and study how the word-level model can benefit from character-level supervision by analyzing the learned inductive preference bias of each model component empirically.
We find that by adding character-level supervision, the MTL model interpolates between recognizing more frequent words (preferred by the word-level model) and shorter words (preferred by the character-level model).
End-to-end automatic speech recognition (ASR) allows for learning a direct mapping from audio signals to character outputs.
Usually, a language model re-scores the predicted transcripts during inference to correct spelling mistakes BID16 .
If we map the audio input directly to words, we can use a simpler decoding mechanism and reduce the prediction time.
Unfortunately, word-level models can only be trained on known words.
Out-of-vocabulary (OOV) words have to be mapped to an unknown token.
Furthermore, decomposing transcripts into sequences of words decreases the available number of examples per label class.
These shortcomings make it difficult to train on the word level BID2.

Recent works have shown that multi-task learning (MTL) BID8 on the word and character level can improve the word-error rate (WER) of common end-to-end speech recognition architectures BID2 BID3 BID18 BID21 BID22 BID24 BID29. MTL can be interpreted as learning an inductive bias with favorable generalization properties BID6. In this work we aim at characterizing the nature of this inductive bias in word-character-level MTL models by analyzing the distribution of words that they recognize. Thereby, we seek to shed light on the learning process and possibly inform the design of better models. We will focus on connectionist temporal classification (CTC) BID15. However, the analysis can also prove beneficial to other modeling paradigms, such as RNN Transducers BID14 or Encoder-Decoder models, e.g., BID5 BID9.

Contributions. We show that, contrary to earlier negative results BID2 BID27, it is in fact possible to train a word-level model from scratch on a relatively small dataset and that its performance can be further improved by adding character-level supervision. Through an empirical analysis we show that the resulting MTL model combines the preference biases of word- and character-level models. We hypothesize that this can partially explain why word-character MTL improves on only using a single decomposition, such as phonemes, characters or words.

Several works have explored using words instead of characters or phonemes as outputs of the end-to-end ASR model BID2 BID27. Soltau et al. BID27 found that in order to solve the problem of observing only few labels per word, they needed to use a large dataset of 120,000 hours to train a word-level model directly. Accordingly, Audhkhasi et al. BID2 reported difficulty training a model on words from scratch and instead fine-tuned a pre-trained character-level model after replacing the last dense layer with a word embedding.

MTL enables a straightforward joint training procedure to integrate transcript information on multiple levels of granularity. Treating word- and character-level transcription as two distinct tasks allows for combining their losses in a parallel BID21 BID22 BID28 BID29 or hierarchical structure BID13 BID20 BID24. Augmenting the commonly used CTC loss with an attention mechanism can help with aligning the predictions on both the character and word level BID3 BID12 BID22. All these MTL methods improve over a standard CTC baseline.

Finding the right granularity of the word decomposition is in itself a difficult problem. While Li et al. BID22 used different fixed decompositions of words, sub-words and characters, it is also possible to optimize over alignments and decompositions jointly BID23. Orthogonal to these works, different authors have explored how to minimize WER directly by computing approximate gradients BID25 BID32.

When and why does MTL work? Earlier theoretical work argued that the auxiliary task provides a favorable inductive bias to the main task BID6. Within natural language processing on text, several works verified empirically that this inductive bias is favorable if there is a certain notion of relatedness between the tasks BID4 BID7 BID26. Here, we investigate how to characterize the inductive bias learned via MTL for speech recognition.
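A minimal sketch of the parallel word-and-character MTL structure discussed above is given below, with two CTC heads on a shared encoder. The encoder, vocabulary sizes, and loss weighting are placeholders and not the paper's exact configuration.

```python
import torch
import torch.nn as nn

# Shared encoder with two CTC heads whose losses are combined; a sketch only.
N_CHAR, N_WORD, FEAT, HID = 30, 1000, 40, 128

encoder = nn.LSTM(FEAT, HID, num_layers=2, bidirectional=True, batch_first=False)
char_head = nn.Linear(2 * HID, N_CHAR + 1)   # +1 for the CTC blank symbol
word_head = nn.Linear(2 * HID, N_WORD + 1)
ctc = nn.CTCLoss(blank=0, zero_infinity=True)

x = torch.randn(200, 8, FEAT)                # (time, batch, acoustic features)
h, _ = encoder(x)
char_logp = char_head(h).log_softmax(dim=-1)
word_logp = word_head(h).log_softmax(dim=-1)

in_lens = torch.full((8,), 200, dtype=torch.long)
char_tgt = torch.randint(1, N_CHAR + 1, (8, 50))
char_lens = torch.full((8,), 50, dtype=torch.long)
word_tgt = torch.randint(1, N_WORD + 1, (8, 12))
word_lens = torch.full((8,), 12, dtype=torch.long)

# Word-level loss plus auxiliary character-level supervision (weight is a guess).
alpha = 0.5
loss = ctc(word_logp, word_tgt, in_lens, word_lens) + alpha * ctc(char_logp, char_tgt, in_lens, char_lens)
loss.backward()
```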
In contrast to earlier studies in the literature, we found that, even on a relatively small dataset, training on a word-level can be feasible.
Furthermore, we found that combining a word-level model with character-level supervision in MTL can improve results noticeably.
To gain a better understanding of this, we characterized the inductive bias of word-character MTL in ASR by comparing the distributions of recognized words at the beginning of training.
We found that adding character-level supervision to a word-level interpolates between recognizing more frequent words (preferred by the word-level model) and shorter words (preferred by the character-level model).
This effect could be even more pronounced on harder datasets than WSJ, such as medical communication data where many long words are infrequent, but very important.
Further analysis of word distributions in terms of pitch, noise and acoustic variability could provide additional insight.
|
Multi-task learning improves word-and-character-level speech recognition by interpolating the preference biases of its components: frequency- and word length-preference.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:272
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Discretizing floating-point vectors is a fundamental step of modern indexing methods.
State-of-the-art techniques learn parameters of the quantizers on training data for optimal performance, thus adapting quantizers to the data.
In this work, we propose to reverse this paradigm and adapt the data to the quantizer: we train a neural net whose last layers form a fixed parameter-free quantizer, such as pre-defined points of a sphere.
As a proxy objective, we design and train a neural network that favors uniformity in the spherical latent space, while preserving the neighborhood structure after the mapping.
For this purpose, we propose a new regularizer derived from the Kozachenko-Leonenko differential entropy estimator and combine it with a locality-aware triplet loss.
Experiments show that our end-to-end approach outperforms most learned quantization methods, and is competitive with the state of the art on widely adopted benchmarks.
Further more, we show that training without the quantization step results in almost no difference in accuracy, but yields a generic catalyser that can be applied with any subsequent quantization technique.
Recent work BID27 proposed to leverage the pattern-matching ability of machine learning algorithms to improve traditional index structures such as B-trees or Bloom filters, with encouraging results.
In their one-dimensional case, an optimal B-Tree can be constructed if the cumulative density function (CDF) of the indexed value is known, and thus they approximate this CDF using a neural network.
We emphasize that the CDF itself is a mapping between the indexed value and a uniform distribution in [0, 1] .
In this work, we wish to generalize such an approach to multi-dimensional spaces.
More precisely, as illustrated by FIG0, we aim at learning a function that maps real-valued vectors to a uniform distribution over a d-dimensional sphere, such that a fixed discretizing structure, for example a fixed binary encoding (sign of components) or a regular lattice quantizer, offers competitive coding performance. Our approach is evaluated in the context of similarity search, where methods often rely on various forms of learning machinery BID12 BID45; in particular there is a substantial body of literature on methods producing compact codes BID20.

FIG0 caption: It is learned end-to-end, yet the part of the network in charge of the discretization operation is fixed in advance, thereby avoiding optimization problems. The learnable function f, namely the "catalyzer", is optimized to increase the quality of the subsequent coding stage.

FIG1 caption: Illustration of our method, which takes as input a set of samples from an unknown distribution. We learn a neural network that aims at preserving the neighborhood structure in the input space while best covering the output space (uniformly). This trade-off is controlled by a parameter λ. The case λ = 0 keeps the locality of the neighbors but does not cover the output space. On the opposite, when the loss degenerates to the differential entropic regularizer (λ → ∞), the neighbors are not maintained by the mapping. Intermediate values offer different trade-offs between neighbor fidelity and uniformity, which is proper input for an efficient lattice quantizer (depicted here by the hexagonal lattice A2). Panels show the input and the outputs for λ = 0, λ = 0.01, λ = 0.1 and λ → ∞.

Yet the problem of jointly optimizing a coding stage and a neural network remains essentially unsolved, partly because it is difficult to optimize through a discretization function. For this reason, most efforts have been devoted to networks producing binary codes, for which optimization tricks exist, such as soft binarization or stochastic relaxation, which are used in conjunction with neural networks BID28 BID18. However, it is difficult to improve over more powerful codes such as those produced by product quantization BID20, and recent solutions addressing product quantization require complex optimization procedures BID24 BID34.

In order to circumvent this problem, we propose a drastic simplification of learning algorithms for indexing. We learn a mapping such that the output follows the distribution under which the subsequent discretization method, either binary or a more general quantizer, performs better. In other terms, instead of trying to adapt an indexing structure to the data, we adapt the data to the index.

Our technique requires jointly optimizing two antithetical criteria. First, we need to ensure that neighbors are preserved by the mapping, using a vanilla ranking loss BID40 BID6 BID44. Second, the training must favor a uniform output. This suggests a regularization similar to maximum entropy BID36, except that in our case we consider a continuous output space. We therefore propose to cast an existing differential entropy estimator into a regularization term, which plays the same "distribution-matching" role as the Kullback-Leibler term of variational auto-encoders BID9.

As a side note, many similarity search methods are implicitly designed for the range search problem (or near neighbor, as opposed to nearest neighbor BID15 BID0), which aims at finding all vectors whose distance to the query vector is below a fixed threshold. For real-world high-dimensional data, range search usually returns either no neighbors or too many. The discrepancy between near- and nearest-neighbors is significantly reduced by our technique, see Section 3.3 and Appendix C for details.

Our method is illustrated by FIG1. We summarize our contributions as follows:
• We introduce an approach for multi-dimensional indexing that maps the input data to an output space in which indexing is easier. It learns a neural network that plays the role of an adapter for subsequent similarity search methods.
• For this purpose we introduce a loss derived from the Kozachenko-Leonenko differential entropy estimator to favor uniformity in the spherical output space.
• Our learned mapping makes it possible to leverage spherical lattice quantizers with competitive quantization properties and efficient algebraic encoding.
• Our ablation study shows that our network can be trained without the quantization layer and used as a plug-in for processing features before using standard quantizers. We show quantitatively that our catalyzer improves performance by a significant margin for quantization-based (OPQ BID10) and binary (LSH BID5) methods.

This paper is organized as follows. Section 2 discusses related works. Section 3 introduces our neural network model and the optimization scheme. Section 4 details how we combine this strategy with lattice assignment to produce compact codes. The experimental section 5 evaluates our approach.
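The two antithetical criteria described above might be combined as in the sketch below: a triplet ranking loss to preserve neighborhoods plus a Kozachenko-Leonenko-style regularizer (negative mean log nearest-neighbor distance within the batch) to favor uniformity. The exact estimator, ranking loss, and weighting used in the paper may differ; all names are illustrative.

```python
import torch
import torch.nn.functional as F

def koleo_regularizer(z, eps=1e-8):
    """Kozachenko-Leonenko-style entropy proxy: negative mean log distance to the
    nearest neighbor within the batch. Minimizing it spreads points apart, pushing
    an L2-normalized batch toward a more uniform covering of the sphere."""
    dists = torch.cdist(z, z)                                   # pairwise Euclidean distances
    self_mask = torch.eye(z.shape[0], dtype=torch.bool, device=z.device)
    nn_dist = dists.masked_fill(self_mask, float('inf')).min(dim=1).values
    return -torch.log(nn_dist + eps).mean()

def catalyzer_loss(anchor, positive, negative, lam=0.1, margin=0.2):
    """Triplet ranking term that preserves neighborhoods, plus lam times the
    entropy regularizer that favors uniformity (weights are placeholders)."""
    rank = F.triplet_margin_loss(anchor, positive, negative, margin=margin)
    return rank + lam * koleo_regularizer(anchor)

# Toy usage on L2-normalized embeddings standing in for the outputs of a catalyzer network f.
z_a = F.normalize(torch.randn(64, 8, requires_grad=True), dim=1)
z_p = F.normalize(torch.randn(64, 8), dim=1)
z_n = F.normalize(torch.randn(64, 8), dim=1)
loss = catalyzer_loss(z_a, z_p, z_n)
loss.backward()
```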
Choice of λ.
The marginal distributions for these two views are much more uniform with our KoLeo regularizer, which is a consequence of the higher uniformity in the high-dimensional latent space.

Qualitative evaluation of the uniformity.
Figure 3 shows the histogram of the distance to the nearest (resp. 100 th nearest) neighbor, before applying the catalyzer (left) and after (right).
The overlap between the two distributions is significantly reduced by the catalyzer.
We evaluate this quantitatively by measuring the probability that the distance between a point and its nearest neighbor is larger than the distance between another point and its 100 th nearest neighbor.
In a very imbalanced space, this value is 50%, whereas in a uniform space it should approach 0%.
In the input space, this probability is 20.8%, and it goes down to 5.0% in the output space thanks to our catalyzer.

Visualization of the output distribution.
While FIG1 illustrates our method with the 2D disk as an output space, we are interested in mapping input samples to a higher dimensional hyper-sphere.
FIG2 proposes a visualization of the high-dimensional density from a different viewpoint, with the Deep1M dataset mapped in 8 dimensions.
We sample 2 planes randomly in R^(d_out) and project the dataset points (f(x_1), ..., f(x_n)) on them.
For each column, the 2 figures are the angular histograms of the points with a polar parametrization of this plane.
The area inside the curve is constant and proportional to the number of samples n.
A uniform angular distribution produces a centered disk, and less uniform distributions look like unbalanced potatoes. The densities we represent are marginalized, so if the distribution looks non-uniform then it is non-uniform in d_out-dimensional space, but the reverse is not true.
Yet one can compare the results obtained for different regularization coefficients, which shows that our regularizer has a strong uniformizing effect on the mapping, ultimately resembling that of a uniform distribution for λ = 1.
|
We learn a neural network that uniformizes the input distribution, which leads to competitive indexing performance in high-dimensional space
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:273
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Temporal difference (TD) learning is a popular algorithm for policy evaluation in reinforcement learning, but the vanilla TD can substantially suffer from the inherent optimization variance.
A variance reduced TD (VRTD) algorithm was proposed by Korda and La (2015), which applies the variance reduction technique directly to the online TD learning with Markovian samples.
In this work, we first point out the technical errors in the analysis of VRTD in Korda and La (2015), and then provide a mathematically solid analysis of the non-asymptotic convergence of VRTD and its variance reduction performance.
We show that VRTD is guaranteed to converge to a neighborhood of the fixed-point solution of TD at a linear convergence rate.
Furthermore, the variance error (for both i.i.d. and Markovian sampling) and the bias error (for Markovian sampling) of VRTD are significantly reduced by the batch size of variance reduction in comparison to those of vanilla TD.
In reinforcement learning (RL), policy evaluation aims to obtain the expected long-term reward of a given policy and plays an important role in identifying the optimal policy that achieves the maximal cumulative reward over time Bertsekas and Tsitsiklis (1995) ; Dayan and Watkins (1992) ; Rummery and Niranjan (1994) .
The temporal difference (TD) learning algorithm, originally proposed by Sutton (1988) , is one of the most widely used policy evaluation methods, which uses the Bellman equation to iteratively bootstrap the estimation process and continually update the value function in an incremental way.
In practice, if the state space is large or infinite, function approximation is often used to find an approximate value function efficiently.
Theoretically, TD with linear function approximation has been shown to converge to the fixed point solution with i.i.d. samples and Markovian samples in Sutton (1988) ; Tsitsiklis and Van Roy (1997) .
The finite sample analysis of TD has also been studied in Bhandari et al. (2018) ; Srikant and Ying (2019) ; Dalal et al. (2018a); Cai et al. (2019) .
Since each iteration of TD uses one or a mini-batch of samples to estimate the mean of the gradient, TD learning usually suffers from the inherent variance, which substantially degrades the convergence accuracy.
Although a diminishing stepsize or very small constant stepsize can reduce the variance Bhandari et al. (2018) ; Srikant and Ying (2019) ; Dalal et al. (2018a) , they also slow down the convergence significantly.
Two approaches have been proposed to reduce the variance.
The first approach is the so-called batch TD, which takes a fixed sample set and transforms the empirical mean square projected Bellman error (MSPBE) into an equivalent convex-concave saddle-point problem Du et al. (2017) .
Due to the finite-sample nature of such a problem, stochastic variance reduction techniques for conventional optimization can be directly applied here to reduce the variance.
In particular, Du et al. (2017) showed that SVRG Johnson and Zhang (2013) and SAGA Defazio et al. (2014) can be applied to improve the performance of batch TD algorithms, and Peng et al. (2019) proposed two variants of SVRG to further save the computation cost.
However, the analysis of batch TD does not take into account the statistical nature of the training samples, which are generated by a MDP.
Hence, there is no guarantee of such obtained solutions to be close to the fixed point of TD learning.
The second approach is the so-called TD with centering (CTD) algorithm proposed in Korda and La (2015) , which introduces the variance reduction idea to the original TD learning algorithm.
For the sake of better reflecting its major feature, we refer to CTD as Variance Reduced TD (VRTD) throughout this paper.
Similarly to the SVRG in Johnson and Zhang (2013) , VRTD has outer and inner loops.
The beginning of each inner-loop (i.e. each epoch) computes a batch of sample gradients so that each subsequent inner loop iteration modifies only one sample gradient in the batch gradient to reduce the variance.
The main difference between VRTD and batch TD is that VRTD applies the variance reduction directly to TD learning rather than to a transformed optimization problem in batch TD.
Though Korda and La (2015) empirically verified that VRTD has better convergence accuracy than vanilla TD learning, some technical errors in the analysis in Korda and La (2015) have been pointed out in follow up studies Dalal et al. (2018a) ; Narayanan and Szepesvári (2017) .
Furthermore, as we discuss in Section 3, the technical proof in Korda and La (2015) regarding the convergence of VRTD also has technical errors so that their results do not correctly characterize the impact of variance reduction on TD learning.
Given the recent surge of interest in the finite time analysis of the vanilla TD Bhandari et al. (2018) ; Srikant and Ying (2019) ; Dalal et al. (2018a) , it becomes imperative to reanalyze the VRTD and accurately understand whether and how variance reduction can help to improve the convergence accuracy over vanilla TD.
Towards this end, this paper specifically addresses the following central questions.
• For i.i.d. sampling, it has been shown in Bhandari et al. (2018) that vanilla TD converges only to a neighborhood of the fixed point for a constant stepsize and suffers from a constant error term caused by the variance of the stochastic gradient at each iteration.
For VRTD, does the variance reduction help to reduce such an error and improve the accuracy of convergence?
How does the error depend on the variance reduction parameter, i.e., the batch size for variance reduction?
• For Markovian sampling, it has been shown in Bhandari et al. (2018) ; Srikant and Ying (2019) that the convergence of vanilla TD further suffers from a bias error due to the correlation among samples in addition to the variance error as in i.i.d. sampling.
Does VRTD, which was designed to have reduced variance, also enjoy reduced bias error?
If so, how does the bias error depend on the batch size for variance reduction?
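A schematic, SVRG-style variance-reduced TD update with linear function approximation is sketched below to make the outer/inner-loop structure described earlier concrete; the actual VRTD algorithm's batch construction, Markovian sampling, and step-size choices are not reproduced, and all names are placeholders.

```python
import numpy as np

def td_pseudo_grad(theta, phi_s, r, phi_next, gamma):
    """TD(0) semi-gradient for linear value estimation: delta * phi(s)."""
    delta = r + gamma * phi_next @ theta - phi_s @ theta
    return delta * phi_s

def vrtd_epoch(theta, batch, gamma=0.99, alpha=0.1, inner_steps=None):
    """One epoch of a variance-reduced TD update in the spirit of VRTD/SVRG:
    a reference (batch-averaged) pseudo-gradient is computed at anchor parameters,
    and each inner step corrects a single sample's pseudo-gradient with it."""
    rng = np.random.default_rng(0)
    anchor = theta.copy()
    ref = np.mean([td_pseudo_grad(anchor, s, r, sn, gamma) for s, r, sn in batch], axis=0)
    for _ in range(inner_steps or len(batch)):
        s, r, sn = batch[rng.integers(len(batch))]
        g = td_pseudo_grad(theta, s, r, sn, gamma)
        g_anchor = td_pseudo_grad(anchor, s, r, sn, gamma)
        theta = theta + alpha * (g - g_anchor + ref)   # variance-reduced direction
    return theta

# Toy usage with random features standing in for a sampled trajectory.
rng = np.random.default_rng(1)
batch = [(rng.normal(size=4), rng.normal(), rng.normal(size=4)) for _ in range(32)]
theta = vrtd_epoch(np.zeros(4), batch)
print(theta)
```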
|
This paper provides a rigorous study of the variance reduced TD learning and characterizes its advantage over vanilla TD learning
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:274
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We tackle unsupervised domain adaptation by accounting for the fact that different domains may need to be processed differently to arrive to a common feature representation effective for recognition.
To this end, we introduce a deep learning framework where each domain undergoes a different sequence of operations, allowing some, possibly more complex, domains to go through more computations than others.
This contrasts with state-of-the-art domain adaptation techniques that force all domains to be processed with the same series of operations, even when using multi-stream architectures whose parameters are not shared.
As evidenced by our experiments, the greater flexibility of our method translates to higher accuracy.
Furthermore, it allows us to handle any number of domains simultaneously.
While deep learning has ushered in great advances in automated image understanding, it still suffers from the same weaknesses as all other machine learning techniques: when trained with images obtained under specific conditions, deep networks typically perform poorly on images acquired under different ones.
This is known as the domain shift problem: the changing conditions cause the statistical properties of the test, or target, data, to be different from those of the training, or source, data, and the network's performance degrades accordingly.
Domain adaptation aims to address this problem, especially when annotating images from the target domain is difficult, expensive, or downright infeasible.
The dominant trend is to map images to features that are immune to the domain shift, so that the classifier works equally well on the source and target domains (Fernando et al., 2013; Ganin & Lempitsky, 2015).
In the context of deep learning, the standard approach is to find those features using a single architecture for both domains (Tzeng et al., 2014; Ganin & Lempitsky, 2015; Yan et al., 2017; Zhang et al., 2018) .
Intuitively, however, as the domains have different properties, it is not easy to find one network that does this effectively for both.
A better approach is to allow domains to undergo different transformations to arrive at domain-invariant features.
This has been the focus of recent work (Tzeng et al., 2017; Bermúdez-Chacón et al., 2018; Rozantsev et al., 2018), where source and target data pass through two different networks with the same architecture but different weights, nonetheless related to each other.
In this paper, we introduce a novel, even more flexible paradigm for domain adaptation, that allows the different domains to undergo different computations, not only in terms of layer weights but also in terms of number of operations, while selectively sharing subsets of these computations.
This enables the network to automatically adapt to situations where, for example, one domain depicts simpler images, such as synthetic ones, which may not need as much processing power as those coming from more complex domains, such as images taken in-the-wild.
Our formulation reflects the intuition that source and target domain networks should be similar because they solve closely related problems, but should also perform domain-specific computations to offset the domain shift.
To turn this intuition into a working algorithm, we develop a multibranch architecture that sends the data through multiple network branches in parallel.
What gives it the necessary flexibility are trainable gates that are tuned to modulate and combine the outputs of these branches, as shown in the figure.

Figure: Each computational unit processes the data in parallel branches, whose outputs are then aggregated in a weighted manner by a gate to obtain a single response. To allow for domain-adaptive computations, each domain has its own set of gates, one for each computational unit, which combine the branches in different ways. As a result, some computations are shared across domains while others are domain-specific.

In this way, the network learns which computations should be carried out for each one. As an additional benefit, in contrast to previous strategies for untying the source and target streams (Rozantsev et al., 2018), our formulation naturally extends to more than two domains.
In other words, our contribution is a learning strategy that adaptively adjusts the specific computation to be performed for each domain.
To demonstrate that it constitutes an effective approach to extracting domain-invariant features, we implement it in conjunction with the popular domain classifier-based method of Ganin & Lempitsky (2015) .
Our experiments demonstrate that our Domain Adaptive Multibranch Networks, which we will refer to as DAMNets, not only outperform the original technique of Ganin & Lempitsky (2015) , but also the state-of-the-art strategy for untying the source and target weights of Rozantsev et al. (2019) , which relies on the same domain classifier.
We will make our code publicly available upon acceptance of the paper.
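A minimal sketch of a gated multibranch computational unit with per-domain gates, as described above, is given below; the branch architectures, gate parametrization, and sizes are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class GatedMultiBranchUnit(nn.Module):
    """One computational unit: parallel branches whose outputs are mixed by
    per-domain gates (a softmax over branches, learned separately per domain)."""
    def __init__(self, dim, num_branches=3, num_domains=2):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_branches)]
        )
        # One gate vector per domain; softmax turns it into branch weights.
        self.gates = nn.Parameter(torch.zeros(num_domains, num_branches))

    def forward(self, x, domain):
        outs = torch.stack([b(x) for b in self.branches], dim=0)   # (branches, N, dim)
        w = torch.softmax(self.gates[domain], dim=0)               # (branches,)
        return (w[:, None, None] * outs).sum(dim=0)

unit = GatedMultiBranchUnit(dim=16)
x_source, x_target = torch.randn(8, 16), torch.randn(8, 16)
y_s = unit(x_source, domain=0)   # source-specific mixture of branches
y_t = unit(x_target, domain=1)   # target-specific mixture, possibly different
print(y_s.shape, y_t.shape)
```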
We have introduced a domain adaptation approach that allows for adaptive, separate computations for different domains.
Our framework relies on computational units that aggregate the outputs of multiple parallel operations, and on a set of trainable domain-specific gates that adapt the aggregation process to each domain.
Our experiments have demonstrated the benefits of this approach over the state-of-the-art weight untying strategy; the greater flexibility of our method translates into a consistently better accuracy.
Although we only experimented with using the same branch architectures within a computational unit, our framework generalizes to arbitrary branch architectures, the only constraint being that their outputs are of commensurate shapes.
An interesting avenue for future research would therefore be to automatically determine the best operation to perform for each domain, for example by combining our approach with neural architecture search strategies.
Figure 1: Multibranch LeNet. This architecture is a multibranch extension to the LeNet used by DANN (Ganin & Lempitsky, 2015).
Figure 2: Multibranch SVHNet. This architecture is a multibranch extension to the SVHNet used by DANN (Ganin & Lempitsky, 2015).
We preserve the groupings described in the original paper (He et al., 2016). N denotes the number of classes in the dataset.
|
A Multiflow Network is a dynamic architecture for domain adaptation that learns potentially different computational graphs per domain, so as to map them to a common representation where inference can be performed in a domain-agnostic fashion.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:275
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The practical usage of reinforcement learning agents is often bottlenecked by the duration of training time.
To accelerate training, practitioners often turn to distributed reinforcement learning architectures to parallelize and accelerate the training process.
However, modern methods for scalable reinforcement learning (RL) often tradeoff between the throughput of samples that an RL agent can learn from (sample throughput) and the quality of learning from each sample (sample efficiency).
In these scalable RL architectures, as one increases sample throughput (i.e. increasing parallelization in IMPALA (Espeholt et al., 2018)), sample efficiency drops significantly.
To address this, we propose a new distributed reinforcement learning algorithm, IMPACT.
IMPACT extends PPO with three changes: a target network for stabilizing the surrogate objective, a circular buffer, and truncated importance sampling.
In discrete action-space environments, we show that IMPACT attains higher reward and, simultaneously, achieves up to 30% decrease in training wall-time than that of IMPALA.
For continuous control environments, IMPACT trains faster than existing scalable agents while preserving the sample efficiency of synchronous PPO.
Proximal Policy Optimization (Schulman et al., 2017 ) is one of the most sample-efficient on-policy algorithms.
However, it relies on a synchronous architecture for collecting experiences, which is closely tied to its trust region optimization objective.
Other architectures such as IMPALA can achieve much higher throughputs due to the asynchronous collection of samples from workers.
Yet, IMPALA suffers from reduced sample efficiency since it cannot safely take multiple SGD steps per batch as PPO can.
The new agent, Importance Weighted Asynchronous Architectures with Clipped Target Networks (IMPACT), mitigates this inherent mismatch.
Not only is the algorithm highly sample efficient, it can learn quickly, training 30 percent faster than IMPALA.
At the same time, we propose a novel method to stabilize agents in distributed asynchronous setups and, through our ablation studies, show how the agent can learn in both a time and sample efficient manner.
In our paper, we show that the algorithm IMPACT realizes greater gains by striking the balance between high sample throughput and sample efficiency.
In our experiments, we demonstrate in the experiments that IMPACT exceeds state-of-the-art agents in training time (with same hardware) while maintaining similar sample efficiency with PPO's.
The contributions of this paper are as follows:
1. We show that when collecting experiences asynchronously, introducing a target network allows for a stabilized surrogate objective and multiple SGD steps per batch (Section 3.1).
2. We show that using a circular buffer for storing asynchronously collected experiences allows for smooth trade-off between real-time performance and sample efficiency (Section 3.2).
3. We show that IMPACT, when evaluated using identical hardware and neural network models, improves both in real-time and timestep efficiency over both synchronous PPO and IMPALA (Section 4).
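A toy sketch of the circular buffer idea from contribution 2 is shown below: batches are stored in a fixed-size ring and each batch can be replayed only a bounded number of times, which is one way to trade off sample throughput against per-sample reuse. The details of IMPACT's actual buffer may differ; all names are placeholders.

```python
import random
from collections import deque

class CircularBuffer:
    """Fixed-size ring of training batches; each batch is sampled at most
    `max_replay` times, and new worker batches evict the oldest entries."""
    def __init__(self, size=16, max_replay=4):
        self.max_replay = max_replay
        self.slots = deque(maxlen=size)          # entries are [batch, times_replayed]

    def push(self, batch):
        self.slots.append([batch, 0])            # oldest batch is evicted automatically

    def sample(self):
        if not self.slots:
            return None
        idx = random.randrange(len(self.slots))
        entry = self.slots[idx]
        entry[1] += 1
        if entry[1] >= self.max_replay:          # retire over-replayed batches
            del self.slots[idx]
        return entry[0]

# Tuning `size` and `max_replay` adjusts how many SGD steps are taken per collected batch.
buf = CircularBuffer(size=4, max_replay=2)
for t in range(6):
    buf.push({"obs": t})
print(buf.sample(), buf.sample())
```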
Figure: ... into a large training batch and the learner performs minibatch SGD. IMPALA workers asynchronously generate data. IMPACT consists of a batch buffer that takes in worker experience and a target's evaluation on the experience. The learner samples from the buffer.
In conclusion, we introduce IMPACT, which extends PPO with a stabilized surrogate objective for asynchronous optimization, enabling greater real-time performance without sacrificing timestep efficiency.
We show the importance of the IMPACT objective to stable training, and show it can outperform tuned PPO and IMPALA baselines in both real-time and timestep metrics.
In Figure 9, we gradually add components to IMPALA until the agent is equivalent to IMPACT's.
Starting from IMPALA, we gradually add PPO's objective function, circular replay buffer, and target-worker clipping.
In particular, IMPALA with PPO's objective function and circular replay buffer is equivalent to an asynchronous variant of PPO (APPO).
APPO fails to perform as well as synchronous distributed PPO, since PPO is an on-policy algorithm.
|
IMPACT helps RL agents train faster by decreasing training wall-clock time and increasing sample efficiency simultaneously.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:276
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this paper, we show that a simple coloring scheme can improve, both theoretically and empirically, the expressive power of Message Passing Neural Networks (MPNNs).
More specifically, we introduce a graph neural network called Colored Local Iterative Procedure (CLIP) that uses colors to disambiguate identical node attributes, and show that this representation is a universal approximator of continuous functions on graphs with node attributes.
Our method relies on separability, a key topological characteristic that allows to extend well-chosen neural networks into universal representations.
Finally, we show experimentally that CLIP is capable of capturing structural characteristics that traditional MPNNs fail to distinguish, while being state-of-the-art on benchmark graph classification datasets.
Learning good representations is seen by many machine learning researchers as the main reason behind the tremendous successes of the field in recent years (Bengio et al., 2013) .
In image analysis (Krizhevsky et al., 2012) , natural language processing (Vaswani et al., 2017) or reinforcement learning (Mnih et al., 2015) , groundbreaking results rely on efficient and flexible deep learning architectures that are capable of transforming a complex input into a simple vector while retaining most of its valuable features.
The universal approximation theorem (Cybenko, 1989; Hornik et al., 1989; Hornik, 1991; Pinkus, 1999) provides a theoretical framework to analyze the expressive power of such architectures by proving that, under mild hypotheses, multi-layer perceptrons (MLPs) can uniformly approximate any continuous function on a compact set.
This result provided a first theoretical justification of the strong approximation capabilities of neural networks, and was the starting point of more refined analyses providing valuable insights into the generalization capabilities of these architectures (Baum and Haussler, 1989; Geman et al., 1992; Saxe et al., 2014; Bartlett et al., 2018) .
Despite a large literature and state-of-the-art performance on benchmark graph classification datasets, graph neural networks yet lack a similar theoretical foundation (Xu et al., 2019) .
Universality for these architectures is either hinted at via equivalence with approximate graph isomorphism tests (k-WL tests in Xu et al. 2019; Maron et al. 2019a ), or proved under restrictive assumptions (finite node attribute space in Murphy et al. 2019) .
In this paper, we introduce Colored Local Iterative Procedure 1 (CLIP), which tackles the limitations of current Message Passing Neural Networks (MPNNs) by showing, both theoretically and experimentally, that adding a simple coloring scheme can improve the flexibility and power of these graph representations.
More specifically, our contributions are:
1) we provide a precise mathematical definition for universal graph representations,
2) we present a general mechanism to design universal neural networks using separability,
3) we propose a novel node coloring scheme leading to CLIP, the first provably universal extension of MPNNs,
4) we show that CLIP achieves state of the art results on benchmark datasets while significantly outperforming traditional MPNNs as well as recent methods on graph property testing.
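As a rough illustration of the coloring idea introduced above (a minimal sketch with assumed details such as the attribute fingerprint and a fixed color budget, not the authors' CLIP implementation), identical node attributes can be disambiguated by appending distinct one-hot colors before message passing:
```python
# Illustrative sketch only: disambiguate identical node attributes with one-hot
# "colors" appended to the features before message passing.
import numpy as np

def color_node_attributes(node_attrs, max_colors):
    """Nodes sharing identical attributes receive distinct colors (up to max_colors)."""
    n = len(node_attrs)
    colors = np.zeros((n, max_colors))
    seen = {}  # attribute fingerprint -> number of occurrences so far
    for i, attr in enumerate(node_attrs):
        key = tuple(np.round(attr, 8))
        c = seen.get(key, 0)
        colors[i, min(c, max_colors - 1)] = 1.0
        seen[key] = c + 1
    return np.concatenate([node_attrs, colors], axis=1)

# Example: two nodes with identical attributes end up with different colors.
attrs = np.array([[1.0, 0.0], [1.0, 0.0], [0.5, 0.5]])
print(color_node_attributes(attrs, max_colors=3))
```
CLIP itself treats colorings more carefully (for instance by considering several colorings per graph); this deterministic assignment is only meant to make the disambiguation step concrete.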
The rest of the paper is organized as follows: Section 2 gives an overview of the graph representation literature and related works.
Section 3 provides a precise definition for universal representations, as well as a generic method to design them using separable neural networks.
In Section 4, we show that most state-of-the-art representations are not sufficiently expressive to be universal.
Then, using the analysis of Section 3, Section 5 provides CLIP, a provably universal extension of MPNNs.
Finally, Section 6 shows that CLIP achieves state-of-the-art accuracies on benchmark graph classification tasks, as well as outperforming its competitors on graph property testing problems.
In this paper, we showed that a simple coloring scheme can improve the expressive power of MPNNs.
Using such a coloring scheme, we extended MPNNs to create CLIP, the first universal graph representation.
Universality was proven using the novel concept of separable neural networks, and our experiments showed that CLIP is state-of-the-art on both graph classification datasets and property testing tasks.
The coloring scheme is especially well suited to hard classification tasks that require complex structural information to learn.
The framework is general and simple enough to extend to other data structures such as directed, weighted or labeled graphs.
Future work includes more detailed and quantitative approximation results depending on the parameters of the architecture such as the number of colors k, or number of hops of the iterative neighborhood aggregation.
|
This paper introduces a coloring scheme for node disambiguation in graph neural networks based on separability, proven to be a universal MPNN extension.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:277
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this paper we present a method for algorithmic melody generation using a generative adversarial network without recurrent components.
Music generation has been successfully done using recurrent neural networks, where the model learns sequence information that can help create authentic sounding melodies.
Here, we use a DCGAN architecture with dilated convolutions and towers to capture sequential information as spatial image information, and to learn long-range dependencies in fixed-length melody forms such as the Irish traditional reel.
Algorithmic music composition is almost as old as computers themselves, dating back to the 1957 "Illiac suite" (Hiller Jr & Isaacson, 1958) .
Since then, automated music composition evolved with technology, progressing from the first rule-and-randomness based methods to the sophisticated tools made possible by modern-day machine learning (see Fernández & Vico (2013) and Briot et al. (2017) for detailed surveys on history and state of the art of algorithmic music composition).
One of the first machine learning (ML) approaches to music generation was Conklin & Witten (1995) , who used the common notion of entropy as a measurement to build what they termed a multiple viewpoint system.
Standard feedforward neural networks have difficulties with sequence based information such as music.
Predicting the next note of a piece, when only based on the current note, does not account for long-range context or structure (such as key and musical sections) which help give coherence to compositions.
As music is traditionally represented as sequences of notes, recurrent neural networks are a natural tool for music (especially melody) generation, and multiple groups used RNNs fairly successfully for a variety of types of music.
Todd (1989) used a sequential model for composition in 1989, and Eck & Schmidhuber (2002) used the adapted LSTM structure to successfully generate music that had both short-term musical structure and contained the higher-level context and structure needed.
Subsequently, there have been a number of RNN-based melody generators (Simon & Oore, 2017; Lee et al., 2017; Eck & Lapalme, 2008; Sturm et al., 2016; Chen & Miikkulainen, 2001; Boulanger-Lewandowski et al., 2012; .
Other approaches such as MidiNet by Yang et al. (2017) , though not RNNs, also leveraged the sequential representation of music.
Using an RNN architecture provides a lot of flexibility when generating music, as an RNN has the ability to generate pieces of varying length.
However, in some styles of music this is not as desired.
This is true of traditional Irish music -and especially their jigs and reels.
These pieces have a more rigid format where the varying length can prevent capturing the interplay between the phrases of the piece.
Finding jigs and reels to train on was made easy by an excellent database of Irish traditional melodies in ABC notation (a text-based format), publicly available at TheSession (Keith).
Several RNN-based generators were trained on the melodies from TheSession, most notably Sturm et al. (Sturm et al., 2016; Sturm & Ben-Tal, 2018) , as well as Eck & Lapalme (2008) .
It is natural to view music, and in particular melodies, as sequential data.
However, to better represent long-term dependencies it can be useful to present music as a two-dimensional form, where related parts and occurrences of long patterns end up aligned.
This benefit is especially apparent in forms of music where a piece consists of well-defined, fixed-length components, such as reels in Irish music.
These components are often variations on the same theme, with specific rules on where repeats vs. changes should be introduced.
Aligning them allows us to use vertical spatial proximity to capture these dependencies, while still representing the local structure in the sequence by horizontal proximity.
In this project, we leverage such two-dimensional representation of melodies for non-sequential melody generation.
We focus on melody generation using deep convolutional generative adversarial networks (DCGANs) without recurrent components for fixed-format music such as reels.
This approach is intended to capture higher-level structures in the pieces (like sections), and better mimic interplay between smaller parts (musical motifs).
More specifically, we use dilations of several semantically meaningful lengths (a bar or a phrase) to further capture the dependencies.
Dilated convolutions, introduced by Yu & Koltun (2015) , have been used in a number of applications over the last several years to capture long-range dependencies, notably in WaveNet (Oord et al., 2016) .
However, they are usually combined with some recurrent component even when used for a GAN-based generation such as in Zhang et al. (2019) or Liu & Yang (2019) .
Not all techniques applicable to images can be used for music, however: pooling isn't effective, as the average of two pitches can create notes which fall outside of the 12-semitone scale (which is the basis of the major and minor scales as well as various modes).
This is reflected in the architecture of our discriminator, with dilations and towers as the main ingredients.
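A minimal sketch of how a dilation matched to a musically meaningful span might look in the discriminator (layer sizes, the piano-roll-like input shape, and the notes-per-bar value are assumptions for illustration, not the paper's exact architecture):
```python
# Sketch: a discriminator block whose dilation along the time axis equals one
# bar, so a small kernel can relate a note to the same position in the next bar.
import torch
import torch.nn as nn

class DilatedBarBlock(nn.Module):
    def __init__(self, in_ch=1, out_ch=16, notes_per_bar=8):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=(3, 3),
                              dilation=(1, notes_per_bar),
                              padding=(1, notes_per_bar))  # keeps spatial size
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):  # x: (batch, channels, pitch, time)
        return self.act(self.conv(x))

melody = torch.randn(4, 1, 36, 64)  # fake batch of piano-roll-like images
print(DilatedBarBlock()(melody).shape)  # torch.Size([4, 16, 36, 64])
```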
Converting sequential data into a format which implicitly encodes temporal information as spatial information is an effective way of generating samples of such data as whole pieces.
Here, we explored this approach for melody generation of fixed-length music forms, such as an Irish reel, using a non-recurrent architecture for the discriminator CNN with towers and dilations, as well as a CNN for the GAN itself.
One advantage of this approach is that the model learns global and contextual information simultaneously, even with a small model.
LSTMs and other approaches need a much larger model to be able to learn both the contextual neighboring note sequences and global melody structure.
In future work, we would like to introduce boosting in order to capture the structure of the distribution more faithfully, and increase the range of pieces our model can generate.
Natural extensions of the model would be to introduce multiple channels to capture durations better (for example, as in Colombo et al. (2016)), and to add polyphony (i.e., using some form of piano-roll representation).
Another direction could be to experiment with higher-dimensional representation of the sequence data, to better capture several types of dependencies simultaneously.
Additionally, it would be interesting to apply it to other kinds of fixed-length sequential data with long-range patterns.
|
Representing melodies as images with semantic units aligned, we can generate them using a DCGAN without any recurrent components.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:278
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Neural machine translation (NMT) systems have reached state-of-the-art performance in translating text and are widely deployed.
Yet little is understood about how these systems function or break.
Here we show that NMT systems are susceptible to producing highly pathological translations that are completely untethered from the source material, which we term hallucinations.
Such pathological translations are problematic because they deeply undermine user trust and are easy to find.
We describe a method to generate hallucinations and show that many common variations of the NMT architecture are susceptible to them.
We study a variety of approaches to reduce the frequency of hallucinations, including data augmentation, dynamical systems and regularization techniques and show that data augmentation significantly reduces hallucination frequency.
Finally, we analyze networks that produce hallucinations and show signatures of hallucinations in the attention matrix and in the stability measures of the decoder.
Neural machine translation (NMT) systems are language translation systems based on deep learning architectures BID10 BID1 BID31 .
In the past few years, NMT has vastly improved and has been deployed in production systems, for example at Google BID33 , Facebook BID15 , Microsoft BID17 , and many others.
As NMT systems are built on deep learning methodology, they exhibit both the strengths and weaknesses of the approach.
For example, NMT systems are competitive with state of the art performance BID6 and scale well to very large datasets BID23 but like most large deep learning systems, NMT systems are poorly understood.
For example, in many commercial translation systems, entering repeated words many times occasionally results in strange translations, a phenomenon which has been highly publicized BID12 .
More broadly, recent work shows that NMT systems are highly sensitive to noise in the input tokens BID3 and also susceptible to adversarial inputs BID9 .
When there is an error in translation, it can be challenging to either understand why the mistake occurred or engineer a fix.Here we continue the study of noise in the input sequence and describe a type of phenomenon that is particularly pernicious, whereby inserting a single additional input token into the source sequence can completely divorce the translation from the input sentence.
For example, here is a German input sentence translated to English (reference) by a small NMT system: Source: Caldiero sprach mit E!
Nachrichten nach dem hart erkämpften Sieg, noch immer unter dem Schock über den Gewinn des Großen Preises von 1 Million $.
Reference: Caldiero spoke with E!
News after the hard-fought victory, still in shock about winning the $1 million grand prize.
NMT Translation: Caldiero spoke with E, after the hard won victory, still under the shock of the winning of the Grand Prix of 1 million $.
In this paper we uncovered and studied a hallucination-like phenomenon whereby adding a single additional token into the input sequence causes complete mistranslation.
We showed that hallucinations are common in the NMT architecture we examined, as well as in its variants.
We note that hallucinations appear to be model specific.
We showed that the attention matrices associated with hallucinations were statistically different on average than those associated with input sentences that could not be perturbed.
Finally we proposed a few methods to reduce the occurrence of hallucinations.
Our model has two differences from production systems.
For practical reasons we studied a small model and used a limited amount of training data.
Given these differences it is likely that our model shows more hallucinations than a quality production model.
However, given news reports of strange translations in popular public translation systems BID12 , the dynamical nature of the phenomenon, the fact that input datasets are noisy and finite, and that our most effective technique for preventing hallucinations is a data augmentation technique that requires knowledge of hallucinations, it would be surprising to discover that hallucinations did not occur in production systems.
While it is not entirely clear what should happen when a perturbing input token is added to an input source sequence, it seems clear that having an utterly incorrect translation is not desirable.
This phenomenon appeared to us like a dynamical problem.
Here are two speculative hypotheses: perhaps a small problem in the decoder is amplified via iteration into a much larger problem.
Alternatively, perhaps the perturbing token places the decoder state in a poorly trained part of state space, and the dynamics jump around wildly for a while until an essentially random, well-trodden stable trajectory is found, producing the remaining intelligible sentence fragment.
Many of our results can be interpreted from the vantage of dynamical systems as well.
For example, we note that the NMT networks using CFN recurrent modules were highly susceptible to perturbations in our experiments.
This result highlights the difficulty of understanding or fixing problems in recurrent networks.
Because the CFN is embedded in a larger graph that contains an auto-regressive loop, there is no guarantee that the chaos-free property of the CFN will transfer to the larger graph.
The techniques we used to reduce hallucinations can also be interpreted as dynamical regularization.
For example, L2 weight decay is often discussed in the context of generalization.
However, for RNNs L2 regularization can also be thought of as dynamically conditioning a network to be more stable.
L2 regularization of input embeddings likely means that rare tokens will have optimization pressure to reduce the norm of those embeddings.
Thus, when rare tokens are inserted into an input token sequence, the effects may be reduced.
Even the data augmentation technique appears to have stability effects, as Appendix 10 shows the overall stability exponents are reduced when data augmentation is used.
Given our experimental results, do we have any recommendations for those that engineer and maintain production NMT systems?
Production models should be tested for hallucinations, and when possible, the attention matrices and hidden states of the decoder should be monitored.
Our results on reducing hallucinations suggest that standard regularization techniques such as Dropout and L2 weight decay on the embeddings are important.
Further, data augmentation seems critical and we recommend inserting randomly chosen perturbative tokens in the input sentence as a part of the standard training regime (while monitoring that the BLEU score does not fall).
We note a downside of data augmentation is that, to some extent, it requires knowing the types of the pathological phenomenon one desires to train against.
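A minimal sketch of the augmentation recommended above, assuming whitespace tokenisation and a placeholder vocabulary (the sampling scheme used in the paper may differ):
```python
# Sketch: insert one randomly chosen vocabulary token at a random position of
# the source sentence during training to harden the model against hallucinations.
import random

def perturb_source(tokens, vocab, rng=random):
    pos = rng.randrange(len(tokens) + 1)
    return tokens[:pos] + [rng.choice(vocab)] + tokens[pos:]

vocab = ["der", "die", "das", "und", "Sieg", "Million"]  # placeholder vocabulary
src = "Caldiero sprach mit E !".split()
print(perturb_source(src, vocab))
```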
Figure 7 : Schematic of the NMT decoder.
The input sequence, x_1:S, is encoded by a bidirectional encoder (not shown) into a sequence of encodings, z_1:S.
The attention network, f_att, computes a weighted sum of these encodings (computed weights not shown), based on conditioning information from h, and provides the weighted encoding to the 2-layer decoder, f_dec, as indicated by the arrows.
The decoder proceeds forward in time producing the translation one step at a time.
As the decoder proceeds forward, it interacts with both the attention network and also receives as input the decoded output symbol from the previous time step.
|
We introduce and analyze the phenomenon of "hallucinations" in NMT, or spurious translations unrelated to source text, and propose methods to reduce its frequency.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:279
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The integration of a Knowledge Base (KB) into a neural dialogue agent is one of the key challenges in Conversational AI.
Memory networks have proven to be effective for encoding KB information into an external memory and thus generating more fluent and informed responses.
Unfortunately, such memory becomes full of latent representations during training, so the most common strategy is to overwrite old memory entries randomly.
In this paper, we question this approach and provide experimental evidence showing that conventional memory networks generate many redundant latent vectors resulting in overfitting and the need for larger memories.
We introduce memory dropout as an automatic technique that encourages diversity in the latent space by
1) Aging redundant memories to increase their probability of being overwritten during training
2) Sampling new memories that summarize the knowledge acquired by redundant memories.
This technique allows us to incorporate Knowledge Bases to achieve state-of-the-art dialogue generation in the Stanford Multi-Turn Dialogue dataset.
Considering the same architecture, its use provides an improvement of +2.2 BLEU points for the automatic generation of responses and an increase of +8.1% in the recognition of named entities.
Given the large amount of dialogue data recorded in human-human or human-chatbot interactions, there is a great need for dialogue systems that infer automatic responses grounded to personal knowledge bases.
This approach has the advantage of integrating semantic information that is fundamental to achieve dialogue understanding.
We want to leverage the contextual information present in a KB (e.g., a calendar of events) to answer queries like What time is my dentist appointment?
This task is challenging because existing neural dialogue agents often assume that the dialogue history carries the information needed to provide an answer but struggle to interface with the structured data stored in a KB. This assumption prevents having an end-to-end differentiable model that maintains the kind of contextual conversations that people desire.
Memory networks (Miller et al., 2016) have proven effective at encoding KB information into an external memory to generate more fluent and informed responses.
However, there is not much work on regularizing the latent representations stored in the external memory.
Unlike the conventional dropout technique used to regularize deep neural networks Srivastava et al. (2014) , we propose memory dropout to attain the same goal (i.e., reduction of overfitting) but with different functionality and designed for memory networks Weston et al. (2015) .
Given the long-term nature of memory networks, we do not immediately remove redundant memories with some probability as in the original dropout algorithm.
Instead, we assign them the current maximum age increasing their probability of being overwritten by more recent latent representations in future training steps.
Thus, in contrast to Srivastava et al. (2014) , our memory dropout is a delayed regularization mechanism.
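A minimal sketch of the aging step, under the assumption that redundancy is approximated by cosine similarity between stored keys and the freshly written key (the paper's exact redundancy criterion and bookkeeping may differ):
```python
# Sketch: age memory slots that are redundant with the freshly written key so
# they become the preferred slots to overwrite in future training steps.
import numpy as np

def memory_dropout_age(keys, ages, written_key, sim_threshold=0.9):
    normed = keys / (np.linalg.norm(keys, axis=1, keepdims=True) + 1e-8)
    q = written_key / (np.linalg.norm(written_key) + 1e-8)
    redundant = normed @ q > sim_threshold
    ages[redundant] = ages.max()   # delayed removal: redundant slots expire first
    ages[~redundant] += 1          # one possible bookkeeping choice for the rest
    return ages

keys = np.random.randn(8, 4)
ages = np.zeros(8, dtype=int)
print(memory_dropout_age(keys, ages, keys[0] + 0.01 * np.random.randn(4)))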
The main contributions of our work are the following:
• We introduce a new regularization method called memory dropout designed for dealing with overfitting in Memory Augmented Neural Networks.
To the best of our knowledge, ours is the first work on regularizing memory networks.
• We build a neural dialogue agent that uses memory dropout to incorporate KB into an external memory for automatic response generation.
Our results show that this technique can generate more fluent and accurate responses: an improvement of +2.2 BLEU points and +8.1% Entity F1 score versus not using it on the Stanford Multi-Turn Dialogue dataset.
Figure 1: Learning the (h, y) pair transitions the neighborhood of h (represented as an ellipse) to a new state in which a memory h is drawn as the distribution of positive memories.
Small circles represent the uncertainty of using a particular memory to model h .
In the new memory configuration, we age positive keys (now faded in grey) making them more likely of being overwritten by other training examples.
Memory Dropout is a technique for improving memory augmented neural networks by breaking co-adapting memories built during backpropagation.
While conventional dropout works at the level of individual activations, our memory dropout deals with latent representations of the input.
These arrays of activations are stored into an external memory module which resembles areas of the human brain that are content-addressable and sensitive to semantic information Wixted et al. (2018) .
Central to this technique is the idea that age and uncertainty play important roles to regularize the addressable keys of an external memory module that is persistent across training examples.
By doing this, we obtain higher BLEU and Entity F1 scores when training a task-oriented dialogue agent that decodes an answer considering the entries of KB stored in the memory module.
|
Conventional memory networks generate many redundant latent vectors resulting in overfitting and the need for larger memories. We introduce memory dropout as an automatic technique that encourages diversity in the latent space.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:28
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
For AI systems to garner widespread public acceptance, we must develop methods capable of explaining the decisions of black-box models such as neural networks.
In this work, we identify two issues of current explanatory methods.
First, we show that two prevalent perspectives on explanations—feature-additivity and feature-selection—lead to fundamentally different instance-wise explanations.
In the literature, explainers from different perspectives are currently being directly compared, despite their distinct explanation goals.
The second issue is that current post-hoc explainers have only been thoroughly validated on simple models, such as linear regression, and, when applied to real-world neural networks, explainers are commonly evaluated under the assumption that the learned models behave reasonably.
However, neural networks often rely on unreasonable correlations, even when producing correct decisions.
We introduce a verification framework for explanatory methods under the feature-selection perspective.
Our framework is based on a non-trivial neural network architecture trained on a real-world task, and for which we are able to provide guarantees on its inner workings.
We validate the efficacy of our evaluation by showing the failure modes of current explainers.
We aim for this framework to provide a publicly available, off-the-shelf evaluation when the feature-selection perspective on explanations is needed.
A large number of post-hoc explanatory methods have recently been developed with the goal of shedding light on highly accurate, yet black-box machine learning models (Ribeiro et al., 2016a; Lundberg & Lee, 2017; Arras et al., 2017; Shrikumar et al., 2017; Ribeiro et al., 2016b; Plumb et al., 2018; Chen et al., 2018) .
Among these methods, there are currently at least two widely used perspectives on explanations: feature-additivity (Ribeiro et al., 2016a; Lundberg & Lee, 2017; Shrikumar et al., 2017; Arras et al., 2017) and feature-selection (Chen et al., 2018; Ribeiro et al., 2018; Carter et al., 2018) , which we describe in detail in the sections below.
While both shed light on the overall behavior of a model, we show that, when it comes to explaining the prediction on a single input in isolation, i.e., instance-wise explanations, the two perspectives lead to fundamentally different explanations.
In practice, explanatory methods adhering to different perspectives are being directly compared.
For example, Chen et al. (2018) and Yoon et al. (2019) compare L2X, a feature-selection explainer, with LIME (Ribeiro et al., 2016a) and SHAP (Lundberg & Lee, 2017) , two feature-additivity explainers.
We draw attention to the fact that these comparisons may not be coherent, given the fundamentally different explanation targets, and we discuss the strengths and limitations of the two perspectives.
Secondly, while current explanatory methods are successful in pointing out catastrophic biases, such as relying on headers to discriminate between pieces of text about Christianity and atheism (Ribeiro et al., 2016a) , it is an open question to what extent they are reliable when the model that they aim to explain (which we call the target model) has a less dramatic bias.
This is a difficult task, precisely because the ground-truth decision-making process of neural networks is not known.
Consequently, when applied to complex neural networks trained on real-world datasets, a prevalent way to evaluate the explainers is to assume that the target models behave reasonably, i.e., that they did not rely on irrelevant correlations.
For example, in their morphosyntactic agreement paradigm, Pörner et al. (2018) assume that a model that predicts if a verb should be singular or plural given the tokens before the verb, must be doing so by focusing on a noun that the model had identified as the subject.
Such assumptions may be poor, since recent works show a series of surprising spurious correlations in human-annotated datasets, on which neural networks learn to heavily rely (Gururangan et al., 2018; Glockner et al., 2018; Carmona et al., 2018) .
Therefore, it is not reliable to penalize an explainer for pointing to tokens that just do not appear significant to us.
We address the above issue by proposing a framework capable of generating evaluation tests for the explanatory methods under the feature-selection perspective.
Our tests consist of pairs of (target model, dataset).
Given a pair, for each instance in the dataset, the specific architecture of our model allows us to identify a subset of tokens that have zero contribution to the model's prediction on the instance.
We further identify a subset of tokens clearly relevant to the prediction.
Hence, we test if explainers rank zero-contribution tokens higher than relevant tokens.
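A minimal sketch of this check (the token sets here are hypothetical; in the paper they are derived from the target model's architecture, and all tokens are assumed to appear in the explainer's ranking):
```python
# Sketch: flag zero-contribution tokens that an explainer ranks above some
# clearly relevant token.
def ranking_errors(ranked_tokens, zero_contribution, clearly_relevant):
    worst_relevant_rank = max(ranked_tokens.index(t) for t in clearly_relevant)
    return [t for t in zero_contribution
            if ranked_tokens.index(t) < worst_relevant_rank]

ranking = ["lacing", "gorgeous", "taste", "mouthfeel"]   # hypothetical explainer output
print(ranking_errors(ranking,
                     zero_contribution=["lacing", "mouthfeel"],
                     clearly_relevant=["gorgeous"]))      # -> ['lacing']
```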
We instantiated our framework on three pairs of (target model, dataset) on the task of multi-aspect sentiment analysis.
Each pair corresponds to an aspect and the three models (of same architecture) have been trained independently.
We highlight that our test is not a sufficient test for concluding the power of explainers in full generality, since we do not know the whole ground-truth behaviour of the target models.
Indeed, we do not introduce an explanation generation framework but a framework for generating evaluation tests for which we provide certain guarantees on the behaviour of the target model.
Under these guarantees we are able to test the explainers for critical failures.
Our framework therefore generates necessary evaluation tests, and our metrics penalize explainers only when we are able to guarantee that they produced an error.
To our knowledge, we are the first to introduce an automatic and non-trivial evaluation test that does not rely on speculations on the behavior of the target model.
Finally, we evaluate L2X (Chen et al., 2018) , a feature-selection explainer, under our test.
Even though our test is specifically designed for feature-selection explanatory methods, since, in practice, the two types of explainers are being compared, and, since LIME (Ribeiro et al., 2016a) and SHAP (Lundberg & Lee, 2017) are two very popular explainers, we were interested in how the latter perform on our test, even though they adhere to the feature-additivity perspective.
Interestingly, we find that, most of the time, LIME and SHAP perform better than L2X.
We will detail in Section 5 the reasons why we believe this is the case.
We provide the error rates of these explanatory methods to raise awareness of their possible modes of failure under the feature-selection perspective of explanations.
For example, our findings show that, in certain cases, the explainers predict the most relevant token to be among the tokens with zero contribution.
We will release our test, which can be used off-the-shelf, and encourage the community to use it for testing future work on explanatory methods under the feature-selection perspective.
We also note that our methodology for creating this evaluation is generic and can be instantiated on other tasks or areas of research.
In this work, we instantiate our framework on the RCNN model trained on the BeerAdvocate corpus, on which the RCNN was initially evaluated (Lei et al., 2016).
BeerAdvocate consists of a total of ≈100K human-generated multi-aspect beer reviews, where the three considered aspects are appearance, aroma, and palate.
The reviews are accompanied with fractional ratings originally between 0 and 5 for each aspect independently.
The RCNN is a regression model with the goal to predict the rating, rescaled between 0 and 1 for simplicity.
Three separate RCNNs are trained, one for each aspect independently, with the same default settings.
With the above procedure, we gathered three datasets D_a, one for each aspect a.
For each dataset, we know that for each instance x ∈ D_a, the set of non-selected tokens N_x has zero contribution to the prediction of the model.
For obtaining the clearly relevant tokens, we chose a threshold of τ = 0.1, since the scores are in [0, 1], and the ground-truth ratings correspond to {0, 0.1, 0.2, . . . , 1}.
Therefore, a change in prediction of 0.1 is to be considered clearly significant for this task.
We provide several statistics of our datasets in Appendix A. For example, we provide the average lengths of the reviews, of the selected tokens per review, of the clearly relevant tokens among the selected, and of the non-selected tokens.
We note that we usually obtained 1 or 2 clearly relevant tokens per datapoint, showing that our threshold of 0.1 is likely very strict.
However, we prefer to be more conservative in order to ensure high guarantees on our evaluation test.
We also provide the percentages of datapoints eliminated in order to ensure the no-handshake condition (Equation 7).
Evaluating explainers.
We test three popular explainers: LIME (Ribeiro et al., 2016a), SHAP (Lundberg & Lee, 2017) , and L2X (Chen et al., 2018) .
We used the code of the explainers as provided in the original repositories, with their default settings for text explanations, with the exception that, for L2X, we set the dimension of the word embeddings to 200 (the same as in the RCNN), and we also allowed training for a maximum of 30 epochs instead of 5.
As mentioned in Section 3, LIME and SHAP adhere to the feature-additivity perspective, hence our evaluation is not directly targeting these explainers.
However, we see in Table 1 that, in practice, LIME and SHAP outperformed L2X on the majority of the metrics, even though L2X is a feature-selection explainer.
We hypothesize that a major limitation of L2X is the requirement to know the number of important features per instance.
Indeed, L2X learns a distribution over the set of features by maximizing the mutual information between subsets of K features and the response variable, where K is assumed to be known.
In practice, one usually does not know how many features per instance a model relied on.
To test L2X under real-world circumstances, we used as K the average number of tokens highlighted by human annotators on the subset manually annotated by McAuley et al. (2012) .
We obtained an average K of 23, 18, and 13 for the three aspects, respectively.
In Table 1 , we see that, on metric (A), all explainers are prone to stating that the most relevant feature is a token with zero contribution, as much as 14.79% of the time for LIME and 12.95% of the time for L2X in the aroma aspect.
We consider this the most dramatic form of failure.
Metric (B) shows that both explainers can rank at least one zero-contribution token higher than a clearly relevant feature, i.e., there is at least one mistake in the predicted ranking.
Finally, metric (C) shows that, in average, SHAP only places one zero-contribution token ahead of a clearly relevant token for the first two aspects and around 9 tokens for the third aspect, while L2X places around 3-4 zero-contribution tokens ahead of a clearly relevant one for all three aspects.
Figure 4 : Explainers' rankings (with top 5 features on the right-hand side) on an instance from the palate aspect in our evaluation dataset.
Qualitative Analysis.
In Figure 6 , we present an example from our dataset of the palate aspect.
More examples in Appendix C. The heatmap corresponds to the ranking determined by each explainer, and the intensity of the color decreases linearly with the ranking of the tokens.
We only show in the heatmap the first K = 10 ranked tokens, for visibility reasons.
Tokens in S_x are in bold, and the clearly relevant tokens from SR_x are additionally underlined.
The first token selected by each explainer is marked with a rectangle.
Additionally, the top 5 ranked tokens for each explainer are shown on the right-hand side.
Firstly, we notice that both explainers are prone to attributing importance to non-selected tokens, with LIME and SHAP even ranking the tokens "mouthfeel" and "lacing" belonging to N_x as the first two (most important).
Further, "gorgeous", the only relevant word used by the model, did not even make it into the top 13 tokens for L2X.
Instead, L2X gives "taste", "great", "mouthfeel" and "lacing" as most important tokens.
We note that if the explainer was evaluated by humans assuming that the RCNN behaves reasonably, then this choice could have well been considered correct.
In this work, we first shed light on an important distinction between two widely used perspectives of explanations.
Secondly, we introduced an off-the-shelf evaluation test for post-hoc explanatory methods under the feature-selection perspective.
To our knowledge, this is the first automatic verification framework offering guarantees on the behaviour of a non-trivial real-world neural network.
We presented the error rates on different metrics for three popular explanatory methods to raise awareness of the types of failures that these explainers can produce, such as incorrectly predicting even the most relevant token.
While instantiated on a natural language processing task, our methodology is generic and can be adapted to other tasks and other areas.
For example, in computer vision, one could train a neural network that first makes a hard selection of super-pixels to retain, and subsequently makes a prediction based on the image where the non-selected super-pixels have been blurred.
The same procedure of checking for zero contribution of non-selected super-pixels would then apply.
We also point out that the core algorithm in the majority of the current post-hoc explainers are also domain-agnostic.
Therefore, we expect our evaluation to provide a representative view of the fundamental limitations of the explainers.
|
An evaluation framework based on a real-world neural network for post-hoc explanatory methods
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:280
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Planning in high-dimensional space remains a challenging problem, even with recent advances in algorithms and computational power.
We are inspired by efference copy and sensory reafference theory from neuroscience.
Our aim is to allow agents to form mental models of their environments for planning.
The cerebellum is emulated with a two-stream, fully connected, predictor network.
The network receives as inputs the efference as well as the features of the current state.
Building on insights gained from knowledge distillation methods, we choose as our features the outputs of a pre-trained network, yielding a compressed representation of the current state.
The representation is chosen such that it allows for fast search using classical graph search algorithms.
We display the effectiveness of our approach on a viewpoint-matching task using a modified best-first search algorithm.
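A minimal sketch of such a two-stream, fully connected predictor (layer sizes, the one-hot action encoding, and the feature dimensionality are assumptions; the pre-trained feature extractor is omitted):
```python
# Sketch: map (current-state features, action) to predicted next-state features,
# mimicking an efference copy fed to a cerebellum-like predictor.
import torch
import torch.nn as nn

class EfferencePredictor(nn.Module):
    def __init__(self, feat_dim=128, n_actions=4, hidden=256):
        super().__init__()
        self.state_stream = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.action_stream = nn.Sequential(nn.Linear(n_actions, hidden), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, feat_dim))

    def forward(self, state_feat, action_onehot):
        h = torch.cat([self.state_stream(state_feat),
                       self.action_stream(action_onehot)], dim=-1)
        return self.head(h)  # predicted features of the next state

model = EfferencePredictor()
print(model(torch.randn(2, 128), torch.eye(4)[:2]).shape)  # torch.Size([2, 128])
```
Planning then amounts to searching over action sequences whose predicted features come closest to the target-state features.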
As we manipulate an object in our hands, we can accurately predict how it looks after some action is performed.
Through our visual sensory system, we receive high-dimensional information about the object.
However, we do not hallucinate its full-dimensional representation as we estimate how it would look and feel after we act.
But we feel that we understood what happened if there is an agreement between the experience of the event and our predicted experience.
There has been much recent work on methods that take advantage of compact representations of states for search and exploration.
One of the advantages of this approach is that finding a good representation allows for faster and more efficient planning.
This holds in particular when the latent space is of a much lower dimensionality than the one where the states originally live in.
Our central nervous system (CNS) sends a command (efferent) to our motor system, as well as sending a copy of the efferent to our cerebellum, which is our key organ for predicting the sensory outcome of actions when we initiate a movement and is responsible for fine motor control.
The cerebellum then compares the result of the action (sensory reafference) with the intended consequences.
If they differ, then the cerebellum makes changes to its internal structure such that it does a better job next time -i.e., in no uncertain terms, it learns.
The cerebellum receives 40 times more information than it outputs, by a count of the number of axons.
This gives us a sense of the scale of the compression ratio between the high dimensional input and low dimensional output.
Thus, we constrain our attention to planning in a low-dimensional space, without necessarily reconstructing the high-dimensional one.
We apply this insight to reduce the complexity of tasks such that planning in a high-dimensional space can be done by classical AI methods in a low-dimensional space.
Our contributions are thus twofold: provide a link between efference theory and classical planning with a simple model and introduce a search method for applying the model to reduced state-space search.
We validate our approach experimentally on visual data associated with categorical actions that connect the images, for example taking an object and rotating it.
We create a simple manipulation task using the NORB dataset (LeCun et al., 2004) , where the agent is presented with a starting viewpoint of an object and the task is to produce a sequence of actions such that the agent ends up with the target viewpoint of the object.
As the NORB data set can be embedded on a cylinder (Schüler et al., 2018) (Hadsell et al., 2006) or a sphere (Wang et al., 2018) , we can visualize the actions as traversing the embedded manifold.
Pairing the EfferenceNet with a good but generic feature map allows us to perform an accurate search in the latent space when manipulating unseen objects.
This remarkably simple method, inspired by the neurology of the cerebellum, reveals a promising line of future work.
We validate our method on a viewpoint-matching task derived from the NORB dataset.
In the case of deterministic environments, EfferenceNets calculate features of the current state and action, which in turn define the next state.
This opens up a future direction of research by combining EfferenceNets with successor features (Barreto et al., 2017) .
Furthermore, the study of effective feature maps strikes us as an important factor to consider in this line of work.
We utilize here Laplacian Eigenmaps and pre-trained deep networks.
It is probably possible to improve the performance of the system by end-to-end training but we believe that it is more promising to work on generic multi-purpose representations.
Possible further methods include Slow Feature Analysis (SFA) (Wiskott & Sejnowski, 2002) (Schüler et al., 2018) .
SFA has been previously shown (Sprekeler, 2011) to solve a special case of LEMs while it allows for natural out-of-sample embeddings.
|
We present a neuroscience-inspired method based on neural networks for latent space search
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:281
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Experimental evidence indicates that simple models outperform complex deep networks on many unsupervised similarity tasks.
Introducing the concept of an optimal representation space, we provide a simple theoretical resolution to this apparent paradox.
In addition, we present a straightforward procedure that, without any retraining or architectural modifications, allows deep recurrent models to perform equally well (and sometimes better) when compared to shallow models.
To validate our analysis, we conduct a set of consistent empirical evaluations and introduce several new sentence embedding models in the process.
Even though this work is presented within the context of natural language processing, the insights are readily applicable to other domains that rely on distributed representations for transfer tasks.
Distributed representations have played a pivotal role in the current success of machine learning.
In contrast with the symbolic representations of classical AI, distributed representation spaces can encode rich notions of semantic similarity in their distance measures, allowing systems to generalise to novel inputs.
Methods to learn these representations have gained significant traction, in particular for modelling words BID30 .
They have since been successfully applied to many other domains, including images BID15 BID39 and graphs BID25 BID17 BID33 .
Using unlabelled data to learn effective representations is at the forefront of modern machine learning research.
The Natural Language Processing (NLP) community in particular has invested significant efforts in the construction BID30 BID37 BID10 BID21 , evaluation and theoretical analysis BID28 of distributed representations for words.
Recently, attention has shifted towards the unsupervised learning of representations for larger pieces of text, such as phrases BID50 BID51 , sentences BID22 BID43 BID19 BID7 , and entire paragraphs BID27 .
Some of this work simply sums or averages constituent word vectors to obtain a sentence representation BID32 BID31 BID48 BID7 , which is surprisingly effective but naturally cannot leverage any contextual information.
Another line of research has relied on a sentence-level distributional hypothesis BID38 , originally applied to words BID18 , which is an assumption that sentences which occur in similar contexts have a similar meaning.
Such models often use an encoder-decoder architecture BID12 to predict the adjacent sentences of any given sentence.
Examples of such models include SkipThought, which uses Recurrent Neural Networks (RNNs) for its encoder and decoders, and FastSent BID19 , which replaces the RNNs with simpler bag-of-words (BOW) versions.
Models trained in an unsupervised manner on large text corpora are usually applied to supervised transfer tasks, where the representation for a sentence forms the input to a supervised classification problem, or to unsupervised similarity tasks, where the similarity (typically taken to be the cosine similarity) of two inputs is compared with corresponding human judgements of semantic similarity in order to inform some downstream process, such as information retrieval.
Interestingly, some researchers have observed that deep complex models like SkipThought tend to do well on supervised transfer tasks but relatively poorly on unsupervised similarity tasks, whereas for shallow log-linear models like FastSent the opposite is true BID19 BID13 .
It has been highlighted that this should be addressed by analysing the geometry of the representation space BID6 BID42 BID19 ; however, to the best of our knowledge it has not been systematically attempted.
In this work we attempt to address the observed performance gap on unsupervised similarity tasks between representations produced by simple models and those produced by deep complex models.
Our main contributions are as follows:
• We introduce the concept of an optimal representation space, in which the space has a similarity measure that is optimal with respect to the objective function.
• We show that models with log-linear decoders are usually evaluated in their optimal space, while recurrent models are not. This effectively explains the performance gap on unsupervised similarity tasks.
• We show that, when evaluated in their optimal space, recurrent models close that gap. We also provide a procedure for extracting this optimal space using the decoder hidden states.
• We validate our findings with a series of consistent empirical evaluations utilising a single publicly available codebase.
In this work, we introduced the concept of an optimal representation space, where semantic similarity directly corresponds to distance in that space, in order to shed light on the performance gap between simple and complex architectures on downstream tasks.
In particular, we studied the space of initial hidden states to BOW and RNN decoders (typically the outputs of some encoder) and how that space relates to the training objective of the model.
For BOW decoders, the optimal representation space is precisely the initial hidden state of the decoder equipped with dot product, whereas for RNN decoders it is not.
Noting that it is precisely these spaces that have been used for BOW and RNN decoders has led us to a simple explanation for the observed performance gap between these architectures, namely that the former has been evaluated in its optimal representation space, whereas the latter has not.
Furthermore, we showed that any neural network that outputs a probability distribution has an optimal representation space.
Since an RNN does produce a probability distribution, we analysed its objective function, which motivated a procedure of unrolling the decoder.
This simple method allowed us to extract representations that are provably optimal under dot product, without needing to retrain the model.
We then validated our claims by comparing the empirical performance of different architectures across transfer tasks.
In general, we observed that unrolling even a single state of the decoder always outperforms the raw encoder output with RNN decoder, and almost always outperforms the raw encoder output with BOW decoder for some number of unrolls.
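A minimal sketch of the unrolling procedure (the decoder cell type, the zero input at each step, and the concatenation of hidden states are assumptions made for illustration):
```python
# Sketch: run the trained decoder forward for a few steps from the encoder
# output and concatenate the hidden states as the sentence representation.
import torch
import torch.nn as nn

def unrolled_representation(encoder_output, decoder_cell, n_steps=3, emb_dim=100):
    h = encoder_output                     # (batch, hidden): decoder's initial state
    states = [h]
    dummy_input = torch.zeros(encoder_output.size(0), emb_dim)
    for _ in range(n_steps):
        h = decoder_cell(dummy_input, h)   # one decoder step, no retraining needed
        states.append(h)
    return torch.cat(states, dim=-1)       # compare these vectors with dot product

decoder = nn.GRUCell(input_size=100, hidden_size=64)
print(unrolled_representation(torch.randn(5, 64), decoder).shape)  # torch.Size([5, 256])
```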
This indicates different vector embeddings can be used for different downstream tasks depending on what type of representation space is most suitable, potentially yielding high performance on a variety of tasks from a single trained model.
Although our analysis of decoder architectures was restricted to BOW and RNN, others such as convolutional BID49 and graph BID25 decoders are more appropriate for many tasks.
Similarly, although we focus on Euclidean vector spaces, hyperbolic vector spaces BID34 , complex-valued vector spaces BID44 and spinor spaces BID23 all have beneficial modelling properties.
In each case, although an optimal representation space should exist, it is not clear if the intuitive space and similarity measure is the optimal one.
However, there should at least exist a mapping from the intuitive choice of space to the optimal space using a transformation provided by the network itself, as we showed with the RNN decoder.
Evaluating in this space should further improve performance of these models.
We leave this for future work.
Ultimately, a good representation is one that makes a subsequent learning task easier.
For unsupervised similarity tasks, this essentially reduces to how well the model separates objects in the chosen representation space, and how appropriately the similarity measure compares objects in that space.
Our findings lead us to the following practical advice:
i) Use a simple model architecture where the optimal representation space is clear by construction, or
ii) use an arbitrarily complex model architecture and analyse the objective function to reveal, for a chosen vector representation, an appropriate similarity metric.
We hope that future work will utilise a careful understanding of what similarity means and how it is linked to the objective function, and that our analysis can be applied to help boost the performance of other complex models.
Figure 3: Performance on the STS tasks depending on the number of unrolled hidden states of the decoders, using cosine similarity as the similarity measure (x-axis: number of unroll steps; y-axis: Spearman correlation coefficient).
The top row presents results for the RNN encoder and the bottom row for the BOW encoder.
Red: Raw encoder output with BOW decoder.
Green: Raw encoder output with RNN decoder.
Blue: Unrolled RNN decoder output.
For both RNN and BOW encoders, unrolling the decoder strictly outperforms *-RNN for almost every number of unroll steps, and performs nearly as well as or better than *-BOW.
A COMPARISON WITH SKIPTHOUGHT
Table 3: Performance of the SkipThought model, with and without layer normalisation BID8 , compared against the RNN-RNN model used in our experimental setup.
On each task, the highest performing model is highlighted in bold.
For SICK-R, we report the Pearson correlation, and for STS14 we report the Pearson/Spearman correlation with human-provided scores.
For all other tasks, reported values indicate test accuracy.
† indicates results taken from BID13 .
‡ indicates our results from running SentEval on the model downloaded from BID8 's publicly available codebase (https://github.com/ryankiros/layer-norm).
We attribute the discrepancies in performance to differences in experimental setup or implementation.
However, we expect our unrolling procedure to also boost SkipThought's performance on unsupervised similarity tasks, as we show for RNN-RNN in our fair singlecodebase comparisons in the main text.
As discussed in Section 3, the objective function is maximising the dot product between the BOW decoder/unrolled RNN-decoder and the context.
However, as other researchers in the field and the STS tasks specifically use cosine similarity by default, we present the results using cosine similarity in TAB4 and the results for different numbers of unrolled hidden decoder states in Figure 3.
Although the results in TAB4 are consistent with the dot product results in Table 1, the overall performance across STS tasks is noticeably lower when dot product is used instead of cosine similarity to determine semantic similarity.
Switching from using cosine similarity to dot product transitions from considering only the angle between two vectors, to also considering their length.
Empirical studies have indicated that the length of a word vector corresponds to how sure of its context the model that produces it is.
This is related to how often the model has seen the word, and how many different contexts it appears in (for example, the word vectors for "January" and "February" have similar norms, however, the word vector for "May" is noticeably smaller) BID41.
Using the raw encoder output (RNN-RNN) achieves the lowest performance across all tasks.
Unrolling the RNN decoders dramatically improves the performance across all tasks compared to using the raw encoder RNN output, validating the theoretical justification presented in Section 3.3.
BOW encoder: We do not observe the same uplift in performance from unrolling the RNN decoder compared to the encoder output.
This is consistent with our findings when using dot product (see Table 1).
A corollary is that longer sentences on average have shorter norms, since they contain more words which, in turn, have appeared in more contexts BID0.
During training, the corpus can induce differences in norms in a way that strongly penalises sentences potentially containing multiple contexts, and consequently will disfavour these sentences as similar to other sentences under the dot product.
This induces a noise that potentially renders the dot product a less useful metric to choose for STS tasks than cosine similarity, which is unaffected by this issue.
Results are reported using dot product as the similarity measure; on each task, the highest performing setup for each encoder type is highlighted in bold and the highest performing setup overall is underlined.
|
By introducing the notion of an optimal representation space, we provide a theoretical argument and experimental validation that an unsupervised model for sentences can perform well on both supervised transfer and unsupervised similarity tasks.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:282
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Cloze test is widely adopted in language exams to evaluate students' language proficiency.
In this paper, we propose the first large-scale human-designed cloze test dataset CLOTH in which the questions were used in middle-school and high-school language exams.
With the missing blanks carefully created by teachers and candidate choices purposely designed to be confusing, CLOTH requires a deeper language understanding and a wider attention span than previous automatically generated cloze datasets.
We show that humans outperform dedicated, carefully designed baseline models by a significant margin, even when the model is trained on sufficiently large external data.
We investigate the source of the performance gap, trace model deficiencies to some distinct properties of CLOTH, and identify the limited ability of comprehending a long-term context to be the key bottleneck.
In addition, we find that human-designed data leads to a larger gap between the model's performance and human performance when compared to automatically generated data.
Being a classic language exercise, the cloze test BID26 is an accurate assessment of language proficiency BID7 BID11 BID27 and has been widely employed in language examinations.
Under the standard setting, a cloze test requires examinees to fill in the missing word (or sentence) that best fits the surrounding context.
To facilitate natural language understanding, automatically generated cloze datasets were introduced to measure the ability of machines in reading comprehension BID8 BID9 BID17 .
In these datasets, each cloze question typically consists of a context paragraph and a question sentence.
By randomly replacing a particular word in the question sentence with a blank symbol, a single test case is created.
For instance, the CNN/Daily Mail datasets BID8 take news articles as the context and the summary bullet points as the question sentence.
Only named entities are considered when creating the blanks.
Similarly, in Children's Books test (CBT) BID9 , the cloze question is obtained by removing a word in the last sentence of every consecutive 21 sentences, with the first 20 sentences being the context.
Different from the CNN/Daily Mail datasets, CBT also provides each question with a candidate answer set, consisting of randomly sampled words with the same part-of-speech tag from the context as that of the ground truth. Thanks to the automatic generation process, these datasets can be very large in size, leading to significant research progress.
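For concreteness, the CBT-style automatic generation described above can be sketched as follows (our own illustration, not code from the paper; `pos_of` stands in for a real part-of-speech tagger and all names are ours):

```python
import random

def make_cbt_cloze(sentences, pos_of, n_context=20, n_candidates=4, seed=0):
    """CBT-style automatic cloze generation: blank one word in the last sentence
    of a 21-sentence window and sample same-POS distractors from the first
    20 sentences (the context). `sentences` is a list of tokenised sentences."""
    rng = random.Random(seed)
    context, question = sentences[:n_context], list(sentences[n_context])
    idx = rng.randrange(len(question))
    answer = question[idx]
    question[idx] = "_____"
    # Distractors: context words sharing the answer's POS tag.
    pool = sorted({w for s in context for w in s
                   if w != answer and pos_of.get(w) == pos_of.get(answer)})
    candidates = rng.sample(pool, min(n_candidates - 1, len(pool))) + [answer]
    rng.shuffle(candidates)
    return context, question, candidates, answer
```

Nothing in this sketch checks that the distractors are unambiguous, relevant, or purposeful, which is the kind of weakness discussed next.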
However, compared to how humans would create cloze questions, the automatic generation process bears some inevitable issues.
Firstly, the blanks are chosen uniformly without considering which aspect of the language phenomenon the question will test.
Hence, quite a portion of automatically generated questions can be purposeless or even trivial to answer.
Another issue involves the ambiguity of the answer.
Given a context and a blanked sentence, there can be multiple words that fit almost equally well into the blank.
A possible solution is to include a candidate option set, as done by CBT, to get rid of the ambiguity.
However, automatically generating the candidate option set can be problematic since it cannot guarantee the ambiguity is removed.
More importantly, automatically generated candidates can be totally irrelevant or simply grammatically unsuitable for the blank, resulting in again trivial questions.
Probably due to these unsatisfactory issues, it has been shown that neural models achieve performance comparable with humans within a very short time BID3 BID6 BID23 .
While there has been work trying to incorporate human design into cloze question generation BID30 , the MSR Sentence Completion Challenge created by this effort is quite small in size, limiting the possibility of developing powerful neural models on it. Motivated by the aforementioned drawbacks, we propose CLOTH, a large-scale cloze test dataset collected from English exams.
Questions in the dataset are designed by middle-school and high-school teachers to prepare Chinese students for entrance exams.
To design a cloze test, teachers firstly determine the words that can test students' knowledge of vocabulary, reasoning or grammar; then replace those words with blanks and provide three candidate options for each blank.
If a question does not specifically test grammar usage, all of the candidate options would complete the sentence with correct grammar, leading to highly confusing questions.
As a result, human-designed questions are usually harder and are a better assessment of language proficiency.
Note that, different from the reading comprehension task, a general cloze test does not focus on testing reasoning abilities but evaluates several aspects of language proficiency including vocabulary, reasoning and grammar. To verify if human-designed cloze questions are difficult for current models, we train dedicated models as well as the state-of-the-art language model and evaluate their performance on this dataset.
We find that the state-of-the-art model lags behind human performance even if the model is trained on a large external corpus.
We analyze where the model fails compared to human.
After conducting error analysis, we assume the performance gap results from the model's inability to use long-term context.
To verify this assumption, we evaluate humans' performance when they are only allowed to see one sentence as the context.
Our assumption is confirmed by the matched performances of the model and human when given only one sentence.
In addition, we demonstrate that human-designed data is more informative and more difficult than automatically generated data.
Specifically, when the same amount of training data is given, human-designed training data leads to better performance.
Additionally, it is much easier for the same model to perform well on automatically generated data.
In this paper, we propose a large-scale cloze test dataset CLOTH that is designed by teachers.
With the missing blanks and candidate options carefully created by teachers to test different aspects of language phenomenon, CLOTH requires a deep language understanding and better captures the complexity of human language.
We find that humans outperform state-of-the-art models by a significant margin, even if the model is trained on a large corpus.
After detailed analysis, we find that the performance gap is due to the model's inability to understand a long context.
We also show that, compared to automatically-generated questions, human-designed questions are more difficult and lead to a larger margin between human performance and the model's performance.
|
A cloze test dataset designed by teachers to assess language proficiency
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:283
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Recent work suggests goal-driven training of neural networks can be used to model neural activity in the brain.
While response properties of neurons in artificial neural networks bear similarities to those in the brain, the network architectures are often constrained to be different.
Here we ask if a neural network can recover both neural representations and, if the architecture is unconstrained and optimized, also the anatomical properties of neural circuits.
We demonstrate this in a system where the connectivity and the functional organization have been characterized, namely, the head direction circuit of the rodent and fruit fly.
We trained recurrent neural networks (RNNs) to estimate head direction through integration of angular velocity.
We found that the two distinct classes of neurons observed in the head direction system, the Ring neurons and the Shifter neurons, emerged naturally in artificial neural networks as a result of training.
Furthermore, connectivity analysis and in-silico neurophysiology revealed structural and mechanistic similarities between artificial networks and the head direction system.
Overall, our results show that optimization of RNNs in a goal-driven task can recapitulate the structure and function of biological circuits, suggesting that artificial neural networks can be used to study the brain at the level of both neural activity and anatomical organization.
Artificial neural networks have been increasingly used to study biological neural circuits.
In particular, recent work in vision demonstrated that convolutional neural networks (CNNs) trained to perform visual object classification provide state-of-the-art models that match neural responses along various stages of visual processing Khaligh-Razavi & Kriegeskorte, 2014; Yamins & DiCarlo, 2016; Güçlü & van Gerven, 2015; Kriegeskorte, 2015) .
Recurrent neural networks (RNNs) trained on cognitive tasks have also been used to account for neural response characteristics in various domains (Mante et al., 2013; Sussillo et al., 2015; Song et al., 2016; Cueva & Wei, 2018; Banino et al., 2018; Remington et al., 2018; Wang et al., 2018; Orhan & Ma, 2019; Yang et al., 2019) .
While these results provide important insights on how information is processed in neural circuits, it is unclear whether artificial neural networks have converged upon similar architectures as the brain to perform either visual or cognitive tasks.
Answering this question requires understanding the functional, structural, and mechanistic properties of artificial neural networks and of relevant neural circuits.
We address these challenges using the brain's internal compass -the head direction system, a system that has accumulated substantial amounts of functional and structural data over the past few decades in rodents and fruit flies (Taube et al., 1990a; Turner-Evans et al., 2017; Green et al., 2017; Seelig & Jayaraman, 2015; Stone et al., 2017; Lin et al., 2013; Finkelstein et al., 2015; Wolff et al., 2015; Green & Maimon, 2018) .
We trained RNNs to perform a simple angular velocity (AV) integration task (Etienne & Jeffery, 2004) and asked whether the anatomical and functional features that have emerged as a result of stochastic gradient descent bear similarities to biological networks sculpted by long evolutionary time.
By leveraging existing knowledge of the biological head direction (HD) systems, we demonstrate that RNNs exhibit striking similarities in both structure and function.
Our results suggest that goal-driven training of artificial neural networks provides a framework to study neural systems at the level of both neural activity and anatomical organization.
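As a concrete illustration of the angular-velocity integration task described above, a minimal data-generation sketch could look as follows (our own construction; the noise scale and the (sin, cos) target encoding are assumptions, not details taken from the paper):

```python
import numpy as np

def make_integration_batch(batch=64, steps=100, dt=0.025, sigma=1.5, seed=0):
    """Inputs: angular velocity per step; targets: the integrated head direction,
    encoded as (sin, cos) to avoid the 0 / 2*pi wrap-around. How the initial
    direction is cued to the network is a modeling choice we leave open."""
    rng = np.random.default_rng(seed)
    av = sigma * rng.standard_normal((batch, steps))        # angular velocity
    hd0 = rng.uniform(0.0, 2 * np.pi, size=(batch, 1))      # initial head direction
    hd = (hd0 + np.cumsum(av * dt, axis=1)) % (2 * np.pi)   # integrated head direction
    inputs = av[..., None]                                  # (batch, steps, 1)
    targets = np.stack([np.sin(hd), np.cos(hd)], axis=-1)   # (batch, steps, 2)
    return inputs, targets
```

An RNN trained to map these inputs to these targets (for example with a mean-squared-error loss) has to maintain and update an internal heading estimate, which is the setting in which the Ring-like and Shifter-like units are reported to emerge.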
(Figure 1 caption, panels e-g:)
e) The brain structures in the fly central complex that are crucial for maintaining and updating heading direction, including the protocerebral bridge (PB) and the ellipsoid body (EB).
f) The RNN model; all connections within the RNN are randomly initialized.
g) After training, the output of the RNN accurately tracks the current head direction.
Previous work in the sensory systems has mainly focused on obtaining an optimal representation (Barlow, 1961; Laughlin, 1981; Linsker, 1988; Olshausen & Field, 1996; Simoncelli & Olshausen, 2001; Khaligh-Razavi & Kriegeskorte, 2014) with feedforward models.
Several recent studies have probed the importance of recurrent connections in understanding neural computation by training RNNs to perform tasks (e.g., Mante et al. (2013); Sussillo et al. (2015) ; Cueva & Wei (2018)), but the relation of these trained networks to the anatomy and function of brain circuits is not mapped.
Using the head direction system, we demonstrate that goal-driven optimization of recurrent neural networks can be used to understand the functional, structural and mechanistic properties of neural circuits.
While we have mainly used perturbation analysis to reveal the dynamics of the trained RNN, other methods could also be applied to analyze the network.
For example, in Appendix Fig. 10 , using fixed point analysis (Sussillo & Barak, 2013; Maheswaranathan et al., 2019) , we found evidence consistent with attractor dynamics.
Due to the limited amount of experimental data available, comparisons regarding tuning properties and connectivity are largely qualitative.
In the future, studies of the relevant brain areas using Neuropixel probes (Jun et al., 2017) and calcium imaging (Denk et al., 1990) will provide a more in-depth characterization of the properties of HD circuits, and will facilitate a more quantitative comparison between model and experiment.
In the current work, we did not impose any additional structural constraint on the RNNs during training.
We have chosen to do so in order to see what structural properties would emerge as a consequence of optimizing the network to solve the task.
It is interesting to consider how additional structural constraints affect the representation and computation in the trained RNNs.
One possibility would be to have the input or output units only connect to a subset of the RNN units.
Another possibility would be to freeze a subset of connections during training.
Future work should systematically explore these issues.
Recent work suggests it is possible to obtain tuning properties in RNNs with random connections (Sederberg & Nemenman, 2019) .
We found that training was necessary for the joint HD*AV tuning (see Appendix Fig. 9 ) to emerge.
While Sederberg & Nemenman (2019) consider a simple binary classification task, our integration task is computationally more complicated.
Stable HD tuning requires the system to keep track of HD by accurate integration of AV, and to stably store these values over time.
This computation might be difficult for a random network to perform (Cueva et al., 2019) .
Our approach contrasts with previous network models for the HD system, which are based on hand-crafted connectivity (Zhang, 1996; Skaggs et al., 1995; Xie et al., 2002; Green et al., 2017; Kim et al., 2017; Knierim & Zhang, 2012; Song & Wang, 2005; Kakaria & de Bivort, 2017; Stone et al., 2017) .
Our modeling approach optimizes for task performance through stochastic gradient descent.
We found that different input statistics lead to different heading representations in an RNN, suggesting that the optimal architecture of a neural network varies depending on the task demand, an insight that would be difficult to obtain using the traditional approach of hand-crafting network solutions.
Although we have focused on a simple integration task, this framework should be of general relevance to other neural systems as well, providing a new approach to understand neural computation at multiple levels.
Our model may be used as a building block for AI systems to perform general navigation (Pei et al., 2019) .
In order to effectively navigate in complex environments, the agent would need to construct a cognitive map of the surrounding environment and update its own position during motion.
A circuit that performs heading integration will likely be combined with another circuit to integrate the magnitude of motion (speed) to perform dead reckoning.
Training RNNs to perform more challenging navigation tasks such as these, along with multiple sources of inputs, i.e., vestibular, visual, auditory, will be useful for building robust navigational systems and for improving our understanding of the computational mechanisms of navigation in the brain (Cueva & Wei, 2018; Banino et al., 2018) .
Figure 9: Joint HD × AV tuning of the initial, randomly connected network and the final trained network.
a) Before training, the 100 units in the network do not have pronounced joint HD × AV tuning.
The color scale is different for each unit (blue = minimum activity, yellow = maximum activity) to maximally highlight any potential variation in the untrained network.
b) After training, the units are tuned to HD × AV, with the exception of 12 units (shown at the bottom) which are not active and do not influence the network.
|
Artificial neural networks trained with gradient descent are capable of recapitulating both realistic neural activity and the anatomical organization of a biological circuit.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:284
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Convolutional Neural Networks (CNNs) are computationally intensive, which limits their application on mobile devices.
Their energy is dominated by the number of multiplies needed to perform the convolutions.
Winograd’s minimal filtering algorithm (Lavin, 2015) and network pruning (Han et al., 2015) can reduce the operation count, but these two methods cannot be straightforwardly combined — applying the Winograd transform fills in the sparsity in both the weights and the activations.
We propose two modifications to Winograd-based CNNs to enable these methods to exploit sparsity.
First, we move the ReLU operation into the Winograd domain to increase the sparsity of the transformed activations.
Second, we prune the weights in the Winograd domain to exploit static weight sparsity.
For models on CIFAR-10, CIFAR-100 and ImageNet datasets, our method reduces the number of multiplications by 10.4x, 6.8x and 10.8x respectively with loss of accuracy less than 0.1%, outperforming previous baselines by 2.0x-3.0x.
We also show that moving ReLU to the Winograd domain allows more aggressive pruning.
Deep Convolutional Neural Networks (CNNs) have shown significant improvement in many machine learning applications.
However, CNNs are compute-limited.
Their performance is dominated by the number of multiplies needed to perform the convolutions.
Moreover, the computational workload of CNNs continues to grow over time.
BID16 proposed a CNN model with less than 2.3 × 10^7 multiplies for handwritten digit classification.
Later, BID13 developed AlexNet, an ImageNet-winning CNN with more than 1.1 × 10^9 multiplies.
In 2014, ImageNet-winning and runner-up CNNs increased the number of multiplies to 1.4 × 10^9 BID24 and 1.6 × 10^10 BID22 respectively.
Despite the powerful representational ability of large scale CNNs, their computational workload prohibits deployment on mobile devices.
Two research directions have been explored to address the problem.
BID14 proposed using Winograd's minimal filtering algorithm BID25 to reduce the number of multiplies needed to perform 3 × 3 kernel convolutions.
On the other end, pruning the model BID5 and exploiting the dynamic sparsity of activations due to ReLU also reduces the required multiplies.
Unfortunately, the above two directions are not compatible: the Winograd transformation fills in the zeros in both the weights and the activations ( FIG0 ), eliminating the gain from exploiting sparsity.
Thus, for a pruned network, Winograd's algorithm actually increases the number of multiplies; the loss of sparsity more than offsets the reduced operation count. In this paper, we introduce two modifications to the original Winograd-based convolution algorithm to eliminate this problem.
First, we move the ReLU operation to be after the Winograd transform to also make the activations sparse at the point where the multiplies are performed.
Second, we prune the weights after (rather than before) they are transformed.
Thus, the weights are sparse when the elementwise multiply is performed -reducing the operation count.
Together, these two modifications enable the gains of Winograd's algorithm and of exploiting sparsity to be combined.
We open-source our code and models at https://github.com/xingyul/Sparse-Winograd-CNN.
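To make the two modifications concrete on a single tile, the following sketch uses the standard F(2x2, 3x3) Winograd matrices; it is our own toy illustration, and the pruning mask here is just a magnitude threshold rather than the paper's pruning procedure:

```python
import numpy as np

# Standard Winograd F(2x2, 3x3) transform matrices.
B_T = np.array([[1, 0, -1, 0], [0, 1, 1, 0], [0, -1, 1, 0], [0, 1, 0, -1]], dtype=float)
G = np.array([[1, 0, 0], [0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0, 0, 1]], dtype=float)
A_T = np.array([[1, 1, 1, 0], [0, 1, -1, -1]], dtype=float)

def winograd_relu_tile(d, g, mask):
    """One 4x4 input tile -> one 2x2 output tile, with ReLU applied to the
    *transformed* activations and a pruning mask applied to the *transformed*
    weights, so both operands of the elementwise multiply are sparse."""
    V = np.maximum(B_T @ d @ B_T.T, 0.0)   # ReLU in the Winograd domain
    U = (G @ g @ G.T) * mask               # pruned weights in the Winograd domain
    return A_T @ (U * V) @ A_T.T           # inverse transform -> 2x2 output patch

d, g = np.random.randn(4, 4), np.random.randn(3, 3)
mask = (np.abs(G @ g @ G.T) > 0.3).astype(float)   # toy magnitude-based mask
print(winograd_relu_tile(d, g, mask).shape)        # (2, 2)
```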
In this section, we summarize the experiment results and compare the three models in terms of a) weight and activation dimensions and b) the dynamic density of activations.
We then visualize the kernels to illustrate the pattern of the proposed Winograd-ReLU model kernel.
We have shown that we can combine the computational savings of sparse weights and activations with the savings of the Winograd transform by making two modifcations to conventional CNNs.
To make the weights sparse at the point of multiplication, we train and prune the weights in the transform domain.
This simple approach does not reduce the workload with respect to spatial pruning, though, so we move the ReLU non-linear operation after the Winograd transform to make the activations sparse at the point of multiplication.
Moving ReLU to the Winograd domain also allows the weights to be more aggressively pruned without losing accuracy.
With a 2 × 2 output patch (p = 4), the net result is a reduction of 10.4×, 6.8× and 10.8× in computation on three datasets: CIFAR-10, CIFAR-100 and ImageNet. We plan to extend this work in the following directions.
First, we expect that even greater savings on computation can be realized by using larger patch sizes (e.g., p = 6), and there may be benefit in exploring different Winograd transformation matrices (B, G and A).
Second, we expect that using different pruning rates r_i for each network layer will help maintain accuracy and improve overall workload reduction.
Finally, we expect that combining our Winograd-ReLU network with other network simplification techniques, e.g. quantization of weights and/or activations BID4 BID18 BID20 , will reduce the energy of computation even further.
|
Prune and ReLU in Winograd domain for efficient convolutional neural network
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:285
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this paper we present a novel optimization algorithm called Advanced Neuroevolution.
The aim of this algorithm is to train deep neural networks, and eventually to act as an alternative to Stochastic Gradient Descent (SGD) and its variants as needed. We evaluated our algorithm on the MNIST dataset, as well as on several global optimization problems such as the Ackley function.
We find that the algorithm performs relatively well in both cases, outperforming other global optimization algorithms such as Particle Swarm Optimization (PSO) and Evolution Strategies (ES).
Gradient Descent (GD) and its variations like stochastic gradient descent BID2 are the de facto standard for training deep neural networks (DNNs) for tasks in various domains like Object Detection BID10 , Robotic Grasping BID9 and Machine Translation BID1 .
Most of the field of Deep Learning is centered around algorithms similar to variants of Gradient Descent to find the optimal weights given desired input/output pairs BID7 , BID4 , BID14 .
However, there are also some limitations to using gradient-based optimization.
For example, the neural network and the loss function have to be differentiable end-to-end.
As a consequence, there are a number of problems that can not be directly modeled or solved without some alterations such as Formal Logic and Hard Attention BID11 .
Note that throughout this paper, we will refer to gradient-based methods collectively as SGD.
Similarly, we will refer to Advanced Neuroevolution by the acronym AdvN. For those reasons, we developed a new algorithm which we call Advanced Neuroevolution.
It is not a single algorithm, in truth.
It is an ensemble of low-level algorithms, layered on top of each other.
Those low-level algorithms have different scopes of operations addressing different levels of abstraction in the search process.
For example, the perturbation mechanism addresses the introduction of noise into the models, the most basic operation.
In contrast, the minimum distance mechanism addresses the global scale properties, i.e. the search regions.
The goal is to traverse the search space as efficiently as possible without use of gradients.
In the case of neural networks, the search space is the weight space, including biases. Indeed, while this algorithm was developed primarily for training of deep neural networks, it can be used for other optimization tasks.
In essence, we present the algorithm as an evolutionary optimization algorithm, with a focus on DNNs. There are many global optimization algorithms such as Evolution Strategies BID13 , Particle Swarm Optimization BID8 and Simulated Annealing BID23 .
Each has its merits and limitations.
Our aim is not to compete directly with those algorithms but rather to complement them and offer another option with its own merits and limitations.
To evaluate the performance of such algorithms we can use well-known benchmark functions such as the Rastrigin or Ackley function.
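As an example of such a benchmark, the Ackley function and a generic perturbation-based search over it can be written as follows (the search loop is a simple stand-in baseline, not the AdvN algorithm itself):

```python
import numpy as np

def ackley(x, a=20.0, b=0.2, c=2 * np.pi):
    """Standard Ackley benchmark; global minimum 0 at x = 0."""
    d = x.size
    return (-a * np.exp(-b * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(c * x)) / d) + a + np.e)

def perturbation_search(dim=10, pop=50, iters=500, sigma=0.3, seed=0):
    """Keep the best point found so far and perturb it with Gaussian noise."""
    rng = np.random.default_rng(seed)
    best = rng.uniform(-5.0, 5.0, dim)
    best_f = ackley(best)
    for _ in range(iters):
        cands = best + sigma * rng.standard_normal((pop, dim))
        f = np.array([ackley(c) for c in cands])
        if f.min() < best_f:
            best, best_f = cands[f.argmin()], f.min()
    return best_f

print(perturbation_search())   # best Ackley value found by this naive baseline
```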
We recognize those functions and test Advanced Neuroevolution against them to assess its performance. In addition, there have been other approaches to using evolutionary optimization techniques to train DNNs, see and BID19 as recent examples.
It reflects the awareness within the broader research community about the potential of such algorithms, and the need for alternatives to SGD.
We don't see our algorithm replacing SGD, especially in fields where it is already quite successful such as Computer Vision.
Our aim is to complement it, by offering another option.
Furthermore, there is no reason why both can not be used in tandem as part of a grander learning strategy.
We presented the Advanced Neuroevolution algorithm as an alternative optimization step to SGD to train neural networks.
The work is motivated by some limitations we perceived in gradient-based methods, such as differentiability and sample-inefficiency.
The algorithm is benchmarked against other optimization algorithms on typical optimization problems.
It performed satisfactorily, and improved upon all of them.
For fairness, we noted that the implementation of the other algorithms may not be optimized, and they can arguably perform better. Next, our algorithm is tested on the MNIST digit classification task.
It achieved 90% accuracy on the entire validation set using only 2000 images from the training set.
In all our experiments, half-precision floats are used in order to decrease the time of the computations.
The computations are done only on 4 Titan V GPUs instead of thousands of CPU cores as in other evolutionary algorithms papers.
This makes training of neural networks with evolutionary algorithms more tractable in terms of resource requirements. Finally, while not presented in this work, preliminary tests of our algorithm on RL tasks have been promising.
It solves the assigned problems, though it takes longer than other approaches.
We aim to improve upon the algorithm and the strategies employed in order to achieve competitive results on RL and Robotics tasks.
|
A new algorithm to train deep neural networks. Tested on optimization functions and MNIST.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:286
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Stochastic neural net weights are used in a variety of contexts, including regularization, Bayesian neural nets, exploration in reinforcement learning, and evolution strategies.
Unfortunately, due to the large number of weights, all the examples in a mini-batch typically share the same weight perturbation, thereby limiting the variance reduction effect of large mini-batches.
We introduce flipout, an efficient method for decorrelating the gradients within a mini-batch by implicitly sampling pseudo-independent weight perturbations for each example.
Empirically, flipout achieves the ideal linear variance reduction for fully connected networks, convolutional networks, and RNNs.
We find significant speedups in training neural networks with multiplicative Gaussian perturbations.
We show that flipout is effective at regularizing LSTMs, and outperforms previous methods.
Flipout also enables us to vectorize evolution strategies: in our experiments, a single GPU with flipout can handle the same throughput as at least 40 CPU cores using existing methods, equivalent to a factor-of-4 cost reduction on Amazon Web Services.
Stochasticity is a key component of many modern neural net architectures and training algorithms.
The most widely used regularization methods are based on randomly perturbing a network's computations BID29 BID7 .
Bayesian neural nets can be trained with variational inference by perturbing the weights BID4 BID0 .
Weight noise was found to aid exploration in reinforcement learning BID20 BID2 .
Evolution strategies (ES) minimizes a black-box objective by evaluating many weight perturbations in parallel, with impressive performance on robotic control tasks BID25 .
Some methods perturb a network's activations BID29 BID7 , while others perturb its weights BID4 BID0 BID20 BID2 BID25 .
Stochastic weights are appealing in the context of regularization or exploration because they can be viewed as a form of posterior uncertainty about the parameters.
However, compared with stochastic activations, they have a serious drawback: because a network typically has many more weights than units, it is very expensive to compute and store separate weight perturbations for every example in a mini-batch.
Therefore, stochastic weight methods are typically done with a single sample per mini-batch.
In contrast, activations are easy to sample independently for different training examples within a mini-batch.
This allows the training algorithm to see orders of magnitude more perturbations in a given amount of time, and the variance of the stochastic gradients decays as 1/N , where N is the mini-batch size.
We believe this is the main reason stochastic activations are far more prevalent than stochastic weights for neural net regularization.
In other settings such as Bayesian neural nets and evolution strategies, one is forced to use weight perturbations and live with the resulting inefficiency. In order to achieve the ideal 1/N variance reduction, the gradients within a mini-batch need not be independent, but merely uncorrelated.
In this paper, we present flipout, an efficient method for decorrelating the gradients between different examples without biasing the gradient estimates.
Flipout applies to any perturbation distribution that factorizes by weight and is symmetric around 0-including DropConnect, multiplicative Gaussian perturbations, evolution strategies, and variational Bayesian neural nets-and to many architectures, including fully connected nets, convolutional nets, and RNNs. In Section 3, we show that flipout gives unbiased stochastic gradients, and discuss its efficient vectorized implementation which incurs only a factor-of-2 computational overhead compared with shared perturbations.
We then analyze the asymptotics of gradient variance with and without flipout, demonstrating strictly reduced variance.
In Section 4, we measure the variance reduction effects on a variety of architectures.
Empirically, flipout gives the ideal 1/N variance reduction in all architectures we have investigated, just as if the perturbations were done fully independently for each training example.
We demonstrate speedups in training time in a large batch regime.
We also use flipout to regularize the recurrent connections in LSTMs, and show that it outperforms methods based on dropout.
Finally, we use flipout to vectorize evolution strategies BID25 , allowing a single GPU to handle the same throughput as 40 CPU cores using existing approaches; this corresponds to a factor-of-4 cost reduction on Amazon Web Services.
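As we understand the construction described above, flipout shares one perturbation per mini-batch and decorrelates it across examples with rank-one sign matrices; a sketch of a fully connected forward pass along those lines is given below (the additive Gaussian perturbation, shapes and names are our choices, not the paper's code):

```python
import numpy as np

def flipout_linear(X, W_mean, W_std, rng):
    """X: (N, d_in); W_mean, W_std: (d_out, d_in).
    A single shared perturbation dW is sampled for the whole mini-batch;
    per-example sign vectors make the effective perturbations
    pseudo-independent at roughly twice the cost of one matrix multiply."""
    N, d_in = X.shape
    d_out = W_mean.shape[0]
    dW = W_std * rng.standard_normal(W_mean.shape)   # shared perturbation
    S = rng.choice([-1.0, 1.0], size=(N, d_in))      # per-example input signs
    R = rng.choice([-1.0, 1.0], size=(N, d_out))     # per-example output signs
    # (W_mean + dW * r s^T) x  ==  W_mean x + (dW (x * s)) * r
    return X @ W_mean.T + ((X * S) @ dW.T) * R

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 5))
print(flipout_linear(X, rng.standard_normal((3, 5)), 0.1 * np.ones((3, 5)), rng).shape)  # (8, 3)
```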
We have introduced flipout, an efficient method for decorrelating the weight gradients between different examples in a mini-batch.
We showed that flipout is guaranteed to reduce the variance compared with shared perturbations.
Empirically, we demonstrated significant variance reduction in the large batch setting for a variety of network architectures, as well as significant speedups in training time.
We showed that flipout outperforms dropout-based methods for regularizing LSTMs.
Flipout also makes it practical to apply GPUs to evolution strategies, resulting in substantially increased throughput for a given computational cost.
We believe flipout will make weight perturbations practical in the large batch setting favored by modern accelerators such as Tensor Processing Units (Jouppi et al., 2017) .
In this section, we provide the proof of Theorem 2 (Variance Decomposition Theorem).
Proof. We use the notations from Section 3.2. Let x, x′ denote two training examples from the mini-batch B, and ∆W, ∆W′ denote the weight perturbations they received.
We begin with the decomposition into data and estimation terms (Eqn. 6), which we repeat here for convenience: DISPLAYFORM1
The data term from Eqn. 13 can be simplified: DISPLAYFORM2
We break the estimation term from Eqn. 13 into variance and covariance terms: DISPLAYFORM3
We now separately analyze the cases of fully independent perturbations, shared perturbations, and flipout.
Fully independent perturbations. If the perturbations are fully independent, the second term in Eqn. 15 disappears.
Hence, combining Eqns. 13, 14, and 15, we are left with DISPLAYFORM4 , which is just α/N.
|
We introduce flipout, an efficient method for decorrelating the gradients computed by stochastic neural net weights within a mini-batch by implicitly sampling pseudo-independent weight perturbations for each example.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:287
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep generative models such as Variational AutoEncoder (VAE) and Generative Adversarial Network (GAN) play an increasingly important role in machine learning and computer vision.
However, there are two fundamental issues hindering their real-world applications: the difficulty of conducting variational inference in VAE and the functional absence of encoding real-world samples in GAN.
In this paper, we propose a novel algorithm named Latently Invertible Autoencoder (LIA) to address the above two issues in one framework.
An invertible network and its inverse mapping are symmetrically embedded in the latent space of VAE.
Thus the partial encoder first transforms the input into feature vectors and then the distribution of these feature vectors is reshaped to fit a prior by the invertible network.
The decoder proceeds in the reverse order of the encoder's composite mappings.
A two-stage stochasticity-free training scheme is designed to train LIA via adversarial learning, in the sense that the decoder of LIA is first trained as a standard GAN with the invertible network and then the partial encoder is learned from an autoencoder by detaching the invertible network from LIA.
Experiments conducted on the FFHQ face dataset and three LSUN datasets validate the effectiveness of LIA for inference and generation.
Deep generative models play a more and more important role in cracking challenges in computer vision as well as in other disciplines, such as high-quality image generation Karras et al., 2018a; Brock et al., 2018 ), text-to-speech transformation (van den Oord et al., 2016a; , information retrieval (Wang et al., 2017) , 3D rendering (Wu et al., 2016; Eslami et al., 2018) , and signal-to-image acquisition (Zhu et al., 2018) .
Overall, the generative models fall into four categories: autoencoder and its most important variant of Variational AutoEncoder (VAE) (Kingma & Welling, 2013) , auto-regressive models (van den Oord et al., 2016b; a) , Generative Adversarial Network (GAN) (Goodfellow et al., 2014) , and normalizing flows (NF) (Tabak & Vanden-Eijnden, 2010; Tabak & Turner, 2013; Rezende & Mohamed, 2015) .
Here we compare these models through the perspective of data dimensionality reduction and reconstruction.
To be formal, let x be a data point in the d_x-dimensional observable space R^{d_x} and y be its corresponding low-dimensional representation in the feature space R^{d_y}.
The general formulation of dimensionality reduction is y = f(x),
where f(·) is the mapping function and d_y ≪ d_x.
Manifold learning aims at finding f under various constraints on y (Tenenbaum et al., 2000; Roweis & Saul, 2000) .
However, the sparsity of data points in high-dimensional space often leads to model overfitting, thus necessitating research on the opposite mapping from y to x, i.e. x̂ = g(y),
where g(·) is the opposite mapping function with respect to f(·), to reconstruct the data.
In general, the role of g(·) is a regularizer to f(·) or a generator to produce more data.
The autoencoder is the composite mapping x → y = f(x) → x̂ = g(y).
A common assumption in autoencoders is that the variables in the low-dimensional space are sampled from a prior distribution P(z; θ) such as uniform or Gaussian.
To differentiate from y, we let z represent the low-dimensional vector following the prior distribution.
Thus we can write g : R^{d_z} → R^{d_x}, z → x = g(z), z ∼ P(z; θ).
It is crucial to establish such dual maps z = f (x) and x = g(z).
In the parlance of probability, the process of x → z = f (x) is called inference, and the other procedure of z → x = g(z) is called sampling or generation.
VAE is capable of carrying out inference and generation in one framework by two collaborative functional modules.
However, it is known that in many cases VAEs are only able to generate blurry images due to the imprecise variational inference.
To see this, we write the approximation of the marginal log-likelihood as log p(x) ≥ E_{q(z|x)}[log p(x|z)] − KL[q(z|x)||p(z)],
where KL[q(z|x)||p(z)] is the Kullback-Leibler divergence with respect to the posterior probability q(z|x) and the prior p(z).
This lower-bound log-likelihood usually produces imprecise inference.
Furthermore, the posterior collapse frequently occurs when using more sophisticated decoder models (Bowman et al., 2015; Kingma et al., 2016 ).
These two issues greatly limit the generation capability of the VAE.
On the other hand, GAN is able to achieve photo-realistic generation results (Karras et al., 2018a; .
However, its critical limitation is the absence of the encoder f (x) for carrying inference on real images.
Effort has been made on learning an encoder for GAN under the framework of VAE, however the previous two issues of learning VAE still exist.
Normalizing flows can perform the exact inference and generation with one architecture by virtue of invertible networks (Kingma & Dhariwal, 2018) .
But it requires the dimension d_x of the data space to be identical to the dimension d_z of the latent space, thus posing computational issues due to the high complexity of learning deep flows and computing the Jacobian matrices.
Inspired by recent success of GANs (Karras et al., 2018a; and normalizing flows (Kingma et al., 2016; Kingma & Dhariwal, 2018) , we develop a new model called Latently Invertible Autoencoder (LIA).
LIA utilizes an invertible network to bridge the encoder and the decoder of VAE in a symmetric manner.
We summarize its key advantages as follows:
• The symmetric design of the invertible network brings two benefits.
The prior distribution can be exactly fitted from an unfolded feature space, thus significantly easing the inference problem.
Besides, since the latent space is detached, the autoencoder can be trained without variational optimization thus there is no approximation here.
• The two-stage adversarial learning decomposes the LIA framework into a Wasserstein GAN (only a prior needed) and a standard autoencoder without stochastic variables.
Therefore the training is deterministic, implying that the model will not be affected by the posterior collapse when the decoder is more complex or followed by additional losses such as the adversarial loss and the perceptual loss.
• We compare LIA with state-of-the-art generative models on inference and generation/reconstruction.
The experimental results on FFHQ and LSUN datasets show the LIA achieves superior performance on inference and generation.
A new generative model, named Latently Invertible Autoencoder (LIA), has been proposed for generating image sample from a probability prior and simultaneously inferring accurate latent code for a given sample.
The core idea of LIA is to symmetrically embed an invertible network in an autoencoder.
Then the neural architecture is trained with adversarial learning as two decomposed modules.
With the design of two-stage training, the decoder can be replaced with any GAN generator for high-resolution image generation.
The role of the invertible network is to remove any probability optimization and bridge the prior with unfolded feature vectors.
The effectiveness of LIA is validated with experiments of reconstruction (inference and generation) on FFHQ and LSUN datasets.
It is still challenging to faithfully recover all the image content especially when the objects or scenes have unusual parts.
For example, LIA fails to recover the hand that appears at the top of the little girl (the second row in Figure 3) .
Besides, the Bombay cat's necklace (the second row in Figure 5 ) is missed in the reconstructed image.
These features belong to multiple unique parts of the objects or scenes, which are difficult for the generative model to capture.
One possible solution is to raise the dimension of latent variables (e.g. using multiple latent vectors) or employ the attention mechanism to highlight such unusual structures in the decoder, which is left for future work.
|
A new model Latently Invertible Autoencoder is proposed to solve the problem of variational inference in VAE using the invertible network and two-stage adversarial training.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:288
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Neural programs are highly accurate and structured policies that perform algorithmic tasks by controlling the behavior of a computation mechanism.
Despite the potential to increase the interpretability and the compositionality of the behavior of artificial agents, it remains difficult to learn from demonstrations neural networks that represent computer programs.
The main challenges that set algorithmic domains apart from other imitation learning domains are the need for high accuracy, the involvement of specific structures of data, and the extremely limited observability.
To address these challenges, we propose to model programs as Parametrized Hierarchical Procedures (PHPs).
A PHP is a sequence of conditional operations, using a program counter along with the observation to select between taking an elementary action, invoking another PHP as a sub-procedure, and returning to the caller.
We develop an algorithm for training PHPs from a set of supervisor demonstrations, only some of which are annotated with the internal call structure, and apply it to efficient level-wise training of multi-level PHPs.
We show in two benchmarks, NanoCraft and long-hand addition, that PHPs can learn neural programs more accurately from smaller amounts of both annotated and unannotated demonstrations.
Representing the logic of a computer program with a parametrized model, such as a neural network, is a central challenge in AI with applications including reinforcement learning, robotics, natural language processing, and programming by example.
A salient feature of recently-proposed approaches for learning programs BID32 BID6 is their ability to leverage the hierarchical structure of procedure invocations present in well-designed programs.Explicitly exposing this hierarchical structure enables learning neural programs with empirically superior generalization, compared to baseline methods that learn only from elementary computer operations, but requires training data that does not consists only of low-level computer operations but is annotated with the higher-level procedure calls BID32 BID6 .
Prior work tackled the problem of learning hierarchical neural programs from a mixture of annotated training data (hereafter called strong supervision) and unannotated training data where only the elementary operations are given without their call-stack annotations (called weak supervision).
In this paper, we propose to learn hierarchical neural programs from a mixture of strongly supervised and weakly supervised data via the Expectation-Gradient method and an explicit program counter, in lieu of a high-dimensional real-valued state of a recurrent neural network. Our approach is inspired by recent work in robot learning and control.
In Imitation Learning (IL), an agent learns to behave in its environment using supervisor demonstrations of the intended behavior.
However, existing approaches to IL are largely insufficient for addressing algorithmic domains, in which the target policy is program-like in its accurate and structured manipulation of inputs and data structures.
An example of such a domain is long-hand addition, where the computer loops over the digits to be added, from least to most significant, calculating the sum and carry.
In more complicated examples, the agent must correctly manipulate data structures to compute the right output. Three main challenges set algorithmic domains apart from other IL domains.
First, the agent's policy must be highly accurate.
Algorithmic behavior is characterized by a hard constraint of output correctness, where any suboptimal actions are simply wrong and considered failures.
In contrast, many tasks in physical and simulated domains tolerate errors in the agent's actions, as long as some goal region in state-space is eventually reached, or some safety constraints are satisfied.
A second challenge is that algorithms often use specific data structures, which may require the algorithmic policies to have a particular structure.
A third challenge is that the environment in algorithmic domains, which consists of the program input and the data structures, is almost completely unobservable directly by the agent.
They can only be scanned using some limited reading apparatus, such as the read/write heads in a Turing Machine or the registers in a register machine. Recently proposed methods can infer from demonstration data hierarchical control policies, where high-level behaviors are composed of low-level manipulation primitives BID8 .
In this paper, we take a similar approach to address the challenges of algorithmic domains, by introducing Parametrized Hierarchical Procedures (PHPs), a structured model of algorithmic policies inspired by the options framework BID38 , as well as the procedural programming paradigm.
A PHP is a sequence of statements, such that each statement branches conditionally on the observation, to either (1) perform an elementary operation, (2) invoke another PHP as a sub-procedure, or (3) terminate and return control to the caller PHP.
The index of each statement in the sequence serves as a program counter to accurately remember which statement was last executed and which one is next.
The conditional branching in each statement is implemented by a neural network mapping the program counter and the agent's observation into the elementary operation, sub-procedure, or termination to be executed.
The PHP model is detailed in Section 4.1. PHPs have the potential to address the challenges of algorithmic domains by strictly maintaining two internal structures: a call stack containing the current branch of caller PHPs, and the current program counter of each PHP in the stack.
When a statement invokes a PHP as a sub-procedure, this PHP is pushed into the call stack.
When a statement terminates the current PHP, it is popped from the stack, returning control to the calling PHP to execute its next statement (or, in the case of the root PHP, ending the entire episode).
The stack also keeps the program counter of each PHP, which starts at 0, and is incremented each time a non-terminating statement is executed. PHPs impose a constraining structure on the learned policies.
The call stack arranges the policy into a hierarchical structure, where a higher-level PHP can solve a task by invoking lower-level PHPs that solve sub-tasks.
Since call stacks and program counters are widely useful in computer programs, they provide a strong inductive bias towards policy correctness in domains that conform to these constraints, while also being computationally tractable to learn.
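The control structure described above can be made concrete with a small interpreter sketch (our own illustration; here a PHP is a callable of (program counter, observation), whereas the actual model implements this branching with a neural network):

```python
def run_php(root, env_obs, act, max_steps=1000):
    """Execute a hierarchy of PHPs. Each call php(pc, obs) returns one of
    ('op', action), ('call', sub_php) or ('return', None)."""
    stack = [[root, 0]]                    # call stack of [php, program counter]
    for _ in range(max_steps):
        if not stack:                      # root PHP returned: episode ends
            return
        php, pc = stack[-1]
        kind, arg = php(pc, env_obs())
        if kind == 'op':                   # elementary operation
            act(arg)
            stack[-1][1] += 1
        elif kind == 'call':               # push a sub-procedure
            stack[-1][1] += 1
            stack.append([arg, 0])
        else:                              # 'return': pop back to the caller
            stack.pop()

# Tiny demo: a root PHP that performs two elementary actions and returns.
log = []
root = lambda pc, obs: [('op', 'a'), ('op', 'b'), ('return', None)][pc]
run_php(root, env_obs=lambda: None, act=log.append)
print(log)   # ['a', 'b']
```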
To support a larger variety of algorithmic domains, PHPs should be extended in future work to more expressive structures, for example allowing procedures to take arguments. We experiment with PHPs in two benchmarks, the NanoCraft domain introduced in , and long-hand addition.
We find that our algorithm is able to learn PHPs from a mixture of strongly and weakly supervised demonstrations with better sample complexity than previous algorithms: it achieves better test performance with fewer demonstrations.
In this paper we make three main contributions:
• We introduce the PHP model and show that it is easier to learn than the NPI model BID32 .
• We propose an Expectation-Gradient algorithm for efficiently training PHPs from a mixture of annotated and unannotated demonstrations (strong and weak supervision).
• We demonstrate efficient training of multi-level PHPs on NanoCraft and long-hand addition BID32 , and achieve improved success rate.
2 RELATED WORK
(Table fragment: NPI BID32 ; Recursive NPI BID6 (recursive); NPL: Mixed; PHP (this work): Mixed.)
Models such as BID18 , the Neural GPU BID19 , and End-to-End Memory Networks BID37 have been proposed for learning neural programs from input-output examples, with components such as variable-sized memory and novel addressing mechanisms facilitating the training process.
In contrast, our work considers the setting where, along with the input-output examples, execution traces are available which describe the steps necessary to solve a given problem.
The Neural Programmer-Interpreter (NPI, BID32 ) learns hierarchical policies from execution traces which not only indicate the low-level actions to perform, but also a structure over them specified by higher-level abstractions.
BID6 showed that learning from an execution trace with recursive structure enables perfect generalization.
Neural Program Lattices work within the same setting as the NPI, but can learn from a dataset of execution traces where only a small fraction contains information about the higher-level hierarchy.
In demonstrations where the hierarchical structure along the trace is missing, this latent space grows exponentially in the trace length.
This challenge is addressed via an approximation method that selectively averages latent variables on different computation paths to reduce the complexity of enumerating all paths.
In contrast, we compute exact gradients using dynamic programming, by considering a hierarchical structure that has small discrete latent variables in each time step.
Other works use neural networks as a tool for outputting programs written in a discrete programming language, rather than having the neural network itself represent a program.
BID3 learned to generate programs for solving competition-style problems.
BID9 and BID31 generate programs in a domain-specific language for manipulating strings in spreadsheets.
In this paper we introduced the Parametrized Hierarchical Procedures (PHP) model for hierarchical representation of neural programs.
We proposed an Expectation-Gradient algorithm for training PHPs from a mixture of strongly and weakly supervised demonstrations of an algorithmic behavior, showed how to perform level-wise training of multi-level PHPs, and demonstrated the benefits of our approach on two benchmarks. PHPs alleviate the sample complexity required to train policies with unstructured memory architectures, such as LSTMs, by imposing the structure of a call stack augmented with program counters.
This structure may be limiting in that it requires the agent to also rely on observable information that could otherwise be memorized, such as the building specifications in the NanoCraft domain.
The benchmarks used so far in the field of neural programming are simple enough and observable enough to be solvable by PHPs; however, we note that more complicated and less observable domains may require more expressive memory structures, such as passing arguments to sub-procedures.
Future work will explore such structures, as well as new benchmarks to further challenge the community. Our results suggest that adding weakly supervised demonstrations to the training set can improve performance at the task, but only when the strongly supervised demonstrations already get decent performance.
Weak supervision could attract the optimization process to a different hierarchical structure than intended by the supervisor, and in such cases we found it necessary to limit the number of weakly supervised demonstrations, or weight them less than demonstrations annotated with the intended hierarchy. An open question is whether the attractors strengthened by weak supervision are alternative but usable hierarchical structures that are as accurate and interpretable as the supervisor's.
Future work will explore the quality of solutions obtained by training from only weakly supervised demonstrations.
|
We introduce the PHP model for hierarchical representation of neural programs, and an algorithm for learning PHPs from a mixture of strong and weak supervision.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:289
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Su-Boyd-Candes (2014) made a connection between Nesterov's method and an ordinary differential equation (ODE).
We show if a Hessian damping term is added to the ODE from Su-Boyd-Candes (2014), then Nesterov's method arises as a straightforward discretization of the modified ODE.
Analogously, in the strongly convex case, a Hessian damping term is added to Polyak's ODE, which is then discretized to yield Nesterov's method for strongly convex functions.
Despite the Hessian term, both second order ODEs can be represented as first order systems.
Established Liapunov analysis is used to recover the accelerated rates of convergence in both continuous and discrete time.
Moreover, the Liapunov analysis can be extended to the case of stochastic gradients which allows the full gradient case to be considered as a special case of the stochastic case.
The result is a unified approach to convex acceleration in both continuous and discrete time and in both the stochastic and full gradient cases.
Su et al. (2014) made a connection between Nesterov's method for a convex, L-smooth function f and the second-order ordinary differential equation (ODE)

ẍ + (3/t) ẋ + ∇f(x) = 0. (A-ODE)

However, Su et al. (2014) did not show that Nesterov's method arises as a discretization of (A-ODE).
In order to obtain such a discretization, we consider the following ODE, which has an additional Hessian damping term with coefficient 1/√L [equation (H-ODE), not reproduced in this excerpt].
Notice that (H-ODE) is a perturbation of (A-ODE), and the perturbation goes to zero as L → ∞.
Similar ODEs have been studied by BID1; they have been shown to accelerate gradient descent in continuous time.

Next, we consider the case where f is also µ-strongly convex, and write C_f := L/µ for the condition number of f.
Then Nesterov's method in the strongly convex case arises as a discretization of a second-order ODE with an analogous Hessian damping term [equation (H-ODE-SC), not reproduced in this excerpt].
(H-ODE-SC) is a perturbation of Polyak's ODE (Polyak, 1964),

ẍ + 2√µ ẋ + ∇f(x) = 0,

which accelerates gradient descent when f is quadratic; see (Scieur et al., 2017).

In each case, both continuous and discrete, as well as convex and strongly convex, it is possible to provide a proof of the rate using a Liapunov function.
These proofs are already established in the literature: we give citations below, and also provide proofs in the Appendix.
Moreover, the analysis for Nesterov's method in the full-gradient case can be extended to prove acceleration in the case of stochastic gradients.
Acceleration of stochastic gradient descent has been established by Lin et al. (2015) and BID7; see also BID8.
A direct acceleration method with a connection to Nesterov's method was given by BID0.
Our analysis unifies the continuous-time ODE with the algorithm, and includes full-gradient acceleration as a special case.
The analysis proceeds by first rewriting (H-ODE) (and (H-ODE-SC)) as first-order systems involving ∇f, and then replacing ∇f with g = ∇f + e.
Both the continuous- and discrete-time methods achieve the accelerated rate of convergence, provided |e| goes to zero quickly enough.
The condition on |e| is given below in (12) and (13); it is faster than the corresponding rate for stochastic gradient descent.
When e = 0 we recover the full-gradient case.

The renewed interest in the continuous-time approach began with the work of Su et al. (2014) and was followed by Wibisono et al. (2016) and Wilson et al. (2016).
Continuous-time analysis also appears in BID6, BID11, and BID10.
However, continuous-time approaches to optimization have been around for a long time.
Polyak's method (Polyak, 1964) is related to successive over-relaxation for linear equations (Varga, 1957), which was initially used to accelerate solutions of linear partial differential equations (Young, 1954).
A continuous-time interpretation of Newton's method can be found in (Polyak, 1987) or BID1.
The mirror descent algorithm of Nemirovskii et al. (1983) has a continuous-time interpretation (BID5).
The Liapunov approach for acceleration had already appeared in BID4 for FISTA.
The question of when discretizations of dynamical systems also satisfy a Liapunov function has been studied in the context of stabilization in optimal control (BID12).
More generally, Stuart & Humphries (1996) study when a discretization of a dynamical system preserves a property such as energy dissipation.
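To make the discrete-time side of this discussion concrete, the following sketch contrasts plain gradient descent with the standard form of Nesterov's accelerated method on a toy convex quadratic. It is illustrative only: the quadratic objective, the step size 1/L, and the momentum coefficient k/(k+3) are conventional choices, not the paper's exact ODE-derived discretization.

```python
import numpy as np

# Illustrative convex, L-smooth objective f(x) = 0.5 * x^T A x, with L = largest eigenvalue of A.
A = np.diag([100.0, 1.0])
L = 100.0
grad = lambda x: A @ x

def gradient_descent(x0, steps=200):
    x = x0.copy()
    for _ in range(steps):
        x = x - (1.0 / L) * grad(x)
    return x

def nesterov(x0, steps=200):
    # Standard accelerated form; the paper obtains a closely related update
    # by discretizing an ODE with an added Hessian damping term.
    x_prev = x0.copy()
    y = x0.copy()
    for k in range(1, steps + 1):
        x = y - (1.0 / L) * grad(y)             # gradient step from the extrapolated point
        y = x + (k / (k + 3.0)) * (x - x_prev)  # momentum / extrapolation step
        x_prev = x
    return x

x0 = np.array([1.0, 1.0])
print("gradient descent:", gradient_descent(x0))
print("nesterov:        ", nesterov(x0))
```

On this ill-conditioned quadratic the accelerated iterates should approach the minimizer at the origin noticeably faster than plain gradient descent for the same number of steps, which is the behaviour the continuous-time analysis above is designed to explain.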
|
We show that Nesterov's method arises as a straightforward discretization of an ODE different from the one in Su-Boyd-Candes (2014), and prove acceleration in the stochastic case.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:29
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep Reinforcement Learning has managed to achieve state-of-the-art results in learning control policies directly from raw pixels.
However, despite its remarkable success, it fails to generalize, a fundamental component required in a stable Artificial Intelligence system.
Using the Atari game Breakout, we demonstrate the difficulty of a trained agent in adjusting to simple modifications in the raw image, ones that a human could adapt to trivially.
In transfer learning, the goal is to use the knowledge gained from the source task to make the training of the target task faster and better.
We show that using various forms of fine-tuning, a common method for transfer learning, is not effective for adapting to such small visual changes.
In fact, it is often easier to re-train the agent from scratch than to fine-tune a trained agent.
We suggest that in some cases transfer learning can be improved by adding a dedicated component whose goal is to learn to visually map between the known domain and the new one.
Concretely, we use Unaligned Generative Adversarial Networks (GANs) to create a mapping function to translate images in the target task to corresponding images in the source task.
These mapping functions allow us to transform between various variations of the Breakout game, as well as between different levels of a Nintendo game, Road Fighter.
We show that learning this mapping is substantially more efficient than re-training.
A visualization of a trained agent playing Breakout and Road Fighter, with and without the GAN transfer, can be seen in \url{https://streamable.com/msgtm} and \url{https://streamable.com/5e2ka}.
Transferring knowledge from previous occurrences to new circumstances is a fundamental human capability and is a major challenge for deep learning applications.
A plausible requirement for artificial general intelligence is that a network trained on one task can reuse existing knowledge instead of learning from scratch for another task.
For instance, consider the task of navigation during different hours of the day.
A human that knows how to get from one point to another on daylight will quickly adjust itself to do the same task during night time, while for a machine learning system making a decision based on an input image it might be a harder task.
That is because it is easier for us to make analogies between similar situations, especially in the things we see, whereas a robot lacks this ability and its knowledge is based mainly on what it has already seen.
Deep reinforcement learning has caught the attention of researchers in recent years for its remarkable success in achieving human-level performance in a wide variety of tasks.
One of the field's famous achievements was on the Atari 2600 games where an agent was trained to play video games directly from the screen pixels and information received from the game BID20 .
However, this approach depends on interacting with the environment a substantial number of times during training.
Moreover, it struggles to generalize beyond its experience, the training process of a new task has to be performed from scratch even for a related one.
Recent works have tried to overcome this inefficiency with different approaches such as, learning universal policies that can generalize between related tasks BID25 , as well as other transfer approaches BID7 BID24 .
In this work, we first focus on the Atari game Breakout, in which the main concept is moving the paddle towards the ball in order to maximize the score of the game.
We modify the game by introducing visual changes such as adding a rectangle in the middle of the image or diagonals in the background.
From a human perspective, it appears that making visual changes that are not significant to the game's dynamics should not influence the score of the game, a player who mastered the original game should be able to trivially adapt to such visual variants.
We show that the agent fails to transfer.
Furthermore, fine-tuning, the main transfer learning method used today in neural networks, also fails to adapt to the small visual change: the information learned in the source task does not benefit the learning process of the very related target task, and can even decelerate it.
The algorithm behaves as if these are entirely new tasks.
Our second focus is attempting to transfer agent behavior across different levels of a video game: can an agent trained on the first level of a game use this knowledge and perform adequately on subsequent levels?
We explore the Nintendo game Road Fighter, a car racing game where the goal is to finish the track before the time runs out without crashing.
The levels all share the same dynamics, but differ from each other visually and in aspects such as road width.
Similar to the Breakout results, an agent trained to play the first level fails to correctly adapt its past experience, causing the learned policy to completely fail on the new levels.
To address the generalization problem, we propose a zero-shot generalization approach, in which the agent learns to transfer between related tasks by learning to visually map images from the target task back to familiar corresponding images from the source task.
Such mapping is naturally achieved using Generative Adversarial Networks (GANs) BID9 , one of the most popular methods for the image-to-image translation that is being used in computer vision tasks such as style transfer BID15 , object transfiguration BID31 , photo enhancement BID17 and more recently, video game level generation BID27 .
In our setup, it is not realistic to assume paired images in both domains, calling for the use of Unaligned GANs BID19 BID15 .
Using this approach we manage to transfer between similar tasks with no additional learning.
Contributions. This work presents three main contributions.
First, in Section 2, we demonstrate how an agent trained with deep reinforcement learning algorithms fails to adapt to small visual changes, and that the common transfer method of fine-tuning fails as well.
Second, in Section 3, we propose to separate the visual mapping from the game dynamics, resulting in a new transfer learning approach for related tasks based on visual input mapping.
We evaluate this approach on Breakout and Road Fighter, and present the results comparing to different baselines.
We show that our visual transfer approach is much more sample-efficient than the alternatives.
Third, in Section 5, we suggest an evaluation setup for unaligned GAN architectures, based on their achieved performance on concrete downstream tasks.
We demonstrated the lack of generalization by looking at artificially constructed visual variants of a game (Breakout), and different levels of a game (Road Fighter).
We further show that transfer learning by fine-tuning fails.
The policies learned using model-free RL algorithms on the original game do not directly transfer to the modified games, even when the changes are irrelevant to the game's dynamics.
We present a new approach for transfer learning between related RL environments using GANs, without the need for any additional training of the RL agent, and while requiring orders of magnitude fewer interactions with the environment.
We further suggest this setup as a way to evaluate GAN architectures by observing their behavior on concrete tasks, revealing differences between the Cycle-GAN and UNIT-GAN architectures.
We believe our approach is applicable to cases involving both direct and less direct mapping between environments, as long as an image-to-image translation exist.
While we report a success in analogy transfer using Unaligned GANs, we also encountered limitations in the generation process that made it difficult for the agent to maximize the results on the Road Fighter's tasks.
In future work, we plan to explore a tighter integration between the analogy transfer method and the RL training process, to facilitate better performance where dynamic adjustments are needed in addition to the visual mapping.
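The zero-shot transfer loop described above can be pictured with the toy sketch below. The `generator` and `source_policy` functions are placeholders standing in for a trained unaligned image-to-image model (e.g. a Cycle-GAN or UNIT generator) and a frozen source-task agent; this is not the authors' released code, only an illustration of how the pieces compose at test time.

```python
import numpy as np

def generator(target_frame: np.ndarray) -> np.ndarray:
    """Hypothetical unaligned-GAN generator: maps a target-task frame back to the
    visual domain of the source task. An identity stand-in is used here."""
    return target_frame

def source_policy(source_like_frame: np.ndarray) -> int:
    """Hypothetical frozen policy trained on the source task; returns an action id."""
    return int(source_like_frame.mean() > 0.5)

def act(target_frame: np.ndarray) -> int:
    # Zero-shot transfer: translate the observation, then reuse the source policy unchanged.
    translated = generator(target_frame)
    return source_policy(translated)

frame = np.random.rand(84, 84, 3)   # a dummy observation from the target task
print(act(frame))
```

The design point is that all adaptation happens in the visual mapping; the RL agent itself is never retrained, which is what makes the approach far more sample-efficient than fine-tuning.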
|
We propose a method of transferring knowledge between related RL tasks using visual mappings, and demonstrate its effectiveness on visual variants of the Atari Breakout game and different levels of Road Fighter, a Nintendo car driving game.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:290
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
A commonplace belief in the machine learning community is that using adaptive gradient methods hurts generalization.
We re-examine this belief both theoretically and experimentally, in light of insights and trends from recent years.
We revisit some previous oft-cited experiments and theoretical accounts in more depth, and provide a new set of experiments in larger-scale, state-of-the-art settings.
We conclude that with proper tuning, the improved training performance of adaptive optimizers does not in general carry an overfitting penalty, especially in contemporary deep learning.
Finally, we synthesize a "user's guide" to adaptive optimizers, including some proposed modifications to AdaGrad to mitigate some of its empirical shortcomings.
Adaptive gradient methods have remained a cornerstone of optimization for deep learning.
They revolve around a simple idea: scale the step sizes according to the observed gradients along the execution.
It is generally believed that these methods enjoy accelerated optimization, and are more robust to hyperparameter choices.
For these reasons, adaptive optimizers have been applied across diverse architectures and domains.
However, in recent years, there has been renewed scrutiny on the distinction between adaptive methods and "vanilla" stochastic gradient descent (SGD).
Namely, several lines of work have purported that SGD, while often slower to converge, finds solutions that generalize better: for the same optimization error (training error), adaptive gradient methods will produce models with a higher statistical error (holdout validation error).
This claim, which can be shown to be true in convex overparameterized examples, has perhaps muddled the consensus between academic researchers and practitioners pushing the empirical state of the art.
For the latter group, adaptive gradient methods have largely endured this criticism, and remain an invaluable instrument in the deep learning toolbox.
In this work, we revisit the generalization performance of adaptive gradient methods from an empirical perspective, and examine several often-overlooked factors which can have a significant effect on the optimization trajectory.
Addressing these factors, which does not require trying yet another new optimizer, can often account for what appear to be performance gaps between adaptive methods and SGD.
Our experiments suggest that adaptive gradient methods do not necessarily incur a generalization penalty: if an experiment indicates as such, there are a number of potential confounding factors and simple fixes.
We complete the paper with a discussion of inconsistent evidence for the generalization penalty of adaptive methods, from both experimental and theoretical viewpoints.
In this section we provide two simple examples of stochastic convex problems where it can be seen that when it comes to generalization both AdaGrad and SGD can be significantly better than the other depending on the instance.
Our purpose in providing both examples is to stress that understanding the generalization performance of SGD versus adaptive methods is more nuanced than simple examples might suggest; such examples should therefore be treated as qualitative indicators, intended mainly to provide intuition.
Indeed, which algorithm performs better on a given problem depends on various properties of the precise instance.
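For reference, the basic AdaGrad update discussed throughout the paper can be written in a few lines; the learning rate, epsilon, and the toy objective below are illustrative defaults, and the paper's proposed modifications to AdaGrad are not reproduced here.

```python
import numpy as np

def adagrad(grad_fn, x0, lr=0.1, eps=1e-8, steps=500):
    """Plain AdaGrad: per-coordinate step sizes scaled by accumulated squared gradients."""
    x = x0.astype(float).copy()
    accum = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x)
        accum += g * g
        x -= lr * g / (np.sqrt(accum) + eps)
    return x

# Toy quadratic with very different curvature per coordinate, where per-coordinate
# scaling is expected to help relative to a single global step size.
grad_fn = lambda x: np.array([100.0 * x[0], 1.0 * x[1]])
print(adagrad(grad_fn, np.array([1.0, 1.0])))
```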
|
Adaptive gradient methods, when done right, do not incur a generalization penalty.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:291
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The ability to generalize quickly from few observations is crucial for intelligent systems.
In this paper we introduce APL, an algorithm that approximates probability distributions by remembering the most surprising observations it has encountered.
These past observations are recalled from an external memory module and processed by a decoder network that can combine information from different memory slots to generalize beyond direct recall.
We show this algorithm can perform as well as state of the art baselines on few-shot classification benchmarks with a smaller memory footprint.
In addition, its memory compression allows it to scale to thousands of unknown labels.
Finally, we introduce a meta-learning reasoning task which is more challenging than direct classification.
In this setting, APL is able to generalize with fewer than one example per class via deductive reasoning.
Consider the following sequential decision problem: at every iteration of an episode we are provided with an image of a digit (e.g. MNIST) and an unknown symbol.
Our goal is to output a digit Y = X + S where X is the value of the MNIST digit, and S is a numerical value that is randomly assigned to the unknown symbol at the beginning of each episode.
After seeing only a single instance of a symbol, an intelligent system should not only be able to infer the value S of the symbol but also to correctly generalize the operation associated with the symbol to any other digit in the remaining iterations of that episode.
Despite its simplicity, this task emphasizes three cognitive abilities that a generic learning algorithm should display:
1. the algorithm can learn a behaviour and then flexibly apply it to a range of different tasks using only a few context observations at test time;
2. the algorithm can memorize and quickly recall previous experiences for quick adaptation; and
3. the algorithm can process these recalled memories in a non-trivial manner to carry out tasks that require reasoning.
The first point is commonly described as "learning to learn" or meta-learning, and represents a new way of looking at statistical inference BID22 BID2 BID1 .
Traditional neural networks are trained to approximate arbitrary probability distributions with great accuracy by parametric adaptation via gradient descent BID13 BID23 .
After training that probability distribution is fixed and neural networks can only generalize well when the testing distribution matches the training distribution BID16 .
In contrast, meta-learning systems are trained to learn an algorithm that infers a function directly from the observations it receives at test time.
This setup is more flexible than the traditional approach and generalizes better to unseen distributions as it incorporates new information even after the training phase is over.
It also allows these models to improve their accuracy as they observe more data, unlike models which learn a fixed distribution.
The second requirement - being able to memorize and efficiently recall previous experience - is another active area of research.
Storing information in a model proves especially challenging as we move beyond small toy examples to tasks with higher-dimensional data or real-world problems.
Current methods often work around this by summarizing past experiences in one lower-dimensional representation BID7 BID10 or using memory modules BID6 .
While the former approach can produce good results, the representation and therefore the amount of information we can ultimately encode with such models will be of a fixed and thus limited size.
Working with neural memory modules, on the other hand, presents its own challenges as learning to store and keep the right experiences is not trivial.
In order to successfully carry out the task defined at the beginning of this paper, a model should learn to capture information about a flexible and unbounded number of symbols observed in an episode without storing redundant information.
Finally, reasoning requires processing recalled experiences in order to apply the information they contain to the current data point being processed.
In simple cases such as classification, it is enough to simply recall memories of similar data points and directly infer the current class by combining them using a weighted average or a simple kernel BID26 BID24 , which limits the models to performing interpolation.
In the example mentioned above, more complex reasoning is necessary for human-level generalisation.
In this paper we introduce Approximate Posterior Learning (APL, pronounced like the fruit), a self-contained model and training procedure that address these challenges.
APL learns to carry out few-shot approximation of new probability distributions and to store only as few context points as possible in order to carry out the current task.
In addition it learns how to process recalled experiences to carry out tasks of varying degrees of complexity.
This sequential algorithm was inspired by Bayesian posterior updating BID8 in the sense that the output probability distribution is updated as more data is observed.
We demonstrate that APL can deliver accuracy comparable to other state-of-the-art algorithms in standard few-shot classification benchmarks while being more data efficient.
We also show it can scale to a significantly larger number of classes while retaining good performance.
Finally, we apply APL to the reasoning task introduced as motivation and verify that it can perform the strong generalization we desire.
The main contributions of this paper are:
• A simple memory controller design which uses a surprise-based signal to write the most predictive items to memory. By not needing to learn what to write, we avoid costly backpropagation through memory which makes the setup easier and faster to train. This design also minimizes how much data is stored, making our method more memory efficient.
• An integrated external and working memory architecture which can take advantage of the best of both worlds: scalability and sparse access provided by the working memory; and all-to-all attention and reasoning provided by a relational reasoning module.
• A training setup which steers the system towards learning an algorithm which approximates the posterior without backpropagating through the whole sequence of data in an episode.
We introduced a self-contained system which can learn to approximate a probability distribution with as little data and as quickly as it can.
This is achieved by putting together the training setup which encourages adaptation; an external memory which allows the system to recall past events; a writing system to adapt the memory to uncertain situations; and a working memory architecture which can efficiently compare items retrieved from memory to produce new predictions.
We showed that the model can:
• Reach state of the art accuracy with a smaller memory footprint than other meta-learning models by efficiently choosing which data points to remember.
• Scale to very large problem sizes thanks to the use of an external memory module with sparse access.
• Perform fewer than 1-shot generalization thanks to relational reasoning across neighbors.
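A rough sketch of the surprise-gated writing idea is given below. The threshold, the embedding space, and the nearest-neighbour recall are assumptions made for illustration; in the actual model the retrieved memory slots are processed by a decoder and relational reasoning module rather than used directly.

```python
import numpy as np

class SurpriseMemory:
    """Minimal external memory that stores only observations the model finds surprising."""

    def __init__(self, threshold: float):
        self.threshold = threshold
        self.keys, self.values = [], []

    def maybe_write(self, embedding: np.ndarray, label: int, surprise: float) -> bool:
        # Write only when the prediction loss (surprise) on this observation is high,
        # so well-predicted, redundant points never enter memory.
        if surprise > self.threshold:
            self.keys.append(embedding)
            self.values.append(label)
            return True
        return False

    def recall(self, query: np.ndarray, k: int = 3):
        # Nearest-neighbour recall of stored slots for the current query.
        if not self.keys:
            return []
        distances = [np.linalg.norm(query - key) for key in self.keys]
        nearest = np.argsort(distances)[:k]
        return [self.values[i] for i in nearest]

memory = SurpriseMemory(threshold=1.0)
memory.maybe_write(np.array([0.1, 0.9]), label=3, surprise=2.3)   # surprising -> stored
memory.maybe_write(np.array([0.1, 0.9]), label=3, surprise=0.2)   # predictable -> skipped
print(memory.recall(np.array([0.0, 1.0])))
```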
|
We introduce a model which generalizes quickly from few observations by storing surprising information and attending over the most relevant data at each time point.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:292
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Recent advances in recurrent neural nets (RNNs) have shown much promise in many applications in natural language processing.
For most of these tasks, such as sentiment analysis of customer reviews, a recurrent neural net model parses the entire review before forming a decision.
We argue that reading the entire input is not always necessary in practice, since a lot of reviews are often easy to classify, i.e., a decision can be formed after reading some crucial sentences or words in the provided text.
In this paper, we present an approach of fast reading for text classification.
Inspired by several well-known human reading techniques, our approach implements an intelligent recurrent agent which evaluates the importance of the current snippet in order to decide whether to make a prediction, or to skip some texts, or to re-read part of the sentence.
Our agent uses an RNN module to encode information from the past and the current tokens, and applies a policy module to form decisions.
With an end-to-end training algorithm based on policy gradient, we train and test our agent on several text classification datasets and achieve both higher efficiency and better accuracy compared to previous approaches.
Recurrent neural nets (RNNs), including GRU nets BID6 and LSTM nets BID12 , have been increasingly applied to many problems in natural language processing.
Most of the problems can be divided into two categories: sequence to sequence (seq2seq) tasks BID29 ) (e.g., language modeling BID2 BID20 , machine translation BID13 , conversational/dialogue modeling BID26 , question answering BID11 BID17 , and document summarization BID21 ); and the classification tasks (e.g., part-of-speech tagging BID23 , chunking, named entity recognition BID7 , sentimental analysis BID28 , and document classification BID14 BID25 ).
To solve these problems, models often need to read every token or word of the text from beginning to the end, which is necessary for most seq2seq problems.
However, for classification problems, we do not have to treat each individual word equally, since certain words or chunks are more relevant to the classification task at hand.
For instance, for sentiment analysis it is sufficient to read the first half of a review like "this movie is amazing" or "it is the best I have ever seen," to provide an answer even without reading the rest of the review.
In other cases, we may want to skip or skim some text without carefully checking it.
For example, sentences such as "it's worth to try" are usually more important than irrelevant text such as "we got here while it's still raining outside" or "I visited on Saturday."
On the other hand, sometimes, we want to re-read some sentences to figure out the actual hidden message of the text.
All of these techniques enable us to achieve fast and accurate reading.
Similarly, we expect RNN models to intelligently determine the importance or the relevance of the current sentence in order to decide whether to make a prediction, whether to skip some texts, or whether to re-read the current sentence.
In this paper, we aim to augment existing RNN models by introducing efficient partial reading for classification, while maintaining a higher or comparable accuracy compared to reading the full text.
To do so, we introduce a recurrent agent which uses an RNN module to encode information from the past and the current tokens, and applies a policy module to decide what token to read next (e.g., rereading the current token, reading the next one, or skipping the next few tokens) or whether the model should stop reading and form a decision.
To encourage fast and accurate reading, we incorporate both classification accuracy and the computational cost as a reward function to score classification or other actions made by the agent during reading.
We expect that our agent will be able to achieve fast reading for classification with both high computational efficiency and good classification performance.
To train this model, we develop an end-to-end approach based on the policy gradient method which backpropagates the reward signal into both the policy module (also including the classification policy) and the recurrent encoder.
We evaluate our approach on four different sentiment analysis and document topic classification datasets.
By comparing to the standard RNN models and a recent LSTM-skip model which implements a skip action BID33 , we find that our approach achieves both higher efficiency and better accuracy.
We develop an end-to-end trainable approach for skimming, rereading and early stopping applicable to classification tasks.
By mimicking human fast reading, we introduce a policy module to decide what token to read next (e.g., rereading the current token, reading the next one, or skipping the next few tokens) or whether the model should stop reading and form a decision.
To encourage fast and accurate reading, we incorporate both classification accuracy and the computational cost as a reward function to score classification or other actions made by the agent during reading.
An end-to-end training algorithm based on the policy gradient method backpropagates the reward signal into both the policy module (also including the classification policy) and the recurrent encoder.
We demonstrate the efficacy of the proposed approach on four different datasets, showing improvements in both accuracy and computational performance.
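The read/skip/re-read/stop control loop can be sketched as follows. The action set, the stand-in encoder, policy, and classifier, and the cost term are all illustrative placeholders; in the paper these components are neural modules trained jointly with policy gradient.

```python
import random

random.seed(0)
ACTIONS = ["stop", "reread", "read_next", "skip_3"]   # illustrative action set

def encoder(state, token):
    """Stand-in for the RNN encoder: fold the current token into a running state."""
    return (state * 31 + hash(token)) % 1000

def policy(state):
    """Stand-in for the learned policy; in the paper it is trained with policy gradient."""
    return random.choice(ACTIONS)

def classify(state):
    """Stand-in classifier producing a binary label from the encoder state."""
    return state % 2

def read(tokens, cost_per_step=0.01, max_steps=50):
    state, i, steps = 0, 0, 0
    while i < len(tokens) and steps < max_steps:
        state = encoder(state, tokens[i])
        steps += 1
        action = policy(state)
        if action == "stop":
            break
        elif action == "read_next":
            i += 1
        elif action == "skip_3":
            i += 4            # jump over the next three tokens
        # "reread" leaves i unchanged, so the same token is encoded again
    prediction = classify(state)
    # Training reward (not computed here): task accuracy minus cost_per_step * steps.
    return prediction, steps

print(read("this movie is amazing and worth watching".split()))
```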
|
We develop an end-to-end trainable approach for skimming, rereading and early stopping applicable to classification tasks.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:293
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Reinforcement learning agents need to explore their unknown environments to solve the tasks given to them.
The Bayes optimal solution to exploration is intractable for complex environments, and while several exploration methods have been proposed as approximations, it remains unclear what underlying objective is being optimized by existing exploration methods, or how they can be altered to incorporate prior knowledge about the task.
Moreover, it is unclear how to acquire a single exploration strategy that will be useful for solving multiple downstream tasks.
We address these shortcomings by learning a single exploration policy that can quickly solve a suite of downstream tasks in a multi-task setting, amortizing the cost of learning to explore.
We recast exploration as a problem of State Marginal Matching (SMM), where we aim to learn a policy for which the state marginal distribution matches a given target state distribution, which can incorporate prior knowledge about the task.
We optimize the objective by reducing it to a two-player, zero-sum game between a state density model and a parametric policy.
Our theoretical analysis of this approach suggests that prior exploration methods do not learn a policy that does distribution matching, but acquire a replay buffer that performs distribution matching, an observation that potentially explains these prior methods' success in single-task settings.
On both simulated and real-world tasks, we demonstrate that our algorithm explores faster and adapts more quickly than prior methods.
Reinforcement learning (RL) algorithms must be equipped with exploration mechanisms to effectively solve tasks with limited reward signals.
These tasks arise in many real-world applications where providing human supervision is expensive.
The inability of current RL algorithms to adequately explore limits their applicability to long-horizon control tasks.
A wealth of prior work has studied exploration for RL.
While, in theory, the Bayes-optimal exploration strategy is optimal, it is intractable to compute exactly, motivating work on tractable heuristics for exploration.
Exploration methods based on random actions have limited ability to cover a wide range of states.
More sophisticated techniques, such as intrinsic motivation, accelerate learning in the single-task setting.
However, these methods have two limitations.
First, they do not explicitly define an objective to quantify "good exploration," but rather argue that exploration arises implicitly through some iterative procedure.
Lacking a well-defined optimization objective, it remains challenging to understand what these methods are doing and why they work.
Similarly, the lack of a metric to quantify exploration, even if only for evaluation, makes it challenging to compare exploration methods and assess progress in this area.
The second limitation is that these methods target the single-task setting.
Because these methods aim to converge to the optimal policy for a particular task, it is challenging to repurpose these methods to solve multiple tasks.
We address these shortcomings by considering a multi-task setting, where many different reward functions can be provided for the same set of states and dynamics.
Rather than exploring from scratch for each task, we aim to learn a single, task-agnostic exploration policy that can be adapted to many possible downstream reward functions, amortizing the cost of learning to explore.
This exploration policy can be viewed as a prior on the policy for solving downstream tasks.
Learning will consist of two phases: during training, we acquire this task-agnostic exploration policy; during testing, we use this exploration policy to quickly explore and maximize the task reward.
Learning a single exploration policy is considerably more difficult than doing exploration throughout the course of learning a single task.
The latter is done by intrinsic motivation (Pathak et al., 2017; Tang et al., 2017; Oudeyer et al., 2007) and count-based exploration methods (Bellemare et al., 2016) , which can effectively explore to find states with high reward, at which point the agent can decrease exploration and increase exploitation of those high-reward states.
While these methods perform efficient exploration for learning a single task, the policy at any particular iteration is not a good exploration policy.
For example, the final policy at convergence would only visit the high-reward states discovered for the current task.
What objective should be optimized to obtain a good exploration policy?
We recast exploration as a problem of State Marginal Matching: given a desired state distribution, we learn a mixture of policies for which the state marginal distribution matches this desired distribution.
Without any prior information, this objective reduces to maximizing the marginal state entropy H [s] , which encourages the policy to visit as many states as possible.
The distribution matching objective also provides a convenient mechanism to incorporate prior knowledge about the task, whether in the form of safety constraints that the agent should obey; preferences for some states over other states; reward shaping; or the relative importance of each state dimension for a particular task.
We also propose an algorithm to optimize the State Marginal Matching (SMM) objective.
First, we reduce the problem of SMM to a two-player, zero-sum game between a policy player and a density player.
We find a Nash Equilibrium for this game using fictitious play (Brown, 1951) , a classic procedure from game theory.
Our resulting algorithm iteratively fits a state density model and then updates the policy to visit states with low density under this model.
Our analysis of this approach sheds light on prior work on exploration.
In particular, while the policy learned by existing exploration algorithms does not perform distribution matching, the replay buffer does, an observation that potentially explains the success of prior methods.
On both simulated and real-world tasks, we demonstrate that our algorithm explores more effectively and adapts more quickly to new tasks than state-of-the-art baselines.
In this paper, we introduced a formal objective for exploration.
While it is often unclear what existing exploration algorithms will converge to, our State Marginal Matching objective has a clear solution:
at convergence, the policy should visit states in proportion to their density under a target distribution.
Not only does this objective encourage exploration, it also provides human users with a flexible mechanism to bias exploration towards states they prefer and away from dangerous states.
Upon convergence, the resulting policy can thereafter be used as a prior in a multi-task setting, amortizing exploration and enabling faster adaptation to new, potentially sparse, reward functions.
The algorithm we proposed looks quite similar to previous exploration methods based on prediction error, suggesting that those methods are also performing some form of distribution matching.
However, by deriving our method from first principles, we note that these prior methods omit a crucial historical averaging step, without which the algorithm is not guaranteed to converge.
Experiments on both simulated and real-world tasks demonstrated how SMM learns to explore, enabling an agent to efficiently explore in new tasks provided at test time.
In future work, we aim to study connections between inverse RL, MaxEnt RL and state marginal matching, all of which perform some form of distribution matching.
Empirically, we aim to scale to more complex tasks by parallelizing the training of all mixture components simultaneously.
Broadly, we expect the state distribution matching problem formulation to enable the development of more effective and principled RL methods that reason about distributions rather than individual states.
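A deliberately tiny sketch of the fictitious-play loop is given below, using a discrete "environment" in which states are drawn directly from the policy's state marginal. The uniform target distribution, the histogram density model, and the exponentiated-reward policy update are simplifying assumptions; the point is only to show the alternation between the density player and the policy player, and the historical averaging that yields the final exploration mixture.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 10
target = np.full(n_states, 1.0 / n_states)      # p*(s): a uniform target marginal (an assumption)

def rollout(policy_probs, length=200):
    """Toy stand-in for the environment: states are sampled from the policy's state marginal."""
    return rng.choice(n_states, size=length, p=policy_probs)

policy = np.full(n_states, 1.0 / n_states)
history = []                                    # past policies, kept for fictitious play
for _ in range(20):
    states = rollout(policy)
    # Density player: fit q(s) to visited states (a smoothed empirical histogram here).
    counts = np.bincount(states, minlength=n_states) + 1e-3
    q = counts / counts.sum()
    # Policy player: raise the probability of states with high reward log p*(s) - log q(s).
    reward = np.log(target) - np.log(q)
    policy = policy * np.exp(0.5 * reward)
    policy /= policy.sum()
    history.append(policy.copy())

mixture = np.mean(history, axis=0)              # the exploration policy is the historical average
print(np.round(mixture, 3))
```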
|
We view exploration in RL as a problem of matching a marginal distribution over states.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:294
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The effectiveness of Convolutional Neural Networks stems in large part from their ability to exploit the translation invariance that is inherent in many learning problems.
Recently, it was shown that CNNs can exploit other invariances, such as rotation invariance, by using group convolutions instead of planar convolutions.
However, for reasons of performance and ease of implementation, it has been necessary to limit the group convolution to transformations that can be applied to the filters without interpolation.
Thus, for images with square pixels, only integer translations, rotations by multiples of 90 degrees, and reflections are admissible.
Whereas the square tiling provides a 4-fold rotational symmetry, a hexagonal tiling of the plane has a 6-fold rotational symmetry.
In this paper we show how one can efficiently implement planar convolution and group convolution over hexagonal lattices, by re-using existing highly optimized convolution routines.
We find that, due to the reduced anisotropy of hexagonal filters, planar HexaConv provides better accuracy than planar convolution with square filters, given a fixed parameter budget.
Furthermore, we find that the increased degree of symmetry of the hexagonal grid increases the effectiveness of group convolutions, by allowing for more parameter sharing.
We show that our method significantly outperforms conventional CNNs on the AID aerial scene classification dataset, even outperforming ImageNet pre-trained models.
For sensory perception tasks, neural networks have mostly replaced handcrafted features.
Instead of defining features by hand using domain knowledge, it is now possible to learn them, resulting in improved accuracy and saving a considerable amount of work.
However, successful generalization is still critically dependent on the inductive bias encoded in the network architecture, whether this bias is understood by the network architect or not.
The canonical example of a successful network architecture is the Convolutional Neural Network (CNN, ConvNet).
Through convolutional weight sharing, these networks exploit the fact that a given visual pattern may appear in different locations in the image with approximately equal likelihood.
Furthermore, this translation symmetry is preserved throughout the network, because a translation of the input image leads to a translation of the feature maps at each layer: convolution is translation equivariant.
Very often, the true label function (the mapping from image to label that we wish to learn) is invariant to more transformations than just translations.
Rotations are an obvious example, but standard translational convolutions cannot exploit this symmetry, because they are not rotation equivariant.
As it turns out, a convolution operation can be defined for almost any group of transformation -not just translations.
By simply replacing convolutions with group convolutions (wherein filters are not just shifted but transformed by a larger group; see Figure 1), convolutional networks can be made equivariant to and exploit richer groups of symmetries BID0 .

[Figure 1: Hexagonal G-CNN. A p6 group convolution is applied to a single-channel hexagonal image f and filter ψ1, producing a single p6 output feature map f ⋆g ψ1 with 6 orientation channels. This feature map is then group-convolved again with a p6 filter ψ2. The group convolution is implemented as a Filter Transformation (FT) step, followed by a planar hexagonal convolution. As shown here, the filter transform of a planar filter involves only a rotation, whereas the filter transform for a filter on the group p6 involves a rotation and orientation-channel cycling. Note that in general, the orientation channels of p6 feature maps will not be rotated copies of each other, as happens to be the case in this figure.]
Furthermore, this technique was shown to be more effective than data augmentation.
Although the general theory of such group equivariant convolutional networks (G-CNNs) is applicable to any reasonably well-behaved group of symmetries (including at least all finite, infinite discrete, and continuous compact groups), the group convolution is easiest to implement when all the transformations in the group of interest are also symmetries of the grid of pixels.
For this reason, G-CNNs were initially implemented only for the discrete groups p4 and p4m, which include integer translations, rotations by multiples of 90 degrees, and, in the case of p4m, mirror reflections - the symmetries of a square lattice.
The main hurdle that stands in the way of a practical implementation of group convolution for a continuous group, such as the roto-translation group SE(2), is the fact that it requires interpolation in order to rotate the filters.
Although it is possible to use bilinear interpolation in a neural network BID10 , it is somewhat more difficult to implement, computationally expensive, and most importantly, may lead to numerical approximation errors that can accumulate with network depth.
This has led us to consider the hexagonal grid, wherein it is possible to rotate a filter by any multiple of 60 degrees, without interpolation.
This allows us to define group convolutions for the groups p6 and p6m, which contain integer translations, rotations by multiples of 60 degrees, and, for p6m, mirroring.
To our surprise, we found that even for translational convolution, a hexagonal pixelation appears to have significant advantages over a square pixelation.
Specifically, hexagonal pixelation is more efficient for signals that are band limited to a circular area in the Fourier plane BID17 , and hexagonal pixelation exhibits improved isotropic properties such as twelve-fold symmetry and six-connectivity, compared to eight-fold symmetry and four-connectivity of square pixels BID15 BID2 .
Furthermore, we found that using small, approximately round hexagonal filters with 7 parameters works better than square 3 × 3 filters when the number of parameters is kept the same.
As hypothesized, group convolution is also more effective on a hexagonal lattice, due to the increase in weight sharing afforded by the higher degree of rotational symmetry.
Indeed, the general pattern we find is that the larger the group of symmetries being exploited, the better the accuracy: p6-convolution outperforms p4-convolution, which in turn outperforms ordinary translational convolution.
In order to use hexagonal pixelations in convolutional networks, a number of challenges must be addressed.
Firstly, images sampled on a square lattice need to be resampled on a hexagonal lattice.
This is easily achieved using bilinear interpolation.
Secondly, the hexagonal images must be stored in a way that is both memory efficient and allows for a fast implementation of hexagonal convolution.
To this end, we review various indexing schemes for the hexagonal lattice, and show that for some of them, we can leverage highly optimized square convolution routines to perform the hexagonal convolution.
Finally, we show how to efficiently implement the filter transformation step of the group convolution on a hexagonal lattice.
We evaluate our method on the CIFAR-10 benchmark and on the Aerial Image Dataset (AID) BID21 .
Aerial images are one of the many image types where the label function is invariant to rotations: One expects that rotating an aerial image does not change the label.
In situations where the number of examples is limited, data efficient learning is important.
Our experiments demonstrate that group convolutions systematically improve performance.
The method outperforms the baseline model pretrained on ImageNet, as well as comparable architectures with the same number of parameters.
Source code of G-HexaConvs is available on Github: https://github.com/ehoogeboom/hexaconv.
The remainder of this paper is organized as follows: In Section 2 we summarize the theory of group equivariant networks.
Section 3 provides an overview of different coordinate systems on the hexagonal grid, Section 4 discusses the implementation details of the hexagonal G-convolutions, in Section 5 we introduce the experiments and present results, and Section 6 gives an overview of other related work, after which we discuss our findings and conclude.
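The reason no interpolation is needed on the hexagonal lattice can be checked directly: in cube coordinates (x, y, z with x + y + z = 0), a 60-degree rotation is just a signed cyclic permutation, so the 7-cell hexagonal filter support maps onto itself. The snippet below verifies this; it is a generic hexagonal-grid fact used to illustrate the filter-transformation step, not code taken from the linked repository.

```python
def rotate60(offset):
    """Rotate a hexagonal offset given in cube coordinates (x + y + z = 0) by 60 degrees."""
    x, y, z = offset
    return (-z, -x, -y)

def rotate(offset, k):
    """Rotate by k * 60 degrees."""
    for _ in range(k % 6):
        offset = rotate60(offset)
    return offset

# A 7-cell hexagonal filter support: the centre plus its six neighbours.
support = [(0, 0, 0), (1, -1, 0), (1, 0, -1), (0, 1, -1), (-1, 1, 0), (-1, 0, 1), (0, -1, 1)]

# Rotating the support by any multiple of 60 degrees permutes it onto itself,
# which is why hexagonal filters can be rotated exactly, without interpolation.
for k in range(6):
    assert sorted(rotate(o, k) for o in support) == sorted(support)
print("hexagonal filter support is closed under 60-degree rotations")
```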
|
We introduce G-HexaConv, a group equivariant convolutional neural network on hexagonal lattices.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:295
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep Convolutional Neural Networks (CNNs) have been repeatedly shown to perform well on image classification tasks, successfully recognizing a broad array of objects when given sufficient training data.
Methods for object localization, however, are still in need of substantial improvement.
Common approaches to this problem involve the use of a sliding window, sometimes at multiple scales, providing input to a deep CNN trained to classify the contents of the window.
In general, these approaches are time consuming, requiring many classification calculations.
In this paper, we offer a fundamentally different approach to the localization of recognized objects in images.
Our method is predicated on the idea that a deep CNN capable of recognizing an object must implicitly contain knowledge about object location in its connection weights.
We provide a simple method to interpret classifier weights in the context of individual classified images.
This method involves the calculation of the derivative of network generated activation patterns, such as the activation of output class label units, with regard to each in- put pixel, performing a sensitivity analysis that identifies the pixels that, in a local sense, have the greatest influence on internal representations and object recognition.
These derivatives can be efficiently computed using a single backward pass through the deep CNN classifier, producing a sensitivity map of the image.
We demonstrate that a simple linear mapping can be learned from sensitivity maps to bounding box coordinates, localizing the recognized object.
Our experimental results, using real-world data sets for which ground truth localization information is known, reveal competitive accuracy from our fast technique.
Deep Convolutional Neural Networks (CNNs) have been shown to be effective at image classification, accurately performing object recognition even with thousands of object classes when trained on a sufficiently rich data set of labeled images BID14 .
One advantage of CNNs is their ability to learn complete functional mappings from image pixels to object categories, without any need for the extraction of hand-engineered image features BID21 .
To facilitate learning through stochastic gradient descent, CNNs are (at least approximately) differentiable with regard to connection weight parameters.
Image classification, however, is only one of the problems of computer vision.
In the task of image classification, each image has a single label, associated with the class identity of the main object in the image, and the goal is to assign correct labels in a manner that generalizes to novel images.
This can be accomplished by training a machine learning classifier, such as a CNN, on a large data set of labeled images BID5 .
In the object localization task, in comparison, the output for a given image is not a class label but the locations of a specified number of objects in the image, usually encoded as bounding boxes.
Evaluation of an object localization system generally requires ground truth bounding boxes to compare to the system's output.
The detection task is more difficult than the localization task, as the number of objects is not predetermined BID21 .

In this paper, we focus on object localization, identifying the position in the image of a recognized object.
As is common in the localization literature, position information is output in the form of a bounding box.
Previously developed techniques for accomplishing this task generally involve searching the image for the object, considering many candidate bounding boxes with different sizes and locations, sometimes guided by an auxiliary algorithm for heuristically identifying regions of interest BID21 ; BID10 ; BID13 .
For each candidate location, the sub-image captured by the bounding box is classified for object category, with the final output bounding box either being the specific candidate region classified as the target object with the highest level of certainty or some heuristic combination of neighboring or overlapping candidate regions with high classification certainty.
These approaches tend to be time consuming, often requiring deep CNN classification calculations of many candidate regions at multiple scales.
Efforts to speed these methods mostly focus on reducing the number of regions considered, typically by using some adjunct heuristic region proposal algorithm BID10 ; BID17 ; BID13 .
Still, the number of considered regions is often reported to be roughly 2,000 per image.
While these approaches can be fairly accurate, their slowness limits their usefulness, particularly for online applications.

[Figure 1: Examples of sensitivity maps, displaying the sensitivity of network internal representations to individual pixels, providing information about the locations of the main objects in the source images.]

A noteworthy alternative approach is to directly train a deep CNN to produce outputs that match ground truth localization bounding boxes, using a large image data set that provides both category and localization information for each image.
It appears as if some form of this method was used with AlexNet BID14 , though details concerning localization, rather than image classification, are difficult to discern from the published literature.
A natural approach would be to cast the learning of bounding boxes as a simple regression problem, with targets being the four coordinates that specify a bounding box (e.g., coordinates of the upper-left and lower-right corners, or region center coordinates along with region width and height).
It is reasonable to consider sharing early layers of a deep CNN, such as those performing convolution and max pooling, between both an image classification network and an object localization network.
Indeed, taking such a multitask learning approach BID2 can allow for both object category and object location training data to shape connection weights throughout the network.
Thus, the deep CNN would have "two heads", one for image classification, using a classification cross-entropy loss function, and one for object localization, reducing the L2 norm between ground truth and predicted bounding box coordinates BID14 .
While this approach can produce a network that quickly outputs location information, extensive training on large data sets containing ground truth bounding box information is necessary to produce good generalization.

In this paper, we introduce an approach to object localization that is both very fast and robust in the face of limited ground truth bounding box training data.
This approach is rooted in the assertion that any deep CNN for image classification must contain, implicit in its connection weights, knowledge about the location of recognized objects BID20 .
The goal, then, is to interpret the flow of activation in an object recognition network when it is performing image classification so as to extract information about object location.
Furthermore, the goal is to do this quickly.
Thus, this approach aims to leverage location knowledge that is already latent in extensively trained and tuned image classification networks, without requiring a separate learning process for localization.

Our method makes use of the notion of a sensitivity analysis BID26 .
We propose estimating the sensitivity of the category outputs, or activation patterns at internal network layers, of an image classification CNN to variance in each input pixel, given a specific input image.
The result is a numeric value for each pixel in the input image that captures the degree to which small changes in that pixel (locally, around its current value) give rise to large changes in the output category.
Together, these numeric values form a sensitivity map of the image, encoding image regions that are important for the current classification.
Our proposed measure of sensitivity is the partial derivative of activity with regard to each pixel value, evaluated for the current image.
For a deep CNN that formally embodies a differentiable mapping (at least approximately) from image pixels to output categories, this partial derivative can be quickly calculated.
While many tools currently exist for efficiently calculating such derivatives, we provide a simple algorithm that computes these values through a single backward pass through the image classification network, similar to that used to calculate unit error (delta) values in the backpropagation of error learning algorithm BID18 .
Thus, we can generate a sensitivity map for an image in about the same amount of time as it takes the employed image classification network to produce an output.
Some example sensitivity maps are shown in Figure 1.

The idea of using sensitivity information, like that in our sensitivity maps, for a variety of tasks, including localization, has previously appeared in the literature BID24 ; BID28 ; BID20 .
Indeed, some of these past efforts have used more sophisticated measures of sensitivity.
In this paper, we show that even our very simple sensitivity measure can produce strong localization performance, and it can do so quickly, without any modifications to the classification network, and even for object categories on which the classification network was not trained.
The relationship of the results reported here to previously reported work is discussed further in Section 4.

As previously mentioned, object localization methods typically encode object location as a bounding box.
Since our sensitivity maps encode location differently, in terms of pixels, we propose learning a simple linear mapping from sensitivity maps to bounding box coordinates, allowing our method to output a bounding box for each classified image.
We suggest that this linear mapping can be robustly learned from a relatively small training set of images with ground truth bounding boxes, since the sensitivity maps form a much simpler input than the original images.

The primary contributions of this paper may be summarized as follows:
• We propose a new general approach to performing object localization, interpreting a previously trained image classification network by performing a sensitivity analysis, identifying pixels to which the category output, or a more general internal representation, is particularly sensitive.
• We demonstrate how a linear function from the resulting sensitivity maps to object location bounding box coordinates may be learned from training images containing ground truth location information.
• We provide a preliminary assessment of our approach, measuring object localization performance on the ImageNet and PASCAL VOC data sets using the VGG16 image classification CNN, showing strong accuracy while maintaining short computation times.
We have presented an approach to object localization based on performing a sensitivity analysis of a previously trained image classification deep CNN.
Our method is fast enough to be used in online applications, and it demonstrates accuracy that is superior to some methods that are much slower.
It is likely that even better accuracy could be had by incorporating sensitivity analysis information into a more sophisticated bounding box estimator.
As previously noted, the idea of using sensitivity information has appeared in previously published work.
There are ways in which the results reported in this paper are distinct, however.
We have moved beyond visualization of network function using sensitivity (or saliency) BID24 to performing direct comparisons between different methods on the localization task.
We have shown that using a fast and simple measure of sensitivity can produce comparable performance to that of much slower methods.
Our approach produces good generalization without modifying the classification network, as is done in Class Activation Mapping (CAM) BID28 .
With our PASCAL VOC 2007 results, we have shown that our approach can successfully be applied to attention maps, even when the image contains objects belonging to a class on which the classification network was not trained, distinguishing it from Grad-CAM Selvaraju et al. (2016) .
In short, we have demonstrated the power of a simple sensitivity measure for performing localization. Note that our approach may be used with image classifiers other than CNNs.
The proposed sensitivity analysis can be conducted on any differentiable classifier, though performance will likely depend on classifier specifics.
Indeed, at a substantial time cost, even a black box classifier could be approximately analyzed by making small changes to pixels and observing the effects on activation patterns. The proposed approach is quite general.
Indeed, we are currently working on applying sensitivity analysis to deep networks trained on other tasks, with the goal of interpreting network performance on the current input in a useful way.
Thus, we see a potentially large range of uses for sensitivity analysis in neural network applications.
|
Proposing a novel object localization (detection) approach based on interpreting a trained deep CNN through sensitivity analysis of its internal representations.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:296
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We present trellis networks, a new architecture for sequence modeling.
On the one hand, a trellis network is a temporal convolutional network with special structure, characterized by weight tying across depth and direct injection of the input into deep layers.
On the other hand, we show that truncated recurrent networks are equivalent to trellis networks with special sparsity structure in their weight matrices.
Thus trellis networks with general weight matrices generalize truncated recurrent networks.
We leverage these connections to design high-performing trellis networks that absorb structural and algorithmic elements from both recurrent and convolutional models.
Experiments demonstrate that trellis networks outperform the current state of the art methods on a variety of challenging benchmarks, including word-level language modeling and character-level language modeling tasks, and stress tests designed to evaluate long-term memory retention.
The code is available at https://github.com/locuslab/trellisnet .
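As a rough, heavily simplified illustration of the two structural ingredients named above (weight tying across depth and direct injection of the input into deep layers), the sketch below reuses one causal convolution at every level and re-concatenates the raw input at each level. It is not the TrellisNet cell from the paper, which uses a gated LSTM-style nonlinearity; the layer sizes and the tanh activation are arbitrary assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TrellisLikeStack(nn.Module):
    def __init__(self, in_ch, hid_ch, depth, ksize=2):
        super().__init__()
        self.depth, self.ksize = depth, ksize
        # a single convolution whose weights are reused at every depth (weight tying)
        self.conv = nn.Conv1d(in_ch + hid_ch, hid_ch, ksize)

    def forward(self, x):                       # x: (batch, in_ch, time)
        h = x.new_zeros(x.size(0), self.conv.out_channels, x.size(2))
        for _ in range(self.depth):
            z = torch.cat([x, h], dim=1)        # direct injection of the input at this depth
            z = F.pad(z, (self.ksize - 1, 0))   # left padding keeps the convolution causal
            h = torch.tanh(self.conv(z))
        return h                                # (batch, hid_ch, time)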
|
Trellis networks are a new sequence modeling architecture that bridges recurrent and convolutional models and sets a new state of the art on word- and character-level language modeling.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:297
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We propose an end-to-end framework for training domain specific models (DSMs) to obtain both high accuracy and computational efficiency for object detection tasks.
DSMs are trained with distillation and focus on achieving high accuracy at a limited domain (e.g. fixed view of an intersection).
We argue that DSMs can capture essential features well even with a small model size, enabling higher accuracy and efficiency than traditional techniques.
In addition, we improve the training efficiency by reducing the dataset size through culling easy-to-classify images from the training set.
For the limited domain, we observed that compact DSMs significantly surpass the accuracy of COCO trained models of the same size.
By training on a compact dataset, we show that with an accuracy drop of only 3.6%, the training time can be reduced by 93%.
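The following is a minimal sketch of the two ideas in this abstract, written for image classification rather than detection to keep it short: culling training examples that a large general-purpose teacher already handles confidently, and distilling the teacher into a compact domain-specific student. The confidence threshold and distillation temperature are illustrative assumptions, not values from the paper.

import torch

def cull_easy_examples(teacher, dataset, conf_thresh=0.9):
    # keep only examples that the teacher does not already classify confidently and correctly
    kept = []
    teacher.eval()
    with torch.no_grad():
        for img, label in dataset:
            probs = teacher(img.unsqueeze(0)).softmax(dim=-1)[0]
            easy = probs.argmax().item() == label and probs.max().item() > conf_thresh
            if not easy:
                kept.append((img, label))
    return kept

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # soft-target cross-entropy between temperature-scaled teacher and student outputs
    p_teacher = (teacher_logits / temperature).softmax(dim=-1)
    log_p_student = (student_logits / temperature).log_softmax(dim=-1)
    return -(p_teacher * log_p_student).sum(dim=-1).mean() * temperature ** 2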
|
High object-detection accuracy can be obtained by training domain specific compact models and the training can be very short.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:298
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We compare the model-free reinforcement learning with the model-based approaches through the lens of the expressive power of neural networks for policies, $Q$-functions, and dynamics.
We show, theoretically and empirically, that even for one-dimensional continuous state space, there are many MDPs whose optimal $Q$-functions and policies are much more complex than the dynamics.
We hypothesize many real-world MDPs also have a similar property.
For these MDPs, model-based planning is a favorable algorithm, because the resulting policies can approximate the optimal policy significantly better than a neural network parameterization can, and model-free or model-based policy optimization rely on policy parameterization.
Motivated by the theory, we apply a simple multi-step model-based bootstrapping planner (BOOTS) to bootstrap a weak $Q$-function into a stronger policy.
Empirical results show that applying BOOTS on top of model-based or model-free policy optimization algorithms at the test time improves the performance on MuJoCo benchmark tasks.
Model-based deep reinforcement learning (RL) algorithms offer a lot of potentials in achieving significantly better sample efficiency than the model-free algorithms for continuous control tasks.
We can largely categorize the model-based deep RL algorithms into two types:
1. model-based policy optimization algorithms which learn policies or Q-functions, parameterized by neural networks, on the estimated dynamics, using off-the-shelf model-free algorithms or their variants (Luo et al., 2019; Janner et al., 2019; Kaiser et al., 2019; Kurutach et al., 2018; Feinberg et al., 2018; Buckman et al., 2018) , and
2. model-based planning algorithms, which plan with the estimated dynamics Nagabandi et al. (2018) ; Chua et al. (2018) ; Wang & Ba (2019) .
A deeper theoretical understanding of the pros and cons of model-based and the model-free algorithms in the continuous state space case will provide guiding principles for designing and applying new sample-efficient methods.
The prior work on the comparisons of model-based and model-free algorithms mostly focuses on their sample efficiency gap, in the case of tabular MDPs (Zanette & Brunskill, 2019; Jin et al., 2018) , linear quadratic regulator (Tu & Recht, 2018) , and contextual decision process with sparse reward (Sun et al., 2019) .
In this paper, we theoretically compare model-based RL and model-free RL in the continuous state space through the lens of approximability by neural networks, and then use the insight to design practical algorithms.
What is the representation power of neural networks for expressing the Qfunction, the policy, and the dynamics?
How do the model-based and model-free algorithms utilize the expressivity of neural networks?
Our main finding is that even for the case of one-dimensional continuous state space, there can be a massive gap between the approximability of Q-function and the policy and that of the dynamics:
The optimal Q-function and policy can be significantly more complex than the dynamics.
We construct environments where the dynamics are simply piecewise linear functions with constant pieces, but the optimal Q-functions and the optimal policy require an exponential (in the horizon) number of linear pieces, or exponentially wide neural networks, to approximate.
The approximability gap can also be observed empirically on (semi-)randomly generated piecewise linear dynamics with a decent chance.
(See Figure 1 for two examples.)
When the approximability gap occurs, any deep RL algorithms with policies parameterized by neural networks will suffer from a sub-optimal performance.
These algorithms include both model-free algorithms such as DQN (Mnih et al., 2015) and SAC (Haarnoja et al., 2018) , and model-based policy optimization algorithms such as SLBO (Luo et al., 2019) and MBPO (Janner et al., 2019) .
To validate the intuition, we empirically apply these algorithms to the constructed or the randomly generated MDPs.
Indeed, they fail to converge to the optimal rewards even with sufficient samples, which suggests that they suffer from the lack of expressivity.
However, in such cases, model-based planning algorithms should not suffer from the lack of expressivity, because they only use the learned, parameterized dynamics, which are easy to express.
The policy obtained from the planning is the maximizer of the total future reward on the learned dynamics, and can have an exponential (in the horizon) number of pieces even if the dynamics has only a constant number of pieces.
In fact, even a partial planner can help improve the expressivity of the policy.
If we plan for k steps and then resort to some Q-function for estimating the total reward of the remaining steps, we can obtain a policy with 2^k more pieces than the Q-function has.
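A minimal sketch of this kind of k-step bootstrapped planner (in the spirit of the BOOTS procedure introduced later, though not the authors' implementation): sample candidate action sequences, roll them out through the learned dynamics model for k steps, and score each sequence by the accumulated reward plus a learned value estimate at the final state. The random-shooting search, the batched reward/dynamics interfaces, and the candidate count are illustrative assumptions.

import torch

def k_step_bootstrap_plan(state, dynamics, reward_fn, terminal_value, k=4,
                          num_candidates=256, action_dim=6):
    # state: (state_dim,); dynamics(s, a) -> next states; reward_fn(s, a) -> per-candidate rewards;
    # terminal_value(s) -> estimate of the remaining return, e.g. the learned Q at the policy action
    actions = torch.rand(num_candidates, k, action_dim) * 2 - 1        # candidate sequences in [-1, 1]
    s = state.unsqueeze(0).expand(num_candidates, -1).clone()
    total = torch.zeros(num_candidates)
    with torch.no_grad():
        for t in range(k):
            a = actions[:, t]
            total += reward_fn(s, a)        # reward along the k planned steps
            s = dynamics(s, a)              # learned model, not the true environment
        total += terminal_value(s)          # bootstrap the rest of the return with the Q-function
    best = total.argmax()
    return actions[best, 0]                 # execute only the first action (MPC style)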
We hypothesize that real-world continuous control tasks also have a more complex optimal Q-function and policy than the dynamics.
The theoretical analysis of the synthetic dynamics suggests that a model-based few-steps planner on top of a parameterized Q-function will outperform the original Q-function because of the additional expressivity introduced by the planning.
We empirically verify the intuition on MuJoCo benchmark tasks.
We show that applying a model-based planner on top of Q-functions learned from model-based or model-free policy optimization algorithms in the test time leads to significant gains over the original Q-function or policy.
In summary, our contributions are:
1. We construct continuous state space MDPs whose Q-functions and policies are proved to be more complex than the dynamics (Sections 4.1 and 4.2.)
2. We empirically show that with a decent chance, (semi-)randomly generated piecewise linear MDPs also have complex Q-functions (Section 4.3.)
3. We show theoretically and empirically that the model-free RL or model-based policy optimization algorithms suffer from the lack of expressivity for the constructed MDPs (Sections 4.3), whereas model-based planning solve the problem efficiently (Section 5.2.)
4. Inspired by the theory, we propose a simple model-based bootstrapping planner (BOOTS), which can be applied on top of any model-free or model-based Q-learning algorithms at the test time.
Empirical results show that BOOTS improves the performance on MuJoCo benchmark tasks, and outperforms previous state-of-the-art on MuJoCo humanoid environment.
Our study suggests that there exists a significant representation power gap of neural networks between for expressing Q-function, the policy, and the dynamics in both constructed examples and empirical benchmarking environments.
We show that our model-based bootstrapping planner BOOTS helps to overcome the approximation issue and improves the performance in synthetic settings and in the difficult MuJoCo environments.
We raise some interesting open questions.
• Can we theoretically generalize our results to high-dimensional state space, or continuous actions space?
Can we theoretically analyze the number of pieces of the optimal Q-function of a stochastic dynamics?
• In this paper, we measure the complexity by the size of the neural networks.
It's conceivable that for real-life problems, the complexity of a neural network can be better measured by its weights norm.
Could we build a more realistic theory with another measure of complexity?
• The BOOTS planner comes with a cost of longer test time.
How do we efficiently plan in high-dimensional dynamics with a long planning horizon?
• The dynamics can also be more complex (perhaps in another sense) than the Q-function in certain cases.
How do we efficiently identify the complexity of the optimal Q-function, policy, and the dynamics, and how do we deploy the best algorithms for problems with different characteristics?
(Luo et al., 2019) , the stochasticity in the dynamics can play a similar role as the model ensemble.
Our algorithm is a few times faster than MBPO in wall-clock time.
It performs similarly to MBPO on Humanoid, but a bit worse than MBPO in other environments.
In MBSAC, we use SAC to optimize the policy π β and the Q-function Q ϕ .
We choose SAC due to its sample-efficiency, simplicity and off-policy nature.
We mix the real data from the environment and the virtual data, which are always fresh and are generated by our learned dynamics model f_θ.
|
We compare deep model-based and model-free RL algorithms by studying the approximability of $Q$-functions, policies, and dynamics by neural networks.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:299
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We present Tensor-Train RNN (TT-RNN), a novel family of neural sequence architectures for multivariate forecasting in environments with nonlinear dynamics.
Long-term forecasting in such systems is highly challenging, since there exist long-term temporal dependencies, higher-order correlations and sensitivity to error propagation.
Our proposed tensor recurrent architecture addresses these issues by learning the nonlinear dynamics directly using higher order moments and high-order state transition functions.
Furthermore, we decompose the higher-order structure using the tensor-train (TT) decomposition to reduce the number of parameters while preserving the model performance.
We theoretically establish the approximation properties of Tensor-Train RNNs for general sequence inputs, and such guarantees are not available for usual RNNs.
We also demonstrate significant long-term prediction improvements over general RNN and LSTM architectures on a range of simulated environments with nonlinear dynamics, as well on real-world climate and traffic data.
One of the central questions in science is forecasting: given the past history, how well can we predict the future?
In many domains with complex multivariate correlation structures and nonlinear dynamics, forecasting is highly challenging since the system has long-term temporal dependencies and higher-order dynamics.
Examples of such systems abound in science and engineering, from biological neural network activity, fluid turbulence, to climate and traffic systems (see FIG0 ).
Since current forecasting systems are unable to faithfully represent the higher-order dynamics, they have limited ability for accurate long-term forecasting.
Therefore, a key challenge is accurately modeling nonlinear dynamics and obtaining stable long-term predictions, given a dataset of realizations of the dynamics.
Here, the forecasting problem can be stated as follows: how can we efficiently learn a model that, given only few initial states, can reliably predict a sequence of future states over a long horizon of T time-steps?
Common approaches to forecasting involve linear time series models such as auto-regressive moving average (ARMA), state space models such as hidden Markov model (HMM), and deep neural networks.
We refer readers to a survey on time series forecasting by BID2 and the references therein.
A recurrent neural network (RNN), as well as its memory-based extensions such as the LSTM, is a class of models that have achieved good performance on sequence prediction tasks from demand forecasting BID5 to speech recognition BID15 and video analysis BID9 .
Although these methods can be effective for short-term, smooth dynamics, neither analytic nor data-driven learning methods tend to generalize well to capturing long-term nonlinear dynamics and predicting them over longer time horizons. To address this issue, we propose a novel family of tensor-train recurrent neural networks that can learn stable long-term forecasting.
These models have two key features: they
1) explicitly model the higher-order dynamics, by using a longer history of previous hidden states and high-order state interactions with multiplicative memory units; and
2) they are scalable by using tensor trains, a structured low-rank tensor decomposition that greatly reduces the number of model parameters, while mostly preserving the correlation structure of the full-rank model. In this work, we analyze Tensor-Train RNNs theoretically, and also experimentally validate them over a wide range of forecasting domains.
Our contributions can be summarized as follows:
• We describe how TT-RNNs encode higher-order non-Markovian dynamics and high-order state interactions. To address the memory issue, we propose a tensor-train (TT) decomposition that makes learning tractable and fast (see the sketch after this list).
• We provide theoretical guarantees for the representation power of TT-RNNs for nonlinear dynamics, and obtain the connection between the target dynamics and the TT-RNN approximation. In contrast, no such theoretical results are known for standard recurrent networks.
• We validate TT-RNNs on simulated data and two real-world environments with nonlinear dynamics (climate and traffic). Here, we show that TT-RNNs can forecast more accurately for significantly longer time horizons compared to standard RNNs and LSTMs.
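To give a concrete sense of the parameter saving from the tensor-train format mentioned above, here is a small numpy sketch (not the authors' code) that reconstructs a 3-way tensor from its TT cores and compares parameter counts; the mode sizes and TT ranks are arbitrary assumptions.

import numpy as np

# A full 3-way tensor of shape (n1, n2, n3) is represented by three TT cores:
# G1: (1, n1, r1), G2: (r1, n2, r2), G3: (r2, n3, 1)
n1, n2, n3 = 8, 8, 8
r1, r2 = 3, 3

rng = np.random.default_rng(0)
G1 = rng.standard_normal((1, n1, r1))
G2 = rng.standard_normal((r1, n2, r2))
G3 = rng.standard_normal((r2, n3, 1))

# Contract the cores along the rank dimensions to recover the full tensor.
full = np.einsum('aib,bjc,ckd->ijk', G1, G2, G3)

tt_params = G1.size + G2.size + G3.size
full_params = full.size
print(f"TT parameters: {tt_params}, full tensor parameters: {full_params}")
# e.g. 120 vs 512: the TT format trades some expressiveness (for general tensors)
# for a large reduction in the number of parameters.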
In this work, we considered forecasting under nonlinear dynamics. We propose a novel class of RNNs, TT-RNN.
We provide approximation guarantees for TT-RNN and characterize its representation power.
We demonstrate the benefits of TT-RNN to forecast accurately over significantly longer time horizons in both synthetic and real-world multivariate time series data. As we observed, chaotic dynamics still present a significant challenge to any sequential prediction model.
Hence, it would be interesting to study how to learn robust models for chaotic dynamics.
In other sequential prediction settings, such as natural language processing, there does not (or is not known to) exist a succinct analytical description of the data-generating process.
It would be interesting to further investigate the effectiveness of TT-RNNs in such domains as well.
|
Accurate forecasting over very long time horizons using tensor-train RNNs
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:3
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We propose learning to transfer learn (L2TL) to improve transfer learning on a target dataset by judicious extraction of information from a source dataset.
L2TL considers joint optimization of vastly-shared weights between models for source and target tasks, and employs adaptive weights for scaling of constituent losses.
The adaptation of the weights is based on reinforcement learning, guided with a performance metric on the target validation set.
We demonstrate state-of-the-art performance of L2TL given fixed models, consistently outperforming fine-tuning baselines on various datasets.
In the regimes of small-scale target datasets and significant label mismatch between source and target datasets, L2TL outperforms previous work by an even larger margin.
|
We propose learning to transfer learn (L2TL) to improve transfer learning on a target dataset by judicious extraction of information from a source dataset.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:30
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Common-sense physical reasoning is an essential ingredient for any intelligent agent operating in the real-world.
For example, it can be used to simulate the environment, or to infer the state of parts of the world that are currently unobserved.
In order to match real-world conditions this causal knowledge must be learned without access to supervised data.
To address this problem we present a novel method that learns to discover objects and model their physical interactions from raw visual images in a purely unsupervised fashion.
It incorporates prior knowledge about the compositional nature of human perception to factor interactions between object-pairs and learn efficiently.
On videos of bouncing balls we show the superior modelling capabilities of our method compared to other unsupervised neural approaches that do not incorporate such prior knowledge.
We demonstrate its ability to handle occlusion and show that it can extrapolate learned knowledge to scenes with different numbers of objects.
Humans rely on common-sense physical reasoning to solve many everyday physics-related tasks BID32 .
For example, it enables them to foresee the consequences of their actions (simulation), or to infer the state of parts of the world that are currently unobserved.
This causal understanding is an essential ingredient for any intelligent agent that is to operate within the world. Common-sense physical reasoning is facilitated by the discovery and representation of objects (a core domain of human cognition BID45 ) that serve as primitives of a compositional system.
They allow humans to decompose a complex visual scene into distinct parts, describe relations between them, and reason about their dynamics as well as the consequences of their interactions BID4 BID32 BID48 .
The most successful machine learning approaches to common-sense physical reasoning incorporate such prior knowledge in their design.
They maintain explicit object representations, which allow for general physical dynamics to be learned between object pairs in a compositional manner BID3 BID8 BID49 .
However, in these approaches learning is supervised, as it relies on object representations from external sources (e.g. a physics simulator) that are typically unavailable in real-world scenarios.
Neural approaches that learn to directly model motion or physical interactions in pixel space offer an alternative solution (BID46 BID47).
However, while unsupervised, these methods suffer from a lack of compositionality at the representational level of objects.
This prevents such end-to-end neural approaches from efficiently learning functions that operate on multiple entities and generalize in a human-like way (c.f. BID4 ; BID32 ; BID41 , but see BID39 ).
In this work we propose Relational N-EM (R-NEM), a novel approach to common-sense physical reasoning that learns physical interactions between objects from raw visual images in a purely unsupervised fashion.
At its core is Neural Expectation Maximization (N-EM), a method that allows for the discovery of compositional object-representations, yet is unable to model interactions between objects.
Therefore, we endow N-EM with a relational mechanism inspired by previous work BID3 BID8 BID41 , enabling it to factor interactions between object-pairs, learn efficiently, and generalize to visual scenes with a varying number of objects without re-training.
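As a rough sketch of what factoring interactions between object-pairs can look like in code (this is not the exact R-NEM update, which is embedded inside the N-EM inner loop), the snippet below takes K per-object representations, computes an attention weight for every other object, and aggregates MLP-encoded pairwise effects into each object's update. All layer sizes are arbitrary assumptions.

import torch
import torch.nn as nn

class PairwiseInteraction(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.effect = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, dim))
        self.attn = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))

    def forward(self, theta):                             # theta: (K, dim) object representations
        K, dim = theta.shape
        i = theta.unsqueeze(1).expand(K, K, dim)          # receiver
        j = theta.unsqueeze(0).expand(K, K, dim)          # sender
        pair = torch.cat([i, j], dim=-1)                  # (K, K, 2*dim)
        effects = self.effect(pair)                       # pairwise effect of j on i
        weights = torch.sigmoid(self.attn(pair))          # how much i attends to j
        mask = 1.0 - torch.eye(K).unsqueeze(-1)           # ignore self-interactions
        return (weights * effects * mask).sum(dim=1)      # (K, dim) aggregated effect per object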
We have argued that the ability to discover and describe a scene in terms of objects provides an essential ingredient for common-sense physical reasoning.
This is supported by converging evidence from cognitive science and developmental psychology that intuitive physics and reasoning capabilities are built upon the ability to perceive objects and their interactions BID43 BID48 .
The fact that young infants already exhibit this ability, may even suggest an innate bias towards compositionality BID32 BID37 BID45 .
Inspired by these observations we have proposed R-NEM, a method that incorporates inductive biases about the existence of objects and interactions, implemented by its clustering objective and interaction function respectively.
The specific nature of the objects, and their dynamics and interactions, can then be learned efficiently purely from visual observations. In our experiments we find that R-NEM indeed captures the (physical) dynamics of various environments more accurately than other methods, and that it exhibits improved generalization to environments with different numbers of objects.
It can be used as an approximate simulator of the environment, and to predict movement and collisions of objects, even when they are completely occluded.
This demonstrates a notion of object permanence and aligns with evidence that young infants seem to infer that occluded objects move in connected paths and continue to maintain objectspecific properties BID44 .
Moreover, young infants also appear to expect that objects only interact when they come into contact BID44 , which is analogous to the behaviour of R-NEM to only attend to other objects when a collision is imminent.
In summary, we believe that our method presents an important step towards learning a more human-like model of the world in a completely unsupervised fashion. Current limitations of our approach revolve around grouping and prediction.
What aspects of a scene humans group together typically varies as a function of the task in mind.
One may perceive a stack of chairs as a whole if the goal is to move them to another room, or as individual chairs if the goal is to count the number of chairs in the stack.
In order to facilitate this dynamic grouping one would need to incorporate top-down feedback from an agent into the grouping procedure to deviate from the built-in inductive biases.
Another limitation of our approach is the need to incentivize R-NEM to produce useful groupings by injecting noise, or reducing capacity.
The former may prevent very small regularities in the input from being detected.
Finally the interaction in the E-step among the groups makes it difficult to increase the number of components above ten without causing harmful training instabilities.
Due to the multitude of interactions and objectives in R-NEM (and RNN-EM) we find that they are sometimes challenging to train. In terms of prediction we have implicitly assumed that objects in the environment behave according to rules that can be inferred.
This poses a challenge when objects deform in a manner that is difficult to predict (as is the case for objects in Space Invaders due to downsampling).
However in practice we find that (once pixels have been grouped together) the masking of the input helps each component in quickly adapting its representation to any unforeseen behaviour across consecutive time steps.
Perhaps a more severe limitation of R-NEM (and of RNN-EM in general) is that the second loss term of the outer training objective hinders modelling more complex, varying backgrounds, as the background group would have to predict the "pixel prior" for every other group. We argue that the ability to engage in common-sense physical reasoning benefits any intelligent agent that needs to operate in a physical environment, which provides exciting future research opportunities.
In future work we intend to investigate how top-down feedback from an agent could be incorporated in R-NEM to facilitate dynamic groupings, but also how the compositional representations produced by R-NEM can benefit a reinforcement learner, for example to learn a modular policy that easily generalizes to novel combinations of known objects.
Other interactions between a controller C and a model of the world M (implemented by R-NEM) as posed in BID42 constitute further research directions.
|
We introduce a novel approach to common-sense physical reasoning that learns to discover objects and model their physical interactions from raw visual images in a purely unsupervised fashion
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:300
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The idea that neural networks may exhibit a bias towards simplicity has a long history.
Simplicity bias provides a way to quantify this intuition.
It predicts, for a broad class of input-output maps which can describe many systems in science and engineering, that simple outputs are exponentially more likely to occur upon uniform random sampling of inputs than complex outputs are.
This simplicity bias behaviour has been observed for systems ranging from the RNA sequence to secondary structure map, to systems of coupled differential equations, to models of plant growth.
Deep neural networks can be viewed as a mapping from the space of parameters (the weights) to the space of functions (how inputs get transformed to outputs by the network).
We show that this parameter-function map obeys the necessary conditions for simplicity bias, and numerically show that it is hugely biased towards functions with low descriptional complexity.
We also demonstrate a Zipf like power-law probability-rank relation.
A bias towards simplicity may help explain why neural nets generalize so well.
In a recent paper BID4 , an inequality inspired by the coding theorem from algorithmic information theory (AIT) BID5 , and applicable to computable input-output maps was derived using the following simple procedure.
Consider a map f : I → O between N_I inputs and N_O outputs.
The size of the input space is parameterized as n, e.g. if the inputs are binary strings, then N_I = 2^n.
Assuming f and n are given, implement the following simple procedure: first enumerate all 2^n inputs and map them to outputs using f.
Then order the outputs by how frequently they appear.
Using a Shannon-Fano code, one can then describe x with a code of length −log2 P(x) + O(1), which therefore upper bounds the Kolmogorov complexity, giving the relation P(x) ≤ 2^(−K(x|f,n) + O(1)).
The O(1) terms are independent of x (but hard to estimate).
Similar bounds can be found in standard works BID5 .
As pointed out in BID4 , if the maps are simple, that is condition 1: K(f) + K(n) ≪ K(x) + O(1) holds, then because K(x) ≤ K(x|f, n) + K(f) + K(n) + O(1), and K(x|f, n) ≤ K(x) + O(1), it follows that K(x|f, n) ≈ K(x) + O(1).
The problem remains that Kolmogorov complexity is fundamentally uncomputable BID5 , and that the O(1) terms are hard to estimate.
However, in reference (5) a more pragmatic approach was taken to argue that a bound on the probability P(x) that x obtains upon random sampling of inputs can be approximated as
P(x) ≤ 2^(−a K̃(x) − b),   (1)
where K̃(x) is a suitable approximation to the Kolmogorov complexity of x.
Here a and b are constants that are independent of x and which can often be determined from some basic information about the map.
These constants pick up multiplicative and additive factors in the approximation to K(x) and to the O(1) terms.In addition to the simplicity of the the input-output map f (condition (1)), the map also needs to obey conditions BID1 Redundancy: that the number of inputs N I is much larger than the number of outputs N O , as otherwise P (x) can't vary much;
3) Large systems where N O 0, so that finite size effects don't play a dominant role;
4) Nonlinear: If the map f is linear it won't show bias and
5) Well-behaved: The map should not have a significant fraction of pseudorandom outputs because it is hard to find good approximations K̃(x).
For example, many random-number generators produce outputs that appear complex, but in fact have low K(x) because they are generated by relatively simple algorithms with short descriptions. Some of the steps above may seem rather rough to AIT purists.
For example: Can a reasonable approximation to K(x) be found?
What about O(1) terms?
And, how do you know condition
5) is fulfilled?
Notwithstanding these important questions, in reference (5) the simplicity bias bound (1) was tested empirically for a wide range of different maps, ranging from a sequence to RNA secondary structure map, to a set of coupled differential equations, to L-systems (a model for plant morphology and computer graphics), to a stochastic financial model.
In each case the bound works remarkably well: High probability outputs have low complexity, and high complexity outputs have low probability (but not necessarily vice versa).
A simple matrix map that allows condition 1 to be directly tested also demonstrates that when the map becomes sufficiently complex, simplicity bias phenomena disappear.
[Fragments of a figure caption and a side column are garbled in the source; the recoverable caption information is: probability versus complexity for different sized systems; (a) RNA n = 10, where the simplest structure does have the largest probability; upper bound a = 0.23, b = 1.08; (c) RNA n = 80, upper bound a = 0.33, b = 6.39.]
Probability that an RNA secondary structure x obtains upon random sampling of length L = 80 sequences versus a Lempel-Ziv measure of the complexity of the structure.
The black solid line is the simplicity-bias bound (1), while the dashed line denotes the bound with the parameter b set to zero.In FIG0 we illustrate an iconic input-output map for RNA, a linear biopolymer that can fold into well-defined sructures due to specific bonding between the four different types of nucleotides ACUG from which its sequences are formed.
While the full three-dimensional structure is difficult to predict, the secondary structure, which records which nucleotide binds to which nucleotide, can be efficiently and accurately calculated.
This mapping from sequences to secondary structures fulfills the conditions above.
Most importantly, the map, which uses the laws of physics to determine the lowest free-energy structure for a given sequence, is independent of the length of the sequences, and so fulfills the simplicity condition (1).
The structures (the outputs x) can be written in terms of a ternary string, and so simple compression algorithms can be used to estimate their complexity.
In FIG0 , we observe, as expected, that the probability P (x) that a particular secondary structure x is found upon random sampling of sequences is bounded by Eq. (1) as predicted.
Similar robust simplicity bias behaviour to that seen in this figure was observed for the other maps. Similar scaling (5) was also observed for this map with a series of other approximations to K(x), suggesting that the precise choice of complexity measure was not critical, as long as it captures some basic essential features. In summary then, the simplicity bias bound (1) works robustly well for a wide range of different maps.
The predictions are strong: the probability that an output obtains upon random sampling of inputs should drop (at least) exponentially with linear increases in the descriptional complexity of the output.
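A small sketch of the generic experiment behind such a test (not the code used in the paper): enumerate all inputs of a toy map, count how often each output appears, and estimate output complexity with an off-the-shelf compressor as a crude stand-in for the Lempel-Ziv measure. The toy map and the use of zlib are illustrative assumptions.

import itertools
import zlib
from collections import Counter

def toy_map(bits):
    # an arbitrary discrete input-output map: iterate a thresholded local rule a few times
    state = list(bits)
    for _ in range(4):
        state = [1 if (state[i - 1] + state[i]) >= 1 else 0 for i in range(len(state))]
    return ''.join(map(str, state))

def approx_complexity(s):
    # crude compression-based stand-in for the descriptional complexity of an output
    return len(zlib.compress(s.encode()))

n = 16
counts = Counter(toy_map(bits) for bits in itertools.product([0, 1], repeat=n))
total = 2 ** n
for x, c in counts.most_common(5):
    print(x, f"P(x) = {c / total:.4f}", f"K~(x) = {approx_complexity(x)}")
# Under simplicity bias, log2 P(x) should be bounded above (approximately) by -a*K~(x) - b
# for constants a, b that do not depend on x.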
Nevertheless, it is important to note that while the 5 conditions above are sufficient for the bound (1) to hold, they are not sufficient to guarantee that the map will be biased (and therefore simplicity biased).
One can easily construct maps that obey them, but do not show bias.
Understanding the conditions resulting in biased maps is very much an open area of investigation. The question we will address here is: Can deep learning be re-cast into the language of input-output maps, and if so, do these maps also exhibit the very general phenomenon of simplicity bias?
2. The parameter-function map
It is not hard to see that the map above obeys condition 1: The shortest description of the map grows slowly with the logarithm of the size of the space of functions (which determines the typical K(x)).
Conditions 2-4 are also clearly met.
Condition 5 is more complex and requires empirical testing.
But given that simplicity bias was observed for such a wide range of maps, our expectation is that it will hold robustly for neural networks also.
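Continuing in the same spirit, the parameter-function map of a small network can be sampled directly: draw random weights, evaluate the network on every binary input of length n, and treat the resulting string of output bits as the "function". This is a minimal illustration with an arbitrary architecture and sampling distribution, not the setup used in the paper's experiments.

import itertools
import zlib
from collections import Counter

import numpy as np

def random_net_function(n, hidden=8, rng=None):
    # one random draw from the parameter-function map of a tiny 1-hidden-layer network
    rng = rng or np.random.default_rng()
    W1 = rng.standard_normal((n, hidden))
    W2 = rng.standard_normal(hidden)
    inputs = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
    h = np.tanh(inputs @ W1)
    out = (h @ W2 > 0).astype(int)                 # one output bit per input
    return ''.join(map(str, out))                  # the "function" as a 2^n-bit string

rng = np.random.default_rng(0)
n, samples = 7, 20000
counts = Counter(random_net_function(n, rng=rng) for _ in range(samples))
for f, c in counts.most_common(5):
    k_approx = len(zlib.compress(f.encode()))      # compression-based complexity proxy
    print(f"P = {c / samples:.4f}  K~ = {k_approx}  f = {f[:32]}...")
# High-probability functions should come out with low approximate complexity.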
We have provided evidence that neural networks exhibit simplicity bias.
The fact that the phenomena observed are remarkably similar to those of a wide range of maps from science and engineering BID4 suggests that this behaviour is general, and will hold for many neural network architectures.
It would be interesting to test this claim for larger systems, which will require new sampling techniques, and to derive analytic arguments for a bias towards simplicity, as done in BID12 .
|
A very strong bias towards simple outputs is observed in many simple input-output maps. The parameter-function map of deep networks is found to be biased in the same way.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:301
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Imitation Learning (IL) is an appealing approach to learn desirable autonomous behavior.
However, directing IL to achieve arbitrary goals is difficult.
In contrast, planning-based algorithms use dynamics models and reward functions to achieve goals.
Yet, reward functions that evoke desirable behavior are often difficult to specify.
In this paper, we propose "Imitative Models" to combine the benefits of IL and goal-directed planning.
Imitative Models are probabilistic predictive models of desirable behavior able to plan interpretable expert-like trajectories to achieve specified goals.
We derive families of flexible goal objectives, including constrained goal regions, unconstrained goal sets, and energy-based goals.
We show that our method can use these objectives to successfully direct behavior.
Our method substantially outperforms six IL approaches and a planning-based approach in a dynamic simulated autonomous driving task, and is efficiently learned from expert demonstrations without online data collection.
We also show our approach is robust to poorly-specified goals, such as goals on the wrong side of the road.
Imitation learning (IL) is a framework for learning a model to mimic behavior.
At test-time, the model pursues its best-guess of desirable behavior.
By letting the model choose its own behavior, we cannot direct it to achieve different goals.
While prior work has augmented IL with goal conditioning (Dosovitskiy & Koltun, 2016; Codevilla et al., 2018), it requires goals to be specified during training with explicit goal labels, and these goals are simple (e.g., turning).
In contrast, we seek flexibility to achieve general goals for which we have no demonstrations.
In contrast to IL, planning-based algorithms like model-based reinforcement learning (MBRL) methods do not require expert demonstrations.
MBRL can adapt to new tasks specified through reward functions (Kuvayev & Sutton, 1996; Deisenroth & Rasmussen, 2011) .
The "model" is a dynamics model, used to plan under the user-supplied reward function.
Planning enables these approaches to perform new tasks at test-time.
The key drawback is that these models learn dynamics of possible behavior rather than dynamics of desirable behavior.
This means that the responsibility of evoking desirable behavior is entirely deferred to engineering the input reward function.
Designing reward functions that cause MBRL to evoke complex, desirable behavior is difficult when the space of possible undesirable behaviors is large.
In order to succeed, the rewards cannot lead the model astray towards observations significantly different than those with which the model was trained.
Our goal is to devise an algorithm that combines the advantages of MBRL and IL by offering MBRL's flexibility to achieve new tasks at test-time and IL's potential to learn desirable behavior entirely from offline data.
To accomplish this, we first train a model to forecast expert trajectories with a density function, which can score trajectories and plans by how likely they are to come from the expert.
A probabilistic model is necessary because expert behavior is stochastic: e.g. at an intersection, the expert could choose to turn left or right.
Next, we derive a principled probabilistic inference objective to create plans that incorporate both (1) the model and (2) arbitrary new tasks.
Finally, we derive families of tasks that we can provide to the inference framework.
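A minimal sketch of the test-time inference this describes (not the authors' implementation): treat the plan as a free variable and maximize the learned log-density of expert-like trajectories plus a log-likelihood term for the goal, here a simple Gaussian around a goal position. The density-model interface, the Gaussian goal, and the optimizer settings are assumptions for illustration.

import torch

def plan_to_goal(log_q, goal, horizon=20, xy_dim=2, steps=200, lr=0.1, sigma=1.0):
    # log_q(traj) -> differentiable scalar log-density of an expert-like trajectory, traj: (horizon, xy_dim)
    traj = torch.zeros(horizon, xy_dim, requires_grad=True)
    opt = torch.optim.Adam([traj], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        goal_loglik = -((traj[-1] - goal) ** 2).sum() / (2 * sigma ** 2)   # Gaussian goal on the final waypoint
        objective = log_q(traj) + goal_loglik                              # imitation prior + goal likelihood
        (-objective).backward()
        opt.step()
    return traj.detach()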
Our method can accomplish new tasks specified as complex goals without having seen an expert complete these tasks before.
We investigate properties of our method on a dynamic simulated autonomous driving task (see Fig. 1 ).
Videos are available at https://sites.google.com/view/imitative-models.
Our contributions are as follows:
Figure 1: Our method: deep imitative models.
Top Center.
We use demonstrations to learn a probability density function q of future behavior and deploy it to accomplish various tasks.
Left: A region in the ground plane is input to a planning procedure that reasons about how the expert would achieve that task.
It coarsely specifies a destination, and guides the vehicle to turn left.
Right: Goal positions and potholes yield a plan that avoids potholes and achieves one of the goals on the right.
1. Interpretable expert-like plans with minimal reward engineering.
Our method outputs multistep expert-like plans, offering superior interpretability to one-step imitation learning models.
In contrast to MBRL, our method generates expert-like behaviors with minimal reward engineering.
2. Flexibility to new tasks: In contrast to IL, our method flexibly incorporates and achieves goals not seen during training, and performs complex tasks that were never demonstrated, such as navigating to goal regions and avoiding test-time only potholes, as depicted in Fig. 1 .
3. Robustness to goal specification noise: We show that our method is robust to noise in the goal specification.
In our application, we show that our agent can receive goals on the wrong side of the road, yet still navigate towards them while staying on the correct side of the road.
4. State-of-the-art CARLA performance: Our method substantially outperforms MBRL, a custom IL method, and all five prior CARLA IL methods known to us.
It learned near-perfect driving through dynamic and static CARLA environments from expert observations alone.
We proposed "Imitative Models" to combine the benefits of IL and MBRL.
Imitative Models are probabilistic predictive models able to plan interpretable expert-like trajectories to achieve new goals.
Inference with an Imitative Model resembles trajectory optimization in MBRL, enabling it to both incorporate new goals and plan to them at test-time, which IL cannot.
Learning an Imitative Model resembles offline IL, enabling it to circumvent the difficult reward-engineering and costly online data collection necessities of MBRL.
We derived families of flexible goal objectives and showed our model can successfully incorporate them without additional training.
Our method substantially outperformed six IL approaches and an MBRL approach in a dynamic simulated autonomous driving task.
We showed our approach is robust to poorly specified goals, such as goals on the wrong side of the road.
We believe our method is broadly applicable in settings where expert demonstrations are available, flexibility to new situations is demanded, and safety is paramount.
Future work could investigate methods to handle both observation noise and out-of-distribution observations to enhance the applicability to robust real systems -we expand on this issue in Appendix E. Finally, to facilitate more general planning, future work could extend our approach to explicitly reason about all agents in the environment in order to inform a closed-loop plan for the controlled agent.
|
In this paper, we propose Imitative Models to combine the benefits of IL and goal-directed planning: probabilistic predictive models of desirable behavior able to plan interpretable expert-like trajectories to achieve specified goals.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:302
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Learning communication via deep reinforcement learning has recently been shown to be an effective way to solve cooperative multi-agent tasks.
However, learning which communicated information is beneficial for each agent's decision-making remains a challenging task.
In order to address this problem, we introduce a fully differentiable framework for communication and reasoning, enabling agents to solve cooperative tasks in partially-observable environments.
The framework is designed to facilitate explicit reasoning between agents, through a novel memory-based attention network that can learn selectively from its past memories.
The model communicates through a series of reasoning steps that decompose each agent's intentions into learned representations that are used first to compute the relevance of communicated information, and second to extract information from memories given newly received information.
By selectively interacting with new information, the model effectively learns a communication protocol directly, in an end-to-end manner.
We empirically demonstrate the strength of our model in cooperative multi-agent tasks, where inter-agent communication and reasoning over prior information substantially improves performance compared to baselines.
Communication is one of the fundamental building blocks for cooperation in multi-agent systems.
The ability to effectively represent and communicate information valuable to a task is especially important in multi-agent reinforcement learning (MARL).
Apart from learning what to communicate, it is critical that agents learn to reason based on the information communicated to them by their teammates.
Such a capability enables agents to develop sophisticated coordination strategies that would be invaluable in application scenarios such as search-and-rescue for multi-robot systems (Li et al., 2002), swarming and flocking with adversaries (Kitano et al., 1999), multiplayer games (e.g., StarCraft (Vinyals et al., 2017), DoTA (OpenAI, 2018)), and autonomous vehicle planning (Petrillo et al., 2018). Building agents that can solve complex cooperative tasks requires us to answer the question: how do agents learn to communicate in support of intelligent cooperation?
Indeed, humans inspire this question as they exhibit highly complex collaboration strategies, via communication and reasoning, allowing them to recognize important task information through a structured reasoning process, (De Ruiter et al., 2010; Garrod et al., 2010; Fusaroli et al., 2012) .
Significant progress in multiagent deep reinforcement learning (MADRL) has been made in learning effective communication (protocols), through the following methods:
(i) broadcasting a vector representation of each agent's private observations to all agents (Sukhbaatar et al., 2016; Foerster et al., 2016) ,
(ii) selective and targeted communication through the use of soft-attention networks, (Vaswani et al., 2017) , that compute the importance of each agent and its information, (Jiang & Lu, 2018; Das et al., 2018) , and
(iii) communication through a shared memory channel (Pesce & Montana, 2019; Foerster et al., 2018) , which allows agents to collectively learn and contribute information at every time instant.
The architecture of (Jiang & Lu, 2018) implements communication by enabling agents to communicate intention as a learned representation of private observations, which are then integrated in the hidden state of a recurrent neural network as a form of agent memory.
One downside to this approach is that as the communication is constrained in the neighborhood of each agent, communicated information does not enrich the actions of all agents, even if certain agent communications may be critical for a task.
For example, if an agent from afar has covered a landmark, this information would be beneficial to another agent that has a trajectory planned towards the same landmark.
In contrast, Memory Driven Multi-Agent Deep Deterministic Policy Gradient (MD-MADDPG), (Pesce & Montana, 2019) , implements a shared memory state between all agents that is updated sequentially after each agent selects an action.
However, the importance of each agent's update to the memory in MD-MADDPG is solely decided by its interactions with the memory channel.
In addition, the sequential nature of updating the memory channel restricts the architecture's performance to 2-agent systems.
Targeted Multi-Agent Communication (TarMAC), (Das et al., 2018) , uses soft-attention (Vaswani et al., 2017) for the communication mechanism to infer the importance of each agent's information, however without the use of memory in the communication step.
The paradigm of using relations in agent-based reinforcement learning was proposed by (Zambaldi et al., 2018) through multi-headed dot-product attention (MHDPA) (Vaswani et al., 2017) .
The core idea of relational reinforcement learning (RRL) combines inductive logic programming (Lavrac & Dzeroski, 1994; Džeroski et al., 2001 ) and reinforcement learning to perform reasoning steps iterated over entities in the environment.
Attention is a widely adopted framework in Natural Language Processing (NLP) and Visual Question Answering (VQA) tasks (Andreas et al., 2016b; a; Hudson & Manning, 2018) for computing these relations and interactions between entities.
The mechanism (Vaswani et al., 2017) generates an attention distribution over the entities, or more simply a weighted value vector based on importance for the task at hand.
This method has been adopted successfully in state-of-the-art results for Visual Question Answering (VQA) tasks (Andreas et al., 2016b) , (Andreas et al., 2016a) , and more recently (Hudson & Manning, 2018) , demonstrating the robustness and generalization capacity of reasoning methods in neural networks.
In the context of multi-agent cooperation, we draw inspiration from work in soft-attention (Vaswani et al., 2017) to implement a method for computing relations between agents, coupled with a memory based attention network from Compositional Attention Networks (MAC) (Hudson & Manning, 2018) , yielding a framework for a memory-based communication that performs attentive reasoning over new information and past memories.
Concretely, we develop a communication architecture in MADRL by leveraging the approach of RRL and the capacity to learn from past experiences.
Our architecture is guided by the belief that a structured and iterative reasoning between non-local entities should enable agents to capture higherorder relations that are necessary for complex problem-solving.
To seek a balance between computational efficiency and adaptivity to variable team sizes, we exploit the soft-attention (Vaswani et al., 2017) as the base operation for selectively attending to an entity or information.
To capture the information and histories of other entities, and to better equip agents to make a deliberate decision, we separate out the attention and reasoning steps.
The attention unit informs the agent of which entities are most important for the current time-step, while the reasoning steps use previous memories and the information guided by the attention step to extract the shared information that is most relevant.
This explicit separation in communication enables agents to not only place importance on new information from other agents, but to selectively choose information from its past memories given new information.
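A rough sketch of the attention step just described (not the full SARNet reasoning cell): each agent forms a query from its hidden state, attends over keys and values computed from the other agents' messages, and then gates the attended information using a read of its own memory. The layer sizes and the simple sigmoid gate are illustrative assumptions.

import torch
import torch.nn as nn

class CommAttention(nn.Module):
    def __init__(self, hid, msg, dim=32):
        super().__init__()
        self.q = nn.Linear(hid, dim)
        self.k = nn.Linear(msg, dim)
        self.v = nn.Linear(msg, dim)
        self.read = nn.Linear(hid + dim, dim)    # combines attended info with the agent's memory

    def forward(self, hidden, memory, messages):
        # hidden: (hid,) this agent's state; memory: (hid,); messages: (N, msg) from other agents
        q = self.q(hidden)                                        # (dim,)
        k, v = self.k(messages), self.v(messages)                 # (N, dim)
        attn = torch.softmax(k @ q / k.shape[-1] ** 0.5, dim=0)   # soft-attention over agents
        attended = attn @ v                                       # (dim,) weighted communicated info
        gate = torch.sigmoid(self.read(torch.cat([memory, attended])))
        return gate * attended                                    # info selected given new messages and memory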
This communication framework is learned in an end-to-end fashion, without resorting to any supervision, as a result of task-specific rewards.
Our empirical study demonstrates the effectiveness of our novel architecture to solve cooperative multi-agent tasks, with varying team sizes and environments.
By leveraging the paradigm of centralized learning and decentralized execution, alongside communication, we demonstrate the efficacy of the learned cooperative strategies.
We have introduced a novel framework, SARNet, for communication in multi-agent deep RL which performs a structured attentive reasoning between agents to improve coordination skills.
Through a decomposition of the representations of communication into reasoning steps, our agents exceed baseline methods in overall performance.
Our experiments demonstrate key benefits of gathering insights from (1) an agent's own memories, and (2) the internal representations of the information available to the agent.
The communication architecture is learned end-to-end, and is capable of computing the task-relevant importance of each piece of information communicated by cooperating agents.
While this multi-agent communication mechanism shows promising results, we believe that we can further adapt this method to scale to a larger number of agents, through a gating mechanism to initiate communication, and decentralized learning.
|
Novel architecture of memory based attention mechanism for multi-agent communication.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:303
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this paper, we introduce Symplectic ODE-Net (SymODEN), a deep learning framework which can infer the dynamics of a physical system from observed state trajectories.
To achieve better generalization with fewer training samples, SymODEN incorporates appropriate inductive bias by designing the associated computation graph in a physics-informed manner.
In particular, we enforce Hamiltonian dynamics with control to learn the underlying dynamics in a transparent way which can then be leveraged to draw insight about relevant physical aspects of the system, such as mass and potential energy.
In addition, we propose a parametrization which can enforce this Hamiltonian formalism even when the generalized coordinate data is embedded in a high-dimensional space or we can only access velocity data instead of generalized momentum.
This framework, by offering interpretable, physically-consistent models for physical systems, opens up new possibilities for synthesizing model-based control strategies.
In recent years, deep neural networks (Goodfellow et al., 2016) have become very accurate and widely used in many application domains, such as image recognition (He et al., 2016) , language comprehension (Devlin et al., 2019) , and sequential decision making (Silver et al., 2017) .
To learn underlying patterns from data and enable generalization beyond the training set, the learning approach incorporates appropriate inductive bias (Haussler, 1988; Baxter, 2000) by promoting representations which are simple in some sense.
It typically manifests itself via a set of assumptions, which in turn can guide a learning algorithm to pick one hypothesis over another.
The success in predicting an outcome for previously unseen data then depends on how well the inductive bias captures the ground reality.
Inductive bias can be introduced as the prior in a Bayesian model, or via the choice of computation graphs in a neural network.
In a variety of settings, especially in physical systems, wherein laws of physics are primarily responsible for shaping the outcome, generalization in neural networks can be improved by leveraging underlying physics for designing the computation graphs.
Here, by leveraging a generalization of the Hamiltonian dynamics, we develop a learning framework which exploits the underlying physics in the associated computation graph.
Our results show that incorporation of such physics-based inductive bias offers insight about relevant physical properties of the system, such as inertia, potential energy, total conserved energy.
These insights, in turn, enable a more accurate prediction of future behavior and improvement in out-of-sample behavior.
Furthermore, learning a physically-consistent model of the underlying dynamics can subsequently enable usage of model-based controllers which can provide performance guarantees for complex, nonlinear systems.
In particular, insight about kinetic and potential energy of a physical system can be leveraged to synthesize appropriate control strategies, such as the method of controlled Lagrangian (Bloch et al., 2001 ) and interconnection & damping assignment (Ortega et al., 2002) , which can reshape the closed-loop energy landscape to achieve a broad range of control objectives (regulation, tracking, etc.) .
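For reference, the Hamiltonian formalism with control that this line of work builds on can be written as follows. This is a sketch of the commonly assumed structure, not necessarily the exact parametrization learned by SymODEN: the state consists of generalized coordinates q and momenta p, the Hamiltonian is the total energy, and the control input u enters through an input matrix g(q).

```latex
\begin{aligned}
H(q, p) &= \tfrac{1}{2}\, p^{\top} M^{-1}(q)\, p + V(q), \\
\dot{q} &= \frac{\partial H}{\partial p}, \qquad
\dot{p} = -\frac{\partial H}{\partial q} + g(q)\, u .
\end{aligned}
```

Here M(q) denotes the mass (inertia) matrix and V(q) the potential energy; learning M(q), V(q) and g(q) from trajectories is what makes quantities such as mass and potential energy recoverable from the trained model.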
Here we have introduced Symplectic ODE-Net which provides a systematic way to incorporate prior knowledge of Hamiltonian dynamics with control into a deep learning framework.
We show that SymODEN achieves better prediction with fewer training samples by learning an interpretable, physically-consistent state-space model.
Future works will incorporate a broader class of physics-based priors, such as the port-Hamiltonian system formulation, to learn the dynamics of a larger class of physical systems.
SymODEN can work with embedded angle data or when we only have access to velocity instead of generalized momentum.
Future works would explore other types of embedding, such as embedded 3D orientations.
Another interesting direction could be to combine energy shaping control (potential as well as kinetic energy shaping) with interpretable end-to-end learning frameworks.
|
This work enforces Hamiltonian dynamics with control to learn system models from embedded position and velocity data, and exploits this physically-consistent dynamics to synthesize model-based control via energy shaping.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:304
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Federated learning, where a global model is trained by iterative parameter averaging of locally-computed updates, is a promising approach for distributed training of deep networks; it provides high communication-efficiency and privacy-preservability, which allows to fit well into decentralized data environments, e.g., mobile-cloud ecosystems.
However, despite the advantages, the federated learning-based methods still have a challenge in dealing with non-IID training data of local devices (i.e., learners).
In this regard, we study the effects of a variety of hyperparametric conditions under the non-IID environments, to answer important concerns in practical implementations:
(i) We first investigate parameter divergence of local updates to explain performance degradation from non-IID data.
The origin of the parameter divergence is also found both empirically and theoretically.
(ii) We then revisit the effects of optimizers, network depth/width, and regularization techniques; our observations show that the well-known advantages of the hyperparameter optimization strategies could rather yield diminishing returns with non-IID data.
(iii) We finally provide the reasons of the failure cases in a categorized way, mainly based on metrics of the parameter divergence.
Over the recent years, federated learning (McMahan et al., 2017) has been a huge success to reduce the communication overhead in distributed training of deep networks.
Guaranteeing competitive performance, the federated learning permits each learner to compute their local updates of each round for relatively many iterations (e.g., 1 epoch, 10 epochs, etc.), which provides much higher communication-efficiency compared to the conventional data parallelism approaches (for intra-datacenter environments, e.g., Dean et al. (2012) ; Chen et al. (2016) ) that generally require very frequent gradient aggregation.
Furthermore, federated learning can also significantly reduce data privacy and security risks by enabling each learner to conceal its on-device data from the server and the other learners; since the approach thus applies well to environments with highly private data (e.g., personal medical data), it is now emerging as a promising methodology for privacy-preserving distributed learning along with differential privacy-based methods (Hard et al., 2018; Yang et al., 2018; Bonawitz et al., 2019; Chen et al., 2019).
In this manner, federated learning takes a simple approach that performs iterative parameter averaging of local updates computed on each learner's own dataset, which suggests an efficient way to learn a shared model without centralizing training data from multiple sources; however, since the local data of each device is created by its usage pattern, heterogeneity of training data distributions across the learners should naturally be assumed in real-world cases.
Hence, each local dataset would not follow the population distribution, and handling the decentralized non-IID data still remains a statistical challenge in the field of federated learning (Smith et al., 2017) .
For instance, Zhao et al. (2018) observed severe performance degradation in multi-class classification accuracy under highly skewed non-IID data; it was reported that more diminishing returns could be yielded as the probabilistic distance of learners' local data from the population distribution increases.
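For concreteness, the parameter-averaging scheme that this line of work builds on (FedAvg-style) can be sketched as follows; this is a minimal sketch rather than the exact training loop used in the experiments, and `local_train` stands in for whatever optimizer and number of local iterations each learner runs.

```python
import copy

def federated_round(global_model, learners, local_train, weights=None):
    """One communication round: broadcast, local training, weighted parameter averaging."""
    states = []
    for learner in learners:
        local_model = copy.deepcopy(global_model)   # broadcast the current global model
        local_train(local_model, learner)           # several local iterations on the learner's data
        states.append(local_model.state_dict())
    if weights is None:                             # e.g., proportional to local dataset sizes
        weights = [1.0 / len(states)] * len(states)
    avg_state = {}
    for key in states[0]:
        avg_state[key] = sum(w * s[key] for w, s in zip(weights, states))
    global_model.load_state_dict(avg_state)
    return global_model
```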
We now explain the internal reasons of the observations in the previous subsection.
Through the experimental results, we were able to classify the causes of the failures under non-IID data into three categories; the following discussions are described based on this.
Note that our discussion in this subsection is mostly based on the results under Nesterov momentum SGD and on CIFAR-10; the complete results, including other optimizers (e.g., pure SGD, Polyak momentum SGD, and Adam) and datasets (e.g., SVHN), are given in Appendix C.
Inordinate magnitude of parameter divergence.
As mentioned before, bigger parameter divergence is the root cause of diminishing returns under federated learning methods with non-IID data.
By extension, here we observe that even under the same non-IID data setting, some of the considered hyperparametric methods yield greater parameter divergence than when they are not applied.
For example, from the left plot of Figure 3 , we see that under the Non-IID(2) setting, the parameter divergence values (in the last fully-connected layer) become greater as the network depth increases (note that NetA-Baseline, NetA-Deeper, and NetA-Deepest have 3, 6, and 9 convolutional layers, respectively; see also Appendix A.1 for their detailed architecture).
The corresponding final test accuracy was found to be 74.11%, 73.67%, and 68.98%, respectively, in order of the degree of shallowness; this fits well into the parameter divergence results.
Since NetA-Deeper and NetA-Deepest have twice and three times as many model parameters as NetA-Baseline, it is to be expected that the deeper models yield bigger parameter divergence over the whole model; but our results also show a qualitative increase at the layer level.
In relation, we also provide the results using the modern network architecture (e.g., ResNet (He et al., 2016) ) in Table 8 of the appendix.
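Although the paper's precise definition of parameter divergence is given elsewhere, divergence of this kind is commonly quantified as the distance between each learner's locally updated weights and their across-learner average; a minimal sketch of such a layer-wise measure, under that assumption, is shown below.

```python
import numpy as np

def layer_divergence(local_weights):
    """Mean distance of each learner's layer weights from the across-learner average.

    local_weights: list of arrays, one per learner, all with the same shape.
    """
    stacked = np.stack([w.ravel() for w in local_weights])   # (n_learners, n_params)
    mean_w = stacked.mean(axis=0)
    return float(np.mean(np.linalg.norm(stacked - mean_w, axis=1)))

# toy usage: divergence grows with the spread of the local updates
rng = np.random.default_rng(0)
iid_like = [rng.normal(0, 0.01, size=(256, 10)) for _ in range(5)]
non_iid_like = [rng.normal(0, 0.1, size=(256, 10)) for _ in range(5)]
print(layer_divergence(iid_like) < layer_divergence(non_iid_like))  # True
```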
From the middle plot of the figure, we can also observe bigger parameter divergence in a high level of weight decay under the Non-IID(2) setting.
Under the non-IID data setting, the test accuracy of about 72 ∼ 74% was achieved in the low levels (≤ 0.0001), but weight decay factor of 0.0005 yielded only that of 54.11%.
Hence, this suggests that with non-IID data we should apply much smaller weight decay to federated learning-based methods.
Here we note that if a single iteration is considered for each learner's local update per round, the corresponding parameter divergence will of course be the same regardless of the degree of weight decay.
However, in our experiments, the great number of local iterations per round (i.e., 100) made a big difference of the divergence values under the non-IID data setting; this eventually yielded the accuracy gap.
We additionally observe for the non-IID cases that even with weight decay factor of 0.0005, the parameter divergence values are similar to those with the smaller factors at very early rounds in which the norms of the weights are relatively very small.
In addition, it is observed from the right plot of the figure that Dropout (Hinton et al., 2012; Srivatava et al., 2014 ) also yields bigger parameter divergence under the non-IID data setting.
The corresponding test accuracy was seen to be a diminishing return with Nesterov momentum SGD (i.e., using Dropout we can achieve +2.85% under IID, but only +1.69% is obtained under non-IID(2), compared to when it is not applied; see Table 2 ); however, it was observed that the generalization effect of the Dropout is still valid in test accuracy for the pure SGD and the Adam (refer to also Table 13 in the appendix).
Steep fall phenomenon.
As we see previously, inordinate magnitude of parameter divergence is one of the notable characteristics for failure cases under federated learning with non-IID data.
However, under the non-IID data setting, some of the failure cases have been observed where the test accuracy is still low but the parameter divergence values of the last fully-connected layer decrease (rapidly) over rounds; as the round goes, even the values were sometimes seen to be lower than those of the comparison targets.
We refer to this phenomenon as steep fall phenomenon.
It is inferred that these (unexpected abnormal) sudden drops of parameter divergence values indicate going into poor local minima (or saddles); this can be supported by the behaviors that test accuracy increases plausibly at very early rounds, but the growth rate quickly stagnates and eventually becomes much lower than the comparison targets.
The left plot of Figure 4 shows the effect of the Adam optimizer with respect to its implementations.
Through the experiments, we identified that under non-IID data environments, the performance of Adam is very sensitive to the range of model variables to be averaged, unlike the non-adaptive optimizers (e.g., momentum SGD); its moment variables should be also considered in the parameter averaging together with weights and biases (see also Table 3 ).
We initially attributed the poor performance of Adam-WB under the Non-IID(2) setting to its having twice as many momentum variables as momentum SGD, i.e., more variables affected by the non-IIDness; thus we had originally expected extreme parameter divergence to appear when the momentum variables are not averaged together with the weights and biases.
However, the parameter divergence values under Adam-WB turned out to be similar to, or even smaller than, those under Adam-A (see also Figure 11 in the appendix).
Instead, from the left panel we can observe that the parameter divergence of Adam-WB in the last fully-connected layer is bigger than that of Adam-A at the very early rounds (as we expected), but soon it is abnormally sharply reduced over rounds; this is considered the steep fall phenomenon.
The middle and the right plots of the figure also show the steep fall phenomenon in the last fully-connected layer, with respect to network width and whether to use Batch Normalization, respectively.
In the case of the NetC models, NetC-Baseline, NetC-Wider, and NetC-Widest use the global average pooling, the max pooling with stride 4, and the max pooling with stride 2, respectively, after the last convolutional layer; the number of neurons in the output layer becomes 2560, 10240, and 40960, respectively (see also Appendix A.1 for their detailed architecture).
Under the Non-IID(2) setting, the corresponding test accuracy was found to be 64.06%, 72.61%, and 73.64%, respectively, in order of the degree of wideness.
In addition, we can see that under Non-IID(2), Batch Normalization yields not only big parameter divergence (especially before the first learning rate drop) but also the steep fall phenomenon; the corresponding test accuracy was seen to be very low (see Table 3).
The failure of Batch Normalization stems from the fact that the dependence of batch-normalized hidden activations on the local mini-batches makes each learner's update too overfitted to the distribution of its local training data.
Batch Renormalization, by relaxing the dependence, yields a better outcome; however, it still fails to exceed the performance of the baseline due to the significant parameter divergence.
To explain the impact of the steep fall phenomenon on test accuracy, we provide Figure 5, which indicates that the loss landscapes for the failure cases (e.g., Adam-WB and with Batch Normalization) commonly show sharper minima that lead to poorer generalization (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017), and the minimal value in the bowl is relatively greater.
Here it is also observed that going into sharp minima starts even in early rounds, such as the 25th.
Excessively high training loss of local updates.
The final cause that we consider for the failure cases is excessively high training loss of local updates.
For instance, from the left plot of Figure 6 , we see that under the Non-IID(2) setting, NetB-Baseline gives much higher training loss than the other models.
Here we note that for the NetB-Baseline model, the global average pooling is applied after the last convolutional layer, and the number of neurons in the first fully-connected layer thus becomes 256 · 256; on the other hand, NetB-Wider and NetB-Widest use the max pooling with stride 4 and 2, which make the number of neurons in that layer become 1024 · 256 and 4096 · 256, respectively (see also Appendix A.1 for their details).
The experimental results were shown that NetB-Baseline has notably lower test accuracy (see Table 4 ).
We additionally remark that for NetB-Baseline, very high losses are also observed under the IID setting, and their values are even greater than in the non-IID case; however, one has to be aware that local updates are extremely easy to overfit to each local training dataset under non-IID data environments, so high converged training losses are more critical there than in the IID cases.
The middle and the right plot of the figure show the excessive training loss under the non-IID setting when applying the weight decay factor of 0.0005 and the data augmentation, respectively.
In the cases of the high level of weight decay, the severe performance degradation appears compared to when the levels are low (i.e., ≤ 0.0001) as already discussed.
In addition, we observed that with Nesterov momentum SGD, the data augmentation yields a diminishing return in test accuracy (i.e., with the data augmentation we can achieve +3.36% under IID, but −0.16% is obtained under non-IID(2), compared to when it is not applied); with Adam the degree of the diminishment becomes higher (refer to Table 12 in the appendix).
In the data augmentation cases, judging from the fact that the parameter divergence values are not so different with and without it, we can identify that the performance degradation stems from the high training loss (see Figures 30 and 31 in the appendix).
Based on Li et al. (2018), the visualization of the loss surface was conducted by L(α, β) = L(θ* + αδ + βγ), where θ* is a center point of the model parameters, and δ and γ are the orthogonal direction vectors.
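A minimal sketch of this two-direction visualization, following the L(α, β) parametrization above, is given below; the model, loss, and normalization details are placeholders rather than the exact setup used in the paper.

```python
import numpy as np

def loss_surface(loss_fn, theta_star, delta, gamma, alphas, betas):
    """Evaluate L(alpha, beta) = loss_fn(theta* + alpha*delta + beta*gamma) on a grid."""
    surface = np.zeros((len(alphas), len(betas)))
    for i, a in enumerate(alphas):
        for j, b in enumerate(betas):
            surface[i, j] = loss_fn(theta_star + a * delta + b * gamma)
    return surface

# toy usage with a quadratic "loss" around theta*
rng = np.random.default_rng(0)
theta_star = np.zeros(100)
delta, gamma = rng.normal(size=100), rng.normal(size=100)
gamma -= delta * (delta @ gamma) / (delta @ delta)   # make the two directions orthogonal
alphas = betas = np.linspace(-1.0, 1.0, 21)
surface = loss_surface(lambda th: float(np.sum(th ** 2)), theta_star, delta, gamma, alphas, betas)
```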
Here we additionally note that unlike on the CIFAR-10, in the experiments on SVHN it was seen that the generalization effect of the data augmentation is still valid in test accuracy (see Table 12 ).
In this paper, we explored the effects of various hyperparameter optimization strategies for optimizers, network depth/width, and regularization on federated learning of deep networks.
Our primary concern in this study lay in non-IID data, for which we found that many of the probed factors show somewhat different behaviors compared to the IID setting and vanilla training.
To explain this, the concept of parameter divergence was utilized, and its origin was identified both empirically and theoretically.
We also provided the internal reasons of our observations with a number of the experimental cases.
In the meantime, the federated learning has been vigorously studied for decentralized data environments due to its inherent strength, i.e., high communication-efficiency and privacy-preservability.
However, most existing works have so far dealt only with IID data, and research addressing non-IID data has only recently begun despite its high real-world relevance.
Our study, as one of these openings, examines the essential factors in federated training under non-IID data environments, and we expect that it will provide fresh perspectives for upcoming works.
A EXPERIMENTAL DETAILS
|
We investigate the internal reasons of our observations, the diminishing effects of the well-known hyperparameter optimization methods on federated learning from decentralized non-IID data.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:305
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep learning models are known to be vulnerable to adversarial examples.
A practical adversarial attack should require as little knowledge as possible of the attacked model T. Current substitute attacks need pre-trained models to generate adversarial examples, and their attack success rates rely heavily on the transferability of adversarial examples.
Current score-based and decision-based attacks require many queries to T. In this study, we propose a novel adversarial imitation attack.
First, it produces a replica of the T by a two-player game like the generative adversarial networks (GANs).
The objective of the generative model G is to generate examples that lead D to return outputs different from T's. The objective of the discriminative model D is to output the same labels as T for the same inputs.
Then, the adversarial examples generated by D are utilized to fool T. Compared with current substitute attacks, the imitation attack can use less training data to produce a replica of T and improve the transferability of adversarial examples.
Experiments demonstrate that our imitation attack requires less training data than the black-box substitute attacks, but achieves an attack success rate close to the white-box attack on unseen data with no query.
Deep neural networks are often vulnerable to imperceptible perturbations of their inputs, causing incorrect predictions (Szegedy et al., 2014) .
Studies on adversarial examples developed attacks and defenses to assess and increase the robustness of models, respectively.
Adversarial attacks include white-box attacks, where the attack method has full access to models, and black-box attacks, where the attacks do not need knowledge of models structures and weights.
White-box attacks need training data and the gradient information of models, such as FGSM (Fast Gradient Sign Method) (Goodfellow et al., 2015) , BIM (Basic Iterative Method) (Kurakin et al., 2017a) and JSMA (Jacobian-based Saliency Map Attack) (Papernot et al., 2016b) .
However, the gradient information of attacked models is hard to access, so white-box attacks are not practical in real-world tasks.
Literature shows adversarial examples have transferability property and they can affect different models, even the models have different architectures (Szegedy et al., 2014; Papernot et al., 2016a; Liu et al., 2017) .
Such a phenomenon is closely related to linearity and over-fitting of models (Szegedy et al., 2014; Hendrycks & Gimpel, 2017; Goodfellow et al., 2015; Tramèr et al., 2018) .
Therefore, substitute attacks are proposed to attack models without the gradient information.
Substitute black-box attacks utilize pre-trained models to generate adversarial examples and apply these examples to attacked models.
Their attack success rates rely on the transferability of adversarial examples and are often lower than that of white-box attacks.
Black-box score-based attacks (Ilyas et al., 2018a; b) do not need pre-trained models; they access the output probabilities of the attacked model to generate adversarial examples iteratively.
Black-box decisionbased attacks (Brendel et al., 2017; Cheng et al., 2018; Chen et al., 2019) require less information than the score-based attacks.
They utilize hard labels of the attacked model to generate adversarial examples.
Adversarial attacks need knowledge of models.
However, a practical attack method should require as little knowledge as possible of the attacked model, including training data and procedure, model weights and architectures, output probabilities, and hard labels (Athalye et al., 2018).
The disadvantage of current substitute black-box attacks is that they need pre-trained substitute models trained on the same dataset as the attacked model T (Hendrycks & Gimpel, 2017; Goodfellow et al., 2015; Kurakin et al., 2017a), or a large number of images used to imitate the outputs of T and produce substitute networks.
Actually, the prerequisites of these attacks are hard to obtain in real-world tasks.
Substitute models trained on limited images hardly generate adversarial examples with good transferability.
The disadvantage of current decision-based and score-based black-box attacks is that every adversarial example is synthesized by numerous queries.
Hence, developing a practical attack mechanism is necessary.
In this paper, we propose an adversarial imitation training, which is a special two-player game.
The game has a generative model G and a imitation model D. The G is designed to produce examples to make the predicted label of the attacked model T and D different, while the imitation model D fights for outputting the same label with T .
The proposed imitation training needs much less training data than the T and does not need the labels of these data, and the data do not need to coincide with the training data.
Then, the adversarial examples generated by D are utilized to fool the T like substitute attacks.
We call this new attack mechanism as adversarial imitation attack.
Compared with current substitute attacks, our adversarial imitation attack requires less training data.
Score-based and decision-based attacks need a lot of queries to generate each adversarial attack.
The similarity between the proposed method and current score-based and decision-based attacks is that the adversarial imitation attack also needs a large number of queries in the training stage.
The difference is that, like other substitute attacks, our method does not need any additional queries in the test stage.
Experiments show that our proposed method achieves state-of-the-art performance compared with current substitute attacks and decision-based attack.
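A minimal sketch of this two-player imitation game is given below (PyTorch-style; T is treated as a hard-label black box, and the loss choices and update schedule here are illustrative assumptions, not the paper's exact training procedure).

```python
import torch
import torch.nn.functional as F

def imitation_step(G, D, T, x, opt_G, opt_D, noise_dim=64):
    """One update of the imitation game: D imitates T, G seeks disagreement between D and T."""
    z = torch.randn(x.size(0), noise_dim)
    x_gen = G(z)                                 # generated probe inputs
    with torch.no_grad():
        t_labels_real = T(x).argmax(dim=1)       # only hard labels of T are queried
        t_labels_gen = T(x_gen).argmax(dim=1)

    # D: match T's labels on both real and generated inputs
    loss_D = F.cross_entropy(D(x), t_labels_real) + F.cross_entropy(D(x_gen.detach()), t_labels_gen)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # G: produce inputs on which D disagrees with T (maximize D's loss w.r.t. T's labels)
    loss_G = -F.cross_entropy(D(G(z)), t_labels_gen)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()

# After training, white-box attacks (e.g., FGSM/BIM) are run on D and transferred to T.
```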
We summarize our main contributions as follows:
• The proposed new attack mechanism needs less training data from the attacked model than current substitute attacks, but achieves an attack success rate close to that of white-box attacks.
• The proposed new attack mechanism requires the same information about the attacked model as decision-based attacks in the training stage, but is query-independent in the testing stage.
Practical adversarial attacks should require as little knowledge as possible of the attacked model T.
Current black-box attacks need numerous training images or queries to generate adversarial images.
In this study, to address this problem, we combine the advantages of current black-box attacks and propose a new attack mechanism, the imitation attack, which replicates the information of T and generates adversarial examples that fool deep learning models efficiently.
Compared with substitute attacks, the imitation attack requires much less data than the training set of T and does not need labels for these data, yet the adversarial examples generated by the imitation attack have stronger transferability to T.
Compared with score-based and decision-based attacks, our imitation attack needs only the same information as decision-based attacks, but achieves state-of-the-art performance and is query-independent at the testing stage.
Experiments showed the superiority of the proposed imitation attack.
Additionally, we observed that a deep learning classification model T can easily be stolen using a limited number of unlabeled images, far fewer than the training images of T.
In future work, we will evaluate the performance of the proposed adversarial imitation attack on other tasks except for image classification.
A NETWORK ARCHITECTURES
The network architectures are shown in Figure 2 and Figure 3.
The experiments show that adversarial examples generated by the proposed imitation attack can fool the attacked model with a small perturbation.
|
A novel adversarial imitation attack to fool machine learning models.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:306
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Stochastic Gradient Descent (SGD) methods using randomly selected batches are widely-used to train neural network (NN) models.
Performing design exploration to find the best NN for a particular task often requires extensive training with different models on a large dataset, which is very computationally expensive.
The most straightforward method to accelerate this computation is to distribute the batch of SGD over multiple processors.
However, large batch training often times leads to degradation in accuracy, poor generalization, and even poor robustness to adversarial attacks.
Existing solutions for large batch training either do not work or require massive hyper-parameter tuning.
To address this issue, we propose a novel large batch training method which combines recent results in adversarial training (to regularize against ``sharp minima'') and second order optimization (to use curvature information to change batch size adaptively during training).
We extensively evaluate our method on Cifar-10/100, SVHN, TinyImageNet, and ImageNet datasets, using multiple NNs, including residual networks as well as compressed networks such as SqueezeNext.
Our new approach exceeds the performance of the existing solutions in terms of both accuracy and the number of SGD iterations (up to 1\% and $3\times$, respectively).
We emphasize that this is achieved without any additional hyper-parameter tuning to tailor our method to any of these experiments.
Finding the right NN architecture for a particular application requires extensive hyper-parameter tuning and architecture search, often times on a very large dataset.
The delays associated with training NNs is often the main bottleneck in the design process.
One of the ways to address this issue is to use large distributed processor clusters; however, to efficiently utilize each processor, the portion of the batch associated with each processor (sometimes called the mini-batch) must grow correspondingly.
In the ideal case, the hope is to decrease the computational time proportional to the increase in batch size, without any drop in generalization quality.
However, large batch training has a number of well known draw backs.
These include degradation of accuracy, poor generalization, and poor robustness to adversarial perturbations BID17 BID36 .In
order to address these drawbacks, many solutions have been proposed BID14 BID37 BID7 BID29 BID16 . However
, these methods either work only for particular models on a particular dataset, or they require massive hyperparameter tuning, which is often times not discussed in the presentation of results. Note that
while extensive hyper-parameter turning may result in good result tables, it is antithetical to the original motivation of using large batch sizes to reduce training time.One solution to reduce the brittleness of SGD to hyper-parameter tuning is to use second-order methods. Full Newton
method with line search is parameter-free, and it does not require a learning rate. This is achieved
by using a second-order Taylor series approximation to the loss function, instead of a first-order one as in SGD, to obtain curvature information. BID25 ; BID34 BID2
show that Newton/quasi-Newton methods outperform SGD for training NNs. However, their re-sults
only consider simple fully connected NNs and auto-encoders. A problem with second-order
methods is that they can exacerbate the large batch problem, as by construction they have a higher tendency to get attracted to local minima as compared to SGD. For these reasons, early attempts
at using second-order methods for training convolutional NNs have so far not been successful.Ideally, if we could find a regularization scheme to avoid local/bad minima during training, this could resolve many of these issues. In the seminal works of El Ghaoui
& BID9 ; BID33 , a very interesting connection was made between robust optimization and regularization. It was shown that the solution to
a robust optimization problem for least squares is the same as the solution of a Tikhonov regularized problem BID9 . This was also extended to the Lasso
problem in BID33 . Adversarial learning/training methods
, which are a special case of robust optimization methods, are usually described as a min-max optimization procedure to make the model more robust. Recent studies with NNs have empirically
found that robust optimization usually converges to points in the optimization landscape that are flatter and are more robust to adversarial perturbation BID36 .Inspired by these results, we explore whether
second order information regularized by robust optimization can be used to do large batch size training of NNs. We show that both classes of methods have properties
that can be exploited in the context of large batch training to help reduce the brittleness of SGD with large batch size training, thereby leading to significantly improved results.
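To make the combination concrete, here is a minimal sketch of one plausible instantiation: a Hessian-based curvature probe (top eigenvalue via power iteration on Hessian-vector products) used to decide when to grow the batch size, plus an FGSM-style adversarial perturbation of the batch as the robust-optimization regularizer. The schedule, thresholds, and perturbation choice are assumptions for illustration, not the exact algorithm proposed in the paper.

```python
import torch

def top_hessian_eigenvalue(loss, params, iters=20):
    """Estimate the largest Hessian eigenvalue with power iteration on Hessian-vector products."""
    v = [torch.randn_like(p) for p in params]
    grads = torch.autograd.grad(loss, params, create_graph=True)
    eig = 0.0
    for _ in range(iters):
        hv = torch.autograd.grad(grads, params, grad_outputs=v, retain_graph=True)
        eig = sum((h * u).sum() for h, u in zip(hv, v)).item()   # Rayleigh quotient v^T H v
        norm = torch.sqrt(sum((h ** 2).sum() for h in hv))
        v = [h / (norm + 1e-12) for h in hv]
    return eig

def adversarial_batch(model, loss_fn, x, y, eps=0.01):
    """FGSM-style perturbation of the batch, used as a regularizer against sharp minima."""
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

def maybe_grow_batch(batch_size, eig, prev_eig, factor=2, max_batch=4096):
    """Grow the batch size when the estimated curvature has decreased; the learning rate
    would be scaled accordingly elsewhere in the training loop."""
    return min(batch_size * factor, max_batch) if eig < prev_eig else batch_size
```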
We introduce an adaptive batch size algorithm based on Hessian information to speed up the training process of NNs, and we combine this approach with adversarial training (which is a form of robust optimization, and which could be viewed as a regularization term for large batch training).
We extensively test our method on multiple datasets (SVHN, Cifar-10/100, TinyImageNet and ImageNet) with multiple NN models (AlexNet, ResNet, Wide ResNet and SqueezeNext).
As the goal of large batch training is to reduce training time, we did not perform any hyper-parameter tuning to tailor our method to any of these tests.
Our method allows one to increase batch size and learning rate automatically, based on Hessian information.
This helps significantly reduce the number of parameter updates, and it achieves superior generalization performance, without the need to tune any of the additional hyper-parameters.
Finally, we show that a block Hessian can be used to approximate the trend of the full Hessian to reduce the overhead of using second-order information.
These improvements are useful to reduce NN training time in practice.
• L(θ) is continuously differentiable and the gradient of L is Lipschitz continuous with Lipschitz constant L_g, i.e., ‖∇L(θ_1) − ∇L(θ_2)‖ ≤ L_g ‖θ_1 − θ_2‖ for all θ_1 and θ_2. Also, the global minimum of L(θ) is achieved at θ* and L(θ*) = L*.
• The gradient of each individual l_i(z_i) is an unbiased estimate of the true gradient, i.e., E[∇l_i(θ)] = ∇L(θ), with bounded variance, where V(·) is the variance operator, V(x) = E‖x − E[x]‖².
From Assumption 2, the corresponding unbiasedness and variance bounds for mini-batch gradients follow directly.
With Assumption 2, the following two lemmas can be found in any standard optimization reference; we give the proofs here for completeness.
Lemma 3. Under Assumption 2, after one iteration of the stochastic gradient update with step size η_t at θ_t, the expected loss E[L(θ_{t+1})] is bounded by L(θ_t) minus a term proportional to ‖∇L(θ_t)‖², plus a term proportional to the variance of the mini-batch gradient. Proof. The bound follows from the L_g-smoothness of L(θ).
Lemma 4. Under Assumption 2, for any θ, we have ‖∇L(θ)‖² ≤ 2 L_g (L(θ) − L*). Proof. Let h(θ') = L(θ) + ∇L(θ)ᵀ(θ' − θ) + (L_g/2)‖θ' − θ‖²; then h(θ') has a unique global minimum at θ̂ = θ − ∇L(θ)/L_g, and evaluating h at θ̂ gives the result.
The following lemma (Lemma 5) is trivial; we omit the proof here.
PROOF OF THEOREM 1. Given these lemmas, we now proceed with the proof of Theorem 1. Proof. Assume the batch used at step t is b_t; applying Lemmas 3 and 5 at each step, bounding the resulting terms with Lemma 4, and combining the per-step bounds yields the statement of Theorem 1.
We show a toy example of binary logistic regression on the mushroom classification dataset.
We split the whole dataset into 6905 examples for training and 1819 for validation.
We use η_0 = 1.2 for SGD with batch size 100 and for full gradient descent, and we set 100 ≤ b_t ≤ 3200 for our algorithm, i.e., ABS.
Here we mainly focus on the training losses of the different optimization algorithms; the results are shown in FIG3.
In order to see whether η_0 is a suboptimal step size for full gradient descent, we also vary η_0 for full gradient descent; see the results in FIG3.
|
Large batch size training using adversarial training and second order information
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:307
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Inspired by neurophysiological discoveries of navigation cells in the mammalian brain, we introduce the first deep neural network architecture for modeling Egocentric Spatial Memory (ESM).
It learns to estimate the pose of the agent and progressively construct top-down 2D global maps from egocentric views in a spatially extended environment.
During the exploration, our proposed ESM network model updates belief of the global map based on local observations using a recurrent neural network.
It also augments the local mapping with a novel external memory to encode and store latent representations of the visited places based on their corresponding locations in the egocentric coordinate.
This enables the agents to perform loop closure and mapping correction.
This work contributes in the following aspects: first, our proposed ESM network provides an accurate mapping ability which is vitally important for embodied agents to navigate to goal locations.
In the experiments, we demonstrate the functionalities of the ESM network in random walks in complicated 3D mazes by comparing with several competitive baselines and state-of-the-art Simultaneous Localization and Mapping (SLAM) algorithms.
Secondly, we faithfully hypothesize the functionality and the working mechanism of navigation cells in the brain.
Comprehensive analysis of our model suggests the essential role of individual modules in our proposed architecture and demonstrates efficiency of communications among these modules.
We hope this work would advance research in the collaboration and communications over both fields of computer science and computational neuroscience.
Egocentric spatial memory (ESM) refers to a memory system that encodes, stores, recognizes and recalls the spatial information about the environment from an egocentric perspective BID24 .
Such information is vitally important for embodied agents to construct spatial maps and reach goal locations in navigation tasks. For the past decades, a wealth of neurophysiological results have shed light on the underlying neural mechanisms of ESM in mammalian brains.
Mostly through single-cell electrophysiological recordings studies in mammals BID23 , there are four types of cells identified as specialized for processing spatial information: head-direction cells (HDC), border and boundary vector cells (BVC), place cells (PC) and grid cells (GC).
Their functionalities are: (1) According to BID38 , HDC, together with view cells BID5 , fires whenever the mammal's head orients in certain directions.
(2) The firing behavior of BVC depends on the proximity to environmental boundaries BID22 and directions relative to the mammals' heads BID1 .
(3) PC resides in hippocampus and increases firing rates when the animal is in specific locations independent of head orientations BID1 .
(4) GC, as a metric of space BID35 , are regularly distributed in a grid across the environment BID11 .
They are updated based on animal's speed and orientation BID1 .
The cooperation of these cell types enables mammals to navigate and reach goal locations in complex environments; hence, we are motivated to endow artificial agents with a similar memory capability, but a computational architecture for such ESM is still absent. Inspired by neurophysiological discoveries, we propose the first computational architecture, named the Egocentric Spatial Memory Network (ESMN), for modeling ESM using a deep neural network.
ESMN unifies functionalities of different navigation cells within one end-to-end trainable framework and accurately constructs top-down 2D global maps from egocentric views.
To our best knowledge, we are the first to encapsulate the four cell types respectively with functionally similar neural networkbased modules within one integrated architecture.
In navigation tasks, the agent with the ESMN takes one egomotion from a discrete set of macro-actions.
ESMN fuses the observations from the agent over time and produces a top-down 2D local map using a recurrent neural network.
In order to align the spatial information at the current step with all the past predicted local maps, ESMN estimates the agent's egomotion and transforms all the past information using a spatial transformer neural network.
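To give a concrete picture of how past map estimates can be re-aligned under an estimated egomotion, here is a minimal sketch of a rigid 2D warp of an egocentric grid map. It uses plain nearest-neighbour resampling; the actual model uses a learned spatial transformer network, and the discretization and sign conventions here are illustrative assumptions.

```python
import numpy as np

def warp_egocentric_map(grid, dx, dy, dtheta):
    """Re-align a 2D egocentric map after the agent moves by (dx, dy) and rotates by dtheta.

    grid: (H, W) belief map centred on the agent, in agent-centric coordinates.
    """
    H, W = grid.shape
    ys, xs = np.meshgrid(np.arange(H) - H / 2.0, np.arange(W) - W / 2.0, indexing="ij")
    cos_t, sin_t = np.cos(-dtheta), np.sin(-dtheta)
    # inverse transform: for each target cell, find where it came from in the old map
    src_x = cos_t * xs - sin_t * ys + dx
    src_y = sin_t * xs + cos_t * ys + dy
    src_i = np.round(src_y + H / 2.0).astype(int)
    src_j = np.round(src_x + W / 2.0).astype(int)
    valid = (src_i >= 0) & (src_i < H) & (src_j >= 0) & (src_j < W)
    out = np.zeros_like(grid)
    out[valid] = grid[src_i[valid], src_j[valid]]
    return out

# usage: shift a previously predicted local map so it stays aligned with the agent's new pose
old_map = np.zeros((64, 64)); old_map[40:44, 30:34] = 1.0
aligned = warp_egocentric_map(old_map, dx=2.0, dy=0.0, dtheta=0.1)
```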
ESMN also augments the local mapping module with a novel spatial memory capable of integrating local maps into global maps and storing the discriminative representations of the visited places.
The loop closure component will then detect whether the current place has been visited by comparing its observation with the representations in the external memory, which subsequently contributes to global map correction. Neuroscience-inspired AI is an emerging research field BID12.
Our novel deep learning architecture to model ESMN in the mammalian navigation system attempts to narrow the gap between computer science (CS) and computational neuroscience (CN) and bring interests to both communities.
On one hand, our novel ESMN outperforms several competitive baselines and the state-of-the-art monocular visual SLAMs.
Our outstanding performance in map construction brings great advancements in robotics and CS.
It could also have many potential engineering applications, such as path planning for robots.
(2) In CN, the neuroplausible navigation system with four types of cells integrated is still under development.
In our work, we put forward bold hypothesis about how these navigation cells may cooperate and perform integrated navigation functions.
We also faithfully propose several possible communication links among them in the form of deep architectures. We evaluate ESMN in eight 3D maze environments which feature complex geometry, a variety of textures, and varying lighting conditions.
In the experiments, we demonstrate the acquired skills of ESMN in terms of positional inference, free space prediction, loop closure classification and map correction which play important roles in navigation.
We provide detailed analysis of each module in ESMN as well as their functional mappings with the four cell types.
Lastly, we conduct ablation studies, compare with state-of-the-art Simultaneous Localization And Mapping (SLAM) algorithms and show the efficacy of our integrated framework on unifying the four modules.
We get inspirations from neurophysiological discoveries and propose the first deep neural network architecture for modeling ESM which unifies the functionalities of the four navigation cell types: head-direction cells, border cells and boundary vector cells, place cells and grid cells.
Our learnt model demonstrates the capacity of estimating the pose of the agent and constructing top-down 2D spatial representations of the physical environments in the egocentric coordinate, which could have many potential applications, such as path planning for robot agents.
Our ESMN accumulates the belief about the free space by integrating egocentric views.
To eliminate errors during mapping, ESMN also augments the local mapping module with an external spatial memory to keep track of the discriminative representations of the visited places for loop closure detection.
We conduct exhaustive evaluation experiments by comparing our model with some competitive baselines and state-of-the-art SLAM algorithms.
The experimental results demonstrate that our model surpasses all these methods.
The comprehensive ablation study suggests the essential role of individual modules in our proposed architecture and the efficiency of communications among these modules.
|
first deep neural network for modeling Egocentric Spatial Memory inspired by neurophysiological discoveries of navigation cells in mammalian brain
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:308
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We show that if the usual training loss is augmented by a Lipschitz regularization term, then the networks generalize.
We prove generalization by first establishing a stronger convergence result, along with a rate of convergence.
A second result resolves a question posed in Zhang et al. (2016): how can a model distinguish between the case of clean labels, and randomized labels?
Our answer is that Lipschitz regularization using the Lipschitz constant of the clean data makes this distinction.
In this case, the model learns a different function which we hypothesize correctly fails to learn the dirty labels.
While deep neural networks networks (DNNs) give more accurate predictions than other machine learning methods BID30 , they lack some of the performance guarantees of these other methods.
One step towards performance guarantees for DNNs is a proof of generalization with a rate.
In this paper, we present such a result, for Lipschitz regularized DNNs.
In fact, we prove a stronger convergence result from which generalization follows.We also consider the following problem, inspired by (Zhang et al., 2016) .
Problem 1.1.
[Learning from dirty data] Suppose we are given a labelled data set, which has Lipschitz constant Lip(D) = O(1) (see (3) below).
Consider making copies of 10 percent of the data, adding a vector of norm to the perturbed data points, and changing the label of the perturbed points.
Call the new, dirty, data setD.
The dirty data has Lip(D) = O(1/ ).
However, if we compute the histogram of the pairwise Lipschitz constants, the distribution of the values on the right hand side of (3), are mostly below Lip(D) with a small fraction of the values being O(1/ ), since the duplicated images are apart but with different labels.
Thus we can solve (1) with L 0 estimate using the prevalent smaller values, which is an accurate estimate of the clean data Lipschitz constant.
The solution of (1) using such a value is illustrated on the right of Figure 1 .
Compare to the Tychonoff regularized solution on the right of Figure 2 .
We hypothesize that on dirty data the solution of (1) replaces the thin tall spikes with short fat spikes, leading to a better approximation of the original clean data. In Figure 1 we illustrate the solution of (1) (with L_0 = 0), using synthetic one-dimensional data.
In this case, the labels {−1, 0, 1} are embedded naturally into Y = R, and λ = 0.1.
Notice that the solution matches the labels exactly on a subset of the data.
In the second part of the figure, we show a solution with dirty labels which introduce a large Lipschitz constant; in this case, the solution reduces the Lipschitz constant, thereby correcting the errors. Learning from dirty labels is studied in §2.4.
We show that the model learns a different function than the dirty label function.
We conjecture, based on synthetic examples, that it learns a better approximation to the clean labels. We begin by establishing notation.
Consider the classification problem to fix ideas, although our results apply to other problems as well.
Definition 1.2.
Let D_n = x_1, . . . , x_n be a sequence of i.i.d. random variables sampled from the probability distribution ρ.
The data x_i are in X = [0, 1]^d.
Consider the classification problem with D labels, and represent the labels by vertices of the probability simplex, Y ⊂ R^D.
Write y_i = u_0(x_i) for the map from data to labels.
Write u(x; w) for the map from the input data to the last layer of the network.
Augment the training loss with Lipschitz regularization, minimizing over u the objective (1/n) Σ_{i=1}^n ℓ(u(x_i; w), y_i) + λ max(Lip(u) − L_0, 0).    (1)
The first term in (1) is the usual average training loss.
The second term in (1) is the Lipschitz regularization term: the excess Lipschitz constant of the map u, compared to the constant L_0.
In order to apply the generalization theorem, we need to take L_0 ≥ Lip(u_0), the Lipschitz constant of the data on the whole data manifold.
In practice, Lip(u_0) can be estimated by the Lipschitz constant of the empirical data.
The definition of the Lipschitz constants for functions and data, as well as the implementation details, are presented in §1.3 below.
Figure 1: Synthetic labelled data and Lipschitz regularized solution u. Left: the solution value matches the labels exactly on a large portion of the data set. Right: dirty labels: 10% of the data is incorrect; the regularized solution corrects the errors.
Our analysis will apply to the problem (1), which is convex in u and does not depend explicitly on the weights w.
Of course, once u is restricted to a fixed neural network architecture, the corresponding minimization problem becomes non-convex in the weights.
Our analysis can avoid the dependence on the weights because we make the assumption that there are enough parameters so that u can exactly fit the training data.
The assumption is justified by Zhang et al. (2016).
As we send n → ∞ for convergence, we require that the network also grow, in order to continue to satisfy this assumption.
Our results apply to other non-parametric methods in this regime.
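As a rough illustration of how such a penalty can be implemented in practice, the sketch below estimates Lip(u) from pairwise differences on a mini-batch and adds the hinge-style excess to the loss. The estimator and the hinge form are assumptions for the sketch, not the paper's exact implementation; the same pairwise ratio applied to the labelled pairs (x_i, y_i) gives the empirical data Lipschitz constant used to choose L_0.

```python
import torch

def empirical_lipschitz(outputs, inputs):
    """Largest pairwise ratio ||u(x_i) - u(x_j)|| / ||x_i - x_j|| within a batch."""
    n = inputs.size(0)
    flat_in = inputs.reshape(n, -1)
    flat_out = outputs.reshape(n, -1)
    d_in = torch.cdist(flat_in, flat_in) + torch.eye(n, device=inputs.device) * 1e9  # skip i == j
    d_out = torch.cdist(flat_out, flat_out)
    return (d_out / d_in).max()

def lipschitz_regularized_loss(model, loss_fn, x, y, lam=0.1, L0=1.0):
    out = model(x)
    base = loss_fn(out, y)                                           # usual average training loss
    excess = torch.clamp(empirical_lipschitz(out, x) - L0, min=0.0)  # excess Lipschitz constant
    return base + lam * excess
```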
|
We prove generalization of DNNs by adding a Lipschitz regularization term to the training loss. We resolve a question posed in Zhang et al. (2016).
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:309
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In many partially observable scenarios, Reinforcement Learning (RL) agents must rely on long-term memory in order to learn an optimal policy.
We demonstrate that using techniques from NLP and supervised learning fails at RL tasks due to stochasticity from the environment and from exploration.
Utilizing our insights on the limitations of traditional memory methods in RL, we propose AMRL, a class of models that can learn better policies with greater sample efficiency and are resilient to noisy inputs.
Specifically, our models use a standard memory module to summarize short-term context, and then aggregate all prior states from the standard model without respect to order.
We show that this provides advantages both in terms of gradient decay and signal-to-noise ratio over time.
Evaluating in Minecraft and maze environments that test long-term memory, we find that our model improves average return by 19% over a baseline that has the same number of parameters and by 9% over a stronger baseline that has far more parameters.
We address the problem of reinforcement learning (RL) in tasks that require long-term memory.
While many successes of Deep RL were achieved in settings that are (near) fully observable, such as Atari games (Mnih et al., 2015) , partial observability requires memory to recall prior observations that indicate the current state.
Relying on full observability severely limits the applicability of such approaches.
For example, many tasks in virtual and physical environments are naturally observed from a first-person perspective (Oh et al., 2016) , which means that an agent may need to seek out and remember task-relevant information that is not immediately observable without directly observing the entire environment.
Recent research has started to address this issue, but effective learning in RL settings with long sequential dependencies remain a key challenge in Deep RL (Oh et al., 2016; Stepleton et al., 2018; Parisotto & Salakhutdinov, 2018) .
The currently most common approach to RL in partially observable settings relies on models that use memory components that were originally developed for tasks like those that occur in natural language processing (NLP), e.g., LSTMs (Hochreiter & Schmidhuber, 1997) and GRUs (Cho et al., 2014) .
Hausknecht & Stone (2015) first demonstrated benefits of LSTMs in RL tasks designed to test memory, and these and similar approaches have become common in Deep RL (Wang et al., 2016) , including multi-agent RL (Rashid et al., 2018; Foerster et al., 2017) .
In this work, we demonstrate that the characteristics of RL can severely impede learning in memory models that are not specifically designed for RL, and propose new models designed to tackle these challenges.
For example, LSTMs excel in NLP tasks where the order of observations (characters or words) is crucial, and where influence between observations decays quickly with distance.
Contrast this with a hypothetical RL example where an agent must discover a hidden passcode to escape a locked dungeon.
The order of observations is highly dependent on the agent's path through the dungeon, yet when it reaches the door, only its ability to recall the passcode is relevant to escaping the dungeon, irrespective of when the agent observed it and how many observations it has seen since.
Figure 1 illustrates the problem.
Even in the simplified case where stochasticity is introduced by observation noise, the sample efficiency of LSTMs decreases drastically.
We show that this problem occurs not just for LSTMs, but also for stacked LSTMs and DNCs (Graves et al., 2016; Wayne et al., 2018) , which have been widely applied in RL, and propose solutions that address this problem.
Figure 1: LSTM (Hochreiter & Schmidhuber, 1997) trained on the noise-free (T-L, left) and noisy (T-LN, right) TMaze tasks.
In both cases, the agent must recall a signal from memory.
LSTM completely fails in the noisy setting while AMRL-Max learns rapidly.
(68% confidence interval over 5 runs, as for all plots.)
We make the following three contributions.
First, in Section 3, we introduce our approach, AMRL.
AMRL augments memory models like LSTMs with aggregators that are substantially more robust to noise than previous approaches.
Our models combine several innovations which jointly allow the model to ignore noise while maintaining order-variant information as needed.
Further, AMRL models maintain informative gradients over very long horizons, which is crucial for sample-efficient learning in long-term memory tasks (Pascanu et al., 2012; Bakker, 2001; Wierstra et al., 2009 ).
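A minimal sketch of the kind of order-invariant aggregation this describes is shown below: a running max over the memory module's outputs, with a straight-through-style gradient path so the aggregator does not attenuate gradients. The exact aggregators and their integration in AMRL may differ, so treat the module as an illustrative assumption.

```python
import torch
import torch.nn as nn

class MaxAggregator(nn.Module):
    """Order-invariant running max over per-step features, with a straight-through gradient."""

    def forward(self, h_seq):
        # h_seq: (T, B, D) outputs of a short-term memory module such as an LSTM
        outputs, running_max = [], h_seq[0]
        for t in range(h_seq.size(0)):
            running_max = torch.maximum(running_max, h_seq[t])
            # forward value is the running max; gradient flows straight through h_seq[t]
            outputs.append(h_seq[t] + (running_max - h_seq[t]).detach())
        return torch.stack(outputs)

# usage: aggregate LSTM outputs before the policy head
lstm = nn.LSTM(input_size=16, hidden_size=32)
obs = torch.randn(100, 4, 16)                 # 100 timesteps, batch of 4
h_seq, _ = lstm(obs)
context = MaxAggregator()(h_seq)              # same shape as h_seq, order-invariant summary
```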
Second, in Section 5, we systematically evaluate how the sources of noise that affect RL agents affect the sample efficiency of AMRL and baseline approaches.
We devise a series of experiments in two domains, (1) a symbolic maze domain and (2) 3D mazes in the game Minecraft.
Our results show that AMRL can solve long-term memory tasks significantly faster than existing methods.
Across tasks our best model achieves an increase in final average return of 9% over baselines with far more parameters and 19% over LSTMs with the same number of parameters.
Third, in Section 6 we analytically and empirically analyze the characteristics of our proposed and baseline models with the aim to identify factors that affect performance.
We empirically confirm that AMRL models are substantially less susceptible to vanishing gradients than previous models.
We propose to additionally analyze memory models in terms of the signal-to-noise ratio achieved at increasing distances from a given signal, and show that AMRL models can maintain signals over many timesteps.
Jointly, the results of our detailed analysis validate our modeling choices and show why AMRL models are able to effectively solve long-term memory tasks.
The results in the previous section indicate that models that perform well on long-term memory tasks in noisy settings, such as those studied in Section 5, tend to have informative gradients and high SNR over long time horizons.
In this section we further examine this relationship.
Figure 8 shows the aggregate performance achieved by each model across the experiments presented in Section 5 and in the appendix A.2.
We argue that these tasks capture key aspects of long-term memory tasks in noisy settings.
We observe that our proposed AMRL-Avg and AMRL-Max approaches outperform all other methods.
Ablations Max and Avg are competitive with baselines, but our results demonstrate the value of the ST connection.
AMRL-Max improves over the LSTM average return by 19% with no additional parameters and outperforms the DNC average return by 9% with far fewer parameters.
We have shown that AMRL models do not suffer the drastic performance decreases that LSTMs and DNCs exhibit in noisy environments, and that this robustness generalizes to an ability to ignore irrelevant features in other tasks.
Figure 8(b) relates overall model performance to the quantities analyzed above, SNR and gradient strength.
We find that SNR and gradient strength are both integral and complementary aspects of a successful model: the DNC has a relatively large SNR, but does not match the empirical performance of AMRL, likely due to its decaying gradients.
AMRL models achieve high SNR and maintain strong gradients, achieving the highest empirical performance.
The reverse holds for LSTM models.
An outlier is the SUM model: we hypothesize that the growing sum creates issues when interpreting memories independently of the time step at which they occur.
The max aggregator may be less susceptible to growing activations given a bounded number of distinct observations, a bounded input activation, or an analogously compact internal representation.
That is, the max value may be low and reached quickly.
Moreover, the ST connection will still prevent gradient decay in such a case.
Overall, our analytical and empirical analysis in terms of SNR and gradient decay both validates our modeling choices in developing AMRL, and provides a useful tool for understanding learning performance of memory models.
By considering both empirical measurements of SNR and gradients we are able to rank models closely in-line with empirical performance.
We consider this a particularly valuable insight for future research seeking to improve long-term memory.
We have demonstrated that the performance of previous approaches to memory in RL can severely deteriorate under noise, including observation noise and noise introduced by an agent's policy and the environment dynamics.
We proposed AMRL, a novel approach designed specifically to be robust to RL settings, by maintaining strong signal and gradients over time.
Our empirical results confirmed that the proposed models outperform existing approaches, often dramatically.
Finally, by analyzing gradient strength and signal-to-noise ratio of the considered models, we validated our model choices and showed that both aspects help explain the high empirical performance achieved by our models.
In future research, we believe our models and analysis will form the basis of further understanding, and improving performance of memory models in RL.
An aspect that goes beyond the scope of the present paper is the question of how to prevent long-term memory tasks from interfering with shorter-term tasks, an issue highlighted in Appendix A.2.3.
Additionally, integration of AMRL into models other than the standard LSTM could be explored.
Overall, our work highlights the need and potential for approaches that specifically tackle long-term memory tasks from an RL perspective.
|
In Deep RL, order-invariant functions can be used in conjunction with standard memory modules to improve gradient decay and resilience to noise.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:31
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
For fast and energy-efficient deployment of trained deep neural networks on resource-constrained embedded hardware, each learned weight parameter should ideally be represented and stored using a single bit.
Error-rates usually increase when this requirement is imposed.
Here, we report large improvements in error rates on multiple datasets, for deep convolutional neural networks deployed with 1-bit-per-weight.
Using wide residual networks as our main baseline, our approach simplifies existing methods that binarize weights by applying the sign function in training; we apply scaling factors for each layer with constant unlearned values equal to the layer-specific standard deviations used for initialization.
For CIFAR-10, CIFAR-100 and ImageNet, and models with 1-bit-per-weight requiring less than 10 MB of parameter memory, we achieve error rates of 3.9%, 18.5% and 26.0% / 8.5% (Top-1 / Top-5) respectively.
We also considered MNIST, SVHN and ImageNet32, achieving 1-bit-per-weight test results of 0.27%, 1.9%, and 41.3% / 19.1% respectively.
For CIFAR, our error rates halve previously reported values, and are within about 1% of our error-rates for the same network with full-precision weights.
For networks that overfit, we also show significant improvements in error rate by not learning batch normalization scale and offset parameters.
This applies to both full precision and 1-bit-per-weight networks.
Using a warm-restart learning-rate schedule, we found that training for 1-bit-per-weight is just as fast as full-precision networks, with better accuracy than standard schedules, and achieved about 98%-99% of peak performance in just 62 training epochs for CIFAR-10/100.
For full training code and trained models in MATLAB, Keras and PyTorch see https://github.com/McDonnell-Lab/1-bit-per-weight/ .
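To make the weight-binarization rule above concrete, here is a minimal sketch; the straight-through training form is a common implementation choice and an assumption on our part, not a claim about the released code.

```python
import torch

def binarize_weights(weight: torch.Tensor, init_std: float) -> torch.Tensor:
    """Deploy-time 1-bit-per-weight representation: sign(w) scaled by a constant,
    unlearned per-layer factor equal to the standard deviation used to initialize
    that layer (init_std is assumed to be recorded at layer construction)."""
    return init_std * torch.sign(weight)

def binarized_forward_weights(weight: torch.Tensor, init_std: float) -> torch.Tensor:
    """Training-time sketch: use the binarized weights in the forward pass while
    letting gradients flow to the stored full-precision weights (a standard
    straight-through construction; an assumption, not the paper's exact code)."""
    w_bin = init_std * torch.sign(weight)
    return weight + (w_bin - weight).detach()
```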
Fast parallel computing resources, namely GPUs, have been integral to the resurgence of deep neural networks, and their ascendancy to becoming state-of-the-art methodologies for many computer vision tasks.
However, GPUs are both expensive and wasteful in terms of their energy requirements.
They typically compute using single-precision floating point (32 bits), which has now been recognized as providing far more precision than needed for deep neural networks.
Moreover, training and deployment can require the availability of large amounts of memory, both for storage of trained models, and for operational RAM.
If deep-learning methods are to become embedded in resource-constrained sensors, devices and intelligent systems, ranging from robotics to the internet-of-things to self-driving cars, reliance on high-end computing resources will need to be reduced.
To this end, there has been increasing interest in finding methods that drive down the resource burden of modern deep neural networks.
Existing methods typically exhibit good performance but, for the ideal case of single-bit parameters and/or processing, still fall well short of state-of-the-art error rates on important benchmarks.
In this paper, we report a significant reduction in the gap (see Figure 1 and Results) between convolutional neural networks (CNNs) deployed using weights stored and applied at standard precision (32-bit floating point) and networks deployed using weights represented by a single bit each.
In the process of developing our methods, we also obtained significant improvements in the error rates of the full-precision versions of the CNNs we used.
In addition to their application in custom hardware deploying deep networks, networks deployed using 1-bit-per-weight have previously been shown BID21 to enable significant speedups on regular GPUs, although doing so is not yet possible using standard popular libraries.
Aspects of this work were first communicated as a subset of the material in a workshop abstract and talk BID19.
Figure 1: Our error-rate gaps between using full-precision and 1-bit-per-weight.
All points except black crosses are data from some of our best results reported in this paper for each dataset.
Black points are results on the full ImageNet dataset, in comparison with results of BID22 (black crosses).
The notation 4x, 10x and 15x corresponds to network width (see Section 4).
1.1 RELATED WORK
|
We train wide residual networks that can be immediately deployed using only a single bit for each convolutional weight, with significantly better accuracy than past methods.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:310
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
This paper presents a system for immersive visualization of Non-Euclidean spaces using real-time ray tracing.
It exploits the capabilities of the new generation of GPUs based on NVIDIA's Turing architecture in order to develop new methods for intuitive exploration of landscapes featuring non-trivial geometry and topology in virtual reality.
|
Immersive Visualization of the Classical Non-Euclidean Spaces using Real-Time Ray Tracing.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:311
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We propose the set autoencoder, a model for unsupervised representation learning for sets of elements.
It is closely related to sequence-to-sequence models, which learn fixed-sized latent representations for sequences, and have been applied to a number of challenging supervised sequence tasks such as machine translation, as well as unsupervised representation learning for sequences.
In contrast to sequences, sets are permutation invariant.
The proposed set autoencoder considers this fact, both with respect to the input as well as the output of the model.
On the input side, we adapt a recently-introduced recurrent neural architecture using a content-based attention mechanism.
On the output side, we use a stable marriage algorithm to align predictions to labels in the learning phase.
We train the model on synthetic data sets of point clouds and show that the learned representations change smoothly with translations in the inputs, preserve distances in the inputs, and that the set size is represented directly.
We apply the model to supervised tasks on the point clouds using the fixed-size latent representation.
For a number of difficult classification problems, the results are better than those of a model that does not consider the permutation invariance.
Especially for small training sets, the set-aware model benefits from unsupervised pretraining.
Autoencoders are a class of machine learning models that have been used for various purposes such as dimensionality reduction, representation learning, or unsupervised pretraining (see, e.g., BID13 ; BID1 ; BID6 ; BID10 ).
In a nutshell, autoencoders are feed-forward neural networks which encode the given data in a latent, fixed-size representation, and subsequently try to reconstruct the input data in their output variables using a decoder function.
This basic mechanism of encoding and decoding is applicable to a wide variety of input distributions.
Recently, researchers have proposed a sequence autoencoder BID5 , a model that is able to handle sequences of inputs by using a recurrent encoder and decoder.
Furthermore, there has been growing interest to tackle sets of elements with similar recurrent architectures BID21 Xu et al., 2016) .
In this paper, we propose the set autoencoder -a model that learns to embed a set of elements in a permutation-invariant, fixed-size representation using unlabeled training data only.
The basic architecture of our model corresponds to that of current sequence-to-sequence models BID20 BID3 BID23 : It consists of a recurrent encoder that takes a set of inputs and creates a fixed-length embedding, and a recurrent decoder that uses the fixedlength embedding and outputs another set.
As encoder, we use an LSTM network with an attention mechanism as in BID21 .
This ensures that the embedding is permutation-invariant in the input.
Since we want the loss of the model to be permutation-invariant in the decoder output, we re-order the output and align it to the input elements, using a stable matching algorithm that calculates a permutation matrix.
This approach yields a loss which is differentiable with respect to the model's parameters.
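As an illustration of the output-alignment step, below is a small Gale-Shapley-style stable matching sketch over a cost matrix of pairwise distances; the way preferences are built from the distances is an assumption, since the excerpt does not spell out those details.

```python
import numpy as np

def stable_match(cost: np.ndarray) -> np.ndarray:
    """Gale-Shapley stable matching between decoder outputs and input elements.

    cost[i, j] is the distance between decoder output i and input element j.
    Outputs "propose" to inputs in order of increasing cost; an input tentatively
    accepts the closest proposer seen so far. Returns perm with perm[i] = j, the
    input element aligned to output i for computing the reconstruction loss.
    """
    n = cost.shape[0]
    prefs = np.argsort(cost, axis=1)       # each output's preference list
    next_pick = np.zeros(n, dtype=int)     # next input each output will propose to
    holder = -np.ones(n, dtype=int)        # holder[j] = output currently matched to input j
    free = list(range(n))
    while free:
        i = free.pop()
        j = prefs[i, next_pick[i]]
        next_pick[i] += 1
        k = holder[j]
        if k == -1:
            holder[j] = i
        elif cost[i, j] < cost[k, j]:      # input j prefers the closer output
            holder[j] = i
            free.append(k)
        else:
            free.append(i)
    perm = np.empty(n, dtype=int)
    perm[holder] = np.arange(n)            # invert the matching: perm[i] = input index
    return perm
```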
The proposed model can be trained in an unsupervised fashion, i.e., without having a labeled data set for a specific task.
In a series of experiments, we analyze the properties of the embedding.
For example, we show that the learned embedding is to some extent distance-preserving, i.e., the distance between two sets of elements correlates with the distances of their embeddings.
Also, the embedding is smooth, i.e., small changes in the input set lead to small changes of the respective embedding.
Furthermore, we show that pretraining in an unsupervised fashion can help to increase the performance on supervised tasks when using the fixed-size embedding as input to a classification or regression model, especially if training data is limited.
Figure 1: Example of a sequence-to-sequence translation model. The encoder receives the input characters ["g","o"]. Its internal state is passed to the decoder, which outputs the translation, i.e., the characters of the word "aller".
The rest of the paper is organized as follows.
Section 2 introduces the preliminaries and briefly discusses related work.
In Section 3, we present the details of the set autoencoder.
Section 4 presents experimental setup and results.
We discuss the results and conclude the paper in Section 5.
2 RELATED WORK
We presented the set autoencoder, a model that can be trained to reconstruct sets of elements using a fixed-size latent representation.
The model achieves permutation invariance in the inputs by using a content-based attention mechanism, and permutation invariance in the outputs by reordering the outputs using a stable marriage algorithm during training.
The fixed-size representation possesses a number of interesting attributes, such as distance preservation.
We show that, despite the output permutation invariance, the model learns to output elements in a particular order.
A series of experiments show that the set autoencoder learns representations that can be useful for tasks that require information about each set element, especially if the tasks are more difficult, and few labeled training examples are present.
There are a number of directions for future research.
The most obvious is to use non-linear functions for f inp and f out to enable the set autoencoder to capture non-linear structures in the input set, and test the performance on point clouds of 3d data sets such as ShapeNet BID4 .
Also, changes to the structure of the encoder/decoder (e.g., which variables are interpreted as query or embedding) and alternative methods for aligning the decoder outputs to the inputs can be investigated.
Furthermore, more research is necessary to get a better understanding for which tasks the permutation invariance property is helpful, and unsupervised pretraining can be advantageous.
We use the LSTM implementation of TensorFlow BID0 to implement all models.
For the implementation and experiments, we made the following design choices:
Model Architecture
• Both the encoder and the decoder LSTMs have peephole connections BID8.
|
We propose the set autoencoder, a model for unsupervised representation learning for sets of elements.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:312
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a considerable amount of experience to be collected by the agent.
In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt.
However, not all tasks are easily or automatically reversible.
In practice, this learning process requires considerable human intervention.
In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and backward policy, with the backward policy resetting the environment for a subsequent attempt.
By learning a value function for the backward policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts.
Our experiments illustrate that proper use of the backward policy can greatly reduce the number of manual resets required to learn a task and can reduce the number of unsafe actions that lead to non-reversible states.
Deep reinforcement learning (RL) algorithms have the potential to automate acquisition of complex behaviors in a variety of real-world settings.
Recent results have shown success on games (BID16), locomotion (BID22), and a variety of robotic manipulation skills (BID8).
However, the complexity of tasks achieved with deep RL in simulation still exceeds the complexity of the tasks learned in the real world.
Why have real-world results lagged behind the simulated accomplishments of deep RL algorithms?
One challenge with real-world application of deep RL is the scaffolding required for learning: a bad policy can easily put the system into an unrecoverable state from which no further learning is possible.
For example, an autonomous car might collide at high speed, and a robot learning to clean glasses might break them.
Even in cases where failures are not catastrophic, some degree of human intervention is often required to reset the environment between attempts (e.g., BID2).
Most RL algorithms require sampling from the initial state distribution at the start of each episode.
On real-world tasks, this operation often corresponds to a manual reset of the environment after every episode, an expensive solution for complex environments.
Even when tasks are designed so that these resets are easy (e.g., BID8), manual resets are necessary when the robot or environment breaks (e.g., BID7).
The bottleneck for learning many real-world tasks is not that the agent collects data too slowly, but rather that data collection stops entirely when the agent is waiting for a manual reset.
To avoid manual resets caused by the environment breaking, task designers often add negative rewards to dangerous states and intervene to prevent agents from taking dangerous actions.
While this works well for simple tasks, scaling to more complex environments requires writing large numbers of rules for the types of actions the robot should avoid.
For example, a robot should avoid hitting itself, except when clapping.
One interpretation of our method is as automatically learning these safety rules.
Decreasing the number of manual resets required to learn a task is important for scaling up RL experiments outside simulation, allowing researchers to run longer experiments on more agents for more hours.
We propose to address these challenges by forcing our agent to "leave no trace."
The goal is to learn not only how to do the task at hand, but also how to undo it.
The intuition is that sequences of actions that are reversible are safe; it is always possible to undo them to get back to the original state.
This property is also desirable for continual learning of agents, as it removes the requirement for manual resets.
In this work, we learn two policies that alternate between attempting the task and resetting the environment.
By learning how to reset the environment at the end of each episode, the agent we learn requires significantly fewer manual resets.
Critically, our value-based reset policy restricts the agent to visit only states from which it can return, intervening to prevent the forward policy from taking potentially irreversible actions.
Using the reset policy to regularize the forward policy encodes the assumption that whether our learned reset policy can reset is a good proxy for whether any reset policy can reset.
The algorithm we propose can be applied to both deterministic and stochastic MDPs.
For stochastic MDPs, we say that an action is reversible if the probability that an oracle reset policy can successfully reset from the next state is greater than some safety threshold.
The set of states from which the agent knows how to return grows over time, allowing the agent to explore more parts of the environment as soon as it is safe to do so.
The main contribution of our work is a framework for continually and jointly learning a reset policy in concert with a forward task policy.
We show that this reset policy not only automates resetting the environment between episodes, but also helps ensure safety by reducing how frequently the forward policy enters unrecoverable states.
Incorporating uncertainty into the value functions of both the forward and reset policies further allows us to make this process risk-aware, balancing exploration against safety.
Our experiments illustrate that this approach reduces the number of "hard" manual resets required while learning a variety of simulated robotic skills.
In this paper, we presented a framework for automating reinforcement learning based on two principles: automated resets between trials, and early aborts to avoid unrecoverable states.
Our method simultaneously learns a forward and reset policy, with the value functions of the two policies used to balance exploration against recoverability.
Experiments in this paper demonstrate that our algorithm not only reduces the number of manual resets required to learn a task, but also learns to avoid unsafe states and automatically induces a curriculum.Our algorithm can be applied to a wide range of tasks, only requiring a few manual resets to learn some tasks.
During the early stages of learning we cannot accurately predict the consequences of our actions.
We cannot learn to avoid a dangerous state until we have visited that state (or a similar state) and experienced a manual reset.
Nonetheless, reducing the number of manual resets during learning will enable researchers to run experiments for longer on more agents.
A second limitation of our work is that we treat all manual resets as equally bad.
In practice, some manual resets are more costly than others.
For example, it is more costly for a grasping robot to break a wine glass than to push a block out of its workspace.
An approach not studied in this paper for handling these cases would be to specify costs associated with each type of manual reset, and incorporate these reset costs into the learning algorithm. While the experiments for this paper were done in simulation, where manual resets are inexpensive, the next step is to apply our algorithm to real robots, where manual resets are costly.
A challenge introduced when switching to the real world is automatically identifying when the agent has reset.
In simulation we can access the state of the environment directly to compute the distance between the current state and initial state.
In the real world, we must infer states from noisy sensor observations to deduce if they are the same.
|
We propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and backward policy, with the backward policy resetting the environment for a subsequent attempt.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:313
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
It has been argued that current machine learning models do not have commonsense, and therefore must be hard-coded with prior knowledge (Marcus, 2018).
Here we show surprising evidence that language models can already learn to capture certain common sense knowledge.
Our key observation is that a language model can compute the probability of any statement, and this probability can be used to evaluate the truthfulness of that statement.
On the Winograd Schema Challenge (Levesque et al., 2011), language models are 11% higher in accuracy than previous state-of-the-art supervised methods.
Language models can also be fine-tuned for the task of Mining Commonsense Knowledge on ConceptNet to achieve F1 scores of 0.912 and 0.824, outperforming previous best results (Jastrzebski et al., 2018).
Further analysis demonstrates that language models can discover unique features of Winograd Schema contexts that decide the correct answers without explicit supervision.
It has been argued that current machine learning models do not have common sense BID4 BID15 .
For example, even the best machine learning models perform poorly on commonsense reasoning tasks such as the Winograd Schema Challenge BID11 BID14.
This argument is often combined with another important criticism of supervised learning that it only works well on problems that have a lot of labeled data.
The Winograd Schema Challenge (WSC) is the opposite of such problems because its labeled set size is only on the order of a few hundred examples, with no official training data.
Based on this argument, it is suggested that machine learning models must be integrated with prior knowledge BID15 BID10.
As an example, consider the following question from the WSC dataset: "The trophy doesn't fit in the suitcase because it is too big." What is "it"? Answer 0: the trophy. Answer 1: the suitcase.
The main point of this dataset is that no machine learning model today can do a good job at answering this type of question.
In this paper, we present surprising evidence that language models do capture certain common sense knowledge and that this knowledge can be easily extracted.
Key to our method is the use of language models (LMs), trained on a large amount of unlabeled data, to score the multiple-choice questions posed by the challenge and similar datasets.
In the above example, we first substitute the pronoun ("it") with the candidates ("the trophy" and "the suitcase"), and then use an LM to compute the probability of the two resulting sentences ("The trophy doesn't fit in the suitcase because the trophy is too big." and "The trophy doesn't fit in the suitcase because the suitcase is too big.").
The substitution that results in a more probable sentence will be the chosen answer.
Using this simple method, we are able to achieve 63.7% accuracy, 11% above that of the previous state-of-the-art result.
To demonstrate a practical impact of this work, we show that the trained LMs can be used to enrich human-annotated knowledge bases, which are known to be low in coverage and expensive to expand.
For example, "Suitcase is a type of container", a piece of knowledge relevant to the above Winograd Schema example, is not present in the ConceptNet knowledge base BID13.
The goal of this task is to add such new facts to the knowledge base at a cheaper cost than human annotation, in our case using LM scoring.
We followed the Commonsense Knowledge Mining task formulation from BID0 BID12 BID8, which posed the task as a classification problem over unseen facts and non-facts.
Without an additional classification layer, LMs are fine-tuned to give different scores to fact and non-fact tuples from ConceptNet.
Results obtained by this method outperform all previous results, despite the small training data size (100K instances).
On the full test set, LMs can identify commonsense facts with a 0.912 F1 score, which is 0.02 better than supervised trained networks BID8.
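The substitution-and-scoring procedure above can be sketched as follows; GPT-2 serves only as a convenient stand-in LM, since the paper's own models and training corpora may differ.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_score(sentence: str) -> float:
    """Approximate total log-probability of a sentence under the LM."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        nll = lm(ids, labels=ids).loss      # mean negative log-likelihood per token
    return -nll.item() * ids.shape[1]

def resolve(template: str, candidates: list) -> str:
    """Substitute each candidate for the pronoun slot "_" and keep the sentence
    the LM considers more probable."""
    scores = [sentence_score(template.replace("_", c)) for c in candidates]
    return candidates[scores.index(max(scores))]

# "The trophy doesn't fit in the suitcase because it is too big." -> what is "it"?
resolve("The trophy doesn't fit in the suitcase because _ is too big.",
        ["the trophy", "the suitcase"])
```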
We introduced a simple method to apply pretrained language models to tasks that require commonsense knowledge.
Key to our method is the insight that large LMs trained on massive text corpora can capture certain aspect of human knowledge, and therefore can be used to score textual statements.
On the Winograd Schema Challenge, LMs are able to achieve 11 points of accuracy above the best previously reported result.
On mining novel commonsense facts from ConceptNet knowledge base, LM scoring also outperforms previous methods on two different test criteria.
We analyse the trained language models and observe that key features of the context that identify the correct answer are discovered and used in their predictions. Traditional approaches to capturing common sense usually involve expensive human annotation to build knowledge bases.
This work demonstrates that commonsense knowledge can alternatively be learned and stored in the form of distributed representations.
At the moment, we consider language modeling for learning from texts as this supplies virtually unlimited data.
It remains an open question for unsupervised learning to capture commonsense from other modalities such as images or videos.
|
We present evidence that LMs do capture common sense with state-of-the-art results on both Winograd Schema Challenge and Commonsense Knowledge Mining.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:314
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In many real-world learning scenarios, features are only acquirable at a cost constrained under a budget.
In this paper, we propose a novel approach for cost-sensitive feature acquisition at the prediction-time.
The suggested method acquires features incrementally based on a context-aware feature-value function.
We formulate the problem in the reinforcement learning paradigm, and introduce a reward function based on the utility of each feature.
Specifically, MC dropout sampling is used to measure expected variations of the model uncertainty which is used as a feature-value function.
Furthermore, we suggest sharing representations between the class predictor and value function estimator networks.
The suggested approach is completely online and is readily applicable to stream learning setups.
The solution is evaluated on three different datasets including the well-known MNIST dataset as a benchmark as well as two cost-sensitive datasets: Yahoo Learning to Rank and a dataset in the medical domain for diabetes classification.
According to the results, the proposed method is able to efficiently acquire features and make accurate predictions.
In traditional machine learning settings, it is usually assumed that a training dataset is freely available and the objective is to train models that generalize well.
In this paradigm, the feature set is fixed, and we are dealing with complete feature vectors accompanied by class labels that are provided for training.
However, in many real-world scenarios, there are certain costs for acquiring features as well as budgets limiting the total expenditure.
Here, the notation of cost is more general than financial cost and it also refers to other concepts such as computational cost, privacy impacts, energy consumption, patient discomfort in medical tests, and so forth BID22 .
Take the example of the disease diagnosis based on medical tests.
Creating a complete feature vector from all the relevant information is synonymous with conducting many tests such as MRI scan, blood test, etc. which would not be practical.
On the other hand, a physician approaches the problem by asking a set of basic easy-to-acquire features, and then incrementally prescribes other tests based on the current known information (i.e., context) until a reliable diagnosis can be made.
Furthermore, in many real-world use-cases, due to the volume of data or necessity of prompt decisions, learning and prediction should take place in an online and stream-based fashion.
In the medical diagnosis example, it is consistent with the fact that the latency of diagnosis is vital (e.g., urgency of specific cases and diagnosis), and it is often impossible to defer the decisions.
Here, by online we mean processing samples one at a time as they are being received. Various approaches have been suggested in the literature for cost-sensitive feature acquisition.
To begin with, traditional feature selection methods were suggested to limit the set of features used for training BID11 BID17.
For instance, L1 regularization for linear classifiers results in models that effectively use a subset of features BID9 .
Note that these methods focus on finding a fixed subset of features to be used (i.e., feature selection), while a better solution would be to make feature acquisition decisions based on the sample at hand and at prediction time. More recently, probabilistic methods have been suggested that measure the value of each feature based on the current evidence BID5.
However, these methods are usually applicable to Bayesian networks or similar probabilistic models and make limiting assumptions such as having binary features and binary classes .
Furthermore, these probabilistic methods are computationally expensive and intractable in large-scale problems BID5.
Motivated by the success of discriminative learning, cascade and tree-based classifiers have been suggested as an intuitive way to incorporate feature costs BID20 BID3.
Nevertheless, these methods are limited to the modeling capability of tree classifiers and to fixed, predetermined structures.
A recent work by BID27 suggested a gating method that employs adaptive linear or tree-based classifiers, alternating between low-cost models for easy-to-handle instances and higher-cost models for more complicated cases.
While this method outperforms much of the previous work on tree-based and cascade cost-sensitive classifiers, the low-cost model being used is limited to simple linear classifiers or pruned random forests.
As an alternative approach, sensitivity analysis of trained predictors has been suggested to measure the importance of each feature given a context BID7 BID18.
These approaches either require an exhaustive measurement of sensitivities or rely on approximations of sensitivity.
These methods are easy to use, as they work without any significant modification to the predictor models being trained.
However, finding the global sensitivity is theoretically a difficult and computationally expensive problem.
Therefore, approximate or local sensitivities are frequently used in these methods, which may lead to suboptimal solutions.
Another approach suggested in the literature is modeling the feature acquisition problem as a learning problem in the imitation learning BID13 or reinforcement learning BID14 BID29 BID15 domain.
These approaches are promising in terms of performance and scalability.
However, the value functions used in these methods are usually not intuitive and require tuning hyper-parameters to balance the cost vs. accuracy trade-off.
More specifically, they often rely on one or more hyper-parameters to adjust the average cost at which these models operate.
On the other hand, in many real-world scenarios it is desirable to adjust the trade-off at prediction time rather than at training time.
For instance, it might be desirable to spend more for a certain instance or to continue the feature acquisition until a desired level of prediction confidence is achieved.
This paper presents a novel method based on deep Q-networks for cost-sensitive feature acquisition.
The proposed solution employs uncertainty analysis in neural network classifiers as a measure for finding the value of each feature given a context.
Specifically, we use variations in the certainty of predictions as a reward function to measure the value per unit of cost given the current context.
In contrast to recent feature acquisition methods that use reinforcement learning ideas BID14 BID29 BID15, the suggested reward function does not require any hyper-parameter tuning to balance the cost versus performance trade-off.
Here, features are acquired incrementally, while maintaining a certain budget or a stopping criterion.
Moreover, in contrast to many other works in the literature that assume an initial complete dataset BID13 BID5 BID8 BID27, the proposed solution is stream-based and online, learning and optimizing acquisition costs during both training and prediction.
This might be beneficial as, in many real-world use cases, it might be prohibitively expensive to collect all features for all training data.
Furthermore, this paper suggests a method for sharing representations between the class predictor and action-value models, which increases training efficiency.
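A rough sketch of a certainty-change-per-unit-cost reward in the spirit of the description above; the exact uncertainty measure and normalization used in the paper may differ, and the function names are illustrative.

```python
import torch

def acquisition_reward(model, x_before, x_after, cost, n_samples=20):
    """Certainty-change-per-unit-cost reward (illustrative sketch).

    `model` is a classifier containing dropout layers; keeping it in train() mode
    keeps dropout active, so repeated forward passes give MC-dropout samples. The
    reward is the change in confidence of the MC-averaged prediction caused by
    acquiring the feature (x_before -> x_after), divided by the feature's cost.
    """
    model.train()                      # keep dropout active at prediction time
    def confidence(x):
        probs = torch.stack([model(x).softmax(-1) for _ in range(n_samples)])
        return probs.mean(0).max(-1).values
    with torch.no_grad():
        gain = confidence(x_after) - confidence(x_before)
    return gain / cost
```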
In this paper, we proposed an approach for cost-sensitive learning in stream-based settings.
We demonstrated that certainty estimation in neural network classifiers can be used as a viable measure for the value of features.
Specifically, the variation of model certainty per unit of cost is used as a measure of feature value.
In this paradigm, a reinforcement learning solution is suggested which is efficient to train using a shared representation.
The introduced method is evaluated on three different real-world datasets representing different applications: MNIST digits recognition, Yahoo LTRC web ranking dataset, and diabetes prediction using health records.
Based on the results, the suggested method is able to learn from data streams, make accurate predictions, and effectively reduce the prediction-time feature acquisition cost.
|
An online algorithm for cost-aware feature acquisition and prediction
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:315
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
This paper revisits the problem of sequence modeling using convolutional architectures.
Although both convolutional and recurrent architectures have a long history in sequence prediction, the current "default" mindset in much of the deep learning community is that generic sequence modeling is best handled using recurrent networks.
The goal of this paper is to question this assumption.
Specifically, we consider a simple generic temporal convolution network (TCN), which adopts features from modern ConvNet architectures such as dilations and residual connections.
We show that on a variety of sequence modeling tasks, including many frequently used as benchmarks for evaluating recurrent networks, the TCN outperforms baseline RNN methods (LSTMs, GRUs, and vanilla RNNs) and sometimes even highly specialized approaches.
We further show that the potential "infinite memory" advantage that RNNs have over TCNs is largely absent in practice: TCNs indeed exhibit longer effective history sizes than their recurrent counterparts.
As a whole, we argue that it may be time to (re)consider ConvNets as the default "go to" architecture for sequence modeling.
Since the re-emergence of neural networks to the forefront of machine learning, two types of network architectures have played a pivotal role: the convolutional network, often used for vision and higher-dimensional input data; and the recurrent network, typically used for modeling sequential data.
These two types of architectures have become so ingrained in modern deep learning that they can be viewed as constituting the "pillars" of deep learning approaches.
This paper looks at the problem of sequence modeling, predicting how a sequence will evolve over time.
This is a key problem in domains spanning audio, language modeling, music processing, time series forecasting, and many others.
Although exceptions certainly exist in some domains, the current "default" thinking in the deep learning community is that these sequential tasks are best handled by some type of recurrent network.
Our aim is to revisit this default thinking, and specifically ask whether modern convolutional architectures are in fact just as powerful for sequence modeling. Before making the main claims of our paper, some history of convolutional and recurrent models for sequence modeling is useful.
In the early history of neural networks, convolutional models were specifically proposed as a means of handling sequence data, the idea being that one could slide a 1-D convolutional filter over the data (and stack such layers together) to predict future elements of a sequence from past ones BID20 BID30 .
Thus, the idea of using convolutional models for sequence modeling goes back to the beginning of convolutional architectures themselves.
However, these models were subsequently largely abandoned for many sequence modeling tasks in favor of recurrent networks BID13 .
The reasoning for this appears straightforward: while convolutional architectures have a limited ability to look back in time (i.e., their receptive field is limited by the size and layers of the filters), recurrent networks have no such limitation.
Because recurrent networks propagate forward a hidden state, they are theoretically capable of infinite memory, the ability to make predictions based upon data that occurred arbitrarily long ago in the sequence.
This possibility seems to be realized even more so for the now-standard architectures of Long Short-Term Memory networks (LSTMs) BID21, or recent incarnations such as the Gated Recurrent Unit (GRU); these architectures aim to avoid the "vanishing gradient" challenge of traditional RNNs and appear to provide a means to actually realize this infinite memory.
Given the substantial limitations of convolutional architectures at the time that RNNs/LSTMs were initially proposed (when deep convolutional architectures were difficult to train, and strategies such as dilated convolutions had not reached widespread use), it is no surprise that CNNs fell out of favor relative to RNNs.
While there have been a few notable examples in recent years of CNNs applied to sequence modeling (e.g., the WaveNet BID40 and PixelCNN BID41 architectures), the general "folk wisdom" of sequence modeling prevails: that the first avenue of attack for these problems should be some form of recurrent network. The fundamental aim of this paper is to revisit this folk wisdom, and thereby make a counterclaim.
We argue that with the tools of modern convolutional architectures at our disposal (namely the ability to train very deep networks via residual connections and other similar mechanisms, plus the ability to increase receptive field size via dilations), in fact convolutional architectures typically outperform recurrent architectures on sequence modeling tasks, especially (and perhaps somewhat surprisingly) on domains where a long effective history length is needed to make proper predictions.
This paper consists of two main contributions.
First, we describe a generic, baseline temporal convolutional network (TCN) architecture, combining best practices in the design of modern convolutional architectures, including residual layers and dilation.
We emphasize that we are not claiming to invent the practice of applying convolutional architectures to sequence prediction, and indeed the TCN architecture here mirrors closely architectures such as WaveNet (in fact TCN is notably simpler in some respects).
We do, however, want to propose a generic modern form of convolutional sequence prediction for subsequent experimentation.
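For concreteness, a minimal causal dilated convolution block of the kind such a TCN stacks is sketched below; weight normalization, dropout, and the residual 1x1 projection of the full block are omitted, and the hyper-parameter choices are assumptions.

```python
import torch.nn as nn

class CausalDilatedConv1d(nn.Module):
    """One TCN building block: a 1-D convolution made causal by left-padding,
    with an adjustable dilation factor (illustrative sketch only)."""
    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=self.pad, dilation=dilation)

    def forward(self, x):                       # x: (batch, channels, time)
        out = self.conv(x)
        # Trim the extra positions on the right so no output sees "future" inputs.
        return out[:, :, :-self.pad] if self.pad else out
```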
Second, and more importantly, we extensively evaluate the TCN model versus alternative approaches on a wide variety of sequence modeling tasks, spanning many domains and datasets that have typically been the purview of recurrent models, including word-and character-level language modeling, polyphonic music prediction, and other baseline tasks commonly used to evaluate recurrent architectures.
Although our baseline TCN can be outperformed by specialized (and typically highly-tuned) RNNs in some cases, for the majority of problems the TCN performs best, with minimal tuning on the architecture or the optimization.
This paper also analyzes empirically the myth of "infinite memory" in RNNs, and shows that in practice, TCNs of similar size and complexity may actually demonstrate longer effective history sizes.
Our chief claim in this paper is thus an empirical one: rather than presuming that RNNs will be the default best method for sequence modeling tasks, it may be time to (re)consider ConvNets as the "go-to" approach when facing a new dataset or task in sequence modeling.
In this work, we revisited the topic of modeling sequence predictions using convolutional architectures.
We introduced the key components of the TCN and analyzed some vital advantages and disadvantages of using TCN for sequence predictions instead of RNNs.
Further, we compared our generic TCN model to the recurrent architectures on a set of experiments that span a wide range of domains and datasets.
Through these experiments, we have shown that TCN with minimal tuning can outperform LSTM/GRU of the same model size (and with standard regularizations) in most of the tasks.
Further experiments on the copy memory task and the LAMBADA task revealed that TCNs actually have a better capability for long-term memory than comparable recurrent architectures, which are commonly believed to have unlimited memory. It is important to note, however, that we only presented a generic architecture here, with components all coming from standard modern convolutional networks (e.g., normalization, dropout, residual connections).
And indeed, on specific problems, the TCN model can still be beaten by some specialized RNNs that adopt carefully designed optimization strategies.
Nevertheless, we believe the experimental results in Section 4 might be a good signal that, instead of considering RNNs as the "default" methodology for sequence modeling, convolutional networks, too, can be a promising and powerful toolkit for studying time-series data.
|
We argue that convolutional networks should be considered the default starting point for sequence modeling tasks.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:316
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep neural networks work well at approximating complicated functions when provided with data and trained by gradient descent methods.
At the same time, there is a vast amount of existing functions that programmatically solve different tasks in a precise manner eliminating the need for training.
In many cases, it is possible to decompose a task to a series of functions, of which for some we may prefer to use a neural network to learn the functionality, while for others the preferred method would be to use existing black-box functions.
We propose a method for end-to-end training of a base neural network that integrates calls to existing black-box functions.
We do so by approximating the black-box functionality with a differentiable neural network in a way that drives the base network to comply with the black-box function interface during the end-to-end optimization process.
At inference time, we replace the differentiable estimator with its external black-box non-differentiable counterpart such that the base network output matches the input arguments of the black-box function.
Using this ``Estimate and Replace'' paradigm, we train a neural network, end to end, to compute the input to black-box functionality while eliminating the need for intermediate labels.
We show that by leveraging the existing precise black-box function during inference, the integrated model generalizes better than a fully differentiable model, and learns more efficiently compared to RL-based methods.
End-to-end supervised learning with deep neural networks (DNNs) has taken the stage in the past few years, achieving state-of-the-art performance in multiple domains including computer vision BID25 , natural language processing BID23 BID10 , and speech recognition BID29 .
Many of the tasks addressed by DNNs can be naturally decomposed to a series of functions.
In such cases, it might be advisable to learn neural network approximations for some of these functions and use precise existing functions for others.
Examples of such tasks include Semantic Parsing and Question Answering.
Since such a decomposition relies partly on precise functions, it may lead to a superior solution compared to an approximated one based solely on a learned neural model.
Decomposing a solution into trainable networks and existing functions requires matching the output of the networks to the input of the existing functions, and vice-versa.
The input and output are defined by the existing functions' interface.
We shall refer to these functions as black-box functions (bbf), focusing only on their interface.
For example, consider the question: "Is 7.2 greater than 4.5?"
Given that number comparison is a solved problem in symbolic computation, a natural solution would be to decompose the task into a two-step process of
(i) converting the natural language to an executable program, and
(ii) executing the program on an arithmetic module.
While a DNN may be a good fit for step (i), step (ii) is already solved precisely by an existing arithmetic module. We propose an alternative approach called Estimate and Replace that finds a differentiable function approximation, which we term the black-box estimator, for estimating the black-box function.
We use the black-box estimator as a proxy to the original black-box function during training, and by that allow the learnable parts of the model to be trained using gradient-based optimization.
We compensate for not using any intermediate labels to direct the learnable parts by using the black-box function as an oracle for training the black-box estimator.
During inference, we replace the black-box estimator with the original non-differentiable black-box function.End-to-end training of a solution composed of trainable components and black-box functions poses several challenges we address in this work-coping with non-differentiable black-box functions, fitting the network to call these functions with the correct arguments, and doing so without any intermediate labels.
Two more challenges are the lack of prior knowledge on the distribution of inputs to the black-box function, and the use of gradient-based methods when the function approximation is near perfect and gradients are extremely small.
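A compact sketch of the Estimate-and-Replace idea as we understand it; all names (base_net, estimator, black_box) and the loss choices are illustrative assumptions rather than the paper's implementation.

```python
import torch.nn as nn

mse = nn.MSELoss()

def train_step(base_net, estimator, black_box, x, y, opt_base, opt_est):
    """One Estimate-and-Replace training step (illustrative sketch).

    `base_net` maps the raw input to black-box arguments, `estimator` is the
    differentiable approximation of the black-box, and `black_box` is the external
    non-differentiable function (assumed to return a plain tensor)."""
    args = base_net(x)

    # 1) Fit the estimator to the black-box on the arguments currently produced,
    #    using the black-box itself as the label oracle.
    opt_est.zero_grad()
    est_loss = mse(estimator(args.detach()), black_box(args.detach()))
    est_loss.backward()
    opt_est.step()

    # 2) Train the base network end-to-end through the differentiable estimator.
    opt_base.zero_grad()
    task_loss = mse(estimator(args), y)
    task_loss.backward()
    opt_base.step()

def predict(base_net, black_box, x):
    # At inference time, the estimator is replaced by the real black-box function.
    return black_box(base_net(x))
```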
This work is organized as follows: In Section 2, we formulate the problem of decomposing the task to include calls to a black-box function.
Section 3 describes the network architecture and training procedures.
In Section 4, we present experiments and comparison to Policy Gradient-based RL, and to fully neural models.
We further discuss the potential and benefits of the modular nature of our approach in Section 6.
Interpretability via Composability Lipton (2016) identifies composability as a strong contributor to model interpretability.
They define composability as the ability to divide the model into components and interpret them individually to construct an explanation from which a human can predict the model's output.
The Estimate and Replace approach solves the black-box interface learning problem in a way that is modular by design.
As such, it provides an immediate interpretability benefit.
Training a model to comply with a well-defined and well-known interface inherently supports model composability and, thus, directly contributes to its interpretability. For example, suppose you want to let a natural language processing model interface with a WordNet service to receive additional synonym and antonym features for selected input words.
Because the WordNet interface is interpretable, the intermediate output of the model to the WordNet service (the words for which the model requested additional features) can serve as an explanation to the model's final prediction.
Knowing which words the model chose to obtain additional features for gives insight into how it made its final decision.
Reusability via Composability: An additional clear benefit of model composability in the context of our solution is reusability.
Training a model to comply with a well-defined interface induces well-defined module functionality which is a necessary condition for module reuse.
Current solutions for learning using black-box functionality in neural network prediction have critical limitations which manifest themselves in at least one of the following aspects:
(i) poor generalization,
(ii) low learning efficiency,
(iii) under-utilization of available optimal functions, and
(iv) the need for intermediate labels.
In this work, we proposed an architecture, termed EstiNet, and a training and deployment process, termed Estimate and Replace, which aim to overcome these limitations.
We then showed empirical results that validate our approach.
Estimate and Replace is a two-step training and deployment approach by which we first estimate a given black-box functionality to allow end-to-end training via back-propagation, and then replace the estimator with its concrete black-box function at inference time.
By using a differentiable estimation module, we can train an end-to-end neural network model using gradient-based optimization.
We use labels that we generate from the black-box function during the optimization process to compensate for the lack of intermediate labels.
We show that our training process is more stable and has lower sample complexity compared to policy gradient methods.
By leveraging the concrete black-box function at inference time, our model generalizes better than end-to-end neural network models.
We validate the advantages of our approach with a series of simple experiments.
Our approach implies a modular neural network that enjoys added interpretability and reusability benefits.Future Work We limit the scope of this work to tasks that can be solved with a single black-box function.
Solving the general case of this problem requires learning of multiple black-box interfaces, along unbounded successive calls, where the final prediction is a computed function over the output of these calls.
This introduces several difficult challenges.
For example, computing the final prediction over a set of black-box functions, rather than a direct prediction of a single one, requires an additional network output module.
The input of this module must be compatible with the output of the previous layer, be it an estimation function at training time, or its black-box function counterpart at inference time, which belong to different distributions.
We reserve this area of research for future work. As difficult as it is, we believe that artificial intelligence does not lie in mere knowledge, nor in learning from endless data samples.
Rather, much of it is in the ability to extract the right piece of information from the right knowledge source for the right purpose.
Thus, training a neural network to intelligibly interact with black-box functions is a leap forward toward stronger AI.
|
Training DNNs to interface with black-box functions without intermediate labels, by using an estimator sub-network that can be replaced with the black box after training
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:317
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
This paper proposes a new actor-critic-style algorithm called Dual Actor-Critic or Dual-AC.
It is derived in a principled way from the Lagrangian dual form of the Bellman optimality equation, which can be viewed as a two-player game between the actor and a critic-like function, which is named as dual critic.
Compared to its actor-critic relatives, Dual-AC has the desired property that the actor and dual critic are updated cooperatively to optimize the same objective function, providing a more transparent way for learning the critic that is directly related to the objective function of the actor.
We then provide a concrete algorithm that can effectively solve the minimax optimization problem, using techniques of multi-step bootstrapping, path regularization, and stochastic dual ascent algorithm.
We demonstrate that the proposed algorithm achieves the state-of-the-art performances across several benchmarks.
Reinforcement learning (RL) algorithms aim to learn a policy that maximizes the long-term return by sequentially interacting with an unknown environment.
Value-function-based algorithms first approximate the optimal value function, which can then be used to derive a good policy.
These methods BID23 BID28 often take advantage of the Bellman equation and use bootstrapping to make learning more sample efficient than Monte Carlo estimation BID25 .
However, the relation between the quality of the learned value function and the quality of the derived policy is fairly weak BID6 .
Policy-search-based algorithms such as REINFORCE BID29 and others (Kakade, 2002; BID18 , on the other hand, assume a fixed space of parameterized policies and search for the optimal policy parameter based on unbiased Monte Carlo estimates.
The parameters are often updated incrementally along stochastic directions that on average are guaranteed to increase the policy quality.
Unfortunately, they often have a greater variance that results in a higher sample complexity. Actor-critic methods combine the benefits of these two classes, and have proved successful in a number of challenging problems such as robotics (Deisenroth et al., 2013), meta-learning BID3, and games (Mnih et al., 2016).
An actor-critic algorithm has two components: the actor (policy) and the critic (value function).
As in policy-search methods, actor is updated towards the direction of policy improvement.
However, the update directions are computed with the help of the critic, which can be more efficiently learned as in value-function-based methods BID24 Konda & Tsitsiklis, 2003; BID13 BID7 BID19 .
Although the use of a critic may introduce bias in learning the actor, it reduces variance and thus the sample complexity as well, compared to pure policy-search algorithms. While the use of a critic is important for the efficiency of actor-critic algorithms, it is not entirely clear how the critic should be optimized to facilitate improvement of the actor.
For some parametric family of policies, it is known that a certain compatibility condition ensures the actor parameter update is an unbiased estimate of the true policy gradient BID24 .
In practice, temporal-difference methods are perhaps the most popular choice to learn the critic, especially when nonlinear function approximation is used (e.g., BID19).
In this paper, we propose a new actor-critic-style algorithm where the actor and the critic-like function, which we name the dual critic, are trained cooperatively to optimize the same objective function.
The algorithm, called Dual Actor-Critic, is derived in a principled way by solving a dual form of the Bellman equation BID6.
The algorithm can be viewed as a two-player game between the actor and the dual critic, and in principle can be solved by standard optimization algorithms like stochastic gradient descent (Section 2).
We emphasize that the dual critic is not fitting the value function of the current policy, but that of the optimal policy.
We then show that, when function approximation is used, direct application of standard optimization techniques can result in instability in training, because of the lack of convex-concavity in the objective function (Section 3).
Inspired by the augmented Lagrangian method (Luenberger & Ye, 2015; Boyd et al., 2010), we propose path regularization for enhanced numerical stability.
We also generalize the two-player game formulation to the multi-step case to yield a better bias/variance tradeoff.
The full algorithm is derived and described in Section 4, and is compared to existing algorithms in Section 5.
Finally, our algorithm is evaluated on several locomotion tasks in the MuJoCo benchmark BID27, and compares favorably to state-of-the-art algorithms across the board.
Notation. We denote a discounted MDP by M = (S, A, P, R, γ), where S is the state space, A the action space, P(·|s, a) the transition probability kernel defining the distribution over the next state upon taking action a in state s, R(s, a) the corresponding immediate reward, and γ ∈ (0, 1) the discount factor.
If there is no ambiguity, we will use Σ_a f(a) and ∫ f(a) da interchangeably.
In this paper, we revisited the linear program formulation of the Bellman optimality equation, whose Lagrangian dual form yields a game-theoretic view for the roles of the actor and the dual critic.
Although such a framework for the actor and dual critic allows them to be optimized for the same objective function, parameterizing the actor and dual critic unfortunately induces instability in optimization.
We analyze the sources of instability, which is corroborated by numerical experiments.
We then propose Dual Actor-Critic, which exploits the stochastic dual ascent algorithm for the path-regularized, multi-step bootstrapping two-player game, to bypass these issues.
Figure 2: The results of Dual-AC against TRPO and PPO baselines. Each plot shows average reward during training across 5 random seeded runs, with 50% confidence interval. The x-axis is the number of training iterations. Dual-AC achieves comparable performance to TRPO and PPO in some tasks, but outperforms them on more challenging tasks.
Proof. We rewrite the linear program (3) as DISPLAYFORM1. Recall that T is monotonic, i.e., if DISPLAYFORM2.
Theorem 1 (Optimal policy from occupancy) Σ_{s,a ∈ S×A} ρ*(s, a) = 1, and π*(a|s) = ρ*(s, a) / Σ_{a ∈ A} ρ*(s, a).
Proof. The optimal occupancy measure must satisfy DISPLAYFORM4, where P denotes the transition distribution and I denotes a |S| × |S||A| matrix with I_ij = 1 if and only if j ∈ [(i − 1)|A| + 1, . . . , i|A|]. Multiplying both sides by 1, and since μ and P are probability distributions, we have ⟨1, ρ*⟩ = 1. Without loss of generality, we assume there is only one best action in each state. Therefore, by the KKT complementary slackness conditions of (3), i.e., ρ(s, a)(R(s, a) + γ E_{s'|s,a}[V(s')] − V(s)) = 0, we have ρ*(s, a) = 0 unless a = a*, and π* follows by normalization.
Theorem 2 The optimal policy π* and its corresponding value function V* are the solution to the following saddle problem DISPLAYFORM5.
Proof. Due to the strong duality of the optimization (3), we have DISPLAYFORM6. Then, plugging in the property of the optimum in Theorem 1, we obtain the final optimization (6).
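For readability, the standard linear-programming form of the Bellman optimality equation that the derivations above revisit can be written as follows; this is the textbook formulation, and the exact normalization and constraint set of the paper's equations (3) and (6) may differ slightly:

\min_{V} \ (1-\gamma)\,\mathbb{E}_{s \sim \mu}[V(s)]
\quad \text{s.t.} \quad V(s) \ge R(s,a) + \gamma\,\mathbb{E}_{s' \sim P(\cdot|s,a)}[V(s')] \quad \forall (s,a),

whose Lagrangian dual, with multipliers \rho(s,a) \ge 0 interpreted as an occupancy measure, is

\max_{\rho \ge 0} \ \sum_{s,a} \rho(s,a)\,R(s,a)
\quad \text{s.t.} \quad \sum_{a} \rho(s',a) = (1-\gamma)\,\mu(s') + \gamma \sum_{s,a} P(s'|s,a)\,\rho(s,a) \quad \forall s'.

Summing the dual constraint over s' recovers \sum_{s,a}\rho(s,a) = 1 as in Theorem 1, and treating V and \rho as the two players yields the saddle-point (two-player game) view used in Theorem 2.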
|
We propose Dual Actor-Critic algorithm, which is derived in a principled way from the Lagrangian dual form of the Bellman optimality equation. The algorithm achieves the state-of-the-art performances across several benchmarks.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:318
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Training a model to perform a task typically requires a large amount of data from the domains in which the task will be applied.
However, it is often the case that data are abundant in some domains but scarce in others.
Domain adaptation deals with the challenge of adapting a model trained from a data-rich source domain to perform well in a data-poor target domain.
In general, this requires learning plausible mappings between domains.
CycleGAN is a powerful framework that efficiently learns to map inputs from one domain to another using adversarial training and a cycle-consistency constraint.
However, the conventional approach of enforcing cycle-consistency via reconstruction may be overly restrictive in cases where one or more domains have limited training data.
In this paper, we propose an augmented cyclic adversarial learning model that enforces the cycle-consistency constraint via an external task specific model, which encourages the preservation of task-relevant content as opposed to exact reconstruction.
This task specific model both relaxes the cycle-consistency constraint and complements the role of the discriminator during training, serving as an augmented information source for learning the mapping.
We explore adaptation in speech and visual domains in a low-resource supervised setting.
In speech domains, we adopt a speech recognition model from each domain as the task specific model.
Our approach improves absolute performance of speech recognition by 2% for female speakers in the TIMIT dataset, where the majority of training samples are from male voices.
In low-resource visual domain adaptation, the results show that our approach improves absolute performance by 14% and 4% when adapting SVHN to MNIST and vice versa, respectively, which outperforms unsupervised domain adaptation methods that require high-resource unlabeled target domain.
Domain adaptation BID14 BID31 BID1 aims to generalize a model from source domain to a target domain.
Typically, the source domain has a large amount of training data, whereas the data are scarce in the target domain.
This challenge is typically addressed by learning a mapping between domains, which allows data from the source domain to enrich the available data for training in the target domain.
One way of learning such mappings is through Generative Adversarial Networks (GANs BID7 with cycle-consistency constraint (CycleGAN Zhu et al., 2017) , which enforces that mapping of an example from the source to the target and then back to the source domain would result in the same example (and vice versa for a target example).
Due to this constraint, CycleGAN learns to preserve the 'content' 1 from the source domain while only transferring the 'style' to match the distribution of the target domain.
This is a powerful constraint, and various works BID32 BID20 BID10 have demonstrated its effectiveness in learning cross-domain mappings. Enforcing cycle-consistency is appealing as a technique for preserving semantic information of the data with respect to a task, but implementing it through reconstruction may be too restrictive when data are imbalanced across domains.
This is because the reconstruction error encourages exact match of samples from the reverse mapping, which may in turn encourage the forward-mapping to keep the sample close to the original domain.
Normally, the adversarial objectives would counter this effect; however, when data from the target domain are scarce, it is very difficult to learn a powerful discriminator that can capture meaningful properties of the target distribution.
Therefore, the resulting mappings learned is likely to be sub-optimal.
Importantly, for the learned mapping to be meaningful, it is not necessary to have the exact reconstruction.
As long as the 'semantic' information is preserved and the 'style' matches the corresponding distribution, it would be a valid mapping. To address this issue, we propose an augmented cyclic adversarial learning model (ACAL) for domain adaptation.
In particular, we replace the reconstruction objective with a task specific model.
The model learns to preserve the 'semantic' information from the data samples in a particular domain by minimizing the loss of the mapped samples for the task specific model.
On the other hand, the task specific model also serves as an additional source of information for the corresponding domain and hence supplements the discriminator in that domain to facilitate better modeling of the distribution.
The task specific model can also be viewed as an implicit way of disentangling the information essential to the task from the 'style' information that relates to the data distribution of different domain.
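As a rough sketch of how the relaxed cycle-consistency term can be written (our illustration, not the authors' code; the mapping networks G_st, G_ts and the source-domain task model task_s are assumed names), the usual reconstruction penalty on G_ts(G_st(x)) is replaced by the task loss of the cycled sample:

import torch
import torch.nn.functional as F

def relaxed_cycle_loss(x_s, y_s, G_st, G_ts, task_s):
    # Map source -> target -> source.
    x_cycled = G_ts(G_st(x_s))
    # The cycled sample only has to remain correctly labeled by the source-domain
    # task model (content preservation), not to match x_s exactly (reconstruction).
    return F.cross_entropy(task_s(x_cycled), y_s)

The same task model can additionally score adversarially generated samples, which is the sense in which it supplements the discriminator of its domain.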
We show that our approach improves the performance by 40% as compared to the baseline on digit domain adaptation.
We improve the phoneme error rate by ∼5% on the TIMIT dataset when adapting a model trained on speech from one gender to the other.
In this paper, we propose to use augmented cycle-consistency adversarial learning for domain adaptation and introduce a task specific model to facilitate learning domain related mappings.
We enforce cycle-consistency using a task specific loss instead of the conventional reconstruction objective.
Additionally, we use the task specific model as an additional source of information for the discriminator in the corresponding domain.
We demonstrate the effectiveness of our proposed approach by evaluating on two domain adaptation tasks, and in both cases we achieve significant performance improvement compared to the baseline. By extending the definition of the task-specific model to unsupervised learning, such as a reconstruction loss using an autoencoder or self-supervision, our proposed method would work in all settings of domain adaptation.
Such unsupervised task can be speech modeling using wavenet BID30 , or language modeling using recurrent or transformer networks BID24 .
|
A robust domain adaptation by employing a task specific loss in cyclic adversarial learning
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:319
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Optimization on manifolds has been widely used in machine learning to handle optimization problems with constraints.
Most previous works focus on the case with a single manifold.
However, in practice it is quite common that the optimization problem involves more than one constraint (each constraint corresponding to one manifold).
It is not clear in general how to optimize on multiple manifolds effectively and provably, especially when the intersection of multiple manifolds is not a manifold or cannot be easily calculated.
We propose a unified algorithm framework to handle the optimization on multiple manifolds.
Specifically, we integrate information from multiple manifolds and move along an ensemble direction by viewing the information from each manifold as a drift and adding them together.
We prove the convergence properties of the proposed algorithms.
We also apply the algorithms to training neural networks with batch normalization layers and achieve preferable empirical results.
Machine learning problems are often formulated as optimization problems.
It is common that the optimization problem comes with multiple constraints, due to practical scenarios or human prior knowledge that adding some of them helps the model achieve a better result.
One way to handle these constraints is adding regularization terms to the objective, such as ℓ1 and ℓ2 regularization.
However, it is hard to adjust the hyper-parameters of the regularization terms to guarantee that the original constraints get satisfied. Another way to deal with the constraints is to optimize on manifolds determined by the constraints.
Then the optimization problem becomes unconstrained on the manifold, which could be easy to solve technically.
Furthermore, optimization on a manifold means optimizing over a more compact space, and may bring performance gains when training neural networks, e.g., BID10 BID3.
Most previous works on manifold optimization focus on a single manifold BID13.
However, in practice, we often face more than one constraint, each of them corresponding to one manifold.
If we still solve the optimization problem with multiple constraints by a method on a manifold, we need to handle it on the intersection of multiple manifolds, which may no longer be a manifold BID11.
Due to this, traditional optimization methods on manifolds do not work in this case.
In this paper, we consider the problem of optimization on multiple manifolds. Specifically, the problem is written as arg min_{x ∈ ∩_i M_i} f(x), where each M_i is a manifold.
We propose a method for solving this problem by choosing the moving direction as −∇f(x) (on a manifold, −grad f(x)) with several drifts that are derived from the descent information on the other manifolds.
By this method, we get a sequence that carries information from all manifolds.
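Schematically, and with the caveat that this is a reading of the description above rather than the paper's exact equation (3) (the step size \alpha and the precise aggregation of the drifts are assumptions), the update on manifold M_i can be pictured as

x_{k+1} \;=\; \mathrm{Retr}_{x_k}\!\Big( -\alpha \big( \mathrm{grad}\, f(x_k) + h_k \big) \Big),
\qquad h_k \;=\; \text{sum of descent directions gathered from the other manifolds } M_j,\ j \neq i,

where Retr_{x_k} is a retraction on M_i, so the iterate stays on M_i while the drift h_k injects information from the remaining constraints.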
In this paper, we derive an intuitive method to approach optimization problems with multiple constraints, which corresponds to optimizing on the intersection of multiple manifolds.
Specifically, the method integrates information among all manifolds to determine minimum points on each manifold.
We do not add extra conditions to the constraints of the optimization problem, as long as each constraint can be converted to a manifold.
In the future, we may add some conditions to the manifolds under which the minimum points on each manifold achieved by our algorithm are close to each other.
If this conclusion is established, the problem of optimization on the intersection of multiple manifolds is solved. According to the updating rule (equation 3), we can derive many other algorithms, because the drift h_k in (equation 3) is flexible.
On the other hand, Retr_x in our algorithm is not limited to a specific choice.
Since there are some results for Retr_x = Exp_x, for example Corollary 8 in , we may get more elegant results by using Exp_x as the retraction function in our algorithm. The manifolds we encounter in optimization are mainly embedded sub-manifolds and quotient manifolds BID1.
An embedded sub-manifold is F^{-1}(y) for a smooth function F : M_1 → M_2, where M_1, M_2 are two manifolds and y ∈ M_2.
A quotient manifold is a quotient topological space generated by a specific equivalence relation ∼.
In this paper, we use the Oblique manifold and the Grassmann manifold, which are an embedded sub-manifold and a quotient manifold, respectively.
The difficulty we face in optimization on manifolds is calculating the tangent space T_x M and the Riemannian gradient grad f(x).
Giving an exact formula for a tangent space T_x M is not an easy problem.
On the other hand, since the Riemannian gradient is ∇f(x) projected onto the tangent space T_x M, finding the projection matrix onto a specific space T_x M is nontrivial.
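As a concrete example of these two computations (a standard fact stated here for illustration, not a result of the paper): for the unit sphere S^{n-1} = { x ∈ R^n : x^T x = 1 }, which is the column-wise building block of the Oblique manifold, one has

T_x S^{n-1} = \{ v \in \mathbb{R}^n : x^{\top} v = 0 \},
\qquad
\mathrm{grad}\, f(x) = (I - x x^{\top})\, \nabla f(x),

i.e. the Riemannian gradient is the Euclidean gradient projected onto the tangent space by the projection matrix I − x x^T.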
|
This paper introduces an algorithm to handle optimization problems with multiple constraints from the viewpoint of manifolds.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:32
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The success of popular algorithms for deep reinforcement learning, such as policy-gradients and Q-learning, relies heavily on the availability of an informative reward signal at each timestep of the sequential decision-making process.
When rewards are only sparsely available during an episode, or a rewarding feedback is provided only after episode termination, these algorithms perform sub-optimally due to the difficulty in credit assignment.
Alternatively, trajectory-based policy optimization methods, such as the cross-entropy method and evolution strategies, do not require per-timestep rewards, but have been found to suffer from high sample complexity by completely forgoing the temporal nature of the problem.
Improving the efficiency of RL algorithms in real-world problems with sparse or episodic rewards is therefore a pressing need.
In this work, we introduce a self-imitation learning algorithm that exploits and explores well in the sparse and episodic reward settings.
We view each policy as a state-action visitation distribution and formulate policy optimization as a divergence minimization problem.
We show that with Jensen-Shannon divergence, this divergence minimization problem can be reduced into a policy-gradient algorithm with shaped rewards learned from experience replays.
Experimental results indicate that our algorithm works comparable to existing algorithms in environments with dense rewards, and significantly better in environments with sparse and episodic rewards.
We then discuss limitations of self-imitation learning, and propose to solve them by using Stein variational policy gradient descent with the Jensen-Shannon kernel to learn multiple diverse policies.
We demonstrate its effectiveness on a challenging variant of continuous-control MuJoCo locomotion tasks.
Deep reinforcement learning (RL) has demonstrated significant applicability and superior performance in many problems outside the reach of traditional algorithms, such as computer and board games BID28 , continuous control BID25 , and robotics .
Using deep neural networks as functional approximators, many classical RL algorithms have been shown to be very effective in solving sequential decision problems.
For example, a policy that selects actions under certain state observation can be parameterized by a deep neural network that takes the current state observation as input and gives an action or a distribution over actions as output.
Value functions that take both state observation and action as inputs and predict expected future reward can also be parameterized as neural networks.
In order to optimize such neural networks, policy gradient methods BID29 BID37 BID38 and Q-learning algorithms BID28 capture the temporal structure of the sequential decision problem and decompose it into a supervised learning problem, guided by the immediate and discounted future reward from rollout data. Unfortunately, when the reward signal becomes sparse or delayed, these RL algorithms may suffer from inferior performance and inefficient sample complexity, mainly due to the scarcity of the immediate supervision when training happens in a single-timestep manner.
This is known as the temporal credit assignment problem BID44 .
For instance, consider the Atari Montezuma's revenge game -a reward is received after collecting certain items or arriving at the final destination in the lowest level, while no reward is received as the agent is trying to reach these goals.
The sparsity of the reward makes the neural network training very inefficient and also poses challenges in exploration.
It is not hard to see that many real-world problems tend to be of the form where rewards are either only sparsely available during an episode, or the rewards are episodic, meaning that a non-zero reward is only provided at the end of the trajectory or episode. In addition to policy-gradient and Q-learning, alternative algorithms, such as those for global or stochastic optimization, have recently been studied for policy search.
These algorithms do not decompose trajectories into individual timesteps, but instead apply zeroth-order finite-difference gradient or gradient-free methods to learn policies based on the cumulative rewards of the entire trajectory.
Usually, trajectory samples are first generated by running the current policy and then the distribution of policy parameters is updated according to the trajectory-returns.
The cross-entropy method (CEM, Rubinstein & Kroese (2016) ) and evolution strategies BID36 are two nominal examples.
Although their sample efficiency is often not comparable to the policy gradient methods when dense rewards are available from the environment, they are more widely applicable in the sparse or episodic reward settings as they are agnostic to the task horizon, and only the trajectory-based cumulative reward is needed. Our contribution is the introduction of a new algorithm based on policy-gradients, with the objective of achieving better performance than existing RL algorithms in sparse and episodic reward settings.
Using the equivalence between the policy function and its state-action visitation distribution, we formulate policy optimization as a divergence minimization problem between the current policy's visitation and the distribution induced by a set of experience replay trajectories with high returns.
We show that with the Jensen-Shannon divergence (D JS ), this divergence minimization problem can be reduced into a policy-gradient algorithm with shaped, dense rewards learned from these experience replays.
This algorithm can be seen as self-imitation learning, in which the expert trajectories in the experience replays are self-generated by the agent during the course of learning, rather than using some external demonstrations.
We combine the divergence minimization objective with the standard RL objective, and empirically show that the shaped, dense rewards significantly help in sparse and episodic settings by improving credit assignment.
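A minimal sketch of the resulting shaped-reward computation (our own illustration of the idea rather than the paper's implementation; the discriminator interface, the replay-selection rule and the exact reward transform are assumptions) looks like adversarial imitation learning, except that the "expert" buffer holds the agent's own high-return trajectories:

import torch
import torch.nn.functional as F

def discriminator_loss(D, rollout_s, rollout_a, replay_s, replay_a):
    # Binary classification: past high-return replay data (label 1) vs. current rollouts (label 0).
    logits_replay = D(replay_s, replay_a)
    logits_roll = D(rollout_s, rollout_a)
    return (F.binary_cross_entropy_with_logits(logits_replay, torch.ones_like(logits_replay))
            + F.binary_cross_entropy_with_logits(logits_roll, torch.zeros_like(logits_roll)))

def shaped_reward(D, s, a, eps=1e-8):
    # Dense per-timestep reward: high where (s, a) resembles past good behaviour.
    with torch.no_grad():
        d = torch.sigmoid(D(s, a))
    return torch.log(d + eps) - torch.log(1.0 - d + eps)

These shaped rewards are then combined with the environment reward inside an ordinary policy-gradient update.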
Following that, we qualitatively analyze the shortcomings of the self-imitation algorithm.
Our second contribution is the application of Stein variational policy gradient (SVPG) with the Jensen-Shannon kernel to simultaneously learn multiple diverse policies.
We demonstrate the benefits of this addition to the self-imitation framework by considering difficult exploration tasks with sparse and deceptive rewards. Related Works.
Divergence minimization has been used in various policy learning algorithms.
Relative Entropy Policy Search (REPS) BID33 restricts the loss of information between policy updates by constraining the KL-divergence between the state-action distribution of old and new policy.
Policy search can also be formulated as an EM problem, leading to several interesting algorithms, such as RWR BID32 and PoWER BID20 .
Here the M-step minimizes a KL-divergence between trajectory distributions, leading to an update rule which resembles return-weighted imitation learning.
Please refer to BID7 for a comprehensive exposition.
MATL BID47 uses adversarial training to bring state occupancy from a real and simulated agent close to each other for efficient transfer learning.
In Guided Policy Search (GPS, BID21), a parameterized policy is trained by constraining the divergence between the current policy and a controller learnt via trajectory optimization. Learning from Demonstrations (LfD).
The objective in LfD, or imitation learning, is to train a control policy to produce a trajectory distribution similar to the demonstrator.
Approaches for self-driving cars BID4 and drone manipulation BID34 have used human-expert data, along with Behavioral Cloning algorithm to learn good control policies.
Deep Q-learning has been combined with human demonstrations to achieve performance gains in Atari and robotics tasks BID46 BID30 .
Human data has also been used in the maximum entropy IRL framework to learn cost functions under which the demonstrations are optimal .
BID17 use the same framework to derive an imitation-learning algorithm (GAIL) which is motivated by minimizing the divergence between agent's rollouts and external expert demonstrations.
Besides humans, other sources of expert supervision include planningbased approaches such as iLQR and MCTS .
Our algorithm departs from prior work in forgoing external supervision, and instead using the past experiences of the learner itself as demonstration data. Exploration and Diversity in RL.
Count-based exploration methods utilize state-action visitation counts N (s, a), and award a bonus to rarely visited states BID42 .
In large statespaces, approximation techniques BID45 , and estimation of pseudo-counts by learning density models BID3 BID13 has been researched.
Intrinsic motivation has been shown to aid exploration, for instance by using information gain or prediction error BID41 as a bonus.
Hindsight Experience Replay adds additional goals (and corresponding rewards) to a Q-learning algorithm.
We also obtain additional rewards, but from a discriminator trained on past agent experiences, to accelerate a policy-gradient algorithm.
Prior work has looked at training a diverse ensemble of agents with good exploratory skills BID27 BID6 BID12 .
To enjoy the benefits of diversity, we incorporate a modification of SVPG BID27 in our final algorithm. In very recent work, BID31 propose exploiting past good trajectories to drive exploration.
Their algorithm buffers (s, a) and the corresponding return for each transition in rolled trajectories, and reuses them for training if the stored return value is higher than the current state-value estimate.
Our approach presents a different objective for self-imitation based on divergence-minimization.
With this view, we learn shaped, dense rewards which are then used for policy optimization.
We further improve the algorithm with SVPG.
Reusing high-reward trajectories has also been explored for program synthesis and semantic parsing tasks BID23 BID0 .
We approached policy optimization for deep RL from the perspective of JS-divergence minimization between state-action distributions of a policy and its own past good rollouts.
This leads to a self-imitation algorithm which improves upon standard policy-gradient methods via the addition of a simple gradient term obtained from implicitly shaped dense rewards.
We observe substantial performance gains over the baseline for high-dimensional, continuous-control tasks with episodic and noisy rewards.
Further, we discuss the potential limitations of the self-imitation approach, and propose ensemble training with the SVPG objective and JS-kernel as a solution.
Through experimentation, we demonstrate the benefits of a self-imitating, diverse ensemble for efficient exploration and avoidance of local minima. An interesting direction for future work is improving our algorithm using the rich literature on exploration in RL.
Since ours is a population-based exploration method, techniques for efficient single agent exploration can be readily combined with it.
For instance, parameter-space noise or curiosity-driven exploration can be applied to each agent in the SI-interact-JS ensemble.
Secondly, our algorithm for training diverse agents could be used more generally.
In Appendix 5.6, we show preliminary results for two cases:
a) hierarchical RL, where a diverse group of Swimmer bots is trained for downstream use in a complex Swimming+Gathering task;
b) RL without environment rewards, relying solely on diversity as the optimization objective.
Further investigation is left for future work.
|
Policy optimization by using past good rollouts from the agent; learning shaped rewards via divergence minimization; SVPG with JS-kernel for population-based exploration.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:320
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We study the precise mechanisms which allow autoencoders to encode and decode a simple geometric shape, the disk.
In this carefully controlled setting, we are able to describe the specific form of the optimal solution to the minimisation problem of the training step.
We show that the autoencoder indeed approximates this solution during training.
Secondly, we identify a clear failure in the generalisation capacity of the autoencoder, namely its inability to interpolate data.
Finally, we explore several regularisation schemes to resolve the generalisation problem.
Given the great attention that has been recently given to the generative capacity of neural networks, we believe that studying in depth simple geometric cases sheds some light on the generation process and can provide a minimal requirement experimental setup for more complex architectures.
Autoencoders are neural networks, often convolutional neural networks, whose purpose is twofold.
Firstly, to compress some input data by transforming it from the input domain to another space, known as the latent, or code, space.
The second goal of the autoencoder is to take this latent representation and transform it back to the original space, such that the output is similar, with respect to some criterion, to the input.
One of the main objectives of this learning process is to reveal important structure in the data via the latent space, and therefore to represent this data in a more meaningful fashion or one that is easier to model.
Autoencoders have been proven to be extremely useful in many tasks ranging from image compression to synthesis.
Many variants on the basic idea of autoencoders have been proposed, the common theme being how to impose useful properties on the learned latent space.
However, very little is known about the actual inner workings and mechanisms of the autoencoder. The goal of this work is to investigate these mechanisms and describe how the autoencoder functions.
Many applications of autoencoders or similar networks consider relatively high-level input objects, ranging from the MNIST handwritten digits to abstract sketches of conceptual objects BID18 ; BID7 ).
Here, we take a radically different approach.
We consider, in depth, the encoding/decoding processes of a simple geometric shape, the disk, and investigate how the autoencoder functions in this case.
There are several important advantages to such an approach.
Firstly, since the class of objects we consider has an explicit parametrisation, it is possible to describe the "optimal" performance of the autoencoder, i.e., can it compress and uncompress a disk to and from a code space of dimensionality 1?
Secondly, the setting of this study fixes certain architecture characteristics of the network, such as the number of layers, leaving fewer free parameters to tune.
This means that the conclusions which we obtain are more likely to be robust than in the case of more high-level applications.
Finally, it is easier to identify the roles of different components of the network, which enables us to carry out an instructive ablation study. Using this approach, we show that the autoencoder approximates the theoretical solution of the training problem when no biases are involved in the network.
Secondly, we identify certain limitations in the generalisation capacity of autoencoders when the training database is incomplete with respect to the underlying manifold.
We observe the same limitation using the architecture of BID18 , which is considerably more complex and is proposed to encode natural images.
Finally, we analyse several regularisation schemes and identify one in particular which greatly aids in overcoming this generalisation problem.
We have investigated in detail the specific mechanisms which allow autoencoders to encode image information in an optimal manner in the specific case of disks.
We have shown that, in this case, the encoder functions by integrating over disk, and so the code z represents the area of the disk.
In the case where the autoencoder is trained with no bias, the decoder learns a single function which is multiplied by a scalar depending on the input.
We have shown that this function corresponds to the optimal function.
The bias is then used to induce a thresholding process applied to ensure the disk is correctly decoded.
We have also illustrated certain limitations of the autoencoder with respect to generalisation when datapoints are missing in the training set.
This is especially problematic for higher-level applications, whose data have higher intrinsic dimensionality and therefore are more likely to include such "holes".
Finally, we identify a regularisation approach which is able to overcome this problem particularly well.
This regularisation is asymmetrical as it consists of regularising the encoder while leaving more freedom to the decoder. An important future goal is to extend the theoretical analyses obtained to increasingly complex visual objects, in order to understand whether the same mechanisms remain in place.
We have experimented with other simple geometric objects such as squares and ellipses, with similar results in an optimal code size.
Another question is how the decoder functions with the biases included.
This requires a careful study of the different non-linearity activations as the radius increases.
Finally, the ultimate goal of these studies is to determine the capacity of autoencoders to encode and generate images representing more complex objects or scenes.
As we have seen, the proposed framework can help identifying some limitations of complex networks such as the one from BID18 and future works should investigate whether this framework can help developing the right regularization scheme or architecture.
Figure 7: Verification of the hypothesis that y(t, r) = h(r) f(t) for decoding in the case where the autoencoder contains no bias. We have determined the average profile of the output of the autoencoder when no biases are involved. On the left, we have divided several random experimental profiles y by the function h, and plotted the result, which is close to constant (spatially) for a fixed radius of the input disk. On the right, we plot z against the theoretically optimal value of h (C⟨f, 1_{B_r}⟩, where C is some constant accounting for the arbitrary normalization of f), i.e. the value of ⟨f, 1_{B_r}⟩ plotted against z. This experimental sanity check confirms our theoretical derivations.
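In symbols, and summarizing our reading of the relations verified above (notation as in the caption; C absorbs the arbitrary normalization of f), the bias-free autoencoder behaves as

y(t, r) \;\approx\; h(r)\, f(t),
\qquad
h(r) \;\approx\; C\,\langle f, \mathbf{1}_{B_r} \rangle \;=\; C \int f(t)\, \mathbf{1}_{B_r}(t)\, dt,

and the right panel checks that the learned code z tracks this optimal amplitude, consistent with the encoder integrating over the disk so that z is monotone in the disk area, while the decoder output factorizes into a fixed spatial profile f scaled by a radius-dependent amplitude h.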
|
We study the functioning of autoencoders in a simple setting and advise new strategies for their regularisation in order to obtain better generalisation with latent interpolation in mind for image synthesis.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:321
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We present a simple idea that allows to record a speaker in a given language and synthesize their voice in other languages that they may not even know.
These techniques open a wide range of potential applications such as cross-language communication, language learning or automatic video dubbing.
We call this general problem multi-language speaker-conditioned speech synthesis and we present a simple but strong baseline for it.
Our model architecture is similar to the encoder-decoder Char2Wav model or Tacotron.
The main difference is that, instead of conditioning on characters or phonemes that are specific to a given language, we condition on a shared phonetic representation that is universal to all languages.
This cross-language phonetic representation of text allows to synthesize speech in any language while preserving the vocal characteristics of the original speaker.
Furthermore, we show that fine-tuning the weights of our model allows us to extend our results to speakers outside of the training dataset.
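A schematic of the conditioning described above (our own sketch, not details from the paper; the phoneme inventory size, the embedding dimensions and the acoustic target are assumptions) is:

import torch
import torch.nn as nn

class MultilingualSpeakerTTS(nn.Module):
    # Synthesizer conditioned on a language-agnostic phoneme sequence plus a speaker
    # embedding, so a speaker's voice can be rendered in languages they never spoke.
    def __init__(self, n_universal_phonemes, n_speakers, dim=256, n_mels=80):
        super().__init__()
        self.phoneme_emb = nn.Embedding(n_universal_phonemes, dim)  # shared across languages
        self.speaker_emb = nn.Embedding(n_speakers, dim)            # vocal characteristics
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.to_acoustic = nn.Linear(dim, n_mels)

    def forward(self, phoneme_ids, speaker_id):
        h, _ = self.encoder(self.phoneme_emb(phoneme_ids))
        h = h + self.speaker_emb(speaker_id).unsqueeze(1)  # broadcast over time steps
        out, _ = self.decoder(h)
        return self.to_acoustic(out)

Fine-tuning only the speaker embedding (and possibly the decoder) on a short recording is one way the stated extension to unseen speakers could be realized.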
|
We present a simple idea that allows to record a speaker in a given language and synthesize their voice in other languages that they may not even know.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:322
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The goal of imitation learning (IL) is to enable a learner to imitate expert behavior given expert demonstrations.
Recently, generative adversarial imitation learning (GAIL) has shown significant progress on IL for complex continuous tasks.
However, GAIL and its extensions require a large number of environment interactions during training.
In real-world environments, the more an IL method requires the learner to interact with the environment for better imitation, the more training time it requires, and the more damage it causes to the environments and the learner itself.
We believe that IL algorithms could be more applicable to real-world problems if the number of interactions could be reduced.
In this paper, we propose a model-free IL algorithm for continuous control.
Our algorithm is made up of mainly three changes to the existing adversarial imitation learning (AIL) methods –
(a) adopting off-policy actor-critic (Off-PAC) algorithm to optimize the learner policy,
(b) estimating the state-action value using off-policy samples without learning reward functions, and
(c) representing the stochastic policy function so that its outputs are bounded.
Experimental results show that our algorithm achieves competitive results with GAIL while significantly reducing the environment interactions.
Recent advances in reinforcement learning (RL) have achieved super-human performance on several domains BID20 BID21 .
On most such domains where RL has succeeded, the design of the reward, which specifies what agent behavior is favorable, is obvious to humans.
Conversely, on domains where it is unclear how to design the reward, agents trained by RL algorithms often obtain poor policies and behave worse than what we expect them to do.
Imitation learning (IL) comes into play in such cases.
The goal of IL is to enable the learner to imitate expert behavior given the expert demonstrations without the reward signal.
We are interested in IL because we desire an algorithm that can be applied to real-world problems for which it is often hard to design the reward.
In addition, since it is generally hard to model a variety of real-world environments with an algorithm, and the state-action pairs in a vast majority of real-world applications such as robotics control can be naturally represented in continuous spaces, we focus on model-free IL for continuous control. A wide variety of IL methods have been proposed in the last few decades.
The simplest IL method among those is behavioral cloning (BC) BID23 which learns an expert policy in a supervised fashion without environment interactions during training.
BC can be the first IL option when enough demonstration is available.
However, when only a limited number of demonstrations are available, BC often fails to imitate the expert behavior because of the problem referred to as compounding error BID25 – inaccuracies compound over time and can lead the learner to encounter unseen states in the expert demonstrations.
Since it is often hard to obtain a large number of demonstrations in real-world environments, BC is often not the best choice for real-world IL scenarios. Another widely used approach, which overcomes the compounding error problem, is Inverse Reinforcement Learning (IRL) BID27 BID22 BID0 BID33.
Recently, BID15 have proposed generative adversarial imitation learning (GAIL) which is based on prior IRL works.
Since GAIL has achieved state-of-the-art performance on a variety of continuous control tasks, the adversarial IL (AIL) framework has become a popular choice for IL BID1 BID11 BID16 .
It is known that the AIL methods are more sample efficient than BC in terms of the expert demonstration.
However, as pointed out by BID15 , the existing AIL methods have sample complexity in terms of the environment interaction.
That is, even if enough demonstration is given by the expert before training the learner, the AIL methods require a large number of state-action pairs obtained through the interaction between the learner and the environment 1 .
The sample complexity keeps existing AIL from being employed to real-world applications for two reasons.
First, the more an AIL method requires the interactions, the more training time it requires.
Second, even if the expert safely demonstrated, the learner may have policies that damage the environments and the learner itself during training.
Hence, the more it performs the interactions, the more it raises the possibility of getting damaged.
For real-world applications, we desire algorithms that can reduce the number of interactions while keeping the imitation capability as good as that of the existing AIL methods.
The following three properties of the existing AIL methods may cause the sample complexity in terms of the environment interactions:
(a) Adopting on-policy RL methods, which fundamentally have sample complexity in terms of the environment interactions.
(b) Alternating three optimization processes - learning reward functions, value estimation with the learned reward functions, and RL to update the learner policy using the estimated value. In general, as the number of parameterized functions which are related to each other increases, the training progress may be unstable or slower, and thus more interactions may be performed during training.
(c) Adopting a Gaussian policy as the learner's stochastic policy, which has infinite support on a continuous action space. In common IL settings, we observe the action space of the expert policy from the demonstration, where the expert action can take on values within a bounded (finite) interval. As BID3 suggests, a policy which can select actions outside the bound may slow down the training progress and make the problem harder to solve, and thus more interactions may be performed during training.
In this paper, we propose an IL algorithm for continuous control to improve the sample complexity of the existing AIL methods. Our algorithm is made up of mainly three changes to the existing AIL methods, as follows:
(a) Adopting the off-policy actor-critic (Off-PAC) algorithm BID5 to optimize the learner policy instead of on-policy RL algorithms. Off-policy learning is commonly known as a promising approach to improve the complexity.
(b) Estimating the state-action value using off-policy samples without learning reward functions, instead of using on-policy samples with the learned reward functions. Omitting the reward learning reduces the number of functions to be optimized, which is expected to make training progress stable and faster and thus reduce the number of interactions during training.
(c) Representing the stochastic policy function such that its outputs are bounded, instead of adopting a Gaussian policy (a concrete parameterization is sketched below). Bounding action values may make the problem easier to solve and make the training faster, and thus reduce the number of interactions during training.
Experimental results show that our algorithm enables the learner to imitate the expert behavior as well as GAIL does while significantly reducing the environment interactions. Ablation experimental results show that (a) adopting the off-policy scheme requires about 100 times fewer environment interactions to imitate the expert behavior than on-policy IL algorithms require, (b) omitting the reward learning makes the training stable and faster, and (c) bounding action values makes the training faster.
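As a rough illustration of change (c), the sketch below shows one common way to parameterize a bounded stochastic policy by squashing a Gaussian sample through tanh and rescaling it to the action interval. This is our own example rather than the paper's exact parameterization; the network sizes and the action_low/action_high arguments are assumptions.

import torch
import torch.nn as nn

class BoundedGaussianPolicy(nn.Module):
    # Stochastic policy whose sampled actions always lie inside [low, high].
    def __init__(self, state_dim, action_dim, action_low, action_high, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, action_dim)
        self.log_std = nn.Parameter(torch.zeros(action_dim))
        self.register_buffer("low", torch.as_tensor(action_low, dtype=torch.float32))
        self.register_buffer("high", torch.as_tensor(action_high, dtype=torch.float32))

    def sample(self, state):
        mean = self.mean_head(self.net(state))
        u = mean + self.log_std.exp() * torch.randn_like(mean)  # reparameterized Gaussian draw
        squashed = torch.tanh(u)                                 # now in (-1, 1)
        return self.low + 0.5 * (squashed + 1.0) * (self.high - self.low)  # rescaled to the bounds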
In this paper, we proposed a model-free IL algorithm for continuous control.
Experimental results showed that our algorithm achieves competitive performance with GAIL while significantly reducing the environment interactions. A. Detailed Description of Experiment: TAB0 summarizes the description of each task, the performance of an agent with a random policy, and the performance of the experts.
|
In this paper, we proposed a model-free, off-policy IL algorithm for continuous control. Experimental results showed that our algorithm achieves competitive results with GAIL while significantly reducing the environment interactions.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:323
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The dominant approach to unsupervised "style transfer" in text is based on the idea of learning a latent representation, which is independent of the attributes specifying its "style".
In this paper, we show that this condition is not necessary and is not always met in practice, even with domain adversarial training that explicitly aims at learning such disentangled representations.
We thus propose a new model that controls several factors of variation in textual data where this condition on disentanglement is replaced with a simpler mechanism based on back-translation.
Our method allows control over multiple attributes, like gender, sentiment, product type, etc., and a more fine-grained control on the trade-off between content preservation and change of style with a pooling operator in the latent space.
Our experiments demonstrate that the fully entangled model produces better generations, even when tested on new and more challenging benchmarks comprising reviews with multiple sentences and multiple attributes.
One of the objectives of unsupervised learning is to learn representations of data that enable fine control over the underlying latent factors of variation, e.g., pose and viewpoint of objects in images, or writer style and sentiment of a product review.
In conditional generative modeling, these latent factors are given BID38 BID31 BID9, or automatically inferred via observation of samples from the data distribution BID4 BID15. More recently, several studies have focused on learning unsupervised mappings between two data domains such as images BID39, or words or sentences from different languages BID6 BID26. In this
problem setting, the generative model is conditioned not only on the desired attribute values, but also on a initial input, which it must transform. Generations
should retain as many of the original input characteristics as possible, provided the attribute constraint is not violated. This learning
task is typically unsupervised because no example of an input and its corresponding output with the specified attribute is available during training. The model only
sees random examples and their attribute values. The dominant approach to learn such a mapping in text is via an explicit constraint on disentanglement BID17 BID10 BID37: the learned representation should be invariant to the specified attribute, and retain only attribute-agnostic information about the "content". Changing the style
of an input at test time then amounts to generating an output based on the disentangled latent representation computed from the input and the desired attributes. Disentanglement is
often achieved through an adversarial term in the training objective that aims at making the attribute value unrecoverable from the latent representation. This paper aims to
extend previous studies on "style transfer" along three axes. (i) First, we seek
to gain a better understanding of what is necessary to make things work, and in particular, whether disentanglement is key, or even actually achieved by an adversarial loss in practice. In Sec. 3.1 we provide strong empirical evidence that disentanglement is not necessary to enable control over the factors of variation, and that even a method using adversarial loss to disentangle BID10 does not actually learn representations that are disentangled.
Table 1: Our approach can be applied to many different domains beyond sentiment flipping, as illustrated here with example re-writes by our model on public social media content. The first line in each box is an input provided to the model with the original attribute, followed by its rewrite when given a different attribute value.
(ii) Second, we introduce
a model which replaces the adversarial term with a back-translation BID35 objective which exposes the model to a pseudo-supervised setting, where the model's outputs act as supervised training data for the ultimate task at hand. The resulting model is similar
to recently proposed methods for unsupervised machine translation BID24 BID0 BID44 ), but with two major differences: (a) we use a pooling operator
which is used to control the trade-off between style transfer and content preservation; and (b) we extend this model to support
multiple attribute control. (iii) Finally, in Sec. 4.1 we point
out that current style transfer benchmarks based on collections of user reviews have severe limitations, as they only consider a single attribute control (sentiment), and very small sentences in isolation with noisy labels. To address this issue, we propose a
new set of benchmarks based on existing review datasets, which comprise full reviews, where multiple attributes are extracted from each review.The contributions of this paper are thus: (1) a deeper understanding of the necessary components of style transfer through extensive experiments, resulting in (2) a generic and simple learning framework based on mixing a denoising auto-encoding loss with an online back-translation technique and a novel neural architecture combining a pooling operator and support for multiple attributes, and (3) a new, more challenging and realistic version of existing benchmarks which uses full reviews and multiple attributes per review, as well as a comparison of our approach w.r.t. baselines using both new metrics and human evaluations. We will open-source our code and release
the new benchmark datasets used in this work, as well as our pre-trained classifiers and language models for reproducibility. This will also enable fair empirical comparisons
on automatic evaluation metrics in future work on this problem.
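A compact sketch of the training signal described above (illustrative only; the encoder/decoder interfaces, the corruption function and the pooling choice are assumptions rather than the paper's exact implementation) combines a denoising auto-encoding term with an online back-translation term:

import torch
import torch.nn.functional as F

def dae_bt_loss(enc, dec, pool, corrupt, x, attr, other_attr):
    # dec(z, attr) is assumed to return per-token logits of shape (batch, vocab, length);
    # pool(.) downsamples the encoder states and controls content preservation.
    # Denoising auto-encoding: reconstruct x from a corrupted input, same attributes.
    dae = F.cross_entropy(dec(pool(enc(corrupt(x))), attr), x)
    # Online back-translation: rewrite x with other attributes, then require that the
    # rewrite maps back to x when decoded with the original attributes.
    with torch.no_grad():
        x_rewritten = dec(pool(enc(x)), other_attr).argmax(dim=1)
    bt = F.cross_entropy(dec(pool(enc(x_rewritten)), attr), x)
    return dae + bt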
We present a model that is capable of re-writing sentences conditioned on given attributes, that is not based on a disentanglement criterion as often used in the literature.
We demonstrate our model's ability to generalize to a realistic setting of restaurant/product reviews consisting of several sentences per review.
We also present model components that allow fine-grained control over the trade-off between attribute control versus preserving the content in the input.
Experiments with automatic and human-based metrics show that our model significantly outperforms the current state of the art not only on existing datasets, but also on the large-scale datasets we created.
The source code and benchmarks will be made available to the research community after the reviewing process.
|
A system for rewriting text conditioned on multiple controllable attributes
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:324
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Vanilla RNNs with ReLU activation have a simple structure that is amenable to systematic dynamical systems analysis and interpretation, but they suffer from the exploding vs. vanishing gradients problem.
Recent attempts to retain this simplicity while alleviating the gradient problem are based on proper initialization schemes or orthogonality/unitary constraints on the RNN’s recurrency matrix, which, however, comes with limitations to its expressive power with regards to dynamical systems phenomena like chaos or multi-stability.
Here, we instead suggest a regularization scheme that pushes part of the RNN’s latent subspace toward a line attractor configuration that enables long short-term memory and arbitrarily slow time scales.
We show that our approach excels on a number of benchmarks like the sequential MNIST or multiplication problems, and enables reconstruction of dynamical systems which harbor widely different time scales.
Theories of complex systems in biology and physics are often formulated in terms of sets of stochastic differential or difference equations, i.e. as stochastic dynamical systems (DS).
A long-standing desire is to retrieve these generating dynamical equations directly from observed time series data (Kantz & Schreiber, 2004) .
A variety of machine and deep learning methodologies toward this goal have been introduced in recent years (Chen et al., 2017; Champion et al., 2019; Jordan et al., 2019; Duncker et al., 2019; Ayed et al., 2019; Durstewitz, 2017; Koppe et al., 2019) , many of them based on recurrent neural networks (RNN) which can universally approximate any DS (i.e., its flow field) under some mild conditions (Funahashi & Nakamura, 1993; Kimura & Nakano, 1998) .
However, vanilla RNN as often used in this context are well known for their problems in capturing long-term dependencies and slow time scales in the data (Hochreiter & Schmidhuber, 1997; Bengio et al., 1994) .
In DS terms, this is generally due to the fact that flexible information maintenance over long periods requires precise fine-tuning of model parameters toward 'line attractor' configurations ( Fig. 1) , a concept first propagated in computational neuroscience for addressing animal performance in parametric working memory tasks (Seung, 1996; Seung et al., 2000; Durstewitz, 2003) .
Line attractors introduce directions of zero-flow into the model's state space that enable long-term maintenance of arbitrary values (Fig. 1) .
Specially designed RNN architectures equipped with gating mechanisms and (linear) memory cells have been suggested for solving this issue (Hochreiter & Schmidhuber, 1997; Cho et al., 2014) .
However, from a DS perspective, simpler models that can more easily be analyzed and interpreted in DS terms, and for which more efficient inference algorithms exist that emphasize approximation of the true underlying DS would be preferable.
Recent solutions to the vanishing vs. exploding gradient problem attempt to retain the simplicity of vanilla RNN by initializing or constraining the recurrent weight matrix to be the identity (Le et al., 2015) , orthogonal (Henaff et al., 2016; Helfrich et al., 2018) or unitary (Arjovsky et al., 2016) .
In this way, in a system including piecewise linear (PL) components like rectified-linear units (ReLU), line attractor dimensions are established from the start by construction or ensured throughout training by a specifically parameterized matrix decomposition.
However, for many DS problems, line attractors instantiated by mere initialization procedures may be unstable and quickly dissolve during training.
On the other hand, orthogonal or unitary constraints are too restrictive for reconstructing DS, and more generally from a computational perspective as well (Kerg et al., 2019) : For instance, neither
2) with flow field (grey) and nullclines (set of points at which the flow of one of the variables vanishes, in blue and red).
Insets: Time graphs of z 1 for T = 30 000.
A) Perfect line attractor.
The flow converges to the line attractor from all directions and is exactly zero on the line, thus retaining states indefinitely in the absence of perturbations, as illustrated for 3 example trajectories (green) started from different initial conditions.
B) Slightly detuned line attractor (cf.
Durstewitz (2003) ).
The system's state still converges toward the 'line attractor ghost ' (Strogatz, 2015) , but then very slowly crawls up within the 'attractor tunnel' (green trajectory) until it hits the stable fixed point at the intersection of nullclines.
Within the tunnel, flow velocity is smoothly regulated by the gap between nullclines, thus enabling arbitrary time constants.
Note that along other, not illustrated dimensions of the system's state space the flow may still evolve freely in all directions.
C) Simple 2-unit solution to the addition problem exploiting the line attractor properties of ReLUs in the positive quadrant.
The output unit serves as a perfect integrator, while the input unit will only convey those input values to the output unit that are accompanied by a '1' in the second input stream (see 7.1.1 for complete parameters).
chaotic behavior (that requires diverging directions) nor settings with multiple isolated fixed point or limit cycle attractors are possible.
Here we therefore suggest a different solution to the problem, by pushing (but not strictly enforcing) ReLU-based, piecewise-linear RNN (PLRNN) toward line attractor configurations along some (but not all) directions in state space.
We achieve this by adding special regularization terms for a subset of RNN units to the loss function that promote such a configuration.
We demonstrate that our approach outperforms, or is on par with, LSTM and other, initialization-based, methods on a number of 'classical' machine learning benchmarks (Hochreiter & Schmidhuber, 1997).
More importantly, we demonstrate that while with previous methods it was difficult to capture slow behavior in a DS that exhibits widely different time scales, our new regularization-supported inference efficiently captures all relevant time scales.
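To make the idea of pushing (but not strictly enforcing) a line-attractor configuration more concrete, the following is a minimal sketch of such a regularization term. It assumes a piecewise-linear RNN of the form z_t = A z_{t-1} + W φ(z_{t-1}) + h and penalizes, for a chosen subset of units reg_idx, deviations of the diagonal auto-regressive weights from 1 and of the off-diagonal recurrent weights and biases from 0; the exact penalty used in the paper may differ, and the function and weight names here are illustrative.

```python
import torch

def line_attractor_penalty(A_diag, W, h, reg_idx, tau=(1.0, 1.0, 1.0)):
    """Hedged sketch of a regularization term that nudges a subset of
    piecewise-linear RNN units (indexed by reg_idx) toward a line-attractor
    configuration: diagonal (auto-regressive) weights near 1, off-diagonal
    recurrent weights near 0, and biases near 0.
    A_diag: (M,) diagonal of the linear recurrence, W: (M, M) connection
    matrix, h: (M,) bias; tau holds the three penalty weights."""
    tau_A, tau_W, tau_h = tau
    pen_A = ((A_diag[reg_idx] - 1.0) ** 2).sum()  # push A_ii toward 1
    pen_W = (W[reg_idx, :] ** 2).sum()            # push W_ij toward 0
    pen_h = (h[reg_idx] ** 2).sum()               # push h_i toward 0
    return tau_A * pen_A + tau_W * pen_W + tau_h * pen_h

# usage: total_loss = task_loss + line_attractor_penalty(A.diag(), W, h, reg_idx)
```

Because the penalty is soft, the regularized units can still depart slightly from a perfect line attractor, which is exactly what allows arbitrarily slow (but non-zero) flow along those directions.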
In this work we have introduced a simple solution to the long short-term memory problem in RNN that on the one hand retains the simplicity and tractability of vanilla RNN, yet on the other hand does not curtail the universal computational capabilities of RNN (Koiran et al., 1994; Siegelmann & Sontag, 1995) and their ability to approximate arbitrary DS (Funahashi & Nakamura, 1993; Kimura & Nakano, 1998; Trischler & D'Eleuterio, 2016) .
We achieved this by adding regularization terms to the loss function that encourage the system to form a 'memory subspace', that is, line attractor dimensions (Seung, 1996; Durstewitz, 2003) which would store arbitrary values for, if unperturbed, arbitrarily long periods.
At the same time we did not rigorously enforce this constraint which has important implications for capturing slow time scales in the data: It allows the RNN to slightly depart from a perfect line attractor, which has been shown to constitute a general dynamical mechanism for regulating the speed of flow and thus the learning of arbitrary time constants that are not naturally included qua RNN design (Durstewitz, 2003; 2004) .
This is because as we come infinitesimally close to a line attractor and thus a bifurcation in the system's parameter space, the flow along this direction becomes arbitrarily slow until it vanishes completely in the line attractor configuration (Fig. 1) .
Moreover, part of the RNN's latent space was not regularized at all, leaving the system enough degrees of freedom for realizing arbitrary computations or dynamics.
We showed that the rPLRNN is on par with or outperforms initialization-based approaches and LSTMs on a number of classical benchmarks, and, more importantly, that the regularization strongly facilitates the identification of challenging DS with widely different time scales in PLRNN-based algorithms for DS reconstruction.
Future work will explore a wider range of DS models and empirical data with diverse temporal and dynamical phenomena.
Another future direction may be to replace the EM algorithm by black-box variational inference, using the re-parameterization trick for gradient descent (Kingma & Welling, 2013; Rezende et al., 2014; Chung et al., 2015) .
While this would come with better scaling in M , the number of latent states (the scaling in T is linear for EM as well, see Paninski et al. (2010) ), the EM used here efficiently exploits the model's piecewise linear structure in finding the posterior over latent states and computing the parameters (see Suppl. 7.1.3).
It may thus be more accurate and suitable for smaller-scale problems where high precision is required, as often encountered in neuroscience or physics.
7 SUPPLEMENTARY MATERIAL
7.1 SUPPLEMENTARY TEXT
7.1.1 Simple exact PLRNN solution for addition problem
The exact PLRNN parameter settings (cf. eq. 1) for solving the addition problem with 2 units (cf. Fig. 1C ) are as follows:
|
We develop a new optimization approach for vanilla ReLU-based RNN that enables long short-term memory and identification of arbitrary nonlinear dynamical systems with widely differing time scales.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:325
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In the field of Generative Adversarial Networks (GANs), how to design a stable training strategy remains an open problem.
Wasserstein GANs have largely promoted the stability over the original GANs by introducing Wasserstein distance, but still remain unstable and are prone to a variety of failure modes.
In this paper, we present a general framework named Wasserstein-Bounded GAN (WBGAN), which improves a large family of WGAN-based approaches by simply adding an upper-bound constraint to the Wasserstein term.
Furthermore, we show that WBGAN can reasonably measure the difference of distributions which almost have no intersection.
Experiments demonstrate that WBGAN can stabilize as well as accelerate convergence in the training processes of a series of WGAN-based variants.
Over the past few years, Generative Adversarial Networks (GANs) have shown impressive results in many generative tasks.
They are inspired by the game theory, that two models compete with each other: a generator which seeks to produce samples from the same distribution as the data, and a discriminator whose job is to distinguish between real and generated data.
Both models are forced stronger simultaneously during the training process.
GANs are capable of producing plausible synthetic data across a wide diversity of data modalities, including natural images (Karras et al., 2017; Brock et al., 2018; Lucic et al., 2019) , natural language (Press et al., 2017; Lin et al., 2017; Rajeswar et al., 2017) , music Mogren, 2016; Dong et al., 2017; Dong & Yang, 2018) , etc.
Despite their success, it is often difficult to train a GAN model in a fast and stable way, and researchers are facing issues like vanishing gradients, training instability, mode collapse, etc.
This has led to a proliferation of works that focus on improving the quality of GANs by stabilizing the training procedure (Radford et al., 2015; Salimans et al., 2016; Zhao et al., 2016; Nowozin et al., 2016; Qi, 2017; Deshpande et al., 2018) .
In particular, a variant of GANs based on the Wasserstein distance was introduced, which alleviates the problem of vanishing gradients to some extent.
However, WGANs clip the weights to a fixed range to enforce Lipschitz continuity, which can easily cause over-simplified critic functions (Gulrajani et al., 2017).
To solve this issue, Gulrajani et al. (2017) proposed a gradient penalty method termed WGAN-GP, which replaces the weight clipping in WGANs with a gradient penalty term.
As such, WGAN-GP provides a more stable training procedure and succeeds in a variety of generating tasks.
Based on WGAN-GP, more works (Wei et al., 2018; Petzka et al., 2017; Wu et al., 2018; Mescheder et al., 2018; Thanh-Tung et al., 2019; Kodali et al., 2017; adopt different forms of gradient penalty terms to further improve training stability.
However, it is often observed that such gradient penalty strategy sometimes generate samples with unsatisfying quality, or even do not always converge to the equilibrium point (Mescheder et al., 2018) .
In this paper, we propose a general framework named Wasserstein-Bounded GAN (WBGAN), which improves the stability of WGAN training by bounding the Wasserstein term.
Our key observation is that the instability of WGANs also stems from the dramatic changes of the estimated Wasserstein distance during the initial iterations.
Many previous works just focused on improving the gradient penalty term for stable training, while they ignored the bottleneck of the Wasserstein term.
The proposed training strategy is able to adaptively enforce the Wasserstein term within a certain value, so as to balance the Wasserstein loss and gradient penalty loss dynamically and make the training process more stable.
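As a rough illustration of what bounding the Wasserstein term can look like in a WGAN-GP style critic objective, the sketch below simply caps the empirical Wasserstein estimate at an upper bound before combining it with the gradient penalty. This is an assumed, simplified instantiation for illustration only; the paper's adaptive bound (e.g., estimated via the Sinkhorn distance) and its exact placement in the loss may differ.

```python
import torch

def bounded_critic_loss(d_real, d_fake, gp, bound, lambda_gp=10.0):
    """Hedged sketch (not necessarily the paper's exact formulation): a
    WGAN-GP style critic loss in which the empirical Wasserstein term is
    capped at `bound`, so early in training a rapidly growing Wasserstein
    estimate does not overwhelm the gradient-penalty term.
    d_real, d_fake: critic outputs on real/generated batches; gp: gradient penalty."""
    w_term = d_real.mean() - d_fake.mean()      # empirical Wasserstein estimate
    w_term = torch.clamp(w_term, max=bound)     # upper-bound constraint
    # the critic maximizes the bounded Wasserstein term minus the penalty,
    # i.e. minimizes its negative plus the penalty
    return -w_term + lambda_gp * gp
```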
WBGANs are generalized, which can be instantiated using different kinds of bound estimations, and incorporated into any variant of WGANs to improve the training stability and accelerate the convergence.
Specifically, with Sinkhorn distance (Cuturi, 2013; Genevay et al., 2017) for bound estimation, we test three representative variants of WGANs (WGAN-GP (Gulrajani et al., 2017) , WGANdiv (Wu et al., 2018) , and WGAN-GPReal (Mescheder et al., 2018) ) on the CelebA dataset (Liu et al., 2015) .
As shown in Fig. 1
This paper introduced a general framework called WBGANs, which can be applied to a variety of WGAN variants to stabilize the training process and improve the performance.
We clarify that WBGANs can stabilize the Wasserstein term at the beginning of the iterations, which is beneficial for smoother convergence of WGAN-based methods.
We present an instantiated bound estimation method via Sinkhorn distance and give a theoretical analysis on it.
It remains an open topic on how to set a better bound for higher resolution image generation tasks.
|
Propose an improved framework for WGANs and demonstrate its better performance in theory and practice.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:326
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Generative models such as Variational Auto Encoders (VAEs) and Generative Adversarial Networks (GANs) are typically trained for a fixed prior distribution in the latent space, such as uniform or Gaussian.
After a trained model is obtained, one can sample the Generator in various forms for exploration and understanding, such as interpolating between two samples, sampling in the vicinity of a sample or exploring differences between a pair of samples applied to a third sample.
In this paper, we show that the latent space operations used in the literature so far induce a distribution mismatch between the resulting outputs and the prior distribution the model was trained on.
To address this, we propose to use distribution matching transport maps to ensure that such latent space operations preserve the prior distribution, while minimally modifying the original operation.
Our experimental results validate that the proposed operations give higher quality samples compared to the original operations.
Generative models such as Variational Autoencoders (VAEs) BID6 and Generative Adversarial Networks (GANs) BID3 have emerged as popular techniques for unsupervised learning of intractable distributions.
In the framework of Generative Adversarial Networks (GANs) BID3 , the generative model is obtained by jointly training a generator G and a discriminator D in an adversarial manner.
The discriminator is trained to classify synthetic samples from real ones, whereas the generator is trained to map samples drawn from a fixed prior distribution to synthetic examples which fool the discriminator.
Variational Autoencoders (VAEs) BID6 are also trained for a fixed prior distribution, but this is done through the loss of an Autoencoder that minimizes the variational lower bound of the data likelihood.
For both VAEs and GANs, using some data X we end up with a trained generator G, that is supposed to map latent samples z from the fixed prior distribution to output samples G(z) which (hopefully) have the same distribution as the data.In order to understand and visualize the learned model G(z), it is a common practice in the literature of generative models to explore how the output G(z) behaves under various arithmetic operations on the latent samples z.
In this paper, we show that the operations typically used so far, such as linear interpolation BID3 , spherical interpolation (White, 2016) , vicinity sampling and vector arithmetic BID12 , cause a distribution mismatch between the latent prior distribution and the results of the operations.
This is problematic, since the generator G was trained on a fixed prior and expects to see inputs with statistics consistent with that distribution.
We show that this, somewhat paradoxically, is also a problem if the support of resulting (mismatched) distribution is within the support of a uniformly distributed prior, whose points all have equal likelihood during training.To address this, we propose to use distribution matching transport maps, to obtain analogous latent space operations (e.g. interpolation, vicinity sampling) which preserve the prior distribution of the latent space, while minimally changing the original operation.
In Figure 1 we showcase how our proposed technique gives an interpolation operator which avoids distribution mismatch when interpolating between samples of a uniform distribution.
The points of the (red) matched trajectories samples from prior linear matched (ours) spherical (a) Uniform prior: Trajectories of linear interpolation, our matched interpolation and the spherical interpolation (White, 2016) .
(White, 2016)
Figure 1: We show examples of distribution mismatches induced by the previous interpolation schemes when using a uniform prior in two dimensions.
Our matched interpolation avoids this with a minimal modification to the linear trajectory, traversing through the space such that all points along the path are distributed identically to the prior.are obtained as minimal deviations (in expectation of l 1 distance) from the the points of the (blue) linear trajectory.
We have shown that the common latent space operations used for Generative Models induce distribution mismatch from the prior distribution the models were trained for.
This problem has been mostly ignored by the literature so far, partially due to the belief that this should not be a problem for uniform priors.
However, our statistical and experimental analysis shows that the problem is real, with the operations used so far producing significantly lower quality samples compared to their inputs.
To address the distribution mismatch, we propose to use optimal transport to minimally modify (in l 1 distance) the operations such that they fully preserve the prior distribution.
We give analytical formulas of the resulting (matched) operations for various examples, which are easily implemented.
The matched operators give a significantly higher quality samples compared to the originals, having the potential to become standard tools for evaluating and exploring generative models.
We note that the analysis here can bee seen as a more rigorous version of an observation made by White (2016) , who experimentally show that there is a significant difference between the average norm of the midpoint of linear interpolation and the points of the prior, for uniform and Gaussian distributions.Suppose our latent space has a prior with DISPLAYFORM0 In this case, we can look at the squared norm DISPLAYFORM1 From the Central Limit Theorem (CLT), we know that as d → ∞, DISPLAYFORM2 in distribution.
Thus, assuming d is large enough such that we are close to convergence, we can approximate the distribution of z 2 as N (dµ Z 2 , dσ 2 Z 2 ).
In particular, this implies that almost all points lie on a relatively thin spherical shell, since the mean grows as O(d) whereas the standard deviation grows only as O( DISPLAYFORM3 We note that this property is well known for i.i.d Gaussian entries (see e.g. Ex. 6.14 in MacKay FORMULA5 ).
For Uniform distribution on the hypercube it is also well known that the mass is concentrated in the corner points (which is consistent with the claim here since the corner points lie on a sphere).Now
consider an operator such as the midpoint of linear interpolation, y = DISPLAYFORM4 In this case, we can compute: DISPLAYFORM5 Thus, the distribution of y 2 can be approximated with N ( DISPLAYFORM6 . Therefore
, y also mostly lies on a spherical shell, but with a different radius than z. In fact,
the shells will intersect at regions which have a vanishing probability for large d. In other
words, when looking at the squared norm y 2 , y 2 is a (strong) outlier with respect to the distribution of z 2 .
|
Operations in the GAN latent space can induce a distribution mismatch compared to the training distribution, and we address this using optimal transport to match the distributions.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:327
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Neural program embeddings have shown much promise recently for a variety of program analysis tasks, including program synthesis, program repair, code completion, and fault localization.
However, most existing program embeddings are based on syntactic features of programs, such as token sequences or abstract syntax trees.
Unlike images and text, a program has well-defined semantics that can be difficult to capture by only considering its syntax (i.e. syntactically similar programs can exhibit vastly different run-time behavior), which makes syntax-based program embeddings fundamentally limited.
We propose a novel semantic program embedding that is learned from program execution traces.
Our key insight is that program states expressed as sequential tuples of live variable values not only capture program semantics more precisely, but also offer a more natural fit for Recurrent Neural Networks to model.
We evaluate different syntactic and semantic program embeddings on the task of classifying the types of errors that students make in their submissions to an introductory programming class and on the CodeHunt education platform.
Our evaluation results show that the semantic program embeddings significantly outperform the syntactic program embeddings based on token sequences and abstract syntax trees.
In addition, we augment a search-based program repair system with predictions made from our semantic embedding and demonstrate significantly improved search efficiency.
Recent breakthroughs in deep learning techniques for computer vision and natural language processing have led to a growing interest in their applications in programming languages and software engineering.
Several well-explored areas include program classification, similarity detection, program repair, and program synthesis.
One of the key steps in using neural networks for such tasks is to design suitable program representations for the networks to exploit.
Most existing approaches in the neural program analysis literature have used syntax-based program representations.
BID6 proposed a convolutional neural network over abstract syntax trees (ASTs) as the program representation to classify programs based on their functionalities and detecting different sorting routines.
DeepFix BID4 , SynFix BID1 , and sk p BID9 are recent neural program repair techniques for correcting errors in student programs for MOOC assignments, and they all represent programs as sequences of tokens.
Even program synthesis techniques that generate programs as output, such as RobustFill BID3 , also adopt a token-based program representation for the output decoder.
The only exception is BID8 , which introduces a novel perspective of representing programs using input-output pairs.
However, such representations are too coarse-grained to accurately capture program properties -programs with the same input-output behavior may have very different syntactic characteristics.
Consequently, the embeddings learned from input-output pairs are not precise enough for many program analysis tasks. Although these pioneering efforts have made significant contributions to bridge the gap between deep learning techniques and program analysis tasks, syntax-based program representations are fundamentally limited due to the enormous gap between program syntax (i.e. static expression) and semantics (i.e. dynamic execution).
[Figure 1: Bubble sort and insertion sort (code highlighted in shadow box are the only syntactic differences between the two algorithms). Their execution traces for the input vector A = [8, 5, 1, 4, 3] are displayed on the right, where, for brevity, only values for variable A are shown.]
This gap can be illustrated as follows.
First, when a program is executed at runtime, its statements are almost never interpreted in the order in which the corresponding token sequence is presented to the deep learning models (the only exception being straightline programs, i.e., ones without any control-flow statements).
For example, a conditional statement only executes one branch each time, but its token sequence is expressed sequentially as multiple branches.
Similarly, when iterating over a looping structure at runtime, it is unclear in which order any two tokens are executed when considering different loop iterations.
Second, program dependency (i.e. data and control) is not exploited in token sequences and ASTs despite its essential role in defining program semantics.
FIG0 shows an example using a simple max function.
On line 8, the assignment statement means variable max val is data-dependent on item.
In addition, the execution of this statement depends on the evaluation of the if condition on line 7, i.e., max val is also control-dependent on item as well as itself.
Third, from a pure program analysis standpoint, the gap between program syntax and semantics is manifested in that similar program syntax may lead to vastly different program semantics.
For example, consider the two sorting functions shown in Figure 1 .
Both functions sort the array via two nested loops, compare the current element to its successor, and swap them if the order is incorrect.
However, the two functions implement different algorithms, namely Bubble Sort and Insertion Sort.
Therefore minor syntactic discrepancies can lead to significant semantic differences.
This intrinsic weakness will be inherited by any deep learning technique that adopts a syntax-based program representation.
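To make the contrast with token- and AST-based representations concrete, the following sketch records the kind of dynamic trace argued for here: a sequence of program states (tuples of live variable values) collected while the program runs. It uses Python's sys.settrace purely for illustration; the paper's actual instrumentation and trace format are not specified here, so the helper names and the bubble-sort example are assumptions.

```python
import copy
import sys

def trace_execution(func, *args):
    """Record the program state -- the live local-variable bindings -- after
    each executed line of `func`, yielding the kind of dynamic trace a
    recurrent encoder could consume instead of tokens or ASTs."""
    states = []

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            # deep-copy so later in-place mutations do not overwrite snapshots
            states.append(copy.deepcopy(sorted(frame.f_locals.items())))
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return states

def bubble_sort(a):
    for i in range(len(a)):
        for j in range(len(a) - 1):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

trace = trace_execution(bubble_sort, [8, 5, 1, 4, 3])
print(trace[:3])  # first few program states, e.g. values of a, i, j
```

Two syntactically similar programs (e.g., bubble sort vs. insertion sort) produce clearly different state sequences under such tracing, which is the property the dynamic embedding exploits.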
We have evaluated our dynamic program embeddings in the context of automated program repair.
In particular, we use the program embeddings to classify the type of mistakes students made to their programming assignments based on a set of common error patterns (described in the appendix).
The dataset for the experiments consists of the programming submissions made to Module 2 assignment in Microsoft-DEV204.1X and two additional problems from the Microsoft CodeHunt platform.
The results show that our dynamic embeddings significantly outperform syntax-based program embeddings, including those trained on token sequences and abstract syntax trees.
In addition, we show that our dynamic embeddings can be leveraged to significantly improve the efficiency of a searchbased program corrector SARFGEN 1 BID13 ) (the algorithm is presented in the appendix).
More importantly, we believe that our dynamic program embeddings can be useful for many other program analysis tasks, such as program synthesis, fault localization, and similarity detection. To summarize, the main contributions of this paper are: (1) we show the fundamental limitation of representing programs using syntax-level features; (2) we propose dynamic program embeddings learned from runtime execution traces to overcome key issues with syntactic program representations; (3) we evaluate our dynamic program embeddings for predicting common mistake patterns students make in program assignments, and results show that the dynamic program embeddings outperform state-of-the-art syntactic program embeddings; and (4) we show how the dynamic program embeddings can be utilized to improve an existing production program repair system.
We have presented a new program embedding that learns program representations from runtime execution traces.
We have used the new embeddings to predict error patterns that students make in their online programming submissions.
Our evaluation shows that the dynamic program embeddings significantly outperform those learned via program syntax.
We also demonstrate, via an additional application, that our dynamic program embeddings yield more than 10x speedups compared to an enumerative baseline for search-based program repair.
Beyond neural program repair, we believe that our dynamic program embeddings can be fruitfully utilized for many other neural program analysis tasks such as program induction and synthesis.
for Pc ∈ Pcs do
    // Generates the syntactic discrepencies w.r.t. each Pc
    C(P, Pc) ← DiscrepenciesGeneration(P, Ps)
    // Executing P to extract the dynamic execution trace
    T(P) ← DynamicTraceExtraction(P)
    // Prioritizing subsets of C(P, Pc) through pre-trained model
    C_subs(P, Pc) ← Prioritization(C(P, Pc), T(P), M)
    for C_sub(P, Pc) ∈ C_subs(P, Pc) do
|
A new way of learning semantic program embedding
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:328
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this paper, we propose a generalization of the BN algorithm, diminishing batch normalization (DBN), where we update the BN parameters in a diminishing moving average way.
Batch normalization (BN) is very effective in accelerating the convergence of a neural network training phase that it has become a common practice.
Our proposed DBN algorithm retains the overall structure of the original BN algorithm while introducing a weighted averaging update to some trainable parameters.
We provide an analysis of the convergence of the DBN algorithm that converges to a stationary point with respect to trainable parameters.
Our analysis can be easily generalized for original BN algorithm by setting some parameters to constant.
To the best of the authors' knowledge, this is the first convergence analysis of its kind for training with Batch Normalization.
We analyze a two-layer model with arbitrary activation function.
The primary challenge of the analysis is the fact that some parameters are updated by gradient while others are not.
The convergence analysis applies to any activation function that satisfies our common assumptions.
For the analysis, we also show the sufficient and necessary conditions for the stepsizes and diminishing weights to ensure the convergence.
In the numerical experiments, we use more complex models with more layers and ReLU activation.
We observe that DBN outperforms the original BN algorithm on Imagenet, MNIST, NI and CIFAR-10 datasets with reasonable complex FNN and CNN models.
Deep neural networks (DNN) have shown unprecedented success in various applications such as object detection.
However, it still takes a long time to train a DNN until it converges.
Ioffe & Szegedy identified a critical problem involved in training deep networks, internal covariate shift, and then proposed batch normalization (BN) to decrease this phenomenon.
BN addresses this problem by normalizing the distribution of every hidden layer's input.
In order to do so, it calculates the preactivation mean and standard deviation using mini-batch statistics at each iteration of training and uses these estimates to normalize the input to the next layer.
The output of a layer is normalized by using the batch statistics, and two new trainable parameters per neuron are introduced that capture the inverse operation.
It is now a standard practice Bottou et al. (2016) ; He et al. (2016) .
While this approach leads to a significant performance jump, to the best of our knowledge, there is no known theoretical guarantee for the convergence of an algorithm with BN.
The difficulty of analyzing the convergence of the BN algorithm comes from the fact that not all of the BN parameters are updated by gradients.
Thus, it invalidates most of the classical studies of convergence for gradient methods. In this paper, we propose a generalization of the BN algorithm, diminishing batch normalization (DBN), where we update the BN parameters in a diminishing moving average way.
It essentially means that the BN layer adjusts its output according to all past mini-batches instead of only the current one.
It helps to reduce the problem of the original BN that the output of a BN layer on a particular training pattern depends on the other patterns in the current mini-batch, which was pointed out by Bottou et al.
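A minimal sketch of a diminishing moving-average update for the BN statistics is shown below; the specific weighting sequence theta_t and the set of parameters it is applied to are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def dbn_update(running_mean, running_var, batch, theta_t):
    """Hedged sketch of a diminishing-moving-average BN statistics update.
    theta_t is a step-dependent weight (e.g. theta_t = 1 / t) that shrinks
    over iterations, so the layer's normalization statistics reflect all
    past mini-batches rather than only the current one."""
    mu_b = batch.mean(axis=0)
    var_b = batch.var(axis=0)
    new_mean = (1.0 - theta_t) * running_mean + theta_t * mu_b
    new_var = (1.0 - theta_t) * running_var + theta_t * var_b
    return new_mean, new_var

# With theta_t fixed at 1.0 the layer normalizes with the current mini-batch
# statistics only, which recovers the behaviour of the original BN update.
```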
By setting the layer parameter we introduce into DBN to a specific value, we recover the original BN algorithm. We give a convergence analysis of the algorithm with a two-layer batch-normalized neural network and diminishing stepsizes.
We assume two layers (the generalization to multiple layers can be made by using the same approach but substantially complicating the notation) and an arbitrary loss function.
The convergence analysis applies to any activation function that follows our common assumption.
The main result shows that under diminishing stepsizes on gradient updates and updates on mini-batch statistics, and standard Lipschitz conditions on loss functions DBN converges to a stationary point.
As already pointed out, the primary challenge is the fact that some trainable parameters are updated by gradient while others are updated by a minor recalculation.
Contributions.
The main contribution of this paper is in providing a general convergence guarantee for DBN.
Specifically, we make the following contributions.
• In section 4, we show the sufficient and necessary conditions for the stepsizes and diminishing weights to ensure the convergence of BN parameters.
• We show that the algorithm converges to a stationary point under a general nonconvex objective function.
This paper is organized as follows. In
Section 2, we review the related works and the development of the BN algorithm. We
formally state our model and algorithm in Section 3.
We present our main results in Section 4.
In Section 5, we numerically show that the DBN algorithm outperforms the original BN algorithm. Proofs
for main steps are collected in the Appendix.
|
We propose an extension of batch normalization, show a first-of-its-kind convergence analysis for this extension and show in numerical experiments that it has better performance than the original batch normalization.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:329
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
It has long been assumed that high dimensional continuous control problems cannot be solved effectively by discretizing individual dimensions of the action space due to the exponentially large number of bins over which policies would have to be learned.
In this paper, we draw inspiration from the recent success of sequence-to-sequence models for structured prediction problems to develop policies over discretized spaces.
Central to this method is the realization that complex functions over high dimensional spaces can be modeled by neural networks that predict one dimension at a time.
Specifically, we show how Q-values and policies over continuous spaces can be modeled using a next step prediction model over discretized dimensions.
With this parameterization, it is possible to both leverage the compositional structure of action spaces during learning, as well as compute maxima over action spaces (approximately).
On a simple example task we demonstrate empirically that our method can perform global search, which effectively gets around the local optimization issues that plague DDPG.
We apply the technique to off-policy (Q-learning) methods and show that our method can achieve the state-of-the-art for off-policy methods on several continuous control tasks.
Reinforcement learning has long been considered as a general framework applicable to a broad range of problems.
However, the approaches used to tackle discrete and continuous action spaces have been fundamentally different.
In discrete domains, algorithms such as Q-learning leverage backups through Bellman equations and dynamic programming to solve problems effectively.
These strategies have led to the use of deep neural networks to learn policies and value functions that can achieve superhuman accuracy in several games (Mnih et al., 2013; where actions lie in discrete domains.
This success spurred the development of RL techniques that use deep neural networks for continuous control problems BID12 BID20 .
The gains in these domains, however, have not been as outsized as they have been for discrete action domains.This disparity is, in part, a result of the inherent difficulty in maximizing an arbitrary function on a continuous domain, even in low-dimensional settings.
Furthermore, it becomes harder to apply dynamic programming methods to back up value function estimates from successor states to parent states in continuous control problems.
Several of the recent continuous control reinforcement learning approaches attempt to borrow characteristics from discrete problems by proposing models that allow maximization and backups more easily BID12 .One
way in which continuous control can avail itself of the above advantages is to discretize each of the dimensions of continuous control action spaces. As
noted in , doing this naively, however, would create an exponentially large discrete space of actions. For
example with M dimensions being discretized into N bins, the problem would balloon to a discrete space with M N possible actions.We leverage the recent success of sequence-to-sequence type models BID32 to train such discretized models, without falling into the trap of requiring an exponentially large number of actions. Our
method relies on a technique that was first introduced in BID3 , which allows us to escape the curse of dimensionality in high dimensional spaces by modeling complicated probability distributions using the chain rule decomposition. In
this paper, we similarly parameterize functions of interest -Q-values -using a decomposition of the joint function into a sequence of conditional values tied together with the bellman operator. With
this formulation, we are able to achieve fine-grained discretization of individual domains, without an explosion in the number of parameters; at the same time we can model arbitrarily complex distributions while maintaining the ability to perform (approximate) global maximization. These
benefits come at the cost of shifting the exponentially complex action space into an exponentially complex MDP BID5 BID10 . In many
settings, however, there are relationships between transitions that can be leveraged and large regions of good solutions, which means that this exponential space need not be fully explored. Existing
work using neural networks to perform approximate exponential search is evidence of this BID37 ; BID2 .While this
strategy can be applied to most function approximation settings in RL, we focus on off-policy settings with an algorithm akin to DQN. Empirical
results on an illustrative multimodal problem demonstrate how our model is able to perform global maximization, avoiding the exploration problems faced by algorithms like NAF BID13 and DDPG. We also show
the effectiveness of our method on a range of benchmark continuous control problems from hopper to humanoid.
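The sketch below illustrates the sequential, per-dimension action selection idea with a small recurrent scorer: each discretized dimension is chosen conditioned on the state and the bins already selected, so an (approximate) global argmax over the exponentially large joint action space reduces to a handful of greedy steps. The network sizes, the GRU-based conditioning and the bin-to-value mapping are illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SequentialActionSelector(nn.Module):
    """Hedged sketch: pick a continuous action one discretized dimension at a
    time. A recurrent cell scores the n_bins options for the current
    dimension conditioned on the state and on the bins chosen for earlier
    dimensions; greedily taking the best bin per step approximates the
    global argmax over the joint action space."""
    def __init__(self, state_dim, n_dims, n_bins, hidden=64):
        super().__init__()
        self.n_dims, self.n_bins = n_dims, n_bins
        self.embed_state = nn.Linear(state_dim, hidden)
        self.rnn = nn.GRUCell(n_bins, hidden)
        self.q_head = nn.Linear(hidden, n_bins)

    def greedy_action(self, state):
        h = torch.tanh(self.embed_state(state))           # init hidden with state
        prev = torch.zeros(state.shape[0], self.n_bins)   # no bin chosen yet
        bins = []
        for _ in range(self.n_dims):
            h = self.rnn(prev, h)
            q = self.q_head(h)                            # per-bin values for this dimension
            idx = q.argmax(dim=-1)
            prev = F.one_hot(idx, self.n_bins).float()    # condition the next step on the choice
            bins.append(idx)
        # map bin indices to values in [-1, 1]
        return torch.stack(bins, dim=-1).float() / (self.n_bins - 1) * 2 - 1

selector = SequentialActionSelector(state_dim=11, n_dims=3, n_bins=11)
action = selector.greedy_action(torch.randn(1, 11))
```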
Conceptually, our approach centers on the idea that action selection at each stage can be factored and sequentially selected.
In this work we use 1-D action spaces that are discretized as our base component.
Existing work in the image modeling domain suggests that using a mixture of logistic units BID25 greatly speeds up training and would also satisfy our need for a closed form max.
Additionally, this work imposes a prespecified ordering of actions which may negatively impact training for certain classes of problems (with much larger number of action dimensions).
To address this, we could learn to factor the action space into the sequential order for continuous action spaces or learn to group action sets for discrete action spaces.
Another promising direction is to combine this approximate max action with gradient based optimization procedure.
This would relieve some of the complexity of the modeling task of the maxing network, at the cost of increased compute when sampling from the policy.
Finally, the work presented here is exclusively on off-policy methods.
We chose to focus on these methods due to their sample efficiency.
Use of an sequential policies with discretized actions could also be used as the policy for any stochastic policy optimization algorithm such as TRPO BID27 or A3C (Mnih et al., 2016) .
In this work we present a continuous control algorithm that utilize discretized action spaces and sequential models.
The technique we propose is an off-policy RL algorithm that utilizes sequential prediction and discretization.
We decompose our model into a hierarchy of Q function.
The effectiveness of our method is demonstrated on illustrative and benchmark tasks, as well as on more complex continuous control tasks.
Sampling an Action
|
A method to do Q-learning on continuous action spaces by predicting a sequence of discretized 1-D actions.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:33
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Generative models such as Variational Auto Encoders (VAEs) and Generative Adversarial Networks (GANs) are typically trained for a fixed prior distribution in the latent space, such as uniform or Gaussian.
After a trained model is obtained, one can sample the Generator in various forms for exploration and understanding, such as interpolating between two samples, sampling in the vicinity of a sample or exploring differences between a pair of samples applied to a third sample.
However, the latent space operations commonly used in the literature so far induce a distribution mismatch between the resulting outputs and the prior distribution the model was trained on.
Previous works have attempted to reduce this mismatch with heuristic modification to the operations or by changing the latent distribution and re-training models.
In this paper, we propose a framework for modifying the latent space operations such that the distribution mismatch is fully eliminated.
Our approach is based on optimal transport maps, which adapt the latent space operations such that they fully match the prior distribution, while minimally modifying the original operation.
Our matched operations are readily obtained for the commonly used operations and distributions and require no adjustment to the training procedure.
Generative models such as Variational Autoencoders (VAEs) BID7 and Generative Adversarial Networks (GANs) BID3 have emerged as popular techniques for unsupervised learning of intractable distributions.
In the framework of Generative Adversarial Networks (GANs) BID3 , the generative model is obtained by jointly training a generator G and a discriminator D in an adversarial manner.
The discriminator is trained to classify synthetic samples from real ones, whereas the generator is trained to map samples drawn from a fixed prior distribution to synthetic examples which fool the discriminator.
Variational Autoencoders (VAEs) BID7 are also trained for a fixed prior distribution, but this is done through the loss of an Autoencoder that minimizes the variational lower bound of the data likelihood.
For both VAEs and GANs, using some data X we end up with a trained generator G, that is supposed to map latent samples z from the fixed prior distribution to output samples G(z) which (hopefully) have the same distribution as the data.In order to understand and visualize the learned model G(z), it is a common practice in the literature of generative models to explore how the output G(z) behaves under various arithmetic operations on the latent samples z.
However, the operations typically used so far, such as linear interpolation BID3 , spherical interpolation BID20 , vicinity sampling and vector arithmetic BID12 , cause a distribution mismatch between the latent prior distribution and the results of the operations.
This is problematic, since the generator G was trained on a fixed prior and expects to see inputs with statistics consistent with that distribution. To address this, we propose to use distribution matching transport maps, to obtain analogous latent space operations (e.g. interpolation, vicinity sampling) which preserve the prior distribution of the latent space, while minimally changing the original operation.
[Figure 1: We show examples of distribution mismatches induced by the previous interpolation schemes when using a uniform prior in two dimensions. Our matched interpolation avoids this with a minimal modification to the linear trajectory, traversing through the space such that all points along the path are distributed identically to the prior. Legend: samples from prior, linear, matched (ours), spherical. (a) Uniform prior: trajectories of linear interpolation, our matched interpolation and the spherical interpolation BID20. (e) Spherical midpoint distribution BID20.]
In Figure 1 we showcase how our proposed technique gives an interpolation operator which avoids distribution mismatch when interpolating between samples of a uniform distribution.
The points of the (red) matched trajectories are obtained as minimal deviations (in expectation of l 1 distance) from the points of the (blue) linear trajectory.
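For intuition, the snippet below shows the matched operation in the simplest case of an i.i.d. standard Gaussian prior, where rescaling the linear interpolation restores the prior exactly; the transport-map construction in this paper covers other priors (e.g. uniform) and other operations, so this is only an illustrative special case.

```python
import numpy as np

def matched_linear_interpolation(z1, z2, t):
    """For independent z1, z2 ~ N(0, I), the naive interpolation
    t*z1 + (1-t)*z2 has per-coordinate variance t**2 + (1-t)**2 < 1, so its
    outputs are not distributed like the prior; rescaling by that factor
    restores N(0, I) exactly. Illustration for the Gaussian case only."""
    y = t * z1 + (1.0 - t) * z2
    return y / np.sqrt(t**2 + (1.0 - t)**2)

rng = np.random.default_rng(0)
z1, z2 = rng.standard_normal((2, 10000, 100))
mid_naive = 0.5 * z1 + 0.5 * z2
mid_matched = matched_linear_interpolation(z1, z2, 0.5)
print(np.linalg.norm(mid_naive, axis=1).mean())    # ~ 7.1, shrunken towards the origin
print(np.linalg.norm(mid_matched, axis=1).mean())  # ~ 10.0, matching prior samples
```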
We proposed a framework that fully eliminates the distribution mismatch in the common latent space operations used for generative models.
Our approach uses optimal transport to minimally modify (in l 1 distance) the operations such that they fully preserve the prior distribution.
We give analytical formulas of the resulting (matched) operations for various examples, which are easily implemented.
The matched operators give a significantly higher quality samples compared to the originals, having the potential to become standard tools for evaluating and exploring generative models.
|
We propose a framework for modifying the latent space operations such that the distribution mismatch between the resulting outputs and the prior distribution the generative model was trained on is fully eliminated.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:330
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The problem of building a coherent and non-monotonous conversational agent with proper discourse and coverage is still an area of open research.
Current architectures only take care of semantic and contextual information for a given query and fail to completely account for syntactic and external knowledge which are crucial for generating responses in a chit-chat system.
To overcome this problem, we propose an end to end multi-stream deep learning architecture which learns unified embeddings for query-response pairs by leveraging contextual information from memory networks and syntactic information by incorporating Graph Convolution Networks (GCN) over their dependency parse.
A stream of this network also utilizes transfer learning by pre-training a bidirectional transformer to extract semantic representation for each input sentence and incorporates external knowledge through the neighbourhood of the entities from a Knowledge Base (KB).
We benchmark these embeddings on next sentence prediction task and significantly improve upon the existing techniques.
Furthermore, we use AMUSED to represent queries and responses along with their context to develop a retrieval based conversational agent which has been validated by expert linguists to have comprehensive engagement with humans.
With significant advancements in Automatic speech recognition systems (Hinton et al., 2012; Kumar et al., 2018) and the field of natural language processing, conversational agents have become an important part of the current research.
It finds its usage in multiple domains ranging from self-driving cars (Chen et al., 2017b) to social robots and virtual assistants (Chen et al., 2017a) .
Conversational agents can be broadly classified into two categories: a task oriented chat bot and a chit-chat based system respectively.
The former works towards completion of a certain goal and are specifically designed for domain-specific needs such as restaurant reservations (Wen et al., 2017) , movie recommendation (Dhingra et al., 2017) , flight ticket booking systems ) among many others.
The latter is more of a personal companion and engages in human-computer interaction for entertainment or emotional companionship.
An ideal chit chat system should be able to perform non-monotonous interesting conversation with context and coherence.
Current chit chat systems are either generative (Vinyals & Le, 2015) or retrieval based in nature.
The generative ones tend to generate natural language sentences as responses and enjoy scalability to multiple domains without much change in the network.
Even though easier to train, they suffer from error-prone responses (Zhang et al., 2018b) .
IR based methods select the best response from a given set of answers which makes them error-free.
But, since the responses come from a specific dataset, they might suffer from distribution bias during the course of conversation.
A chit-chat system should capture semantic, syntactic, contextual and external knowledge in a conversation to model human like performance.
Recent work by Bordes et al. (2016) proposed a memory network based approach to encode contextual information for a query while performing generation and retrieval later.
Such networks can capture long term context but fail to encode relevant syntactic information through their model.
Things like anaphora resolution are properly taken care of if we incorporate syntax.
Our work improves upon previous architectures by creating enhanced representations of the conversation using multiple streams which includes Graph Convolution networks (Bruna et al., 2014), transformers (Vaswani et al., 2017) and memory networks (Bordes et al., 2016) in an end to end setting, where each component captures conversation relevant information from queries, subsequently leading to better responses.
[Figure 1: Overview of AMUSED. AMUSED first encodes each sentence by concatenating embeddings (denoted by ⊕) from Bi-LSTM and Syntactic GCN for each token, followed by word attention. The sentence embedding is then concatenated with the knowledge embedding from the Knowledge Module (Figure 2). The query embedding passes through the Memory Module (Figure 3) before being trained using triplet loss. Please see Section 4 for more details.]
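A minimal sketch of the multi-stream encoding and triplet-loss training described above is given below; the simplified one-layer graph convolution (a dense adjacency matmul), the layer sizes and the random toy inputs are illustrative assumptions, and the knowledge and memory modules are omitted.

```python
import torch
import torch.nn as nn

class UnifiedSentenceEncoder(nn.Module):
    """Hedged sketch of the multi-stream idea: per-token features from a
    Bi-LSTM stream and a syntactic (GCN-style) stream are concatenated,
    pooled with word attention, and the resulting query/response embeddings
    are trained with a triplet loss."""
    def __init__(self, vocab, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.bilstm = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.gcn = nn.Linear(dim, dim)       # one simplified graph-convolution layer
        self.att = nn.Linear(3 * dim, 1)     # word attention over concatenated streams

    def forward(self, tokens, adj):
        x = self.emb(tokens)                               # (B, T, d)
        lstm_out, _ = self.bilstm(x)                       # (B, T, 2d)
        gcn_out = torch.relu(self.gcn(torch.bmm(adj, x)))  # (B, T, d) syntactic stream
        h = torch.cat([lstm_out, gcn_out], dim=-1)         # (B, T, 3d)
        w = torch.softmax(self.att(h), dim=1)              # attention weights over tokens
        return (w * h).sum(dim=1)                          # (B, 3d) sentence embedding

encoder = UnifiedSentenceEncoder(vocab=1000)
triplet = nn.TripletMarginLoss(margin=1.0)
adj = torch.eye(12).unsqueeze(0).repeat(4, 1, 1)           # toy dependency adjacency
q = torch.randint(0, 1000, (4, 12))
pos = torch.randint(0, 1000, (4, 12))
neg = torch.randint(0, 1000, (4, 12))
loss = triplet(encoder(q, adj), encoder(pos, adj), encoder(neg, adj))
```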
Our contribution for this paper can be summarized as follows:
• We propose AMUSED, a novel multi stream deep learning model which learns rich unified embeddings for query response pairs using triplet loss as a training metric.
• We perform multi-head attention over query-response pairs which has proven to be much more effective than unidirectional or bi-directional attention.
• We use Graph Convolutions Networks in a chit-chat setting to incorporate the syntactical information in the dialogue using its dependency parse.
• Even with the lack of a concrete metric to judge a conversational agent, our embeddings have shown to perform interesting response retrieval on Persona-Chat dataset.
In the paper, we propose AMUSED, a multi-stream architecture which effectively encodes semantic information from the query while properly utilizing external knowledge for improving performance on natural dialogue.
It also employs GCN to capture long-range syntactic information and improves context-awareness in dialogue by incorporating memory network.
Through our experiments and results using different metrics, we demonstrate that learning these rich representations through smart training (using triplets) would improve the performance of chit-chat systems.
The ablation studies show the importance of different components for a better dialogue.
Our ideas can easily be extended to various conversational tasks which would benefit from such enhanced representations.
|
This paper provides a multi -stream end to end approach to learn unified embeddings for query-response pairs in dialogue systems by leveraging contextual, syntactic, semantic and external information together.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:331
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We examine techniques for combining generalized policies with search algorithms to exploit the strengths and overcome the weaknesses of each when solving probabilistic planning problems.
The Action Schema Network (ASNet) is a recent contribution to planning that uses deep learning and neural networks to learn generalized policies for probabilistic planning problems.
ASNets are well suited to problems where local knowledge of the environment can be exploited to improve performance, but may fail to generalize to problems they were not trained on.
Monte-Carlo Tree Search (MCTS) is a forward-chaining state space search algorithm for optimal decision making which performs simulations to incrementally build a search tree and estimate the values of each state.
Although MCTS can achieve state-of-the-art results when paired with domain-specific knowledge, without this knowledge, MCTS requires a large number of simulations in order to obtain reliable estimates in the search tree.
By combining ASNets with MCTS, we are able to improve the capability of an ASNet to generalize beyond the distribution of problems it was trained on, as well as enhance the navigation of the search space by MCTS.
Planning is the essential ability of a rational agent to solve the problem of choosing which actions to take in an environment to achieve a certain goal.
This paper is mainly concerned with combining the advantages of forward-chaining state space search through UCT BID11 , an instance of Monte-Carlo Tree Search (MCTS) BID5 , with the domain-specific knowledge learned by Action Schema Networks (ASNets) BID18 ), a domain-independent learning algorithm.
By combining UCT and ASNets, we hope to more effectively solve planning problems, and achieve the best of both worlds. The Action Schema Network (ASNet) is a recent contribution in planning that uses deep learning and neural networks to learn generalized policies for planning problems.
A generalized policy is a policy that can be applied to any problem from a given planning domain.
Ideally, this generalized policy is able to reliably solve all problems in the given domain, although this is not always feasible.
ASNets are well suited to problems where "local knowledge of the environment can help to avoid certain traps" BID18 .
In such problems, an ASNet can significantly outperform traditional planners that use heuristic search.
Moreover, a significant advantage of ASNets is that a network can be trained on a limited number of small problems, and generalize to problems of any size.
However, an ASNet is not guaranteed to reliably solve all problems of a given domain.
For example, an ASNet could fail to generalize to difficult problems that it was not trained on -an issue often encountered with machine learning algorithms.
Moreover, the policy learned by an ASNet could be suboptimal due to a poor choice of hyperparameters that has led to an undertrained or overtrained network.
Although our discussion is closely tied to ASNets, our contributions are more generally applicable to any method of learning a (generalized) policy. Monte-Carlo Tree Search (MCTS) is a state-space search algorithm for optimal decision making which relies on performing Monte-Carlo simulations to build a search tree and estimate the values of each state BID5.
As we perform more and more of these simulations, the state estimates become more accurate.
MCTS-based game-playing algorithms have often achieved state-of-the-art performance when paired with domain-specific knowledge, the most notable being AlphaGo (Silver et al. 2016) .
One significant limitation of vanilla MCTS is that we may require a large number of simulations in order to obtain reliable estimates in the search tree.
Moreover, because simulations are random, the search may not be able to sense that certain branches of the tree will lead to sub-optimal outcomes.
We are concerned with UCT, a variant of MCTS that balances the trade-off between exploration and exploitation.
However, our work can be more generally used with other search algorithms. Combining ASNets with UCT achieves three goals.
(1) Learn what we have not learned: improve the capability of an ASNet to generalize beyond the distribution of problems it was trained on, and of UCT to bias the exploration of actions to those that an ASNet wishes to exploit.
(2) Improve on sub-optimal learning: obtain reasonable evaluation-time performance even when an ASNet was trained with suboptimal hyperparameters, and allow UCT to converge to the optimal action in a smaller number of trials.
(3) Be robust to changes in the environment or domain: improve performance when the test environment differs substantially from the training environment.
The rest of the paper is organized as follows.
Section 2 formalizes probabilistic planning as solving a Stochastic Shortest Path problem and gives an overview of ASNets and MCTS along with its variants.
Section 3 defines a framework for Dynamic Programming UCT (DP-UCT) BID10 .
Next, Section 4 examines techniques for combining the policy learned by an ASNet with DP-UCT.
Section 5 then presents and analyzes our results.
Finally, Section 6 summarizes our contributions and discusses related and future work.
In this paper, we have investigated techniques to improve search using generalized policies.
We discussed a framework for DP-UCT, extended from THTS, that allowed us to generate different flavors of DP-UCT including those that exploited the generalized policy learned by an ASNet.
We then introduced methods of using this generalized policy in the simulation function, through STOCHASTIC ASNETS and MAXIMUM ASNETS.
These allowed us to obtain more accurate state-value estimates and action-value estimates in the search tree.
We also extended UCB1 to bias the navigation of the search space towards the actions that an ASNet wants to exploit whilst maintaining the fundamental balance between exploration and exploitation, by introducing SIMPLE-ASNET and RANKED-ASNET action selection. We have demonstrated through our experiments that our algorithms are capable of improving the capability of an ASNet to generalize beyond the distribution of problems it was trained on, as well as improving on sub-optimal learning.
By combining DP-UCT with ASNets, we are able to bias the exploration of actions to those that an ASNet wishes to exploit, and allow DP-UCT to converge to the optimal action in a smaller number of trials.
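The excerpt does not spell out the SIMPLE-ASNET or RANKED-ASNET formulas, so the following is only a hypothetical sketch of the general idea of biasing a UCB1-style action-selection rule with a learned policy prior (in the spirit of PUCT); the influence weight m and all names are assumptions.

```python
import math

def biased_ucb1(node_visits, action_stats, policy_probs, c=1.4, m=0.5):
    """Select an action by UCB1 plus a bonus proportional to a learned policy.

    action_stats: dict action -> (visit_count, mean_value) for the current node
    policy_probs: dict action -> probability assigned by the learned policy (e.g. an ASNet)
    m:            how strongly the learned policy biases the search (hypothetical knob)
    """
    def score(action):
        n_a, q_a = action_stats[action]
        if n_a == 0:
            return float("inf")                       # try unvisited actions first
        explore = c * math.sqrt(math.log(node_visits) / n_a)
        return q_a + explore + m * policy_probs.get(action, 0.0)
    return max(action_stats, key=score)

# Toy usage:
stats = {"a1": (10, 0.3), "a2": (2, 0.5), "a3": (0, 0.0)}
print(biased_ucb1(12, stats, {"a1": 0.7, "a2": 0.2, "a3": 0.1}))
```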
Our experiments have also demonstrated that by harnessing the power of search, we may overcome any misleading information provided by an ASNet due to a change in the environment.
Hence, we achieved the three following goals: (1) Learn what we have not learned, (2) Improve on sub-optimal learning, and (3) Be robust to changes in the environment or domain. It is important to observe that our contributions are more generally applicable to any method of learning a (generalized) policy (not just ASNets), and potentially to other trial-based search algorithms including (L)RTDP. In the deterministic setting, there has been a long tradition of learning generalized policies and using them to guide heuristic Best First Search (BFS).
For instance, Yoon et al. BID20 add the states resulting from selecting actions prescribed by the learned generalized policy to the queue of a BFS guided by a relaxed-plan heuristic, and de la BID7 learn and use generalized policies to generate lookahead states within a BFS guided by the FF heuristic.
These authors observe that generalized policies provide effective search guidance, and that search helps correcting deficiencies in the learned policy.
Search control knowledge à la TLPlan, Talplanner or SHOP2 has been successfully used to prune the search of probabilistic planners BID13 BID17.
More recently, BID15 have also experimented with the use of preferred actions in variants of RTDP BID1 and AO* BID14 , albeit with limited success.
Our work differs from these approaches by focusing explicitly on MCTS as the search algorithm and, unlike existing work combining deep learning and MCTS (e.g. AlphaGo (Silver et al. 2016)), looks not only at using neural network policies as a simulation function for rollouts, but also as a means to bias the UCB1 action selection rule. There are still many potential avenues for future work.
We may investigate how to automatically learn the influence parameter M for SIMPLE-ASNET and RANKED-ASNET action selection, or how to combat bad information provided by an ASNet in a simulation function by mixing ASNet simulations with random simulations.
We may also investigate techniques to interleave planning with learning by using UCT with ASNets as a 'teacher' for training an AS
|
Techniques for combining generalized policies with search algorithms to exploit the strengths and overcome the weaknesses of each when solving probabilistic planning problems
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:332
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Modern deep neural networks can achieve high accuracy when the training distribution and test distribution are identically distributed, but this assumption is frequently violated in practice.
When the train and test distributions are mismatched, accuracy can plummet.
Currently there are few techniques that improve robustness to unforeseen data shifts encountered during deployment.
In this work, we propose a technique to improve the robustness and uncertainty estimates of image classifiers.
We propose AugMix, a data processing technique that is simple to implement, adds limited computational overhead, and helps models withstand unforeseen corruptions.
AugMix significantly improves robustness and uncertainty measures on challenging image classification benchmarks, closing the gap between previous methods and the best possible performance in some cases by more than half.
Current machine learning models depend on the ability of training data to faithfully represent the data encountered during deployment.
In practice, data distributions evolve (Lipton et al., 2018) , models encounter new scenarios (Hendrycks & Gimpel, 2017) , and data curation procedures may capture only a narrow slice of the underlying data distribution (Torralba & Efros, 2011) .
Mismatches between the train and test data are commonplace, yet the study of this problem is not.
As it stands, models do not robustly generalize across shifts in the data distribution.
If models could identify when they are likely to be mistaken, or estimate uncertainty accurately, then the impact of such fragility might be ameliorated.
Unfortunately, modern models already produce overconfident predictions when the training examples are independent and identically distributed to the test distribution.
This overconfidence and miscalibration is greatly exacerbated by mismatched training and testing distributions.
Small corruptions to the data distribution are enough to subvert existing classifiers, and techniques to improve corruption robustness remain few in number.
Hendrycks & Dietterich (2019) show that classification error of modern models rises from 22% on the usual ImageNet test set to 64% on ImageNet-C, a test set consisting of various corruptions applied to ImageNet test images.
Even methods which aim to explicitly quantify uncertainty, such as probabilistic and Bayesian neural networks, struggle under data shift, as recently demonstrated by Ovadia et al. (2019) .
Improving performance in this setting has been difficult.
One reason is that training against corruptions only encourages networks to memorize the specific corruptions seen during training and leaves models unable to generalize to new corruptions (Vasiljevic et al., 2016; Geirhos et al., 2018) .
Further, networks trained on translation augmentations remain highly sensitive to images shifted by a single pixel (Gu et al., 2019; Hendrycks & Dietterich, 2019) .
Others have proposed aggressive data augmentation schemes (Cubuk et al., 2018) , though at the cost of a computational increase.
Prior work demonstrates that many techniques may improve clean accuracy at the cost of robustness, while many techniques which improve robustness harm uncertainty estimation, and vice versa.
In all, existing techniques have considerable trade-offs.
In this work, we propose a technique to improve both the robustness and uncertainty estimates of classifiers under data shift.
We propose AUGMIX, a method which simultaneously achieves new state-of-the-art results for robustness and uncertainty estimation while maintaining or improving accuracy on standard benchmark datasets.
AUGMIX utilizes stochasticity and diverse augmentations, a Jensen-Shannon Divergence consistency loss, and a formulation to mix multiple augmented images to achieve state-of-the-art performance.
On CIFAR-10 and CIFAR-100, our method roughly halves the corruption robustness error of standard training procedures from 28.4% to 12.4% and 54.3% to 37.8% error, respectively.
On ImageNet, AUGMIX also achieves state-of-the-art corruption robustness and decreases perturbation instability from 57.2% to 37.4%.
Code is available at https://github.com/google-research/augmix.
AUGMIX is a data processing technique which mixes randomly generated augmentations and uses a Jensen-Shannon loss to enforce consistency.
Our simple-to-implement technique obtains state-of-the-art performance on CIFAR-10/100-C, ImageNet-C, CIFAR-10/100-P, and ImageNet-P.
AUGMIX models achieve state-of-the-art calibration and can maintain calibration even as the distribution shifts.
We hope that AUGMIX will enable more reliable models, a necessity for models deployed in safety-critical environments.
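To make the consistency objective concrete, here is a minimal sketch of a Jensen-Shannon consistency loss over a clean image and two augmented views; the model, the augmentation pipeline, and the weight lam are assumptions rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def js_consistency(p_clean, p_aug1, p_aug2):
    """Jensen-Shannon divergence among three predictive distributions."""
    m = torch.clamp((p_clean + p_aug1 + p_aug2) / 3.0, 1e-7, 1.0).log()
    return (F.kl_div(m, p_clean, reduction="batchmean")
            + F.kl_div(m, p_aug1, reduction="batchmean")
            + F.kl_div(m, p_aug2, reduction="batchmean")) / 3.0

def augmix_style_loss(model, x_clean, x_aug1, x_aug2, targets, lam=12.0):
    task_loss = F.cross_entropy(model(x_clean), targets)            # usual classification loss
    probs = [F.softmax(model(x), dim=1) for x in (x_clean, x_aug1, x_aug2)]
    return task_loss + lam * js_consistency(*probs)                  # consistency across views
```

Here x_aug1 and x_aug2 would each be produced by mixing several randomly sampled augmentation chains applied to the clean image.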
|
We obtain state-of-the-art robustness to data shifts, and we maintain calibration under data shift even when accuracy drops
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:333
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Automatic Piano Fingering is a hard task which computers can learn using data.
As data collection is hard and expensive, we propose to automate this process by automatically extracting fingerings from public videos and MIDI files, using computer-vision techniques.
Running this process on 90 videos results in the largest dataset for piano fingering with more than 150K notes.
We show that when running a previously proposed model for automatic piano fingering on our dataset and then fine-tuning it on manually labeled piano fingering data, we achieve state-of-the-art results.
In addition to the fingering extraction method, we also introduce a novel method for transferring deep-learning computer-vision models to work on out-of-domain data, by fine-tuning it on out-of-domain augmentation proposed by a Generative Adversarial Network (GAN).
For demonstration, we anonymously release a visualization of the output of our process for a single video on https://youtu.be/Gfs1UWQhr5Q
Learning to play the piano is a hard task taking years to master.
One of the challenging aspects when learning a new piece is the fingering choice in which to play each note.
While beginner booklets contain many fingering suggestions, advanced pieces often contain none or a select few.
Automatic prediction of PIANO-FINGERING can be a useful addition to new piano learners, to ease the learning process of new pieces.
As manually labeling fingering for different sheet music is an exhausting and expensive task, previous work (Parncutt et al., 1997; Hart et al., 2000; Jacobs, 2001; Kasimi et al., 2007; Nakamura et al., 2019) has in practice used very few tagged pieces for evaluation, with minimal or no training data.
In this paper, we propose an automatic, low-cost method for detecting PIANO-FINGERING from piano playing performances captured on videos which allows training modern -data-hungry -neural networks.
We introduce a novel pipeline that adapts and combines several deep learning methods which lead to an automatic labeled PIANO-FINGERING dataset.
Our method can serve two purposes: (1) an automatic "transcript" method that detects PIANO-FINGERING from video and MIDI files, when these are available, and (2) serve as a dataset for training models and then generalize to new pieces.
Given a video and a MIDI file, our system produces a probability distribution over the fingers for each note played.
Running this process on large corpora of piano pieces played by different artists, yields a total of 90 automatically finger-tagged pieces (containing 155,107 notes in total) and results in the first public large scale PIANO-FINGERING dataset, which we name APFD.
This dataset will grow over time, as more videos are uploaded to YouTube.
We provide empirical evidence that APFD is valuable, both by evaluating a model trained on it over manually labeled videos, as well as its usefulness by fine-tuning the model on a manually created dataset, which achieves state-of-the-art results.
The process of extracting PIANO-FINGERING from videos alone is a hard task as it needs to detect keyboard presses, which are often subtle even for the human eye.
We, therefore, turn to MIDI files to obtain this information.
The extraction steps are as follows: We begin by locating the keyboard and identify each key on the keyboard ( §3.2).
Then, we identify the playing hands on top of the keyboard ( §3.3), and detect the fingers given the hands bounding boxes ( §3.4).
Next, we align between the MIDI file and its corresponding video ( §3.6) and finally assign for every pressed note, the finger which was most likely used to play it ( §3.5).
Despite the expectations for steps like hand detection and pose estimation, which have been extensively studied in the computer-vision literature, we find that in practice state-of-the-art models do not excel at these tasks in our scenario.
We therefore address these weaknesses by fine-tuning an object detection model §3.3 on a new dataset we introduce and train a CycleGAN (Zhu et al., 2017) to address the different lighting scenarios with the pose estimation model §3.4.
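To give a flavour of the final assignment step described above, here is a deliberately simplified numpy sketch that assumes earlier stages already provide, for each note onset, the x-position of the pressed key and the x-positions of the ten detected fingertips in the aligned frame; the data layout and names are hypothetical, and the paper outputs a probability distribution rather than a hard choice.

```python
import numpy as np

def assign_fingers(note_key_x, fingertip_x):
    """For each pressed note, pick the finger whose tip is horizontally closest to the key.

    note_key_x:  (num_notes,) x-coordinate of the pressed key at each note onset
    fingertip_x: (num_notes, 10) x-coordinates of the 10 fingertips in the matching frame
    Returns one finger index in [0, 9] per note.
    """
    dists = np.abs(fingertip_x - note_key_x[:, None])   # (num_notes, 10)
    return dists.argmin(axis=1)

# Toy example with two notes and arbitrary fingertip positions:
keys = np.array([105.0, 230.0])
tips = np.tile(np.linspace(80.0, 300.0, 10), (2, 1))
print(assign_fingers(keys, tips))
```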
In this work, we present an automatic method for detecting PIANO-FINGERING from MIDI and video files of a piano performance.
We employ this method on a large set of videos, and create the first large scale PIANO-FINGERING dataset, containing 90 unique pieces, with 155,107 notes in total.
We show this dataset-although being noisy-is valuable, by training a neural network model on it, fine-tuning on a gold dataset, where we achieve state-of-the-art results.
In future work, we intend to improve the data collection by improving the pose-estimation model, better handling high speed movements and the proximity of the hands, which often cause errors in estimating their pose.
Furthermore, we intend to design improved neural models that can take previous fingering predictions into account, in order to have a better global fingering transition.
|
We automatically extract fingering information from videos of piano performances, to be used in automatic fingering prediction models.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:334
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Domain adaptation refers to the problem of leveraging labeled data in a source domain to learn an accurate model in a target domain where labels are scarce or unavailable.
A recent approach for finding a common representation of the two domains is via domain adversarial training (Ganin & Lempitsky, 2015), which attempts to induce a feature extractor that matches the source and target feature distributions in some feature space.
However, domain adversarial training faces two critical limitations:
1) if the feature extraction function has high-capacity, then feature distribution matching is a weak constraint,
2) in non-conservative domain adaptation (where no single classifier can perform well in both the source and target domains), training the model to do well on the source domain hurts performance on the target domain.
In this paper, we address these issues through the lens of the cluster assumption, i.e., decision boundaries should not cross high-density data regions.
We propose two novel and related models:
1) the Virtual Adversarial Domain Adaptation (VADA) model, which combines domain adversarial training with a penalty term that punishes the violation the cluster assumption;
2) the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model as initialization and employs natural gradient steps to further minimize the cluster assumption violation.
Extensive empirical results demonstrate that the combination of these two models significantly improve the state-of-the-art performance on the digit, traffic sign, and Wi-Fi recognition domain adaptation benchmarks.
The development of deep neural networks has enabled impressive performance in a wide variety of machine learning tasks.
However, these advancements often rely on the existence of a large amount of labeled training data.
In many cases, direct access to vast quantities of labeled data for the task of interest (the target domain) is either costly or otherwise absent, but labels are readily available for related training sets (the source domain).
A notable example of this scenario occurs when the source domain consists of richly-annotated synthetic or semi-synthetic data, but the target domain consists of unannotated real-world data (BID28; Vazquez et al., 2014).
However, the source data distribution is often dissimilar to the target data distribution, and the resulting significant covariate shift is detrimental to the performance of the source-trained model when applied to the target domain BID27.
Solving the covariate shift problem of this nature is an instance of domain adaptation BID2.
In this paper, we consider a challenging setting of domain adaptation where 1) we are provided with fully-labeled source samples and completely-unlabeled target samples, and 2) the existence of a classifier in the hypothesis space with low generalization error in both source and target domains is not guaranteed.
Borrowing approximately the terminology from BID2, we refer to this setting as unsupervised, non-conservative domain adaptation.
We note that this is in contrast to conservative domain adaptation, where we assume our hypothesis space contains a classifier that performs well in both the source and target domains.
To tackle unsupervised domain adaptation, BID9 proposed to constrain the classifier to only rely on domain-invariant features.
This is achieved by training the classifier to perform well on the source domain while minimizing the divergence between features extracted from the source versus target domains.
To achieve divergence minimization, BID9 employ domain adversarial training.
We highlight two issues with this approach: 1) when the feature function has high capacity and the source-target supports are disjoint, the domain-invariance constraint is potentially very weak (see Section 3), and 2) good generalization on the source domain hurts target performance in the non-conservative setting.
BID24 addressed these issues by replacing domain adversarial training with asymmetric tri-training (ATT), which relies on the assumption that target samples that are labeled by a source-trained classifier with high confidence are correctly labeled by the source classifier.
In this paper, we consider an orthogonal assumption: the cluster assumption BID5, that the input distribution contains separated data clusters and that data samples in the same cluster share the same class label.
This assumption introduces an additional bias where we seek decision boundaries that do not go through high-density regions.
Based on this intuition, we propose two novel models: 1) the Virtual Adversarial Domain Adaptation (VADA) model, which incorporates an additional virtual adversarial training BID20 and conditional entropy loss to push the decision boundaries away from the empirical data, and 2) the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model, which uses natural gradients to further refine the output of the VADA model while focusing purely on the target domain.
We demonstrate that:
1. In conservative domain adaptation, where the classifier is trained to perform well on the source domain, VADA can be used to further constrain the hypothesis space by penalizing violations of the cluster assumption, thereby improving domain adversarial training.
2. In non-conservative domain adaptation, where we account for the mismatch between the source and target optimal classifiers, DIRT-T allows us to transition from a joint (source and target) classifier (VADA) to a better target domain classifier.
Interestingly, we demonstrate the advantage of natural gradients in DIRT-T refinement steps.
We report results for domain adaptation in digits classification (MNIST-M, MNIST, SYN DIGITS, SVHN), traffic sign classification (SYN SIGNS, GTSRB), general object classification (STL-10, CIFAR-10), and Wi-Fi activity recognition (Yousefi et al., 2017).
We show that, in nearly all experiments, VADA improves upon previous methods and that DIRT-T improves upon VADA, setting new state-of-the-art performances across a wide range of domain adaptation benchmarks.
In adapting MNIST → SVHN, a very challenging task, we out-perform ATT by over 20%.
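As a rough illustration of the cluster-assumption penalty, the sketch below implements the conditional-entropy term and a VADA-style combination of losses; the domain adversarial and virtual adversarial terms are only passed in as stubs, and all weights and names are assumptions rather than the authors' settings.

```python
import torch
import torch.nn.functional as F

def conditional_entropy(logits):
    """Average H(p(y|x)); minimizing it pushes decision boundaries away from
    high-density regions, i.e. it penalizes cluster-assumption violations."""
    p = F.softmax(logits, dim=1)
    return -(p * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

def vada_style_loss(model, x_src, y_src, x_tgt, domain_loss_fn, vat_loss_fn,
                    w_dom=0.1, w_vat=0.1, w_ent=0.1):
    loss = F.cross_entropy(model(x_src), y_src)           # supervised source loss
    loss = loss + w_dom * domain_loss_fn(x_src, x_tgt)    # domain adversarial term (stub)
    loss = loss + w_vat * vat_loss_fn(model, x_tgt)       # virtual adversarial term (stub)
    loss = loss + w_ent * conditional_entropy(model(x_tgt))
    return loss

# Toy usage with no-op stubs for the adversarial terms:
model = torch.nn.Linear(10, 3)
xs, ys, xt = torch.randn(8, 10), torch.randint(0, 3, (8,)), torch.randn(8, 10)
vada_style_loss(model, xs, ys, xt,
                domain_loss_fn=lambda a, b: torch.tensor(0.0),
                vat_loss_fn=lambda m, x: torch.tensor(0.0)).backward()
```

DIRT-T would then continue from such a model, dropping the source-side terms and refining only the target-side cluster-assumption penalty.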
In this paper, we presented two novel models for domain adaptation inspired by the cluster assumption.
Our first model, VADA, performs domain adversarial training with an added term that penalizes violations of the cluster assumption.
Our second model, DIRT-T, is an extension of VADA that recursively refines the VADA classifier by untethering the model from the source training signal and applying approximate natural gradients to further minimize the cluster assumption violation.
Our experiments demonstrate the effectiveness of the cluster assumption: VADA achieves strong performance across several domain adaptation benchmarks, and DIRT-T further improves VADA performance.Our proposed models open up several possibilities for future work.
One possibility is to apply DIRT-T to weakly supervised learning; another is to improve the natural gradient approximation via K-FAC BID18 and PPO BID25 .
Given the strong performance of our models, we also recommend them for other downstream domain adaptation applications.
|
SOTA on unsupervised domain adaptation by leveraging the cluster assumption.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:335
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this paper, we propose Continuous Graph Flow, a generative continuous flow based method that aims to model complex distributions of graph-structured data.
Once learned, the model can be applied to an arbitrary graph, defining a probability density over the random variables represented by the graph.
It is formulated as an ordinary differential equation system with shared and reusable functions that operate over the graphs.
This leads to a new type of neural graph message passing scheme that performs continuous message passing over time.
This class of models offers several advantages: a flexible representation that can generalize to variable data dimensions; ability to model dependencies in complex data distributions; reversible and memory-efficient; and exact and efficient computation of the likelihood of the data.
We demonstrate the effectiveness of our model on a diverse set of generation tasks across different domains: graph generation, image puzzle generation, and layout generation from scene graphs.
Our proposed model achieves significantly better performance compared to state-of-the-art models.
Modeling and generating graph-structured data has important applications in various scientific fields such as building knowledge graphs (Lin et al., 2015; Bordes et al., 2011) , inventing new molecular structures (Gilmer et al., 2017) and generating diverse images from scene graphs (Johnson et al., 2018) .
Being able to train expressive graph generative models is an integral part of AI research.
Significant research effort has been devoted in this direction.
Traditional graph generative methods (Erdős & Rényi, 1959; Leskovec et al., 2010; Albert & Barabási, 2002; Airoldi et al., 2008) are based on rigid structural assumptions and lack the capability to learn from observed data.
Modern deep learning frameworks within the variational autoencoder (VAE) (Kingma & Welling, 2014) formalism offer promise of learning distributions from data.
Specifically, for structured data, research efforts have focused on bestowing VAE-based generative models with the ability to learn structured latent-space models (Lin et al., 2018; He et al., 2018; Kipf & Welling, 2016).
Nevertheless, their capacity is still limited mainly because of the assumptions placed on the form of distributions.
Another class of graph generative models are based on autoregressive methods (You et al., 2018; Kipf et al., 2018) .
These models construct graph nodes sequentially wherein each iteration involves generation of edges connecting a generated node in that iteration with the previously generated set of nodes.
Such autoregressive models have been proven to be the most successful so far.
However, due to the sequential nature of the generation process, the generation suffers from the inability to maintain long-term dependencies in larger graphs.
Therefore, existing methods for graph generation are yet to realize the full potential of their generative power, particularly, the ability to model complex distributions with the flexibility to address variable data dimensions.
Alternatively, for modeling the relational structure in data, graph neural networks (GNNs) or message passing neural networks (MPNNs) (Scarselli et al., 2009; Gilmer et al., 2017; Duvenaud et al., 2015; Kipf & Welling, 2017; Santoro et al., 2017; Zhang et al., 2018) have been shown to be effective in learning generalizable representations over variable input data dimensions.
These models operate on the underlying principle of iterative neural message passing wherein the node representations are updated iteratively for a fixed number of steps.
Hereafter, we use the term message passing to refer to this neural message passing in GNNs.
We leverage this representational ability towards graph generation.
In this paper, we introduce a new class of models, Continuous Graph Flow (CGF): a graph generative model based on continuous normalizing flows (Grathwohl et al., 2019) that generalizes the message passing mechanism in GNNs to continuous time.
Figure 1: Illustration of the evolution of message passing mechanisms from discrete updates (a) to our proposed continuous updates (b). Continuous Graph Flow leverages normalizing flows to transform simple distributions (e.g. Gaussian) at t0 to the target distributions at t1. The distribution of only one graph node is shown here for visualization, but all the node distributions transform over time.
Specifically, to model the continuous-time dynamics of the graph variables, we adopt a neural ordinary differential equation (ODE) formulation.
Our CGF model has both the flexibility to handle variable data dimensions (by using GNNs) and the ability to model arbitrarily complex data distributions due to free-form model architectures enabled by the neural ODE formulation.
Inherently, the ODE formulation also imbues the model with following properties: reversibility and exact likelihood computation.
Concurrent work on Graph Normalizing Flows (GNF) (Liu et al., 2019 ) also proposes a reversible graph neural network using normalizing flows.
However, their model requires a fixed number of transformations.
In contrast, while our proposed CGF is also reversible and memory efficient, the underlying flow model relies on continuous message passing scheme.
Moreover, the message passing in GNF involves partitioning of data dimensions into two halves and employs coupling layers to couple them back.
This leads to several constraints on function forms and model architectures that have a significant impact on performance (Kingma & Dhariwal, 2018) .
In contrast, our CGF model has unconstrained (free-form) Jacobians, enabling it to learn more expressive transformations.
Moreover, other related work, GraphNVP (Madhawa et al., 2019), is also based on normalizing flows, whereas CGF models continuous-time dynamics.
We demonstrate the effectiveness of our CGF-based models on three diverse tasks: graph generation, image puzzle generation, and layout generation based on scene graphs.
Experimental results show that our proposed model achieves significantly better performance than state-of-the-art models.
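A minimal sketch of what message passing generalized to continuous time can look like is given below; it assumes the torchdiffeq package for the ODE solver and a fully connected toy graph, and is not the authors' architecture (in particular it ignores the change-of-variables term needed for exact likelihoods).

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # assumed third-party dependency

class ContinuousMessagePassing(nn.Module):
    """dh/dt = aggregated messages; the same message function is reused at every time t."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh())

    def forward(self, t, h):                          # h: (num_nodes, dim) node states
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        return self.msg(pairs).mean(dim=1)            # mean-aggregate messages from all nodes

h0 = torch.randn(5, 16)                               # 5 nodes with 16-dim states at t0
t = torch.tensor([0.0, 1.0])
h1 = odeint(ContinuousMessagePassing(16), h0, t)[-1]  # node states at t1
```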
In this paper, we presented continuous graph flow, a generative model that generalizes the neural message passing in graphs to continuous time.
We formulated the model as a neural ordinary differential equation system with shared and reusable functions that operate over the graph structure.
We conducted evaluation for a diverse set of generation tasks across different domains: graph generation, image puzzle generation, and layout generation for scene graph.
Experimental results showed that continuous graph flow achieves significant performance improvement over various of state-ofthe-art baselines.
For future work, we will focus on generation tasks for large-scale graphs which is promising as our model is reversible and memory-efficient.
|
Graph generative models based on generalization of message passing to continuous time using ordinary differential equations
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:336
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The practical successes of deep neural networks have not been matched by theoretical progress that satisfyingly explains their behavior.
In this work, we study the information bottleneck (IB) theory of deep learning, which makes three specific claims: first, that deep networks undergo two distinct phases consisting of an initial fitting phase and a subsequent compression phase; second, that the compression phase is causally related to the excellent generalization performance of deep networks; and third, that the compression phase occurs due to the diffusion-like behavior of stochastic gradient descent.
Here we show that none of these claims hold true in the general case.
Through a combination of analytical results and simulation, we demonstrate that the information plane trajectory is predominantly a function of the neural nonlinearity employed: double-sided saturating nonlinearities like tanh yield a compression phase as neural activations enter the saturation regime, but linear activation functions and single-sided saturating nonlinearities like the widely used ReLU in fact do not.
Moreover, we find that there is no evident causal connection between compression and generalization: networks that do not compress are still capable of generalization, and vice versa.
Next, we show that the compression phase, when it exists, does not arise from stochasticity in training by demonstrating that we can replicate the IB findings using full batch gradient descent rather than stochastic gradient descent.
Finally, we show that when an input domain consists of a subset of task-relevant and task-irrelevant information, hidden representations do compress the task-irrelevant information, although the overall information about the input may monotonically increase with training time, and that this compression happens concurrently with the fitting process rather than during a subsequent compression period.
Deep neural networks (Schmidhuber, 2015) are the tool of choice for real-world tasks ranging from visual object recognition BID16, to unsupervised learning BID11 BID19 and reinforcement learning (Silver et al., 2016).
These practical successes have spawned many attempts to explain the performance of deep learning systems BID12 , mostly in terms of the properties and dynamics of the optimization problem in the space of weights (Saxe et al., 2014; BID8 BID1 , or the classes of functions that can be efficiently represented by deep networks BID20 Poggio et al., 2017) .
This paper analyzes a recent inventive proposal to study the dynamics of learning through the lens of information theory (Tishby & Zaslavsky, 2015; Shwartz-Ziv & Tishby, 2017) .
In this view, deep learning is a question of representation learning: each layer of a deep neural network can be seen as a set of summary statistics which contain some but not all of the information present in the input, while retaining as much information about the target output as possible.
The amount of information in a hidden layer regarding the input and output can then be measured over the course of learning, yielding a picture of the optimization process in the information plane.
Crucially, this method holds the promise to serve as a general analysis that can be used to compare different architectures, using the common currency of mutual information.
Moreover, the elegant information bottleneck (IB) theory provides a fundamental bound on the amount of input compression and target output information that any representation can achieve (Tishby et al., 1999) .
The IB bound thus serves as a method-agnostic ideal to which different architectures and algorithms may be compared. A preliminary empirical exploration of these ideas in deep neural networks has yielded striking findings (Shwartz-Ziv & Tishby, 2017).
Most saliently, trajectories in the information plane appear to consist of two distinct phases: an initial "fitting" phase where mutual information between the hidden layers and both the input and output increases, and a subsequent "compression" phase where mutual information between the hidden layers and the input decreases.
It has been hypothesized that this compression phase is responsible for the excellent generalization performance of deep networks, and further, that this compression phase occurs due to the random diffusion-like behavior of stochastic gradient descent. Here we study these phenomena using a combination of analytical methods and simulation.
In Section 2, we show that the compression observed by Shwartz-Ziv & Tishby (2017) arises primarily due to the double-saturating tanh activation function used.
Using simple models, we elucidate the effect of neural nonlinearity on the compression phase.
Importantly, we demonstrate that the ReLU activation function, often the nonlinearity of choice in practice, does not exhibit a compression phase.
We discuss how this compression via nonlinearity is related to the assumption of binning or noise in the hidden layer representation.
To better understand the dynamics of learning in the information plane, in Section 3 we study deep linear networks in a tractable setting where the mutual information can be calculated exactly.
We find that deep linear networks do not compress over the course of training for the setting we examine.
Further, we show a dissociation between generalization and compression.
In Section 4, we investigate whether stochasticity in the training process causes compression in the information plane.
We train networks with full batch gradient descent, and compare the results to those obtained with stochastic gradient descent.
We find comparable compression in both cases, indicating that the stochasticity of SGD is not a primary factor in the observed compression phase.
Moreover, we show that the two phases of SGD occur even in networks that do not compress, demonstrating that the phases are not causally related to compression.
These results may seem difficult to reconcile with the intuition that compression can be necessary to attain good performance: if some input channels primarily convey noise, good generalization requires excluding them.
Therefore, in Section 5 we study a situation with explicitly task-relevant and task-irrelevant input dimensions.
We show that the hidden-layer mutual information with the task-irrelevant subspace does indeed drop during training, though the overall information with the input increases.
However, instead of a secondary compression phase, this task-irrelevant information is compressed at the same time that the taskrelevant information is boosted.
Our results highlight the importance of noise assumptions in applying information theoretic analyses to deep learning systems, and put in doubt the generality of the IB theory of deep learning as an explanation of generalization performance in deep architectures.
Our results suggest that compression dynamics in the information plane are not a general feature of deep networks, but are critically influenced by the nonlinearities employed by the network.
Double-saturating nonlinearities lead to compression, if mutual information is estimated by binning activations or by adding homoscedastic noise, while single-sided saturating nonlinearities like ReLUs do not compress in general.
Consistent with this view, we find that stochasticity in the training process does not contribute to compression in the cases we investigate.
Furthermore, we have found instances where generalization performance does not clearly track information plane behavior, questioning the causal link between compression and generalization.
Hence information compression may parallel the situation with sharp minima: although empirical evidence has shown a correlation with generalization error in certain settings and architectures, further theoretical analysis has shown that sharp minima can in fact generalize well BID9 .
We emphasize that compression still may occur within a subset of the input dimensions if the task demands it.
This compression, however, is interleaved rather than in a secondary phase and may not be visible by information metrics that track the overall information between a hidden layer and the input.
Finally, we note that our results address the specific claims of one scheme to link the information bottleneck principle with current practice in deep networks.
The information bottleneck principle itself is more general and may yet offer important insights into deep networks BID0 .
Moreover, the information bottleneck principle could yield fundamentally new training algorithms for networks that are inherently stochastic and where compression is explicitly encouraged with appropriate regularization terms BID5 BID2.
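For reference, a minimal numpy sketch of the binning-based mutual-information estimate referred to above is given below; the bin count and the equal-width binning are arbitrary choices, not the exact setup used in the experiments.

```python
import numpy as np

def binned_mutual_information(x_ids, activations, n_bins=30):
    """Estimate I(X; T) by discretizing hidden activations T into equal-width bins.

    x_ids:       (n,) integer identity of each input example
    activations: (n, d) hidden-layer activations for those examples
    """
    edges = np.linspace(activations.min(), activations.max(), n_bins + 1)
    t_codes = np.digitize(activations, edges)               # (n, d) integer bin codes
    joint = np.column_stack([x_ids, t_codes])                # (n, d + 1)

    def entropy(rows):
        _, counts = np.unique(rows, axis=0, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()

    return entropy(x_ids.reshape(-1, 1)) + entropy(t_codes) - entropy(joint)

# Toy check with a deterministic hidden layer:
x = np.arange(64)
h = np.tanh(np.outer(x / 64.0, np.ones(8)))
print(binned_mutual_information(x, h))   # estimated I(X; T) in bits, at most H(X) = log2(64) = 6
```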
|
We show that several claims of the information bottleneck theory of deep learning are not true in the general case.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:337
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Over the past four years, neural networks have been proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions.
We show that adversarial vulnerability increases with the gradients of the training objective when viewed as a function of the inputs.
For most current network architectures, we prove that the L1-norm of these gradients grows as the square root of the input size.
These nets therefore become increasingly vulnerable with growing image size.
Our proofs rely on the network’s weight distribution at initialization, but extensive experiments confirm that our conclusions still hold after usual training.
Following the work of BID7 , Convolutional Neural Networks (CNNs) have been found vulnerable to adversarial examples: an adversary can drive the performance of state-of-the art CNNs down to chance level with imperceptible changes of the inputs.
A number of studies have tried to address this issue, but only few have stressed that, because adversarial examples are essentially small input changes that create large output variations, they are inherently caused by large gradients of the neural network with respect to its inputs.
Of course, this view, which we will focus on here, assumes that the network and loss are differentiable.
It has the advantage of yielding a large body of specific mathematical tools, but might not be easily extendable to masked gradients, non-smooth models or the 0-1 loss.
Nevertheless, our conclusions might even hold for non-smooth models, given that the latter can often be viewed as smooth at a coarser level. Contributions.
More specifically, we provide theoretical and empirical arguments supporting the existence of a monotonic relationship between the gradient norm of the training objective (of a differentiable classifier) and its adversarial vulnerability.
Evaluating this norm based on the weight statistics at initialization, we show that CNNs and most feed-forward networks, by design, exhibit increasingly large gradients with input dimension d, almost independently of their architecture.
That leaves them increasingly vulnerable to adversarial noise.
We corroborate our theoretical results by extensive experiments.
Although some of those experiments involve adversarial regularization schemes, our goal is not to advocate a new adversarial defense (these schemes are already known), but to show how their effect can be explained by our first order analysis.
We do not claim to explain all aspects of adversarial vulnerability, but we claim that our first order argument suffices to explain a significant part of the empirical findings on adversarial vulnerability.
This calls for researching the design of neural network architectures with inherently smaller gradients and provides useful guidelines to practitioners and network designers.
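The dependence of the input-gradient norm on the input size is easy to probe empirically; below is a small sketch (architecture, sizes, and sample counts are arbitrary) that measures the average L1-norm of ∂_x L at initialization for increasing input dimension d.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mean_grad_l1(d, n_classes=10, n_samples=64):
    """Average per-sample L1-norm of the input gradient of the loss at initialization."""
    net = nn.Sequential(nn.Linear(d, 256), nn.ReLU(), nn.Linear(256, n_classes))
    x = torch.randn(n_samples, d, requires_grad=True)
    y = torch.randint(0, n_classes, (n_samples,))
    loss = F.cross_entropy(net(x), y, reduction="sum")   # sum so each sample keeps its own scale
    grad, = torch.autograd.grad(loss, x)
    return grad.abs().sum(dim=1).mean().item()

for d in (64, 256, 1024, 4096):
    print(d, mean_grad_l1(d))   # the norms should grow with d, roughly like sqrt(d)
```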
For differentiable classifiers and losses, we showed that adversarial vulnerability increases with the gradients ∂_x L of the loss, which is confirmed by the near-perfect functional relationship between gradient norms and vulnerability (Figure 1). We then evaluated the size of ||∂_x L||_q and showed that, at initialization, usual feed-forward nets (convolutional or fully connected) are increasingly vulnerable to ℓp-attacks with growing input dimension d (the image size), almost independently of their architecture.
Our experiments show that, on the tested architectures, usual training escapes those prior gradient (and vulnerability) properties on the training set, but not on the test set.
BID14 suggest that alleviating this generalization gap requires more data.
But a natural (complementary) alternative would be to search for architectures with naturally smaller gradients, and in particular, with well-behaved priors.
Despite all their limitations (being only first-order, assuming a prior weight-distribution and a differentiable loss and architecture), our theoretical insights may thereby still prove to be precious future allies.
|
Neural nets have large gradients by design; that makes them adversarially vulnerable.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:338
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Effective performance of neural networks depends critically on effective tuning of optimization hyperparameters, especially learning rates (and schedules thereof).
We present Amortized Proximal Optimization (APO), which takes the perspective that each optimization step should approximately minimize a proximal objective (similar to the ones used to motivate natural gradient and trust region policy optimization).
Optimization hyperparameters are adapted to best minimize the proximal objective after one weight update.
We show that an idealized version of APO (where an oracle minimizes the proximal objective exactly) achieves global convergence to stationary point and locally second-order convergence to global optimum for neural networks.
APO incurs minimal computational overhead.
We experiment with using APO to adapt a variety of optimization hyperparameters online during training, including (possibly layer-specific) learning rates, damping coefficients, and gradient variance exponents.
For a variety of network architectures and optimization algorithms (including SGD, RMSprop, and K-FAC), we show that with minimal tuning, APO performs competitively with carefully tuned optimizers.
Tuning optimization hyperparameters can be crucial for effective performance of a deep learning system.
Most famously, carefully selected learning rate schedules have been instrumental in achieving state-of-the-art performance on challenging datasets such as ImageNet BID6 and WMT BID36 .
Even algorithms such as RMSprop BID34 and Adam (Kingma & Ba, 2015) , which are often interpreted in terms of coordinatewise adaptive learning rates, still have a global learning rate parameter which is important to tune.
A wide variety of learning rate schedules have been proposed BID24 BID14 BID2 .
Seemingly unrelated phenomena have been explained in terms of effective learning rate schedules BID35 .
Besides learning rates, other hyperparameters have been identified as important, such as the momentum decay factor BID31, the batch size BID28, and the damping coefficient in second-order methods BID20 BID19.
There have been many attempts to adapt optimization hyperparameters to minimize the training error after a small number of updates BID24 BID1 BID2.
This approach faces two fundamental obstacles: first, learning rates and batch sizes have been shown to affect generalization performance because stochastic updates have a regularizing effect BID5 BID18 BID27 BID35; second, minimizing the short-horizon expected loss encourages taking very small steps to reduce fluctuations at the expense of long-term progress BID37.
While these effects are specific to learning rates, they present fundamental obstacles to tuning any optimization hyperparameter, since basically any optimization hyperparameter somehow influences the size of the updates.
In this paper, we take the perspective that the optimizer's job in each iteration is to approximately minimize a proximal objective which trades off the loss on the current batch with the average change in the predictions.
Specifically, we consider proximal objectives of the form J(φ) = h(f(g(θ, φ))) + λ D(f(θ), f(g(θ, φ))), where f is a model with parameters θ, h is an approximation to the objective function, g is the base optimizer update with hyperparameters φ, and D is a distance metric.
Indeed, approximately solving such a proximal objective motivated the natural gradient algorithm BID0, as well as proximal reinforcement learning algorithms BID26.
We introduce Amortized Proximal Optimization (APO), an approach which adapts optimization hyperparameters to minimize the proximal objective in each iteration.
We use APO to tune hyperparameters of SGD, RMSprop, and K-FAC; the hyperparameters we consider include (possibly layer-specific) learning rates, damping coefficients, and the power applied to the gradient covariances.
Notice that APO has a hyperparameter λ which controls the aggressiveness of the updates.
We believe such a hyperparameter is necessary until the aforementioned issues surrounding stochastic regularization and short-horizon bias are better understood.
However, in practice we find that by performing a simple grid search over λ, we can obtain automatically-tuned learning rate schedules that are competitive with manual learning rate decay schedules.
Furthermore, APO can automatically adapt several optimization hyperparameters with only a single hand-tuned hyperparameter.
We provide theoretical justification for APO by proving strong convergence results for an oracle which solves the proximal objective exactly in each iteration.
In particular, we show global linear convergence and locally quadratic convergence under mild assumptions.
These results motivate the proximal objective as a useful target for meta-optimization.
We evaluate APO on real-world tasks including image classification on MNIST, CIFAR-10, CIFAR-100, and SVHN.
We show that adapting learning rates online via APO yields faster training convergence than the best fixed learning rates for each task, and is competitive with manual learning rate decay schedules.
Although we focus on fast optimization of the training objective, we also find that the solutions found by APO generalize at least as well as those found by fixed hyperparameters or fixed schedules.
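To make the meta-update concrete, here is a heavily simplified sketch of one way to adapt a learning rate by descending on the proximal objective evaluated after a single weight update; the linear model, the squared-error stand-ins for h and D, and λ = 1 are all assumptions, not the paper's exact setup.

```python
import torch

torch.manual_seed(0)
w = torch.randn(5, 1, requires_grad=True)           # model f(x) = x @ w
log_lr = torch.tensor(-3.0, requires_grad=True)     # learn the learning rate in log space
meta_opt = torch.optim.Adam([log_lr], lr=1e-2)
lam = 1.0                                            # aggressiveness hyperparameter (lambda)

x, y = torch.randn(32, 5), torch.randn(32, 1)
for _ in range(100):
    pred = x @ w
    loss = ((pred - y) ** 2).mean()
    grad, = torch.autograd.grad(loss, w, create_graph=True)

    w_candidate = w - torch.exp(log_lr) * grad       # base update g(theta, phi), differentiable in log_lr
    new_pred = x @ w_candidate

    # Proximal objective J(phi) = h(f(g(theta, phi))) + lambda * D(f(theta), f(g(theta, phi))).
    proximal = ((new_pred - y) ** 2).mean() + lam * ((new_pred - pred.detach()) ** 2).mean()

    meta_opt.zero_grad()
    proximal.backward()                              # only log_lr is stepped by the meta-optimizer
    meta_opt.step()

    with torch.no_grad():                            # then actually apply the base update
        w -= torch.exp(log_lr) * grad
```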
We introduced amortized proximal optimization (APO), a method for online adaptation of optimization hyperparameters, including global and per-layer learning rates, and damping parameters for approximate second-order methods.
We evaluated our approach on real-world neural network optimization tasks (training MLP and CNN models) and showed that it converges faster and generalizes better than optimal fixed learning rates.
Empirically, we showed that our method overcomes short-horizon bias and performs well with sensible default values for the meta-optimization parameters.
A PROOF OF THEOREM 1
We first introduce the following lemma.
Lemma 1. Assume the manifold is smooth with C-bounded curvature and that the gradient norm of the loss function L is upper bounded by G. If the effective gradient at a point Z_k ∈ M is g_k, then for any [displayed condition lost in extraction] the stated bound holds.
Proof. We construct a Z satisfying the above inequality. Consider the following point in R^d: [displayed equation lost in extraction]. We show that Z is a point satisfying the inequality in the lemma. Firstly, we notice that [displayed equation lost in extraction]; this is because, when we introduce the extra curve ṽ, [displayed equation lost in extraction], where we use the fact that v̄ = 0 and ||v|| ≤ C. Therefore we have [displayed chain of (in)equalities lost in extraction]: the first equality is by introducing the extra Y, the first inequality is by the triangle inequality, the second equality is by the definition of g_k being ∇_Z L(Z_k) projected onto a plane, the second inequality is due to the above bound on ||Y − Z||, and the last inequality is due to [displayed equation lost in extraction]; therefore [final displayed bound lost in extraction], which completes the proof.
|
We introduce amortized proximal optimization (APO), a method to adapt a variety of optimization hyperparameters online during training, including learning rates, damping coefficients, and gradient variance exponents.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:339
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Model-based reinforcement learning (MBRL) aims to learn a dynamic model to reduce the number of interactions with real-world environments.
However, due to estimation error, rollouts in the learned model, especially those of long horizon, fail to match the ones in real-world environments.
This mismatching has seriously impacted the sample complexity of MBRL.
The phenomenon can be attributed to the fact that previous works employ supervised learning to learn the one-step transition models, which has inherent difficulty ensuring the matching of distributions from multi-step rollouts.
Based on the claim, we propose to learn the synthesized model by matching the distributions of multi-step rollouts sampled from the synthesized model and the real ones via WGAN.
We theoretically show that matching the two can minimize the difference of cumulative rewards between the real transition and the learned one.
Our experiments also show that the proposed model imitation method outperforms the state-of-the-art in terms of sample complexity and average return.
Reinforcement learning (RL) has become of great interest because plenty of real-world problems can be modeled as a sequential decision-making problem.
Model-free reinforcement learning (MFRL) is favored by its capability of learning complex tasks when interactions with environments are cheap.
However, in the majority of real-world problems, such as autonomous driving, interactions are extremely costly, thus MFRL becomes infeasible.
One critique of MFRL is that it does not fully exploit past queries over the environment, and this motivates us to consider model-based reinforcement learning (MBRL).
In addition to learning an agent policy, MBRL also uses the queries to learn the dynamics of the environment that our agent is interacting with.
If the learned dynamic is accurate enough, the agent can acquire the desired skill by simply interacting with the simulated environment, so that the number of samples to collect in the real world can be greatly reduced.
As a result, MBRL has become one of the possible solutions to reduce the number of samples required to learn an optimal policy.
Most previous works of MBRL adopt supervised learning with 2 -based errors (Luo et al., 2019; Kurutach et al., 2018; or maximum likelihood (Janner et al., 2019) , to obtain an environment model that synthesizes real transitions.
These non-trivial developments imply that optimizing a policy on a synthesized environment is a challenging task.
Because the estimation error of the model accumulates as the trajectory grows, it is hard to train a policy on a long synthesized trajectory.
On the other hand, training on short trajectories makes the policy short-sighted.
This issue is known as the planning horizon dilemma (Wang et al., 2019) .
As a result, despite having a strong intuition at first sight, MBRL has to be designed meticulously.
Intuitively, we would like to learn a transition model in a way that it can reproduce the trajectories that have been generated in the real world.
Since the attained trajectories are sampled according to a certain policy, directly employing supervised learning may not necessarily lead to the mentioned result especially when the policy is stochastic.
The resemblance in trajectories matters because we estimate policy gradient by generating rollouts; however, the one-step model learning adopted by many MBRL methods do not guarantee this.
Some previous works propose multi-step training (Luo et al., 2019; Asadi et al., 2019; Talvitie, 2017) ; however, experiments show that model learning fails to benefit much from the multi-step loss.
We attribute this outcome to the essence of supervised learning.
We have pointed out that the state-of-the-art methods concentrate on learning synthesized models in a supervised fashion, which does not guarantee that the policy is able to reproduce a similar trajectory in the learned model and therefore the model may not be accurate enough to estimate long rollouts.
We have proposed to incorporate WGAN to achieve occupancy measure matching between the real transition and the synthesized model and theoretically shown that matching indicates the closeness in cumulative rewards between the synthesized model and the real environment.
To enable stable training across WGANs, we have suggested using a truncated version of WGAN to prevent training from getting stuck at local optimums.
The empirical property of WGAN application such as imitation learning indicates its potential to learn the transition with fewer samples than supervised learning.
We have confirmed it experimentally by further showing that MI converges much faster and obtains better policy than state-of-the-art model-based and model-free algorithms.
|
Our method incorporates WGAN to achieve occupancy measure matching for transition learning.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:34
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Dense word vectors have proven their values in many downstream NLP tasks over the past few years.
However, the dimensions of such embeddings are not easily interpretable.
Out of the d-dimensions in a word vector, we would not be able to understand what high or low values mean.
Previous approaches addressing this issue have mainly focused on either training sparse/non-negative constrained word embeddings, or post-processing standard pre-trained word embeddings.
On the other hand, we analyze conventional word embeddings trained with Singular Value Decomposition, and reveal similar interpretability.
We use a novel eigenvector analysis method inspired from Random Matrix Theory and show that semantically coherent groups not only form in the row space, but also the column space.
This allows us to view individual word vector dimensions as human-interpretable semantic features.
Understanding words has a fundamental impact on many natural language processing tasks, and has been modeled with the Distributional Hypothesis BID0 .
Dense d-dimensional vector representations of words created from this model are often referred to as word embeddings, and have successfully captured similarities between words, such as word2vec and GloVe BID1 BID2 .
They have also been applied to downstream NLP tasks as word representation features, ranging from sentiment analysis to machine translation BID3 BID4 .
Despite their widespread popularity in usage, the dimensions of these word vectors are difficult to interpret BID5 .
Consider w_president = [0.1, 2.4, 0.3] as the 3-dimensional vector of "president" from word2vec.
In this 3-dimensional space (or the row space), semantically similar words like "minister" and "president" are closely located.
However, it is unclear what the dimensions represent, as we do not know the meaning of the 2.4 in w_president.
It is difficult to answer questions like 'what is the meaning of high and low values in the columns of W' and 'how can we interpret the dimensions of word vectors'.
To address this problem, previous literature focused on the column space by either training word embeddings with sparse and non-negative constraints BID6 BID7 BID8 , or post-processing pre-trained word embeddings BID5 BID9 BID10 .
We instead investigate this problem from a random matrix perspective.
In our work, we analyze the eigenvectors of word embeddings obtained with truncated Singular Value Decomposition (SVD) BID11 BID12 of the Positive Pointwise Mutual Information (PPMI) matrix BID13 .
Moreover, we compare this analysis with the row and column space analysis of Skip Gram Negative Sampling (SGNS), a model used to train word2vec BID14 .
From the works of BID15 proving that both SVD and SGNS factorize and approximate the same matrix, we hypothesize that a study of the principal eigenvectors of the PPMI matrix reflects the information contained in SGNS.
Contributions: Without requiring any constraints or post-processing, we show that the dimensions of word vectors can be interpreted as semantic features.
In doing so, we also introduce novel word embedding analysis methods inspired by the literature of eigenvector analysis techniques from Random Matrix Theory.
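As a rough illustration of the pipeline analyzed above (co-occurrence counts, then PPMI, then truncated SVD), here is a minimal NumPy sketch; the toy corpus, window size, and smoothing constant are placeholders, and real experiments would use a large corpus and sparse matrices.

import numpy as np

def ppmi_svd(corpus, window=2, dim=3):
    # Build vocabulary and symmetric co-occurrence counts.
    vocab = sorted({w for sent in corpus for w in sent})
    idx = {w: i for i, w in enumerate(vocab)}
    counts = np.zeros((len(vocab), len(vocab)))
    for sent in corpus:
        for i, w in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if i != j:
                    counts[idx[w], idx[sent[j]]] += 1.0
    # Positive pointwise mutual information.
    total = counts.sum()
    pw = counts.sum(axis=1, keepdims=True) / total
    pc = counts.sum(axis=0, keepdims=True) / total
    pmi = np.log((counts / total + 1e-12) / (pw * pc + 1e-12))
    ppmi = np.maximum(pmi, 0.0)
    # Truncated SVD: rows of U * sqrt(S) give word vectors (row space);
    # the rows of Vt are the eigenvector directions analyzed in the paper.
    U, S, Vt = np.linalg.svd(ppmi)
    vectors = U[:, :dim] * np.sqrt(S[:dim])
    return vocab, vectors, Vt[:dim]

corpus = [["the", "president", "met", "the", "minister"],
          ["the", "minister", "met", "the", "president"]]
vocab, vecs, eigvecs = ppmi_svd(corpus)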
In this work, we analyzed the eigenvectors, or the column space, of the word embeddings obtained from the Singular Value Decomposition of PPMI matrix.
We revealed that the significant participants of the eigenvectors form semantically coherent groups, allowing us to view each word vector as an interpretable feature vector composed of semantic groups.
These results can be very useful in error analysis in downstream NLP tasks, or cherry-picking useful feature dimensions to easily create compressed and efficient task-specific embeddings.
Future work will proceed in this direction on applying interpretability to practical usage.
|
Without requiring any constraints or post-processing, we show that the salient dimensions of word vectors can be interpreted as semantic features.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:340
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Neural networks in the brain and in neuromorphic chips confer systems with the ability to perform multiple cognitive tasks.
However, both kinds of networks experience a wide range of physical perturbations, ranging from damage to edges of the network to complete node deletions, that ultimately could lead to network failure.
A critical question is to understand how the computational properties of neural networks change in response to node-damage and whether there exist strategies to repair these networks in order to compensate for performance degradation.
Here, we study the damage-response characteristics of two classes of neural networks, namely multilayer perceptrons (MLPs) and convolutional neural networks (CNNs) trained to classify images from MNIST and CIFAR-10 datasets respectively.
We also propose a new framework to discover efficient repair strategies to rescue damaged neural networks.
The framework involves defining damage and repair operators for dynamically traversing the neural networks loss landscape, with the goal of mapping its salient geometric features.
Using this strategy, we discover features that resemble path-connected attractor sets in the loss landscape.
We also identify that a dynamic recovery scheme, where networks are constantly damaged and repaired, produces a group of networks resilient to damage as it can be quickly rescued.
Broadly, our work shows that we can design fault-tolerant networks by applying on-line retraining consistently during damage for real-time applications in biology and machine learning.
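A minimal PyTorch sketch (not the authors' code) of the damage-and-repair cycle described above: damage deletes a random subset of hidden units by zeroing their outgoing weights, and repair is a short burst of on-line retraining. The deletion fraction, the number of repair steps, and the model, loader, optimizer, and loss_fn objects are illustrative assumptions.

import torch
import torch.nn as nn

def damage(layer, frac=0.1):
    # Node deletion: zero out the outgoing weights and bias of a random
    # fraction of a linear layer's units, simulating node failure.
    # (A persistent mask would be needed to keep units dead permanently;
    # here damage and repair are simply interleaved, as in the dynamic scheme.)
    with torch.no_grad():
        n_dead = int(frac * layer.out_features)
        dead = torch.randperm(layer.out_features)[:n_dead]
        layer.weight[dead] = 0.0
        layer.bias[dead] = 0.0
    return dead

def repair(model, loader, optimizer, loss_fn, steps=100):
    # Repair operator: a few retraining steps applied right after a damage event.
    model.train()
    for step, (x, y) in enumerate(loader):
        if step >= steps:
            break
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()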
|
strategy to repair damaged neural networks
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:341
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Automatic question generation from paragraphs is an important and challenging problem, particularly due to the long context from paragraphs.
In this paper, we propose and study two hierarchical models for the task of question generation from paragraphs.
Specifically, we propose
(a) a novel hierarchical BiLSTM model with selective attention and
(b) a novel hierarchical Transformer architecture, both of which learn hierarchical representations of paragraphs.
We model a paragraph in terms of its constituent sentences, and a sentence in terms of its constituent words.
While the introduction of the attention mechanism benefits the hierarchical BiLSTM model, the hierarchical Transformer, with its inherent attention and positional encoding mechanisms, also performs better than the flat Transformer model.
We conducted empirical evaluation on the widely used SQuAD and MS MARCO datasets using standard metrics.
The results demonstrate the overall effectiveness of the hierarchical models over their flat counterparts.
Qualitatively, our hierarchical models are able to generate fluent and relevant questions.
Question Generation (QG) from text has gained significant popularity in recent years in both academia and industry, owing to its wide applicability in a range of scenarios including conversational agents, automating reading comprehension assessment, and improving question answering systems by generating additional training data.
Neural network based methods represent the stateof-the-art for automatic question generation.
These models do not require templates or rules, and are able to generate fluent, high-quality questions.
Most of the work in question generation takes sentences as input (Du & Cardie, 2018; Kumar et al., 2018; Song et al., 2018; Kumar et al., 2019 ).
QG at the paragraph level is much less explored and it has remained a challenging problem.
The main challenges in paragraph-level QG stem from the larger context that the model needs to assimilate in order to generate relevant questions of high quality.
Existing question generation methods are typically based on recurrent neural networks (RNN), such as bi-directional LSTM.
Equipped with different enhancements such as the attention, copy and coverage mechanisms, RNN-based models (Du et al., 2017; Kumar et al., 2018; Song et al., 2018) achieve good results on sentence-level question generation.
However, due to their ineffectiveness in dealing with long sequences, paragraph-level question generation remains a challenging problem for these models.
Recently, Zhao et al. (2018) proposed a paragraph-level QG model with maxout pointers and a gated self-attention encoder.
To the best of our knowledge this is the only model that is designed to support paragraph-level QG and outperforms other models on the SQuAD dataset (Rajpurkar et al., 2016) .
One straightforward extension to such a model would be to reflect the structure of a paragraph in the design of the encoder.
Our first attempt is indeed a hierarchical BiLSTM-based paragraph encoder ( HPE ), wherein, the hierarchy comprises the word-level encoder that feeds its encoding to the sentence-level encoder.
Further, dynamic paragraph-level contextual information in the BiLSTM-HPE is incorporated via both word-and sentence-level selective attention.
However, LSTM is based on the recurrent architecture of RNNs, making the model somewhat rigid and less dynamically sensitive to different parts of the given sequence.
Also LSTM models are slower to train.
In our case, a paragraph is a sequence of sentences and a sentence is a sequence of words.
The Transformer (Vaswani et al., 2017 ) is a recently proposed neural architecture designed to address some deficiencies of RNNs.
Specifically, the Transformer is based on the (multi-head) attention mechanism, completely discarding recurrence in RNNs.
This design choice allows the Transformer to effectively attend to different parts of a given sequence.
Also Transformer is relatively much faster to train and test than RNNs.
As humans, when reading a paragraph, we look for important sentences first and then important keywords in those sentences to find a concept around which a question can be generated.
Taking this inspiration, we give the same power to our model by incorporating word-level and sentence-level selective attention to generate high-quality questions from paragraphs.
In this paper, we present and contrast novel approaches to QG at the level of paragraphs.
Our main contributions are as follows:
• We present two hierarchical models for encoding the paragraph based on its structure.
We analyse the effectiveness of these models for the task of automatic question generation from paragraphs.
• Specifically, we propose a novel hierarchical Transformer architecture.
At the lower level, the encoder first encodes words and produces a sentence-level representation.
At the higher level, the encoder aggregates the sentence-level representations and learns a paragraph-level representation (a minimal sketch of this two-level encoding is given after this list).
• We also propose a novel hierarchical BiLSTM model with selective attention, which learns to attend to important sentences and words from the paragraph that are relevant to generate meaningful and fluent questions about the encoded answer.
• We also present attention mechanisms for dynamically incorporating contextual information in the hierarchical paragraph encoders and experimentally validate their effectiveness.
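To make the two-level encoding above concrete, here is a minimal PyTorch sketch of a hierarchical paragraph encoder; it is a simplified stand-in for the models described in this paper, not the authors' code. A word-level BiLSTM produces one vector per sentence (via mean pooling, an assumption for brevity), and a sentence-level BiLSTM contextualizes these at the paragraph level. All dimensions are placeholders.

import torch
import torch.nn as nn

class HierarchicalParagraphEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Lower level: encodes the words of each sentence.
        self.word_enc = nn.LSTM(emb_dim, hid, batch_first=True, bidirectional=True)
        # Higher level: encodes the sequence of sentence vectors.
        self.sent_enc = nn.LSTM(2 * hid, hid, batch_first=True, bidirectional=True)

    def forward(self, paragraph):
        # paragraph: LongTensor of shape (num_sentences, max_words) of word ids.
        words = self.embed(paragraph)                   # (S, W, emb_dim)
        word_states, _ = self.word_enc(words)           # (S, W, 2*hid)
        sent_vecs = word_states.mean(dim=1)             # mean-pool words -> (S, 2*hid)
        para_states, _ = self.sent_enc(sent_vecs.unsqueeze(0))  # (1, S, 2*hid)
        return para_states.squeeze(0)                   # paragraph-contextualized
                                                        # sentence representations

enc = HierarchicalParagraphEncoder(vocab_size=10000)
dummy = torch.randint(0, 10000, (4, 12))  # 4 sentences, 12 word ids each
out = enc(dummy)                          # shape (4, 512)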
In Table 1 and Table 2 we present automatic evaluation results of all models on SQuAD and MS MARCO datasets respectively.
We present human evaluation results in Table 3 and Table 4 respectively.
Table 4 caption: Human evaluation results (column "Score") as well as inter-rater agreement (column "Kappa") for each model on the MS MARCO test set; the scores are between 0-100, 0 being the worst and 100 being the best; best results for each metric (column) are bolded; the three evaluation criteria are: (1) syntactically correct (Syntax), (2) semantically correct (Semantics), and (3) relevant to the text (Relevance).
A number of interesting observations can be made from the automatic evaluation results in Table 1 and Table 2:
• Overall, the hierarchical BiLSTM model HierSeq2Seq + AE shows the best performance, achieving the best results on the BLEU2-BLEU4 metrics on both the SQuAD and MS MARCO datasets, whereas the hierarchical Transformer model TransSeq2Seq + AE performs best on BLEU1 and ROUGE-L on the SQuAD dataset.
• Compared to the flat LSTM and Transformer models, their respective hierarchical counterparts always perform better on both the SQuAD and MS MARCO datasets.
• On the MS MARCO dataset, we observe the best consistent performance using the hierarchical BiLSTM models on all automatic evaluation metrics.
• On the MS MARCO dataset, the two LSTM-based models outperform the two Transformer-based models.
Interestingly, human evaluation results, as tabulated in Table 3 and Table 4 , demonstrate that the hierarchical Transformer model TransSeq2Seq + AE outperforms all the other models on both datasets in both syntactic and semantic correctness.
However, the hierarchical BiLSTM model HierSeq2Seq + AE achieves best, and significantly better, relevance scores on both datasets.
From the evaluation results, we can see that our proposed hierarchical models demonstrate benefits over their respective flat counterparts in a significant way.
Thus, for paragraph-level question generation, the hierarchical representation of paragraphs is a worthy pursuit.
Moreover, the Transformer architecture shows great potential over the more traditional RNN models such as BiLSTM as shown in human evaluation.
Thus the continued investigation of hierarchical Transformer is a promising research avenue.
In the Appendix, in Section B, we present several examples that illustrate the effectiveness of our Hierarchical models.
In Section C of the appendix, we present some failure cases of our model, along with plausible explanations.
We proposed two hierarchical models for the challenging task of question generation from paragraphs, one of which is based on a hierarchical BiLSTM model and the other is a novel hierarchical Transformer architecture.
We perform extensive experimental evaluation on the SQuAD and MS MARCO datasets using standard metrics.
Results demonstrate the hierarchical representations to be overall much more effective than their flat counterparts.
The hierarchical models for both Transformer and BiLSTM clearly outperform their flat counterparts on all metrics in almost all cases.
Further, our experimental results validate that hierarchical selective attention benefits the hierarchical BiLSTM model.
Qualitatively, our hierarchical models also exhibit better capability of generating fluent and relevant questions.
|
Automatic question generation from paragraph using hierarchical models
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:342
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Many deployed learned models are black boxes: given input, returns output.
Internal information about the model, such as the architecture, optimisation procedure, or training data, is not disclosed explicitly as it might contain proprietary information or make the system more vulnerable.
This work shows that such attributes of neural networks can be exposed from a sequence of queries.
This has multiple implications.
On the one hand, our work exposes the vulnerability of black-box neural networks to different types of attacks -- we show that the revealed internal information helps generate more effective adversarial examples against the black box model.
On the other hand, this technique can be used for better protection of private content from automatic recognition models using adversarial examples.
Our paper suggests that it is actually hard to draw a line between white box and black box models.
Black-box models take a sequence of query inputs, and return corresponding outputs, while keeping internal states such as model architecture hidden.
They are deployed as black boxes usually on purpose -for protecting intellectual properties or privacy-sensitive training data.
Our work aims at inferring information about the internals of black box models -ultimately turning them into white box models.
Such a reverse-engineering of a black box model has many implications.
On the one hand, it has legal implications to intellectual properties (IP) involving neural networks -internal information about the model can be proprietary and a key IP, and the training data may be privacy sensitive.
Disclosing hidden details may also render the model more susceptible to attacks from adversaries.
On the other hand, gaining information about a black-box model can be useful in other scenarios.
E.g. there has been work on utilising adversarial examples for protecting private regions (e.g. faces) in photographs from automatic recognisers BID12 .
In such scenarios, gaining more knowledge on the recognisers will increase the chance of protecting one's privacy.
Either way, it is a crucial research topic to investigate the type and amount of information that can be gained from a black-box access to a model.
We make a first step towards understanding the connection between white box and black box approaches, which were previously thought of as distinct classes.
We introduce the term "model attributes" to refer to various types of information about a trained neural network model.
We group them into three types: (1) architecture (e.g. type of non-linear activation), (2) optimisation process (e.g. SGD or ADAM?), and (3) training data (e.g. which dataset?).
We approach the problem as a standard supervised learning task applied over models.
First, collect a diverse set of white-box models ("meta-training set") that are expected to be similar to the target black box at least to a certain extent.
Then, over the collected meta-training set, train another model ("metamodel") that takes a model as input and returns the corresponding model attributes as output.
Importantly, since we want to predict attributes at test time for black-box models, the only information available for attribute prediction is the query input-output pairs.
As we will see in the experiments, such input-output pairs allow us to predict model attributes surprisingly well.
In summary, we contribute: (1) an investigation of the type and amount of internal information about the black-box model that can be extracted from querying; (2) novel metamodel methods that not only reason over outputs from static query inputs, but also actively optimise query inputs that can extract more information; (3) a study of factors like the size of the meta-training set, the quantity and quality of queries, and the dissimilarity between the meta-training models and the test black box (generalisability); (4) empirical verification that the revealed information leads to greater susceptibility of a black-box model to an adversarial example based attack.
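A minimal sketch of the metamodel idea described above (a simplified stand-in, not the kennen implementation): each meta-training model is probed with a fixed set of query inputs, its outputs are concatenated into a feature vector, and an off-the-shelf classifier is trained to predict one attribute such as the activation type. The helper get_model_outputs and the attribute labels are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

def build_features(models, queries, get_model_outputs):
    # get_model_outputs(model, queries) is assumed to return an array of shape
    # (num_queries, num_classes) of softmax outputs (or one-hot predictions in
    # the single-label setting).
    return np.stack([get_model_outputs(m, queries).ravel() for m in models])

def train_metamodel(train_models, train_attr_labels, queries, get_model_outputs):
    X = build_features(train_models, queries, get_model_outputs)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, train_attr_labels)       # e.g. labels in {"relu", "tanh", ...}
    return clf

def predict_blackbox_attribute(clf, blackbox, queries, get_model_outputs):
    # At test time, only the black box's query input-output pairs are used.
    x = get_model_outputs(blackbox, queries).ravel().reshape(1, -1)
    return clf.predict(x)[0]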
We have verified through our novel kennen metamodels that black-box access to a neural network exposes much internal information.
We have shown that only 100 single-label outputs already reveal a great deal about a black box.
When the black-box classifier is quite different from the meta-training classifiers, the performance of our best metamodel, kennen-io, decreases; however, the prediction accuracy for black box internal information is still surprisingly high.
Our metamodel can predict architecture families for ImageNet classifiers with high accuracy.
We additionally show that this reverse-engineering enables more focused attack on black-boxes.
We have presented first results on the inference of diverse neural network attributes from a sequence of input-output queries.
Our novel metamodel methods, kennen, can successfully predict attributes related not only to the architecture but also to training hyperparameters (optimisation algorithm and dataset) even in difficult scenarios (e.g. single-label output, or a distribution gap between the metatraining models and the target black box).
We have additionally shown in ImageNet experiments that reverse-engineering a black box makes it more vulnerable to adversarial examples.
|
Querying a black-box neural network reveals a lot of information about it; we propose novel "metamodels" for effectively extracting information from a black box.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:343
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Inverse reinforcement learning (IRL) is used to infer the reward function from the actions of an expert running a Markov Decision Process (MDP).
A novel approach using variational inference for learning the reward function is proposed in this research.
Using this technique, the intractable posterior distribution of the continuous latent variable (the reward function in this case) is analytically approximated so as to stay as close as possible to the prior belief while reconstructing the future state conditioned on the current state and action.
The reward function is derived using a well-known deep generative model known as Conditional Variational Auto-encoder (CVAE) with Wasserstein loss function, thus referred to as Conditional Wasserstein Auto-encoder-IRL (CWAE-IRL), which can be analyzed as a combination of the backward and forward inference.
This can then form an efficient alternative to the previous approaches to IRL while having no knowledge of the system dynamics of the agent.
Experimental results on standard benchmarks such as objectworld and pendulum show that the proposed algorithm can effectively learn the latent reward function in complex, high-dimensional environments.
Reinforcement learning, formalized as Markov decision process (MDP), provides a general solution to sequential decision making, where given a state, the agent takes an optimal action by maximizing the long-term reward from the environment Bellman (1957) .
However, in practice, defining a reward function that weighs the features of the state correctly can be challenging, and techniques like reward shaping are often used to solve complex real-world problems Ng et al. (1999) .
The process of inferring the reward function given the demonstrations by an expert is defined as inverse reinforcement learning (IRL) or apprenticeship learning Ng et al. (2000) ; Abbeel & Ng (2004) .
The fundamental problem with IRL lies in the fact that the algorithm is under defined and infinitely different reward functions can yield the same policy Finn et al. (2016) .
Previous approaches have used preferences on the reward function to address the non-uniqueness.
Ng et al. (2000) suggested reward function that maximizes the difference in the values of the expert's policy and the second best policy.
Ziebart et al. (2008) adopted the principle of maximum entropy for learning the policy whose feature expectations are constrained to match those of the expert's.
Ratliff et al. (2006) applied the structured max-margin optimization to IRL and proposed a method for finding the reward function that maximizes the margin between expert's policy and all other policies.
Neu & Szepesvári (2009) unified a direct method that minimizes deviation from the expert's behavior and an indirect method that finds an optimal policy from the learned reward function using IRL.
Syed & Schapire (2008) used a game-theoretic framework to find a policy that improves with respect to an expert's.
Another challenge for IRL is that some variant of the forward reinforcement learning problem needs to be solved in a tightly coupled manner to obtain the corresponding policy, and then compare this policy to the demonstrated actions Finn et al. (2016) .
Most early IRL algorithms proposed solving an MDP in the inner loop Ng et al. (2000) ; Abbeel & Ng (2004); Ziebart et al. (2008) .
This requires perfect knowledge of the expert's dynamics which are almost always impossible to have.
Several works have proposed to relax this requirement, for example by learning a value function instead of a cost Todorov (2007) , solving an approximate local control problem Levine & Koltun (2012) or generating a discrete graph of states Byravan et al. (2015) .
However, all these methods still require some partial knowledge of the system dynamics.
Most of the early research in this field has expressed the reward function as a weighted linear combination of hand selected features Ng et al. (2000) ; Ramachandran & Amir (2007); Ziebart et al. (2008) .
Non-parametric methods such as Gaussian Processes (GPs) have also been used for potentially complex, nonlinear reward functions Levine et al. (2011) .
While in principle this helps extend the IRL paradigm to flexibly account for non-linear reward approximation; the use of kernels simultaneously leads to higher sample size requirements.
Universal function approximators such as non-linear deep neural network have been proposed recently Wulfmeier et al. (2015) ; Finn et al. (2016) .
This moves away from using hand-crafted features and helps in learning highly non-linear reward functions but they still need the agent in the loop to generate new samples to "guide" the cost to the optimal reward function.
Fu et al. (2017) has recently proposed deriving an adversarial reward learning formulation which disentangles the reward learning process by a discriminator trained via binary regression data and uses policy gradient algorithms to learn the policy as well.
The Bayesian IRL (BIRL) algorithm proposed by Ramachandran & Amir (2007) uses the expert's actions as evidence to update the prior on reward functions.
The reward learning and apprenticeship learning steps are solved by performing the inference using a modified Markov Chain Monte Carlo (MCMC) algorithm.
Zheng et al. (2014) described an expectation-maximization (EM) approach for solving the BIRL problem, referring to it as the Robust BIRL (RBIRL).
Variational Inference (VI) has been used as an efficient and alternative strategy to MCMC sampling for approximating posterior densities Jordan et al. (1999); Wainwright et al. (2008) .
Variational Auto-encoder (VAE) was proposed by Kingma & Welling (2014) as a neural network version of the approximate inference model.
The loss function of the VAE is constructed so that it maximizes the likelihood of the data given the current latent variables (reconstruction loss), while encouraging the latent variables to be close to our prior belief of what the variables should look like (Kullback-Leibler divergence loss).
This can be seen as a generalization of EM from maximum a-posteriori (MAP) estimation of a single parameter to an approximation of the complete posterior distribution.
Conditional VAE (CVAE) has been proposed by Sohn et al. (2015) to develop a deep conditional generative model for structured output prediction using Gaussian latent variables.
Wasserstein AutoEncoder (WAE) has been proposed by Tolstikhin et al. (2017) to utilize Wasserstein loss function in place of KL divergence loss for robustly estimating the loss in case of small samples, where VAE fails.
This research is motivated by the observation that IRL can be formulated as a supervised learning problem with latent variable modelling.
This intuition is not unique.
It has been proposed by Klein et al. (2013) using the Cascaded Supervised IRL (CSI) approach.
However, CSI uses non-generalizable heuristics to classify the dataset and find the decision rule to estimate the reward function.
Here, I propose to utilize the CVAE framework with Wasserstein loss function to determine the non-linear, continuous reward function utilizing the expert trajectories without the need for system dynamics.
The encoder step of the CVAE is used to learn the original reward function from the next state conditioned on the current state and action.
The decoder step is used to recover the next state given the current state, action and the latent reward function.
The likelihood loss, composed of the reconstruction error and the Wasserstein loss, is then fed to optimize the CVAE network.
The Gaussian distribution is used here as the prior distribution; however, Ramachandran & Amir (2007) has described various other prior distributions which can be used based on the class of problem being solved.
Since, the states chosen are supplied by the expert's trajectories, the CWAE-IRL algorithm is run only on those states without the need to run an MDP or have the agent in the loop.
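The following PyTorch sketch illustrates the conditional auto-encoder structure just described, and is only a simplified illustration of the model: the encoder infers a Gaussian over the scalar reward from (s, a, s'), and the decoder reconstructs s' from (s, a, r). For brevity the sketch uses the standard KL term of a CVAE, whereas the paper uses a Wasserstein-based loss; all layer sizes are placeholders.

import torch
import torch.nn as nn

class RewardCVAE(nn.Module):
    def __init__(self, s_dim, a_dim, hid=64):
        super().__init__()
        # Encoder q(r | s, a, s'): backward inference of the latent reward.
        self.enc = nn.Sequential(nn.Linear(2 * s_dim + a_dim, hid), nn.ReLU())
        self.mu, self.logvar = nn.Linear(hid, 1), nn.Linear(hid, 1)
        # Decoder p(s' | s, a, r): forward model conditioned on the reward.
        self.dec = nn.Sequential(nn.Linear(s_dim + a_dim + 1, hid), nn.ReLU(),
                                 nn.Linear(hid, s_dim))

    def forward(self, s, a, s_next):
        h = self.enc(torch.cat([s, a, s_next], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        r = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        s_next_hat = self.dec(torch.cat([s, a, r], dim=-1))
        recon = ((s_next_hat - s_next) ** 2).mean()              # reconstruction loss
        kl = (-0.5 * (1 + logvar - mu ** 2 - logvar.exp())).mean()
        return recon + kl, r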
Two novel contributions are made in this paper:
• Proposing a generative model such as an auto-encoder for estimating the reward function leads to a more effective and efficient algorithm with locally optimal, analytically approximate solution.
• Using only the expert's state-action trajectories provides a robust generative solution without any knowledge of system dynamics.
Section 2 gives the background on the concepts used to build our model; Section 3 describes the proposed methodology; Section 4 gives the results and Section 5 provides the discussion and conclusions.
|
Using a supervised latent variable modeling framework to determine reward in inverse reinforcement learning task
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:344
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The travelling salesman problem (TSP) is a well-known combinatorial optimization problem with a variety of real-life applications.
We tackle TSP by incorporating machine learning methodology and leveraging the variable neighborhood search strategy.
More precisely, the search process is considered as a Markov decision process (MDP), where a 2-opt local search is used to search within a small neighborhood, while a Monte Carlo tree search (MCTS) method (which iterates through simulation, selection and back-propagation steps), is used to sample a number of targeted actions within an enlarged neighborhood.
This new paradigm clearly distinguishes itself from the existing machine learning (ML) based paradigms for solving the TSP, which either uses an end-to-end ML model, or simply applies traditional techniques after ML for post optimization.
Experiments based on two public data sets show that, our approach clearly dominates all the existing learning based TSP algorithms in terms of performance, demonstrating its high potential on the TSP.
More importantly, as a general framework without complicated hand-crafted rules, it can be readily extended to many other combinatorial optimization problems.
The travelling salesman problem (TSP) is a well-known combinatorial optimization problem with various real-life applications, such as transportation, logistics, biology, circuit design.
Given n cities as well as the distance d ij between each pair of cities i and j, the TSP aims to find a cheapest tour which starts from a beginning city (arbitrarily chosen), visits each city exactly once, and finally returns to the beginning city.
This problem is NP-hard, thus being extremely difficult from the viewpoint of theoretical computer science.
Due to its importance in both theory and practice, many algorithms have been developed for the TSP, mostly based on traditional operations research (OR) methods.
Among the existing TSP algorithms, the best exact solver Concorde (Applegate et al., 2009 ) succeeded in demonstrating optimality of an Euclidean TSP instance with 85,900 cities, while the leading heuristics (Helsgaun, 2017) and (Taillard & Helsgaun, 2019) are capable of obtaining near-optimal solutions for instances with millions of cities.
However, these algorithms are very complicated, which generally consist of many hand-crafted rules and heavily rely on expert knowledge, thus being difficult to generalize to other combinatorial optimization problems.
To overcome those limitations, recent years have seen a number of ML based algorithms being proposed for the TSP (briefly reviewed in the next section), which attempt to automate the search process by learning mechanisms.
This type of methods do not rely on expert knowledge, can be easily generalized to various combinatorial optimization problems, thus become promising research direction at the intersection of ML and OR.
For the TSP, existing ML based algorithms can be roughly classified into two paradigms, i.e.: (1) End-to-end ML paradigm which uses a ML model alone to directly convert the input instance to a solution.
(2) ML followed by OR paradigm which applies ML at first to provide some additional information, to guide the following OR procedure towards promising regions.
Despite its high potential, for the TSP, existing ML based methods are still in its infancy, struggling to solve instances with more than 100 cities, leaving much room for further improvement compared with traditional methods.
To this end, we propose a novel framework by combining Monte Carlo tree search (MCTS) with a basic OR method (2-opt based local search) using variable neighborhood strategy to solve the TSP.
The main contributions are summarized as follows.
• Framework: We propose a new paradigm which combines OR and ML using variable neighborhood strategy.
Starting from an initial state, a basic 2-opt based local search is first used to search within a small neighborhood (a minimal sketch of this 2-opt step is given after this list).
When no improvement is possible within the small neighborhood, the search turns into an enlarged neighborhood, where a reinforcement learning (RL) based method is used to identify a sample of promising actions, and iteratively choose one action to apply.
Under this new paradigm, OR and ML work within disjoint spaces, which keeps the approach flexible and targeted, and clearly distinguishes it from the two paradigms mentioned above.
More importantly, as a general framework without complicated hand-crafted rules, this framework could not only be applied to the TSP, but also be easily extended to many other combinatorial optimization problems.
• Methodology: When we search within an enlarged neighborhood, it is infeasible to enumerate all the actions.
We then seek to sample a number of promising actions.
To do this automatically, we implement a MCTS method which iterates through simulation, selection and back-propagation steps, to collect useful information that guides the sampling process.
To the best of our knowledge, there is only one existing paper (Shimomura & Takashima, 2016) which also uses MCTS to solve the TSP.
However, their method is a constructive approach, where each state is a partial TSP tour, and each action adds a city to increase the partial tour, until forming a complete tour.
By contrast, our MCTS method is a conversion based approach, where each state is a complete TSP tour, and each action converts the original state to a new state (also a complete TSP tour).
The methodology and implementation details of our MCTS are very different from the MCTS method developed in (Shimomura & Takashima, 2016 ).
• Results: We carry out experiments on two sets of public TSP instances.
Experimental results (detailed in Section 4) show that, on both data sets our MCTS algorithm obtains (within reasonable time) statistically much better results with respect to all the existing learning based algorithms.
These results clearly indicate the potential of our new method for solving the TSP.
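The 2-opt move referenced in the framework above is standard; the following is a minimal pure-Python sketch of a 2-opt local search over a distance matrix, illustrating only the small-neighborhood step and not the full MCTS framework. The stopping tolerance and the example distance matrix are arbitrary.

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    # Repeatedly reverse a segment tour[i:j] whenever swapping edges
    # (tour[i-1], tour[i]) and (tour[j-1], tour[j % n]) for
    # (tour[i-1], tour[j-1]) and (tour[i], tour[j % n]) shortens the tour;
    # stop at a 2-opt local optimum, i.e. the small neighborhood of the paper.
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, n - 1):
            for j in range(i + 1, n + 1):
                a, b = tour[i - 1], tour[i]
                c, d = tour[j - 1], tour[j % n]
                delta = dist[a][c] + dist[b][d] - dist[a][b] - dist[c][d]
                if delta < -1e-9:
                    tour[i:j] = reversed(tour[i:j])
                    improved = True
    return tour

dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
tour = two_opt(list(range(4)), dist)   # finds [0, 1, 3, 2] with tour length 23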
The rest of this paper is organized as follows: Section 2 briefly reviews the existing learning based methods on the TSP.
Section 3 describes in detail the new paradigm and the MCTS method.
Section 4 provides and analyzes the experimental results.
Finally Section 5 concludes this paper.
This paper develops a novel, flexible paradigm for solving the TSP, which combines OR and ML in a variable neighborhood search strategy and achieves highly competitive performance with respect to existing learning based TSP algorithms.
However, how to combine ML and OR reasonably is still an open question, which deserves continuous investigations.
In the future, we would try more new paradigms to better answer this question, and extend the work to other combinatorial optimization problems.
|
This paper combines Monte Carlo tree search with 2-opt local search in a variable neighborhood mode to solve the TSP effectively.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:345
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Significant strides have been made toward designing better generative models in recent years.
Despite this progress, however, state-of-the-art approaches are still largely unable to capture complex global structure in data.
For example, images of buildings typically contain spatial patterns such as windows repeating at regular intervals; state-of-the-art generative methods can’t easily reproduce these structures.
We propose to address this problem by incorporating programs representing global structure into the generative model—e.g., a 2D for-loop may represent a configuration of windows.
Furthermore, we propose a framework for learning these models by leveraging program synthesis to generate training data.
On both synthetic and real-world data, we demonstrate that our approach is substantially better than the state-of-the-art at both generating and completing images that contain global structure.
There has been much interest recently in generative models, following the introduction of both variational autoencoders (VAEs) BID13 and generative adversarial networks (GANs) BID6 .
These models have successfully been applied to a range of tasks, including image generation BID16 , image completion BID10 , texture synthesis BID12 ; BID22 , sketch generation BID7 , and music generation BID3 .
Despite their successes, generative models still have difficulty capturing global structure.
For example, consider the image completion task in Figure 1 .
The original image (left) is of a building, for which the global structure is a 2D repeating pattern of windows.
Given a partial image (middle left), the goal is to predict the completion of the image.
As can be seen, a state-of-the-art image completion algorithm has trouble reconstructing the original image (right) BID10 .
Real-world data often contains such global structure, including repetitions, reflectional or rotational symmetry, or even more complex patterns.
In the past few years, program synthesis Solar- BID17 has emerged as a promising approach to capturing patterns in data BID4 ; BID19 .
The idea is that simple programs can capture global structure that evades state-of-the-art deep neural networks.
A key benefit of using program synthesis is that we can design the space of programs to capture different kinds of structure, e.g., repeating patterns BID5 , symmetries, or spatial structure BID2 , depending on the application domain.
The challenge is that for the most part, existing approaches have synthesized programs that operate directly over raw data.
Since programs have difficulty operating over perceptual data, existing approaches have largely been limited to very simple data, e.g., detecting 2D repeating patterns of simple shapes BID5 .
We propose to address these shortcomings by synthesizing programs that represent the underlying structure of high-dimensional data.
In particular, we decompose programs into two parts: (i) a sketch s ∈ S that represents the skeletal structure of the program BID17 , with holes that are left unimplemented, and (ii) components c ∈ C that can be used to fill these holes.
We consider perceptual components, i.e., holes in the sketch are filled with raw perceptual data.
Figure 1 caption: original image x*, partial image x_part, completion (ours), completion (baseline); the task is to complete the partial image x_part (middle left) into an image that is close to the original image x* (left); by incorporating programmatic structure into generative models, the completion (middle right) is able to substantially outperform a state-of-the-art baseline BID10 (right); note that not all non-zero pixels in the sketch rendering retain the same value in the completed picture due to the nature of the completion process.
For example, the program represents the structure in the original image x* in Figure 1 (left).
The black text is the sketch, and the component is a sub-image taken from the given partial image.
Then, the draw function renders the given sub-image at the given position.
We call a sketch whose holes are filled with perceptual components a neurosymbolic program.
Building on these ideas, we propose an approach called program-synthesis (guided) generative models (PS-GM) that combines neurosymbolic programs representing global structure with state-of-the-art deep generative models.
By incorporating programmatic structure, PS-GM substantially improves the quality of these state-of-the-art models.
As can be seen, the completion produced using PS-GM (middle right of Figure 1 ) substantially outperforms the baseline.
We show that PS-GM can be used for both generation from scratch and for image completion.
The generation pipeline is shown in FIG0 .
At a high level, PS-GM for generation operates in two phases:
• First, it generates a program that represents the global structure in the image to be generated.
In particular, it generates a program P = (s, c) representing the latent global structure in the image (left in FIG0 ), where s is a sketch (in the domain considered here, a list of 12 for-loop structures) and c is a perceptual component (in the domain considered here, a list of 12 sub-images).
• Second, our algorithm executes P to obtain a structure rendering x_struct representing the program as an image (middle of FIG0 ).
Then, our algorithm uses a deep generative model to complete x_struct into a full image (right of FIG0 ).
The structure in x_struct helps guide the deep generative model towards images that preserve the global structure.
The image-completion pipeline (see Figure 3 ) is similar.
Training these models end-to-end is challenging, since a priori, ground truth global structure is unavailable.
Furthermore, representative global structure is very sparse, so approaches such as reinforcement learning do not scale.
Instead, we leverage domain-specific program synthesis algorithms to produce examples of programs that represent global structure of the training data.
In particular, we propose a synthesis algorithm tailored to the image domain, which extracts programs with nested for-loops that can represent multiple 2D repeating patterns in images.
Then, we use these example programs as supervised training data.
Our programs can capture rich spatial structure in the training data.
For example, in FIG0 , the program structure encodes a repeating structure of 0's and 2's on the whole image, and a separate repeating structure of 3's on the right-hand side of the image.
Furthermore, in Figure 1 , the generated image captures the idea that the repeating pattern of windows does not extend to the bottom portion of the image.
FIG0 caption: a for-loop from the sampled program P, the structure rendering x_struct, and the completed image x; (ii) our model executes P to obtain a rendering of the program structure x_struct (middle); (iii) our model samples a completion x ∼ p_θ(x | s, c) of x_struct into a full image (right).
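To make the "execute P to obtain x_struct" step above concrete, here is a minimal NumPy sketch of how such a nested for-loop program could be run; the program format (a list of loop dictionaries), the draw helper, and the patch sizes are illustrative assumptions rather than the authors' actual DSL.

import numpy as np

def draw(canvas, patch, top, left):
    # Renders the given sub-image (perceptual component) at the given position.
    h, w = patch.shape[:2]
    canvas[top:top + h, left:left + w] = patch

def execute_program(sketch, components, height, width):
    # sketch: list of dicts, one per for-loop, giving a regular grid of positions.
    # components: list of sub-image patches filling the holes of the sketch.
    canvas = np.zeros((height, width))
    for loop, patch in zip(sketch, components):
        for i in range(loop["n_rows"]):
            for j in range(loop["n_cols"]):
                draw(canvas, patch,
                     loop["top"] + i * loop["row_stride"],
                     loop["left"] + j * loop["col_stride"])
    return canvas  # x_struct, the structure rendering fed to the completion model

# Hypothetical 3x4 grid of 8x8 "window" patches, as on a building facade.
window = np.ones((8, 8))
sketch = [{"top": 4, "left": 4, "n_rows": 3, "n_cols": 4,
           "row_stride": 16, "col_stride": 16}]
x_struct = execute_program(sketch, [window], 64, 80)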
Contributions.
We propose an architecture of generative models that incorporates programmatic structure, as well as an algorithm for training these models (Section 2).
Our learning algorithm depends on a domain-specific program synthesis algorithm for extracting global structure from the training data; we propose such an algorithm for the image domain (Section 3).
Finally, we evaluate our approach on synthetic data and on a real-world dataset of building facades Tyleček & Šára (2013), both on the task of generation from scratch and on generation from a partial image.
We show that our approach substantially outperforms several state-of-the-art deep generative models (Section 4).
Related work.
There has been growing interest in applying program synthesis to machine learning, for purposes of interpretability BID21 ; BID20 , safety BID1 , and lifelong learning BID19 .
Most relevantly, there has been interest in using programs to capture structure that deep learning models have difficulty representing Lake et al. (2015) ; BID4 .
For instance, BID4 proposes an unsupervised learning algorithm for capturing repeating patterns in simple line drawings; however, not only are their domains simple, but they can only handle a very small amount of noise.
Similarly, BID5 captures 2D repeating patterns of simple circles and polygons; however, rather than synthesizing programs with perceptual components, they learn a simple mapping from images to symbols as a preprocessing step.
The closest work we are aware of is BID19 , which synthesizes programs with neural components (i.e., components implemented as neural networks); however, their application is to lifelong learning, not generation, and to learning with supervision (labels) rather than to unsupervised learning of structure.
Additionally, there has been work extending neural module networks BID0 to generative models BID2 .
These algorithms essentially learn a collection of neural components that can be composed together based on hierarchical structure.
However, they require that the structure be available (albeit in natural language form) both for training the model and for generating new images.
Finally, there has been work incorporating spatial structure into generative models for generating textures BID12 ; however, their work only handles a single infinite repeating 2D pattern.
In contrast, we can capture a rich variety of spatial patterns parameterized by a space of programs.
For example, the image in Figure 1 generated by our technique contains different repeating patterns in different parts of the image.
We have proposed a new approach to generation that incorporates programmatic structure into state-ofthe-art deep learning models.
In our experiments, we have demonstrated the promise of our approach to improve generation of high-dimensional data with global structure that current state-of-the-art deep generative models have difficulty capturing.
We leave a number of directions for future work.
Most importantly, we have relied on a custom synthesis algorithm to eliminate the need for learning latent program structure.
Learning to synthesize latent structure during training is an important direction for future work.
In addition, future work will explore more expressive programmatic structures, including if-then-else statements.A EXPERIMENTAL DETAILS
|
Applying program synthesis to the tasks of image completion and generation within a deep learning framework
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:346
|