input (string, lengths 286–19k) | output (string, lengths 1–15.8k) | metadata (dict) | _instance_id (string, lengths 15–62)
---|---|---|---
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Learning control policies in robotic tasks requires a large number of interactions due to small learning rates, bounds on the updates or unknown constraints.
In contrast humans can infer protective and safe solutions after a single failure or unexpected observation.
In order to reach similar performance, we developed a hierarchical Bayesian optimization algorithm that replicates the cognitive inference and memorization process for avoiding failures in motor control tasks.
A Gaussian Process implements the modeling and the sampling of the acquisition function.
This enables rapid learning with large learning rates while a mental replay phase ensures that policy regions that led to failures are inhibited during the sampling process.
The features of the hierarchical Bayesian optimization method are evaluated in a simulated and physiological humanoid postural balancing task.
We quantitatively compare the human learning performance to our learning approach by evaluating the deviations of the center of mass during training.
Our results show that we can reproduce the efficient learning of human subjects in postural control tasks which provides a testable model for future physiological motor control tasks.
In these postural control tasks, our method outperforms standard Bayesian Optimization in the number of interactions to solve the task, in the computational demands and in the frequency of observed failures.
Autonomous systems such as anthropomorphic robots or self-driving cars must not harm cooperating humans in co-worker scenarios, pedestrians on the road, or themselves.
To ensure safe interactions with the environment state-of-the-art robot learning approaches are first applied to simulations and afterwards an expert selects final candidate policies to be run on the real system.
However, for most autonomous systems a fine-tuning phase on the real system is unavoidable to compensate for unmodelled dynamics, motor noise or uncertainties in the hardware fabrication.
Several strategies were proposed to ensure safe policy exploration.
In special tasks like robot arm manipulation, the operational space can be constrained, for example, in classical null-space control approaches (Baerlocher & Boulic, 1998; Slotine, 1991; Choi & Kim, 2000; Gienger et al., 2005; Saab et al., 2013; Modugno et al., 2016) or with constrained black-box optimizers (Hansen et al., 2003; Wierstra et al., 2008; Kramer et al., 2009; Sehnke et al., 2010; Arnold & Hansen, 2012).
While this null-space strategy works in controlled environments like research labs where the environmental conditions do not change, it fails in everyday life tasks as in humanoid balancing where the priorities or constraints that lead to hardware damages when falling are unknown.
Alternatively, limiting the policy updates by applying probabilistic bounds in the robot configuration or motor command space has been proposed (Bagnell & Schneider, 2003; Rueckert et al., 2014; Abdolmaleki et al., 2015; Rueckert et al., 2013).
These techniques do not assume knowledge about constraints.
Closely related are also Bayesian optimization techniques with modulated acquisition functions Gramacy & Lee (2010) ; Berkenkamp et al. (2016) ; Englert & Toussaint (2016) ; Shahriari et al. (2016) to avoid exploring policies that might lead to failures.
However, all these approaches do not truly avoid failures; rather, an expert interrupts the learning process when a potentially dangerous situation is anticipated.
Figure 1: Illustration of the hierarchical BO algorithm.
In standard BO (clock-wise arrow), a mapping from policy parameters to rewards is learned, i.e., φ → r ∈ ℝ^1.
We propose a hierarchical process, where first features κ are sampled and later used to predict the potential of policies conditioned on these features, φ|κ → r.
The red dots show the first five successive roll-outs in the feature and the policy space of a humanoid postural control task.
All the aforementioned strategies cannot avoid harming the system itself or the environment without thorough expert knowledge, controlled environmental conditions or human interventions.
As humans require just a few trials to perform reasonably well, it is desirable to enable robots to reach similar performance even for high-dimensional problems.
Most approaches are based on the assumption of a "low effective dimensionality", i.e., most dimensions of a high-dimensional problem do not change the objective function significantly.
In Chen et al. (2012), a method for relevant variable selection based on Hierarchical Diagonal Sampling, for both variable selection and function optimization, has been proposed.
Randomization combined with Bayesian Optimization is proposed in Wang et al. (2013) to effectively exploit the aforementioned "low effective dimensionality".
In Li et al. (2018), a dropout algorithm has been introduced to overcome the high-dimensionality problem by training only on a subset of variables in each iteration, evaluating a "regret gap" and providing strategies to reduce this gap efficiently.
In Rana et al. (2017), an algorithm has been proposed which optimizes an acquisition function by building new Gaussian Processes with sufficiently large kernel length scales.
This ensures significant gradient updates in the acquisition function to be able to use gradient-dependent methods for optimization.
The contribution of this paper is a computational model for psychological motor control experiments based on hierarchical acquisition functions in Bayesian Optimization (HiBO).
Our motor skill learning method uses features for optimization to significantly reduce the number of required roll-outs.
In the feature space, we search for the optimum of the acquisition function by sampling and later use the best feature configuration to optimize the policy parameters which are conditioned on the given features, see also Figure 1 .
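To make the two-level sampling concrete, the following is a minimal sketch of one hierarchical acquisition step, assuming GP surrogates, random candidate sampling and a UCB acquisition; the function names, dimensionalities and the acquisition choice are illustrative assumptions, not the paper's exact procedure (the mental replay mechanism is omitted).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def ucb(gp, X, beta=2.0):
    # Upper confidence bound acquisition: mean + beta * std.
    mu, std = gp.predict(X, return_std=True)
    return mu + beta * std

def hibo_step(K_hist, Phi_hist, r_hist, d_kappa=2, d_phi=6, n_cand=500, rng=np.random):
    # Level 1: model reward as a function of features and pick a promising kappa.
    gp_feat = GaussianProcessRegressor().fit(K_hist, r_hist)
    kappa_cand = rng.uniform(-1, 1, size=(n_cand, d_kappa))
    kappa_star = kappa_cand[np.argmax(ucb(gp_feat, kappa_cand))]

    # Level 2: model reward as a function of (features, policy) and sample policy
    # candidates conditioned on the selected feature configuration.
    gp_pol = GaussianProcessRegressor().fit(np.hstack([K_hist, Phi_hist]), r_hist)
    phi_cand = rng.uniform(-1, 1, size=(n_cand, d_phi))
    joint_cand = np.hstack([np.tile(kappa_star, (n_cand, 1)), phi_cand])
    phi_star = phi_cand[np.argmax(ucb(gp_pol, joint_cand))]
    return kappa_star, phi_star  # the next roll-out is executed with phi_star
```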
In postural control experiments, we show that our approach reduces the number of required roll-outs significantly compared to standard Bayesian Optimization.
The focus of this study is to develop a testable model for psychological motor control experiments where well known postural control features could be used.
These features are listed in Table 3 .
In future work we will extend our model to autonomous feature learning and will validate the approach in more challenging robotic tasks where 'good' features are hard to hand-craft.
We introduced HiBO, a hierarchical approach for Bayesian Optimization.
We showed that HiBO outperforms standard BO in a complex humanoid postural control task.
Moreover, we demonstrated the effects of the choice of features and of the number of mental replay episodes.
We compared our results to the learning performance of real humans at the same task.
We found that the learning behavior is similar.
We found that our proposed hierarchical BO algorithm can reproduce the rapid motor adaptation of human subjects.
In contrast, standard BO, our comparison method, is about four times slower.
In future work, we will examine the problem of simultaneously learning task relevant features in neural nets.
|
This paper presents a computational model for efficient human postural control adaptation based on hierarchical acquisition functions with well-known features.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:347
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep reinforcement learning has achieved great success in many previously difficult reinforcement learning tasks, yet recent studies show that deep RL agents are also unavoidably susceptible to adversarial perturbations, similar to deep neural networks in classification tasks.
Prior works mostly focus on model-free adversarial attacks and agents with discrete actions.
In this work, we study the problem of continuous control agents in deep RL with adversarial attacks and propose the first two-step algorithm based on learned model dynamics.
Extensive experiments on various MuJoCo domains (Cartpole, Fish, Walker, Humanoid) demonstrate that our proposed framework is much more effective and efficient than model-free attack baselines in degrading agent performance as well as driving agents to unsafe states.
Deep reinforcement learning (RL) has revolutionized the fields of AI and machine learning over the last decade.
The introduction of deep learning has achieved unprecedented success in solving many problems that were intractable in the field of RL, such as playing Atari games from pixels and performing robotic control tasks (Mnih et al., 2015; Lillicrap et al., 2015; Tassa et al., 2018) .
Unfortunately, similar to the case of deep neural network classifiers with adversarial examples, recent studies show that deep RL agents are also vulnerable to adversarial attacks.
A commonly-used threat model allows the adversary to manipulate the agent's observations at every time step, where the goal of the adversary is to decrease the agent's total accumulated reward.
As a pioneering work in this field, (Huang et al., 2017) show that by leveraging the FGSM attack on each time frame, an agent's average reward can be significantly decreased with small input adversarial perturbations in five Atari games.
(Lin et al., 2017) further improve the efficiency of the attack in (Huang et al., 2017) by leveraging heuristics of detecting a good time to attack and luring agents to bad states with sample-based Monte-Carlo planning on a trained generative video prediction model.
Since the agents have discrete actions in Atari games (Huang et al., 2017; Lin et al., 2017) , the problem of attacking Atari agents often reduces to the problem of finding adversarial examples on image classifiers, also pointed out in (Huang et al., 2017) , where the adversaries intend to craft the input perturbations that would drive agent's new action to deviate from its nominal action.
However, for agents with continuous actions, the above strategies can not be directly applied.
Recently, (Uesato et al., 2018) studied the problem of adversarial testing for continuous control domains in a similar but slightly different setting.
Their goal was to efficiently and effectively find catastrophic failure given a trained agent and to predict its failure probability.
The key to success in (Uesato et al., 2018) is the availability of agent training history.
However, such information may not always be accessible to the users, analysts, and adversaries.
Therefore, in this paper we study the robustness of deep RL agents in a more challenging setting where the agent has continuous actions and its training history is not available.
We consider the threat models where the adversary is allowed to manipulate an agent's observations or actions with small perturbations, and we propose a two-step algorithmic framework to find efficient adversarial attacks based on learned dynamics models.
Experimental results show that our proposed model-based attack can successfully degrade agent performance and is also more effective and efficient than model-free attack baselines.
Figure 1: Two commonly-used threat models.
The contributions of this paper are the following:
• To the best of our knowledge, we propose the first model-based attack on deep RL agents with continuous actions.
Our proposed attack algorithm is a general two-step algorithm and can be directly applied to the two commonly-used threat models (observation manipulation and action manipulation).
• We study the efficiency and effectiveness of our proposed model-based attack against model-free attack baselines based on random searches and heuristics (rand-U, rand-B, flip; see Section 4).
We show that our model-based attack can degrade agent performance more significantly and efficiently than model-free attacks, which remain ineffective in numerous MuJoCo domains (Cartpole, Fish, Walker, and Humanoid).
Evaluating on the total reward.
Often times, the reward function is a complicated function and its exact definition is often unavailable.
Learning the reward function is also an active research field, which is not in the coverage of this paper.
Nevertheless, as long as we have some knowledge of unsafe states (which is often the case in practice), we can define unsafe states that are related to low reward; performing attacks based on unsafe states (i.e., minimizing the total distance to unsafe states) then naturally translates to decreasing the agent's total reward.
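As a rough illustration of this idea (our own sketch with assumed interfaces, not the paper's implementation), the snippet below optimizes a bounded observation perturbation so that the next state predicted by a learned dynamics model `f` moves toward a user-defined unsafe state; `pi` is the victim policy, and both are assumed to be differentiable PyTorch modules.

```python
import torch

def attack_observation(f, pi, s_t, s_unsafe, eps=0.05, steps=20, lr=0.01):
    # delta is the adversarial perturbation added to the agent's observation.
    delta = torch.zeros_like(s_t, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        a_t = pi(s_t + delta)                 # action chosen on the perturbed observation
        s_next = f(s_t, a_t)                  # learned model predicts the next state
        loss = torch.norm(s_next - s_unsafe)  # drive the predicted state toward the unsafe one
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)           # respect a small l_inf perturbation budget
    return delta.detach()
```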
As demonstrated in Table 2 , the results have the same trend of the total loss result in Table 1 , where our proposed attack significantly outperforms all the other three baselines.
In particular, our method can lower the average total reward by up to 4.96× compared to the baseline results, while the baseline results remain close to the perfect total reward of 1000.
Evaluating on the efficiency of attack.
We also study the efficiency of the attack in terms of sample complexity, i.e. how many episodes do we need to perform an effective attack?
Here we adopt the convention in control suite (Tassa et al., 2018) where one episode corresponds to 1000 time steps (samples), and we learn the neural network dynamical model f with different number of episodes.
Figure 3 plots the total head height loss of the walker (task stand) for the three baselines and our method, with the dynamical model f trained on three different numbers of samples: {5e5, 1e6, 5e6}, or equivalently {500, 1000, 5000} episodes.
We note that the sweep of hyper parameters is the same for all the three models, and the only difference is the number of training samples.
The results show that for the baselines rand-U and flip, the total losses are roughly on the order of 1400-1500, while a stronger baseline, rand-B, still has total losses of 900-1200.
However, if we solve Equation 3 with f trained by 5e5 or 1e6 samples, the total losses can be decreased to the order of 400-700 and are already winning over the three baselines by a significant margin.
As expected, using more samples (e.g., 5e6, which is 5-10 times more) to learn a more accurate dynamics model benefits our attack method: the total losses can be further decreased by more than 2× and are on the order of 50-250 over 10 different runs.
Here we also give a comparison between our model-based attack to existing works (Uesato et al., 2018; Gleave et al., 2019) on the sample complexity.
In (Uesato et al., 2018), 3e5 episodes of training data are used to learn the adversarial value function, which is roughly 1000× more data than even our strongest adversary (with 5e3 episodes).
Similarly, (Gleave et al., 2019) use roughly 2e4 episodes to train an adversary via deep RL, which is roughly 4× more data than ours.
In this paper, we study the problem of adversarial attacks in deep RL with continuous control for two commonly-used threat models (observation manipulation and action manipulation).
Based on the threat models, we proposed the first model-based attack algorithm and showed that our formulation can be easily solved by off-the-shelf gradient-based solvers.
Through extensive experiments on 4 MuJoCo domains (Cartpole, Fish, Walker, Humanoid), we show that our proposed algorithm outperforms all model-free attack baselines by a large margin.
There are several interesting future directions that can be investigated based on this work; they are detailed in the Appendix.
|
We study the problem of continuous control agents in deep RL with adversarial attacks and proposed a two-step algorithm based on learned model dynamics.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:348
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
A leading hypothesis for the surprising generalization of neural networks is that the dynamics of gradient descent bias the model towards simple solutions, by searching through the solution space in an incremental order of complexity.
We formally define the notion of incremental learning dynamics and derive the conditions on depth and initialization for which this phenomenon arises in deep linear models.
Our main theoretical contribution is a dynamical depth separation result, proving that while shallow models can exhibit incremental learning dynamics, they require the initialization to be exponentially small for these dynamics to present themselves.
However, once the model becomes deeper, the dependence becomes polynomial and incremental learning can arise in more natural settings.
We complement our theoretical findings by experimenting with deep matrix sensing, quadratic neural networks and with binary classification using diagonal and convolutional linear networks, showing all of these models exhibit incremental learning.
Neural networks have led to a breakthrough in modern machine learning, allowing us to efficiently learn highly expressive models that still generalize to unseen data.
The theoretical reasons for this success are still unclear, as the generalization capabilities of neural networks defy the classic statistical learning theory bounds.
Since these bounds, which depend solely on the capacity of the learned model, are unable to account for the success of neural networks, we must examine additional properties of the learning process.
One such property is the optimization algorithm -while neural networks can express a multitude of possible ERM solutions for a given training set, gradient-based methods with the right initialization may be implicitly biased towards certain solutions which generalize.
A possible way such an implicit bias may present itself, is if gradient-based methods were to search the hypothesis space for possible solutions of gradually increasing complexity.
This would suggest that while the hypothesis space itself is extremely complex, our search strategy favors the simplest solutions and thus generalizes.
One of the leading results along these lines has been by Saxe et al. (2013) , deriving an analytical solution for the gradient flow dynamics of deep linear networks and showing that for such models, the singular values converge at different rates, with larger values converging first.
At the limit of infinitesimal initialization of the deep linear network, Gidel et al. (2019) show these dynamics exhibit a behavior of "incremental learning" -the singular values of the model are learned separately, one at a time.
Our work generalizes these results to small but finite initialization scales.
Incremental learning dynamics have also been explored in gradient descent applied to matrix completion and sensing with a factorized parameterization (Gunasekar et al. (2017) , Arora et al. (2018) , Woodworth et al. (2019) ).
When initialized with small Gaussian weights and trained with a small learning rate, such a model is able to successfully recover the low-rank matrix which labeled the data, even if the problem is highly over-determined and no additional regularization is applied.
In their proof of low-rank recovery for such models, Li et al. (2017) show that the model remains low-rank throughout the optimization process, leading to the successful generalization.
Additionally, Arora et al. (2019) explore the dynamics of such models, showing the singular values are learned at different rates and that deeper models exhibit stronger incremental learning dynamics.
Our work deals with a more simplified setting, allowing us to determine explicitly under which conditions depth leads to this dynamical phenomenon.
Finally, the learning dynamics of nonlinear models have been studied as well.
Combes et al. (2018) and Williams et al. (2019) study the gradient flow dynamics of shallow ReLU networks under restrictive distributional assumptions, Ronen et al. (2019) show that shallow networks learn functions of gradually increasing frequencies and Nakkiran et al. (2019) show how deep ReLU networks correlate with linear classifiers in the early stages of training.
These findings, along with others, suggest that the generalization ability of deep networks is at least in part due to the incremental learning dynamics of gradient descent.
Following this line of work, we begin by explicitly defining the notion of incremental learning for a toy model which exhibits this sort of behavior.
Analyzing the dynamics of the model for gradient flow and gradient descent, we characterize the effect of the model's depth and initialization scale on incremental learning, showing how deeper models allow for incremental learning in larger (realistic) initialization scales.
Specifically, we show that a depth-2 model requires exponentially small initialization for incremental learning to occur, while deeper models only require the initialization to be polynomially small.
Once incremental learning has been defined and characterized for the toy model, we generalize our results theoretically and empirically for larger linear and quadratic models.
Examples of incremental learning in these models can be seen in figure 1, which we discuss further in section 4.
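As a concrete toy illustration of this effect (our own example, not the paper's model or code), consider plain gradient descent on a depth-L "diagonal" parameterization w_i = u_i^L fitted to a sparse target: with small initialization, larger target entries are learned first, one after the other.

```python
import numpy as np

def simulate(L=3, init=1e-2, lr=0.05, T=20000, target=(1.0, 0.5, 0.1)):
    w_star = np.array(target)
    u = np.full_like(w_star, init)
    history = []
    for _ in range(T):
        w = u ** L
        grad_w = w - w_star                   # gradient of 0.5 * ||w - w_star||^2
        u -= lr * grad_w * L * u ** (L - 1)   # chain rule through w = u^L
        history.append(w.copy())
    return np.array(history)

traj = simulate()
print(traj[::2500])  # the largest coordinate converges first, the smallest last
```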
Gradient-based optimization for deep linear models has an implicit bias towards simple (sparse) solutions, caused by an incremental search strategy over the hypothesis space.
Deeper models have a stronger tendency for incremental learning, exhibiting it in more realistic initialization scales.
This dynamical phenomenon exists for the entire optimization process for regression as well as classification tasks, and for many types of models -diagonal networks, convolutional networks, matrix completion and even the nonlinear quadratic network.
We believe this kind of dynamical analysis may be able to shed light on the generalization of deeper nonlinear neural networks as well, with shallow quadratic networks being only a first step towards that goal.
It may seem that the variance loss is an unnatural loss function to analyze, since it isn't used in practice.
While this is true, we will show how the dynamics of this loss function are an approximation of the square loss dynamics.
We begin by describing the dynamics of both losses, showing how incremental learning can't take place for quadratic networks as defined over the squared loss.
Then, we show how adding a global bias to the quadratic network leads to similar dynamics for small initialization scales.
|
We study the sparsity-inducing bias of deep models, caused by their learning dynamics.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:349
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Batch Normalization (BN) and its variants have seen widespread adoption in the deep learning community because they improve the training of deep neural networks.
Discussions of why this normalization works so well remain unsettled.
We make explicit the relationship between ordinary least squares and partial derivatives computed when back-propagating through BN.
We recast the back-propagation of BN as a least squares fit, which zero-centers and decorrelates partial derivatives from normalized activations.
This view, which we term {\em gradient-least-squares}, is an extensible and arithmetically accurate description of BN.
To further explore this perspective, we motivate, interpret, and evaluate two adjustments to BN.
Training deep neural networks has become central to many machine learning tasks in computer vision, speech, and many other application areas.
BID10 showed empirically that Batch Normalization (BN) enables deep networks to attain faster convergence and lower loss.
Reasons for the effectiveness of BN remain an open question BID12 .
Existing work towards explaining this have focused on covariate shift; Santurkar et al. (2018) described how BN makes the loss function smoother.
This work examines the details of the back-propagation of BN, and recasts it as a least squares fit.
This gradient regression zero-centers and decorrelates partial derivatives from the normalized activations; it passes on a scaled residual during back-propagation.
Our view provides novel insight into the effectiveness of BN and several existing alternative normalization approaches in the literature.
This work makes explicit how BN back-propagation regresses partial derivatives against the normalized activations and keeps the residual.
This view, in conjunction with the empirical success of BN, suggests an interpretation of BN as a gradient regression calculation.
BN and its variants decorrelate and zero-center the gradients with respect to the normalized activations.
Subjectively, this can be viewed as removing systematic errors from the gradients.
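This relationship can be checked numerically. The sketch below (our own illustration, not the authors' code) back-propagates through mean-and-standard-deviation normalization and compares the result against the residual of an ordinary least squares fit of the upstream gradients onto the normalized activations; up to the 1/σ factor, the two coincide.

```python
import torch

torch.manual_seed(0)
x = torch.randn(256, requires_grad=True)
mu, sigma = x.mean(), x.std(unbiased=False)
z = (x - mu) / sigma
g = torch.randn(256)           # stand-in for the upstream gradients dL/dz
(z * g).sum().backward()       # autograd computes dL/dx through mu and sigma

# Ordinary least squares of g on [1, z]; since z is zero-mean and unit-variance,
# the fit is intercept = mean(g) and slope = mean(g * z).
A = torch.stack([torch.ones_like(z), z], dim=1).detach()
coef = torch.linalg.lstsq(A, g.unsqueeze(1)).solution.squeeze()
residual = g - (coef[0] + coef[1] * z.detach())

# Expected to print True (up to floating-point tolerance).
print(torch.allclose(x.grad, residual / sigma.detach(), atol=1e-5))
```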
Our view also supports empirical results in the literature preferring early BN placement within neural network branches. Leveraging gradient-least-squares considerations, we ran two sets of normalization experiments, applicable to large-batch and small-batch settings.
Placing a LN layer either before or after BN can be viewed as two-step regression that better explains the residual.
We show empirically on CIFAR-10 that BN and LN together are better than either individually.
In a second set of experiments, we address BN's performance degradation with small batch size.
We regularize the gradient regression with streaming gradient statistics, which empirically recovers some performance on CIFAR-10 relative to basic BN, on batch size two. Why do empirical improvements in neural networks with BN keep the gradient-least-squares residuals and drop the explained portion?
We propose two open approaches for investigating this in future work.
A first approach focuses on how changes to the gradient regression result in different formulations; the two empirical experiments in our work contribute to this.
A second approach examines the empirical relationships between gradients of activations evaluated on the same parameter values; we can search for a shared noisy component arising from gradients in the same normalization partition.
Suppose that the gradient noise correlates with the activations (this is plausible because the population of internal activations arises from using shared weights); then normalizations could be viewed as a layer that removes systematic noise during back-propagation.
Given DISPLAYFORM0, the partial derivatives satisfy DISPLAYFORM1.
Proof. In deriving ∂z_j/∂x_i, we treat the cases j = i and j ≠ i separately.
We start by examining intermediate quantities of interest as a matter of convenience for later use.
We define helper quantities u i = x i − µ.
Note that each u j depends on all of x i via µ.
Next, we write out useful identities DISPLAYFORM2. We prepare to differentiate with the rule of total derivatives: DISPLAYFORM3. Making use of equations 21, 22, 23 and 25, we simplify ∂σ/∂x_i for any i as follows: DISPLAYFORM4.
We apply the quotient rule to ∂z_j/∂x_i when j ≠ i, then substitute equation 33: DISPLAYFORM5. The case i = j proceeds similarly.
For inputs in batch b, we keep track of exponential running estimates across batches, DISPLAYFORM6, DISPLAYFORM7, DISPLAYFORM8, that marginalize the (B, H, W) dimensions into accumulators of shape C. The b subscript of the outer expectation is slightly abusive notation indicating that α* and β* are running averages across recent batches, with momentum as a hyperparameter that determines the weighting.
We regularize the gradient regression with virtual activations and virtual gradients, defined as follows.
We append two virtual batch items, broadcast to an appropriate shape, x_+ = µ_b + σ_b and x_− = µ_b − σ_b.
Here, µ_b and σ_b are batch statistics of the real activations.
The concatenated tensor undergoes standard BN, which outputs the usual {z_i} for the real activations, but z_+ = 1 and z_− = −1 for the virtual items.
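A small sketch of this mechanism for a 1-D batch of activations is given below (our own rendering with assumed interfaces). Because the appended items sit at µ ± σ, the batch mean and standard deviation are unchanged, so the real activations are unaffected in the forward pass; the exact virtual gradients ∂L/∂z_+ and ∂L/∂z_− are elided in the excerpt (DISPLAYFORM9), so they are passed in as opaque values here.

```python
import torch

def bn_backward_with_virtual_items(x, grad_z, g_plus, g_minus, w_virtual=1.0):
    mu, sigma = x.mean(), x.std(unbiased=False)
    # Appending mu + sigma and mu - sigma leaves mu and sigma unchanged.
    x_aug = torch.cat([x, (mu + sigma).reshape(1), (mu - sigma).reshape(1)]).detach()
    x_aug.requires_grad_(True)
    z = (x_aug - x_aug.mean()) / x_aug.std(unbiased=False)   # z[-2] = +1, z[-1] = -1
    g_virtual = w_virtual * torch.tensor([float(g_plus), float(g_minus)])
    z.backward(torch.cat([grad_z, g_virtual]))               # virtual gradients reshape the regression
    return x_aug.grad[:-2]                                   # gradients w.r.t. the real activations
```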
The z + and z − do not affect the feed forward calculations, but they receive virtual gradients during back-propagation: DISPLAYFORM9 Virtual data z + , ∂L ∂z + and z − , ∂L ∂z − regularizes the gradient-least-squares regression.
∂L ∂z + and ∂L ∂z − eventually modify the gradients received by the real x i activations.
The virtual data can be weighted with hyperparameters.
In our experiments, we see improvements, robust to a hyperparameter cross-product search over the weightings and the momentum for α* and β*.
The momentum for α* and β* was in {.997, .5} and the virtual item weights were in {2^(i−1)}, i ∈ {0, 1, 2, 3}.
The performance of larger batches is not recovered; regularized regression could not reasonably be expected to recover the performance of regressing with more data.
See table 2 for final validation performances with a reference Tensorflow ResNet-34-v2 implementation on batch size of two.
The baseline evaluation with identity (no normalization) experienced noticeable overfitting in terms of cross entropy but not accuracy.
The base learning rate was multiplied by 1/64 relative to the baseline rate used in runs with batch size 128.
|
Gaussian normalization performs a least-squares fit during back-propagation, which zero-centers and decorrelates partial derivatives from normalized activations.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:35
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Generative modeling of high dimensional data like images is a notoriously difficult and ill-defined problem.
In particular, how to evaluate a learned generative model is unclear.
In this paper, we argue that *adversarial learning*, pioneered with generative adversarial networks (GANs), provides an interesting framework to implicitly define more meaningful task losses for unsupervised tasks, such as for generating "visually realistic" images.
By relating GANs and structured prediction under the framework of statistical decision theory, we put into light links between recent advances in structured prediction theory and the choice of the divergence in GANs.
We argue that the insights about the notions of "hard" and "easy" to learn losses can be analogously extended to adversarial divergences.
We also discuss the attractive properties of parametric adversarial divergences for generative modeling, and perform experiments to show the importance of choosing a divergence that reflects the final task.
For structured prediction and data generation the notion of final task is at the same time crucial and not well defined.
Consider machine translation; the goal is to predict a good translation, but even humans might disagree on the correct translation of a sentence.
Moreover, even if we settle on a ground truth, it is hard to define what it means for a candidate translation to be close to the ground truth.
In the same way, for data generation, the task of generating pretty pictures or more generally realistic samples is not well defined.
Nevertheless, both for structured prediction and data generation, we can try to define criteria which characterize good solutions such as grammatical correctness for translation or non-blurry pictures for image generation.
By incorporating enough criteria into a task loss, one can hope to approximate the final task, which is otherwise hard to formalize.Supervised learning and structured prediction are well-defined problems once they are formulated as the minimization of such a task loss.
The usual task loss in object classification is the generalization error associated with the classification error, or 0-1 loss.
In machine translation, where the goal is to predict a sentence, a structured loss, such as the BLEU score BID37 , formally specifies how close the predicted sentence is from the ground truth.
The generalization error is defined through this structured loss.
In both cases, models can be objectively compared and evaluated with respect to the task loss (i.e., generalization error).
On the other hand, we will show that it is not as obvious in generative modeling to define a task loss that correlates well with the final task of generating realistic samples. Traditionally in statistics, distribution learning is formulated as density estimation, where the task loss is the expected negative log-likelihood.
Although log-likelihood works fine in low dimension, it was shown to have many problems in high dimension.
Among others, because the Kullback-Leibler is too strong of a divergence, it can easily saturate whenever the distributions are too far apart, which makes it hard to optimize.
Additionally, it was shown in BID47 that the KL-divergence is a bad proxy for the visual quality of samples. In this work we give insights on how adversarial divergences BID26 can be considered as task losses and how they address some problems of the KL by indirectly incorporating hard-to-define criteria.
We define parametric adversarial divergences as the following: DISPLAYFORM0, where {f_φ : X → ℝ^d ; φ ∈ Φ} is a class of parametrized functions, such as neural networks, called the discriminators in the Generative Adversarial Network (GAN) framework BID15.
The constraints Φ and the function Δ : ℝ^d × ℝ^d → ℝ determine the properties of the resulting divergence.
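Since the displayed definition is elided above (DISPLAYFORM0), the following sketch estimates a parametric adversarial divergence with one common instantiation: maximize E_p[f_φ(x)] − E_q[f_φ(x′)] over a small neural discriminator, i.e., Δ(a, b) = a − b with an unconstrained Φ. The architecture and this choice of Δ are assumptions for illustration, not necessarily the paper's.

```python
import torch
import torch.nn as nn

def adversarial_divergence(sample_p, sample_q, dim, steps=200, lr=1e-3):
    # sample_p() and sample_q() are assumed callables returning (batch, dim) tensors.
    f = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(f.parameters(), lr=lr)
    for _ in range(steps):
        gap = f(sample_p()).mean() - f(sample_q()).mean()  # E_p[f] - E_q[f]
        opt.zero_grad()
        (-gap).backward()                                  # gradient ascent on the gap
        opt.step()
    with torch.no_grad():
        return (f(sample_p()).mean() - f(sample_q()).mean()).item()
```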
Using these notations, we adopt the view that training a GAN can be seen as training a generator network q_θ (parametrized by θ) to minimize the parametric adversarial divergence Div_NN(p||q_θ), where the generator network defines the probability distribution q_θ over x. Our contributions are the following:
• We show that compared to traditional divergences, parametric adversarial divergences offer a good compromise in terms of sample complexity, computation, ability to integrate prior knowledge, flexibility and ease of optimization.
• We relate structured prediction and generative adversarial networks using statistical decision theory, and argue that they both can be viewed as formalizing a final task into the minimization of a statistical task loss.
• We explain why it is necessary to choose a divergence that adequately reflects our final task in generative modeling. We make a parallel with results in structured learning (also dealing with high-dimensional data), which quantify the importance of choosing a good objective in a specific setting.
• We explore with some simple experiments how the properties of the discriminator transfer to the adversarial divergence. Our experiments suggest that parametric adversarial divergences are especially adapted to problems such as image generation, where it is hard to formally define a perceptual loss that correlates well with human judgment.
• We illustrate the importance of having a parametric discriminator by running experiments with the true (nonparametric) Wasserstein, and showing its shortcomings on complex datasets, on which GANs are known to perform well.
• We perform qualitative and quantitative experiments to compare maximum-likelihood and parametric adversarial divergences under two settings: very high-dimensional images, and learning data with specific constraints.
We gave arguments in favor of using adversarial divergences rather than traditional divergences for generative modeling, the most important of which being the ability to account for the final task.
After linking structured prediction and generative modeling under the framework of statistical decision theory, we interpreted recent results from structured prediction, and related them to the notions of strong and weak divergences.
Moreover, viewing adversarial divergences as statistical task losses led us to believe that some adversarial divergences could be used as evaluation criteria in the future, replacing hand-crafted criteria which cannot usually be exhaustive.
In some sense, we want to extrapolate a few desirable properties into a meaningful task loss.
In the future we would like to investigate how to define meaningful evaluation criteria with minimal human intervention.
|
Parametric adversarial divergences implicitly define more meaningful task losses for generative modeling, we make parallels with structured prediction to study the properties of these divergences and their ability to encode the task of interest.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:350
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Experimental reproducibility and replicability are critical topics in machine learning.
Authors have often raised concerns about their lack in scientific publications to improve the quality of the field.
Recently, the graph representation learning field has attracted the attention of a wide research community, which resulted in a large stream of works.
As such, several Graph Neural Network models have been developed to effectively tackle graph classification.
However, experimental procedures often lack rigorousness and are hardly reproducible.
Motivated by this, we provide an overview of common practices that should be avoided to fairly compare with the state of the art.
To counter this troubling trend, we ran more than 47000 experiments in a controlled and uniform framework to re-evaluate five popular models across nine common benchmarks.
Moreover, by comparing GNNs with structure-agnostic baselines we provide convincing evidence that, on some datasets, structural information has not been exploited yet.
We believe that this work can contribute to the development of the graph learning field, by providing a much needed grounding for rigorous evaluations of graph classification models.
Over the years, researchers have raised concerns about several flaws in scholarship, such as experimental reproducibility and replicability in machine learning (McDermott, 1976; Lipton & Steinhardt, 2018) and science in general (National Academies of Sciences & Medicine, 2019) .
These issues are not easy to address, as a collective effort is required to avoid bad practices.
Examples include the ambiguity of experimental procedures, the impossibility of reproducing results and the improper comparison of machine learning models.
As a result, it can be difficult to uniformly assess the effectiveness of one method against another.
This work investigates these issues for the graph representation learning field, by providing a uniform and rigorous benchmarking of state-of-the-art models.
Graph Neural Networks (GNNs) (Micheli, 2009; Scarselli et al., 2008) have recently become the standard tool for machine learning on graphs.
These architectures effectively combine node features and graph topology to build distributed node representations.
GNNs can be used to solve node classification (Kipf & Welling, 2017) and link prediction tasks, or they can be applied to downstream graph classification (Bacciu et al., 2018) .
In literature, such models are usually evaluated on chemical and social domains (Xu et al., 2019) .
Given their appeal, an ever increasing number of GNNs is being developed (Gilmer et al., 2017) .
However, despite the theoretical advancements reached by the latest contributions in the field, we find that the experimental settings are in many cases ambiguous or not reproducible.
Some of the most common reproducibility problems we encounter in this field concern hyperparameters selection and the correct usage of data splits for model selection versus model assessment.
Moreover, the evaluation code is sometimes missing or incomplete, and experiments are not standardized across different works in terms of node and edge features.
These issues easily generate doubts and confusion among practitioners that need a fully transparent and reproducible experimental setting.
As a matter of fact, the evaluation of a model goes through two different phases, namely model selection on the validation set and model assessment on the test set.
Clearly, to fail in keeping these phases well separated could lead to over-optimistic and biased estimates of the true performance of a model, making it hard for other researchers to present competitive results without following the same ambiguous evaluation procedures.
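To make the separation of the two phases concrete, here is a schematic evaluation loop (our own sketch, not the paper's released code): hyper-parameters are chosen only on an inner validation split, and the outer test fold is used once per fold for assessment. `train_eval` is an assumed helper that trains a model with the given hyper-parameters and returns accuracy on the provided evaluation set.

```python
from sklearn.model_selection import StratifiedKFold, train_test_split

def assess(X, y, hyperparam_grid, train_eval, n_folds=10, seed=0):
    outer = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
    test_scores = []
    for train_idx, test_idx in outer.split(X, y):
        # Model selection: inner hold-out split drawn from the training portion only.
        tr_idx, val_idx = train_test_split(
            train_idx, test_size=0.1, random_state=seed, stratify=y[train_idx])
        best = max(hyperparam_grid,
                   key=lambda hp: train_eval(X[tr_idx], y[tr_idx], X[val_idx], y[val_idx], hp))
        # Model assessment: the outer test fold is used once, with the selected config.
        test_scores.append(train_eval(X[train_idx], y[train_idx], X[test_idx], y[test_idx], best))
    return sum(test_scores) / len(test_scores)
```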
With this premise, our primary contribution is to provide the graph learning community with a fair performance comparison among GNN architectures, using a standardized and reproducible experimental environment.
More in detail, we performed a large number of experiments within a rigorous model selection and assessment framework, in which all models were compared using the same features and the same data splits.
Secondly, we investigate if and to what extent current GNN models can effectively exploit graph structure.
To this end, we add two domain-specific and structure-agnostic baselines, whose purpose is to disentangle the contribution of structural information from node features.
Much to our surprise, we found out that these baselines can even perform better than GNNs on some datasets; this calls for moderation when reporting improvements that do not clearly outperform structure-agnostic competitors.
Our last contribution is a study on the effect of node degrees as features in social datasets.
Indeed, we show that providing the degree can be beneficial in terms of performances, and it has also implications in the number of GNN layers needed to reach good results.
We publicly release code and dataset splits to reproduce our results, in order to allow other researchers to carry out rigorous evaluations with minimum additional effort.
Disclaimer: Before delving into the work, we would like to clarify that this work does not aim at pinpointing the best (or worst) performing GNN, nor does it disavow the effort researchers have put into the development of these models.
Rather, it is intended to be an attempt to set up a standardized and uniform evaluation framework for GNNs, such that future contributions can be compared fairly and objectively with existing architectures.
In this paper, we wanted to show how a rigorous empirical evaluation of GNNs can help design future experiments and better reason about the effectiveness of different architectural choices.
To this aim, we highlighted ambiguities in the experimental settings of different papers, and we proposed a clear and reproducible procedure for future comparisons.
We then provided a complete re-evaluation of five GNNs on nine datasets, which required a significant amount of time and computational resources.
This uniform environment helped us reason about the role of structure, as we found that structure-agnostic baselines outperform GNNs on some chemical datasets, thus suggesting that structural properties have not been exploited yet.
Moreover, we objectively analyzed the effect of the degree feature on performances and model selection in social datasets, unveiling an effect on the depth of GNNs.
Finally, we provide the graph learning community with reliable and reproducible results to which GNN practitioners can compare their architectures.
We hope that this work, along with the library we release, will prove useful to researchers and practitioners that want to compare GNNs in a more rigorous way.
|
We provide a rigorous comparison of different Graph Neural Networks for graph classification.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:351
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Data augmentation (DA) is fundamental against overfitting in large convolutional neural networks, especially with a limited training dataset.
In images, DA is usually based on heuristic transformations, like geometric or color transformations.
Instead of using predefined transformations, our work learns data augmentation directly from the training data by learning to transform images with an encoder-decoder architecture combined with a spatial transformer network.
The transformed images still belong to the same class, but are new, more complex samples for the classifier.
Our experiments show that our approach is better than previous generative data augmentation methods, and comparable to predefined transformation methods when training an image classifier.
Convolutional neural networks have shown impressive results in visual recognition tasks.
However, for proper training and good performance, they require large labeled datasets.
If the amount of training data is small, data augmentation is an effective way to improve the final performance of the network (BID6; BID9).
In images, data augmentation (DA) consists of applying predefined transformations such as flips, rotations or color changes (BID8; BID3).
This approach provides consistent improvements when training a classifier.
However, the required transformations are dataset dependent.
For instance, flipping an image horizontally makes sense for natural images, but produces ambiguities on datasets of numbers (e.g., 2 and 5).
Several recent studies investigate automatic DA learning as a method to avoid the manual selection of transformations. BID10 define a large set of transformations and learn how to combine them. This approach works well; however, as it is based on predefined transformations, it prevents the model from finding other transformations that could be useful for the classifier. Alternatively, BID2 and BID12 generate new samples via a generative adversarial network (GAN) model from the probability distribution of the data p(X), while BID0 learn the transformations of images instead of generating images from scratch. These alternative methods show their limits when the number of training samples is low, given the difficulty of training a high-performing generative model with a reduced dataset. BID5 learn the natural transformations in a dataset by aligning pairs of samples from the same class. This approach produces good results on easy datasets like MNIST; however, it does not appear to be applicable to more complex datasets.
Our work combines the advantages of generative models and transformation learning approaches in a single end-to-end network architecture. Our model is based on a conditional GAN architecture that learns to generate transformations of a given image that are useful for DA. In other words, instead of learning to generate samples from p(X), it learns to generate samples from the conditional distribution p(X|X̃), with X̃ a reference image. As shown in FIG0, our approach combines a global transformation defined by an affine matrix with a more localized transformation defined by a convolutional encoder-decoder architecture (a simplified code sketch is given after the contribution list below).
Figure 1 caption: (a) the transformed sample G(x_i, z) is kept dissimilar from the input sample x_i but similar to a sample x_j from the same class; (b) given an input image x_i and a random noise vector z, our generator first performs a global transformation using a spatial transformer network, followed by more localized transformations using a convolutional encoder-decoder network.
The global transformations are learned by an adaptation of the spatial transformer network (STN) (BID7), so that the entire architecture is differentiable and can be learned with standard back-propagation. In its normal use, the purpose of STN is to learn how to transform the input data so that the model becomes invariant to certain transformations. In contrast, our approach uses STN to generate augmented samples in an adversarial way. With the proposed model we show that, for optimal performance, it is important to jointly train the generator of the augmented samples with the classifier in an end-to-end fashion. By doing that, we can also add an adversarial loss between the generator and classifier such that the generated samples are difficult, or adversarial, for the classifier.
To summarize, the contributions of this paper are: i) we propose a DA network that can automatically learn to generate augmented samples without expensive searches for the optimal data transformations; ii) our model trains jointly with a classifier, is fully differentiable, trainable end-to-end, and can significantly improve the performance of any image classifier; iii) in the low-data regime it outperforms models trained with strong predefined DA; iv) finally, we notice that, for optimal performance, it is fundamental to train the model jointly with the image classifier.
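The following is a simplified sketch of the generator described above, with assumed MNIST-like shapes: a localization branch predicts a global affine transform from the image and a noise vector (the STN part), and a small convolutional encoder-decoder applies the localized changes. It is a rough rendering for illustration, not the paper's architecture; in particular, the paper's encoder-decoder is U-Net-like, and the joint adversarial training with the classifier is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AugmentGenerator(nn.Module):
    def __init__(self, z_dim=16):
        super().__init__()
        # Localization network regressing the 6 parameters of an affine transform.
        self.loc = nn.Sequential(nn.Linear(28 * 28 + z_dim, 64), nn.ReLU(), nn.Linear(64, 6))
        # Very small encoder-decoder standing in for the U-Net.
        self.enc = nn.Conv2d(1, 8, 3, padding=1)
        self.dec = nn.Conv2d(8, 1, 3, padding=1)

    def forward(self, x, z):
        theta = self.loc(torch.cat([x.flatten(1), z], dim=1)).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        x_global = F.grid_sample(x, grid, align_corners=False)      # global (STN) transformation
        return torch.sigmoid(self.dec(F.relu(self.enc(x_global))))  # localized refinement
```

In practice the localization layer is usually initialized to output the identity transform, so that training starts from unmodified images.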
In this work, we have presented a new approach for improving the learning of a classifier through an automatic generation of augmented samples.
The method is fully differentiable and can be learned end-to-end.
In our experiments, we have shown several elements contributing to an improved classification performance.
First, the generator and the classifier should be trained jointly.
Second, the combined use of global transformations with STN and local transformation with U-Net is essential to reach the highest accuracy levels.
For future work, we want to include more differentiable transformations such as deformations and color transformations and evaluate how these additional sample augmentations affect the final accuracy.
|
Automatic Learning of data augmentation using a GAN based architecture to improve an image classifier
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:352
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We consider the problem of information compression from high dimensional data.
While many studies consider the problem of compression by non-invertible transformations, we emphasize the importance of invertible compression.
We introduce a new class of likelihood-based auto-encoders with a pseudo-bijective architecture, which we call Pseudo Invertible Encoders.
We provide the theoretical explanation of their principles.
We evaluate the Gaussian Pseudo Invertible Encoder on MNIST, where our model outperforms WAE and VAE in the sharpness of the generated images.
We consider the problem of information compression from high dimensional data.
Where many studies consider the problem of compression by non-invertible transformations, we emphasize the importance of invertible compression as there are many cases where one cannot or will not decide a priori what part of the information is important and what part is not.
Compression of images for person ID in a small company requires less resolution than person ID at an airport.
To lose part of the information without harm to the future purpose of viewing the picture requires knowing the purpose upfront.
Therefore, the fundamental advantage of invertible information compression is that compression can be undone if a future purpose so requires. Recent advances in classification models have demonstrated that deep learning architectures of proper design do not lead to information loss while still being able to achieve state-of-the-art classification performance.
These i-RevNet models BID5 implement a small but essential modification of the popular RevNet models while achieving invertibility and a performance similar to the standard RevNet BID2 .
This is of great interest as it contradicts the intuition that information loss is essential to achieve good performance in classification BID13 .
Despite the requirement of invertibility, flow-based generative models (BID0; BID11; BID6) demonstrate that the combination of bijective mappings allows one to transform the raw distribution of the input data to any desired distribution and to manipulate the data. On the other hand, Auto-Encoders have provided the ideal mechanism to reduce the data to the bare minimum while retaining all essential information for a specific task, the one implemented in the loss function.
Variational Auto Encoders (VAE) BID7 and Wasserstein Auto Encoders (WAE) BID14 are performing best.
They provide an approach for stable training of autoencoders, which demonstrate good results at reconstruction and generation.
However, both of these methods involve the optimization of the objective defined on the pixel level.
We would emphasize the importance of avoiding the separate decoder part and training the model without relying on the reconstruction quality directly. Combining the best of invertible mappings and Auto-Encoders, we introduce the Pseudo Invertible Encoder.
Our model combines bijectives with restriction and extension of the mappings to the dependent sub-manifolds (see FIG0).
The main contributions of this paper are the following:
• We introduce a new class of likelihood-based Auto-Encoders, which we call Pseudo Invertible Encoders. We provide the theoretical explanation of their principles.
• We demonstrate the properties of the Gaussian Pseudo Invertible Encoder in manifold learning.
• We compare our model with WAE and VAE on MNIST, and report that the sharpness of the images generated by our models is better.
In this paper we have proposed a new class of Auto-Encoders, which we call the Pseudo Invertible Encoder.
We provided a theory which bridges the gap between Auto Encoders and Normalizing Flows.
The experiments demonstrate that the proposed model learns the manifold structure and generates sharp images.
|
New Class of Autoencoders with pseudo invertible architecture
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:353
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We exploit a recently derived inversion scheme for arbitrary deep neural networks to develop a new semi-supervised learning framework that applies to a wide range of systems and problems.
The approach reaches current state-of-the-art methods on MNIST and provides reasonable performances on SVHN and CIFAR10.
Through the introduced method, residual networks are for the first time applied to semi-supervised tasks.
Experiments with one-dimensional signals highlight the generality of the method.
Importantly, our approach is simple, efficient, and requires no change in the deep network architecture.
Deep neural networks (DNNs) have made great strides recently in a wide range of difficult machine perception tasks.
They consist of parametric functionals f_Θ with internal parameters Θ.
However, those systems are still trained in a fully supervised fashion using a large set of labeled data, which is tedious and costly to acquire. Semi-supervised learning relaxes this requirement by learning Θ based on two datasets: a labeled set D of N training data pairs and an unlabeled set D_u of N_u training inputs.
Unlabeled training data is useful for learning as unlabeled inputs provide information on the statistical distribution of the data that can both guide the learning required to classify the supervised dataset and characterize the unlabeled samples in D u hence improve generalization.
Limited progress has been made on semi-supervised learning algorithms for DNNs BID15 ; BID16 BID14 , but today's methods suffer from a range of drawbacks, including training instability, lack of topology generalization, and computational complexity.In this paper, we take two steps forward in semi-supervised learning for DNNs.
First, we introduce an universal methodology to equip any deep neural net with an inverse that enables input reconstruction.
Second, we introduce a new semi-supervised learning approach whose loss function features an additional term based on this aforementioned inverse, guiding weight updates such that information contained in unlabeled data is incorporated into the learning process.
Our key insight is that this general inverse function can be easily derived and computed; thus, for unlabeled data points, we can both compute and minimize the error between the input signal and the estimate obtained by applying the inverse function to the network output, without extra cost or change in the model used.
The simplicity of this approach, coupled with its universal applicability, promises to significantly advance the purview of semi-supervised and unsupervised learning.
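A rough sketch of how such a combined loss might be assembled is given below; the function `inverse_fn` is a placeholder for the paper's inversion scheme (not reproduced here), and the weighting coefficients are illustrative only:

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, inverse_fn, x_lab, y_lab, x_unlab,
                         alpha=1.0, beta=0.1, gamma=0.1):
    # Supervised cross-entropy on the labeled batch.
    sup = F.cross_entropy(model(x_lab), y_lab)

    logits_u = model(x_unlab)
    # Entropy of the predictions on unlabeled data (optional sharpening term).
    probs = logits_u.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()

    # Reconstruction term: apply the (hypothetical) inverse to the network
    # output and compare with the original unlabeled input.
    x_rec = inverse_fn(model, logits_u, x_unlab)
    recon = F.mse_loss(x_rec, x_unlab)

    return alpha * sup + beta * entropy + gamma * recon
```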
We have presented a well-justified inversion scheme for deep neural networks with an application to semi-supervised learning.
Demonstrating the ability of the method to match or beat current state-of-the-art results on MNIST with different possible topologies supports the portability of the technique as well as its potential.
These results open up many questions in this yet undeveloped area of DNN inversion, input reconstruction, and their impact on learning and stability. Among the possible extensions, one can develop the reconstruction loss into a per-layer reconstruction loss.
Doing so opens the possibility of weighting each layer's penalty, bringing flexibility as well as meaningful reconstruction.
Define the per-layer loss as DISPLAYFORM0 with DISPLAYFORM1. Doing so, one can adopt a strategy in favor of a high reconstruction objective for inner layers, close to the final latent representation z^(L), in order to lessen the reconstruction cost for layers closer to the input X_n.
In fact, inputs of standard dataset are usually noisy, with background, and the object of interest only contains a small energy with respect to the total energy of X n .
Another extension would be to update the weighting while performing learning.
Hence, if we denote by t the position in time, such as the current epoch or batch, the previous loss becomes DISPLAYFORM2. One approach would be to impose a deterministic policy based on heuristics, such as favoring reconstruction at the beginning and then switching to classification and entropy minimization.
Finer approaches could rely on explicit optimization schemes for those coefficients.
One way to perform this would be to optimize the loss weighting coefficients α, β, γ after each batch or epoch by backpropagation through the updated weights.
Define DISPLAYFORM3 as a generic iterative update based on a given policy such as gradient descent.
One can thus adopt the following update strategy for the hyper-parameters, DISPLAYFORM4, and likewise for all hyper-parameters.
Another approach would be to use adversarial training to update those hyper-parameters, where both updates cooperate to accelerate learning. EBGANs BID18 are GANs where the discriminator network D measures the energy of a given input X. D is formulated such that generated data produce high energy and real data produce lower energy.
The same authors propose the use of an auto-encoder to compute such an energy function.
We plan to replace this autoencoder with our proposed method to reconstruct X and compute the energy; hence D(X) = R(X), and only half the parameters will be needed for D. Finally, our approach opens the possibility of performing unsupervised tasks such as clustering.
In fact, by setting α = 0, we are in a fully unsupervised framework.
Moreover, β can push the mapping f Θ to produce a low-entropy, clustered, representation or rather simply to produce optimal reconstruction.
Even in a fully unsupervised and reconstruction case (α = 0, β = 1), the proposed framework is not similar to a deep-autoencoder for two main reasons.
First, there is no greedy (per layer) reconstruction loss, only the final output is considered in the reconstruction loss.
Second, while in both cases there is parameter sharing, in our case there is also "activation" sharing, corresponding to the states (splines) that were used in the forward pass and that will also be used for the backward one.
In a deep autoencoder, the backward activation states are induced by the backward projection and will most likely not be equal to the forward ones.
|
We exploit an inversion scheme for arbitrary deep neural networks to develop a new semi-supervised learning framework applicable to many topologies.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:354
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep learning has become a widely used tool in many computational and classification problems.
Nevertheless obtaining and labeling data, which is needed for strong results, is often expensive or even not possible.
In this paper three different algorithmic approaches to deal with limited access to data are evaluated and compared to each other.
We show the drawbacks and benefits of each method.
One successful approach, especially in one- or few-shot learning tasks, is the use of external data during the classification task.
Another successful approach, which achieves state of the art results in semi-supervised learning (SSL) benchmarks, is consistency regularization.
Especially virtual adversarial training (VAT) has shown strong results and will be investigated in this paper.
The aim of consistency regularization is to force the network not to change the output, when the input or the network itself is perturbed.
Generative adversarial networks (GANs) have also shown strong empirical results.
In many approaches the GAN architecture is used to create additional data and therefore to increase the generalization capability of the classification network.
Furthermore we consider the use of unlabeled data for further performance improvement.
The use of unlabeled data is investigated both for GANs and VAT.
Deep neural networks have shown great performance in a variety of tasks, like speech or image recognition.
However, extremely large datasets are often necessary to achieve this.
In real-world applications, collecting data is often very expensive in terms of cost or time.
Furthermore, collected data is often unbalanced or even incorrectly labeled.
Hence, the performance achieved in academic papers is hard to match. Recently, different approaches have tackled these problems and tried to achieve good performance where otherwise fully supervised baselines fail to do so.
One approach to learning from very few examples, the so-called few-shot learning task, consists of giving a collection of inputs and their corresponding similarities instead of input-label pairs.
This approach was thoroughly investigated in BID9 , BID33 , BID28 and gave impressive results on the Omniglot dataset (BID12).
In essence, a task-specific similarity measure is learned that embeds the inputs before comparison. Furthermore, semi-supervised learning (SSL) has achieved strong results in image classification tasks.
In SSL, a labeled set of input-target pairs (x, y) ∈ D_L and additionally an unlabeled set of inputs x ∈ D_UL are given.
Generally speaking, the use of D_UL shall provide additional information about the structure of the data.
Generative models can be used to create additional labeled or unlabeled samples and leverage information from these samples BID26 , BID18 ).
Furthermore in BID2 it is argued, that GAN-based semi-supervised frameworks perform best, when the generated images are of poor quality.
Using these badly generated images a classifier with better generalization capability is obtained.
On the other side, generative models can also be used to learn feature representations instead of generating additional data. Another approach to deal with limited data is consistency regularization.
The main point of consistency regularization is, that the output of the network shall not change, when the input or the network itself is perturbed.
These perturbations may also result in inputs, which are not realistic anymore.
This way a smooth manifold is found on which the data lies.
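As a rough illustration of this idea, the sketch below is a simplified stand-in, not the exact VAT objective; VAT replaces the random perturbation with an adversarially chosen one found via a power-iteration step:

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_unlabeled, noise_std=0.1):
    # Prediction on the clean input, treated as a fixed target.
    with torch.no_grad():
        p_clean = model(x_unlabeled).softmax(dim=1)
    # Perturb the input; VAT would search for the worst-case direction instead.
    x_perturbed = x_unlabeled + noise_std * torch.randn_like(x_unlabeled)
    log_p_pert = model(x_perturbed).log_softmax(dim=1)
    # Penalize any change in the output distribution under the perturbation.
    return F.kl_div(log_p_pert, p_clean, reduction="batchmean")
```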
Different approaches to consistency regularization can be found in BID15 , BID23 , BID11 , and BID32 . The aim of this paper is to investigate how these different approaches behave compared to each other. Therefore, a specific image and sound recognition task is created with a varying amount of labeled data. Beyond that, it is further explored how different amounts of unlabeled data support the tasks, whilst also varying the size of the labeled data. The possible accuracy improvements from labeled and unlabeled examples are compared to each other. Since a correlation between category mismatch of unlabeled data and labeled data has been reported (BID20), we investigate how this correlation behaves for different approaches and datasets.
In this paper three methods for dealing with little data have been compared to each other.
When the amount of labeled data is very little and no unlabeled data is available, siamese neural networks offer the best alternative in order to achieve good results in terms of accuracy.
Furthermore when there is additional unlabeled data available using GANs or VAT offer a good option.
VAT outperforms GAN when the amount of data is low.
In contrast, GANs should be preferred for moderate or high amounts of data.
Nevertheless, both methods must be tested for any individual use case, since their behavior may change for different datasets. Surprising results have been obtained in the class-mismatch experiment.
It was observed that adding samples which do not belong to the target classes does not necessarily reduce the accuracy.
Whether adding such samples improves or reduces the accuracy may heavily depend on how closely these samples/classes are related to the target samples/classes.
An interesting question remains whether datasets which perform well in transfer learning tasks (e.g., transferring from ImageNet to CIFAR-10) may also be suitable for such semi-supervised learning tasks. Furthermore, any combination of the three examined methods could bear interesting results, e.g., VAT could be applied to the discriminator in the GAN framework.
Also, a combination of GANs and siamese neural networks could be useful; in this case the siamese neural network would have two outputs, one for the source and one for the similarity.
|
Comparison of siamese neural networks, GANs, and VAT for few shot learning.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:355
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
This paper introduces a new neural structure called FusionNet, which extends existing attention approaches from three perspectives.
First, it puts forward a novel concept of "History of Word" to characterize attention information from the lowest word-level embedding up to the highest semantic-level representation.
Second, it identifies an attention scoring function that better utilizes the "history of word" concept.
Third, it proposes a fully-aware multi-level attention mechanism to capture the complete information in one text (such as a question) and exploit it in its counterpart (such as context or passage) layer by layer.
We apply FusionNet to the Stanford Question Answering Dataset (SQuAD) and it achieves the first position for both single and ensemble model on the official SQuAD leaderboard at the time of writing (Oct. 4th, 2017).
Meanwhile, we verify the generalization of FusionNet with two adversarial SQuAD datasets and it sets up the new state-of-the-art on both datasets: on AddSent, FusionNet increases the best F1 metric from 46.6% to 51.4%; on AddOneSent, FusionNet boosts the best F1 metric from 56.0% to 60.7%.
Context: The Alpine Rhine is part of the Rhine, a famous European river.
The Alpine Rhine begins in the most western part of the Swiss canton of Graubünden, and later forms the border between Switzerland to the West and Liechtenstein and later Austria to the East.
On the other hand, the Danube separates Romania and Bulgaria.
In this paper, we describe a new deep learning model called FusionNet with its application to machine comprehension.
FusionNet proposes a novel attention mechanism with the following three contributions:
1. the concept of history-of-word to build the attention using complete information from the lowest word-level embedding up to the highest semantic-level representation;
2. an attention scoring function to effectively and efficiently utilize history-of-word;
3. a fully-aware multi-level fusion to exploit information layer by layer discriminatingly.
We applied FusionNet to the MRC task, and experimental results show that FusionNet outperforms existing machine reading models on both the SQuAD dataset and the adversarial SQuAD datasets.
We believe FusionNet is a general and improved attention mechanism and can be applied to many tasks.
Our future work is to study its capability in other NLP problems.
|
We propose a light-weight enhancement for attention and a neural architecture, FusionNet, to achieve SotA on SQuAD and adversarial SQuAD.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:356
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We propose a novel hierarchical generative model with a simple Markovian structure and a corresponding inference model.
Both the generative and inference model are trained using the adversarial learning paradigm.
We demonstrate that the hierarchical structure supports the learning of progressively more abstract representations as well as providing semantically meaningful reconstructions with different levels of fidelity.
Furthermore, we show that minimizing the Jensen-Shanon divergence between the generative and inference network is enough to minimize the reconstruction error.
The resulting discovery of a semantically meaningful hierarchical latent structure is exemplified on the CelebA dataset.
There, we show that the features learned by our model in an unsupervised way outperform the best handcrafted features.
Furthermore, the extracted features remain competitive when compared to several recent deep supervised approaches on an attribute prediction task on CelebA.
Finally, we leverage the model's inference network to achieve state-of-the-art performance on a semi-supervised variant of the MNIST digit classification task.
Deep generative models represent powerful approaches to modeling highly complex high-dimensional data.
There has been a lot of recent research geared towards the advancement of deep generative modeling strategies, including Variational Autoencoders (VAE) BID16 , autoregressive models BID32 b) and hybrid models BID9 BID31 .
However, Generative Adversarial Networks (GANs) BID8 have emerged as the learning paradigm of choice across a varied range of tasks, especially in computer vision BID47 , simulation and robotics BID7 BID41 .
GANs cast the learning of a generative network in the form of a game between the generative and discriminator networks.
While the discriminator is trained to distinguish between the true and generated examples, the generative model is trained to fool the discriminator.
Using a discriminator network in GANs avoids the need for an explicit reconstruction-based loss function.
This allows this model class to generate visually sharper images than VAEs while simultaneously enjoying faster sampling than autoregressive models. Recent work, known as either ALI or BiGAN, has shown that the adversarial learning paradigm can be extended to incorporate the learning of an inference network.
While the inference network, or encoder, maps training examples x to a latent space variable z, the decoder plays the role of the standard GAN generator mapping from space of the latent variables (that is typically sampled from some factorial distribution) into the data space.
In ALI, the discriminator is trained to distinguish between the encoder and the decoder, while the encoder and decoder are trained to conspire together to fool the discriminator.
Unlike some approaches that hybridize VAE-style inference with GAN-style generative learning (e.g. BID20 , ), the encoder and decoder in ALI use a purely adversarial approach.
One big advantage of adopting an adversarial-only formalism is demonstrated by the high-quality of the generated samples.
Additionally, we are given a mechanism to infer the latent code associated with a true data example. One interesting feature highlighted in the original ALI work is that even though the encoder and decoder models are never explicitly trained to perform reconstruction, this can nevertheless be easily done by projecting data samples via the encoder into the latent space, copying these values across to the latent variable layer of the decoder, and projecting them back to the data space.
Doing this yields reconstructions that often preserve some semantic features of the original input data, but are perceptually relatively different from the original samples.
These observations naturally lead to the question of the source of the discrepancy between the data samples and their ALI reconstructions.
Is the discrepancy due to a failure of the adversarial training paradigm, or is it due to the more standard challenge of compressing the information from the data into a rather restrictive latent feature vector?
BID44 show that an improvement in reconstructions is achievable when additional terms which explicitly minimize reconstruction error in the data space are added to the training objective.
BID23 palliates the non-identifiability issues pertaining to bidirectional adversarial training by augmenting the generator's loss with an adversarial cycle-consistency loss. In this paper we explore issues surrounding the representation of complex, richly-structured data, such as natural images, in the context of a novel hierarchical generative model, Hierarchical Adversarially Learned Inference (HALI), which represents a hierarchical extension of ALI.
We show that within a purely adversarial training paradigm, and by exploiting the model's hierarchical structure, we can modulate the perceptual fidelity of the reconstructions.
We provide theoretical arguments for why HALI's adversarial game should be sufficient to minimize the reconstruction cost and show empirical evidence supporting this perspective.
Finally, we evaluate the usefulness of the learned representations on a semi-supervised task on MNIST and an attribution prediction task on the CelebA dataset.
In this paper, we introduced HALI, a novel adversarially trained generative model.
HALI learns a hierarchy of latent variables with a simple Markovian structure in both the generator and inference networks.
We have shown both theoretically and empirically the advantages gained by extending the ALI framework to a hierarchy. While there are many potential applications of HALI, one important future direction of research is to explore ways to render the training process more stable and straightforward.
GANs are well-known to be challenging to train and the introduction of a hierarchy of latent variables only adds to this.
|
Adversarially trained hierarchical generative model with robust and semantically learned latent representation.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:357
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Conservation laws are considered to be fundamental laws of nature.
They have broad applications in many fields, including physics, chemistry, biology, geology, and engineering.
Solving the differential equations associated with conservation laws is a major branch in computational mathematics.
Recent success of machine learning, especially deep learning, in areas such as computer vision and natural language processing, has attracted a lot of attention from the community of computational mathematics and inspired many intriguing works in combining machine learning with traditional methods.
In this paper, we are the first to explore the possibility and benefit of solving nonlinear conservation laws using deep reinforcement learning.
As a proof of concept, we focus on 1-dimensional scalar conservation laws.
We deploy the machinery of deep reinforcement learning to train a policy network that can decide on how the numerical solutions should be approximated in a sequential and spatial-temporal adaptive manner.
We will show that the problem of solving conservation laws can be naturally viewed as a sequential decision making process and the numerical schemes learned in such a way can easily enforce long-term accuracy.
Furthermore, the learned policy network is carefully designed to determine a good local discrete approximation based on the current state of the solution, which essentially makes the proposed method a meta-learning approach.
In other words, the proposed method is capable of learning how to discretize for a given situation mimicking human experts.
Finally, we will provide details on how the policy network is trained, how well it performs compared with some state-of-the-art numerical solvers such as WENO schemes, and how well it generalizes.
Our code is released anonymously at \url{https://github.com/qwerlanksdf/L2D}.
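To make the sequential-decision view concrete, the sketch below frames a toy 1-D linear advection problem as an environment in which the agent chooses, per cell, how to blend two candidate numerical fluxes; this is not the paper's setup (which targets scalar conservation laws with WENO-style actions), and the grid size, stencil, and reward are illustrative assumptions:

```python
import numpy as np

class Advection1DEnv:
    """Toy RL environment for u_t + u_x = 0 on a periodic grid.  The action
    blends an upwind and a central flux per cell; state = a local stencil of
    solution values; reward = negative L1 error against the exact solution,
    which for pure advection is a shift of the initial condition."""
    def __init__(self, n=100, cfl=0.5):
        self.n, self.dx = n, 1.0 / n
        self.dt = cfl * self.dx
        self.x = np.linspace(0.0, 1.0, n, endpoint=False)
        self.reset()

    def reset(self):
        self.t = 0.0
        self.u = np.sin(2 * np.pi * self.x)
        return self._state()

    def _state(self):
        # 5-point stencil around every cell (the "local" observation)
        return np.stack([np.roll(self.u, s) for s in (2, 1, 0, -1, -2)], axis=1)

    def step(self, w):
        """w in [0, 1]^n: 1 = pure upwind difference, 0 = pure central difference."""
        upwind = self.u - np.roll(self.u, 1)                      # u_i - u_{i-1}
        central = (np.roll(self.u, -1) - np.roll(self.u, 1)) / 2  # (u_{i+1} - u_{i-1}) / 2
        self.u = self.u - self.dt / self.dx * (w * upwind + (1 - w) * central)
        self.t += self.dt
        exact = np.sin(2 * np.pi * (self.x - self.t))
        reward = -np.abs(self.u - exact).mean()
        return self._state(), reward, self.t >= 1.0
```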
Conservation laws are considered to be among the fundamental laws of nature and have broad applications in multiple fields such as physics, chemistry, biology, geology, and engineering.
For example, Burger's equation, a very classic partial differential equation (PDE) in conservation laws, has important applications in fluid mechanics, nonlinear acoustics, gas dynamics, and traffic flow.
Solving the differential equations associated with conservation laws has been a major branch of computational mathematics (LeVeque, 1992; 2002) , and a lot of effective methods have been proposed, from classic methods such as the upwind scheme, the Lax-Friedrichs scheme, to the advanced ones such as the ENO/WENO schemes (Liu et al., 1994; Shu, 1998) , the flux-limiter methods (Jerez Galiano & Uh Zapata, 2010) , and etc.
In the past few decades, these traditional methods have been proven successful in solving conservation laws.
Nonetheless, the design of some of the high-end methods heavily relies on expert knowledge and the coding of these methods can be a laborious process.
To ease the usage and potentially improve these traditional algorithms, machine learning, especially deep learning, has been recently incorporated into this field.
For example, the ENO scheme requires lots of 'if/else' logical judgments when used to solve complicated system of equations or high-dimensional equations.
This very much resembles the old-fashioned expert systems.
The recent trend in artificial intelligence (AI) is to replace the expert systems by the so-called 'connectionism', e.g., deep neural networks, which leads to the recent bloom of AI.
Therefore, it is natural and potentially beneficial to introduce deep learning in traditional numerical solvers of conservation laws.
In this paper, we proposed a general framework to learn how to solve 1-dimensional conservation laws via deep reinforcement learning.
We first discussed how the procedure of numerically solving conservation laws can be naturally cast in the form of a Markov Decision Process.
We then elaborated how to relate notions in numerical schemes of PDEs with those of reinforcement learning.
In particular, we introduced a numerical flux policy which was able to decide on how numerical flux should be designed locally based on the current state of the solution.
We carefully design the action of our RL policy to make it a meta-learner.
Our numerical experiments showed that the proposed RL based solver was able to outperform high order WENO and was well generalized in various cases.
As part of future work, we would like to consider using the numerical flux policy to infer more complicated numerical fluxes with guaranteed consistency and stability.
Furthermore, we can use the proposed framework to learn a policy that can generate adaptive grids and the associated numerical schemes.
Lastly, we would like to consider systems of conservation laws in two- and three-dimensional space.
|
We observe that numerical PDE solvers can be regarded as Markov Desicion Processes, and propose to use Reinforcement Learning to solve 1D scalar Conservation Laws
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:358
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We present a neural rendering architecture that helps variational autoencoders (VAEs) learn disentangled representations.
Instead of the deconvolutional network typically used in the decoder of VAEs, we tile (broadcast) the latent vector across space, concatenate fixed X- and Y-“coordinate” channels, and apply a fully convolutional network with 1x1 stride.
This provides an architectural prior for dissociating positional from non-positional features in the latent space, yet without providing any explicit supervision to this effect.
We show that this architecture, which we term the Spatial Broadcast decoder, improves disentangling, reconstruction accuracy, and generalization to held-out regions in data space.
We show that the Spatial Broadcast decoder is complementary to state-of-the-art (SOTA) disentangling techniques and, when incorporated, improves their performance.
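The tiling-plus-coordinates construction can be written down in a few lines; the sketch below follows the description above, but the layer widths, kernel sizes, and output resolution are illustrative assumptions rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

class SpatialBroadcastDecoder(nn.Module):
    """Broadcast the latent vector over an H x W grid, append fixed x/y
    coordinate channels, and apply a 1x1-stride convolutional network."""
    def __init__(self, latent_dim, out_channels=3, height=64, width=64):
        super().__init__()
        self.h, self.w = height, width
        ys = torch.linspace(-1, 1, height).view(1, 1, height, 1).expand(1, 1, height, width)
        xs = torch.linspace(-1, 1, width).view(1, 1, 1, width).expand(1, 1, height, width)
        self.register_buffer("coords", torch.cat([xs, ys], dim=1))   # (1, 2, H, W)
        self.net = nn.Sequential(
            nn.Conv2d(latent_dim + 2, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_channels, kernel_size=3, stride=1, padding=1),
        )

    def forward(self, z):                                   # z: (B, latent_dim)
        b = z.size(0)
        z_tiled = z.view(b, -1, 1, 1).expand(-1, -1, self.h, self.w)   # tile across space
        coords = self.coords.expand(b, -1, -1, -1)
        return self.net(torch.cat([z_tiled, coords], dim=1))
```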
Knowledge transfer and generalization are hallmarks of human intelligence.
From grammatical generalization when learning a new language to visual generalization when interpreting a Picasso, humans have an extreme ability to recognize and apply learned patterns in new contexts.
Current machine learning algorithms pale in contrast, suffering from overfitting, adversarial attacks, and domain specialization BID12 BID16 .
We believe that one fruitful approach to improve generalization in machine learning is to learn compositional representations in an unsupervised manner.
A compositional representation consists of components that can be recombined, and such recombination underlies generalization.
For example, consider a pink elephant.
With a representation that composes color and object independently, imagining a pink elephant is trivial.
However, a pink elephant may not be within the scope of a representation that mixes color and object.
Compositionality comes in a variety of flavors, including feature compositionality (e.g. pink elephant), multi-object compositionality (e.g. elephant next to a penguin), and relational compositionality (e.g. the smallest elephant).
In this work we focus on feature compositionality.
Here we present the Spatial Broadcast decoder for Variational Autoencoders.
We demonstrate that it improves learned latent representations, most dramatically for datasets with objects varying in position.
It also improves generalization in latent space and can be incorporated into SOTA models to boost their performance in terms of both disentangling and reconstruction accuracy.
We believe that learning compositional representations is an important ingredient for flexibility and generalization in many contexts, from supervised learning to reinforcement learning, and the Spatial Broadcast decoder is one step towards robust compositional visual representation learning.
|
We introduce a neural rendering architecture that helps VAEs learn disentangled latent representations.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:359
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Batch Normalization (BN) has become a cornerstone of deep learning across diverse architectures, appearing to help optimization as well as generalization.
While the idea makes intuitive sense, theoretical analysis of its effectiveness has been lacking.
Here theoretical support is provided for one of its conjectured properties, namely, the ability to allow gradient descent to succeed with less tuning of learning rates.
It is shown that even if we fix the learning rate of scale-invariant parameters (e.g., weights of each layer with BN) to a constant (say, 0.3), gradient descent still approaches a stationary point (i.e., a solution where the gradient is zero) at a rate of T^{−1/2} over T iterations, asymptotically matching the best bound for gradient descent with well-tuned learning rates.
A similar result with convergence rate T^{−1/4} is also shown for stochastic gradient descent.
Batch Normalization (abbreviated as BatchNorm or BN) (Ioffe & Szegedy, 2015) is one of the most important innovation in deep learning, widely used in modern neural network architectures such as ResNet BID8 , Inception (Szegedy et al., 2017) , and DenseNet (Huang et al., 2017) .
It also inspired a series of other normalization methods (Ulyanov et al., 2016; BID0; Ioffe, 2017; Wu & He, 2018).
BatchNorm consists of standardizing the output of each layer to have zero mean and unit variance. For a single neuron, if x_1, ..., x_B are the original outputs in a mini-batch, the BatchNorm layer modifies the outputs to x̂_i = γ (x_i − µ)/σ + β, where µ = (1/B) ∑_{i=1}^B x_i and σ² = (1/B) ∑_{i=1}^B (x_i − µ)² are the mean and variance within the mini-batch, and γ, β are two learnable parameters.
BN appears to stabilize and speed up training, and improve generalization. The inventors suggested (Ioffe & Szegedy, 2015) that these benefits derive from the following:
1. By stabilizing layer outputs it reduces a phenomenon called Internal Covariate Shift, whereby the training of a higher layer is continuously undermined or undone by changes in the distribution of its inputs due to parameter changes in previous layers;
2. By making the weights invariant to scaling, it appears to reduce the dependence of training on the scale of parameters and enables us to use a higher learning rate;
3. By implicitly regularizing the model it improves generalization.
But these three benefits are not fully understood in theory. Understanding generalization for deep models remains an open problem (with or without BN). Furthermore, in demonstration that intuition can sometimes mislead, recent experimental results suggest that BN does not reduce internal covariate shift either (Santurkar et al., 2018), and the authors of that study suggest that the true explanation for BN's effectiveness may lie in a smoothening effect (i.e., lowering of the Hessian norm) on the objective. Another recent paper (Kohler et al., 2018) tries to quantify the benefits of BN for simple machine learning problems such as regression but does not analyze deep models.
Provable quantification of Effect 2 (learning rates). Our study consists of quantifying the effect of BN on learning rates. Ioffe & Szegedy (2015) observed that without BatchNorm, a large learning rate leads to a rapid growth of the parameter scale. Introducing BatchNorm usually stabilizes the growth of weights and appears to implicitly tune the learning rate so that the effective learning rate adapts during the course of the algorithm. They explained this intuitively as follows. After BN, the output of a neuron z = BN(w x) is unaffected when the weight w is scaled, i.e., for any scalar c > 0, BN(w x) = BN((cw) x). Taking derivatives, one finds that the gradient at cw equals the gradient at w multiplied by a factor 1/c. Thus, even though the scale of the weight parameters of a linear layer preceding a BatchNorm no longer means anything to the function represented by the neural network, their growth has the effect of reducing the learning rate.
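This invariance and the 1/c gradient scaling are easy to verify numerically; the toy check below is only an illustration of the argument, not part of the paper's analysis (the layer sizes and the scalar c are arbitrary, and BatchNorm's ε is set to zero so that the invariance is exact):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(128, 10)                        # a mini-batch of inputs
w = torch.randn(10, 32, requires_grad=True)     # weights feeding the BN layer
v = torch.randn(32)                             # fixed readout, just to get a scalar loss
bn = nn.BatchNorm1d(32, affine=False, eps=0.0)  # eps = 0 makes the invariance exact

c = 5.0
w_scaled = (c * w.detach()).requires_grad_(True)

loss = (bn(x @ w).relu() * v).sum()
loss_scaled = (bn(x @ w_scaled).relu() * v).sum()
print(torch.allclose(loss, loss_scaled, rtol=1e-3, atol=1e-3))    # same function value

loss.backward()
loss_scaled.backward()
# the gradient at c*w is (1/c) times the gradient at w
print(torch.allclose(w.grad / c, w_scaled.grad, rtol=1e-3, atol=1e-4))
```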
Our paper considers the following question: can we rigorously capture the above intuitive behavior? Theoretical analyses of the speed of gradient descent algorithms in nonconvex settings study the number of iterations required for convergence to a stationary point (i.e., where the gradient vanishes). But they need to assume that the learning rate has been set (magically) to a small enough number determined by the smoothness constant of the loss function, which in practice is of course unknown. With this tuned learning rate, the norm of the gradient reduces asymptotically as T^{−1/2} in T iterations. In the case of stochastic gradient descent, the reduction is like T^{−1/4}. Thus a potential way to quantify the rate-tuning behavior of BN would be to show that even when the learning rate is fixed to a suitable constant, say 0.1, from the start, after introducing BN the convergence to a stationary point is asymptotically just as fast (essentially) as it would be with the hand-tuned learning rate required by earlier analyses. The current paper rigorously establishes such auto-tuning behavior of BN (see below for an important clarification about scale-invariance).
We note that a recent paper (Wu et al., 2018) introduced a new algorithm, WNgrad, that is motivated by BN and provably has the above auto-tuning behavior as well. That paper did not establish such behavior for BN itself, but it was a clear inspiration for our analysis of BN.
Scale-invariant and scale-variant parameters. The intuition of Ioffe & Szegedy (2015) applies to all scale-invariant parameters, but the actual algorithm also involves other parameters, such as γ and β, whose scale does matter. Our analysis partitions the parameters in the neural network into two groups, W (scale-invariant) and g (scale-variant). The first group, W = {w^(1), ..., w^(m)}, consists of all the parameters whose scale does not affect the loss, i.e., scaling w^(i) to c·w^(i) for any c > 0 does not change the loss (see Definition 2.1 for a formal definition); the second group, g, consists of all other parameters that are not scale-invariant. In a feedforward neural network with BN added at each layer, the layer weights are all scale-invariant. This is also true for BN with ℓ_p normalization strategies (Santurkar et al., 2018; Hoffer et al., 2018) and other normalization layers, such as Weight Normalization (Salimans & Kingma, 2016), Layer Normalization (BID0), and Group Normalization (Wu & He, 2018) (see Table 1 in BID0 for a summary).
In this paper, we studied how scale-invariance in neural networks with BN helps optimization, and showed that (stochastic) gradient descent can achieve the asymptotic best convergence rate without tuning learning rates for scale-invariant parameters.
Our analysis suggests that the scale-invariance introduced in neural networks by BN reduces the effort of tuning the learning rate to fit the training data. However, our analysis only applies to smooth loss functions.
In modern neural networks, ReLU or Leaky ReLU activations are often used, which makes the loss non-smooth.
Showing similar results in non-smooth settings would therefore have broader implications.
Also, we only considered gradient descent in this paper.
It can be shown that if we perform (stochastic) gradient descent with momentum, the norm of scale-invariant parameters will also be monotone increasing.
It would be interesting to use it to show similar convergence results for more gradient methods.
|
We give a theoretical analysis of the ability of batch normalization to automatically tune learning rates, in the context of finding stationary points for a deep learning objective.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:36
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Similar to humans and animals, deep artificial neural networks exhibit critical periods during which a temporary stimulus deficit can impair the development of a skill.
The extent of the impairment depends on the onset and length of the deficit window, as in animal models, and on the size of the neural network.
Deficits that do not affect low-level statistics, such as vertical flipping of the images, have no lasting effect on performance and can be overcome with further training.
To better understand this phenomenon, we use the Fisher Information of the weights to measure the effective connectivity between layers of a network during training.
Counterintuitively, information rises rapidly in the early phases of training, and then decreases, preventing redistribution of information resources in a phenomenon we refer to as a loss of "Information Plasticity".
Our analysis suggests that the first few epochs are critical for the creation of strong connections that are optimal relative to the input data distribution.
Once such strong connections are created, they do not appear to change during additional training.
These findings suggest that the initial learning transient, under-scrutinized compared to asymptotic behavior, plays a key role in determining the outcome of the training process.
Our findings, combined with recent theoretical results in the literature, also suggest that forgetting (decrease of information in the weights) is critical to achieving invariance and disentanglement in representation learning.
Finally, critical periods are not restricted to biological systems, but can emerge naturally in learning systems, whether biological or artificial, due to fundamental constraints arising from learning dynamics and information processing.
Critical periods are time windows of early post-natal development during which sensory deficits can lead to permanent skill impairment BID12 .
Researchers have documented critical periods affecting a range of species and systems, from visual acuity in kittens BID35 BID33 to song learning in birds BID18 .
Uncorrected eye defects (e.g., strabismus, cataracts) during the critical period for visual development lead to amblyopia in one in fifty adults. The cause of critical periods is ascribed to the biochemical modulation of windows of neuronal plasticity BID10 .
In this paper, however, we show that deep neural networks (DNNs), while completely devoid of such regulations, respond to sensory deficits in ways similar to those observed in humans and animal models.
This surprising result suggests that critical periods may arise from information processing, rather than biochemical, phenomena. We propose using the information in the weights, measured by an efficient approximation of the Fisher Information, to study critical period phenomena in DNNs.
We show that, counterintuitively, the information in the weights does not increase monotonically during training.
Instead, a rapid growth in information ("memorization phase") is followed by a reduction of information ("reorganization" or "forgetting" phase), even as classification performance keeps increasing.
This behavior is consistent across different tasks and network architectures.
Critical periods are centered in the memorization phase.
[Figure caption: Performance is permanently impaired if the deficit is not corrected early enough, regardless of how much additional training is performed. As in animal models, critical periods coincide with the early learning phase during which test accuracy would rapidly increase in the absence of deficits (dashed). (B) For comparison, we report acuity for kittens monocularly deprived since birth and tested at the time of eye-opening (solid), and normal visual acuity development in kittens as a function of age (dashed) BID7 BID23 .]
Artificial neural networks (ANNs) are only loosely inspired by biological systems (Hassabis et al., 2017). Most studies
to date have focused either on the behavior of networks at convergence (Representation Learning) or on the asymptotic properties of the numerical scheme used to get there (Optimization). The role of
the initial transient, especially its effect in biasing the network towards "good" regions of the complex and high-dimensional optimization problem, is rarely addressed. To study this
initial learning phase of ANNs, we replicate experiments performed in animal models and find that the responses to early deficits are remarkably similar, despite the large underlying differences between the two systems. In particular
, we show that the quality of the solution depends only minimally on the final, relatively well-understood, phase of the training process or on its very first epochs; instead, it depends critically on the period prior to initial convergence. In animals, sensory deficits introduced during critical periods induce changes in the architecture of the corresponding areas BID4 BID34 BID9 . To determine
whether a similar phenomenon exists in ANNs, we compute the Fisher Information of the weights of the network as a proxy to measure its "effective connectivity", that is, the density of connections that are effectively used by the network in order to solve the task. Like others
before us BID28 , we observe two distinct phases during the training, first a "learning phase" in which the Fisher Information of the weights increases as the network learns from the data, followed by a "consolidation" or "compression" phase in which the Fisher Information decreases and stabilizes. Sensitivity
to critical-period-inducing deficits is maximal exactly when the Fisher Information peaks. A layer-wise analysis of the network's effective connectivity shows that, in the tasks and deficits we consider, the hierarchy of low-level and high-level features in the training data is a key aspect behind the observed phenomena. In particular
, our experiments suggest that the existence of critical periods in deep neural networks depends on the inability of the network to change its effective connectivity pattern in order to process different information (in response to deficit removal). We call this
phenomenon, which is not mediated by any external factors, a loss of the "Information Plasticity" of the network.
Critical periods have thus far been considered an exclusively biological phenomenon.
At the same time, the analysis of DNNs has focused on asymptotic properties and neglected the initial transient behavior.
To the best of our knowledge, we are the first to show that artificial neural networks exhibit critical period phenomena, and to highlight the critical role of the transient in determining the asymptotic performance of the network.
Inspired by the role of synaptic connectivity in modulating critical periods, we introduce the use of Fisher Information to study this initial phase.
We show that the initial sensitivity to deficits closely follows changes in the FIM, both globally, as the network first rapidly increases and then decreases the amount of stored information, and layer-wise, as the network "reorganizes" its effective connectivity in order to optimally process information. Our work naturally relates to the extensive literature on critical periods in biology.
Despite artificial networks being an extremely reductionist approximation of neuronal networks, they exhibit behaviors that are qualitatively similar to the critical periods observed in human and animal models.
Our information analysis shows that the initial rapid memorization phase is followed by a loss of Information Plasticity which, counterintuitively, further improves the performance.
On the other hand, when combined with the analysis of BID0 , this suggests that a "forgetting" phase may be desirable, or even necessary, in order to learn robust, nuisance-invariant representations. The existence of two distinct phases of training has been observed and discussed by BID28 , although their analysis builds on the (Shannon) information of the activations, rather than the (Fisher) information in the weights.
On a multi-layer perceptron (MLP), BID28 empirically link the two phases to a sudden increase in the gradients' covariance.
It may be tempting to compare these results with our Fisher Information analysis.
However, it must be noted that the FIM is computed using the gradients with respect to the model prediction, not to the ground truth label, leading to important qualitative differences.
In Figure 6 , we show that the covariance and norm of the gradients exhibit no clear trends during training with and without deficits, and, therefore, unlike the FIM, do not correlate with the sensitivity to critical periods.
However, a connection between our FIM analysis and the information in the activations can be established based on the work of BID0 , which shows that the FIM of the weights can be used to bound the information in the activations.
In fact, we may intuitively expect that pruning of connections naturally leads to loss of information in the corresponding activations.
Thus, our analysis corroborates and expands on some of the claims of BID28 , while using an independent framework. Aside from being more closely related to the deficit sensitivity during critical periods, Fisher Information also has a number of technical advantages: its diagonal is simple to estimate, even on modern state-of-the-art architectures and compelling datasets, and it is less sensitive to the choice of estimator of mutual information, avoiding some of the common criticisms of the use of information quantities in the analysis of deep learning models.
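A generic sketch of such a diagonal estimate is shown below (this is a standard Monte-Carlo estimator, not necessarily the authors' exact implementation); note that the labels are sampled from the model's own predictive distribution rather than taken from the ground truth, the distinction stressed above:

```python
import torch
import torch.nn.functional as F

def diag_fisher(model, inputs):
    """Approximate the diagonal of the Fisher Information of the weights by
    averaging per-example squared gradients of the log-likelihood, with labels
    drawn from the model's own output distribution."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x in inputs:                                   # one example at a time
        logits = model(x.unsqueeze(0))
        y = torch.distributions.Categorical(logits=logits).sample()
        loss = F.cross_entropy(logits, y)              # negative log-likelihood
        model.zero_grad()
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / len(inputs) for n, f in fisher.items()}
```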
Finally, the FIM allows us to probe fine changes in the effective connectivity across the layers of the network (FIG5), which are not visible in BID28 . A complete analysis of the activations should account not only for the amount of information (both task- and nuisance-related), but also for its accessibility, e.g., how easily task-related information can be extracted by a linear classifier. Following
a similar idea, BID24 aim to study the layer-wise, or "spatial" (but not temporal) evolution of the simplicity of the representation by performing a principal component analysis (PCA) of a radial basis function (RBF) kernel embedding of each layer representation. They show
that, on a multi-layer perceptron, task-relevant information increasingly concentrates on the first principal components of the representation's embedding, implying that it becomes more easily "accessible" layer after layer, while nuisance information (when it is codified at all) is encoded in the remaining components. In our work
we instead focus on the temporal evolution of the weights. However, it
's important to notice that a network with simpler weights (as measured by the FIM) also requires a simpler smooth representation (as measured, e.g., by the RBF embedding) in order to operate properly, since it needs to be resistant to perturbations of the weights. Thus our analysis
is wholly compatible with the intuitions of BID24 . It would also be
interesting to study the joint spatio-temporal evolution of the network using both frameworks at once. One advantage of focusing on the information of the weights rather than on the activations, or the behavior of the network, is to have a readout of the "effective connectivity" during critical periods, which can be compared to similar readouts in animals. In fact, "behavioral
" readouts upon deficit removal, both in artificial and neuronal networks, can potentially be confounded by deficit-coping changes at different levels of the visual pathways BID4 BID16 . On the other hand,
deficits in deprived animals are mirrored by abnormalities in the circuitry of the visual pathways, which we characterize in DNNs using the FIM to study its "effective connectivity", i.e., the connections that are actually employed by the network to solve the task. Sensitivity to critical
periods and the trace of the Fisher Information peak at the same epochs, in accord with the evidence that skill development and critical periods in neuronal networks are modulated by changes (generally experience-dependent) in synaptic plasticity BID16 BID10 . Our layer-wise analysis
of the Fisher Information (FIG5) also shows that visual deficits reinforce higher layers to the detriment of intermediate layers, leaving low-level layers virtually untouched. If the deficit is removed
after the critical period ends, the network is not able to reverse these effects. Although the two systems
are radically different, a similar response can be found in the visual pathways of animal models: lower levels (e.g., retina, lateral geniculate nucleus) and higher-level visual areas (e.g., V2 and post-V2) show little remodeling upon deprivation, while most changes happen in different layers of V1 BID34 BID9 . An insightful interpretation
of critical periods in animal models was proposed by BID16 : The initial connections of neuronal networks are unstable and easily modified (highly plastic), but as more "samples" are observed, they change and reach a more stable configuration which is difficult to modify. Learning can, however, still
happen within the newly created connectivity pattern. This is largely compatible with
our findings: Sensitivity to critical-period-inducing deficits peaks when connections are remodeled (Figure 4, Left) , and different connectivity profiles are observed in networks trained with and without a deficit ( FIG5 ). Moreover, high-level deficits such
as image flipping and label permutation, which do not require restructuring of the network's connections in order to be corrected, do not exhibit a critical period.
Our goal in this paper is not so much to investigate the human (or animal) brain through artificial networks, as to understand fundamental information processing phenomena, both in their biological or artificial implementations.
It is also not our goal to suggest that, since they both exhibit critical periods, DNNs are necessarily a valid model of neurobiological information processing, although recent work has emphasized this aspect.
We engage in an "Artificial Neuroscience" exercise in part to address a technological need to develop "explainable" artificial intelligence systems whose behavior can be understood and predicted.
While traditionally well-understood mathematical models were used by neuroscientists to study biological phenomena, information processing in modern artificial networks is often just as poorly understood as in biology, so we chose to exploit well-known biological phenomena as probes to study information processing in artificial networks. Conversely, it would also be interesting to explore ways to test whether biological networks prune connections as a consequence of a loss of Information Plasticity, rather than as a cause.
The mechanisms underlying network reconfiguration during learning and development might be an evolutionary outcome obtained under the pressure of fundamental information processing phenomena.
In order to avoid interference between the annealing scheme and the architecture, in these experiments we fix the learning rate to 0.001. The Fully Connected network used for the MNIST experiments has hidden layers of size [2500, 2000, 1500, 1000, 500].
All hidden layers use batch normalization followed by ReLU activations.
We fix the learning rate to 0.005.
Weight decay is not used.
We use data augmentation with random translations up to 4 pixels and random horizontal flipping.
For MNIST, we pad the images with zeros to bring them to size 32 × 32.
|
Sensory deficits in early training phases can lead to irreversible performance loss in both artificial and neuronal networks, suggesting information phenomena as the common cause, and point to the importance of the initial transient and forgetting.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:360
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We propose a method to incrementally learn an embedding space over the domain of network architectures, to enable the careful selection of architectures for evaluation during compressed architecture search.
Given a teacher network, we search for a compressed network architecture by using Bayesian Optimization (BO) with a kernel function defined over our proposed embedding space to select architectures for evaluation.
We demonstrate that our search algorithm can significantly outperform various baseline methods, such as random search and reinforcement learning (Ashok et al., 2018).
The compressed architectures found by our method are also better than the state-of-the-art manually-designed compact architecture ShuffleNet (Zhang et al., 2018).
We also demonstrate that the learned embedding space can be transferred to new settings for architecture search, such as a larger teacher network or a teacher network in a different architecture family, without any training.
In many application domains, it is common practice to make use of well-known deep network architectures (e.g., VGG BID30 , GoogleNet BID33 , ResNet BID8 ) and to adapt them to a new task without optimizing the architecture for that task.
While this process of transfer learning is surprisingly successful, it often results in over-sized networks which have many redundant or unused parameters.
Inefficient network architectures can waste computational resources and over-sized networks can prevent them from being used on embedded systems.
There is a pressing need to develop algorithms that can take large networks with high accuracy as input and compress their size while maintaining similar performance.
In this paper, we focus on the task of compressed architecture search -the automatic discovery of compressed network architectures based on a given large network.One significant bottleneck of compressed architecture search is the need to repeatedly evaluate different compressed network architectures, as each evaluation is extremely costly (e.g., backpropagation to learn the parameters of a single deep network can take several days on a single GPU).
This means that any efficient search algorithm must be judicious when selecting architectures to evaluate.
Learning a good embedding space over the domain of compressed network architectures is important because it can be used to define a distribution on the architecture space that can be used to generate a priority ordering of architectures for evaluation.
To enable the careful selection of architectures for evaluation, we propose a method to incrementally learn an embedding space over the domain of network architectures. In the network compression paradigm, we are given a teacher network and we aim to search for a compressed network architecture: a student network that contains as few parameters as possible while maintaining similar performance to the teacher network.
We address the task of compressed architecture search by using Bayesian Optimization (BO) with a kernel function defined over our proposed embedding space to select architectures for evaluation.
As modern neural architectures can
We address the task of searching for a compressed network architecture by using BO.
Our proposed method can find more efficient architectures than all the baselines on CIFAR-10 and CIFAR-100.
Our key contribution is the proposed method to learn an embedding space over the domain of network architectures.
We also demonstrate that the learned embedding space can be transferred to new settings for architecture search without any training.
Possible future directions include extending our method to the general NAS problem to search for desired architectures from scratch and combining our proposed embedding space with BID9 to identify the Pareto set of architectures that are both small and accurate.
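A rough sketch of the selection loop described above - Bayesian optimization with a Gaussian-process surrogate whose kernel is defined over learned architecture embeddings. The embedding and evaluation routines are assumed placeholders, and the paper's actual kernel and acquisition function may differ:

import numpy as np

def rbf_kernel(E1, E2, sigma=1.0):
    # Squared-exponential kernel over learned architecture embeddings.
    d2 = ((E1[:, None, :] - E2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def gp_posterior(E_obs, y_obs, E_cand, noise=1e-6):
    # Standard GP regression posterior mean/variance at candidate embeddings.
    K = rbf_kernel(E_obs, E_obs) + noise * np.eye(len(E_obs))
    Ks = rbf_kernel(E_cand, E_obs)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y_obs
    var = rbf_kernel(E_cand, E_cand).diagonal() - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks)
    return mu, np.maximum(var, 1e-12)

def select_next(E_obs, y_obs, E_cand, beta=2.0):
    # Upper-confidence-bound acquisition: pick the candidate architecture to evaluate next.
    mu, var = gp_posterior(E_obs, y_obs, E_cand)
    return int(np.argmax(mu + beta * np.sqrt(var)))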
|
We propose a method to incrementally learn an embedding space over the domain of network architectures, to enable the careful selection of architectures for evaluation during compressed architecture search.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:361
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Reinforcement Learning (RL) problem can be solved in two different ways - the Value function-based approach and the policy optimization-based approach - to eventually arrive at an optimal policy for the given environment.
One of the recent breakthroughs in reinforcement learning is the use of deep neural networks as function approximators to approximate the value function or q-function in a reinforcement learning scheme.
This has led to results with agents automatically learning how to play games like alpha-go showing better-than-human performance.
Deep Q-learning networks (DQN) and Deep Deterministic Policy Gradient (DDPG) are two such methods that have shown state-of-the-art results in recent times.
Among the many variants of RL, an important class of problems is that in which the state and action spaces are continuous - autonomous robots, autonomous vehicles, and optimal control are all examples of such problems, which lend themselves naturally to reinforcement-based algorithms and have continuous state and action spaces.
In this paper, we adapt and combine approaches such as DQN and DDPG in novel ways to outperform the earlier results for continuous state and action space problems.
We believe these results are a valuable addition to the fast-growing body of results on Reinforcement Learning, more so for continuous state and action space problems.
Reinforcement learning (RL) is about an agent interacting with the environment, learning an optimal policy by trial and error for sequential decision-making problems in a wide range of fields, such that the agent learns to control a system and maximize a numerical performance measure that expresses a long-term objective (BID6).
To summarize, this paper discusses the state-of-the-art methods in reinforcement learning along with our improvements, which have led to RL algorithms in continuous state and action spaces that outperform the existing ones. The proposed algorithm combines the concept of prioritized action replay with deep deterministic policy gradients.
As has been shown, on a majority of the MuJoCo environments this algorithm vastly outperforms the DDPG algorithm, both in terms of the overall reward achieved and the average reward for any hundred epochs over the thousand epochs for which both were run. Hence, it can be concluded that the proposed algorithm learns much faster than the DDPG algorithm.
Secondly, the fact that the current reward is higher, coupled with the observation that the rate of increase in reward is also higher for the proposed algorithm, shows that it is unlikely for the DDPG algorithm to surpass the results of the proposed algorithm on that majority of environments.
Also, certain kinds of noise further improve PDDPG, helping it attain higher rewards.
One other important conclusion is that different kinds of noise work better for different environments, which was evident in how drastically the results changed based on the parameter noise. The presented algorithm can also be extended and improved further by finding more concepts in value-based methods which can be used in policy-based methods.
The overall improvements in the area of continuous state and action spaces can help make reinforcement learning more applicable in real-world scenarios, as real-world systems provide continuous inputs.
These methods can potentially be extended to safety critical systems, by incorporating the notion of safety during the training of a RL algorithm.
This is currently a big challenge because of the necessary unrestricted exploration process of a typical RL algorithm.
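As a hedged sketch of the prioritized-replay component referred to above (the paper's exact variant and hyperparameters are not given in this excerpt; names such as alpha, beta and eps are assumptions):

import numpy as np

class PrioritizedReplay:
    """Proportional prioritized replay: transitions with larger TD error are sampled more often."""
    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.buffer, self.priorities = [], []

    def add(self, transition, td_error=1.0):
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append((abs(td_error) + self.eps) ** self.alpha)

    def sample(self, batch_size, beta=0.4):
        p = np.asarray(self.priorities)
        p = p / p.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=p)
        # Importance-sampling weights correct the bias introduced by non-uniform sampling.
        w = (len(self.buffer) * p[idx]) ** (-beta)
        w = w / w.max()
        return [self.buffer[i] for i in idx], idx, w

    def update(self, idx, td_errors):
        for i, e in zip(idx, td_errors):
            self.priorities[i] = (abs(e) + self.eps) ** self.alpha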
|
Improving the performance of an RL agent in the continuous action and state space domain by using prioritised experience replay and parameter noise.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:362
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Natural Language Inference (NLI) task requires an agent to determine the logical relationship between a natural language premise and a natural language hypothesis.
We introduce Interactive Inference Network (IIN), a novel class of neural network architectures that is able to achieve high-level understanding of the sentence pair by hierarchically extracting semantic features from interaction space.
We show that an interaction tensor (attention weight) contains semantic information to solve natural language inference, and a denser interaction tensor contains richer semantic information.
One instance of such architecture, the Densely Interactive Inference Network (DIIN), demonstrates state-of-the-art performance on large-scale NLI corpora and a large-scale NLI-like corpus.
It is noteworthy that DIIN achieves a greater than 20% error reduction on the challenging Multi-Genre NLI (MultiNLI) dataset with respect to the strongest published system.
The Natural Language Inference (NLI, also known as recognizing textual entailment, or RTE) task requires one to determine whether the logical relationship between two sentences is among entailment (if the premise is true, then the hypothesis must be true), contradiction (if the premise is true, then the hypothesis must be false) and neutral (neither entailment nor contradiction).
NLI is known as a fundamental and yet challenging task for natural language understanding , not only because it requires one to identify the language pattern, but also to understand certain common sense knowledge.
In TAB0 , three samples from MultiNLI corpus show solving the task requires one to handle the full complexity of lexical and compositional semantics.
Previous work on NLI (or RTE) has extensively explored conventional approaches (BID25; Bos & Markert, 2005; BID39).
Recent progress on NLI has been enabled by the availability of a 570k human-annotated dataset and by advances in representation learning techniques. Among the core representation learning techniques, the attention mechanism has been broadly applied in many NLU tasks since its introduction: machine translation BID15, abstractive summarization BID50, reading comprehension, dialog systems BID41, etc.
As described by BID57 , "An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key".
The attention mechanism is known for aligning representations, focusing on one part of a representation over another, and modeling dependencies regardless of sequence length.
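As a concrete, hedged illustration of the attention definition quoted above, here is the standard scaled dot-product form; the cited papers' exact variants may differ:

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q: (n_q, d), K: (n_k, d), V: (n_k, d_v).
    # The attention weight (compatibility of each query with each key) is the
    # alignment matrix discussed in the text.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights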
Observing attention's powerful capability, we hypothesize that the attention weight can assist the machine in understanding the text. A regular attention weight, the core component of the attention mechanism, encodes the cross-sentence word relationship into an alignment matrix.
However, a multi-head attention weight (Vaswani et al., 2017) can encode such interaction into multiple alignment matrices, which yields a more powerful alignment.
In this work, we push multi-head attention to an extreme by building a word-by-word, dimension-wise alignment tensor which we call the interaction tensor.
The interaction tensor encodes the high-order alignment relationship between the sentence pair.
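A minimal sketch of one natural way to build such a word-by-word, dimension-wise interaction tensor; the element-wise product used here is an assumption consistent with the description, not necessarily the paper's exact operator:

import numpy as np

def interaction_tensor(P, H):
    # P: premise encodings (n, d); H: hypothesis encodings (m, d).
    # I[i, j, k] = P[i, k] * H[j, k] gives a dimension-wise alignment for every word pair.
    return P[:, None, :] * H[None, :, :]   # shape (n, m, d)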
Our experiments demonstrate that by capturing the rich semantic features in the interaction tensor, we are able to solve the natural language inference task well, especially in cases with paraphrases, antonyms and overlapping words. We dub the general framework the Interactive Inference Network (IIN).
To the best of our knowledge, it is the first attempt to solve the natural language inference task in the interaction space.
We further explore one instance of the Interactive Inference Network, the Densely Interactive Inference Network (DIIN), which achieves new state-of-the-art performance on both the SNLI and MultiNLI corpora.
To test the generality of the architecture, we interpret the paraphrase identification task as a natural language inference task, where matching corresponds to entailment and not-matching to neutral.
We test the model on the Quora Question Pairs dataset, which contains over 400k real-world question pairs, and achieve new state-of-the-art performance. We introduce the related work in Section 2, and discuss the general framework of IIN along with a specific instance that enjoys state-of-the-art performance on multiple datasets in Section 3. We describe experiments and analysis in Section 4. Finally, we conclude and discuss future work in Section 5.
We show that the interaction tensor (or attention weight) contains semantic information needed to understand natural language.
We introduce the Interactive Inference Network, a novel class of architectures that allows the model to solve NLI or NLI-like tasks by extracting semantic features from the interaction tensor end-to-end.
One instance of such architecture, Densely Interactive Inference Network (DIIN), achieves state-of-the-art performance on multiple datasets.
By ablating each component in DIIN and changing the dimensionality, we show the effectiveness of each component in DIIN. Though this is an initial exploration of natural language inference in interaction space, the full potential is not yet clear.
We will keep exploring the potential of interaction space.
Incorporating common-sense knowledge from external resources such as knowledge bases to leverage the capacity of the model is another research goal of ours.
|
Show that a multi-channel attention weight contains semantic features that can solve the natural language inference task.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:363
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Determinantal Point Processes (DPPs) provide an elegant and versatile way to sample sets of items that balance the point-wise quality with the set-wise diversity of selected items.
For this reason, they have gained prominence in many machine learning applications that rely on subset selection.
However, sampling from a DPP over a ground set of size N is a costly operation, requiring in general an O(N^3) preprocessing cost and an O(Nk^3) sampling cost for subsets of size k. We approach this problem by introducing DppNets: generative deep models that produce DPP-like samples for arbitrary ground sets.
We develop an inhibitive attention mechanism based on transformer networks that captures a notion of dissimilarity between feature vectors.
We show theoretically that such an approximation is sensible as it maintains the guarantees of inhibition or dissimilarity that makes DPP so powerful and unique.
Empirically, we demonstrate that samples from our model receive high likelihood under the more expensive DPP alternative.
Selecting a representative sample of data from a large pool of available candidates is an essential step of a large class of machine learning problems: noteworthy examples include automatic summarization, matrix approximation, and minibatch selection.
Such problems require sampling schemes that calibrate the tradeoff between the point-wise quality - e.g. the relevance of a sentence to a document summary - of selected elements and the set-wise diversity of the sampled set as a whole. Determinantal Point Processes (DPPs) are probabilistic models over subsets of a ground set that elegantly model the tradeoff between these often competing notions of quality and diversity.
Given a ground set of size N, DPPs allow for O(N^3) sampling over all 2^N possible subsets of elements, assigning to any subset S of a ground set Y of elements the probability P_L(S) = det(L_S) / det(L + I), where L ∈ R^(N×N) is the DPP kernel and L_S = [L_ij]_{i,j∈S} denotes the principal submatrix of L indexed by items in S. Intuitively, DPPs measure the volume spanned by the feature embedding of the items in feature space (Figure 1).
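A minimal numerical sketch of this subset probability, using the standard L-ensemble normalization det(L + I) (assumed here, since the displayed equation did not survive extraction):

import numpy as np

def dpp_log_prob(L, S):
    # log P_L(S) = log det(L_S) - log det(L + I), with L_S the principal submatrix indexed by S.
    N = L.shape[0]
    if len(S) > 0:
        _, logdet_S = np.linalg.slogdet(L[np.ix_(S, S)])
    else:
        logdet_S = 0.0  # determinant of the empty matrix is 1
    _, logdet_Z = np.linalg.slogdet(L + np.eye(N))
    return logdet_S - logdet_Z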
First introduced by BID31 to model the distribution of possible states of fermions obeying the Pauli exclusion principle, the properties of DPPs have since been studied in depth (see, e.g., BID19; BID6).
As DPPs capture repulsive forces between similar elements, they arise in many natural processes, such as the distribution of non-intersecting random walks BID22, spectra of random matrix ensembles BID37 BID13, and zero-crossings of polynomials with Gaussian coefficients BID20.
More recently, DPPs have become a prominent tool in machine learning due to their elegance and tractability: recent applications include video recommendation BID10, minibatch selection BID46, and kernel approximation BID28 BID35. However, the O(N^3) sampling cost makes the practical application of DPPs intractable for large datasets, requiring additional work such as subsampling from Y, structured kernels (Gartrell et al., 2017; BID34), or approximate sampling methods BID2 BID27 BID0.
Figure 1: Geometric intuition for DPPs: let φ_i, φ_j be two feature vectors of Φ such that the DPP kernel verifies L = ΦΦ^T; then P_L({i, j}) ∝ Vol(φ_i, φ_j). Increasing the norm of a vector (quality) or increasing the angle between the vectors (diversity) increases the volume spanned by the vectors (BID25, Section 2.2.1).
Nonetheless
, even such methods require significant pre-processing time, and scale poorly with the size of the dataset. Furthermore
, when dealing with ground sets with variable components, pre-processing costs cannot be amortized, significantly impeding the application of DPPs in practice. These setbacks motivate us to investigate more scalable models that generate high-quality, diverse samples from datasets, yielding highly scalable methods with the flexibility to adapt to constantly changing datasets. Specifically
, we use generative deep models to approximate the DPP distribution over a ground set of items with both fixed and variable feature representations. We show that
a simple, carefully constructed neural network, DPPNET, can generate DPP-like samples with very little overhead, while maintaining fundamental theoretical properties of DPP measures. Furthermore,
we show that DPPNETs can be trivially employed to sample from a conditional DPP (i.e. sampling S such that A ⊆ S is predefined) and for greedy mode approximation.
We introduced DPPNETs, generative networks trained on DPPs over static and varying ground sets which enable fast and modular sampling in a wide variety of scenarios.
We showed experimentally on several datasets and standard DPP applications that DPPNETs obtain competitive performance as evaluated in terms of NLLs, while being amenable to the extensive recent advances in speeding up computation for neural network architectures. Although we trained our models on DPPs with exponentiated quadratic and linear kernels, we can train on any kernel type built from a feature representation of the dataset.
This is not the case for dual DPP exact sampling, which requires that the DPP kernel be L = ΦΦ^T for faster sampling. DPPNETs are not exchangeable: that is, two sequences i_1, ..., i_k and σ(i_1), ..., σ(i_k), where σ is a permutation of [k], which represent the same set of items, will not in general have the same probability under a DPPNET.
Exchangeability can be enforced by leveraging previous work BID45; however, non-exchangeability can be an asset when sampling a ranking of items. Our models are trained to take as input a fixed-size subset representation; we aim to investigate the ability to take a variable-length encoding as input in future work.
The scaling of the DPPNET's complexity with the ground set size also remains an open question.
However, standard tricks to enforce fixed-size ground sets such as sub-sampling from the dataset may be applied to DPPNETs.
Similarly, if further speedups are necessary, sub-sampling from the ground set - a standard approach for DPP sampling over very large set sizes - can be combined with DPPNET sampling. In light of our results on dataset sampling, the question of whether encoders can be trained to produce encodings conducive to dataset summarization via DPPNETs seems of particular interest.
Assuming knowledge of the (encoding-independent) relative diversity of a large quantity of subsets, an end-to-end training of the encoder and the DPPNET simultaneously may yield interesting results. Finally, although Corollary 1.1 shows the log-submodularity of the DPP can be transferred to a generative model, understanding which additional properties of training distributions may be conserved through careful training remains an open question which we believe to be of high significance to the machine learning community in general.
A MAINTAINING LOG-SUBMODULARITY IN THE GENERATIVE MODEL
THEOREM 2. Let p be a strictly submodular distribution over subsets of a ground set Y, and q be a distribution over the same space such that DISPLAYFORM0. Then q is also submodular.
Proof. In all the following, we assume that S, T are subsets of the ground set Y such that S ≠ T and S, T ∉ {∅, Y} (the inequalities being immediate in these corner cases).
|
We approximate Determinantal Point Processes with neural nets; we justify our model theoretically and empirically.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:364
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
This paper introduces a network architecture to solve the structure-from-motion (SfM) problem via feature-metric bundle adjustment (BA), which explicitly enforces multi-view geometry constraints in the form of feature-metric error.
The whole pipeline is differentiable, so that the network can learn suitable features that make the BA problem more tractable.
Furthermore, this work introduces a novel depth parameterization to recover dense per-pixel depth.
The network first generates several basis depth maps according to the input image, and optimizes the final depth as a linear combination of these basis depth maps via feature-metric BA.
The basis depth maps generator is also learned via end-to-end training.
The whole system nicely combines domain knowledge (i.e. hard-coded multi-view geometry constraints) and deep learning (i.e. feature learning and basis depth maps learning) to address the challenging dense SfM problem.
Experiments on large scale real data prove the success of the proposed method.
The Structure-from-Motion (SfM) problem has been extensively studied in the past few decades.
Almost all conventional SfM algorithms BID46 BID39 BID16 BID13 jointly optimize scene structures and camera motion via the Bundle-Adjustment (BA) algorithm BID43 BID1 , which minimizes the geometric BID46 BID39 or photometric BID17 BID13 error through the Levenberg-Marquardt (LM) algorithm BID35 .
Some recent works BID44 attempt to solve SfM using deep learning techniques, but most of them do not enforce the geometric constraints between 3D structures and camera motion in their networks.
For example, in the recent work DeMoN BID44 , the scene depths and the camera motion are estimated by two individual sub-network branches.
This paper formulates BA as a differentiable layer, the BA-Layer, to bridge the gap between classic methods and recent deep learning based approaches.
To this end, we learn a feed-forward multilayer perceptron (MLP) to predict the damping factor in the LM algorithm, which makes all involved computation differentiable.
Furthermore, unlike conventional BA that minimizes geometric or photometric error, our BA-layer minimizes the distance between aligned CNN feature maps.
Our novel feature-metric BA takes CNN features of multiple images as inputs and optimizes for the scene structures and camera motion.
This feature-metric BA is desirable, because it has been observed by BID17 that the geometric BA does not exploit all image information, while the photometric BA is sensitive to moving objects, exposure or white balance changes, etc.
Most importantly, our BA-Layer can back-propagate loss from scene structures and camera motion to learn appropriate features that are most suitable for structure-from-motion and bundle adjustment.
In this way, our network hard-codes the multi-view geometry constraints in the BA-Layer and learns suitable feature representations from training data. We strive to estimate a dense per-pixel depth, because dense depth is critical for many tasks such as object detection and robot navigation.
A major challenge in solving dense per-pixel depth is to find a compact parameterization.
Direct per-pixel depth is computationally expensive, which makes the network training intractable.
So we train a network to generate a set of basis depth maps for an arbitrary input image and represent the resulting depth map as a linear combination of these basis depth maps.
2 RELATED WORK
Monocular Depth Estimation Networks. Estimating depth from a monocular image is an ill-posed problem because an infinite number of possible scenes may have produced the same image.
Before the rise of deep-learning-based methods, some works predict depth from a single image based on MRF BID37 BID36, semantic segmentation BID29, or manually designed features BID27.
BID15 propose a multi-scale approach for depth prediction with two CNNs, where a coarse-scale network first predicts the scene depth at the global level and then a fine-scale network will refine the local regions.
This approach was extended in BID14 to handle semantic segmentation and surface normal estimation as well.
Recently, BID30 propose to use ResNet BID24 based structure to predict depth, and BID47 construct multi-scale CRFs for depth prediction.
In comparison, we exploit a monocular image depth estimation network for depth parameterization, which only produces a set of basis depth maps; the final result will be further improved through optimization.
Structure-from-Motion Networks. Recently, some works exploit CNNs to resolve the SfM problem.
BID22 solve the camera motion by a network from a pair of images with known depth.
employ two CNNs for depth and camera motion estimation respectively, where both CNNs are trained jointly by minimizing the photometric loss in an unsupervised manner.
implement the direct method BID40 as a differentiable component to compute camera motion after scene depth is estimated by the method in .
In BID44, the scene depth and the camera motion are predicted from optical flow features, which helps the model generalize better to unseen data.
However, the scene depth and the camera motion are solved by two separate network branches, and the multi-view geometry constraints between depth and motion are not enforced.
Recently, propose to solve nonlinear least squares in two-view SfM using an LSTM-RNN BID26 as the optimizer. Our method belongs to this category.
Unlike all previous works, we propose the BA-Layer to simultaneously predict the scene depth and the camera motion from CNN features, which explicitly enforces multi-view geometry constraints.
The hard-coded multi-view geometry constraints enable our method to reconstruct more than two images, while most deep learning methods can only handle two images.
Furthermore, we propose to minimize a feature-metric error instead of the photometric error in to enhance robustness.
This paper presents the BA-Net, a network that explicitly enforces multi-view geometry constraints in terms of feature-metric error.
It optimizes scene depths and camera motion jointly via feature-metric bundle adjustment.
The whole pipeline is differentiable and thus end-to-end trainable, such that the features are learned from data to facilitate structure-from-motion.
The dense depth is parameterized as a linear combination of several basis depth maps generated from the network.
Our BA-Net nicely combines domain knowledge (hard-coded multi-view geometry constraint) with deep learning (learned feature representation and basis depth maps generator).
It outperforms conventional BA and recent deep learning based methods.
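As a hedged illustration of two ideas summarized in this conclusion - the depth map as a linear combination of basis depth maps, and a damped Levenberg-Marquardt-style update whose damping factor the paper predicts with an MLP - here is a minimal numerical sketch; the residual and Jacobian are placeholders, not the paper's feature-metric error:

import numpy as np

def compose_depth(basis_depths, weights):
    # Final depth map is a linear combination of network-generated basis depth maps.
    # basis_depths: (K, H, W); weights: (K,)
    return np.tensordot(weights, basis_depths, axes=1)

def lm_step(J, r, lam):
    # One Levenberg-Marquardt update given Jacobian J and residual r.
    # In a BA-style setting, r would be the error being minimized and lam the damping factor.
    JtJ = J.T @ J
    A = JtJ + lam * np.diag(np.diag(JtJ))
    return -np.linalg.solve(A, J.T @ r)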
|
This paper introduces a network architecture to solve the structure-from-motion (SfM) problem via feature bundle adjustment (BA)
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:365
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Temporal Difference Learning with function approximation is known to be unstable.
Previous work like \citet{sutton2009fast} and \citet{sutton2009convergent} has presented alternative objectives that are stable to minimize.
However, in practice, TD-learning with neural networks requires various tricks like using a target network that updates slowly \citep{mnih2015human}.
In this work we propose a constraint on the TD update that minimizes change to the target values.
This constraint can be applied to the gradients of any TD objective, and can be easily applied to nonlinear function approximation.
We validate this update by applying our technique to deep Q-learning, and training without a target network.
We also show that adding this constraint on Baird's counterexample keeps Q-learning from diverging.
Temporal Difference learning is one of the most important paradigms in Reinforcement Learning (Sutton & Barto) .
Techniques based on nonlinear function approximators and stochastic gradient descent such as deep networks have led to significant breakthroughs in the class of problems that these methods can be applied to BID9 BID13 BID12 .
However, the most popular methods, such as TD(λ), Q-learning and Sarsa, are not true gradient descent techniques BID2 and do not converge on some simple examples BID0 .
BID0 and BID1 propose residual gradients as a way to overcome this issue.
Residual methods, also called backwards bootstrapping, work by splitting the TD error over both the current state and the next state.
These methods are substantially slower to converge, however, and BID16 show that the fixed point that they converge to is not the desired fixed point of TD-learning methods.
BID16 propose an alternative objective function formulated by projecting the TD target onto the basis of the linear function approximator, and prove convergence to the fixed point of this projected Bellman error, which is the ideal fixed point for TD methods.
BID5 extend this technique to nonlinear function approximators by projecting instead on the tangent space of the function at that point.
Subsequently, BID11 has combined these techniques of residual gradient and projected Bellman error by proposing an oblique projection, and BID8 has shown that the projected Bellman objective is a saddle point formulation which allows a finite sample analysis. However, when using deep networks for approximating the value function, simpler techniques like Q-learning and Sarsa are still used in practice with stabilizing techniques like a target network that is updated more slowly than the actual parameters BID10. In
this work, we propose a constraint on the update to the parameters that minimizes the change to target values, freezing the target that we are moving our current predictions towards. Subject
to this constraint, the update minimizes the TD-error as much as possible. We show
that this constraint can be easily added to existing techniques, and works with all the techniques mentioned above. We validate our method by showing convergence on Baird's counterexample and a gridworld domain. On the
gridworld domain we parametrize the value function using a multi-layer perceptron, and show that we do not need a target network.
In this paper we introduce a constraint on the updates to the parameters for TD learning with function approximation.
This constraint forces the targets in the Bellman equation to not move when the update is applied to the parameters.
We enforce this constraint by projecting the gradient of the TD error with respect to the parameters for state s_t onto the space orthogonal to the gradient with respect to the parameters for state s_{t+1}. We
show in our experiments that this added constraint stops parameters in Baird's counterexample from exploding when we use TD-learning. But
since we do not allow changes to target parameters, this also keeps Residual Gradients from converging to the true values of the Markov Process. On a Gridworld domain we demonstrate that we can perform TD-learning using a 2-layer neural network, without the need for a target network that updates more slowly. We
compare the solution obtained with DQN and show that it is closer to the solution obtained by tabular policy evaluation. Finally
, we also show that constrained DQN can learn faster and with less variance on the classical Cartpole domain. For future work, we hope to scale this approach to larger problems such as the Atari domain BID4. We would
also like to prove convergence of TD-learning with this added constraint.
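A minimal sketch of the constraint described in this conclusion: remove from the TD-error gradient at s_t its component along the gradient of the value estimate at s_{t+1}, so that the update (to first order) leaves the target value unchanged. Variable names here are assumptions:

import numpy as np

def constrained_td_gradient(g_t, g_next, eps=1e-12):
    # g_t: gradient of the TD error w.r.t. parameters at state s_t.
    # g_next: gradient of the value estimate w.r.t. parameters at state s_{t+1}.
    # Project g_t onto the subspace orthogonal to g_next.
    denom = float(g_next @ g_next) + eps
    return g_t - (float(g_t @ g_next) / denom) * g_next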
|
We show that adding a constraint to TD updates stabilizes learning and allows Deep Q-learning without a target network
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:366
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We propose DuoRC, a novel dataset for Reading Comprehension (RC) that motivates several new challenges for neural approaches in language understanding beyond those offered by existing RC datasets.
DuoRC contains 186,089 unique question-answer pairs created from a collection of 7680 pairs of movie plots where each pair in the collection reflects two versions of the same movie - one from Wikipedia and the other from IMDb - written by two different authors.
We asked crowdsourced workers to create questions from one version of the plot and a different set of workers to extract or synthesize corresponding answers from the other version.
This unique characteristic of DuoRC where questions and answers are created from different versions of a document narrating the same underlying story, ensures by design, that there is very little lexical overlap between the questions created from one version and the segments containing the answer in the other version.
Further, since the two versions have different level of plot detail, narration style, vocabulary, etc., answering questions from the second version requires deeper language understanding and incorporating background knowledge not available in the given text.
Additionally, the narrative style of passages arising from movie plots (as opposed to typical descriptive passages in existing datasets) exhibits the need to perform complex reasoning over events across multiple sentences.
Indeed, we observe that state-of-the-art neural RC models, which have achieved near-human performance on the SQuAD dataset, exhibit very poor performance on DuoRC even when coupled with traditional NLP techniques to address the challenges it presents (F1 score of 37.42% on DuoRC vs. 86% on the SQuAD dataset).
This opens up several interesting research avenues wherein DuoRC could complement other Reading Comprehension style datasets to explore novel neural approaches for studying language understanding.
Natural Language Understanding is widely accepted to be one of the key capabilities required for AI systems.
Scientific progress on this endeavor is measured through multiple tasks such as machine translation, reading comprehension, question-answering, and others, each of which requires the machine to demonstrate the ability to "comprehend" the given textual input (apart from other aspects) and achieve their task-specific goals.
In particular, Reading Comprehension (RC) systems are required to "understand" a given text passage as input and then answer questions based on it.
It is therefore critical, that the dataset benchmarks established for the RC task keep progressing in complexity to reflect the challenges that arise in true language understanding, thereby enabling the development of models and techniques to solve these challenges.For RC in particular, there has been significant progress over the recent years with several benchmark datasets, the most popular of which are the SQuAD dataset BID11 , TriviaQA BID4 , MS MARCO BID8 , MovieQA BID16 and cloze-style datasets BID6 BID9 BID2 .
However, these benchmarks, owing to both the nature of the passages and the question-answer pairs to evaluate the RC task, have 2 primary limitations in studying language understanding:
(i) Other than MovieQA, which is a small dataset of 15K QA pairs, all other large-scale RC datasets deal only with factual descriptive passages and not narratives (involving events with causality linkages that require reasoning and background knowledge) which is the case with a lot of real-world content such as story books, movies, news reports, etc.
(ii) their questions possess a large lexical overlap with segments of the passage, or have a high noise level in Q/A pairs themselves.
As demonstrated by recent work, this makes it easy for even simple keyword matching algorithms to achieve high accuracy BID19 .
In fact, these models have been shown to perform poorly in the presence of adversarially inserted sentences which have a high word overlap with the question but do not contain the answer BID3 .
While this problem does not exist in TriviaQA it is admittedly noisy because of the use of distant supervision.
Similarly, for cloze-style datasets, due to the automatic question generation process, it is very easy for current models to reach near human performance BID1 .
This therefore limits the complexity in language understanding that a machine is required to demonstrate to do well on the RC task. Motivated by these shortcomings and to push the state-of-the-art in language understanding in RC, in this paper we propose DuoRC, which specifically presents the following challenges beyond the existing datasets:
1. DuoRC is especially designed to contain a large number of questions with low lexical overlap between questions and their corresponding passages.
2. It requires the use of background and common-sense knowledge to arrive at the answer and go beyond the content of the passage itself.
3. It contains narrative passages from movie plots that require complex reasoning across multiple sentences to infer the answer.
4. Several of the questions in DuoRC, while seeming relevant, cannot actually be answered from the given passage, thereby requiring the machine to detect the unanswerability of questions.
In order to capture these four challenges, DuoRC contains QA pairs created from pairs of documents describing movie plots, which were gathered as follows.
Each document in a pair is a different version of the same movie plot written by different authors; one version of the plot is taken from the Wikipedia page of the movie whereas the other from its IMDb page (see FIG0 for portions of an example pair of plots from the movie "Twelve Monkeys").
We first showed crowd workers on Amazon Mechanical Turk (AMT) the first version of the plot and asked them to create QA pairs from it.
We then showed the second version of the plot along with the questions created from the first version to a different set of workers on AMT and asked them to provide answers by reading the second version only.
Since the two versions contain different levels of plot detail, narration style, vocabulary, etc., answering questions from the second version exhibits all of the four challenges mentioned above. We now make several interesting observations from the example in FIG0.
For 4 out of the 8 questions (Q1, Q2, Q4, and Q7), though the answers extracted from the two plots are exactly the same, the analysis required to arrive at this answer is very different in the two cases.
In particular, for Q1 even though there is no explicit mention of the prisoner living in a subterranean shelter and hence no lexical overlap with the question, the workers were still able to infer that the answer is Philadelphia because that is the city to which James Cole travels to for his mission.
Another interesting characteristic of this dataset is that for a few questions (Q6, Q8) alternative but valid answers are obtained from the second plot.
Further, note the kind of complex reasoning required for answering Q8 where the machine needs to resolve coreferences over multiple sentences (that man refers to Dr. Peters) and use common sense knowledge that if an item clears an airport screening, then a person can likely board the plane with it.
To re-emphasize, these examples exhibit the need for machines to demonstrate new capabilities in RC such as:
(i) employing a knowledge graph (e.g. to know that Philadelphia is a city in Q1),
(ii) common-sense knowledge (e.g., clearing airport security implies boarding)
(iii) paraphrase/semantic understanding (e.g. revolver is a type of handgun in Q7)
(iv) multiple-sentence inferencing across events in the passage including coreference resolution of named entities and nouns, and
(v) educated guesswork when the question is not directly answerable but there are subtle hints in the passage (as in Q1).
Finally, for quite a few questions, there wasn't sufficient information in the second plot to obtain their answers.
In such cases, the workers marked the question as "unanswerable".
This brings out a very important challenge for machines to exhibit (i.e., detecting the unanswerability of questions), because a practical system should be able to know when it is not possible for it to answer a particular question given the data available to it, and in such cases, possibly delegate the task to a human instead. Current RC systems built using existing datasets are far from possessing these capabilities to solve the above challenges.
In Section 4, we seek to establish solid baselines for DuoRC employing state-of-the-art RC models coupled with a collection of standard NLP techniques to address few of the above challenges.
Proposing novel neural models that solve all of the challenges in DuoRC is out of the scope of this paper.
Our experiments demonstrate that when the existing state-of-the-art RC systems are trained and evaluated on DuoRC they perform poorly leaving a lot of scope for improvement and open new avenues for research in RC.
Do note that this dataset is not a substitute for existing RC datasets but can be coupled with them to collectively address a large set of challenges in language understanding with RC (the more the merrier).
The results of our experiments are summarized in TAB3 which we discuss in the following sub-sections.
• SpanModel v/s GenModel: Comparing the first two rows (SelfRC) and the last two rows (ParaphraseRC) of TAB4 we see that the SpanModel clearly outperforms the GenModel.
This is not very surprising for two reasons.
First, around 70% (and 50%) of the answers in SelfRC (and ParaphraseRC) respectively, match an exact span in the document so the span based model still has scope to do well on these answers.
On the other hand, even if the first stage of the GenModel predicts the span correctly, the second stage could make an error in generating the correct answer from it because generation is a harder problem.
For the second stage, it is expected that the GenModel should learn to copy the predicted span to produce the answer output (as is required in most cases) and only occasionally where necessary, generate an answer.
However, surprisingly the GenModel fails to even do this.
Manual inspection of the generated answers shows that in many cases the generator ends up generating either more or fewer words compared to the true answer.
This demonstrates that there is clearly scope for the GenModel to perform better.
• SelfRC v/s ParaphraseRC: Comparing the SelfRC and ParaphraseRC numbers in TAB4, we observe that the performance of the models clearly drops for the latter task, thus validating our hypothesis that ParaphraseRC is indeed a much harder task.
• Effect of NLP pre-processing: As mentioned in Section 4, for ParaphraseRC, we first perform a few pre-processing steps to identify relevant sentences in the longer document. In order to evaluate whether the pre-processing method is effective, we compute: (i) the percentage of the document that gets pruned, and (ii) whether the true answer is present in the pruned document (i.e., the average recall of the answer).
We can compute the recall only for the span-based subset of the data since for the remaining data we do not know the true span.
In TAB3 , we report these two quantities for the span-based subset using different pruning strategies.
Finally, comparing the SpanModel with and without Paraphrasing in TAB4 for ParaphraseRC, we observe that the pre-processing step indeed improves the performance of the Span Detection Model.
• Effect of oracle pre-processing: As noted in Section 3, the ParaphraseRC plot is almost double the length of the SelfRC plot, which, while adding to the complexity of the former task, is clearly not the primary reason for the model's poor performance on it. To empirically validate this, we perform an Oracle pre-processing step where, starting with the knowledge of the span containing the true answer, we extract a subplot around it such that the span is randomly located within that subplot and the average length of the subplot is similar to the SelfRC plots. The SpanModel with this Oracle-preprocessed data exhibits a minor improvement in performance over that with rule-based preprocessing (1.6% in Accuracy and 4.3% in F1 over the Span Test), still failing to bridge the wide performance gap between the SelfRC and ParaphraseRC tasks.
• Cross Testing: We wanted to examine whether a model trained on SelfRC performs well on ParaphraseRC and vice-versa. We also
wanted to evaluate if merging the two datasets improves the performance of the model. For this
we experimented with various combinations of train and test data. The results
of these experiments for the SpanModel are summarized in TAB5 . We make two
main observations. First, training
on one dataset and evaluating on the other results in a drop in the performance. Merging the training
data from the two datasets exhibits better performance on the individual test sets. Based on our experiments and empirical observations, we believe that the DuoRC dataset indeed holds a lot of potential for advancing the horizon of complex language understanding by exposing newer challenges in this area.
In this paper we introduced DuoRC, a large-scale RC dataset of 186K human-generated question-answer pairs created from 7680 pairs of parallel movie plots, each pair taken from Wikipedia and IMDb.
We then showed that this dataset, by design, ensures very little or no lexical overlap between the questions created from one version and the segments containing the answer in the other version.
With this, we hope to introduce the RC community to new research challenges on question-answering requiring external knowledge and common-sense driven reasoning, deeper language understanding and multiple-sentence inferencing.
Through our experiments, we show how the state-of-the-art RC models, which have achieved near human performance on the SQuAD dataset, perform poorly on our dataset, thus emphasizing the need to explore further avenues for research.
|
We propose DuoRC, a novel dataset for Reading Comprehension (RC) containing 186,089 human-generated QA pairs created from a collection of 7680 pairs of parallel movie plots and introduce a RC task of reading one version of the plot and answering questions created from the other version; thus by design, requiring complex reasoning and deeper language understanding to overcome the poor lexical overlap between the plot and the question.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:367
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We consider the problem of weakly supervised structured prediction (SP) with reinforcement learning (RL) – for example, given a database table and a question, perform a sequence of computation actions on the table, which generates a response and receives a binary success-failure reward.
This line of research has been successful by leveraging RL to directly optimize the desired metrics of the SP tasks – for example, the accuracy in question answering or the BLEU score in machine translation.
However, different from common RL settings, the environment dynamics in SP are deterministic, which has not been fully utilized by the model-free RL methods that are usually applied.
Since SP models usually have full access to the environment dynamics, we propose to apply model-based RL methods, which rely on planning as a primary model component.
We demonstrate the effectiveness of planning-based SP with a Neural Program Planner (NPP), which, given a set of candidate programs from a pretrained search policy, decides which program is the most promising considering all the information generated from executing these programs.
We evaluate NPP on weakly supervised program synthesis from natural language (semantic parsing) by stacked learning of a planning module based on pretrained search policies.
On the WIKITABLEQUESTIONS benchmark, NPP achieves a new state-of-the-art of 47.2% accuracy.
Numerous results from natural language processing tasks have shown that Structured Prediction (SP) can be cast into a reinforcement learning (RL) framework, and known RL techniques can give formal performance bounds on SP tasks BID3 BID13 BID0 .
RL also directly optimizes task metrics, such as the accuracy in question answering or the BLEU score in machine translation, and avoids the exposure bias problem compared to the maximum likelihood training that is commonly used in SP BID13 BID12. However
, previous works on applying RL to SP problems often use model-free RL algorithms (e.g., REINFORCE or actor-critic) and fail to leverage the characteristics of SP, which are different than typical RL tasks, e.g., playing video games BID9 or the game of Go BID15 . In most
SP problems conditioned on the input x, the environment dynamics, except for the reward signal, is known, deterministic, reversible, and therefore can be searched. This means
that there is a perfect model (see footnote 1) of the environment, which can be used to apply model-based RL methods that utilize planning (see footnote 2) as a primary model component. (Footnote 1: a model of the environment usually means anything that an agent can use to predict how the environment will respond to its actions BID17. Footnote 2: planning usually refers to any computational process that takes a model as input and produces or improves a policy for interacting with the modeled environment BID17.) Take semantic parsing BID1 BID11 as an example: semantic parsers trained by RL, such as the Neural Semantic Machine (NSM) BID8, typically rely on beam search for inference - the program with the highest probability in the beam is used for execution and for generating the answer. However, the policy, which is used for beam search, may not be able to assign the highest probability to the correct program. This limitation is due to the policy predicting locally normalized probabilities for each possible action based on the partially generated program, and the probability of a program is a product of these local probabilities. For example, when applied to the WEBQUESTIONSSP task, NSM made mistakes with two common patterns: (1) the program would ignore important information in the context; (2) the generated program does not execute to a reasonable output, but still receives high probability (spurious programs). Resolving this
issue requires using the information of the full program and its execution output to further evaluate its quality based on the context, which can be seen as planning. This can be observed
in Figure 4 where the model is asked a question "Which programming is played the most?". The full context of
the input table (shown in TAB0 ) contains programming for a television station. The top program generated
by a search policy produces the wrong answer, filtering by a column not relevant to the question. If provided the correct contextual
features, and if allowed to evaluate the full program forward and backward through time, we observe that a planning model would be able to better evaluate which program would produce the correct answer. To handle errors related to context, we propose to train a value function to compute the utility of each token in a program. This utility is evaluated by considering
the program and token probability as well as the attention mask generated by the sequence-to-sequence (seq2seq) model for the underlying policy. We also introduce beam and question context
with a binary feature representing overlap between question/program and program/program, such as how many programs share a token at a given timestep. In the experiments, we found that applying a planner that uses a learned value function to re-rank the candidates in the beam can significantly and consistently improve the accuracy. On the WIKITABLEQUESTIONS benchmark, we improve
the state-of-the-art by 0.9%, achieving an accuracy of 47.2%.
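A rough sketch of the planning step described above: re-rank the beam candidates with a learned value function and execute the highest-scoring program. The value_fn and features interfaces below are hypothetical placeholders, not the paper's API:

def rerank_beam(candidates, value_fn, features):
    # candidates: programs produced by the pretrained search policy (beam search).
    # features(program) would bundle token/program probabilities, attention masks,
    # and question/beam overlap features; value_fn maps them to a scalar utility.
    scored = [(value_fn(features(p)), p) for p in candidates]
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[0][1]   # most promising program to execute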
Reinforcement learning applied to structured prediction suffers from limited use of the world model as well as not being able to consider future and past program context when generating a sequence.
To overcome these limitations we proposed Neural Program Planner (NPP) which is a planning step after candidate program generation.
We show that an additional planning model can better evaluate overall structure value.
When applied to a difficult SP task, NPP improves the state of the art by 0.9% and allows intuitive analysis of its scoring model per program token.
A MORE NPP SCORING DETAILS
|
A model-based planning component improves RL-based semantic parsing on WikiTableQuestions.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:368
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep learning algorithms achieve high classification accuracy at the expense of significant computation cost.
To address this cost, a number of quantization schemes have been proposed - but most of these techniques focused on quantizing weights, which are relatively small in size compared to activations.
This paper proposes a novel quantization scheme for activations during training that enables neural networks to work well with ultra-low-precision weights and activations without any significant accuracy degradation.
This technique, PArameterized Clipping acTivation (PACT), uses an activation clipping parameter α that is optimized during training to find the right quantization scale.
PACT allows quantizing activations to arbitrary bit precisions, while achieving much better accuracy relative to published state-of-the-art quantization schemes.
We show, for the first time, that both weights and activations can be quantized to 4-bits of precision while still achieving accuracy comparable to full precision networks across a range of popular models and datasets.
We also show that exploiting these reduced-precision computational units in hardware can enable a super-linear improvement in inferencing performance due to a significant reduction in the area of accelerator compute engines coupled with the ability to retain the quantized model and activation data in on-chip memories.
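A hedged PyTorch sketch of the clipping-plus-quantization step described in this abstract; the straight-through estimator for rounding and the exact handling of α's gradient are assumptions not spelled out in the excerpt:

import torch

def pact_quantize(x, alpha, k_bits=4):
    # Clip activations to [0, alpha] (alpha is a learnable scalar), then quantize
    # uniformly to k_bits levels; alpha receives gradients through the clipping term.
    y = torch.clamp(x, min=0.0) - torch.clamp(x - alpha, min=0.0)  # equals clip(x, 0, alpha)
    scale = (2 ** k_bits - 1) / alpha
    y_q = torch.round(y * scale) / scale
    # Straight-through estimator: forward uses y_q, backward uses the gradient of y.
    return y + (y_q - y).detach()

alpha = torch.nn.Parameter(torch.tensor(10.0))
x = torch.randn(4, 8) * 5
out = pact_quantize(x, alpha, k_bits=4)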
Deep Convolutional Neural Networks (CNNs) have achieved remarkable accuracy for tasks in a wide range of application domains including image processing (He et al. (2016b) ), machine translation (Gehring et al. (2017) ), and speech recognition (Zhang et al. (2017) ).
These state-of-the-art CNNs use very deep models, consuming 100s of ExaOps of computation during training and GBs of storage for model and data.
This poses a tremendous challenge to widespread deployment, especially in resource constrained edge environments -leading to a plethora of explorations in compressed models that minimize memory footprint and computation while preserving model accuracy as much as possible.Recently, a whole host of different techniques have been proposed to alleviate these computational costs.
Among them, reducing the bit-precision of key CNN data structures, namely weights and activations, has gained attention due to its potential to significantly reduce both storage requirements and computational complexity.
In particular, several weight quantization techniques (Li & Liu (2016) and Zhu et al. (2017) ) showed significant reduction in the bit-precision of CNN weights with limited accuracy degradation.
However, prior work (Hubara et al. (2016b) ; Zhou et al. (2016) ) has shown that a straightforward extension of weight quantization schemes to activations incurs significant accuracy degradation in large-scale image classification tasks such as ImageNet (Russakovsky et al. (2015) ).
Recently, activation quantization schemes based on greedy layer-wise optimization were proposed (Park et al. (2017) ; Graham (2017) ; Cai et al. (2017) ), but achieve limited accuracy improvement.In this paper, we propose a novel activation quantization technique, PArameterized Clipping acTivation function (PACT) , that automatically optimizes the quantization scales during model training.
PACT allows significant reductions in the bit-widths needed to represent both weights and activations and opens up new opportunities for trading off hardware complexity with model accuracy.
The primary contributions of this work include: 1) PACT: a new activation quantization scheme for finding the optimal quantization scale during training.
We introduce a new parameter α that is used to represent the clipping level in the activation function and is learnt via back-propagation.
α sets the quantization scale smaller than ReLU to reduce the quantization error, but larger than a conventional clipping activation function (used in previous schemes) to allow gradients to flow more effectively.
In addition, regularization is applied to α in the loss function to enable faster convergence (a minimal code sketch of this clipped activation is given below).
2) We provide reasoning and analysis on the expected effectiveness of PACT in preserving model accuracy.
3) Quantitative results demonstrating the effectiveness of PACT on a spectrum of models and datasets.
Empirically, we show that:
(a) for extremely low bit-precision (≤ 2-bits for weights and activations), PACT achieves the highest model accuracy compared to all published schemes and
(b) 4-bit quantized CNNs based on PACT achieve accuracies similar to single-precision floating point representations.
4) System performance analysis to demonstrate the trade-offs in hardware complexity for different bit representations vs. model accuracy.
We show that a dramatic reduction in the area of the computing engines is possible and use it to estimate the achievable system-level performance gains.
The rest of the paper is organized as follows: Section 2 provides a summary of related prior work on quantized CNNs.
Challenges in activation quantization are presented in Section 3.
We present PACT, our proposed solution for activation quantization, in Section 4.
In Section 5 we demonstrate the effectiveness of PACT relative to prior schemes using experimental results on popular CNNs.
Overall system performance analysis for a representative hardware system is presented in Section 6 demonstrating the observed trade-offs in hardware complexity for different bit representations.
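To make the clipping step described above concrete, the following is a minimal PyTorch-style sketch. It is an illustration under our own assumptions: the function name, the 4-bit default, and the straight-through rounding rule are ours, not taken from the paper.
```python
import torch

def pact_activation(x, alpha, k=4):
    # Clip x to [0, alpha] via 0.5 * (|x| - |x - alpha| + alpha); unlike a hard clamp,
    # this form keeps an explicit gradient path to alpha.
    y = 0.5 * (x.abs() - (x - alpha).abs() + alpha)
    # Linear k-bit quantization of the clipped range, with a straight-through
    # estimator standing in for the non-differentiable rounding step.
    scale = (2 ** k - 1) / alpha
    return (torch.round(y * scale) / scale).detach() + y - y.detach()
```
In a full network, alpha would typically be registered as a per-layer nn.Parameter so that back-propagation updates it alongside the weights, with the regularization on alpha mentioned in the contributions added to the loss.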
|
A new way of quantizing activations of deep neural networks via parameterized clipping, which optimizes the quantization scale via stochastic gradient descent.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:369
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Generative models of natural images have progressed towards high fidelity samples by the strong leveraging of scale.
We attempt to carry this success to the field of video modeling by showing that large Generative Adversarial Networks trained on the complex Kinetics-600 dataset are able to produce video samples of substantially higher complexity and fidelity than previous work.
Our proposed model, Dual Video Discriminator GAN (DVD-GAN), scales to longer and higher resolution videos by leveraging a computationally efficient decomposition of its discriminator.
We evaluate on the related tasks of video synthesis and video prediction, and achieve new state-of-the-art Fréchet Inception Distance for prediction for Kinetics-600, as well as state-of-the-art Inception Score for synthesis on the UCF-101 dataset, alongside establishing a strong baseline for synthesis on Kinetics-600.
We approached the challenging problem of modeling natural video by introducing a GAN capable of capturing the complexity of a large video dataset.
We showed that on UCF-101 and frame-conditional Kinetics-600 it quantitatively achieves the new state of the art, alongside qualitatively producing video synthesis samples with high complexity and diversity.
We further wish to emphasize the benefit of training generative models on large and complex video datasets, such as Kinetics-600, and envisage the strong baselines we established on this dataset with DVD-GAN will be used as a reference point by the generative modeling community moving forward.
While much remains to be done before realistic videos can be consistently generated in an unconstrained setting, we believe DVD-GAN is a step in that direction.
|
We propose DVD-GAN, a large video generative model that is state of the art on several tasks and produces highly complex videos when trained on large real world datasets.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:37
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Pruning neural network parameters is often viewed as a means to compress models, but pruning has also been motivated by the desire to prevent overfitting.
This motivation is particularly relevant given the perhaps surprising observation that a wide variety of pruning approaches increase test accuracy despite sometimes massive reductions in parameter counts.
To better understand this phenomenon, we analyze the behavior of pruning over the course of training, finding that pruning's effect on generalization relies more on the instability it generates (defined as the drops in test accuracy immediately following pruning) than on the final size of the pruned model.
We demonstrate that even the pruning of unimportant parameters can lead to such instability, and show similarities between pruning and regularizing by injecting noise, suggesting a mechanism for pruning-based generalization improvements that is compatible with the strong generalization recently observed in over-parameterized networks.
Pruning weights and/or convolutional filters from deep neural networks (DNNs) can substantially shrink parameter counts with minimal loss in accuracy (LeCun et al., 1990; Hassibi & Stork, 1993; Han et al., 2015a; Li et al., 2016; Molchanov et al., 2017; Louizos et al., 2017; Liu et al., 2017; Ye et al., 2018) , enabling broader application of DNNs via reductions in memory-footprint and inference-FLOPs requirements.
Moreover, many pruning methods have been found to actually improve generalization (measured by model accuracy on previously unobserved inputs) (Narang et al., 2017; Frankle & Carbin, 2018; You et al., 2019) .
Consistent with this, pruning was originally motivated as a means to prevent over-parameterized networks from overfitting to comparatively small datasets (LeCun et al., 1990) .
Concern about over-parameterizing models has weakened, however, as many recent studies have found that adding parameters can actually reduce a DNN's generalization-gap (the drop in performance when moving from previously seen to previously unseen inputs), even though it has been shown that the same networks have enough parameters to fit large datasets of randomized data (Neyshabur et al., 2014; Zhang et al., 2016) .
Potential explanations for this unintuitive phenomenon have come via experiments (Keskar et al., 2016; Morcos et al., 2018; Yao et al., 2018; Belkin et al., 2018; Nagarajan & Kolter, 2019) , and the derivation of bounds on DNN generalization-gaps that suggest less overfitting might occur as parameter counts increase (Neyshabur et al., 2018) .
This research has implications for neural network pruning, where a puzzling question has arisen: if larger parameter counts don't increase overfitting, how does pruning parameters throughout training improve generalization?
To address this question we first introduce the notion of pruning instability, which we define to be the size of the drop in network accuracy caused by a pruning iteration (Section 3).
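The definition above reads directly as a measurement procedure. The sketch below is our own illustration of one magnitude-pruning iteration and its instability; `model.weights` and `evaluate` are hypothetical placeholders, not an interface from the paper.
```python
import numpy as np

def prune_step_instability(model, evaluate, prune_fraction=0.2):
    # Measure accuracy, magnitude-prune the smallest `prune_fraction` of weights,
    # and return the instability: the immediate drop in test accuracy.
    acc_before = evaluate(model)
    w = model.weights                              # assumed flat numpy weight vector
    threshold = np.quantile(np.abs(w), prune_fraction)
    model.weights = np.where(np.abs(w) < threshold, 0.0, w)
    acc_after = evaluate(model)
    return acc_before - acc_after
```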
We then empirically analyze the instability and generalization associated with various magnitude-pruning (Han et al., 2015b) algorithms in different settings, making the following contributions:
1. We find a tradeoff between the stability and potential generalization benefits of pruning, and show iterative pruning's similarity to regularizing with noise-suggesting a mechanism unrelated to parameter counts through which pruning appears to affect generalization.
2. We characterize the properties of pruning algorithms which lead to instability and correspondingly higher generalization.
In this study, we defined the notion of pruning algorithm instability, and applied several pruning approaches to multiple neural networks, assessing the approaches' effects on instability and generalization.
Throughout these experiments, we observed that pruning algorithms that generated more instability led to better generalization (as measured by test accuracy).
For a given pruning target and total pruning percentage, instability and generalization could be fueled by raising iterative pruning rates (Figure 4, Section 4.3).
Additionally, targeting more important weights, again holding total parameters pruned constant, led to more instability and generalization than targeting less important weights (Figure 1, Section 4.1).
These results support the idea that the generalization benefits of pruning cannot be explained solely by pruning's effect on parameter counts-the properties of the pruning algorithm must be taken into account.
Our analysis also suggests that the capacity effects of weight-removal may not even be necessary to explain how pruning improves generalization.
Indeed, we provide an interpretation of iterative pruning as noise injection, a popular approach to regularizing DNNs, and find that making pruning noise impermanent provides pruning-like generalization benefits while not removing as much capacity as permanent pruning (Figure 5, Section 4.4).
|
We demonstrate that pruning methods which introduce greater instability into the loss also confer improved generalization, and explore the mechanisms underlying this effect.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:370
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Neural networks trained with backpropagation, the standard algorithm of deep learning which uses weight transport, are easily fooled by existing gradient-based adversarial attacks.
This class of attacks are based on certain small perturbations of the inputs to make networks misclassify them.
We show that less biologically implausible deep neural networks trained with feedback alignment, which do not use weight transport, can be harder to fool, providing actual robustness.
Tested on MNIST, deep neural networks trained without weight transport (1) have an adversarial accuracy of 98% compared to 0.03% for neural networks trained with backpropagation and (2) generate non-transferable adversarial examples.
However, this gap decreases on CIFAR-10 but is still significant, particularly for small perturbation magnitudes less than 1/2.
Deep neural networks trained with backpropagation (BP) are not robust against certain hardly perceptible perturbation, known as adversarial examples, which are found by slightly altering the network input and nudging it along the gradient of the network's loss function [1] .
The feedback-path synaptic weights of these networks use the transpose of the forward-path synaptic weights to run error propagation.
This problem is commonly named the weight transport problem.
Here we consider more biologically plausible neural networks introduced by Lillicrap et al. [2] to run error propagation using feedback-path weights that are not the transpose of the forward-path ones, i.e. without weight transport.
This mechanism was called feedback alignment (FA).
The introduction of a separate feedback path in [2] in the form of random fixed synaptic weights makes the feedback gradients a rough approximation of those computed by backpropagation.
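The difference between the two error-propagation rules is small enough to show in a few lines. The following NumPy sketch is our own illustration (layer sizes, scaling and the ReLU choice are assumptions, and it is not the authors' code): feedback alignment simply swaps the transposed forward weights for a fixed random matrix.
```python
import numpy as np

rng = np.random.default_rng(0)
W2 = rng.normal(size=(256, 10)) * 0.01   # forward weights of the output layer
B2 = rng.normal(size=(256, 10)) * 0.01   # fixed random feedback weights (never updated)

def hidden_error(delta_out, h_pre, use_feedback_alignment=True):
    # Backpropagation would use W2 itself (weight transport); feedback alignment
    # replaces it with the fixed random matrix B2, giving only approximate gradients.
    feedback = B2 if use_feedback_alignment else W2
    return (delta_out @ feedback.T) * (h_pre > 0)   # ReLU derivative at the hidden layer
```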
Since gradient-based adversarial attacks are very sensitive to the quality of gradients to perturb the input and fool the neural network, we suspect that the gradients computed without weight transport cannot be accurate enough to design successful gradient-based attacks.
Here we compare the robustness of neural networks trained with either BP or FA on three well-known gradient-based attacks, namely the fast gradient sign method (FGSM) [3], the basic iterative method (BIM), and the momentum iterative fast gradient sign method (MI-FGSM) [4].
To the best of our knowledge, no prior adversarial attacks have been applied for deep neural networks without weight transport.
We perform an empirical evaluation investigating both the robustness of deep neural networks without weight transport and the transferability of adversarial examples generated with gradient-based attacks.
The results on MNIST clearly show that (1) FA networks are robust to adversarial examples generated with FA and (2) the adversarial examples generated by FA are not transferable to BP networks.
On the other hand, we find that these two conclusions are not true on CIFAR-10, even though FA networks showed a significant robustness to gradient-based attacks.
(In Figure 1b, "BP → FA" denotes adversarial examples generated with BP to fool the FA network, and "FA → BP" denotes adversarial examples generated with FA to fool the BP network.)
Therefore, one should consider performing more exhaustive analysis on more complex datasets to understand the impact of the approximated gradients provided by feedback alignment on the adversarial accuracy of biologically plausible neural networks attacked with gradient-based methods.
|
Less biologically implausible deep neural networks trained without weight transport can be harder to fool.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:371
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Chemical reactions can be described as the stepwise redistribution of electrons in molecules.
As such, reactions are often depicted using "arrow-pushing" diagrams which show this movement as a sequence of arrows.
We propose an electron path prediction model (ELECTRO) to learn these sequences directly from raw reaction data.
Instead of predicting product molecules directly from reactant molecules in one shot, learning a model of electron movement has the benefits of
(a) being easy for chemists to interpret,
(b) incorporating constraints of chemistry, such as balanced atom counts before and after the reaction, and
(c) naturally encoding the sparsity of chemical reactions, which usually involve changes in only a small number of atoms in the reactants.
We design a method to extract approximate reaction paths from any dataset of atom-mapped reaction SMILES strings.
Our model achieves excellent performance on an important subset of the USPTO reaction dataset, comparing favorably to the strongest baselines.
Furthermore, we show that our model recovers a basic knowledge of chemistry without being explicitly trained to do so.
The ability to reliably predict the products of chemical reactions is of central importance to the manufacture of medicines and materials, and to understand many processes in molecular biology.
Theoretically, all chemical reactions can be described by the stepwise rearrangement of electrons in molecules (Herges, 1994b) .
This sequence of bond-making and breaking is known as the reaction mechanism.
Understanding the reaction mechanism is crucial because it not only determines the products (formed at the last step of the mechanism), but it also provides insight into why the products are formed on an atomistic level.
Mechanisms can be treated at different levels of abstraction.
On the lowest level, quantum-mechanical simulations of the electronic structure can be performed, which are prohibitively computationally expensive for most systems of interest.
On the other end, chemical reactions can be treated as rules that "rewrite" reactant molecules to products, which abstracts away the individual electron redistribution steps into a single, global transformation step.
To combine the advantages of both approaches, chemists use a powerful qualitative model of quantum chemistry colloquially called "arrow pushing", which simplifies the stepwise electron shifts using sequences of arrows that indicate the path of electrons throughout molecular graphs (Herges, 1994b).
Recently, there have been a number of machine learning models proposed for directly predicting the products of chemical reactions (BID2; Jin et al., 2017; Schwaller et al., 2018; Segler and Waller, 2017a; Segler et al., 2018; Wei et al., 2016), largely using graph-based or machine translation models.
The task of reaction product prediction is shown on the left-hand side of FIG0.
In this paper we propose a machine learning model to predict the reaction mechanism, as shown on the right-hand side of FIG0, for a particularly important subset of organic reactions.
(FIG0 caption: (Left) The reaction product prediction problem: given the reactants and reagents, predict the structure of the product. (Right) The reaction mechanism prediction problem: given the reactants and reagents, predict how the reaction occurred to form the products.)
We argue that our model is not only more interpretable than product prediction models, but also allows easier encoding of constraints imposed by chemistry.
Proposed approaches to predicting reaction mechanisms have often been based on combining hand-coded heuristics and quantum mechanics (BID0; Kim et al., 2018; Nandi et al., 2017; Segler and Waller, 2017b; Rappoport et al., 2014; Simm and Reiher, 2017; Zimmerman, 2013), rather than using machine learning.
We call our model ELECTRO, as it directly predicts the path of electrons through molecules (i.e., the reaction mechanism).
To train the model we devise a general technique to obtain approximate reaction mechanisms purely from data about the reactants and products.
This allows one to train our model on large, unannotated reaction datasets such as USPTO (Lowe, 2012).
We demonstrate that not only does our model achieve impressive results; surprisingly, it also learns chemical properties it was not explicitly trained on.
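As a rough illustration of what "predicting the path of electrons" means computationally, the toy sketch below scores a path as a sequence of atom choices. The factorisation, array shapes and function name are our own assumptions; the excerpt does not specify ELECTRO's actual architecture.
```python
import numpy as np

def electron_path_logprob(step_scores, path):
    # step_scores: (len(path), n_atoms) unnormalised per-step scores over atoms;
    # path: the chosen atom index at each step of the electron path.
    logp = 0.0
    for scores, atom in zip(step_scores, path):
        m = scores.max()
        log_z = m + np.log(np.exp(scores - m).sum())   # log-sum-exp normaliser
        logp += scores[atom] - log_z                   # per-step log-softmax score
    return logp
```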
In this paper we proposed ELECTRO, a model for predicting electron paths for reactions with linear electron flow.
These electron paths, or reaction mechanisms, describe how molecules react together.
Our model
(i) produces output that is easy for chemists to interpret, and
(ii) exploits the sparsity and compositionality involved in chemical reactions.
As a byproduct of predicting reaction mechanisms we are also able to perform reaction product prediction, comparing favorably to the strongest baselines on this task.
|
A generative model for reaction prediction that learns the mechanistic electron steps of a reaction directly from raw reaction data.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:372
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
When machine learning models are used for high-stakes decisions, they should predict accurately, fairly, and responsibly.
To fulfill these three requirements, a model must be able to output a reject option (i.e. say "I Don't Know") when it is not qualified to make a prediction.
In this work, we propose learning to defer, a method by which a model can defer judgment to a downstream decision-maker such as a human user.
We show that learning to defer generalizes the rejection learning framework in two ways: by considering the effect of other agents in the decision-making process, and by allowing for optimization of complex objectives.
We propose a learning algorithm which accounts for potential biases held by decision-makers later in a pipeline.
Experiments on real-world datasets demonstrate that learning to defer can make a model not only more accurate but also less biased.
Even when operated by highly biased users, we show that deferring models can still greatly improve the fairness of the entire pipeline.
Recent machine learning advances have increased our reliance on learned automated systems in complex, high-stakes domains such as loan approvals BID6 , medical diagnosis BID12 , and criminal justice BID22 .
This growing use of automated decisionmaking has raised questions about the obligations of classification systems.
In many high-stakes situations, machine learning systems should satisfy (at least) three objectives: predict accurately (predictions should be broadly effective indicators of ground truth), predict fairly (predictions should be unbiased with respect to different types of input), and predict responsibly (predictions should not be made if the model cannot confidently satisfy the previous two objectives).
Given these requirements, we propose learning to defer.
When deferring, the algorithm does not output a prediction; rather it says "I Don't Know" (IDK), indicating it has insufficient information to make a responsible prediction, and that a more qualified external decision-maker (DM) is required.
For example, in medical diagnosis, the deferred cases would lead to more medical tests and a second expert opinion.
Learning to defer extends the common rejection learning framework (Chow, 1957; BID9) in two ways.
Firstly, it considers the expected output of the DM on each example, more accurately optimizing the output of the joint DM-model system.
Furthermore, it can be used with a variety of training objectives, whereas most rejection learning research focuses solely on classification accuracy.
We believe that algorithms that can defer, i.e., yield to more informed decision-makers when they cannot predict responsibly, are an essential component of accountable and reliable automated systems.
In this work, we show that the standard rejection learning paradigm (learning to punt) is inadequate if these models are intended to work as part of a larger system.
We propose an alternative decision-making framework (learning to defer) to learn and evaluate these models.
We find that embedding a deferring model in a pipeline can improve the accuracy and fairness of the pipeline as a whole, particularly if the model has insight into decision makers later in the pipeline.
We simulate such a pipeline where our model can defer judgment to a better-informed decision maker, echoing real-world situations where downstream decision makers have more resources or information.
We propose different formulations of these models along with a learning algorithm for training a model that can work optimally with such a decision-maker.
Our experimental results on two real-world datasets, from the legal and health domains, show that this algorithm learns models which, through deferring, can work with users to make fairer, more responsible decisions.
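A hedged sketch of how such an objective can account for the downstream decision-maker is given below. This is our own simplification for illustration, not the paper's formulation: the model outputs class logits plus a defer score, and deferred examples are charged with the DM's error plus a small cost for asking.
```python
import torch
import torch.nn.functional as F

def defer_loss(model_logits, defer_logit, dm_probs, target, defer_cost=0.1):
    # p_defer is the probability of saying IDK; deferred examples are scored on the
    # downstream decision-maker's predicted class probabilities plus a small cost.
    p_defer = torch.sigmoid(defer_logit)
    model_nll = F.cross_entropy(model_logits, target, reduction="none")
    dm_nll = F.nll_loss(torch.log(dm_probs + 1e-8), target, reduction="none")
    loss = (1 - p_defer) * model_nll + p_defer * (dm_nll + defer_cost)
    return loss.mean()
```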
In this work, we propose the idea of learning to defer.
We propose a model which learns to defer fairly, and show that these models can better navigate the accuracy-fairness tradeoff.
We also consider deferring models as one part of a decision pipeline.
To this end, we provide a framework for evaluating deferring models by incorporating other decision makers' output into learning.
We give an algorithm for learning to defer in the context of a larger system, and show how to train a deferring model to optimize the performance of the pipeline as a whole.
This is a powerful, general framework, with ramifications for many complex domains where automated models interact with other decision-making agents.
A model with a low deferral rate could be used to cull a large pool of examples, with all deferrals requiring further examination.
Conversely, a model with a high deferral rate can be thought of as flagging the most troublesome, incorrect, or biased decisions by a DM, with all non-deferrals requiring further investigation.
Automated models often operate within larger systems, with many moving parts.
Through deferring, we show how models can learn to predict responsibly within their surrounding systems.
Building models which can defer to more capable decision makers is an essential step towards fairer, more responsible machine learning.
|
Incorporating the ability to say I-don't-know can improve the fairness of a classifier without sacrificing too much accuracy, and this improvement magnifies when the classifier has insight into downstream decision-making.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:373
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Hierarchical Task Networks (HTN) generate plans using a decomposition process guided by extra domain knowledge to guide search towards a planning task.
While many HTN planners can make calls to external processes (e.g. to a simulator interface) during the decomposition process, this is a computationally expensive process, so planner implementations often use such calls in an ad-hoc way using very specialized domain knowledge to limit the number of calls.
Conversely, the few classical planners that are capable of using external calls (often called semantic attachments) during planning do so in much more limited ways by generating a fixed number of ground operators at problem grounding time.
In this paper we develop the notion of semantic attachments for HTN planning using semi co-routines, allowing such procedurally defined predicates to link the planning process to custom unifications outside of the planner.
The resulting planner can then use such co-routines as part of its backtracking mechanism to search through parallel dimensions of the state-space (e.g. through numeric variables).
We show empirically that our planner outperforms the state-of-the-art numeric planners in a number of domains using minimal extra domain knowledge.
Planning in domains that require numerical variables, for example, to drive robots in the physical world, must represent and search through a space defined by real-valued functions with a potentially infinite domain, range, or both.
This type of numeric planning problem poses challenges in two ways.
First, the description formalisms BID6 might not make it easy to express the numeric functions and its variables, resulting in a description process that is time consuming and error-prone for real-world domains BID17 .
Second, the planners that try to solve such numeric problems must find efficient strategies to find solutions through this type of state-space.
Previous work on formalisms for domains with numeric values developed the Semantic Attachment (SA) construct BID3 ) in classical planning.
Semantic attachments were coined by (Weyhrauch 1981) to describe the attachment of an interpretation to a predicate symbol using an external procedure.
Such construct allows the planner to reason about fluents where numeric values come from externally defined functions.
In this paper, we extend the basic notion of semantic attachment for HTN planning by defining the semantics of the functions used as semantic attachments in a way that allows the HTN search and backtracking mechanism to be substantially more efficient.
Our current approach focused on a depth-first search HTN implementation without heuristic guidance, with free variables expected to be fully ground before task decomposition continues.
Most planners are limited to purely symbolic operations, lacking structures to optimize usage of continuous resources involving numeric values (BID9).
Floating point numeric values, unlike discrete logical symbols, have an infinite domain and are harder to compare as one must consider rounding errors.
One could overcome such errors with delta comparisons, but this solution becomes cumbersome as objects are represented by several numeric values which must be handled and compared as one, such as points or polygons.
Planning descriptions usually simplify such complex objects to symbolic values (e.g. p25 or poly2) that are easier to compare.
Detailed numeric values are ignored during planning or left to be decided later, which may force replanning BID17 .
Instead of simplifying the description or doing multiple comparisons in the description itself, our goal is to exploit external formalisms orthogonal to the symbolic description.
To achieve that we build a mapping from symbols to objects generated as we query semantic attachments.
Semantic attachments have already been used in classical planning BID3 ) to unify values just like predicates, and their main advantage is that new users do not need to discern between them and common predicates.
Thus, we extend classical HTN planning algorithms and their formalism to support semantic attachment queries.
While external function calls map to functions defined outside the HTN description, we implement SAs as semi co-routines (BID1), subroutines that suspend and resume their state, to iterate across zero or more values provided one at a time by an external implementation, mitigating the potentially infinite range of the external function (a minimal generator-based sketch is given below).
Our contributions are threefold.
First, we introduce SAs for HTN planning as a mechanism to describe and evaluate external predicates at execution time.
Second, we introduce a symbol-object table to improve the readability of symbolic descriptions and the plans generated, while making it easier to handle external objects and structures.
Finally, we empirically compare the resulting HTN planner with a modern classical planner BID10 in a number of mixed symbolic/numeric domains showing substantial gains in speed with minimal domain knowledge.
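The semi co-routine view of a semantic attachment maps naturally onto Python generators. The sketch below is illustrative only; the predicate name, state layout and distance test are our assumptions, not the paper's formalism. Each resumption yields one candidate binding, so the planner grounds values lazily and can resume the generator on backtracking.
```python
import math

def nearby_objects(state, robot, max_dist):
    # Lazily yield one candidate unification per resumption; the planner can resume
    # the generator on backtracking instead of grounding every value up front.
    robot_pos = state["positions"][robot]
    for obj, pos in state["positions"].items():
        if obj != robot and math.dist(pos, robot_pos) <= max_dist:
            yield {"?target": obj}

# Rough usage inside a decomposition loop:
#   for binding in nearby_objects(state, "r1", 2.0):
#       if decompose(remaining_tasks, binding):
#           break
```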
We developed a notion of semantic attachments for HTN planners that not only allows a domain expert to easily define external numerical functions for real-world domains, but also provides substantial improvements on planning speed over comparable classical planning approaches.
The use of semantic attachments improves the planning speed as one can express a potentially infinite state representation with procedures that can be exploited by a strategy described as HTN tasks.
As only semantic attachments present in the path decomposed during planning are evaluated, a smaller amount of time is required when compared with approaches that precompute every possible value during operator grounding.
Our description language is arguably more readable than the commonly used strategy of developing a domain specific planner with customized heuristics.
Specifically, we allow designers to easily define external functions in a way that is readable within the domain knowledge encoded in HTN methods at design time, and also dynamically generate symbolic representations of external values at planning time, which makes generated plans easier to understand.
Our work is the first attempt at defining the syntax and operation of semantic attachments for HTNs, allowing further research on search in SA-enabled domains within HTN planners.
Future work includes implementing a cache to reuse previous values from external procedures applied to similar previous states BID4 ) and a generic construction to access such values in the symbolic layer, to obtain data from explored branches outside the state structure, i.e. to hold mutually exclusive predicate information.
We plan to develop more domains, with varying levels of domain knowledge and SA usage, to obtain better comparison with other planners and their resulting plan quality.
The advantage of being able to exploit external implementations conflicts with the ability to incorporate such domain knowledge into heuristic functions, as such knowledge is outside the description.
Further work is required to expose possible metrics from a SA to heuristic functions.
|
An approach to perform HTN planning using external procedures to evaluate predicates at runtime (semantic attachments).
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:374
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Recent literature suggests that averaged word vectors followed by simple post-processing outperform many deep learning methods on semantic textual similarity tasks.
Furthermore, when averaged word vectors are trained supervised on large corpora of paraphrases, they achieve state-of-the-art results on standard STS benchmarks.
Inspired by these insights, we push the limits of word embeddings even further.
We propose a novel fuzzy bag-of-words (FBoW) representation for text that contains all the words in the vocabulary simultaneously but with different degrees of membership, which are derived from similarities between word vectors.
We show that max-pooled word vectors are only a special case of fuzzy BoW and should be compared via fuzzy Jaccard index rather than cosine similarity.
Finally, we propose DynaMax, a completely unsupervised and non-parametric similarity measure that dynamically extracts and max-pools good features depending on the sentence pair.
This method is both efficient and easy to implement, yet outperforms current baselines on STS tasks by a large margin and is even competitive with supervised word vectors trained to directly optimise cosine similarity.
Natural languages are able to encode sentences with similar meanings using very different vocabulary and grammatical constructs, which makes determining the semantic similarity between pieces of text a challenge.
It is common to cast semantic similarity between sentences as the proximity of their vector representations.
More than half a century since it was first proposed, the Bag-of-Words (BoW) representation (Harris, 1954; BID47 BID37 remains a popular baseline across machine learning (ML), natural language processing (NLP), and information retrieval (IR) communities.
In recent years, however, BoW was largely eclipsed by representations learned through neural networks, ranging from shallow BID36 BID21 to recurrent BID12 BID53, recursive BID51 BID55, convolutional BID30 BID32, self-attentive BID57 BID9 and hybrid architectures BID19 BID56 BID66.
Interestingly, BID5 showed that averaged word vectors BID38 BID44 BID6 BID29 weighted with the Smooth Inverse Frequency (SIF) scheme and followed by a Principal Component Analysis (PCA) post-processing procedure were a formidable baseline for Semantic Textual Similarity (STS) tasks, outperforming deep representations.
Furthermore, BID59 and BID58 showed that averaged word vectors trained supervised on large corpora of paraphrases achieve state-of-the-art results, outperforming even the supervised systems trained directly on STS.
Inspired by these insights, we push the boundaries of word vectors even further.
We propose a novel fuzzy bag-of-words (FBoW) representation for text.
Unlike classical BoW, fuzzy BoW contains all the words in the vocabulary simultaneously but with different degrees of membership, which are derived from similarities between word vectors.
Next, we show that max-pooled word vectors are a special case of fuzzy BoW.
Max-pooling significantly outperforms averaging on standard benchmarks when word vectors are trained unsupervised.
Since max-pooled vectors are just a special case of fuzzy BoW, we show that the fuzzy Jaccard index is a more suitable alternative to cosine similarity for comparing these representations.
By contrast, the fuzzy Jaccard index completely fails for averaged word vectors, as there is no connection between the two.
The max-pooling operation is commonplace throughout NLP and has been successfully used to extract features in supervised systems BID10 BID32 BID31 BID13 BID12 BID15; however, to the best of our knowledge, the present work is the first to study max-pooling of pre-trained word embeddings in isolation and to suggest theoretical underpinnings behind this operation.
Finally, we propose DynaMax, a completely unsupervised and non-parametric similarity measure that dynamically extracts and max-pools good features depending on the sentence pair.
DynaMax outperforms averaged word vectors with cosine similarity on every benchmark STS task when word vectors are trained unsupervised.
It even performs comparably to BID58's vectors under cosine similarity, which is a striking result as the latter are in fact trained supervised to directly optimise cosine similarity between paraphrases, while our approach is completely unrelated to that objective.
We believe this makes DynaMax a strong baseline that future algorithms should aim to beat in order to justify more complicated approaches to semantic similarity.
As an additional contribution, we conduct significance analysis of our results.
We found that recent literature on STS tends to apply unspecified or inappropriate parametric tests, or leave out significance analysis altogether in the majority of cases.
By contrast, we rely on nonparametric approaches with much milder assumptions on the test statistic; specifically, we construct bias-corrected and accelerated (BCa) bootstrap confidence intervals BID17 for the delta in performance between two systems.
We are not aware of any prior works that apply such methodology to STS benchmarks, and hope the community finds our analysis to be a good starting point for conducting thorough significance testing on these types of experiments.
In this work we combine word embeddings with classic BoW representations using fuzzy set theory.
We show that max-pooled word vectors are a special case of FBoW, which implies that they should be compared via the fuzzy Jaccard index rather than the more standard cosine similarity.
We also present a simple and novel algorithm, DynaMax, which corresponds to projecting word vectors onto a subspace dynamically generated by the given sentences before max-pooling over the features.
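Based on that description, a hedged sketch of the similarity computation looks as follows; the exact algorithm in the paper may differ, and the clipping at zero reflects the max-pooling with a zero vector mentioned in the appendix.
```python
import numpy as np

def dynamax_jaccard(x_vecs, y_vecs):
    # Stack both sentences' word vectors as the dynamically generated feature space,
    # project each sentence onto it, max-pool over words (floored at zero, i.e.
    # max-pooling together with a zero vector), and compare with the fuzzy Jaccard index.
    U = np.vstack([x_vecs, y_vecs])
    x = np.maximum(x_vecs @ U.T, 0.0).max(axis=0)
    y = np.maximum(y_vecs @ U.T, 0.0).max(axis=0)
    return np.minimum(x, y).sum() / np.maximum(x, y).sum()
```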
DynaMax outperforms averaged word vectors compared with cosine similarity on every benchmark STS task when word vectors are trained unsupervised.
It even performs comparably to supervised vectors that directly optimise cosine similarity between paraphrases, despite being completely unrelated to that objective.Both max-pooled vectors and DynaMax constitute strong baselines for further studies in the area of sentence representations.
Yet, these methods are not limited to NLP and word embeddings, but can in fact be used in any setting where one needs to compute similarity between sets of elements that have rich vector representations.
We hope to have demonstrated the benefits of experimenting more with similarity metrics based on the building blocks of meaning such as words, rather than complex representations of the final objects such as sentences.
In the word fuzzification step the membership values for a word w are obtained through a similarity function sim(w, u^(j)) between the word embedding w and the rows of the universe matrix U, i.e. μ^(j) = sim(w, u^(j)).
There are several reasons why we chose a similarity function that takes values in R as opposed to [0, 1].
Intuitively, large negative membership values would imply the element is really not in the set and large positive values mean it is really in the set.
Of course, here both 'large' and 'really' depend on the scaling factor a.
In any case, we see that the choice of R vs. [0, 1] is not very important mathematically.
Interestingly, since we always max-pool with a zero vector, fuzzy BoW will not contain any negative membership values.
This was not our intention, just a by-product of the model.
For completeness, let us insist on the range [0, 1] and choose sim(w, u^(j)) to be the clipped cosine similarity max(0, cos(w, u^(j))).
This is in fact equivalent to simply normalising the word vectors.
Indeed, the dot product and cosine similarity become the same after normalisation, and max-pooling with the zero vector removes all the negative values, so the resulting representation is guaranteed to be a [0, 1]-fuzzy set.
Our results for normalised word vectors are presented in TAB3.
After comparing it with TAB0 we can draw two conclusions.
Namely, DynaMax still outperforms avg-cos by a large margin even when word vectors are normalised.
However, normalisation hurts both approaches and should generally be avoided.
This is not surprising since the length of word vectors is correlated with word importance, so normalisation essentially makes all words equally important BID48.
|
Max-pooled word vectors with fuzzy Jaccard set similarity are an extremely competitive baseline for semantic similarity; we propose a simple dynamic variant that performs even better.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:375
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Momentary fluctuations in attention (perceptual accuracy) correlate with neural activity fluctuations in primate visual areas.
Yet, the link between such momentary neural fluctuations and attention state remains to be shown in the human brain.
We investigate this link using a real-time cognitive brain machine interface (cBMI) based on steady state visually evoked potentials (SSVEPs): occipital EEG potentials evoked by rhythmically flashing stimuli.
Tracking momentary fluctuations in SSVEP power, in real-time, we presented stimuli time-locked to when this power reached (predetermined) high or low thresholds.
We observed a significant increase in discrimination accuracy (d') when stimuli were triggered during high (versus low) SSVEP power epochs, at the location cued for attention.
Our results indicate a direct link between attention's effects on perceptual accuracy and neural gain in EEG-SSVEP power, in the human brain.
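As a rough illustration of the real-time triggering logic described above, the sketch below estimates SSVEP power at the flicker frequency of a short occipital EEG window and checks it against the two predetermined thresholds. The Hann windowing and FFT-based power estimate are our own choices, not the study's exact pipeline.
```python
import numpy as np

def ssvep_trigger(eeg_window, fs, flicker_hz, hi_thresh, lo_thresh):
    # Estimate power at the flicker frequency of a Hann-windowed EEG segment and
    # decide whether the current epoch qualifies as "high" or "low" SSVEP power.
    spectrum = np.abs(np.fft.rfft(eeg_window * np.hanning(len(eeg_window))))
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
    power = spectrum[np.argmin(np.abs(freqs - flicker_hz))] ** 2
    if power >= hi_thresh:
        return "trigger_high"
    if power <= lo_thresh:
        return "trigger_low"
    return None
```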
|
With a cognitive brain-machine interface, we show a direct link between attentional effects on perceptual accuracy and neural gain in EEG-SSVEP power, in the human brain.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:377
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Recent studies have demonstrated the vulnerability of deep convolutional neural networks against adversarial examples.
Inspired by the observation that the intrinsic dimension of image data is much smaller than its pixel space dimension and the vulnerability of neural networks grows with the input dimension, we propose to embed high-dimensional input images into a low-dimensional space to perform classification.
However, arbitrarily projecting the input images to a low-dimensional space without regularization will not improve the robustness of deep neural networks.
We propose a new framework, Embedding Regularized Classifier (ER-Classifier), which improves the adversarial robustness of the classifier through embedding regularization.
Experimental results on several benchmark datasets show that, our proposed framework achieves state-of-the-art performance against strong adversarial attack methods.
Deep neural networks (DNNs) have been widely used for tackling numerous machine learning problems that were once believed to be challenging.
With their remarkable ability of fitting training data, DNNs have achieved revolutionary successes in many fields such as computer vision, natural language progressing, and robotics.
However, they were shown to be vulnerable to adversarial examples that are generated by adding carefully crafted perturbations to original images.
The adversarial perturbations can arbitrarily change the network's prediction but often too small to affect human recognition (Szegedy et al., 2013; Kurakin et al., 2016) .
This phenomenon brings out security concerns for practical applications of deep learning.
Two main types of attack settings have been considered in recent research (Goodfellow et al.; Carlini & Wagner, 2017a; Chen et al., 2017; Papernot et al., 2017) : black-box and white-box settings.
In the black-box setting, the attacker can provide any inputs and receive the corresponding predictions.
However, the attacker cannot get access to the gradients or model parameters under this setting; whereas in the white-box setting, the attacker is allowed to analytically compute the model's gradients, and have full access to the model architecture and weights.
In this paper, we focus on defending against the white-box attack which is the harder task.
Recent work (Simon-Gabriel et al., 2018) presented both theoretical arguments and an empirical one-to-one relationship between input dimension and adversarial vulnerability, showing that the vulnerability of neural networks grows with the input dimension.
Therefore, reducing the data dimension may help improve the robustness of neural networks.
Furthermore, a consensus in the highdimensional data analysis community is that, a method working well on the high-dimensional data is because the data is not really of high-dimension (Levina & Bickel, 2005) .
These high-dimensional data, such as images, are actually embedded in a low dimensional space.
Hence, carefully reducing the input dimension may improve the robustness of the model without sacrificing performance.
Inspired by the observation that the intrinsic dimension of image data is actually much smaller than its pixel space dimension (Levina & Bickel, 2005) and the vulnerability of a model grows with its input dimension (Simon-Gabriel et al., 2018) , we propose a defense framework that embeds input images into a low-dimensional space using a deep encoder and performs classification based on the latent embedding with a classifier network.
However, an arbitrary projection does not guarantee improving the robustness of the model, because there are a lot of mapping functions including non-robust ones from the raw input space to the low-dimensional space capable of minimizing the classification loss.
To constrain the mapping function, we employ distribution regularization in the embedding space leveraging optimal transport theory.
We call our new classification framework Embedding Regularized Classifier (ER-Classifier).
To be more specific, we introduce a discriminator in the latent space which tries to separate the generated code vectors from the encoder network and the ideal code vectors sampled from a prior distribution, i.e., a standard Gaussian distribution.
Employing a similar powerful competitive mechanism as demonstrated by Generative Adversarial Networks (Goodfellow et al., 2014) , the discriminator enforces the embedding space of the model to follow the prior distribution.
In our ER-Classifier framework, the encoder and discriminator structures together project the input data to a low-dimensional space with a nice shape, then the classifier performs prediction based on the lowdimensional embedding.
Based on the optimal transport theory, the proposed ER-Classifier minimizes the discrepancy between the distribution of the true label and the distribution of the framework output, thus only retaining important features for classification in the embedding space.
With a small embedding dimension, the effect of the adversarial perturbation is largely diminished through the projection process.
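A minimal sketch of the three components described above is given below. The layer sizes, the 16-dimensional embedding, the standard-Gaussian prior and the GAN-style penalty weight are illustrative assumptions, not the paper's exact architecture or objective.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, 16))
classifier = nn.Linear(16, 10)
discriminator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))

def er_classifier_losses(x, y, lam=1.0):
    z = encoder(x)                                # low-dimensional embedding of the input
    cls_loss = F.cross_entropy(classifier(z), y)  # classify from the embedding only
    z_prior = torch.randn_like(z)                 # "ideal" codes sampled from N(0, I)
    d_prior, d_code = discriminator(z_prior), discriminator(z.detach())
    # Discriminator: separate prior codes (label 1) from encoded codes (label 0).
    d_loss = F.binary_cross_entropy_with_logits(d_prior, torch.ones_like(d_prior)) + \
             F.binary_cross_entropy_with_logits(d_code, torch.zeros_like(d_code))
    # Encoder regulariser: fool the discriminator, pushing codes toward the prior.
    d_z = discriminator(z)
    enc_reg = F.binary_cross_entropy_with_logits(d_z, torch.ones_like(d_z))
    return cls_loss + lam * enc_reg, d_loss
```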
We compare ER-Classifier with other state-of-the-art defense methods on MNIST, CIFAR10, STL10 and Tiny Imagenet.
Experimental results demonstrate that our proposed ER-Classifier outperforms other methods by a large margin.
To sum up, this paper makes the following three main contributions:
• A novel unified end-to-end robust deep neural network framework against adversarial attacks is proposed, where the input image is first projected to a low-dimensional space and then classified.
• An objective is induced to minimize the optimal transport cost between the true class distribution and the framework output distribution, guiding the encoder and discriminator to project the input image to a low-dimensional space without losing important features for classification.
• Extensive experiments demonstrate the robustness of our proposed ER-Classifier framework under the white-box attacks, and show that ER-Classifier outperforms other state-ofthe-art approaches on several benchmark image datasets.
As far as we know, (1) our approach is the first that applies optimal transport theory, i.e., a Wasserstein distance regularization, to a bottleneck embedding layer of a deep neural network in a purely supervised learning setting without considering any reconstruction loss, although optimal transport theory or a discriminator loss has been applied to generative models in an unsupervised learning setting (Makhzani et al., 2015; Tolstikhin et al., 2017); and (2) our method is also the first that establishes the connection between a Wasserstein distance regularization and the robustness of deep neural networks for defending against adversarial examples.
In this paper, we propose a new defense framework, ER-Classifier, which projects the input images to a low-dimensional space to remove adversarial perturbation and stabilize the model through minimizing the discrepancy between the true label distribution and the framework output distribution.
We empirically show that ER-Classifier is much more robust than other state-of-the-art defense methods on several benchmark datasets.
Future work will include further exploration of the low-dimensional space to improve the robustness of deep neural network.
|
A general and easy-to-use framework that improves the adversarial robustness of deep classification models through embedding regularization.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:378
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We investigate task clustering for deep learning-based multi-task and few-shot learning in the settings with large numbers of diverse tasks.
Our method measures task similarities using cross-task transfer performance matrix.
Although this matrix provides us critical information regarding similarities between tasks, the uncertain task-pairs, i.e., the ones with extremely asymmetric transfer scores, may collectively mislead clustering algorithms to output an inaccurate task-partition.
Moreover, when the number of tasks is large, generating the full transfer performance matrix can be very time consuming.
To overcome these limitations, we propose a novel task clustering algorithm to estimate the similarity matrix based on the theory of matrix completion.
The proposed algorithm can work on partially-observed similarity matrices based on only sampled task-pairs with reliable scores, ensuring its efficiency and robustness.
Our theoretical analysis shows that under mild assumptions, the reconstructed matrix perfectly matches the underlying “true” similarity matrix with an overwhelming probability.
The final task partition is computed by applying an efficient spectral clustering algorithm to the recovered matrix.
Our results show that the new task clustering method can discover task clusters that benefit both multi-task learning and few-shot learning setups for sentiment classification and dialog intent classification tasks.
This paper leverages knowledge distilled from a large number of learning tasks BID0 BID19 , or MAny Task Learning (MATL), to achieve the goal of
(i) improving the overall performance of all tasks, as in multi-task learning (MTL); and (ii) rapid adaptation to a new task by using previously learned knowledge, similar to few-shot learning (FSL) and transfer learning.
Previous work on multi-task learning and transfer learning used small numbers of related tasks (usually ∼10) picked by human experts.
By contrast, MATL tackles hundreds or thousands of tasks BID0 BID19, with unknown relatedness between pairs of tasks, introducing new challenges such as task diversity and model inefficiency.
MATL scenarios are increasingly common in a wide range of machine learning applications with potentially huge impact.
Examples include reinforcement learning for game playing, where large numbers of sub-goals are treated as tasks by the agents for joint learning; e.g. BID19 achieved the state of the art on the Ms. Pac-Man game by using a multi-task learning architecture to approximate rewards of over 1,000 sub-goals (reward functions).
Another important example is enterprise AI cloud services, where many clients submit various tasks/datasets to train machine learning models for business-specific purposes.
The clients could be companies who want to know the opinions of their customers on products and services, agencies that monitor public reactions to policy changes, and financial analysts who analyze news as it can potentially influence the stock market.
Such MATL-based services thus need to handle the diverse nature of clients' tasks.
Challenges on Handling Diverse (Heterogeneous) Tasks: Previous multi-task learning and few-shot learning research usually works on homogeneous tasks, e.g. all tasks are binary classification problems, or tasks are close to each other (picked by human experts) so that positive transfer between tasks is guaranteed.
However, with a large number of tasks in a MATL setting, the above assumption may not hold, i.e. we need to be able to deal with tasks with larger diversity.
Such diversity can be reflected as
(i) tasks with varying numbers of labels: when tasks are diverse, different tasks could have different numbers of labels; and the labels might be defined in different label spaces without relatedness.
Most of the existing multi-task and few-shot learning methods will fail in this setting; and more importantly (ii) tasks with positive and negative transfers: since tasks are not guaranteed to be similar to each other in the MATL setting, they are not always able to help each other when trained together, i.e. negative transfer BID22 between tasks.
For example, in dialog services, the sentences "What fast food do you have nearby" and "Could I find any Indian food" may belong to two different classes "fast_food" and "indian_food" for a restaurant recommendation service in a city; while for a travel-guide service for a park, those two sentences could belong to the same class "food_options".
In this case the two tasks may hurt each other when trained jointly with a single representation function, since the first task tends to distinguish the two sentences in the representation space while the second one tends to give them similar representations.
A Task Clustering Based Solution: To deal with the second challenge above, we propose to partition the tasks into clusters, making the tasks in each cluster more likely to be related.
Common knowledge is only shared across tasks within a cluster, thus the negative transfer problem is alleviated.
There are a few task clustering algorithms proposed mainly for convex models BID12 BID9 BID5 BID0 , but they assume that the tasks have the same number of labels (usually binary classification).
In order to handle tasks with varying numbers of labels, we adopt a similarity-based task clustering algorithm.
The task similarity is measured by cross-task transfer performance, which is a matrix S whose (i, j)-entry S_ij is the estimated accuracy obtained by adapting the representations learned on the i-th (source) task to the j-th (target) task.
The above task similarity computation does not require the source task and target task to have the same set of labels; as a result, our clustering algorithm can naturally handle tasks with varying numbers of labels. Although cross-task transfer performance provides critical information about task similarities, directly using it for task clustering may suffer from both efficiency and accuracy issues.
First and most importantly, evaluating all entries of the matrix S involves conducting source-target transfer learning O(n^2) times, where n is the number of tasks.
For a large number of diverse tasks, where n can be larger than 1,000, evaluating the full matrix is unacceptable (over 1M entries to evaluate).
Second, the estimated cross-task performance (i.e. some S_ij or S_ji scores) is often unreliable due to small data sizes or label noise.
When the number of uncertain values is large, they can collectively mislead the clustering algorithm into outputting an incorrect task partition.
To address the aforementioned challenges, we propose a novel task clustering algorithm based on the theory of matrix completion BID2 .
Specifically, we deal with the huge number of entries by randomly sampling task pairs to evaluate the S_ij and S_ji scores, and deal with the unreliable entries by keeping only task pairs (i, j) with consistent S_ij and S_ji scores.
Given a set of n tasks, we first construct an n × n partially-observed matrix Y whose observed entries correspond to the sampled and reliable task pairs (i, j) with consistent S_ij and S_ji scores.
Otherwise, if a task pair (i, j) is not sampled to compute the transfer scores or its scores are inconsistent, we mark both Y_ij and Y_ji as unobserved.
Given the constructed partially-observed matrix Y, our next step is to recover an n × n full similarity matrix using a robust matrix completion approach, and then generate the final task partition by applying spectral clustering to the completed similarity matrix.
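For concreteness, a minimal sketch of this pipeline is given below. A simple soft-impute style completion stands in for the paper's robust matrix completion step, and the consistency threshold, function names, and hyperparameters are illustrative assumptions rather than the authors' exact choices.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def complete_matrix(Y, mask, n_iters=200, tau=1.0):
    """Soft-impute style completion: iteratively soft-threshold singular values
    while keeping the observed entries of Y fixed."""
    X = np.where(mask, Y, 0.0)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = (U * np.maximum(s - tau, 0.0)) @ Vt
        X = np.where(mask, Y, X_low)
    return X

def cluster_tasks(S, evaluated, eps=0.1, n_clusters=5):
    """S: sampled cross-task transfer scores (entries outside `evaluated` can hold
    any placeholder value, e.g. 0). Keep only pairs whose S_ij and S_ji agree
    within eps, complete the partially observed matrix, then cluster."""
    consistent = evaluated & evaluated.T & (np.abs(S - S.T) < eps)
    Y = 0.5 * (S + S.T)                              # symmetrized observed scores
    full = complete_matrix(np.where(consistent, Y, 0.0), consistent)
    affinity = np.clip(0.5 * (full + full.T), 0.0, 1.0)
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(affinity)
```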
The proposed approach has a 2-fold advantage.
First, our method carries a strong theoretical guarantee, showing that the full similarity matrix can be perfectly recovered if the number of observed correct entries in the partially observed similarity matrix is at least O(n log^2 n).
This theoretical result allows us to compute the similarities of only O(n log^2 n) instead of O(n^2) pairs, which greatly reduces the computation when the number of tasks is large.
Second, by filtering out uncertain task pairs, the proposed algorithm is less sensitive to noise, leading to more robust clustering performance.
The task clusters allow us to handle
(i) diverse MTL problems, by model sharing only within clusters such that the negative transfer from irrelevant tasks can be alleviated; and (ii) diverse FSL problems, where a new task can be assigned a task-specific metric, which is a linear combination of the metrics defined by different clusters, such that the diverse few-shot tasks could derive different metrics from the previous learning experience.
Our results show that the proposed task clustering algorithm, combined with the above MTL and FSL strategies, could give us significantly better deep MTL and FSL algorithms on sentiment classification and intent classification tasks.
In this paper, we propose a robust task-clustering method that not only has strong theoretical guarantees but also demonstrates significantly empirical improvements when equipped by our MTL and FSL algorithms.
Our empirical studies verify that
(i) the proposed task clustering approach is very effective in the many-task learning setting especially when tasks are diverse;
(ii) our approach can efficiently handle a large number of tasks, as suggested by our theory; and
(iii) cross-task transfer performance can serve as a powerful task similarity measure.
Our work opens up many future research directions, such as supporting online many-task learning with incremental computation on task similarities, and combining our clustering approach with the recent learning-to-learn methods (e.g. BID18 ), to enhance our MTL and FSL methods.
|
We propose a matrix-completion based task clustering algorithm for deep multi-task and few-shot learning in the settings with large numbers of diverse tasks.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:379
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Understanding procedural language requires anticipating the causal effects of actions, even when they are not explicitly stated.
In this work, we introduce Neural Process Networks to understand procedural text through (neural) simulation of action dynamics.
Our model complements existing memory architectures with dynamic entity tracking by explicitly modeling actions as state transformers.
The model updates the states of the entities by executing learned action operators.
Empirical results demonstrate that our proposed model can reason about the unstated causal effects of actions, allowing it to provide more accurate contextual information for understanding and generating procedural text, all while offering more interpretable internal representations than existing alternatives.
Understanding procedural text such as instructions or stories requires anticipating the implicit causal effects of actions on entities.
For example, given instructions such as "add blueberries to the muffin mix, then bake for one half hour," an intelligent agent must be able to anticipate a number of entailed facts (e.g., the blueberries are now in the oven; their "temperature" will increase).
While this common sense reasoning is trivial for humans, most natural language understanding algorithms do not have the capacity to reason about causal effects not mentioned directly in the surface strings BID12 BID7 BID14 .
The process is a narrative of entity state changes induced by actions.
In each sentence, these state changes are induced by simulated actions and must be remembered. In this paper, we introduce Neural Process Networks, a procedural language understanding system that tracks common sense attributes through neural simulation of action dynamics.
Our network models interpretation of natural language instructions as a process of actions and their cumulative effects on entities.
More concretely, reading one sentence at a time, our model attentively selects what actions to execute on which entities, and remembers the state changes induced with a recurrent memory structure.
In FIG0 , for example, our model indexes the "tomato" embedding, selects the "wash" and "cut" functions, and performs a computation that changes the "tomato" embedding so that it can reason about attributes such as its "SHAPE" and "CLEANLINESS".
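As a rough illustration of this action-as-state-transformer idea (a simplified stand-in for the actual architecture, with illustrative shapes and parameter names), one simulation step can be sketched as follows:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def npn_step(sentence_vec, entity_states, action_ops, W_act, W_ent):
    """One simulation step: attend over actions and entities given the sentence
    encoding, then update the attended entity states with the selected action
    operator (a learned state transformer)."""
    act_attn = softmax(W_act @ sentence_vec)                    # which actions fire
    ent_attn = softmax(entity_states @ (W_ent @ sentence_vec))  # which entities change
    op = np.tensordot(act_attn, action_ops, axes=1)             # combined (d, d) operator
    candidates = np.tanh(entity_states @ op.T)                  # transformed states
    gate = ent_attn[:, None]                                    # recurrent-memory update
    return (1.0 - gate) * entity_states + gate * candidates
```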
Our model contributes to a recent line of research that aims to model aspects of world state changes, such as language models and machine readers with explicit entity representations BID4 BID6 , as well as other more general purpose memory network variants BID30 BID26 BID5 BID23 .
This world-centric modeling of procedural language (i.e., understanding by simulation) abstracts away from the surface strings, complementing text-centric modeling of language, which focuses on syntactic and semantic labeling of surface words (i.e., understanding by labeling).
Unlike previous approaches, however, our model also learns explicit action representations as functional operators (see FIG0 ). While
representations of action semantics could be acquired through an embodied agent that can see and interact with the world BID22 , we propose to learn these representations from text. In particular
, we require the model to be able to explain the causal effects of actions by predicting natural language attributes about entities such as "LOCATION" and "TEMPERATURE". The model adjusts
its representations of actions based on errors it makes in predicting the resultant state changes to attributes. This textual simulation
allows us to model aspects of action causality that are not readily available in existing simulation environments. Indeed, most virtual environments
offer limited aspects of the world -with a primary focus on spatial relations BID22 BID1 BID29 . They leave out various other dimensions
of the world states that are implied by diverse everyday actions such as "dissolve" (change of "COMPOSITION") and "wash" (change of "CLEANLINESS"). Empirical results demonstrate that parametrizing
explicit action embeddings provides an inductive bias that allows the neural process network to learn more informative context representations for understanding and generating natural language procedural text. In addition, our model offers more interpretable
internal representations and can reason about the unstated causal effects of actions explained through natural language descriptors. Finally, we include a new dataset with fine-grained
annotations on state changes, to be shared publicly, to encourage future research in this direction.
We introduced the Neural Process Network for modeling a process of actions and their causal effects on entities by learning action transformations that change entity state representations.
The model maintains a recurrent memory structure to track entity states and is trained to predict the state changes that entities undergo.
Empirical results demonstrate that our model can learn the causal effects of action semantics in the cooking domain and track the dynamic state changes of entities, showing advantages over competitive baselines.
|
We propose a new recurrent memory architecture that can track common sense state changes of entities by simulating the causal effects of actions.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:38
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The resemblance between the methods used in studying quantum-many body physics and in machine learning has drawn considerable attention.
In particular, tensor networks (TNs) and deep learning architectures bear striking similarities to the extent that TNs can be used for machine learning.
Previous results used one-dimensional TNs in image recognition, showing limited scalability and requiring a high bond dimension.
In this work, we train two-dimensional hierarchical TNs to solve image recognition problems, using a training algorithm derived from the multipartite entanglement renormalization ansatz (MERA).
This approach overcomes scalability issues and implies novel mathematical connections among quantum many-body physics, quantum information theory, and machine learning.
While keeping the TN unitary in the training phase, TN states can be defined that optimally encode each class of the images into a quantum many-body state.
We study the quantum features of the TN states, including quantum entanglement and fidelity.
We suggest these quantities could be novel properties that characterize the image classes, as well as the machine learning tasks.
Our work could be further applied to identifying possible quantum properties of certain artificial intelligence methods.
Over the past years, we have witnessed a booming progress in applying quantum theories and technologies to realistic problems.
Paradigmatic examples include quantum simulators BID31 and quantum computers (Steane, 1998; BID16 BID2 aimed at tackling challenging problems that are beyond the capability of classical digital computations.
The power of these methods stems from the properties quantum many-body systems.Tensor networks (TNs) belong to the most powerful numerical tools for studying quantum manybody systems BID22 BID13 BID26 .
The main challenge lies in the exponential growth of the Hilbert space with the system size, making exact descriptions of such quantum states impossible even for systems as small as O(10^2) electrons.
To break the "exponential wall", TNs were suggested as an efficient ansatz that lowers the computational cost to a polynomial dependence on the system size.
Astonishing achievements have been made in studying, e.g. spins, bosons, fermions, anyons, gauge fields, and so on Cirac & Verstraete, 2009; BID23 BID26 BID26 .
TNs are also exploited to predict interactions that are used to design quantum simulators BID25 . As
TNs allowed the numerical treatment of difficult physical systems by providing layers of abstraction, deep learning achieved similar striking advances in automated feature extraction and pattern recognition BID19 . The
resemblance between the two approaches is beyond superficial. At
theoretical level, there is a mapping between deep learning and the renormalization group BID1 , which in turn connects holography and deep learning BID37 BID10 , and also allows studying network design from the perspective of quantum entanglement BID20 . In
turn, neural networks can represent quantum states BID3 BID4 BID15 BID11 . Most
recently, TNs have been applied to solve machine learning problems such as dimensionality reduction BID5 , handwriting recognition BID30 BID12 . Through
a feature mapping, an image described as classical information is transferred into a product state defined in a Hilbert space. Then these
states are acted onto a TN, giving an output vector that determines the classification of the images into a predefined number of classes. Going further
with this clue, it can be seen that when using a vector space for solving image recognition problems, one faces a similar "exponential wall" as in quantum many-body systems. For recognizing
an object in the real world, there exist infinite possibilities since the shapes and colors change, in principle, continuously. An image or a gray-scale
photo provides an approximation, where the total number of possibilities is lowered to 256^N per channel, with N describing the number of pixels, and it is assumed to be fixed for simplicity. Similar to the applications
in quantum physics, TNs show a promising way to lower such an exponentially large space to a polynomial one. This work contributes in two aspects. Firstly, we derive an efficient
quantum-inspired learning algorithm based on a hierarchical representation that is known as tree TN (TTN) (see, e.g., BID21 ). Compared with Refs. BID30 BID12
where a one-dimensional (1D) TN (called a matrix product state (MPS) (Östlund & Rommer, 1995) ) is used, a TTN better suits the two-dimensional (2D) nature of images. The algorithm is inspired by the multipartite
entanglement renormalization ansatz (MERA) approach BID35 BID36 BID7 BID9 , where the tensors in the TN are kept to be unitary during the training. We test the algorithm on both the MNIST (handwriting
recognition with binary images) and CIFAR (recognition of color images) databases and obtain accuracies comparable to the performance of convolutional neural networks. More importantly, the TN states can then be defined
that optimally encode each class of images as a quantum many-body state, which is akin to the study of a duality between probabilistic graphical models and TNs BID27 . We contrast the bond dimension and model complexity, with results indicating that a growing bond dimension overfits the data. We study the representations in the different layers of the hierarchical TN with t-SNE (BID32), and find that the level of abstraction changes the same way as in a deep convolutional neural network BID18 or a deep belief network BID14 , and that the highest level of the hierarchy allows for a clear separation of the classes. Finally, we show that the fidelities between each two
TN states from the two different image classes are low, and we calculate the entanglement entropy of each TN state, which gives an indication of the difficulty of each class.
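As a point of reference, a common pixel-wise feature map in this line of tensor-network classifiers, together with one tree coarse-graining layer, can be sketched as follows (the exact map and tensor shapes used in this work may differ):

```python
import numpy as np

def feature_map(image):
    """Map each pixel value in [0, 1] to a two-component local state, turning an
    N-pixel image into a product state of N two-level systems."""
    x = image.reshape(-1)
    return np.stack([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)], axis=1)

def ttn_layer(states, tensors):
    """One tree layer: contract each pair of neighbouring local states with a
    (d_out, d_in, d_in) tensor, halving the number of open indices."""
    out = []
    for k in range(0, len(states) - 1, 2):
        T = tensors[k // 2]
        out.append(np.einsum('oij,i,j->o', T, states[k], states[k + 1]))
    return out
```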
We continued the forays into using tensor networks for machine learning, focusing on hierarchical, two-dimensional tree tensor networks that we found a natural fit for image recognition problems.
This proved a scalable approach with high precision, and we can draw the following observations:
• The limit of the representation power (learnability) of the TTN model strongly depends on the input bond (physical indexes), and the virtual bond (geometrical indexes) determines how well the TTN approximates this limit.
• A hierarchical tensor network exhibits the same increasing level of abstraction as a deep convolutional neural network or a deep belief network.
• Fidelity can give us insight into how difficult it is to tell two classes apart.
• Entanglement entropy has the potential to characterize the difficulty of representing a class of problems.
In future work, we plan to use fidelity-based training in an unsupervised setting, apply the trained TTN to recover damaged or compressed images, and use entanglement entropy to characterize the accuracy.
|
This approach overcomes scalability issues and implies novel mathematical connections among quantum many-body physics, quantum information theory, and machine learning.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:380
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Predicting outcomes and planning interactions with the physical world are long-standing goals for machine learning.
A variety of such tasks involves continuous physical systems, which can be described by partial differential equations (PDEs) with many degrees of freedom.
Existing methods that aim to control the dynamics of such systems are typically limited to relatively short time frames or a small number of interaction parameters.
We present a novel hierarchical predictor-corrector scheme which enables neural networks to learn to understand and control complex nonlinear physical systems over long time frames.
We propose to split the problem into two distinct tasks: planning and control.
To this end, we introduce a predictor network that plans optimal trajectories and a control network that infers the corresponding control parameters.
Both stages are trained end-to-end using a differentiable PDE solver.
We demonstrate that our method successfully develops an understanding of complex physical systems and learns to control them for tasks involving PDEs such as the incompressible Navier-Stokes equations.
Intelligent systems that operate in the physical world must be able to perceive, predict, and interact with physical phenomena (Battaglia et al., 2013) .
In this work, we consider physical systems that can be characterized by partial differential equations (PDEs).
PDEs constitute the most fundamental description of evolving systems and are used to describe every physical theory, from quantum mechanics and general relativity to turbulent flows (Courant & Hilbert, 1962; Smith, 1985) .
We aim to endow artificial intelligent agents with the ability to direct the evolution of such systems via continuous controls.
Such optimal control problems have typically been addressed via iterative optimization.
Differentiable solvers and the adjoint method enable efficient optimization of high-dimensional systems (Toussaint et al., 2018; de Avila Belbute-Peres et al., 2018; Schenck & Fox, 2018) .
However, direct optimization through gradient descent (single shooting) at test time is resource-intensive and may be difficult to deploy in interactive settings.
More advanced methods exist, such as multiple shooting and collocation, but they commonly rely on modeling assumptions that limit their applicability, and still require computationally intensive iterative optimization at test time.
Iterative optimization methods are expensive because they have to start optimizing from scratch and typically require a large number of iterations to reach an optimum.
In many real-world control problems, however, agents have to repeatedly make decisions in specialized environments, and reaction times are limited to a fraction of a second.
This motivates the use of data-driven models such as deep neural networks, which combine short inference times with the capacity to build an internal representation of the environment.
We present a novel deep learning approach that can learn to represent solution manifolds for a given physical environment, and is orders of magnitude faster than iterative optimization techniques.
The core of our method is a hierarchical predictor-corrector scheme that temporally divides the problem into easier subproblems.
This enables us to combine models specialized to different time scales in order to control long sequences of complex high-dimensional systems.
We train our models using a differentiable PDE solver that can provide the agent with feedback of how interactions at any point in time affect the outcome.
Our models learn to represent manifolds containing a large number of solutions, and can thereby avoid local minima that can trap classic optimization techniques.
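At a very schematic level, the two-stage planning-and-control idea can be sketched as below; the network internals and the differentiable solver are hidden behind placeholder callables, and all names are illustrative assumptions rather than the paper's exact interfaces.

```python
def plan_hierarchically(predictor, s_start, s_goal, depth):
    """Recursively subdivide the time interval: the predictor proposes the
    midpoint state between two endpoint states, yielding a full trajectory."""
    if depth == 0:
        return [s_start, s_goal]
    s_mid = predictor(s_start, s_goal)
    left = plan_hierarchically(predictor, s_start, s_mid, depth - 1)
    right = plan_hierarchically(predictor, s_mid, s_goal, depth - 1)
    return left[:-1] + right              # avoid duplicating the midpoint

def execute(controller, solver, trajectory):
    """The control network infers the control for each planned transition and a
    (differentiable) PDE solver advances the simulated state with it."""
    state = trajectory[0]
    states = [state]
    for target in trajectory[1:]:
        u = controller(state, target)     # control parameters for this step
        state = solver(state, u)          # one simulation step
        states.append(state)
    return states
```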
We evaluate our method on a variety of control tasks in systems governed by advection-diffusion PDEs such as the Navier-Stokes equations.
We quantitatively evaluate the resulting sequences on how well they approximate the target state and how much force was exerted on the physical system.
Our method yields stable control for significantly longer time spans than alternative approaches.
We have demonstrated that deep learning models in conjunction with a differentiable physics solver can successfully predict the behavior of complex physical systems and learn to control them.
The introduction of a hierarchical predictor-corrector architecture allowed the model to learn to reconstruct long sequences by treating the physical behavior on different time scales separately.
We have shown that using a differentiable solver greatly benefits the quality of solutions since the networks can learn how their decisions will affect the future.
In our experiments, hierarchical inference schemes outperform traditional sequential agents because they can easily learn to plan ahead.
To model realistic environments, we have introduced observations to our pipeline which restrict the information available to the learning agent.
While the PDE solver still requires full state information to run the simulation, this restriction does not apply when the agent is deployed.
While we do not believe that learning approaches will replace iterative optimization, our method shows that it is possible to learn representations of solution manifolds for optimal control trajectories using data-driven approaches.
Fast inference is vital in time-critical applications and can also be used in conjunction with classical solvers to speed up convergence and ultimately produce better solutions.
|
We train a combination of neural networks to predict optimal trajectories for complex physical systems.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:381
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The ability of overparameterized deep networks to generalize well has been linked to the fact that stochastic gradient descent (SGD) finds solutions that lie in flat, wide minima in the training loss -- minima where the output of the network is resilient to small random noise added to its parameters.
So far this observation has been used to provide generalization guarantees only for neural networks whose parameters are either \textit{stochastic} or \textit{compressed}.
In this work, we present a general PAC-Bayesian framework that leverages this observation to provide a bound on the original network learned -- a network that is deterministic and uncompressed.
What enables us to do this is a key novelty in our approach: our framework allows us to show that if on training data, the interactions between the weight matrices satisfy certain conditions that imply a wide training loss minimum, these conditions themselves {\em generalize} to the interactions between the matrices on test data, thereby implying a wide test loss minimum.
We then apply our general framework in a setup where we assume that the pre-activation values of the network are not too small (although we assume this only on the training data).
In this setup, we provide a generalization guarantee for the original (deterministic, uncompressed) network, that does not scale with product of the spectral norms of the weight matrices -- a guarantee that would not have been possible with prior approaches.
Modern deep neural networks contain millions of parameters and are trained on relatively few samples.
Conventional wisdom in machine learning suggests that such models should massively overfit on the training data, as these models have the capacity to memorize even a randomly labeled dataset of similar size (Zhang et al., 2017; Neyshabur et al., 2015) .
Yet these models have achieved state-ofthe-art generalization error on many real-world tasks.
This observation has spurred an active line of research (Soudry et al., 2018; BID2 BID11 ) that has tried to understand what properties of stochastic gradient descent (SGD) training of deep networks allow these networks to generalize well. One particularly promising line of work in this area (Neyshabur et al., 2017; BID0 ) has been bounds that utilize the noise-resilience of deep networks on training data, i.e., how much the training loss of the network changes with noise injected into the parameters, or roughly, how wide the training loss minimum is.
While these have yielded generalization bounds that do not have a severe exponential dependence on depth (unlike other bounds that grow with the product of spectral norms of the weight matrices), these bounds are quite limited: they either apply to a stochastic version of the classifier (where the parameters are drawn from a distribution) or a compressed version of the classifier (where the parameters are modified and represented using fewer bits). In
this paper, we revisit the PAC-Bayesian analysis of deep networks in Neyshabur et al. (2017; and provide a general framework that allows one to use noise-resilience of the deep network on training data to provide a bound on the original deterministic and uncompressed network. We
achieve this by arguing that if, on the training data, the interactions between the 'activated weight matrices' (weight matrices where the weights incoming from/outgoing to inactive units are zeroed out) satisfy certain conditions which result in a wide training loss minimum, these conditions themselves generalize to the weight matrix interactions on the test data. After presenting this general PAC-Bayesian framework, we specialize it to the case of deep ReLU networks, showing that we can provide a generalization bound that accomplishes two goals simultaneously: i)
it applies to the original network and ii
) it does not scale exponentially with depth in terms of the products of the spectral norms of the weight matrices; instead our bound scales with more meaningful terms that capture the interactions between the weight matrices and do not have such a severe dependence on depth in practice. We
note that all but one of these terms are indeed quite small on networks in practice. However
, one particularly (empirically) large term that we use is the reciprocal of the magnitude of the network pre-activations on the training data (and so our bound would be small only in the scenario where the pre-activations are not too small). We emphasize
that this drawback is more of a limitation in how we characterize noise-resilience through the specific conditions we chose for the ReLU network, rather than a drawback in our PAC-Bayesian framework itself. Our hope is
that, since our technique is quite general and flexible, by carefully identifying the right set of conditions, in the future, one might be able to derive a similar generalization guarantee that is smaller in practice.To the best of our knowledge, our approach of generalizing noise-resilience of deep networks from training data to test data in order to derive a bound on the original network that does not scale with products of spectral norms, has neither been considered nor accomplished so far, even in limited situations.
|
We provide a PAC-Bayes based generalization guarantee for uncompressed, deterministic deep networks by generalizing noise-resilience of the network on the training data to the test data.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:382
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Training neural networks to be certifiably robust is critical to ensure their safety against adversarial attacks.
However, it is currently very difficult to train a neural network that is both accurate and certifiably robust.
In this work we take a step towards addressing this challenge.
We prove that for every continuous function $f$, there exists a network $n$ such that:
(i) $n$ approximates $f$ arbitrarily close, and
(ii) simple interval bound propagation of a region $B$ through $n$ yields a result that is arbitrarily close to the optimal output of $f$ on $B$.
Our result can be seen as a Universal Approximation Theorem for interval-certified ReLU networks.
To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks.
Much recent work has shown that neural networks can be fooled into misclassifying adversarial examples (Szegedy et al., 2014) , inputs which are imperceptibly different from those that the neural network classifies correctly.
Initial work on defending against adversarial examples revolved around training networks to be empirically robust, usually by including adversarial examples found with various attacks into the training dataset (Gu and Rigazio, 2015; Papernot et al., 2016; Zheng et al., 2016; Athalye et al., 2018; Eykholt et al., 2018; Moosavi-Dezfooli et al., 2017; Xiao et al., 2018) .
However, while empirical robustness can be practically useful, it does not provide safety guarantees.
As a result, much recent research has focused on verifying that a network is certifiably robust, typically by employing methods based on mixed integer linear programming (Tjeng et al., 2019) , SMT solvers (Katz et al., 2017) , semidefinite programming (Raghunathan et al., 2018a) , duality (Wong and Kolter, 2018; Dvijotham et al., 2018b) , and linear relaxations (Gehr et al., 2018; Weng et al., 2018; Wang et al., 2018b; Zhang et al., 2018; Singh et al., 2018; Salman et al., 2019) .
Because the certification rates were far from satisfactory, specific training methods were recently developed which produce networks that are certifiably robust: Mirman et al. (2018) ; Raghunathan et al. (2018b) ; Wang et al. (2018a) ; Wong and Kolter (2018) ; Wong et al. (2018) ; Gowal et al. (2018) train the network with standard optimization applied to an over-approximation of the network behavior on a given input region (the region is created around the concrete input point).
These techniques aim to discover specific weights which facilitate verification.
There is a tradeoff between the degree of the over-approximation used and the speed of training and certification.
Recently, (Cohen et al., 2019b) proposed a statistical approach to certification, which unlike the non-probabilistic methods discussed above, creates a probabilistic classifier that comes with probabilistic guarantees.
So far, some of the best non-probabilistic results achieved on the popular MNIST (Lecun et al., 1998) and CIFAR10 (Krizhevsky, 2009 ) datasets have been obtained with the simple Interval relaxation (Gowal et al., 2018; Mirman et al., 2019) , which scales well at both training and verification time.
Despite this progress, there are still substantial gaps between known standard accuracy, experimental robustness, and certified robustness.
For example, for CIFAR10, the best reported certified robustness is 32.04% with an accuracy of 49.49% when using a fairly modest l_∞ region with radius 8/255 (Gowal et al., 2018).
The state-of-the-art non-robust accuracy for this dataset is > 95% with experimental robustness > 50%.
Given the size of this gap, a key question then is: can certified training ever succeed or is there a fundamental limit?
In this paper we take a step in answering this question by proving a result parallel to the Universal Approximation Theorem (Cybenko, 1989; Hornik et al., 1989) .
We prove that for any continuous function f defined on a compact domain Γ ⊆ R m and for any desired level of accuracy δ, there exists a ReLU neural network n which can certifiably approximate f up to δ using interval bound propagation.
As an interval is a fairly imprecise relaxation, our result directly applies to more precise convex relaxations (e.g., Zhang et al. (2018); Singh et al. (2019) ).
Theorem 1.1 (Universal Interval-Certified Approximation, Figure 1 ).
Let Γ ⊂ R^m be a compact set and let f : Γ → R be a continuous function.
For all δ > 0, there exists a ReLU network n such that for all boxes [a, b] in Γ defined by points a, b ∈ Γ where a_k ≤ b_k for all k, the propagation of the box [a, b] using interval analysis through the network n, denoted n([a, b]), approximates the set
We recover the classical universal approximation theorem (|f (x) − n(x)| ≤ δ for all x ∈ Γ) by considering boxes [a, b] describing points (x = a = b).
Note that here the lower bound is not [l, u] as the network n is an approximation of f .
Because interval analysis propagates boxes, the theorem naturally handles l_∞ norm-bound perturbations to the input.
Other l_p norms can be handled by covering the l_p ball with boxes.
The theorem can be extended easily to functions f : Γ → R^k by applying the theorem component-wise.
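For reference, the box propagation n([a, b]) in the theorem corresponds to standard interval bound propagation through affine and ReLU layers; a minimal sketch, with illustrative weights and input box, is:

```python
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Propagate an interval [lo, hi] through the affine map x -> W x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def ibp_network(lo, hi, layers):
    """layers: list of (W, b) pairs; a ReLU is applied between affine layers."""
    for i, (W, b) in enumerate(layers):
        lo, hi = ibp_affine(lo, hi, W, b)
        if i < len(layers) - 1:                    # no ReLU after the last layer
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi                                  # a box guaranteed to contain n([a, b])
```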
Practical meaning of theorem The practical meaning of this theorem is as follows: if we train a neural network n on a given training data set (e.g., CIFAR10) and we are satisfied with the properties of n (e.g., high accuracy), then because n is a continuous function, the theorem tells us that there exists another network n' which is as accurate as n and as certifiable with interval analysis as n is with a complete verifier.
This means that if we fail to find such an n', then either the network did not possess the required capacity or the optimizer was unsuccessful.
Focus on the existence of a network We note that we do not provide a method for training a certified ReLU network -even though our method is constructive, we aim to answer an existential question and thus we focus on proving that a given network exists.
Interesting future work items would be to study the requirements on the size of this network and the inherent hardness of finding it with standard optimization methods.
Universal approximation is insufficient We now discuss why classical universal approximation is insufficient for establishing our result.
While classical universal approximation theorems state that neural networks can approximate a large class of functions f , unlike our result, they do not state that robustness of the approximation n of f is actually certified with a scalable proof method (e.g., interval bound propagation).
If one uses a non scalable complete verifier instead, then the standard Universal approximation theorem is sufficient.
To demonstrate this point, consider the function f : R → R (Figure 2b) mapping all x ≤ 0 to 1, all x ≥ 1 to 0 and all 0 < x < 1 to 1 − x, and two ReLU networks n_1 (Figure 2a) and n_2 (Figure 2c) perfectly approximating f, that is n_1(x) = f(x) = n_2(x) for all x.
For δ = 1/4, the interval certification that n_1 maps all
However, interval certification succeeds for n_2, because n_2([0, 1]) = [0, 1].
To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks.
We proved that for all real valued continuous functions f on compact sets, there exists a ReLU network n approximating f arbitrarily well with the interval abstraction.
This means that for arbitrary input sets, analysis using the interval relaxation yields an over-approximation arbitrarily close to the smallest interval containing all possible outputs.
Our theorem affirmatively answers the open question, whether the Universal Approximation Theorem generalizes to Interval analysis.
Our results address the question of whether the interval abstraction is expressive enough to analyse networks approximating interesting functions f .
This is of practical importance because interval analysis is the most scalable non-trivial analysis.
|
We prove that for a large class of functions f there exists an interval certified robust network approximating f up to arbitrary precision.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:383
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this paper, we propose an efficient framework to accelerate convolutional neural networks.
We utilize two types of acceleration methods: pruning and hints.
Pruning can reduce model size by removing channels of layers.
Hints can improve the performance of student model by transferring knowledge from teacher model.
We demonstrate that pruning and hints are complementary to each other.
On one hand, hints can benefit pruning by maintaining similar feature representations.
On the other hand, the model pruned from teacher networks is a good initialization for student model, which increases the transferability between two networks.
Our approach performs the pruning stage and the hints stage iteratively to further improve the performance.
Furthermore, we propose an algorithm to reconstruct the parameters of the hints layer and make the pruned model more suitable for hints.
Experiments were conducted on various tasks including classification and pose estimation.
Results on CIFAR-10, ImageNet and COCO demonstrate the generalization and superiority of our framework.
In recent years, convolutional neural networks (CNN) have been applied in many computer vision tasks, e.g. classification BID21 ; BID6 , objects detection BID8 ; BID30 , and pose estimation BID25 .
The success of CNN drives the development of computer vision.
However, restricted by large model size as well as computation complexity, many CNN models are difficult to be put into practical use directly.
To solve the problem, more and more researches have focused on accelerating models without degradation of performance.Pruning and knowledge distillation are two of mainstream methods in model acceleration.
The goal of pruning is to remove less important parameters while maintaining similar performance of the original model.
Despite pruning methods' superiority, we notice that for many pruning methods, as the number of pruned channels increases, the performance of the pruned model drops rapidly.
Knowledge distillation describes teacher-student framework: use high-level representations from teacher model to supervise student model.
Hints method BID31 shares a similar idea of knowledge distillation, where the feature map of teacher model is used as high-level representations.
According to BID36 , the student network can achieve better performance in knowledge transfer if its initialization can produce features similar to those of the teacher model.
Inspired by this work, we propose that the pruned model outputs features similar to the original model's and provides a good initialization for the student model, which does help distillation.
And on the other hand, hints can help reconstruct parameters and alleviate degradation of performance caused by pruning operation.
FIG0 illustrates the motivation of our framework.
Based on this analysis, we propose an algorithm: we perform the pruning and hints operations iteratively.
And for each iteration, we conduct a reconstructing step between pruning and hints operations.
And we demonstrate that this reconstructing operation can provide a better initialization for the student model and promote the hints step (see FIG1 ).
We name our method the PWH Framework.
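As a rough sketch of the hints step, the pruned student's intermediate feature map can be regressed onto the teacher's; the module names, the choice of intermediate layer, and the 1x1 adaptation layer are illustrative assumptions, and the pruning criterion itself is not shown.

```python
import torch.nn as nn
import torch.nn.functional as F

class HintsLoss(nn.Module):
    """L2 regression of an intermediate student feature map onto the
    corresponding teacher feature map."""
    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        # a 1x1 conv adapts the pruned student's channel count to the teacher's
        self.adapt = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, student_feat, teacher_feat):
        return F.mse_loss(self.adapt(student_feat), teacher_feat.detach())

# Sketch of one PWH iteration, with pruning and reconstruction as placeholders:
#   student = prune_channels(teacher_copy)          # pruning step (not shown)
#   reconstruct_hint_layer(student, teacher)        # reconstructing step (not shown)
#   loss = task_loss(student(x), y) + lam * hints(student_feat, teacher_feat)
```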
To our best knowledge, we are the first to combine pruning and hints together as a framework. Our framework can be applied to different vision tasks.
Experiments on CIFAR-10 Krizhevsky & Hinton (2009) , ImageNet Deng et al. (2016) and COCO Lin et al. (2014) demonstrate the effectiveness of our framework.
(FIG0 caption: Hints can help the pruned model reconstruct parameters, and the network pruned from the teacher model can provide a good initialization for the student model in hints learning.)
Furthermore, our method is a framework where different pruning and hints methods can be included.
To summarize, the contributions of this paper are as follows: (1) We analyze the properties of pruning and hints methods and show that these two model acceleration methods are complementary to each other.
(2) To our best knowledge, this is the first work that combines pruning and hints.
Our framework is easy to be extended to different pruning and hints methods.
(3) Sufficient experiments show the effectiveness of our framework on different datasets for different tasks.
In this paper, we propose PWH Framework, an iterative framework for model acceleration.
Our framework takes the advantage of both pruning and hints methods.
To our best knowledge, this is the first work that combine these two model acceleration methods.
Furthermore, we conduct reconstructing operation between hints and pruning steps as a cascader.
We analyze the properties of these two methods and show they are complementary to each other: pruning provides a better initialization for the student model and the hints method helps to adjust the parameters of the pruned model.
Experiments on CIFAR-10, ImageNet and COCO datasets for classification and pose estimation tasks demonstrate the superiority of PWH Framework.
|
This is a work aiming for boosting all the existing pruning and mimic method.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:384
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
For typical sequence prediction problems such as language generation, maximum likelihood estimation (MLE) has commonly been adopted as it encourages the predicted sequence most consistent with the ground-truth sequence to have the highest probability of occurring.
However, MLE focuses on once-to-all matching between the predicted sequence and gold-standard, consequently treating all incorrect predictions as being equally incorrect.
We refer to this drawback as {\it negative diversity ignorance} in this paper.
Treating all incorrect predictions as equal unfairly downplays the nuance of these sequences' detailed token-wise structure.
To counteract this, we augment the MLE loss by introducing an extra Kullback--Leibler divergence term derived by comparing a data-dependent Gaussian prior and the detailed training prediction.
The proposed data-dependent Gaussian prior objective (D2GPo) is defined over a prior topological order of tokens and is poles apart from the data-independent Gaussian prior (L2 regularization) commonly adopted in smoothing the training of MLE.
Experimental results show that the proposed method makes effective use of a more detailed prior in the data and has improved performance in typical language generation tasks, including supervised and unsupervised machine translation, text summarization, storytelling, and image captioning.
Language understanding is the crown jewel of artificial intelligence.
As the well-known dictum by Richard Feynman states, "what I cannot create, I do not understand."
Language generation therefore reflects the level of development of language understanding.
Language generation models have seen remarkable advances in recent years, especially with the rapid development of deep neural networks (DNNs).
There are several models typically used in language generation, namely sequenceto-sequence (seq2seq) models (Kalchbrenner & Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015; Luong et al., 2015; Vaswani et al., 2017) , generative adversarial networks (GANs) (Goodfellow et al., 2014) , variational autoencoders (Kingma & Welling, 2013) , and auto-regressive networks (Larochelle & Murray, 2011; Van Oord et al., 2016) .
Language generation is usually modeled as a sequence prediction task, which adopts maximum likelihood estimation (MLE) as the standard training criterion (i.e., objective).
MLE has had much success owing to its intuitiveness and flexibility.
However, sequence prediction has encountered the following series of problems due to MLE.
• Exposure bias: The model is not exposed to the full range of errors during training.
• Loss mismatch: During training, we maximize the log-likelihood, whereas, during inference, the model is evaluated by a different metric such as BLEU or ROUGE.
• Generation diversity: The generations are dull, generic (Sordoni et al., 2015; Serban et al., 2016; Li et al., 2016a) , repetitive, and short-sighted (Li et al., 2016b ).
• Negative diversity ignorance: MLE fails to assign proper scores to different incorrect model outputs, which means that all incorrect outputs are treated equally during training.
A variety of work has alleviated the above MLE training shortcomings apart from negative diversity ignorance.
Negative diversity ignorance is a result of unfairly downplaying the nuance of sequences' detailed token-wise structure.
When the MLE objective compares its predicted and ground-truth sequences, it takes a once-for-all matching strategy; the predicted sequence is given a binary label, either correct or incorrect.
However, these incorrect training predictions may be quite diverse and letting the model be aware of which incorrect predictions are more incorrect or less incorrect than others may more effectively guide model training.
For instance, an armchair might be mistaken for a deckchair, but it should usually not be mistaken for a mushroom.
To alleviate the issue of the negative diversity ignorance, we add an extra Gaussian prior objective to augment the current MLE training with an extra Kullback-Leibler divergence loss term.
The extra loss is computed by comparing two probability distributions, the first of which is from the detailed model training prediction and the second of which is from a ground-truth token-wise distribution and is defined as a kind of data-dependent Gaussian prior distribution.
The proposed data-dependent Gaussian prior objective (D2GPo) is then injected into the final loss through a KL divergence term.
The D2GPo is poles apart from the commonly adopted data-independent Gaussian prior (L2 regularization) for the purpose of smoothing the training of MLE, which is also directly added into the MLE loss.
Experimental results show that the proposed method makes effective use of a more detailed prior in the data and improves the performance of typical language generation tasks, including supervised and unsupervised machine translation, text summarization, storytelling, and image captioning.
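Schematically, the extra objective can be combined with MLE as sketched below; the distance function, temperature, direction of the KL term, and weighting are assumptions for illustration rather than the paper's exact formulation.

```python
import numpy as np

def d2gpo_prior(gold_idx, token_distance, sigma=1.0):
    """Soft target distribution: tokens close to the gold token (under some
    embedding-based distance) receive higher prior probability."""
    d = token_distance[gold_idx]                   # distance of every token to gold
    logits = -(d ** 2) / (2.0 * sigma ** 2)
    p = np.exp(logits - logits.max())
    return p / p.sum()

def d2gpo_loss(log_probs, gold_idx, token_distance, lam=0.1):
    """MLE term plus a KL term between the data-dependent prior and the model."""
    nll = -log_probs[gold_idx]
    q = d2gpo_prior(gold_idx, token_distance)
    kl = np.sum(q * (np.log(q + 1e-12) - log_probs))   # KL(q || p_model)
    return nll + lam * kl
```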
This work proposed a data-dependent Gaussian prior objective (D2GPo) for language generation tasks with the hope of alleviating the difficulty of negative diversity ignorance.
D2GPo imposes the prior from (linguistic) data over the sequence prediction models.
D2GPo outperformed strong baselines in experiments on classic language generation tasks (i.e., neural machine translation, text summarization, storytelling, and image captioning tasks).
|
We introduce an extra data-dependent Gaussian prior objective to augment the current MLE training, which is designed to capture the prior knowledge in the ground-truth data.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:385
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We propose an interactive classification approach for natural language queries.
Instead of classifying given the natural language query only, we ask the user for additional information using a sequence of binary and multiple-choice questions.
At each turn, we use a policy controller to decide whether to present a question or provide the user with the final answer, and select the best question to ask by maximizing the system information gain.
Our formulation enables bootstrapping the system without any interaction data, instead relying on non-interactive crowdsourcing annotation tasks.
Our evaluation shows the interaction helps the system increase its accuracy and handle ambiguous queries, while our approach effectively balances the number of questions and the final accuracy.
Responding to natural language queries through simple, single-step classification has been studied extensively in many applications, including user intent prediction (Qu et al., 2019) and information retrieval (Kang & Kim, 2003; Rose & Levinson, 2004).
Typical methods rely on a single user input to produce an output, missing an opportunity to interact with the user to reduce ambiguity and improve the final prediction.
For example, users may under-specify a request due to incomplete understanding of the domain; or the system may fail to correctly interpret the nuances of the input query.
In both cases, a low quality input could be mitigated by further interaction with the user.
In this paper we propose a simple but effective interaction paradigm that consists of a sequence of binary and multiple choice questions allowing the system to ask the user for more information.
Figure 1 illustrates the types of interaction supported by this method, showcasing the opportunity for clarification while avoiding much of the complexity involved in unrestricted natural language interactions.
Following a natural language query from the user, our system then decides between posing another question to obtain more information or finalizing the current prediction.
Unlike previous work which assumes access to full interaction data (Hu et al., 2018; Rao & Daumé III, 2018), we are interested in bootstrapping an interaction system using simple and relatively little annotation effort.
This is particularly important in real-world applications, such as in virtual assistants, where the supported classification labels are subject to change and thereby require a lot of re-annotation.
We propose an effective approach designed for interaction efficiency and simple system bootstrapping.
Our approach adopts a Bayesian decomposition of the posterior distributions over classification labels and user's responses through the interaction process.
Due to the decomposition, we can efficiently compute and select the next question that provides the maximal expected information based on the posteriors.
To further balance the potential increase in accuracy with the cost of asking additional questions, we train a policy controller to decide whether to ask additional questions or return a final prediction.
Our method also enables separately collecting natural language annotations to model the distributions of class labels and user responses.
Specifically, we crowdsource initial natural language queries and question-answer pairs for each class label, alleviating the need for Wizard-of-Oz style dialog annotations (Kelley, 1984; Wen et al., 2016) .
Furthermore, we leverage the natural language descriptions of class labels, questions and answers to help estimate their correlation and reduce the need for heavy annotation.
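A minimal sketch of the information-gain question selection under this decomposition is given below; the array shapes, names, and the absence of a stopping policy are illustrative simplifications.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def best_question(label_posterior, answer_given_label):
    """label_posterior: (n_labels,). answer_given_label: (n_questions, n_labels,
    n_answers), i.e. p(answer | label) for every candidate question."""
    gains = []
    for q in range(answer_given_label.shape[0]):
        p_ans = answer_given_label[q].T @ label_posterior            # p(answer)
        expected_h = 0.0
        for a, pa in enumerate(p_ans):
            if pa <= 0:
                continue
            post = answer_given_label[q, :, a] * label_posterior     # Bayes update
            post /= post.sum()
            expected_h += pa * entropy(post)
        gains.append(entropy(label_posterior) - expected_h)          # expected info gain
    return int(np.argmax(gains))
```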
(Example of a final system response: "Got it! The article below might be helpful:")
We propose an approach for interactive classification, where users can provide under-specified natural language queries and the system can inquire missing information through a sequence of simple binary or multi-choice questions.
Our method uses information theory to select the best question at every turn, and a lightweight policy to efficiently control the interaction.
We show how we can bootstrap the system without any interaction data.
We demonstrate the effectiveness of our approach on two tasks with different characteristics.
Our results show that our approach outperforms multiple baselines by a large margin.
In addition, we provide a new annotated dataset for future work on bootstrapping interactive classification systems.
|
We propose an interactive approach for classifying natural language queries by asking users for additional information using information gain and a reinforcement learning policy controller.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:386
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Convolutional neural networks (CNNs) have achieved state of the art performance on recognizing and representing audio, images, videos and 3D volumes; that is, domains where the input can be characterized by a regular graph structure.
However, generalizing CNNs to irregular domains like 3D meshes is challenging.
Additionally, training data for 3D meshes is often limited.
In this work, we generalize convolutional autoencoders to mesh surfaces.
We perform spectral decomposition of meshes and apply convolutions directly in frequency space.
In addition, we use max pooling and introduce upsampling within the network to represent meshes in a low dimensional space.
We construct a complex dataset of 20,466 high resolution meshes with extreme facial expressions and encode it using our Convolutional Mesh Autoencoder.
Despite limited training data, our method outperforms state-of-the-art PCA models of faces with 50% lower error, while using 75% fewer parameters.
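As a rough illustration of the "convolutions in frequency space" mentioned above, the sketch below implements a generic Chebyshev-style spectral graph convolution over a mesh adjacency matrix, in the spirit of the fast localized filters this line of work builds on. The normalization, the lmax approximation, and all shapes are assumptions made for illustration; the paper's actual layer definitions, sampling operators, and autoencoder architecture are not reproduced here.

```python
import numpy as np
import scipy.sparse as sp

def scaled_laplacian(adj):
    """Scaled normalized Laplacian L_tilde = 2L/lmax - I for a mesh adjacency matrix."""
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    D = sp.diags(d_inv_sqrt)
    L = sp.eye(adj.shape[0]) - D @ adj @ D
    lmax = 2.0  # common approximation for the largest eigenvalue of the normalized Laplacian
    return (2.0 / lmax) * L - sp.eye(adj.shape[0])

def chebyshev_conv(x, adj, weights):
    """Spectral convolution with Chebyshev polynomial filters.

    x:       (num_vertices, in_channels) vertex features (e.g. 3D coordinates).
    adj:     sparse (num_vertices, num_vertices) mesh adjacency matrix.
    weights: (K, in_channels, out_channels) filter coefficients for orders 0..K-1.
    """
    L = scaled_laplacian(adj)
    K = weights.shape[0]
    t_prev, t_curr = x, L @ x                  # T_0(L)x = x, T_1(L)x = Lx
    out = t_prev @ weights[0]
    if K > 1:
        out += t_curr @ weights[1]
    for k in range(2, K):
        t_next = 2.0 * (L @ t_curr) - t_prev   # Chebyshev recurrence
        out += t_next @ weights[k]
        t_prev, t_curr = t_curr, t_next
    return out
```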
Convolutional neural networks BID27 have achieved state of the art performance in a large number of problems in computer vision BID26 BID22 , natural language processing BID32 and speech processing BID20 .
In recent years, CNNs have also emerged as rich models for generating both images BID18 and audio .
These successes may be attributed to the multi-scale hierarchical structure of CNNs that allows them to learn translational-invariant localized features.
Since the learned filters are shared across the global domain, the number of filter parameters is independent of the domain size.
We refer the reader to BID19 for a comprehensive overview of deep learning methods and the recent developments in the field. Despite the recent success, CNNs have mostly been successful in Euclidean domains with grid-based structured data.
In particular, most applications of CNNs deal with regular data structures such as images, videos, text and audio, while the generalization of CNNs to irregular structures like graphs and meshes is not trivial.
Extending CNNs to graph structures and meshes has only recently drawn significant attention BID8 BID14 .
Following the work of BID14 on generalizing CNNs to graphs using fast Chebyshev filters, we introduce a convolutional mesh autoencoder architecture for realistically representing high-dimensional meshes of 3D human faces and heads. The human face is highly variable in shape as it is affected by many factors such as age, gender, ethnicity etc.
The face also deforms significantly with expressions.
The existing state of the art 3D face representations mostly use linear transformations BID39 BID29 BID40 or higher-order tensor generalizations BID43 BID9 .
While these linear models achieve state of the art results in terms of realistic appearance and Euclidean reconstruction error, we show that CNNs can perform much better at capturing highly non-linear extreme facial expressions with many fewer model parameters. One challenge of training CNNs on 3D facial data is the limited size of current datasets.
Here we demonstrate that, since these networks have fewer parameters than traditional linear models, they can be effectively learned with limited data.
This reduction in parameters is attributed to the locally invariant convolutional filters that can be shared on the surface of the mesh.
Recent work has exploited thousands of 3D scans and 4D scan sequences for learning detailed models of 3D faces BID13 BID46 BID37 BID11 .
The availability of this data enables us to learn a rich non-linear representation of 3D face meshes that cannot be captured easily by existing linear models. In summary, our work introduces a convolutional mesh autoencoder suitable for 3D mesh processing. Our main contributions are:
• We introduce a mesh convolutional autoencoder consisting of mesh downsampling and mesh upsampling layers with fast localized convolutional filters defined on the mesh surface.
• We use the mesh autoencoder to accurately represent 3D faces in a low-dimensional latent space, performing 50% better than a PCA model that is used in state-of-the-art methods BID39 for face representation.
• Our autoencoder uses up to 75% fewer parameters than linear PCA models, while being more accurate on the reconstruction error.
• We provide 20,466 frames of highly detailed and complex 3D meshes from 12 different subjects for a range of extreme facial expressions, along with our code for research purposes. Our data and code is located at http://withheld.for.review.
This work takes a step towards the application of CNNs to problems in graphics involving 3D meshes. Key aspects of such problems are the limited availability of training data and the need for realism. Our work addresses these issues and provides a new tool for 3D mesh modeling.
While our convolutional Mesh Autoencoder leads to a representation that generalizes better for unseen 3D faces than PCA with much fewer parameters, our model has several limitations.
Our network is restricted to learning face representations for a fixed topology, i.e., all our data samples need to have the same adjacency matrix, A. The mesh sampling layers are also based on this fixed adjacency matrix A, which defines only the edge connections.
The adjacency matrix does not take into account the vertex positions, thus affecting the performance of our sampling operations.
In the future, we would like to incorporate this information into our learning framework.
Table 5: Quantitative evaluation of the extrapolation experiment, comparing the Mesh Autoencoder, PCA, and FLAME BID29 .
The training set consists of the rest of the expressions.
Mean error is of the form [µ ± σ] with mean Euclidean distance µ and standard deviation σ.
The median error and the number of frames in each expression sequence are also shown.
All errors are in millimeters (mm).
The amount of data for high resolution faces is very limited. We believe that generating more of such data with high variability between faces would improve the performance of Mesh Autoencoders for 3D face representations. The data scarcity also limits our ability to learn models that can be trained for superior performance at higher dimensional latent space. The data scarcity also produces noise in some reconstructions.
We have introduced a generalization of convolutional autoencoders to mesh surfaces with mesh downsampling and upsampling layers combined with fast localized convolutional filters in spectral space.
The locally invariant filters that are shared across the surface of the mesh significantly reduce the number of filter parameters in the network.
While the autoencoder is applicable to any class of mesh objects, we evaluated its quality on a dataset of realistic extreme facial expressions.
Table 6: Comparison of FLAME and FLAME++. FLAME++ is obtained by replacing the expression model of FLAME with our mesh autoencoder. All errors are in millimeters (mm).
The convolutional filters capture a lot of surface details that are generally missed in linear models like PCA, while using 75% fewer parameters. Our Mesh Autoencoder outperforms the linear PCA model by 50% on interpolation experiments and generalizes better on completely unseen facial expressions. Face models are used in a large number of applications in computer animations, visual avatars and interactions. In recent years, a lot of focus has been given to capturing highly detailed static and dynamic facial expressions. This work introduces a direction in modeling these high dimensional face meshes that can be useful in a range of computer graphics applications.
|
Convolutional autoencoders generalized to mesh surfaces for encoding and reconstructing extreme 3D facial expressions.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:387
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Computing distances between examples is at the core of many learning algorithms for time series.
Consequently, a great deal of work has gone into designing effective time series distance measures.
We present Jiffy, a simple and scalable distance metric for multivariate time series.
Our approach is to reframe the task as a representation learning problem---rather than design an elaborate distance function, we use a CNN to learn an embedding such that the Euclidean distance is effective.
By aggressively max-pooling and downsampling, we are able to construct this embedding using a highly compact neural network.
Experiments on a diverse set of multivariate time series datasets show that our approach consistently outperforms existing methods.
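A minimal PyTorch sketch of the overall recipe in the abstract: a small convolutional network with aggressive max-pooling produces a compact embedding, and classification is then done with plain Euclidean nearest neighbours in that space. Layer sizes and names are illustrative guesses, not Jiffy's exact architecture or training loss.

```python
import torch
import torch.nn as nn

class TimeSeriesEmbedder(nn.Module):
    """Toy convolutional embedder: conv -> aggressive max-pool -> small FC embedding."""

    def __init__(self, num_variables, embed_dim=40):
        super().__init__()
        self.conv = nn.Conv1d(num_variables, 16, kernel_size=8, padding=4)
        self.pool = nn.AdaptiveMaxPool1d(4)      # heavy downsampling along time
        self.fc = nn.Linear(16 * 4, embed_dim)

    def forward(self, x):
        # x: (batch, num_variables, time_steps)
        h = torch.relu(self.conv(x))
        h = self.pool(h).flatten(1)
        return self.fc(h)

def nearest_neighbor_predict(model, queries, support_x, support_y):
    """1-NN classification in the learned embedding space with Euclidean distance."""
    with torch.no_grad():
        q = model(queries)                        # (Q, embed_dim)
        s = model(support_x)                      # (N, embed_dim)
        dists = torch.cdist(q, s)                 # pairwise Euclidean distances
        return support_y[dists.argmin(dim=1)]
```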
Measuring distances between examples is a fundamental component of many classification, clustering, segmentation and anomaly detection algorithms for time series BID38 BID43 BID13 .
Because the distance measure used can have a significant effect on the quality of the results, there has been a great deal of work developing effective time series distance measures BID18 BID28 BID1 BID15 .
Historically, most of these measures have been hand-crafted.
However, recent work has shown that a learning approach can often perform better than traditional techniques BID16 BID33 BID9 .
We introduce a metric learning model for multivariate time series. Specifically, by learning to embed time series in Euclidean space, we obtain a metric that is both highly effective and simple to implement using modern machine learning libraries. Unlike many other deep metric learning approaches for time series, we use a convolutional, rather than a recurrent, neural network to construct the embedding. This choice, in combination with aggressive max-pooling and downsampling, results in a compact, accurate network.
Using a convolutional neural network for metric learning per se is not a novel idea BID35 BID45 ; however, time series present a set of challenges not seen together in other domains, and how best to embed them is far from obvious. In particular, time series suffer from:
1. A lack of labeled data. Unlike text or images, time series cannot typically be annotated post-hoc by humans. This has given rise to efforts at unsupervised labeling BID4 , and is evidenced by the small size of most labeled time series datasets. Of the 85 datasets in the UCR archive BID10 , for example, the largest dataset has fewer than 17000 examples, and many have only a few hundred.
2. A lack of large corpora. In addition to the difficulty of obtaining labels, most researchers have no means of gathering even unlabeled time series at the same scale as images, videos, or text. Even the largest time series corpora, such as those on Physiobank BID19 , are tiny compared to the virtually limitless text, image, and video data available on the web.
3. Extraneous data. There is no guarantee that the beginning and end of a time series correspond to the beginning and end of any meaningful phenomenon. I.e., examples of the class or pattern of interest may take place in only a small interval within a much longer time series. The rest of the time series may be noise or transient phenomena between meaningful events BID37 BID21 .
4. Need for high speed. One consequence of the presence of extraneous data is that many time series algorithms compute distances using every window of data within a time series BID34 BID4 BID37 . A time series of length T has O(T) windows of a given length, so it is essential that the operations done at each window be efficient.
As a result of these challenges, an effective time series distance metric must exhibit the following properties:
• Efficiency: Distance measurement must be fast, in terms of both training time and inference time.
• Simplicity: As evidenced by the continued dominance of the Dynamic Time Warping (DTW) distance BID42 in the presence of more accurate but more complicated rivals, a distance measure must be simple to understand and implement.
• Accuracy: Given a labeled dataset, the metric should yield a smaller distance between similarly labeled time series. This behavior should hold even for small training sets.
Our primary contribution is a time series metric learning method, Jiffy, that exhibits all of these properties: it is fast at both training and inference time, simple to understand and implement, and consistently outperforms existing methods across a variety of datasets.
We introduce the problem statement and the requisite definitions in Section 2. We summarize existing state-of-the-art approaches (both neural and non-neural) in Section 3 and go on to detail our own approach in Section 4. We then present our results in Section 5. The paper concludes with implications of our work and avenues for further research.
We present Jiffy, a simple and efficient metric learning approach to measuring multivariate time series similarity.
We show that our method learns a metric that leads to consistent and accurate classification across a diverse range of multivariate time series.
Jiffy's resilience to hyperparameter choices and consistent performance across domains provide strong evidence for its utility on a wide range of time series datasets.
Future work includes the extension of this approach to multi-label classification and unsupervised learning.
There is also potential to further increase Jiffy's speed by replacing the fully connected layer with a structured BID6 or binarized BID39
|
Jiffy is a convolutional approach to learning a distance metric for multivariate time series that outperforms existing methods in terms of nearest-neighbor classification accuracy.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:388
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Prefrontal cortex (PFC) is a part of the brain which is responsible for behavior repertoire.
Inspired by PFC functionality and connectivity, as well as human behavior formation process, we propose a novel modular architecture of neural networks with a Behavioral Module (BM) and corresponding end-to-end training strategy.
This approach allows the efficient learning of behaviors and preferences representation.
This property is particularly useful for user modeling (as for dialog agents) and recommendation tasks, as allows learning personalized representations of different user states.
In the experiments with video game playing, the results show that the proposed method allows separation of the main task's objectives and behaviors between different BMs.
The experiments also show network extendability through independent learning of new behavior patterns.
Moreover, we demonstrate a strategy for an efficient transfer of newly learned BMs to unseen tasks.
Humans are highly intelligent species and are capable of solving a large variety of compound and open-ended tasks.
The performance on those tasks often varies depending on a number of factors.
In this work, we group them into two main categories: Strategy and Behaviour.
The first group contains all the factors leading to the achievement of a defined set of goals.
On the other hand, Behaviour is responsible for all the factors not directly linked to the goals and having no significant effect on them.
Examples of such factors can be current sentiment status or the unique personality and preferences that affect the way an individual makes decisions.
Existing Deep Networks have focused on learning the Strategy component.
This was achieved by optimizing a model for defined sets of goals; the goal might also first be decomposed into sub-goals, as in FeUdal Networks BID29 or the Policy Sketches approach BID1 .
The Behavior component, in turn, has received much less attention from the DL community, although some works have been conducted on the identification of the Behavior component in the input, such as works in emotion recognition BID15 BID11 BID17 .
To the best of our knowledge, there has been no previous research on incorporating a Behavior Component or Behavior Representation in Deep Networks.
Modeling Behaviour along with Strategy component is an important step to mimicking a real human behavior and creation of robust Human-Computer Interaction systems, such as a dialog agent, social robot or recommendation system.
The early work on artificial neural networks was inspired by brain structure BID9 BID16 , and the convolution operation and hierarchical layer design found in networks designed for visual analytics are inspired by the visual cortex BID9 BID16 .
In this work, we again seek inspiration from the human brain architecture.
In the neuroscience studies, the prefrontal cortex (PFC) is the region of the brain responsible for the behavioral repertoire of animals BID18 .
Similar to the connectivity of the brain cortex (as shown in Figure 1 ), we hypothesize that a behavior can be modeled as a standalone module within the deep network architecture.
Thus, in this work, we introduce a general purpose modular architecture of deep networks with a Behavioural Module (BM) focusing on impersonating the functionality of the PFC.
Apart from mimicking the PFC connectivity in our model, we also borrow the model training strategy from the human behavior formation process.
As we are trying to mimic the functionality of a human brain we approached the problem from the perspective of Reinforcement Learning.
This approach also aligns with the process of unique personality development.
According to BID6 and BID5 unique personality can be explained by different dopamine functions caused by genetic influence.
These differences are also a reason for different Positive Emotionality (PE) patterns (sensitivity to reward stimuli), which are in turn a significant factor in the behavior formation process BID5 .
(Figure 1: Abstract illustration of the prefrontal cortex (PFC) connections of the brain BID18 and the corresponding parts of the proposed model; panel labels: Sensory Cortex (Conv Layers), PFC (Behavior Module), Motor Cortex (FC layers).)
Inspired by the named biological processes, we introduce extra positive rewards (referring to positive stimuli or dopamine release; a higher reward refers to higher sensitivity) to encourage specific actions and provoke the development of specific behavioral patterns in the trained agent.
To validate our method, we selected the challenging domain of classic Atari 2600 games BID2 , where the simulated environment allows an AI algorithm to learn game playing by repeatedly seeking to understand the input space, objectives and solution.
Based on this environment and an established agent (i.e. Deep Q-Network (DQN) BID20 ), the behavior of the agent can be represented by preferences over different sets of actions.
In other words, in the given setting, each behaviour is represented by a probability distribution over the given action space.
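A minimal sketch of the extra-reward idea described above: on top of the task reward, the agent receives a small bonus whenever it takes an action that the target behavior prefers, with the preference expressed as a distribution over the action space. The bonus scale, the gym-like loop in the comments, and all names are illustrative assumptions rather than the paper's exact training setup.

```python
import numpy as np

def shaped_reward(env_reward, action, behavior_preference, bonus_scale=0.1):
    """Add a behavior-specific bonus on top of the task reward.

    behavior_preference: array over the action space; higher values mean the
    target behavior 'prefers' that action (a probability distribution works).
    """
    return env_reward + bonus_scale * float(behavior_preference[action])

# Illustrative usage inside a generic interaction loop (gym-like environment assumed):
# preference = np.array([0.6, 0.1, 0.1, 0.2])   # hypothetical preference over 4 actions
# obs, done = env.reset(), False
# while not done:
#     action = agent.act(obs, behavior_id)
#     next_obs, reward, done, _ = env.step(action)
#     agent.learn(obs, action, shaped_reward(reward, action, preference), next_obs, done)
#     obs = next_obs
```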
In real-world tasks, the extra reward can be represented by human satisfaction with the taken action, along with the correctness of the output (main reward).
Importantly, the effect of human behavior is not restricted to a single task and can be observed in various similar situations. Although it is difficult to correlate the effect of human behavior on completely different tasks, it is often easier to observe akin patterns in similar domains and problems. To verify this, we study two BM transfer strategies to transfer a set of newly learned BMs across different tasks. As a human PFC is responsible for behavior patterns in a variety of tasks, we also aim to achieve a zero-shot transfer of learned modules across different tasks.
The contributions of our work are as follows:
• We propose a novel modular architecture with a Behavior Module and a learning method for the separation of behavior from a strategy component.
• We provide a 0-shot transfer strategy for newly learned behaviors to previously unseen tasks. The proposed approach ensures easy extendability of the model to new behaviors and transferability of learned BMs.
• We demonstrate the effectiveness of our approach on the video games domain. The experimental results show good separation of behavior with different BMs, as well as promising results when transferring learned BMs to new tasks. Along with that, we study the effects of different hyper-parameters on the behavior separation process.
In this work, we have proposed a novel Modular Network architecture with Behavior Module, inspired by human brain Pre-Frontal Cortex connectivity.
This approach demonstrated the successful separation of the Strategy and Behavior functionalities among different network components.
This is particularly useful for network expandability through independent learning of new Behavior Modules.
The adversarial 0-shot transfer approach showed the high potential of the learned BMs to be transferred to unseen tasks.
Experiments showed that learned behaviors are removable and do not degrade the performance of the network on the main task.
This property allows the model to work in a general setting, when user preferences are unknown.
The results also align with the human behavior formation process.
We also conducted an exhaustive study on the effect of hyper-parameters on the behavior learning process.
As a future work, we are planning to extend the work to other domains, such as style transfer, chat bots, and recommendation systems.
Also, we will work on improving module transfer quality.
In this appendix, we show the details of our preliminary study on various key parameters.
The experiments were conducted on the Behavior Separation task.
|
An extendable modular architecture is proposed for developing a variety of Agent Behaviors in DQN.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:389
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
There has been a recent trend in training neural networks to replace data structures that have been crafted by hand, with an aim for faster execution, better accuracy, or greater compression.
In this setting, a neural data structure is instantiated by training a network over many epochs of its inputs until convergence.
In many applications this expensive initialization is not practical, for example streaming algorithms --- where inputs are ephemeral and can only be inspected a small number of times.
In this paper we explore the learning of approximate set membership over a stream of data in one-shot via meta-learning.
We propose a novel memory architecture, the Neural Bloom Filter, which we show to be more compressive than Bloom Filters and several existing memory-augmented neural networks in scenarios of skewed data or structured sets.
One of the simplest questions one can ask of a set of data is whether or not a given query is contained within it.
Is q, our query, a member of S, our chosen set of observations?
This set membership query arises across many computing domains; from databases, network routing, and firewalls.
One could query set membership by storing S in its entirety and comparing q against each element.
However, more space-efficient solutions exist.
The original and most widely implemented approximate set membership data-structure is the Bloom Filter BID2 .
It works by storing sparse distributed codes, produced from randomized hash functions, within a binary vector.
The Bloom-filter trades off space for an allowed false positive rate, which arises due to hash collisions.
However its error is one-sided; if an element q is contained in S then it will always be recognized.
It never emits false negatives.
One can find Bloom Filters embedded within a wide range of production systems: from network security BID16 , to block malicious IP addresses; databases, such as Google's Bigtable BID7 , to avoid unnecessary disk lookups; cryptocurrency BID19 , to allow clients to filter irrelevant transactions; search, such as Facebook's typeahead search BID0 , to filter pages which do not contain query prefixes; and program verification BID13 , to avoid recomputation over previously observed states.
While the main appeal of Bloom Filters is favourable compression, another important quality is the support for dynamic updates.
New elements can be inserted in O(1) time.
This is not the case for all approximate set membership data structures.
For example, perfect hashing saves ≈ 40% space over Bloom Filters but requires a pre-processing stage that is polynomial-time in the number of elements to store BID12 .
Whilst the static set membership problem is interesting, it limits the applicability of the algorithm.
For example, in a database application that is serving a high throughput of write operations, it may be intractable to regenerate the full data-structure upon each batch of writes.
We thus focus on the data stream computation model BID27 , where input observations are assumed to be ephemeral and can only be inspected a constant number of times, usually once.
This captures many real-world applications: network traffic analysis, database query serving, and reinforcement learning in complex domains.
Devising an approximate set membership data structure that is not only more compressive than Bloom Filters, but can be applied to either dynamic or static sets, could have a significant performance impact on modern computing applications.
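To make the mechanism described above concrete, here is a minimal textbook Bloom Filter (bit array, k hashed positions per element, O(1) insertion, one-sided error). This is the classical baseline only, not the Neural Bloom Filter proposed in the paper; the sizes and the hashing scheme are arbitrary illustrative choices.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom Filter: k hashed bit positions per element, one-sided error."""

    def __init__(self, num_bits=1024, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits)          # 1 byte per bit, for simplicity

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):                         # O(1) insertion
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        # Never a false negative; false positives arise from hash collisions.
        return all(self.bits[pos] for pos in self._positions(item))

# bf = BloomFilter(); bf.add("10.0.0.1")
# "10.0.0.1" in bf  -> True;  "10.0.0.2" in bf  -> usually False (may be a false positive)
```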
In this paper we investigate this problem using memory-augmented neural networks and meta-learning.
We build upon the recently growing literature on using neural networks to replace algorithms that are configured by heuristics, or do not take advantage of the data distribution.
For example, Bloom Filters are indifferent to the data distribution.
They have near-optimal space efficiency when data is drawn uniformly from a universe set BID5 (maximal-entropy case) but (as we shall show) are sub-optimal when there is more structure.
Prior studies on this theme have investigated compiler optimization BID11 , computation graph placement, and data index structures such as b-trees BID22 .
In the latter work, BID22 explicitly consider the problem of static set membership. By training a neural network over a fixed S (URLs from Google's Transparency Report) with negative examples in the form of held-out URLs, they observe a 36% space reduction over a conventional Bloom Filter. Crucially, this requires iterating over the storage set S a large number of times to embed its salient information into the weights of a neural network classifier. For a new S this process would have to be repeated from scratch.
Instead of learning from scratch, we draw inspiration from the few-shot learning advances obtained by meta-learning memory-augmented neural networks BID30 BID34 . In this setup, tasks are sampled from a common distribution and a network learns to specialize to (learn) a given task with few examples. This matches very well to applications where many Bloom Filters are instantiated over different subsets of a common data distribution. For example, a Bigtable database usually contains one Bloom Filter per SSTable file. For a large table that contains Petabytes of data, say, there can be over 100,000 separate instantiated data-structures which share a common row key format and query distribution. Meta-learning allows us to exploit this common redundancy.
The main contributions of this paper are (1) a new sparse memory-augmented neural network architecture, the Neural Bloom Filter, which learns to write to memory using a distributed write scheme, and (2) an empirical evaluation of the Neural Bloom Filter meta-learned on one-shot approximate set membership problems of varying structure. We compare with the classical Bloom Filter alongside other memory-augmented neural networks such as the Differentiable Neural Computer and Memory Networks BID33 . We find that when there is no structure that differentiates the query set elements and queries, the Neural Bloom Filter learns a solution similar to a Bloom Filter derivative (a Bloom-g filter BID28 ), but when there is a lot of structure the solution can be considerably more space-efficient.
In many situations neural networks are not a suitable replacement to Bloom Filters and their variants.
The Bloom Filter is robust to changes in data distribution, and adversarial attacks, because it delivers a bounded false positive rate for any sampled subset, unlike a neural network.
However in this paper we consider the questions, "When might a neural network provide better compression than a Bloom Filter?" and "What kind of neural architecture is practical?".
We see that a model which uses an external memory with an adaptable capacity, avoids BPTT with a feed-forward write scheme, and learns to address its memory, is the most promising option in contrast to popular memory models such as DNCs and LSTMs.
We term this model the Neural Bloom Filter due to the analogous incorporation of a hashing scheme, commutative write scheme, and multiplicative read mechanism.
The Neural Bloom Filter relies on settings where learning to query is possible and will be a net benefit to a population of existing Bloom Filters.
That is, because we rely on meta-learning, we need situations where we have an off-line dataset (both of stored elements and queries) that is similar enough to future data that we wish to store.
In the case of a large database we think this is warranted: a database with 100,000 separate set membership data structures will benefit from a single (or periodic) meta-learning training routine that can run on a single machine and sample from the currently stored data, generating a large number of efficient data-structures.
We envisage the space cost of the network to be amortized by sharing it across many neural bloom filters, and the time-cost of executing the network to be offset by the continuous acceleration of dense linear algebra on modern hardware, and the ability to batch writes and queries efficiently.
A promising future direction would be to investigate the feasibility of this approach in a production system.
|
We investigate the space efficiency of memory-augmented neural nets when learning set membership.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:39
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
A major component of overfitting in model-free reinforcement learning (RL) involves the case where the agent may mistakenly correlate reward with certain spurious features from the observations generated by the Markov Decision Process (MDP).
We provide a general framework for analyzing this scenario, which we use to design multiple synthetic benchmarks from only modifying the observation space of an MDP.
When an agent overfits to different observation spaces even if the underlying MDP dynamics is fixed, we term this observational overfitting.
Our experiments expose intriguing properties especially with regards to implicit regularization, and also corroborate results from previous works in RL generalization and supervised learning (SL).
Generalization for RL has recently grown to be an important topic for agents to perform well in unseen environments.
Complication arises when the dynamics of the environments entangle with the observation, which is often a high-dimensional projection of the true latent state.
One particular framework, which we denote the zero-shot supervised framework (Zhang et al., 2018a; Nichol et al., 2018; Justesen et al., 2018) and which is used to study RL generalization, is to treat it analogously to a classical supervised learning (SL) problem, i.e., assume there exists a distribution of MDP's, train jointly on a finite "training set" sampled from this distribution, and check expected performance on the entire distribution with the fixed trained policy.
In this framework, there is a spectrum of analysis, ranging from almost purely theoretical analysis (Wang et al., 2019; Asadi et al., 2018) to full empirical results on diverse environments (Packer et al., 2018) .
However, there is a lack of analysis in the middle of this spectrum.
On the theoretical side, previous work does not provide analysis for the case when the underlying MDP is relatively complex and requires the policy to be a non-linear function approximator such as a neural network.
On the empirical side, there is no common ground between recently proposed empirical benchmarks.
This is partially caused by multiple confounding factors for RL generalization that can be hard to identify and separate.
For instance, an agent can overfit to the MDP dynamics of the training set, such as for control in Mujoco (Pinto et al., 2017; Rajeswaran et al., 2017) .
In other cases, an RNN-based policy can overfit to maze-like tasks in exploration , or even exploit determinism and avoid using observations (Bellemare et al., 2012; Machado et al., 2018) .
Furthermore, various hyperparameters such as the batch-size in SGD (Smith et al., 2018) , choice of optimizer (Kingma & Ba, 2014) , discount factor γ (Jiang et al., 2015) and regularizations such as entropy and weight norms (Cobbe et al., 2018) can also affect generalization.
Due to these confounding factors, it can be unclear what parts of the MDP or policy are actually contributing to overfitting or generalization in a principled manner, especially in empirical studies with newly proposed benchmarks.
In order to isolate these factors, we study one broad factor affecting generalization that is most correlated with themes in SL, specifically observational overfitting, where an agent overfits due to properties of the observation which are irrelevant to the latent dynamics of the MDP family.
To study this factor, we fix a single underlying MDP's dynamics and generate a distribution of MDP's by only modifying the observational outputs.
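A minimal sketch of that construction in gym-wrapper form: the underlying dynamics stay fixed, while each "level" renders the latent state through a shared task-relevant projection f plus a level-specific nuisance projection g that the agent may spuriously correlate with reward. The projection shapes, seeds, and the CartPole example are illustrative assumptions, not the paper's exact benchmark.

```python
import numpy as np
import gym

class ObservationOverfittingWrapper(gym.ObservationWrapper):
    """Fix the underlying dynamics; vary only the observation function per 'level'.

    The observation concatenates a shared, task-relevant projection f(state) with a
    level-specific, task-irrelevant projection g_level(state).
    """

    def __init__(self, env, level_seed, relevant_dim=8, nuisance_dim=56):
        super().__init__(env)
        state_dim = env.observation_space.shape[0]
        shared_rng = np.random.RandomState(0)          # same f across all levels
        level_rng = np.random.RandomState(level_seed)  # g differs per level
        self.f = shared_rng.randn(relevant_dim, state_dim)
        self.g = level_rng.randn(nuisance_dim, state_dim)
        self.observation_space = gym.spaces.Box(
            low=-np.inf, high=np.inf, shape=(relevant_dim + nuisance_dim,), dtype=np.float32)

    def observation(self, state):
        return np.concatenate([self.f @ state, self.g @ state]).astype(np.float32)

# train_envs = [ObservationOverfittingWrapper(gym.make("CartPole-v1"), seed) for seed in range(5)]
```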
Our contributions in this paper are the following:
1. We discuss realistic instances where observational overfitting may occur and its difference from other confounding factors, and design a parametric theoretical framework to induce observational overfitting that can be applied to any underlying MDP.
2. We study observational overfitting with linear quadratic regulators (LQR) in a synthetic environment and neural networks such as multi-layer perceptrons (MLPs) and convolutions in classic Gym environments.
A primary novel result we demonstrate for all cases is that implicit regularization occurs in this setting in RL.
We further test the implicit regularization hypothesis on the benchmark CoinRun from using MLPs, even when the underlying MDP dynamics are changing per level.
3. In the Appendix, we expand upon previous experiments by including full training curves and hyperparameters.
We also provide an extensive analysis of the convex one-step LQR case under the observational overfitting regime, showing that under Gaussian initialization of the policy and using gradient descent on the training cost, a generalization gap must necessarily exist.
The structure of this paper is outlined as follows: Section 2 discusses the motivation behind this work and the synthetic construction to abstract certain observation effects.
Section 3 demonstrates numerous experiments using this synthetic construction that suggest implicit regularization is at work.
Finally, Section 3.4 tests the implicit regularization hypothesis on CoinRun, as well as ablates various ImageNet architectures and margin metrics in the Appendix.
We have identified and isolated a key component of overfitting in RL as the particular case of "observational overfitting", which is particularly attractive for studying architectural implicit regularizations.
We have analyzed this setting extensively, by examining 3 main components:
1. The analytical case of LQR and linear policies under exact gradient descent, which lays the foundation for understanding theoretical properties of networks in RL generalization.
2. The empirical but principled Projected-Gym case for both MLP and convolutional networks which demonstrates the effects of neural network policies under nonlinear environments.
3. The large scale case for CoinRun, which can be interpreted as a case where relevant features are moving across the input, where empirically, MLP overparametrization also improves generalization.
We noted that current network policy bounds using ideas from SL are unable to explain overparametrization effects in RL, which is an important further direction.
In some sense, this area of RL generalization is an extension of static SL classification from adding extra RL components.
For instance, adding a nontrivial "combination function" between f and g θ that is dependent on time (to simulate how object pixels move in a real game) is both an RL generalization issue and potentially video classification issue, and extending results to the memory-based RNN case will also be highly beneficial.
Furthermore, it is unclear whether such overparametrization effects would occur in off-policy methods such as Q-learning and also ES-based methods.
In terms of architectural design, recent works (Jacot et al., 2018; Garriga-Alonso et al., 2019; Lee et al., 2019) have shed light on the properties of asymptotically overparametrized neural networks in the infinite width and depth cases and their performance in SL.
Potentially such architectures (and a corresponding training algorithm) may be used in the RL setting which can possibly provide benefits, one of which is generalization as shown in this paper.
We believe that this work provides an important initial step towards solving these future problems.
We further verify that explicit regularization (norm based penalties) also reduces generalization gaps.
However, explicit regularization may be explained by the bias of the synthetic tasks, since the first layer's matrix may be regularized to only "view" the output of f, especially as regularizing the first layer's weights substantially improves generalization.
Figure A2: Explicit Regularization on layer norms.
We provide another deconvolution memorization test, using an LQR as the underlying MDP.
While fg-Gym-Deconv shows that memorization performance is dampened, this test shows that there can exist specific hard limits to memorization.
Specifically, NatureCNN can memorize 30 levels, but not 50; IMPALA can memorize 2 levels but not 5; IMPALA-LARGE cannot memorize 2 levels at all.
Figure A3: Deconvolution memorization test using LQR as the underlying MDP; training and test rewards (f = NULL) for IMPALA (2, 5, and 30 levels), IMPALA-LARGE (2 levels), and NatureCNN (30 and 50 levels).
|
We isolate one factor of RL generalization by analyzing the case when the agent only overfits to the observations. We show that architectural implicit regularizations occur in this regime.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:390
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We propose a neural language model capable of unsupervised syntactic structure induction.
The model leverages the structure information to form better semantic representations and better language modeling.
Standard recurrent neural networks are limited by their structure and fail to efficiently use syntactic information.
On the other hand, tree-structured recursive networks usually require additional structural supervision at the cost of human expert annotation.
In this paper, we propose a novel neural language model, called the Parsing-Reading-Predict Networks (PRPN), that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model.
In our model, the gradient can be directly back-propagated from the language model loss into the neural parsing network.
Experiments show that the proposed model can discover the underlying syntactic structure and achieve state-of-the-art performance on word/character-level language model tasks.
Linguistic theories generally regard natural language as consisting of two part: a lexicon, the complete set of all possible words in a language; and a syntax, the set of rules, principles, and processes that govern the structure of sentences BID46 .
To generate a proper sentence, tokens are put together with a specific syntactic structure.
Understanding a sentence also requires lexical information to provide meanings, and syntactical knowledge to correctly combine meanings.
Current neural language models can provide meaningful word representations BID0 BID41 .
However, standard recurrent neural networks only implicitly model syntax, and thus fail to efficiently use structure information BID53 .
Developing a deep neural network that can leverage syntactic knowledge to form a better semantic representation has received a great deal of attention in recent years BID50 BID53 BID11 . Integrating syntactic structure into a language model is important for different reasons: 1) to obtain a hierarchical representation with increasing levels of abstraction, which is a key feature of deep neural networks and of the human brain BID1 BID31 BID47 ; 2) to capture complex linguistic phenomena, like the long-term dependency problem BID53 and the compositional effects BID50 ; 3) to provide shortcuts for gradient back-propagation BID11 .
A syntactic parser is the most common source of structure information. Supervised parsers can achieve very high performance on well constructed sentences. Hence, parsers can provide accurate information about how to compose word semantics into sentence semantics BID50 , or how to generate the next word given previous words BID56 . However, only major languages have treebank data for training parsers, and it requires expensive human expert annotation. People also tend to break language rules in many circumstances (such as writing a tweet). These defects limit the generalization capability of supervised parsers.
Unsupervised syntactic structure induction has been among the longstanding challenges of computational linguistics BID23 BID25 BID2 . Researchers are interested in this problem for a variety of reasons: to be able to parse languages for which no annotated treebanks exist BID35 ; to create a dependency structure to better suit a particular NLP application BID56 ; to empirically argue for or against the poverty of the stimulus BID12 BID10 ; and to examine cognitive issues in language learning BID51 .
In this paper, we propose a novel neural language model: Parsing-Reading-Predict Networks (PRPN), which can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to form a better language model. With our model, we assume that language can be naturally represented as a tree-structured graph. The model is composed of three parts:
1. A differentiable neural Parsing Network uses a convolutional neural network to compute the syntactic distance, which represents the syntactic relationships between all successive pairs of words in a sentence, and then makes soft constituent decisions based on the syntactic distance.
2. A Reading Network that recurrently computes an adaptive memory representation to summarize information relevant to the current time step, based on all previous memories that are syntactically and directly related to the current token.
3. A Predict Network that predicts the next token based on all memories that are syntactically and directly related to the next token.
We evaluate our model on three tasks: word-level language modeling, character-level language modeling, and unsupervised constituency parsing. The proposed model achieves (or is close to) the state-of-the-art on both word-level and character-level language modeling. The model's unsupervised parsing outperforms some strong baseline models, demonstrating that the structure found by our model is similar to the intrinsic structure provided by human experts.
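A toy sketch of the gating idea behind the Parsing and Reading Networks described above: predicted syntactic distances between adjacent words are relaxed into soft gates that limit how far back each position can attend, keeping everything differentiable. This is a simplified illustration with assumed shapes and a made-up relaxation, not the exact PRPN equations.

```python
import torch

def soft_reachability_gates(distances, tau=1.0):
    """Toy version of syntactic-distance gating.

    distances: (seq_len,) predicted syntactic distance between each word and its
    predecessor. For position t, a past position j stays 'reachable' only while no
    intermediate distance exceeds the distance at t; the hard rule is relaxed into a
    product of sigmoids so it remains differentiable.
    Returns a (seq_len, seq_len) lower-triangular matrix of gates in [0, 1].
    """
    seq_len = distances.shape[0]
    gates = torch.zeros(seq_len, seq_len)
    for t in range(seq_len):
        gates[t, t] = 1.0
        g = 1.0
        for j in range(t - 1, -1, -1):
            # Shrink the gate each time a boundary with a larger distance is crossed.
            g = g * torch.sigmoid((distances[t] - distances[j + 1]) / tau)
            gates[t, j] = g
    return gates

# The gates can then weight attention over previous memories, e.g.:
# attn = torch.softmax(scores, dim=-1) * gates
# attn = attn / attn.sum(-1, keepdim=True)
```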
In this paper, we propose a novel neural language model that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model.
We introduce a new neural parsing network: Parsing-Reading-Predict Network, that can make differentiable parsing decisions.
We use a new structured attention mechanism to control skip connections in a recurrent neural network.
Hence induced syntactic structure information can be used to improve the model's performance.
Via this mechanism, the gradient can be directly backpropagated from the language model loss function into the neural Parsing Network.
The proposed model achieves (or is close to) the state-of-the-art on both word- and character-level language modeling tasks.
Experiments also show that the inferred syntactic structure is highly correlated with human expert annotation.
|
In this paper, We propose a novel neural language model, called the Parsing-Reading-Predict Networks (PRPN), that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:391
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Unsupervised embedding learning aims to extract good representations from data without the use of human-annotated labels.
Such techniques are apparently in the limelight because of the challenges in collecting massive-scale labels required for supervised learning.
This paper proposes a comprehensive approach, called Super-AND, which is based on the Anchor Neighbourhood Discovery model.
Multiple losses defined in Super-AND make similar samples gather even within a low-density space and keep features invariant against augmentation.
As a result, our model outperforms existing approaches in various benchmark datasets and achieves an accuracy of 89.2% in CIFAR-10 with the Resnet18 backbone network, a 2.9% gain over the state-of-the-art.
Deep learning and convolutional neural networks have become indispensable techniques in computer vision (LeCun et al., 2015; Krizhevsky et al., 2012; Lawrence et al., 1997) .
Remarkable developments, in particular, were led by supervised learning that requires thousands or more labeled data.
However, high annotation costs have become a significant drawback in training a scalable and practical model in many domains.
In contrast, unsupervised deep learning that requires no label has recently started to get attention in computer vision tasks.
From clustering analysis (Caron et al., 2018; Ji et al., 2018) and self-supervised models (Gidaris et al., 2018; Bojanowski & Joulin, 2017) to generative models (Goodfellow et al., 2014; Kingma & Welling, 2013; Radford et al., 2016) , various learning methods have emerged and shown possibilities and prospects.
Unsupervised embedding learning aims to extract visually meaningful representations without any label information.
Here "visually meaningful" refers to finding features that satisfy two traits:
(i) positive attention and
(ii) negative separation (Ye et al., 2019; Zhang et al., 2017c; Oh Song et al., 2016) .
Data samples from the same ground truth class, i.e., positive samples, should be close in the embedding space (Fig. 1a) ; whereas those from different classes, i.e., negative samples, should be pushed far away in the embedding space (Fig. 1b) .
However, in the setting of unsupervised learning, a model cannot have knowledge about whether given data points are positive samples or negative samples.
Several new methods have been proposed to find 'visually meaningful' representations.
The sample specificity method considers all data points as negative samples and separates them in the feature space (Wu et al., 2018; Bojanowski & Joulin, 2017) .
Although this method achieves high performance, its decisions are known to be biased from learning only from negative separation.
One approach utilizes data augmentation to consider positive samples in training (Ye et al., 2019) , which efficiently reduces any ambiguity in supervision while keeping invariant features in the embedding space.
Another approach is called the Anchor Neighborhood Discovery (AND) model, which alleviates the complexity in boundaries by discovering the nearest neighbor among the data points (Huang et al., 2019) .
Each of these approaches overcomes different limitations of the sample specificity method.
However, no unified approach has been proposed.
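For reference, a minimal sketch of the sample-specificity (instance discrimination) loss that the approaches above build on: every instance is treated as its own class, and a softmax over a memory bank of per-instance features pushes all instances apart. This illustrates the baseline ingredient only, not Super-AND's full objective or its UE-loss; the shapes and temperature value are assumptions.

```python
import torch
import torch.nn.functional as F

def sample_specificity_loss(embeddings, indices, memory_bank, temperature=0.07):
    """Instance-discrimination loss: each sample should match only its own memory slot.

    embeddings:  (batch, dim) L2-normalized features from the encoder.
    indices:     (batch,) dataset indices of the samples in this batch.
    memory_bank: (num_samples, dim) L2-normalized running features for every instance.
    """
    logits = embeddings @ memory_bank.t() / temperature     # similarity to every instance
    return F.cross_entropy(logits, indices)

# Typical usage:
# features = F.normalize(encoder(images), dim=1)
# loss = sample_specificity_loss(features, idx, memory_bank)
# then update memory_bank[idx] with a momentum average of features.
```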
This paper presents a holistic method for unsupervised embedding learning, named Super-AND.
Super-AND extends the AND algorithm and unifies various but dominant approaches in this domain with its unique architecture.
Our proposed model not only focuses on learning distinctive features across neighborhoods, but also emphasizes edge information in embeddings and maintains the unchanging class information from the augmented data.
Besides combining existing techniques, we newly introduce Unification Entropy loss (UE-loss), an adversary of sample specificity loss, which is able to gather similar data points within a low-density space.
Extensive experiments are conducted on several benchmark datasets to verify the superiority of the model.
The results show the synergetic advantages among modules of Super-AND.
The main contributions of this paper are as follows:
• We effectively unify various techniques from state-of-the-art models and introduce a new loss, UE-loss, to make similar data samples gather in the low-density space.
• Super-AND outperforms all baselines in various benchmark datasets.
It achieved an accuracy of 89.2% in the CIFAR-10 dataset with the ResNet18 backbone network, compared to the state-of-the-art that gained 86.3%.
• The extensive experiments and the ablation study show that every component in Super-AND contributes to the performance increase, and also indicate their synergies are critical.
Our model's outstanding performance is a step closer to the broader adoption of unsupervised techniques in computer vision tasks.
The premise of data-less embedding learning lies in its applicability to practical scenarios, where there exist only one or two examples per cluster.
Codes and trained data for Super-AND are accessible via a GitHub link.
Generative model.
This type of model is a powerful branch in unsupervised learning.
By reconstructing the underlying data distribution, a model can generate new data points as well as features from images without labels.
Generative adversarial networks (Goodfellow et al., 2014) have led to rapid progress in image generation problems (Arjovsky et al., 2017) .
While some attempts have been made in terms of unsupervised embedding learning (Radford et al., 2016) , the main objective of generative models lies in mimicking the true distribution of each class, rather than discovering distinctive categorical information the data contains.
Self-supervised learning.
This type of learning uses inherent structures in images as pseudo-labels and exploits labels for back-propagation.
For example, a model can be trained to create embeddings by predicting the relative position of a pixel from other pixels (Doersch et al., 2015) or the degree of changes after rotating images (Gidaris et al., 2018) .
Predicting future frames of a video can benefit from this technique (Walker et al., 2016) .
Wu et al. (2018) proposed the sample specificity method that learns feature representation from capturing apparent discriminability among instances.
All of these methods are suitable for unsupervised embedding learning, although there exists a risk of false knowledge from generated labels that weakly correlate with the underlying class information.
Learning invariants from augmentation.
Data augmentation is a strategy that enables a model to learn from datasets with an increased variety of instances.
Popular techniques include flipping, scaling, rotation, and grey-scaling.
These techniques do not deform any crucial features of data, but only change the style of images.
Some studies hence use augmentation techniques and train models to learn features that are invariant to such changes.
Clustering analysis.
This type of analysis is an extensively studied area in unsupervised learning, whose main objective is to group similar objects into the same class.
Many studies either leveraged deep learning for dimensionality reduction before clustering (Schroff et al., 2015; Baldi, 2012) or trained models in an end-to-end fashion (Xie et al., 2016; Yang et al., 2016) .
Caron et al. (2018) proposed a concept called deep cluster, an iterative method that updates its weights by predicting cluster assignments as pseudo-labels.
However, directly reasoning the global structures without any label is error-prone.
The AND model, which we extend in this work, combines the advantages of sample specificity and clustering strategy to mitigate the noisy supervision via neighborhood analysis (Huang et al., 2019) .
This paper presents Super-AND, a holistic technique for unsupervised embedding learning.
Besides the synergetic advantage that combining existing methods brings, the newly proposed UE-loss groups nearby data points even in a low-density space while maintaining invariant features via data augmentation.
The experiments with both coarse-grained and fine-grained datasets demonstrate our model's outstanding performance against the state-of-the-art models.
Our efforts to advance unsupervised embedding learning directly benefit future applications that rely on various image clustering tasks.
The high accuracy achieved by Super-AND makes the unsupervised learning approach an economically viable option where labels are costly to generate.
|
We proposed a comprehensive approach for unsupervised embedding learning on the basis of AND algorithm.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:392
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Analysis of histopathology slides is a critical step for many diagnoses, and in particular in oncology where it defines the gold standard.
In the case of digital histopathological analysis, highly trained pathologists must review vast whole-slide-images of extreme digital resolution (100,000^2 pixels) across multiple zoom levels in order to locate abnormal regions of cells, or in some cases single cells, out of millions.
The application of deep learning to this problem is hampered not only by small sample sizes, as typical datasets contain only a few hundred samples, but also by the generation of ground-truth localized annotations for training interpretable classification and segmentation models.
We propose a method for disease classification in whole-slide images where only slide-level labels are available during training.
Even without pixel-level annotations, we are able to demonstrate performance comparable with models trained with strong annotations on the Camelyon-16 lymph node metastases detection challenge.
We accomplish this through the use of pre-trained deep convolutional networks, feature embedding, as well as learning via top instances and negative evidence, a multiple instance learning technique from the field of semantic segmentation and object detection.
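A minimal sketch of the "top instances and negative evidence" ingredient named above: tiles from a slide are embedded with a pre-trained CNN, each tile receives a score, and only the highest- and lowest-scoring tiles contribute to the slide-level prediction trained from the global label. Feature dimensions, k values, and the aggregation are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class TopInstanceMIL(nn.Module):
    """Weakly-supervised slide classifier from per-tile features (top/bottom instances)."""

    def __init__(self, feature_dim=2048, top_k=5, bottom_k=5):
        super().__init__()
        self.tile_scorer = nn.Linear(feature_dim, 1)   # score each tile embedding
        self.top_k, self.bottom_k = top_k, bottom_k

    def forward(self, tile_features):
        # tile_features: (num_tiles, feature_dim) embeddings from a pre-trained CNN.
        scores = self.tile_scorer(tile_features).squeeze(-1)                   # (num_tiles,)
        top = scores.topk(min(self.top_k, scores.numel())).values              # strongest positive evidence
        bottom = (-scores).topk(min(self.bottom_k, scores.numel())).values.neg()  # negative evidence
        return torch.cat([top, bottom]).mean()                                 # slide-level logit

# slide_logit = model(tile_features)
# loss = torch.nn.functional.binary_cross_entropy_with_logits(slide_logit, label)
```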
Histopathological image analysis (HIA) is a critical element of diagnosis in many areas of medicine, and especially in oncology, where it defines the gold standard metric.
Recent works have sought to leverage modern developments in machine learning (ML) to aid pathologists in disease detection tasks, but the majority of these techniques require localized annotation masks as training data.
These annotations are even more costly to obtain than the original diagnosis, as pathologists must spend time to assemble pixel-by-pixel segmentation maps of diseased tissue at extreme resolution; thus, HIA datasets with annotations are very limited in size.
Additionally, such localized annotations may not be available when facing new problems in HIA, such as new disease subtype classification, prognosis estimation, or drug response prediction.
Thus, the critical question for HIA is: can one design a learning architecture which achieves accurate classification with no additional localized annotation?
A successful technique would be able to train algorithms to assist pathologists during analysis, and could also be used to identify previously unknown structures and regions of interest. Indeed, while histopathology is the gold standard diagnostic in oncology, it is extremely costly, requiring many hours of focus from pathologists to make a single diagnosis BID21 BID30.
Additionally, as correct diagnosis for certain diseases requires pathologists to identify a few cells out of millions, these tasks are akin to "finding a needle in a haystack."
Hard numbers on diagnostic error rates in histopathology are difficult to obtain, being dependent upon the disease and tissue in question as well as self-reporting by pathologists of diagnostic errors.
However, as reported in the review of BID25 , false negatives in cancer diagnosis can lead not only to catastrophic consequences for the patient, but also to incredible financial risk to the pathologist.
Any tool which can help pathologists focus their attention and effort on the most suspect regions can help reduce false negatives and improve patient outcomes through more accurate diagnoses BID8.
Medical researchers have looked to computer-aided diagnosis for decades, but the lack of computational resources and data has prevented widespread implementation and usage of such tools BID11.
Since the advent of automated digital WSI capture in the 1990s, researchers have sought approaches for easing the pathologist's workload and improve patient outcomes through image processing algorithms BID11 BID22 .
Rather than predicting final diagnosis, many of these procedures focused instead on segmentation, either for cell-counting, or for the detection of suspect regions in the WSI.
Historical methods have focused on the use of hand-crafted texture or morphological features BID5 used in conjunction with unsupervised techniques such as K-means clustering or other dimensionality reduction techniques prior to classification via k-Nearest Neighbors or a support vector machine. Over the past decade, fruitful developments in deep learning BID19 have led to an explosion of research into the automation of image processing tasks.
While the application of such advanced ML techniques to image tasks has been successful for many consumer applications, the adoption of such approaches within the field of medical imaging has been more gradual.
However, these techniques demonstrate remarkable promise in the field of HIA.
Specifically, in digital pathology with whole-slide-imaging (WSI) BID33 BID26 , highly trained and skilled pathologists review digitally captured microscopy images from prepared and stained tissue samples in order to make diagnoses.
Digital WSI are massive datasets, consisting of images captured at multiple zoom levels.
At the greatest magnification levels, a WSI may have a digital resolution upwards of 100 thousand pixels in both dimensions.
However, since localized annotations are very difficult to obtain, datasets may only contain WSI-level diagnosis labels, falling into the category of weakly supervised learning. The use of DCNNs was first proposed for HIA in BID3, where the authors were able to train a model for mitosis detection in H&E stained images.
A similar technique was applied for WSI for the detection of invasive ductal carcinoma in BID4 .
These approaches demonstrated the usefulness of learned features as an effective replacement for hand-crafted image features.
It is possible to train deep architectures from scratch for the classification of tile images BID29 BID13 .
However, training such DCNN architectures can be extremely resource intensive.
For this reason, many recent approaches applying DCNNs to HIA make use of large pre-trained networks to act as rich feature extractors for tiles BID15 BID17 BID21 BID32 BID27 .
Such approaches have found success, as aggregation of rich representations from pre-trained DCNNs has proven to be quite effective, even without from-scratch training on WSI tiles. In this paper, we propose CHOWDER, an approach for the interpretable prediction of general localized diseases in WSI with only weak, whole-image disease labels and without any additional expert-produced localized annotations, i.e., per-pixel segmentation maps, of diseased areas within the WSI.
To accomplish this, we modify an existing architecture from the field of multiple instance learning and object region detection BID9 to WSI diagnosis prediction.
By modifying the pre-trained DCNN model BID12, introducing an additional set of fully-connected layers for context-aware classification from tile instances, developing a random tile sampling scheme for efficient training over massive WSI, and enforcing a strict set of regularizations, we are able to demonstrate performance equivalent to the best human pathologists.
Notably, while the approach we propose makes use of a pre-trained DCNN as a feature extractor, the entire procedure is a true end-to-end classification technique, and therefore the transferred pre-trained layers can be fine-tuned to the context of H&E WSI.
We demonstrate, using only whole-slide labels, performance comparable to top-10 ranked methods trained with strong, pixel-level labels on the Camelyon-16 challenge dataset, while also producing disease segmentation that closely matches ground-truth annotations.
We also present results for diagnosis prediction on WSI obtained from The Cancer Genome Atlas (TCGA), where strong annotations are not available and diseases may not be strongly localized within the tissue sample.
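To make the tile-level aggregation concrete, the following sketch is our own illustration rather than the CHOWDER implementation: the feature dimension, the number r of retained tiles, the scoring vector, and the toy classifier are all assumptions. It shows how per-tile embeddings from a pre-trained network could be scored, reduced to the top instances (positive evidence) and lowest instances (negative evidence), and mapped to a slide-level prediction.

import numpy as np

def min_max_aggregate(tile_features, w_score, classifier, r=5):
    # one scalar relevance score per tile from its pre-computed embedding
    scores = tile_features @ w_score
    ordered = np.sort(scores)
    # keep only the r highest (top instances) and r lowest (negative evidence) scores
    evidence = np.concatenate([ordered[-r:], ordered[:r]])
    return classifier(evidence)

# toy slide with 1,000 tiles of 256-dimensional embeddings (values invented for illustration)
rng = np.random.default_rng(0)
tiles = rng.normal(size=(1000, 256))
w = rng.normal(size=256)
classifier = lambda v: 1.0 / (1.0 + np.exp(-v.mean()))   # stand-in for the small MLP
print(min_max_aggregate(tiles, w, classifier))

In the full method the scoring weights and the classifier would be trained end-to-end from slide-level labels, with tiles sampled randomly from each WSI.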
We have shown that using state-of-the-art techniques from MIL in computer vision, such as the top instance and negative evidence approach of BID9, one can construct an effective technique for diagnosis prediction and disease location for WSI in histopathology without the need for expensive localized annotations produced by expert pathologists.
Table 2: Final leaderboards for the Camelyon-16 competition. All competition methods had access to the full set of strong annotations for training their models. In contrast, our proposed approach only utilizes image-wide diagnosis labels and obtains performance comparable to the top-10 methods.
By removing this requirement, we hope to accelerate the production of computer-assistance tools for pathologists to greatly improve the turn-around time in pathology labs and help surgeons and oncologists make rapid and effective patient care decisions.
This also opens the way to tackle problems where expert pathologists may not know precisely where relevant tissue is located within the slide image, for instance for prognosis estimation or prediction of drug response tasks.
The ability of our approach to discover associated regions of interest without prior localized annotations hence appears as a novel discovery approach for the field of pathology.
Moreover, using the suggested localization from CHOWDER, one may considerably speed up the process of obtaining ground-truth localized annotations. A number of improvements can be made to the CHOWDER method, especially in the production of disease localization maps.
As presented, we use the raw values from convolutional embedding layer, which means that the resolution of the produced disease localization map is fixed to that of the sampled tiles.
However, one could also sample overlapping tiles and then use a data fusion technique to generate a final localization map.
Additionally, as a variety of annotations may be available, CHOWDER could be extended to the case of heterogeneous annotation, e.g., some slides with expert-produced localized annotations and others with only whole-slide annotations.
A FURTHER RESULTS
Figure 5: Visualization of metastasis detection on test image 2 of the Camelyon-16 dataset using our proposed approach.
Left: Full WSI at zoom level 6 with ground truth annotation of metastases shown via black border.
Tiles with positive feature embeddings are colored from white to red according to their magnitude, with red representing the largest magnitude.
Right: Detail of metastases at zoom level 2 overlaid with classification output of our proposed approach.
Here, the output of all tested tiles are shown and colored according to their value, from blue to white to red, with blue representing the most negative values, and red the most positive.
Tiles without color were not included when randomly selecting tiles for inference.
Figure 6 : Visualization of metastasis detection on test image 92 of the Camelyon-16 dataset using our proposed approach.
Left: Full WSI at zoom level 6 with ground truth annotation of metastases shown via black border.
Tiles with positive feature embeddings are colored from white to red according to their magnitude, with red representing the largest magnitude.
Right: Detail of metastases at zoom level 2 overlaid with classification output of our proposed approach.
Here, the output of all tested tiles are shown and colored according to their value, from blue to white to red, with blue representing the most negative values, and red the most positive.
Tiles without color were not included when randomly selecting tiles for inference.
|
We propose a weakly supervised learning method for the classification and localization of cancers in extremely high resolution histopathology whole slide images using only image-wide labels.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:393
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Massively multi-label prediction/classification problems arise in environments like health-care or biology where it is useful to make very precise predictions.
One challenge with massively multi-label problems is that there is often a long-tailed frequency distribution for the labels, resulting in few positive examples for the rare labels.
We propose a solution to this problem by modifying the output layer of a neural network to create a Bayesian network of sigmoids which takes advantage of ontology relationships between the labels to help share information between the rare and the more common labels.
We apply this method to the two massively multi-label tasks of disease prediction (ICD-9 codes) and protein function prediction (Gene Ontology terms) and obtain significant improvements in per-label AUROC and average precision.
In this paper, we study general techniques for improving predictive performance in massively multilabel classification/prediction problems in which there is an ontology providing relationships between the labels.
Such problems have practical applications in biology, precision health, and computer vision where there is a need for very precise categorization.
For example, in health care we have an increasing number of treatments that are only useful for small subsets of the patient population.
This forces us to create large and precise labeling schemes when we want to find patients for these personalized treatments. One large issue with massively multi-label prediction is that there is often a long-tailed frequency distribution for the labels, with a large fraction of the labels having very few positive examples in the training data.
The corresponding low amount of training data for rare labels makes it difficult to train individual classifiers.
Current multi-task learning approaches enable us to somewhat circumvent this bottleneck by sharing information between the rare and common labels in a manner that enables us to train classifiers even for the data-poor rare labels BID6.
In this paper, we introduce a new method for massively multi-label prediction, a Bayesian network of sigmoids, that helps achieve better performance on rare classes by using ontological information to better share information between the rare and common labels. This method is based on similar ideas found in Bayesian networks and hierarchical softmax BID18. The main distinction between this paper and prior work is that we focus on improving multi-label prediction performance with more complicated directed acyclic graph (DAG) structures between the labels, while previous hierarchical softmax work focuses on improving runtime performance on multi-class problems (where labels are mutually exclusive) with simpler tree structures between the labels.
In order to demonstrate the empirical predictive performance of our method, we test it on two very different massively multi-label tasks. The first is a disease prediction task where we predict ICD-9 (diagnosis) codes from medical record data using the ICD-9 hierarchy to tie the labels together. The second task is a protein function prediction task where we predict Gene Ontology terms BID0 BID5 from sequence information using the Gene Ontology DAG to combine the labels. Our experiments indicate that our new method obtains better average predictive performance on rare labels while maintaining similar performance on common labels.
This paper introduces a new method for improving the performance of rare labels in massively multi-label problems with ontologically structured labels.
Our new method uses the ontological relationships to construct a Bayesian network of sigmoid outputs which enables us to express the probability of rare labels as a product of conditional probabilities of more common higher-level labels.
This enables us to share information between the labels and achieve empirically better performance in both AUROC and average precision for rare labels than flat sigmoid baselines in three separate experiments covering the two very different domains of protein function prediction and disease prediction.
This improvement in performance for rare labels enables us to make more precise predictions for smaller label categories and should be applicable to a variety of tasks that contain an ontology that defines relationships between labels.
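As a concrete illustration of this factorization, the sketch below is our own toy example, not the authors' code: the miniature ontology, the logits, and the function names are invented, and each label is given a single parent for simplicity, whereas the paper handles general DAG-structured ontologies.

import math

# hypothetical toy ontology: child -> parent, with the root mapping to None
PARENT = {"rare_arrhythmia": "cardiac_disease", "cardiac_disease": "disease", "disease": None}

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def label_probability(label, logits, parent=PARENT):
    # P(label) = product over the ancestor chain of sigmoid conditionals,
    # so a rare leaf label borrows statistical strength from its common ancestors
    prob, node = 1.0, label
    while node is not None:
        prob *= sigmoid(logits[node])
        node = parent[node]
    return prob

# toy logits produced by the network's output layer for one example
logits = {"disease": 2.0, "cardiac_disease": 0.5, "rare_arrhythmia": -1.0}
print(label_probability("rare_arrhythmia", logits))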
|
We propose a new method for using ontology information to improve performance on massively multi-label prediction/classification problems.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:394
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Generative Adversarial Networks have made data generation possible in various use cases, but in case of complex, high-dimensional distributions it can be difficult to train them, because of convergence problems and the appearance of mode collapse.
Sliced Wasserstein GANs and especially the application of the Max-Sliced Wasserstein distance made it possible to approximate Wasserstein distance during training in an efficient and stable way and helped ease convergence problems of these architectures.
This method transforms sample assignment and distance calculation into sorting the one-dimensional projections of the samples, which yields a sufficient approximation of the high-dimensional Wasserstein distance.
In this paper we will demonstrate that approximating the Wasserstein distance by sorting the samples is not always the optimal approach, and that a greedy assignment of the real and fake samples can result in faster convergence and a better approximation of the original distribution.
Generative Adversarial Networks (GANs) were first introduced in Goodfellow et al. (2014), where, instead of applying a mathematically well-established loss function, another differentiable neural network, a discriminator, was used to approximate the distance between two distributions.
These methods are popularly applied in data generation and have significantly improved the modelling capabilities of neural networks.
It was demonstrated in various use cases that these approaches can approximate complex high-dimensional distributions in practice Karras et al. (2017) , Yu et al. (2017) , Brock et al. (2018) .
Apart from the theoretical advantage of GANs and applying a discriminator network instead of a distance metric (e.g., an ℓ1 or ℓ2 loss), modelling high-dimensional distributions with GANs often proves to be problematic in practice.
The two most common problems are mode collapse, where the generator gets stuck in a state where only a small portion of the whole distribution is modeled and convergence problems, where either the generator or the discriminator solves his task almost perfectly, providing low or no gradients for training for the other network.
Convergence problems were improved by introducing the Wasserstein distance Gulrajani et al. (2017), which instead of a point-wise distance calculation (e.g., cross-entropy or an ℓ1 distance) calculates a minimal transportation distance (earth mover's distance) between the two distributions.
The approximation and calculation of the Wasserstein distance is complex and difficult in high dimensions: with a large sample size, calculating and minimizing the transport becomes exponentially complex, and the distance can have different magnitudes in the different dimensions.
In Deshpande et al. (2018) it was demonstrated that high-dimensional distributions can be approximated by using a high number of one dimensional projections.
For a selected projection the minimal transport between the one dimensional samples can be calculated by sorting both the real and the fake samples and assigning them according to their sorted indices correspondingly.
As an additional advantage, it was also demonstrated in Deshpande et al. (2018) that instead of the regular mini-max game of adversarial training, the distribution of the real samples could be approximated directly by the generator only, omitting the discriminator and turning training into a simple and more stable minimization problem.
The theory of this novel method is well described and it was demonstrated to work in practice, but unfortunately a large number of projections is needed for complex, high-dimensional distributions.
In Deshpande et al. (2019) it was demonstrated how the high number of random projections could be substituted by a single continuously optimized plane.
The parameters of this projection are optimized in an adversarial manner selecting the "worst" projection, which maximizes separation between the real and fake samples using a surrogate function.
This modification brought the regular adversarial training back and created a mini-max game again, where the generator creates samples which resemble well to the original distribution according to the selected plane and the discriminator tries to find a projection, which separates the real and fake samples from each other.
The essence of Sliced Wasserstein distances is that they provide a method to easily calculate the minimal transportation between the projected samples in one dimension, which approximates the Wasserstein distance in the original high-dimensional space.
In theory this approach is sound and works well in practise.
It was proven in Kolouri et al. (2019) that the sliced Wasserstein distance satisfies the properties of non-negativity, identity of indiscernibles, symmetry, and triangle inequality, this way forming a true metric.
Although this approach approximates high-dimensional distributions well, we would like to demonstrate in this paper that assigning real and fake samples by sorting them in one dimension also has its flaws, and that a greedy assignment approach can perform better on commonly applied datasets.
We would also raise a point regarding the application of the Wasserstein distance itself.
We will demonstrate that in many cases various assignments can result in the same minimal transportation cost during training, and that calculating the Wasserstein distance with sorting can alter perfectly modeled samples even when only a single sample differs from the approximated distribution.
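To make the contrast concrete, the sketch below is our own toy illustration, not the authors' implementation: the sample values are invented, and the particular greedy rule (repeatedly matching the globally closest remaining pair) is one possible instantiation of the idea.

def sorted_assignment(real, fake):
    # classic 1D optimal-transport pairing: sort both sets and match by rank
    return list(zip(sorted(fake), sorted(real)))

def greedy_assignment(real, fake):
    # repeatedly match the globally closest (fake, real) pair and remove both
    real, fake = list(real), list(fake)
    pairs = []
    while fake:
        i, j = min(((i, j) for i in range(len(fake)) for j in range(len(real))),
                   key=lambda ij: abs(fake[ij[0]] - real[ij[1]]))
        pairs.append((fake.pop(i), real.pop(j)))
    return pairs

real = [0.0, 1.0]   # projected real samples (toy values)
fake = [0.6, 2.0]   # projected generated samples (toy values)
print(sorted_assignment(real, fake))   # [(0.6, 0.0), (2.0, 1.0)]
print(greedy_assignment(real, fake))   # [(0.6, 1.0), (2.0, 0.0)]

In this toy case the sorting-based pairing relocates the fake sample at 0.6 away from its nearby real sample at 1.0, whereas the greedy pairing keeps that well-placed sample matched and concentrates the error on the single outlier.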
In this paper we have introduced greedy sample assignment for Max-Sliced Wasserstein GANs.
We have shown that, with one-dimensional samples, multiple assignments can often result in optimal transportation, and that in most cases sorting changes all of the samples, even though those parts of the distribution which are already in a "good" position should not generate error.
We proposed greedy assignment as a possible solution, where samples will be assigned to their most similar counterparts.
We have also introduced how the two methods can be combined, resulting in a hybrid approach in which the assignment to be used can be selected automatically, based on the difference of the two measures.
We have demonstrated on simple toy datasets that greedy assignment performs better than sorting the samples and we have evaluated both the greedy and the hybrid methods on commonly investigated datasets (MNIST and CelebA).
With all datasets the greedy assignment resulted in lower Kullback-Leibler divergence and higher correlation than the traditional approach.
We have used the Max-Sliced Wasserstein distance as the basis of our comparison, since it is the most recent variant and also yields the best performance, but all of the approaches can be exploited with regular Sliced Wasserstein distances as well.
Also, our approach changes only the distance calculation, and it can be applied together with various other improved techniques and architectures used in GAN training.
|
We apply a greedy assignment on the projected samples instead of sorting to approximate Wasserstein distance
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:395
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Lifelong machine learning focuses on adapting to novel tasks without forgetting the old tasks, whereas few-shot learning strives to learn a single task given a small amount of data.
These two different research areas are crucial for artificial general intelligence; however, their existing studies have assumed somewhat impractical settings when training the models.
For lifelong learning, the nature (or the quantity) of incoming tasks during inference time is assumed to be known at training time.
As for few-shot learning, it is commonly assumed that a large number of tasks is available during training.
Humans, on the other hand, can perform these learning tasks without regard to the aforementioned assumptions.
Inspired by how the human brain works, we propose a novel model, called the Slow Thinking to Learn (STL), that makes sophisticated (and slightly slower) predictions by iteratively considering interactions between current and previously seen tasks at runtime.
Having conducted experiments, the results empirically demonstrate the effectiveness of STL for more realistic lifelong and few-shot learning settings.
Deep Learning has been successful in various applications.
However, it still has many areas to improve on before reaching humans' lifelong learning ability.
As one of its drawbacks, neural networks (NNs) need to be trained on large datasets before giving satisfactory performance.
Additionally, they usually suffer from the problem of catastrophic forgetting (McCloskey & Cohen (1989); French (1999) )-a neural network performs poorly on old tasks after learning a novel task.
In contrast, humans are able to incorporate new knowledge even from few examples, and continually throughout much of their lifetime.
To bridge this gap between machine and human abilities, effort has been made to study few-shot learning (Fei-Fei et al. (2006) ; Lake et al. (2011); Santoro et al. (2016) ; Vinyals et al. (2016) ; Snell et al. (2017) ; Ravi & Larochelle (2017b) ; Finn et al. (2017) ; ; Garcia & Bruna (2018) ; Qi et al. (2018) ), lifelong learning (Gepperth & Karaoguz (2016) ; Rusu et al. (2016) ; Kirkpatrick et al. (2017) ; Yoon et al. (2018) ; ; ; Serrà et al. (2018) ; Schwarz et al. (2018) ; Sprechmann et al. (2018) ; Riemer et al. (2019) ), and both (Kaiser et al. (2017) ).
The learning tasks performed by humans are, however, more complicated than the settings used by existing lifelong and few-shot learning works.
Task uncertainty: currently, lifelong learning models are usually trained with hyperparameters (e.g., number of model weights) optimized for a sequence of tasks arriving at test time.
The knowledge about future tasks (even their quantity) may be a too strong assumption in many real-world applications, yet without this knowledge, it is hard to decide the appropriate model architecture and capacity when training the models.
Sequential few-shot tasks: existing few-shot learning models are usually (meta-)trained using a large collection of tasks.
Unfortunately, this collection is not available in the lifelong learning scenarios where tasks come in sequentially.
Without seeing many tasks at training time, it is hard for an existing few-shot model to learn the shared knowledge behind the tasks and use the knowledge to speed up the learning of a novel task at test time.
Humans, on the other hand, are capable of learning well despite having only limited information and/or even when not purposely preparing for a particular set of future tasks.
Comparing how humans learn and think to how the current machine learning models are trained to learn and make predictions, we observe that the key difference lies on the part of thinking, which is the decision-making counterpart of models when making predictions.
While most NN-based supervised learning models use a single forward pass to predict, humans make careful and less error-prone decisions in a more sophisticated manner.
Studies in biology, psychology, and economics (Parisi et al. (2019) ; Kahneman & Egan (2011) ) have shown that, while humans make fast predictions (like machines) when dealing with daily familiar tasks, they tend to rely on a slow-thinking system that deliberately and iteratively considers interactions between current and previously learned knowledge in order to make correct decisions when facing unfamiliar or uncertain tasks.
We hypothesize that this slow, effortful, and less error-prone decision-making process can help bridge the gap of learning abilities between humans and machines.
We propose a novel brain-inspired model, called the Slow Thinking to Learn (STL), for taskuncertain lifelong and sequential few-shot machine learning tasks.
STL has two specialized but dependent modules, the cross-task Slow Predictor (SP) and per-task Fast Learners (FLs), that output lifelong and few-shot predictions, respectively.
We show that, by making the prediction process of SP more sophisticated (and slightly slower) at runtime, the learning process of all modules can be made easy at training time, eliminating the need to fulfill the aforementioned impractical settings.
Note that the techniques for slow predictions (Finn et al. (2017) ; Ravi & Larochelle (2017b) ; Nichol & Schulman (2018) ; Sprechmann et al. (2018) ) and fast learning (McClelland et al. (1995) ; Kumaran et al. (2016) ; Kaiser et al. (2017) ) have already been proposed in the literature.
Our contributions lie in that we
1) explicitly model and study the interactions between these two techniques, and
2) demonstrate, for the first time, how such interactions can greatly improve machine capability to solve the joint lifelong and few-shot learning problems encountered by humans everyday.
2 Slow Thinking to Learn (STL)
Figure 1: The Slow Thinking to Learn (STL) model.
To model the interactions between the shared SP f and the per-task FLs {(g^(t), M^(t))}_t, we feed the output of the FLs into the SP while simultaneously letting the FLs learn from the feedback given by the SP.
We focus on a practical lifelong and few-shot learning set-up (Problem 1): with tasks T^(1), T^(2), · · · arriving in sequence and the labeled examples of each task also coming in sequence, the goal is to design a model such that it can be properly trained by the data collected up to any given time point s, and then make correct predictions for unlabeled data X^(t) = {x^(t,i)}_i in any of the seen tasks, t ≤ s.
Note that, at training time s, the future tasks are unknown.
To solve Problem 1, we propose the Slow Thinking to Learn (STL) model, whose architecture is shown in Figure 1.
The STL is a cascade where the shared Slow Predictor (SP) network f, parameterized by θ, takes the output of multiple task-specific Fast Learners (FLs) {(g^(t), M^(t))}_t, t ≤ s, as input.
An FL for task T^(t) consists of an embedding network g^(t), parameterized by φ^(t), that is augmented with an external, episodic, non-parametric memory M^(t).
Here, we use the Memory Module (Kaiser et al. (2017)) as the external memory, which saves clusters of the seen examples {(x^(t,i), y^(t,i))}_i to achieve better storage efficiency: the h^(t,j) of an entry (h^(t,j), v^(t,j)) denotes the embedding of a cluster of x^(t,i)'s with the same label, while v^(t,j) denotes the shared label.
We use the FL (g^(t), M^(t)) and the SP f to make few-shot and lifelong predictions for task T^(t), respectively.
We let the number of FLs grow with the number of seen tasks in order to ensure that the entire STL model has enough complexity to learn from a possibly endless stream of tasks over its lifetime.
This does not imply that the SP will consume unbounded memory space to make predictions at runtime, as the FL for a specific task can be stored on a hard disk and loaded into the main memory only when necessary.
Slow Predictions.
The FL predicts the label of a test instance x' using a single feedforward pass, just like most existing machine learning models.
As shown in Figure 2(a), the FL for task T^(t) first embeds the instance to get h' = g^(t)(x') and then predicts the label ŷ_FL of x' by averaging the cluster labels over KNN(h'), the set of K nearest neighboring embeddings of h' stored in M^(t), where each label v^(t,j) is weighted by ⟨h^(t,j), h'⟩, the cosine similarity between h^(t,j) and h'.
On the other hand, the SP predicts the label of x' with a slower, iterative process, which is shown in Figure 2(b).
The SP first adapts (i.e., fine-tunes) its weights θ to KNN(h') and their corresponding values stored in M^(t) to get θ̃' by solving
θ̃' = argmin_θ̃ Σ_{(h^(t,j), v^(t,j)) ∈ KNN(h')} loss(f(h^(t,j); θ̃), v^(t,j)),   (1)
where loss(·) denotes a loss function.
Then, the SP makes a prediction by ŷ_SP = f(h'; θ̃').
The adapted network f_θ̃' is discarded after making the prediction.
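The sketch below is our own minimal illustration of this fast/slow split, not the authors' code: the array shapes, the squared loss, the number of adaptation steps, and the learning rate are assumptions, and the memory values are treated as scalar regression targets for simplicity.

import numpy as np

def fl_predict(h, memory_h, memory_v, k=3):
    # Fast Learner: cosine-similarity weighted vote over the k nearest memory entries
    sims = memory_h @ h / (np.linalg.norm(memory_h, axis=1) * np.linalg.norm(h) + 1e-8)
    nn = np.argsort(-sims)[:k]
    w = sims[nn]
    return (w @ memory_v[nn]) / (w.sum() + 1e-8)

def sp_predict(h, memory_h, memory_v, theta, k=3, steps=5, lr=0.1):
    # Slow Predictor: fine-tune a copy of theta on the k nearest entries, then predict
    sims = memory_h @ h / (np.linalg.norm(memory_h, axis=1) * np.linalg.norm(h) + 1e-8)
    nn = np.argsort(-sims)[:k]
    theta_t = theta.copy()
    for _ in range(steps):   # a few gradient steps on a squared loss over the neighbors
        grad = 2 * memory_h[nn].T @ (memory_h[nn] @ theta_t - memory_v[nn]) / k
        theta_t -= lr * grad
    return h @ theta_t       # the adapted weights are then discarded

rng = np.random.default_rng(0)
memory_h = rng.normal(size=(20, 8))   # stored cluster embeddings (toy)
memory_v = rng.normal(size=20)        # stored cluster values (toy)
theta = np.zeros(8)
h = rng.normal(size=8)                # embedding of a test instance
print(fl_predict(h, memory_h, memory_v), sp_predict(h, memory_h, memory_v, theta))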
The slower decision-making process of SP may seem unnecessary and wasteful of computing resources at first glance.
Next, we explain why it is actually a good bargain.
Life-Long Learning with Task Uncertainty.
Since the SP makes predictions after runtime adaptation, we define the training objective of θ for task T^(s) such that it minimizes the losses after being adapted for each seen task (Eq. (2)).
The term loss(f(h; θ̃*), v) in this objective denotes the empirical slow-prediction loss of the adapted SP on an example (x, y) remembered in M^(t), where θ̃* denotes the weights of the adapted SP for x obtained by following Eq. (1).
Optimizing Eq. (2) therefore requires recursively solving for θ̃* for each (x, y) remembered by the FLs.
We use an efficient gradient-based approach proposed by Finn et al. (2017) to solve Eq. (2).
Please refer to Section 2.1 of the Appendix for more details.
Since the SP learns from the output of the FLs, the θ̃* in Eq. (2) approximates a hypothesis used by an FL to predict the label of x.
The θ, after being trained, will be close to every θ̃* and can be fine-tuned to become such a hypothesis, meaning that θ encodes the invariant principles underlying the hypotheses for different tasks.
Figure 3: The relative positions between the invariant representation θ and the approximate hypotheses θ̃^(t) of the FLs for different tasks T^(t) on the loss surface defined by the FLs after seeing the (a) first, (b) second, and (c) third task. Since ||θ − θ̃^(t)|| ≤ R for any t in Eq. (2), the effective capacity of the SP (at runtime) is the union of the capacity of all possible points within the dashed R-circle centered at θ. Furthermore, after being sequentially trained by two tasks using Eq. (3), θ will easily get stuck in the middle of θ̃^(1) and θ̃^(2). To solve the third task, the third FL needs to change its embedding function (and therefore the loss surface) such that θ̃^(3) falls into the R-circle centered at θ.
Recall that in Problem 1, the nature of tasks arriving after a training process is unknown, thus, it is hard to decide the right model capacity at training time.
A solution to this problem is to use an expandable network (Rusu et al. (2016) ; Yoon et al. (2018) ) and expand the network when training it for a new task, but the number of units to add during each expansion remains unclear.
Our STL works around this problem by not letting the SP learn the tasks directly, but instead making it learn the invariant principles behind the tasks.
Assuming that the underlying principles of the learned hypotheses for different tasks are universal and relatively simple, one only needs to choose a model architecture whose capacity is enough to learn the shared principles in a lifelong manner.
Note that limiting the capacity of SP at training time does not imply underfitting.
As shown in Figure 3, the post-adaptation capacity of the SP at runtime can be much larger than the capacity decided during training.
Sequential Few-Shot Learning.
Although each FL is augmented with an external memory that has been shown to improve learning efficiency by the theory of complementary learning systems (McClelland et al. (1995) ; Kumaran et al. (2016) ), it is not sufficient for FLs to perform few-shot predictions.
Normally, these models need to be trained on many existing few-shot tasks in order to obtain good performance at test time.
Without assuming s in Problem 1 to be a large number, the STL takes a different approach that quickly stabilizes θ and then lets the FL for a new incoming task learn a good hypothesis by extrapolating from θ.
We define the training objective of g^(s), which is parameterized by φ^(s) and augmented with memory M^(s), for the current task T^(s) as the sum of two terms (Eq. (3)): an empirical loss term, whose specific form depends on the type of external memory used (see Section 2.2 of the Appendix for more details), and a regularization term, which we call the feedback term, whose inverse value denotes the usefulness of the FL in helping the SP (f parameterized by θ) adapt.
The feedback term encourages each FL to learn unique and salient features for the respective task so the SP will not be confused by two tasks having similar embeddings.
As shown in Figure 3(b), the relative position of θ easily gets "stuck" after seeing a few previous tasks.
To solve the current task, g^(s) needs to change the loss surface for θ such that θ̃^(s) falls into the R-circle centered at θ (Figure 3(c)).
This makes θ an efficient guide (through the feedback term) to finding g^(s) when there are only a few examples and also only a few previous tasks.
We use an alternate training procedure to train the SP and FLs.
Please see Section 2.3 of the Appendix for more details.
Note that when sequentially training STL for task T^(s) in a lifelong manner, we can safely discard the data of the previous tasks, because the FLs are task-specific (see Eq. (3)) and the SP does not require raw examples to train (see Eq. (2)).
Inspired by the thinking process that humans undergo when making decisions, we propose STL, a cascade of per-task FLs and shared SP.
To the best of our knowledge, this is the first work that studies the interactions between the fast-learning and slow-prediction techniques and shows how such interactions can greatly improve machine capability to solve the joint lifelong and few-shot learning problems under challenging settings.
For future works, we will focus on integrating the STL with different types of external memory and studying the performance of STL in real-world deployments.
|
This paper studies the interactions between the fast-learning and slow-prediction models and demonstrate how such interactions can improve machine capability to solve the joint lifelong and few-shot learning problems.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:396
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Hypernetworks are meta neural networks that generate weights for a main neural network in an end-to-end differentiable manner.
Despite extensive applications ranging from multi-task learning to Bayesian deep learning, the problem of optimizing hypernetworks has not been studied to date.
We observe that classical weight initialization methods like Glorot & Bengio (2010) and He et al. (2015), when applied directly to a hypernet, fail to produce weights for the mainnet in the correct scale.
We develop principled techniques for weight initialization in hypernets, and show that they lead to more stable mainnet weights, lower training loss, and faster convergence.
Meta-learning describes a broad family of techniques in machine learning that deals with the problem of learning to learn.
An emerging branch of meta-learning involves the use of hypernetworks, which are meta neural networks that generate the weights of a main neural network to solve a given task in an end-to-end differentiable manner.
Hypernetworks were originally introduced by Ha et al. (2016) as a way to induce weight-sharing and achieve model compression by training the same meta network to learn the weights belonging to different layers in the main network.
Since then, hypernetworks have found numerous applications including but not limited to: weight pruning (Liu et al., 2019) , neural architecture search (Brock et al., 2017; , Bayesian neural networks (Krueger et al., 2017; Ukai et al., 2018; Pawlowski et al., 2017; Henning et al., 2018; Deutsch et al., 2019) , multi-task learning (Pan et al., 2018; Shen et al., 2017; Klocek et al., 2019; Serrà et al., 2019; Meyerson & Miikkulainen, 2019) , continual learning (von Oswald et al., 2019) , generative models (Suarez, 2017; Ratzlaff & Fuxin, 2019) , ensemble learning (Kristiadi & Fischer, 2019) , hyperparameter optimization (Lorraine & Duvenaud, 2018) , and adversarial defense (Sun et al., 2017) .
Despite the intensified study of applications of hypernetworks, the problem of optimizing them to this day remains significantly understudied.
Given the lack of principled approaches to training hypernetworks, prior work in the area is mostly limited to ad-hoc approaches based on trial and error (c.f. Section 3).
For example, it is common to initialize the weights of a hypernetwork by sampling a "small" random number.
Nonetheless, these ad-hoc methods do lead to successful hypernetwork training primarily due to the use of the Adam optimizer (Kingma & Ba, 2014) , which has the desirable property of being invariant to the scale of the gradients.
However, even Adam will not work if the loss diverges (i.e. integer overflow) at initialization, which will happen in sufficiently big models.
The normalization of badly scaled gradients also results in noisy training dynamics where the loss function suffers from bigger fluctuations during training compared to vanilla stochastic gradient descent (SGD).
Wilson et al. (2017) showed that while adaptive optimizers like Adam may exhibit lower training error, they fail to generalize as well to the test set as non-adaptive gradient methods.
Moreover, Adam incurs a computational overhead and requires 3X the amount of memory for the gradients compared to vanilla SGD.
Small random number sampling is reminiscent of early neural network research (Rumelhart et al., 1986) before the advent of classical weight initialization methods like Xavier init (Glorot & Bengio, 2010) and Kaiming init (He et al., 2015) .
Since then, a big lesson learned by the neural network optimization community is that architecture-specific initialization schemes are important to the robust training of deep networks, as shown recently in the case of residual networks (Zhang et al., 2019).
In fact, weight initialization for hypernetworks was recognized as an outstanding open problem by prior work (Deutsch et al., 2019) that had questioned the suitability of classical initialization methods for hypernetworks.
Our results. We show that when classical methods are used to initialize the weights of hypernetworks, they fail to produce mainnet weights in the correct scale, leading to exploding activations and losses.
This is because classical network weights transform one layer's activations into another, while hypernet weights have the added function of transforming the hypernet's activations into the mainnet's weights.
Our solution is to develop principled techniques for weight initialization in hypernetworks based on variance analysis.
The hypernet case poses unique challenges.
For example, in contrast to variance analysis for classical networks, the case for hypernetworks can be asymmetrical between the forward and backward pass.
The asymmetry arises when the gradient flow from the mainnet into the hypernet is affected by the biases, whereas in general, this does not occur for gradient flow in the mainnet.
This underscores again why architecture specific initialization schemes are essential.
We show both theoretically and experimentally that our methods produce hypernet weights in the correct scale.
Proper initialization mitigates exploding activations and gradients or the need to depend on Adam.
Our experiments reveal that it leads to more stable mainnet weights, lower training loss, and faster convergence.
Section 2 briefly covers the relevant technical preliminaries and Section 3 reviews problems with the ad-hoc methods currently deployed by hypernetwork practitioners.
We derive novel weight initialization formulae for hypernetworks in Section 4, empirically evaluate our proposed methods in Section 5, and finally conclude in Section 6.
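As a rough numerical illustration of the scale problem (our own sketch, not the paper's derivation: the layer sizes, the embedding, and the target variance are assumptions), one can compare the variance of mainnet weights generated by a classically initialized hypernet against the fan-in variance a classical initializer would aim for, and then rescale the hypernet's output layer empirically.

import numpy as np

rng = np.random.default_rng(0)

# mainnet layer to be generated: fan_in inputs -> fan_out outputs
fan_in, fan_out = 256, 256
target_var = 2.0 / fan_in   # Kaiming fan-in target for a ReLU mainnet

# a one-layer hypernet mapping an embedding z to the flattened mainnet weight matrix,
# itself initialized with a classical fan-in scheme
d_embed = 64
z = rng.normal(size=d_embed)
H = rng.normal(size=(fan_in * fan_out, d_embed)) * np.sqrt(1.0 / d_embed)

W_main = (H @ z).reshape(fan_out, fan_in)
print("generated weight variance:", W_main.var(), "target:", target_var)

# illustrative fix: rescale the hypernet output so the generated weights hit the target
H_scaled = H * np.sqrt(target_var / W_main.var())
print("after rescaling:", (H_scaled @ z).reshape(fan_out, fan_in).var())

The two printed variances differ by roughly two orders of magnitude before rescaling, which is the kind of mismatch that produces exploding activations; the hyperfan-in and hyperfan-out formulae developed in the paper derive the correct scaling analytically rather than by empirical rescaling.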
In all our experiments, hyperfan-in and hyperfan-out both lead to successful hypernetwork training with SGD.
We did not find a good reason to prefer one over the other (similar to He et al. (2015) 's observation in the classical case for fan-in and fan-out init).
For a long time, the promise of deep nets to learn rich representations of the world was left unfulfilled due to the inability to train these models.
The discovery of greedy layer-wise pre-training (Hinton et al., 2006; Bengio et al., 2007) and later, Xavier and Kaiming init, as weight initialization strategies to enable such training was a pivotal achievement that kickstarted the deep learning revolution.
This underscores the importance of model initialization as a fundamental step in learning complex representations.
In this work, we developed the first principled weight initialization methods for hypernetworks, a rapidly growing branch of meta-learning.
We hope our work will spur momentum towards the development of principled techniques for building and training hypernetworks, and eventually lead to significant progress in learning meta representations.
Other non-hypernetwork methods of neural network generation (Stanley et al., 2009; Koutnik et al., 2010) can also be improved by considering whether their generated weights result in exploding activations and how to avoid that if so.
|
The first principled weight initialization method for hypernetworks
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:397
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
For bidirectional joint image-text modeling, we develop variational hetero-encoder (VHE) randomized generative adversarial network (GAN), a versatile deep generative model that integrates a probabilistic text decoder, probabilistic image encoder, and GAN into a coherent end-to-end multi-modality learning framework.
VHE randomized GAN (VHE-GAN) encodes an image to decode its associated text, and feeds the variational posterior as the source of randomness into the GAN image generator.
We plug three off-the-shelf modules, including a deep topic model, a ladder-structured image encoder, and StackGAN++, into VHE-GAN, which already achieves competitive performance.
This further motivates the development of VHE-raster-scan-GAN that generates photo-realistic images in not only a multi-scale low-to-high-resolution manner, but also a hierarchical-semantic coarse-to-fine fashion.
By capturing and relating hierarchical semantic and visual concepts with end-to-end training, VHE-raster-scan-GAN achieves state-of-the-art performance in a wide variety of image-text multi-modality learning and generation tasks.
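To make the data flow concrete, the schematic below is entirely our own illustration: every module is a placeholder with invented shapes, standing in for the probabilistic image encoder, the probabilistic text decoder, and the GAN image generator described above.

import numpy as np

rng = np.random.default_rng(0)

def image_encoder(image):
    # placeholder probabilistic encoder: mean and std of q(z | image)
    return image.mean() * np.ones(16), 0.1 * np.ones(16)

def text_decoder(z):
    # placeholder probabilistic text decoder conditioned on z (e.g., a topic model)
    return z @ rng.normal(size=(16, 100))   # unnormalized word scores

def image_generator(z):
    # placeholder GAN generator that consumes z as its source of randomness
    return np.tanh(z @ rng.normal(size=(16, 64 * 64))).reshape(64, 64)

image = rng.normal(size=(64, 64))
mu, sigma = image_encoder(image)      # variational posterior q(z | image)
z = mu + sigma * rng.normal(size=16)  # reparameterized posterior sample
word_scores = text_decoder(z)         # decode the associated text from z
fake_image = image_generator(z)       # the same posterior sample drives the GAN
print(word_scores.shape, fake_image.shape)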
|
A novel Bayesian deep learning framework that captures and relates hierarchical semantic and visual concepts, performing well on a variety of image and text modeling and generation tasks.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:398
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Current classical planners are very successful in finding (non-optimal) plans, even for large planning instances.
To do so, most planners rely on a preprocessing stage that computes a grounded representation of the task.
Whenever the grounded task is too big to be generated (i.e., whenever this preprocess fails) the instance cannot even be tackled by the actual planner.
To address this issue, we introduce a partial grounding approach that grounds only a projection of the task, when complete grounding is not feasible.
We propose a guiding mechanism that, for a given domain, identifies the parts of a task that are relevant to find a plan by using off-the-shelf machine learning methods.
Our empirical evaluation attests that the approach is capable of solving planning instances that are too big to be fully grounded.
Given a model of the environment, classical planning attempts to find a sequence of actions that lead from an initial state to a state that satisfies a set of goals.
Planning models are typically described in the Planning Domain Definition Language (PDDL) BID16 ) in terms of predicates and action schemas with arguments that can be instantiated with a set of objects.
However, most planners work on a grounded representation without free variables, like STRIPS BID4 or FDR BID1 .
Grounding is the process of translating a task in the lifted (PDDL) representation to a grounded representation.
It requires computing all valid instantiations that assign objects to the arguments of predicates and action parameters, even though only a small fraction of these instantiations might be necessary to solve the task. The size of the grounded task is exponential in the number of arguments in predicates and action schemas.
Although this number of arguments is constant for all tasks of a given domain, and grounding can be done in polynomial time, grounding may still be prohibitive when the number of objects is large and/or some predicates or actions have many parameters. The success of planners like FF BID9 or LAMA BID24 in finding plans for large planning tasks is undeniable.
However, since most planners rely on grounding for solving a task, they fail without even starting the search for a plan whenever an instance cannot be grounded, making grounding a bottleneck for the success of satisficing planners. Grounding is particularly challenging in open multi-task environments, where the planning task is automatically generated with all available objects even if only a few of them are relevant to achieve the goals.
For example, in robotics, the planning task may contain all objects with which the robot may interact even if they are not needed BID13 ).
In network-security environments, like the one modeled in the Caldera domain BID17 , the planning task may contain all details about the network.
However, to the best of our knowledge, no method exists that attempts to focus the grounding on the relevant parts of the task. We propose partial grounding, where, instead of instantiating the full planning task, we focus on the parts that are required to find a plan.
The approach is sound -if a plan is found for the partially grounded task then it is a valid plan for the original task -but incomplete -the partially grounded task will only be solvable if the operators in at least one plan have been grounded.
To do so, we give priority to operators that we deem more relevant to achieve the goal.
Inspired by relational learning approaches to domain control knowledge (e.g., BID31 , BID3 , BID11 ), we use machine learning methods to predict the probability that a given operator belongs to a plan.
We learn from small training instances, and generalize to larger ones by using relational features in standard classification and regression algorithms (e.g., BID12 ).
As an alternative model, we also experiment with relational trees to learn the probabilities BID18.
Empirical results show that our learning models can predict which operators are relevant with high accuracy in several domains, leading to a very strong reduction of task size when grounding and solving huge tasks.
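The queue-and-budget loop below is a simplified rendering of this idea in our own words, not the authors' grounder: it ignores the incremental reachability bookkeeping a real grounding procedure performs, and the operator strings, relevance scores, and budget are invented for illustration.

import heapq

def partially_ground(candidates, relevance, budget):
    # ground only the candidate operators deemed most relevant, up to a budget
    queue = [(-relevance(op), op) for op in candidates]   # max-priority via negated score
    heapq.heapify(queue)
    grounded = []
    while queue and len(grounded) < budget:
        _, op = heapq.heappop(queue)
        grounded.append(op)   # instantiate this operator only
    return grounded

# toy candidates with scores from a hypothetical learned relevance model
ops = ["drive(t1,a,b)", "drive(t1,b,c)", "load(p1,t1,a)", "load(p2,t7,d)"]
scores = {"drive(t1,a,b)": 0.9, "drive(t1,b,c)": 0.7, "load(p1,t1,a)": 0.8, "load(p2,t7,d)": 0.1}
print(partially_ground(ops, scores.get, budget=3))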
In this paper, we proposed an approach to partial grounding of planning tasks, to deal with tasks that cannot be fully grounded under the available time and memory resources.
Our algorithm heuristically guides the grounding process giving preference to operators that are deemed most relevant for solving the task.
To determine which operators are relevant, we train different machine learning models using optimal plans from small instances of the same domain.
We consider two approaches, a direct application of relational decision trees, and using relational features with standard classification and regression algorithms.
The empirical results show the effectiveness of the approach.
In most domains, the learned models are able to identify which operators are relevant with high accuracy, helping to reduce the number of grounded operators by several orders of magnitude, and greatly increasing coverage in large instances.
Figure 3 : The scatter plots show the number of operators of a fully grounded task on the x-axis.
The y-axis shows the number of operators that are needed to make the goal reachable in the grounding (leftmost two columns), and the number of operators that are needed to solve the task (rightmost two columns), for several priority functions.
|
This paper introduces partial grounding to tackle the problem that arises when the full grounding process, i.e., the translation of a PDDL input task into a ground representation like STRIPS, is infeasible due to memory or time constraints.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:399
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Recent efforts on combining deep models with probabilistic graphical models are promising in providing flexible models that are also easy to interpret.
We propose a variational message-passing algorithm for variational inference in such models.
We make three contributions.
First, we propose structured inference networks that incorporate the structure of the graphical model in the inference network of variational auto-encoders (VAE).
Second, we establish conditions under which such inference networks enable fast amortized inference similar to VAE.
Finally, we derive a variational message passing algorithm to perform efficient natural-gradient inference while retaining the efficiency of the amortized inference.
By simultaneously enabling structured, amortized, and natural-gradient inference for deep structured models, our method simplifies and generalizes existing methods.
To analyze real-world data, machine learning relies on models that can extract useful patterns.
Deep Neural Networks (DNNs) are a popular choice for this purpose because they can learn flexible representations.
Another popular choice are probabilistic graphical models (PGMs) which can find interpretable structures in the data.
Recent work on combining these two types of models hopes to exploit their complementary strengths and provide powerful models that are also easy to interpret BID10 BID14 BID0 BID3.
To apply such hybrid models to real-world problems, we need efficient algorithms that can extract useful structure from the data. However, the two fields of deep learning and PGMs traditionally use different types of algorithms. For deep learning, stochastic-gradient methods are the most popular choice, e.g., those based on back-propagation. These algorithms are not only widely applicable, but can also employ amortized inference to enable fast inference at test time BID17 BID12. On the other hand, most popular algorithms for PGMs exploit the model's graphical conjugacy structure to gain computational efficiency, e.g., variational message passing (VMP) BID18, expectation propagation BID16, Kalman filtering BID4 BID5, and more recently natural-gradient variational inference BID9 and stochastic variational inference BID8. In short, the two fields of deep learning and probabilistic modelling employ fundamentally different inferential strategies, and a natural question is whether we can design algorithms that combine their respective strengths.
There have been several attempts to design such methods in recent years, e.g., BID14; BID3; BID0; BID10; BID2. Our work in this paper is inspired by the previous work of BID10 that aims to combine message-passing, natural-gradient, and amortized inference. Our proposed method simplifies and generalizes the method of BID10.
To do so, we propose Structured Inference Networks (SIN) that incorporate the PGM structure in the standard inference networks used in variational auto-encoders (VAE) BID12 BID17. We derive conditions under which such inference networks can enable fast amortized inference similar to VAE. By using a recent VMP method of BID11, we derive a variational message-passing algorithm whose messages automatically reduce to stochastic gradients for the deep components of the model, while performing natural-gradient updates for the PGM part. Overall, our algorithm enables Structured, Amortized, and Natural-gradient (SAN) updates, and therefore we call our algorithm the SAN algorithm. We show that our algorithm gives comparable performance to the method of BID10 while simplifying and generalizing it. The code to reproduce our results is available at https://github.com/emtiyaz/vmp-for-svae/.
Figure: The generative models are just like the decoder in VAE but they employ a structured prior, e.g., Fig. (a) has a mixture-model prior while Fig. (b) has a dynamical-system prior. SINs, just like the encoder in VAE, mimic the structure of the generative model by using parameters φ. One main difference is that in SIN the arrows between y_n and x_n are reversed compared to the model, while the rest of the arrows have the same direction.
We propose an algorithm to simplify and generalize the algorithm of BID10 for models that contain both deep networks and graphical models.
Our proposed VMP algorithm enables structured, amortized, and natural-gradient updates given that the structured inference networks satisfy two conditions.
The two conditions derived in this paper generally hold for PGMs that do not force dense correlations in the latent variables x.
However, it is not clear how to extend our method to models where this is the case, e.g., Gaussian process models.
It is possible to use ideas from sparse Gaussian process models and we will investigate this in the future.
An additional issue is that our results are limited to small scale data.
We found that it is non-trivial to implement a message-passing framework that goes well with the deep learning framework.
We are going to pursue this direction in the future and investigate good platforms to integrate the capabilities of these two different flavors of algorithms.
|
We propose a variational message-passing algorithm for models that contain both the deep model and probabilistic graphical model.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:4
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We leverage recent insights from second-order optimisation for neural networks to construct a Kronecker factored Laplace approximation to the posterior over the weights of a trained network.
Our approximation requires no modification of the training procedure, enabling practitioners to estimate the uncertainty of their models currently used in production without having to retrain them.
We extensively compare our method to using Dropout and a diagonal Laplace approximation for estimating the uncertainty of a network.
We demonstrate that our Kronecker factored method leads to better uncertainty estimates on out-of-distribution data and is more robust to simple adversarial attacks.
Our approach only requires calculating two square curvature factor matrices for each layer.
Their size is equal to the respective square of the input and output size of the layer, making the method efficient both computationally and in terms of memory usage.
We illustrate its scalability by applying it to a state-of-the-art convolutional network architecture.
Neural networks are most commonly trained in a maximum a posteriori (MAP) setting, which only yields point estimates of the parameters, ignoring any uncertainty about them.
This often leads to overconfident predictions, especially in regimes that are weakly covered by training data or far away from the data manifold.
While the confidence of wrong predictions is usually irrelevant in a research context, it is essential that a Machine Learning algorithm knows when it does not know in the real world, as the consequences of mistakes can be fatal, be it when driving a car or diagnosing a disease.
The Bayesian framework of statistics provides a principled way for avoiding overconfidence in the parameters by treating them as unknown quantities and integrating over all possible values.
Specifically, for the prediction of new data under a model, it fits a posterior distribution over the parameters given the training data and weighs the contribution of each setting of the parameters to the prediction by the probability of the data under those parameters times their prior probability.
However, the posterior of neural networks is usually intractable due to their size and nonlinearity.
There has been previous interest in integrating neural networks into the Bayesian framework BID26 BID15 BID28 BID1 , however these approaches were designed for small networks by current standards.
Recent adaptations to architectures of modern scale rely on crude approximations of the posterior to become tractable.
All of BID9 BID14 BID2 assume independence between the individual weights.
While they achieve good results on small datasets, this strong restriction of the posterior is susceptible to underestimating the uncertainty, in particular when optimising the variational bound.
The approach in BID6 requires the use of certain stochastic regularisers which are not commonly present in most recent architectures.
Furthermore, it is not clear if the approximate posterior defined by these regularisers is a good fit to the true posterior.
Recent work on second-order optimisation of neural networks BID27 BID3 has demonstrated that the diagonal blocks of the curvature can be well approximated by a Kronecker product.
We combine this insight with the idea of modelling the posterior over the weights as a Gaussian, using a Laplace approximation BID26 with Kronecker factored covariance matrices.
This leads to a computationally efficient matrix normal posterior distribution BID11 over the weights of every layer.
Since the Laplace approximation is applied after training, our approach can be used to obtain uncertainty estimates from existing networks.
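As a rough illustration of what such a matrix normal posterior buys in practice, the sketch below samples weights for a single layer given two Kronecker-factored curvature factors; the factor names, the damping term tau and the Cholesky-based sampling are assumptions made for the sketch, not the paper's exact procedure.

```python
# Minimal sketch of sampling one layer's weights from a matrix normal posterior
# N(W | W_MAP, A^{-1}, B^{-1}) built from Kronecker-factored curvature factors;
# names and the damping term are illustrative assumptions.
import numpy as np

def sample_matrix_normal(w_map, a_factor, b_factor, tau=1e-2, n_samples=10):
    """w_map: (d_in, d_out) MAP weights of one layer.
    a_factor: (d_in, d_in) input-side curvature factor.
    b_factor: (d_out, d_out) output-side curvature factor."""
    d_in, d_out = w_map.shape
    # Damped factors play the role of row/column precision matrices here.
    a_inv = np.linalg.inv(a_factor + tau * np.eye(d_in))
    b_inv = np.linalg.inv(b_factor + tau * np.eye(d_out))
    la = np.linalg.cholesky(a_inv)
    lb = np.linalg.cholesky(b_inv)
    samples = []
    for _ in range(n_samples):
        e = np.random.randn(d_in, d_out)
        samples.append(w_map + la @ e @ lb.T)   # W = M + L_A E L_B^T
    return samples
```

Predictions with uncertainty would then be obtained by averaging the network outputs over such weight samples for every layer.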
We presented a scalable approximation to the Laplace approximation for the posterior of a neural network and provided experimental results suggesting that the uncertainty estimates are on par with current alternatives like Dropout, if not better.
It enables practitioners to obtain principled uncertainty estimates from their models, even if they were trained in a maximum likelihood/MAP setting.
There are many possible extensions to this work.
One would be to automatically determine the scale and regularisation hyperparameters of the Kronecker factored Laplace approximation using the model evidence similar to how BID26 interpolates between the data log likelihood and the width of the prior.
The model evidence could further be used to perform Bayesian model averaging on ensembles of neural networks, potentially improving their generalisation ability and uncertainty estimates.
A challenging application would be active learning, where only little data is available relative to the number of curvature directions that need to be estimated.
|
We construct a Kronecker factored Laplace approximation for neural networks that leads to an efficient matrix normal distribution over the weights.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:40
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this paper, we introduce a novel method to interpret recurrent neural networks (RNNs), particularly long short-term memory networks (LSTMs) at the cellular level.
We propose a systematic pipeline for interpreting individual hidden state dynamics within the network using response characterization methods.
The ranked contribution of individual cells to the network's output is computed by analyzing a set of interpretable metrics of their decoupled step and sinusoidal responses.
As a result, our method is able to uniquely identify neurons with insightful dynamics, quantify relationships between dynamical properties and test accuracy through ablation analysis, and interpret the impact of network capacity on a network's dynamical distribution.
Finally, we demonstrate generalizability and scalability of our method by evaluating a series of different benchmark sequential datasets.
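As an illustration of the kind of decoupled probing described above, the sketch below feeds a step and a sinusoidal input to a trained PyTorch LSTM and records per-cell hidden-state trajectories from which response metrics can be computed; the sequence length, input scaling and the example metrics are illustrative assumptions rather than the authors' exact pipeline.

```python
# Minimal sketch of probing per-cell step and sinusoidal responses of a trained
# LSTM; all parameter values and metric choices are illustrative assumptions.
import torch

def cell_responses(lstm, seq_len=200, input_size=1, amp=1.0, freq=0.05):
    """Return hidden-state trajectories (seq_len, hidden_size) for a step input
    and a sinusoidal input, from which per-cell response metrics can be computed."""
    t = torch.arange(seq_len, dtype=torch.float32)
    step = amp * torch.ones(seq_len, 1, input_size)
    step[: seq_len // 4] = 0.0                          # step turns on at t = T/4
    sine = amp * torch.sin(2 * torch.pi * freq * t).view(seq_len, 1, input_size)
    with torch.no_grad():
        h_step, _ = lstm(step)                          # (seq_len, 1, hidden)
        h_sine, _ = lstm(sine)
    return h_step.squeeze(1), h_sine.squeeze(1)

# Example per-cell metrics: steady-state value of the step response and
# peak-to-peak amplitude of the sinusoidal response.
h_step, h_sine = cell_responses(torch.nn.LSTM(1, 32))
steady_state = h_step[-20:].mean(dim=0)
sine_amplitude = h_sine.max(dim=0).values - h_sine.min(dim=0).values
```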
In this paper, we proposed a method for response characterization for LSTM networks to predict cell-contributions to the overall decision of a learned network on both the cell and network-level resolution.
We further verified and validated our predictions by performing an ablation analysis to identify cells which contribute heavily to the network's output decision with our simple response characterization method.
The resulting method establishes a novel building block for interpreting LSTM networks.
The LSTM network's dynamic-space is broad and cannot be fully captured by fundamental input sequences.
However, our methodology demonstrates that practical sub-regions of dynamics are reachable by response metrics which we use to build a systematic testbench for LSTM interpretability.
We have open-sourced our algorithm to encourage other researchers to further explore the dynamics of LSTM cells and interpret the kinetics of their sequential models.
In the future, we aim to extend our approach to even more data modalities and analyze the training phase of LSTMs to interpret the learning of the converged dynamics presented in this work.
|
Introducing the response charactrization method for interpreting cell dynamics in learned long short-term memory (LSTM) networks.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:400
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Decades of research on the neural code underlying spatial navigation have revealed a diverse set of neural response properties.
The Entorhinal Cortex (EC) of the mammalian brain contains a rich set of spatial correlates, including grid cells which encode space using tessellating patterns.
However, the mechanisms and functional significance of these spatial representations remain largely mysterious.
As a new way to understand these neural representations, we trained recurrent neural networks (RNNs) to perform navigation tasks in 2D arenas based on velocity inputs.
Surprisingly, we find that grid-like spatial response patterns emerge in trained networks, along with units that exhibit other spatial correlates, including border cells and band-like cells.
All these different functional types of neurons have been observed experimentally.
The order of the emergence of grid-like and border cells is also consistent with observations from developmental studies.
Together, our results suggest that grid cells, border cells and others as observed in EC may be a natural solution for representing space efficiently given the predominant recurrent connections in the neural circuits.
Understanding the neural code in the brain has long been driven by studying feed-forward architectures, starting from Hubel and Wiesel's famous proposal on the origin of orientation selectivity in primary visual cortex BID19 .
Inspired by the recent development in deep learning BID25 BID30 BID18 BID39 , there has been a burst of interest in applying deep feedforward models, in particular convolutional neural networks (CNN) BID29 , to study the sensory systems, which hierarchically extract useful features from sensory inputs (see e.g., BID61 ; BID24 ; BID22 ; BID60 ).
For more cognitive tasks, neural systems often need to maintain certain internal representations of relevant variables in the absence of external stimuli, a process that requires more than feature extraction.
We will focus on spatial navigation, which typically requires the brain to maintain a representation of self-location and update it according to the animal's movements and landmarks of the environment.
Physiological studies done in rodents and other mammals (including humans, non-human primates and bats) have revealed a variety of neural correlates of space in Hippocampus and Entorhinal Cortex (EC), including place cells BID41 , grid cells BID10 BID15 BID11 BID62 BID23 BID20 , along with border cells BID49 , band-like cells BID27 and others (see FIG0 ).
In particular, each grid cell only fires when the animal occupies a distinct set of physical locations, and strikingly these locations lie on a lattice.
The study of the neural underpinning of spatial cognition has provided an important window into how high-level cognitive functions are supported in the brain BID0 .
How might the spatial navigation task be solved using a network of neurons?
Recurrent neural networks (RNNs) BID18 BID12 BID43 BID54 BID13 BID53 seem particularly useful for these tasks.
Indeed, recurrent-based continuous attractor networks have been one popular type of models proposed for the formation of grid cells BID4 BID5 and place cells BID45 .
Such models have provided valuable insights into one set of possible mechanisms that could support the formation of the grids.
However, these models typically rely on fine-tuned connectivity patterns; in particular, the models need a subtle yet systematic asymmetry in the connectivity pattern to move the attractor state according to the animal's own movement.
The existence of such a specific 2D connectivity in rodent EC remains unclear.
Additionally, previous models have mainly focused on grid cells, while other types of responses that co-exist in the Entorhinal Cortex have been largely ignored.
It would be useful to have a unified model that can simultaneously explain different types of neural responses in EC.
Motivated by these considerations, here we present an alternative modeling approach for understanding the representation of space in the neural system.
Specifically, we trained an RNN to perform spatial navigation tasks.
By leveraging the recent development in RNN training and knowledge of the navigation system in the brain, we show that training an RNN with biologically relevant constraints naturally gives rise to a variety of spatial response profiles as observed in EC, including grid-like responses.
To our knowledge, this is the first study to show that grid-like responses could emerge from training an RNN to perform navigation.
Our result implies that the neural representation in EC may be seen as a natural way for the brain to solve the navigation task efficiently BID55 .
More generally, it suggests that RNNs can be a powerful tool for understanding the neural mechanisms of certain high-level cognitive functions.
(FIG0 caption, partially recovered: a) a cell recorded when an animal navigates in a square environment, replotted from BID27 , with the heat map representing the firing rate of this neuron as a function of the animal's location (red corresponds to high firing rate); a "band-like" cell from BID27 ; a border cell from BID49 ; an irregular spatially tuned cell from BID7 ; a "speed cell" from BID26 , which exhibits roughly linear dependence on the rodent's running speed; and a "heading direction cell" from BID46 , which shows systematic change of firing rate depending on the animal's heading direction. b) The network consists of N = 100 recurrently connected units (or neurons) which receive two external inputs, representing the animal's speed and heading direction; the two outputs linearly weight the neurons in the RNN, and the goal of training is to make the responses of the two output neurons accurately represent the animal's physical location. c) Typical trajectory after training: the output of the RNN can accurately, though not perfectly, track the animal's location during navigation.)
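A minimal sketch of the training setup just described (velocity-like inputs, a recurrently connected population, and a linear readout trained to report location) is given below; the trajectory generator, the ReLU nonlinearity, the optimizer settings and the activity penalty standing in for the metabolic constraint are all assumptions of the sketch, not the authors' exact configuration.

```python
# Minimal sketch of a path-integration RNN: speed and heading-direction inputs,
# a recurrent population, and a linear readout of (x, y); illustrative only.
import torch
import torch.nn as nn

class PathIntegratorRNN(nn.Module):
    def __init__(self, n_units=100):
        super().__init__()
        self.rnn = nn.RNN(input_size=2, hidden_size=n_units, nonlinearity="relu")
        self.readout = nn.Linear(n_units, 2)   # linear readout of (x, y)

    def forward(self, velocity_inputs):        # (seq_len, batch, 2)
        h, _ = self.rnn(velocity_inputs)
        return self.readout(h), h

def random_walk(seq_len, batch, step=0.02):
    """Generate random (speed, direction) inputs and the integrated positions."""
    theta = 2 * torch.pi * torch.rand(seq_len, batch)
    speed = step * torch.rand(seq_len, batch)
    vel = torch.stack([speed, theta], dim=-1)
    xy = torch.cumsum(torch.stack([speed * torch.cos(theta),
                                   speed * torch.sin(theta)], dim=-1), dim=0)
    return vel, xy

model = PathIntegratorRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):                           # a few illustrative training steps
    vel, xy = random_walk(seq_len=50, batch=16)
    pred, h = model(vel)
    # Mean-squared position error plus an L2 activity penalty as a stand-in for
    # the metabolic regularization discussed in the paper.
    loss = ((pred - xy) ** 2).mean() + 1e-4 * (h ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, spatial tuning of individual units would be assessed by binning each unit's activity as a function of the decoded or true location.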
In this paper, we trained RNNs to perform path integration (dead-reckoning) in 2D arenas.
We found that after training RNNs with appropriate regularization, the model neurons exhibit a variety of spatial and velocity tuning profiles that match neurophysiology in EC.
What's more, there is also similarity in terms of when these distinct neuron types emerge during training/development.
The EC has long been thought to be involved in path integration and localization of the animal's location .
The general agreement between the different response properties in our model and the neurophysiology provides strong evidence supporting the hypothesis that the neural population in EC may provide an efficient code for representing self-location based on the velocity input.
Recently, there has been increased interest in using complex neural network models to understand the neural code.
But the focus has been on using feedforward architectures, in particular CNNs BID29 .
Given the abundant recurrent connections in the brain, it seems a particularly fruitful avenue to take advantage of the recent development in RNNs to help with neuroscience questions BID34 BID50 BID37 BID53 .
Here, we only show one instance following this approach.
However, the insight from this work could be general, and potentially useful for other cognitive functions as well.
The finding that metabolic constraints lead to the emergence of grid-like responses may be seen as conceptually related to the efficient coding hypothesis in visual processing BID1 , in particular the seminal work on the emergence of the V1-like Gabor filters in a sparse coding model by BID42 .
Indeed, our work is partly inspired by these results.
While there are conceptual similarities, however, we should also note there are differences between the sparse coding work and ours.
First, the sparsity constraint in sparse coding can be naturally viewed as a particular prior while in the context of the recurrent network, it is difficult to interpret that way.
Second, the grid-like responses are not the most sparse solution one could imagine.
In fact, they are still quite dense compared to a more spatially localized representation.
Third, the grid-like patterns that emerged in our network are not filters based on the raw input, rather the velocity inputs need to be integrated first in order to encode spatial locations.
Our work is also inspired by recent work using the efficient coding idea to explain the functional architecture of the grid cells BID55 .
It has been shown that efficient coding considerations could explain the particular set of grid scales observed in rodents BID52 .
However, in that work, the firing patterns of the neurons are assumed to have a lattice structure to start with.
Furthermore, our work is related to the study by Sussillo and others BID53 , in which they show that regularization of RNN models are important for generating solutions that are similar to the neural activity observed in motor cortex.
In Sussillo et al., a smoothness constraint together with others lead to simple oscillatory neural dynamics that well matches the neural data.
We have not incorporated a smoothness constraint into our network.
Additionally, we note that there are a few recent studies which use place cells as the input to generate grid cells BID8 BID51 , which are fundamentally different from our work.
In these feedforward network models, the grid cells essentially perform dimensionality reduction based on the spatial input from place cells.
However, the main issue with these models is that, it is unclear how place cells acquire spatial tuning in the first place.
To the contrary, our model takes the animal's velocity as the input, and addresses the question of how the spatial tuning can be generated from such input, which are known to exist in EC BID46 BID26 .
In another related study BID21 , the authors train a RNN with LSTM units BID18 to perform different navigation tasks.
However, no grid-like spatial firing patterns are reported.
Although our model shows a qualitative match to the neural responses observed in the EC, nonetheless it has several major limitations, with each offering interesting future research directions.
First, the learning rule we use seems to be biologically implausible.
We are interested in exploring how a more biologically plausible learning rule could give rise to similar results BID32 BID37 BID14 .
Second, the simulation results do not show a variety of spatial scales in grid-like cells.
Experimentally, it is known that grid cells have multiple spatial scales, that scale geometrically with a ratio 1.4 BID52 , and this particular scale ratio is predicted by efficient coding of space BID55 .
We are investigating how to modify the model to get a hierarchy of spatial scales, perhaps by incorporating more neurons or modifying the regularization.
Last but not least, we have focused on the representation produced by the trained RNN.
An equally important set of questions concern how the networks actually support the generation of such a representation.
As a preliminary effort, we have examined the connectivity patterns of the trained network, and they do not seem to resemble the connectivity patterns required by standard attractor network models.
Maybe this should not be seen as too surprising.
After all, the trained networks can produce a diverse set of neural responses, while the previous models only led to grid responses.
It would be interesting for future work to systematically examine the questions related to the underlying mechanisms.
To quantify the speed selectivity of each unit we first fit a line to the tuning curve of unit activity as a function of speed.
The speed selectivity is the absolute value of the slope.
If the unit activity is not modulated by speed then the speed selectivity is 0.
To quantify the direction selectivity of each unit we calculated the average unit activity as a function of direction input and then took the maximum minus minimum of this tuning curve.
If the unit activity is not modulated by direction then the direction selectivity is 0.
To quantify the spatial selectivity we used lifetime sparseness BID56 .
If the unit activity is not modulated by spatial location then the spatial selectivity is 0.
Each dot in the figures below shows the selectivity for a single unit.
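The three metrics above can be computed with a few lines of NumPy, as in the sketch below; the lifetime-sparseness formula follows the common Vinje-and-Gallant-style definition, which we assume is what BID56 refers to, and all variable names are illustrative.

```python
# Minimal sketch of the speed, direction and spatial selectivity metrics
# described above; the lifetime-sparseness formula is an assumed standard form.
import numpy as np

def speed_selectivity(speeds, activity):
    """Absolute slope of a linear fit of unit activity versus running speed."""
    slope, _ = np.polyfit(speeds, activity, deg=1)
    return abs(slope)

def direction_selectivity(direction_tuning_curve):
    """Max minus min of the mean activity as a function of heading direction."""
    return direction_tuning_curve.max() - direction_tuning_curve.min()

def lifetime_sparseness(rates):
    """Spatial selectivity: 0 for uniform firing over locations, near 1 when the
    unit fires in only a few spatial bins."""
    r = np.asarray(rates, dtype=np.float64)
    n = r.size
    return (1 - (r.mean() ** 2) / (np.mean(r ** 2) + 1e-12)) / (1 - 1 / n)
```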
|
To our knowledge, this is the first study to show how neural representations of space, including grid-like cells and border cells as observed in the brain, could emerge from training a recurrent neural network to perform navigation tasks.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:401
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Source separation for music is the task of isolating contributions, or stems, from different instruments recorded individually and arranged together to form a song.
Such components include voice, bass, drums and any other accompaniments.
While end-to-end models that directly generate the waveform are state-of-the-art in many audio synthesis problems, the best multi-instrument source separation models generate masks on the magnitude spectrum and achieve performances far above current end-to-end, waveform-to-waveform models.
We present an in-depth analysis of a new architecture, which we will refer to as Demucs, based on a (transposed) convolutional autoencoder, with a bidirectional LSTM at the bottleneck layer and skip-connections as in U-Networks (Ronneberger et al., 2015).
Compared to the state-of-the-art waveform-to-waveform model, Wave-U-Net (Stoller et al., 2018), the main features of our approach in addition to the bi-LSTM are the use of transposed convolution layers instead of upsampling-convolution blocks, the use of gated linear units, exponentially growing the number of channels with depth and a new careful initialization of the weights.
Results on the MusDB dataset show that our architecture achieves a signal-to-distortion ratio (SDR) nearly 2.2 points higher than the best waveform-to-waveform competitor (from 3.2 to 5.4 SDR).
This makes our model match the state-of-the-art performances on this dataset, bridging the performance gap between models that operate on the spectrogram and end-to-end approaches.
Cherry first noticed the "cocktail party effect" (Cherry, 1953) : how the human brain is able to separate a single conversation out of a surrounding noise from a room full of people chatting.
Bregman later tried to understand how the brain was able to analyse a complex auditory signal and segment it into higher level streams.
His framework for auditory scene analysis (Bregman, 1990 ) spawned its computational counterpart, trying to reproduce or model accomplishments of the brains with algorithmic means (Wang & Brown, 2006) , in particular regarding source separation capabilities.
When producing music, recordings of individual instruments called stems are arranged together and mastered into the final song.
The goal of source separation is to recover those individual stems from the mixed signal.
Unlike the cocktail party problem, there is not a single source of interest to differentiate from an unrelated background noise, but instead a wide variety of tones and timbres playing in a coordinated way.
In the SiSec Mus evaluation campaign for music separation (Stöter et al., 2018) , those individual stems were grouped into 4 broad categories: (1) drums, (2) bass, (3) other, (4) vocals.
Given a music track which is a mixture of these four sources, also called the mix, the goal is to generate four waveforms that correspond to each of the original sources.
We consider here the case of supervised source separation, where the training data contain music tracks (i.e., mixtures), together with the ground truth waveform for each of the sources.
State-of-the-art approaches in music source separation still operate on the spectrograms generated by the short-time Fourier transform (STFT).
They produce a mask on the magnitude spectrums for each frame and each source, and the output audio is generated by running an inverse STFT on the masked spectrograms reusing the input mixture phase Takahashi et al., 2018) .
Several architectures trained end-to-end to directly synthesize the waveforms have been proposed (Lluís et al., 2018; Jansson et al., 2017) , but their performances are far below the state-of-the-art: in the last SiSec Mus evaluation campaign (Stöter et al., 2018) , the best model that directly predicts waveforms achieves an average signal-to-distortion ratio (SDR) over all four sources of 3.2, against 5.3 for the best approach that predicts spectrogram masks (also see Table 1 in Section 6).
(Figure 1 caption: Mel-spectrogram for a 0.8 second extract of the bass source from the track "Stich Up" of the MusDB test set; from left to right: ground truth, Conv-Tasnet estimate and Demucs estimate. We observe that Conv-Tasnet missed one note entirely.)
An upper bound on the performance of all methods relying on masking spectrograms is given by the SDR obtained when using a mask computed using the ground truth sources spectrograms, for instance the Ideal Ratio Mask (IRM) or the Ideal Binary Mask (IBM) oracles.
For speech source separation, Luo & Mesgarani (2019) proposed Conv-Tasnet, a model that reuses the masking approach of spectrogram methods but learns the masks jointly with a convolutional front-end, operating directly in the waveform domain for both the inputs and outputs.
Conv-Tasnet surpasses both the IRM and IBM oracles.
Our first contribution is to adapt the Conv-Tasnet architecture, originally designed for monophonic speech separation and audio sampled at 8 kHz, to the task of sterephonic music source separation for audio sampled at 44.1 kHz.
Our experiments show that Conv-Tasnet outperforms all previous methods by a large margin, with an SDR of 5.7, but still under the SDR of the IRM oracle at 8.2 (Stöter et al., 2018) .
However, while Conv-Tasnet separates with a high accuracy the different sources, we observed artifacts when listening to the generated audio: a constant broadband noise, hollow instruments attacks or even missing parts.
They are especially noticeable on the drums and bass sources and we give one such example on Figure 1 .
Conv-Tasnet uses an over-complete linear representation on which it applies a mask obtained from a deep convolutional network.
Because both the encoder and decoder are linear, the masking operation cannot synthesize new sounds.
We conjecture that the overlap of multiple instruments sometimes leads to a loss of information that is not reversible by a masking operation.
To overcome the limitations of Conv-Tasnet, our second contribution is to propose Demucs, a new architecture for music source separation.
Similarly to Conv-Tasnet, Demucs is a deep learning model that directly operates on the raw input waveform and generates a waveform for each source.
Demucs is inspired by models for music synthesis rather than masking approaches.
It is a U-net architecture with a convolutional encoder and a decoder based on wide transposed convolutions with large strides inspired by recent work on music synthesis (Défossez et al., 2018) .
The other critical features of the approach are a bidirectional LSTM between the encoder and the decoder, increasing the number of channels exponentially with depth, gated linear units as activation function (Dauphin et al., 2017) which also allow for masking, and a new initialization scheme.
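To make these architectural ingredients concrete, the following PyTorch sketch wires together one strided convolutional encoder layer with a GLU, a BiLSTM bottleneck, and one transposed-convolution decoder layer with a skip connection; the channel sizes, kernel width, stride and layer counts are illustrative assumptions rather than the published configuration.

```python
# Minimal sketch of Demucs-style building blocks; sizes are illustrative only.
import torch
import torch.nn as nn

class DemucsSketch(nn.Module):
    def __init__(self, in_ch=2, hidden=48, kernel=8, stride=4, sources=4):
        super().__init__()
        # Encoder layer: strided Conv1d -> ReLU -> 1x1 Conv doubling channels -> GLU.
        self.encode = nn.Sequential(
            nn.Conv1d(in_ch, hidden, kernel, stride), nn.ReLU(),
            nn.Conv1d(hidden, 2 * hidden, 1), nn.GLU(dim=1))
        # BiLSTM bottleneck followed by a linear layer back to `hidden` channels.
        self.lstm = nn.LSTM(hidden, hidden, num_layers=2, bidirectional=True)
        self.linear = nn.Linear(2 * hidden, hidden)
        # Decoder layer: 1x1 Conv + GLU -> transposed convolution back to waveforms.
        self.decode = nn.Sequential(
            nn.Conv1d(hidden, 2 * hidden, 1), nn.GLU(dim=1),
            nn.ConvTranspose1d(hidden, sources * in_ch, kernel, stride))

    def forward(self, mix):                      # mix: (batch, in_ch, time)
        skip = self.encode(mix)                  # (batch, hidden, t')
        x = skip.permute(2, 0, 1)                # (t', batch, hidden) for the LSTM
        x, _ = self.lstm(x)
        x = self.linear(x).permute(1, 2, 0)      # back to (batch, hidden, t')
        return self.decode(x + skip)             # U-net style skip connection
```

The full model stacks several such encoder/decoder pairs, growing the channel count with depth as described above.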
We present experiments on the MusDB benchmark, which first show that both Conv-Tasnet and Demucs achieve performances significantly better than the best methods that operate on the spectrogram, with Conv-Tasnet being better than Demucs in terms of SDR.
We also perform human evaluations that compare Conv-Tasnet and our Demucs, which show that Demucs has significantly better perceived quality.
The smaller SDR of Demucs is explained by more contamination from other sources.
We also conduct an in-depth ablation study of the Demucs architecture to demonstrate the impact of the various design decisions.
Finally, we carry out additional experiments by adding 150 songs to the training set.
In this experiment, Demucs and TasNet both achieve an SDR of 6.3, suggesting that the gap in terms of SDR between the two models diminishes with more data, making the Demucs approach promising.
The 6.3 points of SDR also set a new state-of-the-art, since it improves on the best previous result of 6.0 on the MusDB test set obtained by training with 800 additional songs.
We discuss in more detail the related work in the next Section.
We then describe the original ConvTasnet model of Luo & Mesgarani (2018) and its adaptation to music source separation.
Our Demucs architecture is detailed in Section 4.
We present the experimental protocol in Section 5, and the experimental results compared to the state-of-the-art in Section 6.
Finally, we describe the results of the human evaluation and the ablation study.
We showed that Conv-Tasnet, a state-of-the-art architecture for speech source separation that predicts masks on a learnt front-end over the waveform domain, achieves state-of-the-art performance for music source separation, improving over all previous spectrogram or waveform domain methods by 0.4 SDR.
While Conv-Tasnet has excellent performance to separate sources, it suffers from noticeable artifacts as confirmed by human evaluations.
We developed an alternative approach, Demucs, that combines the ability to mask over a learnt representation with stronger decoder capacity that allows for audio synthesis.
We conjecture that this can be useful when information is lost in the mix of instruments and cannot simply be recovered by masking.
We show that our approach produces audio of significantly higher quality as measured by mean opinion scores and matches the SDR of Conv-Tasnet when trained with 150 extra tracks.
We believe those results make it a promising alternative to methods based on masking only.
|
We match the performance of spectrogram based model with a model trained end-to-end in the waveform domain
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:402
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Although challenging, strategy profile evaluation in large connected learner networks is crucial for enabling the next wave of machine learning applications.
Recently, $\alpha$-Rank, an evolutionary algorithm, has been proposed as a solution for ranking joint policy profiles in multi-agent systems.
$\alpha$-Rank claimed scalability through a polynomial time implementation with respect to the total number of pure strategy profiles.
In this paper, we formally prove that such a claim is not grounded.
In fact, we show that $\alpha$-Rank exhibits an exponential complexity in number of agents, hindering its application beyond a small finite number of joint profiles.
Realizing such a limitation, we contribute by proposing a scalable evaluation protocol that we title $\alpha^{\alpha}$-Rank.
Our method combines evolutionary dynamics with stochastic optimization and double oracles for \emph{truly} scalable ranking with linear (in number of agents) time and memory complexities.
Our contributions allow us, for the first time, to conduct large-scale evaluation experiments of multi-agent systems, where we show successful results on large joint strategy profiles with sizes in the order of $\mathcal{O}(2^{25})$ (i.e., $\approx \text{$33$ million strategies}$) -- a setting not evaluable using current techniques.
Scalable policy evaluation and learning have been long-standing challenges in multi-agent reinforcement learning (MARL) with two difficulties obstructing progress.
First, joint-strategy spaces exponentially explode when a large number of strategic decision-makers is considered, and second, the underlying game dynamics may exhibit cyclic behavior (e.g. the game of Rock-Paper-Scissor) rendering an appropriate evaluation criteria non-trivial.
Focusing on the second challenge, much work in multi-agent systems followed a game-theoretic treatment proposing fixed-points, e.g., Nash (Nash et al., 1950) equilibrium, as potentially valid evaluation metrics.
Though appealing, such measures are normative only when prescribing behaviors of perfectly rational agents -an assumption rarely met in reality Grau-Moya et al. (2018) ; Wen et al. (2019) .
In fact, many game dynamics have been proven not to converge to any fixed-point equilibria (Hart & Mas-Colell, 2003; Viossat, 2007) , but rather to limit cycles (Palaiopanos et al., 2017; Bowling & Veloso, 2001) .
Apart from these aforementioned inconsistencies, solving for a Nash equilibrium even for "simple" settings, e.g. two-player games is known to be PPAD-complete (Chen & Deng, 2005 ) -a demanding complexity class when it comes to computational requirements.
To address some of the above limitations, recently proposed α-Rank as a graph-based game-theoretic solution to multi-agent evaluation.
α-Rank adopts Markov Conley Chains to highlight the presence of cycles in game dynamics, and attempts to compute stationary distributions as a means for strategy profile ranking.
Though successful in small-scale applications, α-Rank severely suffers in scalability contrary to polynomial time claims made in .
In fact, we show that α-Rank exhibits exponential time and memory complexities shedding light on the small-scale empirical study conducted in , whereby the largest reported game included only four agents with four available strategies each.
In this work, we put forward α α -Rank as a scalable alternative for multi-agent evaluation with linear time and memory demands.
Our method combines numerical optimization with evolutionary game theory for a scalable solver capable of handling large joint spaces with millions of strategy profiles.
To handle even larger profiles, e.g., tens to hundreds of millions, we further introduce an oracle (McMahan et al., 2003) mechanism transforming joint evaluation into a sequence of incremental sub-games with varying sizes.
(Figure 1 caption: Example of population-based evaluation on N = 3 learners, each with 3 strategies and 5 copies. a) Each population obtains a fitness value P_i depending on the strategies chosen; b) mutation strategy (red star); c) the population either keeps its original strategy or adopts the novel strategy.)
Given our algorithmic advancements, we justify our claims in a large-scale empirical study involving systems with O(2^25) possible strategy profiles.
We first demonstrate the computation advantages of α α -Rank on varying size stochastic matrices against other implementations in Numpy, PyTorch, and OpenSpiel .
With these successes, we then consider experiments unsolvable by current techniques.
Precisely, we evaluate multi-agent systems in self-driving and Ising model scenarios each exhibiting a prohibitively-large strategy space (i.e., order of thousands for the former, and tens of millions for the latter).
Here, we again show that α α -Rank is capable of recovering correct strategy ranking in such complex domains.
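The core numerical object throughout is the stationary distribution of a (possibly huge) transition matrix over joint strategy profiles; the sketch below shows the basic power-iteration computation that such a ranking rests on, using a random row-stochastic matrix as a stand-in for the evolutionary transition model, so all sizes and names are illustrative.

```python
# Minimal sketch of ranking strategy profiles by the stationary distribution of
# a row-stochastic transition matrix via power iteration; the random matrix is
# a stand-in for the evolutionary transition model, not the paper's construction.
import numpy as np

def stationary_distribution(T, tol=1e-10, max_iter=10_000):
    """Power iteration on a row-stochastic transition matrix T (n x n)."""
    n = T.shape[0]
    v = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        v_next = v @ T                  # one step of the Markov chain
        if np.abs(v_next - v).sum() < tol:
            break
        v = v_next
    return v_next / v_next.sum()

# Example: rank the joint profiles of a toy 8-profile game by stationary mass.
rng = np.random.default_rng(0)
T = rng.random((8, 8))
T /= T.sum(axis=1, keepdims=True)
pi = stationary_distribution(T)
ranking = np.argsort(-pi)               # profiles sorted by stationary probability
```

The scalability argument of the paper is precisely about avoiding forming and iterating such a matrix explicitly when the number of joint profiles explodes.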
So far, we have presented scalable multi-agent evaluations through stochastic optimization.
We can further boost scalability (to tens of millions of joint profiles) of our method by introducing an oracle mechanism.
The heuristic of oracles was first introduced in solving large-scale zero-sum matrix games (McMahan et al., 2003) .
The idea is to first create a restricted sub-game in which all players are only allowed to play a restricted number of strategies, which are then expanded by incorporating each of the players' best responses to opponents; the sub-game will be replayed with agents' augmented strategy pools before a new round of best responses is found.
The worst-case scenario of introducing oracles would be to solve the original evaluation problem in full size.
The best response is assumed to be given by an oracle that can be simply implemented by a grid search.
Precisely, given the top-ranked profile π^[k] at iteration k, the goal for agent i is to select the optimal π_i^* from the pre-defined strategy pool S_i that maximizes the reward Σ_h r_i(x_h^[k], u_{i,h}, u_{−i,h}), with x_h^[k] denoting the state and (u_{i,h}, u_{−i,h}) denoting the actions from agent i and the opponents, respectively.
The heuristic of solving the full game from restricted sub-games is crucial especially when it is prohibitively expensive to list all joint-strategy profiles, e.g., in scenarios involving tens-of-millions of joint profiles.
For a complete exposition, we summarize the pseudo-code in Algorithm 1.
In the first phase, vanilla α α -Rank is executed (lines 4-9), while in the second (lines 11 -13), α α -Rank with Oracle (if turned on) is computed.
To avoid any confusion, we refer to the latter as α α -Oracle.
Note that even though in two-player zero-sum games the oracle algorithm (McMahan et al., 2003) is guaranteed to converge to the minimax equilibrium, providing valid convergence guarantees for α α -Oracle is an interesting direction for future work.
In this paper, we rather demonstrate the effectiveness of such an approach in a large-scale empirical study as shown in Section 4.
In this paper, we demonstrated that the approach in exhibits exponential time and memory complexities.
We then proposed α α -Rank as a scalable solution for multi-agent evaluation with linear time and memory demands.
In a set of experiments, we demonstrated that our method is truly scalable capable of handling large strategy spaces.
There are a lot of interesting avenues for future research.
First, we plan to theoretically analyze convergence properties of the resulting oracle algorithm, and further introduce policy learning through oracles.
Second, we plan to take our method to the real world by conducting multi-robot experiments.
joint and transition probability matrix T [k] .
The second-smallest eigenvalue of the normalized Laplacian of the graph associated with the Markov chain is given by λ_2 = (min_i s_i − 1) / (Σ_{i=1}^N s_i − N + 1), with s_i denoting the number of strategies of agent i.
Proof : For simplicity we drop round index k in the below derivation.
Notice that the underlying graph G of the constructed Markov chain can be represented as a Cartesian product of N complete graphs, G = K_{s_1} □ K_{s_2} □ ⋯ □ K_{s_N}.
Indeed, two vertices π, π̃ ∈ G are connected by an edge if and only if the corresponding joint strategy profiles differ in at most one individual strategy, i.e., ∃! i ∈ {1, . . . , N} such that π_i ≠ π̃_i and π_{−i} = π̃_{−i}.
Hence, the spectral properties of G can be described in terms of the spectral properties of the K_{s_i} as follows (Barik et al., 2015): the eigenvalues of the unnormalized Laplacian of G are the sums λ_{i_1}(K_{s_1}) + ⋯ + λ_{i_N}(K_{s_N}), with corresponding eigenvectors ϑ_{i_1,1} ⊗ ⋯ ⊗ ϑ_{i_N,N}, where λ_i(K_{s_j}) is the i-th eigenvalue of the unnormalized Laplacian of the complete graph K_{s_j} and ϑ_{i,j} is the corresponding eigenvector.
The spectrum of the unnormalized Laplacian of the complete graph K_{s_i} is given by Spectr(K_{s_i}) = {0, s_i − 1}, and the only eigenvector corresponding to the zero eigenvalue is 1 ∈ R^{s_i}.
Therefore, the minimum non-zero eigenvalue of the unnormalized Laplacian of G is given by min_i s_i − 1.
Finally, due to the fact that G is a regular graph (with the degree of each node equal to Σ_{i=1}^N s_i − N + 1), the smallest non-zero eigenvalue of the normalized Laplacian of G is given by (min_i s_i − 1) / (Σ_{i=1}^N s_i − N + 1).
Given this result, the overall time complexity of the Power Method is bounded by O(n log n).
As for the memory complexity, the Power Method has the same requirements as the PageRank algorithm.
These results imply that the Power Method scales exponentially with the number of agents N, and is therefore inapplicable when N is large.
|
We provide a scalable solution to multi-agent evaluation with linear rate complexity in both time and memory in terms of number of agents
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:403
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep neural networks are known to be annotation-hungry.
Numerous efforts have been devoted to reducing the annotation cost when learning with deep networks.
Two prominent directions include learning with noisy labels and semi-supervised learning by exploiting unlabeled data.
In this work, we propose DivideMix, a novel framework for learning with noisy labels by leveraging semi-supervised learning techniques.
In particular, DivideMix models the per-sample loss distribution with a mixture model to dynamically divide the training data into a labeled set with clean samples and an unlabeled set with noisy samples, and trains the model on both the labeled and unlabeled data in a semi-supervised manner.
To avoid confirmation bias, we simultaneously train two diverged networks where each network uses the dataset division from the other network.
During the semi-supervised training phase, we improve the MixMatch strategy by performing label co-refinement and label co-guessing on labeled and unlabeled samples, respectively.
Experiments on multiple benchmark datasets demonstrate substantial improvements over state-of-the-art methods.
Code is available at https://github.com/LiJunnan1992/DivideMix .
The remarkable success in training deep neural networks (DNNs) is largely attributed to the collection of large datasets with human annotated labels.
However, it is extremely expensive and time-consuming to label extensive data with high-quality annotations.
On the other hand, there exist alternative and inexpensive methods for mining large-scale data with labels, such as querying commercial search engines (Li et al., 2017a) , downloading social media images with tags (Mahajan et al., 2018) , leveraging machine-generated labels (Kuznetsova et al., 2018) , or using a single annotator to label each sample (Tanno et al., 2019) .
These alternative methods inevitably yield samples with noisy labels.
A recent study (Zhang et al., 2017) shows that DNNs can easily overfit to noisy labels and results in poor generalization performance.
Existing methods on learning with noisy labels (LNL) primarily take a loss correction approach.
Some methods estimate the noise transition matrix and use it to correct the loss function (Patrini et al., 2017; Goldberger & Ben-Reuven, 2017) .
However, correctly estimating the noise transition matrix is challenging.
Some methods leverage the predictions from DNNs to correct labels and modify the loss accordingly (Reed et al., 2015; Tanaka et al., 2018) .
These methods do not perform well under high noise ratio as the predictions from DNNs would dominate training and cause overfitting.
To overcome this, Arazo et al. (2019) adopt MixUp augmentation.
Another approach selects or reweights samples so that noisy samples contribute less to the loss (Jiang et al., 2018; Ren et al., 2018) .
A challenging issue is to design a reliable criteria to select clean samples.
It has been shown that DNNs tend to learn simple patterns first before fitting label noise (Arpit et al., 2017) .
Therefore, many methods treat samples with small loss as clean ones (Jiang et al., 2018; Arazo et al., 2019) .
Among those methods, Co-teaching (Han et al., 2018) and Co-teaching+ train two networks where each network selects small-loss samples in a mini-batch to train the other.
Another active area of research that also aims to reduce annotation cost is semi-supervised learning (SSL).
In SSL, the training data consists of unlabeled samples in addition to the labeled samples.
Significant progress has been made in leveraging unlabeled samples by enforcing the model to produce low entropy predictions on unlabeled data (Grandvalet & Bengio, 2004) or consistent predictions on perturbed input (Laine & Aila, 2017; Tarvainen & Valpola, 2017; Miyato et al., 2019) .
Recently, Berthelot et al. (2019) propose MixMatch, which unifies several dominant SSL approaches in one framework and achieves state-of-the-art performance.
Despite the individual advances in LNL and SSL, their connection has been underexplored.
In this work, we propose DivideMix, which addresses learning with label noise in a semi-supervised manner.
Different from most existing LNL approaches, DivideMix discards the sample labels that are highly likely to be noisy, and leverages the noisy samples as unlabeled data to regularize the model from overfitting and improve generalization performance.
The key contributions of this work are:
• We propose co-divide, which trains two networks simultaneously.
For each network, we dynamically fit a Gaussian Mixture Model (GMM) on its per-sample loss distribution to divide the training samples into a labeled set and an unlabeled set.
The divided data is then used to train the other network.
Co-divide keeps the two networks diverged, so that they can filter different types of error and avoid confirmation bias in self-training; a minimal sketch of this loss-based division step is given after this list.
• During SSL phase, we improve MixMatch with label co-refinement and co-guessing to account for label noise.
For labeled samples, we refine their ground-truth labels using the network's predictions guided by the GMM for the other network.
For unlabeled samples, we use the ensemble of both networks to make reliable guesses for their labels.
• We experimentally show that DivideMix significantly advances state-of-the-art results on multiple benchmarks with different types and levels of label noise.
We also provide extensive ablation study and qualitative results to examine the effect of different components.
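As referenced in the first contribution above, the sketch below fits a two-component GMM to per-sample losses with scikit-learn and thresholds the posterior probability of the low-loss component; the loss normalization, the threshold value and the variable names are assumptions of the sketch, not the released code.

```python
# Minimal sketch of the co-divide step: split training samples into clean
# (labeled) and noisy (unlabeled) sets from their per-sample losses.
import numpy as np
from sklearn.mixture import GaussianMixture

def co_divide(losses, p_threshold=0.5):
    """Fit a 2-component GMM to per-sample losses from one network and return a
    boolean mask of samples treated as clean, plus the clean probabilities."""
    losses = np.asarray(losses, dtype=np.float64).reshape(-1, 1)
    # Normalize losses to [0, 1] so the GMM fit is better conditioned.
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)
    gmm = GaussianMixture(n_components=2, max_iter=100, reg_covar=5e-4)
    gmm.fit(losses)
    clean_component = gmm.means_.argmin()   # low-mean component = clean samples
    w_clean = gmm.predict_proba(losses)[:, clean_component]
    return w_clean > p_threshold, w_clean

# Example: the division produced from network A's losses would be used to build
# the labeled/unlabeled sets that train network B, and vice versa.
# is_clean, w = co_divide(per_sample_losses_net_A)
```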
In this paper, we propose DivideMix for learning with noisy labels by leveraging SSL.
Our method trains two networks simultaneously and achieves robustness to noise through dataset co-divide, label co-refinement and co-guessing.
Through extensive experiments across multiple datasets, we show that DivideMix consistently exhibits substantial performance improvements compared to state-of-the-art methods.
For future work, we are interested in incorporating additional ideas from SSL to LNL, and vice versa.
Furthermore, we are also interested in adapting DivideMix to other domains such as NLP.
|
We propose a novel semi-supervised learning approach with SOTA performance on combating learning with noisy labels.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:404
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We present a new algorithm to train a robust neural network against adversarial attacks.
Our algorithm is motivated by the following two ideas.
First, although recent work has demonstrated that fusing randomness can improve the robustness of neural networks (Liu 2017), we noticed that adding noise blindly to all the layers is not the optimal way to incorporate randomness.
Instead, we model randomness under the framework of Bayesian Neural Network (BNN) to formally learn the posterior distribution of models in a scalable way.
Second, we formulate the mini-max problem in BNN to learn the best model distribution under adversarial attacks, leading to an adversarial-trained Bayesian neural net.
Experiment results demonstrate that the proposed algorithm achieves state-of-the-art performance under strong attacks.
On CIFAR-10 with VGG network, our model leads to 14% accuracy improvement compared with adversarial training (Madry 2017) and random self-ensemble (Liu, 2017) under PGD attack with 0.035 distortion, and the gap becomes even larger on a subset of ImageNet.
Deep neural networks have demonstrated state-of-the-art performances on many difficult machine learning tasks.
Despite the fundamental breakthroughs in various tasks, deep neural networks have been shown to be utterly vulnerable to adversarial attacks BID32 BID11 .
Carefully crafted perturbations can be added to the inputs of the targeted model to drive the performances of deep neural networks to chance-level.
In the context of image classification, these perturbations are imperceptible to human eyes but can change the prediction of the classification model to the wrong class.
Algorithms seek to find such perturbations are denoted as adversarial attacks BID5 BID4 BID28 , and some attacks are still effective in the physical world BID17 BID9 .
The inherent weakness of lacking robustness to adversarial examples for deep neural networks brings out security concerns, especially for security-sensitive applications which require strong reliability.
To defend from adversarial examples and improve the robustness of neural networks, many algorithms have been recently proposed BID27 BID37 BID17 BID12 .
Among them, there are two lines of work showing effective results on medium-sized data (e.g., CIFAR-10).
The first line of work uses adversarial training to improve robustness, and the recent algorithm proposed in BID25 has been recognized as one of the most successful defenses, as shown in .
The second line of work adds stochastic components in the neural network to hide gradient information from attackers.
In the black-box setting, stochastic outputs can significantly increase query counts for attacks using finite-difference techniques BID5 , and even in the white-box setting the recent Random Self-Ensemble (RSE) approach proposed by BID23 achieves similar performance to Madry's adversarial training algorithm.
In this paper, we propose a new defense algorithm called Adv-BNN.
The idea is to combine adversarial training and Bayesian networks; although trying BNNs against adversarial attacks is not new (e.g. BID20 BID10 BID30 ), and very recently BID36 also tried to combine Bayesian learning with adversarial training, this is the first time the problem is scaled to complex data, and our approach achieves better robustness than previous defense methods.
The contributions of this paper can be summarized below:
• Instead of adding randomness to the input of each layer (as is done in RSE), we directly assume all the weights in the network are stochastic and conduct training with techniques commonly used in Bayesian Neural Networks (BNN).
• We propose a new mini-max formulation to combine adversarial training with BNN, and show the problem can be solved by alternating between projected gradient descent and SGD.
• We test the proposed Adv-BNN approach on CIFAR10, STL10 and ImageNet143 datasets, and show significant improvement over previous approaches including RSE and adversarial training.
Notations: A neural network parameterized by weights w ∈ R^d is denoted by f(x; w), where x ∈ R^p is an input example and y is the corresponding label; the training/testing dataset is D_tr/te with size N_tr/te, respectively.
When necessary, we abuse D_tr/te to denote the empirical distribution D_tr/te = (1/N_tr/te) Σ_i δ(x_i)δ(y_i), where δ(·) is the Dirac delta function.
x_o represents the original input and x_adv denotes the adversarial example.
The loss function is represented as ℓ(f(x_i; w), y_i), where i is the index of the data point.
Our approach works for any loss, but we consider the cross-entropy loss in all the experiments.
The adversarial perturbation is denoted as ξ ∈ R^p, and the adversarial example is generated by x_adv = x_o + ξ.
In this paper, we focus on attacks under a norm constraint BID25 , so that ‖ξ‖ ≤ γ.
In order to align with previous works, in the experiments we set the norm to ‖·‖_∞.
The Hadamard product is denoted as ⊙.
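For concreteness, the sketch below shows the inner maximization of the mini-max formulation as a standard ℓ∞ PGD attack in PyTorch, using the 0.035 distortion from the abstract as the default bound; the step size, iteration count and function names are illustrative assumptions, and the outer loop (sampling BNN weights and taking SGD steps on the adversarial loss) is only indicated in the trailing comment.

```python
# Minimal sketch of the inner maximization (PGD under an l_inf constraint),
# assuming a PyTorch classifier `model` and cross-entropy loss; the names and
# hyperparameters here are illustrative, not the authors' code.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, gamma=0.035, alpha=0.007, steps=10):
    """Generate x_adv = x + xi with ||xi||_inf <= gamma by projected gradient ascent."""
    x = x.detach()
    x_adv = x + torch.empty_like(x).uniform_(-gamma, gamma)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - gamma), x + gamma)  # l_inf projection
            x_adv = x_adv.clamp(0.0, 1.0)                              # valid image range
    return x_adv.detach()

# In the outer minimization, weights would be sampled from the BNN posterior and
# updated by SGD on the loss at x_adv, alternating with the PGD step above.
```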
To conclude, we find that although the Bayesian neural network has no defense functionality, when combined with adversarial training, its robustness against adversarial attack increases significantly.
So this method can be regarded as a non-trivial combination of BNN and the adversarial training: robust classification relies on the controlled local Lipschitz value, while adversarial training does not generalize this property well enough to the test set; if we train the BNN with adversarial examples, the robustness increases by a large margin.
Admittedly, our method is still far from the ideal case, and it is still an open problem on what the optimal defense solution will be.
|
We design an adversarial training method to Bayesian neural networks, showing a much stronger defense to white-box adversarial attacks
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:405
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Federated learning distributes model training among a multitude of agents, who, guided by privacy concerns, perform training using their local data but share only model parameter updates, for iterative aggregation at the server.
In this work, we explore the threat of model poisoning attacks on federated learning initiated by a single, non-colluding malicious agent where the adversarial objective is to cause the model to misclassify a set of chosen inputs with high confidence.
We explore a number of strategies to carry out this attack, starting with simple boosting of the malicious agent's update to overcome the effects of other agents' updates.
To increase attack stealth, we propose an alternating minimization strategy, which alternately optimizes for the training loss and the adversarial objective.
We follow up by using parameter estimation for the benign agents' updates to improve on attack success.
Finally, we use a suite of interpretability techniques to generate visual explanations of model decisions for both benign and malicious models and show that the explanations are nearly visually indistinguishable.
Our results indicate that even a highly constrained adversary can carry out model poisoning attacks while simultaneously maintaining stealth, thus highlighting the vulnerability of the federated learning setting and the need to develop effective defense strategies.
Federated learning introduced by BID11 has recently emerged as a popular implementation of distributed stochastic optimization for large-scale deep neural network training.
It is formulated as a multi-round strategy in which the training of a neural network model is distributed between multiple agents.
In each round, a random subset of agents, with local data and computational resources, is selected for training.
The selected agents perform model training and share only the parameter updates with a centralized parameter server, that facilitates aggregation of the updates.
Motivated by privacy concerns, the server is designed to have no visibility into an agent's local data and training process.
The aggregation algorithm is agnostic to the data distribution at the agents.
In this work, we exploit this lack of transparency in the agent updates, and explore the possibility of a single malicious agent performing a model poisoning attack.
The malicious agent's objective is to cause the jointly trained global model to misclassify a set of chosen inputs with high confidence, i.e., it seeks to introduce a targeted backdoor in the global model.
In each round, the malicious agent generates its update by optimizing for a malicious objective different than the training loss for federated learning.
It aims to achieve this by directly optimizing its update for the malicious objective.
However, the presence of a multitude of other agents which are simultaneously providing updates makes this challenging.
Further, the malicious agent must ensure that its update is undetectable as aberrant.
Contributions: To this end, we propose a sequence of model poisoning attacks, with the aim of achieving the malicious objective while maintaining attack stealth.
For each strategy, we consider both attack strength as well as stealth.
We start with malicious update boosting, designed to negate the combined effect of the benign agents, which enables the adversary to achieve its malicious objective with 100% confidence.
However, we show that boosted updates can be detected as aberrant using two measures of stealth, accuracy checking on the benign objective and parameter update statistics.
Observing that the only parameter updates that need to be boosted are those that contribute to the malicious objective, we design an alternating minimization strategy that improves attack stealth.
This strategy alternates between training loss minimization and the boosting of updates for the malicious objective and is able to achieve high success rate on both the benign and malicious objectives.
In addition, we show that estimating the other agents' updates improves attack success rates.
Finally, we use a suite of interpretability techniques to generate visual explanations of the decisions made by a global model with and without a targeted backdoor.
Interestingly, we observe that the explanations are nearly visually indistinguishable.
This establishes the attack stealth along yet another axis of measurement and indicates that backdoors can be inserted without drastic changes in model focus at the input.
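As a rough illustration of the explicit boosting strategy described above, the sketch below shows a malicious agent optimizing its own poisoning objective and scaling the resulting update before sending it to the server. The function `poison_grad_fn`, the number of local steps, and the default boosting factor are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def malicious_update(global_weights, poison_grad_fn, num_agents, lr=0.01, boost=None):
    """Toy sketch of explicit boosting: optimize the attacker's objective locally,
    then scale the update so it survives averaging with the benign agents' updates."""
    boost = num_agents if boost is None else boost
    local = np.copy(global_weights)
    for _ in range(5):                           # a few local steps on the malicious objective
        local -= lr * poison_grad_fn(local)      # gradient of the attacker's loss (assumed given)
    return boost * (local - global_weights)      # boosted update sent to the server

def server_aggregate(global_weights, updates):
    """FedAvg-style aggregation: the server averages the received parameter updates."""
    return global_weights + np.mean(updates, axis=0)
```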
In this paper, we have started an exploration of the vulnerability of multi-party machine learning algorithms such as federated learning to model poisoning adversaries, who can take advantage of the very privacy these models are designed to provide.
In future work, we plan to explore more sophisticated detection strategies at the server, which can provide guarantees against the type of attacker we have considered here.
In particular, notions of distances between weight distributions are promising defensive tools.
Our attacks in this paper demonstrate that federated learning in its basic form is very vulnerable to model poisoning adversaries, as are recently proposed Byzantine resilient aggregation mechanisms.
While detection mechanisms can make these attacks more challenging, they can be overcome, demonstrating that multi-party machine learning algorithms robust to attackers of the type considered here must be developed.
|
Effective model poisoning attacks on federated learning able to cause high-confidence targeted misclassification of desired inputs
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:406
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Despite rapid advances in speech recognition, current models remain brittle to superficial perturbations to their inputs.
Small amounts of noise can destroy the performance of an otherwise state-of-the-art model.
To harden models against background noise, practitioners often perform data augmentation, adding artificially-noised examples to the training set, carrying over the original label.
In this paper, we hypothesize that a clean example and its superficially perturbed counterparts shouldn't merely map to the same class--- they should map to the same representation.
We propose invariant-representation-learning (IRL): At each training iteration, for each training example, we sample a noisy counterpart.
We then apply a penalty term to coerce matched representations at each layer (above some chosen layer).
Our key results, demonstrated on the LibriSpeech dataset are the following:
(i) IRL significantly reduces character error rates (CER) on both `clean' (3.3% vs 6.5%) and `other' (11.0% vs 18.1%) test sets;
(ii) on several out-of-domain noise settings (different from those seen during training), IRL's benefits are even more pronounced.
Careful ablations confirm that our results are not simply due to shrinking activations at the chosen layers.
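As a minimal sketch of the matched-representation idea (not the paper's exact speech-recognition training setup), the snippet below penalizes the distance between a network's activations on a clean example and on its noised counterpart, on top of an ordinary task loss. The assumption that the model returns its hidden activations, the cross-entropy task loss, and the MSE penalty are all illustrative choices.

```python
import torch.nn.functional as F

def irl_loss(model, clean_batch, noisy_batch, targets, penalty_weight=1.0):
    """Sketch of an invariant-representation penalty: run a clean example and its
    noised counterpart through the same network and penalize mismatched activations
    at the chosen layers, in addition to the usual task loss on the clean input.
    `model` is assumed to return (logits, list_of_hidden_activations)."""
    logits_clean, feats_clean = model(clean_batch)
    _, feats_noisy = model(noisy_batch)

    task_loss = F.cross_entropy(logits_clean, targets)
    match_loss = sum(F.mse_loss(fc, fn) for fc, fn in zip(feats_clean, feats_noisy))
    return task_loss + penalty_weight * match_loss
```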
|
In this paper, we hypothesize that superficially perturbed data points shouldn’t merely map to the same class---they should map to the same representation.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:407
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this paper we design a harmonic acoustic model for pitch detection.
This model arranges conventional convolution and sparse convolution in such a way that the global harmonic patterns captured by sparse convolution are composed of a sufficient number of local patterns captured by layers of conventional convolution.
When trained on the MAPS dataset, the harmonic model outperforms all existing pitch detection systems trained on the same dataset.
Most impressively, when trained on MAPS with simple data augmentation, the harmonic model with an LSTM layer on top surpasses an up-to-date, more complex pitch detection system trained on the MAESTRO dataset to which complicated data augmentation is applied and whose training split is an order-of-magnitude larger than the training split of MAPS.
The harmonic model has demonstrated potential to be used for advanced automatic music transcription (AMT) systems.
In this paper we designed a harmonic acoustic model for pitch detection.
This model effectively captures the complex frequency interactions characterizing polyphonic pitched music through conventional convolution and sparse convolution inspired by the harmonic structure of pitched music.
In its pure form without RNN and data augmentation, the harmonic model outperformed most of the existing pitch detection systems.
Most noticeably, when trained on MAPS with data augmentation, the harmonic model with an LSTM layer on top outdid the complex system in Hawthorne et al. (2019) trained on MAESTRO, whose training split is 15 times as large as the training split of MAPS.
Thus, the harmonic model has shown great potential to be used for building advanced AMT systems.
A possible future direction is to exploit complex spectrograms more fully, instead of using only amplitude spectrograms.
A mixture of signal can be inseparable in the real number domain but could be separable in the complex number domain.
Trabelsi et al. (2018) has done some preliminary study in this direction.
However, our own study showed that the technique of deep complex network proposed in Trabelsi et al. (2018) did not yield a performance comparable with that of real networks.
Therefore, definitely more can be done.
|
harmonic acoustic model
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:408
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Learning domain-invariant representation is a dominant approach for domain generalization.
However, previous methods based on domain invariance overlooked the underlying dependency of classes on domains, which is responsible for the trade-off between classification accuracy and the invariance.
This study proposes a novel method, adversarial feature learning under accuracy constraint (AFLAC), which maximizes domain invariance within a range that does not interfere with accuracy.
Empirical validations show that the performance of AFLAC is superior to that of baseline methods, supporting the importance of considering the dependency and the efficacy of the proposed method to overcome the problem.
In supervised learning we typically assume that samples are obtained from the same distribution in training and testing; however, this assumption does not hold in many practical situations, which reduces the classification accuracy for the test data BID20.
One typical situation is domain generalization (DG) BID1 BID18 BID19 BID2 : we have labeled data from several source domains and collectively exploit them such that the trained system generalizes to other unseen, but somewhat similar, target domain(s).
This paper considers DG in the situation where domain d and class y labels are statistically dependent owing to some common latent factor z (FIG0-(c)), which we refer to as domain-class dependency.
For example, the WISDM Activity Prediction dataset (WISDM, BID10 ), where y and d correspond to activities and wearable device users, exhibits this dependency because (1) some activities (e.g., jogging) are strenuous to the extent that some unathletic subjects avoided them (data characteristics), or (2) other activities were added only after the study began and the initial subjects could not perform them (data-collection errors).
The dependency is common in real-world datasets BID23 and a similar setting has been investigated in domain adaptation (DA) studies, but most prior DG studies overlooked the dependency.
Most prior DG methods utilize invariant feature learning (IFL).
IFL attempts to learn feature representation h from input data x which is invariant to d.
When source and target domains have some common structure, IFL prevents the classifier from overfitting to source domains (FIG0).
However, under the dependency, merely imposing the domain invariance can adversely affect the classification accuracy as pointed out by BID21 and illustrated in FIG0 .
Although that trade-off occurs in source domains (because DG uses only source data during optimization), it can also negatively affect the classification performance for target domain(s).
For example, if the target domain has characteristics similar (or, as an extreme case, identical) to those of a certain source domain, giving priority to domain invariance obviously interferes with the DG performance (FIG0).
In this paper, considering that prioritizing domain invariance under the trade-off can negatively affect the DG performance, we propose a novel method, adversarial feature learning under accuracy constraint (AFLAC), which maximizes domain invariance within a range that does not interfere with the classification accuracy (FIG0-(e)), via adversarial training.
Specifically, AFLAC is intended to achieve accuracy-constrained domain invariance, which we define as the maximum H(d|h) value (H denotes entropy) under the condition H(y|x) = H(y|h) (h has as much y information as x).
Empirical validations show that the performance of AFLAC is superior to that of baseline methods, supporting the importance of considering domain-class dependency and the efficacy of the proposed approach for overcoming the issue.
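For orientation, the snippet below sketches the generic domain-adversarial feature learning (IFL) step that methods of this kind build on; it is not AFLAC's accuracy-constrained objective, and the encoder/classifier/discriminator interfaces and the weight `lam` are assumptions made for illustration.

```python
import torch.nn.functional as F

def ifl_encoder_loss(encoder, classifier, domain_disc, x, y, d, lam=1.0):
    """Generic adversarial IFL sketch: the encoder minimizes the class loss while
    maximizing the domain discriminator's loss, pushing the features h to be
    uninformative about the domain d. The discriminator itself would be trained
    separately to minimize the same domain loss."""
    h = encoder(x)
    class_loss = F.cross_entropy(classifier(h), y)   # keep h predictive of the class y
    domain_loss = F.cross_entropy(domain_disc(h), d) # how well h reveals the domain d
    return class_loss - lam * domain_loss            # encoder/classifier objective
```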
In this paper, we proposed a novel method AFLAC, which maximizes domain invariance within a range that does not interfere with classification accuracy on adversarial training.
Empirical validations show the superior DG performance of AFLAC to the baseline methods, supporting the importance of the domain-class dependency in domain generalization tasks and the efficacy of the proposed method for overcoming the issue.
|
Address the trade-off caused by the dependency of classes on domains by improving domain adversarial nets
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:409
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Spectral embedding is a popular technique for the representation of graph data.
Several regularization techniques have been proposed to improve the quality of the embedding with respect to downstream tasks like clustering.
In this paper, we explain on a simple block model the impact of the complete graph regularization, whereby a constant is added to all entries of the adjacency matrix.
Specifically, we show that the regularization forces the spectral embedding to focus on the largest blocks, making the representation less sensitive to noise or outliers.
We illustrate these results on both synthetic and real data, showing how regularization improves standard clustering scores.
Spectral embedding is a standard technique for the representation of graph data (Ng et al., 2002; Belkin & Niyogi, 2002) .
Given the adjacency matrix A ∈ R_+^{n×n} of the graph, it is obtained by solving either the eigenvalue problem:
L X = X Λ,
or the generalized eigenvalue problem:
L X = D X Λ,
where D = diag(A 1_n) is the degree matrix, with 1_n the all-ones vector of dimension n, L = D − A is the Laplacian matrix of the graph, Λ ∈ R^{k×k} is the diagonal matrix of the k smallest (generalized) eigenvalues of L and X ∈ R^{n×k} is the corresponding matrix of (generalized) eigenvectors.
In this paper, we only consider the generalized eigenvalue problem, whose solution is given by the spectral decomposition of the normalized Laplacian matrix L_norm = I − D^{−1/2} A D^{−1/2} (Luxburg, 2007).
The spectral embedding can be interpreted as equilibrium states of some physical systems (Snell & Doyle, 2000; Spielman, 2007; Bonald et al., 2018) , a desirable property in modern machine learning.
However, it tends to produce poor results on real datasets if applied directly on the graph (Amini et al., 2013) .
One reason is that real graphs are most often disconnected due to noise or outliers in the dataset.
In order to improve the quality of the embedding, two main types of regularization have been proposed.
The first artificially increases the degree of each node by a constant factor (Chaudhuri et al., 2012; Qin & Rohe, 2013) , while the second adds a constant to all entries of the original adjacency matrix (Amini et al., 2013; Joseph et al., 2016; Zhang & Rohe, 2018) .
In the practically interesting case where the original adjacency matrix A is sparse, the regularized adjacency matrix is dense but has a so-called sparse + low rank structure, enabling the computation of the spectral embedding on very large graphs (Lara, 2019) .
While (Zhang & Rohe, 2018) explains the effects of regularization through graph conductance and (Joseph et al., 2016) through eigenvector perturbation on the Stochastic Block Model, there is no simple interpretation of the benefits of graph regularization.
In this paper, we show on a simple block model that the complete graph regularization forces the spectral embedding to separate the blocks in decreasing order of size, making the embedding less sensitive to noise or outliers in the data.
Indeed, (Zhang & Rohe, 2018) identified that, without regularization, the cuts corresponding to the first dimensions of the spectral embedding tend to separate small sets of nodes, so-called dangling sets, loosely connected to the rest of the graph.
Our work shows more explicitly that regularization forces the spectral embedding to focus on the largest clusters.
Moreover, our analysis involves some explicit characterization of the eigenvalues, allowing us to quantify the impact of the regularization parameter.
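As a small dense-matrix sketch of the pipeline discussed here (a practical implementation would exploit the sparse + low-rank structure mentioned above), the snippet below adds a constant to every entry of the adjacency matrix and then solves the generalized eigenvalue problem; the specific regularization constant alpha/n is an assumption, one common convention rather than the paper's prescribed value.

```python
import numpy as np
from scipy.linalg import eigh

def regularized_spectral_embedding(A, k=2, alpha=0.1):
    """Sketch of spectral embedding with complete-graph regularization:
    add alpha/n to every entry of the adjacency matrix, then take the k smallest
    generalized eigenvectors of the Laplacian (dense computation, for clarity only)."""
    n = A.shape[0]
    A_reg = A + alpha / n * np.ones((n, n))   # complete-graph regularization
    D = np.diag(A_reg.sum(axis=1))            # degree matrix
    L = D - A_reg                             # Laplacian
    _, X = eigh(L, D)                         # generalized eigenproblem L x = lambda D x
    return X[:, :k]                           # k smallest generalized eigenvectors
```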
The rest of this paper is organized as follows.
Section 2 presents block models and an important preliminary result about their aggregation.
Section 3 presents the main result of the paper, about the regularization of block models, while Section 4 extends this result to bipartite graphs.
Section 5 presents the experiments and Section 6 concludes the paper.
|
Graph regularization forces spectral embedding to focus on the largest clusters, making the representation less sensitive to noise.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:41
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Recent advances in deep generative models have lead to remarkable progress in synthesizing high quality images.
Following their successful application in image processing and representation learning, an important next step is to consider videos.
Learning generative models of video is a much harder task, requiring a model to capture the temporal dynamics of a scene, in addition to the visual presentation of objects.
While recent generative models of video have had some success, current progress is hampered by the lack of qualitative metrics that consider visual quality, temporal coherence, and diversity of samples.
To this end we propose Fréchet Video Distance (FVD), a new metric for generative models of video based on FID.
We contribute a large-scale human study, which confirms that FVD correlates well with qualitative human judgment of generated videos.
Recent advances in deep generative models have lead to remarkable success in synthesizing highquality images (Karras et al., 2018; Brock et al., 2018) .
A natural next challenge is to consider video generation.
This is a much harder task, requiring a model to capture the temporal dynamics of a scene, in addition to the visual presentation of objects.
Generative models of video will enable many applications, including missing-frame prediction (Jiang et al., 2018), improved instance segmentation (Haller & Leordeanu, 2017), or complex (relational) reasoning tasks by conducting inference (Lerer et al., 2016).
While great progress has been made in recent years, video generation models are still in their infancy, and generally unable to synthesize more than a few seconds of video (Babaeizadeh et al., 2017).
Learning a good dynamics model remains a major challenge in generating real world videos.
However, in order to qualitatively measure progress in synthesizing videos, we require metrics that consider visual quality, temporal coherence, and diversity of generated samples.
We contribute Fréchet Video Distance (FVD), a new metric for generative models of video.
FVD builds on the principles underlying Fréchet Inception Distance (FID; Heusel et al. (2017)), which has been successfully applied to images.
We introduce a feature representation that captures the temporal coherence of the content of a video, in addition to the quality of each frame.
Unlike popular metrics such as Peak Signal to Noise Ratio (PSNR) or the Structural Similarity (SSIM; Wang et al. (2004)) index, FVD considers a distribution over videos, thereby avoiding the drawbacks of frame-level metrics (Huynh-Thu & Ghanbari, 2012).
* Both authors contributed equally to this work while interning at Google Brain.
Figure 1: Generated videos by various models ranked according to FVD (lower is better).
We contribute extensive experiments to evaluate FVD, including a large-scale human study which confirms that FVD coincides well with qualitative human judgment of generated videos.
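For concreteness, the snippet below sketches the Fréchet distance between two Gaussians fitted to feature activations, which is the quantity FID computes for image features and FVD computes on features from a pretrained video network; the feature extractor is not shown and the numerical details are simplified.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_gen):
    """Fréchet distance between Gaussians fitted to two sets of feature vectors
    (rows are samples, columns are feature dimensions)."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):              # numerical noise can produce tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_g
    return diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean)
```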
We introduced the Fréchet Video Distance (FVD), a new evaluation metric for generative models of video, and an important step towards better evaluation of models for video generation.
Our experiments confirm that FVD is accurate in evaluating videos that were modified to include static noise, and temporal noise.
More importantly, a large scale human study among generated videos from several recent generative models reveals that FVD consistently outperforms SSIM and PSNR in agreeing with human judgment.
|
We propose FVD: a new metric for generative models of video based on FID. A large-scale human study confirms that FVD correlates well with qualitative human judgment of generated videos.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:410
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Despite advances in deep learning, artificial neural networks do not learn the same way as humans do.
Today, neural networks can learn multiple tasks when trained on them jointly, but cannot maintain performance on learnt tasks when tasks are presented one at a time -- this phenomenon called catastrophic forgetting is a fundamental challenge to overcome before neural networks can learn continually from incoming data.
In this work, we derive inspiration from human memory to develop an architecture capable of learning continuously from sequentially incoming tasks, while averting catastrophic forgetting.
Specifically, our model consists of a dual memory architecture to emulate the complementary learning systems (hippocampus and the neocortex) in the human brain and maintains a consolidated long-term memory via generative replay of past experiences.
We (i) substantiate our claim that replay should be generative, (ii) show the benefits of generative replay and dual memory via experiments, and (iii) demonstrate improved performance retention even for small models with low capacity.
Our architecture displays many important characteristics of the human memory and provides insights on the connection between sleep and learning in humans.
Many machine learning models, when trained sequentially on tasks, forget how to perform the previously learnt tasks.
This phenomenon called catastrophic forgetting is prominent in neural networks BID23 .
Without a way to avert catastrophic forgetting, a learning system needs to store all training data and relearn on it along with new incoming data, when retraining.
Hence, it is an important challenge to overcome in order to enable systems to learn continuously.
BID23 first suggested that the underlying cause of forgetting was the distributed shared representation of tasks via network weights.
Subsequent works attempted to remedy the issue by reducing representational overlap between input representations via activation sharpening algorithms BID17 , orthogonal recoding of inputs BID19 or orthogonal activations at all hidden layers BID24 BID5 .
More recent works have explored activations like dropout BID9 and local winner-takes-all BID36 to create sparse, less correlated feature representations.
But such sparse encodings can be task specific at times and in general act as heuristics to mildly pacify the underlying problem.Further, natural cognitive systems are also connectionist in nature and yet they forget gradually but not 'catastrophically'.
For instance, humans demonstrate gradual systematic forgetting.
Frequently and recently encountered tasks tend to survive much longer in the human memory, while those rarely encountered are slowly forgotten.
Some of the earlier tasks may be seen again, but it is not necessary for them to be retained in memory BID7 .
Hence only sparsifying representations does not solve the problem.
Instead, neuroscientific evidence suggests that humans have evolved mechanisms to separately learn new incoming tasks and consolidate the learning with previous knowledge to avert catastrophic forgetting BID22 BID29 BID7.
Complementary learning systems: BID22 suggested that this separation has been achieved in the human brain via the evolution of two separate areas of the brain, the hippocampus and the neocortex. The neocortex is a long-term memory which specializes in consolidating new information with previous knowledge and gradually learns the joint structure of all tasks and experiences, whereas the hippocampus acts as a temporary memory to rapidly learn new tasks and then slowly transfer the knowledge to the neocortex after acquisition.
Experience replay: Another factor deemed essential for sequential learning is experience replay. BID22; BID29 have emphasized the importance of replayed data patterns in the human brain during sleep and waking rest. BID31; BID32 proposed several replay techniques (a.k.a. pseudopattern rehearsal) to achieve replay, but they involved generating replay data without storing input representations, and our experiments show that they lack the accuracy required for consolidation.
Weight consolidation or freezing: Recent evidence from neuroscience also suggests that the mammalian brain protects knowledge in the neocortex via task-specific consolidation of neural synapses over long periods of time BID37 BID0. Such techniques have recently been employed in progressive neural networks BID34 and Pathnets BID4, both of which freeze neural network weights after learning tasks. BID16 have used the Fisher information matrix (FIM) to slow down learning on network weights which correlate with previously acquired knowledge.
In this paper, we address the catastrophic forgetting problem by drawing inspiration from the above neuroscientific insights and present a method to overcome catastrophic forgetting. More specifically, we propose a dual-memory architecture for learning tasks sequentially while averting catastrophic forgetting. Our model comprises two generative models: a short-term memory (STM) to emulate the human hippocampal system and a long-term memory (LTM) to emulate the neocortical learning system. The STM learns new tasks without interfering with previously learnt tasks in the LTM. The LTM stores all previously learnt tasks and aids the STM in learning tasks similar to previous tasks. During sleep/down-time, the STM generates and transfers samples of learnt tasks to the LTM. These are gradually consolidated with the LTM's knowledge base of previous tasks via generative replay.
Our approach is inspired by the strengths of deep generative models, experience replay and the complementary learning systems literature. We demonstrate our method's effectiveness in averting catastrophic forgetting by sequentially learning multiple tasks. Moreover, our experiments shed light on some characteristics of human memory as observed in the psychology and neuroscience literature.
In this section we show that DGDMN shares some more remarkable characteristics with the human memory and present a discussion of some more related ideas.
Due to space constraints, visualizations of the learnt latent structures when training jointly vs. sequentially have been deferred to appendix A. The hyperparameters of DGDMN (κ and n_STM) have intuitive interpretations and we have provided simple heuristics to choose them without any complex searches (in appendix B).
Resilience to noise and occlusion: We use a VAE to be able to reconstruct representations of samples. Reconstructed images are less noisy and can recover from partial occlusion, which gives our model human-like abilities to recognize objects in noisy, distorted or occluded images. We test our LTM model and an NN model by jointly training on uncorrupted Digits data and testing on noisy and occluded images. We see that the LTM is more robust to noisy and occluded images and exhibits smoother degradation in classification accuracy because of its denoising reconstructive properties (see FIG7).
The choice of the underlying generative model: Our consolidation ability and retention performance rely heavily on the generation and reconstruction ability of the underlying generative model. We chose a VAE for its reconstructive capabilities, but our architecture is agnostic to the choice of the underlying generative model as long as the generator can generate reliable samples and reconstruct incoming samples accurately. Hence, variants of Generative Adversarial Networks (GAN) Goodfellow et al. (2014) like BiGANs BID2, ALI (Dumoulin et al., 2017) and AVB BID25 can also be used for the generative model depending on the modeled domain.
Why use a short-term memory?: Our LTM always learns from STTMs and never from real data, and the STTMs' errors slowly propagate into the LTM and contribute to forgetting. An alternative could be to directly store data from new incoming tasks, consolidate it into the LTM after periodic intervals, and then discard the data. We show the accuracy curves on the Digits dataset for this approach in FIG8. This results in higher retention compared to DGDMN in FIG3 because the LTM now learns from real data. However, this approach is not truly online since recently learnt tasks cannot be used immediately until after a sleep phase. Since the STM's error can be made smaller by using high-capacity generators and classifiers, we suggest using an STM for true online continual learning.
Connections to knowledge distillation: Previous works on (joint) multitask learning have also proposed approaches to learn individual tasks with small networks and then "distilling" them jointly into a larger neural network. Such distillation can sometimes improve performance on individual tasks if they share structure and at other times mitigate inter-task interference due to refinement of learnt functions while distilling BID30. Though we do not use temperature-controlled soft labels while consolidating tasks into the LTM (unlike distillation), we surmise that due to refinement and compression during the consolidation phase, DGDMN is also able to learn joint task structure effectively while mitigating interference between tasks.
Approaches based on synaptic consolidation: Though our architecture draws inspiration from complementary learning systems and experience replay in the human brain, there is also considerable neuroscientific evidence for synaptic consolidation in the human brain (as in EWC). It might be interesting to explore how synaptic consolidation can be incorporated in our dual memory architecture without causing stagnation, and we leave this to future work. We also plan to extend our architecture to learning optimal policies over time via reinforcement learning without explicit replay memories.
In this work, we have developed a model capable of learning continuously on sequentially incoming tasks, while averting catastrophic forgetting.
Our model employs a dual memory architecture to emulate the complementary learning systems (hippocampus and the neocortex) in the human brain and maintains a consolidated long-term memory via generative replay of past experiences.
We have shown that generative replay performs the best for long-term performance retention even for neural networks with small capacity, while demonstrating the benefits of using generative replay and a dual memory architecture via our experiments.
Our model hyperparameters have simple interpretations and can be set without much tuning.
Moreover, our architecture displays remarkable parallels with the human memory system and provides useful insights about the connection between sleep and learning in humans.
Deep Generative Replay (algorithm 1), as described in section 3.1, consolidates new tasks for a DGM with previously learnt tasks.
It first computes sampling fractions for new tasks (η_tasks) and previously learnt tasks (η_gen) and ensures a minimum fraction (κ) per new task (lines 3-6).
Then it computes the number of samples to generate from previous tasks and whether to subsample the incoming task samples to satisfy the memory capacity N_max (lines 7-12).
Finally, it generates the required number of samples from previous tasks, reconstructs all data and trains the DGM on resulting data (lines 13-16).
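The snippet below is a simplified sketch of one such consolidation step, not a transcription of Algorithm 1: the `dgm` interface (generate / reconstruct / train), the treatment of the data as Python lists, and the exact fraction bookkeeping are assumptions made for illustration.

```python
def consolidate(dgm, new_task_data, n_prev_tasks, n_max, kappa=0.05):
    """Sketch of a deep-generative-replay consolidation step: mix real samples of the
    incoming task with samples generated by the memory's own generative model,
    respecting a memory budget n_max and a minimum fraction kappa for the new task."""
    eta_new = max(kappa, 1.0 / (n_prev_tasks + 1))        # fraction reserved for the new task
    eta_gen = 1.0 - eta_new                               # fraction replayed from previous tasks

    n_new = min(len(new_task_data), int(eta_new * n_max)) # subsample incoming data if needed
    n_gen = int(eta_gen * n_max)                          # samples to generate from old tasks

    replay = dgm.generate(n_gen)                          # generative replay of previous tasks
    batch = dgm.reconstruct(new_task_data[:n_new]) + replay  # both assumed to be lists
    dgm.train(batch)                                      # consolidate old and new knowledge
```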
For a dictionary D, ‖D‖ is the total number of tasks in D counting repetitions, while |D| is the total number of tasks without repetitions.
|X| is the number of samples in set X. BID35 have recently proposed a similar idea independently, and BID27 have also employed generative replay in two-layer restricted Boltzmann machines, but they do not describe balancing new and generated samples and cannot recognize repeated tasks (section 4.2).
Their generative replay without a dual memory architecture is costly to train (section 4.3) and a lack of reconstruction for new samples makes their representations less robust to noise and occlusions (section 5).
|
A dual memory architecture inspired from human brain to learn sequentially incoming tasks, while averting catastrophic forgetting.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:411
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Most research on lifelong learning applies to images or games, but not language.
We present LAMOL, a simple yet effective method for lifelong language learning (LLL) based on language modeling.
LAMOL replays pseudo-samples of previous tasks while requiring no extra memory or model capacity.
Specifically, LAMOL is a language model that simultaneously learns to solve the tasks and generate training samples.
When the model is trained for a new task, it generates pseudo-samples of previous tasks for training alongside data for the new task.
The results show that LAMOL prevents catastrophic forgetting without any sign of intransigence and can perform five very different language tasks sequentially with only one model.
Overall, LAMOL outperforms previous methods by a considerable margin and is only 2-3% worse than multitasking, which is usually considered the LLL upper bound.
The source code is available at https://github.com/jojotenya/LAMOL.
The current dominant paradigm for machine learning is to run an algorithm on a given dataset to produce a trained model specifically for a particular purpose; this is isolated learning (Chen & Liu, 2016, p. 150) .
In isolated learning, the model is unable to retain and accumulate the knowledge it has learned before.
When a stream of tasks is to be trained on sequentially, isolated learning faces catastrophic forgetting (McCloskey & Cohen, 1989) due to a non-stationary data distribution that biases the model (left figure of Figure 1).
In contrast, lifelong learning is designed to address a stream of tasks by accumulating interconnected knowledge between learned tasks and retaining the performance of those tasks.
A human easily achieves lifelong learning, but this is nontrivial for a machine; thus lifelong learning is a vital step toward artificial general intelligence.
In this paper, we focus on lifelong language learning, where a machine achieves lifelong learning on a stream of natural language processing (NLP) tasks.
To the best of our knowledge, lifelong language learning has been studied in only a few instances; for sentiment analysis (Chen et al., 2015b; Xia et al., 2017) , conversational agents (Lee, 2017) , word representation learning (Xu et al., 2018) , sentence representation learning (Liu et al., 2019 ), text classification, and question answering (d'Autume et al., 2019) .
However, in all previous work, the tasks in the stream are essentially the same task but in different domains.
To achieve lifelong language learning on fundamentally different tasks, we propose LAMOL -LAnguage MOdeling for Lifelong language learning.
It has been shown that many NLP tasks can be considered question answering (QA) (Bryan McCann & Socher, 2018) .
Therefore, we address multiple NLP tasks with a single model by training a language model (LM) that generates an answer based on the context and the question.
Treating QA as language modeling is beneficial because the LM can be pre-trained on a large number of sentences without any labeling (Radford et al., 2019) ; however, this does not directly solve the problem of LLL.
If we train an LM on a stream of tasks, catastrophic forgetting still occurs.
However, as an LM is intrinsically a text generator, we can use it to answer questions while generating pseudo-samples of the previous task to be replayed later.
Figure 1: Left: After learning Task 2, the learner has already forgotten how to solve Task 1. This is "catastrophic forgetting". Middle: The basic idea of the data-based LLL approach. A generator is learned to generate examples it has seen before. Using the generator, the learner also learns from examples from the previous task to prevent it from forgetting. Right: A language model that simultaneously takes on the roles of learner and generator.
LAMOL is inspired by the data-based approach for LLL in which a generator learns to generate samples in previous tasks (middle of Figure 1 ) (Hanul Shin & Kim, 2017; Kemker & Kanan, 2017) .
In contrast to previous approaches, LAMOL needs no extra generator (right of Figure 1 ).
LAMOL is also similar to multitask training, but the model itself generates data from previous tasks instead of using real data.
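A minimal sketch of this replay loop is given below; the `lm.sample`/`lm.fit` interface, the generation token, and the pseudo-sample ratio are illustrative assumptions rather than the paper's exact implementation.

```python
def train_lamol_on_task(lm, task_data, gamma=0.2, gen_token="__gen__"):
    """Sketch of the LAMOL idea for one incoming task: before training, let the
    language model itself generate pseudo-samples of everything it learned before,
    then train on the mixture of real data (new task) and generated data (old tasks)."""
    n_pseudo = int(gamma * len(task_data))       # pseudo-samples proportional to new data
    pseudo = lm.sample(gen_token, n_pseudo)      # the LM replays its own past tasks
    lm.fit(task_data + pseudo)                   # QA-as-language-modeling training on the mixture
```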
Our main contributions in this paper are:
• We present LAMOL, a simple yet effective method for LLL.
Our method has the advantages of no requirements in terms of extra memory or model capacity.
We also do not need to know how many tasks to train in advance and can always train on additional tasks when needed.
• Experimental results show that our methods outperform baselines and other state-of-the-art methods by a considerable margin and approaches the multitasking upper bound within 2-3%.
• Furthermore, we propose adding task-specific tokens during pseudo-sample generation to evenly split the generated samples among all previous tasks.
This extension stabilizes LLL and is particularly useful when training on a large number of tasks.
• We analyze how different amounts of pseudo-samples affect the final performance of LAMOL, considering results both with and without the task-specific tokens.
• We open-source our code to facilitate further LLL research.
We propose LAMOL, a simple yet effective method for LLL based on language modeling.
A single LM achieves LLL without additional model components and without keeping old examples.
Moreover, any pre-trained LM can be used to leverage a large amount of unlabeled text to improve LLL.
Finally, more tasks can be added whenever needed.
|
Language modeling for lifelong language learning.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:412
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep learning natural language processing models often use vector word embeddings, such as word2vec or GloVe, to represent words.
A discrete sequence of words can be much more easily integrated with downstream neural layers if it is represented as a sequence of continuous vectors.
Also, semantic relationships between words, learned from a text corpus, can be encoded in the relative configurations of the embedding vectors.
However, storing and accessing embedding vectors for all words in a dictionary requires a large amount of space, and may strain systems with limited GPU memory.
Here, we used approaches inspired by quantum computing to propose two related methods, word2ket and word2ketXS, for storing the word embedding matrix during training and inference in a highly efficient way.
Our approach achieves a hundred-fold or more reduction in the space required to store the embeddings with almost no relative drop in accuracy in practical natural language processing tasks.
Modern deep learning approaches for natural language processing (NLP) often rely on vector representation of words to convert discrete space of human language into continuous space best suited for further processing through a neural network.
For a language with vocabulary of size d, a simple way to achieve this mapping is to use one-hot representation -each word is mapped to its own row of a d × d identity matrix.
There is no need to actually store the identity matrix in memory, it is trivial to reconstruct the row from the word identifier.
Word embedding approaches such as word2vec (Mikolov et al., 2013) or GloVe (Pennington et al., 2014) use instead vectors of dimensionality p much smaller than d to represent words, but the vectors are not necessarily extremely sparse nor mutually orthogonal.
This has two benefits: the embeddings can be trained on large text corpora to capture the semantic relationship between words, and the downstream neural network layers only need to be of width proportional to p, not d, to accept a word or a sentence.
We do, however, need to explicitly store the d × p embedding matrix in GPU memory for efficient access during training and inference.
Vocabulary sizes can reach d = 10^5 or 10^6 (Pennington et al., 2014), and the dimensionality of the embeddings used in current systems ranges from p = 300 (Mikolov et al., 2013; Pennington et al., 2014) to p = 1024 (Devlin et al., 2018).
The d × p embedding matrix thus becomes a substantial, often dominating, part of the parameter space of a learning model.
In classical computing, information is stored in bits - a single bit represents an element from the set B = {0, 1}; it can be in one of two possible states.
A quantum equivalent of a bit, a qubit, is fully described by a single two-dimensional complex unit-norm vector, that is, an element from the set C^2.
A state of an n-qubit quantum register corresponds to a vector in C^(2^n).
To have exponential dimensionality of the state space, though, the qubits in the register have to be interconnected so that their states can become entangled; the set of all possible states of n completely separated, independent qubits can be fully represented by C^(2n) instead of C^(2^n).
Entanglement is a purely quantum phenomenon - we can make quantum bits interconnected, so that a state of a two-qubit system cannot be decomposed into states of individual qubits.
We do not see entanglement in classical bits, which are always independent - we can describe a byte by separately listing the state of each of the eight bits.
We can, however, approximate a quantum register classically - store vectors of size m using O(log m) space, at the cost of losing the ability to express all possible m-dimensional vectors that an actual O(log m)-qubit quantum register would be able to represent.
As we show in this paper, the loss of representation power does not have a significant impact on NLP machine learning algorithms that use the approximation approaches to store and manipulate the high-dimensional word embedding matrix.
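As a toy illustration of this idea (a rank-1 tensor-product embedding rather than the full word2ket construction, which sums several such products for extra expressive power), the snippet below stores each embedding as a Kronecker product of small trainable vectors, so the per-word parameter count grows with the number of factors rather than with the embedding dimension. All layer names and sizes are illustrative assumptions.

```python
import torch

class TensorProductEmbedding(torch.nn.Module):
    """Each word's embedding is the Kronecker product of n_factors small vectors:
    n_factors * factor_dim parameters per word instead of factor_dim ** n_factors."""
    def __init__(self, vocab_size, n_factors=4, factor_dim=4):
        super().__init__()
        self.factors = torch.nn.Parameter(torch.randn(vocab_size, n_factors, factor_dim))

    def forward(self, word_ids):
        vecs = self.factors[word_ids]              # (batch, n_factors, factor_dim)
        emb = vecs[:, 0, :]
        for i in range(1, vecs.shape[1]):
            # batched outer product, flattened: the Kronecker product of the factors
            emb = torch.einsum('bi,bj->bij', emb, vecs[:, i, :]).flatten(start_dim=1)
        return emb                                  # (batch, factor_dim ** n_factors)
```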
|
We use ideas from quantum computing to propose word embeddings that utilize far fewer trainable parameters.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:413
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
One of the distinguishing aspects of human language is its compositionality, which allows us to describe complex environments with limited vocabulary.
Previously, it has been shown that neural network agents can learn to communicate in a highly structured, possibly compositional language based on disentangled input (e.g. hand-engineered features).
Humans, however, do not learn to communicate based on well-summarized features.
In this work, we train neural agents to simultaneously develop visual perception from raw image pixels, and learn to communicate with a sequence of discrete symbols.
The agents play an image description game where the image contains factors such as colors and shapes.
We train the agents using the obverter technique where an agent introspects to generate messages that maximize its own understanding.
Through qualitative analysis, visualization and a zero-shot test, we show that the agents can develop, out of raw image pixels, a language with compositional properties, given a proper pressure from the environment.
One of the key requirements for artificial general intelligence (AGI) to thrive in the real world is its ability to communicate with humans in natural language.
Natural language processing (NLP) has been an active field of research for a long time, and the introduction of deep learning BID18 enabled great progress in NLP tasks such as translation, image captioning, text generation and visual question answering Vinyals et al., 2015; BID13 BID10 Serban et al., 2016; BID19 BID0 .
However, training machines in a supervised manner with a large dataset has its limits when it comes to communication.
Supervised methods are effective for capturing statistical associations between discrete symbols (i.e. words, letters).
The essence of communication is more than just predicting the most likely word to come next; it is a means to coordinate with others and potentially achieve a common goal (BID1; BID7; Wittgenstein, 1953).
An alternative path to teaching machines the art of communication is to give them a specific task and encourage them to learn how to communicate on their own.
This approach will encourage the agents to use languages grounded to task-related entities as well as communicate with other agents, which is one of the ways humans learn to communicate BID5.
Recently, there have been several notable works that demonstrated the emergence of communication between neural network agents.
Even though each work produced very interesting results of its own, in all cases communication was either achieved with a single discrete symbol (as opposed to a sequence of discrete symbols) BID8 BID17 or via a continuous value (Sukhbaatar et al., 2016; BID12).
Not only is human communication un-differentiable, but also using a single discrete symbol is quite far from natural language communication.
One of the key features of human language is its compositional nature; the meaning of a complex expression is determined by its structure and the meanings of its constituents BID9.
More recently, BID22 and BID16 trained the agents to communicate in grounded, compositional language.
In both studies, however, inputs given to the agents were hand-engineered features (disentangled input) rather than raw perceptual signals that we receive as humans.
In this work, we train neural agents to simultaneously develop visual perception from raw image pixels, and learn to communicate with a sequence of discrete symbols.
Unlike previous works, our setup poses greater challenges to the agents since visual understanding and discrete communication have to be induced from scratch in parallel.
We place the agents in a two-person image description game, where images contain objects of various color and shape.
Inspired by the pioneering work of BID3, we employ a communication philosophy named obverter to train the agents.
Having its root in the theory of mind (Premack & Woodruff, 1978) and human language development BID21, the obverter technique motivates an agent to search over messages and generate the ones that maximize its own understanding.
The contribution of our work can be summarized as follows:
• We train artificial agents to learn to disentangle raw image pixels and communicate in compositional language at the same time.
• We describe how the obverter technique, a differentiable learning algorithm for discrete communication, could be employed in a communication game with raw visual input.
• We visualize how the agents are perceiving the images and show that they learn to disentangle color and shape without any explicit supervision other than the communication one.
• Experiment results suggest that the agents could develop, out of raw image input, a language with compositional properties, given a proper pressure from the environment (i.e. the image description game).
Finally, while our exposition follows a multi-agent perspective, it is also possible to interpret our results in the single-agent setting.
We are effectively learning a neural network that is able to learn disentangled compositional representations of visual scenes, without any supervision.
Subject to the constraints imposed by their environment, our agents learn disentangled concepts, and how to compose these to form new concepts.
This is an important milestone in the path to AGI.
In this work, we used the obverter technique to train neural network agents to communicate in a two-person image description game.
Through qualitative analysis, visualization and the zero-shot test, we have shown that even though the agents receive raw perception in the form of image pixels, under the right environment pressures, the emerged language had properties consistent with the ones found in compositional languages.
As an evaluation strategy, we followed previous works and focused on assessing the necessary conditions of compositional languages.
However, the exact definition of compositional language is still somewhat debatable, and, to the best of our knowledge, there is no reliable way to mathematically quantify the degree of compositionality of an arbitrary language.
Therefore, in order to encourage active research and discussion among researchers in this domain, we propose for future work, a quantitatively measurable definition of compositionality.
We believe compositionality of a language is not binary (e.g. language A is compositional/not compositional), but a spectrum.
For example, human language has some aspects that are compositional (e.g., syntactic constructions, most morphological combinations) and some that are not (e.g., irregular verb tenses in English, character-level word composition).
It is also important to clearly define grounded language and compositional language.
If one agent says abc (eat red apple) and another says cba (apple red eat), and they both understand each other, are they speaking compositional language?
We believe such questions should be asked and addressed to shape the definition of compositionality.
In addition to the definition/evaluation of compositional languages, there are numerous directions of future work.
Observing the emergence of a compositional language among more than two agents is an apparent next step.
Designing an environment to motivate the agents to disentangle more than two factors is also an interesting direction.
Training agents to consider the context (i.e. pragmatics), such as giving each agent several images instead of one, is another exciting future work.
A EMERGENCE OF GRAMMAR, BID3
In BID3, the author successfully trained neural agents to develop a structured (i.e. grammatical) language using disentangled meaning vectors as the input.
Using 10 subject vectors and 10 predicate vectors, all represented as explicit binary vectors, a total of 100 meaning vectors could be composed (TAB7).
Each digit in the subject vector 5a serves a clear role, respectively representing speaker (sp), hearer (hr), other (ot), and plural (pl).
The predicate vector values, on the other hand, are randomly chosen so that each predicate vector will have three 1's and three 0's.
The combination of ten subject vectors and ten predicate vectors allows 100 meaning vectors.
The author used twenty neural agents for the experiment.
Each agent was implemented with the vanilla recurrent neural networks (RNN), where the hidden vector h's size was 10, same as the size of the meaning vector m in order to treat h as the agent's understanding of m.
In each training round a single learner (i.e. listener) and ten teachers (i.e. speaker) were randomly chosen.
Each teacher, given all 100 m's in random order, generates a message s for each m and sends it to the learner.
The messages are generated using the obverter technique, which is described in Algorithm 1.
The learner is trained to minimize the mean squared error (MSE) between h (after consuming the s) and m.
After the learner has learned from all ten teachers, the next round begins, repeating the process until the error goes below some threshold.
Algorithm 1: Message generation process used in BID3 (the pseudocode body did not survive extraction; a sketch of the loop it describes is given below).
When the training was complete, the author was able to find strong patterns in the messages used by the agents (Table 6).
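Since the pseudocode itself was lost, the snippet below gives a hedged sketch of the greedy obverter search that the algorithm describes: the speaker extends the message with whichever symbol most increases its own belief that the message conveys the intended meaning, stopping once that belief passes a threshold. The `agent.understand` call is a hypothetical interface standing in for the agent's own comprehension model, and the length/threshold values are illustrative.

```python
def obverter_generate(agent, target, vocab, max_len=20, threshold=0.95):
    """Greedy obverter-style generation: at each position, pick the symbol that
    maximizes the speaker's own understanding of the partial message as a
    description of the target (a meaning vector or an image)."""
    message = []
    for _ in range(max_len):
        scores = {s: agent.understand(message + [s], target) for s in vocab}
        best = max(scores, key=scores.get)    # symbol that maximizes self-understanding
        message.append(best)
        if scores[best] >= threshold:         # stop once the speaker is confident enough
            break
    return message
```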
Note that the messages using predicates tired, scared, sick and happy especially follow a very clear pattern.
Batali also conducted a zero-shot test where the agents were trained without the diagonal elements in Table 6 and tested with all 100 meaning vectors.
The agents were able to successfully communicate even when held-out meaning vectors were used, but the messages used for the held-out meaning vectors did not show as strong compositional patterns as the non-zero-shot case.
Table 6: (Top) Messages used by a majority of the population for each of the given meanings. (Bottom) A potential analysis of the system in terms of a root plus modifications. Italic symbols are used to specify predicates and roman symbols are used to specify subjects. Messages in parentheses cannot be made to fit into this analysis.
|
We train neural network agents to develop a language with compositional properties from raw pixel input.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:414
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The ability to forecast a set of likely yet diverse possible future behaviors of an agent (e.g., future trajectories of a pedestrian) is essential for safety-critical perception systems (e.g., autonomous vehicles).
In particular, a set of possible future behaviors generated by the system must be diverse to account for all possible outcomes in order to take necessary safety precautions.
It is not sufficient to maintain a set of the most likely future outcomes because the set may only contain perturbations of a dominating single outcome (major mode).
While generative models such as variational autoencoders (VAEs) have been shown to be a powerful tool for learning a distribution over future trajectories, randomly drawn samples from the learned implicit likelihood model may not be diverse -- the likelihood model is derived from the training data distribution and the samples will concentrate around the major mode of the data.
In this work, we propose to learn a diversity sampling function (DSF) that generates a diverse yet likely set of future trajectories.
The DSF maps forecasting context features to a set of latent codes which can be decoded by a generative model (e.g., VAE) into a set of diverse trajectory samples.
Concretely, the process of identifying the diverse set of samples is posed as DSF parameter estimation.
To learn the parameters of the DSF, the diversity of the trajectory samples is evaluated by a diversity loss based on a determinantal point process (DPP).
Gradient descent is performed over the DSF parameters, which in turn moves the latent codes of the sample set to find an optimal set of diverse yet likely trajectories.
Our method is a novel application of DPPs to optimize a set of items (forecasted trajectories) in continuous space.
We demonstrate the diversity of the trajectories produced by our approach on both low-dimensional 2D trajectory data and high-dimensional human motion data.
Forecasting future trajectories of vehicles and human has many useful applications in autonomous driving, virtual reality and assistive living.
What makes trajectory forecasting challenging is that the future is uncertain and multi-modal: vehicles can choose different routes and people can perform different future actions.
In many safety-critical applications, it is important to consider a diverse set of possible future trajectories, even those that are less likely, so that necessary preemptive actions can be taken.
For example, an autonomous vehicle should understand that a neighboring car can merge into its lane even though the car is most likely to keep driving straight.
To address this requirement, we need to take a generative approach to trajectory forecasting that can fully characterize the multimodal distribution of future trajectories.
To capture all modes of a data distribution, variational autoencoders (VAEs) are well-suited generative models.
However, random samples from a learned VAE model with Gaussian latent codes are not guaranteed to be diverse for two reasons.
First, the sampling procedure is stochastic and the VAE samples can fail to cover some minor modes even with a large number of samples.
Second, since VAE sampling is based on the implicit likelihood function encoded in the training data, if most of the training data is centered around a specific mode while other modes have less data (Fig. 1a), the VAE samples will reflect this bias and concentrate around the major mode (Fig. 1b).
To tackle this problem, we propose to learn a diversity sampling function (DSF) that can reliably generate a diverse set of trajectory samples (Fig. 1c).
The proposed DSF is a deterministic parameterized function that maps forecasting context features (e.g., past trajectories) to a set of latent codes.
The latent codes are decoded by the VAE decoder into a set of future trajectory samples, denoted as the DSF samples.
In order to optimize the DSF, we formulate a diversity loss based on a determinantal point process (DPP) (Macchi, 1975) to evaluate the diversity of the DSF samples.
The DPP defines the probability of choosing a random subset from the set of trajectory samples.
It models the negative correlations between samples: the inclusion of a sample reduces the probability of including a similar sample.
This makes the DPP an ideal tool for modeling the diversity within a set.
In particular, we use the expected cardinality of the DPP as the diversity measure, which is defined as the expected size of a random subset drawn from the set of trajectory samples according to the DPP.
Intuitively, since the DPP inhibits selection of similar samples, if the set of trajectory samples is more diverse, the random subset is more likely to select more samples from the set.
The expected cardinality of the DPP is easy to compute and differentiable, which allows us to use it as the objective to optimize the DSF to enable diverse trajectory sampling.
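To make the diversity objective concrete, here is a minimal sketch (not the authors' released code) of the expected cardinality of a DPP built from sampled trajectories; the RBF similarity kernel and the PyTorch implementation details are assumptions made purely for illustration.

# Minimal sketch: the expected cardinality of a DPP with L-ensemble L equals
# tr(L (L + I)^{-1}) = sum_i lambda_i / (lambda_i + 1), which is differentiable
# and can serve as a diversity objective. The RBF kernel on flattened
# trajectories is an illustrative assumption.
import torch

def dpp_expected_cardinality(trajectories: torch.Tensor, bandwidth: float = 1.0) -> torch.Tensor:
    """trajectories: (N, T, D) tensor of N sampled future trajectories."""
    flat = trajectories.reshape(trajectories.shape[0], -1)        # (N, T*D)
    sq_dists = torch.cdist(flat, flat) ** 2                       # pairwise squared distances
    L = torch.exp(-sq_dists / (2.0 * bandwidth ** 2))             # similarity (L-ensemble) matrix
    eigvals = torch.linalg.eigvalsh(L)                            # symmetric PSD -> real eigenvalues
    return (eigvals / (eigvals + 1.0)).sum()                      # E[|Y|] under the DPP

# Training would maximize this quantity (or minimize its negative) with respect
# to the DSF parameters, with gradients flowing through the decoded trajectories.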
Our contributions are as follows: (1) We propose a new forecasting approach that learns a diversity sampling function to produce a diverse set of future trajectories; (2) We propose a novel application of DPPs to optimize a set of items (trajectories) in continuous space with a DPP-based diversity measure; (3) Experiments on synthetic data and human motion validate that our method can reliably generate a more diverse set of future trajectories compared to state-of-the-art generative models.
We proposed a novel forecasting approach using a DSF to optimize over the sample space of a generative model.
Our method learns the DSF with a DPP-based diversity measure to generate a diverse set of trajectories.
The diversity measure is a novel application of DPPs to optimize a set of items in continuous space.
Experiments have shown that our approach can generate more diverse vehicle trajectories and human motions compared to state-of-the-art baseline forecasting approaches.
Algorithm: cVAE training.
Output: cVAE encoder network f_φ(x, ψ) and decoder network g_θ(z, ψ).
Initialize φ and θ randomly.
While not converged, for each training pair (x, ψ):
- Compute the parameters (µ, σ) of the posterior distribution q_φ(z|x, ψ) using f_φ(x, ψ).
- Sample V Gaussian noises {ε_1, ..., ε_V} from N(0, I).
- Transform the noises into latent samples from q_φ(z|x, ψ).
- Decode the latent samples into reconstructed trajectories {x̂_1, ..., x̂_V} using g_θ(z, ψ).
- Calculate the cVAE loss L_cvae according to Eq. 6.
- Update φ and θ with ∇_φ L_cvae and ∇_θ L_cvae.
Figure 6: Network architectures for synthetic data and human motion.
Top: for synthetic data, we use a CNN to process the obstacle map f and directly flatten trajectories x and h into vectors.
The reconstructed trajectory x̂ is decoded with an MLP.
Bottom: for human motion, we use Bi-LSTMs to extract temporal features for x and h and decode the reconstructed trajectory x̂ with a forward LSTM.
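As a concrete reading of the training loop above, a minimal PyTorch-style sketch of one cVAE step is given below; the encoder and decoder modules, the MSE reconstruction term, and the unweighted sum of reconstruction and KL terms are assumptions, since the paper's Eq. 6 may weight the terms differently.

# Illustrative sketch only: one cVAE training step matching the loop above.
# Encoder/decoder internals are placeholders.
import torch
import torch.nn.functional as F

def cvae_step(encoder, decoder, x, psi, optimizer, n_samples: int = 1):
    mu, log_var = encoder(x, psi)                        # parameters of q_phi(z | x, psi)
    recon_loss = 0.0
    for _ in range(n_samples):
        eps = torch.randn_like(mu)                       # Gaussian noise
        z = mu + torch.exp(0.5 * log_var) * eps          # reparameterized latent sample
        x_hat = decoder(z, psi)                          # reconstructed trajectory
        recon_loss = recon_loss + F.mse_loss(x_hat, x)
    recon_loss = recon_loss / n_samples
    kl = -0.5 * torch.mean(1 + log_var - mu.pow(2) - log_var.exp())
    loss = recon_loss + kl                               # weighting of terms is an assumption
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()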
|
We learn a diversity sampling function with DPPs to obtain a diverse set of samples from a generative model.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:415
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
There is mounting evidence that pretraining can be valuable for neural network language understanding models, but we do not yet have a clear understanding of how the choice of pretraining objective affects the type of linguistic information that models learn.
With this in mind, we compare four objectives---language modeling, translation, skip-thought, and autoencoding---on their ability to induce syntactic and part-of-speech information, holding constant the genre and quantity of training data.
We find that representations from language models consistently perform best on our syntactic auxiliary prediction tasks, even when trained on relatively small amounts of data, which suggests that language modeling may be the best data-rich pretraining task for transfer learning applications requiring syntactic information.
We also find that a randomly-initialized, frozen model can perform strikingly well on our auxiliary tasks, but that this effect disappears when the amount of training data for the auxiliary tasks is reduced.
Representation learning with deep recurrent neural networks has revolutionized natural language processing and replaced many of the expert-designed, linguistic features previously used.
Recently, researchers have begun to investigate the properties of representations learned by networks by training auxiliary classifiers that use the hidden states of frozen pretrained models to perform other tasks.
These investigations have shown that when deep LSTM RNNs (Hochreiter and Schmidhuber, 1997) are trained on tasks like machine translation, they latently identify substantial syntactic and semantic information about their input sentences, including part-of-speech (Shi et al., 2016; Belinkov et al., 2017a,b; Blevins et al., 2018).
These intriguing findings lead us to ask the following questions:
1. How does the training task affect how well models latently learn syntactic properties? Which tasks are better at inducing these properties?
2. How does the amount of data the model is trained on affect these results? When does training on more data help?
We investigate these questions by holding the data source and model architecture constant, while varying both the training task and the amount of training data.
Specifically, we examine models trained on English-German (En-De) translation, language modeling, skip-thought (Kiros et al., 2015), and autoencoding, in addition to an untrained baseline model.
We control for the data domain by exclusively training on datasets from the 2016 Conference on Machine Translation (WMT; Bojar et al., 2016).
We train models on all tasks using the parallel En-De corpus and a small subset of that corpus, which allows us to make a fair comparison across all five models.
Additionally, we augment the parallel dataset with a large monolingual corpus from WMT to examine how the performance of the unsupervised tasks (all but translation) scales with more data.
Throughout our work, we focus on the syntactic evaluation tasks of part-of-speech (POS) tagging and Combinatory Categorial Grammar (CCG) supertagging.
Supertagging is a building block for parsing, as these tags constrain the ways in which words can compose, largely determining the parse of the sentence.
CCG supertagging thus allows us to measure the degree to which models learn syntactic structure above the word.
We focus our analysis on representations learned by language models and by the encoders of sequence-to-sequence models, as translation encoders have been found to learn richer representations of POS and morphological information than translation decoders (Belinkov et al., 2017a).
We find that for POS and CCG tagging, bidirectional language models (BiLMs), created by separately training forward and backward language models and concatenating their hidden states, outperform models trained on all other tasks.
Even BiLMs trained on relatively small amounts of data (1 million sentences) outperform translation and skip-thought models trained on larger datasets (5 million and 63 million sentences respectively).
Our inclusion of an untrained LSTM baseline allows us to study the effect of training on state representations.
We find, surprisingly, that randomly initialized LSTMs underperform our best trained models by only a few percentage points when we use all of the available labeled data to train classifiers for our auxiliary tasks.
When we reduce the amount of classifier training data, though, the performance of the randomly initialized LSTM model drops far below those of trained models.
We hypothesize that this occurs because training the classifiers on large amounts of auxiliary task data allows them to memorize configurations of words seen in the training set and their associated tags.
We test this hypothesis by training classifiers to predict the identity of neighboring words from a given hidden state, and find that randomly initialized models outperform all trained models on this task.
Our findings demonstrate that our best trained models do well on the tagging tasks because they are truly learning representations that conform to our notions of POS and CCG tagging, and not because the classifiers we train are able to recover neighboring word identity information well.
By controlling for the genre and quantity of the training data, we make fair comparisons between several data-rich training tasks in their ability to induce syntactic information.
We find that bidirectional language models (BiLMs) do better than translation and skip-thought encoders at extracting useful features for POS tagging and CCG supertagging.
Moreover, this improvement holds even when the BiLMs are trained on substantially less data than competing models.
Although, due to limited parallel data, we could not compare BiLMs and translation encoders on more than 5 million sentences, our results suggest that for syntactic information, there is no need to compare these two models trained on more data, as BiLMs consistently outperform translation encoders in all data regimes.
We also find that randomly initialized encoders extract usable features for POS and CCG tagging, at least when the auxiliary POS and CCG classifiers are themselves trained on reasonably large amounts of data.
However, the performance of untrained models drops sharply relative to trained ones when using smaller amounts of the classifier data.
We investigate further and find that untrained models outperform trained ones on the task of neighboring word identity prediction, which confirms that trained encoders do not perform well on tagging tasks because the classifiers are simply memorizing word identity information.
We also find that both trained and untrained LSTMs store more local neighboring word identity information in lower layers and more distant word identity information in upper layers, which suggests that depth in LSTMs allows them to capture larger context information.
Our results suggest that for transfer learning, bidirectional language models like ELMo (Peters et al., 2018) capture more useful features than translation encoders, and that this holds even on genres or languages for which data is not abundant.
However, the scope of our experiments is limited, and we still know little about the representations of models trained on other supervised tasks, or precisely how the choice of training task affects the type of syntactic information that is learned.
Our work also highlights the interesting behavior of randomly initialized LSTMs, which show an ability to preserve the contents of their inputs significantly better than trained models.
|
Representations from language models consistently perform better than translation encoders on syntactic auxiliary prediction tasks.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:416
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We consider the problem of generating configurations that satisfy physical constraints for optimal material nano-pattern design, where multiple (and often conflicting) properties need to be simultaneously satisfied.
Consider, for example, the trade-off between thermal resistance, electrical conductivity, and mechanical stability needed to design a nano-porous template with optimal thermoelectric efficiency.
To that end, we leverage the posterior regularization framework and show that this constraint satisfaction problem can be formulated as sampling from a Gibbs distribution.
The main challenges come from the black-box nature of those physical constraints, since they are obtained via solving highly non-linear PDEs.
To overcome those difficulties, we introduce Surrogate-based Constrained Langevin dynamics for black-box sampling.
We explore two surrogate approaches.
The first approach exploits zero-order approximation of gradients in the Langevin Sampling and we refer to it as Zero-Order Langevin.
In practice, this approach can be prohibitive since we still need to often query the expensive PDE solvers.
The second approach approximates the gradients in the Langevin dynamics with deep neural networks, allowing us an efficient sampling strategy using the surrogate model.
We prove the convergence of those two approaches when the target distribution is log-concave and smooth.
We show the effectiveness of both approaches in designing optimal nano-porous material configurations, where the goal is to produce nano-pattern templates with low thermal conductivity and reasonable mechanical stability.
In many real-world design problems, the optimal design needs to simultaneously satisfy multiple constraints, which can be expensive to estimate.
For example, in computational material design, the goal is to come up with material configurations, or samples, satisfying a list of physical constraints that are given by black-box numerical Partial Differential Equations (PDE) solvers.
Such solvers (for example, the Boltzmann Transport Equation solver) are often complex, expensive to evaluate, and offer no access to their inner variables or their gradients.
We pose this design-under-constraints problem as sampling from a Gibbs distribution defined on some compact support.
The problem of sampling from a distribution with unknown likelihood that can only be point-wise evaluated is called black-box sampling (Chen & Schmeiser, 1998; Neal, 2003) .
We show in this paper that constrained black-box sampling can be cast as a constrained Langevin dynamics with gradient-free methods.
Zero-order optimization via Gaussian smoothing was introduced in Nesterov & Spokoiny (2017) and extended to black-box sampling with Langevin dynamics in Shen et al. (2019) .
We extend this approach to the constrained setting from a black-box density with compact support.
However, one shortcoming of this approach is that it is computationally very expensive since it requires repeatedly querying PDE solvers in order to get an estimate of the gradient.
To alleviate computational issues, we propose Surrogate Model Based Langevin dynamics, that consists of two steps:
(i) Learning (using training data) an approximation of the gradient of the potential of the Gibbs distribution.
We show that learning the gradient, rather than the potential itself, is important for the mixing of the Langevin dynamics towards the target Gibbs distribution.
We devise several objective functions, as well as deep neural-network architectures for parameterizing the approximating function class, for learning the gradient of the potential function.
(ii) We then use the surrogate gradient model in the constrained Langevin dynamics in lieu of the black-box potential.
Using the surrogate enables more efficient sampling, since it avoids querying the expensive PDE solvers, and obtaining gradients is as efficient as evaluating the functions themselves using automatic differentiation frameworks such as PyTorch or TensorFlow.
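To make the surrogate-based sampler concrete, here is a minimal sketch of projected Langevin dynamics with a generic surrogate gradient; the box-shaped support, step size, and toy quadratic potential are assumptions for illustration, not the paper's exact algorithm or constraints.

# Illustrative sketch only: projected Langevin dynamics where the gradient of the
# Gibbs potential U is replaced by a learned surrogate (e.g., a neural network).
import numpy as np

def surrogate_projected_langevin(grad_surrogate, x0, n_steps=1000,
                                 step_size=1e-3, lower=0.0, upper=1.0, seed=0):
    """grad_surrogate(x) approximates grad U(x); samples approximately from exp(-U) on a box."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - step_size * grad_surrogate(x) + np.sqrt(2.0 * step_size) * noise
        x = np.clip(x, lower, upper)          # projection onto the compact support
        samples.append(x.copy())
    return np.array(samples)

# Example with a toy quadratic potential U(x) = ||x||^2 / 2 standing in for the
# learned surrogate of the PDE-based constraints:
samples = surrogate_projected_langevin(lambda x: x, x0=np.full(4, 0.5))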
To summarize, our main contributions are as follows:
1. We cast the problem of generating samples under constraints in the black-box setting as sampling from a Gibbs distribution.
2. We introduce Constrained Zero-Order Langevin Monte Carlo, using projection or proximal methods, and provide the proof of its convergence to the target Gibbs distribution.
3. We introduce Surrogate Model Based Projected Langevin Monte Carlo via learning the gradient of the potential of the Gibbs distribution using deep neural networks or reproducing kernel spaces, and prove its convergence to the target distribution when used in conjunction with projection or proximal based methods.
We shed the light on the importance of the approximation of the gradient of the potential, and we show how to achieve this using Hermite and Taylor learning.
4. We showcase the usability and effectiveness of the proposed methods for the design of nanoporous configurations with improved thermoelectric efficiency.
The design consists of finding new configurations with optimized pore locations, such that the resulting configurations have favorable thermal conductivity (i.e., minimal κ) and desired mechanical stability (von Mises Stress σ ≤ τ , where τ is some preset threshold).
In this paper we introduced Surrogate-Based Constrained Langevin Sampling for black-box sampling from a Gibbs distribution defined on a compact support.
We studied two approaches for defining the surrogate: the first through zero-order methods and the second via learning gradient approximations using deep neural networks.
We showed the proofs of convergence of the two approaches in the log-concave and smooth case.
While zero-order Langevin had prohibitive computational cost, learned surrogate model Langevin enjoy a good tradeoff of lightweight computation and approximation power.
We applied our black-box sampling scheme to the problem of nano-material configuration design, where the black box constraints are given by expensive PDE solvers, and showed the efficiency and the promise of our method in finding optimal configurations.
Among different approaches for approximating the gradient, the zero-order ones (PLMC, ProxLMC) show overall superior performance, at a prohibitive computational cost.
We established that the deep surrogate (Taylor-1 ProxLMC) is a viable alternative to zero-order methods, achieving reasonable performance, and offering a 15x speedup over zero-order methods.
|
We propose surrogate based Constrained Langevin sampling with application in nano-porous material configuration design.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:417
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
There is growing interest in geometrically-inspired embeddings for learning hierarchies, partial orders, and lattice structures, with natural applications to transitive relational data such as entailment graphs.
Recent work has extended these ideas beyond deterministic hierarchies to probabilistically calibrated models, which enable learning from uncertain supervision and inferring soft-inclusions among concepts, while maintaining the geometric inductive bias of hierarchical embedding models.
We build on the Box Lattice model of Vilnis et al. (2018), which showed promising results in modeling soft-inclusions through an overlapping hierarchy of sets, parameterized as high-dimensional hyperrectangles (boxes).
However, the hard edges of the boxes present difficulties for standard gradient based optimization; that work employed a special surrogate function for the disjoint case, but we find this method to be fragile.
In this work, we present a novel hierarchical embedding model, inspired by a relaxation of box embeddings into parameterized density functions using Gaussian convolutions over the boxes.
Our approach provides an alternative surrogate to the original lattice measure that improves the robustness of optimization in the disjoint case, while also preserving the desirable properties with respect to the original lattice.
We demonstrate increased or matching performance on WordNet hypernymy prediction, Flickr caption entailment, and a MovieLens-based market basket dataset.
We show especially marked improvements in the case of sparse data, where many conditional probabilities should be low, and thus boxes should be nearly disjoint.
Embedding methods have long been a key technique in machine learning, providing a natural way to convert semantic problems into geometric problems.
Early examples include the vector space BID17 and latent semantic indexing BID4 models for information retrieval.
Embeddings experienced a renaissance after the publication of Word2Vec BID12, a neural word embedding method BID2 BID13 that could run at massive scale.
Recent years have seen an interest in structured or geometric representations.
Instead of representing e.g. images, words, sentences, or knowledge base concepts with points, these methods instead associate them with more complex geometric structures.
These objects can be density functions, as in Gaussian embeddings BID21 BID0 , convex cones, as in order embeddings BID20 BID9 , or axis-aligned hyperrectangles, as in box embeddings BID22 BID18 .
These geometric objects more naturally express ideas of asymmetry, entailment, ordering, and transitive relations than simple points in a vector space, and provide a strong inductive bias for these tasks.In this work, we focus on the probabilistic Box Lattice model of BID22 , because of its strong empirical performance in modeling transitive relations, probabilistic interpretation (edges in a relational DAG are replaced with conditional probabilities), and ability to model complex joint probability distributions including negative correlations.
Box embeddings (BE) are a generalization of order embeddings (OE) BID20 and probabilistic order embeddings (POE) BID9 that replace the vector lattice ordering (notions of overlapping and enclosing convex cones) in OE and POE with a more general notion of overlapping boxes (products of intervals).
While intuitively appealing, the "hard edges" of boxes and their ability to become easily disjoint present difficulties for gradient-based optimization: when two boxes are disjoint in the model, but have overlap in the ground truth, no gradient can flow to the model to correct the problem.
This is of special concern for (pseudo-)sparse data, where many boxes should have nearly zero overlap, while others should have very high overlap.
This is especially pronounced in the case of, e.g., market basket models for recommendation, where most items should not be recommended, and entailment tasks, most of which are currently artificially resampled into a 1:1 ratio of positive to negative examples.
To address the disjoint case, BID22 introduce an ad-hoc surrogate function.
In contrast, we look at this problem as inspiration for a new model, based on the intuition of relaxing the hard edges of the boxes into smoothed density functions, using a Gaussian convolution with the original boxes.
We demonstrate the superiority of our approach to modeling transitive relations on WordNet, Flickr caption entailment, and a MovieLens-based market basket dataset.
We match or beat existing state-of-the-art results, while showing substantial improvements in the pseudo-sparse regime.
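To illustrate the relaxation in one dimension (a sketch only, not the paper's exact energy or overlap measure): convolving the indicator function of an interval with a Gaussian yields a smooth membership expressed through the normal CDF, which removes the hard edges that block gradients when boxes barely overlap.

# Minimal illustration: convolving the indicator of [a, b] with N(0, sigma^2)
# gives Phi((x - a)/sigma) - Phi((x - b)/sigma), a smooth membership function.
import math

def smoothed_membership(x: float, a: float, b: float, sigma: float) -> float:
    phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))   # standard normal CDF
    return phi((x - a) / sigma) - phi((x - b) / sigma)

# As sigma -> 0 this recovers the hard box indicator; per-dimension smoothed
# memberships can then be combined (e.g., multiplied across dimensions) to give
# a differentiable surrogate for box overlap.
print(smoothed_membership(0.5, 0.0, 1.0, 0.1))   # ~1.0, well inside the box
print(smoothed_membership(1.5, 0.0, 1.0, 0.1))   # ~0.0, outside the box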
We presented an approach to smoothing the energy and optimization landscape of probabilistic box embeddings and provided a theoretical justification for the smoothing.
Due to a decreased number of hyper-parameters this model is easier to train, and, furthermore, met or surpassed current state-of-the-art results on several interesting datasets.
We further demonstrated that this model is particularly effective in the case of sparse data and more robust to poor initialization.
Tackling the learning problems presented by rich, geometrically-inspired embedding models is an open and challenging area of research, which this work is far from the last word on.
This task will become even more pressing as the embedding structures become more complex, such as unions of boxes or other non-convex objects.
To this end, we will continue to explore both function lattices, and constraint-based approaches to learning.
|
Improve hierarchical embedding models using kernel smoothing
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:418
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We present a weakly-supervised data augmentation approach to improve Named Entity Recognition (NER) in a challenging domain: extracting biomedical entities (e.g., proteins) from the scientific literature.
First, we train a neural NER (NNER) model over a small seed of fully-labeled examples.
Second, we use a reference set of entity names (e.g., proteins in UniProt) to identify entity mentions with high precision, but low recall, on an unlabeled corpus.
Third, we use the NNER model to assign weak labels to the corpus.
Finally, we retrain our NNER model iteratively over the augmented training set, including the seed, the reference-set examples, and the weakly-labeled examples, which results in refined labels.
We show empirically that this augmented bootstrapping process significantly improves NER performance, and discuss the factors impacting the efficacy of the approach.
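As a concrete illustration of the reference-set labeling step described above, the following minimal sketch tags exact matches against a set of known entity names, which yields high precision but low recall; the whitespace tokenization, BIO tag names, and maximum span length are assumptions, and the paper's actual matching procedure may differ.

# Illustrative sketch: label entity mentions on unlabeled text by exact match
# against a reference set of names (high precision, low recall).
def reference_label(tokens, reference_names, max_len=5):
    """Return BIO tags for `tokens`, marking exact matches against `reference_names`."""
    tags = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        matched = False
        for span in range(min(max_len, len(tokens) - i), 0, -1):
            candidate = " ".join(tokens[i:i + span])
            if candidate in reference_names:
                tags[i] = "B-PROTEIN"
                for j in range(i + 1, i + span):
                    tags[j] = "I-PROTEIN"
                i += span
                matched = True
                break
        if not matched:
            i += 1
    return tags

print(reference_label("the p53 tumor suppressor binds DNA".split(), {"p53"}))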
The increasing wealth of available data fuels numerous machine learning applications.
Unfortunately, much of this data is unlabeled, unstructured and noisy.
Supervised learning achieves the best task performance, but obtaining training labels is expensive.
Crowd-sourcing could provide labels at scale, but may not be feasible for acquiring high-quality labels in technical domains, such as biomedicine that requires expert annotators.
In this paper, we explore augmented bootstrapping methods that leverage automatically assigned noisy labels obtained from a large unlabeled corpus.
The biomedical literature is a high-impact domain with scarce annotations.
Unlocking the knowledge in this data requires machine reading systems that automatically extract important concepts in the text, such as entities and their relations.
A critical component of such systems is reliable Named Entity Recognition (NER), which aims to identify parts of the text that refer to a named entity (e.g., a protein).
In line with advancements in many domains, most state-of-the-art NER approaches use a deep neural network model that relies on a large labeled training set, which is not usually available in biomedical domains.
To address label scarcity, we propose a framework to train any effective neural NER model by leveraging partially labeled data.
We do this by creating an augmented training set using a small fully-labeled seed set, and an unlabeled corpus set, which we weakly and automatically label, and then refine its labels via an iterative process.
Our main contributions include: (1) An augmented bootstrapping approach combining information from a reference set with iterative refinements of soft labels to improve NER in a challenging domain (biomedicine) where labelling is expensive.
(2) A detailed analysis in a controlled setting to study different aspects affecting performance.
(3) An analysis of reference-based automated approaches to labeling data, showing that naive labeling decreases performance and how to overcome it.
We proposed a method to improve NER with limited labeled data, which is often the case in technical domains, such as biomedicine.
Our method combines bootstrapping and weakly-labeled data augmentation by using a small fully-labeled seed dataset and a large unlabeled corpus, automated labelling using a reference set, and an iterative label refinement process.
Our experimental evaluation shows performance equivalent to systems trained with an order of magnitude more labeled data.
In future work, we aim to explore additional augmentation methods over other challenging datasets.
We plan to apply the findings of these controlled experiments to a much larger in-the-wild scenario where we use all the available labeled data as the seed and operate over a large corpus (e.g., all of PubMed, PubMed Central) to improve state-of-the-art NER performance.
|
Augmented bootstrapping approach combining information from a reference set with iterative refinements of soft labels to improve Named Entity Recognition from biomedical literature.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:419
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The exposure bias problem refers to the training-inference discrepancy caused by teacher forcing in maximum likelihood estimation (MLE) training for auto-regressive neural network language models (LM).
It has been regarded as a central problem for natural language generation (NLG) model training.
Although a lot of algorithms have been proposed to avoid teacher forcing and therefore to alleviate exposure bias, there is little work showing how serious the exposure bias problem is.
In this work, we first identify the auto-recovery ability of MLE-trained LM, which casts doubt on the seriousness of exposure bias.
We then develop a precise, quantifiable definition for exposure bias.
However, according to our measurements in controlled experiments, there's only around 3% performance gain when the training-inference discrepancy is completely removed.
Our results suggest the exposure bias problem could be much less serious than it is currently assumed to be.
Language model (LM) is a central module for natural language generation (NLG) tasks (Young et al., 2017) such as machine translation (Wu et al., 2017) , dialogue response generation , image captioning (Lin et al., 2014) , etc.
For decades, maximum likelihood estimation (MLE) has been the most widely used objective for LM training.
However, there is a popular belief in the natural language processing (NLP) community that standard MLE training will cause "exposure bias" and lead to a performance degradation during the test-time language generation.
The exposure bias problem (Bengio et al., 2015; Ranzato et al., 2016) refers to the following discrepancy between MLE training and test-time generation for language models: During training, the language model predicts the next word conditioned on history words sampled from the groundtruth data distribution.
And during generation, the model generates words conditioned on history sequences generated by the model itself.
However, due to the exposure to real data during training, the language model is biased to only perform well on the ground-truth history distribution.
As a result, during generation the errors will accumulate along the generated sequence, and the distribution generated by the model will be distorted.
The forced exposure to ground-truth data during training is also referred to as "teacher forcing".
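To make the training-inference discrepancy concrete, here is a minimal PyTorch-style sketch; the model interface (a callable mapping a prefix to next-token logits) is an assumption made purely for illustration. The loss conditions on ground-truth prefixes (teacher forcing), while generation conditions on the model's own samples.

# Illustrative sketch of the two regimes described above.
import torch

def teacher_forced_loss(model, tokens):
    """tokens: 1-D LongTensor of a ground-truth sequence; model(prefix) -> logits over vocab."""
    loss = 0.0
    for t in range(1, len(tokens)):
        logits = model(tokens[:t])                       # history drawn from the data
        loss = loss + torch.nn.functional.cross_entropy(
            logits.unsqueeze(0), tokens[t].unsqueeze(0))
    return loss / (len(tokens) - 1)

def free_running_generation(model, bos_token, length):
    generated = [bos_token]
    for _ in range(length):
        logits = model(torch.tensor(generated))          # history drawn from the model itself
        next_token = torch.distributions.Categorical(logits=logits).sample()
        generated.append(int(next_token))
    return generated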
Given its definition, the exposure bias problem could arise in the general case when the model needs to make a sequence of decisions or generations (e.g., music/pixel/speech generation (Lamb et al., 2016)).
In this work, we focus on the task of language generation, because the exposure bias problem is originally proposed in this field (Bengio et al., 2015) , and has since attracted huge research attention.
In order to avoid teacher forcing, many training algorithms (Bengio et al., 2015; Lamb et al., 2016; Ranzato et al., 2016; Yu et al., 2016; Zhu et al., 2018; Lu et al., 2018; Lin et al., 2017; Guo et al., 2017; Rajeswar et al., 2017; Wiseman & Rush, 2016; Nie et al., 2019; Shi et al., 2018) have been proposed as alternatives to MLE training.
Most of these works utilize techniques from generative adversarial network (GAN) (Goodfellow et al., 2014) or reinforcement learning (RL) (Sutton & Barto, 1998) .
In this paper, we refer to these algorithms as non-MLE methods or text GANs.
Despite the huge research effort devoted to alleviating exposure bias, surprisingly, its existence or significance is much less studied.
In particular, to the best of our knowledge, no existing work attempts to directly show the seriousness of exposure bias in an empirical or theoretical way.
Table 1: Samples of an MLE-trained SOTA transformer LM when fed with different types of length-10 history prefix. To save space, we omitted the first 7 words of the random history.
This work is motivated by the belief that a good solution should be built upon a testable and quantifiable problem definition.
In the rest of this paper, we first identify the "self-recovery" ability of popular LM models, which casts doubt on the original claim of exposure bias.
We then develop a precise and quantifiable definition of exposure bias, and validate its seriousness in controlled experiments.
In this section, we focus on answering the following question: "Does the EB-M measurement correctly reflect the significance of exposure bias?"
In short, our answer is not really.
The problem is that the distortion of the marginal P_{M|M}^{l+1} is not only affected by the presumably existing exposure bias problem alone, but also by the mismatch of the history distribution P_M from P_D for W_{1:l}, which grows with the length of the history.
Therefore, even if the measured EB-M is significantly larger than one, we can not conclude that exposure bias causes serious deterioration.
We provide an example to illustrate this argument: Example 1.
Suppose L = 2, and V = {A, B}.
P D and P M are crafted as follows:
However, the only problem P M has is the mismatch between the history distributions (P M and P D ) for W 1 .
The next set of experiments also suggest that EB-M does not precisely reflect exposure bias.
On the EMNLP-news data-set (specified in Appendix B), we compare EB-M measurements for several non-MLE training methods with the baseline MLE model.
We include results for Scheduled Sampling (SS) (Bengio et al., 2015) , Cooperative Training (CoT) (Lu et al., 2018) , and Adversarial Ranking (RankGAN) (Lin et al., 2017) .
We provide implementation details for non-MLE methods in Appendix C. Intuitively, these methods will cause the model to be biased to behave well with model samples as history, instead of data samples.
Therefore, we expect EB-M measurement for non-MLE trained models to be smaller than MLE trained models.
However, Figure 1 shows that the measurements for different training frameworks are almost the same.
We believe the reason is that the EB-M measurements are only reflecting the trivial mismatch between the history distributions.
Is it possible that the original definition of exposure bias (Bengio et al., 2015; Ranzato et al., 2016) exactly refers to this mismatch between the model and data history distributions?
However, note that this mismatch is inevitable for any imperfect model, and non-MLE training algorithms can not solve it.
We believe a better, more precise definition is needed to discriminate exposure bias from this trivial mismatch.
Motivated by this view, we propose a second approach in the section below.
Following the discussion in the last section, we wish our measurement to be independent of the quality of the history distribution.
In light of that, we design a quantity to measure the model's conditional generation quality.
Let P H ∈ {P M , P D } denote the history distribution as in the MGD definition (5).
With history length l fixed, we define the conditional generation deviation (CGD) with history distribution P H for P M using metric d as:
where we assume that P_D(· | W_{1:l}) is computable, and use it to measure the quality of the model's conditional distribution.
For the choice of the distribution distance d, in addition to d_TV and d_JS, we introduce the greedy decoding divergence (d_GD), defined as:
where 1 is the indicator function, and P, Q ∈ P. The distance d_GD reflects the model's accuracy during greedy decoding.
Similar to MGD, exposure bias should imply a significant gap between CGD(P_{M|M}, l, d) and CGD(P_{M|D}, l, d).
We again define the rate of exposure bias at history length l with metric d to be:
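A plausible form of these two quantities, consistent with the definitions in the surrounding text and stated here as an assumption rather than the paper's exact displayed equations, is:

CGD(P_{M|H}, l, d) = E_{W_{1:l} ∼ P_H} [ d( P_D(· | W_{1:l}), P_M(· | W_{1:l}) ) ],

EB-C(l, d) = CGD(P_{M|M}, l, d) / CGD(P_{M|D}, l, d),

so that a ratio noticeably larger than one would indicate that conditioning on model-generated histories degrades the next-word distribution relative to conditioning on data histories.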
For our definition of EB-C, a natural question is why we only focus on the generation distribution of the very next word.
The reason is we want to precisely measure how the error caused by the history part affect the generation part, by keeping them separate.
If we measure the deviation of, for example, two sampled tokens, the definition will be confusing: Because the second sampled token will be affected not only by the accumulated error induced by the history (sampled from the model), but also by the first generated token as history.
To get a better understanding of the intuition behind the definition of EB-C, we recommend readers to read Appendix A about our NMT experiment.
Since CGD requires inference for ground-truth data distribution P D , we first consider experiments in a synthetic setting.
In text-GAN literature (Yu et al., 2016; Lin et al., 2017), a randomly-initialized one-layer LSTM model with hidden dimension of 32 is usually used as P_D in synthetic experiments (we denote this setting as M_32^random).
However, the model is small-scale and does not reflect any structure existing in real-world text.
To improve upon this approach, we take the MLE baseline model trained on EMNLP-news data (described in Appendix B) as P D in this synthetic setting.
We denote the data model (P_D) as M_512^news.
We then train two LSTM LM (P M ) with different capacities using samples from the data model, with the standard MLE objective.
One is a one-layer LSTM with hidden width of 512 (denoted as LSTM-512), the other one is with hidden width of 32 (denoted as LSTM-32).
We train P M for 100 epochs using the Adam optimizer with learning rate 0.001.
In each epoch, 250k sentences (the same size as the original EMNLP-news data) of length L = 50 are sampled from M_512^news as training data to avoid over-fitting.
We show perplexity (PPL) results of the trained models in Appendix F. Finally, EB-C is calculated using 100k samples from P_M and P_D.
In Figure 2 , we show EB-C measurements with different metrics d m , and the two models give similar results.
It is shown that EB-C has a steady but slow increasing trend as history length increases.
This is expected as a consequence of exposure bias, because P M deviates farther from P D as history length increases.
However, the average value of EB-C is less than 1.03 (the largest average value is from d JS for the LSTM-512 experiment), meaning that the gap between CGD(P M |M , l, d) and CGD(P M |D , l, d) is not large.
Also, note that in most NLG applications (such as machine translation or image captioning), the generated sequence typically has short length (less than 20).
In that range of history length, the EB-C measurements suggest that exposure bias has only minimal influence.
In Appendix E, we repeat the experiment for a transformer LM (Dai et al., 2019) , and get very similar EB-C measurements.
These measurements imply a striking conclusion: (Informal) Even if all the bad effects from exposure bias for MLE LM training are removed, the relative performance gain is at most 3%.
If the sequence length is not very long, the gain is less than 1%.
To dive deeper into the cause of the gap in CGD, we experiment with corrupted versions of P M as history distribution.
We first specify a corrupt rate c ∈ [0, 1], and randomly substitute words in a history sample from P M to a "noise" word drawn uniformly from the vocabulary with probability c.
Consequently, larger c will cause the history distribution to deviate farther from the groundtruth P D .
In Figure 3, we show the CGD measurement versus the corrupted history P_M^corrupt.
Large gaps are observed between CGD(P_{M|M^corrupt}) and CGD(P_{M|D}).
Therefore, the small gap between CGD(P_{M|M}) and CGD(P_{M|D}) in Figure 2 results from the small deviation between the history distributions P_M and P_D.
In other words, P_M has learned a "good enough" distribution that is able to keep it in the well-behaving region during sampling.
With these observations, we conclude that, in the synthetic setting considered, exposure bias does exist, but is much less serious than it is presumed to be.
Although there exists mismatch between the history distribution P M and P D , the mismatch is still in the model's "comfortable zone".
In other words, the LSTM LM is more robust than exposure bias claims it to be.
To concretize this argument, we provide an example LM and show that MLE training is unlikely to generate models with a large EB-C value.
Example 2.
Again suppose L = 2, and V = {A, B}, the ground-truth data distribution is uniform on {AA, AB, BB, BA}.
P M is crafted as follows:
Note that the model behaves badly when W_1 = A, which is of high probability during sampling.
In this work, we first identify the self-recovery ability of MLE-trained LM, which casts doubt on the seriousness of exposure bias, which has been regarded as a central problem for MLE training by the LM community.
We then explore two intuitive approaches to quantify the significance of exposure bias for LM training.
The first quantification EB-M relies on the marginal generation distribution and reveals some vagueness in the original definition of exposure bias.
We argue that we should focus on the model's generation performance in terms of its conditional distribution and propose a second quantification EB-C, which we regard as the precise definition for exposure bias.
We design an evaluation of EB-C at different history lengths with real humans (Turkers from AMT) as the data model, for a SOTA transformer LM.
It is shown that removing the training-testing discrepancy only gives around a 2% performance gain.
Our synthetic experiments also give very similar measurements.
By analyzing EB-C measurements with perturbed history samples, we hypothesise that although the mismatch between the data and model distribution for history prefix exists, it is still in the model's "comfortable zone".
With these results, we claim that on the contrary to the popular belief, exposure bias is only a minor problem in MLE-based LM training.
To wrap up, we discuss the fundamental question "Is MLE training really biased?", from the perspective of objective functions.
Note that the MLE objective (1) can be re-written as:
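The standard rewriting of the MLE objective, given here as a sketch of the intended identity rather than the paper's exact displayed equation, is

argmax_θ E_{x ∼ P_D} [ log P_M(x; θ) ] = argmin_θ D_KL( P_D || P_M(·; θ) ),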
where D KL denotes the Kullback-Leibler divergence, and θ denotes the trainable parameters in P M .
Therefore, MLE training is minimizing the divergence between P_M, which is exactly the model's sampling distribution, and P_D.
While it's true that the training is "exposed" to data samples, we can not simply deduce the objective is "biased".
We want to end our discussion with two remarks.
First, the proposed quantification approaches should not be used as the only metric for NLG.
For example, a position-aware uni-gram LM, which generates words independent of previous context, has no exposure bias problem and can pass our test easily.
Second, the intention of this work is not to discourage researchers from exploring non-MLE training algorithms for LM.
It is completely possible that a training objective different from MLE can lead to better generation performance (Lu et al., 2018; Huszár, 2015).
However, though non-MLE algorithms avoid teacher forcing, these algorithms (using GAN or RL for example) are usually less stable and more difficult to tune.
Given that the quantified measurement of exposure bias is insignificant, we think it should be questioned whether adopting these techniques to avoid exposure bias is a wise trade-off.
|
We show that exposure bias could be much less serious than it is currently assumed to be for MLE LM training.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:42
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Quantum machine learning methods have the potential to facilitate learning using extremely large datasets.
While the availability of data for training machine learning models is steadily increasing, oftentimes it is much easier to collect feature vectors than to obtain the corresponding labels.
One of the approaches for addressing this issue is to use semi-supervised learning, which leverages not only the labeled samples, but also unlabeled feature vectors.
Here, we present a quantum machine learning algorithm for training Semi-Supervised Kernel Support Vector Machines.
The algorithm uses recent advances in quantum sample-based Hamiltonian simulation to extend the existing Quantum LS-SVM algorithm to handle the semi-supervised term in the loss, while maintaining the same quantum speedup as the Quantum LS-SVM.
Data sets used for training machine learning models are becoming increasingly large, leading to continued interest in fast methods for solving large-scale classification problems.
One of the approaches being explored is training the predictive model using a quantum algorithm that has access to the training set stored in quantum-accessible memory.
In parallel to research on efficient architectures for quantum memory (Blencowe, 2010) , work on quantum machine learning algorithms and on quantum learning theory is under way (see for example Refs.
(Biamonte et al., 2017; Dunjko & Briegel, 2018; Schuld & Petruccione, 2018) and (Arunachalam & de Wolf, 2017) for review).
An early example of this approach is Quantum LS-SVM (Rebentrost et al., 2014a) , which achieves exponential speedup compared to classical LS-SVM algorithm.
Quantum LS-SVM uses quadratic least-squares loss and squared-L 2 regularizer, and the optimization problem can be solved using the seminal HHL (Harrow et al., 2009 ) algorithm for solving quantum linear systems of equations.
While progress has been made in quantum algorithms for supervised learning, it has been recently advocated that the focus should shift to unsupervised and semi-supervised setting (Perdomo-Ortiz et al., 2018) .
In many domains, the most laborious part of assembling a training set is the collection of sample labels.
Thus, in many scenarios, in addition to the labeled training set of size m we have access to many more feature vectors with missing labels.
One way of utilizing these additional data points to improve the classification model is through semi-supervised learning.
In semi-supervised learning, we are given m observations x_1, ..., x_m drawn from the marginal distribution p(x), where the first l (l ≪ m) data points come with labels y_1, ..., y_l drawn from the conditional distribution p(y|x).
Semi-supervised learning algorithms exploit the underlying distribution of the data to improve classification accuracy on unseen samples.
In the approach considered here, the training samples are connected by a graph that captures their similarity.
Here, we introduce a quantum algorithm for semi-supervised training of a kernel support vector machine classification model.
We start with the existing Quantum LS-SVM (Rebentrost et al., 2014a) , and use techniques from sample-based Hamiltonian simulation (Kimmel et al., 2017) to add a semisupervised term based on Laplacian SVM (Melacci & Belkin, 2011) .
As is standard in quantum machine learning (Li et al., 2019) , the algorithm accesses training points and the adjacency matrix of the graph connecting samples via a quantum oracle.
We show that, with respect to the oracle, the proposed algorithm achieves the same quantum speedup as LS-SVM, that is, adding the semisupervised term does not lead to increased computational complexity.
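For background, the graph-based semi-supervised term in Laplacian-style SVMs typically takes the standard manifold regularization form (shown here as an illustration, not necessarily the paper's exact objective):

(1/2) Σ_{i,j} W_{ij} ( f(x_i) − f(x_j) )² = fᵀ L f,  with L = D − W and D_ii = Σ_j W_{ij},

where W is the adjacency matrix of the similarity graph connecting the training samples and f = (f(x_1), ..., f(x_m)) collects the classifier outputs; penalizing this term encourages similar samples to receive similar predictions.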
|
We extend quantum SVMs to semi-supervised setting, to deal with the likely problem of many missing class labels in huge datasets.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:420
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep neural networks have become the state-of-the-art models in numerous machine learning tasks.
However, general guidance to network architecture design is still missing.
In our work, we bridge deep neural network design with numerical differential equations.
We show that many effective networks, such as ResNet, PolyNet, FractalNet and RevNet, can be interpreted as different numerical discretizations of differential equations.
This finding brings us a brand new perspective on the design of effective deep architectures.
We can take advantage of the rich knowledge in numerical analysis to guide us in designing new and potentially more effective deep networks.
As an example, we propose a linear multi-step architecture (LM-architecture) which is inspired by the linear multi-step method solving ordinary differential equations.
The LM-architecture is an effective structure that can be used on any ResNet-like networks.
In particular, we demonstrate that LM-ResNet and LM-ResNeXt (i.e. the networks obtained by applying the LM-architecture on ResNet and ResNeXt respectively) can achieve noticeably higher accuracy than ResNet and ResNeXt on both CIFAR and ImageNet with comparable numbers of trainable parameters.
In particular, on both CIFAR and ImageNet, LM-ResNet/LM-ResNeXt can significantly compress (>50%) the original networks while maintaining a similar performance.
This can be explained mathematically using the concept of modified equation from numerical analysis.
Last but not least, we also establish a connection between stochastic control and noise injection in the training process which helps to improve generalization of the networks.
Furthermore, by relating stochastic training strategy with stochastic dynamic system, we can easily apply stochastic training to the networks with the LM-architecture.
As an example, we introduced stochastic depth to LM-ResNet and achieve significant improvement over the original LM-ResNet on CIFAR10.
Deep learning has achieved great success in many machine learning tasks.
The end-to-end deep architectures have the ability to effectively extract features relevant to the given labels and achieve state-of-the-art accuracy in various applications BID3.
Network design is one of the central tasks in deep learning.
Its main objective is to grant the networks with strong generalization power using as few parameters as possible.
The first ultra deep convolutional network is the ResNet BID16 which has skip connections to keep feature maps in different layers in the same scale and to avoid gradient vanishing.
Structures other than the skip connections of the ResNet were also introduced to avoid gradient vanishing, such as the dense connections BID20 , fractal path BID27 and Dirac initialization BID50 .
Furthermore, there has been a lot of attempts to improve the accuracy of image classifications by modifying the residual blocks of the ResNet.
BID49 suggested that we need to double the number of layers of ResNet to achieve a fraction of a percent improvement of accuracy.
They proposed a widened architecture that can efficiently improve the accuracy.
BID51 pointed out that simply modifying depth or width of ResNet might not be the best way of architecture design.
Exploring structural diversity, which is an alternative dimension in network design, may lead to more effective networks.
In BID43 , BID51 , BID47 , and BID19 , the authors further improved the accuracy of the networks by carefully designing residual blocks via increasing the width of each block, changing the topology of the network and following certain empirical observations.
In the literature, the network design is mainly empirical.
It remains a mystery whether there is a general principle to guide the design of effective and compact deep networks.
Observe that each residual block of ResNet can be written as u_{n+1} = u_n + Δt f(u_n), which is one step of the forward Euler discretization (Appendix A.1) of the ordinary differential equation (ODE) u_t = f(u) (E, 2017).
This suggests that there might be a connection between discrete dynamic systems and deep networks with skip connections.
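A minimal sketch of this correspondence is given below; the residual branch, channel sizes, and step size are placeholders for illustration, not the architectures evaluated in the paper.

# Illustration only: a generic residual update viewed as one forward Euler step
# u_{n+1} = u_n + dt * f(u_n).
import torch
import torch.nn as nn

class EulerResidualBlock(nn.Module):
    def __init__(self, channels: int, dt: float = 1.0):
        super().__init__()
        self.dt = dt
        self.f = nn.Sequential(                      # residual branch f(u)
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        return u + self.dt * self.f(u)               # one explicit Euler step

# A multi-step variant in the spirit of the LM-architecture would additionally
# keep the previous state u_{n-1} and combine u_n, u_{n-1}, and f(u_n) with
# (possibly learnable) coefficients, mirroring linear multi-step ODE solvers.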
In this work, we will show that many state-of-the-art deep network architectures, such as PolyNet BID51, FractalNet BID27 and RevNet BID12, can be considered as different discretizations of ODEs.
From the perspective of this work, the success of these networks is mainly due to their ability to efficiently approximate dynamic systems.
On a side note, differential equations is one of the most powerful tools used in low-level computer vision such as image denoising, deblurring, registration and segmentation BID36 BID2 BID4 .
This may also bring insights on the success of deep neural networks in low-level computer vision.
Furthermore, the connection between architectures of deep neural networks and numerical approximations of ODEs enables us to design new and more effective deep architectures by selecting certain discrete approximations of ODEs.
As an example, we design a new network structure called linear multi-step architecture (LM-architecture) which is inspired by the linear multi-step method in numerical ODEs BID1 .
This architecture can be applied to any ResNet-like networks.
In this paper, we apply the LM-architecture to ResNet and ResNeXt BID47 and achieve noticeable improvements on CIFAR and ImageNet with comparable numbers of trainable parameters.
We also explain the performance gain using the concept of modified equations from numerical analysis.
It is known in the literature that introducing randomness by injecting noise into the forward process can improve generalization of deep residual networks.
This includes stochastic drop out of residual blocks BID21 and stochastic shakes of the outputs from different branches of each residual block BID11 .
In this work we show that any ResNet-like network with noise injection can be interpreted as a discretization of a stochastic dynamic system.
This gives a relatively unified explanation of the stochastic learning process using stochastic control.
Furthermore, by relating stochastic training strategy with stochastic dynamic system, we can easily apply stochastic training to the networks with the proposed LM-architecture.
As an example, we introduce stochastic depth to LM-ResNet and achieve significant improvement over the original LM-ResNet on CIFAR10.
|
This paper bridges deep network architectures with numerical (stochastic) differential equations. This new perspective enables new designs of more effective deep neural networks.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:421
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Transforming a graphical user interface screenshot created by a designer into computer code is a typical task conducted by a developer in order to build customized software, websites, and mobile applications.
In this paper, we show that deep learning methods can be leveraged to train a model end-to-end to automatically generate code from a single input image with over 77% accuracy for three different platforms (i.e. iOS, Android and web-based technologies).
The process of implementing client-side software based on a Graphical User Interface (GUI) mockup created by a designer is the responsibility of developers.
Implementing GUI code is, however, time-consuming and prevents developers from dedicating the majority of their time to implementing the actual functionality and logic of the software they are building.
Moreover, the computer languages used to implement such GUIs are specific to each target runtime system; thus resulting in tedious and repetitive work when the software being built is expected to run on multiple platforms using native technologies.
In this paper, we describe a model trained end-to-end with stochastic gradient descent to simultaneously learn to model sequences and spatio-temporal visual features in order to generate variable-length strings of tokens from a single GUI image as input. Our first contribution is pix2code, a novel application of Convolutional and Recurrent Neural Networks to generate computer tokens from a single GUI screenshot as input.
That is, no engineered feature extraction pipeline nor expert heuristics was designed to process the input data; our model learns from the pixel values of the input image alone.
Our experiments demonstrate the effectiveness of our method for generating computer code for various platforms (i.e. iOS and Android native mobile interfaces, and multi-platform web-based HTML/CSS interfaces) without the need for any change or specific tuning to the model.
In fact, pix2code can be used as such to support different target languages simply by being trained on a different dataset.
A video demonstrating our system is available online. Our
second contribution is the release of our synthesized datasets consisting of both GUI screenshots and associated source code for three different platforms. Our
datasets and our pix2code implementation are publicly available to foster future research.
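A minimal sketch of this kind of image-to-token model is given below; the layer sizes, the single-LSTM decoder and all names (ScreenshotToTokens, etc.) are assumptions for illustration and not the released pix2code implementation:

import torch
import torch.nn as nn

class ScreenshotToTokens(nn.Module):
    def __init__(self, vocab_size, img_feat=256, emb=128, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(                     # visual encoder for the GUI screenshot
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, img_feat), nn.ReLU(),
        )
        self.embed = nn.Embedding(vocab_size, emb)    # embeddings of previously generated tokens
        self.lstm = nn.LSTM(emb + img_feat, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)      # next-token logits

    def forward(self, image, token_ids):
        v = self.cnn(image)                                        # (B, img_feat)
        v = v.unsqueeze(1).expand(-1, token_ids.size(1), -1)       # repeat image feature per step
        x = torch.cat([self.embed(token_ids), v], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)                                         # (B, T, vocab)

Training then reduces to teacher-forced cross-entropy on the next DSL token, and swapping the vocabulary is enough to target a different platform.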
|
CNN and LSTM to generate markup-like code describing graphical user interface images.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:422
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Computer vision tasks such as image classification, image retrieval and few-shot learning are currently dominated by Euclidean and spherical embeddings, so that the final decisions about class belongings or the degree of similarity are made using linear hyperplanes, Euclidean distances, or spherical geodesic distances (cosine similarity).
In this work, we demonstrate that in many practical scenarios hyperbolic embeddings provide a better alternative.
Figure 1: An example of two-dimensional Poincaré embeddings computed by a hyperbolic neural network trained on MNIST, and evaluated additionally on Omniglot.
Ambiguous and unclear images from MNIST, as well as most of the images from Omniglot are embedded near the center, while samples with clear class labels (or characters from Omniglot similar to one of the digits) lie near the boundary.
High-dimensional embeddings are ubiquitous in modern computer vision.
Many, perhaps most, modern computer vision systems learn non-linear mappings (in the form of deep convolutional networks) from the space of images or image fragments into high-dimensional spaces.
The operations at the end of deep networks imply a certain type of geometry of the embedding spaces.
For example, image classification networks (Krizhevsky et al., 2012; LeCun et al., 1989) use linear operators (matrix multiplication) to map embeddings in the penultimate layer to class logits.
The class boundaries in the embedding space are thus piecewise-linear, and pairs of classes are separated by Euclidean hyperplanes.
The embeddings learned by the model in the penultimate layer, therefore, live in the Euclidean space.
The same can be said about systems where Euclidean distances are used to perform image retrieval (Oh Song et al., 2016; Sohn, 2016; Wu et al., 2017) , face recognition (Parkhi et al., 2015; Wen et al., 2016) or one-shot learning (Snell et al., 2017) .
Alternatively, some few-shot learning (Vinyals et al., 2016) , face recognition (Schroff et al., 2015) and person re-identification methods (Ustinova & Lempitsky, 2016; Yi et al., 2014) learn spherical embeddings, so that sphere projection operator is applied at the end of a network that computes the embeddings.
Cosine similarity (closely associated with sphere geodesic distance) is then used by such architectures to match images.
Euclidean spaces with their zero curvature and spherical spaces with their positive curvature have certain profound implications on the nature of embeddings that existing computer vision systems can learn.
In this work, we argue that hyperbolic spaces with negative curvature might often be more appropriate for learning embedding of images.
Towards this end, we add the recently-proposed hyperbolic network layers to the end of several computer vision networks, and present a number of experiments corresponding to image classification, one-shot, and few-shot learning and person re-identification.
We show that in many cases, the use of hyperbolic geometry improves the performance over Euclidean or spherical embeddings.
Motivation for hyperbolic image embeddings.
The use of hyperbolic spaces in natural language processing (Nickel & Kiela, 2017; Tifrea et al., 2018; Dhingra et al., 2018 ) is motivated by their natural ability to embed hierarchies (e.g., tree graphs) with low distortion (Sarkar, 2011) .
Hierarchies are ubiquitous in natural language processing.
First, there are natural hierarchies corresponding to, e.g., biological taxonomies and linguistic ontologies.
Likewise, a more generic short phrase can have many plausible continuations and is therefore semantically-related to a multitude of long phrases that are not necessarily closely related to each other (in the semantic sense).
The innate suitability of hyperbolic spaces to embedding hierarchies (Sala et al., 2018a; Sarkar, 2011) explains the success of such spaces in natural language processing (Nickel & Kiela, 2017) .
Here, we argue that similar hierarchical relations between images are common in computer vision tasks (Figure 2 ).
One can observe the following example cases:
• In image retrieval, an overview photograph is related to many images that correspond to the close-ups of different distinct details.
Likewise, for classification tasks in-the-wild, an image containing the representatives of multiple classes is related to images that contain representatives of the classes in isolation.
Embedding a dataset that contains composite images into continuous space is therefore similar to embedding a hierarchy.
• In some tasks, more generic images may correspond to images that contain less information and are therefore more ambiguous.
E.g., in face recognition, a blurry and/or low-resolution face image taken from afar can be related to many high-resolution images of faces that clearly belong to distinct people.
Again natural embeddings for image datasets that have widely varying image quality/ambiguity calls for retaining such hierarchical structure.
In order to build deep learning models which operate on embeddings in hyperbolic spaces, we capitalize on recent developments, which construct the analogues of familiar layers (such as a feed-forward layer, or a multinomial regression layer) in hyperbolic spaces.
We show that many standard architectures used for tasks of image classification, and in particular in the few-shot learning setting can be easily modified to operate on hyperbolic embeddings, which in many cases also leads to their improvement.
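For concreteness, the two ingredients such a hyperbolic head needs can be sketched as follows, assuming the standard curvature −1 Poincaré ball (an exponential map at the origin to move Euclidean features into the ball, and the Poincaré geodesic distance to compare embeddings); the paper's hyperbolic layers follow the hyperbolic neural network construction and may differ in details:

import torch

def expmap0(v, eps=1e-5):
    """Map Euclidean vectors into the Poincare ball via the exponential map at the origin."""
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(norm) * v / norm            # stays strictly inside the unit ball

def poincare_distance(x, y, eps=1e-5):
    """Geodesic distance on the curvature -1 Poincare ball."""
    sq = (x - y).pow(2).sum(dim=-1)
    dx = (1 - x.pow(2).sum(dim=-1)).clamp_min(eps)
    dy = (1 - y.pow(2).sum(dim=-1)).clamp_min(eps)
    return torch.acosh(1 + 2 * sq / (dx * dy))

# e.g. compare CNN feature vectors against class prototypes in hyperbolic space
feats = expmap0(torch.randn(4, 16))
protos = expmap0(torch.randn(10, 16))
dists = poincare_distance(feats.unsqueeze(1), protos.unsqueeze(0))   # (4, 10) distance matrix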
|
We show that hyperbolic embeddings are useful for high-level computer vision tasks, especially for few-shot classification.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:423
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
High-dimensional time series are common in many domains.
Since human cognition is not optimized to work well in high-dimensional spaces, these areas could benefit from interpretable low-dimensional representations.
However, most representation learning algorithms for time series data are difficult to interpret.
This is due to non-intuitive mappings from data features to salient properties of the representation and non-smoothness over time.
To address this problem, we propose a new representation learning framework building on ideas from interpretable discrete dimensionality reduction and deep generative modeling.
This framework allows us to learn discrete representations of time series, which give rise to smooth and interpretable embeddings with superior clustering performance.
We introduce a new way to overcome the non-differentiability in discrete representation learning and present a gradient-based version of the traditional self-organizing map algorithm that is more performant than the original.
Furthermore, to allow for a probabilistic interpretation of our method, we integrate a Markov model in the representation space.
This model uncovers the temporal transition structure, improves clustering performance even further and provides additional explanatory insights as well as a natural representation of uncertainty.
We evaluate our model in terms of clustering performance and interpretability on static (Fashion-)MNIST data, a time series of linearly interpolated (Fashion-)MNIST images, a chaotic Lorenz attractor system with two macro states, as well as on a challenging real world medical time series application on the eICU data set.
Our learned representations compare favorably with competitor methods and facilitate downstream tasks on the real world data.
Interpretable representation learning on time series is a seminal problem for uncovering the latent structure in complex systems, such as chaotic dynamical systems or medical time series.
In areas where humans have to make decisions based on large amounts of data, interpretability is fundamental to ease the human task.
Especially when decisions have to be made in a timely manner and rely on observing some chaotic external process over time, such as in finance or medicine, the need for intuitive interpretations is even stronger.
However, many unsupervised methods, such as clustering, make misleading i.i.d. assumptions about the data, neglecting their rich temporal structure and smooth behaviour over time.
This poses the need for a method of clustering, where the clusters assume a topological structure in a lower dimensional space, such that the representations of the time series retain their smoothness in that space.
In this work, we present a method with these properties. We choose to employ deep neural networks, because they have a very successful tradition in representation learning BID5 .
In recent years, they have increasingly been combined with generative modeling through the advent of generative adversarial networks (GANs) BID13 and variational autoencoders (VAEs) BID18 .
However, the representations learned by these models are often considered cryptic and do not offer the necessary interpretability .
A lot of work has been done to improve them in this regard, in GANs as well as VAEs BID16 BID9 .
Alas, these works have focused entirely on continuous representations, while discrete ones are still underexplored. In order to define temporal smoothness in a discrete representation space, the space has to be equipped with a topological neighborhood relationship.
One type of representation space with such a structure is induced by the self-organizing map (SOM) BID21 .
The SOM allows to map states from an uninterpretable continuous space to a lower-dimensional space with a predefined topologically interpretable structure, such as an easily visualizable two-dimensional grid.
However, while yielding promising results in visualizing static state spaces, such as static patient states BID27 , the classical SOM formulation does not offer a notion of time.
The time component can be incorporated using a probabilistic transition model, e.g. a Markov model, such that the representations of a single time point are enriched with information from the adjacent time points in the series.
It is therefore potentially fruitful to apply the approaches of probabilistic modeling alongside representation learning and discrete dimensionality reduction in an end-to-end model. In this work, we propose a novel deep architecture that learns topologically interpretable discrete representations in a probabilistic fashion.
Moreover, we introduce a new method to overcome the non-differentiability in discrete representation learning architectures and develop a gradient-based version of the classical selforganizing map algorithm with improved performance.
We present extensive empirical evidence for the model's performance on synthetic and real world time series from benchmark data sets, a synthetic dynamical system with chaotic behavior and real world medical data.
A schematic overview of our proposed model is depicted in FIG0 .
[Figure: Every latent data point (z_e) is mapped to its closest node in the SOM (z_q) to achieve a discrete representation. A Markov transition model is learned to predict the next discrete representation (z_q^{t+1}) given the current one (z_q^t). The discrete representations can then be decoded by another neural network back into the original data space.]
An input x ∈ R^d is mapped to a latent encoding z_e ∈ R^m (usually m < d) by computing z_e = f_θ(x), where f_θ(·) is parameterized by the encoder neural network.
The encoding is then assigned to an embedding z_q ∈ R^m in the dictionary of embeddings E = {e_1, . . . , e_k | e_i ∈ R^m} by sampling z_q ∼ p(z_q | z_e).
The form of this distribution is flexible and can be a design choice.
In order for the model to behave similarly to the original SOM algorithm (see below), in our experiments we choose the distribution to be categorical with probability mass 1 on the closest embedding to z_e, i.e. p(z_q | z_e) = 1[z_q = arg min_{e ∈ E} ||z_e − e||_2], where 1[·] is the indicator function.
A reconstruction x̂ of the input can then be computed as x̂ = g_φ(z), where g_φ(·) is parameterized by the decoder neural network.
Since the encodings and embeddings live in the same space, one can compute two different reconstructions, namely x̂_e = g_φ(z_e) and x̂_q = g_φ(z_q).
To achieve a topologically interpretable neighborhood structure, the embeddings are connected to form a self-organizing map.
A self-organizing map consists of k nodes V = {v_1, . . . , v_k}, where every node corresponds to an embedding in the data space e_v ∈ R^d and a representation in a lower-dimensional discrete space m_v ∈ M, where usually M ⊂ N^2.
During training on a data set D = {x_1, . . . , x_n}, a winner node ṽ is chosen for every point x_i according to ṽ = arg min_{v ∈ V} ||e_v − x_i||_2.
The embedding vector for every node u ∈ V is then updated according to e_u ← e_u + N(m_u, m_ṽ) η (x_i − e_u), where η is the learning rate and N(m_u, m_ṽ) is a neighborhood function between the nodes defined on the representation space M.
There can be different design choices for N(m_u, m_ṽ).
A more thorough review of the self-organizing map algorithm is deferred to the appendix (Sec. A).
We choose to use a two-dimensional SOM because it facilitates visualization similar to BID27 .
Since we want the architecture to be trainable end-to-end, we cannot use the standard SOM training algorithm described above.
Instead, we devise a loss function term whose gradient corresponds to a weighted version of the original SOM update rule (see below).
We implement it in such a way that any time an embedding e_{i,j} at position (i, j) in the map gets updated, it also updates all the embeddings in its immediate neighborhood N(e_{i,j}).
The neighborhood is defined as N(e_{i,j}) = {e_{i−1,j}, e_{i+1,j}, e_{i,j−1}, e_{i,j+1}} for a two-dimensional map.
The loss function for a single input x looks like L_SOM-VAE(x, x̂_q, x̂_e) = L_reconstruction(x, x̂_q, x̂_e) + α L_commitment(x) + β L_SOM(x), where x, z_e, z_q, x̂_e and x̂_q are defined as above and α and β are weighting hyperparameters.
Every term in this function is specifically designed to optimize a different model component.
The first term is the reconstruction loss L_reconstruction(x, x̂_q, x̂_e) = ||x − x̂_q||^2 + ||x − x̂_e||^2.
The first subterm of this is the discrete reconstruction loss, which encourages the assigned SOM node z_q(x) to be an informative representation of the input.
The second subterm encourages the encoding z_e(x) to also be an informative representation.
This ensures that all parts of the model have a fully differentiable credit assignment path to the loss function, which facilitates training.
Note that the reconstruction loss corresponds to the evidence lower bound (ELBO) of the VAE part of our model BID18 .
Since we assume a uniform prior over z_q, the KL-term in the ELBO is constant w.r.t. the parameters and can be ignored during optimization.
The term L_commitment encourages the encodings and assigned SOM nodes to be close to each other and is defined as L_commitment(x) = ||z_e(x) − z_q(x)||^2.
Closeness of encodings and embeddings should be expected to already follow from the L_reconstruction term in a fully differentiable architecture.
However, due to the non-differentiability of the embedding assignment in our model, the L_commitment term has to be explicitly added to the objective in order for the encoder to get gradient information about z_q.
The SOM term is L_SOM(x) = Σ_{ẽ ∈ N(z_q(x))} ||sg[z_e(x)] − ẽ||^2, where N(·) is the set of neighbors in the discrete space as defined above and sg[·] is the gradient stopping operator that does not change the outputs during the forward pass, but sets the gradients to 0 during the backward pass.
It encourages the neighbors of the assigned SOM node z_q to also be close to z_e, thus enabling the embeddings to exhibit a self-organizing map property, while stopping the gradients on z_e such that the encoding is not pulled in the direction of the neighbors.
This term enforces a neighborhood relation between the discrete codes and encourages all SOM nodes to ultimately receive gradient information from the data.
The gradient stopping in this term is motivated by the observation that the data points themselves do not get moved in the direction of their assigned SOM node's neighbors in the original SOM algorithm either (see above).
We want to optimize the embeddings based on their neighbors, but not the respective encodings, since any single encoding should be as close as possible to its assigned embedding and not receive gradient information from any other embeddings that it is not assigned to.
Note that the gradient update of a specific SOM node in this formulation depends on its distance to the encoding, while the step size in the original SOM algorithm is constant.
It will be seen that this offers some benefits in terms of optimization and convergence (see Sec. 4.1).
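Putting the terms above together, a compact sketch of how the loss could be computed is shown below; the PyTorch phrasing, tensor shapes and the boundary handling of the grid are assumptions, and gradient stopping is emulated with detach() in place of sg[·]:

import torch

def som_vae_losses(x, z_e, embeddings, decoder, alpha=1.0, beta=1.0):
    """z_e: (B, m) encodings; embeddings: (H, W, m) SOM grid (a learnable parameter in practice)."""
    H, W, m = embeddings.shape
    flat = embeddings.view(-1, m)
    d = torch.cdist(z_e, flat)                      # (B, k) distances to all nodes
    idx = d.argmin(dim=1)                           # index of the closest node
    z_q = flat[idx]                                 # assigned SOM node z_q
    x_q, x_e = decoder(z_q), decoder(z_e)

    recon = ((x - x_q) ** 2).sum(-1).mean() + ((x - x_e) ** 2).sum(-1).mean()
    commit = ((z_e - z_q) ** 2).sum(-1).mean()

    # SOM term: pull the grid neighbours of z_q towards sg[z_e]
    i = torch.div(idx, W, rounding_mode="floor")
    j = idx % W
    som = 0.0
    for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        ni, nj = (i + di).clamp(0, H - 1), (j + dj).clamp(0, W - 1)   # boundary handled by clamping (a simplification)
        som = som + ((z_e.detach() - embeddings[ni, nj]) ** 2).sum(-1).mean()

    return recon + alpha * commit + beta * som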
The SOM-VAE can recover topologically interpretable state representations on time series and static data.
It provides an improvement to standard methods in terms of clustering performance and offers a way to learn discrete two-dimensional representations of the data manifold in concurrence with the reconstruction task.
It introduces a new way of overcoming the non-differentiability of the discrete representation assignment and contains a gradient-based variant of the traditional self-organizing map that is more performant than the original one.
On a challenging real world medical data set, our model learns more informative representations with respect to medically relevant prediction targets than competitor methods.
The learned representations can be visualized in an interpretable way and could be helpful for clinicians to understand patients' health states and trajectories more intuitively. It will be interesting to see in future work whether the probabilistic component can be extended to not just improve the clustering and interpretability of the whole model, but also enable us to make predictions.
Promising avenues in that direction could be to increase the complexity by applying a higher order Markov Model, a Hidden Markov Model or a Gaussian Process.
Another fruitful avenue of research could be to find more theoretically principled ways to overcome the non-differentiability and compare them with the empirically motivated ones.
Lastly, one could explore deviating from the original SOM idea of fixing a latent space structure, such as a 2D grid, and learn the neighborhood structure as a graph directly from data.
|
We present a method to learn interpretable representations on time series using ideas from variational autoencoders, self-organizing maps and probabilistic models.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:424
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We propose Significance-Offset Convolutional Neural Network, a deep convolutional network architecture for regression of multivariate asynchronous time series.
The model is inspired by standard autoregressive (AR) models and gating mechanisms used in recurrent neural networks.
It involves an AR-like weighting system, where the final predictor is obtained as a weighted sum of adjusted regressors, while the weights are data-dependent functions learnt through a convolutional network.
The architecture was designed for applications on asynchronous time series and is evaluated on such datasets: a hedge fund proprietary dataset of over 2 million quotes for a credit derivative index, an artificially generated noisy autoregressive series and household electricity consumption dataset.
The proposed architecture achieves promising results as compared to convolutional and recurrent neural networks.
The code for the numerical experiments and the architecture implementation will be shared online to make the research reproducible.
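To make the weighting mechanism in the abstract concrete, the following is a simplified sketch based only on the description above (the layer shapes, the 1x1/3x3 convolutions and the softmax normalisation of the significance weights are assumptions, not the released implementation): one sub-network produces adjusted regressors by adding learned offsets to the past observations, another produces data-dependent significance weights, and the prediction is their weighted sum.

import torch
import torch.nn as nn

class SignificanceOffsetHead(nn.Module):
    """Prediction = sum_t w_t * (x_t + offset_t), with w and offset learned from the series."""
    def __init__(self, in_dim, hidden=32):
        super().__init__()
        self.offset = nn.Conv1d(in_dim, in_dim, kernel_size=1)           # per-step adjustment of regressors
        self.significance = nn.Sequential(                               # data-dependent weights
            nn.Conv1d(in_dim, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, in_dim, kernel_size=1),
        )

    def forward(self, x):                     # x: (B, in_dim, window) of past observations
        adjusted = x + self.offset(x)         # AR-like regressors, shifted by learned offsets
        weights = torch.softmax(self.significance(x), dim=-1)   # normalised over the time window
        return (weights * adjusted).sum(dim=-1)                 # (B, in_dim) prediction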
Time series forecasting is focused on modeling the predictors of future values of time series given their past.
As in many cases the relationship between past and future observations is not deterministic, this amounts to expressing the conditional probability distribution as a function of the past observations: p(X_{t+d} | X_t, X_{t−1}, . . .) = f(X_t, X_{t−1}, . . .). This forecasting problem has been approached almost independently by the econometrics and machine learning communities.
In this paper we examine the capabilities of convolutional neural networks (CNNs) BID25 in modeling the conditional mean of the distribution of future observations; in other words, the problem of autoregression.
We focus on time series with a multivariate and noisy signal.
In particular, we work with financial data, which has received limited public attention from the deep learning community and for which nonparametric methods are not commonly applied.
Financial time series are particularly challenging to predict due to their low signal-to-noise ratio (cf. applications of Random Matrix Theory in econophysics BID24 BID3 ) and heavy-tailed distributions BID8 .
Moreover, the predictability of financial market returns remains an open problem and is discussed in many publications (cf. the efficient market hypothesis of BID11 ).
A common situation with financial data is that the same signal (e.g. the value of an asset) is observed from different sources (e.g. financial news, analysts, portfolio managers in hedge funds, market-makers in investment banks) in asynchronous moments of time.
Each of these sources may have a different bias and noise with respect to the original signal that needs to be recovered (cf. the time series in FIG0 ).
Moreover, these sources are usually strongly correlated and lead-lag relationships are possible (e.g. a market-maker with more clients can update its view more frequently and precisely than one with fewer clients).
Therefore, the significance of each of the available past observations might be dependent on some other factors that can change in time.
Hence, the traditional econometric models such as AR, VAR, VARMA (Hamilton, 1994) might not be sufficient.
Yet their relatively good performance motivates coupling such linear models with deep neural networks that are capable of learning highly nonlinear relationships.
[Figure: Quotes from four different market participants (sources) for the same CDS throughout one day. Each trader displays from time to time the prices for which he offers to buy (bid) and sell (ask) the underlying CDS. The filled area marks the difference between the best sell and buy offers (spread) at each time.]
For these reasons, we propose the Significance-Offset Convolutional Neural Network, a convolutional network extension of standard autoregressive models BID34 BID35 equipped with a nonlinear weighting mechanism, and provide empirical evidence on its competitiveness with a standard multilayer CNN and a recurrent Long Short-Term Memory network BID18 .
The mechanism is inspired by the gating systems that proved successful in recurrent neural networks BID18 BID6 and highway networks BID37 .
2 RELATED WORK
2.1 TIME SERIES FORECASTING
The literature on time series forecasting is rich and has a long history in the field of econometrics, which makes extensive use of linear stochastic models such as AR, ARIMA and GARCH processes, to mention a few.
Unlike in machine learning, research in econometrics is more focused on explaining variables rather than improving out-of-sample prediction power.
In practice, one can notice that these models 'over-fit' on financial time series: their parameters are unstable and out-of-sample performance is poor.
Reading through recent proceedings of the main machine learning venues (e.g. ICML, NIPS, AISTATS, UAI), one can notice that time series are often forecast using Gaussian processes BID31 BID38 BID19 , especially when time series are irregularly sampled BID9 BID26 .
Though still largely independent, researchers have started to "bring together the machine learning and econometrics communities" by building on top of their respective fundamental models, yielding, for example, the Gaussian Copula Process Volatility model BID42 .
Our paper is in line with this emerging trend by coupling AR models and neural networks.
Over the past 5 years, deep neural networks have surpassed results from most of the existing literature in many fields BID33 : computer vision BID23 , audio signal processing and speech recognition BID32 , natural language processing (NLP) BID1 BID7 BID14 BID21 .
Although sequence modeling in NLP, i.e. prediction of the next character or word, is related to our forecasting problem (1), the nature of the sequences is too dissimilar to allow using the same cost functions and architectures.
The same applies to the adversarial training proposed by BID28 for video frame prediction, as such an approach favors the most plausible scenarios rather than outputs close to all possible outputs, while the latter is usually required in financial time series due to the stochasticity of the considered processes.
Literature on deep learning for time series forecasting is still scarce (cf. BID12 for a recent review).
Literature on deep learning for financial time series forecasting is even scarcer, though interest in using neural networks for financial predictions is not new BID30 BID29 .
More recent papers include BID36 , who used 4-layer perceptrons to model price change distributions in Limit Order Books, and BID2 , who applied the more recent WaveNet architecture of van den BID39 to several short univariate and bivariate time series (including financial ones).
Despite the claim of applying deep learning, BID17 use autoencoders with a single hidden layer to compress multivariate financial data.
Besides these and claims of secretive hedge funds (it can be marketing surfing on the deep learning hype), no promising results or innovative architectures were publicly published so far, to the best of our knowledge.
In this paper, we investigate the gold standard architectures' (simple Convolutional Neural Network (CNN), Residual Network, multi-layer LSTM) capabilities on AR-like artificial asynchronous and noisy time series, and on real financial data from the credit default swap market where some inefficiencies may exist, i.e. time series may not be totally random.
In this article, we proposed a weighting mechanism that, coupled with convolutional networks, forms a new neural network architecture for time series prediction.
The proposed architecture is designed for regression tasks on asynchronous signals in the presence of high amount of noise.
This approach has proved to be successful in forecasting financial and artificially generated asynchronous time series outperforming popular convolutional and recurrent networks.The proposed model can be further extended by adding intermediate weighting layers of the same type in the network structure.
Another possible generalization that requires further empirical studies can be obtained by leaving the assumption of independent offset values for each past observation, i.e. considering not only 1x1 convolutional kernels in the offset sub-network.Finally, we aim at testing the performance of the proposed architecture on other real-life datasets with relevant characteristics.
We observe that there exists a strong need for a common 'econometric' datasets benchmark and, more generally, for time series (stochastic processes) regression.
APPENDIX A: NONLINEARITY IN THE ASYNCHRONOUSLY SAMPLED AUTOREGRESSIVE TIME SERIES
Lemma 1. Let X(t) be an AR(2) time series given by X(t) = aX(t − 1) + bX(t − 2) + ε(t), where (ε(t))_{t=1,2,...} are i.i.d. error terms.
Then DISPLAYFORM1 for any t > k ≥ 2, where a_k, b_k are rational functions of a and b.
Proof. The proof follows a simple induction.
It is sufficient to show that DISPLAYFORM2 where DISPLAYFORM3 and E_k(t) is a linear combination of {ε(t − i), i = 0, 1, . . . , k − 2}.
The basis of the induction is trivially satisfied via (15).
In the induction step, we assume that (17) holds for k.
For t > k + 1 we have DISPLAYFORM4. Multiplying both sides of this equation by b and adding a v_k X(t − 1) we obtain DISPLAYFORM5. Since aX(t − 1) + bX(t − 2) = X(t) − ε(t) we get DISPLAYFORM6. As DISPLAYFORM7 is a linear combination of {ε(t − i), i = 0, 1, . . . , k − 1}, the above equation proves (17) for k = k + 1.
|
Convolutional architecture for learning data-dependent weights for autoregressive forecasting of time series.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:425
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
MixUp is a data augmentation scheme in which pairs of training samples and their corresponding labels are mixed using linear coefficients.
Without label mixing, MixUp becomes a more conventional scheme: input samples are moved but their original labels are retained.
Because samples are preferentially moved in the direction of other classes, we refer to this method as directional adversarial training, or DAT.
We show that under two mild conditions, MixUp asymptotically converges to a subset of DAT.
We define untied MixUp (UMixUp), a superset of MixUp wherein training labels are mixed with different linear coefficients to those of their corresponding samples.
We show that under the same mild conditions, untied MixUp converges to the entire class of DAT schemes.
Motivated by the understanding that UMixUp is both a generalization of MixUp and a form of adversarial training, we experiment with different datasets and loss functions to show that UMixUp provides improved performance over MixUp.
In short, we present a novel interpretation of MixUp as belonging to a class highly analogous to adversarial training, and on this basis we introduce a simple generalization which outperforms MixUp.
Deep learning applications often require complex networks with a large number of parameters (He et al., 2016; Zagoruyko & Komodakis, 2016; Devlin et al., 2018) .
Although neural networks perform so well that their ability to generalize is an area of study in itself (Zhang et al., 2017a; Arpit et al., 2017) , their high complexity nevertheless causes them to overfit their training data (Kukacka et al., 2017) .
For this reason, effective regularization techniques are in high demand.
There are two flavors of regularization: complexity curtailing and data augmentation.
Complexity curtailing methods constrain models to learning in a subset of parameter space which has a higher probability of generalizing well.
Notable examples are weight decay (Krogh & Hertz, 1991) and dropout (Srivastava et al., 2014) .
Data augmentation methods add transformed versions of training samples to the original training set.
Conventionally, transformed samples retain their original label, so that models effectively see a larger set of data-label training pairs.
Commonly applied transformations in image applications include flips, crops and rotations.
A recently devised family of augmentation schemes called adversarial training has attracted active research interest (Szegedy et al., 2013; Goodfellow et al., 2014; Miyato et al., 2016; Athalye et al., 2018; Shaham et al., 2018; He et al., 2018) .
Adversarial training seeks to reduce a model's propensity to misclassify minimally perturbed training samples, or adversarials.
While attack algorithms used for testing model robustness may search for adversarials in unbounded regions of input space, adversarial training schemes generally focus on perturbing training samples within a bounded region, while retaining the sample's original label (Goodfellow et al., 2015; Shaham et al., 2018) .
Another recently proposed data augmentation scheme is MixUp (Zhang et al., 2017b) , in which new samples are generated by mixing pairs of training samples using linear coefficients.
Despite its well established generalization performance (Zhang et al., 2017b; Guo et al., 2018; Verma et al., 2018) , the working mechanism of MixUp is not well understood.
Guo et al. (2018) suggest viewing MixUp as imposing local linearity on the model using points outside of the data manifold.
While this perspective is insightful, we do not believe it paints a full picture of how MixUp operates.
A recent study (Lamb et al., 2019) provides empirical evidence that MixUp improves adversarial robustness, but does not present MixUp as a form of adversarial training.
We build a framework to understand MixUp in a broader context: we argue that adversarial training is a central working principle of MixUp.
To support this contention, we connect MixUp to a MixUp-like scheme which does not perform label mixing, and we relate this scheme to adversarial training.
Without label mixing, MixUp becomes a conventional augmentation scheme: input samples are moved, but their original labels are retained.
Because samples are moved in the direction of other samples -which are typically clustered in input space -we describe this method as 'directional'.
Because this method primarily moves training samples in the direction of adversarial classes, this method is analogous to adversarial training.
We thus refer to MixUp without label mixing as directional adversarial training (DAT).
We show that MixUp converges to a subset of DAT under mild conditions, and we thereby argue that adversarial training is a working principle of MixUp.
Inspired by this new understanding of MixUp as a form of adversarial training, and upon realizing that MixUp is (asymptotically) a subset of DAT, we introduce Untied MixUp (UMixUp), a simple enhancement of MixUp which converges to the entire family of DAT schemes, as depicted in Figure 1 .
Untied Mixup mixes data-label training pairs in a similar way to MixUp, with the distinction that the label mixing ratio is an arbitrary function of the sample mixing ratio.
We perform experiments to show that UMixUp's classification performance improves upon MixUp.
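A sketch of the mixing step that separates the two schemes is shown below; sampling λ from a Beta(α, α) distribution and the particular label-mixing function g used in the usage line are illustrative assumptions, not the tuning used in the paper:

import numpy as np

def mix_batch(x, y_onehot, lam_fn=None, alpha=1.0, rng=None):
    """Return mixed inputs and mixed soft labels for one batch."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)                  # sample the input mixing ratio
    perm = rng.permutation(len(x))                # partner samples
    x_mix = lam * x + (1 - lam) * x[perm]
    lab = lam if lam_fn is None else lam_fn(lam)  # MixUp: tied to lam; Untied MixUp: lab = g(lam)
    y_mix = lab * y_onehot + (1 - lab) * y_onehot[perm]
    return x_mix, y_mix

# e.g. a hypothetical untied scheme where labels are mixed more sharply than inputs
x_mix, y_mix = mix_batch(np.random.rand(8, 32), np.eye(10)[np.random.randint(0, 10, 8)],
                         lam_fn=lambda lam: lam ** 2)

Setting lam_fn=None recovers ordinary MixUp, which is why UMixUp is a strict superset of MixUp at the level of the training procedure.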
In short, this research is motivated by a curiosity to better understand the working of MixUp.
In so doing, we aim to:
1. Establish DAT as analogous to adversarial training.
This is discussed in section 4.
2. Establish UMixUp as a superset of MixUp, and as converging to the entire family of DAT schemes.
In so doing,
a) establish MixUp's convergence to a subset of DAT, and thereby that it operates analogously to adversarial training; and
b) establish UMixUp as a broader class of MixUp-like schemes that operate analogously to adversarial training.
This is discussed in section 5.
3. Establish empirically that UMixUp's classification performance improves upon MixUp.
This is discussed in section 6.
Finally we note that this paper has another contribution.
Conventionally, MixUp is only applicable to baseline models that use cross entropy loss.
All analytical results we develop in this paper are applicable to a wider family of models using any loss function which we term target-linear.
We define target-linearity and experiment with a new loss function called negative cosine-loss to show its potential.
Regular (non-calligraphic) capitalized letters such as X will denote random variables, and their lowercase counterparts, e.g., x, will denote realizations of a random variable.
Any sequence (a_1, a_2, . . . , a_n) will be denoted by a_1^n.
Likewise (A_1, A_2, . . . , A_n) will be denoted by A_1^n, and a sequence of sample pairs ((x_1, x'_1), (x_2, x'_2), . . . , (x_n, x'_n)) is denoted by (x, x')_1^n.
For any value a ∈ [0, 1], we will use ā as a short notation for 1 − a.
Classification Setting. Consider a standard classification problem, in which one wishes to learn a classifier that predicts the class label for a sample.
Formally, let X be a vector space in which the samples of interest live and let Y be the set of all possible labels associated with these samples.
The set of training samples will be denoted by D, a subset of X.
We will use t(x) to denote the true label of x.
Let F be a neural network function, parameterized by θ, which maps X to another vector space Z. Let ϕ : Y → Z be a function that maps a label in Y to an element in Z such that for any y, y' ∈ Y, if y ≠ y', then ϕ(y) ≠ ϕ(y').
In the space Z, we refer to F(x) as the model's prediction.
With slight abuse of language, we will occasionally refer to both t(x) and ϕ(t(x)) as the "label" of x.
Let ℓ : Z × Z → R be a loss function, using which one defines an overall loss function as L(θ) = Σ_{x ∈ D} ℓ(F(x), ϕ(t(x))).
Here we have taken the notational convention that the first argument of ℓ represents the model's prediction and the second represents the target label.
In this setting, the learning problem is formulated as minimizing L with respect to its characterizing parameters θ.
|
We present a novel interpretation of MixUp as belonging to a class highly analogous to adversarial training, and on this basis we introduce a simple generalization which outperforms MixUp
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:426
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Plan recognition aims to look for target plans to best explain the observed actions based on plan libraries and/or domain models.
Despite the success of previous approaches on plan recognition, they mostly rely on correct action observations.
Recent advances in visual activity recognition have the potential to enable applications such as automated video surveillance.
Effective approaches for such problems would require the ability to recognize the plans of agents from video information.
Traditional plan recognition algorithms rely on access to detailed planning domain models.
One recent promising direction involves learning approximate (or shallow) domain models directly from the observed activity sequences.
Such plan recognition approaches expect observed action sequences as inputs.
However, visual inference results are often noisy and uncertain, typically represented as a distribution over possible actions.
In this work, we develop a visual plan recognition framework that recognizes plans with an approximate domain model learned from uncertain visual data.
|
Handling Uncertainty in Visual Perception for Plan Recognition
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:427
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We consider the task of answering complex multi-hop questions using a corpus as a virtual knowledge base (KB).
In particular, we describe a neural module, DrKIT, that traverses textual data like a virtual KB, softly following paths of relations between mentions of entities in the corpus.
At each step the operation uses a combination of sparse-matrix TFIDF indices and maximum inner product search (MIPS) on a special index of contextual representations.
This module is differentiable, so the full system can be trained completely end-to-end using gradient based methods, starting from natural language inputs.
We also describe a pretraining scheme for the index mention encoder by generating hard negative examples using existing knowledge bases.
We show that DrKIT improves accuracy by 9 points on 3-hop questions in the MetaQA dataset, cutting the gap between text-based and KB-based state-of-the-art by 70%.
DrKIT is also very efficient, processing up to 10x more queries per second than existing state-of-the-art QA systems.
Large knowledge bases (KBs), such as FreeBase and Wikidata, organize information around entities, which makes it easy to reason over their contents.
For example, given a query like "When was Grateful Dead's lead singer born?", one can identify the entity Grateful Dead and the path of relations LeadSinger, BirthDate to efficiently extract the answer-provided that this information is present in the KB.
Unfortunately, large KBs are often incomplete (Min et al., 2013) .
While relation extraction methods can be used to populate KBs, this process is inherently error-prone, and errors in extraction can propagate to downstream tasks.
Advances in open-domain QA (Moldovan et al., 2002; Yang et al., 2019) suggest an alternative: instead of performing relation extraction, one could treat a large corpus as a virtual KB by answering queries with spans from the corpus.
This ensures facts are not lost in the relation extraction process, but also poses challenges.
One challenge is that it is relatively expensive to answer questions using QA models which encode each document in a query-dependent fashion (Chen et al., 2017; Devlin et al., 2019) -even with modern hardware (Strubell et al., 2019; Schwartz et al., 2019) .
The cost of QA is especially problematic for certain complex questions, such as the example question above.
If the passages stating that "Jerry Garcia was the lead singer of Grateful Dead" and "Jerry Garcia was born in 1942" are far apart in the corpus, it is difficult for systems that retrieve and read a single passage to find an answer-even though in this example, it might be easy to answer the question after the relations were explicitly extracted into a KB.
More generally, complex questions involving sets of entities or paths of relations may require aggregating information from entity mentions in multiple documents, which is expensive.
One step towards efficient QA is the recent work of Seo et al. (2018; on phrase-indexed question answering (PIQA), in which spans in the text corpus are associated with question-independent contextual representations and then indexed for fast retrieval.
Natural language questions are then answered by converting them into vectors that are used to perform inner product search (MIPS) against the index.
This ensures efficiency during inference.
However, this approach cannot be directly used to answer complex queries, since by construction, the information stored in the index is about the local context around a span-it can only be used for questions where the answer can be derived by reading a single passage.
This paper addresses this limitation of phrase-indexed question answering.
We introduce an efficient, end-to-end differentiable framework for doing complex QA over a large text corpus that has been encoded in a query-independent manner.
Specifically, we consider "multi-hop" complex queries which can be answered by repeatedly executing a "soft" version of the operation below, defined over a set of entities X and a relation R: Y = X.follow(R) = {x' : ∃x ∈ X s.t. R(x, x') holds}. In past work, soft, differentiable versions of this operation were used to answer multi-hop questions against an explicit KB (Cohen et al., 2019) .
Here we propose a more powerful neural module which approximates this operation against an indexed corpus.
In our module, the input X is a sparse vector representing a weighted set of entities, and the relation R is a dense feature vector, e.g. a vector derived from a neural network over a natural language query.
The output Y is another sparse vector representing the weighted set of entities, aggregated over entity mentions in the top-k spans retrieved from the index.
The spans in turn are retrieved using a MIPS query constructed from X and R, and we discuss pretraining schemes for the index in §2.3.
For multi-hop queries, the output entities Y can be recursively passed as input to the next iteration of the same module.
The weights of the entities in Y are differentiable w.r.t the MIPS queries, which allows end-to-end learning without any intermediate supervision.
We discuss an implementation based on sparse matrix-vector products, whose runtime and memory depend only on the number of spans K retrieved from the index.
This is crucial for scaling up to large corpora, and leads to up to 15x faster inference than existing state-of-the-art multi-hop and open-domain QA systems.
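A simplified sketch of one such hop is given below; the variable names, shapes and the brute-force top-k inner product search stand in for the system's TFIDF filtering and compressed MIPS index, so this illustrates the idea rather than the actual DrKIT implementation:

import numpy as np
import scipy.sparse as sp

def follow(x_entities, rel_vec, mention_emb, ent2men, men2ent, k=100):
    """One soft hop: sparse entity weights -> sparse entity weights.

    x_entities : (1, E) sparse row vector of weights over entities
    rel_vec    : (d,) dense relation/query vector
    mention_emb: (M, d) dense index of contextual mention encodings
    ent2men    : (E, M) sparse entity-to-mention co-occurrence matrix
    men2ent    : (M, E) sparse matrix linking each mention to its entity
    """
    allowed = (x_entities @ ent2men).toarray().ravel() > 0   # mentions reachable from the current entities
    scores = mention_emb @ rel_vec                           # inner-product search over all mentions
    scores = scores * allowed
    top = np.argpartition(-scores, k)[:k]                    # keep only the top-k mentions
    rows = np.zeros(k, dtype=int)
    men_weights = sp.csr_matrix((scores[top], (rows, top)), shape=(1, mention_emb.shape[0]))
    return men_weights @ men2ent                             # aggregate mention scores into entities

# x1 = follow(x0, encode_question("lead singer of"), index, A_EM, B_ME)  # hypothetical usage

Because the runtime depends only on the k retrieved mentions and two sparse matrix products, the same routine can be applied recursively for each additional hop.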
The system we introduce is called DrKIT (for Differentiable Reasoning over a Knowledge base of Indexed Text).
We test DrKIT on the MetaQA benchmark for complex question answering, and show that it improves on prior text-based systems by 5 points on 2-hop and 9 points on 3-hop questions, reducing the gap between text-based and KB-based systems by 30% and 70%, respectively.
We also test DrKIT on a new dataset of multi-hop slot-filling over Wikipedia articles, and show that it outperforms DrQA (Chen et al., 2017) and PIQA (Seo et al., 2019) adapted to this task.
We present DrKIT, a differentiable module that is capable of answering multi-hop questions directly using a large entity-linked text corpus.
DrKIT is designed to imitate KB traversal over the text corpus, providing the ability to follow relations in the "virtual" KB over text.
We achieve state-of-the-art results on the MetaQA dataset for answering natural language questions, with a 9 point increase in the 3-hop case.
We also developed an efficient implementation using sparse operations and inner product search, which led to a 10x increase in QPS over baseline approaches.
We use p = 400 dimensional embeddings for the mentions and queries, and 200-dimensional embeddings each for the start and end positions.
This results in an index of size 750MB.
When computing A_{E→M}, the entity-to-mention co-occurrence matrix, we only retain mentions in the top 50 paragraphs matched with an entity, to ensure sparsity.
Further we initialize the first 4 layers of the question encoder with the Transformer network from pre-training.
For the first hop, we assign Z_0 as a 1-hot vector for the least frequent entity detected in the question using an exact match.
The number of nearest neighbors K and the softmax temperature λ were tuned on the dev set of each task, and we found K = 10000 and λ = 4 to work best.
We pretrain the index on a combination of the MetaQA corpus, using the KB provided with MetaQA for distance data, and the Wikidata corpus.
|
Differentiable multi-hop access to a textual knowledge base of indexed contextual representations
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:428
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In spite of their great success, traditional factorization algorithms typically do not support features (e.g., Matrix Factorization), or their complexity scales quadratically with the number of features (e.g, Factorization Machine).
On the other hand, neural methods allow large feature sets, but are often designed for a specific application.
We propose novel deep factorization methods that allow efficient and flexible feature representation.
For example, we enable describing items with natural language with complexity linear in the vocabulary size; this enables prediction for unseen items and avoids the cold-start problem.
We show that our architecture can generalize some previously published single-purpose neural architectures.
Our experiments suggest improved training times and accuracy compared to shallow methods.
In recent years, predictive tasks that traditionally have been solved with factorization are now being studied within the context of neural networks.
These solutions often work as black boxes, and many times they are designed specifically for a single task with an arbitrary network that may not have much justification.
We propose Deep Structured Factorization Machine, a family of general-purpose factorization techniques that can be used stand-alone or as a "design pattern" within a larger neural network.
Our work provides some insight into how to enable general-purpose factorization within neural architectures without losing interpretability and a principled design. Previous factorization methods do not scale to large feature sets and make strong assumptions about their latent structure.
Our main contribution is that we enable a general-purpose framework that enables efficient factorization of datasets with complex feature sets.
For example, applications of factorization in natural language scale quadratically in the number of words in the vocabulary.
Our solution allows inference with linear runtime complexity on the vocabulary size.
Previous work has explored how to improve factorization's accuracy (see § 3.3), its current limitations notwithstanding; alternatively, some have proposed how to make it tractable for a particular domain, for example text BID22 .
We believe that we are the first ones to propose an efficient general-purpose method.
Interestingly, our experiments indicate that Structured Deep Factorization has large improvements in predictive accuracy and runtime compared to some recent ad-hoc models.
We present a general purpose method for factorizing large feature sets; we demonstrate it in several applications, such as using text to enable prediction for unseen items and circumvent the cold-start problem.
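As an illustration of how a feature-extraction function over text can feed a factorization model (a sketch under assumed names and shapes, not the authors' architecture), an item embedding can be computed from the words describing it, so an unseen item with a known description still receives a score:

import torch
import torch.nn as nn

class TextItemFactorization(nn.Module):
    """score(user, item) = <user embedding, g(item description words)>."""
    def __init__(self, n_users, vocab_size, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.word_emb = nn.EmbeddingBag(vocab_size, dim, mode="mean")   # cost grows linearly with the vocabulary

    def forward(self, user_ids, item_word_ids, offsets):
        u = self.user_emb(user_ids)                       # (B, dim)
        v = self.word_emb(item_word_ids, offsets)         # (B, dim), averaged word vectors of the description
        return (u * v).sum(dim=-1)                        # predicted affinity

# An item unseen during training can be scored as long as its description uses known words.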
Future work may soften our requirement of domain knowledge-in general, our methods require feature groups and feature extraction functions defined by experts.
We did not pursue an exhaustive comparison with previously published methods; for example, there are other algorithms that rely on Bayesian optimization BID3 to infer the item embeddings from text which we did not benchmark.
Although we apply our methods on six datasets altogether, further experimentation may be able to situate under which conditions our methods are effective.Our methods generalize previously published single-purpose neural networks.
For example, TagSpace BID20 ) is a very successful method, but it is limited to a single textual feature.
With the correct feature extraction function, Structured Deep-In Factorization Machine can be used to implement a TagSpace model.Compared to previous general-purpose approaches, our work makes less assumptions about the training data and allows more flexibility.
We provide evidence that the factorization hypothesis may be too restrictive-when relaxed we see higher predictive accuracy with a dramatic improvement of training speed.
We show experimental results outperforming an algorithm specifically designed for text-even when using the same feature extraction CNN.
This suggests that the need for ad-hoc networks should be situated in relationship to the improvements over a general-purpose method.
To the extent of our knowledge, our work is the first to propose a general purpose factorization algorithm that enables efficient inference on arbitrary feature sets.
|
Scalable general-purpose factorization algorithm-- also helps to circumvent cold start problem.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:429
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The ability of algorithms to evolve or learn (compositional) communication protocols has traditionally been studied in the language evolution literature through the use of emergent communication tasks.
Here we scale up this research by using contemporary deep learning methods and by training reinforcement-learning neural network agents on referential communication games.
We extend previous work, in which agents were trained in symbolic environments, by developing agents which are able to learn from raw pixel data, a more challenging and realistic input representation.
We find that the degree of structure found in the input data affects the nature of the emerged protocols, and thereby corroborate the hypothesis that structured compositional language is most likely to emerge when agents perceive the world as being structured.
The study of emergent communication is important for two related problems in language development, both human and artificial: language evolution, the development of communication protocols from scratch BID27 ; and language acquisition, the ability of an embodied agent to learn an existing language.
In this paper we focus on the problem of how environmental or pre-linguistic conditions affect the nature of the communication protocol that an agent learns.
The increasing realism and complexity of environments being used for grounded language learning BID8 BID18 present an opportunity to analyse these effects in detail.
In line with previous work on emergent communication, we are strongly motivated by the view that language derives meaning from its use BID39 BID37 .
This perspective especially motivates the study of language emergence in cases where co-operative agents try to achieve shared goals in game scenarios BID34 BID6 BID26 , and is related to the study of multi-agent and self-play methods that have found great success in other areas of machine learning BID1 BID30 .
Here we focus on simple referential games, in which one agent must communicate to another a target object in the agent's environment.
One of the most important properties of natural language is compositionality.
Smaller building blocks (e.g. words, morphemes) are used to generate unbounded numbers of more complex forms (e.g. sentences, multi-word expressions), with the meaning of the larger form being determined by the meanings of its parts and how they are put together BID14 .
Compositionality is an advantage in any communication protocol as it allows in principle infinite expression through a finite dictionary and a finite set of combination rules.
In emergent communication research, previous work has shown that agents can produce (somewhat) compositional protocols when engaging in language games BID34 .
However, the computational agents were typically situated in artificial worlds containing just a handful of objects, represented as disentangled, structured, and sometimes even atomic symbols, e.g. attribute-based or one-hot vectors BID2 BID5 BID13 BID0 BID26 .
However, humans receive raw sensorimotor data rather than symbolic input, and little work to date has tested whether these findings carry over when agents are situated in less idealized worlds that bear more similarity to the kind of entangled and noisy environments to which humans are typically exposed.
We presented a series of studies investigating the properties of protocols emerging when reinforcement learning agents are trained end-to-end on referential communication games.
We found that when agents are presented with disentangled input data in the form of attribute vectors, this inherent compositional structure is successfully retained in the output.
Moreover, we showed that communication can also be achieved in cases where agents are presented with raw pixel data, a type of input that aligns better with the raw sensorimotor data that humans are exposed to.
At the same time, we found that their ability to form compositional protocols in these cases is hampered by their ability to pull apart the objects' factors of variations.
Altogether, we were able to successfully scale up traditional research from the language evolution literature on emergent communication tasks to the contemporary deep learning framework, thus opening avenues to more realistic, and large scale, computational simulations of language emergence with complex image stimuli.
|
A controlled study of the role of environments with respect to properties in emergent communication protocols.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:43
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Augmented Reality (AR) can assist with physical tasks such as object assembly through the use of “situated instructions”.
These instructions can be in the form of videos, pictures, text or guiding animations, where the most helpful media among these is highly dependent on both the user and the nature of the task.
Our work supports the authoring of AR tutorials for assembly tasks with little overhead beyond simply performing the task itself.
The presented system, AuthAR reduces the time and effort required to build interactive AR tutorials by automatically generating key components of the AR tutorial while the author is assembling the physical pieces.
Further, the system guides authors through the process of adding videos, pictures, text and animations to the tutorial.
This concurrent assembly and tutorial generation approach allows for authoring of portable tutorials that fit the preferences of different end users.
Physical task guidance can be delivered via Augmented Reality (AR) since assembly often requires both hands and continuous attention to the task [19] .
Additionally, assembly tutorials have instructions directly associated with physical objects, so AR can reduce the need for excessive context switching between the instructions and the physical structure by projecting those instructions into the environment.
These benefits have been demonstrated in fields such as Facilities Management [19] , Maintenance [47] , and Internet of Things (IoT) device management [19, 46] .
Additionally, prior work in AR assembly guidance has shown that these benefits can translate to carrying out assembly tasks [2, 17, 20, 39] .
While significant previous work has looked at the benefits of following tutorials in AR, much less has looked at how to author these tutorials.
Beyond the technical requirements of an authoring interface, an ideal tutorial may look different depending on the end user of the tutorial.
This problem is exacerbated in AR as there are many different modalities in which tutorial content can be presented.
While one person may appreciate guiding animations in AR, another may prefer static text and images, and yet another may prefer video tutorials from one or multiple perspectives.
With AuthAR, we present a system for building tutorials for assembly tasks that can accommodate the needs of these different types of end users.
AuthAR generates video and pictorial representations semi-automatically while the tutorial author completes the task.
Furthermore, AuthAR allows tutorial authors to create and refine a tutorial in situ, integrating content authoring into the process of completing the task.
This approach adds little additional overhead and reduces the need for post-processing of the tutorial.
This paper presents the AuthAR system for generating mixed media assembly tutorials.
Informed by prior work on content/tutorial authoring, and tutorial playback and walkthrough, we build the system with an eye toward non-obtrusive content authoring and generation of important components for tutorial playback, summarized in a set of design guidelines.
We validate the system's ability to create a tutorial by stepping through the process of creating a tutorial to build a laptop stand, automatically generating an XML representation of the tutorial.
Initial observations suggest the tool will be valuable and point to possible ways the system could be extended and refined in future iterations.
Toward validating AuthAR, we discuss our initial observations in testing with tutorial authors, present an example application that parses and displays the generated tutorial for end users, and explain extensibility beyond the presented use case.
In doing so, we consider improvements to AuthAR, and design considerations for other in situ AR content authoring tools.
AuthAR enables tutorial authors to generate mixed media tutorials semi-automatically to guide end users through the assembly process.
We automatically record expert demonstration where possible and allow for in situ editing for refinements and additions.
We built AuthAR with several design guidelines in mind, validated with the authoring of a tutorial for assembling a laptop stand, and discuss the extensibility to assembly of other tasks by simply loading different virtual models into AuthAR.
We see AuthAR enabling authoring of tutorials that could reach a widespread population with mixed media tutorials flexible to the preferences of each individual user.
|
We present a mixed media assembly tutorial authoring system that streamlines creation of videos, images, text and dynamic instructions in situ.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:430
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Monitoring patients in ICU is a challenging and high-cost task.
Hence, predicting the condition of patients during their ICU stay can help provide better acute care and plan the hospital's resources.
There has been continuous progress in machine learning research for ICU management, and most of this work has focused on using time series signals recorded by ICU instruments.
In our work, we show that adding clinical notes as another modality improves the performance of the model for three benchmark tasks that play an important role in ICU management: in-hospital mortality prediction, modeling decompensation, and length of stay forecasting.
While the time-series data is measured at regular intervals, doctor notes are charted at irregular times, making it challenging to model them together.
We propose a method to model them jointly, achieving considerable improvement across benchmark tasks over baseline time-series model.
With the advancement of medical technology, patients admitted into the intensive care unit (ICU) are monitored by different instruments on their bedside, which measure different vital signals about patient's health.
During their stay, doctors visit the patient intermittently for check-ups and make clinical notes about the patient's health and physiological progress.
These notes can be perceived as summarized expert knowledge about the patient's state.
All these data about instrument readings, procedures, lab events, and clinical notes are recorded for reference.
Availability of ICU data and enormous progress in machine learning have opened up new possibilities for health care research.
Monitoring patients in ICU is a challenging and high-cost task.
Hence, predicting the condition of patients during their ICU stay can help plan better resource usage for patients that need it most in a cost-effective way.
Prior works (Harutyunyan et al., 2017; BID4 BID18 BID16 BID1) have focused exclusively on modeling the problem using the time series signals from medical instruments.
Expert knowledge from doctor's notes has been ignored in the literature.
In this work, we use clinical notes in addition to the time-series data for improved prediction on benchmark ICU management tasks (Harutyunyan et al., 2017).
While the time-series data is measured continuously, the doctor notes are charted at intermittent times.
This creates a new challenge to model continuous time series and discrete time note events jointly.
We propose such a multi-modal deep neural network that comprises recurrent units for the time-series and a convolutional network for the clinical notes.
We demonstrate that adding clinical notes improves the AUC-PR scores on in-hospital mortality prediction (+7.8%) and modeling decompensation (+6.1%), and kappa score on length of stay forecasting (+3.4%).
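As a rough illustration only (the excerpt does not specify layer sizes, the number of physiological channels, or the note vocabulary, so the values below are placeholders), such a joint model could be sketched in PyTorch as an LSTM over the time-series signals fused with a 1-D convolution over note tokens:

import torch
import torch.nn as nn

class MultimodalICUModel(nn.Module):
    """Sketch: LSTM over time-series signals, 1-D CNN over note tokens (placeholder sizes)."""
    def __init__(self, n_signals=17, vocab_size=5000, emb_dim=64, hidden=64):
        super().__init__()
        self.ts_encoder = nn.LSTM(n_signals, hidden, batch_first=True)
        self.note_emb = nn.Embedding(vocab_size, emb_dim)
        self.note_cnn = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)
        self.head = nn.Linear(2 * hidden, 1)  # e.g. in-hospital mortality logit

    def forward(self, signals, note_tokens):
        _, (h, _) = self.ts_encoder(signals)               # h: (1, B, hidden)
        ts_feat = h[-1]                                     # (B, hidden)
        emb = self.note_emb(note_tokens).transpose(1, 2)    # (B, emb_dim, T)
        note_feat = self.note_cnn(emb).max(dim=-1).values   # (B, hidden)
        return self.head(torch.cat([ts_feat, note_feat], dim=-1))

model = MultimodalICUModel()
logits = model(torch.randn(8, 48, 17), torch.randint(0, 5000, (8, 120)))
print(logits.shape)  # torch.Size([8, 1])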
Identifying the patient's condition in advance is of critical importance for acute care and ICU management.
Literature has exclusively focused on using time-series measurements from ICU instruments to this end.
In this work, we demonstrate that utilizing clinical notes along with time-series data can improve the prediction performance significantly.
In the future, we expect to improve more using advanced models for the clinical notes since text summarizes expert knowledge about a patient's condition.
|
We demonstrate that using clinical notes in conjunction with ICU instrument data improves the performance on ICU management benchmark tasks
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:431
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Existing works in deep Multi-Agent Reinforcement Learning (MARL) mainly focus on coordinating cooperative agents to complete certain tasks jointly.
However, in many cases of the real world, agents are self-interested such as employees in a company and clubs in a league.
Therefore, the leader, i.e., the manager of the company or the league, needs to provide bonuses to followers for efficient coordination, which we call expensive coordination.
The main difficulties of expensive coordination are that
i) the leader has to consider the long-term effect and predict the followers' behaviors when assigning bonuses and
ii) the complex interactions between followers make the training process hard to converge, especially when the leader's policy changes with time.
In this work, we address this problem through an event-based deep RL approach.
Our main contributions are threefold.
(1) We model the leader's decision-making process as a semi-Markov Decision Process and propose a novel multi-agent event-based policy gradient to learn the leader's long-term policy.
(2) We exploit the leader-follower consistency scheme to design a follower-aware module and a follower-specific attention module to predict the followers' behaviors and make accurate response to their behaviors.
(3) We propose an action abstraction-based policy gradient algorithm to reduce the followers' decision space and thus accelerate the training process of followers.
Experiments in resource collections, navigation, and the predator-prey game reveal that our approach outperforms the state-of-the-art methods dramatically.
Deep Multi-Agent Reinforcement Learning (MARL) has been widely used in coordinating cooperative agents to jointly complete certain tasks where the agent is assumed to be selfless (fully cooperative), i.e., the agent is willing to sacrifice itself to maximize the team reward.
However, in many cases of the real world, the agents are self-interested, such as taxi drivers in a taxi company (fleets) and clubs in a league.
For instance, in the example of taxi fleets (Miao et al., 2016) , drivers may prefer to stay in the area with high customer demand to gain more reward.
It is unfair and not efficient to compel the taxi driver to selflessly contribute to the company, e.g., to stay in the low customer demand area.
Forcing the drivers to selflessly contribute may increase the company's income in the short term, but it will eventually make the company inefficient and unsustainable in the long run, because unsatisfied drivers may become demotivated and even leave the company.
Another important example is that the government wants some companies to invest on the poverty area to achieve the fairness of the society, which may inevitably reduce the profits of companies.
Similar to previous example, the companies may leave when the government forces them to invest.
A better way to achieve coordination among followers and achieve the leader's goals is for the manager of the company or the government to provide bonuses to followers, as when the taxi company pays extra bonuses for serving customers in rural areas and the government provides subsidies for investing in poverty areas, which we term expensive coordination.
In this paper, we solve the large-scale sequential expensive coordination problem with a novel RL training scheme.
There are several lines of works related to the expensive coordination problem, including mechanism design (Nisan & Ronen, 2001 ) and the principal-agent model (Laffont & Martimort, 2009 ).
However, these works focus more on static decisions (each agent only makes a single decision).
To consider sequential decisions, the leader-follower MDP game (Sabbadin & Viet, 2013; 2016) and the RL-based mechanism design (Tang, 2017; Shen et al., 2017) are introduced but most of their works only focus on matrix games or small-scale Markov games, which cannot be applied to the case with the large-scale action or state space.
The most related work is M 3 RL (Shu & Tian, 2019) where the leader assigns goals and bonuses by using a simple attention mechanism (summing/averaging the features together) and mind (behaviors) tracking to predict the followers' behaviors and makes response to the followers' behaviors.
But they only consider rule-based followers, i.e., followers with fixed preferences, and ignore how the followers' behaviors respond to the leader's policy, which significantly simplifies the problem and makes the model unrealistic.
In the expensive coordination problem, there are two critical issues which should be considered: 1) the leader's long-term decision process where the leader has to consider both the long-term effect of itself and long-term behaviors of the followers when determining his action to incentivise the coordination among followers, which is not considered in (Sabbadin & Viet, 2013; Mguni et al., 2019) ; and 2) the complex interactions between the leader and followers where the followers will adapt their policies to maximize their own utility given the leader's policy, which makes the training process unstable and hard, if not unable, to converge in large-scale environment, especially when the leader changes his actions frequently, which is ignored by (Tharakunnel & Bhattacharyya, 2007; Shu & Tian, 2019) .
In this work, we address these two issues in the expensive coordination problem through an abstraction-based deep RL approach.
Our main contributions are threefold.
(1) We model the leader's decision-making process as a semi-Markov Decision Process (semi-MDP) and propose a novel event-based policy gradient to learn the leader's policy considering the long-term effect (the leader takes actions at important points rather than at each step to avoid myopic decisions) (Section 4.1).
(2) A well-performing leader's policy is also highly dependent on how well the leader knows the followers.
To predict the followers' behaviors precisely, we show the leader-follower consistency scheme.
Based on the scheme, the follower-aware module, the follower-specific attention module, and the sequential decision module are proposed to capture these followers' behaviors and make accurate response to their behaviors (Section 4.2).
(3) To accelerate the training process, we propose an action abstraction-based policy gradient algorithm for the followers.
This approach is able to reduce followers' decision space and thus simplifies the interaction between the leader and followers as well as accelerates the training process of followers (Section 4.3).
Experiments in resource collections, navigation and predatorprey show that our method outperforms the state-of-the-art methods dramatically.
This paper proposes a novel RL training scheme for Stackelberg Markov Games with single leader and multiple self-interested followers, which considers the leader's long-term decision process and complicated interaction between followers with three contributions.
1) To consider the long-term effect of the leader's behavior, we develop an event-based policy gradient for the leader's policy.
2) To predict the followers' behaviors and make accurate response to their behaviors, we exploit the leader-follower consistency to design a novel follower-aware module and follower-specific attention mechanism.
3) We propose an action abstraction-based policy gradient algorithm to accelerate the training process of followers.
Experiments in resource collections, navigation, and predator-prey game reveal that our method outperforms the state-of-the-art methods dramatically.
We would like to highlight that SMGs contribute to the RL (especially MARL) community in three key aspects.
1) As we mentioned in the Introduction, most of the existing MARL methods assume that all the agents are willing to sacrifice themselves to maximize the total rewards, which is not true in many real-world non-cooperative scenarios.
On the contrary, our proposed method realistically assumes that agents are self-interested.
Thus, SMGs provide a new scheme focusing more on self-interested agents.
We think this aspect is the most significant contribution to the RL community.
2) SMGs can be regarded as a multi-agent system with different roles (the leader and the followers) (Wilson et al., 2008), and our method provides a solution to that problem.
3) Our methods also contribute to hierarchical RL, i.e., they provide a non-cooperative training scheme between the high-level policy (the leaders) and the low-level policy (the followers), which plays an important role when the followers are self-interested.
Moreover, our EBPG also proposes a novel policy gradient method for the temporal abstraction structure.
There are several directions we would like to investigate to further extend our SMG model:
i) we will consider multiple cooperative/competitive leaders and multiple self-interested followers, which is the case in the labor market,
ii) we will consider multi-level leaders, which is the case in the hierarchical organizations and companies and
iii) we will consider the adversarial attacks to our SMG model, which may induce extra cost to the leader for efficient coordination.
We believe that our work is a preliminary step towards a deeper understanding of the leader-follower scheme in both research and the application to society.
|
We propose an event-based policy gradient to train the leader and an action abstraction policy gradient to train the followers in leader-follower Markov game.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:432
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Recent work has studied the emergence of language among deep reinforcement learning agents that must collaborate to solve a task.
Of particular interest are the factors that cause language to be compositional---i.e., express meaning by combining words which themselves have meaning.
Evolutionary linguists have found that in addition to structural priors like those already studied in deep learning, the dynamics of transmitting language from generation to generation contribute significantly to the emergence of compositionality.
In this paper, we introduce these cultural evolutionary dynamics into language emergence by periodically replacing agents in a population to create a knowledge gap, implicitly inducing cultural transmission of language.
We show that this implicit cultural transmission encourages the resulting languages to exhibit better compositional generalization.
Compositionality is an important structure of language that reflects a disentangled understanding of the world -enabling the expression of infinitely many concepts using finitely many elements.
Agents that have compositional understandings of the world generalize in obviously correct ways even in the face of limited training examples (Lake & Baroni, 2018) .
For example, an agent with a compositional understanding of blue squares and purple triangles should also understand purple squares without directly observing any of them.
Developing artificial agents that can ground, understand, and produce compositional (and therefore more interpretable) language could greatly improve generalization to new instances and ease human-AI interactions.
In building theories of how compositionality emerges in human languages, work in evolutionary linguistics looks to the process of cultural transmission (Kirby, 2001; Kirby et al., 2008) .
Cultural transmission of language occurs when a group of agents pass their language on to a new group of agents, e.g. parents who teach their children to speak as they do.
Because this education is incomplete and biased, it allows the language itself to change over time via a process known as cultural evolution.
This paradigm (Kirby et al., 2014) explains the emergence of compositionality as a result of expressivity and compressibility -i.e. to be most effective, a language should be expressive enough to differentiate between all possible meanings (e.g., objects) and compressible enough to be learned easily.
Work in the evolutionary linguistics community has shown that over multiple 'generations' these competing pressures result in the emergence of compositional languages both in simulation (Kirby, 2001 ) and with human subjects (Kirby et al., 2008) .
These studies aim to understand humans whereas we want to understand and design artificial neural networks.
Approaching the problem from another direction, recent work in AI has studied language emergence in such multi-agent, goal-driven tasks.
These works have demonstrated that agent languages will emerge to enable coordination-centric tasks to be solved without direct or even indirect language supervision (Foerster et al., 2016; Sukhbaatar et al., 2016; Lazaridou et al., 2017; Das et al., 2017) .
However, the resulting languages are usually not compositional and are difficult to interpret, even by other machines (Andreas et al., 2017) .
Some existing work has studied means to encourage compositional language formation (Mordatch & Abbeel, 2018; , but these settings study fixed populations of agents -i.e. examining language within a single generation.
In this work we bridge these two areas -examining the effect of generational cultural transmission on the compositionality of emergent languages in a multi-agent, goal-driven setting.
We introduce cultural transmission into language emergence between neural agents.
The starting point of our study is a goal-oriented dialog task (similar to that of ), summarized in Fig. 1a .
During learning we periodically replace some agents with new ones (gray agents).
These new agents do not know any language, but instead of creating one they learn it from older agents.
This creates generations of language that become more compositional over time.
We study this in the context of a cooperative dialog-based reference game involving two agents communicating in discrete symbols ; an example dialog is shown in Fig. 1a .
To examine cultural transmission, we extend this setting to a population of agents (Fig. 1b) and introduce a simple mechanism to induce the expressivity and compressibility pressures inherent in cultural transmission.
Specifically, we periodically re-initialize some subset of the agents in the population.
In order to perform well at the task, the population's emergent language must be sufficiently expressive to reference all the objects (expressivity) and must be easily learnable by these 'new' agents (compressibility).
The new agents have a randomized language whereas the surviving agents already know a grounded language.
This "knowledge gap" creates an implicit 'teaching' setting that is analogous to the explicit transmission stage in models of iterative learning (Kirby, 2001 ).
Through our experiments and analysis, we show that periodic agent replacement is an effective way to induce cultural transmission and yields more compositionally generalizable language in our setting.
To summarize, our contributions are:
- We propose a method for inducing implicit cultural transmission in neural language models.
- We introduce new metrics to measure the similarity between agent languages and verify cultural transmission has occurred as a result of our periodic agent replacement protocol.
- We show our cultural transmission procedure induces compositionality in neural language models, going from 13% accuracy on a compositionally novel test set to 46% in the best configuration.
Further, we show this is complementary with previous priors which encourage compositionality.
In this work we investigated cultural transmission in deep neural dialog agents, applying it to language emergence.
The evolutionary linguistics community has long used cultural transmission to explain how compositional languages could have emerged.
The deep learning community, having recently become interested in language emergence, has not investigated that link until now.
Instead of explicit models of cultural transmission familiar in evolutionary linguistics, we favor an implicit model where language is transmitted from generation to generation only because it helps agents achieve their goals.
We show that this does indeed cause cultural transmission and compositionality.
Future work.
While our work used an implicit version of cultural transmission, we are interested in investigating the effect of explicit versions of cultural transmission on language structure.
In another direction, cultural transmission may also provide an appropriate prior for neural representations of non-language information.
|
We use cultural transmission to encourage compositionality in languages that emerge from interactions between neural agents.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:433
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Based on our observation that there exists a dramatic drop for the singular values of the fully connected layers or a single feature map of the convolutional layer, and that the dimension of the concatenated feature vector almost equals the summation of the dimension on each feature map, we propose a singular value decomposition (SVD) based approach to estimate the dimension of the deep manifolds for a typical convolutional neural network VGG19.
We choose three categories from the ImageNet, namely Persian Cat, Container Ship and Volcano, and determine the local dimension of the deep manifolds of the deep layers through the tangent space of a target image.
Through several augmentation methods, we found that the Gaussian noise method is closer to the intrinsic dimension, as by adding random noise to an image we are moving in an arbitrary dimension, and when the rank of the feature matrix of the augmented images does not increase we are very close to the local dimension of the manifold.
We also estimate the dimension of the deep manifold based on the tangent space for each of the maxpooling layers.
Our results show that the dimensions of different categories are close to each other and decline quickly along the convolutional layers and fully connected layers.
Furthermore, we show that the dimensions decline quickly inside the Conv5 layer.
Our work provides new insights for the intrinsic structure of deep neural networks and helps unveiling the inner organization of the black box of deep neural networks.
To have a better understanding of deep neural networks, a recent important trend is to analyze the structure of the high-dimensional feature space.
Capitalizing on the manifold hypothesis BID1 BID12 , the distribution of the generated data is assumed to concentrate in regions of low dimensionality.
In other words, it is assumed that activation vectors of deep neural networks lie on different low dimensional manifolds embedded in high dimensional feature space.
Note that the rationality of many manifold learning algorithms based on deep learning and autoencoders is that one learns an explicit or implicit coordinate system for leading factors of variation.
These factors can be thought of as concepts or abstractions that help us understand the rich variability in the data, which can explain most of the structure in the unknown data distribution.
See BID3 for more information.
The dimension estimation is crucial in determining the number of variables in a linear system, or in determining the number of degrees of freedom of a dynamic system, which may be embedded in the hidden layers of neural networks.
Moreover, many algorithms in manifold learning require the intrinsic dimensionality of the data as a crucial parameter.
Therefore, the problem of estimating the intrinsic dimensionality of a manifold is of great importance, and it is also a crucial start for manifold learning.
Unfortunately, the manifold of interest in AI (especially for deep neural networks) is a rugged manifold with a great number of twists, ups and downs with strong curvature.
Thus, there is a fundamental difficulty for manifold learning, as raised in BID0: if the manifolds are not very smooth, one may need a considerable number of training examples to cover each one of these variations, and there is no chance for us to generalize to unseen variations.
Our work is based on an important characterization of the manifold, namely, the set of its tangent hyperplanes.
For a point p on a d-dimensional manifold, the tangent hyperplane is given by a local basis of d vectors that span the local directions of variations allowed on the manifold.
As illustrated in Figure 1, these local directions specify how one can change p infinitesimally while staying on the manifold.
Figure 1: A two-dimensional manifold with a small region where data points concentrate, along with a tangent plane and associated tangent directions, forming a basis that specifies the directions of small moves one can make to stay on the manifold.
Based on the above analysis, our work focuses on a thorough exploration of the local hyperplane dimension of the activation manifold in deep neural networks.
Creating an artificial data cluster concentrated in regions of the local tangent hyperplane, we apply SVD to the data cluster in different layers or feature maps in neural networks.
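One plausible way to turn this into code is to center the cluster of activations, take its singular values, and count how many are needed before the spectrum drops; the energy threshold used below is an assumption, as the excerpt only states that a sharp drop in the spectrum is observed:

import numpy as np

def estimate_local_dimension(features, energy=0.99):
    # features: (n_samples, n_features) activations of perturbed copies of one
    # image at a given layer, i.e. a local data cluster on the manifold.
    centered = features - features.mean(axis=0, keepdims=True)
    s = np.linalg.svd(centered, compute_uv=False)
    explained = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(explained, energy) + 1)

# Toy check: 200 noisy samples that actually live on a 5-D subspace of R^100.
rng = np.random.default_rng(0)
cluster = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 100))
cluster += 1e-3 * rng.normal(size=(200, 100))
print(estimate_local_dimension(cluster))  # expected to be close to 5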
Through thorough analysis, we reach the following fascinating results.
• There exists a dramatic drop for the singular values of the fully connected layers or a single feature map of the convolutional layer.
• For convolutional layers, the dimension of the concatenated feature vector almost equals the summation of the dimension on each feature map.
• The dimensions of different image categories are close and the dimension declines quickly along the layers.
To our knowledge this is the first thorough exploration of manifold dimension on very deep neural networks.
We wish our work sheds light on new understandings and inspires further investigations on the structure of manifolds in deep neural networks.
Through extensive experiments, we found that there exists a dramatic drop for the singular values of the fully connected layers or a single feature map of the convolutional layer, and the dimension of the concatenated feature vector almost equals the summation of the dimension of each feature map for several feature maps randomly picked.
Based on the interesting observations we obtained, we developed an efficient and effective SVD based method to estimate the local dimension of deep manifolds in the VGG19 neural network.
We found that the dimensions are close for different images of the same category and even images of different categories, and the dimension declines quickly along the convolutional layers and fully connected layers.
Our results support the low-dimensional manifold hypothesis for deep networks, and our exploration helps unveil the inner organization of deep networks.
Our work also suggests the possibility of estimating the dimension of convolutional layers by observing every feature map separately, rather than working directly on the whole set of activation feature maps, which is costly or even impossible with typical computing power.
|
We propose an SVD-based method to explore the local dimension of the activation manifold in deep neural networks.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:434
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Large pre-trained Transformers such as BERT have been tremendously effective for many NLP tasks.
However, inference in these large-capacity models is prohibitively slow and expensive.
Transformers are essentially a stack of self-attention layers which encode each input position using the entire input sequence as its context.
However, we find that it may not be necessary to apply this expensive sequence-wide self-attention at all layers.
Based on this observation, we propose a decomposition of a pre-trained Transformer that allows the lower layers to process segments of the input independently, enabling parallelism and caching.
We show that the information loss due to this decomposition can be recovered in the upper layers with auxiliary supervision during fine-tuning.
We evaluate decomposition with pre-trained BERT models on five different paired-input tasks in question answering, sentence similarity, and natural language inference.
Results show that decomposition enables faster inference (up to 4x) and significant memory reduction (up to 70%) while retaining most (up to 99%) of the original performance.
We will release the code at <anonymized url>.
Inference in large Transformer-based NLP models such as BERT (Devlin et al., 2019) requires prohibitively high-levels of compute, making it expensive to support large volume processing in data centers, and almost infeasible to run on resource constrained mobile devices.
These Transformer models create effective representations using self-attention, a mechanism that allows them to effectively account for wide textual contexts.
However, applying self-attention over the entire input for all layers is computationally expensive.
This raises a natural question: Is self-attention over the entire input necessary in all of the layers?
Previous studies (Tenney et al., 2019; Hao et al., 2019; Clark et al., 2019b) have shown that lower layers tend to capture syntactic phenomena that mostly depend on local contexts and that higher layers capture more semantic phenomena that are relevant to downstream tasks, which depend on longer global contexts.
This suggests that considering only local context in lower layers of Transformer and considering full global context in upper layers can provide speedup at a very small cost in terms of effectiveness.
In this work we focus on paired-input NLP tasks such as reading comprehension, natural language inference and sentence pair similarity.
These tasks provide a natural boundary for the locality of text (e.g., question vs. passage in QA).
Because of this natural decomposition in two segments, we can compute representations for lower layers with only the local segment as the context and compute representations for upper layers with both segments as the context.
This decomposition technique has multiple benefits: It allows for parallel processing of each segment, caching of segments that are available offline, and a significant reduction in runtime memory.
Moreover, since the architecture remains largely same, the original pre-trained weights can be reused in the decomposed model.
To compensate for the differences in the decomposed setting, we augment the fine-tuning loss on the target task with a distillation loss that minimizes the output-level as well as layer-level divergences.
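To make the decomposition concrete, the toy sketch below processes each segment separately in the lower layers and jointly in the upper layers; the use of generic TransformerEncoderLayer blocks and the chosen widths are illustrative assumptions rather than the paper's BERT implementation, and the distillation loss is omitted:

import torch
import torch.nn as nn

class DecomposedEncoder(nn.Module):
    """Lower layers attend within each segment; upper layers attend over the pair."""
    def __init__(self, d_model=64, n_heads=4, n_lower=3, n_upper=3):
        super().__init__()
        make_layer = lambda: nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.lower = nn.ModuleList(make_layer() for _ in range(n_lower))
        self.upper = nn.ModuleList(make_layer() for _ in range(n_upper))

    def forward(self, seg_a, seg_b):
        # Segment-local self-attention: segments can be processed in parallel,
        # and representations of offline segments (e.g. passages) can be cached.
        for blk in self.lower:
            seg_a, seg_b = blk(seg_a), blk(seg_b)
        # Cross-segment self-attention only in the upper layers.
        h = torch.cat([seg_a, seg_b], dim=1)
        for blk in self.upper:
            h = blk(h)
        return h

enc = DecomposedEncoder()
out = enc(torch.randn(2, 16, 64), torch.randn(2, 48, 64))
print(out.shape)  # torch.Size([2, 64, 64])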
We evaluate the decomposition idea using the BERT model on five different pairwise tasks.
The decomposition achieves substantial speedup (2 to 4.3x) and reduction in memory (51.1% to 76.8%) for only small loss in effectiveness (0.2 to 1.8 points).
Moreover, we find that with decomposition the larger BERT model can even run faster than the original smaller BERT model, while still being more accurate.
Transformers have improved the effectiveness of NLP tools by their ability to incorporate large contexts effectively in multiple layers.
This however imposes a significant complexity cost.
In this work, we showed that modeling such large contexts may not always be necessary and leverage this insight to build a decomposition of the Transformer model that provides substantial improvements in inference speed, memory reduction, while retaining most of the original model's accuracy.
This decomposition model provides a simple yet strong starting point for efficient models as NLP moves towards increasingly larger models handling wider contexts.
|
Inference in large Transformers is expensive due to the self-attention in multiple layers. We show a simple decomposition technique can yield a faster, low memory-footprint model that is nearly as accurate as the original models.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:435
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Exploration while learning representations is one of the main challenges Deep Reinforcement Learning (DRL) faces today.
As the learned representation is dependent on the observed data, the exploration strategy has a crucial role.
The popular DQN algorithm has significantly improved the capabilities of Reinforcement Learning (RL) algorithms to learn state representations from raw data, yet it uses a naive exploration strategy which is statistically inefficient.
The Randomized Least Squares Value Iteration (RLSVI) algorithm (Osband et al., 2016), on the other hand, explores and generalizes efficiently via linearly parameterized value functions.
However, it is based on hand-designed state representation that requires prior engineering work for every environment.
In this paper, we propose a Deep Learning adaptation for RLSVI.
Rather than using hand-designed state representation, we use a state representation that is learned directly from the data by a DQN agent.
As the representation is being optimized during the learning process, a key component of the suggested method is a likelihood matching mechanism, which adapts to the changing representations.
We demonstrate the importance of the various properties of our algorithm on a toy problem and show that our method outperforms DQN in five Atari benchmarks, reaching competitive results with the Rainbow algorithm.
In Reinforcement Learning (RL), an agent seeks to maximize the cumulative rewards obtained from interactions with an unknown environment (Sutton et al., 1998) .
Since the agent can learn only by its interactions with the environment, it faces the exploration-exploitation dilemma: Should it take actions that will maximize the rewards based on its current knowledge or instead take actions to potentially improve its knowledge in the hope of achieving better future performance.
Thus, to find the optimal policy the agent needs to use an appropriate exploration strategy.
Classic RL algorithms were designed to face problems in the tabular settings where a table containing a value for each state-action pair can be stored in the computer's memory.
For more general settings, where generalization is required, a common practice is to use hand-designed state representation (or state-action), upon which a function approximation can be learned to represent the value for each state and action.
RL algorithms based on linear function approximation have demonstrated stability and data efficiency, and enjoy convergence guarantees under mild assumptions (Tsitsiklis & Van Roy, 1997; Lagoudakis & Parr, 2003).
They require that the desired learned function, e.g. the Q-function, be a linear combination of the state representation.
This is, of course, a hard constraint as the representation is hand-designed, and the designer often does not know what the optimal value function will look like.
Furthermore, hand-designed representation is environment-specific and requires re-designing for every new environment.
The DQN algorithm (Mnih et al., 2015) has changed RL.
Using Deep Neural Networks (DNN) as function approximators, the DQN algorithm enabled the learning of policies directly from raw highdimensional data and led to unprecedented achievements over a wide variety of domains (Mnih et al., 2015) .
Over the years, many improvements to DQN were presented, suggesting more fitting network architectures (Wang et al., 2015) , reducing overestimation (Van Hasselt et al., 2016; Anschel et al., 2017) or improving its data efficiency .
Despite its great success, DQN uses the overly simple ε-greedy strategy for exploration.
This strategy is one of the simplest exploration strategies that currently exist.
The agent takes a random action with probability ε and takes the optimal action according to its current belief with probability 1 − ε.
This strategy is commonly used despite its simplicity and proven inefficiency (Osband et al., 2016).
The main shortcoming of ε-greedy and similar strategies derives from the fact that they do not use observed data to improve exploration.
To explore, it takes a completely random action, regardless of the experience obtained by the agent.
Thompson Sampling (TS) (Thompson, 1933) , is one of the oldest heuristics to address the 'exploration/exploitation' trade-off in sequential decision-making problems.
Its variations were proposed in RL (Wyatt, 1998; Strens, 2000) and various bandits settings (Chapelle & Li, 2011; Scott, 2010) .
For Multi-Armed Bandit (MAB) problems, TS is very effective both in theory (Agrawal & Goyal, 2012; and practice (Chapelle & Li, 2011) .
Intuitively, TS randomly takes actions according to the probability it believes to be optimal.
In practice, a prior distribution is assumed over the model's parameters p(w), and a posterior distribution p(w|D) is computed using the Bayes theorem, where D is the observed data.
TS acts by sampling models from the posterior distribution, and plays the best action according to these samples.
Randomized Least Squares Value Iteration (Osband et al., 2016) is an RL algorithm which uses linear function approximation and is inspired by Thompson Sampling.
It explores by sampling plausible Q-functions from uncertainty sets and selecting the action that optimizes the sampled models.
This algorithm was proven to be efficient in tabular settings, with a bound on the expected regret that match the worst-case lower bound up to logarithmic factors.
More importantly, it demonstrates efficiency even when generalization is required.
Alas, as it assumes a linearly parametrized value function on a hand-designed state representation, the success of this algorithm crucially depends on the quality of the given state representation.
In this paper, we present a new DRL algorithm that combines the exploration mechanism of RLSVI with the representation learning mechanism of DQN; we call it the Deep Randomized Least Squares Value Iteration (DRLSVI) algorithm.
We use a standard DQN to learn the state representation and explore by using the last layer's activations of the DQN as the state representation for RLSVI.
To compensate for the constantly changing representation and the finite memory of DQN, we use a likelihood matching mechanism, which allows the transfer of information held by an old representation regarding past experience.
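One way to picture the resulting exploration step, under the simplifying assumption of a Gaussian prior and noise model (the exact likelihood matching update is not reproduced here), is Bayesian linear regression over the fixed last-layer features followed by posterior sampling:

import numpy as np

def sample_linear_q_weights(phi, targets, noise_var=1.0, prior_var=10.0, rng=None):
    # phi: (n, d) last-layer activations for observed state-action pairs.
    # targets: (n,) bootstrapped Q-value targets.
    # Returns one sample from the Gaussian posterior over linear Q-weights,
    # which can then be followed greedily for a while (Thompson-style exploration).
    rng = rng or np.random.default_rng()
    d = phi.shape[1]
    precision = phi.T @ phi / noise_var + np.eye(d) / prior_var
    cov = np.linalg.inv(precision)
    mean = cov @ phi.T @ targets / noise_var
    return rng.multivariate_normal(mean, cov)

rng = np.random.default_rng(0)
phi = rng.normal(size=(500, 16))
targets = phi @ np.arange(16) + 0.1 * rng.normal(size=500)
print(sample_linear_q_weights(phi, targets, rng=rng)[:4])  # roughly [0, 1, 2, 3]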
We evaluate our method on a toy problem, the Augmented Chain environment, for a qualitative evaluation on a small MDP with a known optimal value function.
Then, we compare our algorithm to the DQN and Rainbow algorithms on several Atari benchmarks.
We show that it outperforms DQN both in learning speed and performance.
A Deep Learning adaptation of RLSVI was presented, which learns the state representation directly from the data.
We demonstrated the different properties of our method in experiments and showed its promise.
We hope to further reduce the complexity and running time of our algorithm in future work.
|
A Deep Learning adaptation of Randomized Least Squares Value Iteration
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:436
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The complexity of large-scale neural networks can lead to poor understanding of their internal details.
We show that this opaqueness provides an opportunity for adversaries to embed unintended functionalities into the network in the form of Trojan horse attacks.
Our novel framework hides the existence of a malicious network within a benign transport network.
Our attack is flexible, easy to execute, and difficult to detect.
We prove theoretically that the malicious network's detection is computationally infeasible and demonstrate empirically that the transport network does not compromise its disguise.
Our attack exposes an important, previously unknown loophole that unveils a new direction in machine learning security.
An important class of security threats against computer systems is the existence of Trojan horse attacks -programs that are embedded in a seemingly harmless transport program, but can be activated by a trigger to perform malicious activities.
This threat is common in software, where the malicious program may steal user information or modify the underlying system's behavior (Felt et al., 2011) .
Similar attacks have also been studied in depth for hardware circuits (Chakraborty et al., 2009) .
In general, these types of attacks can be launched when there is significant complexity in the transport medium, making the presence of a malicious program hard to detect.
Due to the complex architecture of modern neural networks, both the model and their behavior are arguably obscure to humans (Ribeiro et al., 2016; Selvaraju et al., 2017; Koh & Liang, 2017) .
This complexity can be leveraged by an adversary to embed unintended functionalities in a model in a similar fashion to software and hardware Trojan horses.
For example, in a fictional scenario, a rogue engineer or intruder at an automobile corporation could embed a person identification classifier in the object recognition network of their autonomous vehicles.
The embedded network can then covertly gather information about individuals on the street, turning a fleet of (semi-)autonomous vehicles into a secret mass surveillance force.
Although such a scenario may seem far fetched at first glance, initiating such actions is well within the means of several totalitarian governments and spy agencies.
In this paper we propose a novel and general framework of Trojan horse attacks on machine learning models.
Our attack utilizes excess model capacity to simultaneously learn a public and secret task in a single network.
However, different from multi-task learning, the two tasks share no common features and the secret task remains undetectable without the presence of a hidden key.
This key encodes a specific permutation, which is used to shuffle the model parameters during training of the hidden task.
The gradient updates for the concealed model act similarly to benign additive noise with respect to the gradients of the public model (Abadi et al., 2016), which behaves indistinguishably from a standard classifier on the public task.
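The core idea can be sketched as deriving a fixed permutation of the flattened weights from a secret key and reading the same parameter vector through it; this is a simplified illustration, not the authors' full training procedure:

import hashlib
import numpy as np

def keyed_permutation(n_params, key):
    # Deterministically derive a permutation of parameter indices from a secret key.
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    return np.random.default_rng(seed).permutation(n_params)

# One flat parameter vector, two "views": the public network uses it as-is,
# while the hidden network is trained and evaluated on the permuted view.
params = np.arange(10, dtype=float)
perm = keyed_permutation(params.size, key="secret-key")
hidden_view = params[perm]
print(hidden_view)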
We demonstrate empirically and prove theoretically that the identity and presence of a secret task cannot be detected without knowledge of the secret permutation.
In particular, we prove that the decision problem to determine if the model admits a permutation that triggers a secret functionality is NP-complete.
We experimentally validate our method on a standard ResNet50 network (He et al., 2016) and show that, without any increase in parameters, the model can achieve the same performance on the intended and on the secret tasks as if it was trained exclusively on only one of them.
Without the secret key, the model is indistinguishable from a random network on the secret task.
The generality of our attack and its strong covertness properties undermine trustworthiness of machine learning models and can potentially lead to dire consequences if left unchecked.
We introduced TrojanNet and formulated a potentially menacing attack scenario.
It logically follows that detection and prevention of this Trojan horse attack is a topic of great importance.
However, this may be a daunting task, as we show theoretically that the detection problem can be formulated as an NP-complete decision problem, and is therefore computationally infeasible in its general form.
While strategies such as Markov Chain Monte Carlo have been used in similar contexts to efficiently reduce the search space (Diaconis, 2009) , the number of candidate permutations may be too large in our case.
In fact, the number of permutations for a single convolutional layer of ResNet50 can be upwards of (64 × 64 × 3 × 3)! ≈ 1.21 × 10^152336.
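The quoted magnitude can be sanity-checked with a few lines of Python via the log-gamma function:

import math

n = 64 * 64 * 3 * 3                                 # 36864 weights in one such layer
log10_fact = math.lgamma(n + 1) / math.log(10)      # log10(n!)
exp = int(log10_fact)
print(f"{n}! ~ {10 ** (log10_fact - exp):.2f}e{exp}")  # about 1.21e152336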
While our paper focuses on malicious uses of the TrojanNet framework, it can potentially be utilized for improving the security of neural networks as well.
Our framework has striking resemblance to symmetric key encryption in cryptography (Katz & Lindell, 2014) .
This enables the sharing of neural networks across an insecure, monitored communication channel in a similar fashion as steganography (Petitcolas et al., 1999) -the hiding of structured signals in files such as images, audio or text.
We hope to explore benevolent uses of TrojanNet in future work.
|
Parameters of a trained neural network can be permuted to produce a completely separate model for a different task, enabling the embedding of Trojan horse networks inside another network.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:437
|