input | output | metadata | _instance_id
---|---|---|---|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
People ask questions that are far richer, more informative, and more creative than current AI systems.
We propose a neural program generation framework for modeling human question asking, which represents questions as formal programs and generates programs with an encoder-decoder based deep neural network.
From extensive experiments using an information-search game, we show that our method can ask optimal questions in synthetic settings, and predict which questions humans are likely to ask in unconstrained settings.
We also propose a novel grammar-based question generation framework trained with reinforcement learning, which is able to generate creative questions without supervised data.
People can ask rich, creative questions to learn efficiently about their environment.
Question asking is central to human learning yet it is a tremendous challenge for computational models.
There is always an infinite set of possible questions that one can ask, leading to challenges both in representing the space of questions and in searching for the right question to ask.
Machine learning has been used to address aspects of this challenge.
Traditional methods have used heuristic rules designed by humans (Heilman & Smith, 2010; Chali & Hasan, 2015) , which are usually restricted to a specific domain.
Recently, neural network approaches have also been proposed, including retrieval methods which select the best question from past experience (Mostafazadeh et al., 2016 ) and encoder-decoder frameworks which map visual or linguistic inputs to questions (Serban et al., 2016; Mostafazadeh et al., 2016; Yuan et al., 2017; Yao et al., 2018) .
While effective in some settings, these approaches do not consider settings where the questions are asked about partially unobservable states.
Besides, these methods are heavily data-driven, limiting the diversity of generated questions and requiring large training sets for different goals and contexts.
There is still a large gap between how people and machines ask questions.
Recent work has aimed to narrow this gap by taking inspiration from cognitive science.
For instance, Lee et al. (2018) incorporates aspects of "theory of mind" (Premack & Woodruff, 1978) in question asking by simulating potential answers to the questions, but the approach relies on imperfect agents for natural language understanding which may lead to error propagation.
Related to our approach, Rothe et al. (2017) proposed a powerful question-asking framework by modeling questions as symbolic programs, but their algorithm relies on hand-designed program features and requires expensive calculations to ask questions.
We use "neural program generation" to bridge symbolic program generation and deep neural networks, bringing together some of the best qualities of both approaches.
Symbolic programs provide a compositional "language of thought" (Fodor, 1975) for creatively synthesizing which questions to ask, allowing the model to construct new ideas based on familiar building blocks.
Compared to natural language, programs are precise in their semantics, have clearer internal structure, and require a much smaller vocabulary, making them an attractive representation for question answering systems as well (Johnson et al., 2017; Yi et al., 2018; Mao et al., 2019) .
However, there has been much less work using program synthesis for question asking, which requires searching through infinitely many questions (where many questions may be informative) rather than producing a single correct answer to a question.
Deep neural networks allow for rapid question-synthesis using encoder-decoder modeling, eliminating the need for the expensive symbolic search and feature evaluations in Rothe et al. (2017) .
Together, the questions can be synthesized quickly and evaluated formally for quality (e.g. the expected information gain), which as we show can be used to train question asking systems using reinforcement learning.
Figure 1: The Battleship task. Blue, red, and purple tiles are ships, dark gray tiles are water, and light gray tiles are hidden. The agent can see a partly revealed board, and should ask a question to seek information about the hidden board. Example questions and translated programs are shown on the right, e.g. "How long is the red ship?" (size Red); "Is purple ship horizontal?" (== (orient Purple) H); "Do all three ships have the same size?" (=== (map (λ x (size x)) (set AllShips))). We recommend viewing the figures in color.
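As a concrete illustration of how question programs like those in Figure 1 can be evaluated against a board, below is a minimal Python sketch of an interpreter for two of the example programs; the board encoding and the handful of primitives are illustrative assumptions, not the DSL implementation used in the paper.

```python
# Minimal sketch of evaluating Battleship-style question programs against a
# ground-truth board. The board encoding and the primitives below are
# illustrative assumptions, not the paper's actual DSL implementation.

# A toy ground-truth board: ship color -> list of occupied (row, col) tiles.
BOARD = {
    "Red":    [(0, 0), (0, 1), (0, 2)],   # horizontal, size 3
    "Purple": [(2, 4), (3, 4)],           # vertical, size 2
}

def size(color):
    """Number of tiles occupied by the ship of the given color."""
    return len(BOARD[color])

def orient(color):
    """'H' if all of the ship's tiles share a row, 'V' otherwise."""
    rows = {r for r, _ in BOARD[color]}
    return "H" if len(rows) == 1 else "V"

def evaluate(program):
    """Evaluate a parsed program given as nested tuples and atoms."""
    if isinstance(program, tuple):
        op, *args = program
        vals = [evaluate(a) for a in args]
        if op == "size":
            return size(vals[0])
        if op == "orient":
            return orient(vals[0])
        if op == "==":
            return vals[0] == vals[1]
        raise ValueError(f"unknown operator: {op}")
    return program  # atoms (colors, 'H', numbers) evaluate to themselves

# (size Red) -> 3
print(evaluate(("size", "Red")))
# (== (orient Purple) H) -> False, since the purple ship is vertical here
print(evaluate(("==", ("orient", "Purple"), "H")))
```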
In this paper, we develop a neural program generation model for asking questions in an information-search game similar to "Battleship" used in previous work (Gureckis & Markant, 2009; Rothe et al., 2017).
The model uses a convolutional encoder to represent the game state, and a Transformer decoder (Vaswani et al., 2017) for generating questions.
Building on the work of Rothe et al. (2017) , the model uses a grammar-enhanced question asking framework, such that questions as programs are formed through derivation using a context free grammar.
Importantly, we show that the model can be trained from human demonstrations of good questions using supervised learning, along with a data augmentation procedure that leverages previous work to produce additional human-like questions for training.
Our model can also be trained without such demonstrations using reinforcement learning.
We evaluate the model on several aspects of human question asking, including reasoning about optimal questions in synthetic scenarios, density estimation based on free-form question asking, and creative generation of genuinely new questions.
To summarize, our paper makes three main contributions:
1) We propose a neural network for modeling human question-asking behavior,
2) We propose a novel reinforcement learning framework for generating creative human-like questions by exploiting the power of programs, and
3) We evaluate different properties of our methods extensively through three different experiments.
We train our model in a fully supervised fashion.
Accuracy for the counting and missing tile tasks is shown in Figure 3 .
The full neural program generation model shows strong reasoning abilities, achieving high accuracy on both the counting and missing tile tasks.
We also perform ablation analysis of the encoder filters of the model, and provide the results in Appendix D.
The results for the compositionality task are summarized in Table 1 .
When no training data regarding the held out question type is provided, the model cannot generalize to situations systematically different from training data, exactly as pointed out in previous work on the compositional skills of encoder-decoder models (Lake & Baroni, 2018) .
However, as the amount of additional training data increases, the model quickly incorporates the new question type while maintaining high accuracy on the familiar question tasks.
On the last row of Table 1 , we compare our model with another version where the decoder is replaced by two linear transformation operations which directly classify the ship type and location (details in Appendix B.1).
This model has 33.0% transfer accuracy on compositional scenarios never seen during training.
This suggests that the model has the potential to generalize to unseen scenarios if the task can be decomposed into subtasks and the results combined.
We evaluate the log-likelihood of reference questions generated by our full model as well as some lesioned variants of the full model, including a model without pretraining, a model with the Transformer decoder replaced by an LSTM decoder, a model with the convolutional encoder replaced by a simple MLP encoder, and a model that only has a decoder (unconditional language model).
Though the method from Rothe et al. (2017) also works on this task, here we cannot compare with their method for two reasons.
One is that our dataset is constructed using their method, so the likelihood of their method should be an upper bound in our evaluation setting.
Additionally, they can only approximate the log-likelihood due to an intractable normalizing constant, and thus it is difficult to compare directly with our methods.
Two different evaluation sets are used, one is sampled from the same process on new boards, the other is a small set of questions collected from human annotators.
In order to calculate the log-likelihood of human questions, we use translated versions of these questions that were used in previous work (Rothe et al., 2017), and filter out some human questions that score poorly according to the generative model used for training the neural network (Appendix B.2).
A summary of the results is shown in Table 2a .
The full model performs best on both datasets, suggesting that pretraining, the Transformer decoder, and the convolutional encoder are all important components of the approach.
However, we find that the model without an encoder performs reasonably well too, even out-performing the full model with a LSTM-decoder on the human-produced questions.
This suggests that while contextual information from the board leads to improvements, it is not the most important factor for predicting human questions.
To further investigate the role of contextual information and whether or not the model can utilize it effectively, we conduct another analysis.
Intuitively, if there is little uncertainty about the locations of the ships, observing the board is critical since there are fewer good questions to ask.
To examine this factor, we divide the scenarios based on the entropy of the hypothesis space of possible ship locations into a low entropy set (bottom 30%), medium entropy set (40% in the middle), and high entropy set (top 30%).
We evaluate different models on the split sets of sampled data and report the results in Table 2b .
When the entropy is high, it is easier to ask a generally good question like "how long is the red ship" without information of the board, so the importance of the encoder is reduced.
If entropy is low, the models with access to the board have substantially higher log-likelihood than the model without an encoder.
Also, the first experiment (section 5.1) would be impossible without an encoder.
Together, this implies that our model can capture important context-sensitive characteristics of how people ask questions.
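For concreteness, here is a minimal Python sketch of the entropy-based split used in this analysis, assuming a posterior over candidate ship configurations is available for each board; the posterior computation itself is not shown, and the 30/40/30 split follows the description above.

```python
import numpy as np

def hypothesis_entropy(posterior):
    """Shannon entropy (in nats) of a posterior over possible ship configurations."""
    p = np.asarray(posterior, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def split_by_entropy(posteriors):
    """Split board indices into low (bottom 30%), medium (middle 40%),
    and high (top 30%) entropy sets, as in the analysis described above."""
    ents = np.array([hypothesis_entropy(p) for p in posteriors])
    lo_thr, hi_thr = np.quantile(ents, [0.3, 0.7])
    low = np.where(ents <= lo_thr)[0]
    high = np.where(ents > hi_thr)[0]
    medium = np.where((ents > lo_thr) & (ents <= hi_thr))[0]
    return low, medium, high

# Toy usage with random posteriors over 100 hypotheses for 10 boards.
rng = np.random.default_rng(0)
posteriors = [rng.dirichlet(np.ones(100) * a) for a in rng.uniform(0.05, 5.0, size=10)]
low, medium, high = split_by_entropy(posteriors)
print(len(low), len(medium), len(high))
```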
The models are evaluated on 2000 randomly sampled boards, and the results are shown in Table 3 .
Note that any ungrammatical questions are excluded when we calculate the number of unique questions.
First, when the text-based model is evaluated on new contexts, 96.3% of the questions it generates are already included in the training data.
We also find that its average EIG and ratio of EIG>0 are worse than those of the supervised model trained on programs.
Some of these deficiencies are due to the very limited text-based training data, but using programs instead can help overcome these limitations.
With the program-based framework, we can sample new boards and questions to create a much larger dataset with executable program representations.
This self-supervised training helps to boost performance, especially when combined with grammar-enhanced RL.
From the table, the grammar-enhanced RL model is able to generate informative and creative questions.
It can be trained from scratch without examples of human questions, and produces many novel questions with high EIG.
In contrast, the supervised model rarely produces new questions beyond the training set.
The sequence-level RL model is also comparatively weak at generating novel questions, perhaps because it is also pre-trained on human questions.
It also more frequently generates ungrammatical questions.
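Since EIG is the quality measure used throughout these comparisons, here is a minimal Python sketch of how the expected information gain of a question can be computed from a prior over hypothesis boards; the answer(hypothesis, question) interface and the toy hypotheses below are illustrative assumptions, not the paper's implementation.

```python
import math
from collections import defaultdict

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def expected_information_gain(question, hypotheses, prior, answer):
    """EIG of a question: entropy of the prior over hypothesis boards minus the
    expected entropy of the posterior after observing the question's answer.
    answer(h, question) returns the answer the question would receive on
    hypothesis board h (an illustrative interface)."""
    mass = defaultdict(float)     # answer -> total prior mass
    members = defaultdict(list)   # answer -> prior masses of consistent hypotheses
    for h, p in zip(hypotheses, prior):
        a = answer(h, question)
        mass[a] += p
        members[a].append(p)
    expected_posterior_entropy = sum(
        m * entropy([p / m for p in members[a]]) for a, m in mass.items())
    return entropy(prior) - expected_posterior_entropy

# Toy usage: three equally likely hypothesis boards, question = "(size Red)".
hyps = [{"Red": 2}, {"Red": 3}, {"Red": 3}]
prior = [1 / 3, 1 / 3, 1 / 3]
ans = lambda h, q: h["Red"]   # the toy question always asks for the red ship's size
print(expected_information_gain("(size Red)", hyps, prior, ans))   # > 0: informative
```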
We also provide examples in Figure 4 to show the diversity of questions generated by the grammar enhanced model, and more in the supplementary materials.
Figure 4a shows novel questions the model produces, which include clever questions such as "Where is the bottom right of all the purple and blue tiles?" or "What is the size of the blue ship minus the purple ship?", while it also sometimes generates meaningless questions such as "Is the blue ship shorter than itself?"
Additional examples of generated questions are provided in Appendix B.
Example generated questions and their programs: "Is any ship two tiles long?" (> (++ (map (lambda x (== (size x) 2)) (set AllShips))) 0); "Are there any ships in row 1?" (> (++ (map (lambda y (and (== (rowL y) 1) (not (== (color y) Water)))) (set AllTiles))) 0); "Is part of a ship on tile 4-6?" (not (== (color 4-6) Water)); "What is the size of the blue ship?" (setSize (coloredTiles Blue)); "What is the size of the purple ship?" (size Purple); "Which column is the first part of the blue ship?" (colL (topleft (coloredTiles Blue))); "What is the orientation of the blue ship?"
With the grammar enhanced framework, we can also guide the model to ask different types of questions, consistent with the goal-directed nature and flexibility of human question asking.
The model can be queried for certain types of questions by providing different start conditions to the model.
Instead of starting derivation from the start symbol "A", we can start derivation from an intermediate state such as "B" for Boolean questions or a more complicated "(and B B)" for a composition of two Boolean questions.
In Figure 4b , we show examples where the model is asked to generate four specific types of questions: true/false questions, number questions, location-related questions, and compositional true/false questions.
We see that the model can flexibly adapt to new constraints and generate meaningful questions.
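A minimal Python sketch of constrained derivation from different start symbols, as described above; the toy grammar and the uniform random choice of productions (standing in for the trained decoder's distribution) are illustrative assumptions.

```python
import random

# A toy fragment of a question grammar. "A" is the start symbol; "B" derives
# Boolean questions and "N" numeric questions. This grammar is an illustrative
# stand-in, not the grammar used in the paper.
GRAMMAR = {
    "A": [["B"], ["N"]],
    "B": [["(", "and", "B", "B", ")"], ["(", "not", "B", ")"], ["(", "==", "N", "N", ")"]],
    "N": [["(", "size", "SHIP", ")"], ["1"], ["2"], ["3"]],
    "SHIP": [["Red"], ["Blue"], ["Purple"]],
}

def derive(symbols, max_depth=8):
    """Expand a list of grammar symbols left to right. A trained decoder would
    score the productions; here we simply sample them uniformly."""
    out = []
    for s in symbols:
        if s in GRAMMAR:
            # Past the depth cap, fall back to the last (terminating) production.
            prod = GRAMMAR[s][-1] if max_depth <= 0 else random.choice(GRAMMAR[s])
            out.extend(derive(prod, max_depth - 1))
        else:
            out.append(s)
    return out

random.seed(0)
print(" ".join(derive(["A"])))                           # unconstrained question
print(" ".join(derive(["B"])))                           # force a Boolean question
print(" ".join(derive(["(", "and", "B", "B", ")"])))     # composition of two Booleans
```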
In Figure 4c, we compare the model-generated questions with human questions, each randomly sampled from the model outputs and the human dataset.
These examples again demonstrate that our model is able to generate clever and human-like questions.
However, we also find that people sometimes generate questions with quantifiers such as "any" and "all", which are operationalized in program form with lambda functions.
These questions are complicated in representation and not favored by our model, showing a current limitation in our model's capacity.
We introduce a neural program generation framework for the question-asking task under partially unobservable settings, which is able to generate creative, human-like questions either from human question demonstrations by supervised learning or without demonstrations by grammar-enhanced reinforcement learning.
Programs provide models with a "machine language of thought" for compositional thinking, and neural networks provide an efficient means of question generation.
We demonstrate the effectiveness of our method in extensive experiments covering a range of human question asking abilities.
The current model has important limitations.
It cannot generalize to systematically different scenarios, and it sometimes generates meaningless questions.
We plan to further explore the model's compositional abilities in future work.
Another promising direction is to model question asking and question answering jointly within one framework, which could guide the model to a richer sense of the question semantics.
In addition, allowing the agent to iteratively ask questions and try to win the game is another interesting future direction.
We would also like to use our framework in dialog systems and open-ended question asking scenarios, allowing such systems to synthesize informative and creative questions.
|
We introduce a model of human question asking that combines neural networks and symbolic programs, which can learn to generate good questions with or without supervised examples.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:528
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The classification of images taken in special imaging environments except air is the first challenge in extending the applications of deep learning.
We report on UW-Net (Underwater Network), a new convolutional neural network (CNN) based model for underwater image classification.
In this model, we simulate the visual correlation of background attention with image understanding for special environments, such as fog and underwater, by constructing an inception-attention (I-A) module.
The experimental results demonstrate that the proposed UW-Net achieves an accuracy of 99.3% on underwater image classification, which is significantly better than other image classification networks, such as AlexNet, InceptionV3, ResNet and Se-ResNet.
Moreover, we demonstrate the proposed IA module can be used to boost the performance of the existing object recognition networks.
By substituting the inception module with the I-A module, the Inception-ResnetV2 network achieves a 10.7% top1 error rate and a 0% top5 error rate on the subset of ILSVRC-2012, which further illustrates the function of the background attention in the image classifications.
Underwater images and videos contain a lot of valuable information for many underwater scientific researches (Klausner & Azimi-Sadjadi, 2019; Peng et al., 2018) .
However, the image analysis systems and classification algorithms designed for natural images (Redmon & Farhadi, 2018; He et al., 2017) cannot be directly applied to underwater images due to the complex distortions present in underwater images (e.g., low contrast, blurring, non-uniform brightness, non-uniform color casting, and noise), and there is, to the best of our knowledge, no model for underwater image classification.
Besides the inevitable distortions exhibited in underwater images, there are three other key problems for the classification of underwater images: (1) the backgrounds of underwater images taken in different environments vary widely; (2) salient objects such as ruins, fish, and divers exist not only in underwater environments but also in air.
The features extracted from the salient objects therefore cannot be relied on primarily in the classification of underwater images; and (3) since the classification of underwater images is only a binary classification task, the structure of the designed network should be simple to avoid over-fitting.
Increasing the depth and width of a CNN can usually improve the performance of the model, but is more prone to cause over-fitting when the training dataset is limited, and needs more computational resource (LeCun et al., 2015; Srivastava et al., 2014) .
To mitigate this issue, Szegedy et al. (2015) proposed the inception module, which simultaneously performs multi-scale convolution and pooling at one level of a CNN to output multi-scale features.
In addition, the attention mechanism (Chikkerur et al., 2010; Borji & Itti, 2012) has been proposed and applied in recent deep models, exploiting the fact that human vision pays attention to different parts of the image depending on the recognition task (Zhu et al., 2018; Ba et al., 2014).
Although these strategies play an important role in advancing the field of image classification, we find that large-scale features such as the background area play a more important role in the visual attention mechanism when people interpret underwater images, unlike the attention mechanism applied in natural scene image classification (Xiao et al., 2015; Fu et al., 2017).
In this paper, we propose an underwater image classification network, called UW-Net.
The overview network structure is shown in Fig. 1 .
Unlike other models, the UW-Net pays more attention to the background features of images by constructing inception-attention (I-A) modules and thus achieves better performance.
Figure 1: The structure of the UW-Net. The bottom part is the output of the eighth layer in the I-A module. The red area represents a higher response of features for the underwater image classification. As shown, our I-A module attends more to the background regions of underwater images.
The contributions of this paper are as follows:
(i) to the best of our knowledge, it is the first CNN-based model for underwater image classification;
(ii) an inception-attention module is proposed, which joins the multi-dimensional inception module with the attention module to realize multiple weighting of the outputs at various feature scales;
(iii) this work is a first attempt to simulate the visual correlation between understanding images and background areas through I-A modules.
The rest of the paper is organized as follows: Section 2 introduces the related work.
The proposed UW-Net is described in Section 3.
Section 4 illustrates the experimental results and analysis, and we summarize this paper in Section 5.
A new underwater image classification network UW-Net is proposed in this work, wherein an inception-attention module is constructed.
In this model, we simulate the visual correlation between understanding images and background areas through I-A modules, which join the multi-dimensional inception module with the attention module to realize multiple weighting of the outputs at various feature scales.
UW-Net achieves 100% accuracy on the training set and 99.3% accuracy on the testing set, benefiting from the I-A module's refinement of multi-scale features.
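As a rough illustration of how an inception-attention block of this kind could be composed, here is a PyTorch sketch with multi-scale inception branches followed by SE-style channel attention over the concatenated features; the branch widths, kernel sizes, and reduction ratio are illustrative assumptions rather than the configuration used in UW-Net.

```python
import torch
import torch.nn as nn

class InceptionAttention(nn.Module):
    """Inception-style multi-scale branches followed by channel attention that
    re-weights the concatenated multi-scale features (an illustrative sketch)."""
    def __init__(self, in_ch, branch_ch=32, reduction=8):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2)
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, branch_ch, kernel_size=1))
        out_ch = 4 * branch_ch
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // reduction, out_ch, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        feats = torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)
        weights = self.attn(feats)   # per-channel weights in [0, 1]
        return feats * weights       # weighted multi-scale features

# Toy usage: a batch of 4 RGB images of size 64x64.
x = torch.randn(4, 3, 64, 64)
print(InceptionAttention(in_ch=3)(x).shape)   # torch.Size([4, 128, 64, 64])
```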
In the future, we will try to improve the performance of other underwater image visual analysis models by introducing the proposed I-A module.
|
A visual understanding mechanism for special environment
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:529
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Reinforcement learning provides a powerful and general framework for decision making and control, but its application in practice is often hindered by the need for extensive feature and reward engineering.
Deep reinforcement learning methods can remove the need for explicit engineering of policy or value features, but still require a manually specified reward function.
Inverse reinforcement learning holds the promise of automatic reward acquisition, but has proven exceptionally difficult to apply to large, high-dimensional problems with unknown dynamics.
In this work, we propose AIRL, a practical and scalable inverse reinforcement learning algorithm based on an adversarial reward learning formulation that is competitive with direct imitation learning algorithms.
Additionally, we show that AIRL is able to recover portable reward functions that are robust to changes in dynamics, enabling us to learn policies even under significant variation in the environment seen during training.
While reinforcement learning (RL) provides a powerful framework for automating decision making and control, significant engineering of elements such as features and reward functions has typically been required for good practical performance.
In recent years, deep reinforcement learning has alleviated the need for feature engineering for policies and value functions, and has shown promising results on a range of complex tasks, from vision-based robotic control BID12 to video games such as Atari BID13 and Minecraft BID16 .
However, reward engineering remains a significant barrier to applying reinforcement learning in practice.
In some domains, this may be difficult to specify (for example, encouraging "socially acceptable" behavior), and in others, a naïvely specified reward function can produce unintended behavior BID2 .
Moreover, deep RL algorithms are often sensitive to factors such as reward sparsity and magnitude, making well-performing reward functions particularly difficult to engineer. Inverse reinforcement learning (IRL) (BID19; BID14) refers to the problem of inferring an expert's reward function from demonstrations, which is a potential route to solving the problem of reward engineering.
However, inverse reinforcement learning methods have generally been less efficient than direct methods for learning from demonstration such as imitation learning BID10 , and methods using powerful function approximators such as neural networks have required tricks such as domain-specific regularization and operate inefficiently over whole trajectories BID6 .
There are many scenarios where IRL may be preferred over direct imitation learning, such as re-optimizing a reward in novel environments BID7 or to infer an agent's intentions, but IRL methods have not been shown to scale to the same complexity of tasks as direct imitation learning.
However, adversarial IRL methods (BID6a) hold promise for tackling difficult tasks due to their ability to adapt training samples to improve learning efficiency. Part of the challenge is that IRL is an ill-defined problem, since there are many optimal policies that can explain a set of demonstrations, and many rewards that can explain an optimal policy (BID15).
The maximum entropy (MaxEnt) IRL framework introduced by BID24 handles the former ambiguity, but the latter ambiguity means that IRL algorithms have difficulty distinguishing the true reward functions from those shaped by the environment dynamics.
While shaped rewards can increase learning speed in the original training environment, when the reward is deployed at test-time on environments with varying dynamics, it may no longer produce optimal behavior, as we discuss in Sec. 5.
To address this issue, we discuss how to modify IRL algorithms to learn rewards that are invariant to changing dynamics, which we refer to as disentangled rewards. In this paper, we propose adversarial inverse reinforcement learning (AIRL), an inverse reinforcement learning algorithm based on adversarial learning.
Our algorithm provides for simultaneous learning of the reward function and value function, which enables us to both make use of the efficient adversarial formulation and recover a generalizable and portable reward function, in contrast to prior works that either do not recover a reward function (BID10) or operate at the level of entire trajectories, making them difficult to apply to more complex problem settings (BID6a).
Our experimental evaluation demonstrates that AIRL outperforms prior IRL methods BID6 on continuous, high-dimensional tasks with unknown dynamics by a wide margin.
When compared to GAIL BID10 , which does not attempt to directly recover rewards, our method achieves comparable results on tasks that do not require transfer.
However, on tasks where there is considerable variability in the environment from the demonstration setting, GAIL and other IRL methods fail to generalize.
In these settings, our approach, which can effectively disentangle the goals of the expert from the dynamics of the environment, achieves superior results.
|
We propose an adversarial inverse reinforcement learning algorithm capable of learning reward functions which can transfer to new, unseen environments.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:53
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep networks have achieved impressive results across a variety of important tasks.
However, a known weakness is a failure to perform well when evaluated on data which differ from the training distribution, even if these differences are very small, as is the case with adversarial examples.
We propose \emph{Fortified Networks}, a simple transformation of existing networks, which “fortifies” the hidden layers in a deep network by identifying when the hidden states are off of the data manifold, and maps these hidden states back to parts of the data manifold where the network performs well.
Our principal contribution is to show that fortifying these hidden states improves the robustness of deep networks and our experiments (i) demonstrate improved robustness to standard adversarial attacks in both black-box and white-box threat models; (ii) suggest that our improvements are not primarily due to the problem of deceptively good results due to degraded quality in the gradient signal (the gradient masking problem) and (iii) show the advantage of doing this fortification in the hidden layers instead of the input space.
We demonstrate improvements in adversarial robustness on three datasets (MNIST, Fashion MNIST, CIFAR10), across several attack parameters, both white-box and black-box settings, and the most widely studied attacks (FGSM, PGD, Carlini-Wagner).
We show that these improvements are achieved across a wide variety of hyperparameters.
The success of deep neural networks across a variety of tasks has also driven applications in domains where reliability and security are critical, including self-driving cars BID6 , health care, face recognition BID25 , and the detection of malware BID17 .
Security concerns arise when an agent using such a system could benefit from the system performing poorly.
Reliability concerns come about when the distribution of input data seen during training can differ from the distribution on which the model is evaluated. Adversarial examples BID11 result from attacks on neural network models, applying small perturbations to the inputs that change the predicted class.
Such perturbations can be small enough to be unnoticeable to the naked eye.
It has been shown that gradient-based methods allow one to find modifications of the input that often change the predicted class BID26 BID11 .
More recent work demonstrated that it is possible to create modifications such that, even when captured through a camera, they change the predicted class with high probability BID7. Some of the most prominent classes of defenses against adversarial examples include feature squeezing BID29, adapted encoding of the input (Jacob BID14), and distillation-related approaches BID20. Existing defenses provide some robustness but most are not easy to deploy. In addition, many have been shown to provide only the illusion of defense by lowering the quality of the gradient signal, without actually providing improved robustness BID1. Still others
require training a generative model directly in the visible space, which is still difficult today even on relatively simple datasets. Our work differs from the approaches using generative models in the input space in that we instead employ this robustification on the distribution of the learned hidden representations, which makes the identification of off-manifold examples easier.
Figure: The plot on the right shows direct experimental evidence for this hypothesis: we added fortified layers with different capacities to MLPs trained on MNIST, and display the total reconstruction error for adversarial examples divided by the total reconstruction error for clean examples. A high value indicates success at detecting adversarial examples. Our results support the central motivation for fortified networks: off-manifold points can be detected much more easily in the hidden space (as seen by the relatively constant ratio for the autoencoder in hidden space) and are much harder to detect in the input space (as seen by this ratio rapidly falling to zero as the input-space autoencoder's capacity is reduced).
We do this by training denoising autoencoders on top of the hidden layers of the original network. We call this method Fortified Networks. We demonstrate that Fortified Networks (i) can be generically added into an existing network; (ii) robustify the network against adversarial attacks; and (iii) provide a reliable signal of the existence of input data that do not lie on the manifold on which the network was trained. In the sections that follow, we discuss the intuition behind the fortification of hidden layers and lay out some of the method's salient properties. Furthermore, we evaluate our proposed approach on the MNIST, Fashion-MNIST, and CIFAR10 datasets against white-box and black-box attacks.
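A minimal PyTorch sketch of fortifying a hidden layer with a denoising autoencoder: the DAE reconstructs the hidden activation from a noise-corrupted copy, the reconstruction error is added to the training objective, and the same error can be monitored at test time as an off-manifold signal. The layer sizes and noise level are illustrative assumptions; the reconstruction weight of 0.01 matches the value reported in the appendix.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FortifiedLayer(nn.Module):
    """Denoising autoencoder inserted on top of a hidden layer (a sketch).
    Returns the reconstructed hidden state and the reconstruction error, which
    can be added to the training loss and monitored at test time."""
    def __init__(self, hidden_dim, bottleneck_dim, noise_std=0.1):
        super().__init__()
        self.encode = nn.Linear(hidden_dim, bottleneck_dim)
        self.decode = nn.Linear(bottleneck_dim, hidden_dim)
        self.noise_std = noise_std

    def forward(self, h):
        h_noisy = h + self.noise_std * torch.randn_like(h)
        h_rec = self.decode(F.leaky_relu(self.encode(h_noisy)))
        rec_err = F.mse_loss(h_rec, h)
        return h_rec, rec_err

# Toy usage inside a small MLP classifier on flattened 28x28 inputs.
fc1, fort, fc2 = nn.Linear(784, 256), FortifiedLayer(256, 64), nn.Linear(256, 10)
x, y = torch.randn(8, 784), torch.randint(0, 10, (8,))
h = F.relu(fc1(x))
h, rec_err = fort(h)                                  # fortified hidden state + DAE error
loss = F.cross_entropy(fc2(h), y) + 0.01 * rec_err    # lambda_rec = 0.01, as in the appendix
print(float(loss))
```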
Protecting against adversarial examples could be of paramount importance in mission-critical applications.
We have presented Fortified Networks, a simple method for the robustification of existing deep neural networks.
Our method is practical, as fortifying an existing network entails introducing DAEs between the hidden layers of the network, which can be automated.
Furthermore, the DAE reconstruction error at test time is a reliable signal of distribution shift, which can result in examples unlike those encountered during training.
High error can signify either adversarial attacks or significant domain shift; both are important cases for the analyst or system to be aware of.
Moreover, fortified networks are efficient, since not every layer needs to be fortified to achieve improvements in robustness to adversarial examples.
For example, we have shown improvements on ResNets where only two fortified layers are added, and thus the change to the computational cost is very slight.
Finally, fortified networks are effective, as they improve results on adversarial defense on three datasets (MNIST, Fashion MNIST, and CIFAR10), across a variety of attack parameters (including the most widely used ε values), across three widely studied attacks (FGSM, PGD, Carlini-Wagner L2), and in both the black-box and white-box settings.
A EXPERIMENTAL SETUP
All attacks used in this work were carried out using the Cleverhans (BID21) library.
A.1 WHITE-BOX ATTACKS
Our convolutional models (Conv, in the tables) have 2 strided convolutional layers with 64 and 128 filters followed by an unstrided conv layer with 128 filters.
We use ReLU activations between layers then followed by a single fully connected layer.
The convolutional and fully-connected DAEs have a single bottleneck layer with leaky ReLU activations, with some ablations presented in the table below. With white-box PGD attacks, we used only convolutional DAEs at the first and last conv layers with Gaussian noise of σ = 0.01, whereas with FGSM attacks we used a DAE only at the last fully connected layer.
The weight on the reconstruction error λ rec and adversarial cost λ adv were set to 0.01 in all white-box attack experiments.
We used the Adam optimizer with a learning rate of 0.001 to train all models. The table below lists results of a few ablations with different activation functions in the autoencoder. Our black-box results are based on a fully-connected substitute model (input-200-200-output), which was subsequently used to attack a fortified convolutional network.
The CNN was trained for 50 epochs using adversarial training, and the predictions of the trained CNN were used to train the substitute model.
6 iterations of Jacobian data augmentation were run during training of the substitute, with λ = 0.1.
The test set data holdout for the adversary was fixed to 150 examples.
The learning rate was set to 0.003 and the Adam optimizer was used to train both models.
TAB0 : More attack steps to uncover gradient masking effects.
|
Better adversarial training by learning to map back to the data manifold with autoencoders in the hidden states.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:530
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Neural networks could misclassify inputs that are slightly different from their training data, which indicates a small margin between their decision boundaries and the training dataset.
In this work, we study the binary classification of linearly separable datasets and show that linear classifiers could also have decision boundaries that lie close to their training dataset if cross-entropy loss is used for training.
In particular, we show that if the features of the training dataset lie in a low-dimensional affine subspace and the cross-entropy loss is minimized by using a gradient method, the margin between the training points and the decision boundary could be much smaller than the optimal value.
This result is contrary to the conclusions of recent related works such as (Soudry et al., 2018), and we identify the reason for this contradiction.
In order to improve the margin, we introduce differential training, which is a training paradigm that uses a loss function defined on pairs of points from each class.
We show that the decision boundary of a linear classifier trained with differential training indeed achieves the maximum margin.
The results reveal the use of cross-entropy loss as one of the hidden culprits of adversarial examples and introduces a new direction to make neural networks robust against them.
Training neural networks is challenging and involves making several design choices.
Among these are the architecture of the network, the training loss function, the optimization algorithm used for training, and their hyperparameters, such as the learning rate and the batch size.
Most of these design choices influence the solution obtained by the training procedure and have been studied in detail BID9 BID4 BID5 Wilson et al., 2017; BID17 BID19 .
Nevertheless, one choice has been mostly taken for granted when the network is trained for a classification task: the training loss function. The cross-entropy loss function is almost the sole choice for classification tasks in practice.
Its prevalent use is backed theoretically by its association with the minimization of the Kullback-Leibler divergence between the empirical distribution of a dataset and the confidence of the classifier for that dataset.
Given the particular success of neural networks for classification tasks BID11 BID18 BID5, there seems to be little motivation to search for alternatives to this loss function, and most of the software developed for neural networks incorporates an efficient implementation for it, thereby facilitating its use. Recently, there has been a line of work analyzing the dynamics of training a linear classifier with the cross-entropy loss function (BID15b; BID7).
They specified the decision boundary that the gradient descent algorithm yields on linearly separable datasets and claimed that this solution achieves the maximum margin.
However, these claims were observed not to hold in the simple experiments we ran.
For example, FIG6 displays a case where the cross-entropy minimization for a linear classifier leads to a decision boundary which attains an extremely poor margin and is nearly orthogonal to the solution given by the hard-margin support vector machine (SVM). We
set out to understand this discrepancy between the claims of the previous works and our observations on the simple experiments. We
can summarize our contributions as follows.
We compare our results with related works and discuss their implications for the following subjects.
Adversarial examples.
State-of-the-art neural networks have been observed to misclassify inputs that are slightly different from their training data, which indicates a small margin between their decision boundaries and the training dataset (Szegedy et al., 2013; BID3; Moosavi-Dezfooli et al., 2017).
Our results reveal that the combination of gradient methods, cross-entropy loss function and the low-dimensionality of the training dataset (at least in some domain) has a responsibility for this problem.
Note that SVM with the radial basis function was shown to be robust against adversarial examples, and this was attributed to the high nonlinearity of the radial basis function in BID3 .
Given that the SVM uses neither the cross entropy loss function nor the gradient descent algorithm for training, we argue that the robustness of SVM is no surprise -independent of its nonlinearity.
Lastly, effectiveness of differential training for neural networks against adversarial examples is our ongoing work.
Figure: The activations feeding into the soft-max layer can be considered as the features for a linear classifier. The plot shows the cumulative variance explained for these features as a function of the number of principal components used. Almost all the variance in the features is captured by the first 20 principal components out of 84, which shows that the input to the soft-max layer resides predominantly in a low-dimensional subspace.
Low-dimensionality of the training dataset.
As stated in Remark 3, as the dimension of the affine subspace containing the training dataset gets very small compared to the dimension of the input space, the training algorithm will become more likely to yield a small margin for the classifier.
This observation confirms the results of BID13 , which showed that if the set of training data is projected onto a low-dimensional subspace before feeding into a neural network, the performance of the network against adversarial examples is improved -since projecting the inputs onto a low-dimensional domain corresponds to decreasing the dimension of the input space.
Even though this method is effective, it requires the knowledge of the domain in which the training points are low-dimensional.
Because this knowledge will not always be available, finding alternative training algorithms and loss functions that are suited for low-dimensional data is still an important direction for future research.
Robust optimization.
Using robust optimization techniques to train neural networks has been shown to be effective against adversarial examples BID12 BID0 .
Note that these techniques could be considered as inflating the training points by a presumed amount and training the classifier with these inflated points.
Consequently, as long as the cross-entropy loss is involved, the decision boundaries of the neural network will still be in the vicinity of the inflated points.
Therefore, even though the classifier is robust against the disturbances of the presumed magnitude, the margin of the classifier could still be much smaller than what it could potentially be.
Differential training.
We introduced differential training, which allows the feature mapping to remain trainable while ensuring a large margin between different classes of points.
Therefore, this method combines the benefits of neural networks with those of support vector machines.
Even though moving from 2N training points to N^2 pairs seems prohibitive, it points out that a true classifier should in fact be able to differentiate between the pairs that are hardest to differentiate, and this search will necessarily involve an N^2 term.
Some heuristic methods are likely to be effective, such as considering only a smaller subset of points closer to the boundary and updating this set of points as needed during training.
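A schematic sketch of a loss defined on cross-class pairs, in the spirit of the differential training idea described above; the specific objective below (a logistic loss on the score difference of every positive/negative pair) is an illustrative stand-in rather than the paper's exact formulation.

```python
import torch

def pairwise_differential_loss(scores_pos, scores_neg):
    """Logistic loss on the score difference of every cross-class pair
    (N_pos * N_neg terms). An illustrative stand-in for a pairwise objective;
    the hardest pairs dominate the loss."""
    diff = scores_pos[:, None] - scores_neg[None, :]    # shape (N_pos, N_neg)
    return torch.nn.functional.softplus(-diff).mean()   # log(1 + exp(-diff))

# Toy usage with a linear scorer on 2-D features.
torch.manual_seed(0)
w = torch.zeros(2, requires_grad=True)
x_pos = torch.randn(50, 2) + torch.tensor([2.0, 0.0])
x_neg = torch.randn(60, 2) - torch.tensor([2.0, 0.0])
opt = torch.optim.SGD([w], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = pairwise_differential_loss(x_pos @ w, x_neg @ w)
    loss.backward()
    opt.step()
print(w.detach())
```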
If a neural network is trained with this procedure, the network will be forced to find features that can tell apart the hardest pairs.
Nonseparable data.
What happens when the training data is not linearly separable is an open direction for future work.
However, as stated in Remark 4, this case is not expected to arise for the state-of-the-art networks, since they have been shown to achieve zero training error even on randomly generated datasets (Zhang et al., 2017) , which implies that the features represented by the output of their penultimate layer eventually become linearly separable.
A PROOF OF THEOREM 1
Theorem 1 could be proved by using Theorem 2, but we provide an independent proof here. The proof analyzes the iterates of the gradient descent algorithm with learning rate δ on the cross-entropy loss (1), establishes a sequence of lemmas characterizing the fixed points of the resulting dynamics, and then compares the limiting classifier with the hard-margin SVM solution w_SVM = arg min ||w|| subject to ⟨w, x_i + y_j⟩ ≥ 2 for all i ∈ I, j ∈ J, concluding with a bound on the margin obtained by minimizing the cross-entropy loss. [Displayed equations omitted.]
|
We show that minimizing the cross-entropy loss by using a gradient method could lead to a very poor margin if the features of the dataset lie on a low-dimensional subspace.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:531
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The concepts of unitary evolution matrices and associative memory have boosted the field of Recurrent Neural Networks (RNN) to state-of-the-art performance in a variety of sequential tasks.
However, RNN still has a limited capacity to manipulate long-term memory.
To bypass this weakness the most successful applications of RNN use external techniques such as attention mechanisms.
In this paper we propose a novel RNN model that unifies the state-of-the-art approaches: Rotational Unit of Memory (RUM).
The core of RUM is its rotational operation, which is, naturally, a unitary matrix, providing architectures with the power to learn long-term dependencies by overcoming the vanishing and exploding gradients problem.
Moreover, the rotational unit also serves as associative memory.
We evaluate our model on synthetic memorization, question answering and language modeling tasks.
RUM learns the Copying Memory task completely and improves the state-of-the-art result in the Recall task.
RUM’s performance in the bAbI Question Answering task is comparable to that of models with attention mechanism.
We also improve the state-of-the-art result to 1.189 bits-per-character (BPC) loss in the Character Level Penn Treebank (PTB) task, which is to signify the applications of RUM to real-world sequential data.
The universality of our construction, at the core of RNN, establishes RUM as a promising approach to language modeling, speech recognition and machine translation.
Recurrent neural networks are widely used in a variety of machine learning applications such as language modeling BID7 ), machine translation BID5 ) and speech recognition BID11 ).
Their flexibility of taking inputs of dynamic length makes RNN particularly useful for these tasks.
However, the traditional RNN models such as Long Short-Term Memory (LSTM, BID12 ) and Gated Recurrent Unit (GRU, BID5 ) exhibit some weaknesses that prevent them from achieving human level performance:
1) limited memory: they can only remember a hidden state, which usually occupies a small part of a model;
2) gradient vanishing/explosion (BID4) during training: trained with backpropagation through time, the models fail to learn long-term dependencies.
Several ways to address those problems are known. One solution is to use soft and local attention mechanisms (BID5), which is crucial for most modern applications of RNN.
Nevertheless, researchers are still interested in improving basic RNN cell models to process sequential data better.
Numerous works BID7 ; BID2 ) use associative memory to span a large memory space.
For example, a practical way to implement associative memory is to set weight matrices as trainable structures that change according to input instances for training.
Furthermore, the recent concept of unitary or orthogonal evolution matrices (BID0; BID14) also provides a theoretical and empirical solution to the problem of memorizing long-term dependencies. Here, we propose a novel RNN cell that simultaneously resolves those weaknesses of basic RNNs.
The Rotational Unit of Memory is a modified gated model whose rotational operation acts as associative memory and is strictly an orthogonal matrix.
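To make the rotational operation concrete, here is a NumPy sketch of the standard construction of an orthogonal matrix that rotates one unit vector into another within the plane they span; whether this matches RUM's exact parameterization is not claimed here.

```python
import numpy as np

def rotation_between(a, b, eps=1e-8):
    """Orthogonal matrix R with R @ a ≈ b for unit vectors a, b: a 2-D rotation
    in the plane span{a, b}, identity on its orthogonal complement (a sketch;
    assumes a and b are not parallel)."""
    a = a / (np.linalg.norm(a) + eps)
    b = b / (np.linalg.norm(b) + eps)
    c = float(a @ b)                      # cos(theta)
    v = b - c * a
    v = v / (np.linalg.norm(v) + eps)     # unit vector orthogonal to a in span{a, b}
    s = float(b @ v)                      # sin(theta) >= 0
    basis = np.stack([a, v], axis=1)      # n x 2 basis of the rotation plane
    rot2d = np.array([[c, -s], [s, c]])
    n = a.shape[0]
    return np.eye(n) - basis @ basis.T + basis @ rot2d @ basis.T

rng = np.random.default_rng(0)
a, b = rng.standard_normal(8), rng.standard_normal(8)
R = rotation_between(a, b)
print(np.allclose(R @ (a / np.linalg.norm(a)), b / np.linalg.norm(b)))  # rotates a onto b
print(np.allclose(R.T @ R, np.eye(8)))                                  # orthogonal
```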
We tested our model on several benchmarks.
RUM is able to solve the synthetic Copying Memory task while traditional LSTM and GRU fail.
For the synthetic Recall task, RUM exhibits a stronger ability to remember sequences, hence outperforming state-of-the-art RNN models such as Fastweight RNN (BID2) and WeiNet (Zhang & Zhou, 2017).
By using RUM we achieve the state-of-the-art result in the real-world Character Level Penn Treebank task.
RUM also outperforms all basic RNN models in the bAbI question answering task.
This performance is competitive with that of memory networks, which take advantage of attention mechanisms.
Our contributions are as follows:
1. We develop the concept of the Rotational Unit that combines the memorization advantage of unitary/orthogonal matrices with the dynamic structure of associative memory;
2. The Rotational Unit of Memory serves as the first phase-encoded model for Recurrent Neural Networks, which improves the state-of-the-art performance of the current frontier of models in a diverse collection of sequential tasks.
We proposed a novel RNN architecture: Rotational Unit of Memory.
The model takes advantage of the unitary and associative memory concepts.
RUM outperforms many previous state-of-the-art models, including LSTM, GRU, GORU and NTM in synthetic benchmarks: Copying Memory and Associative Recall tasks.
Additionally, RUM's performance in real-world tasks, such as question answering and language modeling, is competitive with that of advanced architectures, some of which include attention mechanisms.
We claim the Rotational Unit of Memory can serve as the new benchmark model that absorbs all advantages of existing models in a scalable way.
Indeed, the rotational operation can be applied to many other fields, not limited only to RNN, such as Convolutional and Generative Adversarial Neural Networks.
|
A novel RNN model which outperforms significantly the current frontier of models in a variety of sequential tasks.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:532
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
While many recent advances in deep reinforcement learning rely on model-free methods, model-based approaches remain an alluring prospect for their potential to exploit unsupervised data to learn environment dynamics.
One prospect is to pursue hybrid approaches, as in AlphaGo, which combines Monte-Carlo Tree Search (MCTS)—a model-based method—with deep-Q networks (DQNs)—a model-free method.
MCTS requires generating rollouts, which is computationally expensive.
In this paper, we propose to simulate roll-outs, exploiting the latest breakthroughs in image-to-image transduction, namely Pix2Pix GANs, to predict the dynamics of the environment.
Our proposed algorithm, generative adversarial tree search (GATS), simulates rollouts up to a specified depth using both a GAN- based dynamics model and a reward predictor.
GATS employs MCTS for planning over the simulated samples and uses DQN to estimate the Q-function at the leaf states.
Our theoretical analysis establishes some favorable properties of GATS vis-a-vis the bias-variance trade-off and empirical results show that on 5 popular Atari games, the dynamics and reward predictors converge quickly to accurate solutions.
However, GATS fails to outperform DQNs in 4 out of 5 games.
Notably, in these experiments, MCTS has only short rollouts (up to tree depth 4), while previous successes of MCTS have involved tree depth in the hundreds.
We present a hypothesis for why tree search with short rollouts can fail even given perfect modeling.
The earliest and best-publicized applications of deep reinforcement learning (DRL) involve Atari games (Mnih et al., 2015) and the board game of Go (Silver et al., 2016) , where experience is inexpensive because the environments are simulated.
In such scenarios, DRL can be combined with Monte-Carlo tree search (MCTS) methods (Kearns et al., 2002; Kocsis & Szepesvári, 2006) for planning, where the agent executes roll-outs on the simulated environment (as far as computationally feasible) to find suitable policies.
However, for RL problems with long episodes, e.g. Go, MCTS can be very computationally expensive.
In order to speed up MCTS for Go and learn an effective policy, AlphaGo (Silver et al., 2016) employs a depth-limited MCTS with depth in the hundreds on its Go emulator and uses an estimated Q-function to query the value of leaf nodes.
However, in real-world applications, such as robotics (Levine et al., 2016) and dialogue systems (Lipton et al., 2016) , collecting samples often takes considerable time and effort.
In such scenarios, the agent typically cannot access either the environment model or a corresponding simulator. Recently, generative adversarial networks (GANs) BID15 have emerged as a popular tool for synthesizing realistic-seeming data, especially for high-dimensional domains, including images and audio.
Unlike previous approaches to image generation, which typically produced blurry images due to optimizing an L1 or L2 objective, GANs produce crisp images.
Since their original conception as an unsupervised method, GANs have been extended for conditional generation, e.g., generating an image conditioned on a label (Mirza & Osindero, 2014; Odena et al., 2016) or the next frame in a video given a context window (Mathieu et al., 2015).
Recently, the PIX2PIX approach has demonstrated impressive results on a range of image-to-image transduction tasks (Isola et al., 2017). In
this work, we propose and analyze generative adversarial tree search (GATS), a new DRL algorithm that utilizes samples from the environment to learn a Q-function approximator, a near-term reward predictor, and a GAN-based model of the environment's dynamics (state transitions). Together
, the dynamics model and reward predictor constitute a learned simulator on which MCTS can be performed. GATS leverages
PIX2PIX GANs to learn a generative dynamics model (GDM) that efficiently learns the dynamics of the environment, producing images that agree closely with the actual observed transitions and are also visually crisp. We thoroughly
study various image transduction models, arriving ultimately at a GDM that converges quickly (compared to the DQN), and appears from our evaluation to be reasonably robust to subtle distribution shifts, including some that destroy a DQN policy. We also train
a reward predictor that converges quickly, achieving negligible error (over 99% accuracy). GATS bridges
model-based and model-free reinforcement learning, using the learned dynamics and reward predictors to simulate roll-outs in combination with a DQN. Specifically
, GATS deploys the MCTS method for planning over a bounded tree depth and uses the DQN algorithm to estimate the Q-function as a value for the leaf states (Mnih et al., 2015; Van Hasselt et al., 2016). One notable aspect of the GATS algorithm is its flexibility, owing to its few modular building blocks: (i) value learning: we deployed DQN and DDQN; (ii) planning: we use pure Monte Carlo sampling; (iii) reward predictor: we used a simple 3-class classifier; (iv) dynamics model: we
propose the GDM architecture. Practically, one can swap
in other methods for any among these blocks and we highlight some alternatives in the related work. Thus, GATS constitutes a
general framework for studying the trade-offs between model-based and model-free reinforcement learning.
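To make the planning step concrete, the following is a minimal sketch of depth-limited tree search over a learned model, with a Q-network evaluating leaf states. The function names, the discount factor, and the exhaustive enumeration of action sequences are illustrative assumptions rather than the authors' implementation.

```python
import itertools

GAMMA = 0.99  # assumed discount factor

def plan_action(state, dynamics, reward, q_net, actions, depth=4):
    """Depth-limited tree search over a learned model (illustrative sketch).

    dynamics(state, action) -> predicted next state (role of the GDM)
    reward(state, action)   -> predicted immediate reward
    q_net(state)            -> sequence of Q-values, one per action (leaf values)
    """
    best_return, best_first_action = float("-inf"), actions[0]
    # Enumerate every action sequence of length `depth`; pure Monte Carlo
    # sampling could subsample these sequences instead of enumerating them.
    for seq in itertools.product(actions, repeat=depth):
        s, ret, discount = state, 0.0, 1.0
        for a in seq:
            ret += discount * reward(s, a)
            s = dynamics(s, a)
            discount *= GAMMA
        # Bootstrap the leaf state's value with the learned Q-function.
        ret += discount * max(q_net(s))
        if ret > best_return:
            best_return, best_first_action = ret, seq[0]
    return best_first_action
```

With a tree depth of 4 and a small action set, this enumeration is cheap; the cost of each step is dominated by the forward passes through the learned dynamics model.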
Discussion of negative results. In this section, we enumerate several hypotheses for why GATS under-performs DQN despite near-perfect modeling, and discuss several attempts to improve GATS based on these hypotheses.
The following are shown in TAB1 .
Replay Buffer: The agent's decision under GATS sometimes differs from that of the learned Q model.
Therefore, it is important that we allow the Q-learner to observe the outcomes encountered in the generated MCTS states.
To address this problem, we tried storing the samples generated in tree search and using them to further train the Q-model.
We studied two scenarios:
(i) using plain DQN with no generated samples and
(ii) using Dyna-Q to train the Q function on the generated samples in MCTS.
However, these techniques did not improve the performance of GATS. Optimizer: Since the problem is slightly different from DQN, especially in the Dyna-Q setting with generated frames, we tried a variety of different learning rates and minibatch sizes to tune the Q-learner.
|
Surprising negative results on Model Based + Model deep RL
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:533
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs.
As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters.
We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramér GAN critic.
Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs.
In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance.
We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training.
Generative Adversarial Networks (GANs; BID10 provide a powerful method for general-purpose generative modeling of datasets.
Given examples from some distribution, a GAN attempts to learn a generator function, which maps from some fixed noise distribution to samples that attempt to mimic a reference or target distribution.
The generator is trained to trick a discriminator, or critic, which tries to distinguish between generated and target samples.
This alternative to standard maximum likelihood approaches for training generative models has brought about a rush of interest over the past several years.
Likelihoods do not necessarily correspond well to sample quality BID13 , and GAN-type objectives focus much more on producing plausible samples, as illustrated particularly directly by Danihelka et al. (2017) .
This class of models has recently led to many impressive examples of image generation (e.g. Huang et al., 2017a; Jin et al., 2017; Zhu et al., 2017). GANs
are, however, notoriously tricky to train (Salimans et al., 2016) . This
might be understood in terms of the discriminator class. BID10
showed that, when the discriminator is trained to optimality among a rich enough function class, the generator network attempts to minimize the Jensen-Shannon divergence between the generator and target distributions. This
result has been extended to general f -divergences by Nowozin et al. (2016) . According
to BID1 , however, it is likely that both the GAN and reference probability measures are supported on manifolds within a larger space, as occurs for the set of images in the space of possible pixel values. These manifolds
might not intersect at all, or at best might intersect on sets of measure zero. In this case, the
Jensen-Shannon divergence is constant, and the KL and reverse-KL divergences are infinite, meaning that they provide no useful gradient for the generator to follow. This helps to explain
some of the instability of GAN training. The lack of sensitivity to distance, meaning that nearby but non-overlapping regions of high probability mass are not considered similar, is a long-recognized problem for KL divergence-based discrepancy measures (e.g. Gneiting & Raftery, 2007, Section 4.2). It is natural to address
this problem using Integral Probability Metrics (IPMs; Müller, 1997) : these measure the distance between probability measures via the largest discrepancy in expectation over a class of "well behaved" witness functions. Thus, IPMs are able to signal
proximity in the probability mass of the generator and reference distributions. (Section 2 describes this framework
in more detail.) BID1 proposed to use the Wasserstein
distance between distributions as the discriminator, which is an integral probability metric constructed from the witness class of 1-Lipschitz functions. To implement the Wasserstein critic,
Arjovsky et al. originally proposed weight clipping of the discriminator network, to enforce k-Lipschitz smoothness. Gulrajani et al. (2017) improved on
this result by directly constraining the gradient of the discriminator network at points between the generator and reference samples. This new Wasserstein GAN implementation
, called WGAN-GP, is more stable and easier to train. A second integral probability metric used in GAN variants is the maximum mean discrepancy (MMD), for which the witness function class is a unit ball in a reproducing kernel Hilbert space (RKHS). Generative adversarial models based on
minimizing the MMD were first considered by Li et al. (2015) and Dziugaite et al. (2015) . These works optimized a generator to minimize
the MMD with a fixed kernel, either using a generic kernel on image pixels or by modeling autoencoder representations instead of images directly. BID9 instead minimized the statistical power
of an MMD-based test with a fixed kernel. Such approaches struggle with complex natural
images, where pixel distances are of little value, and fixed representations can easily be tricked, as in the adversarial examples of BID10. Adversarial training of the MMD loss is thus an
obvious choice to advance these methods. Here the kernel MMD is defined on the output of
a convolutional network, which is trained adversarially. Recent notable work has made use of the IPM representation
of the MMD to employ the same witness function regularization strategies as BID1 and Gulrajani et al. (2017) , effectively corresponding to an additional constraint on the MMD function class. Without such constraints, the convolutional features are unstable
and difficult to train BID9 . Li et al. (2017b) essentially used the weight clipping strategy of
Arjovsky et al., with additional constraints to encourage the kernel distribution embeddings to be injective. In light of the observations by Gulrajani et al., however, we use
a gradient constraint on the MMD witness function in the present work (see Sections 2.1 and 2.2). Bellemare et al. (2017)'s method, the Cramér GAN, also used the gradient
constraint strategy of Gulrajani et al. in their discriminator network. As we discuss in Section 2.3, the Cramér GAN discriminator is related to
the energy distance, which is an instance of the MMD (Sejdinovic et al., 2013) , and which can therefore use a gradient constraint on the witness function. Note, however, that there are important differences between the Cramér GAN
critic and the energy distance, which make it more akin to the optimization of a scoring rule: we provide further details in Appendix A. Weight clipping and gradient constraints are not the only approaches possible: variance features (Mroueh et al., 2017) and constraints (Mroueh & Sercu, 2017) can work, as can other optimization strategies (Berthelot et al., 2017; Li et al., 2017a). Given that both the Wasserstein distance and the MMD are integral probability
metrics, it is of interest to consider how they differ when used in GAN training. Bellemare et al. (2017) showed that optimizing the empirical Wasserstein distance
can lead to biased gradients for the generator, and gave an explicit example where optimizing with these biased gradients leads the optimizer to incorrect parameter values, even in expectation. They then claim that the energy distance does not suffer from these problems. As
our main theoretical contribution, we substantially clarify the bias situation
in Section 3. First, we show (Theorem 1) that the natural maximum mean discrepancy estimator, including
the estimator of energy distance, has unbiased gradients when used "on top" of a fixed deep network representation. The generator gradients obtained from a trained representation, however, will be biased relative
to the desired gradients of the optimal critic based on infinitely many samples. This situation is exactly analogous to WGANs: the generator's gradients with a fixed critic are
unbiased, but gradients from a learned critic are biased with respect to the supremum over critics. MMD GANs, though, do have some advantages over Wasserstein GANs. Certainly we would not expect the MMD on its own to perform well on raw image data, since these
data lie on a low dimensional manifold embedded in a higher dimensional pixel space. Once the images are mapped through appropriately trained convolutional layers, however, they can
follow a much simpler distribution with broader support across the mapped domain: a phenomenon also observed in autoencoders (Bengio et al., 2013) . In this setting, the MMD with characteristic kernels BID4 shows strong discriminative performance
between distributions. To achieve comparable performance, a WGAN without the advantage of a kernel on the transformed space
requires many more convolutional filters in the critic. In our experiments (Section 5), we find that MMD GANs achieve the same generator performance as WGAN-GPs
with smaller discriminator networks, resulting in GANs with fewer parameters and computationally faster training. Thus, the MMD GAN discriminator can be understood as a hybrid model that plays to the strengths of both
the initial convolutional mappings and the kernel layer that sits on top.
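As a concrete reference for the kernel layer that the MMD GAN critic places on top of the convolutional features, here is a small sketch of the standard unbiased estimator of MMD^2 with a Gaussian kernel; the kernel choice and bandwidth are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), computed pairwise.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2_unbiased(x, y, sigma=1.0):
    """Unbiased estimator of MMD^2 between samples x (m, d) and y (n, d).

    In an MMD GAN, x and y would be critic features of real and generated
    samples, so the kernel sits on top of the learned representation.
    """
    m, n = len(x), len(y)
    kxx = gaussian_kernel(x, x, sigma)
    kyy = gaussian_kernel(y, y, sigma)
    kxy = gaussian_kernel(x, y, sigma)
    # Drop diagonal terms for the unbiased within-sample averages.
    term_xx = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    term_yy = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    term_xy = kxy.mean()
    return term_xx + term_yy - 2.0 * term_xy
```

The generator would minimize this quantity while the critic features (and, with a gradient constraint, the witness function) are trained adversarially to maximize it.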
|
Explain bias situation with MMD GANs; MMD GANs work with smaller critic networks than WGAN-GPs; new GAN evaluation metric.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:535
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We extend the Consensus Network framework to Transductive Consensus Network (TCN), a semi-supervised multi-modal classification framework, and identify its two mechanisms: consensus and classification.
By putting forward three variants as ablation studies, we show both mechanisms should be functioning together.
Overall, TCNs outperform or align with the best benchmark algorithms when only 20 to 200 labeled data points are available.
|
A semi-supervised multi-modal classification framework, TCN, that outperforms various benchmarks.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:536
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Separating mixed distributions is a long standing challenge for machine learning and signal processing.
Applications include: single-channel multi-speaker separation (cocktail party problem), singing voice separation and separating reflections from images.
Most current methods either rely on making strong assumptions on the source distributions (e.g. sparsity, low rank, repetitiveness) or rely on having training samples of each source in the mixture.
In this work, we tackle the scenario of extracting an unobserved distribution additively mixed with a signal from an observed (arbitrary) distribution.
We introduce a new method: Neural Egg Separation - an iterative method that learns to separate the known distribution from progressively finer estimates of the unknown distribution.
In some settings, Neural Egg Separation is initialization sensitive, we therefore introduce GLO Masking which ensures a good initialization.
Extensive experiments show that our method outperforms current methods that use the same level of supervision and often achieves similar performance to full supervision.
Humans are remarkably good at separating data coming from a mixture of distributions, e.g. hearing a person speaking in a crowded cocktail party.
Artificial intelligence, on the other hand, is far less adept at separating mixed signals.
This is an important ability as signals in nature are typically mixed, e.g. speakers are often mixed with other speakers or environmental sounds, objects in images are typically seen along other objects as well as the background.
Understanding mixed signals is harder than understanding pure sources, making source separation an important research topic. Mixed signal separation appears in many scenarios corresponding to different degrees of supervision.
Most previous work focused on the following settings. Full supervision: The learner has access to a training set including samples of mixed signals {y_i} ∈ Y as well as the ground truth sources of the same signals {b_i} ∈ B and {x_i} ∈ X (such that y_i = x_i + b_i).
Having such strong supervision is very potent, allowing the learner to directly learn a mapping from the mixed signal y i to its sources (x i , b i ).
Obtaining such strong supervision is typically unrealistic, as it requires manual separation of mixed signals.
Consider for example a musical performance, humans are often able to separate out the different sounds of the individual instruments, despite never having heard them play in isolation.
The fully supervised setting does not allow the clean extraction of signals that cannot be observed in isolation e.g. music of a street performer, car engine noises or reflections in shop windows.
GLO vs. Adversarial Masking: GLO Masking as a stand alone technique usually performed worse than Adversarial Masking.
On the other hand, finetuning from GLO masks was far better than finetuning from adversarial masks.
We speculate that mode collapse, inherent in adversarial training, makes the adversarial masks a lower bound on the X source distribution.
GLOM can result in models that are too loose (i.e. that also encode samples outside of X ).
But as an initialization for NES finetuning, it is better to have a model that is too loose than a model which is too tight. Supervision Protocol: Supervision is important for source separation.
Completely blind source separation is not well specified and simply using general signal statistics is generally unlikely to yield competitive results.
Obtaining full supervision by providing a labeled mask for training mixtures is unrealistic but even synthetic supervision in the form of a large training set of clean samples from each source distribution might be unavailable as some sounds are never observed on their own (e.g. sounds of car wheels).
Our setting significantly reduces the required supervision to specifying if a certain sound sample contains or does not contain the unobserved source.
Such supervision can be quite easily and inexpensively provided.
For further sample efficiency increases, we hypothesize that it would be possible to label only a limited set of examples as containing the target sound or not, and to use this seed dataset to finetune a deep sound classifier to extract more examples from an unlabeled dataset.
We leave this investigation to future work.
In this paper we proposed a novel method-Neural Egg Separation-for separating mixtures of observed and unobserved distributions.
We showed that careful initialization using GLO Masking improves results in challenging cases.
Our method achieves much better performance than other methods and was usually competitive with full-supervision.
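The excerpt describes NES only at a high level, so the following is a speculative sketch of how such an iterative scheme could be organized; the initialization, the pairing of estimates with observed-source samples, and the train_separator placeholder are assumptions rather than the authors' actual procedure.

```python
def neural_egg_separation(mixtures, observed_b_samples, train_separator,
                          num_iters=5):
    """Rough sketch of an NES-style iterative loop.

    mixtures:           samples y = x + b, with x coming from the unobserved source
    observed_b_samples: clean samples from the observed distribution B
    train_separator:    trains a network f(y) -> x_hat on (mixture, target) pairs
    """
    # Crude initial estimate of the unobserved component (assumption: start
    # from the mixtures themselves; GLO Masking could supply a better start).
    x_estimates = [y for y in mixtures]
    separator = None
    for _ in range(num_iters):
        # Build synthetic mixtures by adding observed-source samples to the
        # current estimates of the unobserved source (assumes equal counts).
        pairs = [(x + b, x) for x, b in zip(x_estimates, observed_b_samples)]
        separator = train_separator(pairs)
        # Refine the estimates of the unobserved component on real mixtures.
        x_estimates = [separator(y) for y in mixtures]
    return separator, x_estimates
```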
|
An iterative neural method for extracting signals that are only observed mixed with other signals
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:537
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In health, machine learning is increasingly common, yet neural network embedding (representation) learning is arguably under-utilized for physiological signals.
This inadequacy stands out in stark contrast to more traditional computer science domains, such as computer vision (CV), and natural language processing (NLP).
For physiological signals, learning feature embeddings is a natural solution to data insufficiency caused by patient privacy concerns -- rather than share data, researchers may share informative embedding models (i.e., representation models), which map patient data to an output embedding.
Here, we present the PHASE (PHysiologicAl Signal Embeddings) framework, which consists of three components: i) learning neural network embeddings of physiological signals, ii) predicting outcomes based on the learned embedding, and iii) interpreting the prediction results by estimating feature attributions in the "stacked" models (i.e., feature embedding model followed by prediction model).
PHASE is novel in three ways: 1) To our knowledge, PHASE is the first instance of transferal of neural networks to create physiological signal embeddings.
2) We present a tractable method to obtain feature attributions through stacked models.
We prove that our stacked model attributions can approximate Shapley values -- attributions known to have desirable properties -- for arbitrary sets of models.
3) PHASE was extensively tested in a cross-hospital setting including publicly available data.
In our experiments, we show that PHASE significantly outperforms alternative embeddings -- such as raw, exponential moving average/variance, and autoencoder -- currently in use.
Furthermore, we provide evidence that transferring neural network embedding/representation learners between distinct hospitals still yields performant embeddings and offer recommendations when transference is ineffective.
Representation learning (i.e., learning embeddings) BID14 has been applied to medical images and clinical text (Tajbakhsh et al., 2016; BID16 BID13 ) but has been under-explored for time series physiological signals in electronic health records.
This paper introduces the PHASE (PHysiologicAl Signal Embeddings) framework to learn embeddings of physiological signals FIG1 ), which can be used for various prediction tasks FIG1 , and has been extensively tested in terms of its transferability using data from multiple hospitals ( FIG1 ).
In addition, this paper introduces an interpretability method to compute per-sample feature attributions of the original features (i.e., not embeddings) for a prediction result in a tricky "stacked" model situation (i.e., embedding model followed by prediction model) (FIG1). Based
on computer vision (CV) and natural language processing (NLP), exemplars of representation learning, physiological signals are well suited to embeddings. In particular
, CV and NLP share two notable traits with physiological signals. The first is
consistency. For CV, the
domain has consistent features: edges, colors, and other visual attributes. For NLP, the
domain is a particular language with semantic relationships consistent across bodies of text. For sequential
signals, physiological patterns are arguably consistent across individuals. The second attribute
is complexity. Across these three domains
, each particular domain is sufficiently complex such that learning embeddings is non-trivial. Together, consistency and
complexity suggest that for a particular domain, every research group independently spends a significant amount of time learning embeddings that may ultimately be quite similar. In order to avoid this negative externality, NLP and CV have made great progress on standardizing their embeddings; in health, physiological signals are a natural next step.
Figure 1: The PHASE framework, which consists of embedding learning, prediction, interpretation, and transference. The checkered patterns denote that a model is being trained in the corresponding stage, whereas solid colors denote fixed weights/models. The red side of the LSTM denotes the hidden layer we will use to generate embeddings. In (c), the size of the black circles on the left represents the feature attributions being assigned to the original input features. The signals and the outputs of the LSTMs are vectors. Multiple connections into a single XGB model are simply concatenated. More details on the experimental setup can be found in Sections 4.1 and 6.1.
Furthermore, physiological signals have unique properties that make them arguably better suited to representation learning than traditional CV and NLP applications. First, physiological signals are typically generated
in the health domain, which is constrained by patient privacy concerns. These concerns make sharing data between hospitals next
to impossible; however, sharing models between hospitals is intuitively safer and generally accepted. Second, a key component to successful transfer learning
is a community of researchers that work on related problems. According to Faust et al. (2018) , there were at least
fifty-three research publications using deep learning methods for physiological signals in the past ten years. Additionally, we discuss particular examples of neural
networks for physiological signals in Section 2.2. These varied applications of neural networks imply that
there is a large community of machine learning research scientists working on physiological signals, a community that could one day work collaboratively to help patients by sharing models. Although embedding learning has many aforementioned advantages, it makes interpretation more difficult. Naive applications of existing interpretation methods (Shrikumar et al., 2016; Sundararajan et al., 2017) do not work for models trained using learned embeddings, because they will assign attributions to the embeddings. Feature attributions assigned to embeddings will be meaningless
, because the embeddings do not map to any particular input feature. Instead, each embedding is a complicated, potentially non-linear
combination of the original raw physiological signals. In a health domain, inability to meaningfully interpret your model
is unsatisfactory. Healthcare providers and patients alike generally want to know the
reasoning behind predictions/diagnoses. Interpretability can enhance both scientific discovery as well as
provide credibility to predictive models. In order to provide a principled methodology for mapping embedding
attributions back into physiological signal attributions, we provide a proof that justifies PHASE's Shapley value framework in Section 3.3. This framework generalizes across arbitrary stacked models and currently
encompasses neural network models (e.g., linear models, neural networks) and tree-based models (e.g., gradient boosting machines and random forests). In the following sections, we discuss previous related work (Section 2) and
describe the PHASE framework (Section 3). In Section 4, we first evaluate how well our neural network embeddings make
accurate predictions (Section 4.2.1). Second, we evaluate whether transferring these embedding learners still enables
accurate predictions across three different hospitals separated by location and across hospital departments (Section 4.2.2). Lastly, we present a visualization of our methodology for providing Shapley value
feature attributions through stacked models in Section 4.2.3.
This paper presents PHASE, a new approach to machine learning with physiological signals based on transferring embedding learners.
PHASE has potentially far-reaching impacts, because neural networks inherently create an embedding before the final output layer.
As discussed in Section 2.2, there is a large body of research independently working on neural networks for physiological signals.
PHASE offers a potential method of collaboration by analyzing partially supervised univariate networks as semi-private ways to share meaningful signals without sharing data sets. In the results section we offer several insights into transference of univariate LSTM embedding functions.
First, closeness of upstream (LSTM) and downstream prediction tasks is indeed important for both predictive performance and transference.
For performance, we found that predicting the minimum of the future five minutes was sufficient for the LSTMs to generate good embeddings.
For transference, predicting the minimum of the next five minutes was sufficient to transfer across similar domains (operating room data from an academic medical center and a trauma center) when predicting hypoxemia.
However when attempting to utilize a representation from Hospital P, we found that the difference between operating rooms and intensive care units was likely too large to provide good predictions.
Two solutions to this include fine tuning the Min LSTM models as well as acknowledging the large amount of domain shift and training specific LSTM embedding models with a particular downstream prediction in mind.
Last but not least, this paper introduced a way to obtain feature attributions for stacked models of neural networks and trees.
By showing that Shapley values may be computed as the mean over single reference Shapley values, this model stacking framework generalizes to all models for which single reference Shapley values can be obtained, which was quantitatively verified in Section 4.2.3. We intend to release code pertinent to training the LSTM models, obtaining embeddings, predicting with XGB models, and model stacking feature attributions - submitted as a pull request to the SHAP github (https://github.com/slundberg/shap).
Additionally, we intend to release our embedding models, which we primarily recommend for use in forecasting "hypo" predictions. In the direction of future work, it is important to carefully consider representation learning in health - particularly in light of model inversion attacks as discussed in Fredrikson et al. (2015).
To this end, future work in making precise statements about the privacy of models deserves attention, for which one potential avenue may be differential privacy (Dwork, 2008) .
Other important areas to explore include extending these results to higher sampling frequencies.
Our data was sampled once per minute, but higher resolution data may beget different neural network architectures.
Lastly, further work may include quantifying the relationship between domain shifts in hospitals and PHASE, and determining other relevant prediction tasks for which embeddings can be applied (e.g., "hyper" predictions, doctor action prediction, etc.).
Labels: For hypoxemia, a particular time point t is labelled to be one if the minimum of the next five minutes is hypoxemic (min(SaO2_{t+1:t+6}) ≤ 92).
All points where the current time step is currently hypoxemic are ignored (SaO2_t ≤ 92).
Additionally we ignore time points where the past ten minutes were all missing or the future five minutes were all missing.
Hypocapnia and hypotension are only labelled for hospitals 0 and 1.
Additionally, we have stricter label conditions.
We labeled the current time point t to be one if min(S_{t-10:t}) > T and the minimum of the next five minutes is "hypo" (min(S_{t+1:t+5}) ≤ T).
We labeled the current time point t to be zero if min(S_{t-10:t}) > T and the minimum of the next ten minutes is not "hypo" (min(S_{t+1:t+10}) > T).
All other time points were not considered.
For hypocapnia, the threshold is T = 34 and the signal S is ETCO2.
For hypotension, the threshold is T = 59 and the signal S is NIBPM.
Additionally we ignore time points where the past ten minutes were all missing or the future five minutes were all missing.
As a result, we have different sample sizes for different prediction tasks (reported in TAB7 ).
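The threshold-based "hypo" labeling rules above translate directly into code; the following sketch is one possible implementation, where the use of NaN for missing values and the function signature are assumptions.

```python
import numpy as np

def hypo_labels(signal, threshold):
    """Label each time point per the rules above.

    signal: 1-D array sampled once per minute, with NaN marking missing values.
    Returns an array with 1 (positive), 0 (negative), or -1 (ignored).
    """
    labels = np.full(len(signal), -1, dtype=int)
    for t in range(10, len(signal) - 10):
        past10 = signal[t - 10:t]          # S_{t-10:t}
        next5 = signal[t + 1:t + 6]        # S_{t+1:t+5}
        next10 = signal[t + 1:t + 11]      # S_{t+1:t+10}
        if np.all(np.isnan(past10)) or np.all(np.isnan(next5)):
            continue                       # fully missing windows are ignored
        if np.nanmin(past10) <= threshold:
            continue                       # must currently be above threshold
        if np.nanmin(next5) <= threshold:
            labels[t] = 1                  # "hypo" within the next five minutes
        elif np.nanmin(next10) > threshold:
            labels[t] = 0                  # clearly not "hypo" in the next ten
    return labels

# e.g. hypo_labels(etco2, threshold=34) for hypocapnia,
#      hypo_labels(nibpm, threshold=59) for hypotension.
```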
For Min predictions, the label is the value of min(S_{t+1:t+5}); points without signal in the future five minutes are ignored.
For Auto predictions, the label is all the time points: S_{t-59:t}.
The sample sizes for Min and Auto are the same and are reported in Table 3.
Table 3 : Sample sizes for the Min and Auto predictions for training the LSTM autoencoders.
For the autoencoders we utilize the same data, without looking at the labels.
We only utilize the 15 features above the line in both hospitals ( Figure 5 ) for training our models.
(2000) , implemented in the Keras library with a Tensorflow back-end.
We train our networks with either regression (Auto and Min embeddings) or classification (Hypox) objectives.
For regression, we optimize using Adam with an MSE loss function.
For classification we optimize using RMSProp with a binary cross-entropy loss function (additionally, we upsample to maintain balanced batches during training).
Our model architectures consist of two hidden layers, each with 200 LSTM cells with dense connections between all layers.
We found that important steps in training LSTM networks for our data are to impute missing values by the training mean, standardize data, and to randomize sample ordering prior to training (allowing us to sample data points in order without replacement).
To prevent overfitting, we utilized dropouts between layers as well as recurrent dropouts for the LSTM nodes.
Using a learning rate of 0.001 gave us the best final results.
The LSTM models were run to convergence (until their validation accuracy did not improve for five rounds of batch stochastic gradient descent).
In order to train these models, we utilize three GPUs (GeForce GTX 1080 Ti graphics cards).
|
Physiological signal embeddings for prediction performance and hospital transference with a general Shapley value interpretability method for stacked models.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:538
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We consider the dictionary learning problem, where the aim is to model the given data as a linear combination of a few columns of a matrix known as a dictionary, where the sparse weights forming the linear combination are known as coefficients.
Since the dictionary and coefficients, parameterizing the linear model are unknown, the corresponding optimization is inherently non-convex.
This was a major challenge until recently, when provable algorithms for dictionary learning were proposed.
Yet, these provide guarantees only on the recovery of the dictionary, without explicit recovery guarantees on the coefficients.
Moreover, any estimation error in the dictionary adversely impacts the ability to successfully localize and estimate the coefficients.
This potentially limits the utility of existing provable dictionary learning methods in applications where coefficient recovery is of interest.
To this end, we develop NOODL: a simple Neurally plausible alternating Optimization-based Online Dictionary Learning algorithm, which recovers both the dictionary and coefficients exactly at a geometric rate, when initialized appropriately.
Our algorithm, NOODL, is also scalable and amenable for large scale distributed implementations in neural architectures, by which we mean that it only involves simple linear and non-linear operations.
Finally, we corroborate these theoretical results via experimental evaluation of the proposed algorithm with the current state-of-the-art techniques.
Sparse models avoid overfitting by favoring simple yet highly expressive representations.
Since signals of interest may not be inherently sparse, expressing them as a sparse linear combination of a few columns of a dictionary is used to exploit the sparsity properties.
Of specific interest are overcomplete dictionaries, since they provide a flexible way of capturing the richness of a dataset, while yielding sparse representations that are robust to noise; see BID13 ; Chen et al. (1998); Donoho et al. (2006) .
In practice however, these dictionaries may not be known, warranting a need to learn such representations -known as dictionary learning (DL) or sparse coding BID14 .
Formally, this entails learning an a priori unknown dictionary A ∈ R^{n×m} and sparse coefficients x*_(j) ∈ R^m from data samples y_(j) ∈ R^n generated as y_(j) = A x*_(j). This particular model can also be viewed as an extension of the low-rank model BID15.
Here, instead of sharing a low-dimensional structure, each data vector can now reside in a separate low-dimensional subspace.
Therefore, together the data matrix admits a union-of-subspace model.
As a result of this additional flexibility, DL finds applications in a wide range of signal processing and machine learning tasks, such as denoising (Elad and Aharon, 2006) , image inpainting BID12 , clustering and classification (Ramirez et al., 2010; BID16 BID17 BID18 2019b; a) , and analysis of deep learning primitives (Ranzato et al., 2008; BID0 ; see also Elad (2010) , and references therein. Notwithstanding the non-convexity of the associated optimization problems (since both factors are unknown), alternating minimization-based dictionary learning techniques have enjoyed significant success in practice.
Popular heuristics include regularized least squares-based BID14 BID8 BID12 BID9 BID7 , and greedy approaches such as the method of optimal directions (MOD) (Engan et al., 1999) and k-SVD (Aharon et al., 2006) .
However, dictionary learning, and matrix factorization models in general, are difficult to analyze in theory; see also BID10 . To
this end, motivated by a string of recent theoretical works (BID1; BID4; Geng and Wright, 2014), provable algorithms for DL have been proposed recently to explain the success of the aforementioned alternating minimization-based algorithms (Agarwal et al., 2014; Arora et al., 2014; BID20). However
, these works exclusively focus on guarantees for dictionary recovery. On the
other hand, for applications of DL in tasks such as classification and clustering - which rely on coefficient recovery - it is crucial to have guarantees on coefficient recovery as well. Contrary to conventional prescription, a sparse approximation step after recovery of the dictionary does not help, since any error in the dictionary - which leads to an error-in-variables (EIV) (Fuller, 2009) model for the dictionary - degrades our ability to even recover the support of the coefficients (Wainwright, 2009). Further
, when this error is non-negligible, the existing results guarantee recovery of the sparse coefficients only in an ℓ2-norm sense (Donoho et al., 2006). As a result
, there is a need for scalable dictionary learning techniques with guaranteed recovery of both factors.
we note that Arora15 ("biased") and Arora15 ("unbiased") incur significant bias, while NOODL converges to A* linearly.
NOODL also converges for significantly higher choices of sparsity k, i.e., for k = 100 as shown in panel (d), beyond k = O(√n), indicating a potential for improving this bound.
Further, we observe that Mairal '09 exhibits significantly slow convergence as compared to NOODL.
Also, in panels (a-ii), (b-ii), (c-ii) and (d-ii) we show the corresponding performance of NOODL in terms of the error in the overall fit (||Y − AX||_F / ||Y||_F), and the error in the coefficients and the dictionary, in terms of the relative Frobenius error metric discussed above.
We observe that the error in dictionary and coefficients drops linearly as indicated by our main result.
We present NOODL, to the best of our knowledge, the first neurally plausible provable online algorithm for exact recovery of both factors of the dictionary learning (DL) model.
NOODL alternates between:
(a) an iterative hard thresholding (IHT)-based step for coefficient recovery, and
(b) a gradient descent-based update for the dictionary, resulting in a simple and scalable algorithm, suitable for large-scale distributed implementations.
We show that once initialized appropriately, the sequence of estimates produced by NOODL converge linearly to the true dictionary and coefficients without incurring any bias in the estimation.
Complementary to our theoretical and numerical results, we also design an implementation of NOODL in a neural architecture for use in practical applications.
In essence, the analysis of this inherently non-convex problem impacts other matrix and tensor factorization tasks arising in signal processing, collaborative filtering, and machine learning.
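As a rough illustration of the alternating scheme summarized above (an IHT-based coefficient step followed by a gradient step on the dictionary), here is a simplified single-iteration sketch; the step sizes, the hard threshold, and the column renormalization are illustrative assumptions rather than the values prescribed by the paper's theory.

```python
import numpy as np

def noodl_like_step(Y, A, X, eta_x=0.2, eta_a=0.5, tau=0.1, num_iht=5):
    """One alternating update in the spirit of NOODL (simplified sketch).

    Y: data matrix (n, p), A: current dictionary (n, m), X: coefficients (m, p).
    eta_x, eta_a, tau are illustrative step sizes / threshold, not the
    theoretically prescribed choices from the paper.
    """
    # (a) Iterative hard thresholding for the coefficients with A fixed.
    for _ in range(num_iht):
        X = X - eta_x * A.T @ (A @ X - Y)       # gradient step on 0.5 * ||Y - AX||^2
        X[np.abs(X) < tau] = 0.0                # hard-threshold small entries
    # (b) Gradient descent step on the dictionary with X fixed.
    A = A - eta_a * (A @ X - Y) @ X.T / Y.shape[1]
    A /= np.linalg.norm(A, axis=0, keepdims=True)  # renormalize columns (assumption)
    return A, X
```

Both steps involve only matrix multiplications and elementwise thresholding, which is what makes the scheme amenable to distributed, neurally plausible implementations.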
|
We present a provable algorithm for exactly recovering both factors of the dictionary learning model.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:539
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We consider two questions at the heart of machine learning; how can we predict if a minimum will generalize to the test set, and why does stochastic gradient descent find minima that generalize well?
Our work responds to \citet{zhang2016understanding}, who showed deep neural networks can easily memorize randomly labeled training data, despite generalizing well on real labels of the same inputs.
We show that the same phenomenon occurs in small linear models.
These observations are explained by the Bayesian evidence, which penalizes sharp minima but is invariant to model parameterization.
We also demonstrate that, when one holds the learning rate fixed, there is an optimum batch size which maximizes the test set accuracy.
We propose that the noise introduced by small mini-batches drives the parameters towards minima whose evidence is large.
Interpreting stochastic gradient descent as a stochastic differential equation, we identify the ``noise scale" $g = \epsilon (\frac{N}{B} - 1) \approx \epsilon N/B$, where $\epsilon$ is the learning rate, $N$ the training set size and $B$ the batch size.
Consequently the optimum batch size is proportional to both the learning rate and the size of the training set, $B_{opt} \propto \epsilon N$.
We verify these predictions empirically.
This paper shows Bayesian principles can explain many recent observations in the deep learning literature, while also discovering practical new insights.
BID27 trained deep convolutional networks on ImageNet and CIFAR10, achieving excellent accuracy on both training and test sets.
They then took the same input images, but randomized the labels, and found that while their networks were now unable to generalize to the test set, they still memorized the training labels.
They claimed these results contradict learning theory, although this claim is disputed BID18 BID7 .
Nonetheless, their results beg the question; if our models can assign arbitrary labels to the training set, why do they work so well in practice?
Meanwhile BID19 observed that if we hold the learning rate fixed and increase the batch size, the test accuracy usually falls.
This striking result shows improving our estimate of the full-batch gradient can harm performance.
BID11 observed a linear scaling rule between batch size and learning rate in a deep ResNet, while BID15 proposed a square root rule on theoretical grounds. Many authors have suggested "broad minima" whose curvature is small may generalize better than "sharp minima" whose curvature is large BID4 BID14 .
Indeed, BID7 argued the results of BID27 can be understood using "nonvacuous" PAC-Bayes generalization bounds which penalize sharp minima, while BID19 showed stochastic gradient descent (SGD) finds wider minima as the batch size is reduced.
However BID6 challenged this interpretation, by arguing that the curvature of a minimum can be arbitrarily increased by changing the model parameterization.
In this work we show:
• The results of BID27 are not unique to deep learning; we observe the same phenomenon in a small "over-parameterized" linear model. We demonstrate that this phenomenon is straightforwardly understood by evaluating the Bayesian evidence in favor of each model, which penalizes sharp minima but is invariant to the model parameterization.
• SGD integrates a stochastic differential equation whose "noise scale" g ≈ εN/B, where ε is the learning rate, N the training set size and B the batch size. Noise drives SGD away from sharp minima, and therefore there is an optimal batch size which maximizes the test set accuracy. This optimal batch size is proportional to the learning rate and training set size.
We describe
Bayesian model comparison in section 2. In section
3 we replicate the observations of BID27 in a linear model, and show they are explained by the Bayesian evidence. In section
4 we show there is an optimum batch size which maximizes the test set accuracy, and in section 5 we derive scaling rules between the optimum batch size, learning rate, training set size and momentum coefficient. Throughout
this work, "generalization gap" refers to the gap in test accuracy between small and large batch SGD training, not the gap in accuracy between training and test sets.
Just like deep neural networks, linear models which generalize well on informative labels can memorize random labels of the same inputs.
These observations are explained by the Bayesian evidence, which is composed of the cost function and an "Occam factor".
The Occam factor penalizes sharp minima but it is invariant to changes in model parameterization.
Mini-batch noise drives SGD away from sharp minima, and therefore there is an optimum batch size which maximizes the test accuracy.
Interpreting SGD as the discretization of a stochastic differential equation, we predict this optimum batch size should scale linearly with both the learning rate and the training set size, B_opt ∝ εN.
We derive an additional scaling rule, B_opt ∝ 1/(1 − m), between the optimal batch size and the momentum coefficient.
We verify these scaling rules empirically and discuss their implications.
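The scaling rules are straightforward to apply in practice; the short sketch below computes the noise scale and an implied optimal batch size for illustrative values, where the proportionality constant is an assumption that would have to be tuned empirically.

```python
def noise_scale(lr, train_size, batch_size):
    # g = eps * (N / B - 1), approximately eps * N / B when B << N.
    return lr * (train_size / batch_size - 1.0)

def optimal_batch_size(lr, train_size, momentum=0.0, c=0.1):
    # B_opt is proportional to eps * N / (1 - m); c is an illustrative constant.
    return c * lr * train_size / (1.0 - momentum)

if __name__ == "__main__":
    print(noise_scale(lr=0.1, train_size=50_000, batch_size=128))      # ~39
    print(optimal_batch_size(lr=0.1, train_size=50_000, momentum=0.9))  # 5000
```

In practice, these rules say that when the learning rate, the training set size, or the momentum coefficient changes, the batch size should be rescaled accordingly to keep the noise scale near its optimum.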
|
Generalization is strongly correlated with the Bayesian evidence, and gradient noise drives SGD towards minima whose evidence is large.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:54
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Real-world Relation Extraction (RE) tasks are challenging to deal with, either due to limited training data or class imbalance issues.
In this work, we present Data Augmented Relation Extraction (DARE), a simple method to augment training data by properly finetuning GPT2 to generate examples for specific relation types.
The generated training data is then used in combination with the gold dataset to train a BERT-based RE classifier.
In a series of experiments we show the advantages of our method, which leads to improvements of up to 11 F1 score points compared to a strong baseline.
Also, DARE achieves new state-of-the-art in three widely used biomedical RE datasets surpassing the previous best results by 4.7 F1 points on average.
Relation Extraction (RE) is the task of identifying semantic relations from text, for given entity mentions in it.
This task, along with Named Entity Recognition, has become increasingly important recently due to the advent of knowledge graphs and their applications.
In this work, we focus on supervised RE (Zeng et al., 2014; Lin et al., 2016; Wu et al., 2017; Verga et al., 2018) , where relation types come from a set of predefined categories, as opposed to Open Information Extraction approaches that represent relations among entities using their surface forms (Banko et al., 2007; Fader et al., 2011) .
RE is inherently linked to Natural Language Understanding in the sense that a successful RE model should manage to capture adequately well language structure and meaning.
So, almost inevitably, the latest advances in language modelling with Transformer-based architectures (Radford et al., 2018a; Devlin et al., 2018; Radford et al., 2018b) have been quickly employed to also deal with RE tasks (Soares et al., 2019; Lin et al., 2019; Shi and Lin, 2019; Papanikolaou et al., 2019) .
These recent works have mainly leveraged the discriminative power of BERT-based models to improve upon the state-of-the-art.
In this work we take a step further and try to assess whether the text generating capabilities of another language model, GPT-2 (Radford et al., 2018b) , can be applied to augment training data and deal with class imbalance and small-sized training sets successfully.
Specifically, given a RE task we finetune a pretrained GPT-2 model per each relation type and then use the resulting finetuned models to generate new training samples.
We then combine the generated data with the gold dataset and finetune a pretrained BERT model (Devlin et al., 2018) on the resulting dataset to perform RE.
We conduct extensive experiments, studying different configurations for our approach, and compare DARE against two strong baselines and the state-of-the-art on three well-established biomedical RE benchmark datasets.
The results show that our approach yields significant improvements against the rest of the approaches.
To the best of our knowledge, this is the first attempt to augment training data with GPT-2 for RE.
In Table 1 we show some generated examples with GPT-2 models finetuned on the datasets that are used in the experiments (refer to Section 4).
In the following, we provide a brief overview of related works in Section 2, we then describe our approach in Section 3, followed by our experimental results (Section 4) and the conclusions (Section 5).
We have presented DARE, a novel method to augment training data in Relation Extraction.
Given a gold RE dataset, our approach proceeds by finetuning a pre-trained GPT-2 model per relation type and then uses the finetuned models to generate new training data.
We sample subsets of the synthetic data with the gold dataset to finetune an ensemble of RE classifiers that are based on BERT.
On a series of experiments we show empirically that our method is particularly suited to deal with class imbalance or limited data settings, recording improvements up to 11 F1 score points over two strong baselines.
We also report new state-of-the-art performance on three biomedical RE benchmarks.
Our work can be extended with minor improvements on other Natural Language Understanding tasks, a direction that we would like to address in future work.
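To illustrate the generation step of such a pipeline, here is a heavily simplified sketch using the Hugging Face transformers library; the model directory, prompt format, and sampling settings are assumptions for illustration and not the paper's exact recipe (which finetunes one GPT-2 model per relation type).

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

def generate_synthetic_examples(model_dir, prompt, num_samples=100):
    """Sample synthetic training sentences from a GPT-2 model that has
    already been finetuned on gold examples of a single relation type.

    model_dir and prompt are illustrative placeholders.
    """
    tokenizer = GPT2Tokenizer.from_pretrained(model_dir)
    model = GPT2LMHeadModel.from_pretrained(model_dir)
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    outputs = model.generate(
        input_ids,
        max_length=64,
        do_sample=True,          # sampling encourages diverse examples
        top_p=0.95,
        num_return_sequences=num_samples,
        pad_token_id=tokenizer.eos_token_id,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

# The generated sentences would then be combined with the gold training set
# and used to finetune a BERT-based relation classifier.
```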
|
Data Augmented Relation Extraction with GPT-2
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:540
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
This paper addresses the problem of representing a system's belief using multi-variate normal distributions (MND) where the underlying model is based on a deep neural network (DNN).
The major challenge with DNNs is the computational complexity that is needed to obtain model uncertainty using MNDs.
To achieve a scalable method, we propose a novel approach that expresses the parameter posterior in sparse information form.
Our inference algorithm is based on a novel Laplace Approximation scheme, which involves a diagonal correction of the Kronecker-factored eigenbasis.
As this makes the inversion of the information matrix intractable - an operation that is required for full Bayesian analysis, we devise a low-rank approximation of this eigenbasis and a memory-efficient sampling scheme.
We provide both a theoretical analysis and an empirical evaluation on various benchmark data sets, showing the superiority of our approach over existing methods.
Whenever machine learning methods are used for safety-critical applications such as medical image analysis or autonomous driving, it is crucial to provide a precise estimation of the failure probability of the learned predictor.
Therefore, most of the current learning approaches return distributions rather than single, most-likely predictions.
For example, DNNs trained for classification usually use the softmax function to provide a distribution over predicted class labels.
Unfortunately, this method tends to severely underestimate the true failure probability, leading to overconfident predictions (Guo et al., 2017) .
The main reason for this is that neural networks are typically trained with a principle of maximum likelihood, neglecting their epistemic or model uncertainty with the point estimates.
A widely known work by Gal (2016) shows that this can be mitigated by using dropout at test time.
This so-called Monte-Carlo dropout (MC-dropout) has the advantage that it is relatively easy to use and therefore very popular in practice.
However, MC-dropout also has significant drawbacks.
First, it requires a specific stochastic regularization during training.
This limits its use on already well trained architectures, because current networks are often trained with other regularization techniques such as batch normalization.
Moreover, it uses a Bernoulli distribution to represent the complex model uncertainty, which in return, leads to an underestimation of the predictive uncertainty.
Several strong alternatives exist without these drawbacks.
Variational inference Kingma et al., 2015; Graves, 2011) and expectation propagation (Herandez-Lobato & Adams, 2015) are such examples.
Yet, these methods use a diagonal covariance matrix which limits their applicability as the model parameters are often highly correlated.
Building upon these, Sun et al. (2017) ; Louizos & Welling (2016) ; Zhang et al. (2018) ; Ritter et al. (2018a) show that the correlations between the parameters can also be computed efficiently by decomposing the covariance matrix of MND into Kronecker products of smaller matrices.
However, not all matrices can be Kronecker decomposed and thus, these simplifications usually induce crude approximations (Bae et al., 2018) .
As the dimensionality of the statistical manifold is prohibitively large in DNNs, more expressive, efficient, but still easy-to-use ways of representing such high-dimensional distributions are required.
To tackle this challenge, we propose to represent the model uncertainty in sparse information form of MND.
As a first step, we devise a new Laplace Approximation (LA) for DNNs, in which we improve the state-of-the-art Kronecker factored approximations of the Hessian (George et al., 2018) by correcting the diagonal variance in parameter space.
We show that these can be computed efficiently, and that the information matrix of the resulting parameter posterior is more accurate in terms of the Frobenius norm.
In this way the model uncertainty is approximated in information form of the MND.
Figure 1: Main idea.
(a) The covariance matrix Σ for DNNs is intractable to infer, store and sample (an example taken from our MNIST experiments).
(b) Our main insight is that the spectrum (eigenvalues) of the information matrix (inverse of covariance) tends to be sparse.
(c) Exploiting this insight, a Laplace Approximation scheme is devised that applies spectral sparsification (LRA) while keeping the diagonals exact.
With this formulation, the complexity becomes tractable for sampling while producing more accurate estimates.
Here, the diagonal elements (nodes in the graphical interpretation) correspond to the information content in a parameter, whereas the corrections (links) are the off-diagonals.
As this results in intractable inverse operation for sampling, we further propose a novel low-rank representation of the resulting Kronecker factorization, which paves the way to applications on large network structures trained on realistically sized data sets.
To realize such sparsification, we propose a novel algorithm that enables a low-rank approximation of the Kronecker-factored eigenvalue decomposition, and we demonstrate the associated sampling computations.
Our experiments demonstrate that our approach is effective in providing more accurate uncertainty estimates and calibration on considered benchmark data sets.
A detailed theoretical analysis is also provided for further insights.
We summarize our main contributions below.
• A novel Laplace Approximation scheme with a diagonal correction to the eigenvalue rescaled approximations of the Hessian, as a practical inference tool (section 2.2).
• A novel low-rank representation of Kronecker factored eigendecomposition that preserves Kronecker structure (section 2.3).
This results in a sparse information form of MND.
• A novel algorithm to enable a low rank approximation (LRA) for the given representation of MND (algorithm 1) and derivation of a memory-wise tractable sampler (section B.2).
• Both theoretical (section C) and experimental results (section 4) showing the applicability of our approach.
In our experiments, we showcase the state-of-the-art performance within the class of Bayesian Neural Networks that are scalable and training-free.
To our knowledge, we are the first to explore a sparse information form to represent the model uncertainty of DNNs.
Figure 1 depicts our main idea, which we formulate more rigorously next.
We addressed an effective approach to representing model uncertainty in deep neural networks using the multivariate normal distribution, which has so far been thought computationally intractable.
This is achieved by designing its novel sparse information form.
With one of the most expressive representations of model uncertainty in the current Bayesian deep learning literature, we show that uncertainty can be estimated more accurately than with existing methods.
For future work, we plan to demonstrate a real-world application of this approach, pushing beyond a proof of concept.
|
An approximate inference algorithm for deep learning
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:541
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The ever-increasing size of modern datasets combined with the difficulty of obtaining label information has made semi-supervised learning of significant practical importance in modern machine learning applications.
In comparison to supervised learning, the key difficulty in semi-supervised learning is how to make full use of the unlabeled data.
In order to utilize manifold information provided by unlabeled data, we propose a novel regularization called the tangent-normal adversarial regularization, which is composed by two parts.
The two parts complement with each other and jointly enforce the smoothness along two different directions that are crucial for semi-supervised learning.
One is applied along the tangent space of the data manifold, aiming to enforce local invariance of the classifier on the manifold, while the other is performed on the normal space orthogonal to the tangent space, intending to impose robustness on the classifier against the noise causing the observed data deviating from the underlying data manifold.
Both of the two regularizers are achieved by the strategy of virtual adversarial training.
Our method has achieved state-of-the-art performance on semi-supervised learning tasks on both artificial dataset and practical datasets.
The recent success of supervised learning (SL) models, like deep convolutional neural networks, highly relies on the huge amount of labeled data.
However, though obtaining data itself might be relatively effortless in various circumstances, to acquire the annotated labels is still costly, limiting the further applications of SL methods in practical problems.
Semi-supervised learning (SSL) models, which require only a small part of the data to be labeled, do not suffer from such restrictions.
The advantage that SSL depends less on well-annotated datasets makes it of crucial practical importance and draws a lot of research interest.
The common setting in SSL is that we have access to a relatively small amount of labeled data and a much larger amount of unlabeled data.
We then need to train a classifier utilizing these data.
Compared to SL, the main challenge of SSL is how to make full use of the huge amount of unlabeled data, i.e., how to utilize the marginalized input distribution p(x) to improve the prediction model, i.e., the conditional distribution of the supervised target p(y|x).
To solve this problem, there are mainly three streams of research.
The first approach, based on probabilistic models, recognizes the SSL problem as a specialized missing-data imputation task for the classification problem. The common scheme of this method is to establish a hidden variable model capturing the relationship between the input and label, and then apply Bayesian inference techniques to optimize the model (BID10; Zhu et al., 2003; BID21). Because the estimation of the posterior is either inaccurate or computationally inefficient, this approach performs less well, especially on high-dimensional datasets (BID10).
The second line tries to construct proper regularization using the unlabeled data, to impose the desired smoothness on the classifier. One kind of useful regularization is achieved by adversarial training (BID8), or virtual adversarial training (VAT) when applied to unlabeled data (BID15). Such regularization leads to robustness of the classifier to adversarial examples, thus inducing smoothness of the classifier in the input space where the observed data is presented. Though the input space is high dimensional, the data itself is concentrated on an underlying manifold of much lower dimensionality (BID2; BID17; Chapelle et al., 2009; BID22). Thus, directly performing VAT in the input space might overly regularize and do potential harm to the classifier. Another kind of regularization, called manifold regularization, aims to encourage invariance of the classifier on the manifold (BID25; BID0; BID18; BID11; BID22), rather than in the input space as VAT does. Such manifold regularization is implemented by tangent propagation (BID25; BID11) or the manifold Laplacian norm (BID0; BID13), requiring evaluation of the Jacobian of the classifier (with respect to the manifold representation of the data) and thus being highly computationally inefficient.
The third way is related to generative adversarial networks (GANs) (BID7). Most GAN-based approaches modify the discriminator to include a classifier, by splitting the real class of the original discriminator into K subclasses, where K denotes the number of classes of the labeled data (BID24; BID19; BID5; BID20). The features extracted for distinguishing whether an example is real or fake, which can be viewed as a kind of coarse label, have implicit benefits for the supervised classification task. Besides that, there are also works jointly training a classifier, a discriminator and a generator (BID14).
Our work mainly follows the second line. We first sort out three important assumptions that motivate our idea.
The manifold assumption: The observed data presented in high-dimensional space is with high probability concentrated in the vicinity of some underlying manifold of much lower dimensionality (BID2; BID17; Chapelle et al., 2009; BID22). We denote the underlying manifold as M. We further assume that the classification task concerned relies, and only relies, on M (BID22).
The noisy observation assumption: The observed data x can be decomposed into two parts as x = x_0 + n, where x_0 is exactly supported on the underlying manifold M and n is some noise independent of x_0 (BID1; BID21). With the assumption that the classifier only depends on the underlying manifold M, the noise part might have undesired influences on the learning of the classifier.
The semi-supervised learning assumption: If two points x_1, x_2 ∈ M are close in manifold distance, then the conditional probabilities p(y|x_1) and p(y|x_2) are similar (BID0; BID22; BID18). In other words, the true classifier, or the true conditional distribution p(y|x), varies smoothly along the underlying manifold M.
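Both regularizers in this paper are implemented via virtual adversarial training restricted to a particular subspace, so the following minimal PyTorch sketch may help make the mechanics concrete. It is not the paper's exact implementation: the subspace basis (tangent or normal directions of the estimated manifold) is passed in as a placeholder argument, and the hyperparameter names (xi, eps, n_power) follow common VAT conventions rather than the paper's notation.

```python
import torch
import torch.nn.functional as F

def subspace_vat_loss(model, x, basis, xi=1e-6, eps=1.0, n_power=1):
    """Virtual adversarial loss with the perturbation constrained to span(basis).

    model : classifier returning logits for inputs of shape (B, D)
    x     : batch of inputs, shape (B, D)
    basis : assumed orthonormal basis of the tangent (or normal) subspace, shape (D, K)
    """
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)           # current predictive distribution (fixed target)

    # Power iteration for the most sensitive direction inside the subspace.
    d = torch.randn(x.size(0), basis.size(1), device=x.device)
    for _ in range(n_power):
        d = F.normalize(d, dim=1).requires_grad_(True)
        r = xi * d @ basis.t()                   # lift subspace coordinates to the input space
        kl = F.kl_div(F.log_softmax(model(x + r), dim=1), p, reduction="batchmean")
        d = torch.autograd.grad(kl, d)[0].detach()

    r_adv = eps * F.normalize(d, dim=1) @ basis.t()
    return F.kl_div(F.log_softmax(model(x + r_adv), dim=1), p, reduction="batchmean")
```

Applying this with a tangent basis enforces smoothness along the manifold, while applying it with a normal basis enforces robustness against off-manifold noise, mirroring the two parts of the proposed regularization.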
We present the tangent-normal adversarial regularization, a novel regularization strategy for semi-supervised learning, composed of regularization on the tangent and normal spaces separately.
The tangent adversarial regularization enforces manifold invariance of the classifier, while the normal adversarial regularization imposes robustness of the classifier against the noise contained in the observed data.
Experiments on artificial dataset and multiple practical datasets demonstrate that our approach outperforms other state-of-the-art methods for semi-supervised learning.
The performance of our method relies on the quality of the estimation of the underlying manifold; hence, breakthroughs in modeling the data manifold could also benefit our strategy for semi-supervised learning, which we leave as future work.
represent two different classes.
The observed data is sampled as x = x_0 + n, where x_0 is uniformly sampled from M and n ∼ N(0, 2^{-2}).
We sample 6 labeled training data points, 3 for each class, and 3,000 unlabeled training data points, as shown in FIG9.
|
We propose a novel manifold regularization strategy based on adversarial training, which can significantly improve the performance of semi-supervised learning.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:542
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Universal approximation property of neural networks is one of the motivations to use these models in various real-world problems.
However, this property is not the only characteristic that makes neural networks unique as there is a wide range of other approaches with similar property.
Another characteristic which makes these models interesting is that they can be trained with the backpropagation algorithm which allows an efficient gradient computation and gives these universal approximators the ability to efficiently learn complex manifolds from a large amount of data in different domains.
Despite their abundant use in practice, neural networks are still not well understood and a broad range of ongoing research is to study the interpretability of neural networks.
On the other hand, topological data analysis (TDA) relies on strong theoretical framework of (algebraic) topology along with other mathematical tools for analyzing possibly complex datasets.
In this work, we leverage a universal approximation theorem originating from algebraic topology to build a connection between TDA and common neural network training framework.
We introduce the notion of automatic subdivisioning and devise a particular type of neural network for regression tasks: Simplicial Complex Networks (SCNs).
An SCN's architecture is defined by a set of bias functions along with a particular policy during the forward pass, which alters the common architecture-search framework used in neural networks.
We believe the view of SCNs can be used as a step towards building interpretable deep learning models.
Finally, we verify its performance on a set of regression problems.
It is well-known that under mild assumptions on the activation function, a neural network with one hidden layer and a finite number of neurons can approximate continuous functions.
This characteristic of neural networks is generally referred to as the universal approximation property.
There are various theoretical universal approximators.
For example, a result of the Stone-Weierstrass theorem Stone (1948) ; Cotter (1990) is that multivariate polynomials are dense in the space of continuous real valued functions defined over a hypercube.
Another example is that the reproducing kernel Hilbert space (RKHS) associated with kernel functions with particular properties can be dense in the same space of functions.
Kernel functions with this property are called universal kernels Micchelli et al. (2006) .
A subsequent result of this theory is that the set of functions generated by a Gaussian process regression with an appropriate kernel can approximate any continuous function over a hypercube with arbitrary precision.
Although multivariate polynomials and Gaussian processes also have this approximation property, each has practical limitations that cause neural networks to be used more often in practice compared to these approaches.
For instance, polynomial interpolation may result in a model that overfits the data and suffers from poor generalization, and Gaussian processes often become computationally intractable for a large number of training data points (Bernardo et al.).
Neural networks, with an efficient structure for gradient computation using backpropagation, can be trained using gradient based optimization for large datasets in a tractable time.
Moreover, in contrast to existing polynomial interpolations, neural networks generalize well in practice.
Theoretical and empirical understanding of the generalization power of neural networks is an ongoing research Novak et al. (2018) ; Neyshabur et al. (2017) .
Topological Data Analysis (TDA), a geometric approach for data analysis, is a growing field which provides statistical and algorithmic methods to analyze the topological structures of data often referred to as point clouds.
TDA methods mainly relied on deterministic methods until recently, when statistical approaches were proposed for this purpose (Carriere et al., 2017; Chazal & Michel, 2017).
In general, TDA methods assume a point cloud in a metric space with an inducing distance (e.g. Euclidean, Hausdorff, or Wasserstein distance) between samples and build a topological structure upon point clouds.
The topological structure is then used to extract geometric information from data Chazal & Michel (2017) .
These models are not trained with gradient based approaches and they are generally limited to predetermined algorithms whose application to high dimensional spaces may be challenging Chazal (2016) .
In this work, by leveraging the geometrical perspective of TDA, we provide a class of restricted neural networks that preserve the universal approximation property and can be trained using a forward pass and the backpropagation algorithm.
Motivated by the approximation theorem used to develop our method, the name Simplicial Complex Network (SCN) is chosen to refer to these models.
SCNs do not require an activation function or architecture search in the way that conventional neural networks do.
Their hidden units are conceptually well defined, in contrast to feed-forward neural networks, for which the role of a hidden unit is still an open problem.
SCNs are discussed in more detail in later sections.
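As a rough geometric intuition for approximation by subdivision (this is not the SCN architecture itself, whose definition appears in Section 2 of the paper), the sketch below approximates a 1-D function by subdividing its domain into simplices (sub-intervals) and interpolating linearly on each, using barycentric coordinates; finer subdivisions yield smaller error. All names here are illustrative.

```python
import numpy as np

def simplicial_approximation(f, a, b, n_simplices):
    """Piecewise-linear approximation of f on [a, b] over a uniform 1-D subdivision."""
    vertices = np.linspace(a, b, n_simplices + 1)   # 0-simplices of the subdivision
    values = f(vertices)                            # function values at the vertices

    def approx(x):
        # Locate the simplex (sub-interval) containing x and interpolate linearly
        # using the barycentric coordinates of x inside that simplex.
        i = np.clip(np.searchsorted(vertices, x) - 1, 0, n_simplices - 1)
        t = (x - vertices[i]) / (vertices[i + 1] - vertices[i])
        return (1 - t) * values[i] + t * values[i + 1]

    return approx

f = np.sin
for n in (4, 16, 64):
    g = simplicial_approximation(f, 0.0, 2 * np.pi, n)
    xs = np.linspace(0.0, 2 * np.pi, 1000)
    print(n, "simplices, max error:", np.max(np.abs(f(xs) - g(xs))))
```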
Our contribution can be summarized in building a novel class of neural networks which we believe can be used in the future for developing deep models that are interpretable, and robust to perturbations.
The rest of this paper is organized as follows: Section 2 explains SCNs and their training procedure.
Section 3 discusses related work.
Sections 4, 5, and 6 are devoted to experiments, limitations, and the conclusion.
In this work, we have used techniques from topological data analysis to build a class of neural network architectures with the universal approximation property which can be trained using the common neural network training framework.
Topological data analysis methods are based on the geometrical structure of the data and have strong theoretical analysis.
SCNs are made using the geometrical view of TDA and we believe that they can be used as a step towards building interpretable deep learning models.
Most of the experiments in the paper are synthetic.
More practical applications of this work are considered as immediate future work.
Moreover, throughout this work, bias functions of the simplest kinds (constant parameters) were used.
We mentioned earlier that a bias function may be an arbitrary function of its input to keep the universal approximation property of SCNs.
A natural idea is to use common neural network architectures as the bias function.
In this case, backpropagation can be continued to the bias function parameters as well.
This is also considered as another continuation of this work.
|
A novel method for supervised learning through subdivisioning the input space along with function approximation.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:543
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The goal of network representation learning is to learn low-dimensional node embeddings that capture the graph structure and are useful for solving downstream tasks.
However, despite the proliferation of such methods there is currently no study of their robustness to adversarial attacks.
We provide the first adversarial vulnerability analysis on the widely used family of methods based on random walks.
We derive efficient adversarial perturbations that poison the network structure and have a negative effect on both the quality of the embeddings and the downstream tasks.
We further show that our attacks are transferable since they generalize to many models, and are successful even when the attacker is restricted.
Unsupervised node embedding (network representation learning) approaches are becoming increasingly popular and achieve state-of-the-art performance on many network learning tasks BID5 .
The goal is to embed each node in a low-dimensional feature space such that the graph's structure is captured.
The learned embeddings are subsequently used for downstream tasks such as link prediction, node classification, community detection, and visualization.
Among the variety of proposed approaches, techniques based on random walks (RWs) (Perozzi et al.; Grover & Leskovec) are highly successful since they incorporate higher-order relational information.
Given the increasing popularity of these methods, there is a strong need for an analysis of their robustness.
In particular, we aim to study the existence and effects of adversarial perturbations.
A large body of research shows that traditional (deep) learning methods can easily be fooled/attacked: even slight deliberate data perturbations can lead to wrong results (BID17; BID28; BID6; BID12; BID26; BID10). So far, however, the question of adversarial perturbations for node embeddings has not been addressed. This is highly critical, since especially in domains where graph embeddings are used (e.g. the web) adversaries are common and false data is easy to inject: e.g. spammers might create fake followers on social media or fraudsters might manipulate friendship relations in social networks. Can node embedding approaches be easily fooled? The answer to this question is not immediately obvious. On one hand, the relational (non-i.i.d.) nature of the data might improve robustness since the embeddings are computed for all nodes jointly rather than for individual nodes in isolation. On the other hand, the propagation of information might also lead to cascading effects, where perturbations in one part of the graph might affect many other nodes in another part of the graph.
Compared to the existing works on adversarial attacks, our work differs significantly in various aspects. First, by operating on plain graph data, we do not perturb the features of individual instances but rather their interaction/dependency structure. Manipulating the structure (the graph) is a highly realistic scenario. For example, one can easily add or remove fake friendship relations on a social network, or write fake reviews to influence graph-based recommendation engines. Second, the node embedding models are typically trained in an unsupervised and transductive fashion. This means that we cannot rely on a single end-task that our attack might exploit to find appropriate perturbations, and we have to handle a challenging poisoning attack where the model is learned after the attack. That is, the model cannot be assumed to be static as in most other adversarial attack works. Lastly, since graphs are discrete, classical gradient-based approaches (BID28) for finding adversarial perturbations that were designed for continuous data are not well suited. Particularly for RW-based methods, the gradient computation is not directly possible since they are based on a non-differentiable sampling procedure. How can we design efficient algorithms that are able to find adversarial perturbations in such a challenging, discrete and combinatorial graph domain?
We propose a principled strategy for adversarial attacks on unsupervised node embeddings. Exploiting results from eigenvalue perturbation theory (BID35), we are able to efficiently solve a challenging bi-level optimization problem associated with the poisoning attack. We assume an attacker with full knowledge about the data and the model, thus ensuring reliable vulnerability analysis in the worst case. Nonetheless, our experiments on transferability demonstrate that our strategy generalizes: attacks learned based on one model successfully fool other models as well.
Overall, we shed light on an important problem that has not been studied so far. We show that node embeddings are sensitive to adversarial attacks. Relatively few changes are needed to significantly damage the quality of the embeddings, even in the scenario where the attacker is restricted. Furthermore, our work highlights that more work is needed to make node embeddings robust to adversarial perturbations and thus readily applicable in production systems.
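To illustrate the kind of reasoning that eigenvalue perturbation theory enables here (a generic first-order sketch, not the exact bi-level attack derived in the paper), the snippet below scores candidate edge flips by the first-order change they induce in the leading eigenpairs of the adjacency matrix and greedily selects the most disruptive ones. The scoring rule, graph and budget are illustrative assumptions.

```python
import numpy as np

def score_edge_flips(adj, candidates, top_k=8):
    """First-order estimate of how much flipping each candidate edge (i, j)
    perturbs the top-k eigenvalues of the symmetric adjacency matrix.

    For a symmetric perturbation dA, eigenvalue perturbation theory gives
    d(lambda_m) ~= u_m^T dA u_m, which for a single flipped edge (i, j)
    reduces to +/- 2 * u_m[i] * u_m[j].
    """
    eigval, eigvec = np.linalg.eigh(adj)
    lead = eigvec[:, np.argsort(np.abs(eigval))[-top_k:]]   # top-k eigenvectors

    scores = {}
    for i, j in candidates:
        sign = -1.0 if adj[i, j] > 0 else 1.0               # removing vs. adding the edge
        d_lambda = 2.0 * sign * lead[i, :] * lead[j, :]     # per-eigenvalue first-order change
        scores[(i, j)] = np.sum(np.abs(d_lambda))           # total spectral disruption
    return scores

# Toy usage: a small random graph and a budget of 3 edge flips.
rng = np.random.default_rng(0)
n = 30
adj = (rng.random((n, n)) < 0.1).astype(float)
adj = np.triu(adj, 1); adj = adj + adj.T                    # symmetric, no self-loops

candidates = [(i, j) for i in range(n) for j in range(i + 1, n)]
scores = score_edge_flips(adj, candidates)
flips = sorted(scores, key=scores.get, reverse=True)[:3]
print("greedy adversarial flips:", flips)
```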
We demonstrate that node embeddings are vulnerable to adversarial attacks which can be efficiently computed and have a significant negative effect on node classification and link prediction.
Furthermore, successfully poisoning the system is possible with relatively small perturbations and even when the attacker is restricted.
More importantly, our attacks generalize -the adversarial edges are transferable across different models.
Future work includes modeling the knowledge of the attacker, attacking other network representation learning methods, and developing effective defenses against such attacks.
|
Adversarial attacks on unsupervised node embeddings based on eigenvalue perturbation theory.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:544
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Hierarchical reinforcement learning methods offer a powerful means of planning flexible behavior in complicated domains.
However, learning an appropriate hierarchical decomposition of a domain into subtasks remains a substantial challenge.
We present a novel algorithm for subtask discovery, based on the recently introduced multitask linearly-solvable Markov decision process (MLMDP) framework.
The MLMDP can perform never-before-seen tasks by representing them as a linear combination of a previously learned basis set of tasks.
In this setting, the subtask discovery problem can naturally be posed as finding an optimal low-rank approximation of the set of tasks the agent will face in a domain.
We use non-negative matrix factorization to discover this minimal basis set of tasks, and show that the technique learns intuitive decompositions in a variety of domains.
Our method has several qualitatively desirable features: it is not limited to learning subtasks with single goal states, instead learning distributed patterns of preferred states; it learns qualitatively different hierarchical decompositions in the same domain depending on the ensemble of tasks the agent will face; and it may be straightforwardly iterated to obtain deeper hierarchical decompositions.
Hierarchical reinforcement learning methods hold the promise of faster learning in complex state spaces and better transfer across tasks, by exploiting planning at multiple levels of detail BID0 .
A taxi driver, for instance, ultimately must execute a policy in the space of torques and forces applied to the steering wheel and pedals, but planning directly at this low level is beset by the curse of dimensionality.
Algorithms like HAMS, MAXQ, and the options framework permit powerful forms of hierarchical abstraction, such that the taxi driver can plan at a higher level, perhaps choosing which passengers to pick up or a sequence of locations to navigate to BID19 BID3 BID13 .
While these algorithms can overcome the curse of dimensionality, they require the designer to specify the set of higher level actions or subtasks available to the agent.
Choosing the right subtask structure can speed up learning and improve transfer across tasks, but choosing the wrong structure can slow learning BID17 BID1 .
The choice of hierarchical subtasks is thus critical, and a variety of work has sought algorithms that can automatically discover appropriate subtasks.One line of work has derived subtasks from properties of the agent's state space, attempting to identify states that the agent passes through frequently BID18 .
Subtasks are then created to reach these bottleneck states (van Dijk & Polani, 2011; BID17 BID4 .
In a domain of rooms, this style of analysis would typically identify doorways as the critical access points that individual skills should aim to reach (Şimşek & Barto, 2009 ).
This technique can rely only on passive exploration of the agent, yielding subtasks that do not depend on the set of tasks to be performed, or it can be applied to an agent as it learns about a particular ensemble of tasks, thereby suiting the learned options to a particular task set. Another line of work converts the target MDP into a state transition graph.
Graph clustering techniques can then identify connected regions, and subtasks can be placed at the borders between connected regions BID11 .
In a rooms domain, these connected regions might correspond to rooms, with their borders again picking out doorways.
Alternately, subtask states can be identified by their betweenness, counting the number of shortest paths that pass through each specific node (Şimşek & Barto, 2009; BID17).
Other recent work utilizes the eigenvectors of the graph laplacian to specify dense rewards for option policies that are defined over the full state space BID10 .
Finally, other methods have grounded subtask discovery in the information each state reveals about the eventual goal (van Dijk & Polani, 2011) .
Most of these approaches aim to learn options with a single or small number of termination states, can require high computational expense (BID17), and have not been widely used to generate multiple levels of hierarchy (but see BID24; BID12).
Here we describe a novel subtask discovery algorithm based on the recently introduced multitask linearly-solvable Markov decision process (MLMDP) framework (BID14), which learns a basis set of tasks that may be linearly combined to solve tasks that lie in the span of the basis (BID21). We show that an appropriate basis can naturally be found through non-negative matrix factorization (BID8; BID3), yielding intuitive decompositions in a variety of domains. Moreover, we show how the technique may be iterated to learn deeper hierarchies of subtasks. In line with a number of prior methods (BID17; BID12), our method operates in the batch off-line setting, with immediate application to probabilistic planning. The subtask discovery method introduced in BID10, which also utilizes matrix factorization techniques to discover subtasks albeit from a very different theoretical foundation, is notable for its ability to operate in the online RL setting, although it is not immediately clear how the approach taken therein might achieve a deeper hierarchical architecture or enable immediate generalization to novel tasks.
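A minimal sketch of the factorization step described above: non-negative matrix factorization of a tasks-by-states desirability matrix yields a small set of non-negative subtask components whose linear combinations approximate the original task ensemble. The matrix here is random for illustration, whereas in the paper it would be built from the MLMDP's desirability basis.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Placeholder desirability matrix Z: rows are tasks, columns are states.
# In the paper this would come from the MLMDP task basis, not random data.
n_tasks, n_states, n_subtasks = 40, 100, 5
Z = rng.random((n_tasks, n_states))

nmf = NMF(n_components=n_subtasks, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(Z)        # task-specific non-negative mixing weights
H = nmf.components_             # discovered subtasks: distributed patterns over states

print("reconstruction error:", np.linalg.norm(Z - W @ H))
print("each subtask is a preferred-state pattern, shape:", H.shape)
```

Because the components are distributed patterns over states rather than single goal states, this mirrors the qualitative property claimed in the abstract that subtasks need not terminate at a single state.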
We present a novel subtask discovery mechanism based on the low rank approximation of the desirability basis afforded by the LMDP framework.
The new scheme reliably uncovers intuitive decompositions in a variety of sample domains.
Unlike methods based on pure state abstraction, the proposed scheme is fundamentally dependent on the task ensemble, recovering different subtask representations for different task ensembles.
Moreover, by leveraging the stacking procedure for hierarchical MLMDPs, the subtask discovery mechanism may be straightforwardly iterated to yield powerful hierarchical abstractions.
Finally, the unusual construction allows us to analytically probe a number of natural questions inaccessible to other methods; we consider specifically a measure of the quality of a set of subtasks, and the equivalence of different sets of subtasks. A current drawback of the approach is its reliance on a discrete, tabular state space.
Scaling to high dimensional problems will require applying state function approximation schemes, as well as online estimation of Z directly from experience.
These are avenues of current work.
More abstractly, the method might be extended by allowing for some concept of nonlinear regularized composition allowing more complex behaviours to be expressed by the hierarchy.
|
We present a novel algorithm for hierarchical subtask discovery which leverages the multitask linear Markov decision process framework.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:545
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
This paper introduces a framework for solving combinatorial optimization problems by learning from input-output examples of optimization problems.
We introduce a new memory augmented neural model in which the memory is not resettable (i.e. the information stored in the memory after processing an input example is kept for the next seen examples).
We used deep reinforcement learning to train a memory controller agent to store useful memories.
Our model was able to outperform a hand-crafted solver on Binary Linear Programming (Binary LP).
The proposed model is tested on different Binary LP instances with a large number of variables (up to 1000) and constraints (up to 700).
An intelligent agent with a long-term memory processes raw data (such as images, speech and natural language sentences) and then transforms these data streams into knowledge.
The knowledge stored in the long-term memory can be used later in inference either by retrieving segments of memory during recalling, matching stored concepts with new raw data (e.g. image classification tasks) or solving more complex mathematical problems that require memorizing either the method of solving a problem or simple steps during solving.
For example, the addition of long-digit numbers requires memorizing both the addition algorithm and the carries produced from the addition operations (BID28). In neural models, the weights connecting the layers are considered long-term memories encoding the algorithm that transforms inputs to outputs. Other neural models, such as recurrent neural networks (RNNs), introduce a short-term memory encoded as hidden states of previous inputs (BID12; BID7). In memory augmented neural networks (MANNs), a controller writes memories projected from its hidden state to a memory bank (usually in the form of a matrix); the controller then reads from the memory using some addressing mechanisms and generates a read vector which will be fed to the controller in the next time step (BID6). The memory will contain information about each of the input sequence tokens, and the controller enriches its memory capacity by using the read vector from the previous time step.
Unfortunately, in MANNs the memory is not a long-term memory and is re-settable when new examples are processed, making it unable to capture general knowledge about the input domain. In the context of natural language processing, one will need general knowledge to answer open-ended questions that rely not only on temporal information but also on general knowledge from previous input streams. In long-digit multiplication, it will be easier to store some intermediate multiplication steps, such as digit-by-digit multiplications, and use them later when solving other instances than to carry out the entire multiplication digit by digit each time from scratch.
Neural networks have a large capacity for memorizing; a long-term persistent memory will further increase the network's capacity to memorize but will decrease the need for learning coarse features of the inputs, which requires more depth. Storing features of the inputs will create shortcut paths for the network to learn the correct targets. Such a network will no longer need to depend on depth to learn good features of the inputs but will instead depend on stored memory features. In other words, a long-term memory can provide intermediate answers to the network. Unlike regular MANNs and RNNs, a long-term memory can provide shortcut connections to both input features and previous time steps' inputs. Consider when the memory contains the output of previous examples: the network would cheat from the memory to provide answers. Training such a network will focus on two stages: (1) learning to find similarities between memory vectors and current input data, and (2) learning to transform memory vectors into meaningful representations for producing the final output.
The No Free Lunch Theorem of optimization (BID25) states that any two algorithms are equivalent when their performance is averaged across all possible problems; this means that an algorithm that solves certain classes of problems efficiently will be incompetent on other problems.
In the setting of combinatorial optimization, there is no algorithm able to do better than a random strategy in expectation. The only way an algorithm outperforms another is to be specialized to a certain class of optimization problems (BID0). Learning optimization algorithms from scratch using pairs of input-output examples is a way to outperform other algorithms on certain classes. It is further interesting to investigate the ability of learned models to generate better solutions than hand-crafted solvers.
The focus of this paper is on designing neural models to solve Binary Linear Programming (or 0-1 Integer Programming), which is a special case of Integer Linear Programming where all decision variables are binary. The 0-1 integer programming problem is one of Karp's 21 NP-complete problems introduced in BID9. The goal of Binary LP is to optimize a linear function under certain constraints. It is proved by BID3 that Binary LP expresses the complexity class NP (i.e. any problem in the complexity class NP can be modeled as a Binary LP instance). The standard form of a Binary LP problem is:
maximize c^T x subject to A x ≤ b, x ∈ {0, 1}^n,
where c and b are vectors and A is a matrix.
We propose a general framework for long-term memory neural models that uses reinforcement learning to store memories from a neural network. A long-term memory is not resettable and may or may not store hidden states from individual time steps. Instead, a long-term memory stores information that is considered to be useful for solving similar instances. The controller that decides to write memories follows a policy function that properly constructs the memory contents. We train and test this framework on a synthetic data set of Binary LP instances. We analyze the model's capability of generalization to more complex instances beyond the training data set.
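For concreteness, here is a tiny, self-contained instance of the Binary LP formulation above, solved by brute-force enumeration (feasible only for a handful of variables; the instances considered in the paper are far larger and are handled by the learned model or a dedicated solver). The particular c, A and b values are made up for illustration.

```python
import itertools
import numpy as np

# Maximize c^T x  subject to  A x <= b,  x in {0, 1}^n
c = np.array([3.0, 5.0, 4.0, 7.0])
A = np.array([[2.0, 3.0, 1.0, 4.0],
              [1.0, 1.0, 2.0, 3.0]])
b = np.array([6.0, 4.0])

best_x, best_val = None, -np.inf
for bits in itertools.product([0, 1], repeat=len(c)):
    x = np.array(bits, dtype=float)
    if np.all(A @ x <= b) and c @ x > best_val:   # feasibility check, then improvement
        best_x, best_val = x, c @ x

print("optimal x:", best_x, "objective:", best_val)
```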
This paper introduced a long-term memory coupled with a neural network that is able to memorize useful input features to solve similar instances.
We applied the LTMN model to solve Binary LP instances.
The LTMN was able to learn from supervised targets provided by a handcrafted solver, and generate better solutions than the solver.
The LTMN model was able to generalize to more complex instances beyond those in the training set.
|
We propose a memory network model to solve Binary LP instances where the memory information is preserved for long-term use.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:546
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Sequence-to-sequence (Seq2Seq) models with attention have excelled at tasks which involve generating natural language sentences such as machine translation, image captioning and speech recognition.
Performance has further been improved by leveraging unlabeled data, often in the form of a language model.
In this work, we present the Cold Fusion method, which leverages a pre-trained language model during training, and show its effectiveness on the speech recognition task.
We show that Seq2Seq models with Cold Fusion are able to better utilize language information enjoying
i) faster convergence and better generalization, and
ii) almost complete transfer to a new domain while using less than 10% of the labeled training data.
Sequence-to-sequence (Seq2Seq) BID1 models have achieved state-of-the-art results on many natural language processing problems including automatic speech recognition BID2 BID4 , neural machine translation , conversational modeling and many more.
These models learn to generate a variable-length sequence of tokens (e.g. texts) from a variable-length sequence of input data (e.g. speech or the same texts in another language).
With a sufficiently large labeled dataset, vanilla Seq2Seq can model sequential mappings well, but it is often augmented with a language model to further improve the fluency of the generated text. Because language models can be trained from abundantly available unsupervised text corpora, which can have as many as one billion tokens (BID13; BID19), leveraging the rich linguistic information of the label domain can considerably improve Seq2Seq's performance.
A standard way to integrate language models is to linearly combine the score of the task-specific Seq2Seq model with that of an auxiliary language model to guide beam search (BID5; BID20).
BID10 proposed an improved algorithm called Deep Fusion that learns to fuse the hidden states of the Seq2Seq decoder and a neural language model with a gating mechanism, after the two models are trained independently. While this approach has been shown to improve performance over the baseline, it has a few limitations.
First, because the Seq2Seq model is trained to output complete label sequences without a language model, its decoder learns an implicit language model from the training labels, taking up a significant portion of the decoder capacity to learn redundant information.
Second, the residual language model baked into the Seq2Seq decoder is biased towards the training labels of the parallel corpus.
For example, if a Seq2Seq model fully trained on legal documents is later fused with a medical language model, the decoder still has an inherent tendency to follow the linguistic structure found in legal text.
Thus, in order to adapt to novel domains, Deep Fusion must first learn to discount the implicit knowledge of the language. In this work, we introduce Cold Fusion to overcome both of these limitations.
Cold Fusion encourages the Seq2Seq decoder to learn to use the external language model during training.
This means that Seq2Seq can naturally leverage potentially limitless unsupervised text data, making it particularly proficient at adapting to a new domain.
The latter is especially important in practice as the domain from which the model is trained can be different from the real world use case for which it is deployed.
In our experiments, Cold Fusion can almost completely transfer to a new domain for the speech recognition task with 10 times less data.
Additionally, the decoder only needs to learn task relevant information, and thus trains faster.The paper is organized as follows: Section 2 outlines the background and related work.
Section 3 presents the Cold Fusion method.
Section 4 details experiments on the speech recognition task that demonstrate Cold Fusion's generalization and domain adaptation capabilities.2
BACKGROUND AND RELATED WORK 2.1 SEQUENCE-TO-SEQUENCE MODELS A basic Seq2Seq model comprises an encoder that maps an input sequence x = (x 1 , . . . , x T ) into an intermediate representation h, and a decoder that in turn generates an output sequence y = (y 1 , . . . , y K ) from h BID21 .
The decoder can also attend to a certain part of the encoder states with an attention mechanism.
The attention mechanism is called hybrid attention BID7 , if it uses both the content and the previous context to compute the next context.
It is soft if it computes the expectation over the encoder states BID1 as opposed to selecting a slice out of the encoder states.For the automatic speech recognition (ASR) task, the Seq2Seq model is called an acoustic model (AM) and maps a sequence of spectrogram features extracted from a speech signal to characters.
In this work, we presented a new general Seq2Seq model architecture where the decoder is trained together with a pre-trained language model.
We study and identify architectural changes that are vital for the model to fully leverage information from the language model, and use this to generalize better; by leveraging the RNN language model, Cold Fusion reduces word error rates by up to 18% compared to Deep Fusion.
Additionally, we show that Cold Fusion models can transfer more easily to new domains, and with only 10% of labeled data nearly fully transfer to the new domain.
|
We introduce a novel method to train Seq2Seq models with language models that converge faster, generalize better and can almost completely transfer to a new domain using less than 10% of labeled data.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:547
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
A central capability of intelligent systems is the ability to continuously build upon previous experiences to speed up and enhance learning of new tasks.
Two distinct research paradigms have studied this question.
Meta-learning views this problem as learning a prior over model parameters that is amenable for fast adaptation on a new task, but typically assumes the set of tasks are available together as a batch.
In contrast, online (regret based) learning considers a sequential setting in which problems are revealed one after the other, but conventionally train only a single model without any task-specific adaptation.
This work introduces an online meta-learning setting, which merges ideas from both the aforementioned paradigms to better capture the spirit and practice of continual lifelong learning.
We propose the follow the meta leader (FTML) algorithm which extends the MAML algorithm to this setting.
Theoretically, this work provides an O(log T) regret guarantee for the FTML algorithm.
Our experimental evaluation on three different large-scale tasks suggest that the proposed algorithm significantly outperforms alternatives based on traditional online learning approaches.
Two distinct research paradigms have studied how prior tasks or experiences can be used by an agent to inform future learning.
Meta-learning (Schmidhuber, 1987) casts this as the problem of learning to learn, where past experience is used to acquire a prior over model parameters or a learning procedure.
Such an approach, where we draw upon related past tasks and form associated priors, is particularly crucial to effectively learn when data is scarce or expensive for each task.
However, meta-learning typically studies a setting where a set of meta-training tasks are made available together upfront as a batch.
In contrast, online learning (Hannan, 1957 ) considers a sequential setting where tasks are revealed one after another, but aims to attain zero-shot generalization without any task-specific adaptation.
We argue that neither setting is ideal for studying continual lifelong learning.
Meta-learning deals with learning to learn, but neglects the sequential and non-stationary nature of the world.
Online learning offers an appealing theoretical framework, but does not generally consider how past experience can accelerate adaptation to a new task.
In this work, we motivate and present the online meta-learning problem setting, where the agent simultaneously uses past experiences in a sequential setting to learn good priors, and also adapts quickly to the current task at hand. Our contributions: In this work, we first formulate the online meta-learning problem setting.
Subsequently, we present the follow the meta-leader (FTML) algorithm which extends MAML (Finn et al., 2017) to this setting.
FTML is analogous to follow the leader in online learning.
We analyze FTML and show that it enjoys an O(log T) regret guarantee when competing with the best meta-learner in hindsight.
In this endeavor, we also provide the first set of results (under any assumptions) where MAML-like objective functions can be provably and efficiently optimized.
We also develop a practical form of FTML that can be used effectively with deep neural networks on large scale tasks, and show that it significantly outperforms prior methods in terms of learning efficiency on vision-based sequential learning problems with the MNIST, CIFAR, and PASCAL 3D+ datasets.
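The following is a minimal PyTorch sketch of the follow-the-meta-leader structure described above, under simplifying assumptions of our own: a single inner adaptation step, a fixed number of outer gradient steps per round in place of an exact argmin, and synthetic linear-regression tasks. It illustrates the shape of the algorithm only; the exact procedure and the regret analysis are in the paper.

```python
import torch

def task_loss(w, x, y):
    # Simple linear regression model: y_hat = x @ w
    return ((x @ w - y) ** 2).mean()

def maml_objective(w, task, inner_lr=0.01):
    """One inner adaptation step on the task's support set, evaluated on its query set."""
    (x_s, y_s), (x_q, y_q) = task
    grad = torch.autograd.grad(task_loss(w, x_s, y_s), w, create_graph=True)[0]
    w_adapted = w - inner_lr * grad
    return task_loss(w_adapted, x_q, y_q)

def ftml(tasks, dim=5, outer_lr=0.05, outer_steps=50):
    """Follow the meta-leader: at round t, (approximately) minimize the sum of
    MAML objectives over all tasks seen so far."""
    w = torch.zeros(dim, requires_grad=True)
    for t in range(1, len(tasks) + 1):
        for _ in range(outer_steps):                      # approximate argmin over seen tasks
            loss = sum(maml_objective(w, task) for task in tasks[:t])
            w.grad = None
            loss.backward()
            with torch.no_grad():
                w -= outer_lr * w.grad
        print(f"round {t}: meta-loss on seen tasks = {loss.item():.4f}")
    return w

# Synthetic stream of linear-regression tasks with different ground-truth weights.
def make_task(dim=5, n=20):
    w_true = torch.randn(dim)
    def split():
        x = torch.randn(n, dim)
        return x, x @ w_true + 0.01 * torch.randn(n)
    return split(), split()

torch.manual_seed(0)
ftml([make_task() for _ in range(5)])
```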
In this paper, we introduced the online meta-learning problem statement, with the aim of connecting the fields of meta-learning and online learning.
Online meta-learning provides, in some sense, a more natural perspective on the ideal real-world learning procedure.
An intelligent agent interacting with a constantly changing environment should utilize streaming experience to both master the task at hand, and become more proficient at learning new tasks in the future.
We summarize prior work related to our setting in Appendix D. For the online meta-learning setting, we proposed the FTML algorithm and showed that it enjoys logarithmic regret.
We then illustrated how FTML can be adapted to a practical algorithm.
Our experimental evaluations demonstrated that the proposed practical variant outperforms prior methods.
|
We introduce the online meta learning problem setting to better capture the spirit and practice of continual lifelong learning.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:548
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We investigate the extent to which individual attention heads in pretrained transformer language models, such as BERT and RoBERTa, implicitly capture syntactic dependency relations.
We employ two methods—taking the maximum attention weight and computing the maximum spanning tree—to extract implicit dependency relations from the attention weights of each layer/head, and compare them to the ground-truth Universal Dependency (UD) trees.
We show that, for some UD relation types, there exist heads that can recover the dependency type significantly better than baselines on parsed English text, suggesting that some self-attention heads act as a proxy for syntactic structure.
We also analyze BERT fine-tuned on two datasets—the syntax-oriented CoLA and the semantics-oriented MNLI—to investigate whether fine-tuning affects the patterns of their self-attention, but we do not observe substantial differences in the overall dependency relations extracted using our methods.
Our results suggest that these models have some specialist attention heads that track individual dependency types, but no generalist head that performs holistic parsing significantly better than a trivial baseline, and that analyzing attention weights directly may not reveal much of the syntactic knowledge that BERT-style models are known to learn.
Pretrained Transformer models like OpenAI GPT BID9 and BERT BID1 have shown stellar performance on language understanding tasks.
BERT and BERT-based models significantly improve the state-of-the-art on many tasks such as constituency parsing (BID5) and question answering (BID11), and have attained top positions on the GLUE leaderboard.
As BERT becomes a staple component of many NLP models, many researchers have attempted to analyze the linguistic knowledge that BERT has learned by analyzing the BERT model (BID3) or by training probing classifiers on the contextualized embeddings of BERT (BID12).
BERT, as a Transformer-based language model, computes the hidden representation at each layer for each token by attending to all the tokens in an input sentence. The attention heads of the Transformer have been claimed to capture the syntactic structure of sentences (BID13). Intuitively, for a given token, some specific tokens in the sentence would be more linguistically related to it than the others, and therefore the self-attention mechanism should be expected to allocate more weight to the linguistically related tokens in computing the hidden state of the given token. In this work, we aim to investigate the hypothesis that syntax is implicitly encoded by BERT's self-attention heads. We use two relation extraction methods to extract dependency relations from all the self-attention heads of BERT. We analyze the resulting dependency relations to investigate whether the attention heads of BERT implicitly track syntactic dependencies significantly better than chance, and what types of dependency relations BERT learns.
We extract the dependency relations from the self-attention heads instead of the contextualized embeddings of BERT. In contrast to probing models, our dependency extraction methods require no further training. Our experiments suggest that the attention heads of BERT encode most dependency relation types with substantially higher accuracy than our baselines: a randomly initialized Transformer and relative positional baselines. Fine-tuning BERT on the syntax-oriented CoLA does not appear to impact the accuracy of extracted dependency relations. However, when fine-tuned on the semantics-oriented MNLI dataset, there is a slight improvement in accuracy for longer-term clausal relations and a slight loss in accuracy for shorter-term relations. Overall, while BERT models obtain non-trivial accuracy for some dependency types such as nsubj, obj, nmod, aux, and conj, they do not substantially outperform the trivial right-branching trees in terms of undirected unlabeled attachment scores (UUAS). Therefore, although the attention heads of BERT reflect a small number of dependency relation types, they do not reflect the full extent of the significant amount of syntactic knowledge that BERT has been shown to learn by the previous probing work.
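As a concrete illustration of the simpler of the two extraction methods mentioned above (the maximum-attention-weight method; the maximum-spanning-tree variant is not shown), the snippet below takes one head's attention matrix, predicts each word's head as the non-self position it attends to most, and scores the predictions against gold dependency heads. The attention matrix and gold heads here are random placeholders rather than real BERT outputs or UD annotations.

```python
import numpy as np

def max_attention_heads(attn):
    """attn[i, j] = attention weight from token i to token j for one head.
    Predict the head of token i as the non-self position with maximum weight."""
    attn = attn.copy()
    np.fill_diagonal(attn, -np.inf)      # a word is not allowed to be its own head
    return attn.argmax(axis=1)

def relation_accuracy(pred_heads, gold_heads):
    return float(np.mean(pred_heads == gold_heads))

# Placeholder inputs: in the paper these come from a BERT layer/head and a UD parse.
rng = np.random.default_rng(0)
n_tokens = 12
attn = rng.dirichlet(np.ones(n_tokens), size=n_tokens)   # rows sum to 1, like softmax outputs
gold_heads = rng.integers(0, n_tokens, size=n_tokens)

pred = max_attention_heads(attn)
print("predicted heads:", pred)
print("unlabeled accuracy vs. gold:", relation_accuracy(pred, gold_heads))
```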
In this work, we investigate whether the attention heads of BERT exhibit implicit syntactic dependency by extracting and analyzing the dependency relations from the attention heads of BERT at all layers.
We use two simple dependency relation extraction methods that require no additional training, and observe that there are attention heads of BERT that track more than 75% of the dependency types with higher accuracy than our baselines.
However, the hypothesis that the attention heads of BERT track the dependency syntax is not well-supported as the linguistically uninformed baselines outperform BERT on nearly 25% of the dependency types.
Additionally, BERT's performance in terms of UUAS is only slightly higher than that of the trivial right-branching trees, suggesting that the dependency syntax learned by the attention heads is trivial.
Additionally, we observe that fine-tuning on CoLA and MNLI does not affect the pattern of self-attention, although the fine-tuned models show different performance from BERT on the GLUE benchmark.
|
Attention weights don't fully expose what BERT knows about syntax.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:549
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In the industrial field, positron annihilation is not affected by complex environments, and gamma-ray photons have strong penetration, so non-destructive testing of industrial parts can be realized.
Because of the poor image quality caused by gamma-ray photon scattering, attenuation, and the short sampling time of the positron process, we propose to combine deep learning, generating positron images with good quality and clear details using adversarial nets.
The structure of the paper is as follows: firstly, we encode medical CT images to obtain their hidden vectors based on transfer learning, and use PCA to extract positron image features.
Secondly, we construct a positron image memory based on an attention mechanism, which serves as the overall input to the adversarial nets and uses the medical hidden variables as queries.
Finally, we train the whole model jointly and update the input parameters until convergence.
Experiments have demonstrated the possibility of generating rare positron images for industrial non-destructive testing using adversarial nets, and good imaging results have been achieved.
In recent years, with the advancement of science and technology, and especially the rapid development of high-end manufacturing, it is often necessary in industrial non-destructive testing to perform defect detection without damaging or affecting the performance and internal structure of the device under test.
Therefore, there is an increasing demand for corresponding detection devices.
In complex industrial environments (such as aviation, internal combustion engines, chemical engineering, etc.), it is of great research significance to detect faults and defects in closed chambers.
In this paper, we study the use of positron emission imaging, based on positron annihilation gamma photons, for industrial nondestructive testing.
Positron annihilation is not affected by factors such as high temperature, high pressure, or corrosion, so the method can penetrate dense metal cavities and realize undisturbed, non-destructive tracer imaging of the detected object.
After processing, an image describing the detected object is obtained and a state analysis can be performed.
Therefore, the quality of imaging technology directly affects the analysis of fault detection results.
Positron Emission Tomography (PET) was first used in medical imaging.
The principle is that when a radioactive nucleus undergoes positron decay, a proton in the nucleus is converted into a neutron, releasing a positron and a neutrino.
The positron combines with an electron in the material within a very short time, causing positron annihilation and producing a pair of gamma photons traveling in opposite directions with an energy of 511 keV each.
Photon pairs are collected, identified, processed, and finally reconstructed to obtain medical images.
Commonly used PET reconstruction algorithms include analytic methods (K, 2000) and statistical methods (Shepp & Vardi, 2007).
The currently widely used algorithms are MLEM and OSEM.
At present, PET technology has been widely used in the clinical diagnosis of human diseases.
Its advantages are clear: the imaging quality is high, and it has shown great value in medical research.
The principle of positron emission in industrial non-destructive fields is similar to medical imaging, but it has its own unique difficulties: the detection environment is harsher, the sampling time is short, and, due to photon scattering and attenuation, the image quality of industrial positron imaging is even worse.
Therefore, the reconstructed image needs to be further processed to obtain a higher quality image.
In this paper, we propose adversarial networks with a positron image memory module based on an attention mechanism.
Using medical images as the basic dataset, introducing transfer learning, and building the memory module according to the contribution of detail features to the images, a positron image generation network for industrial non-destructive testing is obtained through joint training, thus achieving higher-quality generation of industrial positron images.
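As a rough illustration of the attention-based memory described above, the following sketch reads from a bank of stored positron image features using a query derived from the medical hidden variables; the slot count, feature size, and scaled dot-product scoring are assumptions rather than the actual architecture.
```python
import torch
import torch.nn.functional as F

def read_memory(query, memory):
    """Attention-based memory read: the medical hidden vector acts as a query
    over a bank of stored positron image features; the output is the
    attention-weighted combination of memory slots."""
    scores = memory @ query                                 # (num_slots,)
    weights = F.softmax(scores / memory.shape[-1] ** 0.5, dim=0)
    return weights @ memory                                 # (feature_dim,)

memory = torch.randn(32, 128)   # 32 stored positron feature vectors (assumed sizes)
query = torch.randn(128)        # encoded medical CT hidden vector (assumed size)
print(read_memory(query, memory).shape)   # torch.Size([128])
```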
In summary, our main contributions in this paper are as follows:
We are the first to advocate using Generative Adversarial Networks to enhance the detail of positron images in the industrial non-destructive field, enabling the generation and processing of scarce image data in this specialized domain.
We use a medical CT image dataset as the basic training samples of the network framework, following the idea of transfer learning, and then extract the features of a small number of industrial non-destructively detected positron images; this improves the details of the generated images and makes the network model more applicable to industrial non-destructive testing.
We incorporate an attention-based mechanism into domain-specific image feature extraction.
By constructing a memory module containing industrial positron image features, we enable image generation in this specific domain and obtain an industrial non-destructive positron image generation model.
We train the whole network jointly: through the discriminator of the adversarial generation network, gradients are back-propagated to the front-end network, the input parameters are updated, and the model is optimized.
Finally, convergence was achieved and the Turing test was passed successfully.
In this paper, we introduce an application of GANs in the field of nondestructive testing for specific industries.
We combine transfer learning to make up for the problem of insufficient data.
The key point is to introduce an attention mechanism to construct a positron image feature memory module, which can reuse image features when data are scarce.
At the same time, an attention loss function is added to the discriminative net to further improve the generator performance.
Experiments show that, compared with state-of-the-art generation methods in deep learning, our model clearly improves the quality of industrial positron image generation.
In the future, our focus is to further study the application of generative adversarial networks in industrial positron image processing, and to further improve the quality of domain images.
|
adversarial nets, attention mechanism, positron images, data scarcity
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:55
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
State-of-the-art face super-resolution methods employ deep convolutional neural networks to learn a mapping between low- and high-resolution facial patterns by exploring local appearance knowledge.
However, most of these methods do not well exploit facial structures and identity information, and struggle to deal with facial images that exhibit large pose variation and misalignment.
In this paper, we propose a novel face super-resolution method that explicitly incorporates 3D facial priors which grasp the sharp facial structures.
Firstly, the 3D face rendering branch is set up to obtain 3D priors of salient facial structures and identity knowledge.
Secondly, the Spatial Attention Mechanism is used to better exploit this hierarchical information (i.e. intensity similarity, 3D facial structure, identity content) for the super-resolution problem.
Extensive experiments demonstrate that the proposed algorithm achieves superior face super-resolution results and outperforms the state-of-the-art.
Face images provide crucial clues for human observation as well as computer analysis (Fasel & Luettinb, 2003; Zhao et al., 2003) .
However, the performance of most face image tasks, such as face recognition and facial emotion detection (Han et al., 2018; Thies et al., 2016) , degrades dramatically when the resolution of a facial image is relatively low.
Consequently, face super-resolution, also known as face hallucination, was coined to restore a low-resolution face image to its high-resolution counterpart.
A multitude of deep learning methods (Zhou & Fan, 2015; Yu & Porikli, 2016; Zhu et al., 2016; Cao et al., 2017; Dahl et al., 2017a; Yu et al., 2018b) have been successfully applied in face Super-Resolution (SR) problems and achieve state-of-the-art results.
But super-resolving arbitrary facial images, especially at high magnification factors, is still an open and challenging problem due to the ill-posed nature of the SR problem and the difficulty in learning and integrating strong priors into a face hallucination model.
Some researches (Grm et al., 2018; Yu et al., 2018a; Ren et al., 2019) on exploiting the face priors to assist neural networks to capture more facial details have been proposed recently.
A face hallucination model incorporating identity priors is presented in Grm et al. (2018) .
But the identity prior is extracted only from the multi-scale up-sampling results in the training procedure and therefore cannot provide enough extra priors to guide the network to achieve a better result.
Yu et al. (2018a) employ facial component heatmaps to encourage the upsampling stream to generate super-resolved faces with higher-quality details, especially for large pose variations.
Although heatmaps can provide global component regions, they cannot learn the reconstruction of detailed edges, illumination, or expression priors.
Besides, all of these aforementioned face SR approaches ignore facial structure and identity recovery.
In contrast to previous methods, we propose a novel face super-resolution method that embeds 3D face structures and identity priors.
Firstly, a deep 3D face reconstruction branch is set up to explicitly obtain 3D face render priors which facilitate the face super-resolution branch.
Specifically, the 3D face render prior is generated by the ResNet-50 network (He et al., 2016) .
It contains rich hierarchical information, such as low-level (e.g., sharp edge, illumination) and perception level (e.g., identity).
The Spatial Attention Mechanism is proposed here to adaptively integrate the 3D facial prior into the network.
Specifically, we employ the Spatial Feature Transform (SFT) (Wang et al., 2018) to generate affine transformation parameters for spatial feature modulation.
Afterwards, it encourages the network to learn the spatial interdependencies of features between 3D facial priors and input images after adding the attention module into the network.
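As a rough illustration of this spatial modulation step, the sketch below shows a generic SFT-style layer in PyTorch: affine parameters (gamma, beta) are predicted from the rendered 3D prior and applied element-wise to the image features. The channel sizes, 1x1 convolutions, and LeakyReLU are assumptions for the sketch rather than the exact architecture used in the paper.
```python
import torch
import torch.nn as nn

class SFTLayer(nn.Module):
    """Spatial Feature Transform: modulate image features with affine
    parameters (gamma, beta) predicted from a conditioning prior map."""

    def __init__(self, feat_channels=64, prior_channels=64, hidden=32):
        super().__init__()
        self.gamma = nn.Sequential(
            nn.Conv2d(prior_channels, hidden, 1), nn.LeakyReLU(0.1),
            nn.Conv2d(hidden, feat_channels, 1))
        self.beta = nn.Sequential(
            nn.Conv2d(prior_channels, hidden, 1), nn.LeakyReLU(0.1),
            nn.Conv2d(hidden, feat_channels, 1))

    def forward(self, features, prior):
        # Element-wise, spatially varying affine modulation of the features.
        return features * self.gamma(prior) + self.beta(prior)

# Toy shapes: a 64-channel feature map modulated by a 64-channel rendered prior.
feats = torch.randn(1, 64, 32, 32)
prior = torch.randn(1, 64, 32, 32)
print(SFTLayer()(feats, prior).shape)  # torch.Size([1, 64, 32, 32])
```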
The main contributions of this paper are:
1. A novel face SR model is proposed by explicitly exploiting facial structure in the form of facial-prior estimation.
The estimated 3D facial prior provides not only spatial information of facial components but also their visibility information, which are ignored by the pixel-level content.
2. We propose a feature-fusion-based network to better extract and integrate the face rendered priors by employing the Spatial Attention Mechanism (SAM).
3. We qualitatively and quantitatively explore multi-scale face super-resolution, especially at very low input resolutions.
The proposed network achieves better SR criteria and superior visual quality compared to state-of-the-art face SR methods.
In this paper, we proposed a novel network that incorporates 3D facial priors of rendered faces and identity knowledge.
The 3D rendered branch utilizes the face rendering loss to encourage a high-quality guided image providing clear spatial locations of facial components and other hierarchical information (i.e., expression, illumination, and face pose).
To well exploit 3D priors and consider the channel correlation between priors and inputs, the Spatial Attention Mechanism is presented by employing the Spatial Feature Transform and Attention block.
The comprehensive experimental results have demonstrated that the proposed method delivers better performance and largely decreases artifacts in comparison with state-of-the-art methods, while using significantly fewer parameters.
|
We propose a novel face super resolution method that explicitly incorporates 3D facial priors which grasp the sharp facial structures.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:550
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We present Cross-View Training (CVT), a simple but effective method for deep semi-supervised learning.
On labeled examples, the model is trained with standard cross-entropy loss.
On an unlabeled example, the model first performs inference (acting as a "teacher") to produce soft targets.
The model then learns from these soft targets (acting as a "student").
We deviate from prior work by adding multiple auxiliary student prediction layers to the model.
The input to each student layer is a sub-network of the full model that has a restricted view of the input (e.g., only seeing one region of an image).
The students can learn from the teacher (the full model) because the teacher sees more of each example.
Concurrently, the students improve the quality of the representations used by the teacher as they learn to make predictions with limited data.
When combined with Virtual Adversarial Training, CVT improves upon the current state-of-the-art on semi-supervised CIFAR-10 and semi-supervised SVHN.
We also apply CVT to train models on five natural language processing tasks using hundreds of millions of sentences of unlabeled data.
On all tasks CVT substantially outperforms supervised learning alone, resulting in models that improve upon or are competitive with the current state-of-the-art.
Deep learning classifiers work best when trained on large amounts of labeled data.
However, acquiring labels can be costly, motivating the need for effective semi-supervised learning techniques that leverage unlabeled examples during training.
Many semi-supervised learning algorithms rely on some form of self-labeling.
In these approaches, the model acts as both a "teacher" that makes predictions about unlabeled examples and a "student" that is trained on the predictions.
As the teacher and the student have the same parameters, these methods require an additional mechanism for the student to benefit from the teacher's outputs.
One approach that has enjoyed recent success is adding noise to the student's input BID0 BID50.
The loss between the teacher and the student becomes a consistency cost that penalizes the difference between the model's predictions with and without noise added to the example.
This trains the model to give consistent predictions to nearby data points, encouraging smoothness in the model's output distribution with respect to the input.
In order for the student to learn effectively from the teacher, there needs to be a sufficient difference between the two.
However, simply increasing the amount of noise can result in unrealistic data points sent to the student.
Furthermore, adding continuous noise to the input makes less sense when the input consists of discrete tokens, such as in natural language processing.
We address these issues with a new method we call Cross-View Training (CVT).
Instead of only training the full model as a student, CVT adds auxiliary softmax layers to the model and also trains them as students.
The input to each student layer is a sub-network of the full model that sees a restricted view of the input example (e.g., only seeing part of an image), an idea reminiscent of cotraining BID1 .
The full model is still used as the teacher.
Unlike when using a large amount of input noise, CVT does not unrealistically alter examples during training.
However, the student layers can still learn from the teacher because the teacher has a better, unrestricted view of the input.
Meanwhile, the student layers improve the model's representations (and therefore the teacher) as they learn to make accurate predictions with a limited view of the input.
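A minimal sketch of this unlabeled-data objective is given below: the full model's prediction is frozen as a soft target and each restricted-view student is trained to match it. The KL divergence is one natural choice of distance here; the tensors are toy stand-ins and the exact loss used in the paper may differ.
```python
import torch
import torch.nn.functional as F

def cvt_consistency_loss(teacher_logits, student_logits_list):
    """Cross-view consistency: each student (restricted view of the input)
    matches the teacher's soft targets computed from the full view (no gradient)."""
    with torch.no_grad():
        soft_targets = F.softmax(teacher_logits, dim=-1)
    loss = 0.0
    for s_logits in student_logits_list:
        loss = loss + F.kl_div(F.log_softmax(s_logits, dim=-1),
                               soft_targets, reduction="batchmean")
    return loss / len(student_logits_list)

# Toy unlabeled batch: the teacher sees the full input, two students see restricted views.
teacher = torch.randn(8, 10)
students = [torch.randn(8, 10, requires_grad=True) for _ in range(2)]
print(cvt_consistency_loss(teacher, students))
```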
Our method can be easily combined with adding noise to the students, but works well even when no noise is added.
We propose variants of our method for Convolutional Neural Network (CNN) image classifiers, Bidirectional Long Short-Term Memory (BiLSTM) sequence taggers, and graph-based dependency parsers.
For CNNs, each auxiliary softmax layer sees a region of the input image.
For sequence taggers and dependency parsers, the auxiliary layers see the input sequence with some context removed.
For example, one auxiliary layer is trained to make predictions without seeing any tokens to the right of the current one.
We first evaluate Cross-View Training on semi-supervised CIFAR-10 and semi-supervised SVHN.
When combined with Virtual Adversarial Training BID39 , CVT improves upon the current state-of-the-art on both datasets.
We also train semi-supervised models on five tasks from natural language processing: English dependency parsing, combinatory categorical grammar supertagging, named entity recognition, text chunking, and part-of-speech tagging.
We use the 1 billion word language modeling benchmark BID3 as a source of unlabeled data.
CVT works substantially better than purely supervised training, resulting in models that improve upon or are competitive with the current state-of-the-art on every task.
We consider these results particularly important because many recently proposed semi-supervised learning methods work best on continuous inputs and have only been evaluated on vision tasks BID0 BID50 BID26 BID59 .
In contrast, CVT can handle discrete inputs such as language very effectively.
|
Self-training with different views of the input gives excellent results for semi-supervised image recognition, sequence tagging, and dependency parsing.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:551
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
I show how it can be beneficial to express Metropolis accept/reject decisions in terms of comparison with a uniform [0,1] value, and to then update this uniform value non-reversibly, as part of the Markov chain state, rather than sampling it independently each iteration.
This provides a small improvement for random walk Metropolis and Langevin updates in high dimensions.
It produces a larger improvement when using Langevin updates with persistent momentum, giving performance comparable to that of Hamiltonian Monte Carlo (HMC) with long trajectories.
This is of significance when some variables are updated by other methods, since if HMC is used, these updates can be done only between trajectories, whereas they can be done more often with Langevin updates.
This is seen for a Bayesian neural network model, in which connection weights are updated by persistent Langevin or HMC, while hyperparameters are updated by Gibbs sampling.
|
A non-reversible way of making accept/reject decisions can be beneficial
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:552
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We introduce ES-MAML, a new framework for solving the model agnostic meta learning (MAML) problem based on Evolution Strategies (ES).
Existing algorithms for MAML are based on policy gradients, and incur significant difficulties when attempting to estimate second derivatives using backpropagation on stochastic policies.
We show how ES can be applied to MAML to obtain an algorithm which avoids the problem of estimating second derivatives, and is also conceptually simple and easy to implement.
Moreover, ES-MAML can handle new types of nonsmooth adaptation operators, and other techniques for improving performance and estimation of ES methods become applicable.
We show empirically that ES-MAML is competitive with existing methods and often yields better adaptation with fewer queries.
Meta-learning is a paradigm in machine learning that aims to develop models and training algorithms which can quickly adapt to new tasks and data.
Our focus in this paper is on meta-learning in reinforcement learning (RL) , where data efficiency is of paramount importance because gathering new samples often requires costly simulations or interactions with the real world.
A popular technique for RL meta-learning is Model Agnostic Meta Learning (MAML) (Finn et al., 2017; , a model for training an agent which can quickly adapt to new and unknown tasks by performing one (or a few) gradient updates in the new environment.
We provide a formal description of MAML in Section 2.
MAML has proven to be successful for many applications.
However, implementing and running MAML continues to be challenging.
One major complication is that the standard version of MAML requires estimating second derivatives of the RL reward function, which is difficult when using backpropagation on stochastic policies; indeed, the original implementation of MAML (Finn et al., 2017) did so incorrectly, which spurred the development of unbiased higher-order estimators (DiCE, (Foerster et al., 2018) ) and further analysis of the credit assignment mechanism in MAML (Rothfuss et al., 2019) .
Another challenge arises from the high variance inherent in policy gradient methods, which can be ameliorated through control variates such as in T-MAML (Liu et al., 2019) , through careful adaptive hyperparameter tuning (Behl et al., 2019; Antoniou et al., 2019) and learning rate annealing (Loshchilov & Hutter, 2017) .
To avoid these issues, we propose an alternative approach to MAML based on Evolution Strategies (ES), as opposed to the policy gradient underlying previous MAML algorithms.
We provide a detailed discussion of ES in Section 3.1.
ES has several advantages:
1. Our zero-order formulation of ES-MAML (Section 3.2, Algorithm 3) does not require estimating any second derivatives.
This dodges the many issues caused by estimating second derivatives with backpropagation on stochastic policies (see Section 2 for details).
2. ES is conceptually much simpler than policy gradients, which also translates to ease of implementation.
It does not use backpropagation, so it can be run on CPUs only.
3. ES is highly flexible with different adaptation operators (Section 3.3).
4. ES allows us to use deterministic policies, which can be safer when doing adaptation (Section 4.3).
ES is also capable of learning linear and other compact policies (Section 4.2).
On the point (4), a feature of ES algorithms is that exploration takes place in the parameter space.
Whereas policy gradient methods are primarily motivated by interactions with the environment through randomized actions, ES is driven by optimization in high-dimensional parameter spaces with an expensive querying model.
In the context of MAML, the notions of "exploration" and "task identification" have thus been shifted to the parameter space instead of the action space.
This distinction plays a key role in the stability of the algorithm.
One immediate implication is that we can use deterministic policies, unlike policy gradients which is based on stochastic policies.
Another difference is that ES uses only the total reward and not the individual state-action pairs within each episode.
While this may appear to be a weakness, since less information is being used, we find in practice that it seems to lead to more stable training profiles.
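The following toy NumPy sketch illustrates the zero-order formulation in spirit: an antithetic ES gradient estimator is applied to the meta-objective "reward after adaptation", where the adaptation operator is itself one ES gradient step on the task. The quadratic tasks and all hyperparameters are placeholders, not the authors' implementation.
```python
import numpy as np

rng = np.random.default_rng(0)

def es_grad(f, theta, sigma=0.1, pairs=16):
    """Antithetic ES estimate of the gradient of E[f(theta + sigma * eps)]."""
    g = np.zeros_like(theta)
    for _ in range(pairs):
        eps = rng.standard_normal(theta.shape)
        g += (f(theta + sigma * eps) - f(theta - sigma * eps)) * eps
    return g / (2 * sigma * pairs)

def adapt(f_task, theta, alpha=0.05):
    """Adaptation operator U(theta): one ES gradient step on the task reward."""
    return theta + alpha * es_grad(f_task, theta)

# Toy "tasks": maximise a negative quadratic reward centred at +1 or -1.
tasks = [lambda th, c=c: -np.sum((th - c) ** 2) for c in (np.ones(5), -np.ones(5))]

def meta_objective(theta):
    # Zero-order ES-MAML meta-objective: average reward *after* adaptation.
    return np.mean([f(adapt(f, theta)) for f in tasks])

theta = np.zeros(5)
for _ in range(50):
    theta += 0.1 * es_grad(meta_objective, theta)
print(round(meta_objective(theta), 3))
```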
This paper is organized as follows.
In Section 2, we give a formal definition of MAML, and discuss related works.
In Section 3, we introduce Evolutionary Strategies and show how ES can be applied to create a new framework for MAML.
In Section 4, we present numerical experiments, highlighting the topics of exploration (Section 4.1), the utility of compact architectures (Section 4.2), the stability of deterministic policies (Section 4.3), and comparisons against existing MAML algorithms in the few-shot regime (Section 4.4).
Additional material can be found in the Appendix.
We have presented a new framework for MAML based on ES algorithms.
The ES-MAML approach avoids the problems of Hessian estimation which necessitated complicated alterations in PG-MAML and is straightforward to implement.
ES-MAML is flexible in the choice of adaptation operators, and can be augmented with general improvements to ES, along with more exotic adaptation operators.
In particular, ES-MAML can be paired with nonsmooth adaptation operators such as hill climbing, which we found empirically to yield better exploratory behavior and better performance on sparse-reward environments.
ES-MAML performs well with linear or compact deterministic policies, which is an advantage when adapting if the state dynamics are possibly unstable.
but slightly worse than the full PG-MAML, and does not report comparisons with and without the Hessian on RL MAML.
(Rothfuss et al., 2019; Liu et al., 2019) argue for the importance of the second-order terms in proper credit assignment, but use heavily modified estimators (LVC, control variates; see Section 2) in their experiments, so the performance is not directly comparable to the 'naive' estimator in Algorithm 4.
Our interpretation is that Algorithm 4 has high variance, making the Hessian estimates inaccurate, which can slow training on relatively 'easier' tasks like ForwardBackward walking but possibly increase the exploration on four corners.
We also compare FO-NoHessian against Algorithm 3 on Forward-Backward HalfCheetah and Ant in Figure A2 .
In this experiment, the two methods ran on servers with different number of workers available, so we measure the score by the total number of rollouts.
We found that FO-NoHessian was slightly faster than Algorithm 3 when measured by rollouts on Ant, but FO-NoHessian had notably poor performance when the number of queries was low (K = 5) on HalfCheetah, and failed to reach similar scores as the others even after running for many more rollouts.
|
We provide a new framework for MAML in the ES/blackbox setting, and show that it allows deterministic and linear policies, better exploration, and non-differentiable adaptation operators.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:553
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Transforming one probability distribution to another is a powerful tool in Bayesian inference and machine learning.
Some prominent examples are constrained-to-unconstrained transformations of distributions for use in Hamiltonian Monte-Carlo and constructing flexible and learnable densities such as normalizing flows.
We present Bijectors.jl, a software package for transforming distributions implemented in Julia, available at github.com/TuringLang/Bijectors.jl.
The package provides a flexible and composable way of implementing transformations of distributions without being tied to a computational framework.
We demonstrate the use of Bijectors.jl on improving variational inference by encoding known statistical dependencies into the variational posterior using normalizing flows, providing a general approach to relaxing the mean-field assumption usually made in variational inference.
When working with probability distributions in Bayesian inference and probabilistic machine learning, transforming one probability distribution to another comes up quite often.
For example, when applying Hamiltonian Monte Carlo on constrained distributions, the constrained density is usually transformed to an unconstrained density for which the sampling is performed (Neal, 2012) .
Another example is to construct highly flexible and learnable densities often referred to as normalizing flows (Dinh et al., 2014; Huang et al., 2018; Durkan et al., 2019) ; for a review see Kobyzev et al. (2019) .
When a distribution P is transformed into some other distribution Q using some measurable function b, we write Q = b * P and say Q is the push-forward of P .
When b is a differentiable bijection with a differentiable inverse, i.e. a diffeomorphism or a bijector (Dillon et al., 2017), the induced or pushed-forward distribution Q is obtained by a simple application of the change of variables formula.
Specifically, given a distribution P on some Ω ⊆ R^d with density p : Ω → [0, ∞), and a bijector b : Ω → Ω̃ for some Ω̃ ⊆ R^d, the induced or pushed-forward distribution Q = b_*P has density
q(y) = p(b⁻¹(y)) |det J_{b⁻¹}(y)|, or equivalently q(b(x)) = p(x) |det J_b(x)|⁻¹.
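To make the change-of-variables formula concrete, here is a small Python sketch (plain NumPy/SciPy rather than the Julia API of Bijectors.jl): a standard normal is pushed through the bijector b(x) = exp(x), and the induced density is evaluated via b⁻¹ and its log-Jacobian, which matches SciPy's log-normal density.
```python
import numpy as np
from scipy import stats

# Push a standard normal P through the bijector b(x) = exp(x); the induced Q is
# log-normal, and its density follows the change-of-variables formula above.
def q_logpdf(y):
    x = np.log(y)                  # b^{-1}(y)
    log_det_jac_inv = -np.log(y)   # log |det J_{b^{-1}}(y)| = log(1/y)
    return stats.norm.logpdf(x) + log_det_jac_inv

y = 2.0
print(q_logpdf(y), stats.lognorm.logpdf(y, s=1.0))  # the two values agree
```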
We presented Bijectors.jl, a framework for working with bijectors and thus transformations of distributions.
We then demonstrated the flexibility of Bijectors.jl in an application of introducing correlation structure to the mean-field ADVI approach.
We believe Bijectors.jl will be a useful tool for future research, especially in exploring normalizing flows and their place in variational inference.
An interesting note about the NF variational posterior we constructed is that it only requires a constant number of extra parameters on top of what is required by mean-field normal VI.
This approach can be applied in more general settings where one has access to the directed acyclic graph (DAG) of the generative model in which we want to perform inference.
Then this approach will scale linearly with the number of unique edges between random variables.
It is also possible in cases where we have an undirected graph representing a model by simply adding a coupling in both directions.
This would be very useful for tackling issues faced when using mean-field VI and would be of interest to explore further.
For related work we have mainly compared against TensorFlow's tensorflow probability, which is used by other known packages such as pymc4, and PyTorch's torch.distributions, which is used by packages such as pyro.
Other frameworks which make heavy use of such transformations using their own implementations are stan, pymc3, and so on.
But in these frameworks the transformations are mainly used to transform distributions from constrained to unconstrained and vice versa, with little or no integration between those transformations and the more complex ones, e.g. normalizing flows.
pymc3 for example support normalizing flows, but treat them differently from the constrained-to-unconstrained transformations.
This means that composition between standard and parameterized transformations is not supported.
Of particular note is the bijectors framework in tensorflow probability introduced in (Dillon et al., 2017) .
One could argue that this was indeed the first work to take such a drastic approach to the separation of the determinism and stochasticity, allowing them to implement a lot of standard distributions as a TransformedDistribution.
This framework was also one of the main motivations that got the authors of Bijectors.jl interested in making a similar framework in Julia.
With that being said, other than the name, we have not set out to replicate tensorflow probability and most of the direct parallels were observed after-the-fact, e.g. a transformed distribution is defined by the TransformedDistribution type in both frameworks.
Instead we believe that Julia is a language well-suited for such a framework and therefore one can innovate on the side of implementation.
For example in Julia we can make use of code-generation or meta-programming to do program transformations in different parts of the framework, e.g. the composition b • b −1 is transformed into the identity function at compile time.
|
We present a software framework for transforming distributions and demonstrate its flexibility on relaxing mean-field assumptions in variational inference with the use of coupling flows to replicate structure from the target generative model.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:554
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Multi-domain learning (MDL) aims at obtaining a model with minimal average risk across multiple domains.
Our empirical motivation is automated microscopy data, where cultured cells are imaged after being exposed to known and unknown chemical perturbations, and each dataset displays significant experimental bias.
This paper presents a multi-domain adversarial learning approach, MuLANN, to leverage multiple datasets with overlapping but distinct class sets, in a semi-supervised setting.
Our contributions include:
i) a bound on the average- and worst-domain risk in MDL, obtained using the H-divergence;
ii) a new loss to accommodate semi-supervised multi-domain learning and domain adaptation;
iii) the experimental validation of the approach, improving on the state of the art on two standard image benchmarks, and a novel bioimage dataset, Cell.
Advances in technology have enabled large scale dataset generation by life sciences laboratories.
These datasets contain information about overlapping but non-identical known and unknown experimental conditions.
A challenge is how to best leverage information across multiple datasets on the same subject, and to make discoveries that could not have been obtained from any individual dataset alone.
Transfer learning provides a formal framework for addressing this challenge, particularly crucial in cases where data acquisition is expensive and heavily impacted by experimental settings.
One such field is automated microscopy, which can capture thousands of images of cultured cells after exposure to different experimental perturbations (e.g from chemical or genetic sources).
A goal is to classify mechanisms by which perturbations affect cellular processes based on the similarity of cell images.
In principle, it should be possible to tackle microscopy image classification as yet another visual object recognition task.
However, two major challenges arise compared to mainstream visual object recognition problems BID51 .
First, biological images are heavily impacted by experimental choices, such as microscope settings and experimental reagents.
Second, there is no standardized set of labeled perturbations, and datasets often contain labeled examples for a subset of possible classes only.
This has limited microscopy image classification to single datasets and does not leverage the growing number of datasets collected by the life sciences community.
These challenges make it desirable to learn models across many microscopy datasets that achieve both good robustness w.r.t. experimental settings and good class coverage, all the while being robust to the fact that datasets contain samples from overlapping but distinct class sets.
Multi-domain learning (MDL) aims to learn a model of minimal risk from datasets drawn from distinct underlying distributions BID20, and is a particular case of transfer learning BID46.
As such, it contrasts with the so-called domain adaptation (DA) problem BID7 BID5 BID22 BID46 .
DA aims at learning a model with minimal risk on a distribution called "target" by leveraging other distributions called "sources".
Notably, most DA methods assume that target classes are identical to source classes, or a subset thereof in the case of partial DA BID77.
The expected benefits of MDL, compared to training a separate model on each individual dataset, are two-fold.
First, MDL leverages more (labeled and unlabeled) information, allowing better generalization while accommodating the specifics of each domain BID20 BID72.
Thus, MDL models have a higher chance of ab initio performing well on a new domain − a problem referred to as domain generalization BID44 or zero-shot domain adaptation BID74.
Second, MDL enables knowledge transfer between domains: in unsupervised and semi-supervised settings, concepts learned on one domain are applied to another, significantly reducing the need for labeled examples from the latter BID46.
Learning a single model from samples drawn from n distributions raises the question of available learning guarantees regarding the model error on each distribution.
BID32 introduced the notion of H-divergence to measure the distance between source and target marginal distributions in DA.
BID4 have shown that a finite sample estimate of this divergence can be used to bound the target risk of the learned model.
The contributions of our work are threefold.
First, we extend the DA guarantees to MDL (Sec. 3.1), showing that the risk of the learned model over all considered domains is upper bounded by the oracle risk and the sum of the H-divergences between any two domains.
Furthermore, an upper bound on the classifier imbalance (the difference between the individual domain risk and the average risk over all domains) is obtained, thus bounding the worst-domain risk.
Second, we propose the approach Multi-domain Learning Adversarial Neural Network (MULANN), which extends Domain Adversarial Neural Networks (DANNs) BID22 to semi-supervised DA and MDL.
Relaxing the DA assumption, MULANN handles the so-called class asymmetry issue (when each domain may contain varying numbers of labeled and unlabeled examples of a subset of all possible classes) through designing a new loss (Sec. 3.2).
Finally, MULANN is empirically validated in both DA and MDL settings (Sec. 4), as it significantly outperforms the state of the art on three standard image benchmarks BID52 BID35, and a novel bioimage benchmark, CELL, where the state of the art involves extensive domain-dependent pre-processing.
Notation. Let X denote an input space and Y = {1, . . . , L} a set of classes.
For i = 1, . . . , n, dataset S_i is an iid sample drawn from distribution D_i on X × Y. The marginal distribution of D_i on X is denoted D_i^X.
Let H be a hypothesis space; for each h in H (h : X → Y) we define the risk under distribution D_i as ε_i(h) = P_{x,y∼D_i}(h(x) ≠ y).
h_i^* (respectively h^*) denotes the oracle hypothesis for distribution D_i (resp. the hypothesis with minimal total risk over all domains), i.e. h_i^* = argmin_{h∈H} ε_i(h) and h^* = argmin_{h∈H} Σ_{i=1}^n ε_i(h).
In the semi-supervised setting, the label associated with an instance might be missing.
In the following, "domain" and "distribution" will be used interchangeably, and the "classes of a domain" denote the classes for which labeled or unlabeled examples are available in this domain.
This paper extends the use of domain adversarial learning to multi-domain learning, establishing how the H-divergence can be used to bound both the risk across all domains and the worst-domain risk (imbalance on a specific domain).
The stress is put on the notion of class asymmetry, that is, when some domains contain labeled or unlabeled examples of classes not present in other domains.
Showing the significant impact of class asymmetry on the state of the art, this paper also introduces MULANN, where a new loss is meant to resist the contractive effects of the adversarial domain discriminator and to repulse (a fraction of) unlabeled examples from labeled ones in each domain.
The merits of the approach are satisfactorily demonstrated by comparison to DANN and MADA on DIGITS, RoadSigns and OFFICE, and results obtained on the real-world CELL problem establish a new baseline for the microscopy image community.
A perspective for further study is to bridge the gap between the proposed loss and importance sampling techniques, iteratively exploiting the latent representation to identify orphan samples and adapt the loss while learning.
Further work will also focus on how to identify and preserve relevant domain-specific behaviours while learning in a domain adversarial setting (e.g., if different cell types have distinct responses to the same class of perturbations).
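As background for the domain-adversarial component discussed above, the sketch below shows the standard gradient reversal mechanism underlying DANN-style training in PyTorch; this is the generic DANN building block, not MULANN's new loss.
```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer used in domain-adversarial training (DANN):
    identity on the forward pass, negated (scaled) gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

features = torch.randn(4, 16, requires_grad=True)
reversed_feats = GradReverse.apply(features, 1.0)
# A domain discriminator would be applied to `reversed_feats`; its gradient then
# pushes the feature extractor to make domains indistinguishable.
reversed_feats.sum().backward()
print(features.grad[0, :3])   # each entry is -1.0 times the upstream gradient
```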
|
Adversarial Domain adaptation and Multi-domain learning: a new loss to handle multi- and single-domain classes in the semi-supervised setting.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:555
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We introduce our Distribution Regression Network (DRN) which performs regression from input probability distributions to output probability distributions.
Compared to existing methods, DRN learns with fewer model parameters and easily extends to multiple input and multiple output distributions.
On synthetic and real-world datasets, DRN performs similarly or better than the state-of-the-art.
Furthermore, DRN generalizes the conventional multilayer perceptron (MLP).
In the framework of MLP, each node encodes a real number, whereas in DRN, each node encodes a probability distribution.
The field of regression analysis is largely established with methods ranging from linear least squares to multilayer perceptrons.
However, the scope of the regression is mostly limited to real valued inputs and outputs BID4 BID14 .
In this paper, we perform distribution-to-distribution regression where one regresses from input probability distributions to output probability distributions.
Distribution-to-distribution regression (see work by BID17) has not been as widely studied compared to the related task of functional regression BID3.
Nevertheless, regression on distributions has many relevant applications.
In the study of human populations, probability distributions capture the collective characteristics of the people.
Potential applications include predicting voting outcomes of demographic groups BID5 and predicting economic growth from income distribution BID19 .
In particular, distribution-to-distribution regression is very useful in predicting future outcomes of phenomena driven by stochastic processes.
For instance, the Ornstein-Uhlenbeck process, which exhibits a mean-reverting random walk, has wide-ranging applications.
In the commodity market, prices exhibit mean-reverting patterns due to market forces BID23 .
It is also used in quantitative biology to model phenotypic traits evolution BID0.
Variants of the distribution regression task have been explored in literature BID18.
For the distribution-to-distribution regression task, BID17 proposed an instance-based learning method where a linear smoother estimator (LSE) is applied across the input-output distributions.
However, the computation time of LSE scales badly with the size of the dataset.
To that end, BID16 developed the Triple-Basis Estimator (3BE) where the prediction time is independent of the number of data by using basis representations of distributions and Random Kitchen Sink basis functions.
BID9 proposed the Extrapolating the Distribution Dynamics (EDD) method which predicts the future state of a time-varying probability distribution given a sequence of samples from previous time steps.
However, it is unclear how it can be used for the general case of regressing distributions of different objects.
Our proposed Distribution Regression Network (DRN) is based on a completely different scheme of network learning, motivated by spin models in statistical physics and similar to artificial neural networks.
In many variants of the artificial neural network, the network encodes real values in the nodes BID21 BID10 BID1.
DRN is novel in that it generalizes the conventional multilayer perceptron (MLP) by encoding a probability distribution in each node.
Each distribution in DRN is treated as a single object which is then processed by the connecting weights.
Hence, the propagation behavior in DRN is much richer, enabling DRN to represent distribution regression mappings with fewer parameters than MLP.
We experimentally demonstrate that compared to existing methods, DRN achieves comparable or better regression performance with fewer model parameters.
Figure 1: (Left) An example DRN with multiple input probability distributions and multiple hidden layers mapping to an output probability distribution. (Right) A connection unit in the network, with 3 input nodes in layer l − 1 connecting to a node in layer l. Each node encodes a probability distribution, as illustrated by the probability density function P_k^(l). The tunable parameters are the connecting weights and the bias parameters at the output node.
The distribution-to-distribution regression task has many useful applications ranging from population studies to stock market prediction.
In this paper, we propose our Distribution Regression Network which generalizes the MLP framework by encoding a probability distribution in each node.
Our DRN is able to learn the regression mappings with fewer model parameters compared to MLP and 3BE.
MLP has not been used for distribution-to-distribution regression in literature and we have adapted it for this task.
Though both DRN and MLP are network-based methods, they encode the distribution very differently.
By generalizing each node to encode a distribution, each distribution in DRN is treated as a single object which is then processed by the connecting weight.
Thus, the propagation behavior in DRN is much richer, enabling DRN to represent the regression mappings with fewer parameters.
In 3BE, the number of model parameters scales linearly with the number of projection coefficients of the distributions and number of Random Kitchen Sink features.
In our experiments, DRN is able to achieve similar or better regression performance using fewer parameters than 3BE.
Furthermore, the runtime for DRN is competitive with other methods (see the comparison of mean prediction times in Appendix C).
For future work, we look to extend DRN for variants of the distribution regression task such as distribution-to-real regression and distribution classification.
Extensions may also be made for regressing multivariate distributions.
|
A learning network which generalizes the MLP framework to perform distribution-to-distribution regression
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:556
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Existing sequence prediction methods are mostly concerned with time-independent sequences, in which the actual time span between events is irrelevant and the distance between events is simply the difference between their order positions in the sequence.
While this time-independent view of sequences is applicable for data such as natural languages, e.g., dealing with words in a sentence, it is inappropriate and inefficient for many real world events that are observed and collected at unequally spaced points of time as they naturally arise, e.g., when a person goes to a grocery store or makes a phone call.
The time span between events can carry important information about the sequence dependence of human behaviors.
In this work, we propose a set of methods for using time in sequence prediction.
Because neural sequence models such as RNN are more amenable for handling token-like input, we propose two methods for time-dependent event representation, based on the intuition on how time is tokenized in everyday life and previous work on embedding contextualization.
We also introduce two methods for using next event duration as regularization for training a sequence prediction model.
We discuss these methods based on recurrent neural nets.
We evaluate these methods as well as baseline models on five datasets that resemble a variety of sequence prediction tasks.
The experiments revealed that the proposed methods offer accuracy gain over baseline models in a range of settings.
Event sequence prediction is a task to predict the next event 1 based on a sequence of previously occurred events.
Event sequence prediction has a broad range of applications, e.g., next word prediction in language modeling BID10 , next place prediction based on the previously visited places, or next app to launch given the usage history.
Depending on how the temporal information is modeled, event sequence prediction often decomposes into the following two categories: discrete-time event sequence prediction and continuous-time event sequence prediction.
Discrete-time event sequence prediction primarily deals with sequences that consist of a series of tokens (events) where each token can be indexed by its order position in the sequence.
Thus such a sequence evolves synchronously in natural unit-time steps.
These sequences are either inherently time-independent, e.g, each word in a sentence, or resulted from sampling a sequential behavior at an equally-spaced point in time, e.g., busy or not busy for an hourly traffic update.
In a discrete-time event sequence, the distance between events is measured as the difference of their order positions.
As a consequence, for discrete-time event sequence modeling, the primary goal is to predict what event will happen next.Continuous-time event sequence prediction mainly attends to the sequences where the events occur asynchronously.
For example, the time interval between consecutive clinical visits of a patient may potentially vary largely.
The duration between consecutive log-in events into an online service can change from time to time.
Therefore, one primary goal of continuous-time event sequence prediction is to predict when the next event will happen in the near future.
Although these two tasks focus on different aspects of a future event, how to learn a proper representation for the temporal information in the past is crucial to both of them.
More specifically, even though for a few discrete-time event sequence prediction tasks (e.g., neural machine translation), they do not involve an explicit temporal information for each event (token), a proper representation of the position in the sequence is still of great importance, not to mention the more general cases where each event is particularly associated with a timestamp.
For example, the next destination people want to go to often depends on what other places they have gone to and how long they have stayed in each place in the past.
When the next clinical visit BID3 will occur for a patient depends on the time of the most recent visits and the respective duration between them.
Therefore, the temporal information of events and the interval between them are crucial to the event sequence prediction in general.
However, how to effectively use and represent time in sequence prediction still remains largely underexplored.
A natural and straightforward solution is to bring time as an additional input into an existing sequence model (e.g., recurrent neural networks).
However, it is notoriously challenging for recurrent neural networks to directly handle continuous input that has a wide value range, as what is shown in our experiments.
Alternatively, we are inspired by the fact that humans are very good at characterizing time span as high-level concepts.
For example, we would say "watching TV for a little while" instead of using the exact minutes and seconds to describe the duration.
We also notice that these high-level descriptions about time are event dependent.
For example, watching movies for 30 minutes might feel much shorter than waiting in the line for the same amount of time.
Thus, it is desirable to learn and incorporate these time-dependent event representations in general.
Our paper offers the following contributions:
• We propose two methods for time-dependent event representation in a neural sequence prediction model: time masking of event embedding and event-time joint embedding. We use the time span associated with an event to better characterize the event by manipulating its embedding to give a recurrent model additional resolving power for sequence prediction.
• We propose to use next event duration as a regularizer for training a recurrent sequence prediction model. Specifically, we define two flavors of duration-based regularization: one is based on the negative log likelihood of duration prediction error and the other measures the cross entropy loss of duration prediction in a projected categorical space.
• We evaluated these proposed methods as well as several baseline methods on five datasets (four are public). These datasets span a diverse range of sequence behaviors, including mobile app usage, song listening patterns, and medical history. The baseline methods include vanilla RNN models and those found in the recent literature. These experiments offer valuable findings about how these methods improve prediction accuracy in a variety of settings.
We proposed a set of methods for leveraging the temporal information for event sequence prediction.
Based on our intuition about how humans tokenize time spans as well as previous work on contextual representation of words, we proposed two methods for time-dependent event representation.
They transform a regular event embedding with learned time masking and form time-event joint embedding based on learned soft one-hot encoding.
We also introduced two methods for using next duration as a way of regularization for training a sequence prediction model.
Experiments on a diverse range of real data demonstrate consistent performance gain by blending time into the event representation before it is fed to a recurrent neural network.
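As an illustration of the time-masking method described above, here is a small PyTorch sketch in which the duration associated with an event gates its embedding element-wise; the dimensions, the sigmoid gate, and the log transform of the duration are assumptions, not the paper's exact parameterization.
```python
import torch
import torch.nn as nn

class TimeMaskedEmbedding(nn.Module):
    """Time-dependent event representation by time masking: the time span
    associated with an event produces a gate that modulates its embedding."""

    def __init__(self, num_events=100, dim=32):
        super().__init__()
        self.event_emb = nn.Embedding(num_events, dim)
        self.time_to_mask = nn.Sequential(nn.Linear(1, dim), nn.Sigmoid())

    def forward(self, event_ids, durations_sec):
        mask = self.time_to_mask(torch.log1p(durations_sec).unsqueeze(-1))
        return self.event_emb(event_ids) * mask

emb = TimeMaskedEmbedding()
ids = torch.tensor([3, 17, 42])
durs = torch.tensor([30.0, 600.0, 86400.0])   # 30 s, 10 min, 1 day
print(emb(ids, durs).shape)                   # torch.Size([3, 32])
```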
|
Proposed methods for time-dependent event representation and regularization for sequence prediction; Evaluated these methods on five datasets that involve a range of sequence prediction tasks.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:557
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Off-policy reinforcement learning algorithms promise to be applicable in settings where only a fixed data-set (batch) of environment interactions is available and no new experience can be acquired.
This property makes these algorithms appealing for real world problems such as robot control.
In practice, however, standard off-policy algorithms fail in the batch setting for continuous control.
In this paper, we propose a simple solution to this problem.
It admits the use of data generated by arbitrary behavior policies and uses a learned prior -- the advantage-weighted behavior model (ABM) -- to bias the RL policy towards actions that have previously been executed and are likely to be successful on the new task.
Our method can be seen as an extension of recent work on batch-RL that enables stable learning from conflicting data-sources.
We find improvements on competitive baselines in a variety of RL tasks -- including standard continuous control benchmarks and multi-task learning for simulated and real-world robots.
Batch reinforcement learning (RL) (Ernst et al., 2005; Lange et al., 2011) is the problem of learning a policy from a fixed, previously recorded, dataset without the opportunity to collect new data through interaction with the environment.
This is in contrast to the typical RL setting which alternates between policy improvement and environment interaction (to acquire data for policy evaluation).
In many real world domains collecting new data is laborious and costly, both in terms of experimentation time and hardware availability but also in terms of the human labour involved in supervising experiments.
This is especially evident in robotics applications (see e.g. Haarnoja et al. 2018b; Kalashnikov et al. 2018 for recent examples learning on robots).
In these settings where gathering new data is expensive compared to the cost of learning, batch RL promises to be a powerful solution.
There exist a wide class of off-policy algorithms for reinforcement learning designed to handle data generated by a behavior policy µ which might differ from π, the policy that we are interested in learning (see e.g. Sutton & Barto (2018) for an introduction).
One might thus expect solving batch RL to be a straightforward application of these algorithms.
Surprisingly, for batch RL in continuous control domains, however, Fujimoto et al. (2018) found that policies obtained via the naïve application of off-policy methods perform dramatically worse than the policy that was used to generate the data.
This result highlights the key challenge in batch RL: we need to exhaustively exploit the information that is in the data but avoid drawing conclusions for which there is no evidence (i.e. we need to avoid over-valuing state-action sequences not present in the training data).
As we will show in this paper, the problems with existing methods in the batch learning setting are further exacerbated when the provided data contains behavioral trajectories from different policies µ 1 , . . . , µ N which solve different tasks, or the same task in different ways (and thus potentially execute conflicting actions) that are not necessarily aligned with the target task that π should accomplish.
We empirically show that previously suggested adaptations for off-policy learning (Fujimoto et al., 2018; Kumar et al., 2019) can be led astray by behavioral patterns in the data that are consistent (i.e. policies that try to accomplish a different task or a subset of the goals for the target task) but not relevant for the task at hand.
This situation is more damaging than learning from noisy or random data where the behavior policy is sub-optimal but is not predictable, i.e. the randomness is not a correlated signal that will be picked up by the learning algorithm.
We propose to solve this problem by restricting our solutions to 'stay close to the relevant data'.
This is done by:
1) learning a prior that gives information about which candidate policies are potentially supported by the data (while ensuring that the prior focuses on relevant trajectories),
2) enforcing the policy improvement step to stay close to the learned prior policy.
We propose a policy iteration algorithm in which the prior is learned to form an advantage-weighted model of the behavior data.
This prior biases the RL policy towards previously experienced actions that also have a high chance of being successful in the current task.
Our method enables stable learning from conflicting data sources and we show improvements on competitive baselines in a variety of RL tasks -including standard continuous control benchmarks and multi-task learning for simulated and real-world robots.
We also find that utilizing an appropriate prior is sufficient to stabilize learning; demonstrating that the policy evaluation step is implicitly stabilized when a policy iteration algorithm is used -as long as care is taken to faithfully evaluate the value function within temporal difference calculations.
This results in a simpler algorithm than in previous work (Fujimoto et al., 2018; Kumar et al., 2019) .
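To make the two steps above concrete, the following is a minimal PyTorch-style sketch of one plausible instantiation; the network interfaces, the crude return-based advantage estimate, and the KL weight alpha are illustrative assumptions rather than the paper's reference implementation.

import torch

def abm_prior_loss(prior, batch, value_fn):
    # Advantage-weighted behavior modelling: actions from the batch that look
    # better than the value estimate get weight 1, the rest get weight 0.
    # (Assumption: `prior(s)` returns a torch.distributions object over actions.)
    s, a, ret = batch["state"], batch["action"], batch["return"]
    adv = ret - value_fn(s).squeeze(-1)
    weight = (adv > 0).float()
    log_prob = prior(s).log_prob(a)
    return -(weight * log_prob).mean()

def policy_improvement_loss(policy, prior, q_fn, states, alpha=0.1):
    # Maximize the learned Q-value while keeping the policy close to the
    # advantage-weighted behavior model (the "stay close to relevant data" step).
    dist = policy(states)
    actions = dist.rsample()                      # reparameterized for continuous control
    q = q_fn(states, actions).squeeze(-1)
    kl = torch.distributions.kl_divergence(dist, prior(states))
    return (-q + alpha * kl).mean()

In a full policy iteration loop, the prior, the Q-function (trained with temporal-difference learning), and the policy would be updated alternately; the sketch only shows the two loss terms.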
In this work, we considered the problem of stable learning from logged experience with off-policy RL algorithms.
Our approach consists of using a learned prior that models the behavior distribution contained in the data (the advantage weighted behavior model) towards which the policy of an RL algorithm is regularized.
This allows us to avoid drawing conclusions for which there is no evidence in the data.
Our approach is robust to large amounts of sub-optimal data, and compares favourably to strong baselines on standard continuous control benchmarks.
We further demonstrate that our approach can work in challenging robot manipulation domains -learning some tasks without ever seeing a single trajectory for them.
A ALGORITHM
A full algorithm listing for our procedure is given in Algorithm 1.
|
We develop a method for stable offline reinforcement learning from logged data. The key is to regularize the RL policy towards a learned "advantage weighted" model of the data.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:558
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
One of the main challenges in applying graph convolutional neural networks on gene-interaction data is the lack of understanding of the vector space to which they belong and also the inherent difficulties involved in representing those interactions on a significantly lower dimension, viz Euclidean spaces.
The challenge becomes more prevalent when dealing with various types of heterogeneous data.
We introduce a systematic, generalized method, called iSOM-GSN, used to transform "multi-omic" data with higher dimensions onto a two-dimensional grid.
Afterwards, we apply a convolutional neural network to predict disease states of various types.
Based on the idea of Kohonen's self-organizing map, we generate a two-dimensional grid for each sample for a given set of genes that represent a gene similarity network.
We have tested the model to predict breast and prostate cancer using gene expression, DNA methylation and copy number alteration, yielding prediction accuracies in the 94-98% range for tumor stages of breast cancer and calculated Gleason scores of prostate cancer with just 11 input genes for both cases.
The scheme not only outputs nearly perfect classification accuracy, but also provides an enhanced scheme for representation learning, visualization, dimensionality reduction, and interpretation of the results.
Large scale projects such as "The Cancer Genome Atlas" (TCGA) generate a plethora of multidimensional data by applying high-resolution microarrays and next generation sequencing.
This leads to diverse multi-dimensional data in which the need for devising dimensionality reduction and representation learning methods to integrate and analyze such data arises.
An earlier study by Shen et al. proposed the algorithms iCluster (Shen et al., 2009a) and iCluster+ (Shen et al., 2009b), which made use of a latent variable model and principal component analysis (PCA) on multi-omic data and aimed to cluster cancer data into sub-types; even though they performed well, they did not use multi-omics data.
In another study, (Lyu and Haque, 2018) attempted to apply heatmaps as a dimensionality reduction scheme on gene expression data to deduce biological insights and then classify cancer types from a Pan-cancer cohort.
However, the accuracy obtained by using that method was limited to 97% on Pan-cancer data, lacking the benefits of integrated multi-omics data.
In a recent study, Choy et al. (2019) used self-organizing maps (SOMs) to embed gene expression data into a lower-dimensional map, while the works of (Bustamam et al., 2018; Mallick et al., 2019; Paul and Shill, 2018; Loeffler-Wirth et al., 2019) generate clusters using SOMs on gene expression data with different aims.
In addition, the work of (Hopp et al., 2018) combines gene expression and DNA methylation to identify subtypes of cancer similar to those of (Roy et al., 2018) , which identifies modules of co-expressing genes.
On the other hand, the work of (Kartal et al., 2018) uses SOMs to create a generalized regression neural network, while the model proposed in (Yoshioka and Dozono, 2018; Shah and Luo, 2017) uses SOMs to classify documents based on a word-tovector model.
Apart from dimensionality reduction methods, attempts have been made by applying supervised deep machine learning, such as deepDriver (Luo et al., 2019) , which predicts candidate driver genes based on mutation-based features and gene similarity networks.
Although these works have been devised to use embedding and conventional machine learning approaches, the use of deep neural networks for multi-omics data integration is still in its infancy.
In addition, these methods are inadequate for generalizing to multi-omics data when predicting disease states.
More specifically, none of these models combines the strength of SOMs for representation learning with CNNs for image classification, as we do in this work.
Table 1: Distribution of the different Gleason groups considered for PRCA. Gleason score 3+4: 147 samples (group 34); Gleason score 4+3: 101 samples (group 43); Gleason scores 4+5 and 5+4: 139 samples (group 9).
In this paper, a deep learning-based method is proposed, and is used to predict disease states by integrating multi-omic data.
The method, which we call iSOM-GSN, leverages the power of SOMs to transform multi-omic data into a gene similarity network (GSN) by the use of gene expression data.
Such data is then combined with other genomic features to improve prediction accuracy and help visualization.
To our knowledge, this is the first deep learning model that uses SOMs to transform multi-omic data into a GSN for representation learning and uses CNNs for classification of disease states or other clinical features.
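As a rough illustration of how genes can be placed on a two-dimensional grid with a self-organizing map, here is a small NumPy sketch; the grid size, the decay schedules, and the use of per-gene expression vectors as SOM inputs are assumptions made for illustration rather than the authors' exact pipeline.

import numpy as np

def train_som(gene_vectors, grid=(10, 10), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    # Fit a tiny self-organizing map: each grid cell holds a weight vector, and
    # genes are later placed at the cell whose weights are closest to them.
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, gene_vectors.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(iters):
        x = gene_vectors[rng.integers(len(gene_vectors))]
        bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=-1)), (h, w))
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        grid_d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        nbh = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]   # neighborhood function
        weights += lr * nbh * (x - weights)
    return weights

def gene_positions(gene_vectors, weights):
    # Map every gene to its (row, col) cell on the 2D grid.
    h, w, _ = weights.shape
    flat = weights.reshape(-1, weights.shape[-1])
    idx = np.argmin(np.linalg.norm(flat[None] - gene_vectors[:, None], axis=-1), axis=1)
    return np.stack(np.unravel_index(idx, (h, w)), axis=1)

Each sample could then be rendered as an image by coloring every gene's grid cell with that sample's multi-omic values for the gene, and the resulting images fed to a CNN classifier, roughly mirroring the pipeline described above.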
The main contributions of this work can be summarized as follows:
• A deep learning method for prediction of tumor aggressiveness and progression using iSOM-GSN.
• A new strategy to derive gene similarity networks via self-organizing maps.
• Use of iSOM-GSN to identify relevant biomarkers without handcrafted feature engineering.
• An enhanced scheme to interpret and visualize multi-dimensional, multi-omics data.
• An efficient model for graph representation learning.
This paper presents a framework that uses a self-organizing map and a convolutional neural network used to conduct data integration, representation learning, dimensionality reduction, feature selection and classification simultaneously to harness the full potential of integrated high-dimensional large scale cancer genomic data.
We have introduced a new way to create gene similarity networks, which can lead to novel gene interactions.
We have also provided a scheme to visualize high-dimensional, multi-omics data onto a two-dimensional grid.
In addition, we have devised an approach that could also be used to integrate other types of multi-omic data and predict any clinical aspects or states of diseases, such as laterality of the tumor, survivability, or cancer sub types, just to mention a few.
This work can also be extended to classify Pan-cancer data.
Omics can be considered as a vector and more than three types of data (i.e., beyond RGB images) can be incorporated for classification.
Apart from integrating multi-omics data, the proposed approach can be considered as an unsupervised clustering algorithm, because of the competitive learning nature of SOMs.
We can also apply iSOM-GSN to other domains, such as predicting music genres for users based on their music preferences.
As a first step, we have applied the SOM to a Deezer dataset, and the results are encouraging (see Figure 14).
Applications of iSOM-GSN can also be found in drug response prediction or drug re-purposing, prediction of passenger genes or oncogenes, revealing topics in citation networks, and other prediction tasks.
|
This paper presents a deep learning model that combines self-organizing maps and convolutional neural networks for representation learning of multi-omics data
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:559
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We revisit the Recurrent Attention Model (RAM, Mnih et al. (2014)), a recurrent neural network for visual attention, from an active information sampling perspective.
We borrow ideas from neuroscience research on the role of active information sampling in the context of visual attention and gaze (Gottlieb, 2018), where the author suggested three types of motives for active information sampling strategies.
We find the original RAM model only implements one of them.
We identify three key weaknesses of the original RAM and provide a simple solution by adding two extra terms to the objective function.
The modified RAM
1) achieves faster convergence,
2) allows dynamic decision making per sample without loss of accuracy, and
3) generalizes much better to longer sequences of glimpses than it was trained for, compared with the original RAM.
We revisit the Recurrent Attention Model (RAM, Mnih et al., 2014), a recurrent neural network for visual attention, from an active information sampling perspective.
The RAM, instead of processing the input image for classification in full, only takes a glimpse at a small patch of the image at a time.
The recurrent attention mechanism learns where to look at to obtain new information based on the internal state of the network.
After a pre-defined number of glimpses, RAM finally makes a prediction as output.
Compared with the attention mechanism which now dominates the AI/NLP research such as Transformer (Vaswani et al., 2017) and BERT (Devlin et al., 2018) , this recurrent attention mechanism is fundamentally different, as it is used to obtain new information (active sampling of information), rather than processing information that is already fully observed.
In this paper, we identify three weaknesses of this widely-cited approach.
First, the convergence of RAM training is slow.
Second, RAM does not support a dynamic number of glimpses per sample, but uses a fixed number of glimpses for every sample.
Third, and perhaps most importantly, the performance of the original RAM does not improve but rather decreases dramatically if it takes more glimpses, which is counterintuitive.
We provide a simple solution of adding two extra terms to the objective function of RAM, inspired by neuroscience research (Gottlieb, 2018), which discusses the logic and neural substrates of information sampling policies in the context of visual attention and gaze.
Based on the evidence available so far, Gottlieb (2018) suggested three kinds of motives for the active sampling strategies of decision-making, while the original RAM only implements one of them.
We incorporate the other two motives in the objective function, and by doing so we
1) achieve much faster convergence and
2) instantly enable decision making with a dynamic number of glimpses for different samples with no loss of accuracy.
3) More importantly, we find that the modified RAM generalizes much better to longer sequences of glimpses than it was trained for.
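Because the excerpt does not give the exact form of the two terms, the following PyTorch-style sketch is a hypothetical reconstruction: it assumes J uncertainty penalizes the entropy of the per-glimpse class predictions and J intrinsic penalizes the squared norm of the recurrent state, which is consistent with the qualitative observations reported below.

import torch
import torch.nn.functional as F

def modified_ram_loss(logits_per_glimpse, hidden_states, labels,
                      reinforce_loss, lam1=1.0, lam2=0.1):
    # Original RAM losses plus two auxiliary terms (hypothetical forms).
    cls_loss = F.cross_entropy(logits_per_glimpse[-1], labels)

    # "J uncertainty": encourage confident (low-entropy) predictions at every glimpse.
    probs = torch.softmax(torch.stack(logits_per_glimpse), dim=-1)
    uncertainty = -(probs * torch.log(probs + 1e-8)).sum(-1).mean()

    # "J intrinsic": keep the recurrent state bounded so that longer glimpse
    # sequences do not blow up the internal representation.
    intrinsic = torch.stack([h.pow(2).sum(-1).mean() for h in hidden_states]).mean()

    return cls_loss + reinforce_loss + lam1 * uncertainty + lam2 * intrinsic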
We evaluate on the MNIST dataset, as in the original RAM paper.
We set the train-time number of glimpses to N = 6, as it achieves the best test-time accuracy there.
For implementation details, see the source code.
We first show in Figure 1 that the two new terms in the objective both contribute to a faster convergence.
We test four cases:
1) the original objective,
2) adding the J intrinsic term, 3) adding the J uncertainty term, and 4) adding both new terms.
We see in Figure 1 that both of our new objective in isolation help a faster learning and together give the fastest convergence.
As in Figure 2 , we test the trained models with varying number of glimpses.
(We want to emphasize that the focus is not the absolute performance, but rather the generalization to more glimpses than at train time.)
We fisrt evaluate the non-dynamic case (fixed number for all samples).
The performance of the original RAM decreases dramatically when N > 10.
Adding both terms, the modified RAM no longer suffers this decrease, even when N is large.
Also, it is interesting that when adding only the uncertainty term, the observed improvement is very slight, whereas the intrinsic term effectively stabilizes the prediction accuracy given more glimpses.
We also test the dynamic case by varying the exploration rate.
We see that a dynamic number of glimpses does not hurt the performance very much, which is consistent with the hypothesis that some samples are easier to discriminate and thus need fewer glimpses.
One may argue that, given longer training time or other hyperparameter tuning, RAM will eventually reach a point where it can give stable prediction accuracy on more glimpses, and that the new objective only makes it converge faster to that point.
But during our experiments, we find that with λ2 = 0.1 the J intrinsic term can effectively stabilize the prediction given more glimpses, even when trained for only 1 epoch.
We observe that the l2-norm of the internal states of the original RAM becomes very large given a longer sequence of glimpses, while the modified RAM with J intrinsic remains stable.
|
Inspired by neuroscience research, we solve three key weaknesses of the widely-cited recurrent attention model by simply adding two terms to the objective function.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:56
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
State-of-the-art performances on language comprehension tasks are achieved by huge language models pre-trained on massive unlabeled text corpora, with very light subsequent fine-tuning in a task-specific supervised manner.
It seems the pre-training procedure learns a very good common initialization for further training on various natural language understanding tasks, such that only few steps need to be taken in the parameter space to learn each task.
In this work, using Bidirectional Encoder Representations from Transformers (BERT) as an example, we verify this hypothesis by showing that task-specific fine-tuned language models are highly close in parameter space to the pre-trained one.
Taking advantage of such observations, we further show that the fine-tuned versions of these huge models, having on the order of $10^8$ floating-point parameters, can be made very computationally efficient.
First, fine-tuning only a fraction of critical layers suffices.
Second, fine-tuning can be adequately performed by learning a binary multiplicative mask on pre-trained weights, i.e., by parameter sparsification.
As a result, with a single effort, we achieve three desired outcomes: (1) learning to perform specific tasks, (2) saving memory by storing only binary masks of certain layers for each task, and (3) saving compute on appropriate hardware by performing sparse operations with model parameters.
One very puzzling fact about overparameterized deep neural networks is that sheer increases in dimensionality of the parameter space seldom make stochastic gradient-based optimization more difficult.
Given an effective network architecture reflecting proper inductive biases, deeper and/or wider networks take just about the same, if not a lower, number of training iterations to converge, a number often by orders of magnitude smaller than the dimensionality of the parameter space.
For example, ResNet-18 (parameter count 11.7M) and ResNet-152 (parameter count 60.2M) both train to converge, at similar convergence rates, in no more than 600K iterations on Imagenet (He et al., 2015) .
Meaningful optimization seems to happen in only a very low-dimensional parameter subspace, viz. the span of those relatively few weight updates, with its dimensionality not ostensibly scaling with the model size.
In other words, the network seems already perfectly converged along most of the parameter dimensions at initialization, suggesting that training only marginally alters a high-dimensional parameter configuration.
This phenomenon is epitomized in fine-tuning of pre-trained models.
Pre-training is a, often unsupervised, learning procedure that yields a good common initialization for further supervised learning of various downstream tasks.
The better a pre-trained model is, the fewer iterations are required on average to fine-tune it to perform specific tasks, resulting in fine-tuned models hypothetically closer to the pre-trained one in parameter space.
However, better pre-trained models are, almost always, larger models (Hestness et al., 2017) , and nowhere is this trend more prominent than recent pretrained language models that achieved state-of-the-art natural language understanding performance, e.g. GPT-2 (Radford et al., 2019 ) has 1.5B parameters.
Thus, a problem naturally arises hand-in-hand with an obvious hint to its solution: as pre-trained models get larger, on the one hand, computation of each fine-tuned model becomes more expensive in terms of both memory and compute for inference, while on the other hand, greater closeness between the pre-trained and fine-tuned models in the parameter space prescribes a higher degree of computational redundancy that could be potentially avoided.
Additionally, there might exist more computationally efficient fine-tuned networks that are not necessarily close to, but cheaply attainable from, the pre-trained parameters, which are shared across all tasks.
In this study, we seek to address these questions, using Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018) and the General Language Understanding Evaluation (GLUE) benchmark tasks (Wang et al., 2018) as a working example.
We first found that the fine-tuned and pre-trained parameters are both L 1 -close and angular-close in parameter space, consistent with the small number of fine-tuning iterations separating them.
Next, we demonstrated that there also exist good fine-tuned models that are L 0 -close (i.e. having a small number of different components) to the pre-trained one.
Further, we showed that there exist good fine-tuned parameters that are L 0 -small (i.e. sparse, or having a large fraction of zero components).
Finally, we successfully found fine-tuned language models that are both L 0 -small and L 0 -close to the pre-trained models.
We remark the practical implications of these constraints.
By forcing fine-tuned parameters to be L 0 -close to the pre-trained ones, one only needs to store a small number of different weights per task, in addition to the common pre-trained weights, substantially saving parameter memory.
By forcing fine-tuned parameters to be sparse, one potentially saves memory and compute, provided proper hardware acceleration of sparse linear algebraic operations.
Surprisingly, our findings also reveal an abundance of good task-specific parameter configurations within a sparse L 0 -vicinity of large pre-trained language models like BERT: a specific task can be learned by simply masking anywhere between 1% and 40% of the pre-trained weights to zero.
See Figure 1 for an explanation of the L 0 -and sparse L 0 -vicinities.
Figure 1: An illustration of the L 0 -vicinity and the sparse L 0 -vicinity of a pre-trained parameter in a three-dimensional parameter space.
The L 0 -vicinity is continuous and contains parameters that are L 0 -close, whereas the sparse L 0 -vicinity is a discrete subset of L 0 -close parameters that are also L 0 -small.
We show that, due to surprisingly frequent occurrences of good parameter configurations in the sparse L 0 -vicinity of large pre-trained language models, two techniques are highly effective in producing efficient fine-tuned networks to perform specific language understanding tasks: (1) optimizing only the most sensitive layers and (2) learning to sparsify parameters.
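A minimal sketch of the second technique, learning a binary multiplicative mask over frozen pre-trained weights, is given below; the straight-through gradient estimator and the threshold parameterization are common choices assumed here for illustration and not necessarily the exact scheme used in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    # Wraps a pre-trained linear layer; only the real-valued mask scores are trained.
    # The forward pass uses a hard 0/1 mask; gradients flow through a straight-through estimator.
    def __init__(self, pretrained: nn.Linear, threshold: float = 0.0):
        super().__init__()
        self.weight = nn.Parameter(pretrained.weight.detach(), requires_grad=False)
        self.bias = (nn.Parameter(pretrained.bias.detach(), requires_grad=False)
                     if pretrained.bias is not None else None)
        self.scores = nn.Parameter(torch.full_like(self.weight, 0.01))
        self.threshold = threshold

    def forward(self, x):
        hard = (self.scores > self.threshold).float()
        mask = hard + self.scores - self.scores.detach()   # value = hard mask, gradient = identity
        return F.linear(x, self.weight * mask, self.bias)

Per task, only the binary mask of the masked layers (and any fully fine-tuned critical layers) would need to be stored on top of the shared pre-trained weights.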
In contrast to commonly employed post-training compression methods that have to trade off with performance degradation, our procedure of generating sparse networks is by itself an optimization process that learns specific tasks.
|
Sparsification as fine-tuning of language models
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:560
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We present a simple and effective algorithm designed to address the covariate shift problem in imitation learning.
It operates by training an ensemble of policies on the expert demonstration data, and using the variance of their predictions as a cost which is minimized with RL together with a supervised behavioral cloning cost.
Unlike adversarial imitation methods, it uses a fixed reward function which is easy to optimize.
We prove a regret bound for the algorithm in the tabular setting which is linear in the time horizon multiplied by a coefficient which we show to be low for certain problems in which behavioral cloning fails.
We evaluate our algorithm empirically across multiple pixel-based Atari environments and continuous control tasks, and show that it matches or significantly outperforms behavioral cloning and generative adversarial imitation learning.
Training artificial agents to perform complex tasks is essential for many applications in robotics, video games and dialogue.
If success on the task can be accurately described using a reward or cost function, reinforcement learning (RL) methods offer an approach to learning policies which has been shown to be successful in a wide variety of applications (Mnih et al., 2015; Hessel et al., 2018). However, in other cases the desired behavior may only be roughly specified, and it is unclear how to design a reward function to characterize it.
For example, training a video game agent to adopt more human-like behavior using RL would require designing a reward function which characterizes behaviors as more or less human-like, which is difficult.
Imitation learning (IL) offers an elegant approach whereby agents are trained to mimic the demonstrations of an expert rather than optimizing a reward function.
Its simplest form consists of training a policy to predict the expert's actions from states in the demonstration data using supervised learning.
While appealingly simple, this approach suffers from the fact that the distribution over states observed at execution time can differ from the distribution observed during training.
Minor errors which initially produce small deviations from the expert trajectories become magnified as the policy encounters states further and further from its training distribution.
This phenomenon, initially noted in the early work of Pomerleau (1989), was formalized by Ross & Bagnell (2010), who proved a quadratic O(T²) bound on the regret and showed that this bound is tight.
The subsequent work of (Ross et al., 2011) showed that if the policy is allowed to further interact with the environment and make queries to the expert policy, it is possible to obtain a linear bound on the regret.
However, the ability to query an expert can often be a strong assumption.
In this work, we propose a new and simple algorithm called DRIL (Disagreement-Regularized Imitation Learning) to address the covariate shift problem in imitation learning, in the setting where the agent is allowed to interact with its environment.
Importantly, the algorithm does not require any additional interaction with the expert.
It operates by training an ensemble of policies on the demonstration data, and using the disagreement in their predictions as a cost which is optimized through RL together with a supervised behavioral cloning cost.
The motivation is that the policies in the ensemble will tend to agree on the set of states covered by the expert, leading to low cost, but are more likely to disagree on states not covered by the expert, leading to high cost.
The RL cost thus pushes the agent back towards the distribution of the expert, while the supervised cost ensures that it mimics the expert within the expert's distribution.
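The following PyTorch-style sketch illustrates the two ingredients just described; the ensemble size, the bootstrap resampling, and the discrete-action parameterization are illustrative assumptions rather than the paper's exact implementation.

import torch
import torch.nn.functional as F

def train_bc_ensemble(policies, optimizers, expert_states, expert_actions, epochs=10):
    # Fit each ensemble member by behavioral cloning on its own bootstrap resample.
    n = len(expert_states)
    for policy, opt in zip(policies, optimizers):
        idx = torch.randint(0, n, (n,))
        for _ in range(epochs):
            loss = F.cross_entropy(policy(expert_states[idx]), expert_actions[idx])
            opt.zero_grad()
            loss.backward()
            opt.step()

def disagreement_cost(policies, states):
    # Variance of the ensemble's action distributions: low on expert-like states,
    # high on states far from the demonstrations.
    with torch.no_grad():
        probs = torch.stack([torch.softmax(p(states), dim=-1) for p in policies])
    return probs.var(dim=0).mean(dim=-1)   # one cost value per state

During the RL phase, disagreement_cost would supply the per-state cost that is minimized alongside the supervised behavioral cloning loss on the demonstrations.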
Our theoretical results show that, subject to realizability and optimization oracle assumptions, our algorithm obtains an O(κT) regret bound for tabular MDPs, where κ is a measure which quantifies a tradeoff between the concentration of the demonstration data and the diversity of the ensemble outside the demonstration data.
We evaluate DRIL empirically across multiple pixel-based Atari environments and continuous control tasks, and show that it matches or significantly outperforms behavioral cloning and generative adversarial imitation learning, often recovering expert performance with only a few trajectories.
Addressing covariate shift has been a long-standing challenge in imitation learning.
In this work, we have proposed a new method to address this problem by penalizing the disagreement between an ensemble of different policies sampled from the posterior.
Importantly, our method requires no additional labeling by an expert.
Our experimental results demonstrate that DRIL can often match expert performance while using only a small number of trajectories across a wide array of tasks, ranging from tabular MDPs to pixel-based Atari games and continuous control tasks.
On the theoretical side, we have shown that our algorithm can provably obtain a low regret bound for tabular problems in which the κ parameter is low.
There are multiple directions for future work.
On the theoretical side, extending our analysis to continuous state spaces and characterizing the κ parameter on a larger array of problems would help to better understand the settings where our method can expect to do well.
Empirically, there are many other settings in structured prediction (Daumé et al., 2009 ) where covariate shift is an issue and where our method could be applied.
For example, in dialogue and language modeling it is common for generated text to become progressively less coherent as errors push the model off the manifold it was trained on.
Our method could potentially be used to fine-tune language or translation models (Cho et al., 2014; Welleck et al., 2019) after training by applying our uncertainty-based cost function to the generated text.
A PROOFS
Proof. We will first show that for any π ∈ Π and U ⊆ S, we have . We can rewrite this as:
We begin by bounding the first term:
We next bound the second term:
Now observe we can decompose the RL cost as follows:
Putting these together, we get the following:
Here we have used the fact that β(U) ≤ 1 since 0 ≤ π(a|s) ≤ 1 and α(U) ≥ s∈U .
Taking the minimum over subsets U ⊆ S, we get J_exp(π) ≤ κ J_alg(π).
Proof. Plugging the optimal policy into J_alg, we get:
We will first bound Term 1:
We will next bound Term 2:
The last step follows from our optimization oracle assumption:
Combining the bounds on the two terms, we get J_alg(π*) ≤ 2 .
Since π* ∈ Π, the result follows.
Theorem 1. Let π̂ be the result of minimizing J_alg using our optimization oracle, and assume that
Proof. By our optimization oracle and Lemma 2, we have
Combining with Lemma 1, we get:
Applying Theorem 1 from (Ross et al., 2011), we get J(π̂) ≤ J(π*) + 3uκ T.
|
Method for addressing covariate shift in imitation learning using ensemble uncertainty
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:561
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We present and discuss a simple image preprocessing method for learning disentangled latent factors.
In particular, we utilize the implicit inductive bias contained in features from networks pretrained on the ImageNet database.
We enhance this bias by explicitly fine-tuning such pretrained networks on tasks useful for the NeurIPS2019 disentanglement challenge, such as angle and position estimation or color classification.
Furthermore, we train a VAE on regionally aggregated feature maps, and discuss its disentanglement performance using metrics proposed in recent literature.
Fully unsupervised methods, that is, without any human supervision, are doomed to fail for tasks such as learning disentangled representations (Locatello et al., 2018) .
In this contribution, we utilize the implicit inductive bias contained in models pretrained on the ImageNet database (Russakovsky et al., 2014) , and enhance it by finetuning such models on challenge-relevant tasks such as angle and position estimation or color classification.
In particular, our submission for challenge stage 2 builds on our submission from stage 1, in which we employed pretrained CNNs to extract convolutional feature maps as a preprocessing step before training a VAE (Kingma and Welling, 2013).
Although this approach already results in partial disentanglement, we identified two issues with the feature vectors extracted this way.
Firstly, the feature extraction network is trained on ImageNet, which is rather dissimilar to the MPI3d dataset used in the challenge.
Secondly, the feature aggregation mechanism was chosen ad-hoc and likely does not retain all information needed for disentanglement.
We attempt to fix these issues by finetuning the feature extraction network as well as learning the aggregation of feature maps from data by using the labels of the simulation datasets MPI3d-toy and MPI3d-realistic.
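A simplified sketch of this preprocessing step is shown below; the ResNet-18 backbone, the 2x2 regional pooling, and the PyTorch interface are placeholder assumptions standing in for whichever pretrained network and aggregation were actually used.

import torch
import torch.nn as nn
from torchvision import models

class RegionalFeatureExtractor(nn.Module):
    # Pretrained CNN -> spatial feature map -> per-region average pooling.
    # The pooled vector is what the downstream VAE is trained on.
    def __init__(self, regions=2):
        super().__init__()
        backbone = models.resnet18(weights="IMAGENET1K_V1")   # ImageNet-pretrained features
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool and fc
        self.pool = nn.AdaptiveAvgPool2d(regions)

    def forward(self, x):
        fmap = self.features(x)          # (B, 512, H', W')
        pooled = self.pool(fmap)         # (B, 512, regions, regions)
        return pooled.flatten(1)         # regionally aggregated feature vector for the VAE

Fine-tuning would attach small prediction heads (e.g., for position, angle, or color) on top of the pooled features and train them with the labels from the simulated datasets before the extractor is used on the real data.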
On the public leaderboard (i.e., on MPI3D-real), our best submission achieves the first rank on the FactorVAE (Kim and Mnih, 2018) and DCI (Eastwood and Williams, 2018) metrics, with a large gap to the second-placed entry.
See appendix A for a discussion of the results.
Unsurprisingly, introducing prior knowledge simplifies the disentanglement task considerably, reflected in improved scores.
To do so, our approach makes use of task-specific supervision obtained from simulation, which restricts its applicability.
Nevertheless, it constitutes a demonstration that this type of supervision can transfer to better disentanglement on real world data, which was one of the goals of the challenge.
|
We use supervised finetuning of feature vectors to improve transfer from simulation to the real world
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:562
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
A reinforcement learning agent that needs to pursue different goals across episodes requires a goal-conditional policy.
In addition to their potential to generalize desirable behavior to unseen goals, such policies may also enable higher-level planning based on subgoals.
In sparse-reward environments, the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended appears crucial to enable sample efficient learning.
However, reinforcement learning agents have only recently been endowed with such capacity for hindsight.
In this paper, we demonstrate how hindsight can be introduced to policy gradient methods, generalizing this idea to a broad class of successful algorithms.
Our experiments on a diverse selection of sparse-reward environments show that hindsight leads to a remarkable increase in sample efficiency.
In a traditional reinforcement learning setting, an agent interacts with an environment in a sequence of episodes, observing states and acting according to a policy that ideally maximizes expected cumulative reward.
If an agent is required to pursue different goals across episodes, its goal-conditional policy may be represented by a probability distribution over actions for every combination of state and goal.
This distinction between states and goals is particularly useful when the probability of a state transition given an action is independent of the goal pursued by the agent.
Learning such goal-conditional behavior has received significant attention in machine learning and robotics, especially because a goal-conditional policy may generalize desirable behavior to goals that were never encountered by the agent (BID17; BID3; Kupcsik et al., 2013; Deisenroth et al., 2014; BID16; BID29; Kober et al., 2012; Ghosh et al., 2018; Mankowitz et al., 2018; BID11).
Consequently, developing goal-based curricula to facilitate learning has also attracted considerable interest (Fabisch & Metzen, 2014; Florensa et al., 2017; BID20; BID19).
In hierarchical reinforcement learning, goal-conditional policies may enable agents to plan using subgoals, which abstracts the details involved in lower-level decisions (BID10; BID26; Kulkarni et al., 2016; Levy et al., 2017).
In a typical sparse-reward environment, an agent receives a non-zero reward only upon reaching a goal state.
Besides being natural, this task formulation avoids the potentially difficult problem of reward shaping, which often biases the learning process towards suboptimal behavior (BID9).
Unfortunately, sparse-reward environments remain particularly challenging for traditional reinforcement learning algorithms (BID0; Florensa et al., 2017).
For example, consider an agent tasked with traveling between cities.
In a sparse-reward formulation, if reaching a desired destination by chance is unlikely, a learning agent will rarely obtain reward signals.
At the same time, it seems natural to expect that an agent will learn how to reach the cities it visited regardless of its desired destinations.
In this context, the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended is called hindsight.
This capacity was recently introduced by BID0 to off-policy reinforcement learning algorithms that rely on experience replay (Lin, 1992).
In earlier work, Karkus et al. (2016) introduced hindsight to policy search based on Bayesian optimization (BID5).
In this paper, we demonstrate how hindsight can be introduced to policy gradient methods (BID27; BID28; BID22), generalizing this idea to a successful class of reinforcement learning algorithms (BID13; Duan et al., 2016).
In contrast to previous work on hindsight, our approach relies on importance sampling (BID2).
In reinforcement learning, importance sampling has been traditionally employed in order to efficiently reuse information obtained by earlier policies during learning (BID15; BID12; Jie & Abbeel, 2010; BID7).
In comparison, our approach attempts to efficiently learn about different goals using information obtained by the current policy for a specific goal.
This approach leads to multiple formulations of a hindsight policy gradient that relate to well-known policy gradient results.
In comparison to conventional (goal-conditional) policy gradient estimators, our proposed estimators lead to remarkable sample efficiency on a diverse selection of sparse-reward environments.
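To make the importance-sampling idea concrete, the following is a schematic estimator for a finite goal set; the trajectory format, the per-goal reward function, and the use of a single per-trajectory importance ratio (instead of the per-decision and weighted variants derived in the paper) are simplifying assumptions.

import torch

def hindsight_policy_gradient_loss(policy, trajectory, goals, reward_fn):
    # Reuse one trajectory, collected while pursuing its original goal, to build
    # policy-gradient terms for every goal in `goals`.
    # (Assumption: `policy(s, g)` returns a torch.distributions object over actions.)
    states, actions = trajectory["states"], trajectory["actions"]
    orig_logp = torch.stack([policy(s, trajectory["goal"]).log_prob(a)
                             for s, a in zip(states, actions)])
    loss = 0.0
    for g in goals:
        logp_g = torch.stack([policy(s, g).log_prob(a) for s, a in zip(states, actions)])
        ratio = torch.exp((logp_g - orig_logp).sum()).detach()   # trajectory-level importance weight
        ret_g = sum(reward_fn(s, a, g) for s, a in zip(states, actions))
        loss = loss - ratio * ret_g * logp_g.sum()
    return loss / len(goals)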
We introduced techniques that enable learning goal-conditional policies using hindsight.
In this context, hindsight refers to the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended.
Prior to our work, hindsight has been limited to off-policy reinforcement learning algorithms that rely on experience replay (BID0) and policy search based on Bayesian optimization (Karkus et al., 2016).
In addition to the fundamental hindsight policy gradient, our technical results include its baseline and advantage formulations.
These results are based on a self-contained goal-conditional policy framework that is also introduced in this text.
Besides the straightforward estimator built upon the per-decision hindsight policy gradient, we also presented a consistent estimator inspired by weighted importance sampling, together with the corresponding baseline formulation.
A variant of this estimator leads to remarkable comparative sample efficiency on a diverse selection of sparse-reward environments, especially in cases where direct reward signals are extremely difficult to obtain.
This crucial feature allows natural task formulations that require just trivial reward shaping.
The main drawback of hindsight policy gradient estimators appears to be their computational cost, which is directly related to the number of active goals in a batch.
This issue may be mitigated by subsampling active goals, which generally leads to inconsistent estimators.
Fortunately, our experiments suggest that this is a viable alternative.
Note that the success of hindsight experience replay also depends on an active goal subsampling heuristic (Andrychowicz et al., 2017, Sec. 4.5).
The inconsistent hindsight policy gradient estimator with a value function baseline employed in our experiments sometimes leads to unstable learning, which is likely related to the difficulty of fitting such a value function without hindsight.
This hypothesis is consistent with the fact that such instability is observed only in the most extreme examples of sparse-reward environments.
Although our preliminary experiments in using hindsight to fit a value function baseline have been successful, this may be accomplished in several ways, and requires a careful study of its own.
Further experiments are also required to evaluate hindsight on dense-reward environments.
There are many possibilities for future work besides integrating hindsight policy gradients into systems that rely on goal-conditional policies: deriving additional estimators; implementing and evaluating hindsight (advantage) actor-critic methods; assessing whether hindsight policy gradients can successfully circumvent catastrophic forgetting during curriculum learning of goal-conditional policies; approximating the reward function to reduce required supervision; analysing the variance of the proposed estimators; studying the impact of active goal subsampling; and evaluating every technique on continuous action spaces.
A.1 THEOREM A.1
Theorem A.1. The gradient ∇η(θ) of the expected return with respect to θ is given by DISPLAYFORM0
Proof. The partial derivative ∂η(θ)/∂θ_j of the expected return η(θ) with respect to θ_j is given by DISPLAYFORM1
The likelihood-ratio trick allows rewriting the previous equation as DISPLAYFORM2
Note that DISPLAYFORM3
Therefore, DISPLAYFORM4
A.2 THEOREM 3.1
Theorem 3.1 (Goal-conditional policy gradient). The gradient ∇η(θ) of the expected return with respect to θ is given by DISPLAYFORM5
Proof. Starting from Eq. 17, the partial derivative ∂η(θ)/∂θ_j of η(θ) with respect to θ_j is given by DISPLAYFORM6
The previous equation can be rewritten as DISPLAYFORM7
Let c denote an expectation inside Eq. 19 for t ≥ t. In that case, A_t ⊥⊥ S_t | S_t, G, Θ, and so DISPLAYFORM8
Reversing the likelihood-ratio trick, DISPLAYFORM9
Therefore, the terms where t ≥ t can be dismissed from Eq. 19, leading to DISPLAYFORM10
The previous equation can be conveniently rewritten as DISPLAYFORM11
A.3 LEMMA A.1
Lemma A.1. For every j, t, θ, and associated real-valued (baseline) function b, DISPLAYFORM12
Proof. Letting c denote an expectation inside Eq. 24, DISPLAYFORM13
Reversing the likelihood-ratio trick, DISPLAYFORM14
A.4 THEOREM 3.2
Theorem 3.2 (Goal-conditional policy gradient, baseline formulation). For every t, θ, and associated real-valued (baseline) function b_t^θ, the gradient ∇η(θ) of the expected return with respect to θ is given by DISPLAYFORM15
Proof. The result is obtained by subtracting Eq. 24 from Eq. 23.
Importantly, for every combination of θ and t, it would also be possible to have a distinct baseline function for each parameter in θ.
A.5 LEMMA A.2
Lemma A.2. The gradient ∇η(θ) of the expected return with respect to θ is given by DISPLAYFORM16
Proof. Starting from Eq. 23 and rearranging terms, DISPLAYFORM17
By the definition of the action-value function, DISPLAYFORM18
A.6 THEOREM 3.3
Theorem 3.3 (Goal-conditional policy gradient, advantage formulation). The gradient ∇η(θ) of the expected return with respect to θ is given by DISPLAYFORM19
Proof. The result is obtained by choosing b_t^θ = V_t^θ and subtracting Eq. 24 from Eq. 29.
A.7 THEOREM A.2
For arbitrary j and θ, consider the following definitions of f and h. DISPLAYFORM20 DISPLAYFORM21
For every b_j ∈ R, using Theorem 3.1 and the fact that DISPLAYFORM22
Proof. The result is an application of Lemma D.4.
The following theorem relies on importance sampling, a traditional technique used to obtain estimates related to a random variable X ∼ p using samples from an arbitrary positive distribution q.
This technique relies on the following equalities:
|
We introduce the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended to policy gradient methods.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:563
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Computer vision has undergone a dramatic revolution in performance, driven in large part through deep features trained on large-scale supervised datasets.
However, much of these improvements have focused on static image analysis; video understanding has seen rather modest improvements.
Even though new datasets and spatiotemporal models have been proposed, simple frame-by-frame classification methods often still remain competitive.
We posit that current video datasets are plagued with implicit biases over scene and object structure that can dwarf variations in temporal structure.
In this work, we build a video dataset with fully observable and controllable object and scene bias, and which truly requires spatiotemporal understanding in order to be solved.
Our dataset, named CATER, is rendered synthetically using a library of standard 3D objects, and tests the ability to recognize compositions of object movements that require long-term reasoning.
In addition to being a challenging dataset, CATER also provides a plethora of diagnostic tools to analyze modern spatiotemporal video architectures by being completely observable and controllable.
Using CATER, we provide insights into some of the most recent state of the art deep video architectures.
While deep features have revolutionized static image analysis, video descriptors have struggled to outperform classic hand-crafted descriptors (Wang & Schmid, 2013) .
Though recent works have shown improvements by merging image and video models by inflating 2D models to 3D (Carreira & Zisserman, 2017; Feichtenhofer et al., 2016) , simpler 2D models (Wang et al., 2016b) still routinely appear among top performers in video benchmarks such as the Kinetics Challenge at CVPR'17.
This raises the natural question: are videos trivially understandable by simply averaging the predictions over a sampled set of frames?
At some level, the answer must be no. Reasoning about high-level cognitive concepts such as intentions, goals, and causal relations requires reasoning over long-term temporal structure and order (Shoham, 1987; Bobick, 1997) .
Consider, for example, the movie clip in Fig. 1 (a) , where an actor leaves the table, grabs a firearm from another room, and returns.
Even though no gun is visible in the final frames, an observer can easily infer that the actor is surreptitiously carrying the gun.
Needless to say, any single frame from the video seems incapable of supporting that inference, and one needs to reason over space and time in order to reach that conclusion.
As a simpler instance of the problem, consider the cup-and-balls magic routine, or the gambling-based shell game, as shown in Fig. 1(b).
In these games, an operator puts a target object (ball) under one of multiple container objects (cups), and moves them about, possibly revealing the target at various times and recursively containing cups within other cups.
The task at the end is to tell which of the cups is covering the ball.
Even in its simplest instantiation, one can expect any human or computer system that solves this task to require the ability to model state of the world over long temporal horizons, reason about occlusion, understand the spatiotemporal implications of containment, etc.
An important aspect of both our motivating examples is the adversarial nature of the task, where the operator in control is trying to make the observer fail.
Needless to say, a frame by frame prediction model would be incapable of solving such tasks.
Figure 1: Real world video understanding.
Consider this iconic movie scene from The Godfather in
(a), where the protagonist leaves the table, goes to the bathroom to extract a hidden firearm, and returns to the table presumably with the intentions of shooting a person.
While the gun itself is visible in only a few frames of the whole clip, it is trivial for us to realize that the protagonist has it in the last frame.
An even simpler instantiation of such a reasoning task could be the cup-and-ball shell game in
(b) , where the task is to determine which of the cups contain the ball at the end of the trick.
Can we design similarly hard tasks for computers?
Given these motivating examples, why don't spatiotemporal models dramatically outperform their static counterparts for video understanding?
We posit that this is due to limitations of existing video benchmarks.
Even though video datasets have evolved from the small regime with tens of labels (Soomro et al., 2012; Kuehne et al., 2011; Schuldt et al., 2004) to large with hundreds of labels (Sigurdsson et al., 2016; Kay et al., 2017) , tasks have remained highly correlated to the scene and object context.
For example, it is trivial to recognize a swimming action given a swimming pool in the background (He et al., 2016b) .
This is further reinforced by the fact that state of the art pose-based action recognition models are outperformed by simpler frame-level models (Wang et al., 2016b) on the Kinetics (Kay et al., 2017) benchmark, with a difference of nearly 45% in accuracy!
Sigurdsson et al. also found similar results for their Charades (Sigurdsson et al., 2016) benchmark, where adding ground truth object information gave the largest boosts to action recognition performance (Sigurdsson et al., 2017) .
In this work, we take an alternate approach to developing a video understanding dataset.
Inspired by the recent CLEVR dataset (Johnson et al., 2017) (that explores spatial reasoning in tabletop scenes) and inspired by the adversarial parlor games above (that require temporal reasoning), we introduce CATER, a diagnostic dataset for Compositional Actions and TEmporal Reasoning in dynamic tabletop scenes.
We define three tasks on the dataset, each with an increasingly higher level of complexity, but set up as classification problems in order to be comparable to existing benchmarks for easy transfer of existing models and approaches.
Specifically, we consider primitive action recognition, compositional action recognition, and adversarial target tracking under occlusion and containment.
However, note that this does not limit the usability of our dataset to these tasks, and we provide full metadata with the rendered videos that can be used for more complex, structured prediction tasks like detection, tracking, forecasting, and so on.
Our dataset does not model an operator (or hand) moving the tabletop objects, though this could be simulated as well in future variants, as in (Rogez et al., 2015) .
Being synthetic, CATER can easily be scaled up in size and complexity.
It also allows for detailed model diagnostics by controlling various dataset generation parameters.
We use CATER to benchmark state-of-the-art video understanding models (Hochreiter & Schmidhuber, 1997), and show that even the best models struggle on our dataset.
We also uncover some insights into the behavior of these models by changing parameters such as the temporal duration of an occlusion, the degree of camera motion, etc., which are difficult to both tune and label in real-world video data.
We use CATER to analyze several leading network designs on hard spatiotemporal tasks.
We find most models struggle on our proposed dataset, especially on the snitch localization task which requires long term reasoning.
Interestingly, average pooling clip predictions or short temporal cues (optical flow) perform rather poorly on CATER, unlike most previous benchmarks.
Such temporal reasoning challenges are common in the real world (e.g., Fig. 1(a)), and solving them would be the cornerstone of the next improvements in machine video understanding.
We believe CATER would serve as an intermediary in building systems that will reason over space and time to understand actions.
That said, CATER is, by no means, a complete solution to the video understanding problem.
Like any other synthetic or simulated dataset, it should be considered in addition to real world benchmarks.
While we have focused on classification tasks for simplicity, our fully-annotated dataset can be used for much richer parsing tasks such as spacetime action localization.
One of our findings is that while high-level semantic tasks such as activity recognition may be addressable with current architectures given a richly labeled dataset, "mid-level" tasks such as tracking still pose tremendous challenges, particularly under long-term occlusions and containment.
We believe addressing such challenges will enable broader temporal reasoning tasks that capture intentions, goals, and causal behavior.
We analyze the top most confident
a) correct and
b) incorrect predictions on the test videos for the localization task.
For each video, we show the last frame, followed by a top-down view of the 6 × 6 grid.
The grid is further overlayed with:
1) the ground truth positions of the snitch over time, shown as a golden trail that fades in color over time (brighter yellow depicts later positions); and
2) the softmax prediction confidence scores for each location (black is low, white is high).
The model has easiest time classifying the location when the snitch does not move much or moves early on in the video.
Full video in supplementary.
|
We propose a new video understanding benchmark, with tasks that by-design require temporal reasoning to be solved, unlike most existing video datasets.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:564
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We address the efficiency issues caused by the straggler effect in the recently emerged federated learning, which collaboratively trains a model on decentralized non-i.i.d. (non-independent and identically distributed) data across massive worker devices without exchanging training data in the unreliable and heterogeneous networks.
We propose a novel two-stage analysis on the error bounds of general federated learning, which provides practical insights into optimization.
As a result, we propose a novel, easy-to-implement federated learning algorithm that uses asynchronous settings together with strategies that control the discrepancies between the global model and delayed models and adjust the number of local epochs based on estimated staleness, in order to accelerate convergence and resist the performance deterioration caused by stragglers.
Experimental results show that our algorithm converges quickly and is robust to the existence of massive stragglers.
Distributed machine learning has received increasing attention in recent years, e.g., distributed stochastic gradient descent (DSGD) approaches (Gemulla et al., 2011; Lan et al., 2017) and the well-known parameter server paradigm (Agarwal & Duchi, 2011; Li et al., 2013; 2014) .
However, these approaches always suffer from communication overhead and privacy risk (McMahan et al., 2017) .
Federated learning (FL) (Konečnỳ et al., 2016 ) is proposed to alleviate the above issues, where a subset of devices are randomly selected, and training data in devices are locally kept when training a global model, thus reducing communication and protecting user privacy.
Furthermore, FL approaches are dedicated to a more complex context with
1) non-i.i.d. (Non-independent and identically distributed), unbalanced and heterogeneous data in devices,
2) constrained computing resources with unreliable connections and unstable environments (McMahan et al., 2017; Konečnỳ et al., 2016) .
Typically, FL approaches apply weight averaging methods for model aggregation, e.g., FedAvg (McMahan et al., 2017) and its variants (Sahu et al., 2018; Wang et al., 2018; Kamp et al., 2018; Leroy et al., 2019; Nishio & Yonetani, 2019) .
Such methods are similar to the synchronous distributed optimization domain.
However, synchronous optimization methods are costly in synchronization (Chen et al., 2018) , and they are potentially inefficient due to the synchrony even when collecting model updates from a much smaller subset of devices (Xie et al., 2019b) .
Besides, waiting time for slow devices (i.e., stragglers or stale workers) is inevitable due to the heterogeneity and unreliability as mentioned above.
The existence of such devices has been shown to affect the convergence of FL (Chen et al., 2018).
To address this problem, scholars propose asynchronous federated learning (AFL) methods (Xie et al., 2019a; Mohammad & Sorour, 2019; Samarakoon et al., 2018) that allow model aggregation without waiting for slow devices.
However, asynchrony magnifies the straggler effect because 1) when the server node receives models uploaded by slow workers, it has probably already updated the global model many times, and 2) real-world data are usually heavy-tailed across distributed heterogeneous devices, where the rich get richer, i.e., the straggler effect accumulates when no adjustment is applied to stale workers and eventually affects the convergence of the global model.
Furthermore, the dynamics of AFL bring more challenges in parameter tuning and the speed-accuracy trade-off, and guidelines for designing efficient and stale-robust algorithms in this context are still missing.
Contributions Our main contributions are summarized as follows.
We first establish a new two-stage analysis on federated learning, namely training error decomposition and convergence analysis.
To the best of our knowledge, it is the first analysis based on the above two stages that addresses the optimization roadmap for general federated learning in its entirety.
Such analysis provides insight into designing efficient and stale-robust federated learning algorithms.
By following the guidelines of the above two stages, we propose a novel FL algorithm with asynchronous settings and a set of easy-to-implement training strategies.
Specifically, the algorithm controls model training by estimating model consistency and dynamically adjusting the number of local epochs on straggler workers to reduce the impact of staleness on the convergence of the global model.
We conduct experiments to evaluate the efficiency and robustness of our algorithm on imbalanced and balanced data partitions with different proportions of straggler worker nodes.
Results show that our approach converges fast and is robust in the presence of straggler worker nodes, compared to the state-of-the-art solutions.
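As a rough illustration of the strategy described above, the sketch below shows how an asynchronous server might damp stale updates and shrink the number of local epochs for stale workers; the 1/(1+staleness) schedules and the mixing weight are assumptions for illustration, not the paper's exact rules.

```python
import numpy as np

def aggregate(global_w, local_w, staleness, base_mix=0.5):
    """Asynchronously mix a (possibly stale) local model into the global model.

    The mixing weight decays with staleness so that very delayed updates
    perturb the global model less. The 1/(1+staleness) schedule is an
    illustrative assumption, not the paper's exact rule.
    """
    alpha = base_mix / (1.0 + staleness)
    return (1.0 - alpha) * global_w + alpha * local_w

def local_epochs(base_epochs, staleness, min_epochs=1):
    """Reduce the number of local epochs on workers estimated to be stale."""
    return max(min_epochs, int(round(base_epochs / (1.0 + staleness))))

# toy usage: a worker whose model is three global versions behind
global_w = np.zeros(10)
local_w = np.ones(10)
staleness = 3
print(local_epochs(base_epochs=5, staleness=staleness))   # fewer local epochs for the stale worker
print(aggregate(global_w, local_w, staleness)[:3])        # damped asynchronous update
```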
Related Work Our work targets AFL and staleness-resilience approaches in this context.
Straggler effect (also called staleness) is one of the main problems in the similar asynchronous gradient descent (Async-SGD) approaches, which has been discussed by various studies and its remedies have been proposed (Hakimi et al., 2019; Lian et al., 2015; Chen et al., 2016; Cui et al., 2016; Chai et al., 2019; Zheng et al., 2017; Dai et al., 2018; Hakimi et al., 2019) .
However, these works are mainly targeting the distributed Async-SGD scenarios, which is different from FL as discussed in the previous section.
Existing FL solutions that address the straggler effect are mainly consensus-based.
Consensus mechanisms are introduced where a threshold metric (i.e., control variable) is computed, and only the workers who satisfy this threshold are permitted to upload their model (Chen et al., 2018; Smith et al., 2017; Nishio & Yonetani, 2019) .
Thus it significantly reduces the number of communications and updates the model without waiting for straggler workers.
However, current approaches are mainly focusing on synchronized FL.
Xie et al. (2019a) propose an AFL algorithm which uses a mixing hyperparameter to adaptively control the trade-off between the convergence speed and error reduction on staleness.
However, this work and the above-mentioned FL solutions only consider the staleness caused by network delay rather than by imbalanced data sizes across workers, and they only evaluate on local data of equal size, which is inconsistent with real-world cases.
Our approach is similar to (Xie et al., 2019a), but we instead adaptively control the number of local epochs combined with an approximation of staleness and model discrepancy, and we prove a performance guarantee on imbalanced data partitions.
We illustrate our approach in the rest of this paper.
In this paper, we propose a new two-stage analysis on federated learning, and inspired by such analysis, we propose a novel AFL algorithm that accelerates convergence and resists performance deterioration caused by stragglers simultaneously.
Experimental results show that our approach converges two times faster than baselines and resists the straggler effect without sacrificing accuracy or communication efficiency.
As a byproduct, our approach improves the generalization ability of neural network models.
We will theoretically analyze it in future work.
Besides, while not the focus of our work, security and privacy are essential concerns in federated learning, and as the future work, we can apply various security methods to our approach.
Figure: We test the performance with 20%, 60%, 80%, and 90% stale workers, respectively. The green dotted line is FedAvg, which waits for all selected workers.
Furthermore, besides the stale-resistance ability, the discrepancy estimation in our method also has the potential to resist malicious attacks on the worker nodes, such as massive Byzantine attacks, which have been addressed in (Bagdasaryan et al., 2018; Li et al., 2019; Muñoz-González et al., 2019).
We will analyze and evaluate such ability in future work.
|
We propose an efficient and robust asynchronous federated learning algorithm in the presence of stragglers
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:565
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Long Short-Term Memory (LSTM) units have the ability to memorise and use long-term dependencies between inputs to generate predictions on time series data.
We introduce the concept of modifying the cell state (memory) of LSTMs using rotation matrices parametrised by a new set of trainable weights.
This addition shows significant increases of performance on some of the tasks from the bAbI dataset.
In recent years, Recurrent Neural Networks (RNNs) have been successfully used to tackle problems with data that can be represented in the shape of time series.
Application domains include Natural Language Processing (NLP) (translation BID12 , summarisation BID9 , question answering and more), speech recognition (BID5 BID3 ), text-to-speech systems BID0 , computer vision tasks BID13 BID16 , and differentiable programming language interpreters (BID10 BID11 ).
An intuitive explanation for the success of RNNs in fields such as natural language understanding is that they allow words at the beginning of a sentence or paragraph to be memorised.
This can be crucial to understanding the semantic content.
Thus in the phrase "The cat ate the fish" it is important to memorise the subject (cat).
However, often later words can change the meaning of a sentence in subtle ways.
For example, "The cat ate the fish, didn't it" changes a simple statement into a question.
In this paper, we study a mechanism to enhance a standard RNN to enable it to modify its memory, with the hope that this will allow it to capture sequence information in the memory cells using a shorter and more robust representation.
One of the most used RNN units is the Long Short-Term Memory (LSTM) BID7 .
The core of the LSTM is that each unit has a cell state that is modified in a gated fashion at every time step.
At a high level, the cell state has the role of providing the neural network with memory to hold long-term relationships between inputs.
There are many small variations of LSTM units in the literature and most of them yield similar performance BID4 .
The memory (cell state) is expected to encode the information necessary to make the next prediction.
Currently the ability of LSTMs to rotate and swap memory positions is limited to what can be achieved using the available gates.
In this work we introduce a new operation on the memory that explicitly enables rotations and swaps of pairwise memory elements.
Our preliminary tests show performance improvements on some of the bAbI tasks compared with LSTM-based architectures.
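The following sketch illustrates the kind of operation introduced above: pairwise 2D rotations of the cell state with angles produced by a trainable weight matrix (named `W_rot` here, mirroring the `W rot` mentioned later in the text). The tanh squashing of the angles and the exact wiring are assumptions of this sketch, not the authors' equations.

```python
import numpy as np

def rotate_cell_state(c, u, W_rot):
    """Apply trainable pairwise 2D rotations to an LSTM cell state.

    c     : cell state, shape (hidden,) with an even hidden size
    u     : per-step summary vector used to produce rotation angles
    W_rot : trainable weights mapping u to one angle per pair of cell units

    This is a sketch of the idea described above, not the exact RotLSTM
    equations; the tanh squashing of the angles is an assumption.
    """
    angles = np.pi * np.tanh(u @ W_rot)           # one angle per 2D pair
    c_pairs = c.reshape(-1, 2)                    # group cell units in pairs
    cos, sin = np.cos(angles), np.sin(angles)
    rotated = np.stack([cos * c_pairs[:, 0] - sin * c_pairs[:, 1],
                        sin * c_pairs[:, 0] + cos * c_pairs[:, 1]], axis=1)
    return rotated.reshape(-1)

# toy usage with hidden size 6 (three rotated pairs)
rng = np.random.default_rng(0)
c = rng.normal(size=6)
u = rng.normal(size=4)
W_rot = rng.normal(size=(4, 3))
print(rotate_cell_state(c, u, W_rot))
```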
A limitation of the models in our experiments is only applying pairwise 2D rotations.
Representations of past input can be larger groups of the cell state vector, thus 2D rotations might not fully exploit the benefits of transformations.
In the future we hope to explore rotating groups of elements and multi-dimensional rotations.
Rotating groups of elements of the cell state could potentially also force the models to learn a more structured representation of the world, similar to how forcing a model to learn specific representations of scenes, as presented in BID6 , yields semantic representations of the scene.
Rotations also need not be fully flexible.
Introducing hard constraints on the rotations and what groups of parameters can be rotated might lead the model to learn richer memory representations.
Future work could explore how adding such constraints impacts learning times and final performance on different datasets, but also look at what constraints can qualitatively improve the representation of long-term dependencies.
In this work we presented preliminary tests for adding rotations to simple models, but we only used a toy dataset.
The bAbI dataset has certain advantages such as being small thus easy to train many models on a single machine, not having noise as it is generated from a simulation, and having a wide range of tasks of various difficulties.
However it is a toy dataset that has a very limited vocabulary and lacks the complexity of real world datasets (noise, inconsistencies, larger vocabularies, more complex language constructs, and so on).
Another limitation of our evaluation is only using text, specifically question answering.
To fully evaluate the idea of adding rotations to memory cells, in the future we aim to look into incorporating our rotations in different domains and tasks, including speech to text, translation, language generation, stock prices, and other common problems using real-world datasets.
Tuning the hyperparameters of the rotation models might give better insights and performance increases, and is something we aim to incorporate in our training pipeline in the future.
A brief exploration of the angles produced by u and the weight matrix W rot shows that u does not saturate, thus rotations are in fact applied to our cell states and do not converge to 0 (or 360 degrees).
A more in-depth qualitative analysis of the rotation gate is planned for future work.
Peeking into the activations of our rotation gates could help understand the behaviour of rotations and to what extent they help better represent long-term memory.
A very successful and popular variant of the LSTM is the Gated Recurrent Unit (GRU) BID1 .
The GRU only has an output as opposed to both a cell state and an output and uses fewer gates.
In the future we hope to explore adding rotations to GRU units and whether we can obtain similar results.
We have introduced a novel gating mechanism for RNN units that enables applying a parametrised transformation matrix to the cell state.
We picked pairwise 2D rotations as the transformation and showed how this can be added to the popular LSTM units to create what we call RotLSTM.
Figure 3: Accuracy comparison on training, validation (val) and test sets over 40 epochs for LSTM and RotLSTM models.
The models were trained 10 times; shown is the average accuracy, with the standard deviation in faded colour.
Test set accuracy was computed every 10 epochs.
We trained a simple model using RotLSTM units and compared it with the same model based on LSTM units.
We show that for the LSTM-based architectures adding rotations has a positive impact on most bAbI tasks, making the training require fewer epochs to achieve similar or higher accuracy.
On some tasks the RotLSTM model can use a lower dimensional cell state vector and maintain its performance.
Significant accuracy improvements of approximately 20% for the RotLSTM model over the LSTM model are visible on bAbI tasks 5 (three argument relations) and 18 (reasoning about size).
|
Adding a new set of weights to the LSTM that rotate the cell memory improves performance on some bAbI tasks.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:566
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We address the problem of marginal inference for an exponential family defined over the set of permutation matrices.
This problem is known to quickly become intractable as the size of the permutation increases, since its involves the computation of the permanent of a matrix, a #P-hard problem.
We introduce Sinkhorn variational marginal inference as a scalable alternative, a method whose validity is ultimately justified by the so-called Sinkhorn approximation of the permanent.
We demonstrate the effectiveness of our method on the problem of probabilistic identification of neurons in the worm C. elegans.
Let P ∈ R n×n be a binary matrix representing a permutation of n elements (i.e. each row and column of P contains a unique 1).
We consider the distribution over P defined as $p(P) = \exp(\langle P, \log L\rangle_F)/Z_L$, where $\langle A, B\rangle_F$ is the Frobenius matrix inner product, $\log L$ is a parameter matrix and $Z_L$ is the normalizing constant.
Here we address the problem of marginal inference, i.e. computing the matrix of expectations ρ := E(P).
This problem is known to be intractable since it requires access to $Z_L$, also known as the permanent of L, whose computation is a #P-hard problem (Valiant, 1979).
To overcome this difficulty we introduce Sinkhorn variational marginal inference, which can be computed efficiently and is straightforward to implement.
Specifically, we approximate ρ as S(L), the Sinkhorn operator applied to L (Sinkhorn, 1964) .
S(L) is defined as the (infinite) successive row and column normalization of L (Adams and Zemel, 2011; , a limit that is known to result in a doubly stochastic matrix (Altschuler et al., 2017) .
In section 2 we argue the Sinkhorn approximation is sensible, and in section 3 we describe the problem of probabilistic inference of neural identity in C.elegans and demonstrate the Sinkhorn approximation produces the best results.
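A minimal sketch of the Sinkhorn operator described above: repeated row and column normalization of a positive matrix, whose (approximately doubly stochastic) limit is used as the marginal estimate. The iteration count and numerical epsilon are illustrative choices.

```python
import numpy as np

def sinkhorn(L, n_iters=50, eps=1e-8):
    """Approximate the marginals of the permutation distribution by S(L):
    repeated row and column normalization of the positive matrix L."""
    S = np.asarray(L, dtype=float)
    for _ in range(n_iters):
        S = S / (S.sum(axis=1, keepdims=True) + eps)  # row normalization
        S = S / (S.sum(axis=0, keepdims=True) + eps)  # column normalization
    return S  # approximately doubly stochastic; used as the estimate of rho

# toy usage: a 3x3 likelihood matrix
L = np.array([[4.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, 2.0]])
rho_hat = sinkhorn(L)
print(rho_hat.sum(axis=0), rho_hat.sum(axis=1))  # both close to vectors of ones
```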
We have introduced the Sinkhorn approximation for marginal inference and argued that it is a sensible alternative to sampling; it may provide faster, simpler and more accurate approximate marginals than the Bethe approximation, despite typically leading to worse permanent approximations.
We leave for future work a thorough analysis of the relation between quality of permanent approximation and corresponding marginals.
Also, it can be verified that S(L) = diag(x) L diag(y), where diag(x), diag(y) are some positive vectors x, y turned into diagonal matrices (Peyré et al., 2019). Then,
Additionally, we obtain the (log) Sinkhorn approximation of the permanent of L, perm S (L), by evaluating S(L) in the problem it solves, (2.3).
By simple algebra and using the fact that S(L) is a doubly stochastic matrix we see that
By combining the last three displays we obtain
from which the result follows.
|
New methodology for variational marginal inference of permutations based on Sinkhorn algorithm, applied to probabilistic identification of neurons
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:567
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The robustness of neural networks to adversarial examples has received great attention due to security implications.
Despite various attack approaches to crafting visually imperceptible adversarial examples, little has been developed towards a comprehensive measure of robustness.
In this paper, we provide theoretical justification for converting robustness analysis into a local Lipschitz constant estimation problem, and propose to use the Extreme Value Theory for efficient evaluation.
Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness.
The proposed CLEVER score is attack-agnostic and is computationally feasible for large neural networks.
Experimental results on various networks, including ResNet, Inception-v3 and MobileNet, show that (i) CLEVER is aligned with the robustness indication measured by the $\ell_2$ and $\ell_\infty$ norms of adversarial examples from powerful attacks, and (ii) defended networks using defensive distillation or bounded ReLU indeed give better CLEVER scores.
To the best of our knowledge, CLEVER is the first attack-independent robustness metric that can be applied to any neural network classifiers.
Recent studies have highlighted the lack of robustness in state-of-the-art neural network models, e.g., a visually imperceptible adversarial image can be easily crafted to mislead a well-trained network BID28 BID9 BID3 .
Even worse, researchers have identified that these adversarial examples are not only valid in the digital space but also plausible in the physical world BID17 BID8 .
The vulnerability to adversarial examples calls into question safety-critical applications and services deployed by neural networks, including autonomous driving systems and malware detection protocols, among others.
In the literature, studying adversarial examples of neural networks has a twofold purpose: (i) security implications: devising effective attack algorithms for crafting adversarial examples, and (ii) robustness analysis: evaluating the intrinsic model robustness to adversarial perturbations of normal examples.
Although in principle the means of tackling these two problems are expected to be independent, that is, the evaluation of a neural network's intrinsic robustness should be agnostic to attack methods, and vice versa, existing approaches extensively use different attack results as a measure of robustness of a target neural network.
Specifically, given a set of normal examples, the attack success rate and distortion of the corresponding adversarial examples crafted from a particular attack algorithm are treated as robustness metrics.
Consequently, the network robustness is entangled with the attack algorithms used for evaluation and the analysis is limited by the attack capabilities.
More importantly, the dependency between robustness evaluation and attack approaches can cause biased analysis.
For example, adversarial training is a commonly used technique for improving the robustness of a neural network, accomplished by generating adversarial examples and retraining the network with corrected labels.
However, while such an adversarially trained network is made robust to attacks used to craft adversarial examples for training, it can still be vulnerable to unseen attacks.
Motivated by the evaluation criterion for assessing the quality of text and image generation that is completely independent of the underlying generative processes, such as the BLEU score for texts BID25 and the INCEPTION score for images BID27 , we aim to propose a comprehensive and attack-agnostic robustness metric for neural networks.
Stemming from a perturbation analysis of an arbitrary neural network classifier, we derive a universal lower bound on the minimal distortion required to craft an adversarial example from an original one, where the lower bound applies to any attack algorithm and any p norm for p ≥ 1.
We show that this lower bound is associated with the maximum norm of the local gradients with respect to the original example, and therefore robustness evaluation becomes a local Lipschitz constant estimation problem.
To efficiently and reliably estimate the local Lipschitz constant, we propose to use extreme value theory BID6 for robustness evaluation.
In this context, the extreme value corresponds to the local Lipschitz constant of interest, which can be inferred from a set of independently and identically sampled local gradients.
With the aid of extreme value theory, we propose a robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness.
We note that CLEVER is an attack-independent robustness metric that applies to any neural network classifier.
In contrast, the robustness metric proposed in BID11 , albeit attack-agnostic, only applies to a neural network classifier with one hidden layer.
We highlight the main contributions of this paper as follows:
• We propose a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness. To the best of our knowledge, CLEVER is the first robustness metric that is attack-independent, can be applied to any arbitrary neural network classifier, and scales to large networks for ImageNet.
• The proposed CLEVER score is well supported by our theoretical analysis on formal robustness guarantees and the use of extreme value theory. Our robustness analysis extends the results in BID11 from continuously differentiable functions to a special class of non-differentiable functions: neural networks with ReLU activations.
• We corroborate the effectiveness of CLEVER by conducting experiments on state-of-the-art models for ImageNet, including ResNet BID10 , Inception-v3 BID29 and MobileNet (Howard et al., 2017). We also use CLEVER to investigate defended networks against adversarial examples, including the use of defensive distillation BID23 and bounded ReLU BID34 . Experimental results show that our CLEVER score aligns well with the attack-specific robustness indicated by the $\ell_2$ and $\ell_\infty$ distortions of adversarial examples.
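The sketch below illustrates the gradient-sampling and extreme value fitting idea described above on a toy differentiable margin function; the uniform sampling scheme, the reverse Weibull fit via scipy.stats.weibull_max, and the toy margin function are assumptions for illustration rather than the exact CLEVER procedure.

```python
import numpy as np
from scipy.stats import weibull_max

def clever_like_score(margin_fn, grad_fn, x0, radius, n_batches=50, batch_size=100, p=2, seed=0):
    """Sample points around x0, record the maximum dual-norm gradient of the
    class margin per batch, fit a reverse Weibull to those maxima, and divide
    the margin at x0 by the estimated local Lipschitz constant."""
    rng = np.random.default_rng(seed)
    q = 1 if p == np.inf else (np.inf if p == 1 else p / (p - 1))  # dual norm of p
    maxima = []
    for _ in range(n_batches):
        deltas = rng.uniform(-radius, radius, size=(batch_size, x0.size))
        norms = [np.linalg.norm(grad_fn(x0 + d), ord=q) for d in deltas]
        maxima.append(max(norms))
    _, loc, _ = weibull_max.fit(maxima)        # loc estimates the local Lipschitz constant
    return margin_fn(x0) / max(loc, 1e-12)

# toy usage on a hypothetical two-input margin g(x) = w . tanh(x) + 0.5
w = np.array([1.5, -2.0])
margin = lambda x: float(w @ np.tanh(x)) + 0.5
grad = lambda x: w * (1.0 - np.tanh(x) ** 2)   # analytic gradient of the toy margin
print(clever_like_score(margin, grad, x0=np.array([1.0, 0.0]), radius=0.5))
```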
In this paper, we propose the CLEVER score, a novel and generic metric to evaluate the robustness of a target neural network classifier to adversarial examples.
Compared to the existing robustness evaluation approaches, our metric has the following advantages: (i) attack-agnostic; (ii) applicable to any neural network classifier; (iii) comes with strong theoretical guarantees; and (iv) is computationally feasible for large neural networks.
Our extensive experiments show that the CLEVER score well matches the practical robustness indication of a wide range of natural and defended networks.
|
We propose the first attack-independent robustness metric, a.k.a CLEVER, that can be applied to any neural network classifier.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:568
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Multi-agent collaboration is required by numerous real-world problems.
Although a distributed setting is usually adopted by practical systems, local-range communication and information aggregation still matter in fulfilling complex tasks.
For multi-agent reinforcement learning, many previous studies have been dedicated to design an effective communication architecture.
However, existing models usually suffer from an ossified communication structure, e.g., most of them predefine a particular communication mode by specifying a fixed time frequency and spatial scope for agents to communicate regardless of necessity.
Such design is incapable of dealing with multi-agent scenarios that are capricious and complicated, especially when only partial information is available.
Motivated by this, we argue that the solution is to build a spontaneous and self-organizing communication (SSoC) learning scheme.
By treating the communication behaviour as an explicit action, SSoC learns to organize communication in an effective and efficient way.
Particularly, it enables each agent to spontaneously decide when and who to send messages based on its observed states.
In this way, a dynamic inter-agent communication channel is established in an online and self-organizing manner.
The agents also learn how to adaptively aggregate the received messages and its own hidden states to execute actions.
Various experiments have been conducted to demonstrate that SSoC really learns intelligent message passing among agents located far apart.
With such agile communications, we observe that effective collaboration tactics emerge which have not been mastered by the compared baselines.
Many real-world applications involve the participation of multiple agents, for example, multi-robot control BID12 , network packet delivery BID20 and autonomous vehicle planning BID0 , etc.
Learning such systems is ideally required to be autonomous (e.g., using reinforcement learning).
Recently, with the rise of deep learning, deep reinforcement learning (RL) has demonstrated many exciting results in several challenging scenarios, e.g., robotic manipulation BID3 [10], visual navigation BID22 BID10 , as well as the well-known application in game playing BID13 [17], etc.
However, unlike its success in solving single-agent tasks, deep RL still faces many challenges in solving multi-agent learning scenarios.
Modeling multiple agents has two extreme solutions: one is treating all agents as a unity and applying a single centralized framework; the other is modelling the agents as completely independent learners.
Studies following the former design are often known as the "centralized approach", for example BID18 BID14 , etc.
The obvious advantage of this class of approaches is a good guarantee of optimality since it is equivalent to the single agent Markov decision process (MDP) essentially.
However, it is usually unfeasible to assume a global controller that knows everything about the environment in practice.
The other class of methods can be marked as "independent multi-agent reinforcement learning".
These approaches assumes a totally independent setting in which the agents treat all others as a part of the observed environment.
BID2 has pointed out that such a setup will suffer from the problem of non-stationarity, which renders it hard to learn an optimal joint policy.
In essence, there are three key factors that determine a communication.
That is when, where and how the participants initiate the communication.
Most existing approaches, including the above-mentioned Meanfield and Commnet, try to predefine each ingredient and thus lead to an inflexible communication architecture.
Recently, VAIN BID4 and ATOC BID6 incorporate attentional communication for collaborative multi-agent reinforcement learning.
Compared with Meanfield and Commnet, VAIN and ATOC have made one step further towards more flexible communication.
However, the step is still limited.
Take ATOC as an example, although it learns a dynamic attention to diversify agent messages, the message flow is only limited to the local range.
This is unfavorable for learning complex and long range communications.
The communication time is also specified manually (every ten steps).
Hence it is necessary to find a new method that allows more flexible communication over both learnable time and scope.
In this regard, we propose a new solution with learnable spontaneous communication behaviours and a self-organizing message flow among agents.
The proposed architecture is named as "Spontaneous and Self-Organizing Communication" (SSoC) network.
The key to such a spontaneous communication lies in the design that the communication is treated as an action to be learned in a reinforcement manner.
The corresponding action is called "Speak".
Each agent is eligible to take such an action based on its current observation.
Once an agent decides to "Speak", it sends a message to partners within the communication scope.
In the next step, agents receiving this message will decide whether to pass the message forward to more distant agents or keep silent.
This is exactly how SSoC distinguishes itself from existing approaches.
Instead of predestining when and who will participate in the communication, SSoC agents start communication only when necessary and stop transferring received messages if they are useless.
A self-organizing communication policy is learned via maximizing the total collaborative reward.
The communication process of SSoC is depicted in Fig.1 .
It shows an example of the message flow among four communicating agents.
Specifically, agent 3 sends a message to ask for help from remote partners.
Due to agent 3's communication range, the message can be seen only by agent 1.
Then agent 1 decides to transfer the collected message to its neighbors.
Finally agent 2 and agent 4 read the messages from agent 3.
These two agents are directly unreachable from agent 3.
In this way, each agent learns to send or transfer messages spontaneously and finally form a communication route.
Compared with the communication channels predefined in previous works, the communication here is dynamically changing according to real needs of the participating agents.
Hence the communication manner forms a self-organizing mechanism.
We instantiate SSoC with a policy network with four functional units, as shown in FIG0 .
Besides the agent's original action, an extra "Speak" action is output based on the current observation and hidden states.
Here we simply design "Speak" as a binary {0, 1} output.
Hence it works as a "switch" to control whether to send or transfer a message.
The "Speak" action determines when and who to communicate in a fully spontaneous manner.
A communication structure will naturally emerge after several steps of message propagation.
Here in our SSoC method, the "Speak" policy is learned by a reward-driven reinforcement learning algorithm.
The assumption is that a better message propagation strategy should also lead to a higher accumulated reward.
We evaluate SSoC on several representative benchmarks.
As we have observed, the learned policy does demonstrate novel clear message propagation patterns which enable complex collaborative strategies, for example, remote partners can be requested to help the current agent to get over hard times.
We also show the high efficiency of communication by visualizing a heat map showing how often the agents "speak".
The communication turns out to be much sparser than existing predefined communication channels which produce excessive messages.
With such emergent collaboration enabled by SSoC's intelligent communication, we also expect to see clear performance gains compared with existing methods on the tested tasks.
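A minimal sketch of the "Speak" switch and the resulting message flow described above; the sigmoid gating, the fixed threshold, and the dictionary-based message passing are simplifying assumptions of this sketch (in SSoC the binary action would be sampled and trained with the collaborative reward rather than thresholded).

```python
import numpy as np

def speak_gate(obs, hidden, W_speak, threshold=0.5):
    """Binary "Speak" decision from the agent's observation and hidden state.

    A sigmoid with a fixed threshold is used here for readability; in
    training the action would be sampled and learned with a reward-driven
    policy, as described in the text.
    """
    logits = np.concatenate([obs, hidden]) @ W_speak
    p_speak = 1.0 / (1.0 + np.exp(-logits))
    return p_speak > threshold

def propagate(messages, neighbours, speaks):
    """One round of self-organizing message passing: a speaking agent's
    message is delivered only to agents inside its communication range."""
    inbox = {i: [] for i in neighbours}
    for i, msg in messages.items():
        if speaks[i]:
            for j in neighbours[i]:
                inbox[j].append(msg)
    return inbox

# toy usage with three agents on a line: 0 -- 1 -- 2
rng = np.random.default_rng(0)
W = rng.normal(size=8)
speaks = {i: speak_gate(rng.normal(size=4), rng.normal(size=4), W) for i in range(3)}
inbox = propagate({i: f"msg{i}" for i in range(3)}, {0: [1], 1: [0, 2], 2: [1]}, speaks)
print(speaks, inbox)
```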
In this paper, we propose a SSoC network for MARL tasks.
Unlike previous methods which often assume a predestined communication structure, the SSoC agent learns when to start a communication or transfer its received message via a novel "Speak" action.
Similar to the agent's original action, this "Speak" can also be learned in a reinforcement manner.
With such a spontaneous communication action, SSoC is able to establish a dynamic self-organizing communication structure according to the current state.
Experiments have been performed to demonstrate better collaborative policies and improved on communication efficiency brought by such a design.
In future work, we will continue to enhance the learning of the "Speak" action, e.g., encoding a temporal abstraction to make the communication flow more stable or developing a specific reward for this "Speak" action.
|
This paper proposes a spontaneous and self-organizing communication (SSoC) learning scheme for multi-agent RL tasks.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:569
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Graph Neural Networks (GNNs) for prediction tasks like node classification or edge prediction have received increasing attention in recent machine learning from graphically structured data.
However, a large quantity of labeled graphs is difficult to obtain, which significantly limits the true success of GNNs.
Although active learning has been widely studied for addressing label-sparse issues with other data types like text, images, etc., how to make it effective over graphs is an open question for research.
In this paper, we present the investigation on active learning with GNNs for node classification tasks.
Specifically, we propose a new method, which uses node feature propagation followed by K-Medoids clustering of the nodes for instance selection in active learning.
With a theoretical bound analysis we justify the design choice of our approach.
In our experiments on four benchmark dataset, the proposed method outperforms other representative baseline methods consistently and significantly.
Graph Neural Networks (GNN) (Kipf & Welling, 2016; Veličković et al., 2017; Hamilton et al., 2017; Wu et al., 2019) have been widely applied in many supervised and semi-supervised learning scenarios such as node classifications, edge predictions and graph classifications over the past few years.
Though GNN frameworks are effective at fusing both the feature representations of nodes and the connectivity information, there is a strong desire to enhance the learning efficiency of such frameworks using a limited number of annotated nodes.
This property is in constant demand, as the budget for labeling is usually far less than the total number of nodes.
For example, in biological problems where a graph represents the chemical structure (Gilmer et al., 2017; Jin et al., 2018 ) of a certain drug assembled through atoms, it is not easy to obtain a detailed analysis of the function for each atom since getting expert labeling advice is very expensive.
On the other hand, people can carefully design a small "seeding pool" so that by selecting "representative" nodes or atoms as the training set, a GNN can be trained to get an automatic estimation of the functions for all the remaining unlabeled ones.
Active Learning (AL) (Settles, 2009; Bodó et al., 2011) , following this lead, provides solutions that select "informative" examples as the initial training set.
While people have proposed various methods for active learning on graphs (Bilgic et al., 2010; Kuwadekar & Neville, 2011; Moore et al., 2011; Rattigan et al., 2007), active learning for GNN has received relatively little attention in this area.
Cai et al. (2017) and Gao et al. (2018) are two major works that study active learning for GNN.
The two papers both use three kinds of metrics to evaluate the training samples, namely uncertainty, information density, and graph centrality.
The first two metrics make use of the GNN representations learnt using both node features and the graph; while they might be reasonable with a good (well-trained) GNN model, the metrics are not informative when the label budget is limited and/or the network weights are under-trained so that the learned representation is not good.
On the other hand, graph centrality ignores the node features and might not get the real informative nodes.
Further, methods proposed in Cai et al. (2017) ; Gao et al. (2018) only combine the scores using simple linear weighted-sum, which do not solve these problems principally.
We propose a method specifically designed for GNNs that naturally avoids the problems of the methods above (our code will be released upon acceptance).
Our method selects the nodes based on node features propagated through the graph structure, making it less sensitive to inaccuracies of the representation learnt by under-trained models.
Then we cluster the nodes using K-Medoids clustering; K-Medoids is similar to the conventional K-Means, but constrains the centers to be real nodes in the graph.
Theoretical results and practical experiments prove the strength of our algorithm.
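The sketch below illustrates the select-by-clustering-propagated-features idea described above; the GCN-style symmetric normalization, the two propagation hops, and the approximation of K-Medoids by snapping K-Means centers to their nearest real nodes are assumptions of this sketch rather than the paper's exact algorithm.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def featprop_select(adj, X, budget, hops=2):
    """Pick nodes to label by clustering propagated node features.

    adj: dense adjacency matrix (n x n); X: node features (n x d).
    Features are propagated with a GCN-style normalized adjacency with
    self-loops. The K-Medoids step is approximated by running K-Means and
    snapping each center to its nearest real node (a simplification).
    """
    A = adj + np.eye(adj.shape[0])                      # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    S = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]   # D^-1/2 (A+I) D^-1/2
    H = X.copy()
    for _ in range(hops):                               # propagate features
        H = S @ H
    centers = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(H).cluster_centers_
    return np.unique(np.argmin(cdist(centers, H), axis=1))  # indices of medoid-like nodes

# toy usage: two 3-node cliques joined by a single edge, budget of 2 labels
adj = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=0.0, size=(3, 4)), rng.normal(loc=3.0, size=(3, 4))])
print(featprop_select(adj, X, budget=2))
```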
• We perform a theoretical analysis for our method and study the relation between its classification loss and the geometry of the propagated node features.
• We show the advantage of our method over Coreset (Sener & Savarese, 2017) by comparing the bounds.
We also conjecture that similar bounds are not achievable if we use raw unpropagated node features.
• We compare our method with several AL methods and obtain the best performance over all benchmark datasets.
We study the active learning problem in the node classification task for Graph Convolution Networks (GCNs).
We propose a propagated node feature selection approach (FeatProp) to comply with the specific structure of GCNs and give a theoretical result characterizing the relation between its classification loss and the geometry of the propagated node features.
Our empirical experiments also show that FeatProp outperforms the state-of-the-art AL methods consistently on most benchmark datasets.
Note that FeatProp only focuses on sampling representative points in a meaningful (graph) representation, while uncertainty-based methods select the active nodes according to a different criterion guided by labels; how to combine that category of methods with FeatProp in a principled way remains an open and interesting problem for us to explore.
|
This paper introduces a clustering-based active learning algorithm on graphs.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:57
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We study the BERT language representation model and the sequence generation model with BERT encoder for multi-label text classification task.
We experiment with both models and explore their special qualities for this setting.
We also introduce and examine experimentally a mixed model, which is an ensemble of multi-label BERT and sequence generating BERT models.
Our experiments demonstrated that BERT-based models and the mixed model, in particular, outperform current baselines in several metrics achieving state-of-the-art results on three well-studied multi-label classification datasets with English texts and two private Yandex Taxi datasets with Russian texts.
Multi-label text classification (MLTC) is an important natural language processing task with many applications, such as document categorization, automatic text annotation, protein function prediction (Wehrmann et al., 2018) , intent detection in dialogue systems, and tickets tagging in client support systems (Molino et al., 2018) .
In this task, text samples are assigned to multiple labels from a finite label set.
In recent years, it became clear that deep learning approaches can go a long way toward solving text classification tasks.
However, most of the widely used approaches in MLTC tend to neglect correlation between labels.
One of the promising yet fairly less studied methods to tackle this problem is using sequence-to-sequence modeling.
In this approach, a model treats an input text as a sequence of tokens and predicts labels in a sequential way, taking into account previously predicted labels.
Nam et al. (2017) used Seq2Seq architecture with GRU encoder and attention-based GRU decoder, achieving an improvement over a standard GRU model on several datasets and metrics.
Yang et al. (2018b) continued this idea by introducing Sequence Generation Model (SGM) consisting of BiLSTM-based encoder and LSTM decoder coupled with additive attention mechanism .
In this paper, we argue that the encoder part of SGM can be successfully replaced with a heavy language representation model such as BERT (Devlin et al., 2018) .
We propose Sequence Generating BERT model (BERT+SGM) and a mixed model which is an ensemble of vanilla BERT and BERT+SGM models.
We show that the BERT+SGM model achieves decent results after less than half an epoch of training, while the standard BERT model needs to be trained for 5-6 epochs just to achieve the same accuracy and several dozen epochs more to converge.
On public datasets, we obtain 0.4%, 0.8%, and 1.6% average improvement in miF 1 , maF 1 , and accuracy respectively in comparison with BERT.
On datasets with hierarchically structured classes, we achieve 2.8% and 1.5% average improvement in maF 1 and accuracy.
Our main contributions are as follows:
1. We present the results of BERT as an encoder in the sequence-to-sequence framework for MLTC datasets with and without a given hierarchical tree structure over classes.
2. We introduce and examine experimentally a novel mixed model for MLTC.
3. We fine-tune the vanilla BERT model to perform multi-label text classification.
To the best of our knowledge, this is the first work to experiment with BERT and explore its particular properties for the multi-label setting and hierarchical text classification.
4. We demonstrate state-of-the-art results on three well-studied MLTC datasets with English texts and two private Yandex Taxi datasets with Russian texts.
We present the results of the suggested models and baselines on the five considered datasets in Table 2 .
First, we can see that both BERT and BERT+SGM show favorable results on multi-label classification datasets mostly outperforming other baselines by a significant margin.
On RCV1-v2 dataset, it is clear that the BERT-based models perform the best in micro-F 1 metrics.
The methods dealing with the class structure (tree hierarchy in HMCN and HiLAP, label frequency in BERT+SGM) also have the highest macro-F 1 score.
In some cases, BERT performs better than the sequence-to-sequence version, which is especially evident on the Reuters-21578 dataset.
Since BERT+SGM has more learnable parameters, a possible reason might be the smaller number of samples provided in the dataset.
However, sometimes BERT+SGM might be a more preferable option: on the RCV1-v2 dataset, the macro-F1 score of BERT+SGM is much higher while other metrics are still comparable with BERT's results.
Also, for both Yandex Taxi datasets in the Russian language, we can see that the hamming accuracy and the set accuracy of the BERT+SGM model are higher compared to other models.
On Y.Taxi Riders there is also an improvement in terms of macro-F 1 metrics.
In most cases, better performance can be achieved after mixing BERT and BERT+SGM.
On public datasets, we see 0.4%, 0.8%, and 1.6% average improvement in miF 1 , maF 1 , and accuracy respectively in comparison with BERT.
On datasets with tree hierarchy over classes, we observe 2.8% and 1.5% average improvement in maF 1 and accuracy.
Metrics of interest for the mixed model depending on α on RCV1-v2 validation set are shown in Figure 4 .
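Since the excerpt does not spell out the exact combination rule of the mixed model, the sketch below assumes a simple convex mixture of per-label probabilities with a weight alpha (the quantity swept on the validation set in Figure 4), followed by thresholding; treating BERT+SGM's output as per-label scores is also an assumption of this sketch.

```python
import numpy as np

def mixed_prediction(p_bert, p_sgm, alpha=0.7, threshold=0.5):
    """Blend per-label probabilities from multi-label BERT and BERT+SGM.

    alpha is the mixing weight tuned on validation data; the convex mixture
    and the 0.5 threshold are illustrative assumptions, not the paper's rule.
    """
    p_mix = alpha * np.asarray(p_bert) + (1.0 - alpha) * np.asarray(p_sgm)
    return (p_mix >= threshold).astype(int), p_mix

# toy usage with four labels
labels, probs = mixed_prediction([0.9, 0.2, 0.6, 0.1], [0.8, 0.4, 0.3, 0.05], alpha=0.5)
print(labels, probs)
```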
Visualization of feature importance for BERT and sequence generating BERT models is provided in Appendix A.
In our experiments, we also found that BERT for multi-label text classification tasks takes far more epochs to converge, compared to the 3-4 epochs needed for multi-class datasets (Devlin et al., 2018).
For AAPD, we performed 20 epochs of training; for RCV1-v2 and Reuters-21578, around 30 epochs; for the Russian datasets, 45-50 epochs.
BERT + SGM achieves decent accuracy much faster than multi-label BERT and converges after 8-12 epochs.
The behavior of performance of both models on the validation set of Reuters-21578 during the training process is shown in Figure 3 .
Another finding of our experiments is that the beam size at the inference stage does not appear to have much influence on the performance.
We obtained optimal results with the beam size in the range from 5 to 9.
However, a greedy approach with the beam size 1 still gives similar results with less than 1.5% difference in the metrics.
A possible explanation for this might be that, while in neural machine translation (NMT) the word ordering in the output sequence matters a lot and there might be confusing options, the label set generation task is much simpler and we do not have any problems with ordering.
Also, due to a quite limited 'vocabulary' size |L|, we may not have as many options here to perform a beam search as in NMT or another natural sequence generation task.
In this research work, we examine BERT and sequence generating BERT on the multi-label setting.
We experiment with both models and explore their particular properties for this task.
We also introduce and examine experimentally a mixed model which is an ensemble of vanilla BERT and sequence-to-sequence BERT models.
Our experimental studies showed that BERT-based models and the mixed model, in particular, outperform current baselines by several metrics achieving state-of-the-art results on three well-studied multi-label classification datasets with English texts and two private Yandex Taxi datasets with Russian texts.
We established that multi-label BERT typically needs several dozen epochs to converge, unlike the BERT+SGM model, which demonstrates decent results after just a few hundred iterations (less than half an epoch).
|
On using BERT as an encoder for sequential prediction of labels in multi-label text classification task
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:570
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Click Through Rate (CTR) prediction is a critical task in industrial applications, especially for online social and commerce applications.
It is challenging to find a proper way to automatically discover the effective cross features in CTR tasks.
We propose a novel model for CTR tasks, called Deep neural networks with Encoder enhanced Factorization Machine (DeepEnFM).
Instead of learning the cross features directly, DeepEnFM adopts the Transformer encoder as a backbone to align the feature embeddings with the clues of other fields.
The embeddings generated from encoder are beneficial for the further feature interactions.
Particularly, DeepEnFM utilizes a bilinear approach to generate different similarity functions with respect to different field pairs.
Furthermore, the max-pooling method makes DeepEnFM feasible to capture both the supplementary and suppressing information among different attention heads.
Our model is validated on the Criteo and Avazu datasets, and achieves state-of-the-art performance.
This paper studies the problem of predicting the Click Through Rate (CTR), which is an essential task in industrial applications, such as online advertising, and e-commerce.
To be exact, the advertisements in a cost-per-click (CPC) advertising system are normally ranked by eCPM (effective cost per mille), which is computed as the product of bid price and CTR (click-through rate).
To predict CTR precisely, feature representation is an important step in extracting the good, interpretable patterns from training data.
For example, the co-occurrence of "Valentine's Day", "chocolate" and "male" can be viewed as one meaningful indicator/feature for the recommendation.
Such handcrafted feature types were predominant in CTR prediction (Lee et al., 2012), until the renaissance of Deep Neural Networks (DNNs).
Recently, a more effective manner, i.e., representation learning, has been investigated in CTR prediction in several works (Guo et al., 2017; Qu et al., 2016; Wang et al., 2017; Lian et al., 2018; Song et al., 2018), which implicitly or explicitly learn embeddings of high-order feature extractions among neurons or input elements via the expressive power of DNNs or FM.
Despite their noticeable performance improvement, DNNs and explicit high order feature-based methods (Wang et al., 2017; Guo et al., 2017; Lian et al., 2018) seek better feature interactions merely based on the naive feature embeddings.
Few efforts have been made in addressing the task of holistically understanding and learning representations of inputs.
This leads to many practical problems, such as the "polysemy" in learned feature embeddings that existed in previous works.
For example, the input feature 'chocolate' is much closer to the 'snack' than 'gift' in normal cases, while we believe 'chocolate' should be better paired with 'gift' if given the occurrence input as "Valentine's Day".
This is one common polysemy problem in CTR prediction.
Towards fully understanding the inputs, we re-introduce to CTR the idea of the Transformer encoder (Vaswani et al., 2017), which originated in Natural Language Processing (NLP).
Such an encoder can efficiently accumulate and extract patterns from contextual word embeddings in NLP, and thus potentially would be very useful in holistically representation learning in CTR.
Critically, the Transformer encoder has seldom been applied to CTR prediction, with the only exception being the arXiv paper AutoInt (Song et al., 2018), which, however, simply implements the multi-head self-attention (MHSA) mechanism of encoders to directly extract high-order feature interactions.
We argue that the output of the MHSA/encoder should still be considered as a first-order embedding influenced by the other fields, rather than a high-order interaction feature.
To this end, our main idea is to apply the encoder to learn a context-aware feature embedding, which contains the clues from the content of other features.
Thus the "polysemy" problem can be solved naturally, and the second-order interaction of such features can represent more meaning.
In contrast to AutoInt (Song et al., 2018), which feeds the output of the encoder directly to the prediction layer or a DNN, our work not only improves the encoder to be more suitable for the CTR task, but also feeds the encoder output to FM, since both our encoder and FM are based on a vector-wise learning mechanism.
We adopt a DNN to learn the bit-wise high-order feature interactions in a parallel way, which avoids interweaving the vector-wise and bit-wise interactions in a stacked way.
Formally, we propose a novel framework -Deep neural networks with Encoder enhanced Factorization Machine (DeepEnFM).
DeepEnFM focuses on generating better contextual aligned vectors for FM and uses DNN as a bit-wise information supplement.
The architecture adopting both Deep and FM part is inspired by DeepFM (Guo et al., 2017) .
The encoder is endowed with bilinear attention and max-pooling power.
First, we observed that unlike the random order of words in a sentence, the features in a transaction are in a fixed order of fields.
For example, the fields of features are arranged in an order of {Gender, Age, Price ...}.
When the features are embedded in dense vectors, the first and second vectors in a transaction always represent the field "Gender" and "Age".
To make use of this advantage, we add a bilinear mechanism to the Transformer encoder.
We use bilinear functions to replace the simple dot product in attention.
In this way, feature similarity of different field pairs is modeled with different functions.
The embedding size in CTR tasks is usually around 10, which allows the application of bilinear functions without unbearable computing complexity.
Second, the original multi-head outputs are merged by concatenation, which considers the outputs are complementary to each other.
We argue that there is also suppressing information between different heads.
We apply a max-pooling merge mechanism to extract both complementary and suppressing information from the multi-head outputs.
Experimental results on Criteo and Avazu datasets have demonstrated the efficacy of our proposed model.
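The sketch below illustrates the two encoder modifications described above: a bilinear similarity with one trainable matrix per ordered field pair, and an element-wise max-pooling merge of head outputs. Applying the bilinear form directly to field embeddings (rather than to separate query/key projections) is a simplification of this sketch, not the paper's exact architecture.

```python
import torch

def bilinear_field_attention(E, W):
    """Bilinear attention over field embeddings.

    E : (batch, F, d) field embeddings; W : (F, F, d, d) one trainable
    bilinear matrix per ordered field pair, exploiting the fixed field order.
    Returns attention-aligned embeddings with the same shape as E.
    """
    scores = torch.einsum('bid,ijde,bje->bij', E, W, E)   # pair-specific similarity
    attn = torch.softmax(scores / E.shape[-1] ** 0.5, dim=-1)
    return attn @ E                                        # (batch, F, d)

def merge_heads_maxpool(head_outputs):
    """Merge multi-head outputs by element-wise max instead of concatenation,
    so heads can suppress as well as complement each other (an assumption
    about how the max-pooling merge is wired)."""
    return torch.stack(head_outputs, dim=0).max(dim=0).values

# toy usage: 2 samples, 3 fields, embedding size 4, 2 heads
E = torch.randn(2, 3, 4)
heads = [bilinear_field_attention(E, torch.randn(3, 3, 4, 4)) for _ in range(2)]
print(merge_heads_maxpool(heads).shape)   # torch.Size([2, 3, 4])
```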
In this paper, we propose a novel framework named Deep neural networks with Encoder enhanced Factorization Machine (DeepEnFM), which aims to learn a better aligned vector embedding through the encoder.
The encoder combines the bilinear attention and max-pooling method to gather both the complementary and suppressing information from the content of other fields.
The extensive experiments demonstrate that our approach achieves state-of-the-art performance on the Criteo and Avazu datasets.
|
DNN and Encoder enhanced FM with bilinear attention and max-pooling for CTR
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:571
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
For autonomous agents to successfully operate in the real world, the ability to anticipate future scene states is a key competence.
In real-world scenarios, future states become increasingly uncertain and multi-modal, particularly on long time horizons.
Dropout based Bayesian inference provides a computationally tractable, theoretically well grounded approach to learn different hypotheses/models to deal with uncertain futures and make predictions that correspond well to observations -- are well calibrated.
However, it turns out that such approaches fall short of capturing complex real-world scenes, even falling behind plain deterministic approaches in accuracy.
This is because the used log-likelihood estimate discourages diversity.
In this work, we propose a novel Bayesian formulation for anticipating future scene states which leverages synthetic likelihoods that encourage the learning of diverse models to accurately capture the multi-modal nature of future scene states.
We show that our approach achieves accurate state-of-the-art predictions and calibrated probabilities through extensive experiments for scene anticipation on Cityscapes dataset.
Moreover, we show that our approach generalizes across diverse tasks such as digit generation and precipitation forecasting.
The ability to anticipate future scene states, which involves mapping one scene state to likely future states under uncertainty, is key for autonomous agents to successfully operate in the real world, e.g., to anticipate the movements of pedestrians and vehicles for autonomous vehicles.
The future states of street scenes are inherently uncertain and the distribution of outcomes is often multi-modal.
This is especially true for important classes like pedestrians.
Recent works on anticipating street scenes BID13 BID9 BID23 do not systematically consider uncertainty.
Bayesian inference provides a theoretically well-founded approach to capture both model and observation uncertainty, but with considerable computational overhead.
A recently proposed approach BID6 BID10 uses dropout to represent the posterior distribution of models and capture model uncertainty.
This approach has enabled Bayesian inference with deep neural networks without additional computational overhead.
Moreover, it allows the use of any existing deep neural network architecture with minor changes.
However, when the underlying data distribution is multimodal and the model set under consideration does not have explicit latent states/variables (as is the case for most popular deep neural network architectures), the approach of BID6 ; BID10 is unable to recover the true model uncertainty (see FIG0 and BID19 ).
This is because this approach is known to conflate risk and uncertainty BID19 .
This limits the accuracy of the models over a plain deterministic (non-Bayesian) approach.
The main cause is the data log-likelihood maximization step during optimization -for every data point the average likelihood assigned by all models is maximized.
This forces every model to explain every data point well, pushing every model in the distribution to the mean.
We address this problem through an objective leveraging synthetic likelihoods (BID26; BID21), which relaxes the constraint that every model must explain every data point, thus encouraging diversity in the learned models to deal with multi-modality. In this work:
1. We develop the first Bayesian approach to anticipate the multi-modal future of street scenes and demonstrate state-of-the-art accuracy on the diverse Cityscapes dataset without compromising on calibrated probabilities,
2. We propose a novel optimization scheme for dropout based Bayesian inference using synthetic likelihoods to encourage diversity and accurately capture model uncertainty,
3. Finally, we show that our approach is not limited to street scenes and generalizes across diverse tasks such as digit generation and precipitation forecasting.
We propose a novel approach for predicting real-world semantic segmentations into the future that casts a convolutional deep learning approach into a Bayesian formulation.
One of the key contributions is a novel optimization scheme that uses synthetic likelihoods to encourage diversity and deal with multi-modal futures.
Our proposed method shows state-of-the-art performance on challenging street scenes.
More importantly, we show that the probabilistic output of our deep learning architecture captures uncertainty and multi-modality inherent to this task.
Furthermore, we show that the developed methodology goes beyond just street scene anticipation and creates new opportunities to enhance high performance deep learning architectures with principled formulations of Bayesian inference.
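For readers unfamiliar with the dropout-based Bayesian inference the method builds on, a minimal test-time sketch (MC dropout) is given below; the model, input, and sample count are placeholders, and this is not the paper's synthetic-likelihood training objective.

```python
import torch

def mc_dropout_predict(model, x, num_samples=20):
    """Approximate the posterior predictive by keeping dropout active at test
    time and averaging per-sample class probabilities."""
    model.train()                              # keep dropout layers stochastic
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(num_samples)])
    mean = probs.mean(dim=0)                   # predictive mean over sampled models
    var = probs.var(dim=0)                     # disagreement between samples ~ model uncertainty
    return mean, var
```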
|
Dropout based Bayesian inference is extended to deal with multi-modality and is evaluated on scene anticipation tasks.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:572
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Conditional generative adversarial networks (cGAN) have led to large improvements in the task of conditional image generation, which lies at the heart of computer vision.
The major focus so far has been on performance improvement, while there has been little effort in making cGAN more robust to noise.
The regression (of the generator) might lead to arbitrarily large errors in the output, which makes cGAN unreliable for real-world applications.
In this work, we introduce a novel conditional GAN model, called RoCGAN, which leverages structure in the target space of the model to address the issue.
Our model augments the generator with an unsupervised pathway, which promotes the outputs of the generator to span the target manifold even in the presence of intense noise.
We prove that RoCGAN share similar theoretical properties as GAN and experimentally verify that our model outperforms existing state-of-the-art cGAN architectures by a large margin in a variety of domains including images from natural scenes and faces.
Image-to-image translation and more generally conditional image generation lie at the heart of computer vision.
Conditional Generative Adversarial Networks (cGAN) (Mirza & Osindero, 2014) have become a dominant approach in the field, e.g. in dense regression (Isola et al., 2017; Pathak et al., 2016; Ledig et al., 2017; BID1; Liu et al., 2017; Miyato & Koyama, 2018; Yu et al., 2018; Tulyakov et al., 2018).
They accept a source signal as input, e.g. prior information in the form of an image or text, and map it to the target signal (image).
The mapping of cGAN does not constrain the output to the target manifold, thus the output can be arbitrarily off the target manifold (Vidal et al., 2017) .
This is a critical problem both for academic and commercial applications.
To utilize cGAN or similar methods as a production technology, we need to study their generalization even in the face of intense noise. Similarly to regression, classification also suffers from sensitivity to noise and lack of output constraints.
One notable line of research consists in complementing supervision with unsupervised learning modules.
The unsupervised module forms a new pathway that is trained with the same, or different data samples.
The unsupervised pathway enables the network to explore the structure that is not present in the labelled training set, while implicitly constraining the output.
The addition of the unsupervised module is only required during the training stage and results in no additional computational cost during inference.
Rasmus et al. (2015) and Zhang et al. (2016) modified the original bottom-up (encoder) network to include top-down (decoder) modules during training.
However, in dense regression both bottom-up and top-down modules exist by default, and such methods are thus not trivial to extend to regression tasks. Motivated by the combination of supervised and unsupervised pathways, we propose a novel conditional GAN which includes implicit constraints in the latent subspaces.
We coin this new model 'Robust Conditional GAN' (RoCGAN).
In the original cGAN the generator accepts a source signal and maps it to the target domain.
In our work, we (implicitly) constrain the decoder to generate samples that span only the target manifold.
We replace the original generator, i.e. encoder-decoder, with a two pathway module (see FIG0 ).
The first pathway, similarly to the cGAN generator, performs regression while the second is an autoencoder in the target domain (unsupervised pathway).
The two pathways share a similar network structure, i.e. each one includes an encoder-decoder network.
The weights of the two decoders are shared which promotes the latent representations of the two pathways to be semantically similar.
Intuitively, this can be thought of as constraining the output of our dense regression to span the target subspace.
The unsupervised pathway enables the utilization of all the samples in the target domain even in the absence of a corresponding input sample.
During inference, the unsupervised pathway is no longer required, therefore the testing complexity remains the same as in cGAN.
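A minimal sketch of the two-pathway generator with a shared decoder described above; the convolutional blocks and channel sizes are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

def make_encoder(in_ch):   # placeholder encoder
    return nn.Sequential(nn.Conv2d(in_ch, 64, 4, 2, 1), nn.ReLU(),
                         nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU())

def make_decoder(out_ch):  # placeholder decoder
    return nn.Sequential(nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
                         nn.ConvTranspose2d(64, out_ch, 4, 2, 1), nn.Tanh())

class TwoPathwayGenerator(nn.Module):
    def __init__(self, src_ch=3, tgt_ch=3):
        super().__init__()
        self.enc_reg = make_encoder(src_ch)   # regression pathway: source -> latent
        self.enc_ae = make_encoder(tgt_ch)    # unsupervised pathway: target -> latent
        self.dec = make_decoder(tgt_ch)       # one decoder shared by both pathways

    def forward(self, src, tgt=None):
        y_reg = self.dec(self.enc_reg(src))                              # regression output
        y_ae = self.dec(self.enc_ae(tgt)) if tgt is not None else None   # AE output (training only)
        return y_reg, y_ae
```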
(Figure caption: (a) The source signal is embedded into a low-dimensional, latent subspace, which is then mapped to the target subspace. The lack of constraints might result in outcomes that are arbitrarily off the target manifold. (b) On the other hand, in RoCGAN, steps 1b and 2b learn an autoencoder in the target manifold and, by sharing the weights of the decoder, we restrict the output of the regression (step 2a). All figures in this work are best viewed in color.)
In the following sections, we introduce our novel RoCGAN and study their (theoretical) properties.
We prove that RoCGAN share similar theoretical properties with the original GAN, i.e. convergence and optimal discriminator.
An experiment with synthetic data is designed to visualize the target subspaces and assess our intuition.
We experimentally scrutinize the sensitivity of the hyper-parameters and evaluate our model in the face of intense noise.
Moreover, thorough experimentation with both images from natural scenes and human faces is conducted in two different tasks.
We compare our model with both the state-of-the-art cGAN and the recent method of Rick Chang et al. (2017) .
The experimental results demonstrate that RoCGAN outperform the baseline by a large margin in all cases. Our contributions can be summarized as follows:
• We introduce RoCGAN, which leverages structure in the target space. The goal is to promote robustness in dense regression tasks.
• We scrutinize the model performance under (extreme) noise and adversarial perturbations. To the authors' knowledge, this robustness analysis has not been studied previously for dense regression.
• We conduct a thorough experimental analysis for two different tasks. We outline how RoCGAN can be used in a semi-supervised learning task and how it performs with lateral connections from encoder to decoder.
Notation: Given a set of N samples, s^(n) denotes the n-th conditional label, e.g. a prior image; y^(n) denotes the respective target image. Unless explicitly mentioned otherwise, || · || denotes an ℓ1 norm. The symbols L_* define loss terms, while λ_* denote regularization hyper-parameters optimized on the validation set.
We introduce the Robust Conditional GAN (RoCGAN) model, a new conditional GAN capable of leveraging unsupervised data to learn better latent representations, even in the face of large amounts of noise.
RoCGAN's generator is composed of two pathways.
The first pathway (reg pathway), performs the regression from the source to the target domain.
The new, added pathway (AE pathway) is an autoencoder in the target domain.
By adding weight sharing between the two decoders, we implicitly constrain the reg pathway to output images that span the target manifold.
In the following sections (of the appendix) we include additional insights and a theoretical analysis, along with additional experiments. The sections are organized as follows:
• In Sec. B we validate our intuition for the RoCGAN constraints through the linear equivalent.
• A theoretical analysis is provided in Sec. C.
• We implement different networks in Sec. D to assess whether the performance gain can be attributed to a single architecture.
• An ablation study is conducted in Sec. E comparing the hyper-parameter sensitivity and the robustness in the face of extreme noise.
Figures 3, 7, and 8 include all the outputs of the synthetic experiment of the main paper. As a reminder, the output vector is [x + 2y + 4, e^x + 1, x + y + 3, x + 2] with x, y ∈ [−1, 1].
|
We introduce a new type of conditional GAN, which aims to leverage structure in the target space of the generator. We augment the generator with a new, unsupervised pathway to learn the target structure.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:573
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Though deep neural networks have achieved the state of the art performance in visual classification, recent studies have shown that they are all vulnerable to the attack of adversarial examples.
To solve the problem, some regularized adversarial training methods, which constrain the output label or logit, have been studied.
In this paper, we propose a novel regularized adversarial training framework ATLPA, namely Adversarial Tolerant Logit Pairing with Attention.
Instead of constraining a hard distribution (e.g., one-hot vectors or logits) in adversarial training, ATLPA uses a Tolerant Logit which consists of a confidence distribution on the top-k classes and captures inter-class similarities at the image level.
Specifically, in addition to minimizing the empirical loss, ATLPA encourages the attention maps for pairs of examples to be similar.
When applied to clean examples and their adversarial counterparts, ATLPA improves accuracy on adversarial examples over adversarial training.
We evaluate ATLPA against state-of-the-art algorithms; the experimental results show that our method outperforms these baselines with higher accuracy.
Compared with previous work, our work is evaluated under highly challenging PGD attack: the maximum perturbation $\epsilon$ is 64 and 128 with 10 to 200 attack iterations.
In recent years, deep neural networks have been extensively deployed for computer vision tasks, particularly visual classification problems, where new algorithms are reported to achieve or even surpass human performance (Krizhevsky et al., 2012; He et al., 2015; Li et al., 2019a).
Success of deep neural networks has led to an explosion in demand.
Recent studies (Szegedy et al., 2013; Goodfellow et al., 2014; Carlini & Wagner, 2016; Moosavi-Dezfooli et al., 2016; Bose & Aarabi, 2018) have shown that they are all vulnerable to the attack of adversarial examples.
Small and often imperceptible perturbations to the input images are sufficient to fool the most powerful deep neural networks.
In order to solve this problem, many defence methods have been proposed, among which adversarial training is considered to be the most effective one. Adversarial training (Goodfellow et al., 2014; Madry et al., 2017; Kannan et al., 2018; Tramèr et al., 2017; Pang et al., 2019) defends against adversarial perturbations by training networks on adversarial images that are generated on-the-fly during training. Although the aforementioned methods demonstrated the power of adversarial training in defence, we argue that we need to perform research on at least the following two aspects in order to further improve current defence methods.
Strictness vs. Tolerance.
Most existing defence methods only fit the outputs of adversarial examples to the one-hot vectors of their clean-example counterparts.
Kannan et al. (2018) also fit the confidence distribution over all logits of the clean-example counterparts; they call this logit pairing.
Despite its effectiveness, this is not necessarily the optimal target to fit because, aside from maximizing the confidence score of the primary class (i.e., the ground truth), allowing some secondary classes (i.e., those visually similar to the ground truth) to be preserved may help to alleviate the risk of over-fitting (Yang et al., 2018).
We fit a Tolerant Logit which consists of a confidence distribution on the top-k classes and captures inter-class similarities at the image level.
We believe that attention should be limited to the top-k classes of the confidence score, rather than strictly fitting the confidence distribution over all classes.
A More Tolerant Teacher Educates Better Students.
Process vs. Result.
In Fig. 1 , we visualize the spatial attention map of a flower and its corresponding adversarial image on ResNet-101 (He et al., 2015) pretrained on ImageNet (Russakovsky et al., 2015) .
The figure suggests that adversarial perturbations, while small in the pixel space, lead to very substantial noise in the attention map of the network.
Whereas the features for the clean image appear to focus primarily on semantically informative content in the image, the attention map for the adversarial image are activated across semantically irrelevant regions as well.
State-of-the-art adversarial training methods only encourage the hard output distributions of deep neural networks (e.g., one-hot vectors (Madry et al., 2017; Tramèr et al., 2017) or logits (Kannan et al., 2018)) for pairs of clean examples and adversarial counterparts to be similar.
In our opinion, it is not enough to align the clean examples and their adversarial counterparts only at the output layer of the network; we also need to align the attention maps of the middle layers of the whole network, e.g., the outer layer outputs of conv2.x, conv3.x, conv4.x, and conv5.x in ResNet-101.
We can't just focus on the result, but also on the process.
(Figure caption: (a) is the original image and (b) is the corresponding adversarial image.)
For ResNet-101, which we use exclusively in this paper, we grouped filters into stages as described in (He et al., 2015).
These stages are conv2.x, conv3.x, conv4.x, conv5.x.
The contributions of this paper are the following:
• We propose a novel regularized adversarial training framework ATLPA : a method that uses Tolerant Logit and encourages attention map for pairs of examples to be similar.
When applied to clean examples and their adversarial counterparts, ATLPA improves accuracy on adversarial examples over adversarial training.
Instead of constraining a hard distribution in adversarial training, Tolerant Logit consists of confidence distribution on top-k classes and captures inter-class similarities at the image level.
• We explain the reason why our ATLPA can improve the robustness of the model from three dimensions: average activations on discriminate parts, the diversity among learned features of different classes and trends of loss landscapes.
• We show that our ATLPA achieves the state of the art defense on a wide range of datasets against strong PGD gray-box and black-box attacks.
Compared with previous work, our work is evaluated under a highly challenging PGD attack: the maximum perturbation ε ∈ {0.25, 0.5}, i.e., L∞ ∈ {0.25, 0.5}, with 10 to 200 attack iterations.
To our knowledge, such a strong attack has not been previously explored on a wide range of datasets.
The rest of the paper is organized as follows: in Section 2 related works are summarized, in Section 3 definitions and threat models are introduced, in Section 4 our ATLPA is introduced, in Section 5 experimental results are presented and discussed, and finally in Section 6 the paper is concluded.
2 RELATED WORK
Athalye et al. (2018) evaluate the robustness of nine papers (Buckman et al., 2018; Ma et al., 2018; Guo et al., 2017; Dhillon et al., 2018; Xie et al., 2017; Song et al., 2017; Samangouei et al., 2018; Madry et al., 2017; Na et al., 2017) accepted to ICLR 2018 as non-certified white-box-secure defenses to adversarial examples. They find that seven of the nine defenses use obfuscated gradients, a kind of gradient masking, a phenomenon that leads to a false sense of security in defenses against adversarial examples. Obfuscated gradients provide a limited increase in robustness and can be broken by the improved attack techniques they develop.
The only defense they observe that significantly increases robustness to adversarial examples within the threat model proposed is adversarial training (Madry et al., 2017) .
Adversarial training (Goodfellow et al., 2014; Madry et al., 2017; Kannan et al., 2018; Tramèr et al., 2017; Pang et al., 2019) defends against adversarial perturbations by training networks on adversarial images that are generated on-the-fly during training.
For adversarial training, the most relevant work to our study is (Kannan et al., 2018), which introduces a technique called Adversarial Logit Pairing (ALP), a method that encourages the logits for pairs of examples to be similar.
Engstrom et al. (2018) and Mosbach et al. (2018) also put forward different opinions on the robustness of ALP.
Our ATLPA encourages attention map for pairs of examples to be similar.
When applied to clean examples and their adversarial counterparts, ATLPA improves accuracy on adversarial examples over adversarial training.
Araujo et al. (2019) add random noise at training and inference time and add denoising blocks to the model to increase adversarial robustness; neither of these approaches focuses on the attention map.
Following (Pang et al., 2018; Yang et al., 2018; Pang et al., 2019) , we propose Tolerant Logit which consists of confidence distribution on top-k classes and captures inter-class similarities at the image level.
In terms of methodology, our work is also related to deep transfer learning and knowledge distillation problems; the most relevant works to our study are (Zagoruyko & Komodakis, 2016; Li et al., 2019b), which constrain the L2-norm of the difference between their behaviors (i.e., the feature maps of outer layer outputs in the source/target networks).
Our ATLPA constrains the attention maps for pairs of clean examples and their adversarial counterparts to be similar.
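A sketch of an attention-map pairing term between clean and adversarial inputs in the spirit of the constraint described above; the attention-map definition (channel-wise mean of absolute activations) and the per-stage feature collection are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def attention_map(feat):
    """Spatial attention map: mean absolute activation over channels, L2-normalized."""
    amap = feat.abs().mean(dim=1)               # (batch, H, W)
    return F.normalize(amap.flatten(1), dim=1)  # (batch, H*W)

def attention_pairing_loss(clean_feats, adv_feats):
    """Sum of squared L2 distances between clean/adversarial attention maps over
    the chosen stages (e.g. conv2.x ... conv5.x outputs gathered via forward hooks)."""
    return sum((attention_map(c) - attention_map(a)).pow(2).sum(dim=1).mean()
               for c, a in zip(clean_feats, adv_feats))
```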
In this paper, we propose a novel regularized adversarial training framework, ATLPA: a method that uses a Tolerant Logit, which consists of a confidence distribution on the top-k classes and captures inter-class similarities at the image level, and encourages the attention maps for pairs of examples to be similar.
We show that our ATLPA achieves the state of the art defense on a wide range of datasets against strong PGD gray-box and black-box attacks.
We explain the reason why our ATLPA can improve the robustness of the model from three dimensions: average activations on discriminate parts, the diversity among learned features of different classes and trends of loss landscapes.
The results of visualization and quantitative calculation show that our method is helpful to improve the robustness of the model.
|
In this paper, we propose a novel regularized adversarial training framework ATLPA, namely Adversarial Tolerant Logit Pairing with Attention.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:574
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
A fundamental trait of intelligence is the ability to achieve goals in the face of novel circumstances.
In this work, we address one such setting which requires solving a task with a novel set of actions.
Empowering machines with this ability requires generalization in the way an agent perceives its available actions along with the way it uses these actions to solve tasks.
Hence, we propose a framework to enable generalization over both these aspects: understanding an action’s functionality, and using actions to solve tasks through reinforcement learning.
Specifically, an agent interprets an action’s behavior using unsupervised representation learning over a collection of data samples reflecting the diverse properties of that action.
We employ a reinforcement learning architecture which works over these action representations, and propose regularization metrics essential for enabling generalization in a policy.
We illustrate the generalizability of the representation learning method and policy, to enable zero-shot generalization to previously unseen actions on challenging sequential decision-making environments.
Our results and videos can be found at sites.google.com/view/action-generalization/
|
We address the problem of generalization of reinforcement learning to unseen action spaces.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:575
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Temporal point processes are the dominant paradigm for modeling sequences of events happening at irregular intervals.
The standard way of learning in such models is by estimating the conditional intensity function.
However, parameterizing the intensity function usually incurs several trade-offs.
We show how to overcome the limitations of intensity-based approaches by directly modeling the conditional distribution of inter-event times.
We draw on the literature on normalizing flows to design models that are flexible and efficient.
We additionally propose a simple mixture model that matches the flexibility of flow-based models, but also permits sampling and computing moments in closed form.
The proposed models achieve state-of-the-art performance in standard prediction tasks and are suitable for novel applications, such as learning sequence embeddings and imputing missing data.
Visits to hospitals, purchases in e-commerce systems, financial transactions, posts in social media -various forms of human activity can be represented as discrete events happening at irregular intervals.
The framework of temporal point processes is a natural choice for modeling such data.
By combining temporal point process models with deep learning, we can design algorithms able to learn complex behavior from real-world data.
Designing such models, however, usually involves trade-offs along the following dimensions: flexibility (can the model approximate any distribution?), efficiency (can the likelihood function be evaluated in closed form?), and ease of use (is sampling and computing summary statistics easy?).
Existing methods (Du et al., 2016; Mei & Eisner, 2017; Omi et al., 2019) that are defined in terms of the conditional intensity function typically fall short in at least one of these categories.
Instead of modeling the intensity function, we suggest treating the problem of learning in temporal point processes as an instance of conditional density estimation.
By using tools from neural density estimation (Bishop, 1994; Rezende & Mohamed, 2015) , we can develop methods that have all of the above properties.
To summarize, our contributions are the following:
• We connect the fields of temporal point processes and neural density estimation.
We show how normalizing flows can be used to define flexible and theoretically sound models for learning in temporal point processes.
• We propose a simple mixture model that performs on par with the state-of-the-art methods.
Thanks to its simplicity, the model permits closed-form sampling and moment computation.
• We show through a wide range of experiments how the proposed models can be used for prediction, conditional generation, sequence embedding and training with missing data.
An event sequence T = {t_1, . . . , t_N} can equivalently be represented as a sequence of strictly positive inter-event times τ_i = t_i − t_{i−1} ∈ R_+.
Representations in terms of t i and τ i are isomorphic -we will use them interchangeably throughout the paper.
The traditional way of specifying the dependency of the next arrival time t on the history H_t = {t_j ∈ T : t_j < t} is using the conditional intensity function λ*(t) := λ(t | H_t).
Here, the * symbol reminds us of the dependence on H_t.
Given the conditional intensity function, we can obtain the conditional probability density function (PDF) of the time τ_i until the next event by integration (Rasmussen, 2011) as p*(τ_i) := p(τ_i | H_{t_i}) = λ*(t_{i−1} + τ_i) exp( −∫_0^{τ_i} λ*(t_{i−1} + s) ds ).
Learning temporal point processes.
Conditional intensity functions provide a convenient way to specify point processes with a simple predefined behavior, such as self-exciting (Hawkes, 1971 ) and self-correcting (Isham & Westcott, 1979) processes.
Intensity parametrization is also commonly used when learning a model from the data: given a parametric intensity function λ*_θ(t) and a sequence of observations T, the parameters θ can be estimated by maximizing the log-likelihood: θ* = arg max_θ Σ_i log p*_θ(τ_i) = arg max_θ [ Σ_i log λ*_θ(t_i) − ∫_0^{t_N} λ*_θ(s) ds ].
The main challenge of such intensity-based approaches lies in choosing a good parametric form for λ*_θ(t).
Universal approximation.
The SOSFlow and DSFlow models can approximate any probability density on R arbitrarily well (Jaini et al., 2019, Theorem 3), (Krueger et al., 2018, Theorem 4) .
It turns out, a mixture model has the same universal approximation (UA) property.
Theorem 1 (DasGupta, 2008, Theorem 33.2) .
Let p(x) be a continuous density on R. If q(x) is any density on R that is also continuous, then, given ε > 0 and a compact set S ⊂ R, there exist a number of components K ∈ N, mixture coefficients w ∈ ∆^{K−1}, locations µ ∈ R^K, and scales s ∈ R_+^K such that sup_{x∈S} | p(x) − Σ_k w_k (1/s_k) q((x − µ_k)/s_k) | < ε.
This result shows that, in principle, the mixture distribution is as expressive as the flow-based models.
Since we are modeling the conditional density, we additionally need to assume for all of the above models that the RNN can encode all the relevant information into the history embedding h i .
This can be accomplished by invoking the universal approximation theorems for RNNs (Siegelmann & Sontag, 1992; Schäfer & Zimmermann, 2006) .
Note that this result, like other UA theorems of this kind (Cybenko, 1989; Daniels & Velikova, 2010) , does not provide any practical guarantees on the obtained approximation quality, and doesn't say how to learn the model parameters.
Still, UA intuitively seems like a desirable property of a distribution.
This intuition is supported by experimental results.
In Section 5.1, we show that models with the UA property consistently outperform the less flexible ones.
Interestingly, Theorem 1 does not make any assumptions about the form of the base density q(x).
This means we could just as well build the mixture from a base distribution other than the log-normal.
However, other popular distributions on R + have drawbacks: log-logistic does not always have defined moments and gamma distribution doesn't permit straightforward sampling with reparametrization.
Intensity function.
For both flow-based and mixture models, the conditional cumulative distribution function (CDF) F * (τ ) and the PDF p * (τ ) are readily available.
This means we can easily compute the respective intensity functions (see Appendix A).
However, we should still ask whether we lose anything by modeling p * (τ ) instead of λ * (t).
The main arguments in favor of modeling the intensity function in traditional models (e.g. self-exciting process) are that it's intuitive, easy to specify and reusable (Upadhyay & Rodriguez, 2019) .
"Intensity function is intuitive, while the conditional density is not." -While it's true that in simple models (e.g. in self-exciting or self-correcting processes) the dependence of λ * (t) on the history is intuitive and interpretable, modern RNN-based intensity functions (as in Du et al. (2016) ; Mei & Eisner (2017); Omi et al. (2019) ) cannot be easily understood by humans.
In this sense, our proposed models are as intuitive and interpretable as other existing intensity-based neural network models.
"λ * (t) is easy to specify, since it only has to be positive. On the other hand, p * (τ ) must integrate to one." -As we saw, by using either normalizing flows or a mixture distribution, we automatically enforce that the PDF integrates to one, without sacrificing the flexibility of our model.
"Reusability: If we merge two independent point processes with intensitites λ * 1 (t) and λ * 2 (t), the merged process has intensity λ * (t) = λ * 1 (t) + λ * 2 (t)." -An equivalent result exists for the CDFs F * 1 (τ ) and F * 2 (τ ) of the two independent processes.
The CDF of the merged process is obtained as
2 (τ ) (derivation in Appendix A).
As we just showed, modeling p * (τ ) instead of λ * (t) does not impose any limitation on our approach.
Moreover, a mixture distribution is flexible, easy to sample from and has well-defined moments, which favorably compares it to other intensity-based deep learning models.
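A minimal sketch of such a conditional mixture of log-normals over inter-event times, parameterized from a history embedding h; the layer sizes and the single-linear-layer parameterization are assumptions.

```python
import math
import torch
import torch.nn as nn

class LogNormMix(nn.Module):
    """p*(tau | h): mixture of K log-normals whose weights, means and scales
    are produced from the history embedding h."""
    def __init__(self, hidden=64, K=16):
        super().__init__()
        self.proj = nn.Linear(hidden, 3 * K)

    def log_prob(self, tau, h):                          # tau: (batch,), h: (batch, hidden)
        w_logits, mu, log_s = self.proj(h).chunk(3, dim=-1)
        log_w = torch.log_softmax(w_logits, dim=-1)
        s = log_s.exp()
        x = tau.clamp_min(1e-8).log().unsqueeze(-1)      # work in log-time
        comp = -0.5 * ((x - mu) / s) ** 2 - log_s - x - 0.5 * math.log(2 * math.pi)
        return torch.logsumexp(log_w + comp, dim=-1)     # log p*(tau | h)

model = LogNormMix()
loglik = model.log_prob(torch.rand(8), torch.randn(8, 64)).sum()
```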
We use tools from neural density estimation to design new models for learning in TPPs.
We show that a simple mixture model is competitive with state-of-the-art normalizing flows methods, as well as convincingly outperforms other existing approaches.
By looking at learning in TPPs from a different perspective, we were able to address the shortcomings of existing intensity-based approaches, such as insufficient flexibility, lack of closed-form likelihoods and inability to generate samples analytically.
We hope this alternative viewpoint will inspire new developments in the field of TPPs.
Constant intensity model as exponential distribution.
The conditional intensity function of the constant intensity model (Upadhyay et al., 2018) is defined as λ*(t) = exp(v^T h_i + b), where h_i ∈ R^H is the history embedding produced by an RNN, and v ∈ R^H and b ∈ R are learnable parameters.
By setting c = exp(v^T h_i + b), it's easy to see that the PDF of the constant intensity model, p*(τ) = c exp(−cτ), corresponds to an exponential distribution.
Summary: The main idea of the approach by Omi et al. (2019) is to model the integrated conditional intensity function Φ*(τ) = ∫_0^τ λ*(t_{i−1} + s) ds using a feedforward neural network f whose weight matrices are constrained to be non-negative; the remaining model parameters are unconstrained real values.
FullyNN as a normalizing flow: Let z ∼ Exponential(1), that is p(z) = exp(−z) for z ≥ 0.
We can view f : R_+ → R_+ as a transformation that maps τ to z, i.e. z = f(τ).
We can now use the change of variables formula to obtain the conditional CDF and PDF of τ: F*(τ) = 1 − exp(−f(τ)) and p*(τ) = (∂f(τ)/∂τ) exp(−f(τ)).
Alternatively, we can obtain the conditional intensity as λ*(t_{i−1} + τ) = ∂f(τ)/∂τ and use the fact that p*(τ) = λ*(t_{i−1} + τ) exp(−∫_0^τ λ*(t_{i−1} + s) ds).
Both approaches lead to the same conclusion.
However, the first approach also provides intuition on how to draw samples τ̂ from the resulting distribution p*(τ), an approach known as the inverse method (Rasmussen, 2011):
1. Sample z̃ ∼ Exponential(1).
2. Obtain τ̂ by solving f(τ̂) − z̃ = 0 for τ̂ (using e.g. the bisection method).
Similarly to other flow-based models, sampling from the FullyNN model cannot be done exactly and requires a numerical approximation.
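A minimal sketch of the inverse method listed above, assuming f is the learned continuous, increasing function with f(0) = 0; the bracket growth and tolerance are arbitrary choices.

```python
import math
import random

def sample_inverse_method(f, tol=1e-6):
    """Draw tau ~ p*(tau) by sampling z ~ Exponential(1) and solving f(tau) = z
    with bisection."""
    z = -math.log(1.0 - random.random())   # z ~ Exponential(1)
    lo, hi = 0.0, 1.0
    while f(hi) < z:                       # grow the bracket until it contains the root
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < z else (lo, mid)
    return 0.5 * (lo + hi)

tau = sample_inverse_method(lambda t: t)   # toy check: unit-rate Poisson, f(t) = t
```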
|
Learn in temporal point processes by modeling the conditional density, not the conditional intensity.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:576
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We propose a novel yet simple neural network architecture for topic modelling.
The method is based on training an autoencoder structure where the bottleneck represents the topic distribution and the decoder outputs represent the word distributions over the topics.
We exploit an auxiliary decoder to prevent mode collapse in our model.
A key feature of an effective topic modelling method is having sparse topic and word distributions, where there is a trade-off between the sparsity level of topics and words.
This feature is implemented in our model via L2 regularization, and the model hyperparameters control the trade-off.
We show in our experiments that our model achieves competitive results compared to the state-of-the-art deep models for topic modelling, despite its simple architecture and training procedure.
The “New York Times” and “20 Newsgroups” datasets are used in the experiments.
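A minimal sketch of the described autoencoder, with the bottleneck as the topic distribution, an auxiliary decoder, and an L2 penalty; the vocabulary size, topic count, and exact loss weighting are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TopicAutoencoder(nn.Module):
    """Bag-of-words -> topic distribution (bottleneck) -> word log-probabilities."""
    def __init__(self, vocab=2000, topics=50):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vocab, 256), nn.ReLU(), nn.Linear(256, topics))
        self.decoder = nn.Linear(topics, vocab, bias=False)      # maps topics to word logits
        self.aux_decoder = nn.Linear(topics, vocab, bias=False)  # auxiliary decoder against mode collapse

    def forward(self, bow):
        theta = torch.softmax(self.encoder(bow), dim=1)          # document-topic distribution
        recon = torch.log_softmax(self.decoder(theta), dim=1)
        aux = torch.log_softmax(self.aux_decoder(theta), dim=1)
        return theta, recon, aux

def loss_fn(bow, recon, aux, model, l2=1e-3):
    nll = -(bow * recon).sum(dim=1).mean() - (bow * aux).sum(dim=1).mean()
    reg = l2 * (model.decoder.weight.pow(2).sum() + model.aux_decoder.weight.pow(2).sum())
    return nll + reg   # the L2 term mediates the topic/word sparsity trade-off discussed above
```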
|
A deep model for topic modelling
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:577
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Voice Conversion (VC) is a task of converting perceived speaker identity from a source speaker to a particular target speaker.
Earlier approaches in the literature primarily find a mapping between the given source-target speaker-pairs.
Developing mapping techniques for many-to-many VC using non-parallel data, including zero-shot learning remains less explored areas in VC.
Most of the many-to-many VC architectures require training data from all the target speakers for whom we want to convert the voices.
In this paper, we propose a novel style transfer architecture, which can also be extended to generate voices even for target speakers whose data were not used in the training (i.e., case of zero-shot learning).
In particular, we propose the Adaptive Generative Adversarial Network (AdaGAN), a new architectural training procedure that helps in learning a normalized speaker-independent latent representation, which is used to generate speech with different speaking styles in the context of VC.
We compare our results with the state-of-the-art StarGAN-VC architecture.
In particular, the AdaGAN achieves 31.73%, and 10.37% relative improvement compared to the StarGAN in MOS tests for speech quality and speaker similarity, respectively.
The key strength of the proposed architecture is that it yields these results with lower computational complexity.
AdaGAN is 88.6% less complex than StarGAN-VC in terms of FLoating point Operations Per Second (FLOPS), and 85.46% less complex in terms of trainable parameters.
Language is the core of civilization, and speech is the most powerful and natural form of communication.
Human voice mimicry has always been considered as one of the most difficult tasks since it involves understanding of the sophisticated human speech production mechanism (Eriksson & Wretling (1997) ) and challenging concepts of prosodic transfer (Gomathi et al. (2012) ).
In the literature, this is achieved using Voice Conversion (VC) technique (Stylianou (2009) ).
Recently, VC has gained more attention due to its fascinating real-world applications in privacy and identity protection, military operations, generating new voices for animated and fictional movies, voice repair in medical-domain, voice assistants, etc.
Voice Conversion (VC) technique converts source speaker's voice in such a way as if it were spoken by the target speaker.
This is primarily achieved by modifying spectral and prosodic features while retaining the linguistic information in the given speech signal (Stylianou et al. (1998) ).
In addition, Voice cloning is one of the closely related task to VC (Arik et al. (2018) ).
However, in this research work we only focus to advance the Voice Conversion.
With the emergence of deep learning techniques, VC has become more efficient.
Deep learningbased techniques have made remarkable progress in parallel VC.
However, it is difficult to get parallel data, and such data needs alignment (which is an arduous process) to get better results.
Building a VC system from non-parallel data is highly challenging, at the same time valuable for practical application scenarios.
Recently, many deep learning-based style transfer algorithms have been applied for non-parallel VC task.
Hence, this problem can be formulated as a style transfer problem, where one speaker's style is converted into another while preserving the linguistic content as it is.
In particular, Conditional Variational AutoEncoders (CVAEs), Generative Adversarial Networks (GANs) (proposed by Goodfellow et al. (2014) ), and its variants have gained significant attention in non-parallel VC.
However, it is known that the training task for GAN is hard, and the convergence property of GAN is fragile (Salimans et al. (2016) ).
There is no substantial evidence that the generated speech is perceptually good.
Moreover, CVAEs alone do not guarantee distribution matching and suffers from the issue of over smoothing of the converted features.
Although, there are few GAN-based systems that produced state-of-the-art results for non-parallel VC.
Among these algorithms, even fewer can be applied for many-to-many VC tasks.
At last, there is the only system available for zero-shot VC proposed by Qian et al. (2019) .
Zero-shot conversion is a technique to convert source speaker's voice into an unseen target speaker's speaker via looking at a few utterances of that speaker.
As known, solutions to a challenging problem comes with trade-offs.
Despite the results, architectures have become more complex, which is not desirable in real-world scenarios because the quality of algorithms or architectures is also measured by the training time and computational complexity of learning trainable parameters ).
Motivated by this, we propose computationally less expensive Adaptive GAN (AdaGAN), a new style transfer framework, and a new architectural training procedure that we apply to the GAN-based framework.
In AdaGAN, the generator encapsulates Adaptive Instance Normalization (AdaIN) for style transfer, and the discriminator is responsible for adversarial training.
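A minimal sketch of the AdaIN operation mentioned here, applied to (batch, channels, time) feature tensors; the shapes and epsilon are assumptions.

```python
import torch

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization: normalize content features with their own
    statistics, then rescale and shift with the style features' statistics."""
    c_mean, c_std = content.mean(dim=-1, keepdim=True), content.std(dim=-1, keepdim=True) + eps
    s_mean, s_std = style.mean(dim=-1, keepdim=True), style.std(dim=-1, keepdim=True) + eps
    return (content - c_mean) / c_std * s_std + s_mean

out = adain(torch.randn(2, 80, 100), torch.randn(2, 80, 120))   # e.g. mel-spectrogram features
```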
Recently, StarGAN-VC (proposed by Kameoka et al. (2018) ) is a state-of-the-art method among all the GAN-based frameworks for non-parallel many-to-many VC.
AdaGAN is also GAN-based framework.
Therefore, we compare AdaGAN with StarGAN-VC for non-parallel many-to-many VC in terms of naturalness, speaker similarity, and computational complexity.
We observe that AdaGAN yields state-of-the-art results for this with almost 88.6% less computational complexity.
Recently proposed AutoVC (by Qian et al. (2019) ) is the only framework for zero-shot VC.
Inspired by this, we propose AdaGAN for zero-shot VC as an independent study, which is the first GAN-based framework to perform zero-shot VC.
We report initial results for zero-shot VC using AdaGAN. The main contributions of this work are as follows:
• We introduce the concept of latent representation based many-to-many VC using GAN for the first time in literature.
• We show that in the latent space content of the speech can be represented as the distribution and the properties of this distribution will represent the speaking style of the speaker.
• Although AdaGAN has much lower computational complexity, it shows much better results in terms of naturalness and speaker similarity compared to the baseline.
In this paper, we proposed novel AdaGAN primarily for non-parallel many-to-many VC task.
Moreover, we analyzed our proposed architecture w.r.t. current GAN-based state-of-the-art StarGAN-VC method for the same task.
We know that the main aim of VC is to convert the source speaker's voice into the target speaker's voice while preserving linguistic content.
To achieve this, we have used the style transfer algorithm along with the adversarial training.
AdaGAN transfers the style of the target speaker into the voice of a source speaker without using any feature-based mapping between the linguistic content of the source speaker's speech.
For this task, AdaGAN uses only one generator and one discriminator, which leads to less complexity.
AdaGAN is almost 88.6% computationally less complex than the StarGAN-VC.
We have performed subjective analysis on the VCTK corpus to show the efficiency of the proposed method.
We can clearly see that AdaGAN gives superior results in the subjective evaluations compared to StarGAN-VC.
Motivated by the work of AutoVC, we also extended the concept of AdaGAN for the zero-shot conversion as an independent study and reported results.
AdaGAN is the first GAN-based framework for zero-shot VC.
In the future, we plan to explore high-quality vocoders, namely, WaveNet, for further improvement in voice quality.
The perceptual difference observed between the estimated and the ground truth indicates the need for exploring better objective function that can perceptually optimize the network parameters of GAN-based architectures, which also forms our immediate future work.
As τ → ∞, the assumptions made in Section 5.1 hold.
Hence, from eq. (18), we can conclude that there exists a latent space where the normalized latent representation of the input features will be the same irrespective of speaking style.
Theorem 2: By optimizing min_{En,De} (L_C^{X→Y} + L_sty^{X→Y}), the assumptions made in Theorem 1 can be satisfied.
Proof: Our objective function is min_{En,De} (L_C^{X→Y} + L_sty^{X→Y}).
We iterate step by step to calculate the term t_2 used in the loss function L_sty^{X→Y}.
Consider the latent representations S_x1 and S_y1 corresponding to the source and target speech, respectively.
Step 1: ((S_x1(τ) − µ_1(τ)) / σ_1(τ)) · σ_2(τ) + µ_2(τ)   (representation of t_1),
Step 2&3: En( De( ((S_x1(τ) − µ_1(τ)) / σ_1(τ)) · σ_2(τ) + µ_2(τ) ) ).
After applying the decoder and encoder sequentially on the latent representation, we get back the same representation.
This is ensured by the loss function L_C^{X→Y}.
Formally, we want L_C^{X→Y} → 0.
Therefore, we can write Step 4 as:
Step 4: ((S_x1(τ) − µ_1(τ)) / σ_1(τ)) · σ_2(τ) + µ_2(τ)   (i.e., reconstructed t_1),
Step 5: (1/σ_2(τ)) · [ ((S_x1(τ) − µ_1(τ)) / σ_1(τ)) · σ_2(τ) + µ_2(τ) − µ_2(τ) ]   (normalization with its own µ and σ, i.e., those of the latent representation in Step 4, during AdaIN; the σ_2 and µ_2 terms cancel),
Step 6: (S_x1(τ) − µ_1(τ)) / σ_1(τ)   (final output of Step 5),
Step 7: ((S_x1(τ) − µ_1(τ)) / σ_1(τ)) · σ'_1(τ) + µ'_1(τ)   (output after de-normalization in AdaIN; representation of t_2), where µ'_1 and σ'_1 are the mean and standard deviation of another input source speech, x_2.
Now, using the mathematical representation of t_2, we can write the loss function L_sty^{X→Y} as in eq. (19).
We want to minimize this loss function; formally, L_sty^{X→Y} → 0.
Therefore, we will get µ'_1 = µ_1 and σ'_1 = σ_1 to achieve our goal.
Hence, mean and standard deviation of the same speaker are constant, and different for different speakers irrespective of the linguistic content.
We come to the conclusion that our loss function satisfies the necessary constraints (assumptions) required in proof of Theorem 1.
|
Novel adaptive instance normalization based GAN framework for non parallel many-to-many and zero-shot VC.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:578
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Self-attention-based Transformer has demonstrated the state-of-the-art performances in a number of natural language processing tasks.
Self attention is able to model long-term dependencies, but it may suffer from the extraction of irrelevant information in the context.
To tackle the problem, we propose a novel model called Sparse Transformer.
Sparse Transformer is able to improve the concentration of attention on the global context through an explicit selection of the most relevant segments.
Extensive experimental results on a series of natural language processing tasks, including neural machine translation, image captioning, and language modeling, all demonstrate the advantages of Sparse Transformer in model performance.
Sparse Transformer reaches the state-of-the-art performances in the IWSLT 2015 English-to-Vietnamese translation and IWSLT 2014 German-to-English translation.
In addition, we conduct qualitative analysis to account for Sparse Transformer's superior performance.
Understanding natural language requires the ability to pay attention to the most relevant information.
For example, people tend to focus on the most relevant segments to search for the answers to their questions in mind during reading.
However, retrieving problems may occur if irrelevant segments impose negative impacts on reading comprehension.
Such distraction hinders the understanding process, which calls for an effective attention.
This principle is also applicable to the computation systems for natural language.
Attention has been a vital component of the models for natural language understanding and natural language generation.
Recently, Vaswani et al. (2017) proposed Transformer, a model based on the attention mechanism for Neural Machine Translation(NMT).
Transformer has shown outstanding performance in natural language generation tasks.
More recently, the success of BERT (Devlin et al., 2018) in natural language processing shows the great usefulness of both the attention mechanism and the framework of Transformer.
However, the attention in vanilla Transformer has an obvious drawback, as the Transformer assigns credits to all components of the context.
This causes a lack of focus.
As illustrated in Figure 1 , the attention in vanilla Transformer assigns high credits to many irrelevant words, while in Explicit Sparse Transformer, it concentrates on the most relevant k words.
For the word "tim", the most related words should be "heart" and the immediate words.
Yet the attention in vanilla Transformer does not focus on them but gives credits to some irrelevant words such as "him".
Recent works have studied applying sparse attention in Transformer model.
However, they either add local attention constraints (Child et al., 2019), which break long-term dependencies, or hurt time efficiency (Martins & Astudillo, 2016).
Inspired by Ke et al. (2018), which introduces sparse credit assignment to the LSTM model, we propose a novel model called Explicit Sparse Transformer, which is equipped with our sparse attention mechanism.
We implement an explicit selection method based on top-k selection.
Unlike vanilla Transformer, Explicit Sparse Transformer only pays attention to the k most contributive states.
Thus Explicit Sparse Transformer can perform more concentrated attention than vanilla Transformer.
Figure 1 : Illustration of self-attention in the models.
The orange bar denotes the attention score of our proposed model while the blue bar denotes the attention scores of the vanilla Transformer.
The orange line denotes the attention between the target word "tim" and the selected top-k positions in the sequence.
In the attention of vanilla Transformer, "tim" assigns too many non-zero attention scores to the irrelevant words.
But for the proposed model, keeping only the top-k largest attention scores removes the distraction from irrelevant words, and the attention becomes concentrated.
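The selection described above can be written as a small masking step before the softmax; a minimal sketch follows, where the tensor shapes and tie handling are illustrative assumptions rather than the authors' implementation.

```python
import torch

def topk_sparse_attention(scores, k):
    """Keep only the k largest attention scores per query and mask the rest to -inf
    before the softmax, so each query attends to at most its top-k positions.
    scores: (batch, heads, q_len, k_len) scaled dot-product logits."""
    kth = scores.topk(k, dim=-1).values[..., -1:]            # k-th largest score per query
    masked = scores.masked_fill(scores < kth, float("-inf"))
    return torch.softmax(masked, dim=-1)

attn = topk_sparse_attention(torch.randn(1, 2, 5, 5), k=3)
```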
We first validate our methods on three tasks.
For further investigation, we compare our methods with previous sparse attention methods and experimentally answer how to choose k in a series of qualitative analyses.
We are surprised to find that the proposed sparse attention method can also help with training as a regularization method.
Visual analysis shows that Explicit Sparse Transformer exhibits a higher potential in performing a high-quality alignment.
The contributions of this paper are presented below:
• We propose a novel model called Explicit Sparse Transformer, which enhances the concentration of the Transformer's attention through explicit selection.
• We conducted extensive experiments on three natural language processing tasks, including Neural Machine Translation, Image Captioning and Language Modeling.
Compared with vanilla Transformer, Explicit Sparse Transformer demonstrates better performances in the above three tasks.
Specifically, our model reaches the state-of-the-art performances in the IWSLT 2015 English-to-Vietnamese translation.
• Compared to previous sparse attention methods for transformers, our methods are much faster in training and testing, and achieve better results.
In this section, we performed several analyses for further discussion of Explicit Sparse Transformer.
First, we compare the proposed method of top-k selection before softmax with previous sparse attention methods, including various variants of sparsemax (Martins & Astudillo, 2016; Correia et al., 2019; Peters et al., 2019).
Second, we discuss the selection of the value of k.
Third, we demonstrate that the top-k sparse attention method helps training.
In the end, we conducted a series of qualitative analyses to visualize proposed sparse attention in Transformer.
|
This work proposes Sparse Transformer to improve the concentration of attention on the global context through an explicit selection of the most relevant segments for sequence to sequence learning.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:579
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Continuous Normalizing Flows (CNFs) have emerged as promising deep generative models for a wide range of tasks thanks to their invertibility and exact likelihood estimation.
However, conditioning CNFs on signals of interest for conditional image generation and downstream predictive tasks is inefficient due to the high-dimensional latent code generated by the model, which needs to be of the same size as the input data.
In this paper, we propose InfoCNF, an efficient conditional CNF that partitions the latent space into a class-specific supervised code and an unsupervised code that shared among all classes for efficient use of labeled information.
Since the partitioning strategy (slightly) increases the number of function evaluations (NFEs), InfoCNF also employs gating networks to learn the error tolerances of its ordinary differential equation (ODE) solvers for better speed and performance.
We show empirically that InfoCNF improves the test accuracy over the baseline while yielding comparable likelihood scores and reducing the NFEs on CIFAR10.
Furthermore, applying the same partitioning strategy in InfoCNF on time-series data helps improve extrapolation performance.
Invertible models are an attractive modelling choice in a range of downstream tasks that require accurate densities, including anomaly detection (Bishop, 1994; Chandola et al., 2009) and model-based reinforcement learning (Polydoros & Nalpantidis, 2017).
These models enable exact latent-variable inference and likelihood estimation.
A popular class of invertible models is the flow-based generative models (Dinh et al., 2017; Rezende & Mohamed, 2015; Kingma & Dhariwal, 2018; Grathwohl et al., 2018) that employ a change of variables to transform a simple distribution into more complicated ones while preserving the invertibility and exact likelihood estimation.
However, computing the likelihood in flow-based models is expensive and usually requires restrictive constraints on the architecture in order to reduce the cost of computation.
Recently, introduced a new type of invertible model, named the Continuous Normalizing Flow (CNF), which employs ordinary differential equations (ODEs) to transform between the latent variables and the data.
The use of continuous-time transformations in CNF, instead of the discrete ones, together with efficient numerical methods such as the Hutchinson's trace estimator (Hutchinson, 1989) , helps reduce the cost of determinant computation from O(d 3 ) to O(d), where d is the latent dimension.
This improvement opens up opportunities to scale up invertible models to complex tasks on larger datasets where invertibility and exact inference have advantages.
Until recently, CNF has mostly been trained using unlabeled data.
In order to take full advantage of the available labeled data, a conditioning method for CNF -which models the conditional likelihood, as well as the posterior, of the data and the labels -is needed.
Existing approaches for conditioning flow-based models can be utilized, but we find that these methods often do not work well on CNF.
This drawback is because popular conditioning methods for flow-based models, such as in (Kingma & Dhariwal, 2018) , make use of the latent code for conditioning and introduce independent parameters for different class categories.
However, in CNF, for invertibility, the dimension of the latent code needs to be the same as the dimension of the input data and therefore is substantial, which results in many unnecessary parameters.
These additional but redundant parameters increase the complexity of the model and hinder learning efficiency.
Such overparametrization also has a negative impact on other flow-based generative models, as was pointed out by (Liu et al., 2019) , but is especially bad in the case of CNF.
This is because the ODE solvers in CNF are sensitive to the complexity of the model, and the number of function evaluations that the ODE solvers request in a single forward pass (NFEs) increases significantly as the complexity of the model increases, thereby slowing down the training.
This growing NFEs issue has been observed in unconditional CNF but to a much lesser extent (Grathwohl et al., 2018) .
It poses a unique challenge to scale up CNF and its conditioned variants for real-world tasks and data.
Our contributions in this paper are as follows:
Contribution 1: We propose a simple and efficient conditioning approach for CNF, namely InfoCNF.
Our method shares its high-level intuition with InfoGAN (Chen et al., 2016), hence the name.
In InfoCNF, we partition the latent code into two separate parts: class-specific supervised code and unsupervised code which is shared between different classes (see Figure 1) .
We use the supervised code to condition the model on the given supervised signal while the unsupervised code captures other latent variations in the data since it is trained using all label categories.
The supervised code is also used for classification, thereby reducing the size of the classifier and facilitating the learning.
Splitting the latent code into unsupervised and supervised parts allows the model to separate the learning of the task-relevant features and the learning of other features that help fit the data.
We later show that the cross-entropy loss used to train InfoCNF corresponds to the mutual information between the generated image and codes in InfoGAN, which encourages the model to learn disentangled representations.
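As an illustration of the latent-code partition described above, the following minimal sketch splits a flow's latent code into a supervised part (used for conditioning and classification) and an unsupervised part. The dimensions, the linear classifier, and the module producing z are assumptions of this sketch, not the authors' implementation.

# Minimal sketch of an InfoCNF-style latent split (illustrative only).
# Assumptions: `z` is the flat latent code produced by an invertible flow,
# `sup_dim` is the size of the class-specific (supervised) part, and a small
# linear classifier maps the supervised code to class logits.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentSplitHead(nn.Module):
    def __init__(self, latent_dim: int, sup_dim: int, num_classes: int):
        super().__init__()
        self.sup_dim = sup_dim
        # Small classifier acting only on the supervised code, as described above.
        self.classifier = nn.Linear(sup_dim, num_classes)

    def forward(self, z: torch.Tensor, labels: torch.Tensor):
        # Partition the latent code: supervised (class-specific) vs. unsupervised (shared).
        z_sup, z_unsup = z[:, :self.sup_dim], z[:, self.sup_dim:]
        logits = self.classifier(z_sup)
        # Cross-entropy on the supervised code; the text relates this term to the
        # mutual-information objective of InfoGAN.
        ce_loss = F.cross_entropy(logits, labels)
        return z_sup, z_unsup, ce_loss

# Usage with dummy data (dimensions are arbitrary placeholders).
head = LatentSplitHead(latent_dim=3072, sup_dim=64, num_classes=10)
z = torch.randn(8, 3072)
y = torch.randint(0, 10, (8,))
_, _, loss = head(z, y)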
We have developed an efficient framework, namely InfoCNF, for conditioning CNF via partitioning the latent code into the supervised and unsupervised part.
We investigated the possibility of tuning the error tolerances of the ODE solvers to speed up and improve the performance of InfoCNF.
We extended InfoCNF with gating networks that learn the error tolerances from the data.
We empirically show the advantages of InfoCNF and InfoCNF with learned tolerances over the baseline CCNF.
Finally, we study the possibility of improving large-batch training of our models using large learning rates and the learned error tolerances of the ODE solvers.
|
We propose the InfoCNF, an efficient conditional CNF that employs gating networks to learn the error tolerances of the ODE solvers
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:58
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Human observers can learn to recognize new categories of objects from a handful of examples, yet doing so with machine perception remains an open challenge.
We hypothesize that data-efficient recognition is enabled by representations which make the variability in natural signals more predictable, as suggested by recent perceptual evidence.
We therefore revisit and improve Contrastive Predictive Coding, a recently-proposed unsupervised learning framework, and arrive at a representation which enables generalization from small amounts of labeled data.
When provided with only 1% of ImageNet labels (i.e. 13 per class), this model retains a strong classification performance, 73% Top-5 accuracy, outperforming supervised networks by 28% (a 65% relative improvement) and state-of-the-art semi-supervised methods by 14%.
We also find this representation to serve as a useful substrate for object detection on the PASCAL-VOC 2007 dataset, approaching the performance of representations trained with a fully annotated ImageNet dataset.
Figure 1: With decreasing amounts of labeled data, supervised networks trained on pixels fail to generalize (red).
When trained on unsupervised representations learned with CPC, these networks retain a much higher accuracy in this low-data regime (blue).
Equivalently, the accuracy of supervised networks can be matched with significantly fewer labels.
Deep neural networks excel at perceptual tasks when labeled data are abundant, yet their performance degrades substantially when provided with limited supervision (Fig. 1, red ).
In contrast, humans and animals can quickly learn about new classes of objects from few examples (Landau et al., 1988; Markman, 1989) .
What accounts for this monumental difference in data-efficiency between biological and machine vision?
While highly-structured representations (e.g. as proposed by Lake et al., 2015) may improve data-efficiency, it remains unclear how to program explicit structures that capture the enormous complexity of real visual scenes like those in ImageNet (Russakovsky et al., 2015) .
An alternative hypothesis has proposed that intelligent systems need not be structured a priori, but can instead learn about the structure of the world in an unsupervised manner (Barlow, 1989; Hinton et al., 1999; LeCun et al., 2015) .
Choosing an appropriate training objective is an open problem, but a promising guiding principle has emerged recently: good representations should make the spatio-temporal variability in natural signals more predictable.
Indeed, human perceptual representations have been shown to linearize (or 'straighten') the temporal transformations found in natural videos, a property lacking from current supervised image recognition models (Hénaff et al., 2019) , and theories of both spatial and temporal predictability have succeeded in describing properties of early visual areas (Rao & Ballard, 1999; Palmer et al., 2015) .
In this work, we hypothesize that spatially predictable representations may allow artificial systems to benefit from human-like data-efficiency.
Contrastive Predictive Coding (CPC, van den Oord et al., 2018) is an unsupervised objective which learns such predictable representations.
CPC is a general technique that only requires in its definition that observations be ordered along e.g. temporal or spatial dimensions, and as such has been applied to a variety of different modalities including speech, natural language and images.
This generality, combined with the strong performance of its representations in downstream linear classification tasks, makes CPC a promising candidate for investigating the efficacy of predictable representations for data-efficient image recognition.
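In the spirit of the InfoNCE-style contrastive objective that CPC optimizes (van den Oord et al., 2018), here is a minimal sketch for reference. The bilinear scoring head, the use of other batch elements as negatives, and the dimensions are simplifying assumptions, not the implementation evaluated in this work.

# Simplified InfoNCE loss used by CPC-style models (illustrative sketch).
# `context` summarizes observations up to some spatial position; `targets` are
# the encoded patches to be predicted.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InfoNCE(nn.Module):
    def __init__(self, context_dim: int, target_dim: int):
        super().__init__()
        # Bilinear scoring via a learned projection (an assumption for this sketch).
        self.proj = nn.Linear(context_dim, target_dim, bias=False)

    def forward(self, context: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # context: (B, Dc), targets: (B, Dt); other rows in the batch act as negatives.
        scores = self.proj(context) @ targets.t()          # (B, B) similarity matrix
        labels = torch.arange(context.size(0), device=context.device)
        return F.cross_entropy(scores, labels)             # positives on the diagonal

# Usage with random features.
loss = InfoNCE(256, 128)(torch.randn(32, 256), torch.randn(32, 128))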
Our work makes the following contributions:
• We revisit CPC in terms of its architecture and training methodology, and arrive at a new implementation of CPC with dramatically-improved ability to linearly separate image classes (+17% Top-1 ImageNet classification accuracy).
• We then train deep networks on top of the resulting CPC representations using very few labeled images (e.g. 1% of the ImageNet dataset), and demonstrate test-time classification accuracy far above networks trained on raw pixels (73% Top-5 accuracy, a 28% absolute improvement), outperforming all other unsupervised representation learning methods (+15% Top-5 accuracy over the previous state-of-the-art ).
Surprisingly, this representation also surpasses supervised methods when given the entire ImageNet dataset (+1% Top-5 accuracy).
• We isolate the contributions of different components of the final model to such downstream tasks.
Interestingly, we find that linear classification accuracy is not always predictive of low-data classification accuracy, emphasizing the importance of this metric as a stand-alone benchmark for unsupervised learning.
• Finally, we assess the generality of CPC representations by transferring them to a new task and dataset: object detection on PASCAL-VOC 2007.
Consistent with the results from the previous section, we find CPC to give state-of-the-art performance in this setting.
We asked whether CPC could enable data-efficient image recognition, and found that it indeed greatly improves the accuracy of classifiers and object detectors when given small amounts of labeled data.
Surprisingly, CPC even improves results given ImageNet-scale labels.
Our results show that there is still room for improvement using relatively straightforward changes such as augmentation, optimization, and network architecture.
Furthermore, we found that the standard method for evaluating unsupervised representations-linear classification-is only partially predictive of efficient recognition performance, suggesting that further research should focus on efficient recognition as a standalone benchmark.
Overall, these results open the door toward research on problems where data is naturally limited, e.g. medical imaging or robotics.
(Table caption) Comparison of image detection accuracy to other transfer methods. The supervised baseline learns from the entire labeled ImageNet dataset and fine-tunes for PASCAL detection. The second class of methods learns from the same unlabeled images before transferring. All of these methods pre-train on the ImageNet dataset, except for DeeperCluster, which learns from the larger, but uncurated, YFCC100M dataset (Thomee et al., 2015). All results are reported in terms of mean average precision (mAP). † denotes methods implemented in this work.
|
Unsupervised representations learned with Contrastive Predictive Coding enable data-efficient image classification.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:580
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving dense performance levels.
We accomplish this by developing sparse momentum, an algorithm which uses exponentially smoothed gradients (momentum) to identify layers and weights which reduce the error efficiently.
Sparse momentum redistributes pruned weights across layers according to the mean momentum magnitude of each layer.
Within a layer, sparse momentum grows weights according to the momentum magnitude of zero-valued weights.
We demonstrate state-of-the-art sparse performance on MNIST, CIFAR-10, and ImageNet, decreasing the mean error by a relative 8%, 15%, and 6% compared to other sparse algorithms.
Furthermore, we show that sparse momentum reliably reproduces dense performance levels while providing up to 5.61x faster training.
In our analysis, ablations show that the benefits of momentum redistribution and growth increase with the depth and size of the network.
Current state-of-the-art neural networks need extensive computational resources to be trained and can have capacities of close to one billion connections between neurons (Vaswani et al., 2017; Devlin et al., 2018; Child et al., 2019) .
One solution that nature found to improve neural network scaling is to use sparsity: the more neurons a brain has, the fewer connections neurons make with each other (Herculano-Houzel et al., 2010) .
Similarly, for deep neural networks, it has been shown that sparse weight configurations exist which train faster and achieve the same errors as dense networks .
However, currently, these sparse configurations are found by starting from a dense network, which is pruned and re-trained repeatedly -an expensive procedure.
In this work, we demonstrate the possibility of training sparse networks that rival the performance of their dense counterparts with a single training run -no re-training is required.
We start with random initializations and maintain sparse weights throughout training while also speeding up the overall training time.
We achieve this by developing sparse momentum, an algorithm which uses the exponentially smoothed gradient of network weights (momentum) as a measure of persistent errors to identify which layers are most efficient at reducing the error and which missing connections between neurons would reduce the error the most.
Sparse momentum follows a cycle of (1) pruning weights with small magnitude, (2) redistributing weights across layers according to the mean momentum magnitude of existing weights, and (3) growing new weights to fill in missing connections which have the highest momentum magnitude.
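A rough sketch of one such cycle is given below. The dict-of-arrays bookkeeping, the prune rate, and the zero-initialization of grown weights are assumptions of this illustration rather than the authors' exact algorithm.

# Rough sketch of one prune/redistribute/grow cycle of sparse momentum (illustrative).
import numpy as np

def sparse_momentum_step(weights, masks, momentum, prune_rate=0.2):
    """weights, masks, momentum: dicts of same-shaped arrays, one entry per layer."""
    # (1) Prune: in every layer, zero out the smallest-magnitude active weights.
    pruned_total = 0
    for name, w in weights.items():
        active = np.flatnonzero(masks[name])
        k = int(prune_rate * active.size)
        drop = active[np.argsort(np.abs(w.flat[active]))[:k]]
        masks[name].flat[drop] = 0
        pruned_total += k

    # (2) Redistribute: split the pruned budget across layers in proportion to the
    # mean momentum magnitude of each layer's remaining weights.
    mean_mom = {n: np.abs(momentum[n][masks[n] > 0]).mean() for n in weights}
    total = sum(mean_mom.values())
    grow_share = {n: int(pruned_total * mean_mom[n] / total) for n in weights}

    # (3) Grow: within each layer, re-enable the zero-valued weights with the
    # largest momentum magnitude (initialized to zero, an assumption of this sketch).
    for name, w in weights.items():
        inactive = np.flatnonzero(masks[name] == 0)
        order = np.argsort(-np.abs(momentum[name].flat[inactive]))
        grow = inactive[order[:grow_share[name]]]
        masks[name].flat[grow] = 1
        w.flat[grow] = 0.0
    return masks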
We compare the performance of sparse momentum to compression algorithms and recent methods that maintain sparse weights throughout training.
We demonstrate state-of-the-art sparse performance on MNIST, CIFAR-10, and ImageNet-1k.
For CIFAR-10, we determine the percentage of weights needed to reach dense performance levels and find that AlexNet, VGG16, and Wide Residual Networks need between 35-50%, 5-10%, and 20-30% weights to reach dense performance levels.
We also estimate the overall speedups of training our sparse convolutional networks to dense performance levels on CIFAR-10 for optimal sparse convolution algorithms and naive dense convolution algorithms compared to dense baselines.
For sparse convolution, we estimate speedups between 2.74x and 5.61x and for dense convolution speedups between 1.07x and 1.36x.
In our analysis, ablations demonstrate that the momentum redistribution and growth components are increasingly important as networks get deeper and larger in size; both are critical for good ImageNet performance.
|
Redistributing and growing weights according to the momentum magnitude enables the training of sparse networks from random initializations that can reach dense performance levels with 5% to 50% weights while accelerating training by up to 5.6x.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:581
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
To provide principled ways of designing proper Deep Neural Network (DNN) models, it is essential to understand the loss surface of DNNs under realistic assumptions.
We introduce interesting aspects for understanding the local minima and overall structure of the loss surface.
The parameter domain of the loss surface can be decomposed into regions in which activation values (zero or one for rectified linear units) are consistent.
We found that, in each region, the loss surface has properties similar to those of linear neural networks, where every local minimum is a global minimum.
This means that every differentiable local minimum is the global minimum of the corresponding region.
We prove this for a neural network with one hidden layer using rectified linear units, under realistic assumptions.
There are poor regions that lead to poor local minima, and we explain why such regions exist even in the overparameterized DNNs.
Deep Neural Networks (DNNs) have achieved state-of-the-art performances in computer vision, natural language processing, and other areas of machine learning .
One of the most promising features of DNNs is its significant expressive power.
The expressiveness of DNNs even surpasses that of shallow networks, as a network with few layers needs an exponential number of nodes to achieve similar expressive power (Telgarsky, 2016).
DNNs have become even deeper since the vanishing gradient problem was alleviated by rectified linear units (ReLUs) BID12.
Nowadays, the ReLU has become the most popular activation function for hidden layers.
Leveraging this kind of activation function, the depth of DNNs has increased to more than 100 layers BID7.

Another problem in training DNNs is that the parameters can encounter pathological curvatures of the loss surface, prolonging training time. Some of these pathological curvatures, such as narrow valleys, cause unnecessary oscillations. To avoid these obstacles, various optimization methods were introduced (Tieleman & Hinton, 2012; BID9). These methods utilize the first and second order moments of the gradients to preserve historical trends. Gradient descent methods also have the problem of getting stuck in a poor local minimum. Poor local minima do exist in DNNs (Swirszcz et al., 2016), but recent works showed that errors at local minima are as low as those of global minima with high probability (BID4; BID2; BID8; BID14; Soudry & Hoffer, 2017).

In the case of linear DNNs, in which no activation function is used, every local minimum is a global minimum and all other critical points are saddle points BID8. Although these beneficial properties do not hold in general DNNs, we conjecture that they hold in each region of parameters where the activation values for each data point are the same, as shown in FIG0. We prove this for a simple network. The activation values of a node can differ between data points, as shown in FIG0, so it is hard to apply the proof techniques used for linear DNNs. The whole parameter space is a disjoint union of these regions, so we call this the loss surface decomposition.

Using the concept of loss surface decomposition, we explain why poor local minima exist even in large networks. There are poor local minima where the gradient flow disappears when using the ReLU (Swirszcz et al., 2016). We introduce another kind of poor local minima where the loss is the same as that of linear regression. To be more general, we prove that for each local minimum in a network, there exists a local minimum of the same loss in the larger network constructed by adding a node to that network.

(FIG0 caption) [formula omitted] In each region, activation values are the same. There are six nonempty regions. The parameters on the boundaries hit the non-differentiable point of the rectified linear unit.
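For concreteness, the activation-region decomposition described above can be written as follows; the indicator pattern A and the region-wise network g_A are written here in a form we assume matches the paper's definitions for a one-hidden-layer ReLU network.

% Sketch of the activation-region decomposition (notation assumed).
\begin{align}
R_A &= \big\{\, \theta \;:\; \mathbb{1}\!\left[ w_j^\top x_i + b_j > 0 \right] = A_{ij} \;\; \forall i, j \,\big\}, \\
g_A(x_i, \theta) &= \sum_j v_j \, A_{ij} \left( w_j^\top x_i + b_j \right) + c,
\end{align}
so that $L_f(\theta) = L_{g_A}(\theta)$ on each region $R_A$, and the parameter space decomposes into the disjoint union of the regions $R_A$.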
We conjecture that the loss surface is a disjoint union of activation regions where every local minimum is a subglobal minimum.
Using the concept of loss surface decomposition, we studied the existence of poor local minima and experimentally investigated losses of subglobal minima.
However, the structure of non-differentiable local minima is not yet well understood.
These non-differentiable points exist within the boundaries of the activation regions which can be obstacles when using gradient descent methods.
Further work is needed to extend our knowledge of the local minima, the activation regions, and their boundaries.
Let θ ∈ R_A be a differentiable point, so it is not on the boundaries of the activation regions. This implies that w_j^T x_i + b_j ≠ 0 for all parameters. Without loss of generality, assume w_j^T x_i + b_j < 0. Then there exists ε > 0 such that w_j^T x_i + b_j + ε < 0. This implies that small changes of the parameters in any direction do not change the activation region. Since L_f(θ) and L_{g_A}(θ) are equivalent in the region R_A, the local curvatures of the two functions around θ are also the same. Thus, θ is a local minimum (saddle point) of L_f(θ) if and only if it is a local minimum (saddle point) of L_{g_A}(θ).
Since g_A(x_i, θ) is a linear transformation of p_j, q_j, and c, the squared error (g_A(x_i, θ) − y_i)^2 is convex in terms of p_j, q_j, and c. A summation of convex functions is convex, so the lemma holds.

A.3 PROOF OF THEOREM 2.5
(1) Assume that the activation values are not all zeros, and consider the Hessian matrix evaluated with respect to v_j and b_j for some non-zero activation values a_ij > 0: [formula omitted]. Let v_j = 0 and b_j = 0; then the two eigenvalues of the Hessian matrix are: [formula omitted]. There exists c > 0 such that g_A(x_i, θ) > y_i for all i. If we choose such a c, then the mixed partial derivative with respect to v_j and b_j is positive, which implies that one eigenvalue is positive and one is negative. Since the Hessian matrix is neither positive semidefinite nor negative semidefinite, the function L_{g_A}(θ) is non-convex and non-concave.

(2, 3) We organize some of the gradients as follows: [formulas omitted]. We select a critical point θ* where ∇_{w_j} L_{g_A}(θ*) = 0, ∇_{v_j} L_{g_A}(θ*) = 0, ∇_{b_j} L_{g_A}(θ*) = 0, and ∇_c L_{g_A}(θ*) = 0 for all j.

Case 1) Assume that ∇_{p_j} L_{g_A}(θ*) = 0 and ∇_{q_j} L_{g_A}(θ*) = 0 for all j. These points are global minima, since ∇_c L_{g_A}(θ*) = 0 and L_{g_A}(θ) is convex in terms of p_j, q_j, and c.

Case 2) Assume that there exists j such that ∇_{p_j} L_{g_A}(θ*) ≠ 0. Then there exists an element w* in w_j such that ∇_{v_j} ∇_{w*} L_{g_A}(θ*) ≠ 0. Consider the Hessian matrix evaluated with respect to w* and v_j. Analogous to the proof of (1), this matrix is neither positive semidefinite nor negative semidefinite. Thus θ* is a saddle point.

Case 3) Assume that there exists j such that ∇_{q_j} L_{g_A}(θ*) ≠ 0. Since ∇_{b_j} L_{g_A}(θ*) = v_j ∇_{q_j} L_{g_A}(θ*) = 0, v_j is zero. Analogous to Case 2, the Hessian matrix evaluated with respect to b_j and v_j is neither positive semidefinite nor negative semidefinite. Thus θ* is a saddle point.

As a result, every critical point is either a global minimum or a saddle point. Since L_{g_A}(θ) is a differentiable function, every local minimum is a critical point. Thus every local minimum is a global minimum.
|
The loss surface of neural networks is a disjoint union of regions where every local minimum is a global minimum of the corresponding region.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:582
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Contextualized word representations, such as ELMo and BERT, have been shown to perform well on a variety of semantic and structural (syntactic) tasks.
In this work, we tackle the task of unsupervised disentanglement between semantics and structure in neural language representations: we aim to learn a transformation of the contextualized vectors, that discards the lexical semantics, but keeps the structural information.
To this end, we automatically generate groups of sentences which are structurally similar but semantically different, and use metric-learning approach to learn a transformation that emphasizes the structural component that is encoded in the vectors.
We demonstrate that our transformation clusters vectors in space by structural properties, rather than by lexical semantics.
Finally, we demonstrate the utility of our distilled representations by showing that they outperform the original contextualized representations in few-shot parsing setting.
Human language is a complex system, involving an intricate interplay between meaning (semantics) and structural rules between words and phrases (syntax).
Self-supervised neural sequence models for text trained with a language modeling objective, such as ELMo (Peters et al., 2018) , BERT (Devlin et al., 2019) , and RoBERTA (Liu et al., 2019b) , were shown to produce representations that excel in recovering both structure-related information (Gulordava et al., 2018; van Schijndel & Linzen; Wilcox et al., 2018; Goldberg, 2019) as well as in semantic information (Yang et al., 2019; Joshi et al., 2019) .
In this work, we study the problem of disentangling structure from semantics in neural language representations: we aim to extract representations that capture the structural function of words and sentences, but which are not sensitive to their content.
For example, consider the sentences:
We aim to learn a function from contextualized word representations to a space that exposes these similarities.
Crucially, we aim to do this in an unsupervised manner: we do not want to inform the process of the kind of structural information we want to obtain.
We do this by learning a transformation that attempts to remove the lexical-semantic information in a sentence, while trying to preserve structural properties.
Disentangling syntax from lexical semantics in word representations is a desired property for several reasons.
From a purely scientific perspective, once disentanglement is achieved, one can better control for confounding factors and analyze the knowledge the model acquires, e.g. attributing the predictions of the model to one factor of variation while controlling for the other.
In addition to explaining model predictions, such disentanglement can be useful for the comparison of the representations the model acquires to linguistic knowledge.
From a more practical perspective, disentanglement can be a first step toward controlled generation/paraphrasing that considers only aspects of the structure, akin to the style-transfer works in computer vision, i.e., rewriting a sentence while preserving its structural properties while ignoring its meaning, or vice-versa.
It can also inform search-based application in which one can search for "similar" texts while controlling various aspects of the desired similarity.
To achieve this goal, we begin with the intuition that the structural component in the representation (capturing the form) should remain the same regardless of the lexical semantics of the sentence (the meaning).
Rather than beginning with a parsed corpus, we automatically generate a large number of structurally-similar sentences, without presupposing their formal structure ( §3.1).
This allows us to pose the disentanglement problem as a metric-learning problem: we aim to learn a transformation of the contextualized representation, which is invariant to changes in the lexical semantics within each group of structurally-similar sentences ( §3.3).
We demonstrate the structural properties captured by the resulting representations in several experiments ( §4), among them automatic identification of structurally-similar words and few-shot parsing.
In this work, we propose an unsupervised method for the distillation of structural information from neural contextualized word representations.
We used a process of sequential BERT-based substitution to create a large number of sentences which are structurally similar, but semantically different. (Figure 4: Results of the few-shot parsing setup.)
By controlling for one aspect -structure -while changing the other -lexical choice, we learn a metric (via triplet loss) under which pairs of words that come from structurally-similar sentences are close in space.
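A minimal sketch of this triplet objective is given below; the margin, the linear transformation f, and the sampling of anchor/positive/negative word vectors (positives drawn from structurally-similar sentences, negatives from unrelated ones) are assumptions for illustration, not the authors' training code.

# Illustrative triplet-loss setup for learning a structure-sensitive transformation f
# on top of contextualized word vectors.
import torch
import torch.nn as nn

class StructuralProjection(nn.Module):
    def __init__(self, dim_in: int = 1024, dim_out: int = 128):
        super().__init__()
        self.f = nn.Linear(dim_in, dim_out)   # the learned transformation f

    def forward(self, x):
        return self.f(x)

proj = StructuralProjection()
triplet = nn.TripletMarginLoss(margin=1.0)

# anchor/positive: the same word position in two structurally-equivalent sentences;
# negative: a word from a structurally different sentence (dummy tensors here).
anchor, positive, negative = (torch.randn(16, 1024) for _ in range(3))
loss = triplet(proj(anchor), proj(positive), proj(negative))
loss.backward()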
We demonstrated that the representations acquired by this method share structural properties with their neighbors in space, and showed that with minimal supervision these representations outperform ELMo in the task of few-shot parsing.
The method presented here is a first step towards a better disentanglement between various kinds of information that is represented in neural sequence models.
The method used to create the structurally equivalent sentences can be useful by its own for other goals, such as augmenting parse-tree banks (which are often scarce and require large resources to annotate).
In a future work, we aim to extend this method to allow for a more soft alignment between structurally-equivalent sentences.
Table 4 : Results in the closest-word queries, before and after the application of the syntactic transformation.
"Basline" refers to unmodified vectors derived from BERT, and "Transformed" refers to the vectors after the learned syntactic transformation f .
"Difficult" refers to evaluation on the subset of POS tags which are most structurally diverse.
|
We distill language models representations for syntax by unsupervised metric learning
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:583
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We introduce a new memory architecture for navigation in previously unseen environments, inspired by landmark-based navigation in animals.
The proposed semi-parametric topological memory (SPTM) consists of a (non-parametric) graph with nodes corresponding to locations in the environment and a (parametric) deep network capable of retrieving nodes from the graph based on observations.
The graph stores no metric information, only connectivity of locations corresponding to the nodes.
We use SPTM as a planning module in a navigation system.
Given only 5 minutes of footage of a previously unseen maze, an SPTM-based navigation agent can build a topological map of the environment and use it to confidently navigate towards goals.
The average success rate of the SPTM agent in goal-directed navigation across test environments is higher than the best-performing baseline by a factor of three.
Deep learning (DL) has recently been used as an efficient approach to learning navigation in complex three-dimensional environments.
DL-based approaches to navigation can be broadly divided into three classes: purely reactive BID49, based on unstructured general-purpose memory such as LSTM BID33 BID31, and employing a navigation-specific memory structure based on a metric map BID36.

However, extensive evidence from psychology suggests that when traversing environments, animals do not rely strongly on metric representations BID16 BID47 BID13. Rather, animals employ a range of specialized navigation strategies of increasing complexity. According to BID13, one such strategy is landmark navigation, "the ability to orient with respect to a known object". Another is route-based navigation, which "involves remembering specific sequences of positions". Finally, map-based navigation assumes a "survey knowledge of the environmental layout", but the map need not be metric and in fact it is typically not: "[...] humans do not integrate experience on specific routes into a metric cognitive map for navigation [...] Rather, they primarily depend on a landmark-based navigation strategy, which can be supported by qualitative topological knowledge of the environment."

In this paper, we propose semi-parametric topological memory (SPTM), a deep-learning-based memory architecture for navigation, inspired by landmark-based navigation in animals. SPTM consists of two components: a non-parametric memory graph G where each node corresponds to a location in the environment, and a parametric deep network R capable of retrieving nodes from the graph based on observations. The graph contains no metric relations between the nodes, only connectivity information. While exploring the environment, the agent builds the graph by appending observations to it and adding shortcut connections based on detected visual similarities. The network R is trained to retrieve nodes from the graph based on an observation of the environment. This allows the agent to localize itself in the graph. Finally, we build a complete SPTM-based navigation agent by complementing the memory with a locomotion network L, which allows the agent to move between nodes in the graph. The R and L networks are trained in self-supervised fashion, without any manual labeling or reward signal.

We evaluate the proposed system and relevant baselines on the task of goal-directed maze navigation in simulated three-dimensional environments. The agent is instantiated in a previously unseen maze and given a recording of a walk through the maze (images only, no information about actions taken or ego-motion). Then the agent is initialized at a new location in the maze and has to reach a goal location, given an image of that goal. To be successful at this task, the agent must represent the maze based on the footage it has seen, and effectively utilize this representation for navigation.

The proposed system outperforms baseline approaches by a large margin. Given 5 minutes of maze walkthrough footage, the system is able to build an internal representation of the environment and use it to confidently navigate to various goals within the maze. The average success rate of the SPTM agent in goal-directed navigation across test environments is higher than the best-performing baseline by a factor of three. Qualitative results and an implementation of the method are available at https://sites.google.com/view/SPTM.
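To make the pipeline concrete, here is a rough sketch of how the topological graph could be built from walkthrough footage and used to select waypoints; the similarity threshold, the retrieval-score function standing in for the network R, and the use of networkx for shortest paths are assumptions of this sketch rather than details from the paper.

# Illustrative SPTM-style pipeline: build a topological graph from a sequence of
# observations, then plan a path of waypoints toward a goal observation.
import networkx as nx

def build_graph(observations, retrieval_score, shortcut_threshold=0.95):
    """retrieval_score(o1, o2) -> similarity in [0, 1], e.g. from a retrieval network."""
    g = nx.Graph()
    g.add_nodes_from(range(len(observations)))
    # Temporal edges between consecutive frames of the walkthrough.
    g.add_edges_from((i, i + 1) for i in range(len(observations) - 1))
    # Shortcut edges between visually similar, temporally distant frames.
    for i in range(len(observations)):
        for j in range(i + 5, len(observations)):
            if retrieval_score(observations[i], observations[j]) > shortcut_threshold:
                g.add_edge(i, j)
    return g

def plan_waypoint(g, observations, current_obs, goal_obs, retrieval_score):
    # Localize the current and goal observations in the graph via the retrieval score.
    localize = lambda o: max(g.nodes, key=lambda n: retrieval_score(o, observations[n]))
    path = nx.shortest_path(g, localize(current_obs), localize(goal_obs))
    # The next node on the path is the waypoint handed to the locomotion policy.
    return path[1] if len(path) > 1 else path[0]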
We have proposed semi-parametric topological memory (SPTM), a memory architecture that consists of a non-parametric component -a topological graph, and a parametric component -a deep network capable of retrieving nodes from the graph given observations from the environment.
We have shown that SPTM can act as a planning module in a navigation system.
This navigation agent can efficiently reach goals in a previously unseen environment after being presented with only 5 minutes of footage.
We see several avenues for future work.
First, improving the performance of the networks R and L will directly improve the overall quality of the system.
Second, while the current system explicitly avoids using ego-motion information, findings from experimental psychology suggest that noisy ego-motion estimation and path integration are useful for navigation.
Incorporating these into our model can further improve robustness.
Third, in our current system the size of the memory grows linearly with the duration of the exploration period.
This may become problematic when navigating in very large environments, or in lifelong learning scenarios.
A possible solution is adaptive subsampling, by only retaining the most informative or discriminative observations in memory.
Finally, it would be interesting to integrate SPTM into a system that is trainable end-to-end.

SUPPLEMENTARY MATERIAL
S1 METHOD DETAILS
S1.1 NETWORK ARCHITECTURES
The retrieval network R and the locomotion network L are both based on ResNet-18 BID19.
Both take 160×120 pixel images as inputs.
The networks are initialized as proposed by BID19 .
We used an open ResNet implementation: https://github.com/raghakot/keras-resnet/blob/master/resnet.py. The network R admits two observations as input.
Each of these is processed by a convolutional ResNet-18 encoder.
Each of the encoders produces a 512-dimensional embedding vector.
These are concatenated and fed through a fully-connected network with 4 hidden layers with 512 units each and ReLU nonlinearities. The network L also admits two observations, but in contrast with the network R it processes them jointly, after concatenating them together.
A convolutional ResNet-18 encoder is followed by a single fully-connected layer with 7 outputs and a softmax.
The 7 outputs correspond to all available actions: do nothing, move forward, move backward, move left, move right, turn left, and turn right.
|
We introduce a new memory architecture for navigation in previously unseen environments, inspired by landmark-based navigation in animals.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:584
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The available resolution in our visual world is extremely high, if not infinite.
Existing CNNs can be applied in a fully convolutional way to images of arbitrary resolution, but as the size of the input increases, they can not capture contextual information.
In addition, computational requirements scale linearly to the number of input pixels, and resources are allocated uniformly across the input, no matter how informative different image regions are.
We attempt to address these problems by proposing a novel architecture that traverses an image pyramid in a top-down fashion, while it uses a hard attention mechanism to selectively process only the most informative image parts.
We conduct experiments on the MNIST and ImageNet datasets, and we show that our models can significantly outperform fully convolutional counterparts when the resolution of the input is so large that the receptive field of the baselines cannot adequately cover the objects of interest.
Gains in performance come at a lower FLOP count, because of the selective processing that we follow.
Furthermore, our attention mechanism makes our predictions more interpretable, and creates a trade-off between accuracy and complexity that can be tuned both during training and testing time.
Our visual world is very rich, and there is information of interest in an almost infinite number of different scales.
As a result, we would like our models to be able to process images of arbitrary resolution, in order to capture visual information with arbitrary level of detail.
This is possible with existing CNN architectures, since we can use fully convolutional processing (Long et al. (2015) ), coupled with global pooling.
However, global pooling ignores the spatial configuration of feature maps, and the output essentially becomes a bag of features.
To demonstrate why this is an important problem, in Figure 1(a) and (b) we provide an example of a simple CNN processing an image at two different resolutions. In (a) we see that the receptive field of neurons from the second layer suffices to cover half of the kid's body, while in (b) the receptive field of the same neurons covers an area that corresponds to the size of a foot.
This shows that as the input size increases, the final representation becomes a bag of increasingly more local features, leading to the absence of coarselevel information, and potentially harming performance.
We call this phenomenon the receptive field problem of fully convolutional processing.
An additional problem is that computational resources are allocated uniformly to all image regions, no matter how important they are for the task at hand.
For example, in Figure 1 (b), the same amount of computation is dedicated to process both the left half of the image that contains the kid, and the right half that is merely background.
We also have to consider that computational complexity scales linearly with the number of input pixels, and as a result, the bigger the size of the input, the more resources are wasted on processing uninformative regions.
We attempt to resolve the aforementioned problems by proposing a novel architecture that traverses an image pyramid in a top-down fashion, while it visits only the most informative regions along the way.
(Figure 1 caption) The receptive field problem of fully convolutional processing. A simple CNN consisting of 2 convolutional layers (colored green), followed by a global pooling layer (colored red), processes an image at two different resolutions. The shaded regions indicate the receptive fields of neurons from different layers. As the resolution of the input increases, the final latent representation becomes a bag of increasingly more local features, lacking coarse information. (c) A sketch of our proposed architecture. The arrows on the left side of the image demonstrate how we focus on image sub-regions in our top-down traversal, while the arrows on the right show how we combine the extracted features in a bottom-up fashion.
In Figure 1 (c) we provide a simplified sketch of our approach.
We start at level 1, where we process the input image in low resolution, to get a coarse description of its content.
The extracted features (red cube) are used to select out of a predefined grid, the image regions that are worth processing in higher resolution.
This process constitutes a hard attention mechanism, and the arrows on the left side of the image show how we extend processing to 2 additional levels.
All extracted features are combined together as denoted by the arrows on the right, to create the final image representation that is used for classification (blue cube).
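A rough sketch of the traversal just described is given below; the 2x2 grid, the toy feature extractor, and the variance-based selection rule stand in for the learned networks and the learned hard-attention mechanism, so this is an illustration of the control flow rather than the actual model.

# Illustrative top-down pyramid traversal with hard attention (not the paper's code).
# At each level, features of the current region are extracted, and only the top-k
# sub-regions of a 2x2 grid are processed at higher resolution.
import numpy as np

def extract_features(region):          # stand-in for a small CNN
    return np.array([region.mean(), region.std()])

def select_topk(features, k=2):        # stand-in for the learned attention mechanism
    # Rank the four candidate sub-regions; here we simply score by variance.
    return np.argsort(-features[:, 1])[:k]

def traverse(image, level, max_level, k=2):
    feats = [extract_features(image)]
    if level == max_level:
        return feats
    h, w = image.shape[0] // 2, image.shape[1] // 2
    subregions = [image[:h, :w], image[:h, w:], image[h:, :w], image[h:, w:]]
    sub_feats = np.stack([extract_features(r) for r in subregions])
    for idx in select_topk(sub_feats, k):           # hard attention: visit only top-k
        feats += traverse(subregions[idx], level + 1, max_level, k)
    return feats

# The final representation combines features from all visited pyramid levels.
image = np.random.rand(256, 256)
representation = np.concatenate(traverse(image, level=1, max_level=3))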
We evaluate our model on synthetic variations of MNIST (LeCun et al., 1998 ) and on ImageNet (Deng et al., 2009 ), while we compare it against fully convolutional baselines.
We show that when the resolution of the input is so large that the receptive field of the baseline covers only a relatively small portion of the object of interest, our network performs significantly better.
We attribute this behavior to the ability of our model to capture both contextual and local information by extracting features from different pyramid levels, while the baselines suffer from the receptive field problem.
Gains in accuracy are achieved for less floating point operations (FLOPs) compared to the baselines, due to the attention mechanism that we use.
If we increase the number of attended image locations, computational requirements increase, but the probability of making a correct prediction is expected to increase as well.
This is a trade-off between accuracy and computational complexity, that can be tuned during training through regularization, and during testing by stopping processing on early levels.
Finally, by inspecting attended regions, we are able to get insights about the image parts that our networks value the most, and to interpret the causes of missclassifications.
We proposed a novel architecture that is able to process images of arbitrary resolution without sacrificing spatial information, as it typically happens with fully convolutional processing.
This is achieved by approaching feature extraction as a top-down image pyramid traversal, that combines information from multiple different scales.
The employed attention mechanism allows us to adjust the computational requirements of our models, by changing the number of locations they attend.
This way we can exploit the existing trade-off between computational complexity and accuracy.
Furthermore, by inspecting the image regions that our models attend, we are able to get important insights about the causes of their decisions.
Finally, there are multiple future research directions that we would like to explore.
These include the improvement of the localization capabilities of our attention mechanism, and the application of our model to the problem of budgeted batch classification.
In addition, we would like our feature extraction process to become more adaptive, by allowing already extracted features to affect the processing of image regions that are attended later on.
In Figure 8 we provide the parsing tree that our model implicitly creates.
|
We propose a novel architecture that traverses an image pyramid in a top-down fashion, while it visits only the most informative regions along the way.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:585
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Hyperparameter optimization can be formulated as a bilevel optimization problem, where the optimal parameters on the training set depend on the hyperparameters.
We aim to adapt regularization hyperparameters for neural networks by fitting compact approximations to the best-response function, which maps hyperparameters to optimal weights and biases.
We show how to construct scalable best-response approximations for neural networks by modeling the best-response as a single network whose hidden units are gated conditionally on the regularizer.
We justify this approximation by showing the exact best-response for a shallow linear network with L2-regularized Jacobian can be represented by a similar gating mechanism.
We fit this model using a gradient-based hyperparameter optimization algorithm which alternates between approximating the best-response around the current hyperparameters and optimizing the hyperparameters using the approximate best-response function.
Unlike other gradient-based approaches, we do not require differentiating the training loss with respect to the hyperparameters, allowing us to tune discrete hyperparameters, data augmentation hyperparameters, and dropout probabilities.
Because the hyperparameters are adapted online, our approach discovers hyperparameter schedules that can outperform fixed hyperparameter values.
Empirically, our approach outperforms competing hyperparameter optimization methods on large-scale deep learning problems.
We call our networks, which update their own hyperparameters online during training, Self-Tuning Networks (STNs).
Regularization hyperparameters such as weight decay, data augmentation, and dropout (Srivastava et al., 2014) are crucial to the generalization of neural networks, but are difficult to tune.
Popular approaches to hyperparameter optimization include grid search, random search BID3 , and Bayesian optimization (Snoek et al., 2012) .
These approaches work well with low-dimensional hyperparameter spaces and ample computational resources; however, they pose hyperparameter optimization as a black-box optimization problem, ignoring structure which can be exploited for faster convergence, and require many training runs.

We can formulate hyperparameter optimization as a bilevel optimization problem.
Let w denote parameters (e.g. weights and biases) and λ denote hyperparameters (e.g. dropout probability).
Let L T and L V be functions mapping parameters and hyperparameters to training and validation losses, respectively.
We aim to solve:

λ* = argmin_λ L_V(λ, w*)   subject to   w* ∈ argmin_w L_T(λ, w).   (1)

Substituting the best-response function w*(λ) = argmin_w L_T(λ, w) gives a single-level problem:

λ* = argmin_λ L_V(λ, w*(λ)).   (2)

If the best-response w* is known, the validation loss can be minimized directly by gradient descent using Equation 2, offering dramatic speed-ups over black-box methods.
However, as the solution to a high-dimensional optimization problem, it is difficult to compute w* even approximately. Following Lorraine & Duvenaud (2018), we propose to approximate the best-response w* directly with a parametric function ŵ_φ.
We jointly optimize φ and λ, first updating φ so that ŵ_φ ≈ w* in a neighborhood around the current hyperparameters, then updating λ by using ŵ_φ as a proxy for w* in Eq. 2; that is, we alternate between fitting φ to minimize L_T(λ, ŵ_φ(λ)) locally around the current λ and updating λ to minimize L_V(λ, ŵ_φ(λ)). Finding a scalable approximation ŵ_φ when w represents the weights of a neural network is a significant challenge, as even simple implementations entail significant memory overhead.
We show how to construct a compact approximation by modelling the best-response of each row in a layer's weight matrix/bias as a rank-one affine transformation of the hyperparameters.
We show that this can be interpreted as computing the activations of a base network in the usual fashion, plus a correction term dependent on the hyperparameters.
We justify this approximation by showing the exact best-response for a shallow linear network with L 2 -regularized Jacobian follows a similar structure.
We call our proposed networks Self-Tuning Networks (STNs), since they update their own hyperparameters online during training.

STNs enjoy many advantages over other hyperparameter optimization methods.
First, they are easy to implement by replacing existing modules in deep learning libraries with "hyper" counterparts which accept an additional vector of hyperparameters as input.
Second, because the hyperparameters are adapted online, we ensure that computational effort expended to fit φ around previous hyperparameters is not wasted.
In addition, this online adaption yields hyperparameter schedules which we find empirically to outperform fixed hyperparameter settings.
Finally, the STN training algorithm does not require differentiating the training loss with respect to the hyperparameters, unlike other gradient-based approaches (Maclaurin et al., 2015; Larsen et al., 1996) , allowing us to tune discrete hyperparameters, such as the number of holes to cut out of an image BID12 , data-augmentation hyperparameters, and discrete-noise dropout parameters.
Empirically, we evaluate the performance of STNs on large-scale deep-learning problems with the Penn Treebank (Marcus et al., 1993) and CIFAR-10 datasets (Krizhevsky & Hinton, 2009) , and find that they substantially outperform baseline methods.
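As an illustration of the approximation described above, where each layer computes its usual activations plus a hyperparameter-dependent correction that scales and shifts hidden units, here is a minimal sketch of a "hyper" linear layer. The parameterization, the sizes, and the way the hyperparameter vector is fed in are assumptions of this sketch, not the released implementation.

# Illustrative "hyper" linear layer: a base transformation plus a correction whose
# units are gated conditionally on the hyperparameters lambda.
import torch
import torch.nn as nn

class HyperLinear(nn.Module):
    def __init__(self, d_in: int, d_out: int, n_hparams: int):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)              # usual weights and biases
        self.resp = nn.Linear(d_in, d_out, bias=False)  # best-response direction
        # Maps hyperparameters to per-unit scales for the correction term.
        self.gate = nn.Linear(n_hparams, d_out)

    def forward(self, x: torch.Tensor, lam: torch.Tensor) -> torch.Tensor:
        correction = self.resp(x) * self.gate(lam)      # hyperparameter-gated units
        return self.base(x) + correction

# Usage: lam could hold (transformed) dropout rates, weight-decay coefficients, etc.
layer = HyperLinear(d_in=32, d_out=64, n_hparams=3)
x, lam = torch.randn(8, 32), torch.rand(8, 3)
y = layer(x, lam)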
We introduced Self-Tuning Networks (STNs), which efficiently approximate the best-response of parameters to hyperparameters by scaling and shifting their hidden units.
This allowed us to use gradient-based optimization to tune various regularization hyperparameters, including discrete hyperparameters.
We showed that STNs discover hyperparameter schedules that can outperform fixed hyperparameters.
We validated the approach on large-scale problems and showed that STNs achieve better generalization performance than competing approaches, in less time.
We believe STNs offer a compelling path towards large-scale, automated hyperparameter tuning for neural networks.
|
We use a hypernetwork to predict optimal weights given hyperparameters, and jointly train everything together.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:586
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Conditional Generative Adversarial Networks (cGANs) are finding increasingly widespread use in many application domains.
Despite outstanding progress, quantitative evaluation of such models often involves multiple distinct metrics to assess different desirable properties, such as image quality, conditional consistency, and intra-conditioning diversity.
In this setting, model benchmarking becomes a challenge, as each metric may indicate a different "best" model.
In this paper, we propose the Frechet Joint Distance (FJD), which is defined as the Frechet distance between joint distributions of images and conditioning, allowing it to implicitly capture the aforementioned properties in a single metric.
We conduct proof-of-concept experiments on a controllable synthetic dataset, which consistently highlight the benefits of FJD when compared to currently established metrics.
Moreover, we use the newly introduced metric to compare existing cGAN-based models for a variety of conditioning modalities (e.g. class labels, object masks, bounding boxes, images, and text captions).
We show that FJD can be used as a promising single metric for model benchmarking.
The use of generative models is growing across many domains (van den Oord et al., 2016c; Vondrick et al., 2016; Serban et al., 2017; Karras et al., 2018; Brock et al., 2019) .
Among the most promising approaches, Variational Auto-Encoders (VAEs) (Kingma & Welling, 2014) , auto-regressive models (van den Oord et al., 2016a; b) , and Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have been driving significant progress, with the latter at the forefront of a wide-range of applications (Mirza & Osindero, 2014; Reed et al., 2016; Zhang et al., 2018a; Vondrick et al., 2016; Almahairi et al., 2018; Subramanian et al., 2018; Salvador et al., 2019) .
In particular, significant research has emerged from practical applications, which require generation to be based on existing context.
For example, tasks such as image inpainting, super-resolution, or text-to-image synthesis have been successfully addressed within the framework of conditional generation, with conditional GANs (cGANs) among the most competitive approaches.
Despite these outstanding advances, quantitative evaluation of GANs remains a challenge (Theis et al., 2016; Borji, 2018) .
In the last few years, a significant number of evaluation metrics for GANs have been introduced in the literature (Salimans et al., 2016; Heusel et al., 2017; Bińkowski et al., 2018; Shmelkov et al., 2018; Zhou et al., 2019; Kynkäänniemi et al., 2019; Ravuri & Vinyals, 2019) .
Although there is no clear consensus on which quantitative metric is most appropriate to benchmark GAN-based models, Inception Score (IS) (Salimans et al., 2016) and Fréchet Inception Distance (FID) (Heusel et al., 2017) have been extensively used.
However, both IS and FID were introduced in the context of unconditional image generation and, hence, focus on capturing certain desirable properties such as visual quality and sample diversity, which do not fully encapsulate all the different phenomena that arise during conditional image generation.
In conditional generation, we care about visual quality, conditional consistency -i.e., verifying that the generation respects its conditioning, and intra-conditioning diversity -i.e., sample diversity per conditioning.
Although visual quality is captured by both metrics, IS is agnostic to intra-conditioning diversity and FID only captures it indirectly.
Moreover, neither of them can capture conditional consistency.
In order to overcome these shortcomings, researchers have resorted to reporting conditional consistency and diversity metrics in conjunction with FID (Park et al., 2019).
Consistency metrics often use some form of concept detector to ensure that the requested conditioning appears in the generated image as expected.
Although intuitive to use, these metrics require pretrained models that cover the same target concepts in the same format as the conditioning (i.e., classifiers for image-level class conditioning, semantic segmentation for mask conditioning, etc.), which may or may not be available off-the-shelf.
Moreover, using different metrics to evaluate different desirable properties may hinder the process of model selection, as there may not be a single model that surpasses the rest in all measures.
In fact, it has recently been demonstrated that there is a natural trade-off between image quality and sample diversity (Yang et al., 2019) , which calls into question how we might select the correct balance of these properties.
In this paper we introduce a new metric called Fréchet Joint Distance (FJD), which is able to implicitly assess image quality, conditional consistency, and intra-conditioning diversity.
FJD computes the Fréchet distance on an embedding of the joint image-conditioning distribution, and introduces only small computational overhead over FID compared to alternative methods.
We evaluate the properties of FJD on a variant of the synthetic dSprite dataset (Matthey et al., 2017) and verify that it successfully captures the desired properties.
We provide an analysis on the behavior of both FID and FJD under different types of conditioning such as class labels, bounding boxes, and object masks, and evaluate a variety of existing cGAN models for real-world datasets with the newly introduced metric.
Our experiments show that (1) FJD captures the three highlighted properties of conditional generation; (2) it can be applied to any kind of conditioning (e.g., class, bounding box, mask, image, text, etc.); and (3) when applied to existing cGAN-based models, FJD demonstrates its potential to be used as a promising unified metric for hyper-parameter selection and cGAN benchmarking.
To our knowledge, there are no existing metrics for conditional generation that capture all of these key properties.
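For reference, the sketch below computes a Fréchet distance between Gaussian fits of joint (image, conditioning) embeddings, which is the core idea behind FJD; the embedding networks are replaced by random placeholders, and the concatenation without any rescaling of the conditioning embedding is a simplifying assumption.

# Illustrative FJD computation: Frechet distance between joint embeddings of
# (image, conditioning) pairs for real vs. generated data.
import numpy as np
from scipy import linalg

def frechet_distance(x, y):
    """x, y: (N, D) arrays of embeddings; fit Gaussians and compare them."""
    mu1, mu2 = x.mean(axis=0), y.mean(axis=0)
    s1, s2 = np.cov(x, rowvar=False), np.cov(y, rowvar=False)
    covmean, _ = linalg.sqrtm(s1 @ s2, disp=False)
    covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2.0 * covmean))

def fjd(real_img_emb, real_cond_emb, fake_img_emb, fake_cond_emb):
    # Concatenate image and conditioning embeddings to represent the joint distribution.
    real_joint = np.concatenate([real_img_emb, real_cond_emb], axis=1)
    fake_joint = np.concatenate([fake_img_emb, fake_cond_emb], axis=1)
    return frechet_distance(real_joint, fake_joint)

# Dummy usage with random embeddings (stand-ins for image and conditioning encoders).
r_i, r_c = np.random.randn(500, 64), np.random.randn(500, 8)
f_i, f_c = np.random.randn(500, 64), np.random.randn(500, 8)
print(fjd(r_i, r_c, f_i, f_c))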
In this paper we introduce Fréchet Joint Distance (FJD), which is able to assess image quality, conditional consistency, and intra-conditioning diversity within a single metric.
We compare FJD to FID on the synthetic dSprite-textures dataset, validating its ability to capture the three properties of interest across different types of conditioning, and highlighting its potential to be adopted as a unified cGAN benchmarking metric.
We also demonstrate how FJD can be used to address the potentially ambiguous trade-off between image quality and sample diversity when performing model selection.
Looking forward, FJD could serve as valuable metric to ground future research, as it has the potential to help elucidate the most promising contributions within the scope of conditional generation.
In this section, we illustrate the claim made in Section 1 that FID cannot capture intra-conditioning diversity when the joint distribution of two variables changes but the marginal distribution of one of them is not altered.
Consider two multivariate Gaussian distributions, (X_1, Y_1) ~ N(0, Σ_1) and (X_2, Y_2) ~ N(0, Σ_2), where the two covariance matrices yield identical marginals for Y but different joint distributions. If we let X_i take the role of the embedding of the conditioning variables (e.g., position) and Y_i take the role of the embedding of the generated variables (i.e., images), then computing FID in this example would correspond to computing the FD between f_{Y_1} and f_{Y_2}, which is zero. On the other hand, computing FJD would correspond to the FD between f_{X_1,Y_1} and f_{X_2,Y_2}, which equals 0.678. But note that Dist1 and Dist2 have different degrees of intra-conditioning diversity, as illustrated by Figure 5 (right), where two histograms of f_{Y_i | X_i ∈ (0.9, 1.1)} are displayed, showing marked differences from each other (similar plots can be constructed for other values of X_i).
Therefore, this example illustrates a situation in which FID is unable to capture changes in intra-conditioning diversity, while FJD is able to do so.
|
We propose a new metric for evaluating conditional GANs that captures image quality, conditional consistency, and intra-conditioning diversity in a single measure.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:587
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Combining deep model-free reinforcement learning with on-line planning is a promising approach to building on the successes of deep RL.
On-line planning with look-ahead trees has proven successful in environments where transition models are known a priori.
However, in complex environments where transition models need to be learned from data, the deficiencies of learned models have limited their utility for planning.
To address these challenges, we propose TreeQN, a differentiable, recursive, tree-structured model that serves as a drop-in replacement for any value function network in deep RL with discrete actions.
TreeQN dynamically constructs a tree by recursively applying a transition model in a learned abstract state space and then aggregating predicted rewards and state-values using a tree backup to estimate Q-values.
We also propose ATreeC, an actor-critic variant that augments TreeQN with a softmax layer to form a stochastic policy network.
Both approaches are trained end-to-end, such that the learned model is optimised for its actual use in the tree.
We show that TreeQN and ATreeC outperform n-step DQN and A2C on a box-pushing task, as well as n-step DQN and value prediction networks (Oh et al., 2017) on multiple Atari games.
Furthermore, we present ablation studies that demonstrate the effect of different auxiliary losses on learning transition models.
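As a rough illustration of the tree construction and backup described above, the sketch below estimates Q-values by recursively expanding a learned abstract state to a fixed depth. The transition, reward, and value functions are placeholder callables, and the hard max backup is a simplification of TreeQN's actual differentiable tree backup.

```python
# Hedged sketch of a depth-limited tree backup in a learned abstract state space.
# `transition`, `reward`, and `value` stand in for learned modules; the real model
# uses a soft, differentiable backup rather than the hard max used here.
import numpy as np

def tree_q_values(z, actions, transition, reward, value, depth, gamma=0.99):
    """Return an array of Q(z, a) estimates by expanding a tree of `depth` levels."""
    q = np.empty(len(actions))
    for i, a in enumerate(actions):
        z_next = transition(z, a)          # predicted next abstract state
        r = reward(z, a)                   # predicted immediate reward
        if depth == 1:
            backup = value(z_next)         # leaf node: bootstrap with learned value
        else:
            child_q = tree_q_values(z_next, actions, transition, reward,
                                    value, depth - 1, gamma)
            backup = child_q.max()         # simplified hard backup over children
        q[i] = r + gamma * backup
    return q
```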
A promising approach to improving model-free deep reinforcement learning (RL) is to combine it with on-line planning.
The model-free value function can be viewed as a rough global estimate which is then locally refined on the fly for the current state by the on-line planner.
Crucially, this does not require new samples from the environment but only additional computation, which is often available. One strategy for on-line planning is to use look-ahead tree search BID12 BID2.
Traditionally, such methods have been limited to domains where perfect environment simulators are available, such as board or card games BID4 BID24 .
However, in general, models for complex environments with high dimensional observation spaces and complex dynamics must be learned from agent experience.
Unfortunately, to date, it has proven difficult to learn models for such domains with sufficient fidelity to realise the benefits of look-ahead planning BID17 BID29.
A simple approach to learning environment models is to maximise a similarity metric between model predictions and ground truth in the observation space. This approach has been applied with some success in cases where model fidelity is less important, e.g., for improving exploration BID3 BID17. However, this objective causes significant model capacity to be devoted to predicting irrelevant aspects of the environment dynamics, such as noisy backgrounds, at the expense of value-critical features that may occupy only a small part of the observation space (Pathak et al.).
Since the transition model is only weakly grounded in the actual environment, our approach can alternatively be viewed as a model-free method in which the fully connected layers of DQN are replaced by a recursive network that applies transition functions with shared parameters at each tree node expansion. The resulting architecture, which we call TreeQN, encodes an inductive bias based on the prior knowledge that the environment is a stationary Markov process, which facilitates faster learning of better policies. We also present an actor-critic variant, ATreeC, in which the tree is augmented with a softmax layer and used as a policy network.
We show that TreeQN and ATreeC outperform their DQN-based counterparts in a box-pushing domain and a suite of Atari games, with deeper trees often outperforming shallower trees, and TreeQN outperforming VPN BID18 on most Atari games. We also present ablation studies investigating various auxiliary losses for grounding the transition model more strongly in the environment, which could improve performance as well as lead to interpretable internal plans. While we show that grounding the reward function is valuable, we conclude that how to learn strongly grounded transition models and generate reliably interpretable plans without compromising performance remains an open research question.
In this section, we present our experimental results for TreeQN and ATreeC. Grounding: FIG4 shows the result of a hyperparameter search on η_r and η_s, the coefficients of the auxiliary losses on the predicted rewards and latent states.
An intermediate value of η_r helps performance, but there is no benefit to using the latent space loss.
Subsequent experiments use η_r = 1 and η_s = 0. The predicted rewards that the reward-grounding objective encourages the model to learn appear both in its own Q-value prediction and in the target for n-step Q-learning.
Consequently, we expect this auxiliary loss to be well aligned with the true objective.
By contrast, the state-grounding loss (and other potential auxiliary losses) might help representation learning but would not explicitly learn any part of the desired target.
It is possible that this mismatch between the auxiliary and primary objective leads to degraded performance when using this form of state grounding.
One potential route to overcoming this obstacle to joint training would be pre-training a model, as done by BID34 .
Inside TreeQN this model could then be fine-tuned to perform well inside the planner.
We leave this possibility to future work.
FIG3 shows the results of TreeQN with tree depths 1, 2, and 3, compared to a DQN baseline.
In this domain, there is a clear advantage for the TreeQN architecture over DQN.
TreeQN learns policies that are substantially better at avoiding obstacles and lining boxes up with goals so they can be easily pushed in later.
TreeQN also substantially speeds up learning.
We believe that the greater structure brought by our architecture regularises the model, encouraging appropriate state representations to be learned quickly.
Even a depth-1 tree improves performance significantly, as disentangling the estimation of rewards and next-state values makes them easier to learn.
This is further facilitated by the sharing of value-function parameters across branches.
We presented TreeQN and ATreeC, new architectures for deep reinforcement learning in discreteaction domains that integrate differentiable on-line tree planning into the action-value function or policy.
Experiments on a box-pushing domain and a set of Atari games show the benefit of these architectures over their counterparts, as well as over VPN.
In future work, we intend to investigate enabling more efficient optimisation of deeper trees, encouraging the transition functions to produce interpretable plans, and integrating smart exploration.
|
We present TreeQN and ATreeC, new architectures for deep reinforcement learning in discrete-action domains that integrate differentiable on-line tree planning into the action-value function or policy.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:588
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Multi-label classification (MLC) is the task of assigning a set of target labels for a given sample.
Modeling the combinatorial label interactions in MLC has been a long-standing challenge.
Recurrent neural network (RNN) based encoder-decoder models have shown state-of-the-art performance for solving MLC.
However, the sequential nature of modeling label dependencies through an RNN limits its ability in parallel computation, predicting dense labels, and providing interpretable results.
In this paper, we propose Message Passing Encoder-Decoder (MPED) Networks, aiming to provide fast, accurate, and interpretable MLC.
MPED networks model the joint prediction of labels by replacing all RNNs in the encoder-decoder architecture with message passing mechanisms and dispense with autoregressive inference entirely.
The proposed models are simple, fast, accurate, interpretable, and structure-agnostic (can be used on known or unknown structured data).
Experiments on seven real-world MLC datasets show the proposed models outperform autoregressive RNN models across five different metrics with a significant speedup during training and testing time.
Multi-label classification (MLC) is receiving increasing attention in tasks such as text categorization and image classification.
Accurate and scalable MLC methods are in urgent need for applications like assigning topics to web articles, classifying objects in an image, or identifying binding proteins on DNA.
The most common and straightforward MLC method is the binary relevance (BR) approach that considers multiple target labels independently BID0 .
However, in many MLC tasks there is a clear dependency structure among labels, which BR methods ignore.
Accordingly, probabilistic classifier chain (PCC) models were proposed to model label dependencies and formulate MLC in an autoregressive sequential prediction manner BID1 .
One notable work in the PCC category was from which implemented a classifier chain using a recurrent neural network (RNN) based sequence to sequence (Seq2Seq) architecture, Seq2Seq MLC.
This model uses an encoder RNN encoding elements of an input sequence, a decoder RNN predicting output labels one after another, and beam search that computes the probability of the next T predictions of labels and then chooses the proposal with the max combined probability. However, the main drawback of classifier chain models is that their inherently sequential nature precludes parallelization during training and inference.
This can be detrimental when there are a large number of positive labels as the classifier chain has to sequentially predict each label, and often requires beam search to obtain the optimal set.
Aside from time-cost disadvantages, PCC methods have several other drawbacks.
First, PCC methods require a defined ordering of labels for the sequential prediction, but MLC output labels are an unordered set, and the chosen order can lead to prediction instability .
Secondly, even if the optimal ordering is known, PCC methods struggle to accurately capture long-range dependencies among labels in cases where the number of positive labels is large (i.e., dense labels).
For example, the Delicious dataset has a median of 19 positive labels per sample, so it can be difficult to correctly predict the labels at the end of the prediction chain.
Lastly, many real-world applications prefer interpretable predictors.
For instance, in the task of predicting which proteins (labels) will bind to a DNA sequence based binding site, users care about how a prediction is made and how the interactions among labels influence the predictions.
Message Passing Neural Networks (MPNNs) BID3 introduce a class of methods that model joint dependencies of variables using neural message passing rather than an explicit representation such as a probabilistic classifier chain. Message passing allows for efficient inference by modelling conditional independence, where the same local update procedure is applied iteratively to propagate information across variables. MPNNs provide a flexible method for modeling multiple variables jointly which have no explicit ordering (and can be modified to incorporate an order, as explained in section 3). To handle the drawbacks of BR and PCC methods, we propose a modified version of MPNNs for MLC by modeling interactions between labels using neural message passing.
We introduce Message Passing Encoder-Decoder (MPED) Networks aiming to provide fast, accurate, and interpretable multi-label predictions. The key idea is to replace RNNs and to rely on neural message passing entirely to draw global dependencies between input components, between labels and input components, and between labels. The proposed MPED networks allow for significantly more parallelization in training and testing. The main contributions of this paper are:
• Novel approach for MLC. To the authors' best knowledge, MPED is the first work using neural message passing for MLC.
• Accurate MLC. Our model achieves similar, or better, performance compared to the previous state of the art across five different MLC metrics. We validate our model on seven MLC datasets which cover a wide spectrum of input data structure: sequences (English text, DNA), tabular (bag-of-words), and graph (drug molecules), as well as output label structure: unknown and graph.
• Fast. Empirically our model achieves an average 1.7x speedup over the autoregressive Seq2Seq MLC at training time and an average 5x speedup over its testing time.
• Interpretable. Although deep-learning based systems have widely been viewed as "black boxes" due to their complexity, our attention-based MPED models provide a straightforward way to explain label-to-label, input-to-label, and feature-to-feature dependencies.
In this work we present Message Passing Encoder-Decoder (MPED) Networks which achieve a significant speedup at close to the same performance as autoregressive models for MLC.
We open a new avenue of using neural message passing to model label dependencies in MLC tasks.
In addition, we show that our method is able to handle various input data types (sequence, tabular, graph), as well various output label structures (known vs unknown).
One of our future extensions is to adapt the current model to predict more dynamic outputs.
|
We propose Message Passing Encoder-Decode networks for a fast and accurate way of modelling label dependencies for multi-label classification.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:589
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
A central goal of unsupervised learning is to acquire representations from unlabeled data or experience that can be used for more effective learning of downstream tasks from modest amounts of labeled data.
Many prior unsupervised learning works aim to do so by developing proxy objectives based on reconstruction, disentanglement, prediction, and other metrics.
Instead, we develop an unsupervised meta-learning method that explicitly optimizes for the ability to learn a variety of tasks from small amounts of data.
To do so, we construct tasks from unlabeled data in an automatic way and run meta-learning over the constructed tasks.
Surprisingly, we find that, when integrated with meta-learning, relatively simple task construction mechanisms, such as clustering embeddings, lead to good performance on a variety of downstream, human-specified tasks.
Our experiments across four image datasets indicate that our unsupervised meta-learning approach acquires a learning algorithm without any labeled data that is applicable to a wide range of downstream classification tasks, improving upon the embedding learned by four prior unsupervised learning methods.
Unsupervised learning is a fundamental, unsolved problem (Hastie et al., 2009 ) and has seen promising results in domains such as image recognition (Le et al., 2013) and natural language understanding BID19 .
A central use case of unsupervised learning methods is enabling better or more efficient learning of downstream tasks by training on top of unsupervised representations BID23 BID7 or fine-tuning a learned model BID13 .
However, since the downstream objective requires access to supervision, the objectives used for unsupervised learning are only a rough proxy for downstream performance.
If a central goal of unsupervised learning is to learn useful representations, can we derive an unsupervised learning objective that explicitly takes into account how the representation will be used? The
use of unsupervised representations for downstream tasks is closely related to the objective of meta-learning techniques: finding a learning procedure that is more efficient and effective than learning from scratch. However
, unlike unsupervised learning methods, meta-learning methods require large, labeled datasets and hand-specified task distributions. These
dependencies are major obstacles to widespread use of these methods for few-shot classification. To begin addressing these problems, we propose an unsupervised meta-learning method: one which aims to learn a learning procedure, without supervision, that is useful for solving a wide range of new, human-specified tasks. With
only raw, unlabeled observations, our model's goal is to learn a useful prior such that, after meta-training, when presented with a modestly-sized dataset for a human-specified task, the model can transfer its prior experience to efficiently learn to perform the new task. If we
can build such an algorithm, we can enable few-shot learning of new tasks without needing any labeled data nor any pre-defined tasks. To perform unsupervised meta-learning, we need to automatically construct tasks from unlabeled data. We study
several options for how this can be done. We find
that a good task distribution should be diverse, but also not too difficult: naïve random approaches for task generation produce tasks that contain insufficient regularity to enable useful meta-learning. To that
end, our method proposes tasks by first leveraging prior unsupervised learning algorithms to learn an embedding of the input data, and then performing an overcomplete partitioning of the dataset to construct numerous categorizations of the data. We show
how we can derive classification tasks from these categorizations for use with meta-learning algorithms. Surprisingly
, even with simple mechanisms for partitioning the embedding space, such as k-means clustering, we find that meta-learning acquires priors that, when used to learn new, human-designed tasks, learn those tasks more effectively than methods that directly learn on the embedding. That is, the
learning algorithm acquired through unsupervised meta-learning achieves better downstream performance than the original representation used to derive meta-training tasks, without introducing any additional assumptions or supervision. See Figure 1
for an illustration of the complete approach. The core idea in this paper is that we can leverage unsupervised embeddings to propose tasks for a meta-learning algorithm, leading to an unsupervised meta-learning algorithm that is particularly effective as pre-training for human-specified downstream tasks. In the following
sections, we formalize our problem assumptions and goal, which match those of unsupervised learning, and discuss several options for automatically deriving tasks from embeddings. We instantiate our
method with two meta-learning algorithms and compare to prior state-of-the-art unsupervised learning methods. Across four image
datasets (MNIST, Omniglot, miniImageNet, and CelebA), we find that our method consistently leads to effective downstream learning of a variety of human-specified tasks, including character recognition tasks, object classification tasks, and facial attribute discrimination tasks, without requiring any labels or hand-designed tasks during meta-learning and where key hyperparameters of our method are held constant across all domains. We show that, even
though our unsupervised meta-learning algorithm trains for one-shot generalization, one instantiation of our approach performs well not only on few-shot learning, but also when learning downstream tasks with up to 50 training examples per class. In fact, some of our
results begin to approach the performance of fully-supervised meta-learning techniques trained with fully-specified task distributions.
Figure 1: Illustration of the proposed unsupervised meta-learning procedure. Embeddings of raw observations
are clustered with k-means to construct partitions, which give rise to classification tasks. Each task involves distinguishing
between examples from N = 2 clusters, with Km-tr = 1 example from each cluster being a training input. The meta-learner's aim is to produce
a learning procedure that successfully solves these tasks.
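As a rough illustration of the task-construction step in Figure 1, the sketch below builds N-way, few-shot tasks by treating k-means cluster assignments of precomputed embeddings as surrogate class labels. The number of clusters, the random seed, and the assumption that every sampled cluster has enough members are illustrative choices, not the paper's exact settings.

```python
# Hedged sketch of constructing few-shot classification tasks from unlabeled
# embeddings via k-means. Hyperparameters below are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def make_tasks(embeddings, num_tasks, n_way=2, k_train=1, k_test=1, n_clusters=50):
    """Yield (train_idx, test_idx) pairs whose 'classes' are k-means clusters."""
    cluster_ids = KMeans(n_clusters=n_clusters).fit_predict(embeddings)
    rng = np.random.default_rng(0)
    for _ in range(num_tasks):
        chosen = rng.choice(n_clusters, size=n_way, replace=False)
        train_idx, test_idx = [], []
        for c in chosen:
            # assumes each sampled cluster has at least k_train + k_test members
            members = np.flatnonzero(cluster_ids == c)
            picks = rng.choice(members, size=k_train + k_test, replace=False)
            train_idx.append(picks[:k_train])
            test_idx.append(picks[k_train:])
        yield np.concatenate(train_idx), np.concatenate(test_idx)
```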
We demonstrate that meta-learning on tasks produced using simple mechanisms based on embeddings improves upon the utility of these representations in learning downstream, human-specified tasks.
We empirically show that this holds across benchmark datasets and tasks in the few-shot classification literature BID26 BID22, task difficulties, and embedding learning methods while fixing key hyperparameters across all experiments. In a sense, CACTUs can be seen as a facilitating interface between an embedding learning method and a meta-learning algorithm.
As shown in the results, the meta-learner's performance significantly depends on the nature and quality of the task-generating embeddings.
We can expect our method to yield better performance as the methods that produce these embedding functions improve, becoming better suited for generating diverse yet distinctive clusterings of the data.
However, the gap between unsupervised and supervised meta-learning will likely persist because, with the latter, the meta-training task distribution is human-designed to mimic the expected evaluation task distribution as much as possible.
Indeed, to some extent, supervised meta-learning algorithms offload the effort of designing and tuning algorithms onto the effort of designing and tuning task distributions.
With its evaluation-agnostic task generation, CACTUs-based meta-learning trades off performance in specific use-cases for broad applicability and the ability to train on unlabeled data.
In principle, CACTUs-based meta-learning may outperform supervised meta-learning when the latter is trained on a misaligned task distribution.
We leave this investigation to future work. While we have demonstrated that k-means is a broadly useful mechanism for constructing tasks from embeddings, it is unlikely that combinations of k-means clusters in learned embedding spaces are universal approximations of arbitrary class definitions.
An important direction for future work is to find examples of datasets and human-designed tasks for which CACTUs-based meta-learning results in ineffective downstream learning.
This will result in better understanding of the practical scope of applicability for our method, and spur further development in automatic task construction mechanisms for unsupervised meta-learning. A potential concern of our experimental evaluation is that MNIST, Omniglot, and miniImageNet exhibit particular structure in the underlying class distribution (i.e., perfectly balanced classes), since they were designed to be supervised learning benchmarks.
In more practical applications of machine learning, such structure would likely not exist.
Our CelebA results indicate that CACTUs is effective even in the case of a dataset without neatly balanced classes or attributes.
An interesting direction for future work is to better characterize the performance of CACTUs and other unsupervised pretraining methods with highly-unstructured, unlabeled datasets. Since MAML and ProtoNets produce nothing more than a learned representation, our method can be viewed as deriving, from a previous unsupervised representation, a new representation particularly suited for learning downstream tasks.
Beyond visual classification tasks, the notion of using unsupervised pre-training is generally applicable to a wide range of domains, including regression, speech (Oord et al., 2018) , language (Howard & Ruder, 2018) , and reinforcement learning BID28 .
Hence, our unsupervised meta-learning approach has the potential to improve unsupervised representations for a variety of such domains, an exciting avenue for future work.
|
An unsupervised learning method that uses meta-learning to enable efficient learning of downstream image classification tasks, outperforming state-of-the-art methods.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:59
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Recent few-shot learning algorithms have enabled models to quickly adapt to new tasks based on only a few training samples.
Previous few-shot learning works have mainly focused on classification and reinforcement learning.
In this paper, we propose a few-shot meta-learning system that focuses exclusively on regression tasks.
Our model is based on the idea that the degree of freedom of the unknown function can be significantly reduced if it is represented as a linear combination of a set of sparsifying basis functions.
This enables a few labeled samples to approximate the function.
We design a Basis Function Learner network to encode basis functions for a task distribution, and a Weights Generator network to generate the weight vector for a novel task.
We show that our model outperforms the current state of the art meta-learning methods in various regression tasks.
Regression deals with the problem of learning a model relating a set of inputs to a set of outputs.
The learned model can be thought of as a function y = F(x) that gives a prediction y ∈ R^{d_y} given an input x ∈ R^{d_x}, where d_y and d_x are the dimensions of the output and input respectively.
Typically, a regression model is trained on a large number of data points to be able to provide accurate predictions for new inputs.
Recently, there has been a surge in popularity of few-shot learning methods (Vinyals et al., 2016; Koch et al., 2015; Gidaris & Komodakis, 2018).
Few-shot learning methods require only a few examples from each task to be able to quickly adapt and perform well on a new task.
These few-shot learning methods are, in essence, learning to learn, i.e., the model learns to quickly adapt itself to new tasks rather than just learning to give the correct prediction for a particular input sample.
In this work, we propose a few shot learning model that targets few-shot regression tasks.
Our model takes inspiration from the idea that the degree of freedom of F(x) can be significantly reduced when it is modeled as a linear combination of sparsifying basis functions.
Thus, with a few samples, we can estimate F (x).
The two primary components of our model are
(i) the Basis Function Learner network which encodes the basis functions for the distribution of tasks, and
(ii) the Weights Generator network which produces the appropriate weights given a few labelled samples.
We evaluate our model on the sinusoidal regression tasks and compare the performance to several meta-learning algorithms.
We also evaluate our model on other regression tasks, namely the 1D heat equation tasks modeled by partial differential equations and the 2D Gaussian distribution tasks.
Furthermore, we evaluate our model on image completion as a 2D regression problem on the MNIST and CelebA data-sets, using only a small subset of known pixel values.
To summarize, our contributions for this paper are:
• We propose to address few shot regression by linear combination of a set of sparsifying basis functions.
• We propose to learn these (continuous) sparsifying basis functions from data.
Traditionally, basis functions are hand-crafted (e.g. Fourier basis).
• We perform experiments to evaluate our approach using sinusoidal, heat equation, 2D Gaussian tasks and MNIST/CelebA image completion tasks.
An overview of our model during meta-training is as follows.
Our system learns the basis functions Φ that can result in sparse representation for any task drawn from a certain task distribution.
The basis functions are encoded in the Basis Function Learner network.
The system produces predictions for a regression task by generating a weight vector, w for a novel task, using the Weights Generator network.
The prediction is obtained by taking a dot-product between the weight vector and learned basis functions.
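As a rough illustration of this prediction scheme, the sketch below pairs a small basis-learner network with a stand-in weights generator. Here the weights are obtained by a regularized least-squares fit on the support set purely for illustration; the paper's Weights Generator is a learned network, and all layer sizes are assumptions.

```python
# Hedged sketch of basis-function prediction: a small network Phi produces
# basis values and a weight vector w is fit on the few support samples, so
# that y_query = Phi(x_query) @ w. The least-squares fit stands in for the
# paper's learned Weights Generator; sizes below are illustrative.
import torch
import torch.nn as nn

class BasisFunctionLearner(nn.Module):
    def __init__(self, in_dim=1, num_basis=16, hidden=40):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_basis))

    def forward(self, x):                 # x: (n, in_dim) -> (n, num_basis)
        return self.net(x)

def predict(phi, x_support, y_support, x_query, ridge=1e-3):
    """Fit w on the support set, then predict Phi(x_query) @ w."""
    B = phi(x_support)                                   # (k_shot, num_basis)
    A = B.t() @ B + ridge * torch.eye(B.size(1))         # regularized normal equations
    w = torch.linalg.solve(A, B.t() @ y_support)         # (num_basis,) or (num_basis, d_y)
    return phi(x_query) @ w
```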
|
We propose a method of doing few-shot regression by learning a set of basis functions to represent the function distribution.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:590
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Large-scale pre-trained language models, such as BERT, have recently achieved great success in a wide range of language understanding tasks.
However, it remains an open question how to utilize BERT for text generation tasks.
In this paper, we present a novel approach to addressing this challenge in a generic sequence-to-sequence (Seq2Seq) setting.
We first propose a new task, Conditional Masked Language Modeling (C-MLM), to enable fine-tuning of BERT on target text-generation dataset.
The fine-tuned BERT (i.e., teacher) is then exploited as extra supervision to improve conventional Seq2Seq models (i.e., student) for text generation.
By leveraging BERT's idiosyncratic bidirectional nature, distilling the knowledge learned from BERT can encourage auto-regressive Seq2Seq models to plan ahead, imposing global sequence-level supervision for coherent text generation.
Experiments show that the proposed approach significantly outperforms strong baselines of Transformer on multiple text generation tasks, including machine translation (MT) and text summarization.
Our proposed model also achieves new state-of-the-art results on the IWSLT German-English and English-Vietnamese MT datasets.
Large-scale pre-trained language models, such as ELMo (Peters et al., 2018), GPT (Radford et al., 2018) and BERT (Devlin et al., 2019), have become the de facto first encoding step for many natural language processing (NLP) tasks.
For example, BERT, pre-trained with deep bidirectional Transformer (Vaswani et al., 2017) via masked language modeling and next sentence prediction, has revolutionized the state of the art in many language understanding tasks, such as natural language inference (Bowman et al., 2015) and question answering (Rajpurkar et al., 2016) .
However, beyond the common practice of fine-tuning BERT for language understanding, applying BERT to language generation still remains an open question.
Text generation aims to generate natural language sentences conditioned on certain input, with applications ranging from machine translation (Cho et al., 2014; Bahdanau et al., 2015) , text summarization (Nallapati et al., 2016; Gehring et al., 2017; Chen & Bansal, 2018) ), to image captioning Xu et al., 2015; Gan et al., 2017) .
In this paper, we study how to use BERT for better text generation, which to the best of our knowledge is still a relatively unexplored territory.
Intuitively, as BERT is learned with a generative objective via Masked Language Modeling (MLM) during the pre-training stage, a natural assumption is that this training objective should have learned essential, bidirectional, contextual knowledge that can help enhance text generation.
Unfortunately, this MLM objective is not auto-regressive, which encumbers its direct application to auto-regressive text generation in practice.
In this paper, we tackle this challenge by proposing a novel and generalizable approach to distilling knowledge learned in BERT for text generation tasks.
We first propose a new Conditional Masked Language Modeling (C-MLM) task, inspired by MLM but requiring additional conditional input, which enables fine-tuning pre-trained BERT on a target dataset.
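To make the C-MLM idea concrete, a minimal sketch of how a training example might be constructed is shown below: the source sequence is provided unmasked as the condition, and masking is applied to the target side only. The special-token layout and 15% masking rate follow common BERT practice and are assumptions here, not necessarily the paper's exact recipe.

```python
# Hedged sketch of building a C-MLM training example: mask target tokens only,
# so the model predicts target words given both the condition (source) and the
# bidirectional target context. Token conventions are illustrative assumptions.
import random

def make_cmlm_example(src_tokens, tgt_tokens, mask_token="[MASK]", mask_prob=0.15):
    masked_tgt, tgt_labels = [], []
    for tok in tgt_tokens:
        if random.random() < mask_prob:
            masked_tgt.append(mask_token)
            tgt_labels.append(tok)       # predict the original token at this position
        else:
            masked_tgt.append(tok)
            tgt_labels.append(None)      # no loss on unmasked positions
    inputs = ["[CLS]"] + src_tokens + ["[SEP]"] + masked_tgt + ["[SEP]"]
    labels = [None] * (len(src_tokens) + 2) + tgt_labels + [None]
    return inputs, labels
```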
In order to extract knowledge from the fine-tuned BERT and apply it to a text generation model, we leverage the fine-tuned BERT as a teacher model that generates sequences of word probability logits for the training samples, and treat the text generation model as a student network, which can effectively learn from the teacher's outputs for imitation.
The proposed approach improves text generation by providing a good estimation on the word probability distribution for each token in a sentence, consuming both the left and the right context, the exploitation of which encourages conventional text generation models to plan ahead.
Text generation models are usually trained via Maximum Likelihood Estimation (MLE), or teacher forcing : at each time step, it maximizes the likelihood of the next word conditioned on its previous ground-truth words.
This corresponds to optimizing one-step-ahead prediction.
As there is no explicit signal towards global planning in the training objective, the generation model may incline to focusing on local structure rather than global coherence.
With our proposed approach, BERT's ability to look into the future can act as an effective regularization method, capturing subtle long-term dependencies that ensure global coherence and in consequence boost model performance on text generation.
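A minimal sketch of the kind of combined objective this describes is given below: the usual teacher-forcing cross-entropy plus a distillation term toward the fine-tuned BERT teacher's soft token distributions. The mixing weight, temperature, and reduction are illustrative assumptions rather than the paper's reported settings.

```python
# Hedged sketch of MLE training regularized by distillation from soft teacher
# labels. Hyperparameters below are illustrative assumptions.
import torch
import torch.nn.functional as F

def distilled_generation_loss(student_logits, teacher_logits, targets,
                              alpha=0.5, temperature=1.0, pad_id=0):
    # student_logits, teacher_logits: (batch, seq_len, vocab); targets: (batch, seq_len)
    vocab = student_logits.size(-1)
    mle = F.cross_entropy(student_logits.view(-1, vocab), targets.view(-1),
                          ignore_index=pad_id)
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kd = F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    return alpha * mle + (1.0 - alpha) * kd
```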
An alternative way to leverage BERT for text generation is to initialize the parameters of the encoder or decoder of Seq2Seq with pre-trained BERT, and then fine-tuning on the target dataset.
However, this approach requires the encoder/decoder to have the same size as BERT, inevitably making the final text generation model too large.
Our approach, on the other hand, is modular and compatible to any text-generation model, and has no restriction on the model size (e.g., large or small) or model architecture (e.g., LSTM or Transformer).
The main contributions of this work are three-fold.
(i) We present a novel approach to utilizing BERT for text generation.
The proposed method induces sequence-level knowledge into the conventional one-step-ahead and teacher-forcing training paradigm, by introducing an effective regularization term to MLE training loss.
(ii) We conduct comprehensive evaluation on multiple text generation tasks, including machine translation, text summarization and image captioning.
Experiments show that our proposed approach significantly outperforms strong Transformer baselines and is generalizable to different tasks.
(iii) The proposed model achieves new state-of-the-art on both IWSLT14 German-English and IWSLT15 English-Vietnamese datasets.
In this work, we propose a novel and generic approach to utilizing pre-trained language models to improve text generation without explicit parameter sharing, feature extraction, or augmenting with auxiliary tasks.
Our proposed Conditional MLM mechanism leverages unsupervised language models pre-trained on large corpus, and then adapts to supervised sequence-to-sequence tasks.
Our distillation approach indirectly influences the text generation model by providing soft-label distributions only, hence is model-agnostic.
Experiments show that our model improves over strong Transformer baselines on multiple text generation tasks such as machine translation and abstractive summarization, and achieves new state-of-the-art on some of the translation tasks.
For future work, we will explore the extension of Conditional MLM to multimodal input such as image captioning.
|
We propose a model-agnostic way to leverage BERT for text generation and achieve improvements over Transformer on 2 tasks over 4 datasets.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:591
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Humans have the remarkable ability to correctly classify images despite possible degradation.
Many studies have suggested that this hallmark of human vision results from the interaction between feedforward signals from bottom-up pathways of the visual cortex and feedback signals provided by top-down pathways.
Motivated by such interaction, we propose a new neuro-inspired model, namely Convolutional Neural Networks with Feedback (CNN-F).
CNN-F extends CNN with a feedback generative network, combining bottom-up and top-down inference to perform approximate loopy belief propagation.
We show that CNN-F's iterative inference allows for disentanglement of latent variables across layers.
We validate the advantages of CNN-F over the baseline CNN.
Our experimental results suggest that the CNN-F is more robust to image degradation such as pixel noise, occlusion, and blur.
Furthermore, we show that the CNN-F is capable of restoring original images from the degraded ones with high reconstruction accuracy while introducing negligible artifacts.
Convolutional neural networks (CNNs) have been widely adopted for image classification and achieved impressive prediction accuracy.
While state-of-the-art CNNs can achieve near- or super-human classification performance [1], these networks are susceptible to accuracy drops in the presence of image degradation such as blur and noise, or adversarial attacks, to which human vision is much more robust [2].
This weakness suggests that CNNs are not able to fully capture the complexity of human vision.
Unlike the CNN, the human's visual cortex contains not only feedforward but also feedback connections which propagate the information from higher to lower order visual cortical areas as suggested by the predictive coding model [3] .
Additionally, recent studies suggest that recurrent circuits are crucial for core object recognition [4] .
A recently proposed model extends CNN with a feedback generative network [5] , moving a step forward towards more brain-like CNNs.
The inference of the model is carried out by the feedforward only CNN.
We term convolutional neural networks with feedback whose inference uses no iterations as CNN-F (0 iterations).
The generative feedback models the joint distribution of the data and latent variables.
This methodology is similar to how the human brain works: building an internal model of the world [6] [7].
Despite the success of CNN-F (0 iterations) in semi-supervised learning [5] and out-of-distribution detection [8], the feedforward-only CNN can yield noisy inference in practice, and the power of the rendering top-down path is not fully utilized.
A neuro-inspired model that carries out more accurate inference is therefore desired for robust vision.
Our work is motivated by the interaction of feedforward and feedback signals in the brain, and our contributions are:
We propose the Convolutional Neural Network with Feedback (CNN-F) with more accurate inference.
We perform approximated loopy belief propagation to infer latent variables.
We introduce recurrent structure into our network by feeding the generated image from the feedback process back into the feedforward process.
We term the model with k-iteration inference as CNN-F (k iterations).
When there is no risk of confusion, we will use the name CNN-F for short in the rest of the paper.
We demonstrate that the CNN-F is more robust to image degradation including noise, blur, and occlusion than the CNN.
In particular, our experiments show that CNN-F experiences smaller accuracy drop compared to the corresponding CNN on degraded images.
We verify that CNN-F is capable of restoring degraded images.
When trained on clean data, the CNN-F can recover the original image from the degraded images at test time with high reconstruction accuracy.
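A minimal sketch of the k-iteration feedforward/feedback loop described above is given below; the encoder, decoder, and classifier are placeholder modules, and the simple alternation stands in for the approximate loopy belief propagation used by the actual model.

```python
# Hedged sketch of k-iteration inference with a feedback generative path. The
# modules are placeholders, and the alternation below is a simplification of
# the approximate loopy belief propagation used by CNN-F.
import torch
import torch.nn as nn

class CNNWithFeedback(nn.Module):
    def __init__(self, encoder: nn.Module, decoder: nn.Module, classifier: nn.Module):
        super().__init__()
        self.encoder, self.decoder, self.classifier = encoder, decoder, classifier

    def forward(self, x, k=2):
        x_hat = x
        for _ in range(k):
            z = self.encoder(x_hat)       # bottom-up pass on the current reconstruction
            x_hat = self.decoder(z)       # top-down pass regenerates the image
        z = self.encoder(x_hat)           # final bottom-up pass for classification
        return self.classifier(z), x_hat  # predicted logits and restored image
```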
We propose the Convolutional Neural Networks with Feedback (CNN-F) which consists of both a classification pathway and a generation pathway similar to the feedforward and feedback connections in human vision.
Our model uses approximate loopy belief propagation for inferring latent variables, allowing for messages to be propagated along both directions of the model.
We also introduce recurrency by passing the reconstructed image and predicted label back into the network.
We show that CNN-F is more robust than CNN on corrupted images such as noisy, blurry, and occluded images and is able to restore degraded images when trained only on clean images.
|
CNN-F extends CNN with a feedback generative network for robust vision.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:592
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We develop new approximation and statistical learning theories of convolutional neural networks (CNNs) via the ResNet-type structure where the channel size, filter size, and width are fixed.
It is shown that a ResNet-type CNN is a universal approximator and its expression ability is no worse than fully-connected neural networks (FNNs) with a \textit{block-sparse} structure even if the size of each layer in the CNN is fixed.
Our result is general in the sense that we can automatically translate any approximation rate achieved by block-sparse FNNs into that by CNNs.
Thanks to the general theory, it is shown that learning on CNNs satisfies optimality in approximation and estimation of several important function classes.
As applications, we consider two types of function classes to be estimated: the Barron class and H\"older class.
We prove the clipped empirical risk minimization (ERM) estimator can achieve the same rate as FNNs even the channel size, filter size, and width of CNNs are constant with respect to the sample size.
This is minimax optimal (up to logarithmic factors) for the H\"older class.
Our proof is based on sophisticated evaluations of the covering number of CNNs and the non-trivial parameter rescaling technique to control the Lipschitz constant of CNNs to be constructed.
Convolutional Neural Network (CNN) is one of the most popular architectures in deep learning research, with various applications such as computer vision (Krizhevsky et al. (2012) ), natural language processing (Wu et al. (2016) ), and sequence analysis in bioinformatics (Alipanahi et al. (2015) , Zhou & Troyanskaya (2015) ).
Despite practical popularity, theoretical justification for the power of CNNs is still scarce from the viewpoint of statistical learning theory. For fully-connected neural networks (FNNs), there is a lot of existing work, dating back to the 80's, for theoretical explanation regarding their approximation ability (Cybenko (1989), Barron (1993), Lu et al. (2017), Yarotsky (2017), and Petersen & Voigtlaender (2017)) and generalization power (Barron (1994), Arora et al. (2018), and Suzuki (2018)).
See also Pinkus (2005) and Kainen et al. (2013) for surveys of earlier works.
Although less common compared to FNNs, recently, statistical learning theory for CNNs has been studied, both about approximation ability (Zhou (2018) , Yarotsky (2018) , Petersen & Voigtlaender (2018) ) and about generalization power (Zhou & Feng (2018) ).
One of the standard approaches is to relate the approximation ability of CNNs with that of FNNs, either deep or shallow.
For example, Zhou (2018) proved that CNNs are a universal approximator of the Barron class (Barron (1993) , Klusowski & Barron (2016) ), which is a historically important function class in the approximation theory.
Their approach is to approximate the function using a 2-layered FNN (i.e., an FNN with a single hidden layer) with the ReLU activation function (Krizhevsky et al. (2012) ) and transform the FNN into a CNN.
Very recently independent of ours, Petersen & Voigtlaender (2018) showed any function realizable with an FNN can extend to an equivariant function realizable by a CNN that has the same order of parameters.
However, to the best of our knowledge, no CNN that achieves the minimax optimal rate (Tsybakov (2008), Giné & Nickl (2015)) in important function classes, including the Hölder class, can keep the number of units in each layer constant with respect to the sample size.
Architectures that have extremely large depth but moderate channel size and width have become feasible, thanks to recent methods such as identity mappings (He et al. (2016), Huang et al. (2018)), sophisticated initialization schemes (He et al. (2015), Chen et al. (2018)), and normalization techniques (Ioffe & Szegedy (2015), Miyato et al. (2018)).
Therefore, we would argue that there are growing demands for theories which can accommodate such constant-size architectures. In this paper, we analyze the learning ability of ResNet-type ReLU CNNs which have identity mappings and constant-width residual blocks with fixed-size filters.
There are mainly two reasons that motivate us to study this type of CNNs.
First, although ResNet is the de facto architecture in various practical applications, the approximation theory for ResNet has not been explored extensively, especially from the viewpoint of the relationship between FNNs and CNNs.
Second, constant-width CNNs are critical building blocks not only in ResNet but also in various modern CNNs such as Inception (Szegedy et al. (2015) ), DenseNet (Huang et al. (2017) ), and U-Net (Ronneberger et al. (2015) ), to name a few.
Our strategy is to replicate the learning ability of FNNs by constructing tailored ResNet-type CNNs.
To do so, we pay attention to the block-sparse structure of an FNN, which roughly means that it consists of a linear combination of multiple (possibly dense) FNNs (we define it rigorously in the subsequent sections).
Block-sparseness decreases the model complexity coming from the combinatorial sparsity patterns and promotes better bounds.
Therefore, it is often utilized, both implicitly or explicitly, in the approximation and learning theory of FNNs (e.g., Bölcskei et al. (2017) , Yarotsky (2018) ).
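As a rough schematic (the rigorous definition is deferred to the paper's later sections), an M-way block-sparse FNN can be pictured as a linear combination of M dense sub-networks of fixed width; the exact form below is an assumption consistent with this informal description, not the paper's formal definition.

```latex
% Schematic form of an M-way block-sparse FNN, assuming each block g_m is a
% dense ReLU FNN of fixed width W; this is only a reading of the informal
% description above.
\[
  f_{\mathrm{block}}(x) \;=\; \sum_{m=1}^{M} w_m \, g_m(x) + b,
  \qquad g_m : \mathbb{R}^{D} \to \mathbb{R} \ \text{a dense ReLU FNN of width } W .
\]
```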
We first prove that if an FNN is block-sparse with M blocks (an M-way block-sparse FNN), we can realize the FNN with a ResNet-type CNN with O(M) additional parameters, which are often negligible since the original FNN already has Ω(M) parameters.
Using this approximation, we give the upper bound of the estimation error of CNNs in terms of the approximation errors of block sparse FNNs and the model complexity of CNNs.
Our result is general in the sense that it is not restricted to a specific function class, as long as we can approximate it using block-sparse FNNs.To demonstrate the wide applicability of our methods, we derive the approximation and estimation errors for two types of function classes with the same strategy: the Barron class (of parameter s = 2) and Hölder class.
We prove, as corollaries, that our CNNs can achieve an approximation error of order Õ(M^{-β/D}) and an estimation error of order Õ(N^{-2β/(2β+D)}) for the β-Hölder class, where M is the number of parameters (we use M here, the same as the number of blocks, because it will turn out that CNNs have O(M) blocks for these cases), N is the sample size, and D is the input dimension.
These rates are the same as the ones known for FNNs in the existing literature.
An important consequence of our theory is that the ResNet-type CNN can achieve the minimax optimal estimation error (up to logarithmic factors) for the β-Hölder class even if its filter size, channel size and width are constant with respect to the sample size, as opposed to existing works such as Yarotsky (2017) and Petersen & Voigtlaender (2018), where optimal FNNs or CNNs could have a width or a channel size that goes to infinity as N → ∞.
In summary, the contributions of our work are as follows:
• We develop the approximation theory for CNNs via ResNet-type architectures with constant-width residual blocks. We prove any M-way block-sparse FNN is realizable by such a CNN with O(M) additional parameters. That means if FNNs can approximate a function with O(M) parameters, we can approximate the function with CNNs at the same rate (Theorem 1).
• We derive the upper bound of the estimation error in terms of the approximation error of FNNs and the model complexity of CNNs (Theorem 2). This result gives the sufficient conditions to derive the same estimation error as that of FNNs (Corollary 1).
• We apply our general theory to the Barron class and Hölder class and derive the approximation (Corollaries 2 and 4) and estimation (Corollaries 3 and 5) error rates, which are identical to those for FNNs, even if the CNNs have constant channel and filter size with respect to the sample size. In particular, this is minimax optimal for the Hölder case.
In this paper, we established new approximation and statistical learning theories for CNNs by utilizing the ResNet-type architecture of CNNs and the block-sparse structure of FNNs.
We proved that any M-way block-sparse FNN is realizable using CNNs with O(M) additional parameters, when the width of the FNN is fixed.
Using this result, we derived the approximation and estimation errors for CNNs from those for block-sparse FNNs.
Our theory is general because it does not depend on a specific function class, as long as we can approximate it with block-sparse FNNs.
To demonstrate the wide applicability of our results, we derived the approximation and error rates for the Barron class and Hölder class in almost same manner and showed that the estimation error of CNNs is same as that of FNNs, even if the CNNs have a constant channel size, filter size, and width with respect to the sample size.
The key techniques were careful evaluations of the Lipschitz constant of CNNs and non-trivial weight parameter rescaling of FNNs. One of the interesting open questions is the role of the weight rescaling.
We critically use the homogeneous property of the ReLU activation function to change the relative scale between the block-sparse part and the fully-connected part; if it were not for this property, the estimation error rate would be worse.
The general theory for rescaling, not restricted to the Barron nor Hölder class, would be beneficial for a deeper understanding of the relationship between the approximation and estimation capabilities of FNNs and CNNs. Another question is when the approximation and estimation error rates of CNNs can exceed those of FNNs.
We can derive the same rates as FNNs essentially because we can realize block-sparse FNNs using CNNs that have the same order of parameters (see Theorem 1).
Therefore, if we dig into the internal structure of FNNs, like repetition, more carefully, the CNNs might need fewer parameters and can achieve better estimation error rate.
Note that there is no hope to enhance this rate for the Hölder case (up to logarithmic factors) because the estimation rate using FNNs is already minimax optimal.
It is left for future research which function classes and constraints of FNNs, like block-sparseness, we should choose.
|
It is shown that ResNet-type CNNs are a universal approximator and its expression ability is not worse than fully connected neural networks (FNNs) with a \textit{block-sparse} structure even if the size of each layer in the CNN is fixed.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:593
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Few shot image classification aims at learning a classifier from limited labeled data.
Generating the classification weights has been applied in many meta-learning approaches for few shot image classification due to its simplicity and effectiveness.
However, we argue that it is difficult to generate the exact and universal classification weights for all the diverse query samples from very few training samples.
In this work, we introduce Attentive Weights Generation for few shot learning via Information Maximization (AWGIM), which addresses current issues by two novel contributions.
i) AWGIM generates different classification weights for different query samples by letting each query sample attend to the whole support set.
ii) To guarantee that the generated weights are adaptive to different query samples, we re-formulate the problem to maximize the lower bound of mutual information between the generated weights and the query as well as the support data.
To the best of our knowledge, this is the first attempt to incorporate information maximization into few-shot learning.
Both contributions are shown to be effective in extensive experiments, and we show that AWGIM is able to achieve state-of-the-art performance on benchmark datasets.
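A minimal sketch of query-specific weight generation via cross-attention is shown below: each query embedding attends over the support embeddings, and the attended summary is decoded into that query's classification weights. The single attention head, layer shapes, and omission of the mutual-information training terms are simplifications and assumptions, not the paper's exact architecture.

```python
# Hedged sketch of generating per-query classification weights by attending
# over the support set. Layer sizes and the single attention head are
# illustrative assumptions; the mutual-information objectives are omitted.
import torch
import torch.nn as nn

class AttentiveWeightsGenerator(nn.Module):
    def __init__(self, dim, n_way):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        self.to_weights = nn.Linear(dim, n_way * dim)  # per-query classifier weights
        self.n_way, self.dim = n_way, dim

    def forward(self, query, support):
        # query: (n_query, dim), support: (n_way * k_shot, dim)
        ctx, _ = self.attn(query.unsqueeze(0), support.unsqueeze(0), support.unsqueeze(0))
        w = self.to_weights(ctx.squeeze(0)).view(-1, self.n_way, self.dim)
        logits = torch.einsum("qd,qcd->qc", query, w)  # score each query with its own weights
        return logits
```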
While deep learning methods achieve great success in domains such as computer vision (He et al., 2016) , natural language processing (Devlin et al., 2018) , reinforcement learning (Silver et al., 2018) , their hunger for large amount of labeled data limits the application scenarios where only a few data are available for training.
Humans, in contrast, are able to learn from limited data, which is desirable for deep learning methods.
Few shot learning is thus proposed to enable deep models to learn from very few samples (Fei-Fei et al., 2006) .
Meta learning is by far the most popular and promising approach for few shot problems (Vinyals et al., 2016; Finn et al., 2017; Snell et al., 2017; Ravi & Larochelle, 2016; Rusu et al., 2019) .
In meta learning approaches, the model extracts high level knowledge across different tasks so that it can adapt itself quickly to a new-coming task (Schmidhuber, 1987; Andrychowicz et al., 2016) .
There are several kinds of meta learning methods for few shot learning, such as gradient-based (Finn et al., 2017; Ravi & Larochelle, 2016) and metric-based (Snell et al., 2017; Sung et al., 2018) .
Weights generation, among these different methods, has shown effectiveness with simple formulation (Qi et al., 2018; Qiao et al., 2018; Gidaris & Komodakis, 2018; .
In general, weights generation methods learn to generate the classification weights for different tasks conditioned on the limited labeled data.
However, fixed classification weights for different query samples within one task might be sub-optimal, due to the few shot challenge.
We introduce Attentive Weights Generation for few shot learning via Information Maximization (AWGIM) in this work to address these limitations.
In AWGIM, the classification weights are generated for each query sample specifically.
This is done by two encoding paths where the query sample attends to the task context.
However, we show in experiments that simple cross attention between query samples and support set fails to guarantee classification weights fitted to diverse query data since the query-specific information is lost during weights generation.
Therefore, we propose to maximize the lower bound of mutual information between generated weights and query, support data.
As far as we know, AWGIM is the first work introducing Variational Information Maximization in few shot learning.
The induced computational overhead is minimal due to the nature of few shot problems.
Furthermore, by maximizing the lower bound of mutual information, AWGIM gets rid of inner update without compromising performance.
AWGIM is evaluated on two benchmark datasets and shows state-of-the-art performance.
We also conducted detailed analysis to validate the contribution of each component in AWGIM.
Related Works (Few-Shot Learning): Learning from few labeled training data has received growing attention recently.
Most successful existing methods apply meta learning to solve this problem and can be divided into several categories.
In the gradient-based approaches, an optimal initialization for all tasks is learned (Finn et al., 2017) .
Ravi & Larochelle (2016) learned a meta-learner LSTM directly to optimize the given few-shot classification task.
Sun et al. (2019) learned the transformation for activations of each layer by gradients to better suit the current task.
In the metric-based methods, a similarity metric between query and support samples is learned (Koch et al., 2015; Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018; Li et al., 2019a).
Spatial information or local image descriptors are also considered in some works to compute richer similarities (Lifchitz et al., 2019; Li et al., 2019b; Wertheimer & Hariharan, 2019) .
Generating the classification weights directly has been explored by some works.
Gidaris & Komodakis (2018) generated classification weights as linear combinations of weights for base and novel classes.
Similarly, Qiao et al. (2018) and Qi et al. (2018) both generated the classification weights from activations of a trained feature extractor.
Graph neural network denoising autoencoders are used in (Gidaris & Komodakis, 2019) .
Munkhdalai & Yu (2017) proposed to generate "fast weights" from the loss gradient for each task.
None of these methods considers generating different weights for different query examples, nor maximizing mutual information.
There are some other methods for few-shot classification.
Generative models are used to generate or hallucinate more data (Wang et al., 2018; Chen et al., 2019).
Bertinetto et al. (2019) used closed-form solutions directly for few-shot classification.
Other work integrated label propagation on a transductive graph to predict the query class label.
In this work, we introduce Attentive Weights Generation via Information Maximization (AWGIM) for few shot image classification.
AWGIM learns to generate optimal classification weights for each query sample within the task by two encoding paths.
To guarantee this, a lower bound on the mutual information between the generated weights and the query and support data is maximized.
As far as we know, AWGIM is the first work utilizing mutual information techniques for few-shot learning.
The effectiveness of AWGIM is demonstrated by state-of-the-art performance on two benchmark datasets and extensive analysis.
|
A novel few shot learning method to generate query-specific classification weights via information maximization.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:594
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Conversational question answering (CQA) is a novel QA task that requires the understanding of dialogue context.
Different from traditional single-turn machine reading comprehension (MRC), CQA is a comprehensive task comprised of passage reading, coreference resolution, and contextual understanding.
In this paper, we propose an innovative contextualized attention-based deep neural network, SDNet, to fuse context into traditional MRC models.
Our model leverages both inter-attention and self-attention to comprehend the conversation and passage.
Furthermore, we demonstrate a novel method to integrate the BERT contextual model as a sub-module in our network.
Empirical results show the effectiveness of SDNet.
On the CoQA leaderboard, it outperforms the previous best model's F1 score by 1.6%.
Our ensemble model further improves the F1 score by 2.7%.
Machine reading comprehension (MRC) is a core NLP task in which a machine reads a passage and then answers related questions.
It requires a deep understanding of both the article and the question, as well as the ability to reason about the passage and make inferences.
These capabilities are essential in applications like search engines and conversational agents.
In recent years, there have been numerous studies in this field (Huang et al., 2017; Seo et al., 2016; Liu et al., 2017) , with various innovations in text encoding, attention mechanisms and answer verification.
However, traditional MRC tasks often take the form of single-turn question answering.
In other words, there is no connection between different questions and answers to the same passage.
This oversimplifies the conversational manner humans naturally take when probing a passage, where question turns are assumed to be remembered as context to subsequent queries.
Figure 1 demonstrates an example of conversational question answering in which one needs to correctly refer "she" in the last two rounds of questions to its antecedent in the first question, "Cotton."
To accomplish this kind of task, the machine must comprehend both the current round's question and previous rounds of utterances in order to perform coreference resolution, pragmatic reasoning and semantic implication.
To facilitate research in conversation question answering (CQA), several public datasets have been published that evaluate a model's efficacy in this field, such as CoQA (Reddy et al., 2018) , QuAC and QBLink (Elgohary et al., 2018) .
In these datasets, to generate correct responses, models need to fully understand the given passage as well as the dialogue context.
Thus, traditional MRC models are not suitable to be directly applied to this scenario.
Therefore, a number of models have been proposed to tackle the conversational QA task.
DrQA+PGNet (Reddy et al., 2018) combines evidence finding and answer generation to produce answers.
BiDAF++ (Yatskar, 2018) achieves better results by employing answer marking and contextualized word embeddings on the MRC model BiDAF (Seo et al., 2016).
FlowQA (Huang et al., 2018) leverages a recurrent neural network over previous rounds of questions and answers to absorb information from its history context.
Figure 1 (example passage): "Once upon a time, in a barn near a farm house, there lived a little white kitten named Cotton. Cotton lived high up in a nice warm place above the barn where all of the farmer's horses slept. But Cotton wasn't alone in her little home above the barn, oh no. She shared her hay bed with her mommy and 5 other sisters..."
In this paper, we propose a novel contextual attention-based deep neural network, SDNet, to tackle the conversational question answering task.
By leveraging inter-attention and self-attention on passage and conversation history, the model is able to comprehend dialogue flow and the passage.
Furthermore, we leverage the latest breakthrough in NLP, BERT, as a contextual embedder.
We design tokenizer alignment, linear combination, and weight-locking techniques to adapt BERT into our model in a computation-efficient way.
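As a rough illustration of the "linear combination" and "weight-locking" ideas (not the paper's actual implementation), one could wrap a frozen contextual embedder and learn only per-layer mixing weights; the `bert` interface returning a list of per-layer hidden states and the softmax over layer weights are assumptions on our part.

import torch
import torch.nn as nn

class LockedBertCombiner(nn.Module):
    """Weighted sum of BERT layer outputs with the BERT weights frozen ("weight-locking").
    `bert` is assumed to return a list of per-layer hidden states of shape (batch, seq, hidden)."""
    def __init__(self, bert, num_layers):
        super().__init__()
        self.bert = bert
        for p in self.bert.parameters():       # lock BERT: only the mixing weights are trained
            p.requires_grad = False
        self.layer_logits = nn.Parameter(torch.zeros(num_layers))

    def forward(self, token_ids):
        with torch.no_grad():                  # frozen contextual embedder
            layers = self.bert(token_ids)      # list of num_layers tensors
        alpha = torch.softmax(self.layer_logits, dim=0)
        # Linear combination across layers, one scalar weight per layer.
        return sum(a * h for a, h in zip(alpha, layers))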
SDNet achieves superior results over previous approaches.
On the public dataset CoQA, SDNet outperforms the previous state-of-the-art model by 1.6% in overall F1 score, and the ensemble model further improves the F1 by 2.7%.
Our future work is to apply this model to the open-domain, multi-turn QA problem with a large corpus or knowledge base, where the target passage may not be directly available.
This will be a more realistic setting for human question answering.
|
A neural method for conversational question answering with attention mechanism and a novel usage of BERT as contextual embedder
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:595
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Generative adversarial networks (GANs) are one of the most popular approaches when it comes to training generative models, among which variants of Wasserstein GANs are considered superior to the standard GAN formulation in terms of learning stability and sample quality.
However, Wasserstein GANs require the critic to be 1-Lipschitz, which is often enforced implicitly by penalizing the norm of its gradient, or by globally restricting its Lipschitz constant via weight normalization techniques.
Training with a regularization term penalizing the violation of the Lipschitz constraint explicitly, instead of through the norm of the gradient, was found to be practically infeasible in most situations.
Inspired by Virtual Adversarial Training, we propose a method called Adversarial Lipschitz Regularization, and show that using an explicit Lipschitz penalty is indeed viable and leads to competitive performance when applied to Wasserstein GANs, highlighting an important connection between Lipschitz regularization and adversarial training.
In recent years, Generative adversarial networks (GANs) (Goodfellow et al., 2014) have been becoming the state-of-the-art in several generative modeling tasks, ranging from image generation (Karras et al., 2018) to imitation learning (Ho and Ermon, 2016) .
They are based on an idea of a two-player game, in which a discriminator tries to distinguish between real and generated data samples, while a generator tries to fool the discriminator, learning to produce realistic samples on the long run.
Wasserstein GAN (WGAN) was proposed as a solution to the issues present in the original GAN formulation.
Replacing the discriminator, WGAN trains a critic to approximate the Wasserstein distance between the real and generated distributions.
This introduced a new challenge, since Wasserstein distance estimation requires the function space of the critic to only consist of 1-Lipschitz functions.
To enforce the Lipschitz constraint on the WGAN critic, weight clipping was originally used, which was soon replaced by the much more effective method of Gradient Penalty (GP) (Gulrajani et al., 2017), which penalizes the deviation of the critic's gradient norm from 1 at certain input points.
Since then, several variants of gradient norm penalization have been introduced (Petzka et al., 2018; Wei et al., 2018; Adler and Lunz, 2018; Zhou et al., 2019b) .
Virtual Adversarial Training (VAT) (Miyato et al., 2019 ) is a semi-supervised learning method for improving robustness against local perturbations of the input.
Using an iterative method based on power iteration, it approximates the adversarial direction corresponding to certain input points.
Perturbing an input towards its adversarial direction changes the network's output the most.
Inspired by VAT, we propose a method called Adversarial Lipschitz Regularization (ALR), enabling the training of neural networks with regularization terms penalizing the violation of the Lipschitz constraint explicitly, instead of through the norm of the gradient.
It provides a means to generate, for each input point, a paired point at which the Lipschitz constraint is likely to be violated.
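A minimal sketch of what such an explicit penalty could look like, assuming a single VAT-style refinement step, perturbation scales xi and eps, and a target Lipschitz constant K = 1; this is our own illustration, not the exact algorithm of the paper.

import torch

def adversarial_lipschitz_penalty(f, x, xi=10.0, eps=1.0, K=1.0):
    # Pick a random direction and refine it by one gradient step so that it
    # (approximately) maximizes the change in f, in the spirit of VAT's power iteration.
    d = torch.randn_like(x)
    d = xi * d / (d.reshape(len(x), -1).norm(dim=1).view(-1, *[1] * (x.dim() - 1)) + 1e-12)
    d.requires_grad_(True)
    diff = (f(x + d) - f(x)).abs().sum()
    grad = torch.autograd.grad(diff, d)[0]
    r = eps * grad / (grad.reshape(len(x), -1).norm(dim=1).view(-1, *[1] * (x.dim() - 1)) + 1e-12)

    # Explicit Lipschitz penalty: |f(x + r) - f(x)| / ||r|| should not exceed K.
    num = (f(x + r) - f(x)).abs().reshape(len(x), -1).sum(dim=1)
    den = r.reshape(len(x), -1).norm(dim=1) + 1e-12
    return torch.clamp(num / den - K, min=0.0).pow(2).mean()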
In general, enforcing Lipschitz continuity of complex models can be useful for a lot of applications.
In this work, we focus on applying ALR to Wasserstein GANs, as regularizing or constraining Lipschitz continuity has proven to have a high impact on training stability and reducing mode collapse.
Source code to reproduce the presented experiments is available at https://github.com/dterjek/adversarial_lipschitz_regularization.
Inspired by VAT, we proposed ALR and have shown that it is an efficient and powerful method for learning Lipschitz-constrained mappings implemented by neural networks.
Resulting in competitive performance when applied to the training of WGANs, ALR is a generally applicable regularization method.
It draws an important parallel between Lipschitz regularization and adversarial training, which we believe can prove to be a fruitful line of future research.
|
alternative to gradient penalty
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:596
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Multi-task learning promises to use less data, parameters, and time than training separate single-task models.
But realizing these benefits in practice is challenging.
In particular, it is difficult to define a suitable architecture that has enough capacity to support many tasks while not requiring excessive compute for each individual task.
There are difficult trade-offs when deciding how to allocate parameters and layers across a large set of tasks.
To address this, we propose a method for automatically searching over multi-task architectures that accounts for resource constraints.
We define a parameterization of feature sharing strategies for effective coverage and sampling of architectures.
We also present a method for quick evaluation of such architectures with feature distillation.
Together these contributions allow us to quickly optimize for parameter-efficient multi-task models.
We benchmark on Visual Decathlon, demonstrating that we can automatically search for and identify architectures that effectively make trade-offs between task resource requirements while maintaining a high level of final performance.
Multi-task learning allows models to leverage similarities across tasks and avoid overfitting to the particular features of any one task (Caruana, 1997; Zamir et al., 2018) .
This can result in better generalization and more robust feature representations.
While this makes multi-task learning appealing for its potential performance improvements, there are also benefits in terms of resource efficiency.
Training a multi-task model should require less data, fewer training iterations, and fewer total parameters than training an equivalent set of task-specific models.
In this work we investigate how to automatically search over high performing multi-task architectures while taking such resource constraints into account.
Finding architectures that offer the best accuracy possible given particular resource constraints is nontrivial.
There are subtle trade-offs in performance when increasing or reducing use of parameters and operations.
Furthermore, with multiple tasks, one must take into account the impact of shared operations.
There is a large space of options for tweaking such architectures, in fact so large that it is difficult to tune an optimal configuration manually.
Neural architecture search (NAS) allows researchers to automatically search for models that offer the best performance trade-offs relative to some metric of efficiency.
Here we define a multi-task architecture as a single network that supports separate outputs for multiple tasks.
These outputs are produced by unique execution paths through the model.
In a neural network, such a path is made up of a subset of the total nodes and operations in the model.
This subset may or may not overlap with those of other tasks.
During inference, unused parts of the network can be ignored by either pruning out nodes or zeroing out their activations (Figure 1 ).
Such architectures mean improved parameter efficiency because redundant operations and features can be consolidated and shared across a set of tasks.
We seek to optimize for the computational efficiency of multi-task architectures by finding models that perform as well as possible while reducing average node use per task.
Different tasks will require different capacities to do well, so reducing average use requires effectively identifying which tasks will ask more of the model and which tasks can perform well with less.
In addition, performance is affected by how nodes are shared across tasks.
It is unclear when allocating resources whether sets of tasks would benefit from sharing parameters or would instead interfere.
Figure 1: Feature partitioning can be used to control how much network capacity is used by tasks, and how much sharing is done across tasks.
In this work we identify effective partitioning strategies to maximize performance while reducing average computation per task.
When searching over architectures, differences in resource use can be compared at different levels of granularity.
Most existing work in NAS and multi-task learning searches over the allocation and use of entire layers (Zoph & Le, 2016; Fernando et al., 2017; Rosenbaum et al., 2017); we instead partition out individual feature channels within a layer.
This offers a greater degree of control over both the computation required by each task and the sharing that takes place between tasks.
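As a minimal sketch (our own illustration; the paper's partitioning parameterization is richer), per-task binary masks over a convolutional layer's output channels can realize channel-level partitioning, with overlapping mask entries giving shared channels.

import torch
import torch.nn as nn

class PartitionedConv(nn.Module):
    """A conv layer whose output channels are partitioned across tasks via per-task binary masks."""
    def __init__(self, in_ch, out_ch, task_masks):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # task_masks: (num_tasks, out_ch) tensor of 0/1, fixed by the partitioning strategy.
        self.register_buffer("task_masks", task_masks.float())

    def forward(self, x, task_id):
        y = self.conv(x)
        mask = self.task_masks[task_id].view(1, -1, 1, 1)   # zero out channels unused by this task
        return y * mask

# Example: 2 tasks over 8 channels; channels 0-5 for task 0, channels 2-7 for task 1,
# so channels 2-5 are shared while the rest are task-specific.
masks = torch.zeros(2, 8)
masks[0, :6] = 1
masks[1, 2:] = 1
layer = PartitionedConv(3, 8, masks)
out = layer(torch.randn(1, 3, 32, 32), task_id=0)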
The main obstacle to address in searching for effective multi-task architectures is the vast number of possibilities for performing feature partitioning as well as the significant amount of computation required to evaluate and compare arrangements.
A naive brute search over different partitioning strategies is prohibitively expensive.
We leverage our knowledge of the search space to explore it more effectively.
We propose a parameterization of partitioning strategies to reduce the size of the search space by eliminating unnecessary redundancies and more compactly expressing the key features that distinguish different architectures.
In addition, the main source of overhead in NAS is evaluation of sampled architectures.
It is common to define a surrogate operation that can be used in place of training a full model to convergence.
Often a smaller model will be trained for far fewer iterations, with the hope that the differences in accuracy that emerge early on correlate with the final performance of the full model.
We propose a strategy for evaluating multi-task architectures using feature distillation which provides much faster feedback on the effectiveness of a proposed partitioning strategy while correlating well with final validation accuracy.
In this work we provide:
• a parameterization that aids automatic architecture search by providing a direct and compact representation of the space of sharing strategies in multi-task architectures.
• an efficient method for evaluating proposed parameterizations using feature distillation to further accelerate the search process.
• results on Visual Decathlon (Rebuffi et al., 2017) to demonstrate that our search strategy allows us to effectively identify trade-offs between parameter use and performance on diverse and challenging image classification datasets.
In this work we investigate efficient multi-task architecture search to quickly find models that achieve high performance under a limited per-task budget.
We propose a novel strategy for searching over feature partitioning that automatically determines how much network capacity should be used by each task and how many parameters should be shared between tasks.
We design a compact representation to serve as a search space, and show that we can quickly estimate the performance of different partitioning schemes by using feature distillation.
|
automatic search for multi-task architectures that reduce per-task feature use
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:597
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
As distributed approaches to natural language semantics have developed and diversified, embedders for linguistic units larger than words (e.g., sentences) have come to play an increasingly important role.
To date, such embedders have been evaluated using benchmark tasks (e.g., GLUE) and linguistic probes.
We propose a comparative approach, nearest neighbor overlap (N2O), that quantifies similarity between embedders in a task-agnostic manner.
N2O requires only a collection of examples and is simple to understand: two embedders are more similar if, for the same set of inputs, there is greater overlap between the inputs' nearest neighbors.
We use N2O to compare 21 sentence embedders and show the effects of different design choices and architectures.
Continuous embeddings, of words and of larger linguistic units, are now ubiquitous in NLP.
The success of self-supervised pretraining methods that deliver embeddings from raw corpora has led to a proliferation of embedding methods, with an eye toward "universality" across NLP tasks.
Our focus here is on sentence embedders, and specifically their evaluation.
As with most NLP components, intrinsic and extrinsic (e.g., GLUE; Wang et al., 2019) evaluations have emerged for sentence embedders.
Our approach, nearest neighbor overlap (N2O), is different: it compares a pair of embedders in a linguistics-and task-agnostic manner, using only a large unannotated corpus.
The central idea is that two embedders are more similar if, for a fixed query sentence, they tend to find nearest neighbor sets that overlap to a large degree.
By drawing a random sample of queries from the corpus itself, N2O can be computed on in-domain data without additional annotation, and therefore can help inform embedder choices in applications such as text clustering (Cutting et al., 1992) , information retrieval (Salton & Buckley, 1988) , and open-domain question answering (Seo et al., 2018) , among others.
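A rough sketch of the computation, based only on the definition above; the choice of cosine similarity, exact (brute-force) nearest-neighbor search, and the neighborhood size k are assumptions.

import numpy as np

def n2o(emb_a, emb_b, query_idx, k=50):
    """Nearest neighbor overlap between two embedders.
    emb_a, emb_b: (n_sentences, dim_a) and (n_sentences, dim_b) embeddings of the same corpus.
    query_idx: indices of the sampled query sentences."""
    def knn(emb, q, k):
        # cosine similarity via normalized dot products
        e = emb / np.linalg.norm(emb, axis=1, keepdims=True)
        sims = e @ e[q]                      # similarity of every sentence to the query
        sims[q] = -np.inf                    # exclude the query itself
        return set(np.argpartition(-sims, k)[:k])

    overlaps = [len(knn(emb_a, q, k) & knn(emb_b, q, k)) / k for q in query_idx]
    return float(np.mean(overlaps))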
After motivating and explaining the N2O method (§2), we apply it to 21 sentence embedders (§3-4).
Our findings (§5) reveal relatively high functional similarity among averaged static (noncontextual) word type embeddings, a strong effect of the use of subword information, and that BERT and GPT are distant outliers.
In §6, we demonstrate the robustness of N2O across different query samples and probe sizes.
We also illustrate additional analyses made possible by N2O: identifying embedding-space neighbors of a query sentence that are stable across embedders, and those that are not (§7); and probing the abilities of embedders to find a known paraphrase (§8).
The latter reveals considerable variance across embedders' ability to identify semantically similar sentences from a broader corpus.
In this paper, we introduce nearest neighbor overlap (N2O), a comparative approach to quantifying similarity between sentence embedders.
Using N2O, we draw comparisons across 21 embedders.
We also provide additional analyses made possible with N2O, from which we find high variation in embedders' treatment of semantic similarity.
GloVe.
We use three sets of standard pretrained GloVe embeddings: 100D and 300D embeddings trained on Wikipedia and Gigaword (6B tokens), and 300D embeddings trained on Common Crawl (840B tokens).
We handle tokenization and embedding lookup identically to word2vec; for the Wikipedia/Gigaword embeddings, which are uncased, we lower case all tokens as well.
FastText.
We use four sets of pretrained FastText embeddings: two trained on Wikipedia and other news corpora, and two trained on Common Crawl (each with an original version and one trained on subword information).
We use the Python port of the FastText implementation to handle tokenization, embedding lookup, and OOV embedding computation.
|
We propose nearest neighbor overlap, a procedure which quantifies similarity between embedders in a task-agnostic manner, and use it to compare 21 sentence embedders.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:598
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Generative Adversarial Networks (GANs) can achieve state-of-the-art sample quality in generative modelling tasks but suffer from the mode collapse problem.
Variational Autoencoders (VAE) on the other hand explicitly maximize a reconstruction-based data log-likelihood forcing it to cover all modes, but suffer from poorer sample quality.
Recent works have proposed hybrid VAE-GAN frameworks which integrate a GAN-based synthetic likelihood to the VAE objective to address both the mode collapse and sample quality issues, with limited success.
This is because the VAE objective forces a trade-off between the data log-likelihood and divergence to the latent prior.
The synthetic likelihood ratio term also shows instability during training.
We propose a novel objective with a "Best-of-Many-Samples" reconstruction cost and a stable direct estimate of the synthetic likelihood.
This enables our hybrid VAE-GAN framework to achieve high data log-likelihood and low divergence to the latent prior at the same time and shows significant improvement over both hybrid VAE-GANS and plain GANs in mode coverage and quality.
Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have achieved state-of-the-art sample quality in generative modeling tasks.
However, GANs do not explicitly estimate the data likelihood.
Instead, it aims to "fool" an adversary, so that the adversary is unable to distinguish between samples from the true distribution and the generated samples.
This leads to the generation of high quality samples (Adler & Lunz, 2018; Brock et al., 2019) .
However, there is no incentive to cover the whole data distribution.
Entire modes of the true data distribution can be missedcommonly referred to as the mode collapse problem.
In contrast, the Variational Auto-Encoders (VAEs) (Kingma & Welling, 2014) explicitly maximize data likelihood and can be forced to cover all modes (Bozkurt et al., 2018; Shu et al., 2018) .
VAEs enable sampling by constraining the latent space to a unit Gaussian and sampling through the latent space.
However, VAEs maximize a data likelihood estimate based on the L 1 /L 2 reconstruction cost which leads to lower overall sample quality -blurriness in case of image distributions.
Therefore, there has been a spur of recent work (Donahue et al., 2017; Larsen et al., 2016; Rosca et al., 2019) which aims integrate GANs in a VAE framework to improve VAE generation quality while covering all the modes.
Notably in Rosca et al. (2019) , GANs are integrated in a VAE framework by augmenting the L 1 /L 2 data likelihood term in the VAE objective with a GAN discriminator based synthetic likelihood ratio term.
However, Rosca et al. (2019) reports that in case of hybrid VAE-GANs, the latent space does not usually match the Gaussian prior.
This is because the reconstruction log-likelihood in the VAE objective is at odds with the divergence to the latent prior (Tabor et al., 2018), as is also the case for the alternatives proposed by Makhzani et al. (2016).
This problem is further exacerbated with the addition of the synthetic likelihood term in the hybrid VAE-GAN objective -it is necessary for sample quality but it introduces additional constraints on the encoder/decoder.
This leads to the degradation in the quality and diversity of samples.
Moreover, the synthetic likelihood ratio term is unstable during training -as it is the ratio of outputs of a classifier, any instability in the output of the classifier is magnified.
We directly estimate the ratio using a network with a controlled Lipschitz constant, which leads to significantly improved stability.
Our contributions in detail are,
1. We propose a novel objective for training hybrid VAE-GAN frameworks, which relaxes the constraints on the encoder by giving the encoder multiple chances to draw samples with high likelihood, enabling it to generate realistic images while covering all modes of the data distribution (a sketch of this reconstruction term follows the list),
2. Our novel objective directly estimates the synthetic likelihood term with a controlled Lipschitz constant for stability,
3. Finally, we demonstrate significant improvement over prior hybrid VAE-GANs and plain GANs on highly muti-modal synthetic data, CIFAR-10 and CelebA.
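The following is a minimal sketch of how a "Best-of-Many-Samples" reconstruction term could be computed; it is our paraphrase rather than the authors' code, and the Gaussian encoder interface, the per-example L1 reconstruction error, and T = 10 samples are assumptions.

import torch

def best_of_many_reconstruction(encoder, decoder, x, T=10):
    """Reconstruction cost that only penalizes the best of T decoded samples per input."""
    mu, logvar = encoder(x)                                   # assumed Gaussian posterior parameters
    losses = []
    for _ in range(T):
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterized latent sample
        x_hat = decoder(z)
        # per-example L1 reconstruction error (proxy for the negative log-likelihood)
        losses.append((x_hat - x).abs().reshape(len(x), -1).mean(dim=1))
    losses = torch.stack(losses, dim=0)                       # (T, batch)
    return losses.min(dim=0).values.mean()                    # keep only the best of the T samples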
We further compare our BMS-VAE-GAN to state-of-the-art GANs using the Standard CNN architecture in Table 6 with 100k generator iterations.
Our α-GAN + SN ablation significantly outperforms the state-of-the-art plain GANs (Adler & Lunz, 2018; Miyato et al., 2018) , showing the effectiveness of hybrid VAE-GANs with a stable direct estimate of the synthetic likelihood on this highly diverse dataset.
Furthermore, our BMS-VAE-GAN model trained using the best of T = 30 samples significantly improves over the α-GAN + SN baseline (23.4 vs 24.6 FID), showing the effectiveness of our "Best-of-Many-Samples".
We also compare to Tran et al. (2018) using 300k generator iterations, again outperforming by a significant margin (21.8 vs 22.9 FID).
The IoVM metric of Srivastava et al. (2017) (Tables 4 and 5) illustrates that we are also able to better reconstruct the image distribution.
The improvement in both sample quality as measured by the FID metric and data reconstruction as measured by the IoVM metric shows that our novel "Best-of-Many-Samples" objective helps us both match the prior in the latent space and achieve high data log-likelihood at the same time.
We train all models for 200k iterations and report the FID scores (Heusel et al., 2017) for all models using 10k/10k real/generated samples in Table 7 .
The pure auto-encoding based WAE (Tolstikhin et al., 2018) has the weakest performance due to blurriness.
Our pure autoencoding BMS-VAE (without synthetic likelihoods) improves upon the WAE (39.8 vs 41.2 FID), already demonstrating the effectiveness of using "Best-of-Many-Samples".
We see that the base DCGAN has the weakest performance among the GANs.
BEGAN suffers from partial mode collapse.
The SN-GAN improves upon WGAN-GP, showing the effectiveness of Spectral Normalization.
However, there exists considerable artifacts in its generations.
The α-GAN of Rosca et al. (2019) , which integrates the base DCGAN in its framework performs significantly better (31.1 vs 19.2 FID).
This shows the effectiveness of VAE-GAN frameworks in increasing quality and diversity of generations.
Our enhanced α-GAN + SN regularized with Spectral Normalization performs significantly better (15.1 vs 19.2 FID).
This shows the effectiveness of a regularized direct estimate of the synthetic likelihood.
Using the gradient penalty regularizer of Gulrajani et al. (2017) led to a drop of 0.4 FID.
Our BMS-VAE-GAN improves significantly over the α-GAN + SN baseline using the "Best-of-Many-Samples" (13.6 vs 15.1 FID).
The results at 128×128 resolution mirror the results at 64×64.
We additionally evaluate using the IoVM metric in Appendix C. We see that by using the "Best-of-Many-Samples" we obtain sharper results (Figure 4d) that cover more of the data distribution, as shown by both the FID and IoVM.
We propose a new objective for training hybrid VAE-GAN frameworks which overcomes key limitations of current hybrid VAE-GANs.
We integrate:
1. A "Best-of-Many-Samples" reconstruction likelihood, which helps in covering all the modes of the data distribution while maintaining a latent space as close to Gaussian as possible, and
2. A stable estimate of the synthetic likelihood ratio.
Our hybrid VAE-GAN framework outperforms state-of-the-art hybrid VAE-GANs and plain GANs in generative modelling on CelebA and CIFAR-10, demonstrating the effectiveness of our approach.
|
We propose a new objective for training hybrid VAE-GANs which lead to significant improvement in mode coverage and quality.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:599
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep learning training accesses vast amounts of data at high velocity, posing challenges for datasets retrieved over commodity networks and storage devices.
We introduce a way to dynamically reduce the overhead of fetching and transporting training data with a method we term Progressive Compressed Records (PCRs).
PCRs deviate from previous formats by leveraging progressive compression to split each training example into multiple examples of increasingly higher fidelity, without adding to the total data size.
Training examples of similar fidelity are grouped together, which reduces both the system overhead and data bandwidth needed to train a model.
We show that models can be trained on aggressively compressed representations of the training data and still retain high accuracy, and that PCRs can enable a 2x speedup on average over baseline formats using JPEG compression.
Our results hold across deep learning architectures for a wide range of datasets: ImageNet, HAM10000, Stanford Cars, and CelebA-HQ.
Distributed deep learning exploits parallelism to reduce training time, and consists of three key components: the data pipeline (storage), the forward/backward computation (compute), and the variable synchronization (network).
A plethora of work has investigated scaling deep learning from a compute- or network-bound perspective (e.g., Dean et al., 2012; Cui et al., 2016; Abadi et al., 2015; Cui et al., 2014; Jouppi et al., 2017; Lim et al., 2019; Alistarh et al., 2017; Wen et al., 2017; Wangni et al., 2018).
However, little attention has been paid toward scaling the storage layer, where training starts and training data is sourced.
Unfortunately, hardware trends point to an increasing divide between compute and networking or storage bandwidth (Li et al., 2016; Lim et al., 2019; Kurth et al., 2018) .
For example, the transportation of data for machine learning is a key factor in the design of modern data centers (Hazelwood et al., 2018) , which are expected to be serviced by slow, yet high capacity, storage media for the foreseeable future (David Reinsel, 2018; Cheng et al., 2015; Rosenthal et al., 2012) .
This, combined with the memory wall-a lack of bandwidth between compute and memory-suggests that, while computation may be sufficient moving forward, the mechanisms for moving data to the compute may not (Wulf & McKee, 1995; Kwon & Rhu, 2018; Hsieh et al., 2017; Zinkevich et al., 2010) .
The storage pipeline is therefore a natural area to seek improvements in overall training times, which manifest from the storage medium, through the network, and into the compute nodes.
In this work, we propose a novel on-disk format called Progressive Compressed Records (PCRs) as a way to reduce the bandwidth cost associated with training over massive datasets.
Our approach leverages a compression technique that decomposes each data item into deltas, each of which increases data fidelity.
PCRs utilize deltas to dynamically compress entire datasets at a fidelity suitable for each application's needs, avoiding duplicating the dataset (potentially many times) at various fidelity levels.
Applications control the trade-off between dataset size (and, thus, bandwidth) and fidelity, and a careful layout of deltas ensures that data access is efficient at a storage medium level.
As a result, we find that for a variety of popular deep learning models and datasets, bandwidth (and therefore training time) can be easily reduced by 2× on average relative to JPEG compression without affecting model accuracy.
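A toy sketch of the layout idea using in-memory byte strings (our own illustration; the actual on-disk record framing and JPEG scan handling are more involved): deltas of the same fidelity level are grouped together, and a reader concatenates only the first L groups per example.

# Toy illustration of Progressive Compressed Records with in-memory "files".
# Each example i is a list of byte deltas deltas[i][0..L-1], where prepending
# more deltas yields a higher-fidelity decodable payload.

def write_pcr(deltas):
    """Group deltas by fidelity level: group g holds the level-g deltas of all examples."""
    num_levels = len(deltas[0])
    return [[ex[g] for ex in deltas] for g in range(num_levels)]

def read_pcr(groups, level):
    """Reconstruct every example at the requested fidelity by concatenating groups 0..level-1."""
    num_examples = len(groups[0])
    return [b"".join(groups[g][i] for g in range(level)) for i in range(num_examples)]

# Example with 3 examples and 3 fidelity levels.
deltas = [[b"a0", b"a1", b"a2"], [b"b0", b"b1", b"b2"], [b"c0", b"c1", b"c2"]]
groups = write_pcr(deltas)
low_fidelity = read_pcr(groups, level=1)    # [b"a0", b"b0", b"c0"]
mid_fidelity = read_pcr(groups, level=2)    # [b"a0a1", b"b0b1", b"c0c1"]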
Overall, we make the following contributions:
1. In experiments with multiple architectures and several large-scale image datasets, we show that neural network training is robust to data compression in terms of test accuracy and training loss; however, the amount of compression that can be tolerated varies across learning tasks.
2. We introduce Progressive Compressed Records (PCRs), a novel on-disk format for training data.
PCRs combine progressive compression and careful data placement to enable applications to dynamically choose the fidelity of the dataset they consume, reducing data bandwidth.
3. We demonstrate that by using PCRs, training speed can be improved by 2× on average over standard formats using JPEG compression.
This is achieved by selecting a lower data fidelity, which, in turn, reduces the amount of data read without significantly impairing model performance.
To continue making advances in machine learning, researchers will need access to larger and larger datasets, which will eventually spill into (potentially distributed) storage systems.
Storage and networking bandwidth, which are precious resources, can be better utilized with efficient compression formats.
We introduce a novel record format, Progressive Compressed Records (PCRs), that trades off data fidelity with storage and network demands, allowing the same model to be trained with 2× less storage bandwidth while retaining model accuracy.
PCRs use progressive compression to split training examples into multiple examples of increasingly higher fidelity without the overheads of naive approaches.
PCRs avoid duplicating space, are easy to implement, and can be applied to a broad range of tasks dynamically.
While we apply our format in this work specifically to images with JPEG compression, PCRs are general enough to handle various data modalities or additional compression techniques; future work will include exploring these directions in fields outside of visual classification, such as audio generation or video segmentation.
|
We propose a simple, general, and space-efficient data format to accelerate deep learning training by allowing sample fidelity to be dynamically selected at training time
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:6
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Domain transfer is an exciting and challenging branch of machine learning because models must learn to smoothly transfer between domains, preserving local variations and capturing many aspects of variation without labels.
However, most successful applications to date require the two domains to be closely related (e.g., image-to-image, video-to-video), utilizing similar or shared networks to transform domain-specific properties like texture, coloring, and line shapes.
Here, we demonstrate that it is possible to transfer across modalities (ex. image-to-audio) by first abstracting the data with latent generative models and then learning transformations between latent spaces.
We find that a simple variational autoencoder is able to learn a shared latent space to bridge between two generative models in an unsupervised fashion, and even between different types of models (ex. variational autoencoder and a generative adversarial network).
We can further impose desired semantic alignment of attributes with a linear classifier in the shared latent space.
The proposed variational autoencoder enables preserving both locality and semantic alignment through the transfer process, as shown in the qualitative and quantitative evaluations.
Finally, the hierarchical structure decouples the cost of training the base generative models and semantic alignments, enabling computationally efficient and data efficient retraining of personalized mapping functions.
Domain transfer has long captured the imagination of inventors and artists alike.
The early precursor of the phonograph, the phonautograph, was actually inspired by the idea of "words which write themselves", where the shape of audio waveforms would transform into the shape of writing, capturing the content and character of the speaker's voice in shape and stroke of the written characters BID9 .
While perhaps fanciful at the time, modern deep learning techniques have shown that similar complex transformations are indeed possible. Deep learning enables domain transfer by learning a smooth mapping between two domains such that the variations in one domain are reflected in the other.
This has been demonstrated to great effect within a data modality, for example transferring between two different styles of image BID12 BID18 , video BID26 and music BID23 .
The works have been the basis of interesting creative tools, as small intuitive changes in the source domain are reflected by small intuitive changes in the target domain.
Furthermore, the strong conditioning signal of the source domain makes learning transformations easier than learning a full generative model in each domain. Despite these successes, this line of work in domain transfer has several limitations.
The first limitation is that it requires the two domains to be closely related (e.g., image-to-image or video-to-video).
This allows the model to focus on transferring local properties like texture and coloring instead of high-level semantics.
For example, directly applying image-to-image transfer methods such as CycleGAN or its variants to images from distant domains leads to distorted and unrealistic results.
This agrees with the findings of BID3, who show that CycleGAN transformations are more akin to adversarial examples than style transfer, as the model learns to hide information about the source domain in near-imperceptible high-frequency variations of the target domain.

Figure caption: Our method aims at transfer from one domain to another such that the correct semantics (e.g., label) is maintained across domains and local changes in the source domain are reflected in the target domain. To achieve this, we train a model to transfer between the latent spaces of pre-trained generative models on the source and target domains. (a) Training uses three types of loss functions: (1) the VAE ELBO losses to encourage modeling of z_1 and z_2, denoted L2 and KL in the figure; (2) the Sliced Wasserstein Distance loss to encourage cross-domain overlap in the shared latent space, denoted SWD; and (3) the classification loss to encourage intra-class overlap in the shared latent space, denoted Classifier. Training is semi-supervised, since (1) and (2) require no supervision (classes) while only (3) needs such information. (b) To transfer data from one domain, x_1 (an image of the digit "0"), to another domain, x_2 (an audio clip of a human saying "zero", shown in the form of a spectrum in the example), we first encode x_1 to z_1 ∼ q(z_1|x_1), which we then further encode to a shared latent vector z ∼ q(z|z_1, D = 1) using our conditional encoder, where D denotes the operating domain. We then decode to the latent space of the target domain, z_2 = g(z_2|z, D = 2), using our conditional decoder, which finally is used to generate the transferred audio x_2 = g(x_2|z_2).

The second limitation is data efficiency. Most conditional GAN techniques, such as Pix2Pix BID12 and vid2vid BID26, require very dense supervision from large volumes of paired data. This is usually accomplished by extracting features, such as edges or a segmentation map, and then training the conditional GAN to learn the inverse mapping back to pixels. For many more interesting transformations, no such easy alignment procedure exists, and paired data is scarce. We demonstrate the limitation of existing approaches in Appendix C.

For multi-modal domain transfer, we seek to train a model capable of transferring instances from a source domain (x_1) to a target domain (x_2), such that local variations in the source domain are transferred to local variations in the target domain. We refer to this property as locality. Thus, local interpolation in the source domain would ideally be similar to local interpolation in the target domain when transferred.

There are many possible ways that two domains could align such that they maintain locality, with many different alignments of semantic attributes. For instance, for a limited dataset, there is no a priori reason that images of the digit "0" and spoken utterances of the digit "0" would align with each other. Or, more abstractly, there may be no agreed common semantics for images of landscapes and passages of music, and it is at the liberty of the user to define such connections based on their own intent. Our goal in modeling is to respect the user's intent and make sure that the correct semantics (e.g., labels) are shared between the two domains after transfer. We refer to this property as semantic alignment. A user can thus sort a set of data points from each domain into common bins, which we can use to constrain the cross-domain alignment. We can quantitatively measure the degree of semantic alignment by using a classifier to label transformed data and measuring the percentage of data points that fall into the same bin for the source and target domain. Our goal can thus be stated as learning transformations that preserve locality and semantic alignment, while requiring as few labels from a user as possible.

To achieve this goal and tackle the prior limitations, we propose to abstract the two domains with independent latent variable models, and then learn to transfer between the latent spaces of those models. Our main contributions include:
• We propose a shared "bridging" VAE to transfer between latent generative models. Locality and semantic alignment of transformations are encouraged by applying a sliced-Wasserstein distance and a classification loss, respectively, to the shared latent space.
• We demonstrate with qualitative and quantitative results that our proposed method enables transfer both within a modality (image-to-image) and between modalities (image-to-audio).
• Since we train a smaller secondary model in latent space, we find improvements in training efficiency, measured both in terms of the amount of labeled data required and training time.
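A minimal sketch of the transfer procedure described in the figure caption above; the module names and their encode/decode interfaces are our assumptions, and the training losses are omitted.

import torch

def transfer(x1, vae1, bridge_enc, bridge_dec, gen2, src_domain=1, tgt_domain=2):
    """Transfer a sample x1 from domain 1 to domain 2 through the shared latent space."""
    with torch.no_grad():
        z1 = vae1.encode(x1)                          # latent code under the pre-trained source model
        z_shared = bridge_enc(z1, domain=src_domain)  # conditional encoder into the shared space
        z2 = bridge_dec(z_shared, domain=tgt_domain)  # conditional decoder into the target latent space
        x2 = gen2.decode(z2)                          # pre-trained target model (VAE decoder or GAN generator)
    return x2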
We have demonstrated an approach to learn mappings between disparate domains by bridging the latent codes of each domain with a shared autoencoder.
We find bridging VAEs are able to achieve high transfer accuracies, smoothly map interpolations between domains, and even connect different model types (VAEs and GANs).
Here, we have restricted ourselves to datasets with intuitive classlevel mappings for the purpose of quantitative comparisons, however, there are many interesting creative possibilities to apply these techniques between domains without a clear semantic alignment.
As a semi-supervised technique, we have shown bridging autoencoders to require less supervised labels, making it more feasible to learn personalized cross-modal domain transfer based on the creative guidance of individual users.
|
Conditional VAE on top of latent spaces of pre-trained generative models that enables transfer between drastically different domains while preserving locality and semantic alignment.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:60
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In order to mimic the human ability of continual acquisition and transfer of knowledge across various tasks, a learning system needs the capability for life-long learning, effectively utilizing the previously acquired skills.
As such, the key challenge is to transfer and generalize the knowledge learned from one task to other tasks, avoiding interference from previous knowledge and improving the overall performance.
In this paper, within the continual learning paradigm, we introduce a method that effectively forgets the less useful data samples continuously across different tasks.
The method uses statistical leverage score information to measure the importance of the data samples in every task and adopts a frequent directions approach to enable a life-long learning property.
This effectively maintains a constant training size across all tasks.
We first provide some mathematical intuition for the method and then demonstrate its effectiveness with experiments on variants of MNIST and CIFAR100 datasets.
It is a typical practice to design and optimize machine learning (ML) models to solve a single task.
On the other hand, humans, instead of learning over isolated complex tasks, are capable of generalizing and transferring knowledge and skills learned from one task to another.
This ability to remember, learn and transfer information across tasks is referred to as lifelong learning or continual learning BID16 BID3 BID11 .
The major challenge for creating ML models with lifelong learning ability is that they are prone to catastrophic forgetting BID9 BID10 .
ML models tend to forget the knowledge learned from previous tasks when re-trained on new observations corresponding to a different (but related) task.
Specifically when a deep neural network (DNN) is fed with a sequence of tasks, the ability to solve the first task will decline significantly after training on the following tasks.
The typical structure of DNNs by design does not possess the capability of preserving previously learned knowledge without interference between tasks or catastrophic forgetting.
There have been different approaches proposed to address this issue and they can be broadly categorized in three types: I) Regularization: It constrains or regularizes the model parameters by adding some terms in the loss function that prevent the model from deviating significantly from the parameters important to earlier tasks.
Typical algorithms include elastic weight consolidation (EWC) BID4 and continual learning through synaptic intelligence (SynInt) BID19 .
II) Architectural modification: It revises the model structure successively after each task in order to provide more memory and additional free parameters in the model for new task input.
Recent examples in this direction are progressive neural networks BID14 and dynamically expanding networks BID18 .
III) Memory replay: It stores data samples from previous tasks in a separate memory buffer and retrains the new model based on both the new task input and the memory buffer.
Popular algorithms here are gradient episodic memory (GEM) BID8 and incremental classifier and representation learning (iCaRL) BID12.

Among these approaches, regularization is particularly prone to saturation of learning when the number of tasks is large. The additional regularization term in the loss function soon loses its competency when important parameters from different tasks overlap too many times. Modifications of network architectures like progressive networks resolve the saturation issue, but do not scale as the number and complexity of tasks increase. The scalability problem is also present when using memory replay, which often suffers from high computational and memory costs.

In this paper, we propose a novel approach to lifelong learning with DNNs that addresses both the learning saturation and the high computational complexity issues. In this method, we progressively compress the input information learned thus far along with the input from the current task and form more efficiently condensed data samples. The compression technique is based on the statistical leverage score measure, and it uses the frequent directions idea in order to connect the series of compression steps for a sequence of tasks. Our approach resembles the use of memory replay since it preserves the original input data samples from earlier tasks for further training. However, our method does not require extra memory for training and is cost efficient compared to most memory replay methods. Furthermore, unlike the importance assigned to model-specific parameters in regularization methods like EWC or SynInt, we assign importance to the training data that is relevant for effectively learning new tasks, while forgetting less important information.
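A rough sketch of the two ingredients (our own illustration; the paper's exact sampling and sketch-update rules may differ): leverage scores from a thin SVD rank the rows of the current training matrix, and a frequent-directions-style shrinkage of singular values is applied before keeping a constant number of rows across tasks.

import numpy as np

def leverage_scores(X, k):
    """Statistical leverage scores of the rows of X w.r.t. its top-k left singular subspace."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return np.sum(U[:, :k] ** 2, axis=1)            # row norms of the top-k left singular vectors

def compress_tasks(task_data, budget, k=10):
    """Maintain a constant-size training matrix across a sequence of tasks."""
    kept = np.empty((0, task_data[0].shape[1]))
    for X_new in task_data:
        X = np.vstack([kept, X_new])
        # Frequent-directions-style shrinkage of the singular values before selecting rows.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.sqrt(np.maximum(s ** 2 - s[min(budget, len(s)) - 1] ** 2, 0.0))
        X_shrunk = U @ np.diag(s) @ Vt
        # Keep the `budget` rows with the largest leverage scores.
        scores = leverage_scores(X_shrunk, k)
        kept = X_shrunk[np.argsort(-scores)[:budget]]
    return kept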
We presented a new approach in addressing the lifelong learning problem with deep neural networks.
It is inspired by the randomization and compression techniques typically used in statistical analysis.
We combined a simple importance sampling technique, leverage score sampling, with the frequent directions concept and developed an online effective forgetting, or compression, mechanism that enables lifelong learning across a sequence of tasks.
Despite its simple structure, the results of the MNIST and CIFAR100 experiments show its effectiveness compared to the recent state of the art.
|
A new method uses statistical leverage score information to measure the importance of the data samples in every task and adopts frequent directions approach to enable a life-long learning property.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:600
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Convolutional neural networks (CNNs) are inherently equivariant to translation.
Efforts to embed other forms of equivariance have concentrated solely on rotation.
We expand the notion of equivariance in CNNs through the Polar Transformer Network (PTN).
PTN combines ideas from the Spatial Transformer Network (STN) and canonical coordinate representations.
The result is a network invariant to translation and equivariant to both rotation and scale.
PTN is trained end-to-end and composed of three distinct stages: a polar origin predictor, the newly introduced polar transformer module and a classifier.
PTN achieves state-of-the-art on rotated MNIST and the newly introduced SIM2MNIST dataset, an MNIST variation obtained by adding clutter and perturbing digits with translation, rotation and scaling.
The ideas of PTN are extensible to 3D which we demonstrate through the Cylindrical Transformer Network.
Whether at the global pattern or local feature level BID8 , the quest for (in/equi)variant representations is as old as the field of computer vision and pattern recognition itself.
State-of-the-art in "hand-crafted" approaches is typified by SIFT (Lowe, 2004) .
These detector/descriptors identify the intrinsic scale or rotation of a region BID19 BID1 and produce an equivariant descriptor which is normalized for scale and/or rotation invariance.
The burden of these methods is in the computation of the orbit (i.e., a sampling of the transformation space), which is necessary to achieve equivariance.
This motivated steerable filtering which guarantees transformed filter responses can be interpolated from a finite number of filter responses.
Steerability was proved for rotations of Gaussian derivatives BID6 and extended to scale and translations in the shiftable pyramid BID31 .
Use of the orbit and SVD to create a filter basis was proposed by BID26, and, in parallel, BID29 proved that for certain classes of transformations there exist canonical coordinates where deformations of the input present as translations of the output.
Following this work, BID25 and BID10 ; Teo & BID33 proposed a methodology for computing the bases of equivariant spaces given the Lie generators of a transformation.
and most recently, BID30 proposed the scattering transform, which offers representations invariant to translation, scaling, and rotations. The current consensus is that representations should be learned, not designed.
Equivariance to translations by convolution and invariance to local deformations by pooling are now textbook (BID17, p. 335), but approaches to equivariance of more general deformations are still maturing.
The main veins are: the Spatial Transformer Network (STN) BID13, which, similarly to SIFT, learns a canonical pose and produces an invariant representation through warping; work which constrains the structure of convolutional filters BID36; and work which uses the filter orbit BID3 to enforce equivariance to a specific transformation group. In this paper, we propose the Polar Transformer Network (PTN), which combines the ideas of STN and canonical coordinate representations to achieve equivariance to translations, rotations, and dilations.
The three-stage network learns to identify the object center and then transforms the input into log-polar coordinates.
In this coordinate system, planar convolutions correspond to group-convolutions in rotation and scale.
PTN produces a representation equivariant to rotations and dilations without the challenging parameter regression of STN (code: http://github.com/daniilidis-group//polar-transformer-networks).
Figure 1: In the log-polar representation, rotations around the origin become vertical shifts, and dilations around the origin become horizontal shifts. The distance between the yellow and green lines is proportional to the rotation angle/scale factor. Top rows: a sequence of rotations and the corresponding polar images. Bottom rows: a sequence of dilations and the corresponding polar images.
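The property illustrated in Figure 1, that rotation and dilation of the input become shifts of the log-polar image, can be checked with a few lines of NumPy. The sketch below is only an illustrative nearest-neighbour resampler, not the paper's differentiable polar transformer module; the output resolution, the fixed image centre and the sampling scheme are arbitrary choices.

```python
import numpy as np

def log_polar(img, out_shape=(64, 64)):
    """Resample a square grayscale image onto a log-polar grid centred on the image centre.
    Rows index angle and columns index log-radius, so a rotation of the input becomes a
    circular vertical shift of the output and a dilation becomes a horizontal shift."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    n_theta, n_r = out_shape
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    log_rs = np.linspace(0.0, np.log(min(cy, cx)), n_r)
    rs = np.exp(log_rs)
    ys = cy + rs[None, :] * np.sin(thetas[:, None])
    xs = cx + rs[None, :] * np.cos(thetas[:, None])
    ys = np.clip(np.round(ys).astype(int), 0, h - 1)  # nearest-neighbour sampling
    xs = np.clip(np.round(xs).astype(int), 0, w - 1)
    return img[ys, xs]
```

Rotating the input about its centre before calling log_polar circularly shifts the rows of the output, and scaling it shifts the columns, which is what allows ordinary planar convolutions on the output to act as group convolutions in rotation and scale.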
We enlarge the notion of equivariance in CNNs beyond Harmonic Networks BID36 and Group Convolutions BID3 by capturing both rotations and dilations of arbitrary precision.
Similar to STN, however, PTN accommodates only global deformations. We present state-of-the-art performance on rotated MNIST and SIM2MNIST, which we introduce.
To summarize our contributions:
• We develop a CNN architecture capable of learning an image representation invariant to translation and equivariant to rotation and dilation.
• We propose the polar transformer module, which performs a differentiable log-polar transform amenable to backpropagation training. The transform origin is a latent variable.
• We show how the polar transform origin can be learned effectively as the centroid of a single-channel heatmap predicted by a fully convolutional network.
We have proposed a novel network whose output is invariant to translations and equivariant to the group of dilations/rotations.
We have combined the idea of learning the translation (similar to the spatial transformer) with providing equivariance for scaling and rotation, thus avoiding the fully connected layers required for the pose regression in the spatial transformer.
Equivariance with respect to dilated rotations is achieved by convolution in this group.
Such a convolution would normally require the production of multiple group copies; however, we avoid this by transforming into canonical coordinates.
We improve the state-of-the-art performance on rotated MNIST by a large margin, and outperform all other tested methods on a new dataset we call SIM2MNIST.
We expect our approach to be applicable to other problems, where the presence of different orientations and scales hinder the performance of conventional CNNs.
|
We learn feature maps invariant to translation, and equivariant to rotation and scale.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:601
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Meta-learning allows an intelligent agent to leverage prior learning episodes as a basis for quickly improving performance on a novel task.
Bayesian hierarchical modeling provides a theoretical framework for formalizing meta-learning as inference for a set of parameters that are shared across tasks.
Here, we reformulate the model-agnostic meta-learning algorithm (MAML) of Finn et al. (2017) as a method for probabilistic inference in a hierarchical Bayesian model.
In contrast to prior methods for meta-learning via hierarchical Bayes, MAML is naturally applicable to complex function approximators through its use of a scalable gradient descent procedure for posterior inference.
Furthermore, the identification of MAML as hierarchical Bayes provides a way to understand the algorithm’s operation as a meta-learning procedure, as well as an opportunity to make use of computational strategies for efficient inference.
We use this opportunity to propose an improvement to the MAML algorithm that makes use of techniques from approximate inference and curvature estimation.
A remarkable aspect of human intelligence is the ability to quickly solve a novel problem and to be able to do so even in the face of limited experience in a novel domain.
Such fast adaptation is made possible by leveraging prior learning experience in order to improve the efficiency of later learning.
This capacity for meta-learning also has the potential to enable an artificially intelligent agent to learn more efficiently in situations with little available data or limited computational resources BID45 BID4 BID37 .
In machine learning, meta-learning is formulated as the extraction of domain-general information that can act as an inductive bias to improve learning efficiency in novel tasks (Caruana, 1998; BID52). This inductive bias has been implemented in various ways: as learned hyperparameters in a hierarchical Bayesian model that regularize task-specific parameters BID18 , as a learned metric space in which to group neighbors BID7 , as a trained recurrent neural network that allows encoding and retrieval of episodic information BID43 , or as an optimization algorithm with learned parameters BID45 BID3 .
The model-agnostic meta-learning (MAML) of BID12 is an instance of a learned optimization procedure that directly optimizes the standard gradient descent rule. The algorithm estimates an initial parameter set to be shared among the task-specific models; the intuition is that gradient descent from the learned initialization provides a favorable inductive bias for fast adaptation. However, this inductive bias has been evaluated only empirically in prior work BID12 .
In this work, we present a novel derivation of and a novel extension to MAML, illustrating that this algorithm can be understood as inference for the parameters of a prior distribution in a hierarchical Bayesian model. The learned prior allows for quick adaptation to unseen tasks on the basis of an implicit predictive density over task-specific parameters. The reinterpretation as hierarchical Bayes gives a principled statistical motivation for MAML as a meta-learning algorithm, and sheds light on the reasons for its favorable performance even among methods with significantly more parameters. More importantly, by casting gradient-based meta-learning within a Bayesian framework, we are able to improve MAML by taking insights from Bayesian posterior estimation as novel augmentations to the gradient-based meta-learning procedure. We experimentally demonstrate that this enables better performance on a few-shot learning benchmark.
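As a concrete picture of the shared-initialization idea described above, here is a deliberately tiny, scalar MAML-style loop on synthetic one-parameter tasks. The task family, the step sizes and the closed-form meta-gradient are assumptions of this toy example only, and none of the hierarchical-Bayes or Laplace/K-FAC machinery of the paper appears here.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.0               # shared initialization (the prior's location in the hierarchical-Bayes view)
alpha, beta = 0.1, 0.01   # inner (task) and outer (meta) step sizes, chosen arbitrarily

def grad(theta, a):
    # per-task loss f_a(theta) = 0.5 * (theta - a) ** 2, so its gradient is (theta - a)
    return theta - a

for step in range(5000):
    a = rng.normal(5.0, 1.0)                          # sample a task: match the scalar target a
    theta_task = theta - alpha * grad(theta, a)       # inner adaptation from the shared init
    meta_grad = (1.0 - alpha) * grad(theta_task, a)   # d f_a(theta_task) / d theta via the chain rule
    theta -= beta * meta_grad                         # outer (meta) update of the initialization

print(theta)  # drifts towards the mean of the task distribution (around 5.0)
```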
We have shown that model-agnostic meta-learning (MAML) estimates the parameters of a prior in a hierarchical Bayesian model.
By casting gradient-based meta-learning within a Bayesian framework, our analysis opens the door to novel improvements inspired by probabilistic machinery. As a step in this direction, we propose an extension to MAML that employs a Laplace approximation to the posterior distribution over task-specific parameters.
This technique provides a more accurate estimate of the integral that, in the original MAML algorithm, is approximated via a point estimate.
We show how to estimate the quantity required by the Laplace approximation using Kronecker-factored approximate curvature (K-FAC), a method recently proposed to approximate the quadratic curvature of a neural network objective for the purpose of a second-order gradient descent technique. Our contribution illuminates the road to exploring further connections between gradient-based meta-learning methods and hierarchical Bayesian modeling.
For instance, in this work we assume that the predictive distribution over new data points is narrow and well approximated by a point estimate.
We may instead employ methods that make use of the variance of the distribution over task-specific parameters in order to model the predictive density over examples from a novel task. Furthermore, it is known that the Laplace approximation is inaccurate in cases where the integral is highly skewed, or is not unimodal and thus is not amenable to approximation by a single Gaussian mode.
This could be solved by using a finite mixture of Gaussians, which can approximate many density functions arbitrarily well BID49 BID0 .
The exploration of additional improvements such as this is an exciting line of future work.
|
A specific gradient-based meta-learning algorithm, MAML, is equivalent to an inference procedure in a hierarchical Bayesian model. We use this connection to improve MAML via methods from approximate inference and curvature estimation.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:602
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
This work provides an automatic machine learning (AutoML) modelling architecture called Autostacker.
Autostacker improves the prediction accuracy of machine learning baselines by utilizing an innovative hierarchical stacking architecture and an efficient parameter search algorithm.
Neither prior domain knowledge about the data nor feature preprocessing is needed.
We significantly reduce the time of AutoML with a naturally inspired algorithm - Parallel Hill Climbing (PHC).
By parallelizing PHC, Autostacker can provide candidate pipelines with sufficient prediction accuracy within a short amount of time.
These pipelines can be used as is or as a starting point for human experts to build on.
By focusing on the modelling process, Autostacker breaks the tradition of following fixed order pipelines by exploring not only single model pipeline but also innovative combinations and structures.
As we will show in the experiment section, Autostacker achieves significantly better performance, both in terms of test accuracy and time cost, compared with human initial trials and a recent popular AutoML system.
Machine Learning nowadays is the main approach for people to solve prediction problems by utilizing the power of data and algorithms.
More and more models have been proposed to solve diverse problems based on the character of these problems.
More specifically, different learning targets and collected data correspond to different modelling problems.
To solve them, data scientists not only need to know the advantages and disadvantages of various models, they also need to manually tune the hyperparameters within these models.
However, understanding thoroughly all of the models and running experiments to tune the hyperparameters involves a lot of effort and cost.
Thus, automating the modelling procedure is highly desired both in academia and in industry. An AutoML system aims at providing an automatically generated baseline with better performance to support data scientists and experts with specific domain knowledge in solving machine learning problems with less effort.
The input to AutoML is a cleanly formatted dataset and the output is one or multiple modelling pipelines which enables the data scientists to begin working from a better starting point.
There are some pioneering efforts addressing the challenge of finding appropriate configurations of modelling pipelines and providing some mechanisms to automate this process.
However, these works often rely on fixed order machine learning pipelines which are obtained by mimicking the traditional working pipelines of human experts.
This initial constraint limits the potential of the machine to find better pipelines, which may or may not be straightforward and may or may not have been tried by human experts before. In this work, we present an architecture called Autostacker, which borrows the stacking method (Wolpert (1992); BID1) from ensemble learning but allows for the discovery of pipelines made up of a single model or of many models combined in an innovative way.
All of the automatically generated pipelines from Autostacker will provide a good enough starting point compared with initial trials of human experts.
However, there are several challenges to accomplish this:
• The quality of the datasets. Even though we are stepping into a big data era, we have to admit that there are still a lot of problems for which it is hard to collect enough data, especially data with little noise, such as historical events, medical research, natural disasters and so on. We tackle this challenge by always using the raw dataset in all of the stacking layers, while also adding synthetic features in each stacking layer to fully use the information in the current dataset. More details are provided in the Approach section below.
Figure 1: This figure describes the pipeline architecture of Autostacker. Autostacker pipelines consist of one or multiple layers and one or multiple nodes inside each layer. Each node represents a machine learning primitive model, such as SVM, MLP, etc. The number of layers and the number of nodes per layer can be specified beforehand, or they can be changeable as part of the hyperparameters. In the first layer, the raw dataset is used as input. Then, in the following layers, the prediction results from each node are added to the raw dataset as synthetic features (new colors). The new dataset generated by each layer is used as input to the next layer.
• The generalization ability of the AutoML framework. As mentioned above, existing AutoML frameworks only allow systems to generate an assembly line from data preprocessing and feature engineering to model selection, where only a specific single model will be utilized by plugging in a previous model library. In this paper, depending on the computational cost and time cost, we make the number of such primitive models a variable which can be changed dynamically during the pipeline generation process or initialized in the beginning. This means that the simplest pipeline could be a single model, and the most complex pipeline could contain hundreds of primitive models, as shown in Figure 1.
• The large space of variables. The second challenge mentioned above leads to this problem naturally. Considering the whole AutoML framework, the variables include the type of primitive machine learning models, the configuration settings of the framework (for instance, the number of primitive models in each stacking layer) and the hyperparameters in each primitive model. One way to address this issue is to treat it as an optimization problem BID3 . Here, we instead treat this challenge as a search problem and propose to use a naturally inspired algorithm, Parallel Hill Climbing (PHC) BID10 , to effectively search for appropriate candidate pipelines.
To make the definition of the problem clear, we will use the terminology listed below throughout this paper:
• Primitive and Pipeline: a primitive denotes an existing single machine learning model, for example a DecisionTree; these also include traditional ensemble learning models, such as AdaBoost and Bagging. A pipeline is the form of the output of Autostacker, which is a single primitive or a combination of primitives.
• Layer and Node: Figure 1 shows the architecture of Autostacker, which is formed by multiple stacking layers with multiple nodes in each layer. Each node represents a machine learning primitive model.
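As a concrete illustration of the layered pipeline of Figure 1, the sketch below assembles a two-layer stack from scikit-learn primitives, appending each node's predicted probabilities to the raw features before fitting the next layer. The particular primitives, dataset, layer width and depth are arbitrary illustrative choices, and the Parallel Hill Climbing search over these hyperparameters is not shown.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One candidate pipeline: two nodes in the first layer, a single node in the last layer.
layers = [
    [DecisionTreeClassifier(max_depth=3, random_state=0), SVC(probability=True, random_state=0)],
    [LogisticRegression(max_iter=5000)],
]

def run_pipeline(layers, X_tr, y_tr, X_te):
    cur_tr, cur_te = X_tr, X_te
    for layer in layers[:-1]:
        new_tr, new_te = [X_tr], [X_te]        # each layer starts again from the raw dataset
        for node in layer:
            node.fit(cur_tr, y_tr)
            new_tr.append(node.predict_proba(cur_tr))  # predictions become synthetic features
            new_te.append(node.predict_proba(cur_te))
        cur_tr, cur_te = np.hstack(new_tr), np.hstack(new_te)
    final = layers[-1][0]
    final.fit(cur_tr, y_tr)
    return final.predict(cur_te)

pred = run_pipeline(layers, X_tr, y_tr, X_te)
print("test accuracy:", (pred == y_te).mean())
```

For brevity each node predicts on the same data it was fit on; a more careful variant would use out-of-fold predictions for the synthetic features.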
During the experiments and research process, we noticed that Autostacker still has several limitations.
Here we describe these limitations and possible future solutions:
• The ability to automate the machine learning process for large-scale datasets is limited. Nowadays, there are more sophisticated models and deep learning approaches which achieve very good results on large-scale datasets and multi-task problems. Our current primitive library and modelling structure is very limited at solving these problems. One future solution could be to incorporate more advanced primitives and to choose to use them when necessary.
• Autostacker can be made more efficient with better search algorithms. There are many modern evolutionary algorithms, some of them based on the Parallel Hill Climber that we use in this work. We believe that Autostacker could be made faster by incorporating them. We also believe that traditional methods and knowledge from statistics and probability will be very helpful to better understand the output of Autostacker, such as by answering questions like: why was a particular pipeline chosen as one of the final candidate pipelines?
In this work, we contribute to automating the machine learning modelling process by proposing Autostacker, a machine learning system with an innovative architecture for automatic modelling and a well-behaved efficient search algorithm.
We show how this system works and what its performance is, compared with human initial trials and related state-of-the-art techniques.
We also demonstrate the scaling and parallelization ability of our system.
In conclusion, we automate the machine learning modelling process by providing an efficient, flexible and well-behaved system which provides the potential to be generalized into complicated problems and is able to be integrated with data and feature processing modules.
|
Automate machine learning system with efficient search algorithm and innovative structure to provide better model baselines.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:603
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Surrogate models can be used to accelerate approximate Bayesian computation (ABC).
In one such framework the discrepancy between simulated and observed data is modelled with a Gaussian process.
So far principled strategies have been proposed only for sequential selection of the simulation locations.
To address this limitation, we develop Bayesian optimal design strategies to parallelise the expensive simulations.
We also touch on the problem of quantifying the uncertainty of the ABC posterior due to the limited budget of simulations.
Approximate Bayesian computation (Marin et al., 2012; Lintusaari et al., 2017 ) is used for Bayesian inference when the analytic form of the likelihood function of a statistical model of interest is either unavailable or too costly to evaluate, but simulating the model is feasible.
Unfortunately, many models e.g. in genomics and epidemiology (Numminen et al., 2013; Marttinen et al., 2015; McKinley et al., 2018) and climate science (Holden et al., 2018) are costly to simulate making sampling-based ABC inference algorithms infeasible.
To increase the sample-efficiency of ABC, various methods using surrogate models such as neural networks (Papamakarios and Murray, 2016; Papamakarios et al., 2019; Lueckmann et al., 2019; Greenberg et al., 2019) and Gaussian processes (Meeds and Welling, 2014; Wilkinson, 2014; Gutmann and Corander, 2016; Järvenpää et al., 2018; 2019a) have been proposed.
In one promising surrogate-based ABC framework the discrepancy between the observed and simulated data is modelled with a Gaussian process (GP) (Gutmann and Corander, 2016; Järvenpää et al., 2018; 2019a).
Sequential Bayesian experimental design (or active learning) methods to select the simulation locations so as to maximise the sample-efficiency in this framework were proposed by Järvenpää et al. (2019a) .
However, one often has access to multiple computers to run some of the simulations in parallel.
In this work, motivated by the related problem of batch Bayesian optimisation (Ginsbourger et al., 2010; Desautels et al., 2014; Shah and Ghahramani, 2015; Wu and Frazier, 2016) and the parallel GP-based method by Järvenpää et al. (2019b) for inference tasks where noisy and potentially expensive log-likelihood evaluations can be obtained, we resolve this limitation by developing principled batch simulation methods which considerably decrease the wall-time needed for ABC inference.
The posterior distribution is often summarised for further decision making using e.g. expectation and variance.
When the computational resources for ABC inference are limited, it would be important to assess the accuracy of such summaries, but this has not been explicitly acknowledged in earlier work.
We devise an approximate numerical method to propagate the uncertainty of the discrepancy, represented by the GP model, to the resulting ABC posterior summaries.
We call the resulting framework Bayesian ABC, in analogy with the related problems of Bayesian quadrature (O'Hagan, 1991; Osborne et al., 2012; Briol et al., 2019) and Bayesian optimisation (BO) (Brochu et al., 2010; Shahriari et al., 2015).
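For readers less familiar with the GP-surrogate ABC framework being extended, the following one-parameter toy sketch fits a GP to a handful of discrepancy evaluations and converts it into a model-based ABC posterior. The simulator, kernel, design points and tolerance eps are arbitrary illustrative choices, and the paper's actual contributions, batch (parallel) design of the simulation locations and uncertainty quantification of the posterior summaries, are not implemented here.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
y_obs = rng.normal(2.0, 1.0, size=50)            # "observed" data with unknown mean theta

def discrepancy(theta):
    # expensive simulator stand-in: simulate data and compare summary statistics
    y_sim = rng.normal(theta, 1.0, size=50)
    return abs(y_sim.mean() - y_obs.mean())

# evaluate the simulator at a handful of design points and fit a GP surrogate to the discrepancy
thetas = np.linspace(-2.0, 6.0, 15)
deltas = np.array([discrepancy(t) for t in thetas])
gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.1), normalize_y=True)
gp.fit(thetas[:, None], deltas)

# model-based ABC posterior on a grid: uniform prior times P(discrepancy < eps) under the GP
grid = np.linspace(-2.0, 6.0, 200)
mu, sd = gp.predict(grid[:, None], return_std=True)
eps = 0.2
unnorm = norm.cdf((eps - mu) / sd)
posterior = unnorm / np.trapz(unnorm, grid)
```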
|
We propose principled batch Bayesian experimental design strategies and a method for uncertainty quantification of the posterior summaries in a Gaussian process surrogate-based approximate Bayesian computation framework.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:604
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Randomly initialized first-order optimization algorithms are the method of choice for solving many high-dimensional nonconvex problems in machine learning, yet general theoretical guarantees cannot rule out convergence to critical points of poor objective value.
For some highly structured nonconvex problems however, the success of gradient descent can be understood by studying the geometry of the objective.
We study one such problem -- complete orthogonal dictionary learning -- and provide convergence guarantees for randomly initialized gradient descent to the neighborhood of a global optimum.
The resulting rates scale as low order polynomials in the dimension even though the objective possesses an exponential number of saddle points.
This efficient convergence can be viewed as a consequence of negative curvature normal to the stable manifolds associated with saddle points, and we provide evidence that this feature is shared by other nonconvex problems of importance as well.
Many central problems in machine learning and signal processing are most naturally formulated as optimization problems.
These problems are often both nonconvex and high-dimensional.
High dimensionality makes the evaluation of second-order information prohibitively expensive, and thus randomly initialized first-order methods are usually employed instead.
This has prompted great interest in recent years in understanding the behavior of gradient descent on nonconvex objectives (18; 14; 17; 11) .
General analysis of first- and second-order methods on such problems can provide guarantees for convergence to critical points, but these may be highly suboptimal, since nonconvex optimization is in general an NP-hard problem BID3 .
Outside of a convex setting (28) one must assume additional structure in order to make statements about convergence to optimal or high quality solutions.
It is a curious fact that for certain classes of problems such as ones that involve sparsification (25; 6) or matrix/tensor recovery (21; 19; 1) first-order methods can be used effectively.
Even for some highly nonconvex problems where there is no ground truth available, such as the training of neural networks, first-order methods converge to high-quality solutions (40).
Dictionary learning is a problem of inferring a sparse representation of data that was originally developed in the neuroscience literature (30), and has since seen a number of important applications including image denoising, compressive signal acquisition and signal classification (13; 26). In this work we study a formulation of the dictionary learning problem that can be solved efficiently using randomly initialized gradient descent despite possessing a number of saddle points exponential in the dimension. A feature that appears to enable efficient optimization is the existence of sufficient negative curvature in the directions normal to the stable manifolds of all critical points that are not global minima BID0 . This property ensures that the regions of the space that feed into small gradient regions under gradient flow do not dominate the parameter space. Figure 1 illustrates the value of this property: negative curvature prevents measure from concentrating about the stable manifold. As a consequence, randomly initialized gradient methods avoid the "slow region" around the saddle point.
Figure 1: Negative curvature helps gradient descent. Red: "slow region" of small gradient around a saddle point. Green: stable manifold associated with the saddle point. Black: points that flow to the slow region. Left: global negative curvature normal to the stable manifold. Right: positive curvature normal to the stable manifold; randomly initialized gradient descent is more likely to encounter the slow region.
The main result of this work is a convergence rate for randomly initialized gradient descent for complete orthogonal dictionary learning to the neighborhood of a global minimum of the objective. Our results are probabilistic since they rely on initialization in certain regions of the parameter space, yet they allow one to flexibly trade off between the maximal number of iterations in the bound and the probability of the bound holding. While our focus is on dictionary learning, it has recently been shown that for other important nonconvex problems such as phase retrieval BID7 , performance guarantees for randomly initialized gradient descent can be obtained as well. In fact, in Appendix C we show that negative curvature normal to the stable manifolds of saddle points (illustrated in Figure 1) is also a feature of the population objective of generalized phase retrieval, and can be used to obtain an efficient convergence rate.
The above analysis suggests that second-order properties -namely negative curvature normal to the stable manifolds of saddle points -play an important role in the success of randomly initialized gradient descent in the solution of complete orthogonal dictionary learning.
This was done by furnishing a convergence rate guarantee that holds when the random initialization is not in regions that feed into small gradient regions around saddle points, and bounding the probability of such an initialization.
In Appendix C we provide an additional example of a nonconvex problem for which an efficient rate can be obtained based on an analysis that relies on negative curvature normal to the stable manifolds of saddles: generalized phase retrieval.
An interesting direction of further work is to more precisely characterize the class of functions that share this feature. The effect of curvature can be seen in the dependence of the maximal number of iterations T on the parameter ζ 0 .
This parameter controlled the volume of regions where initialization would lead to slow progress and the failure probability of the bound 1 − P was linear in ζ 0 , while T depended logarithmically on ζ 0 .
This logarithmic dependence is due to a geometric increase in the distance from the stable manifolds of the saddles during gradient descent, which is a consequence of negative curvature.
Note that the choice of ζ 0 allows one to flexibly trade off between T and 1 − P. By decreasing ζ 0 , the bound holds with higher probability, at the price of an increase in T .
This is because the volume of acceptable initializations now contains regions of smaller minimal gradient norm.
In a sense, the result is an extrapolation of works such as (23), which analyze the ζ 0 = 0 case, to finite ζ 0 .
Our analysis uses precise knowledge of the location of the stable manifolds of saddle points. For less symmetric problems, including variants of sparse blind deconvolution (41) and overcomplete tensor decomposition, there is no closed-form expression for the stable manifolds. However, it is still possible to coarsely localize them in regions containing negative curvature. Understanding the implications of this geometric structure for randomly initialized first-order methods is an important direction for future work.
One may hope that studying simple model problems and identifying structures (here, negative curvature orthogonal to the stable manifold) that enable efficient optimization will inspire approaches to broader classes of problems. One problem of obvious interest is the training of deep neural networks for classification, which shares certain high-level features with the problems discussed in this paper. The objective
is also highly nonconvex and is conjectured to contain a proliferation of saddle points BID10 , yet these appear to be avoided by first-order methods BID15 for reasons that are still quite poorly understood beyond the two-layer case (39).
The appendix provides proof sketches of these results, characterizing the critical points and stable manifolds of the separable objective, bounding its gradient projection (Lemma 2), and establishing the convergence rate of randomly initialized Riemannian gradient descent (Theorem 1); the probability of a suitable initialization is obtained from Lemma 3 by summing over the 2n sets C ζ0 , one for each global minimizer (each corresponding to a single signed basis vector).
|
We provide an efficient convergence rate for gradient descent on the complete orthogonal dictionary learning objective based on a geometric analysis.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:605
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Coding theory is a central discipline underpinning wireline and wireless modems that are the workhorses of the information age.
Progress in coding theory is largely driven by individual human ingenuity with sporadic breakthroughs over the past century.
In this paper we study whether it is possible to automate the discovery of decoding algorithms via deep learning.
We study a family of sequential codes parametrized by recurrent neural network (RNN) architectures.
We show that creatively designed and trained RNN architectures can decode well known sequential codes such as the convolutional and turbo codes with close to optimal performance on the additive white Gaussian noise (AWGN) channel, which itself is achieved by breakthrough algorithms of our times (Viterbi and BCJR decoders, representing dynamic programming and forward-backward algorithms).
We show strong generalizations, i.e., we train at a specific signal-to-noise ratio and block length but test at a wide range of these quantities, as well as robustness and adaptivity to deviations from the AWGN setting.
Reliable digital communication, both wireline (ethernet, cable and DSL modems) and wireless (cellular, satellite, deep space), is a primary workhorse of the modern information age.
A critical aspect of reliable communication involves the design of codes that allow transmissions to be robustly (and computationally efficiently) decoded under noisy conditions.
This is the discipline of coding theory; over the past century and especially the past 70 years (since the birth of information theory BID22 ) much progress has been made in the design of near optimal codes.
Landmark codes include convolutional codes, turbo codes, low density parity check (LDPC) codes and, recently, polar codes.
The impact on humanity is enormous: every cellular phone designed uses one of these codes, which feature in global cellular standards ranging from the 2nd generation to the 5th generation respectively, and are textbook material BID16 . The
canonical setting is one of point-to-point reliable communication over the additive white Gaussian noise (AWGN) channel and performance of a code in this setting is its gold standard. The
AWGN channel fits much of wireline and wireless communications, although the front end of the receiver may have to be specifically designed before the signal is processed by the decoder (examples: intersymbol equalization in cable modems, beamforming and sphere decoding in multiple-antenna wireless systems); again, this is textbook material BID26 . There are two long-term goals in coding theory: (a) the design of new, computationally efficient codes that improve the state of the art (probability of correct reception) over the AWGN setting; since the current codes already operate close to the information-theoretic "Shannon limit", the emphasis is on robustness and adaptability to deviations from the AWGN setting (a list of channel models motivated by practical settings, such as urban, pedestrian and vehicular, in the recent 5th generation cellular standard is available in Annex B of 3GPP TS 36.101); and (b) the design of new codes for multi-terminal (i.e., beyond point-to-point) settings, with examples including the feedback channel, the relay channel and the interference channel. Progress over these long-term goals has generally been driven by individual human ingenuity and, befittingly, is sporadic. For instance
, the time duration between convolutional codes (2nd generation cellular standards) to polar codes (5th generation cellular standards) is over 4 decades. Deep learning
is fast emerging as capable of learning sophisticated algorithms from observed data (input, action, output) alone and has been remarkably successful in a large variety of human endeavors (ranging from language BID11 to vision BID17 to playing Go BID23 ). Motivated by
these successes, we envision that deep learning methods can play a crucial role in achieving both of the aforementioned goals of coding theory. While the learning framework is clear and there is virtually unlimited training data available, there are two main challenges: (a) the space of codes is very vast and the sizes astronomical; for instance, a rate-1/2 code over 100 information bits involves designing 2^100 codewords in a 200-dimensional space, and computationally efficient encoding and decoding procedures are a must, apart from high reliability over the AWGN channel; (b) generalization is highly desirable across block lengths and data rates, with each code working well over a wide range of channel signal-to-noise ratios (SNR). In other words, one is looking to design a family of codes (parametrized by data rate and number of information bits) whose performance is evaluated over a range of channel SNRs. For example, it has been shown that when a neural decoder is exposed to nearly 90% of the codewords of a rate-1/2 polar code over 8 information bits, its performance on the unseen codewords is poor. In part due to these challenges, recent deep learning works on decoding known codes using data-driven neural decoders have been limited to short or moderate block lengths BID4 BID13 . Other deep learning
works on coding theory focus on decoding known codes by training a neural decoder that is initialized with the existing decoding algorithm but is more general than the existing algorithm BID12 BID29 . The main challenge
is to restrict oneself to a class of codes that neural networks can naturally encode and decode. In this paper, we
restrict ourselves to a class of sequential encoding and decoding schemes, of which convolutional and turbo codes are part of. These sequential
coding schemes naturally meld with the family of recurrent neural network (RNN) architectures, which have recently seen large success in a wide variety of time-series tasks. The ancillary advantage
of sequential schemes is that arbitrarily long information bit sequences can be encoded, and at a large variety of coding rates. Working within sequential codes parametrized by RNN architectures, we make the following contributions.
(1) Focusing on convolutional codes, we aim to decode them on the AWGN channel using RNN architectures. Efficient optimal decoding of convolutional codes has represented historically fundamental progress in the broad arena of algorithms; optimal bit error decoding is achieved by the 'Viterbi decoder' BID27 , which is simply dynamic programming or Dijkstra's algorithm on a specific graph (the 'trellis') induced by the convolutional code, while optimal block error decoding is achieved by the BCJR decoder BID0 , which is part of a family of forward-backward algorithms (a toy Viterbi decoder is sketched below for illustration). While early work had shown that vanilla RNNs are in principle capable of emulating both Viterbi and BCJR decoders BID28 BID21 , we show empirically, through a careful construction of RNN architectures and training methodology, that neural network decoding is possible at very near-optimal performance (both bit error rate (BER) and block error rate (BLER)). The key point is that we train an RNN decoder at a specific SNR and over short information bit lengths (100 bits) and show strong generalization capabilities by testing over a wide range of SNRs and block lengths (up to 10,000 bits). The specific training SNR is closely related to the Shannon limit of the AWGN channel at the rate of the code and provides strong information-theoretic collateral to our empirical results.
(2) Turbo codes are naturally built on top of convolutional codes, both in terms of encoding and decoding. A natural generalization of our RNN convolutional decoders allows us to decode turbo codes at BER comparable to, and in certain regimes even better than, state-of-the-art turbo decoders on the AWGN channel. That data-driven, SGD-learnt RNN architectures can decode comparably is fairly remarkable, since turbo codes already operate near the Shannon limit of reliable communication over the AWGN channel.
(3) We show that the afore-described neural network decoders for both convolutional and turbo codes are robust to variations of the AWGN channel model. We consider a problem of contemporary interest: communication over a "bursty" AWGN channel (where a small fraction of the noise has much higher variance than usual), which models inter-cell interference in OFDM cellular systems (used in 4G and 5G cellular standards) or co-channel radar interference. We demonstrate empirically that the neural network architectures can adapt to such variations and beat state-of-the-art heuristics comfortably (despite evidence elsewhere that neural networks are sensitive to the models they are trained on BID24 ). Via an innovative local perturbation analysis (akin to BID15 ), we demonstrate the neural network to have learnt sophisticated preprocessing heuristics used in engineering of real world systems BID10 .
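To make the 'dynamic programming on the trellis' description concrete, here is a toy hard-decision Viterbi decoder for a rate-1/2, constraint-length-3 convolutional code (generator polynomials (7, 5) in octal, chosen purely for illustration). It works on hard bits, whereas the decoders discussed in the paper consume soft AWGN channel values, and it ignores termination details.

```python
G = [(1, 1, 1), (1, 0, 1)]  # generator polynomials (7, 5)_8, constraint length 3

def encode(bits):
    """Rate-1/2 convolutional encoder; the state holds the last two input bits."""
    state, out = (0, 0), []
    for b in bits:
        window = (b,) + state
        out += [sum(w * g for w, g in zip(window, gen)) % 2 for gen in G]
        state = (b, state[0])
    return out

def viterbi(received):
    """Hard-decision Viterbi decoding: dynamic programming over the 4-state trellis."""
    n_states, INF = 4, float("inf")
    metric = [0.0] + [INF] * (n_states - 1)        # the encoder starts in the all-zero state
    paths = [[] for _ in range(n_states)]
    for t in range(0, len(received), 2):
        r = received[t:t + 2]
        new_metric, new_paths = [INF] * n_states, [None] * n_states
        for s in range(n_states):                  # s encodes (previous bit, bit before that)
            if metric[s] == INF:
                continue
            s1, s0 = s >> 1, s & 1
            for b in (0, 1):                       # hypothesised input bit
                expect = [sum(w * g for w, g in zip((b, s1, s0), gen)) % 2 for gen in G]
                cost = metric[s] + sum(e != x for e, x in zip(expect, r))  # Hamming branch metric
                nxt = (b << 1) | s1
                if cost < new_metric[nxt]:
                    new_metric[nxt], new_paths[nxt] = cost, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(n_states), key=lambda s: metric[s])]

msg = [1, 0, 1, 1, 0, 0, 1]
print(viterbi(encode(msg)) == msg)  # True on a noiseless channel
```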
In this paper we have demonstrated that appropriately designed and trained RNN architectures can 'learn' the landmark algorithms of Viterbi and BCJR decoding based on the strong generalization capabilities we demonstrate.
This is similar in spirit to recent works on 'program learning' in the literature BID14 BID2 .
In those works, the learning is assisted significantly by a low level program trace on an input; here we learn the Viterbi and BCJR algorithms only by end-to-end training samples; we conjecture that this could be related to the strong "algebraic" nature of the Viterbi and BCJR algorithms.
The representation capabilities and learnability of the RNN architectures in decoding existing codes suggest the possibility that new codes could be learnt on the AWGN channel itself, improving the state of the art (constituted by turbo, LDPC and polar codes).
Also interesting is a new look at classical multi-terminal communication problems, including the relay and interference channels.
Both are active areas of present research.
|
We show that creatively designed and trained RNN architectures can decode well known sequential codes and achieve close to optimal performances.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:606
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Adam has been shown to fail to converge to the optimal solution in certain cases.
Researchers have recently proposed several algorithms to avoid the non-convergence issue of Adam, but their efficiency turns out to be unsatisfactory in practice.
In this paper, we provide a new insight into the non-convergence issue of Adam as well as other adaptive learning rate methods.
We argue that there exists an inappropriate correlation between gradient $g_t$ and the second moment term $v_t$ in Adam ($t$ is the timestep), which results in that a large gradient is likely to have small step size while a small gradient may have a large step size.
We demonstrate that such unbalanced step sizes are the fundamental cause of non-convergence of Adam, and we further prove that decorrelating $v_t$ and $g_t$ will lead to unbiased step size for each gradient, thus solving the non-convergence problem of Adam.
Finally, we propose AdaShift, a novel adaptive learning rate method that decorrelates $v_t$ and $g_t$ by temporal shifting, i.e., using temporally shifted gradient $g_{t-n}$ to calculate $v_t$.
The experiment results demonstrate that AdaShift is able to address the non-convergence issue of Adam, while still maintaining a competitive performance with Adam in terms of both training speed and generalization.
First-order optimization algorithms with adaptive learning rate play an important role in deep learning due to their efficiency in solving large-scale optimization problems.
Denote g t ∈ R n as the gradient of the loss function f with respect to its parameters θ ∈ R n at timestep t; then the general updating rule of these algorithms can be written as follows (Reddi et al., 2018): θ t+1 = θ t − (α t / √ v t ) m t . In the above equation, m t = φ(g 1 , . . . , g t ) ∈ R n is a function of the historical gradients; v t = ψ(g 1 , . . . , g t ) ∈ R n + is an n-dimensional vector with non-negative elements, which adapts the learning rate for the n elements in g t respectively; α t is the base learning rate; and α t / √ v t is the adaptive step size for m t . One
common choice of φ(g 1 , . . . , g t ) is the exponential moving average of the gradients used in Momentum (Qian, 1999) and Adam (Kingma & Ba, 2014) , which helps alleviate gradient oscillations. The
commonly-used ψ(g 1 , . . . , g t ) in deep learning community is the exponential moving average of squared gradients, such as Adadelta (Zeiler, 2012) , RMSProp (Tieleman & Hinton, 2012) , Adam (Kingma & Ba, 2014) and Nadam (Dozat, 2016) .Adam
(Kingma & Ba, 2014 ) is a typical adaptive learning rate method, which assembles the idea of using exponential moving average of first and second moments and bias correction. In general
, Adam is robust and efficient in both dense and sparse gradient cases, and is popular in deep learning research. However,
Adam is shown not being able to converge to optimal solution in certain cases. Reddi et
al. (2018) point out that the key issue in the convergence proof of Adam lies in the quantity Γ t+1 = √ v t+1 / α t+1 − √ v t / α t , which is assumed to be positive, but unfortunately, such an assumption does not always hold in Adam. They provide
a set of counterexamples and demonstrate that the violation of positiveness of Γ t will lead to undesirable convergence behavior in Adam. Reddi et al.
(2018) then propose two variants, AMSGrad and AdamNC, to address the issue by keeping Γ t positive. Specifically
, AMSGrad defines v̂ t as the historical maximum of v t , i.e., v̂ t = max {v i } t i=1 , and replaces v t with v̂ t to keep it non-decreasing, therefore forcing Γ t to be positive; while AdamNC forces v t to have "long-term memory" of past gradients and calculates v t as their average to make it stable. Though these two algorithms solve the non-convergence problem of Adam to a certain extent, they turn out to be inefficient in practice: they have to maintain a very large v t once a large gradient appears, and a large v t decreases the adaptive learning rate α t / √ v t and slows down the training process. In this paper, we provide a new insight into adaptive learning rate methods, which brings a new perspective on solving the non-convergence issue of Adam. Specifically
, in Section 3, we study the counterexamples provided by Reddi et al. (2018) via analyzing the accumulated step size of each gradient g t . We observe
that in the common adaptive learning rate methods, a large gradient tends to have a relatively small step size, while a small gradient is likely to have a relatively large step size. We show that
the unbalanced step sizes stem from the inappropriate positive correlation between v t and g t , and we argue that this is the fundamental cause of the non-convergence issue of Adam. In Section 4, we further prove that decorrelating v t and g t leads to equal and unbiased expected step size for each gradient, thus solving the non-convergence issue of Adam. We subsequently
propose AdaShift, a decorrelated variant of adaptive learning rate methods, which achieves decorrelation between v t and g t by calculating v t using temporally shifted gradients. Finally, in Section
5, we study the performance of our proposed AdaShift, and demonstrate that it solves the non-convergence issue of Adam, while still maintaining a decent performance compared with Adam in terms of both training speed and generalization.
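The temporal-shifting idea can be written down in a few lines. The sketch below is a simplified scalar variant that only captures the core mechanism, computing v_t from the gradient observed n steps earlier so that the current gradient and its step size are (approximately) independent; the first-moment estimate, the spatial reduction over g_{t-n} and the block-wise sharing used by the full method are omitted, and the hyperparameter values are arbitrary.

```python
import numpy as np
from collections import deque

def adashift_like(grad_fn, theta, steps=2000, lr=0.01, n=10, beta2=0.999, eps=1e-8):
    """Scalar sketch of temporal shifting: v is updated from the gradient observed n steps
    earlier (g_{t-n}), so the step size applied to the current gradient g_t does not depend
    on g_t itself. First-moment smoothing and block-wise spatial operations are omitted."""
    buf = deque(maxlen=n)   # the most recent n gradients
    v = 0.0
    for _ in range(steps):
        g = grad_fn(theta)
        if len(buf) == n:                     # start updating once g_{t-n} is available
            g_shifted = buf[0]
            v = beta2 * v + (1.0 - beta2) * g_shifted ** 2
            theta -= lr * g / (np.sqrt(v) + eps)
        buf.append(g)
    return theta

print(adashift_like(lambda x: 2.0 * (x - 3.0), theta=0.0))  # approaches the minimiser 3.0
```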
In this paper, we study the non-convergence issue of adaptive learning rate methods from the perspective of the equivalent accumulated step size of each gradient, i.e., the net update factor defined in this paper.
We show that there exists an inappropriate correlation between v t and g t , which leads to unbalanced net update factor for each gradient.
We demonstrate that such unbalanced step sizes are the fundamental cause of non-convergence of Adam, and we further prove that decorrelating v t and g t will lead to unbiased expected step size for each gradient, thus solving the non-convergence problem of Adam.
Finally, we propose AdaShift, a novel adaptive learning rate method that decorrelates v t and g t by calculating v t using the temporally shifted gradient g t−n .
In addition, based on our new perspective on adaptive learning rate methods, v t is no longer necessarily the second moment of g t , but a random variable that is independent of g t and reflects the overall gradient scale. Thus
, it is valid to calculate v t with the spatial elements of previous gradients. We
further found that when the spatial operation φ outputs a shared scalar for each block, the resulting algorithm turns out to be closely related to SGD, where each block has an overall adaptive learning rate and the relative gradient scale in each block is maintained. The
experiment results demonstrate that AdaShift is able to solve the non-convergence issue of Adam. In
the meantime, AdaShift achieves competitive and even better training and testing performance when compared with Adam.
FIG7 suggests that, for a fixed sequential online optimization problem, both β 1 and β 2 determine the direction and speed of the Adam optimization process. Furthermore, we also study the threshold values of C and d under which Adam will move in the incorrect direction, for each fixed β 1 and β 2 varying in [0, 1). To simplify the experiments, we keep d = C such that the overall gradient of each epoch is +1. The result, shown in FIG7 , suggests that for larger β 1 or larger β 2 , a larger C is needed to make Adam stride in the opposite direction. In other words, large β 1 and β 2 make the non-convergence rare.
|
We analysis and solve the non-convergence issue of Adam.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:607
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Most domain adaptation methods consider the problem of transferring knowledge to the target domain from a single source dataset.
However, in practical applications, we typically have access to multiple sources.
In this paper we propose the first approach for Multi-Source Domain Adaptation (MSDA) based on Generative Adversarial Networks.
Our method is inspired by the observation that the appearance of a given image depends on three factors: the domain, the style (characterized in terms of low-level features variations) and the content.
For this reason we propose to project the image features onto a space where only the dependence from the content is kept, and then re-project this invariant representation onto the pixel space using the target domain and style.
In this way, new labeled images can be generated which are used to train a final target classifier.
We test our approach using common MSDA benchmarks, showing that it outperforms state-of-the-art methods.
A well known problem in computer vision is the need to adapt a classifier trained on a given source domain in order to work on another domain, i.e. the target.
Since the two domains typically have different marginal feature distributions, the adaptation process needs to align the one to the other in order to reduce the domain shift (Torralba & Efros (2011) ).
In many practical scenarios, the target data are not annotated and Unsupervised Domain Adaptation (UDA) methods are required.
While most previous adaptation approaches consider a single source domain, in real world applications we may have access to multiple datasets.
In this case, Multi-Source Domain Adaptation (MSDA) (Yao & Doretto (2010) ; Mansour et al. (2009) ; Xu et al. (2018) ; Peng et al. (2019) ) methods may be adopted, in which more than one source dataset is considered in order to make the adaptation process more robust.
However, despite more data can be used, MSDA is challenging as multiple domain shift problems need to be simultaneously and coherently solved.
In this paper we tackle the unsupervised MSDA problem and propose a novel Generative Adversarial Network (GAN) for addressing the domain shift when multiple source domains are available.
Our solution is based on generating artificial target samples by transforming images from all the source domains.
Then the synthetically generated images are used for training the target classifier.
While this strategy has been recently adopted in single-source UDA scenarios (Russo et al. (2018); Liu & Tuzel (2016); Murez et al. (2018); Sankaranarayanan et al. (2018)), we are the first to show how it can be effectively exploited in a MSDA setting.
The holy grail of any domain adaptation method is to obtain domain invariant representations.
Similarly, in multi-domain image-to-image translation tasks it is very crucial to obtain domain invariant representations in order to reduce the number of learned translations from O(N 2 ) to O(N ), where N is the number of domains.
Several domain adaptation methods (Roy et al. (2019); Carlucci et al. (2017); Tzeng et al. (2014)) achieve domain-invariant representations by aligning only domain-specific distributions.
However, we postulate that style is the most important latent factor that describes a domain and needs to be modelled separately to obtain an optimal domain-invariant representation.
More precisely, in our work we assume that the appearance of an image depends on three factors: i.e. the content, the domain and the style.
The domain models properties that are shared by the elements of a dataset but which may not be shared by other datasets, whereas, the factor style represents a property that is shared among different parts of a single image and describes low-level features which concern a specific image.
Our generator obtains the domain-invariant representation in a two-step process, by first obtaining a style-invariant representation and then a domain-invariant representation.
In more detail, the proposed translation is implemented using a style-and-domain translation generator.
This generator is composed of two main components, an encoder and a decoder.
Inspired by Roy et al. (2019), in the encoder we embed whitening layers that progressively align the style-and-domain feature distributions in order to obtain a representation of the image content which is invariant to these factors.
Then, in the decoder, we project this invariant representation onto a new domain-and-style-specific distribution with Whitening and Coloring (WC) batch transformations, according to the target data.
Importantly, the use of an intermediate, explicit invariant representation, obtained through WC, makes the number of domain transformations which need to be learned linear in the number of domains.
In other words, this design choice ensures scalability when the number of domains increases, which is a crucial aspect for an effective MSDA method.
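As a rough illustration of the whitening-then-coloring idea described above, the sketch below is a simplified PyTorch rendition of my own, not the paper's exact IWT/DWT/cDWT/AdaIWT layers: per-instance feature statistics are whitened away, and a learned affine "coloring" transform then imposes target-specific statistics. The function and module names, shapes and initialization are assumptions for illustration.

```python
import torch
import torch.nn as nn

def whiten(x, eps=1e-5):
    """ZCA-whiten features of shape (B, C, H, W) per instance."""
    b, c, h, w = x.shape
    flat = x.reshape(b, c, h * w)
    centered = flat - flat.mean(dim=2, keepdim=True)
    cov = centered @ centered.transpose(1, 2) / (h * w - 1)            # (B, C, C)
    eigval, eigvec = torch.linalg.eigh(cov + eps * torch.eye(c))
    inv_sqrt = eigvec @ torch.diag_embed(eigval.clamp_min(eps).rsqrt()) @ eigvec.transpose(1, 2)
    return (inv_sqrt @ centered).reshape(b, c, h, w)

class ColoringLayer(nn.Module):
    """Re-projects whitened (style/domain-free) features onto a learned
    target-specific distribution via an affine coloring transform."""
    def __init__(self, channels):
        super().__init__()
        self.color = nn.Parameter(torch.eye(channels))   # learned coloring matrix
        self.bias = nn.Parameter(torch.zeros(channels))  # learned target mean

    def forward(self, x):
        b, c, h, w = x.shape
        flat = x.reshape(b, c, h * w)
        out = self.color @ flat + self.bias.view(1, c, 1)
        return out.reshape(b, c, h, w)

# Usage: strip instance-level statistics, then impose target statistics.
feats = torch.randn(2, 8, 16, 16)
invariant = whiten(feats)
recolored = ColoringLayer(8)(invariant)
```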
Contributions.
Our main contributions can be summarized as follows.
(i) We propose the first generative model dealing with MSDA.
We call our approach TriGAN because it is based on three different factors of the images: the style, the domain and the content.
(ii) The proposed style-anddomain translation generator is based on style and domain specific statistics which are first removed from and then added to the source images by means of modified W C layers: Instance Whitening Transform (IW T ), Domain Whitening Transform (DW T ) (Roy et al. (2019) ), conditional Domain Whitening Transform (cDW T ) and Adaptive Instance Whitening Transform (AdaIW T ).
Notably, the IW T and AdaIW T are novel layers introduced with this paper.
(iii) We test our method on two MSDA datasets, Digits-Five (Xu et al. (2018) ) and Office-Caltech10 (Gong et al. (2012) ), outperforming state-of-the-art methods.
|
In this paper we propose a generative method for multi-source domain adaptation based on the decomposition of content, style and domain factors.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:608
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Inferring the most likely configuration for a subset of variables of a joint distribution given the remaining ones – which we refer to as co-generation – is an important challenge that is computationally demanding for all but the simplest settings.
This task has received a considerable amount of attention, particularly for classical ways of modeling distributions like structured prediction.
In contrast, almost nothing is known about this task when considering recently proposed techniques for modeling high-dimensional distributions, particularly generative adversarial nets (GANs).
Therefore, in this paper, we study the occurring challenges for co-generation with GANs.
To address those challenges we develop an annealed importance sampling (AIS) based Hamiltonian Monte Carlo (HMC) co-generation algorithm.
The presented approach significantly outperforms classical gradient-based methods on synthetic data and on CelebA.
While generative adversarial nets (GANs) [6] and variational auto-encoders (VAEs) [8] model a joint probability distribution which implicitly captures the correlations between multiple parts of the output, e.g., pixels in an image, and while those methods permit easy sampling from the entire output space domain, it remains an open question how to sample from part of the domain given the remainder.
We refer to this task as co-generation.
To enable co-generation for a domain unknown at training time, for GANs, optimization based algorithms have been proposed [15, 10] .
Intuitively, they aim at finding that latent sample which accurately matches the observed part.
However, successful training of the GAN leads to an increasingly ragged energy landscape, making the search for an appropriate latent variable via backpropagation through the generator harder and harder until it eventually fails.
To deal with this ragged energy landscape during co-generation, we develop a method using an annealed importance sampling (AIS) [11] based Hamiltonian Monte Carlo (HMC) algorithm [4, 12] , which is typically used to estimate (ratios of) the partition function [14, 13] .
Rather than focus on the partition function, the proposed approach leverages the benefits of AIS, i.e., gradually annealing a complex probability distribution, and HMC, i.e., avoiding a localized random walk.
We evaluate the proposed approach on synthetic data and imaging data (CelebA), showing compelling results via MSE and MSSIM metrics.
For more details and results please see our main conference paper [5] .
We propose a co-generation approach, i.e., we complete partially given input data, using annealed importance sampling (AIS) based on the Hamiltonian Monte Carlo (HMC).
Different from classical optimization based methods, specifically GD, which get easily trapped in local optima when solving this task, the proposed approach is much more robust.
Importantly, the method is able to traverse large energy barriers that occur when training generative adversarial nets.
Its robustness is due to AIS gradually annealing a probability distribution and HMC avoiding localized walks.
We show additional results for real data experiments.
We observe our proposed algorithm to recover masked images more accurately than baselines and to generate better high-resolution images given low-resolution images.
We show masked CelebA (Fig. 5) and LSUN (Fig. 6 ) recovery results for baselines and our method, given a Progressive GAN generator.
Note that our algorithm is pretty robust to the position of the z initialization, since the generated results are consistent in Fig. 5 .
|
Using annealed importance sampling on the co-generation problem.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:609
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We propose Adversarial Inductive Transfer Learning (AITL), a method for addressing discrepancies in input and output spaces between source and target domains.
AITL utilizes adversarial domain adaptation and multi-task learning to address these discrepancies.
Our motivating application is pharmacogenomics where the goal is to predict drug response in patients using their genomic information.
The challenge is that clinical data (i.e. patients) with drug response outcome is very limited, creating a need for transfer learning to bridge the gap between large pre-clinical pharmacogenomics datasets (e.g. cancer cell lines) and clinical datasets.
Discrepancies exist between
1) the genomic data of pre-clinical and clinical datasets (the input space), and
2) the different measures of the drug response (the output space).
To the best of our knowledge, AITL is the first adversarial inductive transfer learning method to address both input and output discrepancies.
Experimental results indicate that AITL outperforms state-of-the-art pharmacogenomics and transfer learning baselines and may guide precision oncology more accurately.
Deep neural networks (Goodfellow et al., 2016) have demonstrated the state-of-the-art performance in different problems, ranging from computer vision and natural language processing to genomics (Eraslan et al., 2019) and medicine (Topol, 2019) .
However, these networks often require a large number of samples for training, which is challenging and sometimes impossible to obtain in the real world applications.
Transfer learning (Pan & Yang, 2009) attempts to solve this challenge by leveraging the knowledge in a source domain, a large data-rich dataset, to improve the generalization performance on a small target domain.
Training a model on the source domain and testing it on the target domain violates the i.i.d assumption that the train and test data are from the same distribution.
The discrepancy in the input space decreases the prediction accuracy on the test data, which leads to poor generalization (Zhang et al., 2019) .
Many methods have been proposed to minimize the discrepancy between the source and the target domains using different metrics such as Jensen Shannon Divergence (Ganin & Lempitsky, 2014) , Maximum Mean Discrepancy (Gretton et al., 2012) , and Margin Disparity Discrepancy (Zhang et al., 2019) .
While transductive transfer learning (e.g. domain adaptation) uses a labeled source domain to improve generalization on an unlabeled target domain, inductive transfer learning (e.g. few-shot learning) uses a labeled source domain to improve the generalization on a labeled target domain where label spaces are different in the source and the target domains (Pan & Yang, 2009 ).
Adversarial domain adaptation has shown great performance in addressing the discrepancy in the input space for different applications (Schoenauer-Sebag et al., 2019; Hosseini-Asl et al., 2018; Pinheiro, 2018; Zou et al., 2018; Tsai et al., 2018; Long et al., 2018; , however, adversarial adaptation to address the discrepancies in both the input and output spaces has not yet been explored.
Our motivating application is pharmacogenomics (Smirnov et al., 2017) where the goal is to predict response to a cancer drug given the genomic data (e.g. gene expression).
Since clinical datasets in pharmacogenomics (patients) are small and hard to obtain, many studies have focused on large pre-clinical pharmacogenomics datasets such as cancer cell lines as a proxy to patients (Barretina et al., 2012; Iorio et al., 2016) .
A majority of the current methods are trained on cell line datasets and then tested on other cell line or patient datasets Geeleher et al., 2014) .
However, cell lines and patients data, even with the same set of genes, do not have identical distributions due to the lack of an immune system and the tumor microenvironment in cell lines (Mourragui et al., 2019) .
Moreover, in cell lines, the response is often measured by the drug concentration that reduces viability by 50% (IC50), whereas in patients, it is often based on changes in the size of the tumor and measured by metrics such as response evaluation criteria in solid tumors (RECIST) (Schwartz et al., 2016) .
This means that drug response prediction is a regression problem in cell lines but a classification problem in patients.
Therefore, discrepancies exist in both the input and output spaces in pharmacogenomics datasets.
Table A1 provides the definition of these biological terms.
In this paper, we propose Adversarial Inductive Transfer Learning (AITL), the first adversarial method of inductive transfer learning.
Different from existing methods for transfer learning, AITL adapts not only the input space but also the output space.
Our motivating application is transfer learning for pharmacogenomics datasets.
In our driving application, the source domain is the gene expression data obtained from the cell lines and the target domain is the gene expression data obtained from patients.
Both domains have the same set of genes (i.e., raw feature representation).
Discrepancies exist between the gene expression data in the input space, and the measure of the drug response in the output space.
AITL learns features for the source and target samples and uses these features as input for a multi-task subnetwork to predict drug response for both the source and the target samples.
The output space discrepancy is addressed by the multi-task subnetwork, which has one shared layer and separate classification and regression towers, and assigns binary labels (called cross-domain labels) to the source samples.
The multi-task subnetwork also alleviates the problem of small sample size in the target domain by sharing the first layer with the source domain.
To address the discrepancy in the input space, AITL performs adversarial domain adaptation.
The goal is that features learned for the source samples should be domain-invariant and similar enough to the features learned for the target samples to fool a global discriminator that receives samples from both domains.
Moreover, with the cross-domain binary labels available for the source samples, AITL further regularizes the learned features by class-wise discriminators.
A class-wise discriminator receives source and target samples from the same class label and should not be able to predict the domain accurately.
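As a rough structural sketch of the pieces just described, the hypothetical PyTorch rendition below couples a shared feature extractor with a multi-task head (regression for the source drug response, classification for the target response) and global plus class-wise discriminators; the layer sizes, names and training details are my own assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class AITLSketch(nn.Module):
    """Schematic AITL-style model: shared features, a multi-task subnetwork,
    and discriminators for adversarial input-space alignment."""
    def __init__(self, n_genes=1000, hidden=128):
        super().__init__()
        self.feature_extractor = nn.Sequential(nn.Linear(n_genes, hidden), nn.ReLU())
        self.shared = nn.Linear(hidden, hidden)        # layer shared across both tasks
        self.regressor = nn.Linear(hidden, 1)          # source (cell lines): IC50 regression
        self.classifier = nn.Linear(hidden, 2)         # target (patients): response classification
        self.global_disc = nn.Linear(hidden, 2)        # source vs. target discriminator
        self.classwise_disc = nn.ModuleList(           # one discriminator per response class
            [nn.Linear(hidden, 2) for _ in range(2)])

    def forward(self, x):
        h = torch.relu(self.shared(self.feature_extractor(x)))
        return self.regressor(h), self.classifier(h), h

# Usage sketch: source (cell-line) and target (patient) batches.
model = AITLSketch()
x_src, x_tgt = torch.randn(32, 1000), torch.randn(8, 1000)
ic50_pred, src_cls_logits, h_src = model(x_src)    # cross-domain labels come from src_cls_logits
_, tgt_cls_logits, h_tgt = model(x_tgt)
domain_logits = model.global_disc(torch.cat([h_src, h_tgt]))  # fooled once features align
```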
We evaluated the performance of AITL and state-of-the-art inductive and adversarial transductive transfer learning baselines on pharmacogenomics datasets in terms of the Area Under the Receiver Operating Characteristic curve (AUROC) and the Area Under the Precision-Recall curve (AUPR).
In our experiments, AITL achieved a substantial improvement compared to the baselines, demonstrating the potential of transfer learning for drug response prediction, a crucial task of precision oncology.
To our surprise, ProtoNet and ADDA could not outperform the Geeleher et al. (2014) and MOLI baselines.
For ProtoNet, this may be due to the depth of the backbone network.
A recent study has shown that a deeper backbone improves ProtoNet performance drastically in image classification (Chen et al., 2019).
However, in pharmacogenomics, employing a deep backbone is not realistic because of the much smaller sample size compared to an image classification application.
Another limitation for ProtoNet is the imbalanced number of training examples in different classes in pharmacogenomics datasets.
Specifically, the number of examples per class in the training episodes is limited to the number of samples of the minority class as ProtoNet requires the same number of examples from each class.
For ADDA, this lower performance may be due to the lack of end-to-end training of the classifier along with the global discriminator of this method.
The reason is that end-to-end training of the classifier along with the discriminators improved the performance of the second adversarial baseline in AUROC and AUPR compared to ADDA.
Moreover, this second adversarial baseline also showed a relatively better performance in AUPR compared to the method of Geeleher et al. (2014) and MOLI.
In pharmacogenomics, patient datasets are small or not publicly available due to privacy and/or data sharing issues.
We believe including more patient samples and more drugs will increase generalization capability.
In addition, recent studies in pharmacogenomics have shown that using multiple genomic data types (known as multi-omics in genomics) works better than using only gene expression .
In this work, we did not consider such data due to the lack of patient samples with multi-omics and drug response data publicly available; however, in principle, AITL also works with such data.
Last but not least, we used pharmacogenomics as our motivating application for this new problem of transfer learning, but we believe that AITL can also be employed in other applications.
For example, in slow progressing cancers such as prostate cancer, large patient datasets with gene expression and short-term clinical data (source domain) are available, however, patient datasets with long-term clinical data (target domain) are small.
AITL may be beneficial to learn a model to predict these long-term clinical labels using the source domain and its short-term clinical labels (Sharifi-Noghabi et al., 2019a) .
Moreover, AITL can also be applied to the diagnosis of rare cancers with a small sample size.
Gene expression data of prevalent cancers with a large sample size, such as breast cancer, may be beneficial to learn a model to diagnose these rare cancers.
In this paper, we introduced a new problem in transfer learning motivated by applications in pharmacogenomics.
Unlike domain adaptation that only requires adaptation in the input space, this new problem requires adaptation in both the input and output spaces.
To address this problem, we proposed AITL, an Adversarial Inductive Transfer Learning method which, to the best of our knowledge, is the first method that addresses the discrepancies in both the input and output spaces.
AITL uses a feature extractor to learn features for target and source samples.
Then, to address the discrepancy in the output space, AITL utilizes these features as input of a multi-task subnetwork that makes predictions for the target samples and assign cross-domain labels to the source samples.
Finally, to address the input space discrepancy, AITL employs global and class-wise discriminators for learning domain-invariant features.
In our motivating application, pharmacogenomics, AITL adapts the gene expression data obtained from cell lines and patients in the input space, and also adapts different measures of the drug response between cell lines and patients in the output space.
In addition, AITL can also be applied to other applications such as rare cancer diagnosis or predicting long-term clinical labels for slow progressing cancers.
We evaluated AITL on four different drugs and compared it against state-of-the-art baselines from three categories in terms of AUROC and AUPR.
The empirical results indicated that AITL achieved a significantly better performance compared to the baselines showing the benefits of addressing the discrepancies in both the input and output spaces.
We conclude that AITL may be beneficial in pharmacogenomics, a crucial task in precision oncology.
For future research directions, we believe that the TCGA dataset consisting of gene expression data of more than 12,000 patients (without drug response outcome) can be incorporated in an unsupervised transfer learning setting to learn better domain-invariant features between cell lines and cancer patients.
In addition, we did not explore the impact of the chemical structures of the studied drugs in the prediction performance.
We believe incorporating this input with transfer learning in the genomic level can lead to a better performance.
Currently, AITL borrows information between the input domains indirectly via its multi-task subnetwork and assignment of cross-domain labels.
An interesting future direction can be to exchange this information between domains in a more explicit way.
Moreover, we also did not perform theoretical analysis on this new problem of transfer learning and we leave it for future work.
Finally, we did not distinguish between different losses in the multi-task subnetwork, however, in reality patients are more important than cell lines, and considering a higher weight for the corresponding loss in the cost function can improve the prediction performance.
|
A novel method of inductive transfer learning that employs adversarial learning and multi-task learning to address the discrepancies in the input and output spaces
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:61
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Implicit probabilistic models are models defined naturally in terms of a sampling procedure and often induce a likelihood function that cannot be expressed explicitly.
We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, but can be shown to be equivalent to maximizing likelihood under some conditions.
Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite.
We also demonstrate encouraging experimental results.
Generative modelling is a cornerstone of machine learning and has received increasing attention.
Recent models like variational autoencoders (VAEs) BID32 BID45 and generative adversarial nets (GANs) BID21 BID25 have delivered impressive advances in performance and generated a lot of excitement.
Generative models can be classified into two categories: prescribed models and implicit models BID12 BID40 .
Prescribed models are defined by an explicit specification of the density, and so their unnormalized complete likelihood can be usually expressed in closed form.
Examples include models whose complete likelihoods lie in the exponential family, such as mixture of Gaussians BID18 , hidden Markov models BID5 , Boltzmann machines BID27 .
Because computing the normalization constant, also known as the partition function, is generally intractable, sampling from these models is challenging.
On the other hand, implicit models are defined most naturally in terms of a (simple) sampling procedure.
Most models take the form of a deterministic parameterized transformation T_θ(·) of an analytic distribution, like an isotropic Gaussian.
This can be naturally viewed as the distribution induced by the following sampling procedure:
1. Sample z ∼ N(0, I)
2. Return x := T_θ(z)
The transformation T_θ(·) often takes the form of a highly expressive function approximator, like a neural net.
Examples include generative adversarial nets (GANs) BID21 BID25 and generative moment matching nets (GMMNs) BID36 BID16 .
The marginal likelihood of such models can be characterized as follows:
p_θ(x) = ∂^d / (∂x_1 ⋯ ∂x_d) ∫_{z : T_θ(z) ≤ x} φ(z) dz,
where φ(·) denotes the probability density function (PDF) of N(0, I).
In general, attempting to reduce this to a closed-form expression is hopeless.
Evaluating it numerically is also challenging, since the domain of integration could consist of an exponential number of disjoint regions and numerical differentiation is ill-conditioned.
These two categories of generative models are not mutually exclusive.
Some models admit both an explicit specification of the density and a simple sampling procedure and so can be considered as both prescribed and implicit.
Examples include variational autoencoders BID32 BID45 , their predecessors BID38 BID10 and extensions BID11 , and directed/autoregressive models, e.g., BID42 BID6 BID33 (van den Oord et al., 2016).
In this section, we consider and address some possible concerns about our method.
We presented a simple and versatile method for parameter estimation when the form of the likelihood is unknown.
The method works by drawing samples from the model, finding the nearest sample to every data example and adjusting the parameters of the model so that it is closer to the data example.
We showed that performing this procedure is equivalent to maximizing likelihood under some conditions.
The proposed method can capture the full diversity of the data and avoids common issues like mode collapse, vanishing gradients and training instability.
The method combined with vanilla model architectures is able to achieve encouraging results on MNIST, TFD and CIFAR-10.
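A minimal rendition of the described training procedure, under assumed settings of my own (a tiny generator, Euclidean nearest-neighbour matching, and a squared-distance loss on a toy 2D dataset), could look like this:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
data = torch.randn(64, 2) * 0.5 + 2.0           # toy dataset standing in for real examples
gen = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

for step in range(200):
    samples = gen(torch.randn(256, 4))           # draw samples from the model
    dists = torch.cdist(data, samples)           # pairwise distances, (n_data, n_samples)
    nearest = dists.argmin(dim=1)                # nearest sample for every data example
    loss = ((samples[nearest] - data) ** 2).sum(dim=1).mean()
    opt.zero_grad()
    loss.backward()                              # pull each matched sample toward its data point
    opt.step()
```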
|
We develop a new likelihood-free parameter estimation method that is equivalent to maximum likelihood under some conditions
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:610
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
While most approaches to the problem of Inverse Reinforcement Learning (IRL) focus on estimating a reward function that best explains an expert agent’s policy or demonstrated behavior on a control task, it is often the case that such behavior is more succinctly represented by a simple reward combined with a set of hard constraints.
In this setting, the agent is attempting to maximize cumulative rewards subject to these given constraints on their behavior.
We reformulate the problem of IRL on Markov Decision Processes (MDPs) such that, given a nominal model of the environment and a nominal reward function, we seek to estimate state, action, and feature constraints in the environment that motivate an agent’s behavior.
Our approach is based on the Maximum Entropy IRL framework, which allows us to reason about the likelihood of an expert agent’s demonstrations given our knowledge of an MDP.
Using our method, we can infer which constraints can be added to the MDP to most increase the likelihood of observing these demonstrations.
We present an algorithm which iteratively infers the Maximum Likelihood Constraint to best explain observed behavior, and we evaluate its efficacy using both simulated behavior and recorded data of humans navigating around an obstacle.
Advances in mechanical design and artificial intelligence continue to expand the horizons of robotic applications.
In these new domains, it can be difficult to design a specific robot behavior by hand.
Even manually specifying a task for a reinforcement-learning-enabled agent is notoriously difficult (Ho et al., 2015; Amodei et al., 2016) .
Inverse Reinforcement Learning (IRL) techniques can help alleviate this burden by automatically identifying the objectives driving certain behavior.
Since first being introduced as Inverse Optimal Control by Kalman (1964) , much of the work on IRL has focused on learning environmental rewards to represent the task of interest (Ng et al., 2000; Abbeel & Ng, 2004; Ratliff et al., 2006; Ziebart et al., 2008) .
While these types of IRL algorithms have proven useful in a variety of situations (Abbeel et al., 2007; Vasquez et al., 2014; Ziebart, 2010; , their basis in assuming that reward functions fully represent task specifications makes them ill suited to problem domains with hard constraints or non-Markovian objectives.
Recent work has attempted to address these pitfalls by using demonstrations to learn a rich class of possible specifications that can represent a task (Vazquez-Chanlatte et al., 2018) .
Others have focused specifically on learning constraints, that is, behaviors that are expressly forbidden or infeasible (Pardowitz et al., 2005; Pérez-D'Arpino & Shah, 2017; Subramani et al., 2018; McPherson et al., 2018; Chou et al., 2018) .
Such constraints arise in safety-critical systems, where requirements such as an autonomous vehicle avoiding collisions with pedestrians are more naturally expressed as hard constraints than as soft reward penalties.
It is towards the problem of inferring such constraints that we turn our attention.
In this work, we present a novel method for inferring constraints, drawing primarily from the Maximum Entropy approach to IRL described by Ziebart et al. (2008) .
We use this framework to reason about the likelihood of observing a set of demonstrations given a nominal task description, as well as about their likelihood if we imposed additional constraints on the task.
This knowledge allows us to select a constraint, or set of constraints, which maximizes the demonstrations' likelihood and best explains the differences between expected and demonstrated behavior.
Our method improves on prior work by being able to simultaneously consider constraints on states, actions and features in a Markov Decision Process (MDP) to provide a principled ranking of all options according to their effect on demonstration likelihood.
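To make the selection criterion concrete, here is a small self-contained toy (an illustration, not the paper's algorithm): in a tiny deterministic grid MDP, MaxEnt trajectory probabilities are proportional to exp of the nominal reward over the currently feasible trajectory set, and each candidate state constraint is scored by how much it raises the likelihood of the demonstrations it does not rule out. The grid, rewards and demonstrations are invented for illustration.

```python
from itertools import product
from math import exp, log

START, GOAL, H = (0, 0), (2, 2), 4
ACTIONS = {"R": (1, 0), "U": (0, 1)}

def rollout(plan):
    """Deterministic 3x3 grid dynamics; returns visited states, or None if the plan leaves the grid."""
    s, states = START, [START]
    for a in plan:
        s = (s[0] + ACTIONS[a][0], s[1] + ACTIONS[a][1])
        if not (0 <= s[0] <= 2 and 0 <= s[1] <= 2):
            return None
        states.append(s)
    return states

def reward(states):
    return -len(states) + (10 if states[-1] == GOAL else 0)   # nominal reward

all_traj = [t for t in (rollout(p) for p in product("RU", repeat=H)) if t]

def demo_log_likelihood(demos, constrained_state=None):
    """MaxEnt log-likelihood of demos once trajectories touching the candidate constraint are removed."""
    feasible = [t for t in all_traj if constrained_state not in t]
    Z = sum(exp(reward(t)) for t in feasible)
    ll = 0.0
    for d in demos:
        if constrained_state in d:
            return float("-inf")      # a demo violates this candidate, so it cannot be the constraint
        ll += reward(d) - log(Z)
    return ll

# Demonstrations detour around cell (1, 1), hinting at an unobserved obstacle there.
demos = [rollout("RRUU"), rollout("UURR")]
candidates = [c for c in product(range(3), range(3)) if c not in (START, GOAL)]
best = max(candidates, key=lambda c: demo_log_likelihood(demos, c))
print(best, demo_log_likelihood(demos, best), demo_log_likelihood(demos))   # selects (1, 1)
```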
We have presented our novel technique for learning constraints from demonstrations.
We improve upon previous work in constraint-learning IRL by providing a principled framework for identifying the most likely constraint(s), and we do so in a way that explicitly makes state, action, and feature constraints all directly comparable to one another.
We believe that the numerical results presented in Section 4 are promising and highlight the usefulness of our approach.
Despite its benefits, one drawback of our approach is that the formulation is based on (3), which only exactly holds for deterministic MDPs.
As mentioned in Section 3.3, we plan to investigate the use of a maximum causal entropy approach to address this issue and fully handle stochastic MDPs.
Additionally, the methods presented here require all demonstrations to contain no violations of the constraints we will estimate.
We believe that softening this requirement, which would allow reasoning about the likelihood of constraints that are occasionally violated in the demonstration set, may be beneficial in cases where trajectory data is collected without explicit labels of success or failure.
Finally, the structure of Algorithm 1, which tracks the expected features accruals of trajectories over time, suggests that we may be able to reason about non-Markovian constraints by using this historical information to our advantage.
Overall, we believe that our formulation of maximum likelihood constraint inference for IRL shows promising results and presents attractive avenues for further investigation.
|
Our method infers constraints on task execution by leveraging the principle of maximum entropy to quantify how demonstrations differ from expected, un-constrained behavior.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:611
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The success of reinforcement learning in the real world has been limited to instrumented laboratory scenarios, often requiring arduous human supervision to enable continuous learning.
In this work, we discuss the required elements of a robotic system that can continually and autonomously improve with data collected in the real world, and propose a particular instantiation of such a system.
Subsequently, we investigate a number of challenges of learning without instrumentation -- including the lack of episodic resets, state estimation, and hand-engineered rewards -- and propose simple, scalable solutions to these challenges.
We demonstrate the efficacy of our proposed system on dexterous robotic manipulation tasks in simulation and the real world, and also provide an insightful analysis and ablation study of the challenges associated with this learning paradigm.
Reinforcement learning (RL) can in principle enable real-world autonomous systems, such as robots, to autonomously acquire a large repertoire of skills.
Perhaps more importantly, reinforcement learning can enable such systems to continuously improve the proficiency of their skills from experience.
However, realizing this promise in reality has proven challenging: even with reinforcement learning methods that can acquire complex behaviors from high-dimensional low-level observations, such as images, the typical assumptions of the reinforcement learning problem setting do not fit perfectly into the constraints of the real world.
For this reason, most successful robotic learning experiments have been demonstrated with varying levels of instrumentation, in order to make it practical to define reward functions (e.g. by using auxiliary sensors (Haarnoja et al., 2018a; Kumar et al., 2016; Andrychowicz et al., 2018) ), and in order to make it practical to reset the environment between trials (e.g. using manually engineered contraptions ).
In order to really make it practical for autonomous learning systems to improve continuously through real-world operation, we must lift these constraints and design learning systems whose assumptions match the constraints of the real world, and allow for uninterrupted continuous learning with large amounts of real world experience.
What exactly is holding back our reinforcement learning algorithms from being deployed for learning robotic tasks (for instance manipulation) directly in the real world?
We hypothesize that our current reinforcement learning algorithms make a number of unrealistic assumptions that make real world deployment challenging -access to low-dimensional Markovian state, known reward functions, and availability of episodic resets.
In practice, this means that significant human engineering is required to materialize these assumptions in order to conduct real-world reinforcement learning, which limits the ability of learning-enabled robots to collect large amounts of experience automatically in a variety of naturally occuring environments.
Even if we can engineer a complex solution for instrumentation in one environment, the same may need to be done for every environment being learned in.
When using deep function approximators, actually collecting large amounts of real world experience is typically crucial for effective generalization.
The inability to collect large amounts of real world data autonomously significantly limits the ability of these robots to learn robust, generalizable behaviors.
In this work, we propose that overcoming these challenges requires designing robotic systems that possess three fundamental capabilities: (1) they are able to learn from their own raw sensory inputs, (2) they are able to assign rewards to their own behaviors with minimal human intervention, (3) they are able to learn continuously in non-episodic settings without requiring human operators to manually reset the environment.
We believe that a system with these capabilities will bring us significantly closer to the goal of continuously improving robotic agents that leverage large amounts of their own real world experience, without requiring significant human instrumentation and engineering effort.
Having laid out these requirements, we propose a practical instantiation of such a learning system, which affords the above capabilities.
While prior works have studied each of these issues in isolation, combining solutions to these issues is non-trivial and results in a particularly challenging learning problem.
We provide a detailed empirical analysis of these issues, both in simulation and on a real-world robotic platform, and propose a number of simple but effective solutions that can make it possible to produce a complete robotic learning system that can learn autonomously, handle raw sensory inputs, learn reward functions from easily available supervision, and learn without manually designed reset mechanisms.
We show that this system is well suited for learning dexterous robotic manipulation tasks in the real world, and substantially outperforms ablations and prior work.
While the individual components that we combine to design our robotic learning system are based heavily on prior work, both the combination of these components and their specific instantiations are novel.
Indeed, we show that without the particular design decisions motivated by our experiments, naïve designs that follow prior work generally fail to satisfy one of the three requirements that we lay out.
We presented the design and instantiation of R3L , a system for real world reinforcement learning.
We identify and investigate the various ingredients required for such a system to scale gracefully with minimal human engineering and supervision.
We show that this system must be able to learn from raw sensory observations, learn from very easily specified reward functions without reward engineering, and learn without any episodic resets.
We describe the basic elements that are required to construct such a system, and identify unexpected learning challenges that arise from interplay of these elements.
We propose simple and scalable fixes to these challenges through introducing unsupervised representation learning and a randomized perturbation controller.
We show the effectiveness on such a system at learning without instrumentation in several simulated and real world environments.
The ability to train robots directly in the real world with minimal instrumentation opens a number of exciting avenues for future research.
Robots that can learn unattended, without resets or handdesigned reward functions, can in principle collect very large amounts of experience autonomously, which may enable very broad generalization in the future.
Furthermore, fully autonomous learning should make it possible for robots to acquire large behavioral repertoires, since each additional task requires only the initial examples needed to learn the reward.
However, there are also a number of additional challenges, including sample complexity, optimization and exploration difficulties on more complex tasks, safe operation, communication latency, sensing and actuation noise, and so forth, all of which would need to be addressed in future work in order to enable truly scalable realworld robotic learning.
Algorithm sketch (initialization):
Initialize RND target and predictor networks f(s), f̂(s)
Initialize VICE reward classifier r_VICE(s)
Initialize replay buffer D
|
System to learn robotic tasks in the real world with reinforcement learning without instrumentation
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:612
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We study the problem of cross-lingual voice conversion in non-parallel speech corpora and one-shot learning setting.
Most prior work requires either parallel speech corpora or a sufficient amount of training data from a target speaker.
However, we convert arbitrary sentences of an arbitrary source speaker to the target speaker's voice given only one training utterance from the target speaker.
To achieve this, we formulate the problem as learning disentangled speaker-specific and context-specific representations and follow the idea of [1], which uses a Factorized Hierarchical Variational Autoencoder (FHVAE).
After training the FHVAE on multi-speaker training data, given arbitrary source and target speakers' utterances, we estimate those latent representations and then reconstruct the desired utterance with the voice converted to that of the target speaker.
We use a multi-language speech corpus to learn a universal model that works for all of the languages.
We investigate the use of a one-hot language embedding to condition the model on the language of the utterance being queried and show the effectiveness of the approach.
We conduct voice conversion experiments with varying numbers of training utterances, and the model achieves reasonable performance with even just one training utterance.
We also investigate the effect of using or not using the language conditioning.
Furthermore, we visualize the embeddings of the different languages and sexes.
Finally, in the subjective tests, for single-language and cross-lingual voice conversion, our approach achieved moderately better or comparable results compared to the baseline in speech quality and similarity.
|
We use a Variational Autoencoder to separate style and content, and achieve voice conversion by modifying style embedding and decoding. We investigate using a multi-language speech corpus and investigate its effects.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:613
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We propose an end-to-end-trainable attention module for convolutional neural network (CNN) architectures built for image classification.
The module takes as input the 2D feature vector maps which form the intermediate representations of the input image at different stages in the CNN pipeline, and outputs a 2D matrix of scores for each map.
Standard CNN architectures are modified through the incorporation of this module, and trained under the constraint that a convex combination of the intermediate 2D feature vectors, as parametrised by the score matrices, must alone be used for classification.
Incentivised to amplify the relevant and suppress the irrelevant or misleading, the scores thus assume the role of attention values.
Our experimental observations provide clear evidence to this effect: the learned attention maps neatly highlight the regions of interest while suppressing background clutter.
Consequently, the proposed function is able to bootstrap standard CNN architectures for the task of image classification, demonstrating superior generalisation over 6 unseen benchmark datasets.
When binarised, our attention maps outperform other CNN-based attention maps, traditional saliency maps, and top object proposals for weakly supervised segmentation as demonstrated on the Object Discovery dataset.
We also demonstrate improved robustness against the fast gradient sign method of adversarial attack.
Feed-forward convolutional neural networks (CNNs) have demonstrated impressive results on a wide variety of visual tasks, such as image classification, captioning, segmentation, and object detection.
However, the visual reasoning which they implement in solving these problems remains largely inscrutable, impeding understanding of their successes and failures alike.
One approach to visualising and interpreting the inner workings of CNNs is the attention map: a scalar matrix representing the relative importance of layer activations at different 2D spatial locations with respect to the target task BID21 .
This notion of a nonuniform spatial distribution of relevant features being used to form a task-specific representation, and the explicit scalar representation of their relative relevance, is what we term 'attention'.
Previous works have shown that for a classification CNN trained using image-level annotations alone, extracting the attention map provides a straightforward way of determining the location of the object of interest BID2 BID31 and/or its segmentation mask BID21 , as well as helping to identify discriminative visual properties across classes BID31 .
More recently, it has also been shown that training smaller networks to mimic the attention maps of larger and higher-performing network architectures can lead to gains in classification accuracy of those smaller networks BID29 .
The works of BID21 ; BID2 ; BID31 represent one series of increasingly sophisticated techniques for estimating attention maps in classification CNNs.
However, these approaches share a crucial limitation: all are implemented as post-hoc additions to fully trained networks.
On the other hand, integrated attention mechanisms whose parameters are learned over the course of end-to-end training of the entire network have been proposed, and have shown benefits in various applications that can leverage attention as a cue.
These include attribute prediction BID19 , machine translation BID1 , image captioning BID28 (Mun et al., 2016) and visual question answering (VQA) BID24 BID26 .
Similarly to these approaches, we here represent attention as a probabilistic map over the input image locations, and implement its estimation via an end-to-end framework.
The novelty of our contribution lies in repurposing the global image representation as a query to estimate multi-scale attention in classification, a task which, unlike e.g. image captioning or VQA, does not naturally involve a query.
Fig. 1 provides an overview of the proposed method.
Henceforth, we will use the terms 'local features' and 'global features' to refer to features extracted by some layer of the CNN whose effective receptive fields are, respectively, contiguous proper subsets of the image ('local') and the entire image ('global').
By defining a compatibility measure between local and global features, we redesign standard architectures such that they must classify the input image using only a weighted combination of local features, with the weights represented here by the attention map.
The network is thus forced to learn a pattern of attention relevant to solving the task at hand.
We experiment with applying the proposed attention mechanism to the popular CNN architectures of VGGNet BID20 and ResNet BID11 , and capturing coarse-to-fine attention maps at multiple levels.
We observe that the proposed mechanism can bootstrap baseline CNN architectures for the task of image classification: for example, adding attention to the VGG model offers an accuracy gain of 7% on CIFAR-100.
Our use of attention-weighted representations leads to improved fine-grained recognition and superior generalisation on 6 benchmark datasets for domain-shifted classification.
As observed on models trained for fine-grained bird recognition, attention-aware models offer limited resistance to adversarial fooling at low and moderate L∞-noise norms.
The trained attention maps outperform other CNN-derived attention maps BID31 , traditional saliency maps BID14 BID30 , and top object proposals on the task of weakly supervised segmentation of the Object Discovery dataset.
In §5, we present sample results which suggest that these improvements may owe to the method's tendency to highlight the object of interest while suppressing background clutter.
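A bare-bones sketch of classifying from an attention-weighted combination of local features follows; it is a simplified illustration using plain dot-product compatibility, and the tensor shapes and names are assumptions rather than the paper's exact parametrisation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionHead(nn.Module):
    """Scores each spatial location of a local feature map against a global
    descriptor, then classifies from the attention-weighted local features."""
    def __init__(self, channels, num_classes):
        super().__init__()
        self.classifier = nn.Linear(channels, num_classes)

    def forward(self, local_feats, global_feat):
        # local_feats: (B, C, H, W) intermediate CNN features
        # global_feat: (B, C) global image representation used as the query
        b, c, h, w = local_feats.shape
        l = local_feats.flatten(2)                        # (B, C, H*W)
        scores = (l * global_feat.unsqueeze(2)).sum(1)    # dot-product compatibility, (B, H*W)
        attn = F.softmax(scores, dim=1)                   # flattened 2D attention map
        pooled = (l * attn.unsqueeze(1)).sum(2)           # convex combination of local features
        return self.classifier(pooled), attn.view(b, h, w)

# Usage with dummy features
head = AttentionHead(channels=256, num_classes=10)
logits, attn_map = head(torch.randn(4, 256, 14, 14), torch.randn(4, 256))
```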
We propose a trainable attention module for generating probabilistic landscapes that highlight where and in what proportion a network attends to different regions of the input image for the task of classification.
We demonstrate that the method, when deployed at multiple levels within a network, affords significant performance gains in classification of seen and unseen categories by focusing on the object of interest.
We also show that the attention landscapes can facilitate weakly supervised segmentation of the predominant object.
Further, the proposed attention scheme is amenable to popular post-processing techniques such as conditional random fields for refining the segmentation masks, and has shown promise in learning robustness to certain kinds of adversarial attacks.
|
The paper proposes a method for forcing CNNs to leverage spatial attention in learning more object-centric representations that perform better in various respects.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:614
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Recurrent neural networks (RNNs) are effective at solving very complex supervised and unsupervised tasks. There have been significant advances in the RNN field in areas such as natural language processing, speech processing, computer vision and multiple other domains.
This paper deals with RNN applications to different use cases: Incident Detection, Fraud Detection, and Android Malware Classification.
The best-performing neural network architecture is chosen by conducting a chain of experiments over different network parameters and structures.
The network is run for up to 1000 epochs with the learning rate set in the range of 0.01 to 0.5.
RNNs performed very well when compared to classical machine learning algorithms.
This is mainly possible because RNNs implicitly extract the underlying features and identify the characteristics of the data, which leads to better accuracy.
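As an illustrative sketch of the kind of experiment described (a hypothetical setup of my own with invented dimensions and toy data, not the paper's exact configuration), a simple recurrent classifier can be swept over learning rates in the stated range and trained for up to 1000 epochs:

```python
import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    """Minimal RNN classifier: encode a feature sequence, classify from the final hidden state."""
    def __init__(self, n_features=32, hidden=64, n_classes=2):
        super().__init__()
        self.rnn = nn.RNN(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):                     # x: (batch, time, features)
        _, h = self.rnn(x)
        return self.out(h[-1])

# Toy stand-in for per-sample feature sequences (e.g. logged events or API-call traces).
x = torch.randn(128, 20, 32)
y = torch.randint(0, 2, (128,))

for lr in [0.01, 0.05, 0.1, 0.5]:             # sweep over the stated learning-rate range
    model = SequenceClassifier()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for epoch in range(1000):                 # run for up to 1000 epochs
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    acc = (model(x).argmax(dim=1) == y).float().mean().item()
    print(f"lr={lr}: training accuracy {acc:.2f}")
```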
In today's data-driven world, malware is a common threat to everyone, from large organizations to ordinary people, and we need to safeguard our systems, computer networks, and valuable data.
Cyber-crime has risen sharply, with many hacks, data thefts, and other cyber-attacks.
Hackers gain access through any available loophole and steal valuable data, passwords and other useful information.
On the Android platform in particular, malicious attacks have increased with the growth in the number of applications, and it is easy to develop malicious malware and feed it into the Android market using third-party software.
Attacks can arrive through many means, such as e-mails, executable files, and software.
Criminals make use of security vulnerabilities and exploit their targets.
This underlines the importance of an effective system to handle fraudulent activities.
However, today's sophisticated attacking algorithms avoid being detected by security mechanisms.
Every day, attackers develop new exploitation techniques and evade anti-virus and anti-malware software.
Thus, security solution companies are moving towards deep learning and machine learning techniques, where the algorithm learns the underlying information from a large collection of security data itself and makes predictions on new data.
This, in turn, motivates hackers to develop new methods to escape the detection mechanisms.
Malware remains one of the major security threats in cyberspace.
It is an unwanted program which makes the system behave differently than it is supposed to.
The solutions provided by antivirus software against this malware can only be used as a primary line of defence, because they fail to detect new and upcoming malware created using polymorphic, metamorphic, domain-flux and IP-flux techniques.
Machine learning algorithms have been employed to address complex security threats for more than three decades BID0 .
These methods have the capability to detect new malware.
Research is progressing at a fast pace on security problems such as Intrusion Detection Systems (IDS), malware detection, and information leakage.
Fortunately, today's deep learning (DL) approaches have performed well in various long-standing AI challenges BID1 such as natural language processing, computer vision, and speech recognition.
Recently, deep learning techniques have been applied to various use cases of cyber security BID2 .
They have the ability to detect cyber attacks by learning the complex underlying structure, hidden sequential relationships and hierarchical feature representations from a huge set of security data.
In this paper, we evaluate the efficiency of SVM and RNN machine learning algorithms for cybersecurity problems.
Cybersecurity provides a set of actions to safeguard computer networks, systems, and data.
This paper is arranged as follows: related work is discussed in Section 2 and the background of recurrent neural networks (RNNs) in Section 3.
In Section 4 the proposed methodology, including a description of the datasets, is discussed, and results are furnished in Section 5.
Section 6 concludes the paper.
In this paper, the performance of RNNs versus other classical machine learning classifiers is evaluated for cybersecurity use cases such as Android malware classification, incident detection, and fraud detection.
In all three use cases, the RNN performed better than the classical machine learning classifiers.
|
Recurrent neural networks for Cybersecurity use-cases
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:615
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Anatomical studies demonstrate that brain reformats input information to generate reliable responses for performing computations.
However, it remains unclear how neural circuits encode complex spatio-temporal patterns.
We show that neural dynamics are strongly influenced by the phase alignment between the input and the spontaneous chaotic activity.
Input alignment along the dominant chaotic projections causes the chaotic trajectories to become stable channels (or attractors), hence, improving the computational capability of a recurrent network.
Using mean field analysis, we derive the impact of input alignment on the overall stability of attractors formed.
Our results indicate that input alignment determines the extent of intrinsic noise suppression and hence, alters the attractor state stability, thereby controlling the network's inference ability.
Brain actively untangles the input sensory data and fits them in behaviorally relevant dimensions that enables an organism to perform recognition effortlessly, in spite of variations DiCarlo et al. (2012) ; Thorpe et al. (1996) ; DiCarlo & Cox (2007) .
For instance, in visual data, object translation, rotation, lighting changes and so forth cause complex nonlinear changes in the original input space.
However, the brain still extracts high-level behaviorally relevant constructs from these varying input conditions and recognizes the objects accurately.
What remains unknown is how brain accomplishes this untangling.
Here, we introduce the concept of chaos-guided input alignment in a recurrent network (specifically, reservoir computing model) that provides an avenue to untangle stimuli in the input space and improve the ability of a stimulus to entrain neural dynamics.
Specifically, we show that the complex dynamics arising from the recurrent structure of a randomly connected reservoir Rajan & Abbott (2006) ; Kadmon & Sompolinsky (2015) ; Stern et al. (2014) can be used to extract an explicit phase relationship between the input stimulus and the spontaneous chaotic neuronal response.
Then, aligning the input phase along the dominant projections determining the intrinsic chaotic activity, causes the random chaotic fluctuations or trajectories of the network to become locally stable channels or dynamic attractor states that, in turn, improve its' inference capability.
In fact, using mean field analysis, we derive the effect of introducing varying phase association between the input and the network's spontaneous chaotic activity.
Our results demonstrate that successful formation of stable attractors is strongly determined from the input alignment.
We also illustrate the effectiveness of input alignment on a complex motor pattern generation task, with reliable generation of learnt patterns over multiple trials even in the presence of external perturbations.
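A minimal sketch of the underlying idea (an illustration under assumptions, not the authors' implementation; the network size, gain g, and the use of PCA as a proxy for the dominant chaotic projections are all placeholder choices):

```python
# Sketch: estimate the dominant projections of a random reservoir's
# spontaneous chaotic activity with PCA, then align an input vector
# along the leading components.
import numpy as np

N, T, g, dt = 500, 2000, 1.5, 0.1
rng = np.random.default_rng(0)
J = g * rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))   # random recurrent weights

# 1) Record spontaneous (input-free) activity of the rate network x' = -x + J tanh(x)
x = rng.normal(size=N) * 0.1
states = np.empty((T, N))
for t in range(T):
    x = x + dt * (-x + J @ np.tanh(x))
    states[t] = x

# 2) Dominant chaotic projections = leading principal components of the activity
states -= states.mean(axis=0)
_, _, Vt = np.linalg.svd(states, full_matrices=False)
top_pcs = Vt[:10]                                          # top-10 projections

# 3) Align an arbitrary input direction onto the span of the top projections
w_in = rng.normal(size=N)
w_aligned = top_pcs.T @ (top_pcs @ w_in)                   # projection onto the chaotic subspace
w_aligned /= np.linalg.norm(w_aligned)
```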
Models of cortical networks often use diverse plasticity mechanisms for effective tuning of recurrent connections to suppress the intrinsic chaos (or fluctuations) Laje & Buonomano (2013) ; Panda & Roy (2017) .
We show that input alignment alone produces stable and repeatable trajectories for dynamical computations, even in the presence of variable internal neuronal dynamics.
Combining input alignment with recurrent synaptic plasticity mechanism can further enable learning of stable correlated network activity at the output (or readout layer) that is resistant to external perturbation to a large extent.
Furthermore, since input subspace alignment allows us to operate networks at low amplitude while maintaining stable network activity, it provides the additional advantage of higher dimensionality.
A network of higher dimensionality offers a larger number of dissociated principal chaotic projections along which different inputs can be aligned (see Appendix A, Fig. A1(c)).
Thus, for a classification task, wherein the network has to discriminate between 10 different inputs (of varying frequencies and underlying statistics), our notion of untangling with chaos-guided input alignment can, thus, serve as a foundation for building robust recurrent networks with improved inference ability.
Further investigation is required to examine which orientations specifically improve the discrimination capability of the network and the impact of a given alignment on the stability of the readout dynamics around an output target.
In summary, the analyses we present suggest that input alignment in the chaotic subspace has a large impact on the network dynamics and eventually determines the stability of an attractor state.
In fact, we can control the network's convergence toward different stable attractor channels during its voyage in the neural state space by regulating the input orientation.
This indicates that, besides synaptic strength variance Rajan & Abbott (2006) , a critical quantity that might be modified by modulatory and plasticity mechanisms controlling neural circuit dynamics is the input stimulus alignment.
|
Input Structuring along Chaos for Stability
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:616
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Generative adversarial networks are a learning framework that rely on training a discriminator to estimate a measure of difference between a target and generated distributions.
GANs, as normally formulated, rely on the generated samples being completely differentiable w.r.t.
the generative parameters, and thus do not work for discrete data.
We introduce a method for training GANs with discrete data that uses the estimated difference measure from the discriminator to compute importance weights for generated samples, thus providing a policy gradient for training the generator.
The importance weights have a strong connection to the decision boundary of the discriminator, and we call our method boundary-seeking GANs (BGANs).
We demonstrate the effectiveness of the proposed algorithm with discrete image and character-based natural language generation.
In addition, the boundary-seeking objective extends to continuous data, which can be used to improve stability of training, and we demonstrate this on Celeba, Large-scale Scene Understanding (LSUN) bedrooms, and Imagenet without conditioning.
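A hedged sketch of the core mechanism (a simplification, not the authors' code): the discriminator's output is converted into self-normalized importance weights, which provide a REINFORCE-style policy gradient for a discrete generator; `gen`, `disc`, and their interfaces are assumptions.

```python
import torch

def generator_step(gen, disc, opt, batch_size=64):
    # gen(batch_size) is assumed to return unnormalized logits of shape
    # (batch_size, vocab) for a categorical (discrete) output distribution.
    logits = gen(batch_size)
    probs = torch.softmax(logits, dim=-1)
    samples = torch.multinomial(probs, 1).squeeze(-1)                  # discrete samples
    log_q = torch.log(probs.gather(-1, samples[:, None]).squeeze(-1) + 1e-8)

    with torch.no_grad():
        d = disc(samples)                  # assumed in (0, 1): P(sample is real)
        w = d / (1.0 - d + 1e-8)           # likelihood-ratio estimate p_data/q_theta
        w = w / w.sum()                    # self-normalized importance weights

    # REINFORCE-style surrogate: gradient is E_q[ w * grad log q_theta(x) ]
    loss = -(w * log_q).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```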
Generative adversarial networks (GAN, BID7) involve a unique generative learning framework that uses two separate models, a generator and discriminator, with opposing or adversarial objectives.
Training a GAN only requires back-propagating a learning signal that originates from a learned objective function, which corresponds to the loss of the discriminator trained in an adversarial manner.
This framework is powerful because it trains a generator without relying on an explicit formulation of the probability density, using only samples from the generator to train. GANs have been shown to generate often-diverse and realistic samples even when trained on high-dimensional, large-scale continuous data BID31.
GANs, however, have a serious limitation on the type of variables they can model, because they require the composition of the generator and discriminator to be fully differentiable. With discrete variables, this is not true.
For instance, consider using a step function at the end of a generator in order to generate a discrete value.
In this case, back-propagation alone cannot provide the training signal, because the derivative of a step function is 0 almost everywhere.
This is problematic, as many important real-world datasets are discrete, such as character-or word-based representations of language.
The general issue of credit assignment for computational graphs with discrete operations (e.g., discrete stochastic neurons) is a difficult and open problem, and only approximate solutions have been proposed in the past BID2 BID8 BID10 BID14 BID22 BID40.
However, none of these have yet been shown to work with GANs.
In this work, we make the following contributions:
• We provide a theoretical foundation for boundary-seeking GANs (BGAN), a principled method for training a generator of discrete data using a discriminator optimized to estimate an f-divergence BID29 BID30. The discriminator can then be used to formulate importance weights which provide policy gradients for the generator.
• We verify that this approach works quantitatively across a set of f-divergences on a simple classification task and on a variety of image and natural language benchmarks.
• We demonstrate that BGAN performs quantitatively better than WGAN-GP BID9 in the simple discrete setting.
• We show that the boundary-seeking objective extends theoretically to the continuous case and verify that it works well with some common and difficult image benchmarks. Finally, we show that this objective has some improved stability properties, both within training and without.
On estimating likelihood ratios from the discriminator. Our work relies on estimating the likelihood ratio from the discriminator, the theoretical foundation of which we draw from f-GAN BID30.
The connection between the likelihood ratios and the policy gradient is known in previous literature BID15 , and the connection between the discriminator output and the likelihood ratio was also made in the context of continuous GANs BID26 BID39 .
However, our work is the first to successfully formulate and apply this approach to the discrete setting.
Importance sampling. Our method is very similar to re-weighted wake-sleep (RWS, BID3), which is a method for training Helmholtz machines with discrete variables.
RWS also relies on minimizing the KL divergence, the gradients of which also involve a policy gradient over the likelihood ratio.
Neural variational inference and learning (NVIL, BID25), on the other hand, relies on the reverse KL.
These two methods are analogous to our importance-sampling and REINFORCE-based BGAN formulations above.
GAN for discrete variables. Training GANs with discrete data is an active and unsolved area of research, particularly with language-model data involving recurrent neural network (RNN) generators BID20.
Many REINFORCE-based methods have been proposed for language modeling BID20 BID6, which are similar to our REINFORCE-based BGAN formulation and effectively use the sigmoid of the estimated log-likelihood ratio.
The primary focus of these works, however, is on improving credit assignment, and their approaches are compatible with the policy gradients provided in our work.
There have also been some improvements recently on training GANs on language data by rephrasing the problem into a GAN over some continuous space BID19 BID16 BID9.
However, each of these works bypasses the difficulty of training GANs with discrete data by rephrasing the deterministic game in terms of continuous latent variables or by simply ignoring the discrete sampling process altogether, and does not directly solve the problem of optimizing the generator from a difference measure estimated from the discriminator.
Remarks on stabilizing adversarial learning, IPMs, and regularization. A number of variants of GANs have been introduced recently to address stability issues with GANs.
Specifically, generated samples tend to collapse to a set of singular values that resemble the data on neither a per-sample nor a distribution basis.
Several early attempts at modifying the training procedure (Berthelot et al., 2017; BID35), as well as the identification of a taxonomy of working architectures BID31, addressed stability in some limited settings, but it wasn't until Wasserstein GANs (WGAN, BID1) were introduced that there was any significant progress on reliable training of GANs. WGANs rely on an integral probability metric (IPM, BID36) that is the dual of the Wasserstein distance.
Other GANs based on IPMs, such as Fisher GAN, tout improved stability in training.
In contrast to GANs based on f-divergences, IPM-based GANs, besides relying on metrics that are "weak", restrict T to a subset of all possible functions.
For instance, in WGANs, T = {T : ‖T‖_L ≤ K} is the set of K-Lipschitz functions.
Ensuring that a statistic network T_φ with a large number of parameters is Lipschitz-continuous is hard, and these methods rely on some sort of regularization to satisfy the necessary constraints.
This includes the original formulation of WGANs, which relied on weight clipping, and a later work BID9 which used a gradient penalty over interpolations between real and generated data.
Unfortunately, the above works provide little detail on whether T_φ is actually in the constrained set in practice, as this is probably very hard to evaluate in the high-dimensional setting.
Recently, BID32 introduced a gradient-norm penalty similar to that in BID9, but without interpolations and formulated in terms of f-divergences.
In our work, we've found that this approach greatly improves stability, and we use it in nearly all of our results.
That said, it is still empirically unclear what role the discriminator objective plays in stabilizing adversarial learning, but at this time it appears that correctly regularizing the discriminator is sufficient.
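For concreteness, a gradient-norm penalty of this kind can be sketched as follows (an illustrative simplification; the penalty weight and where the norm is evaluated are assumptions, not the exact regularizer of BID32):

```python
import torch

def gradient_norm_penalty(disc, x_real, weight=10.0):
    # Penalize the squared norm of the discriminator's input gradients,
    # evaluated here at real samples (an assumed choice).
    x = x_real.clone().requires_grad_(True)
    out = disc(x).sum()
    grads = torch.autograd.grad(out, x, create_graph=True)[0]
    return weight * (grads.flatten(1).norm(2, dim=1) ** 2).mean()
```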
Reinterpreting the generator objective to match the proposal target distribution reveals a novel learning algorithm for training a generative adversarial network (GAN, BID7).
This proposed approach of boundary-seeking provides us with a unified framework under which learning algorithms for both discrete and continuous variables are derived.
Empirically, we verified our approach quantitatively and showed the effectiveness of training a GAN with the proposed learning algorithm, which we call a boundary-seeking GAN (BGAN), on both discrete and continuous variables, as well as demonstrated some properties of stability.
Figure 5: Following the generator objective using gradient descent on the pixels (starting generated image, then 10k and 20k updates for GAN, proxy GAN, and BGAN).
BGAN and the proxy have sharp initial gradients that decay to zero quickly, while the variational lower-bound objective gradient slowly increases.
The variational lower-bound objective leads to very poor images, while the proxy and BGAN objectives are noticeably better.
Overall, BGAN performs the best in this task, indicating that its objective will not overly disrupt adversarial learning.
Berthelot, David, Schumm, Tom, and Metz, Luke. BEGAN: Boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717, 2017.
|
We address training GANs with discrete data by formulating a policy gradient that generalizes across f-divergences
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:617
|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Policy gradient methods have enjoyed great success in deep reinforcement learning but suffer from high variance of gradient estimates.
The high variance problem is particularly exacerbated in problems with long horizons or high-dimensional action spaces.
To mitigate this issue, we derive a bias-free action-dependent baseline for variance reduction which fully exploits the structural form of the stochastic policy itself and does not make any additional assumptions about the MDP.
We demonstrate and quantify the benefit of the action-dependent baseline through both theoretical analysis as well as numerical results, including an analysis of the suboptimality of the optimal state-dependent baseline.
The result is a computationally efficient policy gradient algorithm, which scales to high-dimensional control problems, as demonstrated by a synthetic 2000-dimensional target matching task.
Our experimental results indicate that action-dependent baselines allow for faster learning on standard reinforcement learning benchmarks and high-dimensional hand manipulation and synthetic tasks.
Finally, we show that the general idea of including additional information in baselines for improved variance reduction can be extended to partially observed and multi-agent tasks.
Deep reinforcement learning has achieved impressive results in recent years in domains such as video games from raw visual inputs BID10, board games, simulated control tasks BID16, and robotics.
An important class of methods behind many of these success stories is policy gradient methods BID28 BID22 BID5 BID18 BID11, which directly optimize the parameters of a stochastic policy through local gradient information obtained by interacting with the environment using the current policy.
Policy gradient methods operate by increasing the log probability of actions proportional to the future rewards influenced by these actions.
On average, actions which perform better will acquire higher probability, and the policy's expected performance improves.
A critical challenge of policy gradient methods is the high variance of the gradient estimator.
This high variance is caused in part by the difficulty of assigning credit to the actions which affected the future rewards.
Such issues are further exacerbated in long-horizon problems, where assigning credit properly becomes even more challenging.
To reduce variance, a "baseline" is often employed, which allows us to increase or decrease the log probability of actions based on whether they perform better or worse than the average performance when starting from the same state.
This is particularly useful in long horizon problems, since the baseline helps with temporal credit assignment by removing the influence of future actions from the total reward.
A better baseline, which predicts the average performance more accurately, will lead to lower variance of the gradient estimator.
The key insight of this paper is that when the individual actions produced by the policy can be decomposed into multiple factors, we can incorporate this additional information into the baseline to further reduce variance.
In particular, when these factors are conditionally independent given the current state, we can compute a separate baseline for each factor, whose value can depend on all quantities of interest except that factor.
This serves to further help credit assignment by removing the influence of other factors on the rewards, thereby reducing variance.
In other words, information about the other factors can provide a better evaluation of how well a specific factor performs.
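A minimal sketch of this idea (not the authors' implementation; `baseline_net`, the tensor shapes, and the use of per-dimension log-probabilities are assumptions):

```python
import torch

def factored_pg_loss(log_probs, actions, returns, baseline_net, states):
    # log_probs, actions: (batch, action_dim); returns: (batch,);
    # baseline_net(features, i) is an assumed module returning b_i(s, a_{-i}).
    batch, adim = actions.shape
    loss = torch.zeros(())
    for i in range(adim):
        others = torch.cat([actions[:, :i], actions[:, i + 1:]], dim=1)  # a_{-i}
        b_i = baseline_net(torch.cat([states, others], dim=1), i)        # b_i(s, a_{-i})
        advantage = (returns - b_i).detach()   # baseline does not depend on a_i
        loss = loss - (advantage * log_probs[:, i]).mean()
    return loss
```

Because b_i never depends on a_i itself, the weighted score term for each factor has zero mean and the gradient estimator remains unbiased.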
Such factorized policies are very common, with some examples listed below.
• In continuous control and robotics tasks, multivariate Gaussian policies with a diagonal covariance matrix are often used. In such cases, each action coordinate can be considered a factor. Similarly, factorized categorical policies are used in game domains like board games and Atari.
• In multi-agent and distributed systems, each agent deploys its own policy, and thus the actions of each agent can be considered a factor of the union of all actions (by all agents). This is particularly useful in the recently emerging paradigm of centralized learning and decentralized execution BID2 BID9. In contrast to the previous example, where factorized policies are a common design choice, in these problems they are dictated by the problem setting.
We demonstrate that action-dependent baselines consistently improve performance compared to baselines that use only state information. The relative performance gain is task-specific, but in certain tasks we observe a significant speed-up in the learning process. We evaluate our proposed method on standard benchmark continuous control tasks, as well as on a high-dimensional door-opening task with a five-fingered hand, a synthetic high-dimensional target matching task, a blind peg insertion POMDP task, and a multi-agent communication task. We believe that our method will facilitate further applications of reinforcement learning methods in domains with extremely high-dimensional actions, including multi-agent systems. Videos and additional results of the paper are available at https://sites.google.com/view/ad-baselines.
An action-dependent baseline enables using additional signals beyond the state to achieve bias-free variance reduction.
In this work, we consider both conditionally independent policies and general policies, and derive an optimal action-dependent baseline.
We provide analysis of the variance reduction improvement over non-optimal baselines, including the traditional optimal baseline that only depends on state.
[Figure: success percentage on the blind peg insertion task. The policy acts on the observations and does not know the hole location; the baseline, however, has access to this goal information in addition to the observations and action, which speeds up learning. By comparison (in blue), a baseline with access only to the observations and actions.]
We additionally propose several practical action-dependent baselines which perform well on a variety of continuous control tasks and synthetic high-dimensional action problems.
The use of additional signals beyond the local state generalizes to other problem settings, for instance in POMDP and multi-agent tasks.
In future work, we propose to investigate related methods in such settings on large-scale problems.
|
Action-dependent baselines can be bias-free and yield greater variance reduction than state-only dependent baselines for policy gradient methods.
|
{
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
}
|
scitldr_aic:train:618
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.